Enable caching in Nginx Reverse Proxy (and solve cache MISS only)



In a lot of setups where a reverse proxy is needed, nginx is my first choice. But nginx can do more than just proxy connections to backend/upstream servers; it can also act as a caching server. This is useful, for example to keep serving content when all upstream servers are down.
On this page I first describe how caching can be enabled and afterwards how I needed to troubleshoot "MISS only" responses from the nginx reverse proxy.


Enable Caching on Nginx

To enable caching, there are two directives to be set: proxy_cache_path, which defines a "caching zone" (where on disk the cache lives and how big it may grow), and proxy_cache_key, which defines the key nginx uses to identify a cached response (and from which the cache file name is derived).

proxy_cache_path must be defined in the http section of nginx, therefore on Ubuntu 14.04 LTS I added the caching parameters into /etc/nginx/nginx.conf. proxy_cache_key could also be defined in server and even location sections, but in my scenario the cache only serves one application, so I decided to put both into the http section:

http {
[...]
    proxy_cache_path  /var/www/cache levels=1:2 keys_zone=fatcache:8m max_size=1000m inactive=600m;
    proxy_cache_key "$scheme$host$request_uri";
[...]
}

This defines "/var/www/cache" as the caching directory for the zone "fatcache". For a more detailed description, you should take a look at the official documentation of proxy_cache_path.
Make sure the directory exists and Nginx (www-data) has write permissions on it.
The proxy_cache_key setting in this example means that the cache key (and therefore the cache file) is built from the http scheme, the hostname and the requested URI, for example: http://myapp.example.com/caching/my/app.
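
As a quick sketch (directory, user and URL taken from the examples above): make sure the cache directory exists and is writable by the nginx worker, and note that nginx names each cache file after the MD5 hash of the cache key, with levels=1:2 turning the last characters of that hash into subdirectories:

# Create the cache directory and give the nginx worker (www-data on Ubuntu) write access
mkdir -p /var/www/cache
chown www-data:www-data /var/www/cache

# The cache key for http://myapp.example.com/caching/my/app is
# "httpmyapp.example.com/caching/my/app"; its MD5 hash becomes the file name,
# stored as /var/www/cache/<last hash char>/<two chars before it>/<full hash>
echo -n "httpmyapp.example.com/caching/my/app" | md5sum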

Now that nginx has the basic caching configuration in place, we need to tell nginx where we want to use the caching zone called fatcache. Here I decided to only cache requests to a certain URI, therefore I defined this in a location section:

server {
[...]
  location /api/ {
    include /etc/nginx/proxy-settings.conf;
    proxy_ignore_headers "Set-Cookie";
    proxy_hide_header "Set-Cookie";
    proxy_cache fatcache;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_cache_valid  200 302  60m;
    proxy_cache_valid  404      1m;
    proxy_pass       http://127.0.0.1:8080;
  }
[...]
}

In the configuration above you can see that the "fatcache" caching zone is used for proxy_cache.
The proxy_cache_valid parameters define how long upstream responses should be cached. In this case all responses with status codes 200 and 302 are cached for 60 minutes, while a 404 is cached for 1 minute.
The additional header "X-Proxy-Cache" is added to the reverse proxy's response to identify whether the served content comes from the cache (HIT) or directly from the upstream (MISS).
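
As a minimal check (using the same test URL as in the curl example further below), you can look at just this header; repeat the request and the second response should show a HIT once caching works:

# Show only the cache status header; the first request is typically a MISS,
# a repeated request should then return a HIT
curl -s -o /dev/null -D - http://myapp.example.com/api/ping | grep -i x-proxy-cache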

The following information is crucial to make caching work:
Nginx caching does not work if the http response from the upstream server contains the http header "Set-Cookie".
Because this application sent this header, I needed nginx to ignore it (hence the proxy_ignore_headers "Set-Cookie" line in the location above). The official nginx documentation mentions this (for the proxy_cache_valid parameter):

If the header includes the “Set-Cookie” field, such a response will not be cached. 

So if you are not aware of this fact, you might be pulling your hair out for hours.
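
If you are unsure whether your upstream sends this header, a quick check directly against the backend (port taken from the proxy_pass example above, URI from the curl test below) looks like this:

# Any output here means the upstream sets a cookie and the response would not
# be cached without proxy_ignore_headers "Set-Cookie"
curl -s -o /dev/null -D - http://127.0.0.1:8080/api/ping | grep -i set-cookie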

That's it. You can now restart Nginx and test caching with curl.

But I always get a MISS!

After I enabled caching, I tested the response with curl to see if the content would be served from the cache, but to my surprise I always got a "MISS", even after several requests:

curl http://myapp.example.com/api/ping -v
[...]
< HTTP/1.1 200 OK
< Server: nginx/1.4.6 (Ubuntu)
< Date: Thu, 17 Sep 2015 08:01:12 GMT
[...]
< X-Proxy-Cache: MISS

Remember the included proxy-settings.conf in the location configuration above? It turned out that the proxy-settings.conf contained the following option:

proxy_buffering             off;

After I removed this parameter from proxy-settings.conf and reloaded nginx, the content was now served from cache:

curl http://myapp.example.com/api/ping -v
[...]
< HTTP/1.1 200 OK
< Server: nginx/1.4.6 (Ubuntu)
< Date: Thu, 17 Sep 2015 08:04:53 GMT
[...]
< X-Proxy-Cache: HIT

Unfortunately this is not mentioned in the official documentation of the proxy_buffering parameter, at least not in combination with caching. But it makes sense: nginx can only store a response in the cache if it buffers the complete upstream response first; with proxy_buffering off, the response is streamed straight to the client and nothing is written to the cache.
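
For reference, a minimal sketch of what the relevant part of proxy-settings.conf could look like after the fix (the actual file contents are not shown in this article and the proxy_set_header lines are just typical examples; proxy_buffering defaults to "on", so removing the line is enough, but setting it explicitly documents that caching depends on it):

# Typical proxy header settings (examples only)
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
# Caching requires buffering; "on" is the default, do not switch it off
proxy_buffering  on;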

Update June 28th 2017: You might want to check out a follow up article "Empty page returned from Nginx with fastcgi caching (HEAD vs GET)".



Comments (newest first)

Meander wrote on Jul 29th, 2024:

Thank you! Finally I get a HIT instead of a MISS!


Nikolay wrote on Apr 17th, 2024:

Thank you!

Your comment about enabling proxy_buffering saves a lot of my time!


ck from Switzerland wrote on Feb 17th, 2023:

ahmad, Nginx should create a lot of directories and small files as cache inside the path you defined in proxy_cache_path. This is normal.


ahmad wrote on Feb 17th, 2023:

proxy cache only cache a small file. why is that?


Nikolay Mihaylov from Bulgaria wrote on Jul 25th, 2020:

I will just copy / paste DK's comment:

>> You save my day. It's the same problem on myside to.
>> Fix by just remove the config: proxy_buffering off;

I believe you can then direct temp files on a RAM disk,
because "proxy_buffering off;" give some performance penalty
(tried 3-4 years ago on magnetic disk)


DK wrote on Jul 11th, 2020:

You save my day. It's the same problem on myside to.
Fix by just remove the config: proxy_buffering off;


Thanks again.


ck from Switzerland wrote on Jun 17th, 2020:

Minnu, to my understanding setting proxy_cache somewhere is necessary. This can also be in the http section, but you most likely want to use different cache settings in different vhosts.


Minnu wrote on Jun 16th, 2020:

Thanks for the explanation. Is it mandatory to give proxy_cache in the server block? For my case only proxy_cache_path is defined in the http section without proxy_cache in the server block. Some files are being cached and some are not. I can see "MISS" for some files and HIT for others.

I am new to these configurations. Can you please help me on this.


J wrote on Aug 7th, 2019:

I appreciate you writing this. This was fantastic and helped tremendously.


Justin wrote on Feb 28th, 2017:

Thanks for this!

