Nginx: Configure bypassing and/or purging the cache

In a previous post I described how nginx can be configured to enable caching (Enable caching in Nginx Reverse Proxy (and solve cache MISS only)).

So now you have your nginx caching working, but you want a way to serve the content directly from the upstream, without going through the cache.

There are two ways to do this (there are very likely more, I only looked at these two), but they are completely different in their implementation. I will explain both below.

proxy_cache_bypass
As the name says, you're actually bypassing the cache and getting the up-to-date content directly from the upstream defined in the proxy_pass directive. To use this feature, an HTTP header can be defined which triggers the bypass. I will re-use the same configuration parts from my article on enabling caching (see the link above) to make it understandable.

server {
[...]
  location /api/ {
    include /etc/nginx/proxy-settings.conf;
    proxy_ignore_headers "Set-Cookie";
    proxy_hide_header "Set-Cookie";
    proxy_cache fatcache;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_cache_valid  200 302  60m;
    proxy_cache_valid  404      1m;
    # bypass the cache when the request carries a "cachepurge" header (non-empty and not "0")
    proxy_cache_bypass $http_cachepurge;
    proxy_pass       http://127.0.0.1:8080;
  }
[...]
}

The proxy_cache_bypass is activated when the HTTP header "cachepurge" is present in the request (nginx exposes it as the $http_cachepurge variable). We can verify this with curl:

curl -I myapp.example.com/api/ping -H "cachepurge: true"
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 30 Sep 2015 10:41:31 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 1035
Connection: keep-alive
Cache-Control: public
X-Proxy-Cache: BYPASS

The additionally defined header "X-Proxy-Cache" in the HTTP response nicely shows that we have successfully bypassed the cache.

The proxy_cache_bypass is a quick and easy way to compare the content delivered by the cache with the content from the upstream server itself. However, this only works when the defined HTTP header (cachepurge in this case) is manually added to the request. As soon as this header is no longer sent, the data is read from the cache again (until the proxy_cache_valid time to live expires):

curl -I myapp.example.com/api/ping
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 30 Sep 2015 10:41:35 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 1035
Connection: keep-alive
Cache-Control: public
X-Proxy-Cache: HIT

If the cache needs to be completely invalidated/purged/deleted, proxy_cache_bypass will not do this for you.

proxy_cache_purge
As the name of this module suggests, its purpose is to "purge" the cache, which is basically the same as invalidating or deleting it.
But now we're getting into advanced territory. The nginx module ngx_cache_purge is not part of the standard nginx package. Luckily, the Debian maintainers of the nginx package have thought ahead and embedded this module into the nginx-extras Debian package. Because Ubuntu pretty much uses the same nginx packages as Debian, the ngx_cache_purge module can also be found in the nginx-extras package on Ubuntu 14.04 LTS. This is a big help; otherwise we'd have to compile nginx manually (nothing wrong with that, but the package is a huge time saver!).
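A rough sketch of the installation and a quick sanity check that the purge module is compiled in (package name as on Debian/Ubuntu; the exact nginx -V output differs between builds):

sudo apt-get install nginx-extras
nginx -V 2>&1 | grep -o cache_purge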

After installing nginx-extras, the proxy_cache_purge directive can be added to location sections:

server {
[...]
  location /api/ {
    include /etc/nginx/proxy-settings.conf;
    proxy_ignore_headers "Set-Cookie";
    proxy_hide_header "Set-Cookie";
    proxy_cache fatcache;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_cache_valid  200 302  60m;
    proxy_cache_valid  404      1m;
    # allow the cached entry to be purged with an HTTP PURGE request (from any client)
    proxy_cache_purge PURGE from all;
    proxy_pass       http://127.0.0.1:8080;
  }
[...]
}
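A note on the "from all" part: it allows purge requests from any client. If the proxy is reachable from untrusted networks, you may want to limit purging, for example to localhost only (a sketch, adjust the address list to your environment):

  location /api/ {
    [...]
    proxy_cache_purge PURGE from 127.0.0.1;
  }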

By using the -X parameter in curl, we can send a special request method (PURGE in this case) to the URL:

curl -I myapp.example.com/api/ping -XPURGE
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 30 Sep 2015 11:48:19 GMT
Content-Type: text/html
Content-Length: 326
Connection: keep-alive

Nginx now removes the cached entry for the requested URI (/api/ping). On the next request of this URL the content is delivered directly from the upstream. This can be verified again with the HTTP header "X-Proxy-Cache" in the HTTP response:

curl -I myapp.example.com/api/ping
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 30 Sep 2015 11:48:21 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 1035
Connection: keep-alive
Cache-Control: public
X-Proxy-Cache: MISS

Now the new content is cached again, so subsequent requests of this URL are served from the cache:

curl -I myapp.example.com/api/ping
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 30 Sep 2015 11:53:28 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 1035
Connection: keep-alive
Cache-Control: public
X-Proxy-Cache: HIT

With proxy_cache_purge we have a nice way to clear the cache for a defined URL.
However, if you want to invalidate the full cache (fatcache in our example), and not just a single URI, you need to get your wallet out and pay for the commercial nginx version (Nginx Plus). Only Nginx Plus allows the invalidation of the full cache or the use of proxy_cache_purge with wildcards.
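If you really need to empty the whole fatcache zone on the open source version, a crude workaround is to remove the cache files on disk while nginx is stopped. The path below is an assumption; use whatever directory you configured in proxy_cache_path for the fatcache zone:

sudo service nginx stop
sudo rm -rf /var/cache/nginx/fatcache/*
sudo service nginx start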

