Nginx can cache responses on its own. The advantage of the Nginx cache over Varnish is its simplicity.

What to cache?

The essence of server-side caching is to avoid constantly re-running the same scripts (for example, to build a Wordpress post feed), which can sometimes take whole seconds. Instead, the application generates the page once and the result is stored in memory. When a visitor requests the same page again, nothing is generated; the client receives the copy stored in memory. After a set interval (the TTL), the saved copy is deleted and a new one is generated, so the data stays up to date.

Almost any site can cache pages for unauthenticated users; this works especially well for sites whose content is publicly available.

Enabling Nginx Caching

First of all, you need to determine the maximum cache size (the total size of all pages in the cache will not exceed this size). This is done in the main configuration file (nginx.conf) in the http section:

http {
    ...
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=all:32m max_size=1g;
    ...
}

# Set the cache size to 1G; the cache will be stored in the /var/cache/nginx folder

Do not forget to create the folder for the cache:

mkdir /var/cache/nginx

Configuring hosts

For caching to work, we need to create a new host that listens on port 80 and move the main host to another port (for example, 81). The caching host will pass requests to the main one or serve responses from the cache.

Caching host

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:81/;
        proxy_cache all;
        proxy_cache_valid any 1h;
    }
}

# Each page will be stored in the cache for 1 hour

Main host

server {
    listen 81;
    location / {
        # fpm etc.
    }
}

# Normal config on port 81 only

Cookies and personalization

Many sites place personalized blocks on their pages. SSI makes it possible to implement advanced caching when there are many such blocks (see the sketch after the example below). In the simple case, we can just bypass the cache whenever the user has any cookies set.

server {
    listen 80;
    location / {
        if ($http_cookie ~* ".+") {
            set $do_not_cache 1;
        }
        proxy_cache_bypass $do_not_cache;
        proxy_pass http://127.0.0.1:81/;
        proxy_cache all;
        proxy_cache_valid any 1h;
    }
}
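If a page does contain personalized blocks but you still want to cache it, the SSI approach mentioned above can look roughly like the sketch below. This is only an illustration under assumptions not made in the article: the /personal/ location and the include path are invented names, and the backend on port 81 is expected to render the SSI fragment. The page body is served from the cache, while the fragment is fetched fresh on every request.

server {
    listen 80;
    location / {
        ssi on;  # process <!--#include virtual="/personal/box.html" --> directives in the (cached) page body
        proxy_pass http://127.0.0.1:81/;
        proxy_cache all;
        proxy_cache_valid any 1h;
    }
    location /personal/ {
        # no proxy_cache here, so personalized fragments are always generated fresh
        proxy_pass http://127.0.0.1:81;
    }
}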

Errors

It also makes sense to cache error responses for a short time. This avoids frequent repeated requests to a broken part of the site.

server {
    listen 80;
    location / {
        if ($http_cookie ~* ".+") {
            set $do_not_cache 1;
        }
        proxy_cache_bypass $do_not_cache;
        proxy_pass http://127.0.0.1:81/;
        proxy_cache all;
        proxy_cache_valid 404 502 503 1m;
        proxy_cache_valid any 1h;
    }
}

Fastcgi caching

Nginx allows you to cache responses from fastcgi. To enable this cache, you must also declare its parameters (in the http section of the nginx.conf file):

fastcgi_cache_path /var/cache/fpm levels=1:2 keys_zone=fcgi:32m max_size=1g;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# Set the maximum cache size to 1G

Do not forget to create the folder:

mkdir /var/cache/fpm

In the configuration of the main host, add the caching rules:

server {
    listen 80;
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_cache fcgi;
        fastcgi_cache_valid 200 60m;
    }
}

# In this case, we will cache 200 responses for 60 minutes

The most important

Take advantage of caching. It is quite easy to set up, yet it can speed a site up tenfold and save resources.


Nginx includes a FastCGI module with directives for caching dynamic content served by the PHP backend. This removes the need for additional page caching solutions (such as reverse proxies or application-level plugins). Content can also be excluded from caching based on the request method, URL, cookies, or any other server variable.

Enabling FastCGI Caching

To follow this guide, you need a working Nginx + PHP-FPM setup in advance. You also need to edit the virtual host configuration file:

nano /etc/nginx/sites-enabled/vhost

Add the following lines to the beginning of the file, outside the server {} block:

fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

The fastcgi_cache_path directive sets the path to the cache (/etc/nginx/cache), the name and size of the key memory zone (MYAPP:100m), the subdirectory levels, and the inactive timer.

The cache can be placed anywhere on the disk. The maximum cache size should not exceed the server's RAM plus the size of the swap file; otherwise, a "Cannot allocate memory" error will appear. If a cached entry has not been accessed for the period specified by the "inactive" option (here, 60 minutes), Nginx deletes it.
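If you also want a hard cap on the disk space the cache can occupy, fastcgi_cache_path accepts a max_size parameter (the same one used for the proxy cache earlier in this article). A possible variant of the directive, with an illustrative 1 GB limit, might look like this:

fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m max_size=1g inactive=60m;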

The fastcgi_cache_key directive specifies how cache file names are formed. With these settings, Nginx hashes the key with MD5 to name the cache files.

We can now move on to the location directive that passes PHP requests to the php5-fpm module. In location ~ \.php$ { } add the following lines:

fastcgi_cache MYAPP;
fastcgi_cache_valid 200 60m;

The fastcgi_cache directive refers to a memory zone that was already specified in the fastcgi_cache_path directive.

By default, Nginx keeps cached objects for the time specified using one of these headers:

X-Accel-Expires
Expires
Cache-Control.

The fastcgi_cache_valid directive specifies the default cache age if none of these headers are present. As set, only responses with a 200 status code are cached (of course, other status codes can be specified).
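For example, a sketch with several status codes and arbitrary, illustrative lifetimes could look like this:

fastcgi_cache_valid 200 301 302 60m;
fastcgi_cache_valid 404 1m;
fastcgi_cache_valid any 10m;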

Check FastCGI settings

service nginx configtest

Then reload Nginx if the settings are ok.

service nginx reload

At this point, the vhost file should look like this:

fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;

    root /usr/share/nginx/html;
    index index.php index.html index.htm;

    server_name example.com;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_cache MYAPP;
        fastcgi_cache_valid 200 60m;
    }
}

Now we need to check if the caching is working.

Testing FastCGI Caching

Create a PHP file that outputs the UNIX timestamp.

/usr/share/nginx/html/time.php

Add to the file:

<?php
echo time();
?>

Then request the given file several times through curl or your web browser.

# curl http://localhost/time.php; echo
1382986152
# curl http://localhost/time.php; echo
1382986152
# curl http://localhost/time.php; echo
1382986152

If cached as expected, the timestamp of all requests will match (since the response was cached).

To find the cached copy of this request, list the contents of the cache directory recursively:

# ls -lR /etc/nginx/cache/
/etc/nginx/cache/:
total 0
drwx------ 3 www-data www-data 60 Oct 28 18:53 e

/etc/nginx/cache/e:
total 0
drwx------ 2 www-data www-data 60 Oct 28 18:53 18

/etc/nginx/cache/e/18:
total 4
-rw------- 1 www-data www-data 117 Oct 28 18:53 b777c8adab3ec92cd43756226caf618e

You can also add an X-Cache header to indicate that the request was processed from the cache (X-Cache HIT) or directly (X-Cache MISS).

Above the server { } block, add:

add_header X-Cache $upstream_cache_status;

Restart the Nginx service and make a verbose request with curl to see the new header.

# curl -v http://localhost/time.php
* About to connect() to localhost port 80 (#0)
*   Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /time.php HTTP/1.1
> User-Agent: curl/7.26.0
> Host: localhost
> Accept: */*
>
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 29 Oct 2013 11:24:04 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Cache: HIT
<
* Connection #0 to host localhost left intact
1383045828* Closing connection #0

Caching Exceptions

Some dynamic content (for example, authentication request pages) does not need to be cached. Such content can be excluded from caching using the request_uri, request_method, and http_cookie variables.

Below is an example of settings that can be used in the server { } context.

#Cache everything by default
set $no_cache 0;

#Don't cache POST requests
if ($request_method = POST)
{
    set $no_cache 1;
}

#Don't cache if the URL contains a query string
if ($query_string != "")
{
    set $no_cache 1;
}

#Don't cache the following URLs
if ($request_uri ~* "/(administrator/|login.php)")
{
    set $no_cache 1;
}

#Don't cache if there is a cookie called PHPSESSID
if ($http_cookie = "PHPSESSID")
{
    set $no_cache 1;
}

To apply the $no_cache variable to the appropriate directives, put the following lines in location ~ \.php$ { }:

fastcgi_cache_bypass $no_cache;
fastcgi_no_cache $no_cache;

The fastcgi_cache_bypass directive makes Nginx skip the existing cache for requests matching the conditions set above, while fastcgi_no_cache prevents the responses to such requests from being stored in the cache at all.

Clearing the cache

The cache naming convention is based on the variables that were applied in the fastcgi_cache_key directive.

fastcgi_cache_key "$ scheme $ request_method $ host $ request_uri";

With these variables, a request for http://localhost/time.php produces the following cache key:

fastcgi_cache_key "httpGETlocalhost / time.php";

After hashing this string in MD5, you would get the following:

b777c8adab3ec92cd43756226caf618e

This forms the name of the cache file, placed into subdirectories according to levels=1:2: the first directory level is the last character of the MD5 string (here, e), and the second level is the next two characters counting from the end (18). The full directory structure of this cache zone therefore looks like this:

/etc/nginx/cache/e/18/

Based on this caching format, you can create a cache clearing script in any convenient language. This tutorial uses PHP for that. Create a file:

/usr/share/nginx/html/purge.php

Add to it:

<?php
$cache_path = "/etc/nginx/cache/";
$url = parse_url($_POST["url"]);
if (!$url)
{
    echo "Invalid URL entered";
    die();
}
$scheme = $url["scheme"];
$host = $url["host"];
$requesturi = $url["path"];
$hash = md5($scheme . "GET" . $host . $requesturi);
var_dump(unlink($cache_path . substr($hash, -1) . "/" . substr($hash, -3, 2) . "/" . $hash));
?>

Send a POST request to this file with the url you need to clean up.

curl -d "url = http: //www.example.com/time.php" http: //localhost/purge.php

The script will return true or false depending on whether the cache was cleared or not. Be sure to exclude this script from caching, and also don't forget to restrict access to it.
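One possible way to do both in the same vhost (a sketch under assumptions not made in the tutorial: the script lives at /purge.php and purging is only ever triggered from the server itself) is a dedicated location block; the exact-match location takes priority over the generic PHP location:

location = /purge.php {
    allow 127.0.0.1;
    deny all;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    # no fastcgi_cache directive here, so the purge script itself is never cached
}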


Long gone are the good old days when websites were built on bare HTML and their pages were tens of kilobytes. And the Internet was on dial-up.

Pages of modern sites already weigh several hundred kilobytes, and often several megabytes. Pictures, scripts, CSS style files - all of these are indispensable attributes of a normal modern website, and you can't do without them. All of this "happiness" weighs quite a lot, and every visitor who comes to your site downloads it all to their computer. And until the download is complete, the page will not open in the browser.

Let's look at a way to combat this problem using the Nginx web server. The bottom line is that we will do two things: compress all static files (scripts, style files) with gzip, and cache them, together with the images, in the visitor's browser cache, so that they are not downloaded from the site every time but taken straight from the cache on the visitor's computer.

Configuring data compression using Nginx.

Open the Nginx configuration file located at /etc/nginx/nginx.conf

In the http { ... } section at the beginning of the file, add whatever of the following is not already there:

gzip on;
gzip_static on;
gzip_comp_level 5;
gzip_min_length 1024;
gzip_proxied any;
gzip_types text/plain application/xml application/x-javascript text/javascript text/css text/json;

Configuring caching of static files in the user's browser cache using Nginx.

In the same file, /etc/nginx/nginx.conf, scroll down, find the server block for the site you need and add the following to it:

location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|js)$ {
    root /var/www/user/data/www/site.ru;
    expires 7d;
}

where expires 7d is how long the cached static files should be kept on the user's computer. If you don't edit your site's CSS and JS files and don't change the images, it makes sense to increase this value to several months or even a year.

For clarity, here is part of the server block from a server where Nginx was set up with the standard tools of the ISPmanager panel:

server {
    server_name site.ru www.site.ru;
    listen 111.121.152.21;
    disable_symlinks if_not_owner from=$root_path;
    set $root_path /var/www/user/data/www/site.ru;
    location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf)$ {
        root $root_path;
        access_log /var/www/nginx-logs/user isp;
        access_log /var/www/httpd-logs/site.ru.access.log;
        error_page 404 = @fallback;
    }

Now let's add expires 7d to it; the block will look like this:

server {
    server_name site.ru www.site.ru;
    listen 111.121.152.21;
    disable_symlinks if_not_owner from=$root_path;
    set $root_path /var/www/user/data/www/site.ru;
    location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf)$ {
        root $root_path;
        expires 7d;
        access_log /var/www/nginx-logs/user isp;
        access_log /var/www/httpd-logs/site.ru.access.log;
        error_page 404 = @fallback;
    }

Reload Nginx with the command:

service nginx restart

Open your site and enjoy: it now loads several times faster, and the page weight has dropped from a few megabytes to a couple of tens of kilobytes!

This article will be useful for owners of virtual and dedicated servers, since users of shared hosting do not have access to edit the Nginx configuration. However, clients of shared hosting offered by our organization can always contact support, and our system administrators will make the necessary settings for your site.

Wordpress is far from the most productive blogging platform, and large sites tend to use caching to speed it up. There are many popular add-ons for wordpress that implement caching, but all of them, in my opinion, are rather complicated and, as a rule, either require installing additional software such as Varnish or memcached, or shift caching onto the shoulders of PHP, which is also inefficient. In this post I will explain how to set up wordpress caching with nginx alone, without installing additional software.

Nginx has a FastCGI module that provides directives for caching FastCGI responses. Using this module saves us from having to use third-party caching tools. The module also lets us skip caching for some resources based on various request parameters, such as the request type (GET, POST), cookies, the page address and others. The module itself can only add entries to the cache; it cannot clear it or delete individual entries. Without clearing the cache, changes made when adding or editing a post, or when a comment is posted, would become visible only after a long delay, so we will use a third-party nginx module to clear the cache - ngx_cache_purge.

Setting up nginx

In most modern distributions, nginx is already built with the ngx_cache_purge module, but just in case, let's check that it is present. In the console, execute:

nginx -V 2>&1 | grep -o nginx-cache-purge

If after executing the command you see nginx-cache-purge, you can continue. If nothing appears, you probably have one of the older ubuntu distributions in which nginx was built without this module. In this case, you need to reinstall nginx from a third-party ppa:

sudo add-apt-repository ppa:rtcamp/nginx
sudo apt-get update
sudo apt-get remove nginx*
sudo apt-get install nginx-custom

Let's configure nginx. Let's open the file with the virtual host settings, and bring it to something like this:

fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

# Upstream to abstract backend connection(s) for php
upstream php {
    server unix:/var/run/php5-fpm.sock fail_timeout=0;
}

server {
    listen 80;
    server_name .example.com;

    root /var/www/example.com/html;
    index index.php;

    error_log /var/www/example.com/log/error.log;
    access_log /var/www/example.com/log/access.log;

    set $skip_cache 0;

    # POST requests and urls with a query string should always go to PHP
    if ($request_method = POST) {
        set $skip_cache 1;
    }
    if ($query_string != "") {
        set $skip_cache 1;
    }

    # Don't cache uris containing the following segments
    if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
        set $skip_cache 1;
    }

    # Don't use the cache for logged in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
        set $skip_cache 1;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_pass php;

        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;

        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 60m;
    }

    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny all;
        fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
    }

    location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|midi|wav|bmp|rtf)$ {
        access_log off;
        log_not_found off;
        expires max;
    }

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Deny access to uploads that aren't images, videos, music, etc.
    location ~* ^/wp-content/uploads/.*.(html|htm|shtml|php|js|swf)$ {
        deny all;
    }

    # Deny public access to wp-config.php
    location ~* wp-config.php {
        deny all;
    }
}

Of course, the root, server_name, access_log and error_log parameters must be adjusted to match your setup. The line fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m; tells nginx to store the cache in the /var/run/nginx-cache/ directory, names the memory zone WORDPRESS, allocates 100 megabytes of shared memory for cache keys, and sets the inactivity timer to 60 minutes. A nice bonus of this configuration is that if for some reason our PHP backend stops working, nginx will continue to serve cached pages.
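That resilience comes from the fastcgi_cache_use_stale line. As a hedged extension (not part of the original configuration, and requiring nginx 1.11.10 or newer for the second directive), stale entries can also be served while a fresh copy is fetched in the background:

fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
fastcgi_cache_background_update on;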

Setting up Wordpress

Nginx itself does not know when to clear the cache, so you need to install a WordPress add-on that will clear the cache automatically after changes. After installing the add-on, check the Nginx configuration:

nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

And if all is well, reload it:

systemctl reload nginx

We go to any page, and check that nginx has added it to the cache:

ls -la /var/run/nginx-cache

Optional: put the nginx cache in ramdisk (tmpfs)

Now we have caching configured, but the cache is stored on the hard disk, which is not a good solution. It would be better to mount the nginx cache directory in memory. To do this, open /etc/fstab and add:

tmpfs /var/run/nginx-cache tmpfs nodev,nosuid,size=110M 0 0

If you specified a larger cache size in the nginx settings, then the size parameter must be changed in accordance with the specified cache size, plus a small margin.
Let's mount the directory into memory right away:

mount /var/run/nginx-cache

Now, every time the system boots, the / var / run / nginx-cache directory will be placed in memory, which should reduce the page load time.

Conclusion

Do not count on this method as a panacea. The PHP interpreter, as I wrote above, cannot be called fast, and Wordpress itself is quite large and "heavy". If a page is not in the cache, or is requested rarely, nginx will still have to hit the slow backend first. Overall, though, caching will give your server a chance to relax a little on popular posts and when a new post is published.

This is far from the only way to speed up your wordpress blog. There are many other techniques that I will write about later.

Caching is a technology, or process, of creating a copy of data in readily accessible storage (the cache). Applied to the realities of site building, it usually means creating a static HTML copy of a page, or part of one, that is normally generated by PHP scripts (or other languages such as Perl or ASP.NET, depending on what the site's CMS is written in) and saving it on disk, in RAM, or even partially in the browser (more on that below). When a client (browser) requests the page, instead of reassembling it with scripts, the server returns the finished copy, which is much more economical in terms of hosting resources and faster, since transferring a ready-made page takes less time (sometimes much less) than re-creating it.

Why use website caching

  • To reduce the load on hosting
  • To quickly serve the content of the site to the browser

Both arguments, I think, need no comment.

Disadvantages and negative effects of website caching

Oddly enough, website caching has its drawbacks. First of all, this applies to sites whose content changes dynamically as the user interacts with them. Often these are sites that serve content, or part of it, via AJAX. AJAX caching is also possible and even necessary, but that is a topic for a separate conversation and does not concern the traditional techniques discussed below.
Problems may also arise for registered users, for whom a persistent cache can get in the way when they interact with site elements. In that case the cache is usually disabled, or object caching of individual site elements is used: widgets, menus, and the like.

How to set up caching on your site

First, you need to figure out what technologies are traditionally used to cache the content of sites.
All possible methods can be divided into 3 groups

Server side caching

Caching with NGINX

Caching with htaccess (Apache)

If you only have access to .htaccess and the production server is only Apache, then you can use techniques such as gzip compression and Expires headers to use the browser cache.

Turn on gzip compression for the appropriate MIME file types

AddOutputFilterByType DEFLATE text/plain text/html
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE text/javascript application/javascript application/x-javascript
AddOutputFilterByType DEFLATE text/xml application/xml application/xhtml+xml application/rss+xml
AddOutputFilterByType DEFLATE application/json
AddOutputFilterByType DEFLATE application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon

Turn on Expires headers for static files for a period of 1 year (365 days)

ExpiresActive on
ExpiresDefault "access plus 365 days"

Caching with Memcached

Caching with php accelerator

If the site engine is written in PHP, then every time a page is loaded the PHP scripts are executed: the interpreter reads the code written by the programmer, compiles it into bytecode the machine can understand, executes it and returns the result. A PHP accelerator eliminates this constant regeneration of bytecode by caching the compiled code in memory or on disk, which increases performance and reduces the time spent executing PHP. Accelerators still supported today include:

  • Windows Cache Extension for PHP
  • XCache
  • Zend OPcache

PHP 5.5 and higher has the Zend OPcache accelerator built in, so to enable an accelerator it is often enough to simply update your PHP version.

Site side caching

As a rule, this means the CMS's ability to create static HTML copies of pages. Most popular engines and frameworks have this capability. Personally, I have worked with Smarty and WordPress, so I can assure you that they do an excellent job. Out of the box, WordPress lacks the caching features needed by any project under even a modest load, but there are many popular caching plugins:

  1. , which simply generates static copies of the site's pages;
  2. Hyper Cache, which essentially works the same as the previous plugin;
  3. DB Cache. The essence of the work is caching database queries. Also a very useful feature. Can be used in conjunction with the previous two plugins;
  4. W3 Total Cache. Saved for dessert, this is my favorite WordPress plugin. With it the site is transformed, turning from a lumbering bus into a racing car. Its big advantage is a huge set of features: several caching options (static pages, accelerators, Memcached, database queries, object and page caching), code concatenation and minification (combining and compressing CSS and Javascript files, HTML compression by removing whitespace), CDN support, and more.

What can I say - use the right CMS, and quality caching will be available almost out of the box.

Browser (client) side caching, cache headers

Browser caching is possible because any self-respecting browser supports and encourages it. It works thanks to the HTTP headers that the server sends to the client, namely:

  • Expires;
  • Cache-Control: max-age;
  • Last-Modified;
  • ETag.

Thanks to them, users who repeatedly visit the site spend very little time loading pages. Caching headers must be applied to all cached static resources: template files, images, javascript and css files, if any, PDF, audio and video, and so on.
It is recommended to set the headers so that the statics are stored for at least a week and no more than a year, preferably a year.

Expires

The Expires header determines how long the cached copy stays fresh: until it expires, the browser can use the cached resource without asking the server for a new version. It is a strong header and highly desirable to use. It is recommended to specify a period from a week to a year; do not set it to more than a year, as that violates the RFC rules.

For example, to configure Expires in NGINX for all static files for a year (365 days), the NGINX configuration file must contain the code

location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf)$ {
    expires 365d;
}

Cache-Control: max-age;

Cache-Control: max-age is responsible for the same thing.
Expires is often preferred over Cache-Control because it is more widely supported. However, if both Expires and Cache-Control are present in the headers, Cache-Control takes precedence.

In NGINX, Cache-Control is enabled the same way as Expires, with the expires 365d; directive (it sets both headers).

Last-Modified and ETag

These headers work like fingerprints: a unique identifier is set for each cached URL. Last-Modified builds it from the date of the last change, while ETag uses any unique resource identifier (most often a file version or a hash of the content). Last-Modified is a "weak" header, because the browser uses heuristics to decide whether to request the item from the cache.

In NGINX, ETag and Last-Modified are enabled by default for static files. For dynamic pages it is better either not to send them, or to have the script that generates the page set them, or, best of all, to use a properly configured cache, in which case NGINX will take care of the headers itself (for WordPress, for example, this can be handled by a suitable caching plugin).

These headers allow the browser to efficiently update cached resources by sending GET requests every time the user explicitly reloads the page. Conditional GET requests do not return a full response if the resource has not changed on the server, and thus provide less latency than full requests, thereby reducing hosting load and response time.

Simultaneous use of Expires and Cache-Control: max-age is redundant, as is the simultaneous use of Last-Modified and ETag. Use them in pairs: Expires + ETag or Expires + Last-Modified.
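As a small illustration of such a bundle in NGINX (a sketch only; etag is already on by default for static files and is shown here just to make the pairing explicit):

location ~* \.(jpg|jpeg|gif|png|svg|js|css)$ {
    etag on;
    expires 365d;
}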

Enable GZIP compression for static files

Of course, GZIP compression is not directly related to caching, however, it saves traffic and increases page loading speed.

How to enable GZIP for statics in NGINX:

server {
    ...
    gzip on;
    gzip_disable "msie6";
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
}

How to enable GZIP for statics in .htaccess (Apache): insert the following code at the beginning of the file:

AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript