NGINX Server
I use an nginx HTTP server to manage this site. I don't know much about nginx, so I am creating these notes to learn more about it to help me more effectively manage my server.
References
- Mastering NGINX, Dimitri Aivaliotis
- An in-depth guide to configuring NGINX for any situation, including numerous examples and reference tables describing each directive
NOTE: I sometimes mixed up the use of directive and parameter in the notes below. Take care to distinguish between these two entities yourself.
NGINX is a high-performance web server designed to use very few system resources. There are many how-to's and example configurations floating around on the web. This guide will serve to clarify the murky waters of NGINX configuration. You will learn how to tune NGINX for various situations, what some of the more obscure configuration options do, and how to design a decent configuration to match your needs.
Installing NGINX and Third-Party Modules
- NGINX was first conceived to be an HTTP server. It was created to solve the C10K problem, described by Daniel Kegel, of designing a web server to handle 10,000 simultaneous connections. NGINX is able to do this through its event-based connection-handling mechanism, and will use the OS-appropriate event mechanism in order to achieve this goal.
Installing NGINX
$ sudo yum install nginx
- This command installs NGINX into standard locations, specific to the operating system. This is the preferred installation method.
Configuring for web or mail service
- NGINX is unique among high-performing web servers in that it was also designed to be a mail proxy server. Depending on your goals in building NGINX, you can configure it for web acceleration, a web server, a mail proxy, or all of them. There are some options that you can specify when installing NGINX to have a slimmed-down binary to use in high-performance environments where every extra KB counts.
- I'm not going to list all of the options here, because there are a lot, but you can find the options, which allow you to include or exclude functionality from NGINX, starting on page 11 of the referenced textbook.
A Configuration Guide
The NGINX configuration file follows a very logical format. Learning this format and how to use each section is one of the building blocks that will help you to create a configuration file by hand.
- The basic NGINX configuration file is set up in a number of sections. Each section is delineated in the following way; the semicolon marks the end of each directive.
<section> {
<directive> <parameters>;
}
Global Section
- The global section is used to configure the parameters that affect the entire server, and is an exception to the format above. The most important configuration directives are shown below:
- user
- The user and group under which the worker processes run are configured by this directive.
- worker_processes
- This is the number of worker processes that will be started. These will handle all connections made by the clients. Choosing the right number depends on the server environment, the disk subsystem, and the network infrastructure. A good rule of thumb is to set this equal to the number of processor cores for CPU-bound loads and to multiply this number by 1.5 to 2 for I/O-bound loads.
- error_log
- The error_log directive is where all the errors are written. If no other error_log is given in a separate context, this log file will be used for all errors, globally. A second parameter to this directive indicates the level at which (debug, info, notice, warn, error, crit, alert, and emerg) errors are written to the log. Note that debug-level errors are only available if the --with-debug configuration switch is given at compilation time.
- pid
- This is the file where the process ID of the main process is written, overwriting the compiled-in default.
- use
- The use directive indicates which connection-processing method should be used. This will override the compiled-in default, and must be contained in an events context, if used. It will not normally need to be overridden, except when the compiled-in default is found to produce errors over time.
- worker_connections
- This directive configures the maximum number of simultaneous connections that a worker process may have open. This includes, but is not limited to, client connections and connections to upstream servers. This is especially important in reverse proxy servers - some additional tuning may be required at the operating system level in order to reach this number of simultaneous connections.
Short example of a section that would be placed at the top of the nginx.conf configuration file:
# we want nginx to run as user 'www'
user www;
# the load is CPU-bound and we have 12 cores
worker_processes 12;
# explicitly specifying the path to the mandatory error log
error_log /var/log/nginx/error.log;
# explicitly specifying the path to the pid file
pid /var/run/nginx.pid;
# sets up a new configuration context for the "events" module
events {
    # we're on a Solaris-based system and have determined that nginx will stop
    # responding to new requests over time with the default connection-processing
    # mechanism, so we switch to the second-best
    use /dev/poll;
    # the product of this number and the number of worker_processes
    # indicates how many simultaneous connections per IP:port pair are accepted
    worker_connections 2048;
}
Include Files
- Include files can be used anywhere in your configuration file, to help it be more readable and to enable you to re-use parts of your configuration.
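A minimal sketch of how include might be used; the paths here are hypothetical:
# pull the MIME type mappings into this context
include mime.types;
# pull in every per-site configuration file from a conf.d directory
include /etc/nginx/conf.d/*.conf;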
The HTTP Server Section
- The HTTP server section, or HTTP configuration context, is available unless you have built NGINX without the HTTP module. This section controls all the aspects of working with the HTTP module, and will probably be used the most. The configuration directives found in this section deal with handling HTTP connections, and so there are many directives which can be split up into the four categories seen below. I am not going to list all of the directives here. They can be found starting on page 25.
- Client Directives
- These directives deal with aspects of the client connection itself, as well as with different types of clients.
- File I/O Directives
- These directives control how NGINX delivers static files and/or how it manages file descriptors.
- Hash Directives
- The set of hash directives controls how large a range of static memory NGINX allocates to certain variables.
- Socket Directives
- These directives describe how NGINX can set various options on the TCP sockets it creates.
Sample configuration that goes after any global configuration directives in the nginx.conf file:
http {
include /opt/local/etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
server_names_hash_max_size 1024;
}
The Virtual Server Section
- Any context beginning with the keyword server is considered a "virtual server" section. It describes a logical separation of a set of resources that will be delivered under a different server_name directive. These virtual servers respond to HTTP requests, and so are contained within the http section.
- A virtual server is defined by a combination of the listen and server_name directives. listen defines an IP address/port combination or the path to a UNIX-domain socket:
listen address[:port];
listen port;
listen unix:path;
- The listen directive uniquely identifies a socket binding under NGINX - there are many optional parameters that listen can take that I will not list here (pg. 30).
- The server_name directive is fairly straightforward, but can be used to solve a number of configuration problems. Its default value is "", which means that a server section without a server_name directive will match a request that has no Host header field set.
server {
listen 80;
return 444; # The non-standard HTTP code, 444, will cause NGINX to immediately close the connection
}
- Besides a normal string, NGINX will accept a wildcard as a parameter to the server_name directive:
- The wildcard can replace the subdomain part: *.example.com
- The wildcard can replace the top-level-domain part: www.example.*
- A special form will match either the subdomain or the domain itself: .example.com matches *.example.com and example.com
- A regular expression can also be used as a parameter to server_name by prepending the name with a tilde (~):
server_name ~^www\.example\.com$;
server_name ~^www(\d+)\.example\.(com)$;
- NGINX uses the following logic when determining which virtual server should serve a specific request:
1. Match the IP address and port to the listen directive.
2. Match the Host header field against the server_name directive as a string.
3. Match the Host header field against the server_name directive with a wildcard at the beginning of the string.
4. Match the Host header field against the server_name directive with a wildcard at the end of the string.
5. Match the Host header field against the server_name directive as a regular expression.
6. If all Host header matches fail, direct to the listen directive marked as default_server.
7. If all Host header matches fail and there is no default_server, direct to the first server with a listen directive that satisfies step 1.
- It is recommended to always set default_server explicitly, so that these unhandled requests will be handled in a defined manner.
Locations - where, when, and how
- The location directive may be used within a virtual server section and indicates a URI that comes either from the client or from an internal redirect. Locations may be nested, with a few exceptions. They are used for processing requests with as specific a configuration as possible.
# A location is defined as follows:
location [modifier] uri {...}
# Or for a named location
location @name {...}
- A named location is only reachable from an internal redirect. It preserves the URI as it was before entering the location block. It may only be defined at the server context level.
- When a request comes in, the URI is checked against the most specific location as follows:
- Locations without a regular expression are searched for the most-specific match, independent of the order in which they are defined.
- Regular expressions are matched in the order in which they are found in the configuration file. The regular expression search is terminated on the first match. The most-specific location match is then used for request processing.
- The comparison match described here is against decoded URIs; for example, a "%20" in a URI will match against a " " specified in a location. A named location may only be used by internally redirected requests. There are directives that are found only within a location that can be found on page 35.
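A brief sketch of the location forms described above; the URIs and backends are hypothetical, but the =, ^~, ~*, and @ modifiers are part of NGINX's location syntax:
server {
    # exact match: only the URI "/status" itself
    location = /status { return 200; }
    # prefix match that also stops the regular expression search
    location ^~ /static/ { root /var/www; }
    # case-insensitive regular expression match
    location ~* \.(jpg|png)$ { root /var/www/images; }
    # named location, reachable only via internal redirect
    location @fallback { proxy_pass http://127.0.0.1:8080; }
}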
The mail server section
- The mail server section, or mail configuration context, is available only if you've built NGINX with the mail module (--with-mail). This section controls all aspects of working with the mail module. The mail module allows for configuration directives that affect all aspects of proxying mail connections, as well as for specifying them per server. The server context also accepts the listen and server_name directives that were seen earlier.
- NGINX can proxy the IMAP, POP3, and SMTP protocols. The directives available to this module are listed in the referenced textbook.
Full Sample Configuration
user www;
worker_processes 12;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
use /dev/poll;
worker_connections 2048;
}
http {
include /opt/local/etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
server_names_hash_max_size 1024;
server {
listen 80;
return 444;
}
server {
listen 80;
server_name www.example.com;
location / {
try_files $uri $uri/ @mongrel;
}
location @mongrel {
proxy_pass http://127.0.0.1:8080;
}
}
}
Using the Mail Module
- NGINX was designed to not only serve web traffic, but also to provide a means of proxying mail services. In this chapter, you will learn how to configure NGINX as a mail proxy for POP3, IMAP, and SMTP services.
- The Post Office Protocol is an Internet standard protocol used to retrieve mail messages from a mailbox server. The current incarnation of the protocol is Version 3, thus POP3. Mail clients will typically retrieve all new messages on a mailbox server in one session, then close the connection. After closing, the mailbox server will delete all messages that have been marked as retrieved.
- The Internet Message Access Protocol is an Internet-standard protocol used to retrieve mail messages from a mailbox server. It provides quite a bit of extended functionality over the earlier POP protocol. Typical usage leaves all messages on the server, so that multiple mail clients can access the same mailbox. This also means that there may be many more, persistent connections to an upstream mailbox server from clients using IMAP than those using POP3.
- The Simple Mail Transport Protocol is the Internet-standard protocol for transferring mail messages from one server to another or from a client to a server. Although authentication was not at first conceived for this protocol, SMTP-AUTH is supported as an extension.
- If your organization requires mail traffic to be encrypted, or if you yourself want more security in your mail transfers, you can enable NGINX to use TLS to provide POP3 over SSL, IMAP over SSL, or SMTP over SSL. To enable TLS support, either set the starttls directive to on for STLS/STARTTLS support, or set the ssl directive to on for pure SSL/TLS support, and configure the appropriate ssl_* directives for your site.
- NGINX does not authenticate requests itself - it sends authentication requests to a mail server.
- Depending on the frequency of clients accessing the mail services on your proxy and how many resources are available to the authentication service, you may want to introduce a caching layer into the setup.
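To tie these directives together, here is a minimal sketch of a mail proxy context; the authentication endpoint on localhost:9000 and the certificate paths are assumptions:
mail {
    # the service NGINX queries to authenticate users and find the upstream mailbox server
    auth_http localhost:9000/auth;
    server {
        listen 993;
        protocol imap;
        # pure SSL/TLS for IMAPS
        ssl on;
        ssl_certificate /usr/local/etc/nginx/mail.crt;
        ssl_certificate_key /usr/local/etc/nginx/mail.key;
    }
    server {
        listen 110;
        protocol pop3;
        # STLS support on the standard POP3 port
        starttls on;
    }
}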
Interpreting Log Files
- Log files provide some of the best clues as to what is going on when a system doesn't act as expected. Depending on the verbosity level configured and whether or not NGINX was compiled with debugging support (--with-debug), the log files will help you understand what is going on in a particular session.
- Each line in the error log corresponds to a particular log level, configured using the error_log directive. The different levels are debug, info, notice, warn, error, crit, alert, and emerg, in order of increasing severity. Configuring a particular level will include messages for all of the more severe levels above it. The default log level is error.
NGINX as a Reverse Proxy
- A reverse proxy is a web server that terminates connections with clients and makes new ones to upstream servers on their behalf. An upstream server is defined as a server that NGINX makes a connection with in order to fulfill the client's request. These upstream servers can take various forms, and NGINX can be configured differently to handle each of them.
- Due to the nature of a reverse proxy, the upstream server doesn't obtain information directly from the client. Some of this information, such as the client's real IP address, is important for debugging purposes, as well as tracking requests. This information may be passed to the upstream server in the form of headers.
Proxy Module
- NGINX can serve as a reverse proxy by terminating requests from clients and opening new ones to its upstream servers. On the way, the request can be split up according to its URI, client parameters, or some other logic, in order to best respond to the request from the client. Any part of the request's original URL can be transformed on its way through the reverse proxy.
- The most important directive, proxy_pass, takes one parameter - the URL to which the request should be transferred. Using proxy_pass with a URI part will replace the request_uri with this part (except if a regular expression is used for the location, or if a rewrite rule within the location changes the URI). Example:
location /uri {
proxy_pass http://localhost:8080/newuri;
}
- See the directives for the reverse proxy on page 68.
- There are some useful directives for modifying the request (see the sketch after this list):
- You can modify the request so that it retains the original client IP address in the X-Forwarded-For header.
- You can also set a maximum body size and a timeout for establishing a connection with the upstream server.
- You can modify the cookie domain.
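A sketch of these request-modifying directives in one location; the addresses, size, and timeout values are illustrative, not recommendations:
location /uri {
    # pass the client's real IP address along to the upstream server
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # cap the size of the client request body
    client_max_body_size 10m;
    # give up if the upstream doesn't accept the connection in time
    proxy_connect_timeout 30s;
    # rewrite the cookie domain set by the upstream
    proxy_cookie_domain upstream.example.com www.example.com;
    proxy_pass http://localhost:8080;
}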
Upstream Module
- Closely paired with the proxy module is the upstream module. The upstream directive starts a new context, in which a group of upstream servers is defined. These servers may be given different weights (the higher the weight, the greater the number of connections NGINX will pass to that particular upstream server), may be of different types (TCP versus UNIX-domain), and may even be marked as down for maintenance reasons.
- The keepalive directive deserves special mention. NGINX will keep this number of connections per worker open to an upstream server. This connection cache is useful in situations where NGINX has to constantly maintain a certain number of open connections to an upstream server. If the upstream server speaks HTTP, NGINX can use the HTTP/1.1 persistent connections mechanism for maintaining these open connections.
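A sketch of an upstream context showing weights, a UNIX-domain socket, a down marker, and keepalive; the addresses and values are hypothetical:
upstream app {
    # receives twice as many connections as the unweighted servers
    server 127.0.0.1:9000 weight=2;
    server 127.0.0.1:9001;
    # a UNIX-domain socket on the local machine
    server unix:/var/run/app.sock;
    # taken out of rotation for maintenance
    server 127.0.0.1:9002 down;
    # keep up to 32 idle connections per worker open to these servers
    keepalive 32;
}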
Load Balancing Algorithms
- The upstream module can select which upstream server to connect to in the next step by using one of three load-balancing algorithms - round-robin, IP hash, or least connections.
- The round-robin algorithm is selected by default, and doesn't need a configuration directive to activate it. This algorithm selects the next server, based on which server was selected previously, which server is next in the configuration block, and what weight each server carries. The round-robin algorithm tries to ensure a fair distribution of traffic, based on a concept of whose turn it is next.
- The IP hash algorithm, activated by the ip_hash directive, instead takes the view that certain IP addresses should always be mapped to the same upstream server. NGINX does this by using the first three octets of an IPv4 address, or the entire IPv6 address, as a hashing key. The same pool of IP addresses is therefore always mapped to the same upstream server. So, this mechanism isn't designed to ensure a fair distribution, but rather a consistent mapping between the client and the upstream server.
- The third load-balancing algorithm supported by the default upstream module, least connections, is activated by the least_conn directive. This algorithm is designed to distribute the load evenly among upstream servers, by selecting the one with the fewest number of active connections. If the upstream servers do not all have the same processing power, this can be indicated using the weight parameter to the server directive. The algorithm will take the differently weighted servers into account when calculating the number of least connections.
Types of upstream servers
- An upstream server is a server to which NGINX provides a connection. This can be on a different physical or virtual machine, but doesn't have to be. The upstream server may be a daemon listening on a UNIX domain socket for connections on the local machine or could be one of many on a different machine listening over TCP. It may be an Apache server, with multiple modules to handle different kinds of requests, or a Rack middleware server, providing an HTTP interface to Ruby applications. NGINX can be configured to proxy to each one of them.
- NGINX could come in handy as a proxy server for a single upstream server due to its ability to handle many simultaneous requests very well with little resource consumption. (An Apache server may only be able to handle so many requests - fewer than NGINX.)
- NGINX could also pass connections to multiple upstream servers by defining the servers like below.
upstream app {
server 127.0.0.1:9000;
server 127.0.0.1:9001;
server 127.0.0.1:9002;
}
server {
location / {
proxy_pass http://app; # references the upstream app defined above
}
}
- Using the configuration above, NGINX will pass consecutive requests in a round robin fashion to the three upstream servers.
- You can also use the upstream directive to point to non-HTTP upstream servers, such as memcached servers.
Reverse Proxy Advanced Topics
- A reverse proxy makes connections to upstream servers on behalf of clients. These upstream servers therefore have no direct connection to the client. This is for several different reasons, such as security, scalability, and performance.
- A reverse proxy server aids security because if an attacker were to try to get onto the upstream server directly, he would have to first find a way to get onto the reverse proxy. Connections to the client can be encrypted by running them over HTTPS. These SSL connections may be terminated on the reverse proxy, when the upstream server cannot or should not provide this functionality itself. NGINX can act as an SSL terminator as well as provide additional access lists and restrictions based on various client attributes.
- Scalability can be achieved by utilizing a reverse proxy to make parallel connections to multiple upstream servers, enabling them to act as if they were one. If the application requires more processing power, additional servers can be added to the pool served by a single reverse proxy.
- Performance of an application may be enhanced through the use of a reverse proxy in several ways. The reverse proxy can cache and compress content before delivering it out to the client. NGINX as a reverse proxy can handle more concurrent client connections than a typical application server. Certain architectures configure NGINX to serve static content from a local disk cache, passing only dynamic requests to the upstream server to handle. Clients can keep their connections to NGINX alive, while NGINX terminates the ones to the upstream servers immediately, thus freeing resources on those upstream servers.
Security through Separation
- We can achieve a measure of security by separating out the point to which clients connect to an application. This is one of the main reasons for using a reverse proxy in an architecture. The client connects directly to the machine running the reverse proxy. This machine therefore should be secured well enough that an attacker cannot find a point of entry.
Encrypting Traffic with SSL
- NGINX is often used to terminate SSL connections, either because the upstream server is not capable of using SSL or to offload the processing requirements of SSL connections. This requires that your nginx binary was compiled with SSL support (--with-http_ssl_module) and that you install an SSL certificate and key.
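A minimal sketch of SSL termination in front of an upstream server; the certificate paths and backend address are assumptions:
server {
    listen 443 ssl;
    server_name www.example.com;
    # certificate and key installed for this site
    ssl_certificate /usr/local/etc/nginx/www.example.com.crt;
    ssl_certificate_key /usr/local/etc/nginx/www.example.com.key;
    location / {
        # the encrypted connection ends here; the upstream sees plain HTTP
        proxy_pass http://127.0.0.1:8080;
    }
}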
Authenticating Clients Using SSL
- Some applications use information from the SSL certificate the client presents, but this information is not directly available in a reverse proxy architecture. NGINX can validate client SSL certificates before passing the request on to the upstream server:
server {
…
# specifies the path to the PEM-encoded list of root CA certificates that will be considered valid signers of client certificates
ssl_client_certificate /usr/local/etc/nginx/ClientCertCAs.pem;
# indicates the path to a certificate revocation list
ssl_crl /usr/local/etc/nginx/ClientCertCRLs.crl;
# states that we want NGINX to check the validity of SSL certificates presented by clients
ssl_verify_client on;
# How many signers will be checked before declaring the certificate invalid
ssl_verify_depth 3;
# 495 = error with certificate validation
error_page 495 = @noverify;
# 496 = certificate invalid
error_page 496 = @nocert;
location @noverify {
proxy_pass http://insecure?status=notverified;
}
location @nocert {
proxy_pass http://insecure?status=nocert;
}
location / {
if ($ssl_client_verify = FAILED) {
return 495;
}
proxy_pass http://secured;
}
}
Blocking Traffic Based on Originating IP Address
- As client connections terminate on the reverse proxy, it is possible to limit clients based on IP address.
- To block client connections based on the country code of the IP address, you will need to compile NGINX with --with-http_geoip_module.
- You can also single out IP addresses to be blocked.
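A sketch of singling out addresses to block, using the access module's deny and allow directives; the addresses are hypothetical:
location / {
    # block one specific client and one whole network
    deny 192.168.1.1;
    deny 10.0.0.0/24;
    # everyone else is let through
    allow all;
}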
Isolating Application Components for Scalability
- Scaling applications can be described by moving in two dimensions, up and out. Scaling up (vertically) refers to adding more resources to a machine, growing its pool of available resources to meet client demand. Scaling out (horizontally) means adding more machines to a pool of available responders, so that no one machine gets tied up handling the majority of clients. Whether these machines are virtualized instances running in the cloud or physical machines sitting in a datacenter, it is often more cost effective to scale out rather than up.
- Due to its very low resource usage, NGINX acts ideally as the broker in a client-application relationship. NGINX handles the connection to the client, able to process multiple requests simultaneously. Depending on the configuration, NGINX will either deliver a file from its local cache or pass the request on to an upstream server for further processing. The upstream server can be any type of server that speaks the HTTP protocol. More client connections can be handled than if an upstream server were to respond directly.
- NGINX simplifies the process of scaling horizontally.
Reverse Proxy Performance Tuning
- NGINX can be tuned in a number of ways to get the most out of the application for which it is acting as a reverse proxy. By buffering, caching, and compressing, NGINX can be configured to make the client's experience as snappy as possible.
Buffering
- The most important factor to consider performance-wise when proxying is buffering. NGINX, by default, will try to read as much as possible from the upstream server as fast as possible before returning that response to the client. It will buffer the response locally so that it can deliver it to the client all at once. If any part of the request from the client or the response from the upstream server is written out to disk, performance might drop. There are some directives that you can set to control buffer size.
- You can also add the X-Accel-Buffering header in the upstream application to buffer or not buffer the response.
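A sketch of the buffer-size directives in a proxied location; the sizes are illustrative values, not recommendations:
location / {
    proxy_buffering on;
    # buffer for the first part of the response, where the headers arrive
    proxy_buffer_size 4k;
    # eight 4 KB buffers per connection for the response body
    proxy_buffers 8 4k;
    # how much of the buffered response may be busy sending to the client
    proxy_busy_buffers_size 8k;
    proxy_pass http://127.0.0.1:8080;
}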
Caching
- NGINX is capable of caching the response from the upstream server, so that the same request asked again doesn't have to go back to the upstream server to be served.
- NGINX offers something called a store to serve static files faster. NGINX will store these files on disk and the upstream server will not be asked for them again after the initial request.
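A minimal sketch of a proxy cache; the cache path, zone name, sizes, and validity period are all hypothetical:
http {
    # where cached responses live on disk, plus a shared memory zone for the keys
    proxy_cache_path /var/spool/nginx/cache keys_zone=CACHE:10m levels=1:2 max_size=1g;
    server {
        location / {
            proxy_cache CACHE;
            # serve cached 200 responses for up to 10 minutes
            proxy_cache_valid 200 10m;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}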
Compressing
- Optimizing bandwidth can help reduce a response's transfer time. NGINX has the capability of compressing a response it receives from an upstream server before passing it on to the client. The gzip module, enabled by default, is often used on a reverse proxy to compress content where it makes sense. Some file types do not compress well. Some clients do not respond well to compressed content.
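A sketch of typical gzip settings; the compression level, minimum length, and MIME types are illustrative choices:
gzip on;
# moderate compression: a reasonable CPU/bandwidth trade-off
gzip_comp_level 4;
# very small responses aren't worth compressing
gzip_min_length 500;
# text compresses well; images and other binaries usually don't
gzip_types text/plain text/css application/json application/javascript;
# skip compression for clients known to mishandle it
gzip_disable "msie6";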
The NGINX HTTP Server
- An HTTP server is primarily a piece of software that will deliver web pages to clients when requested. These web pages can be anything from a simple HTML file on disk to a multicomponent framework delivering user-specific content, dynamically updated through AJAX or WebSocket. NGINX is modular, and is designed to handle any kind of HTTP serving necessary.
NGINX Architecture
- NGINX consists of a single master process and multiple worker processes. Each of these is single-threaded and designed to handle thousands of connections simultaneously. The worker process is where most of the action takes place, as this is the component that handles client requests. NGINX makes use of the operating system's event mechanism to respond quickly to these requests.
- The NGINX master process is responsible for reading the configuration, handling sockets, spawning workers, opening log files, and compiling embedded Perl scripts. The master process is the one that responds to administrative requests via signals.
- The NGINX worker process runs in a tight event loop to handle incoming connections. Each NGINX module is built into the worker, so that any request processing, filtering, handling of proxy connections, and much more is done within the worker process. Due to this worker model, the operating system can handle each process separately and schedule the processes to run optimally on each processor core. If there are any processes that would block a worker, such as disk I/O, more workers than cores can be configured to handle the load.
- There are also a number of helper processes that the NGINX master process spawns to handle dedicated tasks. Among these are the cache loader and cache manager processes. The cache loader is responsible for preparing the metadata for worker processes to use the cache. The cache manager process is responsible for checking cache items and expiring invalid ones.
- NGINX is built in a modular fashion. The master process provides the foundation upon which each module may perform its function. Each protocol and handler is implemented as its own module. The individual modules are chained together into a pipeline to handle connections and process requests. After a request is handled, it is then passed on to a series of filters, in which the response is processed. One of these filters is responsible for processing subrequests, one of NGINX's most powerful features.
- Subrequests are how NGINX can return the results of a request that differs from the URI that the client sent. Depending on the configuration, they may be multiply nested and call other subrequests. Filters can collect the responses from multiple subrequests and combine them into one response to the client. The response is then finalized and sent to the client.
The HTTP Core Module
- The http module is NGINX's central module, which handles all interactions with clients over HTTP.
The Server
- The server directive starts a new context. A default server in NGINX means that it is the first server defined in a particular configuration with the same listen IP address and port as another server. A default server may also be denoted by the default_server parameter to the listen directive.
Logging
- NGINX has a very flexible logging model. Each level of configuration may have an access log. In addition, more than one access log may be specified per level, each with a different log_format. The log_format directive allows you to specify exactly what will be logged, and needs to be defined within the http section.
- The path to the log file itself may contain variables, so that you can build a dynamic configuration.
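A sketch of a custom log format and a variable log path; the format name and the per-host path are assumptions:
http {
    log_format host_combined '$host $remote_addr - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent';
    # one access log file per requested hostname
    access_log /var/log/nginx/$host-access.log host_combined;
}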
Finding Files
- In order for NGINX to respond to a request, it passes it to a content handler, determined by the configuration of the location directive.
Client Interaction
- There are a number of ways in which NGINX can interact with clients. This can range from attributes of the connection itself (IP address, timeouts, keepalive, and so on) to content negotiation headers.
Using Limits to Prevent Abuse
- You can implement rate limiting with NGINX.
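A sketch of request rate limiting with the limit_req module; the zone name, size, rate, and burst are illustrative:
http {
    # track clients by IP, allowing an average of 10 requests per second
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    server {
        location / {
            # allow short bursts of up to 20 queued requests before rejecting
            limit_req zone=perip burst=20;
        }
    }
}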
Restricting Access
- You can restrict access based on geographical location (derived from the IP address), based on the IP address itself, and you can restrict access to certain parts of the site based on IP address.
Streaming Media Files
- NGINX is capable of serving certain video media types. The flv and mp4 modules, included in the base distribution, can perform what is called pseudo-streaming.
- In order to use the pseudo-streaming capabilities, the corresponding module needs to be included at compile time: --with-http_flv_module for Flash Video files and/or --with-http_mp4_module for H.264/AAC files.
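A sketch of enabling pseudo-streaming for media files; the URIs and paths are hypothetical:
location /videos/ {
    # activates mp4 pseudo-streaming for H.264/AAC files under this URI
    mp4;
}
location ~ \.flv$ {
    # activates flv pseudo-streaming for Flash Video files
    flv;
}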
NGINX for the Developer
- This chapter explores how NGINX can be integrated directly into your application.
- NGINX is superb at serving static content. It is designed to support over 100,000 simultaneous connections while using only minimal system resources.
- You can use special headers - such as the X-Accel-Expires header - to tell NGINX how to cache content.
- If your application currently caches prerendered pages in a database, it should be possible without too much additional effort to place those pages into a memcached instance instead.
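A sketch of serving prerendered pages from memcached, falling back to the application on a miss; using $uri as the key and a backend on port 8080 are assumptions:
location / {
    # look the page up in memcached under its URI
    set $memcached_key $uri;
    memcached_pass 127.0.0.1:11211;
    default_type text/html;
    # on a cache miss, hand the request to the application
    error_page 404 = @app;
}
location @app {
    proxy_pass http://127.0.0.1:8080;
}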