NGINX Directives that can help in optimizing performance for high traffic loads

Written by Sandeep Khuperkar

Jul 06, 2015

With its versatile feature set, NGINX has become very popular as a web server, cache and load balancer, and many of the world's busiest websites run it. Although the default settings of NGINX and Linux work for most deployments, there are still some key settings in NGINX that can help optimize it further for high traffic situations. While making these changes, it is recommended to change one setting at a time and set it back to the default value if it does not result in a positive performance change. We have previously shared some fine tuning techniques in Part 1 and Part 2. These techniques will be useful to users who have a basic understanding of NGINX. We always recommend confirming your settings with the NGINX team if you are working in a production environment.

Here are some more tips to get better performance from your deployment:

Worker Processes

NGINX can run multiple worker processes, and the worker process settings are among its key directives. Once the master process is bound to the required IPs/ports, it spawns workers as the specified user, and those workers handle the connections.

worker_processes – The number of NGINX worker processes. This directive allows us to set the number of workers to spawn. Common practice is to run one worker process per CPU core, which can be achieved by setting the value to “auto”.

To decide what value to set for worker_processes, look at the number of cores on your server and set it accordingly. You may want to increase this number when the worker processes have to do a lot of disk I/O.

worker_connections – The maximum number of connections that can be processed at one time by each worker process. The default is 512, but you may want to increase the number of connections each worker can handle as your traffic grows.
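As an illustration, here is a minimal sketch of these two settings together; the connection count below is a hypothetical value, not a universal recommendation:

    # Spawn one worker per CPU core
    worker_processes auto;

    events {
        # Each worker can handle up to 1024 simultaneous connections (default is 512)
        worker_connections 1024;
    }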

Keepalives

Keepalive allows a client to keep a connection alive until it times out or the specified number of requests is reached. Keepalives have a big impact on load time for the end user: if your website loads fast your users are happy, and if it is an e-commerce site, more sales are completed.

NGINX supports keepalives for both clients and upstream servers. The directives for client keepalives are as follows:

keepalive_requests – The number of requests a client can make over a single keepalive connection. The default value is 100, but in scenarios where a single client makes many requests this can be set to a higher value.

keepalive_timeout – How long an idle keepalive connection remains open.
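As a sketch, the client keepalive directives can be set in the http block; the values below are hypothetical, chosen only to illustrate the syntax:

    http {
        # Allow up to 1000 requests over one keepalive connection (default is 100)
        keepalive_requests 1000;
        # Close idle keepalive connections after 65 seconds
        keepalive_timeout 65;
    }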

The directive for upstream keepalives is as follows:

keepalive – The number of idle keepalive connections to an upstream server that remain open for each worker process. There is no default value.
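Below is a minimal sketch of upstream keepalives, assuming a hypothetical backend group; note that NGINX requires HTTP/1.1 and a cleared Connection header on proxied requests for upstream keepalives to work:

    upstream backend {
        server 192.168.0.10:8080;   # hypothetical upstream server
        # Keep up to 16 idle connections to this upstream open per worker
        keepalive 16;
    }

    server {
        location / {
            proxy_pass http://backend;
            # Both settings below are required for upstream keepalives
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }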

Access Logs

By default, NGINX writes every request to a file on disk, which consumes both CPU and I/O cycles. If you do not intend to use the access logs for anything, you can simply turn them off and avoid the disk writes. But if you do require access logs (as almost everyone does), consider buffering log entries and writing them out together.

access_log – This directive enables you to buffer log entries by setting the buffer size with the “buffer=size” option. With “flush=time” you can tell NGINX to write the entries in the buffer after a specified amount of time.
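For example (the path, buffer size and flush interval below are illustrative assumptions):

    # Collect entries in a 32k buffer and write them out at least every 5 seconds
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

    # Or, if the logs are not needed at all:
    # access_log off;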

Limits

The following are a few directives that can be set to keep clients from consuming too many resources and to help optimize the performance of your system; a combined sketch follows the list.

limit_conn and limit_conn_zone – Limit the number of allowed connections. Setting these can stop individual clients from opening too many connections and consuming too many resources.

limit_rate – Limits the bandwidth allowed for a client on a single connection.

limit_req and limit_req_zone – Limit the rate of requests processed by NGINX.

max_conns – Sets the maximum number of simultaneous connections an upstream server can accept, which prevents the upstream server from being overloaded. The default value is zero, which means no limit.

queue – This directive sets the number of requests that can be queued and for how long. Requests are queued when all the servers in the upstream group have reached their max_conns limit, or when there are no available servers in the upstream group.
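Here is a combined sketch of the connection, request and bandwidth limits; the zone names, location and limit values are hypothetical and would need tuning for a real workload:

    http {
        # Track clients by IP address: shared zones of 10 MB each
        limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
        limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

        server {
            location /download/ {
                # At most 10 requests/second per IP, with bursts of up to 20 queued
                limit_req zone=req_per_ip burst=20;
                # At most 10 simultaneous connections per IP
                limit_conn conn_per_ip 10;
                # Cap each connection's response bandwidth at 500 KB/s
                limit_rate 500k;
            }
        }
    }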

Performance tuning and optimization is an ongoing activity; settings need to be monitored and tweaked for individual cases.

– Sandeep Khuperkar | CTO and Director, Ashnik

