Ubuntu Gutsy - Nginx configuration #1

Let's take a look at the main nginx.conf file for our Ubuntu Gutsy install of Nginx.

Although I'll make some suggestions, the aim here is not to change a lot at this point. Rather, we will look at the main settings, see what they mean and what a change will actually do.


Defaults

So why only a few changes to the default? Well, it's difficult to give a definitive configuration as there are so many variables to consider such as expected site traffic, Slice size, site type, etc.

In this article, and the next ones, we'll discuss the main settings and you can make any decisions as to what settings you feel are best for your site. Any changes I do suggest are simply that: suggestions.

My advice is very simple: experiment. Find what works best on your setup.

nginx.conf

Open up the main Gutsy Nginx config file:

sudo nano /etc/nginx/nginx.conf

The default (assuming you installed via aptitude) is pretty short:

user www-data;
worker_processes  1;

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    access_log  /var/log/nginx/access.log;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    gzip  on;

    include /etc/nginx/sites-enabled/*;

}

Let's look at some of the main settings, what they mean and some possible changes.

I won't mention some of the more obvious settings, such as the access log and PID file locations.

user

Default:

user www-data;

As you can imagine, this sets the nginx user.

I always push for consistency across servers and the default web server user on Debian-based systems is www-data. As such, keep this as the www-data user.

You can also add a group to this setting, and it's worth doing so, as follows:

user www-data www-data;

worker_processes

Default:

worker_processes  1;

Nginx can have more than one worker process running at the same time.

To take advantage of SMP and to enable good efficiency I would recommend changing this to read:

worker_processes  4;

Although you can experiment with this number (and I encourage you to do so), setting it to more than 4 processes may actually cause efficiency issues on your Slice.
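If you want a rough guide for this figure, one quick (and purely optional) way to see how many CPU cores your Slice can see is to count the processor entries in /proc/cpuinfo:

cat /proc/cpuinfo | grep -c ^processor

Matching worker_processes to that number is a reasonable starting point for your own experiments.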

worker_connections

Default:

events {
    worker_connections  1024;
}

Note the worker_connections setting is placed inside the 'events' module.

This sets the number of connections each worker process can handle, and 1024 is a good default.

You can work out the maximum clients value from this and the worker_processes settings:

max_clients = worker_processes * worker_connections
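So, taking the values suggested above (4 worker processes and 1024 connections each), that works out as:

max_clients = 4 * 1024 = 4096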

http module

Next comes the http module which contains base settings for http access:

include       /etc/nginx/mime.types;
default_type  application/octet-stream;

Unless you have an overwhelming desire, I would leave these settings alone.

You can, of course, add more includes if you want to customise it, but messing with mime-types usually ends up with broken web pages and download errors.

Mind you, it is good fun to play with!

sendfile

Default:

sendfile        on;

Sendfile is used when the server (Nginx) can effectively ignore the contents of the file it is sending. It uses the kernel's sendfile support instead of spending its own resources on the request.

It is generally used for larger files (such as images) which do not need a multiple request/confirmation exchange to be served - thus freeing resources for items that do need that level of 'supervision' from Nginx.

Keep it on unless you know why you need to turn it off.

tcp

Default:

#tcp_nopush     on;
tcp_nodelay        on;

tcp_nopush: Sends the HTTP response headers in one packet. You can read more about tcp_nopush on this page.

I would change the default here and uncomment the setting as it is useful when combined with the sendfile option we set earlier.

tcp_nodelay: "disable the Nagle buffering algorithm". There you go!

Actually, it is for use with items that do not require a response. General web use does require a response from the client and so, going against the default, I would change this to off.

You can read more about tcp_nodelay here.

So there you are. I have changed the two default tcp settings. Your experience may show otherwise and, again, all I can say is experiment with your site/app - what do you need?
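Putting those suggestions together, the relevant lines in the http module would end up looking something like this (adjust if your own testing says otherwise):

sendfile        on;
tcp_nopush      on;
tcp_nodelay     off;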

keepalive

Default:

#keepalive_timeout  0;
keepalive_timeout  65;

The default is very high and can easily be reduced to a few seconds (an initial setting of 2 or 3 is a good place to start and you will rarely need more than that). If no new requests are received during this time the connection is killed.

OK, but what does it mean? Well, once a connection has been established and the client has requested a file, this says "sit there and ignore everyone else until the time limit is reached or you get a new request from the client".

Why would you want a higher time? In cases where there will be a lot of interactivity on the site. However, in most cases, people will go to a page, read it for a while and then click for the next page. You don't want the connection sat there doing nothing and ignoring other users.
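Following the suggestion above, the keepalive line would simply become something like:

keepalive_timeout  3;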

gzip

Default:

gzip  on;

Good. We like gzip. It allows for instant, real-time compression.

I would add a couple more settings as follows:

gzip_comp_level 2;
gzip_proxied any;
gzip_types      text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

I think those are self explanatory and simply add to the gzip setting. You can read more about the various gzip settings on this page.

include

Default:

include /etc/nginx/sites-enabled/*;

This defines which configuration files, located outside the main nginx.conf, should be included.

In this case, it points to the sites-enabled directory, so any symlinks placed there are picked up - thus enabling any 'available' sites.
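As a quick illustration, enabling a site is simply a case of symlinking its configuration file from sites-available into sites-enabled (the site name 'mysite' here is purely an example):

sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite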

I talk more on the sites-available and sites-enabled directories in the Ubuntu Gutsy Nginx layout article.

Summary

Phew. There's a lot going on in this article, especially from such a small config file.

However, taking one setting at a time, we can see that each one is not only essential but pretty flexible.
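One practical note: any changes you make to nginx.conf won't take effect until Nginx is reloaded. Assuming the default Gutsy aptitude packaging, the init script handles this:

sudo /etc/init.d/nginx reload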

The next articles will take you through setting up virtual hosts and move onto mongrel integration for your Ruby on Rails applications.

PickledOnion.

Article Comments:

Michael commented Thu Dec 20 17:37:52 UTC 2007:

Having followed the install from source route, my nginx.conf file is here:

/usr/local/nginx/conf rather than at /etc/nginx

Should I copy to /etc/nginx, or is that not necessary?

Ian Clifton commented Sat Feb 02 07:41:42 UTC 2008:

In the http section, I recommend adding "server_tokens off;" in order to avoid showing the nginx version. This is similar to setting "ServerTokens" to "Prod" in Apache.

Geoff Cheshire commented Mon Mar 24 17:31:23 UTC 2008:

Ian, that servertokens off; directive is not working for me. I get an unknown directive "servertokens" error when I try starting nginx again.

samotage commented Mon Aug 04 00:27:44 UTC 2008:

lerverley. Thanks for the keepalive tips.

Sam.

Brian Armstrong commented Wed Feb 18 22:16:40 UTC 2009:

Hey Guys,

Just wanted to let you know that this worked fine for me, but once I turned on SSL I encountered a problem. For larger text files, like prototype.js, it would only display part of the file, and then hang the browser.

After much Googling I found a solution. Basically you have to add this under the other gzip options:

gzip_buffers 16 8k;

You can read more here: http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl And here: http://forums.pragprog.com/forums/66/topics/924

Thanks! Brian Armstrong www.UniversityTutor.com
