Ubuntu Hardy - mongrel and mongrel clusters

There are a variety of options open to the sysadmin when serving Ruby applications.

One of the original ways is to use the mongrel web server. Requests are proxied to the mongrel(s) from the main web server (Apache, Nginx, etc).
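To sketch what that proxying looks like in practice — assuming Apache 2.2 with mod_proxy and mod_proxy_balancer enabled, two mongrels on ports 8000 and 8001, and example.com as a placeholder domain — a virtual host might look like this:

```apache
<VirtualHost *:80>
    ServerName example.com

    # Define the pool of mongrel backends
    <Proxy balancer://mongrel_cluster>
        BalancerMember http://127.0.0.1:8000
        BalancerMember http://127.0.0.1:8001
        # Explicitly allow proxying (without this Apache returns 403s)
        Order allow,deny
        Allow from all
    </Proxy>

    ProxyPass / balancer://mongrel_cluster/
    ProxyPassReverse / balancer://mongrel_cluster/
</VirtualHost>
```

This is only an outline; adjust the ports and ServerName to match your own setup.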

The article may seem quite lengthy but it tackles two subjects: the basic mongrel gem itself, and then the mongrel_cluster gem.

Take each section at a time as each one builds on the previous explanation.


I am assuming you have Ruby and Rubygems installed on your Slice. If you don't, please see the Ubuntu Hardy Ruby on Rails article.


Mongrel is a rubygem and installation is as simple as:

sudo gem install mongrel

On the test Slice with a basic rubygems and Rails installation, the process installed the following gems:


That can vary depending on what you already have installed.

Mongrel basics

The mongrel package has three main commands: start, stop and restart.

However, there are many options you could add to fit your needs such as the environment or the port and so on:

mongrel_rails start -e production -p 6000

You need to be in the rails application directory to issue that command and, perhaps obviously, it would start a mongrel instance in production mode on port 6000.

If you don't run it in the background (daemonised), the output in the terminal will be similar to that of the built-in Rails WEBrick server.

To run it in the background, simply add the '-d' option:

mongrel_rails start -e production -p 6000 -d

To stop the running process (assuming it is being run in a daemonised fashion):

mongrel_rails stop

Again, the command should be given when in the rails directory.
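When started with '-d', mongrel writes its process id to tmp/pids/mongrel.pid inside the application directory, and the stop command reads that file to find the process. A minimal sketch of checking it by hand — using a scratch directory, /tmp/myapp, standing in for a real rails application root:

```shell
# Scratch directory standing in for a rails application root
mkdir -p /tmp/myapp/tmp/pids && cd /tmp/myapp

# A daemonised mongrel would have written its pid here
if [ -f tmp/pids/mongrel.pid ]; then
  echo "mongrel running, pid $(cat tmp/pids/mongrel.pid)"
else
  echo "no mongrel running"
fi
```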

Mongrel Clusters

You can run as many individual mongrels as you like for your application but it does get a little unwieldy if you have more than one application.

One solution is to create what are called mongrel clusters. These 'clusters' are predefined groups of mongrels which can be started and stopped as a single unit, and can be configured to start on a Slice reboot (so your application will start itself automatically after a reboot).


Just as with the original mongrel install, the mongrel_cluster is a rubygem:

sudo gem install mongrel_cluster

As I had already installed the mongrel gem with its dependencies, only the mongrel_cluster gem itself was installed. This may vary on your Slice, depending on what you already have installed.


Configuring a mongrel cluster for your rails application runs along similar lines to the single mongrel options shown above.

Configuring a cluster of 2 mongrels in production mode, starting from port 8000, would be as follows:

mongrel_rails cluster::configure -e production -p 8000 -N 2 -c /home/demo/public_html/testapp -a

Note that I set the full path of the rails application and the address to bind to (localhost in this case).

There are plenty of options available when configuring a mongrel cluster and the easiest thing is to have a look at the help file:

mongrel_rails cluster::configure -h


You will have noticed the output of the command is as follows:

Writing configuration file to config/mongrel_cluster.yml.

The contents of which are:

cwd: /home/demo/public_html/testapp
log_file: log/mongrel.log
port: "8000"
address:
environment: production
pid_file: tmp/pids/mongrel.pid
servers: 2

Well, no real surprises there, it simply puts the mongrel cluster options into a YAML format.

You can edit the file by hand if you wish to change something and don't want to go through the configure command again.
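For example, to change the number of mongrels without re-running the configure command, you could edit the 'servers' line directly. A sketch using sed against a scratch copy of the file above (the real file lives in your application's config directory):

```shell
# Recreate the example config in a scratch directory
mkdir -p /tmp/testapp/config && cd /tmp/testapp
cat > config/mongrel_cluster.yml <<'EOF'
cwd: /home/demo/public_html/testapp
log_file: log/mongrel.log
port: "8000"
environment: production
pid_file: tmp/pids/mongrel.pid
servers: 2
EOF

# Bump the cluster from 2 to 3 mongrels
sed -i 's/^servers: 2$/servers: 3/' config/mongrel_cluster.yml
grep '^servers:' config/mongrel_cluster.yml
```

Remember the cluster needs a restart before any change to the file takes effect.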

Mongrel_cluster basics

Starting the cluster is a case of:

mongrel_rails cluster::start

Ensure you are in your rails application folder when you issue the command.

Stopping and restarting:

mongrel_rails cluster::restart
mongrel_rails cluster::stop

init scripts

The final configuration you may want to consider (and I recommend it) is to create an init script so the mongrel cluster is started on a reboot.

Unlike 'thin' or mod_rails there is no easy way of doing this so it does require some work.

Firstly, create a folder in the /etc folder:

sudo mkdir /etc/mongrel_cluster

Then create a symlink from the cluster configuration file to the newly created folder:

sudo ln -s /home/demo/public_html/testapp/config/mongrel_cluster.yml /etc/mongrel_cluster/testapp.yml

You will have to do that for each and every mongrel_cluster you create (if you want them to start automatically). So if you have two rails applications, you will have two symlinks.

Next, copy the gem init script to the init.d directory:

sudo cp /usr/lib/ruby/gems/1.8/gems/mongrel_cluster-1.0.5/resources/mongrel_cluster /etc/init.d/

Make it executable:

sudo chmod +x /etc/init.d/mongrel_cluster

and then add the script to the runlevels:

sudo /usr/sbin/update-rc.d -f mongrel_cluster defaults

Wow. Quite a long and complicated procedure when compared to using 'thin' or mod_rails.

Cluster control

Let's take a quick look at controlling the clusters.

Getting a status of any running clusters is always nice:

mongrel_cluster_ctl status

The output will show something along the lines of:

Checking all mongrel_clusters...
mongrel_rails cluster::status -C testapp.yml
found pid_file: tmp/pids/mongrel.8000.pid
found mongrel_rails: port 8000, pid 2343

found pid_file: tmp/pids/mongrel.8001.pid
found mongrel_rails: port 8001, pid 2346

That matches the cluster we created earlier so no problems.
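Under the hood, the status command is reading one pid file per mongrel, named after the port it runs on. A sketch of inspecting them by hand — using scratch files matching the output above rather than a live cluster:

```shell
# Scratch pid files matching the example cluster (ports 8000 and 8001)
mkdir -p /tmp/testapp/tmp/pids && cd /tmp/testapp
echo 2343 > tmp/pids/mongrel.8000.pid
echo 2346 > tmp/pids/mongrel.8001.pid

# List each mongrel's port and pid from its pid file
for f in tmp/pids/mongrel.*.pid; do
  port=${f##*mongrel.}
  port=${port%.pid}
  echo "port $port -> pid $(cat "$f")"
done
```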

To start/stop/restart the cluster(s):

mongrel_cluster_ctl start
mongrel_cluster_ctl stop
mongrel_cluster_ctl restart

Remember you may need to put a sudo in front of the 'stop' command if you have just rebooted, as the process started at boot is owned by root.


There is a lot happening in this article but, when followed all the way through, it provides all the necessary gems and information to create mongrel clusters for each of our rails applications.

Once the init script is in place, a simple symlink is all it takes to ensure each cluster is restarted on a reboot.


Article Comments:

Scott Andreas commented Tue Jun 03 06:45:07 UTC 2008:


Thanks so much for providing all of these tutorials. Not having set up Nginx before, there's not a chance I'd consider attempting to set up a VPS environment on my own without them.

Just a note; the very first command listed in this article contains a typo; "sudo gem instal mongrel" is missing the second 'l' on "install."

Thanks again - really appreciate it!

  • Scott

Don Buchanan commented Mon Jul 28 06:30:33 UTC 2008:

Perhaps before looking at the status using

mongrel_cluster_ctl status

you should advise people to manually restart the clusters with

mongrel_rails cluster::restart

otherwise they might get missing pid file errors?

chovy commented Sun Oct 26 23:47:34 UTC 2008:

gem install mongrel

make sh: make: not found

Joe Berkovitz commented Fri Nov 07 02:20:00 UTC 2008:

I just got a mongrel cluster working on my slice and a few noteworthy points came up:

  • It would be helpful to copy over a sample Apache2 virtual server conf from the Mongrel docs to this page that illustrates the correct Proxy directives

  • I found that the mongrel docs failed to mention the need to explicitly allow proxying by including a <proxy> element with nested allows. I was getting 403s until I made this change.

  • The address did not work for me. I found that I had to use the actual domain name of my slice in order for Apache to willingly proxy to my mongrel cluster. Go figure -- the IPs are different of course, but I don't know why that should matter.

buggybunny commented Fri Nov 07 16:41:57 UTC 2008:

Fantastic tutorial -> it helped me a lot.

Thanks, mate!

jack commented Sun Feb 22 08:20:20 UTC 2009:

i thought "instal" was a typo too, but it works.

Pandian.A commented Thu Jul 16 09:31:42 UTC 2009:

I want to stop/start a single port of a mongrel_cluster running 2 servers 8000 and 8001. If 8001 fails i want to start 8001 immediately.but now every time I'm stop and start entire cluster so please help me to solve this problem.
