Debian Etch - mongrel clusters and surviving a reboot

Proxying requests to a mongrel cluster is one common way of serving a Ruby on Rails web application on Debian Etch.

Let's look at creating a cluster and configuring it to survive and restart after a reboot.


This article stands on its own and is not strictly part of a series. However, the base setup for serving your Ruby on Rails application is discussed here.

Note this is for Debian Etch and may, or may not, work on other distributions.


Let's start by installing the mongrel cluster gem:

sudo gem install mongrel_cluster --include-dependencies

That's it for the install - if you already had the mongrel gem installed, this will be quick. If you are installing the mongrel_cluster gem from scratch, it will pull in several dependencies.


Before we configure the cluster, you will need to have a Ruby on Rails application. The basic structure created via the 'rails' command is more than sufficient:

cd ~
rails public_html/railsapp

Move into your rails application:

cd public_html/railsapp


The command to create your application's cluster is as follows:

mongrel_rails cluster::configure -e production -p 8000 -N 2 -c /home/demo/public_html/railsapp -a 127.0.0.1

-e production: sets the environment. Change this to suit whether you are developing the application or serving the final product.

-p 8000 -N 2: sets the port to start the cluster on and then sets the number of mongrel instances. In this example I am following the vhost setup described in the Apache, rails and mongrels article. Set the port and number of mongrels to suit your application.

-c /home/demo/public_html/railsapp: sets the base directory of your rails application. It is important you use the full path to your rails folder.

-a 127.0.0.1: sets the address to bind to. For most applications binding to localhost is sufficient.
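One detail worth spelling out: with -p 8000 and -N 2 the cluster runs one mongrel per consecutive port, starting at the base port (the status output later in this article shows instances on 8000 and 8001). A quick Ruby sketch of that arithmetic, using the values from the example above:

```ruby
# Illustration only: how the -p (base port) and -N (number of servers)
# options translate into the ports each mongrel instance listens on.
base_port = 8000
servers   = 2

ports = (base_port...(base_port + servers)).to_a
puts ports.inspect   # => [8000, 8001]
```

So an Apache proxy balancer in front of this cluster would point at ports 8000 and 8001.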

As you can see, the separate options for the configure command are quite simple. To find out more enter:

mongrel_rails cluster::configure --help


So what did the command actually do? Well, it created a file called 'mongrel_cluster.yml' in the rails config directory. Let's take a look:

nano config/mongrel_cluster.yml

If you used the example above, the contents will be:

cwd: /home/demo/public_html/railsapp
log_file: log/mongrel.log
port: "8000"
environment: production
pid_file: tmp/pids/
servers: 2

As you can see, the settings come from the configure command. There are also two entries we did not explicitly define: the log location and the pid_file. Feel free to adjust these paths, but both defaults are usually fine.
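Because the file is plain YAML, any deploy script or rake task can read these settings back. A minimal Ruby sketch, using an inline copy of the config above rather than the real file (in a real script you would load config/mongrel_cluster.yml from the application directory):

```ruby
require 'yaml'

# Inline copy of the generated config for illustration purposes.
config = YAML.load(<<YML)
cwd: /home/demo/public_html/railsapp
log_file: log/mongrel.log
port: "8000"
environment: production
servers: 2
YML

puts config['environment']   # => production
puts config['port'].to_i     # note: the port is stored as a string
puts config['servers']
```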

Starting and Stopping

There are several ways of starting and stopping the mongrel_cluster.

Ensuring you are in the rails root folder, issue this command:

mongrel_rails cluster::start

You will receive a warning regarding the ruby version, as Debian Etch ships with ruby 1.8.5 by default. It is a warning, not an error, so at this stage we can ignore it.

And to stop or restart the cluster:

mongrel_rails cluster::stop
mongrel_rails cluster::restart


That's all well and good, but the cluster won't restart after a reboot. Not very handy.

You can read more about mongrel clusters on the main mongrel website but do be aware the instructions on the site do not work without all of the commands listed below.

We'll start by creating a directory under /etc:

sudo mkdir /etc/mongrel_cluster

We need to link the mongrel_cluster.yml (which we just created) into the new directory:

sudo ln -s /home/demo/public_html/railsapp/config/mongrel_cluster.yml /etc/mongrel_cluster/railsapp.yml

You will have to do that for each and every mongrel_cluster you create (if you want them to start automatically). So if you have two rails applications, you will have two symlinks.
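The resulting layout can be sketched like this, using a scratch directory in place of / so the symlink pattern can be tried without root (the path under home/demo is the article's example application):

```shell
# Illustrative only: mirrors the /etc/mongrel_cluster layout in a
# scratch directory so the pattern can be tested without sudo.
ROOT=$(mktemp -d)

mkdir -p "$ROOT/etc/mongrel_cluster"
mkdir -p "$ROOT/home/demo/public_html/railsapp/config"
touch "$ROOT/home/demo/public_html/railsapp/config/mongrel_cluster.yml"

# One symlink per application you want started at boot:
ln -s "$ROOT/home/demo/public_html/railsapp/config/mongrel_cluster.yml" \
      "$ROOT/etc/mongrel_cluster/railsapp.yml"

ls -l "$ROOT/etc/mongrel_cluster"
```

The init script simply walks /etc/mongrel_cluster, so each symlinked .yml file becomes one cluster started at boot.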

Next, copy the gem's init script to the init.d directory:

sudo cp /usr/lib/ruby/gems/1.8/gems/mongrel_cluster-1.0.2/resources/mongrel_cluster /etc/init.d/

Make it executable:

sudo chmod +x /etc/init.d/mongrel_cluster

Now we need to add the script to the default runlevels:

sudo /usr/sbin/update-rc.d -f mongrel_cluster defaults

Finally, to ensure the script correctly initialises the cluster on a reboot, you must have this symlink in place:

sudo ln -s /usr/bin/ruby1.8 /usr/bin/ruby

The indicated symlink assumes you have followed the articles and installed ruby via 'aptitude'. If you installed from source, then you need to adjust the link accordingly.

Final tweaks

You may notice that the init script tries to set the user and group of the cluster to 'mongrel' and throws this warning:

chown: `mongrel:mongrel': invalid user

That's fair enough: we haven't created a mongrel user or a mongrel group, and we have no need to.

You have a few choices here, the first of which is to ignore it (it does no harm).

The second is to open up the init script:

sudo nano /etc/init.d/mongrel_cluster

and comment out the chown commands by prefixing each line that begins with 'chown' with a '#' (the exact lines vary between gem versions, but they are the ones producing the warning above).
If you want to get really jiggy, you can add your own user and group to the init script, but I will leave that to your imagination and skill...

Starting and Stopping v.2

You can also use the command 'mongrel_cluster_ctl' to start, stop and restart your clusters. What's the advantage of this method? Well, for a start, you don't have to be in the rails directory to issue the command.

Let's use this to find the status of any clusters:

mongrel_cluster_ctl status

My output:

Checking all mongrel_clusters...
mongrel_rails cluster::status -C railsapp.yml
found pid_file: tmp/pids/
found mongrel_rails: port 8000, pid 3308

found pid_file: tmp/pids/
found mongrel_rails: port 8001, pid 3311

A nice summary of what is happening.

And to start/stop/restart the cluster(s):

mongrel_cluster_ctl start
mongrel_cluster_ctl stop
mongrel_cluster_ctl restart


Quite a lot going on in this article: we covered most areas needed to create, configure, start, stop and restart a mongrel_cluster, along with creating an init script to restart the cluster(s) after a reboot.

Once you have gone through the commands a couple of times you will see how easy it is to set up and control your rails applications.


Article Comments:

Andy Croll commented Thu Nov 01 01:38:49 UTC 2007:

I've had my server rebooted a couple of times and the mongrels don't come back up. It's because the /tmp/pids/mongrel.800* files are still there.

Is there anything I can do about that?

Nolan Eakins commented Fri Jan 04 23:40:46 UTC 2008:

Yeah, delete them in an init script.

Terry Heath commented Sun Apr 06 17:22:00 UTC 2008:

If you don't want to write your own script to remove the pid files, mongrel_cluster_ctl supports a clean argument. Just change the script for start and restart to have "--clean" at the end.

Marc commented Mon Apr 07 19:40:33 UTC 2008:

for me, mongrel_cluster is in: /var/lib/gems/1.8/gems/mongrel_cluster-1.0.5

just a short fyi :)

Adam Wilson commented Thu Apr 10 10:29:11 UTC 2008:


This is just what I needed after an out-of-memory problem on my server... had to reboot and then go in and restart all my mongrels again. Scary!

Matt commented Tue Sep 16 05:48:20 UTC 2008:

Running the command mongrel_cluster_ctl status I receive:

missing pid_file: tmp/pids/ missing mongrel_rails: port 8000

missing pid_file: tmp/pids/ missing mongrel_rails: port 8001

Seems the cluster starts, stops, and restarts successfully as I receive no errors with those commands, or am I mistaken?

Matt commented Thu Sep 18 05:51:02 UTC 2008:

Note: I've since resolved this issue. Cluster now locating the pid files. All is well.

Andrey commented Fri Oct 10 07:01:28 UTC 2008:


I've made everything above, but my mongrel cluster does not start after reboot.

Manual call of "mongrel_cluster_ctl start" works fine.

Where could the problem be?

hobe commented Sun Dec 14 16:11:53 UTC 2008:

hi. my mongrel cluster also couldn't start after reboot.

found a solution which works for me on

just did the first point, the second is also recommended on this site

1) Add a path statement to the mongrel_cluster file just above the CONF_DIR variable:

sudo vi /etc/init.d/mongrel_cluster

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local:/usr/local/sbin:/usr/local/bin

John David Eriksen commented Mon May 31 01:55:18 UTC 2010:

I've followed all of the above as well as the note posted by hobe. I am unable to get the cluster to survive a reboot.

My PATH in /etc/init.d/mongrel_cluster is:


I can execute /etc/init.d/mongrel_cluster commands just fine both using sudo as a regular user and directly while logged in as root.

However, when I reboot my system, the cluster fails to start and my mongrel.*.log log files contain the following error:

Missing the Rails 2.3.6 gem. Please gem install -v=2.3.6 rails, update your RAILS_GEM_VERSION setting in config/environment.rb for the Rails version you do have installed, or comment out RAILS_GEM_VERSION to use the latest version installed.

I am using Rails 2.3.8. Somehow, when the system is rebooting, an older gem that depends on an older version of Rails is being called in some way.
