Ubuntu Gutsy - mongrel clusters and surviving a reboot

Proxying requests to a mongrel cluster is one of the ways of serving your Ruby on Rails web application.

Let's create a cluster of mongrels and configure it to restart after a reboot.


Setup

This article stands on its own and is not strictly part of a series. However, the base setup for serving a Ruby on Rails application using mongrels and Apache is discussed here, and the base for mongrels and Nginx can be found in this article.

Install

Let's start by installing the mongrel cluster gem:

sudo gem install mongrel_cluster

I had rubygems v0.9.5 on my demo slice so the dependencies were automatically taken care of.

In all, the following gems were installed:

gem_plugin-0.2.3
cgi_multipart_eof_fix-2.5.0
daemons-1.0.9
fastthread-1.0.1
mongrel-1.1.1
mongrel_cluster-1.0.5

Base

Before we configure the cluster, you will need to have a Ruby on Rails application. The basic structure created via the 'rails' command is more than sufficient:

cd /home/demo/public_html
rails railsapp

Move into your rails application:

cd railsapp

Configure

The command to create your application's cluster is as follows:

mongrel_rails cluster::configure -e production -p 8000 -N 2 -c /home/demo/public_html/railsapp -a 127.0.0.1

-e production: sets the environment. Change this to suit whether you are developing the application or serving the final product.

-p 8000 -N 2: sets the port to start the cluster on and then sets the number of mongrel instances. In this example I am following the vhost setup described in the Apache, rails and mongrels article. Set the port and number of mongrels to suit your application.

-c /home/demo/public_html/railsapp: sets the base directory of your rails application. It is important you use the full path to your rails folder.

-a 127.0.0.1: sets the address to bind to. For most applications binding to localhost is sufficient.

As you can see, the separate options for the configure command are quite simple. To find out more enter:

mongrel_rails cluster::configure --help

YAML

So what did the command actually do? Well, it created a file called 'mongrel_cluster.yml' in the rails config directory. Let's take a look:

nano config/mongrel_cluster.yml

If you used the example above, the contents will be:

---
cwd: /home/demo/public_html/railsapp
log_file: log/mongrel.log
port: "8000"
environment: production
address: 127.0.0.1
pid_file: tmp/pids/mongrel.pid
servers: 2

As you can see, the settings are from the configure command. There are also two entries that we did not specifically define: the log location and the pid_file. Feel free to adjust these paths to ones of your choosing, but both defaults are usually just fine.
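
Note that relative paths are taken from the 'cwd' setting, and that mongrel appends the port number to each filename at runtime (mongrel.8000.log, mongrel.8000.pid and so on). If you did want to move the logs, a tweak like this would do it (the directory shown here is purely illustrative and must exist and be writable by the user running the mongrels):

log_file: /home/demo/logs/mongrel.log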

Starting and Stopping

There are several ways of starting and stopping the mongrel_cluster.

Ensuring you are in the rails root folder, issue this command:

mongrel_rails cluster::start

Which will simply output:

starting port 8000
starting port 8001

And to stop or restart the cluster:

mongrel_rails cluster::stop
..
mongrel_rails cluster::restart
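
You can also check on a running cluster from the rails root:

mongrel_rails cluster::status

This simply lists the pid file and port of each mongrel it finds (we will see some example output further down).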

Reboot

That's all well and good but the cluster won't restart on a reboot. Not very handy.

You can read more about mongrel clusters on the main mongrel website, but do be aware the instructions on the site do not work without all of the commands listed below.

Let's stop any running mongrels:

mongrel_rails cluster::stop

Then create a folder inside '/etc':

sudo mkdir /etc/mongrel_cluster

We need to link the mongrel_cluster.yml (which we just created) to the folder:

sudo ln -s /home/demo/public_html/railsapp/config/mongrel_cluster.yml /etc/mongrel_cluster/railsapp.yml

You will have to do that for each and every mongrel_cluster you create (if you want them to start automatically). So if you have two rails applications, you will have two symlinks.
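
For example, if you had a second (purely hypothetical) application called 'otherapp', you would add a second symlink alongside the first:

sudo ln -s /home/demo/public_html/otherapp/config/mongrel_cluster.yml /etc/mongrel_cluster/otherapp.yml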

Next, copy the gem init script to the init.d directory:

sudo cp /usr/lib/ruby/gems/1.8/gems/mongrel_cluster-1.0.5/resources/mongrel_cluster /etc/init.d/
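
Note the version number in that path: if you have a different release of the gem installed, adjust it to match. You can check what is actually on disk with:

ls /usr/lib/ruby/gems/1.8/gems/ | grep mongrel_cluster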

Make it executable:

sudo chmod +x /etc/init.d/mongrel_cluster

Now we need to add the script to the runtime list:

sudo /usr/sbin/update-rc.d -f mongrel_cluster defaults
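
If you want to confirm what update-rc.d did, list one of the runlevel directories (the exact link names can vary slightly between releases):

ls /etc/rc2.d/ | grep mongrel_cluster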

Finally, to ensure the script correctly initialises the cluster on a reboot, you must have this symlink in place:

sudo ln -s /usr/bin/ruby1.8 /usr/bin/ruby
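
A quick check that the link is in place and points at a working interpreter:

which ruby
ruby -v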

Final tweaks

If you take a look at the mongrel init script, you will see it tries to set the user and group of the cluster to 'mongrel'.

Problem is, we don't have a mongrel user or group.

We can do a few things here, including changing the init script. The problem with that is, if the mongrel_cluster gem is updated, we will have to remember any changes we made when we link to the newer version.

Instead, a neat way is to simply create a mongrel user and then add it to the www-data group (thanks to Adam and Tad for this tip):

sudo useradd mongrel
sudo usermod -a -G www-data mongrel

Just to clarify: when we set the permissions on the public_html folder, we ensured www-data had access to the folders. Now, the default mongrel_cluster user also has access to the correct folders.
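
You can confirm the new user and its group membership with:

id mongrel

The output should list www-data among its groups.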

Starting and Stopping v.2

Now that we have the mongrel_cluster init script in place and the permissions nicely sorted, let's have a very quick look at how we can start and stop the cluster from the command line.

You can use the command 'mongrel_cluster_ctl' to start, stop and restart your clusters. The advantage of this method is that you don't have to be in the rails directory to issue it.

Let's use this command to find the status of any clusters:

mongrel_cluster_ctl status

My output:

Checking all mongrel_clusters...
mongrel_rails cluster::status -C railsapp.yml
...
found pid_file: tmp/pids/mongrel.8000.pid
found mongrel_rails: port 8000, pid 2065

found pid_file: tmp/pids/mongrel.8001.pid
found mongrel_rails: port 8001, pid 2068

Good: I have just rebooted, so I expected to see the two mongrels running.

To start/stop/restart the cluster(s):

mongrel_cluster_ctl start
...
mongrel_cluster_ctl stop
...
mongrel_cluster_ctl restart

Remember you may need to put a sudo in front of the 'stop' command if you have just rebooted as the process started on reboot is owned by root.
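
In other words, straight after a reboot you would use:

sudo mongrel_cluster_ctl stop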

Summary

Quite a lot going on in this article: we covered most areas needed to create, configure, start, stop and restart a mongrel_cluster, along with creating an init script to restart the cluster(s) after a reboot.

Once you have gone through the commands a couple of times you will see how easy it is to set up and control your rails applications.

PickledOnion.

Article Comments:

Frank commented Tue Nov 13 20:04:32 UTC 2007:

Is there a recommended number of mongrel instances per rails application (ex. xxxx amount of hits)?

PickledOnion commented Wed Nov 14 09:43:25 UTC 2007:

Hi Frank,

I think most people start with 2 or 3 instances and then increase depending on how it performs.

I do not have a formula to give you as it depends on so many variables such as number of visitors but also load, the rails app, database connections and queries and so on.

PickledOnion.

John Griffiths commented Sun Dec 02 01:14:52 UTC 2007:

I asked this same question a while back; I think the general rule of thumb is 4 mongrels max on a 256 slice, doubling up as you go.

I've got 3 web apps running on my slice with one mongrel cluster for each app, which seems to work out much nicer performance-wise than having 2 mongrel clusters per app.

Trying out nginx for my new Gutsy slice, hoping to replace Apache with it.

Will let you know how that goes on my blog...

http://www.red91.com

Thomas commented Sat Dec 15 14:54:26 UTC 2007:

This works great for adding one cluster, but I was confused between 'mongrel_cluster' the program and 'mongrel_cluster' the actual cluster. Please, please edit the article and distinguish between the two. Otherwise this works like a charm.

Thank you for all the articles you have put together; they have really saved me some serious frustration.

Unixmonkey commented Sat Dec 15 16:49:25 UTC 2007:

Running your rails app in production mode with these settings (at least with rails 2.0.1) will cause rails to think you are still running locally (yes, even in production mode), and you will still get visible exceptions (e.g. instead of routing to 404.html and 500.html in /public).

Better to change the 127.0.0.1 in mongrel_cluster.yml to your external IP address, then access it in your Apache vhost as BalancerMember http://your_ip_here:8000

Great articles overall. Keep them coming!

PickledOnion commented Sat Dec 15 18:19:15 UTC 2007:

Hi Thomas,

Sorry, not sure what you would be confused about?

Unixmonkey,

I will look at this with the new rails release, but I would be doubtful that you have to configure an external IP, as that would create extra latency and mean you would need to change your firewall rules.

As I say, I will investigate.

Thanks for the comments!

PickledOnion.

Unixmonkey commented Mon Dec 17 20:24:24 UTC 2007:

PickledOnion, Further investigation shows that there is nothing wrong with your setup as I thought. Rails can be funny about determining what a local request is sometimes.

Overriding local_request? in application.rb to return false will suppress that debug trace. I've posted more info here: http://unixmonkey.net/blog/?p=6
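
A minimal sketch of that override, assuming the standard Rails 2.x layout (app/controllers/application.rb):

class ApplicationController < ActionController::Base
  protected

  # Always treat requests as non-local so the public 404/500 pages
  # are served instead of the development-style debug trace.
  def local_request?
    false
  end
end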

PickledOnion commented Tue Dec 18 13:00:59 UTC 2007:

Unixmonkey,

Thanks for getting back to us and thanks for the link as well.

It seems there are some 'gotchas' that are catching people out with the new Rails so, again, thanks for the link.

PickledOnion.

Arthur commented Wed Dec 19 00:30:23 UTC 2007:

Followed the article but ran into this at the end:

Checking all mongrel_clusters...
mongrel_rails cluster::status -C railsapp.yml
missing pid_file: tmp/pids/mongrel.8000.pid
missing mongrel_rails: port 8000

missing pid_file: tmp/pids/mongrel.8001.pid
missing mongrel_rails: port 8001

The only difference I can see is that I'm running mongrel 1.1.2. Any ideas on what to troubleshoot first? New to this. Thanks!

Nick commented Fri Dec 21 02:39:08 UTC 2007:

Arthur,

Check your version of rails (rails -v). It may be it is missing the correct dependency versions (since rails just got updated to 2.0.2). If that doesn't work check the log directory in your rails app for mongrel.####.log and see what it says.

Arthur commented Fri Dec 21 04:19:28 UTC 2007:

Nick--Thanks for the assistance! I reinstalled but to no avail. My mongrel.8000.log is showing the following error, but I haven't been able to dig up a solution online. Thought you might have an idea. Thanks again!


/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.2/bin/../lib/mongrel/tcphack.rb:12:in `initialize_without_backlog': Cannot assign requested address - bind(2) (Errno::EADDRNOTAVAIL)
from /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.2/bin/../lib/mongrel/tcphack.rb:12:in `initialize'
...

Nick commented Fri Dec 21 14:55:49 UTC 2007:

Arthur,

Sounds like something else is running on that port perhaps. Check to see if you have any mongrel instances already running (ps -ef | grep mongrel) and if so kill them (kill PID). If none are running check your other configurations to see if anything is running on port 8000.

I found this online: http://www.ruby-forum.com/topic/134895.

Hopefully that will help.

Ryan Lowe commented Mon Dec 24 08:13:33 UTC 2007:

When I used the mongrel_rails cluster::configure line, it gave me an error message:

!!! Path to config file not valid: config/mongrel_cluster.yml

So I had to explicitly use the -C option, and for the given example it would be:

/home/demo/public_html/railsapp/config/mongrel_cluster.yml

Ryan Lowe commented Mon Dec 24 08:17:02 UTC 2007:

Let's try this again, Markdown is misbehaving:

/home/demo/public_html/railsapp/config/mongrel_cluster.yml

Arthur commented Wed Dec 26 05:29:59 UTC 2007:

Nick--Thanks again for your help. I've found that if I comment out the line address: 127.0.0.1 in the mongrel_cluster.yml file then I can start, stop and restart my mongrel cluster, and everything restarts when I reboot my server as well. Does this indicate that I have a problem with my Apache configuration or with something else? Thanks! --Arthur

behrang javaherian commented Thu Jan 17 13:36:13 UTC 2008:

Can't we use @reboot in a cron job to restart the mongrel cluster? Wouldn't that be easier?

PickledOnion commented Thu Jan 17 13:44:15 UTC 2008:

Hi,

You can use the @reboot to accomplish the same task.

However, there is a reason that it is not favoured over the longer method I describe.

But for the life of me, I can't remember the reason. Give it a go and when I remember or can track down the reference, I will update you.

PickledOnion

Seth commented Wed Mar 12 19:32:45 UTC 2008:

If you want to keep your mongrels from running as root on reboot, you can add a 'user:' and a 'group:' entry to your mongrel_cluster.yml file.
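
For example (sticking with the user and group created earlier in the article):

user: mongrel
group: www-data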

Also, if you're using attachment_fu with ImageScience, the mongrels will load after rebooting but you will get an error.

The solution that worked for me was to call the init script from rc.local instead of from rc2.d

Kyle commented Wed Mar 19 15:29:38 UTC 2008:

For anybody who missed an important detail (like me) after following the previous article (referenced at the top of this one), you must install mongrel_cluster, not just mongrel. I thought that I had just done this and was getting an error...

ERROR RUNNING 'cluster::configure': Plugin /cluster::configure does not exist in category /commands

...when trying to configure the cluster. In the previous article, I had only installed mongrel, not the cluster.

...So don't be dumb like me. (And if you were, I hope that this helps.)

Dimitry commented Tue Mar 25 00:03:22 UTC 2008:

Question,

When I'm in the root of my Rails app directory, I can do: --> mongrel_rails cluster::start

BUT, not: --> mongrel_cluster_ctl start

With the latter, I get: !!! Path to log file not valid: log/mongrel.8000.log

Clearly it gets confused about the path, but my cluster definition in /etc/mongrel_cluster/FILE.yml is a symbolic link to the one in my Rails app, so I can't edit it directly.

Ideas? Thanks!

Marty McGee commented Thu Mar 27 22:12:26 UTC 2008:

If you are getting an error "Path to config file not valid: config/mongrel_cluster.yml", make sure you are running your cluster::configure command from inside your Rails application root (cd /home/username/public_html/railsapp).

Marty McGee commented Sun Mar 30 04:01:10 UTC 2008:

Nick - thanks for your comment - I was able to finally pin down my "missing pid_file: tmp/pids/mongrel.8000.pid" error by running "ps -ef | grep mongrel" and killing all the mongrels currently running with "kill <pid>". Now I can start|restart|stop mongrel_cluster without a problem. Thanks.

Richard commented Thu Apr 17 14:58:42 UTC 2008:

Hi,

I may be stupid, but I don't see the point of having the whole cluster on the same server: if the server goes down, so will your app. Could someone explain the benefit of it? Also, is there any way to have the mongrel cluster running on 3 different servers and to set up Pound as a redundant load-balancer running on 2 servers for full redundancy (maybe with VRRP)?

thanks,

Richard

Will commented Mon Apr 21 08:09:45 UTC 2008:

Word to the wise: the mongrel --prefix option (for having separate apps on different url prefixes) does not work correctly with the version found in Gutsy. When I tried to use it, all the server would return was "NOT FOUND".

Perhaps it works well with the latest mongrel...

I instead set ActionController::AbstractRequest.relative_url_root in config/environment.rb

Brandon Zylstra commented Thu Apr 24 18:12:28 UTC 2008:

Richard, the point of the mongrel cluster is for performance, not redundancy. Each mongrel takes up so much memory, and if you've got the memory for (say) 4 mongrels, then you'll get the best performance with 4. If you've only got enough for 2, then go with 2.

If you want to cluster for redundancy/failover, you can do that too, and you'd obviously spread them across multiple machines in that case, but that's really a separate thing that this article doesn't cover.

Of course it gets a lot more complex then, because you'll need Apache (or nginx or whatever) to be redundant too, and your database...

Aditya Sanghi commented Wed May 28 11:04:41 UTC 2008:

Guys,

Can anyone explain why they don't have the --clean option by default in the /etc/init.d/mongrel_cluster file? If your mongrels crash and leave behind a pid file, a restart is not going to work because it will find a stale pid file. It is important to pass the --clean option to the mongrel_rails cluster::start command for it to handle the case when you might have stale pid files.

Any views on that?

Cheers, Aditya

Jesse commented Sun Jul 06 10:30:04 UTC 2008:

Why do I get this for mongrel_cluster_ctl status, and is it causing my overuse of memory? (Also, how do I fix it?)

found pid_file: tmp/pids/mongrel.8000.pid
found mongrel_rails: port 8000, pid 2639

found pid_file: tmp/pids/mongrel.8001.pid
found mongrel_rails: port 8001, pid 2642

found pid_file: tmp/pids/mongrel.8002.pid
found mongrel_rails: port 8002, pid 2645

mongrel_rails cluster::status -C railsapp.yml
found pid_file: tmp/pids/mongrel.8000.pid
found mongrel_rails: port 8000, pid 2639

found pid_file: tmp/pids/mongrel.8001.pid
found mongrel_rails: port 8001, pid 2642

found pid_file: tmp/pids/mongrel.8002.pid
found mongrel_rails: port 8002, pid 2645

Patrick Shields commented Wed Sep 17 04:48:33 UTC 2008:

When I first tried to check the status of the mongrel clusters, it said it couldn't find them, but I rebooted it and it worked. Maybe not the ideal solution but I would encourage it before ripping out your hair.

Temruk commented Sat Aug 15 13:26:19 UTC 2009:

Thanks man, very nice article. Saved me a lot of time!

nuks commented Thu Oct 08 10:06:09 UTC 2009:

Two words. Great article!

Marcelo commented Thu Nov 12 19:35:48 UTC 2009:

Great article!!! A question... can you point me to some other resources where I can find out how to tune mongrels + apache2 for a production environment? Currently we are performing stress testing on one rails site and we have really low marks for concurrent connections. Thanks a lot.

Alex Dean commented Thu Feb 18 10:26:10 UTC 2010:

I'm a bit confused as to why we should be restarting the Mongrel cluster using a command like:

mongrel_cluster_ctl restart

We've created /etc/init.d/mongrel_cluster so surely we should be using this like:

/etc/init.d/mongrel_cluster restart

If you look in /etc/init.d/mongrel_cluster then it's essentially a wrapper around mongrel_cluster_ctl, but nonetheless it has some refinements which a raw call to mongrel_cluster_ctl doesn't have...
