Proxying requests to a mongrel cluster is one of the ways of serving your Ruby on Rails web application with Debian Etch.
Let's look at creating a cluster and configuring it to survive and restart after a reboot.
This article stands on its own and is not strictly part of a series. However, the base setup for serving your Ruby on Rails application is discussed here.
Note this is for Debian Etch and may, or may not, work using other distributions.
Let's start by installing the mongrel cluster gem:
sudo gem install mongrel_cluster --include-dependencies
That's it for the install - if you already had the mongrel gem installed, this will be a quick process. If you are installing the mongrel_cluster gem from scratch, it may pull in several dependencies.
Before we configure the cluster, you will need to have a Ruby on Rails application. The basic structure created via the 'rails' command is more than sufficient:
cd ~
rails public_html/railsapp
Move into your rails application:
cd ~/public_html/railsapp
The command to create your application's cluster is as follows:
mongrel_rails cluster::configure -e production -p 8000 -N 2 -c /home/demo/public_html/railsapp -a 127.0.0.1
-e production: sets the environment. Change this to suit whether you are developing the application or serving the final product.
-p 8000 -N 2: sets the port to start the cluster on and then sets the number of mongrel instances. In this example I am following the vhost setup described in the Apache, rails and mongrels article. Set the port and number of mongrels to suit your application.
-c /home/demo/public_html/railsapp: sets the base directory of your rails application. It is important you use the full path to your rails folder.
-a 127.0.0.1: sets the address to bind to. For most applications the localhost port is sufficient.
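To make the port arithmetic concrete: with -p 8000 -N 2, the instances listen on consecutive ports starting from the one you set. A small Ruby sketch illustrates this (the mongrel_ports helper is my own, not part of the gem):

```ruby
# Hypothetical helper: given the starting port (-p) and the number of
# servers (-N), list the ports each mongrel instance will bind to.
def mongrel_ports(start_port, servers)
  (0...servers).map { |i| start_port + i }
end

puts mongrel_ports(8000, 2).inspect  # => [8000, 8001]
```

Your front-end proxy (Apache, nginx, etc.) needs to forward requests to each of those ports.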
As you can see, the separate options for the configure command are quite simple. To find out more enter:
mongrel_rails cluster::configure --help
So what did the command actually do? Well, it created a file called 'mongrel_cluster.yml' in the rails config directory. Let's take a look:
If you used the example above, the contents will be:
---
cwd: /home/demo/public_html/railsapp
log_file: log/mongrel.log
port: "8000"
environment: production
address: 127.0.0.1
pid_file: tmp/pids/mongrel.pid
servers: 2
As you can see, the settings come from the configure command. There are also two entries that we did not specifically define: the log location and the pid_file. Feel free to adjust these paths to ones of your choosing, but both defaults are usually just fine.
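Since the file is plain YAML, Ruby's standard library can read it. A quick sanity-check sketch, assuming the example config above (this is just for inspection and is not something mongrel_cluster requires):

```ruby
require 'yaml'

# Parse a copy of the generated mongrel_cluster.yml and print the
# values the cluster will use.
yml = <<-CONFIG
---
cwd: /home/demo/public_html/railsapp
log_file: log/mongrel.log
port: "8000"
environment: production
address: 127.0.0.1
pid_file: tmp/pids/mongrel.pid
servers: 2
CONFIG

config = YAML.load(yml)
puts config['servers']   # => 2
puts config['port']      # => "8000" (note: a string, not a number)
```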
Starting and Stopping
There are several ways of starting and stopping the mongrel_cluster.
Ensuring you are in the rails root folder, issue this command:
mongrel_rails cluster::start
You will receive a warning regarding the ruby version. Debian Etch has ruby v1.8.5 as the default install. It is a warning, not an error, so at this stage we can ignore it.
And to stop or restart the cluster:
mongrel_rails cluster::stop
mongrel_rails cluster::restart
That's all well and good but the cluster won't restart on a reboot. Not very handy.
You can read more about mongrel clusters on the main mongrel website but do be aware the instructions on the site do not work without all of the commands listed below.
We'll start by creating a file in the 'etc' folder:
sudo mkdir /etc/mongrel_cluster
We need to link the mongrel_cluster.yml (which we just created) to the folder:
sudo ln -s /home/demo/public_html/railsapp/config/mongrel_cluster.yml /etc/mongrel_cluster/railsapp.yml
You will have to do that for each and every mongrel_cluster you create (if you want them to start automatically). So if you have two rails applications, you will have two symlinks.
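The symlink-per-application scheme can be sketched in Ruby against a scratch directory standing in for /etc/mongrel_cluster ('shopapp' and the link_clusters helper are hypothetical, purely for illustration):

```ruby
require 'tmpdir'

# One <app>.yml symlink per cluster inside the config directory,
# each pointing at that application's mongrel_cluster.yml.
def link_clusters(etc_dir, apps)
  apps.each do |app|
    target = "/home/demo/public_html/#{app}/config/mongrel_cluster.yml"
    File.symlink(target, File.join(etc_dir, "#{app}.yml"))
  end
end

Dir.mktmpdir do |etc|
  link_clusters(etc, %w[railsapp shopapp])
  puts Dir.children(etc).sort.inspect  # => ["railsapp.yml", "shopapp.yml"]
end
```

The init script simply iterates over every .yml file it finds in /etc/mongrel_cluster, which is why one symlink per application is all that's needed.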
Next, copy the gem's init script to the init.d directory:
sudo cp /usr/lib/ruby/gems/1.8/gems/mongrel_cluster-1.0.2/resources/mongrel_cluster /etc/init.d/
Make it executable:
sudo chmod +x /etc/init.d/mongrel_cluster
Now we need to add the script to the runtime list:
sudo /usr/sbin/update-rc.d -f mongrel_cluster defaults
Finally, to ensure the script correctly initialises the cluster on a reboot, you must have this symlink in place:
sudo ln -s /usr/bin/ruby1.8 /usr/bin/ruby
The indicated symlink assumes you have followed the articles and installed ruby via 'aptitude'. If you installed from source, then you need to adjust the link accordingly.
You may notice that the init script tries to set the user and group of the cluster to 'mongrel' and throws this warning:
chown: `mongrel:mongrel': invalid user
That's fair enough: we haven't created a mongrel user or a mongrel group, and we have no need to.
You have a few choices here, the first of which is to ignore it (it does no harm).
The second is to open up the init script:
sudo nano /etc/init.d/mongrel_cluster
and comment out the chown commands as follows:
...
#USER=mongrel
...
#chown $USER:$USER $PID_DIR
...
If you want to get really jiggy, you can add your own user and group to the init script, but I will leave that to your imagination and skill...
Starting and Stopping v.2
You can also use the command 'mongrel_cluster_ctl' to start, stop and restart your clusters. What's the advantage of this method? Well, for a start you don't have to be in the rails directory to issue the command.
Let's use it to find the status of any clusters:
mongrel_cluster_ctl status
Checking all mongrel_clusters...
mongrel_rails cluster::status -C railsapp.yml
...
found pid_file: tmp/pids/mongrel.8000.pid
found mongrel_rails: port 8000, pid 3308
found pid_file: tmp/pids/mongrel.8001.pid
found mongrel_rails: port 8001, pid 3311
A nice summary of what is happening.
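If you ever want to script against that status output, the per-instance lines are regular enough to parse. A hedged sketch (the text is just the example output above; the regex is my own):

```ruby
# Count running mongrel instances by scanning status output for
# "port <n>, pid <n>" lines.
status = <<-OUT
found pid_file: tmp/pids/mongrel.8000.pid
found mongrel_rails: port 8000, pid 3308
found pid_file: tmp/pids/mongrel.8001.pid
found mongrel_rails: port 8001, pid 3311
OUT

running = status.scan(/port (\d+), pid (\d+)/)
puts running.length  # => 2
```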
And to start/stop/restart the cluster(s):
mongrel_cluster_ctl start
mongrel_cluster_ctl stop
mongrel_cluster_ctl restart
Quite a lot going on in this article: we covered most areas needed to create, configure, start, stop and restart a mongrel_cluster, along with creating an init script to restart the cluster(s) on a reboot.
Once you have gone through the commands a couple of times you will see how easy it is to set up and control your rails applications.