A collection of computer systems and programming tips that you may find useful.
Brought to you by Craic Computing LLC, a bioinformatics consulting company.

Friday, December 21, 2007

Multihomed Ethernet Interfaces on Mac OS X Server 10.4

I had a Mac OS X 10.4 server with two ethernet interfaces configured so that eth0 connected to my internal network and eth1 connected directly to the Internet.

I could access either interface just fine from my internal network. But unbeknownst to
me, the outside world was not able to access the external interface.

Checking the firewall or even turning the firewall off completely had no effect. The output of ifconfig looked like it should, the cables were plugged in where they ought to be. You just couldn't get to the external interface from the Internet...

It turns out that Mac OS X expects eth0 to connect to the Internet and I had it connected to my internal network. The server could see the Internet via that interface. Because it passed through my restrictive firewall, the outside world could not see it but from my subnet I could see it just fine... a nasty little gotcha.

I swapped the network configuration parameters so that eth0 was the external interface and eth1 was internal, and the problem was solved.

Look at the Network control panel under 'Network Status'. Both interfaces will be active in a configuration like mine but only one will state 'You are connected to the Internet via...'. That needs to be the external interface.

There must be some way to configure which interface connects to the Internet but it is easier to follow their convention and not worry about it.

Tuesday, December 18, 2007

Ortho - JavaScript Graphics Library

I've written a JavaScript library called Ortho (http://www.craic.com/ortho) on top of Prototype for creating 'diagram-style' graphics in JavaScript. You can create histograms, graphs, timeline plots, 'maps' of genomic data, annotated images, tree diagrams, etc.

Unlike Canvas, it seamlessly integrates text with graphics, and the output looks the same across browsers and in *print*. Unlike Flash, it does not require third-party software.

It uses associated CSS styles to draw rectangles (divs with a border) and horizontal or vertical lines (divs with a border on one side). A bit of a hack? You might say that, but it turns out to be very effective for the sort of graphics that I need to create on the fly.

It cannot draw curved lines or arbitrary shapes - hence the name 'ortho' for orthogonal. But for a range of applications it may offer a simple solution for creating sophisticated graphics.

It is built on top of the wonderful Prototype library. As a result it is very amenable to being extended with Prototype and Scriptaculous.

Ortho is released under an MIT-style license.

The initial release only covers 'static' graphics but functions for user interaction and Ajax are under development.

The Ortho project site (http://www.craic.com/ortho) has a number of examples that show you what you can do with the library.

Friday, November 30, 2007

Running a Rails Application in a Subdirectory

Part 1: Running a Rails Application in a Subdirectory

I'm running a Rails application and a separate static web site on the same host.

I want to refer to the static content with a url like http://myserver/public/index.html etc.

I want to refer to the Rails app with a url like http://myserver/db

By default Rails expects an application to reside at the top level of a site, i.e. http://myserver, so I need a way to change this default path. There are two simple steps involved:

1: In config/environment.rb add a line like this at the bottom of the file:

ActionController::AbstractRequest.relative_url_root = "/db"

...where '/db' is the subdirectory where you want your application to appear

This will add that prefix to every URL generated in your Rails app pages and will cause the Rails dispatcher to respond to that path.

2: Adding that prefix will cause your links to stylesheets and javascripts to break. Fix that by adding a symbolic link in your Rails app /public directory

% ln -s . db

... where 'db' is the prefix with no leading slash

Restart your server and try it out.
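The two steps above can be sketched against a throwaway directory tree. The 'railsapp' path and the '/db' prefix here are illustrative; in a real application you would run the equivalent commands from your application root.

```shell
# Sketch of the two steps against a scratch directory tree.
# 'railsapp' and the '/db' prefix are illustrative.
mkdir -p railsapp/config railsapp/public
echo 'ActionController::AbstractRequest.relative_url_root = "/db"' \
    >> railsapp/config/environment.rb
# Symbolic link named after the prefix (no leading slash), pointing at '.'
(cd railsapp/public && ln -s . db)
ls -l railsapp/public/db
```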

Part 2: Serving Static Content from Apache and your Rails app from Mongrel

Currently (late 2007) the preferred Web server solution for Rails applications is to use Apache2 as the front end and Mongrel as the Rails application server. All requests come into Apache. Those for a static web site are handled directly, as Apache is great at that job. Requests for the Rails app are proxied to an instance of the Mongrel web server running on a different port. You can find details of this configuration in the Mongrel documentation.

These are the steps needed to set up this configuration.

1: Set up the Rails app under a subdirectory as shown in Part 1.

2: Run Mongrel on a port other than 80 as described in the Apache link above. For example, you might run it on port 8000 using this command in your app directory:

% mongrel_rails start -d -p 8000 -e production -P /my/path/railsapp/log/mongrel.pid

3: Configure Apache2 to proxy Rails requests to Mongrel using a VirtualHost block similar to this:

<VirtualHost *:80>
    ServerName yourserver.com

    ProxyPass /db http://localhost:8000/db
    ProxyPassReverse /db http://localhost:8000/db
    ProxyPreserveHost on

    DocumentRoot /yourpath/static/html
    <Directory "/yourpath/static/html">
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>


The VirtualHost block sets up a host on the standard port 80.

The ProxyPass directives tell Apache to pass any requests with URLs that start with '/db' over to the web server running on the same machine (localhost) at port 8000. It also modifies the responses from that server so that they appear to come from Apache instead of Mongrel. So from the user's perspective the Mongrel server is invisible.

Any other requests are handled by Apache directly and will fetch static content from the directory '/yourpath/static/html', which is where you would put your static web site.

For this to work, your Apache server must load the mod_proxy module, and probably some others. See the Mongrel/Apache link given above for details on that.

Look at the various Apache Proxy options if you need finer control over what gets proxied and what gets served directly. You can set up multiple different Rails apps on the same host in this way and/or multiple Mongrel instances that can serve an application under high load. Again, see the Mongrel/Apache link for details.

Tuesday, July 31, 2007

Installing Instiki on Mac OS X

Instiki is a Wiki clone written in Ruby on Rails. In the spectrum of wiki software it is relatively simple, with a correspondingly clean interface (unlike the default MediaWiki pages). It is not perfect, but if you need to get a wiki up and running quickly it may be just what you need.

The instiki pages suggest that installation is trivial, but that is a little misleading. Here are the steps needed to get it up and running on a Mac OS X 10.4 system.

It assumes that you have Ruby and MySQL already installed and that you have at least a passing familiarity with Rails applications.

1: Download the instiki package from: http://rubyforge.org/frs/?group_id=186&release_id=10014
The version current at the time of writing is instiki-0.11.pl1.tgz

2: Unpack the .tgz file
Use the default 'Archive Utility' or use tar on the command line. Be aware of the weird gotcha that I encountered with Stuffit Expander, described in a post below.

3: Create the database in MySQL
The database name is not critical.
# mysqladmin -u root -p create instiki_production

4: Edit config/database.yml in the instiki directory
Replace the existing 'production' database definition with this:
production:
  adapter: mysql
  database: instiki_production
  username: root
  password: xxxxx
  host: localhost
  # socket: /path/to/your/mysql.sock

Add your own password, and don't worry about the socket line unless you are on Linux.

5: Install the database tables
This is a Rails command that creates all the right tables and sets up some other configuration settings:
# rake environment RAILS_ENV=production migrate

6: Start the Instiki Server

# ./instiki

This will fire up the WEBrick web server built into Rails and bind Instiki to port 2500 on the local machine. Point your browser (on that machine) to http://localhost:2500/

You should see a setup web page.

7: Configure the Wiki

In the setup page, enter the name of the wiki and the address (by which they mean the subdirectory on this site that will form the root of the wiki), and enter an administrator password.

It will return a home page with a text area into which you can enter the content of the page. Look at the hints on that page and on the Instiki project site for help on the formatting shortcuts.

You will notice the absence of any button or link that adds a new page. The way you do this in Instiki is to enter the name for that page in the parent page and surround it in double square brackets, like this:
[[Another Page]]
When you submit this page, you will see the text 'Another Page' with
a question mark next to it. Click on that to add content to the new page. That can be a little confusing until you get used to it.

With the Home Page in place, do some other setup by clicking 'Edit Web' on that page. Here you can fine tune some of the default styling, you can setup password protection for the entire site and, importantly, you can configure the site to publish a read-only version of itself in parallel to the one that you are editing. This feature can be very useful if you want to block changes to a public site or if you want to use Instiki as a simple Content Management System for a static web site.

If you opt for publishing the read-only version you can then access the two versions from similar but distinct URLs.

The URL for the regular home page looks like this (for a wiki called 'mysite')

The published (read-only) version can be found at http://localhost:2500/mysite/published/HomePage

Note you will get Rails errors if you try to access

That had me confused for quite a while when I was testing if my installation was working.

8: Make your instiki available on the Internet

The default configuration for instiki runs on localhost at port 2500. If you want to run it as the sole public web server on your host you could change those two parameters in the installation (file script/server) or when you start instiki.
# ./instiki -b -p 80

But a more likely scenario is that you want to add the wiki to an existing web site. Running instiki directly on a web server other than Webrick is non-trivial so your best bet is to configure your regular web server to proxy wiki requests that it receives to the instiki server.

This is fairly straightforward, but it does involve some server configuration directives.

In this example, the main web site is called 'mysite.com' and the wiki name is 'mysite'. I want the wiki to be accessed by URLs like http://mysite.com/subdir/HomePage

I modified the following from this page at instiki.org

If your main web server is Apache then you want a configuration something like this, using the ProxyPass directives:
<VirtualHost *:80>
    ServerName mysite.com
    ServerAlias www.mysite.com
    ProxyPass /subdir/ http://localhost:2500/mysite/published/
    ProxyPassReverse /subdir/ http://localhost:2500/mysite/published/
</VirtualHost>

If you are using lighttpd as your server, which is common in the Rails world, then things are a little cryptic and look like this:

$HTTP["host"] =~ "(^|\.)mysite\.com$" {
    server.document-root = basedir + "/web/mysite.com/html/"

    # Rewrite the URL *before* entering the HTTP["url"] block
    url.rewrite-once = ( "^/subdir/(.*)$" => "/mysite/published/$1" )

    # pass any /mysite/published/ URLs to instiki on port 2500
    $HTTP["url"] =~ "^/mysite/published/\S+" {
        proxy.server = ( "" => (
            ( "host" => "127.0.0.1", "port" => 2500 )
        ) )
    }
}

What this does is rewrite any incoming URL that includes /subdir so that it points to /mysite/published, and then pass those requests on to the instiki web server.

Note that you must have mod_proxy in your module list for this to work.
server.modules = ("mod_rewrite", "mod_fastcgi", "mod_accesslog", "mod_proxy")
Also note that you need to specify the proxy host as an IP address, not a hostname.

Restart the server and then http://mysite.com/subdir/HomePage will be passed to the instiki server as http://localhost:2500/mysite/published/HomePage
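You can check the effect of the rewrite pattern on a sample URL with sed, which accepts the same regular expression (sed writes the captured group as \1 where lighttpd uses $1):

```shell
# Check the rewrite pattern against a sample URL with sed
# (same regex as the url.rewrite-once line; \1 instead of $1).
echo "/subdir/HomePage" | sed -E 's|^/subdir/(.*)$|/mysite/published/\1|'
# -> /mysite/published/HomePage
```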

Be careful how you set up the regular expressions. Make sure that you can't mess with the URL and get the editable pages by mistake (unless you want to allow access to those). Also be aware that instiki will give cryptic Rails error dumps if you enter invalid or truncated URLs and that may confuse your users.

9: Set up instiki to run automatically
On Mac OS X you need to create a file under /Library/LaunchDaemons in Apple's plist format.
This should look something like this (here the instiki startup script is located at /Users/mysite/instiki/instiki).

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>net.instiki</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/mysite/instiki/instiki</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>

Call your file net.instiki.plist and then reboot your machine. Assuming your web server starts up in a similar fashion, you should be able to go to your main site URL and then to the wiki link, whereupon you'll see the published version of the wiki.

10: Outstanding issues
From this point you are on your own in terms of creating your content, pages and styles. Refer to the instiki.org site for help with that.

Currently the WEBrick web server is hardwired into the instiki code. This works fine but under heavy load this would not be acceptable. Being able to replace it with lighttpd or mongrel, like you do with regular Rails applications, would solve the problem but for now you'd have to hack instiki to get this.

Thanks and acknowledgements

Instiki was created by David Heinemeier Hansson and further developed by Alexey Verkhovsky, Matthias Tarasiewicz and Michal Wlodkowski. I thank them all for their work!

Monday, July 30, 2007

Serious Gotcha with Mac OS X / Stuffit Expander / Instiki .tgz file

After a lot of screwing around I've figured out the reason why my installation of the Instiki software was failing on my installation of Mac OS X.

Hopefully this is some very esoteric combination of factors but I want to put the story out there in case it helps someone else.

I want to install the Wiki software 'instiki' on my Mac (OS X 10.4.9). I downloaded version instiki-0.11.pl1.tgz from RubyForge.

I downloaded it using the Camino web browser v1.5

I copied it from the downloads folder to the target folder in the Finder.
I double-clicked the .tgz file to unpack the archive. Normally the Mac OS X Archive Utility takes care of that. In this case Stuffit Expander popped up and did the job (version 8.0.2).

To cut a long story short, for some reason Stuffit Expander made two copies of certain files (not all of them). For example app/models/web.rb appeared as web.rb and web.1.rb. The real problem was that web.rb was empty (0 bytes) whereas the real content was in web.1.rb. Instiki doesn't know anything about the .1.rb files and only sees the empty versions when you fire it up. Not surprisingly it craps out with a whole slew of odd messages.
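A quick way to spot this kind of damage is to look for zero-byte .rb files. This sketch reproduces the symptom on scratch files and then detects them (the 'scratch' path is illustrative):

```shell
# Reproduce the symptom on scratch files, then detect the empty copies.
mkdir -p scratch/app/models
: > scratch/app/models/web.rb                        # empty copy, as Stuffit left it
echo 'class Web; end' > scratch/app/models/web.1.rb  # the real content
find scratch -name '*.rb' -size 0                    # lists the damaged files
# -> scratch/app/models/web.rb
```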

The solution for me was to either unpack the archive manually
# tar xzvf instiki-0.11.pl1.tgz
or to remove the Stuffit application. Once you've done that then the default Archive Utility should handle the unpacking and the problem will go away.

Why Stuffit should do this I don't know... very, very strange behaviour and a real pain to troubleshoot...

Tuesday, July 24, 2007

Ignoring Files and Directories in Subversion (SVN)

Subversion allows you to exclude specified directories and files from revision control, such as temporary files or logs that would quickly become a real pain if you had to commit and update them like regular source files.

There are various ways to do this with global settings for file types or on a per directory basis.
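As an example of the global approach, Subversion reads a global-ignores option from the [miscellany] section of ~/.subversion/config. A fragment like this (the patterns are illustrative) makes svn skip matching *unversioned* files everywhere:

```
[miscellany]
global-ignores = *.o *.lo *.rej *.gz *~ #*# .#*
```

Note that, like svn:ignore, this has no effect on files that are already under revision control.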

I have run into a problem several times, checking in projects that have been started outside of SVN. If I import the entire directory tree into SVN, then check out a copy, all the directories are already under revision control and I need a way to remove them from it. There are 3 ways to handle this.

1: Delete the specific directories before you import the project into SVN.
This might be fine for tmp or log directories where all the contents are 'disposable' but this may not work in other situations like Rails doc directories where you have added a custom Readme file, for example.
When you check out the project, create those directories by hand and then use svn:ignore to tell svn to ignore them.

# mkdir tmp
# svn propset svn:ignore 'tmp' .
# svn commit -m 'Ignore tmp directory'

2: Use svn propset to ignore the contents of particular directories.
To ignore the contents of a tmp directory, but not the directory itself, you could do this in a checked out copy of the project.

# svn propset svn:ignore '*' tmp
# svn commit -m 'Setting svn:ignore on contents of tmp'
# svn update

Now when you modify a file in tmp and then run svn status you should see no changes.
You can also use this to ignore certain types of files, such as gzipped files in the current directory:

# svn propset svn:ignore '*.gz' .

Follow this with a commit and update.
But note that this will NOT ignore any .gz files that existed prior to issuing the svn:ignore as these were already under revision control. You can use svn rm to get rid of these from the repository.

3. Remove the directory from the repository then recreate with the svn:ignore property.

# svn export tmp tmp1
# svn rm tmp
# svn commit -m 'remove tmp directory from repository'
# mv tmp1 tmp
# svn propset svn:ignore 'tmp' .
# svn commit -m 'ignore local directory tmp'

More information can be found in Chapter 7 of the SVN book

Wednesday, June 20, 2007

Apache 2.2 - Where are the modules?

The Apache httpd server has played a critical role in most of my work over the past decade but these days I'm finding more and more reasons to look at alternatives. Its configuration syntax is not pretty, the distribution is bloated, and its performance with some applications (especially Rails) is almost unworkable. Here's another reason to grumble...

I recently compiled Apache 2.2.4 from source. I've done that several times in the past and just did the typical steps of:
# ./configure --prefix=/usr/local/apache2
# make
# make install

That works great and I can fire up the server just fine - until I want to serve an application that involves URL rewriting. I get this error:

[Wed Jun 20 17:00:07 2007] [alert] [client] /Users/jones/instiki/public/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration

That tells me that the mod_rewrite module is not being loaded in the httpd.conf file. So I go into that file and search for the LoadModule section. In the past there would be a bunch of LoadModule statements, many of which were commented out. But in 2.2.4 there is only one. This is actually a good thing as it makes the httpd.conf file smaller and easier to look through.

I figured I just needed to add a line that loaded mod_rewrite. I checked my apache2/modules directory to get the correct file name - but there are no modules in there. I checked to see if it was compiled into the executable using this:

# /usr/local/apache2/bin/httpd -l

No, it's not in the list. So where are all the modules? Am I supposed to download them separately from apache.org? Looking on that site doesn't tell me what to do. What's the deal?

Turns out that you have to specify the modules you want at compile time. This is a really, really bad move on Apache's part. I'm sure it is done with the best of intentions but what a pain. In order to get my module I have to go and recompile the whole damn thing... great...

*WAIT* If you have already modified your current apache config files, htdocs directory, etc. copy them somewhere safe before you recompile and install over them!

Here's what I need to do to get mod_rewrite and mod_env compiled so that I can load them dynamically.

# ./configure --prefix=/usr/local/apache2 --enable-rewrite=shared --enable-env=shared

To see the full list of options (215 lines of them) do this:

# ./configure --help

You can get 'all' or 'most' (whatever that means) of the modules using these variants

# ./configure --prefix=/usr/local/apache2 --enable-mods-shared=all
# ./configure --prefix=/usr/local/apache2 --enable-mods-shared=most

I chose the latter when I recompiled and that gave the two that I need plus hopefully any others that I might need in the future. So my specific steps were:

# ./configure --prefix=/usr/local/apache2 --enable-mods-shared=most
# make
# make install

Then I went into the httpd.conf file, put back a couple of changes that I had made previously and went looking for the LoadModule lines. Well, I guess I got what I asked for... I asked it to compile most of the modules. It did that and it has set up the config to load all 43 of them!

All I want are two modules, for Pete's sake... so I reckon I want to delete the others, right? WRONG! Without the new configure directive, a bunch of the standard modules are compiled in the server. With the new directive, they are loaded dynamically. So if you delete these LoadModule lines then a load of other stuff will break.

With all this done, I can now get back to where I started with a functioning Apache installation. I hope this note helps you avoid the couple of hours of frustration that I just enjoyed.

Apache is a great piece of work, don't get me wrong - but nonsense like this is really annoying.

If you are in the market for an alternative you might want to consider Nginx (http://nginx.net/), a relatively unknown http server that is getting a lot of good press in the Rails community.

Tuesday, June 19, 2007

Adding Hard Drives to a Linux System using fdisk

In recent releases of Fedora Linux (e.g. Fedora 6 and 7), the default filesystem configuration uses Logical Volumes, which are a flexible way to manage the hard drives on a machine. But this flexibility can come at a cost if and when the drives go bad.

The old school way of setting up drives is to use the fdisk command to set up specific, fixed partitions on each drive. This requires a bit more planning and a little more execution, but it has the advantage that if a machine fails for some reason, you can still move the drives to a new machine and mount them directly.

I wanted to do that on a system where I already had 2 x 500GB drives set up as a logical volume, and then wanted to add 2 more drives to use as live backup disks for other servers. I wanted the backup disks to be as independent as possible, so adding them to the logical volume was not a good idea. Happily, you can mix logical volumes with disks that use traditional partitions with no problems.

Here are the steps needed to add two drives to the Fedora 7 system. The same information is available in a load of other places. I'm including it here largely for my own benefit next time I need to do this. All steps assume you are logged in as root.

1: See what physical disks you have on your system

# /sbin/fdisk -l

You'll get a load of output - look for lines that delimit each of the physical disks, such as:
Disk /dev/sda: 500.1 GB, 500107862016 bytes

Below these you will see the partitions on each drive. Note the distinction between a disk (/dev/sda) and a partition on that disk (/dev/sda1, /dev/sda2...)

On my system the disks are SATA drives and these appear under these names /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd.

/dev/sda and /dev/sdb are used in the original Logical Volume that I set up. I'm going to set up /dev/sdc and /dev/sdd as two separate disks, each with a single partition.

2: Use fdisk to create a single partition on each disk

# /sbin/fdisk /dev/sdc
Then in response to the various prompts give these answers:
n (create a new partition)
p (make it a primary partition)
1 (call this partition #1)
hit enter to start at the first sector
hit enter again to end at the last sector (in other words make the partition fill the entire drive)
w (write the partition to the disk and quit the program)

I repeated this for disk /dev/sdd
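The interactive answers above can be collected and piped into fdisk. This sketch just prints the answer sequence; the actual fdisk invocation is left as a comment because it repartitions the disk:

```shell
# The answer sequence from the steps above, one per line
# (the two blank lines accept the default first/last sectors).
ANSWERS='n
p
1


w'
# To apply them (DESTRUCTIVE - repartitions /dev/sdc), you would run:
#   printf '%s\n' "$ANSWERS" | /sbin/fdisk /dev/sdc
printf '%s\n' "$ANSWERS"
```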

3: Create a filesystem on each drive

The standard Linux filesystem type at the moment is ext3, although alternate filesystems can be used. Each filesystem gets a label, which is typically the same as the mount point where you will access the drive. On my system I called the two filesystems /backup0 and /backup1. You create each filesystem with the appropriate variant of the mkfs command. Note that you now refer to the specific partition (/dev/sdc1) and not the disk itself (/dev/sdc).

# /sbin/mkfs.ext3 -L /backup0 /dev/sdc1
# /sbin/mkfs.ext3 -L /backup1 /dev/sdd1

Each command will produce quite a bit of output and may take some time as it creates blocks on the actual drives.
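If you want to experiment with mkfs without touching a real disk, you can build a filesystem inside an ordinary file. The -F flag forces mkfs to accept a non-device file; the file name and the 16 MB size here are illustrative:

```shell
# Create a 16 MB scratch image and put an ext3 filesystem on it.
dd if=/dev/zero of=scratch.img bs=1M count=16 2>/dev/null
/sbin/mkfs.ext3 -q -F -L /backup0 scratch.img
```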

4: Create the mount points for each filesystem

A mount point is the path/directory where the filesystem can be accessed under Linux. In my case the commands were:

# mkdir /backup0
# mkdir /backup1

5: Specify these mount points in the filesystem table file
Make a copy of the existing file - just in case
# cp /etc/fstab /etc/fstab.bak

Edit the current file (/etc/fstab) and add a line for each of the new filesystems.

LABEL=/backup0 /backup0 ext3 defaults 1 2
LABEL=/backup1 /backup1 ext3 defaults 1 2

The label is an alias for the actual partition. The next field is the mount point (same name as the label in this case). The next field is the type of filesystem (ext3) and the final 3 fields are just default options that you should include.

6: Reboot the system

# /sbin/shutdown -r now

You can re-mount the filesystems with 'mount -a' but a full reboot is a safe bet.

Check that the filesystems are mounted using 'df', 'cd' to the mount points and try writing to the disks. You will want to set the directory permissions to something appropriate using chmod and chown.

'Modern' ATX Motherboards have TWO Power Connectors

I just put together a new machine with an ASUS P5B Deluxe motherboard, a Core 2 Duo CPU and a load of other good stuff. I built a similar system with the same motherboard a few months back and had no problems at all on the hardware end. I must have actually read the manual that time.

With this system I put everything together, double checked it and powered it on. The fans worked, the red and blue LEDs on the motherboard lit up as expected but there was no video output and no beep from the system... dead to the world.

After changing the video card, unplugging the drives, and swapping out the memory I was still getting nothing. The fact that there was not even a beep code from the motherboard had me thinking it was a bad board or a bad CPU. I needed a break so I took the dog for a walk...

Refreshed, I took another look at the system, checked the cables and... oh... OK...

Modern ATX form factor boards like this Asus P5B have TWO power connectors - the 'regular' 24 pin EATXPWR that has been around for a while, plus a 2x4 pin EATX12V connector.

If, like me, you forget about that one then the system will not boot... says so right there in the manual... in small print, on page 33...

On my board the socket for this is in the 'top left' corner, next to the keyboard and other rear-panel sockets. You can use either a 4 pin plug or an 8 pin plug of the correct type. The board comes with a small plastic cover over 4 of the socket holes, so you need to remove this if you want to use the 8 pin plug.

So, a simple mistake to make... but one that took me well over an hour to figure out, dang it.

Friday, June 15, 2007

Installing Linux Fedora 7 over a Network

I wanted to upgrade a Linux Fedora 6 machine to Fedora 7, but I didn't want to download and burn a bunch of CDs and then feed these in and out during the upgrade. The machine does not have a DVD drive.

So instead I downloaded the DVD .iso version of Fedora (a single 3.2 GB file) and made that available from another server via NFS.

In order to perform the upgrade, I needed to boot the machine from a minimal Fedora installation so that it could fetch the full install via NFS.

The two simple options for doing this are to use a boot CD or to boot the system off a USB memory stick.

** I'm having problems booting from a USB stick in Fedora 7 so I'll leave that section out until I have things figured out**

Booting off a Minimal Linux CD

1. From the Fedora distribution site, download the minimal boot disk .iso file and burn it onto a CD. This file is called boot.iso and is about 8 MB.

2. Boot the target machine from this CD and follow the initial setup steps. In my case I wanted to perform an upgrade and I wanted to get the distribution via NFS. You can also use FTP or HTTP, which might be easier for some.

3. Using NFS, you will be prompted for the Ethernet interface that will be used, if you have more than one. Then you will be prompted to enter the network address of the host that contains the new distribution file and the path to .iso file.

Note that the installer is expecting to find the DVD .iso file at that location and not some unpacked version of Fedora. I would guess it will work if you have the set of CD .iso files but I've not tried that.

4. Assuming it can access the remote directory, the installer will get under way. It will first check dependencies between the new and old Linux packages and then you will see the message 'Preparing transaction from installation source'. This is the stage where the remote .iso file is copied over to the local machine and mounted. If this succeeds then you should be home and dry.

Copying a Disk Image to a USB Memory Stick on Mac OS X

These are the steps needed to copy a disk image (.img file) directly to a USB Flash Memory Stick.

For example, I wanted to copy a Linux boot image file to a memory stick so that I could boot a Linux server directly from the stick.

What you don't want to do is have Mac OS X mount the stick and just drag the image to the stick in the Finder. This will copy the file to the filesystem that is already on the stick.

You want to replace that filesystem with the one contained in the .img file. Note that this is a complete replacement: any existing data on the stick will be erased.

1. Insert the memory stick in the Mac

2. Open the Disk Utility application (in Applications/Utilities)
You should see the name of the USB stick appear in the left panel in the application, below your system disk.

3. Click the memory stick name, then click the 'Restore' tab at the top of the right hand panel.

4. Drag and drop the .img file to be copied into the Source box in Disk Utility.

5. Drag the memory stick icon from the left panel into the Destination box.

6. Click the Restore button that has now become active and click OK in the confirmation window which then appears.

7. Enter your password in the authentication window that pops up.

8. The copy is complete when the Source and Destination boxes in Disk Utility go blank.

9. Click on the memory stick icon in the Finder and you should see the contents have been replaced with those of the disk image.

Those familiar with the ever useful UNIX command dd might expect to use this to copy the image file on the command line. The problem here is that Mac OS X will automatically mount the memory stick into the Finder such that a command like 'sudo dd if=/path/to/my.img of=/dev/disk3' will fail with a 'Resource Busy' error.

Setting up NFS mounts in Linux (Fedora 6)

This is a short guide to mounting a directory from one linux system on another on the same internal network using NFS. This was tested on a Linux Fedora system but should be applicable to other unix variants.

In this example I am setting up a NFS mounted directory to use for backing up files on remote systems. All commands should be run as root or via sudo.

On the Server
(Fedora Core 6 with NFS4 installed)

1. Turn off any firewall and SELinux software (or configure it to allow NFS traffic). Use the system-config-securitylevel GUI tool. If you follow all the steps here but you are unable to mount the remote directory then go back and check your security settings.

# system-config-securitylevel

Another place to check is /etc/hosts.allow. Adding this line will open up all services on this server to the specified network.
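A typical hosts.allow entry of that kind - assuming, for illustration, an internal network of 192.168.1.0 with netmask 255.255.255.0; substitute your own - looks like this:

```
ALL: 192.168.1.0/255.255.255.0
```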


2. Configure the NFS4 ID-to-name mapping daemon. This is not used in versions of NFS before NFS4; it is used on Fedora 6 and above.

Edit the configuration file /etc/idmapd.conf and modify the Domain and Nobody-User/-Group lines thus:

Domain = your-internal-domain.com
Nobody-User = nfsnobody
Nobody-Group = nfsnobody

3. Make sure the portmap, nfslock and nfs services are running. On my system the first two were running by default.

# /sbin/service portmap start
# /sbin/service nfslock start
# /sbin/service nfs start

4. Make sure they will be started whenever the system is rebooted

# /sbin/chkconfig --level 345 portmap on
# /sbin/chkconfig --level 345 nfslock on
# /sbin/chkconfig --level 345 nfs on

5. Create an exports directory for each directory you want to share. This is not the actual directory that contains your data but an indirect link that makes it easier to move the real directory without having to update all your clients. It's a bit like a symbolic link. The name doesn't really matter but something like exports or nfsexports is a good idea.

# mkdir /exports
# mkdir /exports/backups

Then change the directory permissions to suit your needs. For example:

# chmod -R a+w /exports

6. Bind the exports directories to the real directories by editing /etc/fstab and adding a line like this, where /proj/backups is the 'real' directory and /exports/backups is the linked directory that will actually be shared by NFS.

/proj/backups /exports/backups none bind 0 0

7. Pick up this change on your system by remounting all filesystems

# mount -a

8. Tell NFS which filesystems can be exported by editing /etc/exports and adding the export lines. Look at man exports to learn about all the options. In this example the numbers represent the network and netmask of my internal network; they define the range of IP addresses that will be able to access the shared directory. Modify this to restrict access as needed. Of the other options, rw is the most important, signifying that the client will have read/write access to the directory.

Note that all of this should be on TWO lines!
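For illustration, a matching pair of /etc/exports entries - assuming an internal network of 192.168.1.0 with netmask 255.255.255.0, and the /exports directories created above - might look like this:

```
/exports          192.168.1.0/255.255.255.0(rw,sync)
/exports/backups  192.168.1.0/255.255.255.0(rw,sync)
```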


9. Pick up this change with this command

# /usr/sbin/exportfs -rv

10. Reboot the system and check that the directories are being exported

# /usr/sbin/showmount -e
Export list for server.int.craic.com

On the Client
(In this case the client was a Fedora 4 system that did not have NFS version 4 installed)

1. If the client system has NFS 4 installed, repeat step 2 of the server configuration to set up the /etc/idmapd.conf file.

2. Create the mount points on the client system. In other words, create the directories that will be linked to the remote shared directories by NFS. In my case this was:

[client] # mkdir /mnt/server_backups

3. Add the shared directories to /etc/fstab on the client. The line gives the hostname of the remote server and the exported directory, separated by a colon. Note that we use the name of the linked/bound directory that we set up, not the real directory name. That way, if we move that directory we only need to change the settings on the server. The second term on the line defines the mount point on the client machine (this machine). 'man fstab' will show what the options mean in the 'nfs' section. The most important of these are 'rw', which grants read/write access, and 'hard', which sets up a hard mount - more robust than a soft mount.

Note that all of this should be on ONE line!

server.int.craic.com:/exports/backups /mnt/server_backups nfs rw,hard,rsize=8192,wsize=8192,timeo=14,intr 0 0

4. Remount the filesystems on this client

[client] # mount -a

If this works then you should not see any messages. Check that the remote directory has been mounted by running 'df' and looking for the appropriate line. Then cd to the local mount point, list the files, etc.

All this information is widely available on the web in various forms. I used an article on fedorasolved.org by Renich Bon Ciric while I was figuring it all out, although that article does contain a couple of errors.

Thursday, May 24, 2007

Installing Rubygems on Linux Fedora

If you are using Ruby then you definitely need to have rubygems (the gem command) installed.

I ran into a problem doing this on a Fedora Core 4 system on Amazon Web Services EC2. I had installed Ruby 1.8.6, downloaded rubygems-0.9.4 and ran 'ruby setup.rb'. I got an error like this:
  LoadError: No such file to load -- zlib

The problem is that no zlib (compression) library is installed in the underlying Linux system.
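A quick way to check whether your Ruby was built with zlib support is this one-liner sketch - if the require raises a LoadError, the extension is missing:

```ruby
# If this raises LoadError, Ruby was built without the zlib extension
require 'zlib'
puts Zlib.zlib_version   # prints the version of the linked zlib library
```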

To fix the problem:

1: As root,
# yum install zlib-devel
2: Recompile ruby
# cd ruby-1.8.6
# ./configure
# make
# make install

3: Install rubygems
# cd ../rubygems-0.9.4
# ruby setup.rb

Tuesday, April 10, 2007

Setting up a Subversion Repository

Subversion (svn) is a leading software version control system with a myriad of features. It is heavily documented; however, the basic steps involved in setting it up and accessing it are somewhat obscured by descriptions of the many configuration choices.

Here is a concise description of setting up an svn repository on a Linux server (which I'll call remote) and accessing it from a client computer (local).

1: Creating a Repository

Download the software from http://subversion.tigris.org/ or, on a Fedora system :

[remote ~]# yum install subversion

Create a directory for the repository, with whatever name you choose

[remote ~]$ mkdir /proj/svn_repository

Create the svn repository in that directory

[remote ~]$ svnadmin create /proj/svn_repository

Create subdirectories for each project using svn (not regular mkdir!).
There are many ways to do this but a common approach is to have a svn directory for each project and three subdirectories called trunk, branches and tags - with trunk containing the primary version of your code.

[remote ~]$ svn mkdir file:///proj/svn_repository/project1 -m 'New directory'
[remote ~]$ svn mkdir file:///proj/svn_repository/project1/trunk -m 'New directory'
[remote ~]$ svn mkdir file:///proj/svn_repository/project1/branches -m 'New directory'
[remote ~]$ svn mkdir file:///proj/svn_repository/project1/tags -m 'New directory'

There is no need for a svn commit command. And, of course, you can add project directories at any time.

Import a test body of code from a directory on that server as a test of the installation

[remote ~]$ cd ~/mytestproj
[remote ~]$ svn import . file:///proj/svn_repository/project1/trunk

Check that the files are in the repository

[remote ~]$ svn list file:///proj/svn_repository/project1/trunk

2. Set up Remote Access to the Repository

There are several ways to do this. Assuming that your users have accounts on the server with the repository, then using svn+ssh is probably the easiest to setup and maintain.

First, install the svn client on the client(local) machine. Look for this on http://subversion.tigris.org/

Without making any changes to the server, connect to it using svn+ssh with the remote hostname and the full path to the repository directory.

[local ~]$ svn list svn+ssh://svn.craic.com/proj/svn_repository/project1/trunk

This will prompt you for your user account password on the server and then return the list of files in the repository. So this just works... but it will ask you for your password for every outgoing connection, which quickly becomes a royal pain.

To fix this you need to set up an RSA private/public key pair and copy the public key to your directory on the remote server. Key pairs may look a little cryptic but the basic idea is pretty simple.

(Note that if you already have an RSA keypair for another purpose, you will need to give your new pair a different name - see below for the tweak needed to use it. For now I'll use the default name.)

First, create the key pair on your client machine (Do NOT enter a passphrase when prompted - just hit Return)

[local ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/jones/.ssh/id_rsa): /Users/jones/.ssh/id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/jones/.ssh/id_rsa.
Your public key has been saved in /Users/jones/.ssh/id_rsa.pub.
The key fingerprint is:

Create a .ssh directory in your home directory on the remote server if it doesn't already exist, then copy the public key (id_rsa.pub) to the remote machine that you want to connect to.

[local ~]$ scp .ssh/id_rsa.pub jones@svn.craic.com:/home/jones/.ssh/tmp.pub

Now go to the remote server and install the key in ~/.ssh.
[remote ~]$ cd .ssh

Look for a file called authorized_keys. If it does not exist, create it from the uploaded key:
[remote .ssh]$ mv tmp.pub authorized_keys
Otherwise, append the new key to the end of it:
[remote .ssh]$ cat tmp.pub >> authorized_keys

Now, back on the local client, you should be able to use the svn command I gave above without having to give a password.
[local ~]$ svn list svn+ssh://svn.craic.com/proj/svn_repository/project1/trunk

In addition, you can now ssh into the remote server without having to give a password. In some cases this is a nice side benefit, but some sites may wish to separate svn access from ssh access to a server. Doing this can have several benefits, especially if you have multiple private/public keypairs for different purposes, plus it lets you simplify your svn client commands a little.

On the client, rename your private key to something that indicates it is specifically for svn.
[local ~]$ cd .ssh
[local ~]$ mv id_rsa id_rsa_svn

Add a line to your shell settings file, such as ~/.bashrc or ~/.bash_login, that tells svn which key to use when it runs ssh.

export SVN_SSH="ssh -i $HOME/.ssh/id_rsa_svn"

Now go back to the remote server and customize the authorized_keys file so that this public key can only be used with svn. Add the string
command="/usr/bin/svnserve -t -r /proj/svn_repository"
(note the double quotes, which sshd requires) to the start of the line for the public key for your client, thus:

command="/usr/bin/svnserve -t -r /proj/svn_repository" ssh-rsa AAFaB3NzaC1[...]2oT+HQ== jones@client.craic.com

What this does is start up an instance of the svn server (svnserve) whenever you connect from your client via ssh. More than that, the -r /proj/svn_repository option specifies the root of the repository path, which means the URL that you specify on your client becomes quite a bit shorter.

So instead of using this command to list files in a project:
[local ~]$ svn list svn+ssh://svn.craic.com/proj/svn_repository/project1/trunk

You can use this:
[local ~]$ svn list svn+ssh://svn.craic.com/project1/trunk

That is all you should need to access your remote repository. Check out the svn book and other resources for details on how you actually work with the files stored therein.

Thursday, April 5, 2007

Perl CPAN on Mac OSX

Two things to remember when using CPAN to install or upgrade a Perl module on a Mac OS X system:

1. Set FTP to use Passive Mode
Otherwise it will just spin its wheels trying to find a cpan repository that it can download from.

In your bash shell, or your ~/.bash_login file add this line:
export FTP_PASSIVE=1

2. Run the CPAN interface under sudo
You need root privileges to install the modules after they have compiled.
sudo perl -MCPAN -e "shell"

CPAN may recommend updating your Bundle::CPAN module. While this is a good thing to do, it takes a long time to install/update all the supporting modules (10-15 minutes), so bear that in mind.

Tuesday, January 30, 2007

Using Oracle as a Backend to Rails

In a recent project I was asked to use Oracle as the database for a Rails application rather than MySQL. I am somewhat of an Oracle newbie and so setting this up was a bit of a learning experience. This note describes the steps that it took to accomplish the task.

For testing purposes I installed a copy of Oracle Database Express Edition (XE) on one of my servers. This is a free download from Oracle that offers much of the core functionality, albeit with some restrictions. I've written up the steps needed to Install Oracle XE on Fedora Core 5 Linux in a separate note. The following steps assume that you have this up and running.

My installation had the Rails application and web server on one machine and the Oracle database on another.

Oracle has its own way of handling client/server communication over a network. For a Rails application to talk to a remote database it needs to access all that machinery, and the preferred way of doing that in Rails is to use a Ruby interface to the Oracle Call Interface. This resides on the machine that hosts the Rails application.

So you need to install an Oracle client interface, the Ruby OCI interface and tell Rails how to connect to the database.

The Ruby OCI interface is called ruby-oci8 and comes in two variants, depending on the Oracle client software that you installed. In the Oracle/Linux installation note I installed the Full Client, however I ran into problems getting ruby-oci8 to work with the libraries included with that client (it works fine with the Server libraries). So I have to recommend the other route using the Oracle Instant Client.

1. Install Oracle Instant Client

You can download the Instant Client packages for free from Oracle, although you will need to create an account for yourself first. Pick your platform, agree to their license and you'll get a page full of packages in different versions. All you need are these two from Version 10.2:
Instant Client Package - Basic
instantclient-basic-linux-x86-64- (36 MB)
Instant Client Package - SDK
instantclient-sdk-linux-x86-64- (0.6 MB)
The ruby-oci8 install guide tells you to install these into /opt/oracle but that is probably not mandatory. I had downloaded the files into /proj/downloads.
# mkdir /opt/oracle
# cd /opt/oracle
# unzip /proj/downloads/instantclient-basic-linux-x86-64-
# unzip /proj/downloads/instantclient-sdk-linux-x86-64-
# cd instantclient_10_2
Then create the following symbolic link
# ln -s libclntsh.so.10.1 libclntsh.so
Edit /etc/bashrc to include the directory in your LD_LIBRARY_PATH environment variable by adding these lines
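Assuming the /opt/oracle/instantclient_10_2 location used above (adjust the path if you unpacked the client elsewhere), the lines to add would look something like this:

```
LD_LIBRARY_PATH=/opt/oracle/instantclient_10_2:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
```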
Then start a new shell, or source /etc/bashrc, in order to see the new value for the variable. Installing ruby-oci8 won't work without it!

2. Install ruby-oci8

Download version 1 of the interface from rubyforge:
# wget http://rubyforge.org/frs/download.php/16630/ruby-oci8-1.0.0-rc1.tar.gz
Unpack, make and install the package
# tar -zxvf ruby-oci8-1.0.0-rc1.tar.gz
# cd ruby-oci8-1.0.0-rc1
# make
# make install
The make calls ruby setup.rb to configure and then compile the code. It should all go smoothly provided the Oracle libraries are in place and it knows where to find them.

The official ruby-oci8 install guide may help if you have problems.

3. Test out the Connection to the Remote Database

This assumes that the remote database is running and that you have enabled the test HR database that ships with the server. Test the connection with a simple Ruby one-liner. Note that this specifies the remote host using Oracle's Easy Connect naming scheme (a bit like a URL). Alternatively you can set up a TNSNAMES.ORA file if you know about all that. Here the username is hr, the password is hr, the remote host is testbed.int.craic.com and the Oracle SID is XE. If the call is successful you should get back a bunch of lines listing various jobs in the database.
# ruby -r oci8 -e "OCI8.new('hr', 'hr', '//testbed.int.craic.com/XE').exec(
'select * from jobs') do |r| puts r.join(','); end"
AD_VP,Administration Vice President,15000,30000
AD_ASST,Administration Assistant,3000,6000
FI_MGR,Finance Manager,8200,16000
If that works then you have finished the installation and can now move on to creating your Rails application. If, like me, you are not very familiar with setting up a database in Oracle here are some steps to get you started.

4. Create a 'Database' in Oracle

First of all, Oracle does not use that term in the way you might expect if you are coming from MySQL!

Oracle has an instance, identified by an SID (which in the above case is XE). Within that instance you need to create a new User and in doing so you create a new tablespace (I think that is the right term) which is equivalent to creating a database in MySQL.

You need to access the Oracle Server web site on your server, login as system and create the new user. Use the name of your Rails app for convenience. I use myapp in this example. Enter a password and check the boxes Connect and Resource in the Roles section. Don't worry about the Directly Granted System Privileges. Then click Alter User to set it up.

5. Create Your Rails Application

Back to your client machine. Assuming that you have Rails already set up, go to your target directory and create the subdirectories for your app.
# cd /proj/rails
# rails myapp
Then configure your database.yml file. You will eventually want three Oracle users - one each for the development, production and test databases - but just consider the development version right now. A suitable block for that file would be:

development:
  adapter: oci
  # database: myapp_development
  host: testbed.int.craic.com/XE
  username: myapp
  password: myapp
Note that the database: key is not relevant for Oracle, as I mentioned. The hostname is this Oracle-specific combination of the real hostname followed by a slash and the SID (which is always XE for Oracle Express Edition)

(Update: you can create your rails app with this option:
# rails myapp --database=oracle
which specifies the adapter to be oracle instead of oci. When I tried this I got an error telling me that the TNS service name was not properly specified, but if you use the oci adapter as above it should work fine.)

From here on out you are working in regular Rails. You can create and modify your tables using migrations, you can CRUD your data and everything should just work.

If you are coming from MySQL, like me, be aware that Oracle uses different underlying data types than MySQL, and it appears this can be an issue with regard to dates and times in certain cases. Don't use the datatype :tinyint as it will barf - use :integer, :limit => 4 instead.

Note that in Rails 1.2.1 there is a bug that, with Oracle, will put the text empty_clob() into empty textareas in a form. This has been noted and fixed but the fix is not yet released. A simple workaround is to set that field explicitly to an empty string in the new action in your controller. For example:
def new
  @blog_entry = BlogEntry.new
  @blog_entry.text = ''
end

Monday, January 29, 2007

Configuring Oracle Database XE on Linux

I do all my database work in MySQL but I recently needed to set up a Rails application to use Oracle as a backend database. I didn't want to spend any of my money to do this, so I was pleased to see that Oracle makes Oracle Database Express Edition (Oracle Database XE) available as a free download.

You can find the software HERE but you will need to create an Oracle account first. Be warned that the Oracle site can be a little frustrating. You will want to download Oracle Database 10g Express Edition (either Western European or Universal version depending on your language requirements), which represents the Server, and Oracle Database 10g Express Client, which allows remote access to the server from another machine.

Oracle provides detailed documentation on these, including an Installation Guide, a Getting Started Guide and an Online Tutorial. While these are welcome, they tend to cover more options than you really need and so they can be a bit confusing. This note trims off that excess verbiage and explains what I needed to do to install the database on a Linux Fedora 5 server.

Oracle XE Server Installation

1. Check the prerequisites in the Installation Guide
For hardware these are basically 512 MB memory, 1.5 GB disk.
You'll need at least 1024 MB of swap space. Check that with:
# free -m
If the total for the Swap line is less than 1024 then you need to add a swapfile of an appropriate size. I'll try and add a short note about that to this site when I get the chance - otherwise google for it.
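For reference, a common recipe for adding a 1 GB swap file - the path and size here are just examples - looks like this:

```shell
# Create a 1 GB file, format it as swap and enable it (run as root)
dd if=/dev/zero of=/swapfile1 bs=1M count=1024
mkswap /swapfile1
swapon /swapfile1

# To make it permanent, add this line to /etc/fstab:
# /swapfile1  swap  swap  defaults  0 0
```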

You'll need the linux packages glibc (2.3.2 or higher) and libaio (0.3.96 or higher). You should have glibc already there but you'll have to install libaio (as root):
# yum install libaio
There are also a bunch of kernel parameters listed in the installation guide. I didn't have to mess with these.

2. Install the Downloaded RPM
# rpm -ivh oracle-xe-
Preparing... ########################################### [100%]
1:oracle-xe ########################################### [100%]
Executing Post-install steps...
You must run '/etc/init.d/oracle-xe configure' as the root user to
configure the database.
If you've met all the prerequisites, the rpm should install smoothly.

3. Run the Configuration Script
# /etc/init.d/oracle-xe configure
Accept the defaults for the two questions about ports, provide an administrator password and set the database to start on boot. The script will then do a load of work in the background to configure things properly.

This can take quite a while. Run top in another window if you want reassurance that things are happening.

4. Access the Database via its Web Site

On the server with the database, open up a browser and go to http://localhost:8080/apex.
Login as system and give the password that you set earlier. If you want to access this site from other machines on your network then go Administration->Manage HTTP Access and select Available from Local Server and Remote Clients.

You can enable a test database by going to Administration->Database Users->Manage Users and clicking on HR. Provide a password (such as 'hr'), change Account Status to Unlocked and click the Alter User button. This database is handy for testing later on.

5. Set Some Environment Variables

As root, open /etc/bashrc in an editor and add this line at the bottom of the file:
. /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin/oracle_env.sh
When you create a shell this will set a few environment variables that are required in order to access the database. These include LD_LIBRARY_PATH, ORACLE_HOME and ORACLE_SID. Look for them in the output of printenv when you create a new shell.

That's it for installing the server...

Oracle XE Client Installation

You would install this on a remote machine that wants to access the database server over the network.

1. Check the prerequisites

Nothing significant here, except the glibc and libaio packages, as above:
# yum install libaio
2. Install the Downloaded RPM

This gave me a couple of errors but they didn't appear to be a problem...
# rpm -ivh oracle-xe-client-
Preparing... ########################################### [100%]
df: `/usr/lib/oracle': No such file or directory
expr: syntax error
/var/tmp/rpm-tmp.86099: line 23: [: -lt: unary operator expected
1:oracle-xe-client ########################################### [100%]
Executing Post-install steps...
3. Setup the Environment Variables

As root, open /etc/bashrc in an editor and add this line at the bottom of the file:
. /usr/lib/oracle/xe/app/oracle/product/10.2.0/client/bin/oracle_env.sh
4. Access the Remote Server

If your paths are setup correctly then you can access sqlplus from a shell. Here I am accessing the HR test database on server testbed. The connect syntax is username/password@host
# sqlplus /nolog

SQL*Plus: Release - Production on Mon Jan 29 10:21:21 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL> connect hr/hr@testbed.int.craic.com

SQL> select * from jobs;
You should see a load of output returned by the server, indicating that everything is set up correctly.

That's it... fairly simple installations... you'll need to look elsewhere for guidance on actually creating tables and entering data. I'll add a guide to using Oracle as a backend database with Rails shortly.

Thursday, January 25, 2007

Configuration of Web Servers and Rails

When it comes to setting up a web server to host a Rails application, there are a number of choices. Unfortunately, the documentation and HOWTOs that describe these options can be confusing, often getting bogged down in the details of compiling pieces of code, etc.

This note is an attempt to present the range of options and to guide you through the installation of each. In summary, your choices are:
1. WEBrick - built into Rails
2. Apache
3. Apache with FastCGI - higher performance
4. Lighttpd - a fast alternative to Apache
5. Lighttpd with FastCGI
6. Mongrel - a server written in Ruby
7. A Hybrid Solution

Lighttpd with FastCGI is a common solution for production Rails servers. Apache is the most common server on the Internet but as you will see there are issues in using it compared to the alternatives. Mongrel is a relative newcomer that is getting a lot of attention.

The installation steps given here are for Linux Fedora Core 5 but these will likely apply to other current Unix variants.

1: WEBrick

The simplest option is to use the WEBrick server that is built into each Rails installation. You don't have to install anything over and above regular Rails.

From a terminal, cd to the top level directory of your application and run:
# script/server
By default this runs a server on localhost (i.e. locally to that machine) on port 3000. Point a browser on that machine to http://localhost:3000 and you should see your application. There is no configuration file.

This is great for development, but doesn't cut it if you want anyone else to use your application.

2: Apache

Most Linux installations will already have the Apache httpd server installed and ready to run. Perhaps the easiest way to see if it is running is (as root):
# /sbin/service httpd status
You can start, stop or restart it with
# /sbin/service httpd start
# /sbin/service httpd stop
# /sbin/service httpd restart
The default configuration file for Apache (/etc/httpd/conf/httpd.conf) needs to be modified in order to run a Rails application. Make a backup copy of the file before you make any changes!

Open the file in an editor, look for these two LoadModule lines and make sure they are not commented out:
LoadModule env_module modules/mod_env.so
LoadModule rewrite_module modules/mod_rewrite.so
Then go to the bottom of the file and add the following block, replacing MYAPP with the path to your application directory:
<VirtualHost *:80>
SetEnv RAILS_ENV development

DocumentRoot /MYAPP/public/
ErrorLog /MYAPP/log/apache.log

<Directory "/MYAPP/public">
Options ExecCGI FollowSymLinks
AddHandler cgi-script .cgi
AllowOverride all
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
Now when you restart Apache and point your browser to http://localhost (you don't need the :3000 here), or its real hostname from a browser on a different machine, you should get the Rails welcome page.

If you get a '403' page telling you that you don't have permission then make sure that all the directories including and above the application directory are world-readable.
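For example, if the app lived at /proj/rails/myapp (a hypothetical path - substitute your own), you could open up the directory chain like this:

```shell
# Apache needs read/search access to every directory along the path
chmod o+rx /proj /proj/rails /proj/rails/myapp
```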

So now you have the application running under Apache. Apache is great but does not give you the highest performance possible. You can improve its response by adding in the FastCGI module.

3: Apache + FastCGI

This is typically where the HOWTOs get really confusing, really quickly...

The bottom line is that, for now, you might find another one of the server options an easier path than this one.

FastCGI is an extension to CGI that greatly improves server performance. You can learn more and download it from http://www.fastcgi.com.

You will need to download and compile two components - fcgi, which is the 'core' FastCGI software - and mod_fastcgi, the Apache module component that links fcgi and the web server.

You should be root when you install all the following packages.

Before doing this, make sure you have these Apache development packages installed. They may not all be necessary, but play it safe. Install them like this:
# yum install httpd-devel apr apr-devel apr-util-devel
The following assumes that you can use wget to fetch remote files. If not, fetch them with your browser. Download and compile in a temporary or scratch directory. You might want to check for newer versions of these files, but they have not changed since 2003.

Install the software under /usr/local to isolate it from general Linux software.

To install fcgi:
# wget http://www.fastcgi.com/dist/fcgi-2.4.0.tar.gz
# tar -zxvf fcgi-2.4.0.tar.gz
# cd fcgi-2.4.0
# ./configure --prefix=/usr/local
# make
# make install
If this all works as expected then you'll be able to see a bunch of libfcgi files in /usr/local/lib.

Now you need to install the ruby gem that lets Ruby talk to fcgi, telling it where the fcgi libraries and include files are installed.
# gem install fcgi -r -- --with-fcgi-lib=/usr/local/lib --with-fcgi-include=/usr/local/include

Next you install the mod_fastcgi module for Apache... and here's the problem...

At the time of writing (January 2007) the current version, mod_fastcgi-2.4.2, does NOT COMPILE with Apache 2.2. The solution requires you to patch the mod_fastcgi source... messy, very messy... I've written a separate post that will guide you through the process: Compiling mod_fastcgi-2.4.2 for Apache 2.2. I will do my best to update this post if and when the problem is fixed.

Those instructions show you how to patch the source, compile the source and update the httpd.conf file.

If you have been brave, or foolhardy, enough to go through all that rigmarole then there are just two more steps left before your Rails app can utilize the performance boost that FastCGI offers.

In your Rails application directory, go to public and edit the .htaccess file. Change the line:
RewriteRule ^(.*)$ dispatch.cgi [QSA,L]
to:
RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]
Restart Apache and you should be good to go...

Finally, check that the log directory in your application is world-writable so that fcgi can create a file called fastcgi.crash.log.

4: Lighttpd

Lighttpd (aka LightTPD and pronounced 'lighty') is an alternative to Apache that offers many of the same features but with smaller resource requirements and higher performance. It has been long favored by the Rails community, although Mongrel is challenging that status.

Before you start, make sure that you have /usr/local/sbin in your path. You can put this line in the .bash_login file for each user, or in /etc/profile to set it up for all users:
export PATH="/usr/local/sbin:$PATH"
Start a new shell to make sure you pick up the new PATH.

Installation is straightforward, but it does have a prerequisite in the PCRE library which is used for regular expression pattern matching. Install PCRE and Lighttpd into /usr/local. You will see a ton of compile commands and warnings but don't worry.

Note the CFLAGS option. This was included in Dan Benjamin's install instructions for Mac OS X and is needed for Intel Macs. You may not need it, but it'll do no harm. The option is written as 'Oh-One' not 'Zero-One'.
# wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-6.6.tar.gz
# tar -zxvf pcre-6.6.tar.gz
# cd pcre-6.6
# ./configure --prefix=/usr/local CFLAGS=-O1
# make
# make install
Now fetch the lighttpd distribution (check for a more recent version first) and install it thus:
# wget http://www.lighttpd.net/download/lighttpd-1.4.13.tar.gz
# tar -zxvf lighttpd-1.4.13.tar.gz
# cd lighttpd-1.4.13
# ./configure --prefix=/usr/local --with-pcre=/usr/local
# make
# make install
The lighttpd executable is located at /usr/local/sbin/lighttpd

Assuming that you already had Apache installed on your system, you will need to replace references to it in your system startup scripts, and you will need to modify the lighttpd config file to use your Rails application.

On Linux the httpd startup script is /etc/rc.d/init.d/httpd. Make a backup copy of this before you start messing with it. You need to replace all references to the Apache httpd with the lighttpd equivalents. To avoid cluttering this post, I've put a lighttpd specific startup file on my site HERE. You can also look at the script rc.lighttpd included in the doc subdirectory of the lighttpd source distribution.

The one line that you will need to change is the location of the lighttpd config file. If you only plan to run a single Rails app on your system then you can use the one in your application directory tree. Otherwise you need to place one in a suitable location such as /etc/httpd/conf/lighttpd.conf.
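The line in the startup script that actually launches the server is just the binary with the -f flag pointing at your chosen config file. A minimal sketch of the start action, assuming the paths used above (the real rc.lighttpd also handles stopping, restarting and PID files):

```shell
#!/bin/sh
# Minimal start action for lighttpd -- a sketch, not a complete init
# script. lighttpd daemonizes itself unless started with -D.
/usr/local/sbin/lighttpd -f /etc/httpd/conf/lighttpd.conf
```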

Detailed help on the config file can be found HERE, but for a Rails application you should start with the lighttpd.conf file in your application config directory.

What's that? You don't have lighttpd.conf in your config directory? Rails will take care of that for you!

Assuming you have /usr/local/sbin in your PATH, go to the Rails App directory and start the built-in server thus:
# script/server
=> Booting lighttpd (use 'script/server webrick' to force WEBrick)
=> config/lighttpd.conf not found, copying from
=> Rails application starting on
=> Call with -d to detach
=> Ctrl-C to shutdown server (see config/lighttpd.conf for options)
Rails detects lighttpd in your path and uses it in preference to WEBrick. Not only that, but it will copy a lighttpd.conf file into your config directory if it doesn't find one. Pretty neat...

To serve this application to users on other machines you will want to copy this local config file and modify it.

The two most important lines in that file are at the beginning:
server.bind = ""
server.port = 3000
These tell the server to respond only to requests made on that machine (localhost) on port 3000, which is the default for the development server. Change them to the real hostname of the machine and port 80.
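For example, the top of a production copy of the file might look like this (the hostname is a placeholder):

```
server.bind = "www.myapp.com"
server.port = 80
```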

Next you need to change all references to CWD to the real path to that directory (/proj/rails/myapp for example).
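This substitution can be done in one line with sed; a sketch, assuming the generated config/lighttpd.conf and a hypothetical application path:

```shell
# Replace every CWD placeholder with the real application path,
# keeping the original as config/lighttpd.conf.bak.
# /proj/rails/myapp is a hypothetical example -- use your own path.
sed -i.bak 's|CWD|/proj/rails/myapp|g' config/lighttpd.conf
```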

Reference this file in the lighttpd startup script, restart the server and go to that URL in a browser on this or another machine. You should see the Rails app welcome page. You're up and running with lighttpd!

5: Lighttpd + FastCGI

You can improve the performance of a lighttpd server using FastCGI, just as you can with Apache, although setting it up is much easier. Before you install lighttpd, install the fcgi software as described in the Apache + FastCGI section and install the fcgi gem, but do not go through all the mod_fastcgi nonsense! The lighttpd distribution comes with its own mod_fastcgi module.

Lighttpd picks up on the fact that you have fcgi installed and makes use of it. You'll see it referenced in the lighttpd.conf file that Rails sets up for you.
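For reference, the FastCGI section of the generated file looks roughly like this (names and values are illustrative, and CWD is the placeholder discussed above):

```
fastcgi.server = ( ".fcgi" => (
  "myapp" => (
    "min-procs" => 1,
    "max-procs" => 2,
    "socket"    => "CWD/tmp/sockets/fcgi.socket",
    "bin-path"  => "CWD/public/dispatch.fcgi"
  )
) )
```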

6: Mongrel

Mongrel is a web server written in Ruby (with C) that is extremely easy to get running and that plays well with Rails.

Installation involves nothing more than installing a gem.
# gem install mongrel
But note that at the time of writing (01/25/2007) Mongrel 1.0.1 requires Ruby 1.8.4 and does not install on Ruby 1.8.5.

You can run mongrel as a local development server. Instead of running script/server:

# cd myrailsapp
# mongrel_rails start
It is apparently faster than WEBrick, and in many cases it does not need a configuration file.

One advantage it offers over other solutions in production systems is a simple way to set up clusters of servers to help with load balancing, but that is beyond the scope of this note.

7: A Hybrid Solution

You would not use Mongrel by itself as a production server. Instead you combine it with Apache (or lighttpd). Apache acts as the front end, proxying certain requests on to Mongrel. This hybrid solution may appear complex, but it lets you use the best features of both servers: Apache is better at serving static content like stylesheets and images, whereas Mongrel is better at serving dynamic content coming from your Rails application. Setting this up seems fairly straightforward. Check the Mongrel web site for details and examples.

Similarly you can combine Apache and lighttpd by adding ProxyPass directives to an Apache configuration. You might want to do this if your server already hosts a site under Apache that you don't want to change. You can add in a Rails application linked to lighttpd on a different port with Apache as the front end, redirecting requests or processing them as appropriate.

An example configuration block is:
<VirtualHost *:80>
ServerName myapp.com
ServerAlias www.myapp.com

ProxyPass / http://www.myapp.com:8000/
ProxyPassReverse / http://www.myapp.com:8000/
ProxyPreserveHost on
</VirtualHost>
This redirects all requests to the virtual host to a second server running on port 8000.
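Note that ProxyPass and ProxyPassReverse depend on Apache's proxy modules being loaded. On a stock build you may need lines like these (module paths assumed) among the LoadModule directives in httpd.conf:

```
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
```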

A Note About Sources of Information

This note is the result of reading a lot of HOWTOs and notes about web server installation, a lot of experimentation and quite a bit of frustration.

The Ruby on Rails site lists many such documents, but some of these are confusing and/or outdated - caveat emptor.

Some of the useful documents for me have been:
Rails on lighttpd - Duncan Davidson
Building Ruby, Rails, LightTPD, and MySQL on Tiger - Dan Benjamin
Ruby on Rails (with FastCGI) Howto - "goldenratio"
Thanks to those authors and others who have helped guide me through some of the thorny installation issues.

mod_fastcgi-2.4.2 and Apache 2.2

This is a description of the steps needed to compile the mod-fastcgi module for Apache 2.2.

If you are trying to improve the performance of your web server, or to install the Rails web framework with Apache, then this tech tip could save you a lot of frustration.

At the time of writing (01/25/2007) the distribution of this module (mod_fastcgi-2.4.2) will not compile against Apache 2.2 without patching the source. The problem stems from Apache dropping some definitions from an include file that mod_fastcgi relied upon. The mod_fastcgi developers have known about the problem, and have had a workaround, since late 2005, but unfortunately the fix has not made it into a new release. Users are required to patch the source themselves; worse, to find this out you have to read through a bunch of messages on the FastCGI developers' mailing list and be familiar with the patch command, which many of us are not.

So I've written up the necessary steps here with the goal of lessening this unnecessary pain for others. Hopefully the problem will be fixed properly soon but for now here is what you need to do...

This is written for a Linux Fedora Core 5 system but should be applicable to other Unix variants. You will want to be root when you install the software.

1. Install the Apache development libraries and include files
# yum install httpd-devel apr apr-devel apr-util-devel
On Fedora these will install the libraries into /usr/lib/httpd

2. Fetch the mod_fastcgi distribution and unpack
# wget http://www.fastcgi.com/dist/mod_fastcgi-2.4.2.tar.gz
# tar -zxvf mod_fastcgi-2.4.2.tar.gz
# cd mod_fastcgi-2.4.2
3. Fetch the patch

For convenience I've put a file containing the patch on my site
# wget http://www.craic.com/rails/installation/mod_fastcgi-2.4.2-apache2.2.patch
The original patch was created by Daniel Smertnig and can be found HERE.

4. Patch the distribution

Go to the directory that contains the mod_fastcgi distribution and apply the patch:
# patch -p0 < mod_fastcgi-2.4.2-apache2.2.patch
patching file mod_fastcgi-2.4.2/fcgi.h
patching file mod_fastcgi-2.4.2/Makefile.AP2
Hunk #1 succeeded at 20 with fuzz 1.

5. Fix the Makefile

Now cd into the mod_fastcgi-2.4.2 directory. Copy the Apache 2 specific Makefile into place:
# cp Makefile.AP2 Makefile
Open Makefile in an editor. Change top_dir to this:
top_dir = /usr/lib/httpd

Then uncomment the INCLUDES line and change it to this:
INCLUDES=-I /usr/include/httpd
Close the file and compile the module thus:
# make
# make install
You will see a bunch of warnings and commands that (hopefully) you can ignore. If the make succeeded you will see the module in the Apache modules directory:
# ls -l /usr/lib/httpd/modules/mod_fastcgi*
-rwxr-xr-x 1 root root 202193 Jan 24 18:07 /usr/lib/httpd/modules/mod_fastcgi.so

6. Update httpd.conf

Edit the Apache config file /etc/httpd/conf/httpd.conf and add this line at the end of the LoadModule section:
LoadModule fastcgi_module modules/mod_fastcgi.so

Add this block towards the end of the file, just before the Virtual Hosts section. It has to come after the lines that begin with User and Group.
<IfModule mod_fastcgi.c>
FastCgiIpcDir /tmp/fcgi_ipc/
AddHandler fastcgi-script .fcgi
</IfModule>
Save the file and restart Apache:
# /sbin/service httpd restart

You should not have to jump through all these hoops to install this. FastCGI is a great piece of software and it is a shame that whoever maintains it has not been able to keep the distribution up to date.
