A collection of computer systems and programming tips that you may find useful.
 
Brought to you by Craic Computing LLC, a bioinformatics consulting company.

Wednesday, June 20, 2007

Apache 2.2 - Where are the modules?

The Apache httpd server has played a critical role in most of my work over the past decade but these days I'm finding more and more reasons to look at alternatives. Its configuration syntax is not pretty, the distribution is bloated, and its performance with some applications (especially Rails) is almost unworkable. Here's another reason to grumble...

I recently compiled Apache 2.2.4 from source. I've done that several times in the past and just did the typical steps of:
# ./configure --prefix=/usr/local/apache2
# make
# make install


That works great and I can fire up the server just fine - until I want to serve an application that involves URL rewriting. I get this error:

[Wed Jun 20 17:00:07 2007] [alert] [client 192.168.2.26] /Users/jones/instiki/public/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration

That tells me that the mod_rewrite module is not being loaded in the httpd.conf file. So I go into that file and search for the LoadModule section. In the past there would be a bunch of LoadModule statements, many of which were commented out. But in 2.2.4 there is only one. This is actually a good thing as it makes the httpd.conf file smaller and easier to look through.

I figured I just needed to add a line that loaded mod_rewrite. I checked my apache2/modules directory to get the correct file name - but there are no modules in there. I checked to see if it was compiled into the executable using this:

# /usr/local/apache2/bin/httpd -l

No, it's not in the list. So where are all the modules? Am I supposed to download them separately from apache.org? Looking on that site doesn't tell me what to do. What's the deal?

Turns out that you have to specify the modules you want at compile time. This is a really, really bad move on Apache's part. I'm sure it is done with the best of intentions but what a pain. In order to get my module I have to go and recompile the whole damn thing... great...

*WAIT* If you have already modified your current apache config files, htdocs directory, etc. copy them somewhere safe before you recompile and install over them!


Here's what I need to do to get mod_rewrite and mod_env compiled so that I can load them dynamically.

# ./configure --prefix=/usr/local/apache2 --enable-rewrite=shared --enable-env=shared

To see the full list of options (215 lines of them) do this:

# ./configure --help

You can get 'all' or 'most' (whatever that means) of the modules using these variants:

# ./configure --prefix=/usr/local/apache2 --enable-mods-shared=all
# ./configure --prefix=/usr/local/apache2 --enable-mods-shared=most


I chose the latter when I recompiled and that gave the two that I need plus hopefully any others that I might need in the future. So my specific steps were:

# ./configure --prefix=/usr/local/apache2 --enable-mods-shared=most
# make
# make install


Then I went into the httpd.conf file, put back a couple of changes that I had made previously and went looking for the LoadModule lines. Well, I guess I got what I asked for... I asked it to compile most of the modules. It did that and it has set up the config to load all 43 of them!

All I want are two modules, for Pete's sake... so I reckon I can just delete the others, right? WRONG! Without the new configure directive, a bunch of the standard modules are compiled into the server. With the new directive, they are loaded dynamically. So if you delete those LoadModule lines then a load of other stuff will break.
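Before deleting anything, it's worth seeing where things stand. A minimal sketch - the paths are from this install, and the two-line conf sample here just stands in for the real 43-line LoadModule section of httpd.conf:

```shell
# A sample of the LoadModule lines that --enable-mods-shared=most writes
# into httpd.conf (just two of the 43, for illustration)
conf='LoadModule env_module modules/mod_env.so
LoadModule rewrite_module modules/mod_rewrite.so'

# On a real install, run the same grep against the live file:
#   grep -c '^LoadModule' /usr/local/apache2/conf/httpd.conf
# and compare with the modules compiled into the binary itself:
#   /usr/local/apache2/bin/httpd -l
printf '%s\n' "$conf" | grep -c '^LoadModule'
```

The difference between the two lists is exactly the point above: anything that `httpd -l` does not show must come in via a LoadModule line.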

With all this done, I can now get back to where I started with a functioning Apache installation. I hope this note helps you avoid the couple of hours of frustration that I just enjoyed.

Apache is a great piece of work, don't get me wrong - but nonsense like this is really annoying.


If you are in the market for an alternative you might want to consider Nginx (http://nginx.net/), a relatively unknown http server that is getting a lot of good press in the Rails community.

Tuesday, June 19, 2007

Adding Hard Drives to a Linux System using fdisk

In recent releases of Fedora Linux (e.g. Fedora 6 and 7), the default filesystem configuration uses Logical Volumes, which are a flexible way to manage the hard drives on a machine. But this flexibility can come at a cost if and when the drives go bad.

The old school way of setting up drives is to use the fdisk command to set up specific, fixed partitions on each drive. This requires a bit more planning and a little more execution, but it has the advantage that if a machine fails for some reason, you can still move the drives to a new machine and mount them directly.

I wanted to do that on a system recently where I had 2 x 500GB drives set up as a logical volume and then wanted to add 2 more drives to use as live backup disks for other servers. I wanted the backup disks to be as independent as possible, so adding them to the logical volume was not a good idea. Happily, you can mix logical volumes and disks with traditional partitions with no problems.

Here are the steps needed to add two drives to the Fedora 7 system. The same information is available in a load of other places. I'm including it here largely for my own benefit next time I need to do this. All steps assume you are logged in as root.

1: See what physical disks you have on your system

# /sbin/fdisk -l

You'll get a load of output - look for lines that delimit each of the physical disks, such as:
Disk /dev/sda: 500.1 GB, 500107862016 bytes

Below these you will see the partitions on each drive. Note the distinction between a disk (/dev/sda) and a partition on that disk (/dev/sda1, /dev/sda2...)

On my system the disks are SATA drives and these appear under these names /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd.

/dev/sda and /dev/sdb are used in the original Logical Volume that I set up. I'm going to set up /dev/sdc and /dev/sdd as two separate disks, each with a single partition.


2: Use fdisk to create a single partition on each disk

# /sbin/fdisk /dev/sdc
Then in response to the various prompts give these answers:
n (create a new partition)
p (make it a primary partition)
1 (call this partition #1)
hit enter to start at the first sector
hit enter again to end at the last sector (in other words make the partition fill the entire drive)
w (write the partition to the disk and quit the program)

I repeated this for disk /dev/sdd
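For what it's worth, those interactive answers can also be piped into fdisk - handy when partitioning several identical disks. This is a sketch only: the actual fdisk line is commented out, because running it against the wrong device will destroy data.

```shell
# One answer per fdisk prompt, in the same order as the interactive session:
# n (new), p (primary), 1 (partition #1), two blank lines (accept the default
# first and last sectors), w (write and quit)
answers='n
p
1


w'

# Uncomment to actually partition the disk -- triple-check the device name first!
# printf '%s\n' "$answers" | /sbin/fdisk /dev/sdc

printf '%s\n' "$answers" | wc -l    # six answers, one per prompt
```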

3: Create a filesystem on each drive

The standard Linux filesystem type at the moment is ext3, although alternative filesystems can be used. Each filesystem gets a label, which is typically the same as the mount point where you will access the drive. On my system I called the two filesystems /backup0 and /backup1. You create each filesystem with the appropriate variant of the mkfs command. Note that you now refer to the specific partition (/dev/sdc1) and not the disk itself (/dev/sdc).

# /sbin/mkfs.ext3 -L /backup0 /dev/sdc1
# /sbin/mkfs.ext3 -L /backup1 /dev/sdd1


Each command will produce quite a bit of output and may take some time as it writes the filesystem structures out to the actual drive.

4: Create the mount points for each filesystem

A mount point is the path/directory where this filesystem can be accessed under Linux. In my case the commands were:

# mkdir /backup0
# mkdir /backup1


5: Specify these mount points in the filesystem table file
Make a copy of the existing file - just in case
# cp /etc/fstab /etc/fstab.bak

Edit the current file (/etc/fstab) and add a line for each of the new filesystems.

LABEL=/backup0 /backup0 ext3 defaults 1 2
LABEL=/backup1 /backup1 ext3 defaults 1 2


The label is an alias for the actual partition. The next field is the mount point (same name as the label in this case). The next field is the type of filesystem (ext3) and the final 3 fields are just default options that you should include.
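To spell that out, here is one of the new entries split into its six fields - the dump and fsck-order meanings are as described in fstab(5):

```shell
# One of the new fstab entries, split apart on whitespace
line='LABEL=/backup0 /backup0 ext3 defaults 1 2'
set -- $line
echo "device:      $1"   # the partition, referred to by its label
echo "mount point: $2"   # where the filesystem appears in the tree
echo "type:        $3"   # ext3
echo "options:     $4"   # comma-separated mount options
echo "dump:        $5"   # 1 = include in dump(8) backups
echo "fsck order:  $6"   # 2 = check after the root filesystem (which uses 1)
```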

6: Reboot the system

# /sbin/shutdown -r now

You can re-mount the filesystems with 'mount -a' but a full reboot is a safe bet.

Check that the filesystems are mounted using 'df', 'cd' to the mount points and try writing to the disks. You will want to set the directory permissions to something appropriate using chmod and chown.

'Modern' ATX Motherboards have TWO Power Connectors

I just put together a new machine with an ASUS P5B Deluxe motherboard, a Core 2 Duo CPU and a load of other good stuff. I built a similar system with the same motherboard a few months back and had no problems at all on the hardware end. I must have actually read the manual that time.

With this system I put everything together, double checked it and powered it on. The fans worked, the red and blue LEDs on the motherboard lit up as expected but there was no video output and no beep from the system... dead to the world.

After changing the video card, unplugging the drives, and swapping out the memory I was still getting nothing. The fact that there was not even a beep code from the motherboard had me thinking it was a bad board or a bad CPU. I needed a break so I took the dog for a walk...

Refreshed, I took another look at the system, checked the cables and... oh... OK...

Modern ATX form factor boards like this Asus P5B have TWO power connectors - the 'regular' 24 pin EATXPWR that has been around for a while, plus a 2x4 pin EATX12V connector.

If, like me, you forget about that one then the system will not boot... says so right there in the manual... in small print, on page 33...

On my board the socket for this is in the 'top left' corner, near the keyboard and other rear-panel sockets. You can use either a 4 pin plug or an 8 pin plug of the correct type. The board comes with a small plastic cover over 4 of the socket holes, so you need to remove this if you want to use the 8 pin plug.

So, a simple mistake to make... but one that took me well over an hour to figure out, dang it.

Friday, June 15, 2007

Installing Linux Fedora 7 over a Network

I wanted to upgrade a Linux Fedora 6 machine to Fedora 7, but I didn't want to download and burn a bunch of CDs and then feed these in and out during the upgrade. The machine does not have a DVD drive.

So instead I downloaded the DVD .iso version of Fedora (a single 3.2 GB file) and made that available from another server via NFS.

In order to perform the upgrade, I needed to boot the machine from a minimal Fedora installation so that it could fetch the full install via NFS.

The two simple options for doing this are to use a boot CD or to boot the system off a USB memory stick.

** I'm having problems booting from a USB stick in Fedora 7 so I'll leave that section out until I have things figured out**


Booting off a Minimal Linux CD

1. From the Fedora distribution site, download the minimal boot disk .iso file and burn it onto a CD. This file is called boot.iso and is about 8 MB.

2. Boot the target machine from this CD and follow the initial setup steps. In my case I wanted to perform an upgrade and I wanted to get the distribution via NFS. You can also use FTP or HTTP, which might be easier for some.

3. Using NFS, you will be prompted for the Ethernet interface to use, if you have more than one. Then you will be prompted to enter the network address of the host that contains the new distribution file and the path to the .iso file.

Note that the installer is expecting to find the DVD .iso file at that location and not some unpacked version of Fedora. I would guess it will work if you have the set of CD .iso files but I've not tried that.

4. Assuming it can access the remote directory, the installer will get under way. It will first check dependencies between the new and old Linux packages and then you will see the message 'Preparing transaction from installation source'. This is the stage where the remote .iso file is copied over to the local machine and mounted. If this succeeds then you should be home and dry.

Copying a Disk Image to a USB Memory Stick on Mac OS X

These are the steps needed to copy a disk image (.img file) directly to a USB Flash Memory Stick.

For example, I wanted to copy a Linux boot image file to a memory stick so that I could boot a Linux server directly from the stick.

What you don't want to do is have Mac OS X mount the stick and just drag the image to the stick in the Finder. This will copy the file to the filesystem that is already on the stick.

You want to replace that filesystem with the one contained in the .img file. Note that this is a complete replacement - any existing data on the stick will be erased.

1. Insert the memory stick in the Mac

2. Open the Disk Utility application (in Applications/Utilities)
You should see the name of the USB stick appear in the left panel in the application, below your system disk.

3. Click the memory stick name, then click the 'Restore' tab at the top of the right hand panel.

4. Drag and drop the .img file to be copied into the Source box in Disk Utility.

5. Drag the memory stick icon from the left panel into the Destination box.

6. Click the Restore button that has now become active and click OK in the confirmation window which then appears.

7. Enter your password in the authentication window that pops up.

8. The copy is complete when the Source and Destination boxes in Disk Utility go blank.

9. Click on the memory stick icon in the Finder and you should see the contents have been replaced with those of the disk image.


Those familiar with the ever useful UNIX command dd might expect to use this to copy the image file on the command line. The problem here is that Mac OS X will automatically mount the memory stick into the Finder such that a command like 'sudo dd if=/path/to/my.img of=/dev/disk3' will fail with a 'Resource Busy' error.
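The workaround, as I understand it, is to unmount the stick first with diskutil - unmounting, unlike ejecting, leaves the device node in place for dd to open. A sketch with the destructive commands commented out; /dev/disk3 is a hypothetical device name, so check yours with 'diskutil list' first:

```shell
# Hypothetical device name -- find the real one with: diskutil list
DISK=/dev/disk3

# Unmount (not eject) the stick so dd can open the device exclusively:
#   diskutil unmountDisk "$DISK"

# Then write the raw image over the whole device (erases everything on it):
#   sudo dd if=/path/to/boot.img of="$DISK" bs=1m

echo "target device: $DISK"
```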

Setting up NFS mounts in Linux (Fedora 6)

This is a short guide to mounting a directory from one Linux system on another on the same internal network using NFS. This was tested on a Linux Fedora system but should be applicable to other Unix variants.

In this example I am setting up a NFS mounted directory to use for backing up files on remote systems. All commands should be run as root or via sudo.

On the Server
(Fedora Core 6 with NFS4 installed)

1. Turn off any firewall and SELinux software (or configure it to allow NFS traffic). Use the system-config-securitylevel GUI tool. If you follow all the steps here but you are unable to mount the remote directory then go back and check your security settings.

# system-config-securitylevel

Another place to check is /etc/hosts.allow. Adding this line will open up all services on this server to the specified network.

ALL: 192.168.2.0/255.255.255.0

2. Configure the NFS4 ID-to-name mapping daemon. This is not used in versions of NFS before NFS4, but it is used on Fedora 6 and above.

Edit the configuration file /etc/idmapd.conf and modify the Domain and Nobody-User/-Group lines thus:

Domain = your-internal-domain.com
Nobody-User = nfsnobody
Nobody-Group = nfsnobody


3. Make sure the portmap, nfslock and nfs services are running. On my system the first two were running by default.

# /sbin/service portmap start
# /sbin/service nfslock start
# /sbin/service nfs start


4. Make sure they will be started whenever the system is rebooted

# /sbin/chkconfig --level 345 portmap on
# /sbin/chkconfig --level 345 nfslock on
# /sbin/chkconfig --level 345 nfs on


5. Create an exports directory for each directory you want to share. This is not the actual directory that contains your data but an indirect link that makes it easier to move the real directory without having to update all your clients. It's a bit like a symbolic link. The name doesn't really matter but something like exports or nfsexports is a good idea.

# mkdir /exports
# mkdir /exports/backups


Then change the directory permissions to suit your needs. For example:

# chmod -R a+w /exports

6. Bind the exports directories to the real directories by editing /etc/fstab and adding lines like this, where /proj/backups is the 'real' directory and /exports/backups is the linked directory that will actually be shared by NFS:

/proj/backups /exports/backups none bind 0 0

7. Pick up this change on your system by remounting all filesystems

# mount -a

8. Tell NFS which filesystems can be exported by editing /etc/exports and adding these lines. Look at man exports to learn about all the options. In this example the numbers represent the network and netmask of my internal network; they define the range of IP addresses that will be able to access the shared directory. Modify them to restrict access as needed. Of the other options, rw is the most important, signifying that the client will have read/write access to the directory.

Note that these are two entries, each of which must be on a single line!

/exports 192.168.2.0/255.255.255.0(rw,insecure,no_subtree_check,no_hide,fsid=0)
/exports/backups 192.168.2.0/255.255.255.0(rw,insecure,no_subtree_check,no_hide)


9. Pick up this change with this command

# /usr/sbin/exportfs -rv

10. Reboot the system and check that the directories are being exported

# /usr/sbin/showmount -e
Export list for server.int.craic.com
/exports 192.168.2.0/255.255.255.0
/exports/backups 192.168.2.0/255.255.255.0



On the Client
(In this case the client was a Fedora 4 system that did not have NFS version 4 installed)

1. If the client did have NFS 4 then you would repeat step 2 of the server configuration to set up the /etc/idmapd.conf file.

2. Create the mount points on the client system. In other words, create the directories that will be linked to the remote shared directories by NFS. In my case this was:

[client] # mkdir /mnt/server_backups

3. Add the shared directories to /etc/fstab on the client. The first field gives the hostname of the remote server and the exported directory, separated by a colon. Note that we use the name of the linked/bound directory that we set up, not the real directory name - that way, if we move the real directory we only need to change the settings on the server. The second field is the mount point on the client machine (this machine). 'man fstab' explains what the options mean in the 'nfs' section. The most important of these are 'rw', which gives read/write access, and 'hard', which sets up a hard mount - more robust than a soft mount.

Note that all of this should be on ONE line!

server.int.craic.com:/exports/backups /mnt/server_backups nfs rw,hard,rsize=8192,wsize=8192,timeo=14,intr 0 0
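Splitting the options field apart makes that long line a little easier to read; the meanings given in the comments are as described in the nfs man page:

```shell
# The options field from the fstab line above, one option per line
opts='rw,hard,rsize=8192,wsize=8192,timeo=14,intr'
echo "$opts" | tr ',' '\n'

# rw           read/write access
# hard         retry NFS requests indefinitely rather than giving up
# rsize/wsize  read and write buffer sizes, in bytes
# timeo=14     wait 1.4 seconds (14 tenths) before retransmitting a request
# intr         allow signals to interrupt a hung NFS call
```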

4. Remount the filesystems on this client

[client] # mount -a

If this works then you should not see any messages. Check that the remote directory has been mounted by running 'df' and looking for the appropriate line. Then cd to the local mount point, list the files, etc.


All this information is widely available on the web in various forms. I used this article on fedorasolved.org by Renich Bon Ciric while I was figuring it all out, although that does contain a couple of errors.
