A collection of computer systems and programming tips that you may find useful.
 
Brought to you by Craic Computing LLC, a bioinformatics consulting company.

Friday, May 29, 2009

Configuring DNS servers on Ubuntu and Fedora

Just been setting up new DNS servers on my internal network. The primary was set up on an Ubuntu 9.04 (Jaunty Jackalope) system and the secondary on Fedora Core 9. Both used BIND 9 - the Ubuntu server had BIND 9.5.0-P2 and Fedora had 9.4.0.

On Ubuntu I followed the instructions in this Ubuntu guide and all went smoothly. I could test out the primary server and verify that it was working.

I figured the Fedora setup would be similar - but no, it's VERY different. For a start, the Fedora configuration runs named inside a chroot jail in order to make it more secure. So you have to put your files in /var/named/chroot/etc and /var/named/chroot/var/named (with symlinks from /etc/ in some cases).

The named.conf file format is a lot more involved under Fedora. You really want to use the samples in /usr/share/doc/bind-9.4.0/sample as a starting point. In particular, you need to put your zones in 'views' in the file.
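For illustration, a view stanza looks roughly like this - the zone name, network, and file name here are hypothetical, not taken from my actual config:

```
view "internal" {
    match-clients { 192.168.2.0/24; };
    recursion yes;

    zone "example.com" IN {
        type master;
        file "example.com.zone";
    };
};
```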

Another problem I ran into was the need for a named.root file which contains the root servers for the Internet. You have to get this yourself and put it into /var/named/chroot/var/named/named.root.
# wget ftp://ftp.rs.internic.net/domain/named.root

And you need to have a config file that tells named where this file is! (/var/named/chroot/etc/named.root.hints)
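That hints file is just a one-zone config fragment; it should look something like this (note the file path is relative to the chroot'ed /var/named):

```
zone "." IN {
    type hint;
    file "named.root";
};
```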

After doing all of that (and more) and trying to restart named it crapped out with these lines in /var/log/messages:
May 29 10:17:11 sequence named[12880]: starting BIND 9.4.0 -u named -t /var/named/chroot
May 29 10:17:11 sequence named[12880]: found 2 CPUs, using 2 worker threads
May 29 10:17:11 sequence named[12880]: loading configuration from '/etc/named.conf'
May 29 10:17:11 sequence named[12880]: /etc/named.rfc1912.zones:10: zone '.': already exists previous definition: /etc/named.root.hints:12
May 29 10:17:11 sequence named[12880]: listening on IPv4 interface lo, 127.0.0.1#53
May 29 10:17:11 sequence named[12880]: listening on IPv4 interface eth0, 192.168.2.25#53
May 29 10:17:11 sequence named[12880]: view.c:625: REQUIRE(view->hints == ((void *)0)) failed
May 29 10:17:11 sequence named[12880]: exiting (due to assertion failure)

Now what?! Thanks to this blog post I was able to comment out the '.' zone in the /var/named/chroot/etc/named.rfc1912.zones file, which removed the duplicate definition reported in the error.

Finally I've got the secondary server up and running and getting my zones from the primary server. It just shouldn't be this difficult...


 

Wednesday, May 27, 2009

Allowing sftp access but not ssh

How to set up a UNIX account that allows remote access via sftp but not ssh seems to be a common question, judging by the number of Google hits. Unfortunately there is a plethora of suggested solutions, some of which seem quite complex. Here is what worked for me (on Ubuntu):

1: Create a regular user account and home directory for your user
# /usr/sbin/adduser jones

2: Add the user to the AllowUsers line in your /etc/ssh/sshd_config file
AllowUsers jones smith

3: Restart sshd (the init script is called 'ssh' on Ubuntu)
# /etc/init.d/ssh restart

4: Check that you can login remotely as that user - ssh and sftp should both work
% ssh jones@yourhost
% sftp jones@yourhost

5: Figure out where your sftp-server executable lives - look in /etc/ssh/sshd_config for this line:
Subsystem sftp /usr/lib/openssh/sftp-server

6: Edit /etc/passwd and replace the default shell for your user with this path
jones:x:1004:1004:Rob Jones,,,:/home/jones:/bin/bash
becomes:
jones:x:1004:1004:Rob Jones,,,:/home/jones:/usr/lib/openssh/sftp-server

7: Connecting with sftp should now work normally, but if you try ssh you will be prompted for your password and then the session will just hang - sftp-server speaks the SFTP protocol and does not provide an interactive shell.
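As a sanity check, you can pull the login-shell field (field 7) out of the account's passwd entry. Sketched here against the sample line from step 6; on a live system you would feed it the output of `getent passwd jones` instead:

```shell
# Print the login shell field from a passwd-style entry
line='jones:x:1004:1004:Rob Jones,,,:/home/jones:/usr/lib/openssh/sftp-server'
shell=$(printf '%s' "$line" | cut -d: -f7)
echo "$shell"   # should print /usr/lib/openssh/sftp-server
```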


 

Slicehost + Ubuntu + Mysqld + Rails Memory issues

Here are a couple of issues and/or fixes if you are running a Rails/Mysql application on a 256MB server - specifically I am (was) running a web site built using Radiant (a Rails/Mysql based CMS) on a 256MB slice at Slicehost.

The site is effectively static but this weekend Slicehost sent me an automated email telling me the slice was swapping heavily. Turns out that the default settings for Mysqld use the InnoDB engine and that chews up a lot of memory - well over 150MB.

Two solutions - turn off InnoDB or upgrade to a slice with more memory (for just under twice the cost).

To turn off InnoDB go into /etc/mysql/my.cnf, uncomment the line 'skip-innodb' and restart. That should drop your memory use by around 100MB.
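The relevant fragment of my.cnf ends up looking something like this (the section placement is assumed - check your own file):

```
[mysqld]
# disabling InnoDB frees up roughly 100MB on a default install
skip-innodb
```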

But the settings for my existing Radiant app (running with Rails 2.3.2) appear to require InnoDB and Phusion Passenger fails to start the app with that setting.

Rather than mess with an app that has been fine, I decided to upgrade to a larger slice. I dumped the database contents as a precaution but then just followed the resize steps from the Slicehost slice management page. Worked like a charm - everything came back up and works fine. Running 'free -m' or 'top' shows I've got memory to spare for now. It's costing me $38/month instead of $20, but I can stay with a default configuration for my database, etc. and that is worth quite a bit in itself.


Wednesday, May 20, 2009

s3sync, Ruby 1.8.7 and Ubuntu

s3sync is a great way to transfer files to and from Amazon S3.

Installing it onto an Ubuntu 8.10 (Linux 2.6.22-11-server) system with Ruby 1.8.7 I got the error 'LoadError: no such file to load -- openssl'.

Turns out you have to fetch the library manually:
# apt-get install libopenssl-ruby
That should solve the problem. Ubuntu is great, but with the server installation you are expected to install ALL the packages you need - and you don't find out what you're missing until you try to run your code. Fedora goes the other way, installing more than you probably need, but at least you get all the core libraries. Neither of them gets it just right.

Another gotcha is that s3sync looks for your AWS keys in Env variables AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY as opposed to AMAZON_ACCESS_KEY, etc.
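So, before running s3sync, set something like this (placeholder values, obviously):

```shell
export AWS_ACCESS_KEY=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
```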

 

Monday, May 18, 2009

Phusion Passenger and Static Sites

Phusion Passenger (aka ModRails) is a great way to run Rails apps from Apache. You add a couple of lines of Passenger configuration to your apache config file and create a VirtualHost for your application. Passenger does its thing behind the scenes and everything works fine.

But if you want to run a totally static web site as a VirtualHost on the same server as a Passenger-managed Rails app you need some way of telling Passenger what you are doing.

In your VirtualHost block for the Static site add a 'PassengerEnabled off' line:
<VirtualHost *:80>
    PassengerEnabled off
    ServerName mysite.com
    DocumentRoot /home/jones/public_html/mysite.com/html
</VirtualHost>
You will need to restart Apache as well.

 

Tuesday, May 12, 2009

Character Encodings and AWS SimpleDB

Just been bitten by an ISO-8859-1 character lurking in a string I was trying to load into an Amazon Web Services SimpleDB domain.

It is actually the same darn character that gave me issues in Ruby 1.9 the other week.

In this case the non-UTF8 character resulted in the request being sent to SimpleDB not matching the Signature that is computed from the request. Here is the error message that I kept getting with certain records I was trying to upload.
<?xml version="1.0"?>
<Response><Errors><Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided.
Check your AWS Secret Access Key and signing method. Consult the service documentation for details.</Message>
</Error></Errors>
<RequestID>021ef5c0-b1fd-73ed-b376-bc292d9736cf</RequestID></Response>


So I spent a good few hours checking my AWS code for errors, trying a different AWS sdb library, trying the latest version of the Signature protocol, etc., etc. - all to no avail. To pinpoint the problem I ran the code again with a different set of input data and it worked fine. That told me the issue was data related and not the code, per se. Looking more closely at the data with 'pp' I saw a non-ASCII/non-UTF-8 character code. It turns out SimpleDB has known issues when these appear in input queries.

For me, the fix was fairly simple. I'm 99% sure that all I need to worry about are ISO-8859-1 codes in my input, so I look for any non-ASCII byte (> 127) in my input strings and, if I find one, use Iconv to convert the string to UTF-8, which SimpleDB can handle. Note that this is for Ruby 1.8 - it does not work in 1.9. I'll post a fix for that in due course.
require 'iconv'

# Ruby 1.8 only: String#[] returns the byte value as a Fixnum.
# Convert a string to UTF-8 if it contains any non-ASCII byte.
def convert(in_str)
  ascii = true
  in_str.length.times do |i|
    if in_str[i] > 127  # non-ASCII byte found
      ascii = false
      break
    end
  end
  out_str = String.new(in_str)
  unless ascii
    in_encoding  = 'iso-8859-1' # just a guess
    out_encoding = 'utf-8'
    out_str = Iconv.new(out_encoding, in_encoding).iconv(in_str)
  end
  out_str
end

[...]
input_hash.keys.each { |key| input_hash[key] = convert(input_hash[key]) }
[...]


Not pretty but it has got me back up and running.
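For what it's worth, under Ruby 1.9+ (where Iconv is deprecated) the same idea can be sketched with String#force_encoding and String#encode. This is a guess at a port, not something I've tested against SimpleDB:

```ruby
# Assume, as above, that any non-ASCII input is ISO-8859-1
def convert(in_str)
  return in_str if in_str.ascii_only?
  in_str.force_encoding('ISO-8859-1').encode('UTF-8')
end
```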

Moral of this story: when arbitrary weird effects arise halfway through processing a large dataset, look closely at the data - chances are the problem lies in there somewhere. Make sure you have an accurate log to help you pinpoint exactly where things went wrong.

Archive of Tips