A collection of computer systems and programming tips that you may find useful.
 
Brought to you by Craic Computing LLC, a bioinformatics consulting company.

Tuesday, December 21, 2010

Per Page Inclusion of JavaScript in Rails view pages

I use various jQuery plugins in my Rails views but I don't want to download all that code in pages that don't use them. Furthermore I want to minimize the chance of namespace conflicts between different blocks of code.

A simple way to do this is to add these lines to your application.html.erb file after the main JS includes:
<%= yield :custom_javascript_includes %>
and then add a block similar to this in each view template that needs custom JS.
<% content_for :custom_javascript_includes do %>
<%= javascript_include_tag "your_custom_plugin.js" %>
<script type="text/javascript">
// Your custom JavaScript code
</script>
<% end %>
In this example I'm including a specific file from the public/javascripts directory and then adding some custom JS code that might define some events or trigger some action on page load.

Here is a real example that uses the facebox jQuery plugin which provides 'light box' functionality. This code is placed at the top of my 'show' template.
<% content_for :custom_javascript_includes do %>
<%= javascript_include_tag "facebox.js" %>
<script type="text/javascript">
jQuery(document).ready(function($) {
$('a[rel*=facebox]').facebox({
loadingImage : '/images/facebox_loading.gif',
closeImage : '/images/facebox_closelabel.gif'
});
});
</script>
<% end %>
You can take this further by also including custom stylesheets linked to these plugins in the same block. That can result in CSS and JS includes being interspersed in the resulting HTML file, which some people may not like, but I don't think there are any practical drawbacks.

The big win is that you keep your custom JS code right there in the same file as your HTML. If that JS code is substantial you could move it into a partial that resides in your views directory, as opposed to having it separated in the public/javascripts directory.
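For example, with a hypothetical partial app/views/products/_custom_javascript.html.erb holding the script block, the per-page block shrinks to just a render call:
<% content_for :custom_javascript_includes do %>
<%= render :partial => 'custom_javascript' %>
<% end %>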

For high-performance production sites you should weigh the clarity of this approach against the performance benefit of combining all your JS into a single minified file.


 

Tuesday, December 14, 2010

MySQL data export/import problem with SQL SECURITY DEFINER

While importing a MySQL database dump from a client into my system I got this error:
$ mysql -u root -p  example_db < example_db_dump.sql 
Enter password:
ERROR 1449 (HY000) at line 5172: The user specified as a definer ('smith'@'localhost') does not exist
Running 'grep' for that user turned up a bunch of lines like this:
/*!50013 DEFINER=`smith`@`localhost` SQL SECURITY DEFINER */
These are created on the 'donor' MySQL system when creating one or more views of the data. That user does not exist on my system and so MySQL complains.

I don't care about those views so the easiest way to deal with this issue is to remove these '50013' lines. You can do that with 'sed':
$ sed '/\*\!50013 DEFINER/d' example_db_dump.sql > example_db_dump_clean.sql
Because the import partially succeeded you need to drop the new database, re-create it and then reimport:
$ mysqladmin -u root -p drop example_db
$ mysqladmin -u root -p create example_db
$ mysql -u root -p example_db < example_db_dump_clean.sql


Now, if you need to use the Views in your copy of the database then you need to either create that user locally and leave the lines, or change the user in those lines to one that does exist locally.

 

Sunday, December 12, 2010

Installing Amazon EC2 tools on Ubuntu 10.04

The Amazon EC2 AMI and API tool packages are not found by apt-get using the default sources list. You need to add 'multiverse' to the list. Here is how you install the tools:
$ sudo perl -pi -e 's%(universe)$%$1 multiverse%'  /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install ec2-ami-tools
Of course, you also need to set up your environment variables as per usual so that the tools can access your keys.

 

Sunday, December 5, 2010

Using Bundler with a non-Rails Ruby project

Bundler is a Ruby Gem to help you manage the various gem dependencies in your application.

It has gotten a lot of attention as it is, er, bundled with Rails 3 and represents a welcome addition to that framework. But you can use Bundler in any Ruby application to help make deployments go smoothly.

For a regular Ruby application the steps involved are simple:

1: Install Bundler on your system
$ gem install bundler
2: Create a file called 'Gemfile' at the top level of your application directory
Add gem sources to this, plus each gem that your application relies on. This example includes the main gem sources and a single gem:
source :rubygems
source :rubyforge
source :gemcutter

gem "json"
3: Run 'bundle install'
This will now install any missing gems. It does no harm to run it more than once.
$ bundle install
Fetching source index for http://rubygems.org/
Fetching source index for http://rubygems.org/
Fetching source index for http://rubygems.org/
Using json (1.4.6)
Using bundler (1.0.3)
Your bundle is complete! It was installed into /Users/jones/.rvm/gems/ruby-1.9.2-p0
If you look in your application directory you will see a new file called Gemfile.lock. Bundler uses this and you want to keep it, adding it to your git repository (or equivalent). You will also find a .bundle directory containing a config file. I guess you can use this for fancier configurations, but for a basic app you can ignore it.

4: Deploy your app
Now when you deploy your app in a new location (with the bundler gem installed) you can simply run 'bundle install' in your app directory and you'll be good to go.
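If you want the application itself to set up the bundled load paths at runtime, add a couple of lines near the top of the main script - a minimal sketch for a plain (non-Rails) Ruby program:
require 'rubygems'
require 'bundler/setup'   # puts the gems listed in Gemfile.lock onto the load path
require 'json'            # subsequent requires now resolve to the bundled versions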


 

Deleting a file with a name that starts with a hyphen

Here's a simple fix to a tricky problem (simple with hindsight, of course)...

By accident I created a file called '--latest' and wanted to delete it. But rm treats a leading hyphen as the start of a command option. Escaping the hyphens didn't work either.
$ rm '--latest'
rm: illegal option -- -
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
$ rm '\-\-latest'
rm: \-\-latest: No such file or directory
$ rm "\-\-latest"
rm: \-\-latest: No such file or directory

The simple solution is to give a more complete path specification that avoids having the hyphens as the first characters.
$ rm ./--latest
Problem solved!

 

Friday, December 3, 2010

Using Opscode Chef to start up a node on AWS EC2 - A Simple Example

Chef from opscode.com is a suite of tools for managing computing infrastructure, from spinning up new nodes to installing Ruby gems to any custom operation you care to code up. It gets great reviews from those who manage large numbers of Unix (and other) systems and it has support for Amazon AWS and other cloud vendors baked right in.

I'm just getting started with it and am eager to go further but, like many projects, the documentation is far from clear. It has a lot of options and that typically means complexity. Perhaps because they are targeting folks who do systems administration for a living, their docs skim over some of the crucial basics.

My main interest in Chef is using it to spin up AWS EC2 nodes, configure them to my tastes and then spin them down as needed.

Here are the steps I took to get a node up and running:

0: I'm assuming you know how to fire up EC2 nodes from the management console or the command line on your desktop machine, that you have a keypair set up and that you are familiar with AMIs and the other terminology. If not, then get comfortable with EC2 before going any further.

1: I'm also assuming that you have a basic chef setup on your desktop and an account on Opscode's chef server (it's free for up to 5 nodes) and that you have written and tested a simple recipe on your desktop. If not, get comfortable with that before going further.

2: Add your AWS credentials to your knife.rb file (typically in ~/.chef)
The docs say to enter them in this format, which is different from the other configuration parameters that are generated from the opscode server... I don't know why... but this works:
# EC2 access keys
knife[:aws_access_key_id] = "<your access key id>"
knife[:aws_secret_access_key] = "<your secret access key>"
3: Figure out your EC2 parameters for the node you want to spin up.

The AMI ID - I'm using a 32 bit Ubuntu AMI (ami-480df921)

The Instance Type - I'm testing with a Micro instance (t1.micro)

The Keypair name (mine is called 'craic-ec2-keypair') and the location of the SSH identity file that is linked with this (mine is in ~/.ssh/craic-ec2-keypair.pem).

The User that you (and chef) will ssh in to the node with (Ubuntu 10.04 wants me to use 'ubuntu' instead of 'root')

The Security Group for the server (I just use 'default' which is what I set up when I first started with EC2)

The Chef Recipe(s) and/or Role(s) that you want to run on the new EC2 server (I'm just going to run a simple one that I created called 'craic_test' that writes a file in /tmp) - Make sure this works on a non-EC2 client first!

4: Let's do it...
The chef command that you use is 'knife' with some EC2 specific options. You can find more information in their FAQs, which are MUCH more complete than their Wiki documentation on knife...

The command will be 'knife ec2 server create' followed by a bunch of options. That's pretty self-explanatory - but then we get into the options... Here is the full command I used, split across multiple lines, which I shall explain below:

$ knife ec2 server create "recipe[craic_test]" \
-i ami-480df921 \
-f t1.micro \
-x ubuntu \
-S craic-ec2-keypair \
-I ~/.ssh/craic-ec2-keypair.pem \
-G default
The first line is the command itself followed by the recipes and/or roles in the standard chef format ("recipe[craic_test]").

The EC2 AMI is specified with the -i option, then the Instance Type is specified with the -f option (aka --flavor - I don't know why they chose that...).

Next comes the ssh user with the -x option, the SSH Keypair name with -S (upper case S) and the location of the matching identity file with the -I option (upper case I) - then finally the security group with the -G option (upper case G).

I am really hoping that there is some way to store some of these options in knife.rb or someplace convenient as most of these will not change for my uses.

All being well, what happens next is that the node gets spun up on EC2, chef installs the prerequisites it needs to run on the node and downloads the chef client over there. Chef will create the new node record on the chef server, download your recipes and then execute them.

What you will see is a TON of output as everything runs. I won't subject you to all of that here, but the first few lines will look like this:
Instance ID: i-08da8865
Flavor: t1.micro
Image: ami-480df921
Availability Zone: us-east-1b
Security Groups: default
SSH Key: craic-ec2-keypair

Waiting for server............
Public DNS Name: ec2-184-72-190-66.compute-1.amazonaws.com
Public IP Address: 184.72.190.66
Private DNS Name: domU-12-31-39-0F-29-E8.compute-1.internal
Private IP Address: 10.193.42.22

Waiting for sshddone
INFO: Bootstrapping Chef on
0% [Working]90-66.compute-1.amazonaws.com
[...]
This shows the IP info, etc for your new instance and is followed by pages of messages reporting the downloading and installation of all the prerequisites for Chef. At the end you should see your recipes being run and finally a summary once again of the instance information:
Instance ID: i-08da8865
Flavor: t1.micro
Image: ami-480df921
Availability Zone: us-east-1b
Security Groups: default
SSH Key: craic-ec2-keypair
Public DNS Name: ec2-184-72-190-66.compute-1.amazonaws.com
Public IP Address: 184.72.190.66
Private DNS Name: domU-12-31-39-0F-29-E8.compute-1.internal
Private IP Address: 10.193.42.22
Run List: recipe[craic_test]

5: Now you can ssh into your node and you should see that your recipes have run and done whatever you asked of them.

One good thing about all the prerequisites getting installed, for an Ubuntu system at least, is that you now have a node with a lot of development libraries and tools already in place (especially if you work in Ruby).

That's it for the basic installation.

Remember to terminate your node when you are finished as you are being charged for it AND remove any test nodes from your Opscode server account as they will count against your account limit.

There is obviously a lot more that you can, and need to, do to set up your nodes but this walk through gets you to a functioning node.

Happy cooking


System-wide rvm install on Ubuntu

Here are the steps I used to install RVM system-wide on an Ubuntu 10.04.1 LTS node on Amazon EC2, using one of the Canonical AMIs. These AMIs set up the primary user as 'ubuntu', requiring you to sudo for system commands. I could install rvm under that user but installing system-wide is recommended.

This HOWTO is based on the RVM page http://rvm.beginrescueend.com/deployment/system-wide/. But I find the RVM site documentation makes a few assumptions with regard to commands.

1: Install git if you don't already have it
$ sudo apt-get install git-core
2: Download RVM
$ sudo bash < <( curl -L http://bit.ly/rvm-install-system-wide )
This creates /usr/local/rvm and the group 'rvm'

3: Add each user to group rvm
$ sudo usermod -aG rvm ubuntu
4: Add startup lines to each user's .profile
[[ -s '/usr/local/lib/rvm' ]] && source '/usr/local/lib/rvm'
5: Add this line to /etc/rvmrc (may not be necessary)
$ echo "export rvm_path=/usr/local/rvm" > /etc/rvmrc
6: Use rvm as a user - best to logout/login first to make sure you get all the paths etc setup
$ rvm list
$ rvm package install zlib ( I needed this to get install to work )
$ rvm install 1.9.2 --with-zlib-dir=/usr/local/rvm/usr
Note that you use rvm as a regular user, not via sudo, even though this is a system wide install.


 

Friday, November 12, 2010

Extracting Text from PDF documents on Mac OS X

There are various ways to extract the text from a PDF document. In Mac OS X there are two effective ways that cost nothing.

1: Open the PDF in Preview or Adobe Acrobat Reader, select all the text, copy and paste into your editor of choice.

This is a quick and easy solution for one or a few documents. But for a large number of documents it would quickly become tedious.

Neither application allows you to Save As Text from the menu, which is unfortunate.

I also see a significant difference in the pasted text when the original PDF has columns of text. Compare the same original text copied from Acrobat Reader:
MOLECULAR FORMULA C6492H10060N1724O2028S42
MOLECULAR WEIGHT 146.0 kDa
TRADEMARK None as yet
MANUFACTURER MedImmune
CODE DESIGNATION MEDI-563
CAS REGISTRY NUMBER 1044511-01-4
and Mac OS X Preview:
MOLECULAR FORMULA MOLECULAR WEIGHT TRADEMARK MANUFACTURER CODE DESIGNATION CAS REGISTRY NUMBER
C6492H10060N1724O2028S42 146.0 kDa None as yet MedImmune
MEDI-563 1044511-01-4
The first version is the right one. So copying from Acrobat Reader is the better solution.

2. To handle many documents you want a script of some sort that just extracts the text. You can do this with the Mac OS X Automator application. I'd not used this before but it gives you access to a whole load of functions/services. Well worth checking out. You'll find Automator in your applications folder. Open it up and...

- Select Application in the template window that appears on opening.
- In the Automator window you have two vertical panels on the left and a grey workspace on the right. Click the 'Actions' button in the top left.
- In the left hand column, under Library, click 'PDFs' to bring up a list of PDF related actions in the second column.
- Select, drag and drop 'Extract PDF Text' onto the workspace. A dialog will appear.
- Select the appropriate options - plain text or RTF, folder in which to output the text, etc.
- Save As... and give your new app a suitable name
- Quit Automator
- Drag and drop a PDF onto the icon for the app and it will extract the text and output that in a file in the directory you configured in the app

Simple! Now you can just drag and drop PDFs and they will be converted automatically.

Automator can do a whole lot more than this, especially if you have a series of steps that must be applied to files (e.g. image processing). If you are into scripting, take a look at the way this all works. Right click on the app icon and 'Show Package Contents'.

You can also learn about this via this article by Mathias Bynens on a simple way to convert UNIX scripts into apps ... it's the same mechanism.

So now I've got an automated way to extract text... BUT... the script uses Preview for the extraction, and that still gives me the incorrect ordering of text that I showed in the above example... dang it... not a deal breaker but frustrating all the same


 

Javascript Date.parse browser compatibility issue

Just got burned with a Javascript incompatibility between Firefox and Safari on the Mac...

Date.parse() takes a string representation of a Date and returns the number of milliseconds since the epoch. It has always been able to take dates in IETF format, such as "Jan 1, 2010", but as of Javascript 1.8.5 it can handle ISO8601 format as well, such as "2010-01-01".

I use ISO8601 in all my applications and in Firefox 3.6.12 (on the Mac) this works fine:
> Date.parse("Jan 1, 2010 GMT");
1262304000000
> Date.parse("2010-01-01");
1262304000000
Note that the ISO8601 form assumes the GMT timezone, but the IETF form assumes local timezone unless you specify GMT.

The problem for me arose in Safari (5.0.2) on the Mac:
> Date.parse("Jan 1, 2010 GMT");
1262304000000
Date.parse("2010-01-01");
NaN
Hmm...

And just to muddy the waters further, here is the output on Google Chrome 7.0.517.44:
> Date.parse("Jan 1, 2010 GMT");
1262304000000
Date.parse("2010-01-01");
1262332800000
The ISO8601 date is handled OK but is returned in the local timezone.

I need a fail safe way to parse ISO8601 dates - what to do...? Here's what I came up with:
var date_str = '2010-01-01';
var iso8601_regex = /(\d{4})[\/-](\d{2})[\/-](\d{2})/;
var match = iso8601_regex.exec(date_str);
var date = new Date(match[1], match[2] - 1, match[3]);
var milliseconds = Date.parse(date); // -> 1262332800000 (local time)
milliseconds = Date.UTC(match[1], match[2] - 1, match[3], 0, 0, 0); // -> 1262304000000 (UTC/GMT)
Note the '- 1' with the month (match[2]) - the first month is 0 - go figure.
This produces the same results in Mac Firefox, Safari and Chrome - consistent behaviour - yes, really!

What a palaver...

 

Tuesday, November 9, 2010

Rails has_many :through associations and edit/update actions

Consider a basic has_many :through association
class Drug < ActiveRecord::Base
has_many :indications
has_many :diseases, :through => :indications, :uniq => true
end
Here a drug can be used to treat multiple diseases and any disease can be treated with multiple drugs (that side of the association is not shown). The 'indications' table is a basic linking table with drug_id and disease_id.

In the drug#new and drug#edit forms you might use a select menu that allows you to select multiple diseases. I use the 'simple_form' gem for my forms and the 'association' method makes this trivial.
<%= f.association :diseases,
:collection => Disease.all(:order => 'name') %>
In order for this to work you need to add :disease_ids to attr_accessible in your model. With that, simple_form should handle all the details needed to create and update the association.
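For reference, the model ends up looking something like this (a sketch - only the association and the accessible attribute matter here):
class Drug < ActiveRecord::Base
has_many :indications
has_many :diseases, :through => :indications, :uniq => true

# allow the form to mass-assign the array of selected disease ids
attr_accessible :disease_ids
end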

But there is a problem with the edit/update actions when you want to deselect ALL diseases. If you do this in the form then no disease_ids parameter will get passed to your controller and so this column will not get updated. It is a classic issue with HTML form updates and applies to checkboxes as well.

The solution is to add a line to the update action in your controller that sets the disease_ids parameter to an empty array if it does not exist:
  def update
params[:drug][:disease_ids] ||= []
@drug = Drug.find(params[:id])
[...]
This works fine - adds a bit of clutter to the controller but there you go...

However, this will break a basic functional test for the update action and you will get an error similar to this:
test_update_valid(DrugsControllerTest):
NoMethodError: You have a nil object when you didn't expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.[]
The problem stems from the basic test not passing a params[:drug] hash to the controller. Now, I'm not an expert on stubs/mocking so there may be a much cleaner way of fixing this, but I fix it by explicitly creating the needed parameters and passing them in the 'put' method.

Here is an example of a basic update test
  def test_update_valid
Drug.any_instance.stubs(:valid?).returns(true)
put :update, :id => Drug.first
assert_redirected_to drug_url(assigns(:drug))
end
and here is the modified one that will work
  def test_update_valid
Drug.any_instance.stubs(:valid?).returns(true)
put :update, { :id => Drug.first, :drug => { :disease_ids => [] } }
assert_redirected_to drug_url(assigns(:drug))
end


Of course I should be building these out into truly useful tests, but if you can't get past this step, the rest doesn't really matter.

Hope this helps....

 

Disabling spell check in HTML forms

I work with DNA and protein sequences and I often have HTML forms with a textarea for entering sequence. Unfortunately my browser sees that text (e.g. 'agctagagctcgatagc') and decides that this is misspelled and underlines all the sequence text with a red dotted line... ugly...

In HTML5 you can now disable spell checking on textarea and text inputs using the attribute spellcheck="false" - EASY!

Note that this is an HTML attribute, NOT CSS - so you have to set it in the form itself.
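For instance, in a Rails view you can pass the attribute straight through the form helper (the field name here is made up) and it comes out as spellcheck="false" on the rendered textarea:
<%= f.text_area :dna_sequence, :rows => 10, :spellcheck => 'false' %>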

Browser support for the feature may vary. It works on Firefox and Safari on the Mac for sure.

A related attribute is 'contenteditable', which allows you to control whether the content of specific elements can be modified - like 'readonly' but with much more control.

 

Tuesday, November 2, 2010

Rails, Factory_girl and Bioinformatics - a gotcha

I've been using factory girl as a replacement for Fixtures in a new Rails application. It has been working well but I just stumbled across a BIG gotcha for my application.

I work on bioinformatics applications with DNA and Protein sequences and having a model with a column called 'sequence' is a natural choice.

The problem is that Factory Girl allows you to create objects in which a specific field is given a sequential id. For example 'user_1', 'user_2', etc. You set that up in your Factory definition like this:
Factory.define :mymodel do |f|
f.sequence(:name) {|n| "name_#{n}" }
end
When you have a model with a field called sequence you would define that like this:
Factory.define :mymodel do |f|
f.sequence(:name) {|n| "name_#{n}" }
f.sequence 'acgtacgtacgt'
end
You see the problem... Running a test with this in it brings up an error message like this:
  1) Error:
test: A Sequence instance should be valid. (SequenceTest):
NoMethodError: undefined method `call' for nil:NilClass
/Users/jones/.rvm/gems/ruby-1.9.2-p0/gems/activesupport-3.0.1/lib/active_support/whiny_nil.rb:48:in `method_missing'
The same problem will arise if you have a column called 'association'.

Mixing up field names and methods like this is a bad design choice. A better choice would have been to use a 'verb' such as f.generate_sequence(:name), which is much less likely to clash with a column name.

I can't see a way around this other than changing the name of the column in my model, which I am very reluctant to do.
Machinist is an alternative to Factory Girl, so that might be a solution - or hacking the factory_girl gem to change the name of the methods...

UPDATE: Factory Girl does allow alternate syntaxes - see the "Alternate Syntaxes" section of its documentation. Not sure if it's the best path for me but it might get me over the immediate hurdle.

UPDATE 2: Just tried Machinist and that solves the first problem... however it fails if you have a column called 'alias' in a blueprint - the trick here is to precede the column name with 'self', i.e. 'self.alias' works.


 

Wednesday, October 27, 2010

Searchlogic versus MetaSearch and Rails3

Searchlogic is a great Rails gem for basic searching of your tables. I've used it for quite a while now. The problem is that at the time of writing (October 2010) it does not work with Rails 3.

You can install it just fine but starting up your server will produce an error like this:
/Users/jones/.rvm/gems/ruby-1.9.2-p0/gems/activesupport-3.0.1/lib/active_support/core_ext/module/aliasing.rb:31:in `alias_method': undefined method `merge_joins' for class `Class' (NameError)


What to do? Well, there is an alternative search gem called MetaSearch that has a very similar interface to Searchlogic and which does work with Rails 3. In fact it may be a little better.

In my apps I have search-related code in three places - the index action of my controllers, a search form at the top of my index view pages, and column headers on my index view pages that reorder the matching rows when I click on them.

So here are the three code blocks in Searchlogic and then MetaSearch (with a very simple model where I'm simply searching the name column). Note that I'm using will_paginate in both cases.

Controller Index Action

Searchlogic
  def index
@search = Product.search(params[:search])
@search.order ||= :ascend_by_name
@products = @search.all.paginate :page => params[:page], :per_page => 20
end

MetaSearch
  def index
@search = Product.search(params[:search])
@search.meta_sort ||= 'name.asc'
@products = @search.all.paginate :page => params[:page], :per_page => 20
end

View Index Page Search Form (omitting some of the html formatting)

Searchlogic
<% form_for @search do |f| %>
<p>SEARCH</p>
<p><%= f.label :name_like, 'Name' %>
<%= f.text_field :name_like, :size => 15 %>
<%= f.submit "SEARCH" %>
<%= @search.count %>Matches</p>
<% end %>

MetaSearch - NO CHANGE!

View Index Page Column Headers (just showing a single column header)

Searchlogic
<%= order @search, :by => 'name', :as => 'name'.humanize %>

MetaSearch
<%= sort_link @search, 'name', 'name'.upcase %>

All in all, a pretty straightforward replacement.

But there is at least one additional bonus, over and above working with Rails 3. 'attr_searchable' and 'assoc_searchable' allow you to specify in your models exactly which fields can and cannot be searched. Searching in either Searchlogic or MetaSearch uses GET requests, which display the search parameters in the URL in the browser. That opens the door to people trying out other likely field names and searching otherwise private data. This mechanism provides a way to limit that problem.
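The whitelisting itself is declared in the model. A sketch of the idea (column and association names are made up - check the MetaSearch docs for the exact form):
class Product < ActiveRecord::Base
# only these columns and associations may appear in a search
attr_searchable :name
assoc_searchable :category
end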

I have no doubt that Searchlogic will get updated soon, as so many people use it. But until then MetaSearch is the way to go.


 

Friday, October 22, 2010

Passenger 3, nginx and rvm on Mac OS X 10.6

I wanted to set up the web server nginx, an alternative to Apache, along with Phusion Passenger on my Mac. I've become a convert to RVM as a way to manage multiple versions of Ruby so I needed to setup Passenger (a Ruby gem) through RVM. Whenever you have a combination like that you can run into installation problems.

Here are the steps that I used (after a couple of false starts):

1. Make sure your rvm setup is working correctly. In particular I've found it best to ignore your system Ruby installation completely; in fact I install the system version (currently 1.8.7) separately under rvm and make that my rvm default. You have to reinstall your gems, a pain, but I think it's the way to go - everything lives under rvm.

2. Do not install nginx! Passenger will do that for you. If you already have an installation then rename it or just ignore it.

3. Turn off Apache or Nginx if you have either of them running.

4. Put yourself into your default Ruby and install the Passenger gem
$ rvm default
$ gem install passenger

5. Run passenger-install-nginx-module with rvmsudo

Very important - don't use regular sudo - if you do it will complain about not finding the current gemset.
$ rvmsudo passenger-install-nginx-module
The script will ask you if you want a default installation or a custom/advanced one. I just did the default (option 1).

The script downloads and compiles nginx. It will ask you where you want it installed. I suggest /usr/local/nginx. Don't just say /usr/local as that will create conf, html and logs directories right in /usr/local. You really want these in an nginx-specific directory.

All being well the compilation and installation will go smoothly and your fresh installation of nginx will be modified for use with passenger.

If I remember the installation correctly it actually fires up nginx and leaves it running. Check that with 'ps ax | grep nginx'

6. Go to your browser.
'http://localhost' should give a 'welcome to nginx' page. Look in /usr/local/nginx/logs/ for logging output.

7. Configuration

I set up a symlink in /usr/local/sbin to the nginx binary so that it can be found in my regular PATH
$ sudo ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/nginx

You start nginx with 'sudo nginx' and shut it down with 'sudo killall nginx'. It needs to be run via sudo.

Configuring nginx is done in /usr/local/nginx/conf/nginx.conf and if you are used to Apache config files this will be a breath of fresh air. The main changes I made from the default were to set myself as the 'user' and to set worker_processes to 2.

To set up a Rails application simply add a server block like this:
server {
listen 80;
server_name myapp.local;
root /Users/jones/Documents/myapp/public;
passenger_enabled on;
rails_env development;
}

Be sure to set the rails_env unless you are in production (the default) otherwise it will not work. Restart nginx and with any luck you'll be able to access your application.

I made a few missteps doing my installation - such as thinking I needed to install nginx myself - but overall this went very smoothly.

8. Extra credit

If you want to use rvm and passenger to run multiple apps with multiple versions of Ruby then you will want to look at this post by Phusion:
http://blog.phusion.nl/2010/09/21/phusion-passenger-running-multiple-ruby-versions/

As you see it can get really complicated, but as a way to migrate a Rails app from 2.x to 3.x and/or Ruby 1.8.x to 1.9.x, this seems like the way to go.



 

Thursday, October 21, 2010

Setting Environment Variables for Sudo on Mac OS X

On Mac OS X you typically set up Unix environment variables in your ~/.bashrc or ~/.bash_profile files. To perform actions that need Root privileges you use 'sudo' instead of 'su'.

But when you 'sudo' a command it does not pick up your entire user environment, which can cause problems in some cases.

To illustrate the issue, compare the output of these two commands in a Terminal (Unix shell):
$ printenv
$ sudo printenv

So how do you make a custom environment variable visible to sudo? Rather than trying to modify system-wide files like /etc/profile (which may not work anyway...) you want to modify the file /etc/sudoers.

Now, to modify this file you want to use the special command 'visudo', which, as you can surmise, is a version of the editor 'vi'. Specifically, it knows which file to edit and how to set the permissions on it. NOTE: If you don't know how to edit a file with 'vi' then learn the basics BEFORE you try the following steps!

$ sudo visudo
will bring up the editor with the file opened. Look for this section:
# Defaults specification
Defaults env_reset
Defaults env_keep += "BLOCKSIZE"
Defaults env_keep += "COLORFGBG COLORTERM"
[...]

The env_keep lines indicate that the named environment variable should be made available to sudo. So you want to add your custom variable to this list by adding lines like this at the end of this section:
Defaults        env_keep += "APXS2"
And I would suggest adding a comment line (preceded by a # character) that makes it clear that this is a custom change. Save the file and look at the output of sudo printenv:
$ sudo printenv
[...]
APXS2=/usr/sbin/apxs
Your custom environment variable should now be available.

 

Wednesday, October 20, 2010

Backtick operator in JRuby 1.5.3 not working

The backtick operator in Ruby (e.g. `date`) executes a system command.

In JRuby 1.5.3 this is not working and in my example caused some odd side effects. This is a known issue (http://jira.codehaus.org/browse/JRUBY-5133) that I'm sure will be fixed in the next release.

For now, use JRuby 1.4.1 as backticks seem to work fine there.


 

rvm on Mac OS X Snow Leopard

rvm, the Ruby Version Manager, is a great way to manage multiple Ruby versions. The documentation is extensive but I find it to be fragmented, such that if you do run into problems you have to look in multiple locations on the site to find relevant guidance.

On Mac OS X 10.6 (Snow Leopard) I had no problem installing rvm or ruby 1.9.2 but it blew up on me when I tried to list the gems under 1.9.2. The solution was to install zlib under the rvm tree:
$ ruby -v
ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0]

$ rvm package install zlib
$ rvm install 1.9.2 -C --with-zlib-dir=$HOME/.rvm/usr

$ rvm use 1.9.2
$ gem list
$ gem install rails --pre


I had a similar problem when installing ruby 1.8.7 under rvm. This time it was the readline library that was the problem. Again, if you poke around in the docs you can find a fix:
$ rvm package install readline
$ rvm install 1.8.7 -C --with-readline-dir=$HOME/.rvm/usr


Although rvm makes it easy to switch back to your system ruby (with 'rvm system') I would strongly suggest that you do not use this. Instead, install the same release as your system ruby under rvm (1.8.7 in my case) and make that the default ('rvm --default 1.8.7').

You will have to reinstall gems under that ruby installation, which is a pain, but once you have all your rubies under rvm then you should have no issues with confusion over where specific libraries are located, which has been an issue for me more than once.

rvm also has a notion of gemsets, which I have not yet explored, that allow you to isolate specific gem versions for specific applications. This could be extremely useful in migrating Rails applications to newer versions of Rails.

One area where I have not yet been successful is getting rvm and passenger to work. The rvm docs explain how to do this - but no luck for me as yet.


 

Tuesday, October 19, 2010

!! in Ruby

I noticed this odd-looking construct in Rick Olson's restful_authentication plugin a while back:
!!current_user

At face value it appears to be a 'double not' operation on the variable current_user. But that didn't make much sense to me at the time. I wondered if it was some special Ruby operator, but I could find no reference to it.

After a couple of experiments I see now that it is indeed a double not operator. It serves here as a very concise way to return true or false depending on whether the variable is nil or not. Here is how it works in an irb shell:
>> foo = nil
=> nil
>> !foo
=> true
>> !!foo
=> false
>> foo = 1
=> 1
>> !foo
=> false
>> !!foo
=> true
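In the plugin this idiom shows up in predicate methods, where you want a genuine boolean back rather than the user object or nil - roughly like this:
def logged_in?
  # true if current_user is set, false if it is nil
  !!current_user
end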

Concise is good, but not at the cost of being unclear. I'm on the fence about this one.

 

Wednesday, October 13, 2010

Stripping non-ASCII characters from text in Ruby

I need to get rid of occasional non-ASCII characters in otherwise plain ASCII text, such as 'curly quotes' like “ and ”. I don't know the real encoding of my source text but I can tell that the offending characters show up as single bytes such as \x94.

Here is the regular expression I use to remove them:
str.gsub!(/[\x80-\xff]/, '')

I'm sure this won't work in many cases but with my text it does the job just fine.
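Here it is in context - this assumes Ruby 1.8, where strings are plain byte sequences:
str = "some \x93curly quoted\x94 text"
str.gsub!(/[\x80-\xff]/, '')
puts str   # => "some curly quoted text"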



 

Monday, October 11, 2010

/etc/paths.d as a way to configure Paths in Mac OS X

Just found out about a simple way to manage paths to installed software in Mac OS X.

As well as, or instead of, setting PATH environment variables in startup files, you can simply add a file containing the path to the directory /etc/paths.d.

For example, here is a command that sets up a path to a custom installation of MongoDB

$ sudo sh -c 'echo "/Users/jones/Documents/mypath/mongodb/bin" > /etc/paths.d/mongodb'

Start up a new shell and you will find that path added to your PATH environment variable.

Note that this is a system-wide setting and so it gets set early in the process of shell creation. If you need to override PATH settings for specific users then you still need to set paths in .bashrc, etc. I don't know the order in which the /etc/paths.d files are sourced but I would assume alphabetical.

Also note that you need the 'sh -c' construct in order to create the file as shown above. A simple 'sudo echo' will not work because the output redirection is performed by your shell, not by the sudo'd command.


 

Wednesday, September 8, 2010

Mouse and Trackball Setup with MacPymol

I was having problems configuring a Kensington Expert Mouse Trackball to work correctly with the MacPymol protein modeling software on a Mac OS X 10.6.3 Intel Mac.

The Kensington mouse drivers (and specifically their MouseWorks software) have not been updated for several years and they do not support the trackball on Mac OS X Snow Leopard. But in fact the software works... MouseWorks is a 32-bit Preference Pane and so System Preferences will have to relaunch in 32-bit mode before you can interact with it.

But with just the trackball attached to that machine I was not able to configure the buttons correctly.

I noticed references in the MacPymol docs saying that the Apple Mighty Mouse with the little ball works well with the software. Even though I don't like that mouse, I did have one lying around, so I plugged that in and configured it. With the following setup it works as expected with MacPymol:

Left Button = Primary Button
Ball Button = Button 3
Right Button = Secondary Button

In 3-Button Viewing Mode this setup gives you Rotation on Left Button down and drag, Zoom on Right Button down and drag and XY translation on Ball Button down and drag.

Interestingly, once I had the Mighty Mouse set up like this, I could then get the Trackball working correctly. In Kensington MouseWorks I have it set up as follows:
Left Lower button = Click
Right Lower Button = Right-Click
Left Upper Button = Middle-Click

MouseWorks knows about MacPymol (somehow) and you can set up MacPymol application settings - but I don't have this figured out. Unfortunately I'm not finding any more info in the MacPymol docs/wiki.

Hope this is useful


 

Thursday, August 26, 2010

Printing Colored Text in UNIX Terminals from Ruby

Outputting text in color in a UNIX console/Xterminal is kind of old school - some might say obsolete - but once in a while it is very useful. One example is the Red/Green coloring of test results. You don't get a lot of options but enough to highlight relevant text.

The cryptic escape codes are explained in this Wiki page.

This Ruby script outputs fore and background colors and a couple of the other useful modifiers.
#!/usr/bin/ruby

puts "\033[1mForeground Colors...\033[0m\n"
puts " \033[30mBlack (30)\033[0m\n"
puts " \033[31mRed (31)\033[0m\n"
puts " \033[32mGreen (32)\033[0m\n"
puts " \033[33mYellow (33)\033[0m\n"
puts " \033[34mBlue (34)\033[0m\n"
puts " \033[35mMagenta (35)\033[0m\n"
puts " \033[36mCyan (36)\033[0m\n"
puts " \033[37mWhite (37)\033[0m\n"
puts ''

puts "\033[1mBackground Colors...\033[0m\n"
puts " \033[40m\033[37mBlack (40), White Text\033[0m\n"
puts " \033[41mRed (41)\033[0m\n"
puts " \033[42mGreen (42)\033[0m\n"
puts " \033[43mYellow (43)\033[0m\n"
puts " \033[44mBlue (44)\033[0m\n"
puts " \033[45mMagenta (45)\033[0m\n"
puts " \033[46mCyan (46)\033[0m\n"
puts " \033[47mWhite (47)\033[0m\n"
puts ''
puts "\033[1mModifiers...\033[0m\n"
puts " Reset (0)"
puts " \033[1mBold (1)\033[0m\n"
puts " \033[4mUnderlined (4)\033[0m\n"
Note that the exact colors will vary between different terminals. So try it and see.

The script will show you enough to create your own colored terminal output. But what if you sometimes want your output to go to a terminal (TTY) and other times to a file? In the latter case, the escape codes will show up as garbage characters when you view the file. The trick here is to test whether or not your script is writing to a TTY with 'STDOUT.tty?'. If so, add the escape codes; if not, leave them out.
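A minimal sketch of that idea (the helper method name is my own):
def colorize(text, code)
  # only emit the escape codes when writing to a terminal
  STDOUT.tty? ? "\033[#{code}m#{text}\033[0m" : text
end

puts colorize('All tests passed', 32)   # green on a TTY, plain text when redirected to a file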


 

Wednesday, August 11, 2010

SSL Certificates and Apache2

The process of generating a new or renewal SSL certificate for a web site can appear quite daunting given the terminology involved. In reality it is pretty simple. Here are the steps I used for renewing a Wildcard SSL cert, using rapidssl.com as the vendor.

A wildcard certificate costs more than one for a single fully qualified domain name, but it lets me secure any FQDN under my main domain.

The basics steps are:
1: Generate a Key
2: Generate a CSR (Certificate Signing Request)
3: Send that to the vendor and give them some money
4: Receive an SSL Certificate back via email
5: Paste that into a file on your host
6: Restart Apache and verify operation

In Detail, here are my specific steps:

1: Start the new or renewal certificate process on the vendor's web site

2: Create a SSL Key
You should only need to do this once and then you can reuse it multiple times.
Create a directory in which to create and save your SSL files. For the purposes of this tutorial I'm using 'example.com' as the domain. You can put the key into any file but give it a name that makes sense, e.g. example.com.key
# openssl genrsa 1024 > example.com.key
Generating RSA private key, 1024 bit long modulus
.................................++++++
.........++++++
e is 65537 (0x10001)
If you want to protect the key with a passphrase then add the -des3 option to the command. If you do, then you will need to enter the passphrase whenever your server starts up. If you do NOT, then be sure to set the permissions on your files correctly (see below).

3: Create a CSR (Certificate Signing Request)
This process will ask you several questions, of which the MOST IMPORTANT is the COMMON NAME. This is the Domain Name that you want the certificate for. For a single domain this might be www.example.com. For a Wildcard certificate use an asterisk in the name, like *.example.com.

The prompt from the openssl program is unclear when it asks for YOUR name - it wants the domain name. Clear on that, right?

It also asks for a challenge password - with rapidssl.com this wasn't used, so don't worry about it. Enter the other info as appropriate for your organization.
# openssl req -new -key ./example.com.key > example.com.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Washington
Locality Name (eg, city) []:Seattle
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Craic Computing LLC
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:*.example.com
Email Address []:youraddress@example.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

4: Send the CSR to your Vendor
Cut and paste the contents of the file in full into the Vendor's web form. With rapidssl.com there is a phone verification process that you have to go through after they receive it. Basically an automated call comes through and you have to enter a confirmation code that they put on their web site. Plus you need to do the whole credit card and contact information thing on their site. It all ends with several emails back from them, one of which includes the Certificate.

5: Paste the certificate into a file on your host
Once again, choose an appropriate name, e.g. example.com.crt
Verify the contents of the certificate file with:
# openssl x509 -text -in example.com.crt
You will see all sorts of information.

6: CHANGE THE FILE PERMISSIONS - CHANGE THEM NOW!
# chmod 400 example.com.key
# chmod a-w example.com.crt


7: Setup Apache2
Modify your configuration files so that Apache knows where to find the certificate. In my case I'm using something like this:
<VirtualHost *:443>
ServerName www.example.com
ServerAlias *.example.com
DocumentRoot "/path/to/my/site"
SSLEngine on
SSLOptions +StrictRequire
SSLCertificateFile /mnt/ssl_certs/example.com.crt
SSLCertificateKeyFile /mnt/ssl_certs/example.com.key
</VirtualHost>
You can put the certificate and key in any files you want, you just need to tell Apache where they are. Your Apache configuration might be different from this.

Restart apache!

8: Test it out
Go to the site in your browser. You should see a lock icon. Click on that and view the certificate details. It should all be good.

9: COPY THE KEY AND CERTIFICATE FILES TO A BACKUP MACHINE - COPY THEM NOW!

 

Friday, August 6, 2010

Maximum Fixnum value in Ruby

The Maximum Fixnum value that can be represented in Ruby is calculated as: (2**(0.size * 8 -2) -1)
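The reasoning: 0.size is the number of bytes used to store a Fixnum, and MRI reserves two of those bits (one to tag the value as a Fixnum, one for the sign), hence:
max_fixnum = 2**(0.size * 8 - 2) - 1
puts 0.size                      # => 8 on a 64-bit build
puts max_fixnum                  # => 4611686018427387903, i.e. 2**62 - 1
puts((max_fixnum + 1).class)     # => Bignum - one past the limit rolls over to Bignum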

Wednesday, June 30, 2010

Rails Routes and Models with 'Uncountable' Names

Rails follows the convention of using singular and plural forms of model names for various purposes. For example user.rb for a Model name and users_controller.rb for the controller.

Every once in a while you need a model where the singular and plural forms are the same. I just ran into that with a model called 'species'. This can cause you all sorts of hurt but here is a way around it.

First of all, decide if you really need to use that name. If you can figure out an equivalent where singular and plural forms are different then use it - much simpler... but if like me there is no good alternative then you do this:

1: Tell the built-in pluralization functions to skip your name.

In config/initializers/inflections.rb add this:
ActiveSupport::Inflector.inflections do |inflect|
inflect.uncountable %w( species )
end
2: Create your model, controller, table etc in the normal way using 'species' as the model name.

3: In your routes.rb give the :species resource an artificial singular form:
ActionController::Routing::Routes.draw do |map|
map.resources :species, :singular => :species_instance
end

Doing this allows the routing magic to distinguish between singular and plural. But the downside is that you need to change a bunch of controller and view paths.

4: run 'rake routes' to see what Rails expects your new routes to look like.

5: In your controller, change any 'redirect_to @species' lines to 'redirect_to species_instance_path(@species)' (Should be just create and update actions).

6: In your index.html.erb view change:
link_to "Show", species
to
link_to "Show", species_instance_path(species)
In doing so you make it explicit that you want the singular form.
Similarly, change the following:
"Edit", edit_species_path(species)
becomes
"Edit", edit_species_instance_path(species)

link_to "Destroy", species ...
becomes
link_to "Destroy", species_instance_path(species) ...

and finally
link_to "New Species", new_species_path
becomes
link_to "New Species", new_species_instance_path

7: In your show, new and edit view pages, make similar changes. Links back to the index page can stay as 'species_path' but edit, delete and new links should be updated as shown above.

It's messy, for sure, but it solves the problem and allows you to retain 'natural' names in the user interface and in the URLs that users will see.



 

Tuesday, May 11, 2010

Compiling Ghostscript on Intel 64 bit Mac OS X

Ran into problems getting Ghostscript to compile on an Intel Mac running Snow Leopard.

The error messages were telling me it was trying to compile a 32-bit executable, and then after fixing that it complained that libraries in /opt/local/lib were compiled for 32 bit. The thing is, except for some old MacPorts stuff, I don't use /opt/local!

I was able to get it to compile successfully with three steps.

1: Take the old /opt/local/lib code out of the picture by renaming the directory to a temporary name. I can bring it back if it turns out something actually relies on it.

2: Run ./configure with CFLAGS and LDFLAGS defined. These end up getting passed to 'make'
$ ./configure CFLAGS='-arch x86_64' LDFLAGS='-arch x86_64'


3: Compile with 'make' (no extra flags) and install with 'sudo make install'

Not sure yet if it can pick up all the fonts that it needs - we'll soon see...


 

Image Resizing with Rails Paperclip and ImageMagick

The Rails Paperclip plugin uses ImageMagick for creating thumbnail images, etc.

You specify the size of images using ImageMagick's Geometry syntax which can be a little confusing. Here are the aspects of that which are most useful for Paperclip users.

The basic specification is <width>x<height> in pixels, optionally followed by a modifier. In some cases you can omit either width or height.

  • 256x256

  • This specifies the Maximum dimensions for the image, while preserving the aspect ratio of the image. So if your image were actually 512x256 it would be resized to 256x128 in order to fit inside the specified size.

  • 256x256!

  • This specifies the Exact image size, so a 512x256 image would be changed into 256x256, losing the aspect ratio.

  • 256x

  • This specifies the desired width of the target image, leaving the height to be set so as to preserve the aspect ratio.

  • x256

  • This specifies the desired height of the target image, while preserving the aspect ratio.

  • 256x256^

  • This specifies the Minimum size for the target image, so the resized image may be larger than these dimensions.

  • 256x256>

  • This specifies that the image will be resized only if it is Larger than the dimensions.

  • 256x256<

  • This specifies that the image will be resized only if it is Smaller than the dimensions.


Other options exist within the syntax. See the ImageMagick docs for more details.
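In Paperclip these geometry strings go into the :styles hash of has_attached_file. A sketch (model and style names are made up):
class Photo < ActiveRecord::Base
  has_attached_file :image,
                    :styles => { :thumb => '256x256>',   # resize only if larger, keep aspect ratio
                                 :exact => '256x256!' }  # force exactly 256x256
end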


 

Monday, May 10, 2010

Rails, Paperclip and ImageMagick

Ran into trouble generating Thumbnail images using the Paperclip plugin, which delegates the image manipulation to ImageMagick.

The error was "Paperclip::NotIdentifiedByImageMagickError: /tmp/stream,33194,0.jpg is not recognized by the 'identify' command".

A LOT of people have run into this - here is my solution.

In my setup I'm on MacOSX with a binary download of ImageMagick in /usr/local/ImageMagick-6.6.1/bin and I'm running Rails under Apache/Passenger. I've got Paperclip installed as a plugin.

There are 3 steps needed to get this working:

1: Make sure you have ImageMagick working at the UNIX command line level. This involves adding it to your path and exporting these environment variables (pointing to your ImageMagick installation, of course)
MAGICK_HOME=/usr/local/ImageMagick-6.6.1
DYLD_LIBRARY_PATH=/usr/local/ImageMagick-6.6.1/lib
Check that identify works with your images at the command line level. If not then fix those problems first.

2: Tell Paperclip where to find the ImageMagick executables
In config/environment.rb add this at the bottom of the file
Paperclip.options[:command_path] = "/usr/local/ImageMagick-6.6.1/bin"
At this point, after restarting Passenger, you would see that 'identify' is run from within Paperclip but is not able to identify the file... the final step is...

3: Identify needs those two exported environment variables - and Apache/Passenger (or other web servers probably) does not pass those through by default!
In your passenger vhost file add these lines:
SetEnv MAGICK_HOME /usr/local/ImageMagick-6.6.1
SetEnv DYLD_LIBRARY_PATH /usr/local/ImageMagick-6.6.1/lib
Restart apache/passenger and it should work fine.

For me, the binary installation of ImageMagick has worked fine so far. Compiling it yourself is a real pain (make sure you compile for x86-64 if you are on an Intel Mac) - but you don't need to do it. Likewise, you do not need the Rmagick gem if you are using Paperclip, at least for basic image resizing, etc.


 

Tuesday, April 20, 2010

Suppressing newlines in Ruby ERB documents outside of Rails

In ERB files the standard <% ... %> tag pair will result in a newline being added to the output.

In Rails you can add a minus sign before the closing tag to suppress this:

<% ... -%>

But if you use ERB outside of Rails, this will produce an ERB compile error.

The way to handle this is to leave out the minus signs and instead specify the trim_mode when you create a new ERB object. There are 3 options for this parameter:
%    enables Ruby code processing for lines beginning with %
<>   omit newline for lines starting with <% and ending in %>
>    omit newline for lines ending in %>

Setting the trim_mode requires that you also set a safe_level, but this is normally set to 0. So to suppress newlines on all lines with ERB tag pairs create the ERB object like this:
erb = ERB.new(template_file, 0, '>')
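To see the difference between the '<>' and '>' modes, here is a small standalone script (the template text is made up):
require 'erb'

template = "<% 3.times do |i| %>\nitem <%= i %>\n<% end %>\n"

# '<>' only trims lines that both start with <% and end in %>,
# so the loop tags vanish but each 'item' line keeps its newline
puts ERB.new(template, 0, '<>').result(binding)

# '>' trims every line ending in %>, so the 'item' lines run together as well
puts ERB.new(template, 0, '>').result(binding)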

This solves the newline issue for me but it would be nicer to have the per-line option available in Rails.

You can also set the global trim_mode in Rails if you want to. In environment.rb:
config.action_view.erb_trim_mode = ">"




 

Wednesday, March 31, 2010

Script aliases in the style of git

The Git version control software lets you run its commands either as a single program name followed by the command as an argument (such as git status), or as individual scripts (such as git-status).

The advantage of the second form is that you can use the text completion feature of your shell to save you some typing. It's a personal preference...

It is implemented by a series of symlinks from the longer command names to a single git executable. git itself gets the name of the executable it was called as and breaks that down to get the name of the command.

It is easy to create your own version of this. Here it is in Ruby...

The 'primary' script is called myscript.
#!/usr/bin/env ruby
script = File.basename($0)
if script =~ /^\S+?_(.*)$/
command = $1
else
command = ARGV.shift
end
puts "command #{command}"

if ARGV.length > 0
puts "args #{ARGV.join(', ')}"
end

Create an alias to it by adding a suffix (the sub-command name), separated by some delimiter, and symlinking this to the primary script:
$ ln -s myscript myscript_cmd_0
$ ln -s myscript myscript_cmd_1

The script looks at how it was called ($0 in Ruby) and sees if it can split off a command. If it does then it takes any other arguments as they are presented. If you call the script as the primary script followed by a separate command then it shifts ARGV to get the command. These examples show how it works.
$ ./myscript_cmd_0 foo bar
command cmd_0
args foo, bar
$ ./myscript_cmd_1 foo bar
command cmd_1
args foo, bar
$ ./myscript cmd_1 foo bar
command cmd_1
args foo, bar


You don't want to use a technique like this all the time - you end up with loads of symlinks in your bin directory, but in the right situation it can be very useful.


 

Thursday, March 18, 2010

Merging PDF Documents in Preview on Mac OS X Snow Leopard

Preview in Mac OS X is not only a viewer for PDF (and other) documents, it allows you to merge multiple PDF documents into one. This is useful for a lot of reasons, especially when you have scanned several pages of a document into individual files and want to combine them.

In Snow Leopard the way this works has changed and, as there is no menu item for merging, it can be a little confusing.

Open your first 'page' or document in Preview and open up the sidebar.

If you drag a new document into the sidebar and drop it in a blank region you will see it appear in the viewing window. But this has not added the page to the first document. Preview is simply allowing you to view two separate documents.

To combine pages, drag and drop the second page on top of the first. The second page will appear as a thumbnail in the sidebar below the first AND the two pages will appear in the same document in the main viewing window.

This is confusing as both scenarios look the same in the sidebar. You can see the true document structure in the sidebar by picking one of the pages and moving it slightly as though you were reordering it. All pages in the same document will become surrounded with a border and shaded background.

You can reorder pages within a document by dragging and dropping as needed and 'Save As' will save the merged document as a single file.

It is a great feature of Preview but the user interface means that it is effectively hidden unless you know about it.



 

Thursday, March 11, 2010

Raphaël Live

Raphaël is an amazing JavaScript library for creating vector graphics in browsers. It was created by Dmitry Baranovskiy. It goes further than HTML Canvas in that every object is accessible in the DOM and so can be made into a button, dragged around the canvas, etc. You need to know about it!

To help my exploration of the library I built a simple in-browser environment with a drawing canvas and the CodeMirror code editor so that I could try out Raphaël calls and see the results immediately. That worked out really well for me and so today I've released a more developed version of the tool, along with a range of code examples.

Raphaël Live allows you to load code examples into the editor, run them, see the results, modify the attributes, re-run them and thereby learn how to use the library.

The tool is freely distributed. You can use it on the craic.com site, or download your own version from GitHub.

Hope that you'll check it out...



Wednesday, March 10, 2010

Rails searchlogic and confusing column names

Searchlogic is a great Rails gem from Ben Johnson for adding model search capabilities to your Rails app with a minimum of effort.

You can build complex queries very easily such as Company.name_like_or_address_like.

But if you have column names that contain the Model name you can run into problems.

For example, let's say my Company model has columns 'company_name' and 'company_address', then my query becomes:
Company.company_name_like_or_company_address_like

It gets worse if my Person model has_one :company and I want to search companies through that model - now my query becomes
Person.company_company_name_like_or_company_company_address_like()

Not only is that ugly as sin, it may cause searchlogic to barf when you use it in a search form.

My experience is that it can handle ugly queries like this when run in script/console but for some reason they may fail in the context of a real Rails app.

So what can you do about it?

The best solution is to change the names of your columns to remove the model names, but that is not always possible, especially in legacy databases.

Failing that, you can create a named scope in your model that performs the same query but uses a shorter, more sensible name. Searchlogic will use your named scopes quite happily.

In my example I would create a named scope in the Company model like this:

named_scope :company_name_address_like, lambda { |name|
  { :conditions => ['company_name like ? or company_address like ?', "%#{name}%", "%#{name}%"] }
}

I can then call it as Company.company_name_address_like(name), passing in the search term.
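
If you also need that search through the Person association, a named scope with a join does the same job. Here is a sketch, assuming the has_one :company association and column names above (the scope name is just an example and it lives in the Person model):
named_scope :company_info_like, lambda { |name|
  { :joins => :company,
    :conditions => ['companies.company_name like ? or companies.company_address like ?', "%#{name}%", "%#{name}%"] }
}
This can then be called as Person.company_info_like(name), or referenced from a search form in the same way.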

Don't be tempted to include 'or', 'and', etc. in your custom named scope names. Searchlogic appears to try and split up the scope into components based on these 'operator' words.


 

Nokogiri and Snow Leopard

I'm not alone in having problems installing the nokogiri ruby gem on a Mac that has been upgraded to Snow Leopard. The problem lies in the gem not being able to find a suitable version of libxml2, despite Snow Leopard having a recent version of that library installed. (Not sure if this is by default or only if you have the developer tools installed...).

People have tried various things but the place to look first is /opt/local/lib/libxml2.*

If you have libxml2 installed there then remove (or move) those files, along with /opt/local/lib/libz.*

Try installing the gem again (sudo gem install nokogiri).

If it succeeds, you're good - if not then try removing the libxml2 include files in /opt/local/include - try again - and if still no luck then try trawling through other directories under /opt/local.

You should not have to specify the explicit libxml2 file locations with options to the gem install.


 

Wednesday, March 3, 2010

Git - push and pull between repositories

I've been trying to improve my git skills beyond the basics. But I ran into real confusion when trying to clone one repo into another and then trying to keep them in sync using git pull and git push.

In principle this should work fine but when I pushed changes back to the origin and then ran git status in the origin I would see the 'old' versions of files in the origin marked as having changed - no merge conflicts, just that they were not up to date.

Turns out that you can't (easily) keep two repos in sync where the origin is a regular working copy. What you need to do is create a third repo that is a so-called 'bare' repository. Bare repos only contain the contents of the .git directory and their role is simply to track changes.

So here is a simple example of how to set this up:

(I'm just using repos on my local machine to keep things simple)

1: Create your original working directory under git and check your code in. I'll call this repo_original.

2: Clone this into the bare repo with 'git clone --bare repo_original repo_bare'
(Take a look at the contents of repo_bare - it's just like a .git directory)

3: Clone the bare repo into a working copy with 'git clone repo_bare repo_1'

4: Make a second working copy with 'git clone repo_bare repo_2'

5: Try making some changes in repo_1, commit them and push back to the origin (repo_bare) with 'git push'

6: Now go to repo_2 and pull from repo_bare with 'git pull'. Look at your files and you should see the changes you made in repo_1. Try making changes in repo_2, push them, go to repo_1, pull changes and see that they came across.

So far so good.

7: Make conflicting changes in the same file in repo_1 and repo_2 and commit in both repos.

8: Push the changes from repo_1

9: Push the changes from repo_2 and you should be rejected with something like this
$ git push
To /Users/jones/tips/repo_bare
! [rejected] master -> master (non-fast forward)
error: failed to push some refs to '/Users/jones/tips/repo_bare'

That's OK - there is a conflict but you can't resolve those on a bare repo, so it won't let you proceed.

10: Instead, on repo_2, pull the current origin and you will see something like this:
$ git pull
remote: Counting objects: 5, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From /Users/jones/tips/repo_bare
0d97e7a..494885c master -> origin/master
Auto-merged README
CONFLICT (content): Merge conflict in README
Automatic merge failed; fix conflicts and then commit the result.

That's what we want to see - there is a real conflict (we created it) and we can resolve it by editing the file and committing in the normal way.

So it's not difficult once you figure it out but it is not well explained in the guides that I've seen. Hope this helps.

Bonus Section - Branches

There are two ways to handle branching in repository setups like this - not sure of the official terms but I think of them as private and shared.

A 'private' branch is something I setup in my clone of a repo which will not get passed back to the origin repo. I create it, work in it and eventually merge it back into my repo's master. That branch will never get copied back to the origin and therefore any collaborators with their own repos will never know about it.

A 'shared' branch is one that I want to share with collaborators, so I need a way for them to access it. To do this, go to the origin repo, 'repo_bare' in the above example, and create the branch there: 'git branch dev'. When you go to a cloned repo and do a pull you will see that new branch pulled over and 'git branch -a' will show it listed as 'origin/dev', but when you try to check it out you'll get an error.
$ git co dev
error: pathspec 'dev' did not match any file(s) known to git.
Did you forget to 'git add'?
What you need to do is track this branch in each of the cloned repos. To do this, in the cloned repo, run 'git branch --track dev origin/dev'. Now you can see a local branch called dev and you can check out the branch. push and pull will now keep the tracked branch in sync as well as the master. You need to set up the tracked branch on each of the cloned repos in order to work with it.

What I haven't figured out yet is the cleanest way to 'promote' a private branch in a cloned repo up to a shared branch. I suspect you can simply push the local branch up to the bare origin with 'git push origin mybranch' and then have collaborators track it as described above, but I haven't tried that yet.



 

Monday, March 1, 2010

Apple Numbers and CSV Files

The Mac OS X spreadsheet program 'Numbers', from Apple and part of the iWork suite, is a competitor to Microsoft Excel. There are some things about it I prefer to Excel, others where I prefer Excel.

But one glaring omission in Numbers is that it will not open a Comma Separated Values (CSV) file from the Open menu. CSV files are a standard way to exchange spreadsheet datasets and not being able to load them into Numbers makes no sense.

In fact there is a way to do this by Drag and Dropping the file into a worksheet.

1: Your CSV file MUST have a .csv suffix - you will just copy the filename otherwise.
2: Drag and Drop the file into a single cell of an open worksheet - choose the cell that will take the 'top left' value of your dataset.
3: That's it - simple once you know the trick - seemingly impossible until you do...

Numbers will accept tab delimited files via the Open menu option with no problems.


Update: 2012-08-27
This restriction is no longer the case - Numbers will read .csv files from the Open menu just fine - I don't know when this changed.

Monday, February 15, 2010

Rails check_box_tag and Rails 2.3.x

Basic HTML checkboxes have an inherent problem in that there is no explicit value passed to the server when the box is unchecked.

Under Rails 2.2.2 (and thereabouts) you could circumvent this limitation with an ugly hack.

If you wanted to use a check_box_tag helper in a form you had to follow it with a hidden_field_tag that contained the unchecked value. For instance:

<%= check_box_tag :approval_check_box, '1' %>
<%= hidden_field_tag :approval_check_box, '0' %>

If the check box was not checked then the value in the hidden field would be passed. Ugly, but it worked fine.

But at some point in the move to Rails 2.3.x this hack was turned on its head and the construct shown above will now always pass the unchecked value to the server.

The fix is simple - just reverse the two lines:
<%= hidden_field_tag :approval_check_box, '0' %>
<%= check_box_tag :approval_check_box, '1' %>

The real solution is to use a check_box helper, where you can explicitly specify the unchecked value, rather than a check_box_tag.
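
For example, with a form builder the third and fourth arguments are the checked and unchecked values (a minimal sketch - the @document model and its 'approved' attribute are just placeholders):
<% form_for @document do |f| %>
  <%= f.check_box :approved, {}, '1', '0' %>
  <%= f.submit 'Save' %>
<% end %>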

I stumbled across this while updating an older app from Rails 2.2.2 to 2.3.5. It took a while to narrow the issue down to this problem.


 

Wednesday, February 10, 2010

Rails, Time Zones and the current local time

Since version 2.1 it has been easy to handle user-specific time zones in Rails. Take a look at Railscast #106 for details.

Rails takes care of converting times in your models back and forth from local timezones and UTC in the database and it works great.

But I wanted to put the current local time at the bottom of each page, so that if users printed out the page they had a time stamp right there.

My server is set up with UTC as its time zone and Rails picks up on that, such that if I simply print out Time.now I get it in UTC, regardless of having set Time.zone to 'US Pacific...', which is not what I want.

Because I'm not accessing the time from a database record, it doesn't apply the user specific conversion.

The way to do this is to add 'in_time_zone', which converts the Time object into the current Time.zone.

Bottom Line:

To display the current time in a user's local time zone, use Time.now.in_time_zone
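
A quick script/console illustration (a sketch - assume Time.zone has been set for the request from the user's preference):
Time.zone = 'Pacific Time (US & Canada)'   # normally set per-request, e.g. in a before_filter
Time.now                                   # server time, in UTC here
Time.now.in_time_zone                      # the same instant expressed in Time.zone
Time.zone.now                              # an equivalent shortcut provided by Rails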




 

Wednesday, February 3, 2010

Rails, searchlogic and will_paginate

The searchlogic gem, for active record searching, integrates well with will_paginate, for pagination of the results in index pages, and it comes with simple helpers for setting up sortable column headers.

But I had trouble setting the default sort order for the results if no filtering had taken place or if no column header had been clicked. What I found on the web was a little confusing. Here's how it works (as of Feb 2010).

The model has a list of companies and I want the default search order to use their names.

In your controller:

@search = Company.search(params[:search])
@search.order ||= :ascend_by_name
@companies = @search.all.paginate :page => params[:page], :per_page => 20

Note that you set the order in @search and leave out the :order parameter to paginate.
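
In the index view the searchlogic 'order' helper then builds the sortable column header links - if I recall the Railscast #176 syntax correctly it is along these lines:
<%= order @search, :by => :name %>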


Simple, elegant, love it...


 

Tuesday, February 2, 2010

Rails, Searchlogic and Dates

I've been using the searchlogic gem from Ben Johnson and it's a great way to build complex search forms for your models. Railscast #176 is a great introduction to the topic by Ryan Bates (You can't go wrong with Railscasts!).

One topic missing from the documentation is how searchlogic handles Dates, but the functionality is in there. Here is how you use it.

You treat a date effectively as a number and you can use 'equals', 'gte', 'lte', etc. as comparison operators.

Assuming you have a column named 'date', of type 'date' in your model, you can put something like this in your form:

<%= f.label :date, "Date" %>
<%= f.text_field :date_gte, :size => 12 %> -
<%= f.text_field :date_lte, :size => 12 %>

You can then enter dates into one or both of the fields to retrieve records that fall into that date range.
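
Under the hood these are just generated named scopes, so you can exercise the same conditions in script/console. A sketch - 'Event' is a hypothetical model with the 'date' column described above:
Event.date_gte('2010-01-01')
Event.date_gte('Jan 1 2010').date_lte('2010-03-01')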

But what date format should you use? This is where it gets clever. It wants to use YYYY-MM-DD format but if you enter MM/DD/YYYY it accepts it and converts it for you. Brilliant! It even takes text like 'Dec 1 2009' and converts that. Interestingly it won't accept invalid dates like '2009-11-31' and just gives you back the form with the field cleared out.

With a column of type 'Datetime' it converts your entries as well but this time it converts '2010-01-01' into 'Fri Jan 01 00:00:00 -0800 2010'. I'm sure it does something similar with columns of type 'Time'.

Undoubtedly the code is leveraging the Ruby time and date libraries to make the conversions, which can be lifesavers. Incorporating the functionality into searchlogic makes your forms much more tolerant of user inputs and it does so in a totally unobtrusive way.



 
