Tuesday, December 24, 2013
Lakou Lapè is a non-profit organization based in Port-au-Prince, Haiti, that is working to end violence and conflict in some of the impoverished communities in that country.
Lakou Lapè was created to extend the achievements of the past few years in the Port-au-Prince neighborhood of St. Martin. Starting in 2004, members of that community, the charity Concern Worldwide, and the Irish group Glencree Centre for Peace and Reconciliation came together to limit and reduce the level of gang violence within that community.
By using the approaches of reconciliation, dialogue and respect that helped bring peace to Northern Ireland, the community was able to work together and greatly reduce the violence that was holding the entire community back.
Lakou Lapè is applying the same approach to other communities in Haiti. Through training, dialogue and engagement with the people that live in these communities, their goal is to reduce conflict and enable people to improve their lives.
I have been working with the group over the past year as it has started its work and I am proud to have helped build their web site http://lakoulape.org/ which describes their plans and their achievements in French, Haitian Kreyòl and English.
It has been a pleasure to work with old friends from Ireland and new friends in Haiti and I wish Lakou Lapè the best of luck in 2014.
Thursday, November 14, 2013
Web Apprentice - Tutorials on Web Technologies
The range of web technologies that are available to web designers and developers is just incredible right now - from maps and graphs to translation and speech recognition.
Many of these are easy to incorporate into your sites but getting started can be confusing. There are examples and tutorials out there that are obsolete or needlessly complex - I've written some of them myself!
I want to help people get up and running with simple examples that they can use in their work today.
So I have created Web Apprentice (http://apprentice.craic.com) - a site that contains clear, straightforward tutorials on web technologies across the spectrum.
Tutorials range from simple embedding of Twitter feeds or Maps through to Typography, Translation and Geolocation.
Each tutorial has a self-contained live demo and all the code is available and free to use. Tutorials walk you through all the steps and explain what is going on in the code.
There are three levels of tutorial:
Basic - embedding widgets into your pages - no need to know JavaScript
Intermediate - involves some JavaScript
Advanced - complex JavaScript and server-side programming - may involve new technologies that are not in all browsers.
The goal is to add new tutorials every couple of weeks at least and over time build the site into a destination for learning and applying web technologies.
Please check it out at http://apprentice.craic.com and let me know what you think.
Labels:
google maps,
javascript,
ruby,
sinatra,
web audio
Monday, November 11, 2013
Setting up a new Application on Heroku
You can set up a new web application on Heroku in several ways. Here is my preferred way:
1: Build and test your application locally
2: Set it up as a git repository and commit all changes
3: Go to your account on the Heroku web site and create your new app there with your preferred name.
4: In your local git repo run:
$ heroku git:remote -a <your-app-name>
The response should be 'Git remote heroku added'
5: Upload your code with:
$ git push heroku master
6: Finish up your Heroku configuration on the web site
7: Import any existing database as directed by this article.
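Putting steps 4 and 5 together, the core of the flow looks like this (the app name here is a hypothetical placeholder):

$ cd my-app                      # your local git repo
$ heroku git:remote -a my-app    # link the repo to your Heroku app
Git remote heroku added
$ git push heroku master         # upload your code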
Tuesday, November 5, 2013
HABTM relationships in Rails 4
Rails 4 has changed the way has_and_belongs_to_many (HABTM) relationships are handled, as well as introducing strong parameters in the controller as a replacement for having :attr_accessible in your model.
Here is how you set up HABTM with a joining table.
My models are Tutorial and Category - a tutorial can have more than one category and vice versa.
Both models already existed and I wanted to create the relationship between them.
1: Create a migration for the joining table
The default table name should be the plural of the two models, in alphabetical order, separated by an underscore. (You can give it any name you want - you just need to include that later in the model)
class CreateCategoriesTutorials < ActiveRecord::Migration
  def change
    create_table :categories_tutorials do |t|
      t.belongs_to :tutorial
      t.belongs_to :category
    end
  end
end
2: Add the relationships to the models
Previously you would have done this (in the Tutorial model):
has_many :categories_tutorials, :dependent => :destroy
has_many :categories, :through => :categories_tutorials, :uniq => true
That is now simplified to this:
has_and_belongs_to_many :categories
If you have a custom table name then specify it like this:
has_and_belongs_to_many :categories, join_table: :my_category_tutorial_table
The Category model has:
has_and_belongs_to_many :tutorials
The CategoryTutorial model has:
belongs_to :category
belongs_to :tutorial
In my example I want to specify Categories when I create a Tutorial.
In order for this to work I need to allow the tutorials_controller to accept the category_ids parameter which is passed from the tutorial form.
At the bottom of the controller Rails has created this:
def tutorial_params
  params.require(:tutorial).permit(:title, :description, :status)
end
I need to add category_ids, set up as an array:
def tutorial_params
  params.require(:tutorial).permit(:title, :description, :status, :category_ids => [])
end
That last step is critical - if you miss this then the associated rows are not created in the database.
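As a quick sanity check that the association is wired up, here is a hypothetical console session (the title and ids are invented for illustration - note that strong parameters only come into play when the ids arrive through the controller):

tutorial = Tutorial.create(:title => 'Embedding Maps', :category_ids => [1, 2])
tutorial.categories.count                       # => 2
Category.find(1).tutorials.include?(tutorial)   # => true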
For more information on HABTM take a look at:
http://guides.rubyonrails.org/association_basics.html#the-has-and-belongs-to-many-association
Friday, September 20, 2013
Stop e-mail spam from creating events in Google Calendar
An ever-growing number of email spam messages will attempt to create events in your Google calendar. This doubles your spam experience - in your email and in your calendar - great...
The messages exploit the default settings in Google Calendar and fortunately this is easy to fix.
Open up your Google Calendar, click on the Settings icon (the gear wheel) in the top right of the panel and then select 'Settings' from the menu that appears.
That gives you a long page of settings - look for the setting that controls whether invitations are automatically added to your calendar, most of the way down the page. Change the current setting to 'No, only show invitations to which I have responded' and Save your settings.
That should do it !
Why Google doesn't have this as the default I have no idea...
Thursday, September 5, 2013
Adding extensions to Postgresql database with Rails on Heroku
This is a follow up to my previous post Adding extensions to Postgresql database with Rails
If your database is running on Heroku you need to access it slightly differently.
Using the 'rails db' command does not work:
$ heroku run rails db
Running `rails db` attached to terminal... up, run.6499
Couldn't find database client: psql. Check your $PATH and try again.
Instead access psql like this and then add the extension. \dx lists the current installed extensions
$ heroku pg:psql
psql (9.2.1, server 9.1.9)
[...]
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.

dftmhe7vpg4dhs=> create extension unaccent;
CREATE EXTENSION
dftmhe7vpg4dhs=> \dx
                         List of installed extensions
   Name   | Version |   Schema   |                 Description
----------+---------+------------+---------------------------------------------
 plpgsql  | 1.0     | pg_catalog | PL/pgSQL procedural language
 unaccent | 1.0     | public     | text search dictionary that removes accents
Note that this works just fine with a pg:basic database plan. There is no need to upgrade just to use postgres text search.
Wednesday, August 21, 2013
Adding extensions to Postgresql database with Rails
The default Postgresql installation on Mac OS X using homebrew includes a single extension - plpgsql (PL/pgSQL procedural language) - but there are a number of others available in the lib directory (/usr/local/Cellar/postgresql/9.2.1/lib on my machine).
To install an extension into the database, the easiest way I found was to open a database console using the 'rails db' command and then create it directly. I have seen mention of doing this in a Rails migration but this did not work for me.
Note that an extension is installed in a specific database, rather than being added to all databases.
The command to list the installed extensions is '\dx'
$ rails db
psql (9.2.1)
Type "help" for help.

tabs_development=# \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

tabs_development=# create extension unaccent;
CREATE EXTENSION
tabs_development=# \dx
                         List of installed extensions
   Name   | Version |   Schema   |                 Description
----------+---------+------------+---------------------------------------------
 plpgsql  | 1.0     | pg_catalog | PL/pgSQL procedural language
 unaccent | 1.0     | public     | text search dictionary that removes accents
(2 rows)
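As an aside, on Rails 4 and later the migration route does work, via the built-in enable_extension method - a minimal sketch:

class AddUnaccentExtension < ActiveRecord::Migration
  def change
    enable_extension 'unaccent'
  end
end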
Friday, August 16, 2013
Testing for the presence of a loaded Gem in a Rails application
I'm migrating the search function in an application from meta_search to ransack (both from Ernie Miller).
The syntax has changed quite a bit between the two gems and I use the search functionality in more than 30 controllers / index pages. So I want a simple way to update each instance while still being able to switch back to meta_search if I need to.
So I need to test which of the two gems has been loaded in the running application.
The best way that I have found is this:
if Gem.loaded_specs['ransack']
  # ... do this ...
else
  # ... do that ...
end
You can see the full list of loaded gems with Gem.loaded_specs.keys
Wednesday, August 7, 2013
Default Sort Order in Ransack Rails Gem
Ernie Miller's Ransack gem is a great way to handle searching and sorting of data in a Rails 3 application. It is the successor to his meta_search gem. There is a railscast episode on ransack.
But while Ransack offers benefits over meta_search, it is lacking in documentation, which is a shame.
The problem I ran into was how to set a default sort order for records in the controller, before one had been set in the view.
Here is how you specify this:
@search = Post.search(params[:q])
@search.sorts = 'name asc' if @search.sorts.empty?
@posts = @search.result.paginate(:page => params[:page], :per_page => 20)
The other problem that I am still dealing with is that Ransack does not appear to let you use scopes from your model in the search form. meta_search did allow this. That turns out to be a big issue in a current project to port an older application into Rails 3.
Tuesday, July 2, 2013
Recording from a Microphone in the browser using the Web Audio API
I have posted a code example that shows how to record audio from a microphone in a web browser using the Web Audio API.
Take a look at the demo and the code HERE
It turns out that the API does not support recording directly so you have to add consecutive samples of audio data to your own recording buffer. You need to grow the size of the buffer whenever you add a new sample. Once you have your recording you can use it as the source of a BufferSourceNode and play it back.
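To make that concrete, here is a rough sketch of the append-to-a-growing-buffer idea, using the webkit-prefixed APIs of the time (the buffer size and channel counts are arbitrary choices here):

var audioContext = new webkitAudioContext();
var recordingBuffer = new Float32Array(0);

navigator.webkitGetUserMedia({ audio: true }, function(stream) {
  var source = audioContext.createMediaStreamSource(stream);
  var processor = audioContext.createScriptProcessor(4096, 1, 1);

  processor.onaudioprocess = function(event) {
    var input = event.inputBuffer.getChannelData(0);
    // grow the recording buffer and append the new block of samples
    var grown = new Float32Array(recordingBuffer.length + input.length);
    grown.set(recordingBuffer, 0);
    grown.set(input, recordingBuffer.length);
    recordingBuffer = grown;
  };

  source.connect(processor);
  processor.connect(audioContext.destination);
}, function(error) { console.log(error); });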
Friday, June 28, 2013
Vocabrio - a new way to learn a language
I am very pleased to announce the launch of Vocabrio - a new web service from Craic that helps you improve your vocabulary in a foreign language.
Vocabrio lets you read any web site in your target language and translate those words that are new to you. The words are added to your custom dictionary and you can test what you have learned with custom quizzes.
Unlike a general textbook, Vocabrio lets you build up a vocabulary of the words that are relevant to your interests, making it more useful than existing tools.
Vocabrio makes use of some amazing new web technologies.
You can listen to how any word is pronounced using Speech Synthesis and HTML5 Web Audio in your browser (no Flash or plugin required!)
You can test your own pronunciation using Web Audio and cutting edge Speech Recognition - all in your browser.
Vocabrio is free to use while it is in Beta release - over time it will shift to a paid subscription.
You can learn more and sign up for an account at http://vocabrio.com
Wednesday, June 19, 2013
Example of Web Speech Recognition and Audio Capture
I'm doing a lot of work with the Web Audio API for both playing audio streams and capturing them from a user's microphone. I'm also working with the Speech Recognition component of the Web Speech API, as implemented by Google.
I wanted to combine both of these and so I've written up a simple example:
On this page you can speak and have your words transcribed via Google Speech Recognition and at the same time see the frequency spectrogram of your voice.
Both of these browser technologies offer tremendous potential for web developers. It is great fun experimenting with them, even before they are fully mature - and I am already using them in a new application Vocabrio, that helps you improve your vocabulary in a foreign language.
The page only works in the Google Chrome browser at the moment. The page and JavaScript is released under the MIT license.
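For reference, the speech recognition half of the demo boils down to something like this sketch (webkit-prefixed, and Chrome-only at the time of writing):

var recognition = new webkitSpeechRecognition();
recognition.continuous = true;
recognition.interimResults = true;

recognition.onresult = function(event) {
  // log the transcript of the most recent result
  var result = event.results[event.results.length - 1];
  console.log(result[0].transcript);
};

recognition.start();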
Labels:
google chrome,
speech recognition,
vocabrio,
web audio
Thursday, June 13, 2013
Migrating from Rails HABTM to has_many :through
I had a has_and_belongs_to_many (HABTM) association that I needed to convert to has_many :through with an explicit model that contains additional columns.
In my example a User knows about many Words and each Word is known by many Users.
You can find many pages on the web about the differences. The consensus is that has_many :through is the way to go from the start - and after this process I agree.
Making the change in itself is not a big deal - drop the existing joining table 'users_words', create the new one, update the User and Word models and you're good to go.
Problem is that I already had data in the joining table...
And because the standard way of setting up a HABTM joining table does not include an id column, you can't just use that table or directly copy each record from it. Dang it...
Here were my steps - hopefully I got them all - don't skip any!
1: Backup your database and prevent users from accessing it
2: Do not touch any of the old associations, table, etc
3: Create the new table and model with a different name from the old joining table.
My HABTM table was users_words and my new table is user_word_links
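The migration for the new table looks something like this - a sketch assuming standard Rails 3 conventions (note the implicit id column and the timestamps, which the old HABTM table lacked):

class CreateUserWordLinks < ActiveRecord::Migration
  def change
    create_table :user_word_links do |t|
      t.integer :user_id
      t.integer :word_id
      t.timestamps
    end
  end
end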
4: Update the two models
My original association was this - do not change it yet!
has_and_belongs_to_many :words, :uniq => true
The new association is this - NOTE the second line is commented out for now - VERY important!
has_many :user_word_links, :dependent => :destroy
# has_many :words, :through => :user_word_links, :uniq => true
5: Copy over the data from the old joining table with a rake task
You need to go through the existing associations one by one to get the ids for records in the two tables.
Here is my script:
namespace :craic do
  desc "move user word data"
  task :move_user_word_data => :environment do
    users = User.all
    users.each do |user|
      user.words.each do |word|
        record = UserWordLink.new({ :user_id => user.id, :word_id => word.id })
        record.save
      end
    end
  end
end
6: Update the two models
Now you can comment out the old associations and uncomment the new ones
# has_and_belongs_to_many :words, :uniq => true
has_many :user_word_links, :dependent => :destroy
has_many :words, :through => :user_word_links, :uniq => true
In the attr_accessible lists in the two models be sure to add :user_ids in the Word model and :word_ids in the User model. If you forget this, it will silently fail to create the records.
7: Test Test Test
You should be back up and running with the new model
8: Remove the old table
Finally create a migration that drops the old table and run it
Not too bad as long as you think it through before you start and don't rush the steps.
Friday, June 7, 2013
Unix command 'comm' for comparing files
I needed a simple way to compare two similar files and only output lines that were unique to the second file.
Sounded like a job for 'diff' but I was not finding the right options to give me what I needed. And then I stumbled across 'comm' - a standard UNIX command that I don't think I have ever used.
That does exactly what I needed. My two files look like this:

File A
A
B
D
E

File B
A
B
C
D

I want the command to just output 'C'. By default comm compares two files and produces 3 columns of text - lines that are only in file A, lines that are only in file B and lines that are in both. So with these two files I get:
$ comm tmp_A tmp_B
		A
		B
	C
		D
E
Ugly, and not what I want... But then you can suppress the output of one or more of these columns using -1, -2, -3 options and combinations of those. I want to suppress lines that are only in file A and those in common:
$ comm -13 tmp_A tmp_B
C
Simple - does exactly what I need - can't believe I didn't know about it...
Friday, May 31, 2013
Listing all the models and numbers of records in a Rails application
Here is a rake task that will list all the ActiveRecord models in a Rails application along with the number of database records in each and the last time that database table was updated. Note the call to eager_load! which ensures that all models are loaded.
desc "List models in application"
task :list_models => :environment do
  Rails.application.eager_load!
  ActiveRecord::Base.descendants.each do |model|
    name = model.name
    count = model.count
    last_date = count == 0 ? 'no data' : model.order('updated_at desc').first.updated_at
    printf "%-20s %9d %s\n", name, count, last_date
  end
end

This is a simple but very useful way to assess the contents of your database.
Wednesday, April 17, 2013
Gotchas when working with HTML5 Audio
I've run into a few issues while building a demo web app that plays audio files, so I wanted to post my experiences...
1: Firefox does NOT play MP3 files (as of 2013-04-17)
This is because the MPEG format is proprietary and the Firefox/Mozilla folks insist on only open source formats. You need to use .wav, .ogg or .webm. See https://developer.mozilla.org/en-US/docs/HTML/Supported_media_formats for the allowed options.
Unfortunately they do not seem to produce a useful error message if you try and play an MP3 file.
Google Chrome plays anything - so your application may appear to work fine but may fail on Firefox.
2: Firefox is VERY strict in regard to Cross-Origin Resource Sharing (CORS)
Firefox will refuse to play an audio file on a host other than the one that served the original web page. So you can't store an audio file on, say, Amazon S3, and play it on your web site. The same goes for Ajax requests to another site. The Firefox console will, or may, display an error but this is not necessarily clear.
Google Chrome doesn't seem to care about this restriction.
See this page for more details https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS
To work around this you can specify a header for the audio file which means that as the owner of the audio file you allow it to be referenced from any other site.
"Access-Control-Allow-Origin" = "*"
To work around the problem with Ajax requests you can use JSONP instead of JSON as I have described.
The special header ought to work with AWS S3... except....
3: AWS S3 does NOT allow the Access-Control-Allow-Origin header
You can set metadata headers with S3 files but not this one. The Amazon support forums are full of requests/complaints about this but it does not seem to have been resolved.
There does seem to be a way round this on Amazon Cloudfront, which pretty much exists to serve assets for other sites, but from what I've seen this is pretty ugly.
My workaround for this was to add a proxy method to my server. The client requests a file that resides on S3 from the proxy. It fetches it from S3 and echoes it to the client, which sees it as coming from the originating server. This introduces an unnecessary delay but works fine. Here is a snippet of code in Ruby for a Sinatra server. It uses the aws-sdk gem and leaves out the setup for that.
get '/proxy' do
  # fetch the requested file from the S3 bucket
  s3obj = bucket.objects[params['file']]
  # allow the response to be used from any origin
  response["Access-Control-Allow-Origin"] = "*"
  content_type 'audio/wav'
  # echo the file contents back to the client
  s3obj.read
end
Apparently Google's equivalent of S3 does not have this issue - I've not tried it.
4: You cannot play audio files from localhost
This makes testing a pain. Your audio files must be hosted on a server other than localhost. I recommend Heroku as a way to get development servers up and running quickly and at no initial cost.
If you don't realize this, then there is no error message in the browser console - it simply doesn't work and you are left scratching your head until you figure it out. I hate stuff like that...
Labels:
AWS S3,
CORS,
firefox,
google chrome,
html5,
javascript
Tuesday, April 16, 2013
Web Text To Speech using Bing Translate - Demonstration Sinatra App
Given the variety of sophisticated web services available today, it is surprising that a good Text To Speech API (TTS) is not readily available.
Google has one as part of its Translation API but it is not publicized or actively supported. The Bing/Microsoft Translate API also has a TTS feature and this is supported. In my experience this works very well and allows you to specify the language of the text, which changes the voice and pronunciation that is used.
Accessing this API is easy enough using a wrapper library, such as https://github.com/CodeBlock/bing_translator-gem (disclaimer: I added the speak() function to this ruby gem) but it returns binary data that represents an MP3 file. The HTML5 audio tag allows you to play audio files, but not, apparently, to play the data itself.
The result is that the TTS output must be first written to a file and then played via the audio tag.
To demonstrate how these different pieces fit together, I have written a TTS demo app that consists of JavaScript in a web page that sends the query text to a Sinatra app using Ajax. The server in turn sends this to Bing and gets back the audio. The server then writes this to a file on Amazon S3 and returns the URL back to the web page, where the audio is played.
The Live Demo for this is at http://bing-translate-tts-demo.craic.com/ and the code required to implement the whole thing is freely available at https://github.com/craic/bing_translate_text_to_speech
The demo has several moving parts and setting it up for yourself requires experience with Sinatra, S3, etc.
Labels:
ajax,
amazon s3,
bing translate,
Heroku,
sinatra,
text to speech
Friday, April 12, 2013
Zillow Neighborhood Boundaries in GeoJSON and KML for Seattle
The good people at Zillow have invested a lot of time defining the boundaries of around 7,000 neighborhoods in cities around the USA.
These are in the form of ESRI ARC Shapefiles that can be used as overlays on maps.
You can find these at Zillow Neighborhood Boundaries.
Zillow have kindly made these available to the rest of us under a Creative Commons license. This allows you to share and modify the data but you need to attribute Zillow and must distribute any derivative forms of the data under the same or similar licenses. Attribution should take the form of including their logo with a link back to their site.
ESRI shapefiles can be read by many types of mapping software, but one big exception to this is Google Maps. Overlays in Google Maps are loaded from KML format files. In addition GeoJSON is a JSON-based format that is gaining in popularity.
I needed neighborhood boundaries for the City of Seattle in KML format so I can use them in Google Maps. I couldn't find a convenient way to convert from Shapefile directly to KML (although converters do exist in some GIS packages) and the solution I came up with involved converting first from Shapefile to GeoJSON and then from GeoJSON to KML. I'll make that software available elsewhere.
My Github Project zillow_seattle_neighborhoods contains the GeoJSON and KML files for the 78 Seattle neighborhoods defined in Zillow's dataset.
The same approach could be applied to derive GeoJSON and KML files for the entire Zillow US city dataset. I'll see if I can do that in the future... for now it just covers Seattle.
Using the Files
You can use either format of file in many GIS packages and mapping programs. Here are a few ideas for people who are new to this area of coding.
You can load KML files into your own maps using Google Maps directly. Look under 'My Places' -> 'Create Map' -> 'Import'. See Google Maps documentation for details.
You can load the KML files into Google Earth for a different experience. The command 'open .kml' on a Mac with Google Earth installed should be enough.
To use these overlays in custom Google Maps you want to use the Google Maps JavaScript API.
I'll add an example HTML page that shows how to do this in due course...
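In the meantime, here is a minimal sketch using the API's KmlLayer (the file URL is a hypothetical placeholder, with the cache-busting query string described below):

var map = new google.maps.Map(document.getElementById('map'), {
  center: new google.maps.LatLng(47.6097, -122.3331),   // Seattle
  zoom: 11,
  mapTypeId: google.maps.MapTypeId.ROADMAP
});

// the KML file must be publicly accessible - not on localhost
var layer = new google.maps.KmlLayer({
  url: 'http://yourserver.net/yourfile.kml?' + Math.random(),
  map: map
});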
Two important things to note about testing KML overlays in Google Maps in your own HTML pages:
1: Your KML files MUST be hosted on a remote server. They will be ignored if you try to host them from a server on localhost!
2: KML files are CACHED in web pages, just like images. So if you are changing the files, such as changing the colors, these will not be visible unless you work around the caching issue. The easiest way to do this is to add a question mark followed by a random string at the end of the url for the KML file, and change this on every page reload.
For example: http://yourserver.net/yourfile.kml?1234567
Friday, April 5, 2013
Fixing incorrect timezones in Ruby
Timezones can be tricky to manage and sometimes I just get them wrong - like now...
I had been parsing strings representing local times in Seattle and then storing these without an explicit timezone on a machine which uses UTC by default - so they got stored as that time in UTC (not converted to UTC - the exact same time string, now in UTC).
For example: Fri, 05 Apr 2013 14:24:39 (the local time in Seattle) was stored as Fri, 05 Apr 2013 14:24:39 UTC +00:00 which is 7 hours earlier than it should be.
The easiest way to fix this that I found is to convert back to a string, strip off any record of a timezone and then parse it back into a time, having temporarily set the TZ environment variable to the correct local timezone.
NOTE that this works on UNIX/Mac OS X systems - not sure about Windows...
puts ENV['TZ'] => nil
puts t => 2013-04-05 14:24:39 UTC
s = t.to_s
puts s => "2013-04-05 14:24:39 UTC"
s.sub!(/\s+UTC/, '')
puts s => "2013-04-05 14:24:39"
ENV['TZ'] = 'US/Pacific'
t1 = Time.parse(s)
ENV['TZ'] = nil
puts t1 => 2013-04-05 14:24:39 -0700
Remember to reset the TZ environment variable back to nil
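Wrapped up as a small helper, the same approach looks like this (a sketch - the method and its name are my own, not a library call):

require 'time'

def reparse_in_timezone(t, tz = 'US/Pacific')
  s = t.to_s.sub(/\s+UTC/, '')   # strip the bogus timezone record
  ENV['TZ'] = tz                 # parse in the correct local timezone
  fixed = Time.parse(s)
  ENV['TZ'] = nil                # remember to reset TZ
  fixed
end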
Be very careful about using gmtime and localtime as these methods operate in place. So ALWAYS make a copy of your time before applying those conversions
t => 2013-04-05 15:25:41 -0700
t1 = t.dup
t.gmtime => 2013-04-05 22:25:41 UTC
t => 2013-04-05 22:25:41 UTC
t1 => 2013-04-05 15:25:41 -0700
Wednesday, April 3, 2013
Using Latitude and Longitude with Google and Bing Maps
When you work with maps you sometimes want to extract a Latitude/Longitude pair from the map, or you want to enter a pair and see the corresponding map.
Google Maps and Bing Maps do things slightly differently. In general I go with Google but there is not a lot in it. Here is how you work with Latitude/Longitude in both of them:
1: Enter an Address and get the Latitude/Longitude pair
Bing - Enter the address - the Lat/Lon pair will be shown below the Address in the left hand panel
Google - Enter the address - then Right Click the marker and select What's here? - the Lat/Lon pair will appear in the address entry box
2: Browse to a Location and get the Latitude/Longitude pair
Bing - Browse to the location - then Right Click and the Lat/Lon pair will be shown in the popup
Google - Browse to the location - then Right Click the marker and select 'What's here?' - the Lat/Lon pair will appear in the address entry box
3: Enter a Latitude/Longitude pair and view the Location
Bing and Google - Enter the Lat/Lon pair, separated by a comma, into the address box (e.g. 47.619905,-122.320844) - Google makes a better choice of zoom level in my opinion.
Thursday, March 21, 2013
SODA - Socrata Open Data API
Socrata is a company that helps organizations make their data available to outside users. They work with people like the World Bank and a number of city governments, including the City of Seattle.
You can access these public data sources via SODA - the Socrata Open Data API which provides a SQL-like query language (SoQL) that you submit via HTTP request to an API endpoint for that resource. The default response is JSON but you can get CSV, XML and RDF.
They provide quite a bit of documentation but it is a bit short on examples and I had to poke around a bit to get the right syntax for the data sources that I am interested in.
A very useful resource is the 'live' query console they provide for a test dataset, which lets you try various queries.
I access their resources from Ruby. I generate the URL, use open-uri to fetch the data and then parse the JSON response. Here is an example of the process:
The data source I want to query lists the Seattle Fire Department 911 calls. That page lets you create custom views and download the data. If you click on 'Export' you can see the column names used in the database as well as the url for the API endpoint.
I want to get all records in the past hour, so I need a query along the lines of 'where datetime > my_timestamp'. Specifically I need to build a URL similar to this:
http://data.seattle.gov/resource/kzjm-xkqj.json?$where datetime > my_time_stamp
This is not a real query just yet but note a couple of things...
The API endpoint is http://data.seattle.gov/resource/kzjm-xkqj.json
This version ends in '.json' and so it will return the response in JSON. If you want CSV then replace '.json' with '.csv' - same goes for XML and RDF.
Also note the dollar sign before the 'where' - it is easy to overlook - that signifies the following word is a SoQL clause and not the name of a column.
A big problem I ran into was figuring out the format for timestamps. The web page showing the table formats them as '03/21/2013 07:35:00 AM -0700' whereas the JSON response uses "datetime" : 1363872960. But neither of these works in a query... and the documentation on data types does not describe what you should use. Trial and error with that console page led me to the correct format of '2013-03-21 07:00:00' which is similar to the ISO 8601 format.
Also note that the time needs to be quoted... again, apparently not in the documentation
So my query should now look something like this:
http://data.seattle.gov/resource/kzjm-xkqj.json?$where datetime > '2013-03-21 07:00:00'
But that URL needs to be escaped before submission. That was the other issue - URI.escape() handles the spaces, quotes, etc. but for some reason it does not escape the dollar sign. So I had to do that for myself.
Here is a chunk of Ruby code that gets all the 911 calls in the past hour:
require 'open-uri'
require 'time'
require 'json'
endpoint = 'http://data.seattle.gov/resource/kzjm-xkqj.json'
timestamp = (Time.now - (60 * 60)).strftime("%F %H:%M:%S")
query = "$where=datetime > '#{timestamp}'"
url = "#{endpoint}?#{query}"
url = URI.escape(url)
url.gsub!(/\$/, '%24')
puts "query: #{query}
puts "url: #{url}
json = open(url).read
puts json
result = JSON.parse(json)
puts result.length
That will show you the query, the escaped URL, the JSON returned and the number of records in the parsed JSON - enough to get you started. Different data sources will use different column names and perhaps different formats for date/time (always be aware of time zone issues).
Having a common API to many government/public data sources is great but the Socrata-backed sources that I've looked at so far don't seem to be quite as consistent as one would hope. This is an issue for the different data providers - it's not a Socrata issue.
I would hope that one could query the endpoint in some way and get a listing of all the column names and data types. There is one for SODA v1.0 but this is now deprecated... so perhaps there is a newer version but it has not been documented.
Part of the answer lies in two custom HTTP response headers (X-SODA2-Fields and X-SODA2-Types) that are returned with your data. Here they are for the data source used here:
X-SODA2-Fields: [":updated_at","address","longitude",":id","latitude","incident_number","datetime","type",":created_at","report_location"]
X-SODA2-Types: ["meta_data","text","number","meta_data","number","text","date","text","meta_data","location"]
X-SODA2-Legacy-Types: true
You can see the headers if you use 'curl -i "your_url"' on the command line with your escaped url.
Friday, March 8, 2013
HTML5 and web API code examples
Whenever I am learning a new feature of HTML5, JavaScript/jQuery or a new API, such as Google Maps, I always look for example code that I can learn from.
In many cases the examples are great, but in others they can be too clever and too heavily styled, such that it can be hard to understand the core of the feature that they are demonstrating.
So in writing my own example code I try and strip things down to the bare minimum - very little styling and code that tries to do one, and only one, thing.
I've been collecting examples that I think have some substance and that can help others learn about a feature with the minimum of confusion.
Take a look at http://html5-examples.craic.com
This site has working examples of a number of HTML5-related features, including Data Attributes, Geolocation, Web Audio and Speech Recognition.
Look at the Source for each of these pages to see annotated JavaScript, etc. which illustrates the target feature of each page.
All the code is distributed freely under the terms of the MIT license and you are encouraged to build your own applications using it.
Note that some of the examples involve new web technologies, so they may not work in some browsers and what does work now may not work in the future as the APIs mature.
Labels:
google chrome,
google maps,
html5,
javascript,
web audio,
web speech
Tuesday, February 26, 2013
General approach to building Rails applications with Devise
I have built a number of complex Rails applications that include user authentication. I want to describe my general approach to getting these up and running.
In most of them I chose the Devise gem to handle authentication and added a number of custom fields to the User model (first name, last name, etc). The gem documentation and this Railscast can help you get started with all that.
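For reference, the standard Devise setup boils down to a few commands - a sketch (the custom User fields come later via your own migrations):

# In your Gemfile
gem 'devise'

$ bundle install
$ rails generate devise:install
$ rails generate devise User
$ rake db:migrate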
One approach is to build your core application without authentication and then add it in at a later stage but I don't like that.
Authentication can quickly get very complicated once you start adding user confirmation via email, an admin role and other custom fields. I prefer to get all that machinery up and running in the context of a very simple application and only then add my real application code into it.
Similarly, when I'm starting with a new application I don't usually hit on the right data architecture right away. That leads to me adding new columns to my database tables and models over time and it always involves quite a lot of 'rake db:rollback / rake db:migrate' operations. I prefer to go through this phase without the extra baggage of authentication, and often, without worrying a lot about CSS etc. I just need to try several alternate views of the problem before I settle on a final design.
So my approach is to build two applications side by side. The first is a minimal application with, say, a single table (usually called 'posts') to which I add Devise and configure all the options that I want.
This is a very useful piece of code for other applications, so I always create a good README and store a copy of the code for future use.
Separately I work through my core application and, once that is working, I condense all the migrations that add/remove/rename columns into a few 'clean', simple migrations.
Finally I add the core application code to the minimal authenticated application. That way it is easier for me to update the controllers, etc. to use authentication.
The bottom line is that you want to minimize the number of moving parts in the early stages of a new application design. This is a simple approach that works very well for me.
Thursday, February 14, 2013
Street Intersections in Google Maps
I need to display Google Maps with markers placed at street intersections, starting with an address that looks like '13 Av E / E John St, Seattle, WA'.
Google Maps doesn't know how to parse this and gives me a location on John Street but it's not correct.
Various sources on the web tell you to use 'at' as the conjunction between the two streets - so '13 Av E at E John St, Seattle, WA'.
This works fine in most cases, but in some, like this one, it fails: it gives the intersection of 13th Ave East and E Prospect (7 blocks away).
What seems to work much better is to use 'and' ... '13 Av E and E John St, Seattle, WA' is spot on.
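If you are geocoding these addresses in code rather than typing them into the Maps site, a sketch along these lines shows the idea, using the Maps JavaScript API v3 geocoder. This is my illustration, not code from the original post, and it assumes the API script is already loaded and the page has a div with id 'map':

var geocoder = new google.maps.Geocoder();
geocoder.geocode({ address: '13 Av E and E John St, Seattle, WA' }, function(results, status) {
  if (status == google.maps.GeocoderStatus.OK) {
    // Center a map on the geocoded intersection and drop a marker there
    var location = results[0].geometry.location;
    var map = new google.maps.Map(document.getElementById('map'), {
      center: location,
      zoom: 16,
      mapTypeId: google.maps.MapTypeId.ROADMAP
    });
    new google.maps.Marker({ map: map, position: location });
  } else {
    console.log('Geocoding failed: ' + status);
  }
});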
Friday, February 8, 2013
jQuery snippet to disable / enable links on a page
Here are two approaches to hiding all the anchor ('a') tags on a page while retaining the HTML that was contained within the tags. Both use jQuery.
Approach #1 - Remove all the 'a' tags while retaining their content - no way to retrieve the links, etc.
This lets you effectively disable all the links without losing any text. Using the .html() call instead of simply .text() means that any images or other markup within the tags are preserved.
$("a").each(function() { var t = $(this).html(); $(this).replaceWith(t); });
Approach #2 - Reversibly Hide and Show the tags
Replace the 'a' tags with 'span' tags and store the original tag pair in a custom data attribute
Custom data attributes all have the prefix 'data-' followed by a unique name. You can store arbitrary data in these. With this approach I am storing the original 'a' tag pair and its contents.
In the example, my custom data attribute is called 'data-craic'. Note that the 'span' tags are given a unique class (I'm using 'hidden-link') that allows you to identify them later.
To hide/disable the links:
$("a").each(function() { // Get the html within the 'a' tag pair var t = $(this).html();
// Get the entire tag pair as text by wrapping in an in-memory 'p' tag and fetching its html var original_tag = $(this).clone().wrap('<p>').parent().html();
// Replace the 'a' tag with a 'span' - put the original tag in the data attribute $(this).replaceWith("<span class='hidden-link' data-craic='" + original_tag + "'>" + t + "</span>"); });
To show/enable the links:
$(".hidden-link").each(function() { // Retrieve the original tag from the data attribute var original_tag = this.dataset.craic;
// Replace the 'span' with it $(this).replaceWith(original_tag); });
At least in my use cases I have not had to encode the original 'a' tag pairs, but base64 encoding might be a good idea to avoid problems with quotes or other special characters in the stored markup.
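A hypothetical variation (not part of the original gist) would use the browser's built-in window.btoa() / window.atob() functions. Note that btoa() throws an error if the string contains characters outside the Latin-1 range:

// When hiding: encode the original tag before storing it
var encoded_tag = window.btoa(original_tag);
$(this).replaceWith("<span class='hidden-link' data-craic='" + encoded_tag + "'>" + t + "</span>");

// When restoring: decode it again before replacing the 'span'
var original_tag = window.atob(this.dataset.craic);
$(this).replaceWith(original_tag);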
Custom data attributes are extremely useful and were the perfect solution to this problem.
You can find a self contained web page with these scripts at https://gist.github.com/craic/4987192
Monday, January 28, 2013
JSON versus JSONP Tutorial
I recently found myself a bit confused about how to convert a jQuery $.getJSON() call to use JSONP (JSON with Padding).
JSONP is a way to get around the Same Origin Policy restriction when a script in one domain wants to fetch JSON data from a server in a different domain.
The approach is simple enough but I had trouble finding a clear explanation. So I wrote up a simple example in both JSON and JSONP with server and client code.
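The client-side difference is small. Here is a sketch of the two calls (the URLs are placeholders, not the demo's actual endpoints):

// Same-origin JSON request
$.getJSON('http://my-server.example.com/data.json', function(data) {
  console.log(data);
});

// Cross-domain JSONP request - jQuery generates the callback name
// and the server wraps the JSON in a call to that function
$.ajax({
  url: 'http://other-server.example.com/data',
  dataType: 'jsonp',
  success: function(data) {
    console.log(data);
  }
});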
The live demo is at http://json-jsonp-tutorial.craic.com and all the code is on Github at https://github.com/craic/json_jsonp_tutorial
I hope that you find it useful...