Nick Gauthier's Blog

Nick Gauthier is the CTO and Co-Founder of Nomics, a cryptocurrency financial data and pricing company.

Previously, Nick Gauthier was a web freelancer, consultant, and trainer specializing in Ruby on Rails, JavaScript (especially Backbone.js and jQuery), and PostgreSQL. In 2011 he co-wrote Recipes with Backbone with Chris Strom and in 2012 he wrote Mobile Web Patterns with Backbone.js.

Nick has spoken at a bunch of Ruby and Open-Source conferences including RailsConf, WindyCityRuby, GoRuCo, Ruby Hoedown, Lone Star Ruby Conf, and RubyNation, and he's always interested in speaking about web development, Ruby, Rails, JavaScript, Backbone, and other topics.


A reverse chronological archive of all posts by Nick Gauthier.

Rails Systems Tests with Headless Chrome on Windows Bash (WSL)

I’ve been using the Windows Subsystem for Linux for a few months now, but one thing I hadn’t figured out was how to run Selenium tests from it. WSL doesn’t support GUI applications, and Chrome is especially difficult to get working. So even though I want to use chromedriver with the new headless option, Chrome still doesn’t work properly in WSL out of the box.

Elm on Rails with Webpack

Continuing from part 1: Preact on Rails with Webpack, today we’re going to look at how to set up Elm in front of Rails using Rails 5.1’s new webpack scaffolding.

Preact on Rails with Webpack

I needed to brush up my front-end skills, since I’ve been in Gopherland for a couple years now, so I decided to write a few tutorials about using Rails with modern front-end frameworks. I’m learning this as I go, so please feel free to comment or message me on Twitter if you have some tips about the setup.

Linux Workstation PC Build

I got a great response on Twitter from people who were interested in building a Linux workstation. In this post, we’ll build a ridiculous PC for under $1,000, plus show a couple of add-ons you can make to fit your use case.

Staying Motivated While Working Solo

I recently started MeetSpace, a video conferencing app for distributed teams. It’s not the first time I’ve started a product company, nor is it the first time I’ve worked alone. But, it’s the first product I’ve worked on alone. In the past I had either partners or clients I’ve worked with on any project, but this is the first one where I’m the solely responsible person (insert irresponsible coworkers joke here :P ). In this post I’ll outline the ways that I keep myself motivated and working each day.

Minimum Viable Project Management

I started my career in software at a consultancy, and over the years I’ve spent more than half my time as a consultant or freelancer. In all this time across many projects, I’ve experimented with many ways of doing software project management. From processes like waterfall, agile, and kanban, to tools like Pivotal Tracker, Trello, GitHub Issues, and Basecamp.

Interview Notes

I was asked recently to write up how I give interviews, and I realized that it could be very helpful to publish these notes publicly. As you will understand while reading, knowledge of my interview and its rubric doesn’t give a potential candidate an edge. In fact, I think it would lead to a more accurate interview. Enjoy, and share your thoughts!

Scraping the Web with Ruby

When you run into a site that doesn’t have an API, but you’d like to use the site’s data, sometimes all you can do is scrape it! In this article, we’ll cover using Capybara and PhantomJS along with some standard libraries like CSV, GDBM, and OpenStruct to turn a website’s content into CSV data.

Using Docker to Parallelize Rails Tests

Docker is a new way to containerize services. The primary use so far has been for deploying services in a very thin container. I experimented with using it for Rails Continuous Integration so that I could run tests within a consistent environment, and then I realized that the containers provide excellent encapsulation to allow for parallelization of test suites.

PostGIS and Rails: A Simple Approach

PostGIS is a geospatial extension library for PostgreSQL that allows you to perform a ton of geometric and geographic operations on your data at high speeds. For example:

Learning Angular on Rails

Last night I had the best idea for a JavaScript framework. It was going to use the dom with data attributes in a totally unobtrusive way. It would have global repositories for remote data, do caching, and attach controllers to the dom automatically.

Rails Controller Accessors

Recently, I’ve been reading Practical Object-Oriented Design in Ruby by Sandi Metz (I highly recommend it!) and it got me thinking more about OO design in Rails. I realized that one of the patterns I’ve been using synced really well with the messages in the book, and I wanted to share it.

WebSockets in Rails 4

I’ve been using Rails 4 (beta) a lot recently. In a previous post we looked at how ActionController::Live can be used with Server-Sent Events, but the problem with that is that there’s no way for the client to communicate back to the web server. Enter: WebSockets.

Rails 4 Server Sent Events with ActionController::Live and PostgreSQL NOTIFY/LISTEN

I had a simple problem: one user takes an action, and I want it to be reflected immediately on another user’s screen.

Backbone Screencast 02 - Mixins

Backbone Screencast 01 - Booting your Application

Unsubscribe Links in Rails with ActiveSupport::MessageVerifier

If you’re setting up an unsubscribe link for your emails in a Rails application, it’s important to make it secure and seamless. We want to have it function properly if the user is not logged in without having them log in first. We also want to make sure it’s not easy to forge. The url should be something like this:

Ruby on Rails on the Nexus 7

This evening I decided to put Ubuntu on my Nexus 7 to see how it performs and what packages are available on ARM. I’m happy to report that Ruby 1.9.3 (p194) and Rails 3.2.9 work perfectly (albeit slowly :-D).

Intro to Backbone.js

Here is a talk I gave at the Baltimore Javascript users group. It is an introduction to backbone.js, featuring about 25 minutes of slides and an hour of live coding an example application.

One Click Development

The Problem

RSpec with Domino

Using Domino with RSpec is awesome. I’ll let the code speak for itself.

var self = lame

Deploy Ruby as a Gem


Raphael.js + Backbone.js + Traer.js

Raphael.js is a cool vector graphic drawing library for javascript. It uses SVG (VML on IE) to draw just about anything, and provides lots of easy helper methods. The coolest thing about SVG is that since it’s XML it can be inserted directly into the dom, so every element has its own dom node.

Playing with Ember.js

Today I played around with Ember.js. I wanted to make my own Pomodoro timer, and I figured it would be a good way to try it out.

Quick Ruby Tests with Bash

In Ruby on Rails development, we have great gems like Guard that will re-run tests or other tasks based on changing files. I was interested in finding something more lightweight but less configurable and flexible that I could use on smaller projects.

Ruby and the Web

Here's a talk I gave at Bmore on Rails (Baltimore's Rails user group). I talk about MVC vs Model 2 and how they apply to Rails, about Frameworks vs Libraries, and about White Box vs Black Box. I also talk about a theoretical framework I experimented with to try to implement a more object-oriented system for dealing with the web in Ruby (based on Rack).

Simple ruby setup on ubuntu

Today I setup a new development machine. My preferred OS is Xubuntu, which is Ubuntu + XFCE (a light window manager). In the past I’ve used RVM and have been happy with it, except for one thing: compiling.

Galaxy Nexus First Impressions

I got a Samsung Galaxy Nexus yesterday and @reillyhawk asked me to share a review. Disclaimer: I’ve had this device for 20 hours.

Open for Business

I’m excited to announce that I’m now open for business!

Recipes with Backbone Released!

Chris Strom and I have finished our e-book on Backbone JS: "Recipes with Backbone". It is now available for $24.

The book is targeted at the intermediate to advanced level backbone developer, but that's not to say beginners won't get anything out of it. To quote the site:

  • This is not the definitive guide to Backbone.js.
  • This is not an introduction to Backbone.js.
  • This is the book you read after you read the tutorial.
  • This is the book that teaches you to kick ass with the hottest Javascript MVC framework around.
  • This is Recipes with Backbone
  • Preview the table of contents

So if you want to learn more about backbone js, buy it!

Mocking on Rails

Gregory Moeck's awesome post Stubbing is Not Enough got my brain back on the subject of mocking. Readers of this blog may note that I had quite a rant against mocking almost a year ago and Gregory posted a response. I think the result of that post and the discussion that ensued was not that mocking and/or stubbing were bad practices, but that when they are applied inappropriately they can quickly deteriorate the tests and the design of an application.

After reading Gregory's article, I wanted to revisit the state of mocking in Rails applications. One of the things I noticed in his article was that he was addressing some ruby classes and their interactions. He enforced the OO concepts of message passing and how mocks are better than stubs at testing very OO code. While I really like his solution, something was nagging at me: how can I do this in Rails?

For example, here is the stock rails scaffold controller and functional test:

class PostsController < ApplicationController
  # GET /posts
  # GET /posts.json
  def index
    @posts = Post.all

    respond_to do |format|
      format.html # index.html.erb
      format.json { render :json => @posts }
    end
  end
end

class PostsControllerTest < ActionController::TestCase
  setup do
    @post = posts(:one)
  end

  test "should get index" do
    get :index
    assert_response :success
    assert_not_nil assigns(:posts)
  end
end

What can I mock? In Gregory Moeck's example, the ticket reservation object was passed in to the ticket machine interface (dependency injection) so we could easily mock the ticket reservation role and assert that the ticket machine interface interacted with it properly. He also intentionally doesn't touch the @current_display variable because it is internal to the system.

In our Rails controller and functional test, we can observe the following:

  1. The actions taken by the controller are internal, not based on roles it should interact with
  2. The functional test is testing internal state (assigns) and not messages passed by the controller

My gut at this point says I'm on an integration boundary between the user and my internal system. So that means I would have an integration test on the controller. But still, the signature of a rails controller to call activerecord and render doesn't seem to lend itself to encapsulation and mocking.

At this point I attempted to write what I thought was a change to the way controllers work using dependency injection and object composition, but I failed at it. So I'm leaving this post with some open questions:

  1. What is the proper way to test rails requests with mocking?
  2. What is the proper way to do an integration test, and how deep should it go (i.e. beyond the point that you have covered with unit tests)?
  3. Are there other web frameworks out there with great encapsulation as a best-practice?

Super extra bonus points if you post links to open source projects that have a test suite that actually do these things well. Thanks.


Greg Moeck
A good number of people have asked me that question since I wrote the post. My general response is twofold.

First, if I'm just doing CRUD reading and writing, I don't really feel the need to have unit tests for the project, so long as I have end-to-end acceptance tests. The logic in Rails is simple enough that a computer could write it, so I don't really feel a high degree of risk there.

However if I'm dealing with a complex domain then I tend to separate out my domain layer from Rails, and treat my controllers as ports (from Alistair Cockburn's ports and adapters architecture) into and out of the web. I will have them talk to an adapter within my domain, which is all well encapsulated and heavily unit tested. I generally don't unit test the controller or the view layer , and just let my end-to-end acceptance tests ensure that everything is plugged together correctly. However if something is particularly hairy I will cover the rails part in an integration test.

The basic approach is similar to what Eric Evans calls the "Anticorruption Layer" in Domain Driven Design (p. 366).

I'm working on a sample application for an auction house. There isn't much to see in the actual domain layer yet, but it will give you a general idea.
Nick Gauthier
awesome, thank you for chiming in. One thing I've been mulling over is how much of the variability in the controller's form=>params=>domain model do you cover w/ acceptance tests?

If a form has a bunch of params, some options (radio buttons), some non-required, some required, you have to make sure the fields are wired up somehow to get passed in to your domain model. Do you test all the paths?
Greg Moeck
I personally tend to leave that stuff to the adapter to decide what to do with, and think of the controller as just the way that I receive and send data. That's why I generally don't feel the need to unit test them: the controller is just getting all the data relevant to the request and handing it off to the domain to decide what it means. My adapter object will generally then read the parameters and send messages into the domain in the domain's language, according to whatever the parameters mean. This isolates my actual domain objects from changes in the system and allows me to plug the same domain into another delivery mechanism, so long as I write an adapter for it.

The more complex side is when the controller queries the "view side" of the domain to get a response object, then passes that data to the view, or renders an error or something. This is where my integration-style tests will sometimes come in, if I want to unit test that logic because it is getting complex.

I certainly don't feel like I have this all figured out, and I'm personally excited that people are starting to think more along these lines because I feel like the Rails side of the equation is going to clean up a bit in the immediate future. Sometimes I do feel like Rails is a bit of an overkill though since I can use Rack and accomplish most of what I'm wanting to do with the "web framework" part of my application.
Nick Gauthier

So generally your domain objects play into the controller like:

if obj.create(params); head :ok; else; render :json => obj.errors; end

I've been looking more into goliath and it seems to strike a nice balance of rack-like directness (no magic) and also basic HTTP API niceties like content encoding.

Personally, I've been on an "acceptance test everything no unit tests" kick. It has proven to:

1) make very reliable software
2) make very slow test suites
Sam Goldman
This is perhaps an unfairly literal response to a contrived example, but if you are writing controllers like that, another option would be to use a library like inherited_resources, which is already well-tested, and just test the happy path in an integration.
Nick Gauthier
well yeah if you're using the stock scaffold you don't need to test it either, since scaffolds are well tested.

Mostly just interested because this is the standard way of writing a controller action.

As an aside, I'm not a big fan of inherited resources. I prefer scaffolding. Tracking down bugs and determining behavior w/ inherited resources is a pain. I'd rather type it out.

Alpha of "Recipes with Backbone" Released

Chris Strom and I have released the alpha of Recipes with Backbone. It's an e-book containing intermediate to advanced design patterns and best practices for Backbone.js. Grab it now for 50% off the list price. You'll get future versions of the book for free when you buy the alpha.


Using Exceptions to manage control flow in Rails Controllers

Ah yes, the Rails Controller, a source of much contention among Rails developers. So many different ways to manage control flow, load objects, respond in standard and erroneous ways. My opinion up until recently was "I'll just put a bunch of conditionals in there for different situations."

Recently, I've been working more on API endpoints, so responding with nice error messages has been more of a priority. I started using exceptions more throughout my code thanks to Avdi Grimm, and I recently wrote an action that I'm particularly proud of. Check it out:

# This controller's job is to exchange twitter credentials for Shortmail credentials
class TwitterReverseAuthController < ApplicationController
  # First, let's make our own subclass of RuntimeError
  class Error < RuntimeError; end

  def api_key_exchange
    # Here are our required parameters. If any are missing we raise an error
    screen_name = params.fetch(:screen_name) { raise'screen_name required') }
    token       = params.fetch(:oauth_token)  { raise'oauth_token required') }
    secret      = params.fetch(:oauth_secret) { raise'oauth_secret required') }

    # OK now let's authenticate that user. If we can't find a valid user, raise an error
    @user = User.by_screen_name(screen_name).where(
      :oauth_token  => token,
      :oauth_secret => secret
    ).first or raise'user not found')

    # Now we'll build a device. I'm not catching an exception on create! here because
    # it should never fail. (I.e. a failure is actually a 500 because we don't expect it)
    @device = Device.find_or_create_by_token!(
      params.slice(:token, :description).merge(:user_id =>

    render :json => { :api_key => @device.api_key }

  # Now I can simply catch any of my custom exceptions here
  rescue Error => e
    # And render their message back to the user
    render :json => { :error => e.message }, :status => :unprocessable_entity
  end
end

Here are the things I really like about this solution:

  • The happy path is really clear because there's no if/else branching
  • Errors are really obvious because I'm raising an exception (as opposed to "else render json that complains", which looks like any other render and is not immediately apparent as a failure)
  • It's super easy to handle all the errors in the same way, instead of repeating the json render with a different message throughout the method (i.e. it's DRY)

How's this look to you? How do you organize controller control flow?


Jim Gay
But this will only catch one error at a time, so you'd return the result when the first raise is hit even if multiple parameters are missing.

And Avdi points out that there is performance overhead with raising exceptions. Did you opt not to worry about that? Why not just collect an array of errors and test for their presence?
Nick Gauthier
I think the performance overhead isn't an issue here. I expect users only to get it wrong while they're figuring it out. If this was on a form or something, then I'd raise an exception if the user is invalid, and I'd have all the params right there. But this is an API call, so it's more of a "while I'm developing I get errors".

Also keep in mind it's more performant the sooner I can bail out of processing the request :-)

Collecting and returning all errors makes sense for fetching the params, but if I can't find a user I can't proceed with the rest of the call. So I have to stop execution anyways.
codecraig
I've used a similar approach, although I was a little bit more granular in the HTTP response codes. For example, I'd send back a 404 if an entity was not found.
james
- There's too much going on in this method.
- From my limited reading of Avdi's blog posts / presentation slides, I don't feel that he would agree with using exceptions this way.
- If you really want to subclass RuntimeError, I would choose a more descriptive name.

An alternative:
Nick Gauthier
@codecraig great point on the 404. Could be an additional subclass that has the status set.

@james I'm looking up a user and creating a device. This is the minimum amount that can be done in a nested route on a create action. I disagree with your extraction because it will be only used in this single situation and obfuscates the method.

I might make a User.authenticate_via_oauth(screen_name, token, secret) that returns nil or a user, but creating a separate class is overkill here IMO.
Avdi Grimm
Assorted thoughts...

* So many wonderful fetches! My cup overflows.

* Nice use of 'or' as a statement modifier too, putting the error case last, where it belongs :-)

* Maybe I'm missing something... why are you explicitly instantiating the exceptions? Why not 'raise Error, "some message"'?

* Because this is in an API endpoint, it makes sense to use exceptions liberally. We expect humans to make occasional mistakes. Conversely, we expect API clients to be fixed when they make mistakes, and then to never make that mistake again. We also generally don't need to present "Here's what you said, maybe you meant something else..." type feedback to robots, so we don't need to worry about keeping context around that the exceptions might throw away.

* As james pointed out, I do think there's a lot going on in this method. E.g. I personally don't think a #where() call has any business in a controller, and then you've got a #first on the end of that, which makes it a third-order digression into querying minutiae.

@Jim: Exception performance is not on the order to worry about in a case like this. It only becomes a worry inside tight loops. Here network latency is going to drown any latency the exceptions add.
Nick Gauthier
Forgot about 'raise Error, "message"' :-)

Yeah the "where" on user should be "User.authenticate_via_oauth(screen_name, token, secret) => user or nil"

Thanks for the feedback everyone. Glad to know the controller is still an interesting area to experiment with.
Why didn't you just use a goto? It would be more explicit, and the exact same thing you are trying to accomplish here.
Nick Gauthier
Lack of self confidence.
Brian Cardarella
That was a reference to spaghetti code, in case it was over anybody's head.
Nick Gauthier
I made a gist:

fork it!
Brian Cardarella
Nick Gauthier

Your incorrect assumption is that every api call fails. When 1 per 1 million calls fails, exceptions are 3% slower, which is acceptable for readability purposes.
Nick Gauthier
I wanted to log this response post here for people reading the comments:
Nick, your solution is pretty interesting. Thanks for posting this.

It feels like you're using Exceptions for flow control. At least I don't see missing parameters as an exceptional case. It's possible and expected to happen. That's why we test for it.
Ok, so now that I've actually read the title of your post ... ;) Using Exceptions for control flow feels like writing goto statements again?!
Patrick
I just posted a deeper refactoring of this code and thought it might be of interest. I'm curious to get others' thoughts on this approach.

Nick Gauthier
@Patrick thanks! You're the third person to suggest a domain model as a solution. I like the idea of making the model encapsulate the multiple actions and have the controller simply perform the standard Create action on the domain model.

How I test EventMachine

EventMachine's asynchronous and evented nature can be pretty tough to test. Here are some simple Test::Unit helpers I use, along with an example:

def eventmachine(timeout = 1)
  Timeout::timeout(timeout) do do
  end
rescue Timeout::Error
  flunk 'EventMachine was not stopped before the timeout expired'
end

This is a helper that runs eventmachine in a timeout so that if it hangs the test suite flunks out after a second. Very handy.

def set_em_steps(*steps)
  @@_em_steps = steps
end

def em_step_complete(step)
  @@_em_steps.delete(step)
  EM.stop if @@_em_steps.empty?
end

This is a flow-control helper to make sure I complete all the steps I expected. Sometimes you run two chunks of EM code and then make assertions in the callbacks. Generally, you call EM.stop in your last callback, but what if they don't chain one after another? Then you have to call stop after both have finished. These two helpers just make it so that I can define my steps, then mark each as completed. They stop EM once all the steps are completed.


Here is an example test, from a production test suite for iPhone push notifications:

test 'send a push notification to the push daemon' do
  token = :iphone_device_token
  message = 'hello world'
  # define two steps that must be completed before stopping
  set_em_steps :payload, :notification

  # run the following code, but time out after 1 second
  eventmachine do
    # MockServer is a fake iphone push server (pretending to be apple)
    # it yields responses back to the instantiator
    Test::Helpers::PushD::MockServer.listen do |response|
      # Unpack the push
      id, exp, device, payload = PushD::Pusher.unpack(response)

      # make sure that the payload has the right info in it
      assert_equal token, device
      assert_equal message, JSON.parse(payload)['aps']['alert']

      # mark the payload step as complete, meaning we've received
      # and verified it
      em_step_complete :payload
    end

    # This is another part of the code under test.
    # This is how we send a push message
    PushNotifier.notify(token, :message => message) do |success|
      # When the push message is sent it runs the callback block with
      # a boolean and we ensure it's ok
      assert success
      # now make sure that this step is marked as finished
      em_step_complete :notification
    end
  end
end

Now, we can be sure our assertions are run, or it will time out because it won't stop.

How do you test eventmachine?

Backbone JS: View signatures to prevent repaints

A nice thing about backbone is being able to bind a view render to new data. But sometimes the data you get isn't actually new; it's the same data, not updated. This will still cause the view to repaint because the event will fire.

To combat this, I've started putting signatures on my views.

In the initializer for the view:

this.signature = "";

Then in the render method, I do a reduction on the data that drives the view and compare signatures and decide to paint or not:

// Create a signature from the "Posts" that this view renders
var new_signature = _.reduce(posts, function(memo, post) {
  // Store the id and number of comments. This represents what a
  // "changed" post is for the view
  memo.push([post.get('id'), post.get('num_comments')].join(','));
  return memo;
}, []).join('|'); // joined with pipes for each post

// If the signature is the same, end the render
if (new_signature === this.signature) {
  return;
}

// Otherwise store the signature
this.signature = new_signature;
// Your view code down here

This is essentially caching with a custom cache key. Except instead of retrieving a cached value, we leave the dom as-is. This cuts down on repaint "flickering".

jQuery Deferred and Backbone JS

Backbone is a really interesting framework, and my favorite part so far is the following idea:

If you use a callback, you're Doing It Wrong

This has held true for me for my development with Backbone so far. When you make data calls to the server, you let the appropriate events notify interested parties when objects are changed or updated or refreshed.

However, with non-data operations, callbacks can be really useful. Today, I needed to animate an object using jQuery's slideUp. I wanted the slideUp to go hand-in-hand with the deletion of an object. Because slideUp is asynchronous and the deletion action is asynchronous, I needed a callback to synchronize them. The reason I couldn't do them simultaneously is that when an object is deleted, many view elements refresh themselves, and if the slideUp wasn't finished the dom refreshing would interrupt the slideUp and it looked gross.

So, I need a callback on slideUp to call remove. Here arose an SRP problem: the view should not concern itself with the removal of the model from the collection. One solution is for the view's removal method to take a callback and pass it along to slideUp. But I wanted something more flexible. Enter jQuery Deferred.

Deferred objects let you chain callbacks and return promises as objects. For you Rails readers, imagine AREL had a hot sister written in Javascript that was into AJAX.

So here is my view method that runs slideUp and returns a deferred object:

return $(this.el).slideUp(200).promise().done(
  _.bind(function() { this.remove(); }, this)
);
Now when we archive an email, we want to remove the dom element by sliding, then we want to tell the email model to archive itself. Here is the archive method:


Pretty straightforward. We call remove (the previous method) then we tell the model to archive itself. This method is bound to the click event on the archive button using Backbone's view event binding methods. When the model archives itself, Backbone events automatically fire, so other Views can listen for "change" and "remove" to update their elements.

EDIT: changed code reflecting Julian's comments.


Using jQuery 1.6, you can considerably simplify your method:

remove: function() {
  return $(this.el).slideUp(200).promise()
    .done(function() { this.remove(); });
}

1.6 adds jQuery.fn.promise(), which returns a Promise to observe when a collection has no more animation going on. It's resolved with the collection it was called on as its context and first & only argument.

(sorry for how the code is formatted, Blogger's comments just plain suck :/)
Nick Gauthier
Cool! Yeah I noticed slideUp didn't return a promise, so I didn't know what to do with it.

I think I'd still need the _.bind within the done to bind to the object though, right?

Backbone and Rails Forgery Protection

I just had a tough time getting Rails 3 to play nice with Backbone JS, and it turned out to be a simple problem with a simple solution. Backbone was not sending the csrf authenticity token embedded in the page when it sent create/update/delete requests, and Rails was destroying the session when it detected the invalid request.

Here is all the javascript it took to get Backbone to include the token with all requests:

/* alias away the sync method */
Backbone._sync = Backbone.sync;

/* define a new sync method */
Backbone.sync = function(method, model, success, error) {
  /* only need a token for non-get requests */
  if (method == 'create' || method == 'update' || method == 'delete') {
    /* grab the token from the meta tags rails embeds */
    var auth_options = {};
    auth_options[$("meta[name='csrf-param']").attr('content')] =
      $("meta[name='csrf-token']").attr('content');
    /* set it as a model attribute without triggering events */
    model.set(auth_options, {silent: true});
  }
  /* proxy the call to the old sync method */
  return Backbone._sync(method, model, success, error);
};

Note that this depends on the meta tags being present, which requires you to call the helper "csrf_meta_tag" in your rails view for the page (put it in the head).


i'd love to hear more about using backbone with rails. i really like backbone, but to me it still feels like i have to write "a lot" of additional code. routes, models and validations for example.

i'm hacking on retrieving these things from rails, but maybe there's a better way to do this?!
Nick Gauthier
I'm going to put together something larger. It may fit in a blog post, but probably not. It may be a talk, I'm not sure yet.

I don't want to do a tutorial. For me, that's not very interesting. I want to write about backbone "6 months in" because that's where I think it will really shine.

So far, my initial impressions are that the code you write on the front end is code that you don't have to write on the back (especially crazy rails view code).

The most amazing part so far has been that if you wire everything up right by embracing event-driven coding, changes cascade to all the right places automatically. That is *huge* for "6 months in" when you can't afford to update an ajax call in 24 places when you change a template.
got it. i also love the evented part of backbone!

so how do you do templating? do you load templates upfront (via something like jammit) and then request json data, do you load the templates and the json and rely on caching or do you have rails render the template and retrieve the complete template?

also, did you run any metrics comparing the performance of your app with and without backbone?
Nick Gauthier
jammit + underscore templates + json api.

Benchmarks are tough because it depends heavily on many factors. However, I can tell you that a specifically heavy page went from 10s just for the Rails request, to 0.6s according to chrome's load time. So, speed improvements are massive.

Also, the user's subjective speed is even higher, because we can manipulate elements on the front-end while doing an asynchronous call behind the scenes. So everything is instant :-)
Maciej Adwent
Heya Nick,

Thanks for this code, it really cleared up some initial confusion for me.

I've wrapped your code up in a little github project:
Nick Gauthier
awesome! Thanks for packaging it up nicely!
Thanks a lot!

View Abstraction in Integration Tests

Goal: Make integration tests drier by adding a view abstraction layer

Ruby on Rails has a bunch of popular test frameworks, such as:

  • RSpec
  • Cucumber
  • Test::Unit
  • Steak

But one common aspect to all of the frameworks, out of the box, is that they're very procedural. Cucumber is designed so that each scenario has a set of steps. There is a single, global, collection of steps. RSpec is global by nature, there are no test classes, just describe blocks.

There is no doubt in my mind that these frameworks have made testing easier by adding lots of common actions, and allowing you to define your own common actions.

However, there is one thing that I don't see much of in tests: Object-Oriented Patterns. Most of us use the Factory pattern through a variety of gems. Internally, many frameworks use the Visitor pattern to execute the tests. But that's all I've ever seen (disclaimer: I am young, and have much to learn).

Here are some pain points I've felt while writing integration tests in the past:

  • Hard to reference objects throughout Cucumber scenarios. Often resulting in global variables used between steps
  • Cumbersome to check the view for expected output, often resulting in lots of css selectors
  • Brittleness introduced by coding html and css structure into test code that prevents refactoring the view

So, I recently had to write some integration tests from scratch, and I decided to do something different. I decided to implement the Bridge Pattern in my integration tests. I was confident enough in the solution that I decided to just use Test::Unit and Capybara to write my test code.

One of the major goals of this implementation is to make it easy to interact with the UI and objects within it. I think it's time I show some code.

Post Test for a simple blog site
# Given I am on the posts page
visit posts_path
# When I create a new post
click_link 'New Post'
View::Post.create(
  :title => 'View abstraction in integration tests',
  :body => 'We must go deeper'
)
# Then I should see a success message
assert_see 'Successfully created post.'
# When I visit the posts index
visit posts_path
# Then I should see one post
assert_equal 1, View::Post.all.size
# And it should have the correct title
assert_equal 'View abstraction in integration tests',
  View::Post.all.first.title
# And it should have the correct body
assert_equal 'We must go deeper', View::Post.all.first.body

Let's take a look at a few interesting things:

  • I'm directly using Capybara's dsl to navigate
  • When I create a post, I'm using a View module so that it won't use the ActiveRecord object
  • When I create a post, I pass in a hash of the fields I'd like to fill in
  • When I check to see if the post is created, I'm using methods on View::Post that return ruby objects, like an array, and that instances of View::Post have methods like "title" and "body"
  • There are no css selectors or html, but I do have button text present

OK, hopefully that piqued your interest. Let's look at some of the implementation. First, let's check out the base View module and a barebones implementation of View::Post:

module View
  def self.body
    Nokogiri::HTML(Capybara.current_session.body)
  end

  class Abstract
    # Access capybara dsl in the view helper classes too
    include Capybara
    extend Capybara

    def initialize(node)
      @id = node['id']
    end

    def self.all
      nodes.map{|node| new(node) }
    end

    def id
      "##{@id}"
    end
  end

  class Post < View::Abstract
    attr_reader :title
    attr_reader :body

    def initialize(node)
      super
      @title = node.css('.title').first.text.strip
      @body = node.css('.body').first.text.strip
    end

    def self.nodes
      View.body.css('.post')
    end
  end
end
OK what's going on here? Let me step you through it:

  • When I call View::Post.all, that calls View::Abstract.all, which iterates over View::Post.nodes and builds instances of View::Post
  • View::Post.nodes runs Capybara's current page through Nokogiri, then selects all the HTML nodes with the class "post"
  • When a View::Post is initialized, it uses Nokogiri to set attributes on itself from the view, like title and body
  • View::Abstract always stores an object's dom id as @id so that it can be used internally, which we'll see next.

Now lets take a look at how View::Post.create works:

module View
  class Post
    def self.create(opts = {})
      fill_form(opts)
      click_button 'Create Post'
    end

    def self.fill_form(opts)
      fill_in 'Title', :with => opts[:title]
      fill_in 'Body', :with => opts[:body]
    end
  end
end

Here we show how the class method uses capybara to take care of filling in the form for us. Now, if we change how our forms are rendered, we can change them in one place. Nice and DRY.

Let's look at one of the biggest pain points for me in cucumber: deleting an object in a list of objects. Why is this a pain point? I usually have to write a custom step like "When I delete the Post 'My Post'", which will use dom_id to find the id of a Post object found in the DB with the title "My Post". I find this really roundabout, because you don't need to look in the database for an object to figure out its dom id. It's right there in the view. Any competent internet user would be able to click on the "Delete" button for the post called "My Post" if you showed them the page in a browser. Here is the test code:

# Given I made two blog posts
2.times do |i|
  visit posts_path
  click_link 'New Post'
  View::Post.create(
    :title => "Post #{i}",
    :body => "Body for #{i}"
  )
end
# When I go to the posts path
visit posts_path
# Then I should see two posts
assert_equal 2, View::Post.all.size
# When I delete Post 0
View::Post.find_by_title('Post 0').delete
# Then I should see one post
assert_equal 1, View::Post.all.size
# And I should not see Post 0
assert_nil View::Post.find_by_title('Post 0')
# And I should see Post 1
refute_nil View::Post.find_by_title('Post 1')

Notice specifically the line where the post is deleted. I grabbed the instance of View::Post corresponding to the title I wanted and called .delete on it. Then I thoroughly check that the correct post was removed. Here is the implementation:

module View
  class Post
    def delete
      within(id) { click_button 'Delete' }
    end
  end
end

Expecting more? In View::Abstract we defined the method "id" which returns the dom id of the object which was stored when we initialized it. I simply told capybara to click "Delete" inside that node's div. This was the "eureka moment" for me. Something that is frustrating and difficult in other styles of testing is just plain simple with this pattern.
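One piece the test relies on that isn't shown anywhere is View::Post.find_by_title. A minimal way to build it is a scan over .all, since the objects are already parsed out of the view. Here's a sketch; the Struct-based .all stub is purely illustrative so it runs without Capybara (in the real suite .all comes from View::Abstract):

```ruby
module View
  class Post
    # Illustrative stand-in for the Capybara/Nokogiri-backed .all,
    # stubbed here so the sketch runs standalone.
    Stub = Struct.new(:title)
    def self.all
      [Stub.new('Post 0'), Stub.new('Post 1')]
    end

    # Hypothetical finder: look in the rendered view, not the database.
    def self.find_by_title(title)
      all.detect { |post| post.title == title }
    end
  end
end
```

View::Post.find_by_title('Post 0') returns the view object when the post is on the page and nil otherwise, which is exactly what the assert_nil / refute_nil lines in the test check.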

There is a lot more that can be done here, and I'm just scratching the surface. If you'd like to try it out for yourself, I've created a Rails project with this environment setup on github: View Abstraction Demo. Here's how to use it:

git clone git://
cd view-abstraction-demo

Note: you must use ruby 1.9.2. Important files to look at are "test/test_helper.rb" and "test/integration/post_test.rb".


I am sure your idea is gonna turn into a gem. It just fits so well in a view testing pattern and adds performance to it also. I don't fancy Cucumber, use Steak, but still I Repeat Myself with # and . so unql.

Gem would be nice. Generating test/views/abstract.rb and test/views/abstract/model.rb - so nice to modify you know. I'm looking forward to help out with gem development. Wish you luck.
Nick Gauthier
Hey Jakub, glad you liked the post.

I don't think this would make a good gem, because only about 20 lines would be packaged. The generator would create very bare files because it would have no understanding of your css structure.

Also, I've found that UI elements don't always map 1:1 with database elements.

Lastly, I find that the power of this pattern is simple and can be included directly in your code. Putting it in a gem would make it hard to change it on a project-by-project basis. I expect to add more helpful methods to abstract as I use this more.

Thanks for the answer Nick! You can be right, maybe it's too much for a gem, however having that in a generator could be a trick.

There is no need for the gem to understand css nor thoughtlessly map db - it can generate some conventional files that can be easily modified to fit the view. Actually, modifying those files would be a first step of a view specification!

The thing about that pattern is, for me, that those methods should not be stuffed in helpers (or additional spec/test methods) which is usually done. And to not forget that, the gem hooked to a initializer would be nice.
Nick Gauthier
yeah, I think moving them to separate files is definitely a cleaner way of doing it. Feel free to make a gem / plugin with generators to make it easier for yourself.
Corey Haines
Hi, Nick,
Good thoughts.
This looks similar to the page object pattern that a lot of people are using these days in Watir and Selenium tests. Have you looked at that? It could help influence your stuff, hopefully guide you past any dark corners they've already dealt with.
Nick Gauthier
Thanks Corey, that's exactly what I'm trying to do. I figured there was no way I could be the first person to think of this :-)
Nick Gauthier

I've been adding more functionality, and it seems like you're right and it would make a nice gem. Mostly to make it easier to define selectors and attributes, and make a bunch of convenience methods (like the enumerable methods).

Also, I am planning on bundling some assertions with the gem, which will take advantage of capybara's automatic delay on failed assertions. I'll post on this blog soon when it is released.

Nick Gauthier
This has been released as a Gem, and the syntax is changed a bit:

My Workflow

Stemming from a workflow discussion on twitter featuring @bryanl, @eee_c, @pjb3, @stevenhaddox, and @webandy, I decided to share my workflow. Originally, we were talking about Mac Apps (specifically the paid ones), but that turned into "how do you work". Here are my goals for a good workflow:


  1. Easily navigate applications and contexts
  2. Minimize distraction
  3. Maximize focus
  4. Get work done

Easily navigate applications and contexts

I need to be able to quickly get to certain applications in order to maximize productivity. I want to be able to swap to a browser, terminal, editor, or other application in under a second. The mouse won't cut it, I need keyboard shortcuts.

My solution here is Alt-Tab. One of the oldest and most basic ways of switching apps. Alt-Tab is very fast for me because 95% of the time I have 3 windows open on any given workspace (OS X read: spaces). This means that Alt-Tab goes back to the previous window, and Alt-Tab-Tab goes to the alternative window.

Minimize distraction

I lay out my workspaces in a 2x2 grid. Here is what I put in each workspace:

  • [1,1] Chrome holding "distracting" sites like gmail, twitter, campfire, pivotal, etc
  • [1,2] one Vim, one Terminal, one Chrome with only development related websites open
  • [2,1] passive applications. Usually Xvfb and Rhythmbox. I don't go here often
  • [2,2] alternative dev context. Same as [1,2] but for a different project. This is only up occasionally. Sometimes I use it for transferring files to remote servers.

I spend 99% of my work day in [1,1] and [1,2].

I disable all notifications. Nothing pops up. Nothing makes noise. Nothing "pulses" in the taskbar. Nothing. My phone is on vibrate and no apps on it make notifications except calls and texts. No one ever needs your attention immediately in gmail or campfire or twitter or pivotal or basecamp. When I am at a solid stopping point in my real work (30m to 2h in between) I go check all my distraction tabs. If the server is really on fire, someone will call my phone.

Maximize focus

This is a combination of the previous two, plus I always maximize my windows so I'm only looking at one thing at a time. No distractions so nothing gets in my face while I'm working. Many people like putting up a terminal next to their editor, but they usually end up alt-tabbing anyways. I prefer larger windows so I can have more vim splits (usually 3x3 on each tab) and more terminal output.


Get work done

This last section is in response to our twitter debate about paid mac apps. I'm on linux, so I have no access to any of those apps. Let's start with the apps I spend the most time in:

  • Edit code: Vim
  • Run tests, servers, and other commands: gnome-terminal
  • Web browser: Google Chrome
  • File browsing and remote FS management: nautilus w/ gvfs
  • Administer databases: mysql or psql console

That is where I spend the majority of my day. There are very few alternatives to the above applications that aren't very similar. Everyone has a browser and a terminal. Editors are an entirely different can of worms, so I won't get into that here. Some people use GUIs for DBs, but I prefer the console. I learned SQL before I did web development, so it's more natural for me. And if it's too hard, I write a test :-).

There are many excellent command line applications at your fingertips that I use through my terminal. Apps like git, grep, find, screen, top, ps, kill, and thousands of others. Learning the command line root of the application is usually more productive than a gui that was built on top of the command line application. The only exception I've found is when I need something shown to me graphically, like a pdf. I like to avoid the mouse, so the terminal is my friend.

Here are a bunch of Mac Apps that a couple of people brought up during our twitter discussion:

  • mailplane
  • echofon
  • divvy
  • propane
  • alfredapp

I think it's a nice small selection of apps that illustrate the differences in my workflow from many other developers. Mailplane, echofon, and propane are for Gmail, Twitter, and Campfire. I try to ignore those applications as much as possible. Ideally, I want to spend less time in those apps in order to get more done.

Divvy and Alfredapp are all about launching and organizing applications. I have three applications in three windows. Vim manages its own splits and tabs. In my terminal, screen or tabs works. In the browser, I use tabs. Everything is maximized to improve focus.

What it really boils down to, though, is that I don't do a lot of stuff during the work day. I write code. I avoid anything that is not essential to writing code. I want fewer applications, and I like having my life "in the cloud" so I don't have to install a lot of stuff on a new computer. My knee-jerk reaction to new applications is not "oooh shiny", it's "I don't need that". Another really nice thing about this workflow is that it's scalable. I've worked from 1024x768 to 2560x1600, and it always makes optimal use of screen real estate. When I swap to my netbook, I don't feel cramped, I just open more tabs.

Join the discussion

Please post comments about the applications you use that I haven't addressed. I'll try to reply to everyone about how I handle different scenarios. Most of the time, my response will be "I try to avoid that because I'm trying to write code here!"


A couple of GUI tools I prefer over command line:

* GitX

I find it easier to review diffs in GitX than diff at the command line. Also easy to unstage unwanted changes before commiting

* Rubymine

Hard to explain this. I'm planning on doing a screencast soon to show how I work in Rubymine. I find it more efficient than vim + command line or emacs.

* Querious

I'm fine with writing SQL on the command line, but viewing the results, esp. when you have large columns and/or large tables, sucks. A GUI fits things in tabs, windows and grids nicely.

I agree with you about the mouse. As much as possible, I'm trying to have keyboard shortcuts for all common operations.

As for distractions, depends on the project you are working on. If you are working on a project by yourself, "leave me alone I'm coding" works really well. If you have many devs working on the same code base, near real-time communication is important. It's a balance though.
Bill Mill
pjb, check out dbext for viewing the results of queries in Vim. If you're a vimmer, it's a godsend.

Also, my workflow is nearly exactly the same as yours, but on OS X. iTerm2 is as good as Linux terminals.
Since it appears you are critiquing that apps that I posted earlier, I now feel the need to reply.

Having unique apps which do their jobs really well is the key to helping me focus. The web browser is a lot of things to many people, but it most definitely isn't the end-all-be-all. I actually prefer to have as few browser tabs open as possible. There is nothing more frustrating to me than seeing all the screen space wasted by tabs.

Workflows don't have to be dead simple to be scalable. I work on a 11.6" macbook air, a 15" macbook pro and a 27" imac, and I pretty much have the same configurations on all three workstations. I don't believe in external monitors, and as you know, the keyboard is most definitely the place to be.

For instance: I use mailplane, so I can have easy access to multiple gmail accounts. Sure the web browser works fine, but it doesn't easily scale past one account if you would like to check multiple ones easily.

You hack code 100% of the time. What you describe is ample for a person in that situation. My day is filled with much more than just hacking on code, so tool sets will differ. No problems with that. There isn't one true way. The only thing you really need to work on is constant refinement on your path to happiness.

My work here is done. I've inspired yet another blog post ;-) I'll wait for you to troll me again another day.
Nick Gauthier

I use git-gui from time to time to review a large commit.

I'd like to see a rubymine screencast. The few times I've seen it I didn't see any advanced features. The "mining" that I saw looked the same as ctags.

As for the sql gui, I tend to write a test when stuff gets pretty complex. psql's console automatically pipes output to "less" so it's pretty easy to read.

Some projects I pay more attention to campfire, but it's never in realtime. If we really need a real time discussion, it's time for a phone call or a face-to-face.


I didn't mean for it to come off as a critique. I was trying to explain why I don't buy those apps, not "these apps suck because you could just do it this way".

I am pretty OCD about keeping a tidy environment, and I close tabs a lot while I browse.

I like the "single monitor" approach too. I was on a 24" for a long time, then I added a 20", but recently ditched both for a 30"@2560x1600.

@mailplane I only have one account. I could see needing this if I had more than one.

It's definitely a 100% hacker setup. If I had to do any graphic design or PM, it would be very different. When I do have to do PM stuff, I use another workspace and a whole new set of applications.

I don't mean to troll you. I find it very interesting that you and I have very different perspectives and setups for doing very similar work. Also, I know you can take it :-)
I use gitg as well for organizing commits, it's nearly equivalent to gitx and beats the hell out of git log :)

I also use Linux, and I found that for me tiled window managers do wonders for focus - though I understand that Compiz has that possibility now.
I use the Awesome window manager, which has all manner of keyboard shortcuts and can be customized using Lua. It's also ideally suited to the kind of organization you're talking about, naming tabs and allocating applications to them.
Nice post Nick, I've been wanting to learn some more keyboard shortcuts for moving around the linux desktop and this has been very useful.

Those windows look tight in your [1,2] screenshot. Do you have a window-placer util or something to get them so nice and even?
oops, nevermind. I see that they are vim-splits in a full size terminal.
Nick Gauthier
yup :-)

There are a bunch of window managers and things for placing windows in linux, but I don't know them.

If you install the ubuntu package compizconfig-settings-manager and run it you'll get the desktop effects settings, which have a keyboard shortcut for tons of stuff.

Everything that is wrong with mocking, bdd, and rspec

Below is a public excerpt from the recently released RSpec Book:

module Codebreaker
  describe Game do
    describe "#start" do
      it "sends a welcome message" do
        output = double('output')
        game = Game.new(output)

        output.should_receive(:puts).with('Welcome to Codebreaker!')

        game.start
      end

      it "prompts for the first guess"
    end
  end
end

This example illustrates what is wrong with standard BDD as practiced today. Consider the following correct implementation of the spec above:

module Codebreaker
  class Game
    def initialize(output)
      @output = output
    end

    def start
      @output.puts("Welcome to Codebreaker!")
    end
  end
end

Now consider the following refactorings of the start method:

@output.write("Welcome to Codebreaker!\n")
@output.print("Welcome to Codebreaker!\n")
"Welcome to Codebreaker!".split('').each{|c| @output.write(c) }; @output.write "\n"
@output << "Welcome to Codebreaker!\n"

All of these produce the exact same result, which is sending the string to the output stream. However, all of these refactorings will fail the test suite.
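You can check that claim directly in plain Ruby: each variant below writes identical bytes to a StringIO, yet only the first would satisfy should_receive(:puts).

```ruby
require 'stringio'

# Four behaviorally identical ways to emit the welcome message.
implementations = [
  ->(out) { out.puts "Welcome to Codebreaker!" },
  ->(out) { out.write("Welcome to Codebreaker!\n") },
  ->(out) { out.print("Welcome to Codebreaker!\n") },
  ->(out) { out << "Welcome to Codebreaker!\n" },
]

# Run each against a fresh StringIO and collect the resulting bytes.
results = implementations.map do |impl|
  out = StringIO.new
  impl.call(out)
  out.string
end

results.uniq  # => ["Welcome to Codebreaker!\n"]
```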

Now consider the following implementation:

@output.puts "Welcome to Codebreaker!"
@output.puts "You smell bad and are also ugly!"

That test passes the spec above, but it also insults our (paying?) users!

One of the primary reasons for having a test suite is so that I can refactor and have the tests validate my changes. If I have to change my tests and my code just to re-write the way I cause the same end result, then my test suite is useless for refactoring.

So now the obvious question is "Well Nick, how would you test it?"

require 'stringio'

output = StringIO.new
game = Codebreaker::Game.new(output)
game.start
assert_equal "Welcome to Codebreaker!\n", output.string

This test passes for all the implementations listed initially, and will fail for the implementation in which we insult our users.

Think about the difference here. The original example is saying "Make sure the game calls puts with the following argument" whereas my test is saying "Make sure the game outputs the following string to the output object". The second case is far more useful.
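To make the difference concrete, here's a plain-Ruby contrast using a hand-rolled spy (an illustrative stand-in, not RSpec's double): refactoring the implementation from #puts to #write breaks the interaction-style check while the state-style check keeps passing.

```ruby
require 'stringio'

# Interaction-style test double: records whether #puts was ever called,
# while still buffering output so we can also check state.
class PutsSpy
  attr_reader :puts_called
  def initialize
    @puts_called = false
    @buffer = StringIO.new
  end
  def puts(msg)
    @puts_called = true
    @buffer.puts(msg)
  end
  def write(msg)
    @buffer.write(msg)
  end
  def string
    @buffer.string
  end
end

# The refactored implementation: same output, different method.
welcome = ->(out) { out.write("Welcome to Codebreaker!\n") }

spy = PutsSpy.new
welcome.call(spy)
spy.puts_called  # false -- the "should_receive(:puts)" style check now fails
spy.string       # "Welcome to Codebreaker!\n" -- the state-based check still passes
```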

Mocking is a great tool for dealing with stable APIs. However, when mocks are introduced in areas of the codebase that have even a small amount of churn, they can cripple the test suite with their brittleness. How can we expect to embrace change when our codebase can't?

RSpec has become so intertwined with mocking and stubbing practices that many people are taking mocking to the extreme and stubbing out their own code.

I will close by deferring to the wisdom of a higher power:

It is the high coupling between modules and tests that creates the need for a mocking framework. This high coupling is also the cause of the dreaded “Fragile Test” problem. How many tests break when you change a module? If the number is high, then the coupling between your modules and tests is high. Therefore, I conclude that those systems that make prolific use of mocking frameworks are likely to suffer from fragile tests.


Keeping middle-class test doubles (i.e. Mocks) to a minimum is another way of decoupling. Mocks, by their very nature, are coupled to mechanisms instead of outcomes. Mocks, or the setup code that builds them, have deep knowledge of the inner workings of several different classes. That knowledge is the very definition of high-coupling.

- Uncle Bob Martin on Object Mentor


The test you propose will fail as soon as an additional message (praising the user?) needs to be displayed in response to start(), so it, too, is brittle due to its constraints. It's all trade-offs, my friend.

But thanks for promoting the book!
Nick Gauthier
Of course it will fail, because it is a spec on the start method, whose behavior has changed :-)
Its behavior has been _extended_. If the existing message changed, that would be one thing, but the fact that there is an additional side-effect of calling start() should not cause this expectation to fail.

The issue here is that the behavior of the start() method is reflected in another object. Both examples inspect the collaborator to specify the outcome. Both have constraints that will cause future failures depending on how things change.

You could probably make something more flexible by using assert_match instead of assert_equal, but that wouldn't protect against a mean-spirited developer insulting users. Again, it's all trade-offs. Pick your poison.
Nick Gauthier
Yeah I agree. It depends on how much you want to assert. I was considering doing it as a regex match. I decided that I wanted to assert that the method output the exact message, no more and no less. It could be a looser test, and allow for other tests to check other behaviors.

I probably would do that in practice when I added more behavior to the method.

Decisions that aren't trade-offs never instigate an interesting debate.
Extended or replaced, the behavior *changed*, and that's what tests are meant to detect, after all.

I think Nick's example highlights exactly why overmocking is a bad thing. IMO, I don't think this is a failure (per se) of BDD, RSpec, or even Mocking as a concept, it's just not the best test for the situation.
I definitely agree with your point that mocking and stubbing can easily be taken too far. What I hate the most is when I see an entire chain of internal implementation stubbed out (foo.hidden.internal.method.baz). It ends up coupling your tests to your implementation, and since tests are also code, you make your code spaghetti that much more tangled. I'm just as guilty as the next guy when it comes to thinking abstractly about what the functionality should be, rather than reduplicating my code in test form. Like dchelimsky said, it's a hard problem!
Evan Light
Agreed -- including that perhaps you should be using an "assert =~ //" (a regex match).

This is why, as a general rule, I have learned to largely avoid mocking and stubbing unless I am writing unit tests against an external system.

Yes, test doubles are great for isolating the unit under test. However, that isolation comes at a cost.

Should an external dependency change, your test doubled tests will lie to you! The test double is blissfully unaware that the interface it has doubled has changed. Therefore, your test will inaccurately pass. Under such circumstances, I want my tests to fail loudly!

In my experience, this has resulted in significant time lost tracking down production application problems!

TL;DR White-box testing coupled with test doubles is dangerous. I strongly prefer an integration test that exercises the external dependency.
Evan Light
David: While Nick's test may have been brittle, at least the test would have failed as expected. Should the output string change, his test fails. Should the implementation use print, his test fails.

The failure of the test is adequate to ensure that the test is revisited and updated.

Honestly, I admire so much of what you have done for the Ruby community. However, your tenacious adherence to mocking and stubbing as a routine TDD technique still utterly bewilders me. The brittleness of the resulting test hardly seems worth the benefit in almost any case except the rare external dependency.
Careless mocking can definitely make a test suite excessively brittle, but my overall experience with mocking has been the opposite. I used to practice a more classical TDD approach, where I used little-to-no-mocking and I wound up with bloated, slow, brittle test suites. After experimenting a bit and reading the RSpec book, I've started using mocking as a regular technique to achieve isolation in my unit tests and it's helped me create far less brittle test suites (not to mention they are much, much faster!).

It's definitely easy to shoot yourself in the foot with mocking (just like it is with ruby, git, or any of our other standard, powerful tools we use daily), but that doesn't mean it's not a useful, worthwhile tool: you just have to be aware of the dangers of over-mocking and mock with care.
Nick Gauthier
Test suite speed is not a valid argument for mocking.
Evan Light
@Myron: How were your tests more brittle before introducing test doubles?

If your test has clear preconditions, execution of the code under test, and clear expectations, I don't see how that leads to brittleness.

Can you give an example of one of your fragile pre-test double tests?
Jim Gay
I agree. If you're doing should_receive(:puts) then you're saying that calling "puts" is important.

All of this is behavior of the code, you just need to spec what behavior is important: the methods called, or the output. Sometimes it might be important that a method is called, but not in this situation.
I completely agree with Jim. It comes down to what behaviour is important, and thus needs to be tested.

If Codebreaker only cares that @output begins with "Welcome to Codebreaker!", then Nick Gauthier's approach is ideal.

If Codebreaker relies on #puts being called on @output, then the "bad test" is ideal.

As always, everything's contextual: what's important to you?
Nick Gauthier
@Jim @Nick

I totally agree. What I dislike is the tendency for people to write a test against methods being called, when that is almost always not what is important in the test.

Continuously asking "What am I testing and why am I testing it" is very important to writing good code and good tests.
> Continuously asking "What am I testing and why am I testing it" is very important to writing good code and good tests.

Hear hear!
If you are using mocks and stubs in your unit tests then you need to have a good set of acceptance/integration tests as well.

I like using mocks and stubs in my unit tests as they allow me to isolate the area of the code I'm working in, and I don't have to worry about side-effects or complex set-up of other objects. I also use cucumber though, this gives me great black-box coverage of the system.

So while changing the original example will make this spec break, it won't make any other specs in the system break, because no other specs will depend on its behaviour. So long as the behaviour is preserved it won't break the acceptance/integration tests either.
Pat Maddox
The mock-based example specifies the interaction with the collaborator. It demonstrates that you can pass in any object that responds to #puts(str) and expect it to work.

Your refactoring is valid -- when the output object responds to #puts, #write, #print, and #<<, and implements them all in basically the same way.

You are free to change the internals of a method however you want, but remember that certain changes will have broader-reaching effects than others. Changing how you build up the string to, say, an array of strings that then get joined, should certainly be encapsulated. Changing how you call collaborators is a totally different thing -- you're changing the contract, potentially in a way that's incompatible with existing clients.

Focused examples (or unit tests, or microtests, or whatever) help answer the question, "what do I need to know to use this code?" When there is a collaborator involved, one of the key things to know is the protocol that the collaborator must adhere to. An interaction-based example makes that protocol explicit. Moreover, it establishes a checkpoint. Should you decide to call #write instead of #puts, you have to consider the possible implications to existing code.

I understand where you're coming from. When you run the program, there is no externally visible difference in behavior. That is why we tend to separate the duties of checking program behavior into acceptance tests. Your unit tests then become a place for you to check an object's correctness, which includes its own logic and its interactions with other objects. Don't assume that because the external behavior of the program stays the same, that the behavior internal to objects has stayed the same as well.
tl;dr of Pat's comment: I'mma let you finish, but the difference between unit tests and acceptance tests is one of the best differences of all time.
Nick Gauthier
haha thanks @admin.

@Pat I understand that is the case when you are unit testing between collaborators where specific method invocation is important. However in the example I used, it was clear to me that the caller was not a collaborator but a higher level controller or supervisor. Actors at a higher level of abstraction should not be concerned with the implementation details but rather the results.

To sum it up, I interpreted this as an acceptance test.
Evan - there is a chapter in The RSpec Book on Mock Objects that talks about when to use them and the potential pitfalls (including the one that Nick brings up here).

Mocking external dependencies is actually considered to be a mocking no-no, because then you create a binding between your app and the external dependency's API. Instead, write a thin wrapper around that dependency that speaks in the domain of your app, and mock that as a means of isolating your app's code/specs from that dependency.

But that is only the tip of the iceberg when it comes to use cases served well by mocks. Others include: non-deterministic collaborators (intentionally random behavior), polymorphic collaborators (no need to have 3 examples when 1 would do), protocol definitions (where which methods are called represents adherence to a contract), collaborators that don't even exist yet, etc, etc.
Nick Gauthier

I read your slides from rubyconf and I agree with your stance on mocking. However, about 95% of people who copy rspec examples don't understand the implications and often code themselves into mocking hell. They are my target audience :-)
"To sum it up, I interpreted this as an acceptance test."

Part of the aim of the book is to make a distinction between application and object behavior: Cucumber is well suited to specify things at the application level (end to end tests/acceptance tests/customer tests, etc), and RSpec is well suited to specify things at the object level (unit tests, micro tests, developer tests, etc).

The example you cite comes from a chapter on describing the behavior of objects with RSpec (i.e. unit tests), after a chapter on describing the behavior of applications with Cucumber (i.e. acceptance tests), and is not at all intended to be an acceptance test. Hope that clears at least that part of this up for you.
@nickgauthier - I appreciate wanting to help people understand the subtle waters of mocking. Unfortunately, the title of this post is not in any way nuanced or subtle. If your target audience is people who blindly copy examples without thinking about their implications or trying to learn to understand them, it's likely that they won't actually read this full post, much less the thoughtful discussion that follows.
Nick Gauthier
If I had to change the title, it would be expanded to "Everything that is wrong with how everyone uses mocks in rspec to do bdd"

Every time I've encountered mocking in test code (aside from 3rd party calls), I've seen it fall into the category this post describes.

While you describe a sound way to use mocks to test object behavior, I have never seen it executed properly. This is not to say it cannot be done, but that everyone is doing it wrong.

And yeah, you can't defeat the cargo-cult.
Evan Light
@David: I'm familiar with that practice RE: APIs. You took me a little too literally. I always abstract them a level.

I'll have to take another gander at the (finally ;-) ) completed RSpec book. I bought the beta a very long time ago and haven't read it much since. I'm curious about what you have to say about mocking.

However, I've generally found white box testing to be too implementation specific. Granted, I realize that there are almost "camps" among TDD'ers: the 'behavioral statists' that I likely fall under, the 'unit mockists' that you likely fall under, and others. I tend to prefer writing acceptance/integration tests with only supporting unit tests as necessary to help me "fill in the gaps" where the feature is too complex to develop without unit tests to handle complex logic (see

Via minitest (or test/unit in 1.9):

def test_start_output
  assert_output "Welcome to Codebreaker!\n", "" do
    game.start
  end
end
(blogger.... you suck. no pre tag allowed? no formatting help? bite me)

@dchelimsky: the rspec code would break too because EVERYONE uses .once. And remember, since we do test-first, it wouldn't break because we changed the output, it'd break because we added additional assertions to the test that we then needed to make pass. That isn't brittle. That's process.
@Evan: Re: "Can you give an example of one of your fragile pre-test double tests?"

The code base I'm thinking of was a rails app, and simple changes to models (i.e. adding a new required field or changing a validation) would cascade across the test suite and break lots of other tests because those tests depended on the existing behavior of the model when they shouldn't have. For example, a controller functional test that passed in valid attributes and would assert some result failed because the passed attributes were no longer valid. You may argue that the breaking controller test was a good thing, but I don't agree. I like to have lots of fast, isolated unit tests (which do use mocking carefully), as well as some full-stack integration tests (which don't use mocking). In this case, a change to a model may break a unit test and an integration test, but it's not going to cascade across my test suite.

Having controller tests that depended on the knowledge of what constituted a valid record was a very brittle approach. Mocking has worked well to help me unit test my controllers.
@zenspider - that works assuming you have a test named test_start_output and it's the only test ever written that cares about the output of the start() method. As soon as additional context requires that start provide different output under different conditions, either this test would start to get unruly (too many states/assertions), or new tests would emerge that, in making them to pass, might cause this one to break.

re: the rspec code and "once" - in fact, it breaks just 2 or 3 pages later in the book, and a wonderful learning opportunity emerges. Hooray for examples taken out of context!
Sean DeNigris
Of course, none of these pitfalls have anything to do with BDD or RSpec. Just saying...
Pat Maddox
Can you please gist a file that demonstrates that the spec passes even when you include the insulting message to the user? Here's what I got:
Nick Gauthier
@Pat ah you're right, since it's a double it will complain about extra method calls. This is why mocking becomes annoying when your method does a few things with its input. Any test will need an elaborate mocking setup.
@Nick - a page or two later in the book we add "as_null_object" to the double, and the need for an elaborate mocking setup is eliminated.

That said, we've all been missing the deeper problem here, which was recognized immediately by Nat Pryce and reflected in these two tweets:

We've been talking about pain in the example, but none of us have "listened to the test". The real problem here is that there is a missing abstraction. Change "output" to "reporter" (or similar) and change "puts" to "message" (or similar) and the refactoring concern you raised goes away, everything is "speaking" in the domain, and all is right with the world.
Nick Gauthier
@dchelimsky I think the underlying concept still stands.

A mocking unit testing strategy encourages testing the implementation of a method because it's easy. My argument is that it doesn't get you very much in the way of code validation.

When I write tests, I think about the effects of my code, and then I validate that the effects occurred.

For example, consider testing user registration. Which of these scenarios below is most useful?

When I register on the site

1) Then the database should be asked to insert a user
2) Then there should be 50 users
3) Then there should be an additional user
4) Then I can log in

I like #4, because it focuses on the input and the ultimately desired result and goal of a feature. This is user-acceptance testing, not unit testing.

But in unit testing I like to focus on the same goals and ask the same questions.

For example, if I'm implementing a "users sorted by join date" method, I can see two ways of going about it:

1) Mock the user model and ensure that the storage adapter is called with the appropriate method and parameters to return a sorted list of users.
2) Create a couple of users, call the method, and see if the results are sorted.

#1 is white-box and #2 is black box. I'll pick black box every time because it focuses on my goals, and not my process.

Regarding the tweets:

Yes, it's easy to blame the tools, but what if a particular tool can be easily misused? In the best of hands, it is revolutionary and powerful, but in most people's hands it's destructive. For example, morphine.

I think mocking is being abused and overused and is causing people pain in the long run. It's enticing because it's simple to implement, but I find the assertions it makes to be weak and brittle.

I will agree, however, that in cases where you have a stable API that mocking can make your life easier (especially in large and complicated codebases). However, I've always found that it's not that hard to create helper methods to setup elaborate scenarios in which to do a real goal-based test. There are great gems out there like cucumber and factory girl that allow you to encapsulate complicated functionality in simple steps.
@Nick - "but what if a particular tool can be easily misused? In the best of hands, it is revolutionary and powerful, but in most people's hands it's destructive. For example, morphine."

By that logic, we should all stop using Ruby :)

re: the "sorted users" example, I agree that is likely better specified with a more black box approach.

The distinction is that, in the sorted users example, the User class returns the object (a list of users) on which expected outcomes can be specified.

In the output example, the outcome is specified _on a collaborator_. Whether we specify that as an interaction or post-action state-based verification, the example is still bound to the collaborator object, so there is little difference in terms of dependencies and brittleness.

Using a test double for "reporter" gives us the freedom to introduce different "reporter" solutions (command line, text file, web page, etc) without these examples failing. In that context, the use of a test double and a message expectation is actually less brittle than the alternatives.
Nick Gauthier
@dchelimsky no, I'm not saying stop using ruby. I'm not saying stop using morphine in hospitals either.

It's more of an education perspective. I want to show people how they should and should not use tools so they don't dig themselves into a hole.

I think we're in agreement that when a collaborator's interface is well defined and stable that mocking is a good solution.

The problem is that whenever I see examples of mocking it's always in (what I deem to be) inappropriate scenarios.

If you showed a reporter class with a simple API of reporting messages, then showed the Codebreaker class working with a mocked Reporter, I'd be much happier with your example. I worry about a generation of rubyists who will mock their test suites to hell and then be stuck in a broken world.

Of course, anyone can write bad code in any framework, but I want to teach people the appropriate uses with solid examples.

I was very disappointed to see this example in the excerpts of the rspec book, because I've had to refactor and fix code like that and it's not fun.
Well you've got me there. This is clearly a bad excerpt and I'll have it changed soon. I hope you'll consider reading the rest of the tutorial in spite of the excerpt. I'd welcome your feedback.

I think we have the same goals in terms of deepening the understanding of when/where to use different tools. I don't think we agree on everything here (I don't "always" prefer anything), but I think we probably agree on more than we disagree on.


Microgem: JSLintRB-v8

GOAL: Provide a Ruby interface for running JSLint on javascript files using v8.
require 'jslintrb-v8'
puts"var x = 5")
Will return the string:
Error at line 1 character 1: Missing "use strict" statement.
var x = 5
Error at line 1 character 10: Missing semicolon.
var x = 5
Also, all of JSLint's configuration options are available in the constructor: => false, :sub => true)
JSLintRB-v8 on Github
JSLintRB-v8 on

Using Rails's Object#to_json to create a clean JSON Presenter

The problem: You have an ActiveRecord model (or any object, really) and you want to output it to json, but you need some specific methods and maybe some processing of its attributes before you can call .to_json.

Let's pretend we have the following object:

class Widget
  attr_accessor :id
  attr_accessor :name
  attr_accessor :gadgets
end

class Gadget
  attr_accessor :id
  attr_accessor :name
end

And we want this json from a widget:

{
  id : 'widget-47',
  name : 'name of widget',
  children : [{
    id : 'gadget-63',
    name : 'name of gadget'
  }]
}
Note that the id has the class name on the front, while the name is passed through directly. Also, the sub-objects (Gadget) have their id processed.

First, let's make the Gadget presenter:

class GadgetPresenter
  attr_reader :id
  attr_reader :name
  def initialize(gadget)
    @id = %{gadget-#{}}
    @name =
  end
end

Rails provides Object extensions that give us a to_json method on every object. By default, this will call all of our attr_readers. So, all we have to do is provide attr_readers for the attributes we want in the json. Now, GadgetPresenter#to_json would give us:

{
  id: 'gadget-63',
  name: 'name of gadget'
}

Now the Widget presenter:

class WidgetPresenter
  attr_reader :id
  attr_reader :name
  attr_reader :children
  def initialize(widget)
    @id = %{widget-#{}}
    @name =
    @children = widget.gadgets.collect{ |g| }
  end
end

Pretty much the same, except for the children. This lets us call the association "children" instead of "gadgets". And all we have to do is collect up a bunch of gadget presenters. When .to_json is called on the widget, it calls to_json on the attributes. Since children is an array, it collects .to_json from each entry in the array, which calls the GadgetPresenter's to_json.

I really like this implementation because it's really just an adapter, and it only concerns itself with the data. It is also external to the Widget and Gadget class, allowing for multiple types of presenters easily.

Here is a running example: (requires rails 3)


Nice article, thanks for posting this. Quick question...where do you recommend storing the presenter classes in your Rails project? I thought about 'app/presenters', but maybe that's overkill?
app/presenters works well. Keeping everything in app/models makes little babies cry.

Minimal cucumber stack for testing a rich javascript application

These are instructions for setting up a very bare cucumber stack for testing a rich javascript application.

All code available here:

The following stack is used:

  • Cucumber
  • Capybara
  • Selenium
  • WEBrick

First, we make our awesome javascript powered webapp:

<!DOCTYPE html>
<html>
<head>
<script type='text/javascript' src='jquery-1.4.3.min.js'></script>
<script type='text/javascript'>
$(function() {
  $('#special-link').click(function(event) {
    $('#results').html("This is totally awesome");
  });
});
</script>
</head>
<body>
<h1>My Webapp</h1>
<a href='#' id='special-link'>Click on this link to make cool stuff happen</a>
<div id="results"></div>
</body>
</html>

All we're doing here is showing some text when we click a link. But we're using jquery's events to do it.

Now here is our feature file:

Feature: My Awesome Link
Scenario: Clicking the awesome link shows awesome text
Given I am on the home page
Then I should see "My Webapp"
And I should see "Click on this link to make cool stuff happen"
And I should not see "This is totally awesome"
When I click "Click on this link to make cool stuff happen"
Then I should see "This is totally awesome"

And here are our simple step definitions:

Given /^I am on the home page$/ do
  visit 'index.html'
end

Then /^I should see "([^"]*)"$/ do |content|
  within('body') do
    assert page.has_content?(content)
  end
end

Then /^I should not see "([^"]*)"$/ do |content|
  within('body') do
    refute page.has_content?(content)
  end
end

When /^I click "([^"]*)"$/ do |link_text|
  click_link(link_text)
end

And here is the cucumber environment file where all the magic happens. See comments for description:

require 'rubygems'
require 'cucumber'
require 'capybara/cucumber'
require 'webrick'

# setup capybara on selenium
Capybara.default_driver = :selenium

# This maps to webrick below
Capybara.app_host = 'http://localhost:3000'
Capybara.default_selector = :css

# Extend Cucumber's World with minitest assertions
require 'minitest/unit'
World(MiniTest::Assertions)

# Launch a webrick server in a thread
AfterConfiguration do
  server_thread = do
    project_root = File.join(File.dirname(__FILE__), '..', '..')
    server =
      :Port => 3000,
      # TODO use the real application directory
      # Using the prototype directory for now
      :DocumentRoot => File.join(project_root, 'public'),
      :Logger =>, 'cucumber.log')),
      :AccessLog => [] # to nowhere
  # Kill the server when cucumber is done
  at_exit do

The idea here is to boot webrick in a thread to serve the public directory, then connect selenium up to it. Very simple and light.

All code available here:


Sharad Jain
Nice, thanks!

PRY - Please Repeat Yourself

It's the canonical ruby metaprogramming example. You have this:

def large_image
  image_base + '-large.png'
end

def medium_image
  image_base + '-medium.png'
end

def small_image
  image_base + '-small.png'
end

And you do some dynamic programming to define the methods with similar content:

%w(large medium small).each do |size|
  class_eval %{
    def #{size}_image
      image_base + '-#{size}.png'
    end
  }
end

I have two problems with this pattern:

1) The dynamic code is far less readable and clear. It might even need a comment! The static code is very easy to read.

2) The dynamic code gives terrible stack traces
(eval):3:in `small_image': undefined local variable or method `image_base' for # (NameError)
from test.rb:28
Versus the static stack trace:
test.rb:15:in `small_image': undefined local variable or method `image_base' for # (NameError)
from test.rb:28
Here, test.rb:15 is the line inside "small_image" that calls "image_base".

The argument for metaprogramming is that it leaves only one place to change the content. However, this is easily accomplished with a helper. In fact, I already had that helper in place: image_base. If you find yourself repeating the code *inside* the method, it's worth extracting into a helper.

Metaprogramming is for a dynamic system, not for enumerations. For example, Rails controllers use dynamic programming to allow you to define your own action methods. These methods are called dynamically by the router. This is a huge oversimplification, but the idea is that the Rails authors have no idea what method names could be used, so they can't type them all out. It's a fundamentally different problem.

If you find repetition in coding to be a difficult task, consider investing some time in learning a powerful editor with a regular expression matching and replace function.

So, please repeat yourself!

RESTful Many to Many Relationships in Rails

GET    /people     => List people
POST   /people     => Create a person
PUT    /people     => Replace the collection
DELETE /people     => Delete the collection

GET    /people/:id => List the attributes of a person
PUT    /people/:id => Update the attributes of a person
POST   /people/:id => Create a new collection under a person
DELETE /people/:id => Delete a person

In rails, we use

index:   GET    /people     => List people
create:  POST   /people     => Create a person
show:    GET    /people/:id => List the attributes of a person
update:  PUT    /people/:id => Update the attributes of a person
destroy: DELETE /people/:id => Delete a person

Now we may also have:

index:   GET    /groups     => List groups
create:  POST   /groups     => Create a group
show:    GET    /groups/:id => List the attributes of a group
update:  PUT    /groups/:id => Update the attributes of a group
destroy: DELETE /groups/:id => Delete a group

Consider a relationship of Many To Many between users and groups.

How do I express the following?

"Add user X to group Y"

First thought may be:

POST /groups/Y/users?user_id=X

But technically, this means "Create a user whose attributes are {user_id => X} under group Y". This is wrong in two ways. First, we don't want to create a user inside a group. Second, the user's attributes are wrong: they would be {id => X}.

The correct request would be:

POST /groups_users?user_id=X&group_id=Y

This means "Create a new GroupUser linking node whose attributes are {user_id => X, group_id => Y}".

OK now how do I express the following?

"Put user X into a new group, whose attributes are {name => 'My Group'}"

This is *two* requests:

POST /groups?name="My Group" => returns ID Z
POST /groups_users?user_id=X&group_id=Z

However, we can actually combine the requests like this:

POST /users/X/groups?name="My Group"

In a situation where a Group Belongs To a User, this would create a group under the user.

You would get in trouble if Group Belongs To a User and Group Has and Belongs To Many Users. However, in that case, you should be using another name for one of the assets. For example, Group could belong to a Creator, and a User would have a created_groups association. So the routes would actually be different:

POST /users/X/created_groups?name="My Group"

Which would mean create a group, where User X is the creator of that group.

So, how do we handle this with Rails routing and controllers?

resources :groups do
  resources :users, :controller => 'groups_users', :only => [:index, :create, :destroy]
end

resources :users do
  resources :groups, :controller => 'user_groups', :only => [:index, :create, :destroy]
end

Now, for user and group resources, we would use a traditional controller. For the GroupsUsers and UserGroups controllers, it would be a bit different.

UserGroups controller:

index: return all the groups this user is in
create: create a new group, and add this user to that group
destroy: remove this group from the user's list of groups

Note we don't have:

show: this is redundant with groups/show.
update: this would update the attributes on a membership. If it's just a join there are no attributes; however, this may be useful if the join has attributes (for example, a role)

The most interesting action here is create:

@user = User.find(params[:user_id])
@group =[:group])
Group.transaction do
    if GroupUser.create(params[:group_user].merge(:user => @user, :group => @group))
      # success
    else
      # Could not add user to group
    end
  else
    # Could not create group
  end
end

Note that this controller is getting a bit dangerous because there are three logic paths. Generally, controllers will only have two logic paths: success and failure.

Note also that we are merging the params[:group_user] when creating it. This is because we may want to have attributes on the GroupUser. This would all have to be in the form for posting to this action.

Technically, we are creating a group under a user. While this makes perfect sense in a belongs_to relationship, it is a little mind-bending in a many-to-many situation where the relationship is reciprocal. So, I'll end with a question. Do you think that this "double action" is a violation of REST?


First of all, let me say that I am honored to be the first commenter on :)

Second, what I would do is push the logic of creating a group down into the controller. The nested attributes feature of Rails gives you this automatically. Assuming your User model looks like this:

class User < ActiveRecord::Base
  has_many :group_users
  has_many :groups, :through => :group_users

  accepts_nested_attributes_for :groups
end

Then the controller goes back to having just two paths, because update_attributes will take care of creating the group before creating the group_user. So when you do:

PUT /users/X?groups_attributes[][name]="My Group"

this would happen:

user.update_attributes :groups_attributes => [{:name => "My Group"}]
# => INSERT INTO "groups" ("name") VALUES ('My Group')
# => INSERT INTO "group_users" ("group_id", "user_id") VALUES (1, 1)
Nick Gauthier
You also get the honor of the first reply!

While that is convenient, it is not RESTful. The route you described is designed to modify the user resource, not any groups resources.

Also, you're going to go through "error hell" if a bunch of those groups and user attributes are invalid.

Doing nested attributes through a controller is trying to squeeze two entire controllers into one. For the sake of simplicity, testing, and maintainability, I'd avoid it.