PartialDebugger Plug-in for Rails

Ever look at the HTML output from an old project and wonder which the hell partial rendered which bit of the HTML?

Wonder no more.

./script/plugin install git://github.com/mattwynne/partial_debugger.git

Then add this to your config/environments/development.rb

PartialDebugger.enable

…because you probably don’t want this running in production.

Rails Tip: Use Polymorphism to Extend your Controllers at Runtime

Metaprogramming in Ruby comes in for quite a bit of stick at times, the accusation being that code which modifies itself at runtime can be hard to understand. As Martin Fowler recently described, there’s a sweet spot where you use just enough to get some of the incredible benefits that Ruby offers, without leaving behind a minefield for future developers who’ll have to maintain your code.

One of my favourite techniques uses the Object#extend method, which allows you to mix the methods from a module into a specific instance of a class at run-time. In my quest to eliminate as much conditional logic as possible from my code, I’ve seen a common pattern emerge a few times. Here’s an example from a refactoring session I paired on with my colleague Craig.
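
Before we get to the Rails code, here’s a tiny standalone sketch – not from that session, just an illustration – of what Object#extend does: it mixes a module into one particular instance, leaving every other instance of the class untouched.

{{lang:ruby}}
module Shouty
  # Overrides greet for just this one object; super reaches the class's version.
  def greet
    super.upcase
  end
end

class Greeter
  def greet
    "hello"
  end
end

polite = Greeter.new
loud   = Greeter.new
loud.extend(Shouty)   # only this instance gets the Shouty behaviour

polite.greet # => "hello"
loud.greet   # => "HELLO"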

We start with a Rails controller which handles user authentication. Over the passing iterations, it has grown to support not only bog-standard logins from users of the main web application, but also a form that’s displayed on a 3rd-party partner site, as well as one used during the installation of a rich-client GUI app. All these clients need slightly different behaviour – different templates or layouts to be rendered, and different destination pages to redirect to when the login has succeeded.

Sadly the hackers passing through this controller have not been great boy scouts, and the code has started to get pretty unpleasant. This code is simplified for clarity:

{{lang:ruby}}
class SessionsController < ApplicationController

  def new
    if params[:installer]
      render :action => 'installer_signup', :layout => 'installer_signup'
    else
      render :layout => 'modal'
    end
  end

  def create
    if params[:username].blank?
      flash[:error] = "Please enter a username"
      return render_new_action
    end

    unless user = User.authenticate(params[:username], params[:password])
      flash[:error] = "Sorry, that username was not recognised"
      return render_new_action
    end

    set_logged_in_user(user)

    if params[:installer]
      @username = user.username
      return render(:template => 'installer_done', :layout => 'installer_signup' )
    elsif params[:third_party]
      return render(:template => "third_party/#{params[:third_party]}")
    else
      return redirect_to(success_url)
    end
  end
end

Notice how the conditional logic has a similar structure in both actions. Our refactoring starts by introducing a before_filter which works out the necessary extension:

{{lang:ruby}}
class SessionsController < ApplicationController

  before_filter :extend_for_client

  ...

  private

  def extend_for_client
    self.extend(client_extension_module) if client_extension_module
  end

  def client_extension_module
    return InstallerClient if params[:installer]
    return ThirdPartyClient if params[:third_party]
  end

  module InstallerClient
  end

  module ThirdPartyClient
  end
end

Notice that we don’t bother extending the controller for the ‘else’ case of the conditional statements – we’ll leave that behaviour in the base controller, only overriding it where necessary.

Now let’s extract the client-specific code out of the create action into a method that we’ll override in the modules:

{{lang:ruby}}
class SessionsController < ApplicationController

  ...

  def create
    if params[:username].blank?
      flash[:error] = "Please enter a username"
      return render_new_action 
    end

    unless user = User.authenticate(params[:username], params[:password])
      flash[:error] = "Sorry, that username was not recognised"
      return render_new_action 
    end

    set_logged_in_user(user)

    handle_successful_login(user)
  end

  private 

  def handle_successful_login(user)
    if params[:installer]
      @username = user.username
      return render(:template => 'installer_done', :layout => 'installer_signup' )
    elsif params[:third_party]
      return render(:template => "third_party/#{params[:third_party]}")
    else
      return redirect_to(success_url)
    end
  end

  ...
end

Finally, we can move the client-specific code into the appropriate module, leaving the default behaviour in the controller:

{{lang:ruby}}
class SessionsController < ApplicationController

  before_filter :extend_for_client

  def new
    render :layout => 'modal'
  end

  def create
    if params[:username].blank?
      flash[:error] = "Please enter a username"
      return render_new_action 
    end

    unless user = User.authenticate(params[:username], params[:password])
      flash[:error] = "Sorry, that username was not recognised"
      return render_new_action 
    end

    set_logged_in_user(user)

    handle_successful_login(user)
  end

  private 

  def handle_successful_login(user)
    return redirect_to(success_url)
  end

  def extend_for_client
    self.extend(client_extension_module) if client_extension_module
  end

  def client_extension_module
    return InstallerClient if params[:installer]
    return ThirdPartyClient if params[:third_party]
  end

  module InstallerClient
    def new
      render :action => 'installer_signup', :layout => 'installer_signup'
    end

    private 

    def handle_successful_login(user)
      @username = user.username
      return render(:template => 'installer_done', :layout => 'installer_signup' )
    end
  end

  module ThirdPartyClient
    def handle_successful_login(user)
      return render(:template => "third_party/#{params[:third_party]}")
    end
  end
end
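
If you want a quick regression-test safety net around a refactoring like this, something along these lines could work. It isn’t from the original session – just a sketch assuming Rails 2-style functional tests – checking that the right module gets mixed in for each kind of client:

{{lang:ruby}}
require 'test_helper'

class SessionsControllerTest < ActionController::TestCase

  test "extends the controller for installer clients" do
    get :new, :installer => "1"
    assert @controller.is_a?(SessionsController::InstallerClient)
  end

  test "leaves the default behaviour alone for ordinary logins" do
    get :new
    assert !@controller.is_a?(SessionsController::InstallerClient)
  end
end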

Polymorphism is one of the power-features of an object-oriented language, and Ruby’s ability to flex this muscle at run-time opens up some really elegant options.

Striking the Balance

This afternoon I paired up with a colleague to fix a bug that had been introduced some time ago but, because the effects weren’t very noticeable, had only just come to our attention. Fixing the defect itself was actually quite easy – the real pain was writing a script to clean up the bad data that the bug had been silently strewing all over the database since it sneaked into production.


On my way home I reflected on the root cause of the defect, and how we could have avoided it. The faulty code was pretty good: it read nicely and was obviously written test-first, but there was a tiny leak in the logic – obvious enough with hindsight, but easy to see how it had been overlooked. I pondered whether, with the investment of a little more attention at the time of writing, we might have saved my pair and me an afternoon of relatively tedious data clean-up. My reckoning was that we probably could have.

At the time the original defective code was written, the team were under a fair amount of pressure. As a start-up, we’re living on borrowed time, and we had a great product that was almost finished hidden away in private beta while our public offering languished behind the competition. There was real urgency to get the new product released. I’m extremely proud that, under that pressure, the team stuck to the practices we knew were making us most effective: writing code test-first, working in pairs and keeping the code clean and defect-free as we went. Or so we thought.

On reflection, I’m surprised to find I’m perfectly comfortable with the fact that we allowed that bug to creep through. Given the choice between going more slowly and avoiding these kinds of fairly subtle mistakes, and going at the pace we did and launching when we did with the product we did, it seems to me that going more quickly was the right choice to make at the time. Despite the fact that the net amount of programmer time was eventually greater, we exchanged that cost for the benefit of being able to launch the product at an earlier date.

This is an extremely dangerous lever to fiddle with. A programming team that allows itself to make too many mistakes will certainly not be able to ship to a predictable schedule, and may never even manage to ship at all. We were lucky that the damage this bug did to the data could be completely repaired: a more serious error might have left us having to contact users and apologise for losing their data, for example.

In this case I think we got the balance just about right, but that took skill, experience, and probably a bit of luck too.

Upcoming Conferences

Talking on the internet is fine and everything, but there’s nothing like a face-to-face chinwag. This year I’ll be hanging around at a couple of conferences so maybe we can meet up?

At Agile 2009, I’ll be running a workshop called ‘Debugging Pair Programming’. This is a spin-off from an impromptu open-space session I organised at XP Day 2008, where about 40 people helped pull apart the reasons that inhibit teams from adopting pairing as a core practice. I’m hopeful that the experience we’ll be able to gather at the session in Chicago will help surface some really revealing answers.

Later in the year I’m going to be on a panel at the UK Lean Conference discussing my experiences of adopting lean practices in my teams at the BBC and Songkick.com. Rob Hathaway (@kanbanjedi) and Karl Scotland (@kjscotland) have put together a terrific line-up of speakers and sessions and I’m really looking forward to the discussions to be had.

See you there?

BDD Joy in 10 Easy Steps

(1) Pick a new feature to add to the product you’re working on.

(2) Sit down with the customer (or their representative on your team) and brainstorm all the scenarios that might happen when a user tries to use the new feature. Try to make the scenarios as small and granular as you can, so that you can prioritise them independently, but make sure that each one constitutes some tangible difference to the user’s experience of the product.

(3) Prioritise the scenarios in terms of

  • the value they’ll give to the customer if you can make them actually work for real
  • how easy they’ll be to build
  • how likely they are to actually happen in real life

(4) Take the highest priority scenario, sit down with your customer (or their representative on your team) and write an automated test that simulates the scenario. The canonical way to express the scenario is using the Given… When… Then… structure to

  • put the application into a certain state
  • do something to it
  • then check that it has ended up in the state you expect.

Use your favourite test automation tool – it doesn’t have to be fancy. At this stage, it doesn’t have to pass, either. In fact it would be quite weird if it did.
Make sure that the customer stays engaged and can understand the test that you’ve written. Make sure you both agree that the test you’ve written seems to be an accurate reflection of the scenario you discussed in step 2.
We’ll call this your acceptance test.
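
For example, the scenario and its glue code might look something like this. This is only a sketch, assuming Cucumber with Webrat on a Rails app – the scenario itself, the step definitions, the new_session_path route and the User attributes are all made up for illustration:

{{lang:ruby}}
# features/login.feature (plain text, readable by the customer):
#
#   Scenario: Registered user logs in
#     Given a user "matt" with password "secret"
#     When "matt" logs in with password "secret"
#     Then he should see his dashboard

# features/step_definitions/login_steps.rb
Given /^a user "([^"]*)" with password "([^"]*)"$/ do |username, password|
  User.create!(:username => username, :password => password)
end

When /^"([^"]*)" logs in with password "([^"]*)"$/ do |username, password|
  visit new_session_path
  fill_in "Username", :with => username
  fill_in "Password", :with => password
  click_button "Log in"
end

Then /^he should see his dashboard$/ do
  response.should contain("Your dashboard")
end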

(5) Say good-bye to the customer (or their representative on your team) and find a programmer to pair with – you have some code to write.

(6) Working from the outside-in with your pair, first make sure that the automated acceptance test you wrote in step 4 can actually drive the application. This can be awkward at first, but once you get the hang of it you’ll build up a battery of hooks that can make your application sing and dance with a few lines of code. It’s an investment worth making.

(7) Now it’s time to change the code in the product to make this scenario come to life. Run your acceptance test to find out what to do next: Are you trying to click a button that isn’t there? Then add one! Work your way in from the outer-most layer (the failing test) into the user-interface and then down into the main body of the code.

(8) As you follow the sign-posts given by your failing acceptance test, you may find you run into a couple of problems:

  • You get bored waiting for the acceptance test to run, because it’s slow driving the whole application stack
  • You can’t easily pinpoint the cause when the acceptance test fails.
  • You discover some subtle variations or edge cases in the scenario that would not interest the customer, or will be hard to write an acceptance test for, but will need to be dealt with to make the product robust.

Listen to your boredom: Now it’s time to drop down and write a unit test. Or should I say microtest?
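
For instance, a microtest around the User.authenticate call we saw in the sessions controller earlier might look like this – an RSpec 1.x-style sketch, with the model attributes invented for illustration. It runs fast because it exercises one object rather than driving the whole application:

{{lang:ruby}}
describe User, ".authenticate" do
  it "returns nil when the password is wrong" do
    User.create!(:username => "matt", :password => "secret")
    User.authenticate("matt", "wrong").should be_nil
  end
end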

(9) When your unit tests have gone from red to green, and you and your pair are satisfied with your refactoring, run the acceptance test again and get your next clue.

(10) Repeat from step (8) until the acceptance test passes.

This post was heavily inspired by a talk I saw some time ago from Dan North and Joe Walnes, and by all the fun I have hacking on Songkick.com

Kanban State of Mind

  • There are no iterations: only now. Work at a pace you can truly sustain.
  • Done means it is in the user’s hands. Nothing less.
  • Limit the Work in Progress. This forces you to get things done, or you’ll have nothing else to do.
  • Get better all the time. Keep tuning your process and tools to fit the way you need to work today – make kaizen a culture, not an event. Everyone is responsible.
  • Decide with data. Collect the data you need in time to make responsible decisions.

Team Analysis

My team has been working with a Kanban pull / flow system which means we have the luxury of being able to do just-in-time analysis of individual features, rather than trying to do them in a batch for an iteration. We’re getting better at this analysis, and I thought it was worth sharing what I think are some good habits we’ve developed.

Leave it until the last responsible moment

Because we move too fast to be able to write down all the details of a feature, part of the point of our analysis process is to build the shared understanding that will live in the heads of the people who’ll build the feature. The closer you do this to the time when the feature will be built, the better quality this understanding will be.

Bring the right people

Picking the people to do the analysis is important. You need a good cross-section of the people who have a vision of what they’d like the feature to be, and those with the various skills required to build it. After the meeting, these people will carry away the shared understanding of exactly what the feature is, so it’s also important that they’re likely to be the ones who will actually work on it.

Agenda

Start by talking over the feature and identifying the new scenarios that the product will need to support in order to deliver this feature. At this point it’s best not to get drawn into how hard each scenario might be to build or how much value it will add to the product – you will prioritise them later. It’s best if someone has already prepared a list of these in advance, but that list should be regarded as a catalyst for this stage of the discussion rather than a way to skip it, as the rest of the group may well identify missing scenarios or suggest innovative alternatives.

Write each scenario on a pink card with a short-hand identifier for the feature in the top-right-hand corner. Nominate a scribe in the group to do this.

Once you have your list of scenarios, get someone else in the group (other than the scribe) to read out each card in turn. Does the description make sense to everyone? Don’t be afraid to rip up cards and re-write them if the description of the scenario isn’t as clear as it could be. Now focus on the scenario in detail. Try to think of circumstances under which it might fail, noting any edge cases or success / failure criteria on the back of the card. You may find that whole new scenarios emerge at this stage. That’s fine and all part of the process – better to find out now than when you’re hacking on it.

Finally, you have a list of possible scenarios and an idea of the complexity of each one. At this point you should be able to prioritise the individual scenarios and decide which ones are going to make it within the scope of the feature. Not everyone necessarily needs to be involved in this decision, but it’s good if everyone in the group at least understands and has agreed to the reasons why it was made.

Sign Up

Before you leave the meeting, make sure everyone initials the green card to say they were there at the meeting. This serves to help whoever comes to build the feature so that they can easily find the experts who were there when the feature was discussed. Signing up means you understand what all the pink scenario cards mean – could you re-write them if they got lost?

Timebox

Analysis is important work, but it’s also tough and tiring. If the meeting lasts longer than an hour, take a break and come back to it later.

Don’t Confuse Estimates with Commitments

Estimate: an approximate calculation of quantity or degree or worth

Commitment: the act of binding yourself (intellectually or emotionally) to a course of action

  • Estimates are not commitments.
  • Plans based only on estimates are bad plans.
  • Commitments based only on estimates are at best foolish, at worst dishonest.

Estimates are certainly a useful tool for making realistic commitments, but there are many others that are equally useful, if not more so. Working at a sustainable pace, measuring data about past performance, and facilitating the genuine consensus of the whole team when committing are the way to make plans that can actually be achieved.

Goodbye CruiseControl.rb, Hello Hudson

Imagine you have a friend who writes a blog. Maybe you actually do. Let’s call him ‘Chump’. One day you’re chatting, and the conversation turns to technology. It turns out that Chump is using Dreamweaver to write his blog entries, and manually uploading them to his site via FTP. You’re appalled.

“How do you update the RSS feed?” you enquire, trying to conceal the horror in your voice.

“Oh, I just edit the Atom file manually, it’s not that hard,” says Chump.

Maybe nobody ever told Chump about WordPress.

At work, we just switched our build server from CruiseControl.rb to Hudson, and we won’t be looking back.

Ruby people, for some reason, seem distinctly inclined to use build servers made out of Ruby too. That’s nice and everything, but these things are child’s play in comparison to the maturity, usability, and feature-set of Hudson.

Here’s why I recommend you switch to Hudson for your Ruby / Git projects:

  • open source
  • piss easy to set up, even if you have no idea what Java even is
  • solid Git support
  • works with CCMenu (or your favourite CruiseControl monitoring desktop widget)
  • kill builds from the GUI
  • in fact, manage everything from the GUI
  • distributed architecture, allowing you to delegate builds to multiple machines
  • huge, active plug-in support
  • you have better things to do with your time than faff around hacking on your build server

The problem is, it doesn’t have a smug website with fancy branding, so you probably overlooked it the first time. Go back and take another look.

The Future of Automated Acceptance Testing

I participated in a workshop today run by Willem van den Ende and Rob Westgeest at SPA2009 on acceptance testing.

As usual at such conferences, it was great to be able to mingle with other people who take the value of acceptance testing for granted, and to concentrate on the issues facing us as we go ahead and use them. A few people seemed preoccupied with how best to express their acceptance criteria and what technology to use to automate their application, but our choice of Cucumber and Rails made me feel pretty satisfied that we were winning in this regard.

I was more interested in issues of scale:

> Given that you want to drive out each new piece of functionality with automated acceptance tests, and you want to keep adding new functionality to your product, how do you keep your test run fast so that you still get rapid feedback from your test suite?

At Songkick this is starting to become a significant issue for us.

One option is to distribute your test run across multiple machines and run the tests in parallel. We’ve been impressed by the likes of IMVU, and the guys at weplay have spiked a mechanism for doing this with Cucumber test suites. Selenium Grid has been blazing a trail in parallelizing notoriously slow Selenium tests.
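
The mechanics of splitting the suite can be as crude as dealing the feature files out across the available machines. Here’s a toy sketch of that – nothing like weplay’s actual spike, just the basic idea:

{{lang:ruby}}
workers  = 4
features = Dir["features/**/*.feature"].sort

# Deal the feature files out round-robin across the workers.
slices = Hash.new { |hash, key| hash[key] = [] }
features.each_with_index { |file, i| slices[i % workers] << file }

# Worker n would then run: cucumber <the files in slices[n]>
slices.each { |n, files| puts "worker #{n}: #{files.size} features" }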

Another option, the importance of which only became clear to me today, is to be judicious about which tests to run. At CITCON Amsterdam last year, Keith Braithwaite described how he’s overcoming long test runs by ‘retiring’ tests that have not failed for a long time from the commit build to the nightly build. I have started to play with ideas that map each test to the source code it exercises, in order to run only the tests that cover code that has been changed. Sir Kent Beck has recently revamped JUnit as an intelligent Eclipse plug-in which uses statistical models to run the tests most likely to fail first.
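
To make the ‘only run what changed’ idea concrete, here’s a toy sketch – not a real tool. The coverage map is hypothetical and would have to come from somewhere, for example coverage data gathered during a previous full run:

{{lang:ruby}}
# Hypothetical map from source files to the specs that exercise them.
COVERAGE_MAP = {
  "app/models/user.rb"                     => ["spec/models/user_spec.rb"],
  "app/controllers/sessions_controller.rb" => ["spec/controllers/sessions_controller_spec.rb"]
}

# Work out which files changed since the last known-good build...
changed = `git diff --name-only HEAD~1`.split("\n")

# ...and run only the specs that cover them.
to_run = changed.map { |file| COVERAGE_MAP.fetch(file, []) }.flatten.uniq
exec("spec", *to_run) unless to_run.empty?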

This idea of prioritising the tests with the highest chance of failure seems to be the key here: the reason we write automated tests is that we want rapid, reliable feedback about any mistakes we’ve made. ‘Fail fast’ has long been a motto of XP teams, but perhaps we’ve forgotten how it can relate to our build.

Update: Following some discussion about this post on the CITCON mailing list, I discovered the output from a workshop along very similar lines at last year’s Agile conference: https://sites.google.com/a/itarra.com/test-grids/