(1) Pick a new feature to add to the product you’re working on.
(2) Sit down with the customer (or their representative on your team) and brainstorm all the scenarios that might happen when a user tries to use the new feature. Try to make the scenarios as small and granular as you can, so that you can prioritise them independently, but make sure that each one constitutes some tangible difference to the user’s experience of the product.
(3) Prioritise the scenarios in terms of
- the value they’ll give to the customer if you can make them actually work for real
- how easy they’ll be to build
- how likely they are to actually happen in real life
(4) Take the highest priority scenario, sit down with your customer (or their representative on your team) and write an automated test that simulates the scenario. The canonical way to express the scenario is using the Given… When… Then… structure to
- put the application into a certain state
- do something to it
- then check that it has ended up in the state you expect.
Use your favourite test automation tool – it doesn’t have to be fancy. At this stage, it doesn’t have to pass, either. In fact it would be quite weird if it did.
Make sure that the customer stays engaged and can understand the test that you’ve written. Make sure you both agree that the test you’ve written seems to be an accurate reflection of the scenario you discussed in step 2.
We’ll call this your acceptance test.
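To make the shape concrete, here’s a minimal sketch of such an acceptance test in plain Python. The `Application` class and the saved-search feature are hypothetical stand-ins – substitute your own product and your favourite test automation tool:

```python
# A minimal Given... When... Then... acceptance test, sketched in plain Python.
# Application and the "save a search" feature are made up for illustration.

class Application:
    """Stand-in for a hook that drives the real application stack."""
    def __init__(self):
        self.saved_searches = []
        self.logged_in_user = None

    def log_in(self, username):
        self.logged_in_user = username

    def save_search(self, query):
        self.saved_searches.append(query)

def test_logged_in_user_can_save_a_search():
    app = Application()
    # Given a logged-in user
    app.log_in("alice")
    # When she saves a search
    app.save_search("jazz gigs in Bristol")
    # Then the search appears in her saved searches
    assert "jazz gigs in Bristol" in app.saved_searches

test_logged_in_user_can_save_a_search()
```

The point is that the comments read back as the scenario you agreed with the customer, so they can follow along line by line.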
(5) Say good-bye to the customer (or their representative on your team) and find a programmer to pair with – you have some code to write.
(6) Working from the outside-in with your pair, first make sure that the automated acceptance test you wrote in step 4 can actually drive the application. This can be awkward at first, but once you get the hang of it you’ll build up a battery of hooks that can make your application sing and dance with a few lines of code. It’s an investment worth making.
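As a sketch of what such a battery of hooks might look like, here’s a hypothetical in-memory driver; a real one might wrap a browser automation tool or an HTTP client instead, but the tests that use it stay just as short:

```python
# Sketch of a driver layer: one place for the awkward mechanics of driving
# the app. Here it is faked in memory; the paths, labels and notices are
# invented for the example.

class AppDriver:
    def __init__(self):
        self.current_page = "/"
        self.notices = []
        self._searches = []

    def visit(self, path):
        # A real driver would steer a browser or make an HTTP request here.
        self.current_page = path

    def save_search(self, query):
        self._searches.append(query)
        self.notices.append(f"Saved search: {query}")

    def should_see(self, text):
        assert any(text in n for n in self.notices), f"Expected to see {text!r}"

# With the hooks in place, the acceptance test reads like the scenario:
driver = AppDriver()
driver.visit("/searches/new")
driver.save_search("jazz gigs in Bristol")
driver.should_see("Saved search")
```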
(7) Now it’s time to change the code in the product to make this scenario come to life. Run your acceptance test to find out what to do next: are you trying to click a button that isn’t there? Then add one! Work your way in from the outermost layer (the failing test) into the user interface and then down into the main body of the code.
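A toy illustration of letting the failing test point the way – the `Page` class and button labels are made up for the example:

```python
# The failing acceptance test tells you exactly what's missing next.
# Page is a hypothetical stand-in for whatever renders your UI.

class Page:
    def __init__(self, buttons):
        self.buttons = buttons

    def click(self, label):
        if label not in self.buttons:
            raise AssertionError(f"no button labelled {label!r} on this page")
        return f"clicked {label}"

# First run: the test fails, naming the thing we haven't built yet.
page = Page(buttons=["Search"])
try:
    page.click("Save search")
except AssertionError as e:
    print(e)  # -> no button labelled 'Save search' on this page

# So we add the button and run again.
page = Page(buttons=["Search", "Save search"])
assert page.click("Save search") == "clicked Save search"
```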
(8) As you follow the sign-posts given by your failing acceptance test, you may find you run into a few problems:
- You get bored waiting for the acceptance test to run, because it’s slow driving the whole application stack.
- You can’t easily pinpoint the cause when the acceptance test fails.
- You discover some subtle variations or edge cases in the scenario that won’t interest the customer, or would be hard to write an acceptance test for, but need to be dealt with to make the product robust.
Listen to your boredom: Now it’s time to drop down and write a unit test. Or should I say microtest?
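For instance, here’s a microtest for the kind of edge case the customer wouldn’t want to sit through an acceptance test for. The `SavedSearches` class is hypothetical – a stand-in for whatever you find in the main body of the code:

```python
# Microtests for an edge case discovered on the way in: what happens when
# someone saves a blank search? SavedSearches is invented for illustration.

class SavedSearches:
    def __init__(self):
        self._queries = []

    def add(self, query):
        # The robustness the acceptance test didn't ask for: reject blanks.
        if not query or not query.strip():
            raise ValueError("search query must not be blank")
        self._queries.append(query.strip())

    def all(self):
        return list(self._queries)

def test_rejects_blank_query():
    searches = SavedSearches()
    try:
        searches.add("   ")
        assert False, "expected a ValueError"
    except ValueError:
        pass

def test_strips_whitespace():
    searches = SavedSearches()
    searches.add("  jazz  ")
    assert searches.all() == ["jazz"]

test_rejects_blank_query()
test_strips_whitespace()
```

These run in milliseconds, and when one fails it names the exact behaviour that broke – which is precisely what the slow, whole-stack acceptance test can’t do.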
(9) When your unit tests have gone from red to green, and you and your pair are satisfied with your refactoring, run the acceptance test again and get your next clue.
(10) Repeat from step (8) until the acceptance test passes.
This post was heavily inspired by a talk by Dan North and Joe Walnes that I saw some time ago, and by all the fun I have hacking on Songkick.com.