Before you write any code, start by brainstorming all the scenarios you’ll need to cover to make the story done. Do this collaboratively with everyone (devs, testers, UX, business people, product owner) who is interested in the story. Don’t try to make them valid Cucumber scenarios, just make a list of them on a whiteboard, index cards, or in a text file.
Now look at all the scenarios you have. Does the product owner really want you to build the product to satisfy all of them? Can you cut any out and defer them as another story to build later? Can you drop any of them altogether? Get rid of as many as you can until the story is as small as you can make it.
Now count how many scenarios you have left, and write that number on the story card. At the end of the iteration, count up how many scenarios you've managed to deliver in total across all the stories you've done, and start using that as your velocity metric. It's much, much more accurate in my experience than estimated story points. What's more, the process of exploring the scenarios means you can agree a clear scope for the story before you get started.
Teams who are doing this well are getting things done much more quickly than they did before. Not only do they build a suite of automated regression tests, but they waste a lot less time writing the wrong code because of misunderstood requirements.
Update: This training is now available as a public course, starting October 8th in London.
Would you like to learn how Behaviour-Driven Development can help your company get better at software development?
I’ve helped several teams learn BDD, and I’ve started to formalise the training I’ve been doing into a set of course modules. The modules aim to provide the foundations for a team’s successful adoption of BDD.
We start by immersing the whole team in BDD for a day to get everyone enthusiastic about the process. Then I take the programmers and testers and implement their very first scenario, end-to-end, on their own code. Now that we’ve proved it can be done, I work with project managers, product owners, and development leads, to streamline their agile process to get the best from BDD. We practice collaborative scenario-writing sessions, we learn how to use metrics to track progress, and how Kanban and BDD can fit into your existing agile process.
Please take a look at the course prospectus and get in touch to see how I can help.
I love using Cucumber to help me write software. I almost find it hard to imagine doing it any other way.
I want more people to discover this for themselves, so for the last year or so Aslak and I have been writing a book all about using Cucumber for Behaviour-Driven Development:
We hope we’ve captured some of the passion and sheer enjoyment we get from working with this amazing tool. Whether you’re a complete novice or an experienced Cucumber user, I think you’ll get a lot from the book.
What are you waiting for? Go and get yourself a copy!
Okay I’m bored of this. I need to talk about it.
I love to use Ruby, RSpec, Cucumber and Rails to do test-driven development, but my tools for running tests are just infuriatingly dumb. Here’s what I want:
- When a test fails, it should be kept on a list until it has been seen to pass
- When more than one test fails:
  - Show me the list, let me choose one
  - Focus on that one until it passes, or I ask to go ‘back up’ to the list
  - When it passes, go back up to the list and let me choose again
  - When the list is empty, I get a free biscuit
- When a test case is run, a mapping should be stored to the source files that were covered as it ran, so that:
  - When a file changes, I can use that mapping to guess which test cases to run. Fuck all this naming convention stuff, it’s full of holes.
  - At any time, I can pipe the git diff through the tool to figure out which test cases to run to cover the entire commit I’m about to make.
When I say test case, I personally mean:
- An RSpec example
- A Cucumber scenario
…but it should work for any other testing framework too.
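As it happens, Ruby’s standard library already ships the raw ingredient for that test-to-source mapping: the Coverage module records which lines of which files were executed. Here’s a minimal sketch (the method name is mine, and note that Coverage only sees files loaded after it starts):

```ruby
require 'coverage'

# Run a block under Ruby's stdlib Coverage and return the source files
# it executed. Only files loaded *after* Coverage.start are tracked.
def files_covered_by
  Coverage.start
  yield
  Coverage.result.select { |_file, lines|
    lines.compact.any? { |hits| hits > 0 }
  }.keys
end
```

A tool could store one of these file lists per test case, then invert the mapping when a file changes (or when git diff reports one) to decide which test cases to re-run.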
I feel like having a tool like this that I trusted would make a huge difference to me. There are all these various scrappy little pieces of the puzzle around: guard plugins, autotest, cucover, cucumber’s rerun formatter. None of them seem to quite do it, for me. Am I missing something?
Or shall we make one?
If you work in a regular weekly iteration rhythm, it’s quite normal to think about starting the week with a planning session, and ending the week with a retrospective. I have a new idea for you, which my team have just happened upon, but which I rather like: Swap them around.
Instead of trying to reflect at the end of the week when you’re tired, leave it until Monday morning. You’ll be fresh, you’ll have had a chance to privately reflect on the last week over your weekend. Crucially, when you leave the meeting with new ideas about how to work, you’ll have a whole week ahead of you to try them out.
Instead of leaving work at the end of the week with no clear idea what you’ll be doing the next, get ahead of the game and make a plan before you leave on Friday. You’ll have prepared the ground for getting down to business almost straight away when you walk in on Monday morning, and I suspect this means you’ll enjoy a more relaxing weekend too.
I’ve never looked at things this way before, but now I do it makes perfect sense. Why not try it and let me know how it works for you?
Recently we had a user who runs the relish gem on JRuby, and needed jruby-openssl to be loaded. He kindly submitted this patch, which I merged in without really thinking about it too much. Then the problems started.
That’s not the right way to express dependencies for different platforms using RubyGems and Bundler. I’ve done some research and I think I understand the current good practice for this, so I’m going to document it here.
The .gemspec is read at the time you build and release your gem, so any conditional logic in that file will be evaluated once when the gem is built and released on your machine. So a line like this:
s.add_runtime_dependency('jruby-openssl') if RUBY_PLATFORM == 'java'
will bake in the dependency based on whatever platform you happen to run gem build on.
What you need instead is to evaluate the platform at runtime. Bundler offers you a way to do this, in your Gemfile:
platforms :jruby do
  gem 'jruby-openssl'
end
What a JRuby user will now experience is that when your gem is loaded, they’ll see a warning:
→ relish help
JRuby limited openssl loaded. http://jruby.org/openssl
gem install jruby-openssl for full support.
It’s now up to the user to manually install the gem themselves. It seems a shame that there isn’t any way to specify this information in the gem’s manifest, so that it can be installed on a platform-specific basis when your gem is installed, but as far as I can tell there’s no way to do that right now.
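One workaround, sketched below, is for the gem itself to check at runtime and nudge the user. This is my own suggestion rather than anything Bundler mandates, and the method name and warning text are illustrative:

```ruby
# Hypothetical helper a gem could call at load time: on JRuby, try to
# load jruby-openssl and warn if it isn't installed. Returns a status
# symbol so callers can react if they want to.
def jruby_openssl_status
  return :not_jruby unless RUBY_PLATFORM == 'java'
  begin
    require 'jruby-openssl'
    :loaded
  rescue LoadError
    warn "Run `gem install jruby-openssl` for full SSL support."
    :missing
  end
end
```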
I really enjoyed Jim Weirich’s session on polite programming at the Scottish Ruby Conference. He covered a problem that’s been vexing me for some time, about avoiding the use of method aliasing, by using inheritance instead. Unfortunately, his suggested solution didn’t tell me anything I hadn’t already tried. I still think this must be possible, but that I just don’t know quite enough about Ruby to be able to achieve it. Maybe you do?
Here’s the puzzle:
Can you solve it?
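For readers who missed the talk, the kind of aliasing in question looks roughly like this. The classes here are illustrative only, not the actual puzzle:

```ruby
# The aliasing approach: rename the original method, then redefine it.
class Greeter
  def greet
    "hello"
  end
end

class Greeter
  alias_method :greet_without_shout, :greet
  def greet
    greet_without_shout.upcase
  end
end

# The politer, inheritance-flavoured alternative: put the override in a
# module and rely on super to reach the original method.
class PoliteGreeter
  def greet
    "hello"
  end
end

module Shouting
  def greet
    super.upcase
  end
end
```

PoliteGreeter.new.extend(Shouting).greet gives the same result, without clobbering the original greet or leaving a stray greet_without_shout method lying around.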
It’s perfectly possible to write automated acceptance tests without using Cucumber. You can just write them in pure Ruby. Take this test for withdrawing cash from an ATM:
Scenario: Attempt withdrawal using stolen card
Given I have $100 in my account
But my card is invalid
When I request $50
Then my card should not be returned
And I should be told to contact the bank
We could automate that test using good old Test::Unit, perhaps something like this:
class WithdrawalTests < Test::Unit::TestCase
  def test_attempt_withdrawal_using_stolen_card
    bank = Bank.new
    account = Account.new(bank)
    account.deposit(100)
    card = DebitCard.new(account)
    card.invalidate
    atm = Atm.new(bank)
    atm.insert_card(card)
    atm.request_cash(50)
    assert atm.card_withheld?, "Expected the card to be withheld by the ATM"
    assert_equal "Please contact the bank.", atm.message_on_screen
  end
end
The big disadvantage of writing acceptance tests in pure Ruby like this is that it’s unlikely you’ll be able to show this test to your team’s analyst without their eyes glazing over.
Unless your analyst is, or has recently been, a programmer themselves, they won’t be able to see past the noise of Ruby’s syntax, clean as it may be, to understand the actual behaviour that’s being specified. The specification of behaviour and the implementation of the test are all mixed up together, and that’s a problem if we want to get feedback from our stakeholders about whether we’ve specified the right thing before we go ahead and build it.
If we want the benefits of using plain language to write our behaviour specification, then we need a way to translate that into automation code that actually pulls and pokes at our application. Step definitions give you a translation layer between the plain-language specification of behaviour and the test automation code, mapping the Gherkin steps of each scenario to Ruby code that Cucumber can execute.
The cost of this extra layer is complexity: Yes, you have more test code to maintain than you would if you stuck to writing your tests in pure Ruby. The benefit is clarity: by separating the what (the features) from the how (the ruby automation code), you keep each part simpler and easier for its target audience to understand.
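The essence of that translation layer is small enough to sketch in plain Ruby. This is not Cucumber’s implementation, just a toy registry (all the names are mine) showing how a regexp can map a plain-language step onto a block of automation code, with the regexp’s captures becoming the block’s arguments:

```ruby
# Toy step-definition registry: patterns map to blocks, and captures
# from the matching regexp become block arguments, just as in Cucumber.
STEP_DEFINITIONS = {}

def Given(pattern, &block)
  STEP_DEFINITIONS[pattern] = block
end

def run_step(text)
  body = text.sub(/^(Given|When|Then|And|But) /, '')
  pattern, block = STEP_DEFINITIONS.find { |regexp, _| regexp =~ body }
  raise "Undefined step: #{text}" unless pattern
  block.call(*pattern.match(body).captures)
end

Given(/^I have \$(\d+) in my account$/) do |amount|
  @balance = amount.to_i
end
```

Calling run_step("Given I have $100 in my account") executes the block with amount bound to "100" — which is exactly the shape of the bargain described above: the feature file stays plain language, and the Ruby lives behind the pattern.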
It’s funny, you’d think, from reading about planning poker that the purpose of this exercise is to come up with accurate estimates. I think that’s missing the point.
The estimates are a useful by-product, if your organisation values such things, but actually the most important benefit you get from planning poker is the conversation. As part of the exercise, you explore the story as a team, and uncover any misunderstandings about the scope and depth of the work to be done to satisfy the story. The result of this exploration is a shared understanding of what the story means.
There are other ways to have this same conversation. My favoured practice is to hold a specification workshop where the team explores the scenarios that a user could encounter when using this new functionality. These scenarios are a much more useful product, to me, than an estimate. They give me a starting point for writing my automated acceptance tests, and they also give us all a concrete reference point as to the scope of the story. If my organisation needs estimates to be happy, we can count the number of scenarios to give a realistic feel for the relative size of the story.
I’m going to be speaking at CukeUp!, Cucumber’s very own one-day conference in London on March 24th 2011. It’s going to be a great little conference, I’m really looking forward to hearing talks from people like Gojko Adzic, Dan North, Liz Keogh, Capybara’s creator Jonas Nicklas, Joseph Wilk, Chris Matts, Antony Marcano and of course Aslak Hellesoy.
I know it’s only a couple of weeks away, but if you’re in or around the UK and interested or curious about ATDD / BDD, get yourself there: it’s going to be fun.