The Future of Automated Acceptance Testing

I participated in a workshop today run by Willem van den Ende and Rob Westgeest at SPA2009 on acceptance testing.

As usual at such conferences, it was great to be able to mingle with other people who take the value of acceptance testing for granted, and to concentrate on the issues facing us as we put it into practice. A few people seemed preoccupied with how best to express their acceptance criteria, and what technology to use to automate their application, but our choice of Cucumber and Rails made me feel pretty satisfied we were winning in this regard.

I was more interested in issues of scale:

> Given that you want to drive out each new piece of functionality with automated acceptance tests, and you want to keep adding new functionality to your product, how do you keep your test run fast so that you still get rapid feedback from your test suite?

At Songkick this is starting to become a significant issue for us.

One option is to distribute your test run across multiple machines and run the tests in parallel. We’ve been impressed by the likes of IMVU, and the guys at weplay have spiked a mechanism for doing this with Cucumber test suites. SeleniumGrid has been blazing a trail in parallelizing notoriously slow Selenium tests.
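
To make the idea concrete, here’s a minimal sketch of splitting a Cucumber suite across parallel processes, assuming your feature files live under `features/`. The `WORKERS` count and the round-robin split are my own illustration, not the weplay spike itself:

```ruby
#!/usr/bin/env ruby
# Minimal sketch: deal the feature files out to N worker processes and
# run `cucumber` on each group in parallel. WORKERS and the round-robin
# split are illustrative assumptions, not the weplay implementation.

WORKERS = Integer(ENV.fetch("WORKERS", 4))

features = Dir["features/**/*.feature"].sort

# Round-robin so each worker gets a similar share of the features.
groups = features.group_by.with_index { |_, i| i % WORKERS }.values

# Start one cucumber process per group...
pids = groups.map { |group| spawn("bundle", "exec", "cucumber", *group) }

# ...and fail the build if any of them fails.
statuses = pids.map { |pid| Process.wait2(pid).last }
exit(statuses.all?(&:success?) ? 0 : 1)
```

Splitting by file is crude, of course: one slow feature can still dominate a worker, but it’s enough to show the shape of the approach.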

Another option, the importance of which only became clear to me today, is to be judicious about which tests you run. At CITCON Amsterdam last year, Keith Braithwaite described how he’s overcoming long test runs by ‘retiring’ tests that have not failed for a long time from the commit build to the nightly build. I have started to play with ideas that map each test to the source code it exercises, in order to run only the tests that cover code that has changed. Sir Kent Beck has recently revamped JUnit as an intelligent Eclipse plug-in which uses statistical models to run the tests most likely to fail first.
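
Here’s a sketch of the test-to-source mapping idea I’ve been playing with. It assumes a previous run has already dumped a map from each feature file to the source files it touched (built from coverage data, say) into a YAML file; the file name and format below are invented for illustration:

```ruby
# Sketch: only run the tests whose covered source files have changed.
# Assumes a test-file => source-files map has already been built (e.g.
# from coverage data) and dumped to YAML; file name and format are
# hypothetical.
require "yaml"

# e.g. { "features/signup.feature" => ["app/models/user.rb", ...], ... }
coverage_map = YAML.load_file("tmp/test_to_source_map.yml")

# Source files touched since the last build (here: just the last commit).
changed = `git diff --name-only HEAD~1`.split("\n")

tests_to_run = coverage_map.select { |_test, sources| (sources & changed).any? }.keys

if tests_to_run.empty?
  puts "No mapped tests cover the changed files; nothing to do."
else
  exec("bundle", "exec", "cucumber", *tests_to_run)
end
```

The hard part, of course, is keeping that map fresh as the code changes.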

This idea of prioritising the tests with the highest chance of failure seems to be the key here: the reason we write automated tests is that we want rapid, reliable feedback about any mistakes we’ve made. Fail fast has long been a motto of XP teams, but perhaps we’ve forgotten how it can relate to our build.
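
A toy version of that prioritisation, assuming you record each test’s failure history somewhere between runs (the JSON file and its fields below are made up; Kent’s plug-in uses its own statistical model):

```ruby
# Toy "most likely to fail first" ordering: sort the suite by recent
# failure history so mistakes are reported as early as possible.
# The history file and its fields are invented for illustration.
require "json"

history_file = "tmp/failure_history.json"
history = File.exist?(history_file) ? JSON.parse(File.read(history_file)) : {}

features = Dir["features/**/*.feature"]

ordered = features.sort_by do |feature|
  stats = history.fetch(feature, {})
  # Most failures first, ties broken by most recent failure.
  [-stats.fetch("failures", 0), -stats.fetch("last_failed_at", 0)]
end

exec("bundle", "exec", "cucumber", *ordered)
```

Even a crude ordering like this gets some of the fail-fast property back: a build that is going to break tends to break in the first few minutes rather than the last.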

Update: Following some discussion about this post on the CITCON mailing list, I discovered the output from a workshop along very similar lines at last year’s Agile conference: https://sites.google.com/a/itarra.com/test-grids/
