- Automation – the glue that binds the tests to the code
- Vocabulary – the language that the tests are expressed in
- Syntax – the technology that the tests are expressed in (C#, Java)
- Intent – the actual scenario being tested
- Harness – the thing that runs the tests and tells you if they passed
Four roles. One person might fill more than one role, and several people might share a role:
Taking a requirement, the Stakeholder and the Analyst have a conversation:
- what does that requirement mean?
- how can we create a shared understanding?
Then the Analyst and the Tester have a conversation:
- what is the scope (‘bigness’) of this requirement?
- how will we know when we’re done?
- => Scenarios (examples)
Tester then ‘monkeyfies’ the scenarios, using the following template:
Given … - assumptions, context in which the scenario occurs.
When … - user action, interaction with the system
Then … - expected outcome
e.g.
Given we have an account holder Joe, their current account contains $100, and the interest rate is 10%
When Joe withdraws $120
Then Joe’s balance should be $-22
The tester and the developer sit down and write an automated test to implement each scenario.
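As a sketch of what that automated test might look like (plain Java with no particular framework; the `Account` class and the rule that 10% interest is charged on the overdrawn amount are illustrative assumptions, not a real API):

```java
// Sketch of the overdraft scenario as a plain Java test.
// Account and its overdraft rule are illustrative assumptions.
public class WithdrawalScenarioTest {

    // Minimal hypothetical domain class: interest is charged on the
    // overdrawn portion of the balance.
    static class Account {
        private double balance;
        private final double interestRate;

        Account(double balance, double interestRate) {
            this.balance = balance;
            this.interestRate = interestRate;
        }

        void withdraw(double amount) {
            balance -= amount;
            if (balance < 0) {
                // 10% interest on the overdrawn amount: -20 becomes -22.
                balance += balance * interestRate;
            }
        }

        double balance() { return balance; }
    }

    public static void main(String[] args) {
        // Given we have an account holder Joe with $100 and a 10% interest rate
        Account joesAccount = new Account(100.0, 0.10);
        // When Joe withdraws $120
        joesAccount.withdraw(120.0);
        // Then Joe's balance should be $-22
        if (Math.abs(joesAccount.balance() - (-22.0)) > 1e-9)
            throw new AssertionError("expected -22 but was " + joesAccount.balance());
        System.out.println("balance = " + joesAccount.balance());
    }
}
```

Note how the Given / When / Then comments map one-to-one onto lines of the test; the scenario text and the test stay in step.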
You might chain these up, but you can always categorise test code into these three partitions (Given / When / Then). This really helps the way you look at test code.
Consistency Validation Between ‘Units’
Tooling for Automation
Consider extending / creating the domain model to cover the application itself – the UI, the back end.
Loads of tools are available. Use whatever works and build on it.
Building a Vocabulary
Ubiquitous Language – Start with a shared language. It becomes ubiquitous when it appears everywhere – documents, code, databases, conversations.
You will use different vocabularies in different bounded contexts. A context might be your problem domain, testing domain, software domain, or the user interface domain.
Beware which roles understand you when you’re talking in a particular domain. Often terms will span domains.
e.g. NHibernateCustomerRepository spans three domains:
NHibernate = 3rd Party Provider Domain, Customer = Problem Domain, Repository = Software Domain
Make your tests tell a story – make it flow. Don’t hide away things in Setup methods that will make the test hard to read. If that means a little bit of duplication, so be it. ‘Damp not DRY’.
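For instance (an illustrative sketch with a hypothetical `Cart` class), a test that tells its own story keeps the Given / When / Then inline rather than buried in a setup method the reader has to go and find:

```java
// Illustrative sketch ('Damp not DRY'): the scenario's context is spelled
// out inline, even if other tests would duplicate some of it.
public class CartStoryTest {

    // Minimal hypothetical domain class for the example.
    static class Cart {
        private double total;
        void add(double price) { total += price; }
        double total() { return total; }
    }

    public static void main(String[] args) {
        // Given an empty cart -- the context is right here in the test
        Cart cart = new Cart();
        // When two items priced $10 and $15 are added
        cart.add(10.0);
        cart.add(15.0);
        // Then the total is $25
        if (cart.total() != 25.0)
            throw new AssertionError("expected 25, got " + cart.total());
        System.out.println("cart total = " + cart.total());
    }
}
```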
Syntax – Implementing Your Tests
- write your own
- keep it simple. don’t fart around writing too fancy a DSL. you’ll be surprised what testers / analysts / stakeholders will be prepared to read.
- great way to learn
- training wheels?
- very nice.
- create templates for each given / when / then which you can plug together with parameter values into scenarios
- NBehave – Joe Ocampo
Basically what you need is a way to assemble different permutations and combinations of Given / When / Then with different parameters to make different scenarios.
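A minimal sketch of that idea in plain Java (all names hypothetical, no particular framework): each step is a parameterised action over a shared context, and a scenario is just an ordered chain of steps.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of a tiny Given/When/Then runner: steps close over
// their parameters, so the same templates can be recombined with different
// values to build different scenarios.
public class MiniScenario {
    private final List<Consumer<Map<String, Object>>> steps = new ArrayList<>();
    private final Map<String, Object> context = new HashMap<>();

    MiniScenario given(Consumer<Map<String, Object>> step) { steps.add(step); return this; }
    MiniScenario when(Consumer<Map<String, Object>> step)  { steps.add(step); return this; }
    MiniScenario then(Consumer<Map<String, Object>> step)  { steps.add(step); return this; }

    void run() { steps.forEach(s -> s.accept(context)); }

    public static void main(String[] args) {
        new MiniScenario()
            .given(ctx -> ctx.put("balance", 100.0))             // Given a balance of $100
            .when(ctx -> ctx.put("balance",
                    (Double) ctx.get("balance") - 120.0))        // When $120 is withdrawn
            .then(ctx -> {                                       // Then the balance is -$20
                if ((Double) ctx.get("balance") != -20.0)
                    throw new AssertionError("expected -20");
            })
            .run();
        System.out.println("scenario passed");
    }
}
```

Swap in different parameter values (or different step lambdas) and you get a different scenario from the same templates.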
Think in terms of narrative, flow. Think in terms of bounded contexts, and who the audience (role) is for that context. Who will understand that vocabulary?
Make sure the intent is clear – that’s the main thing.
Do you want to hook into continuous integration build?
Which version of the code is it going to run against?
Keep the tests in two buckets:
- in progress
- done
Those in the ‘done’ bucket should always pass; those in progress are allowed to fail until you make them pass.
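One way to sketch the two buckets (plain Java, hypothetical scenario names; real harnesses offer tags or ‘pending’ markers for the same thing): fail the run only when a ‘done’ scenario breaks.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: 'done' scenarios must pass, 'in progress' scenarios
// are reported but allowed to fail.
public class TwoBucketRunner {
    enum Bucket { DONE, IN_PROGRESS }

    public static void main(String[] args) {
        Map<String, Bucket> buckets = new LinkedHashMap<>();
        // Hypothetical scenario names for illustration.
        buckets.put("withdrawal within balance", Bucket.DONE);
        buckets.put("overdraft charges interest", Bucket.IN_PROGRESS);

        for (Map.Entry<String, Bucket> e : buckets.entrySet()) {
            boolean passed = runScenario(e.getKey());
            if (!passed && e.getValue() == Bucket.DONE)
                throw new AssertionError("regression in done scenario: " + e.getKey());
            if (!passed)
                System.out.println("still in progress: " + e.getKey());
        }
    }

    // Stand-in for actually executing a scenario.
    static boolean runScenario(String name) {
        return name.equals("withdrawal within balance"); // pretend result
    }
}
```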
Things you can do today.
- Try it for your next requirement
- Given When Then helps guide the tests
- It’s a collaborative process – get people involved
- Works for bug fixes
- a bug is a scenario that you missed in the first place
- use the tools you’re most comfortable with
- doesn’t have to be perfect
Down The Line
What to aim for.
- ALL requirements have acceptance criteria specified up front
- helps with estimation
- acceptance tests are automated where appropriate
- just having thought about it helps – you may come back to automating it later.
- Push button, available to all.
- helps build trust with stakeholders
- Automate pragmatically
- Don’t try to automate what you can’t do manually
- Testing is validating an outcome against intention
- Non functional requirements
- Plan for false positives
- Quality is a variable
- doesn’t mean you don’t go test first
- doesn’t mean low quality code
- does mean: how complete is the solution? how many scenarios / edge cases are you going to try to meet?
- Have a shared understanding of done
- There is no Golden Hammer
- Be aware of the five aspects of test automation
- Automation, Vocabulary, Syntax, Intent, Harness
- Start simple, then you can benefit now
As with most stupid questions like this, the answer is “neither”. There are times when integration tests really help, and there are times when they can be a pain in the neck.
I was prompted to write this post when a colleague pointed me towards this page on the behaviour-driven wiki, which mentions the disadvantages of integration tests, which usually involve some complex (and often slow to run) procedure to set up an expected state in the system your code is integrating with. This tight coupling with the external system reduces agility and makes the test code brittle.
The page does point out that “even in this state the code is often much more robust and a much better functional fit than code developed under more traditional methods based on large-scale up-front design”.
I agree with the principles in the article, and I believe BDD is a great way to think, but I do think as long as the integration tests are well-factored (and hence easy to change) then the problems highlighted don’t apply – you’re still going to be quick on your feet if requirements change.
The question is whether you’re going to spend more time fixing your unit tests than you would debugging the code. If you’re confident you can write it correctly first time, and anybody needing to change it is highly unlikely to introduce bugs in the area you’re coding, it’s a waste of everybody’s time to write a unit test – the test just becomes baggage for the team to drag around.
Conversely, if there’s a risk that future changes could break what you’re coding, or you’re bored of hitting F5 in the browser to test some subtle tweak in a function deep down in a subsystem, then thinking of an imaginative way to write a lightweight unit test that isolates that function and proves it works as you want it to is probably going to save you some dull debugging time.
2007 07 10