If you’ve ever tried to copy the source files from a Visual Studio 2005 ASP.NET solution, especially if you’re using TFS and ReSharper, you’ll probably have noticed the great steaming heaps of fluff and nonsense these tools leave all over your hard drive. Not to mention all the built assemblies lurking in your bin/Debug folders.
The five aspects of test automation:
- Automation – the glue that binds the tests to the code
- Vocabulary – the language that the tests are expressed in
- Syntax – the technology that the tests are expressed in (C#, Java)
- Intent – the actual scenario being tested
- Harness – the thing that runs the tests and tells you if they passed
There are four roles. One person might fill more than one role, and one role might be shared by more than one person:
Taking a requirement, the Stakeholder and the Analyst have a conversation:
- what does that requirement mean?
- how can we create a shared understanding?
Then the Analyst and the Tester have a conversation:
- what is the scope (the ‘bigness’) of this requirement?
- how will we know when we’re done?
- => Scenarios (examples)
The Tester then ‘monkeyfies’ the scenarios, using the following template:
– Given: the assumptions, the context in which the scenario occurs
– When: the user action, the interaction with the system
– Then: the expected outcome
Given we have an account holder Joe
and their current account contains $100
and the interest rate is 10%
When Joe withdraws $120
Then Joe’s balance should be $-22
The tester and the developer sit down and write an automated test to implement each scenario.
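As a sketch of what that pairing might produce, here is the scenario in plain Java with no particular test harness. The `Account` class is hypothetical, standing in for the real system, and the expected balance of -$22 (rather than -$20) only works if overdrawn balances are charged the 10% interest immediately, so that assumption is baked into the sketch:

```java
// Hypothetical domain class standing in for the real system under test.
class Account {
    private double balance;
    private final double interestRate;

    Account(double openingBalance, double interestRate) {
        this.balance = openingBalance;
        this.interestRate = interestRate;
    }

    // Withdrawals may take the account overdrawn; we assume overdrafts
    // are charged interest immediately (that is what makes -$22, not -$20).
    void withdraw(double amount) {
        balance -= amount;
        if (balance < 0) {
            balance += balance * interestRate; // -20 * 0.10 = -2
        }
    }

    double balance() {
        return balance;
    }
}

class WithdrawalScenarioTest {
    public static void main(String[] args) {
        // Given we have an account holder Joe
        // and their current account contains $100
        // and the interest rate is 10%
        Account joesAccount = new Account(100.0, 0.10);

        // When Joe withdraws $120
        joesAccount.withdraw(120.0);

        // Then Joe's balance should be $-22
        if (joesAccount.balance() != -22.0) {
            throw new AssertionError("Expected -22.0 but got " + joesAccount.balance());
        }
        System.out.println("Scenario passed: balance is " + joesAccount.balance());
    }
}
```

Note how the Given / When / Then lines survive as comments, keeping the test traceable back to the scenario.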
You might chain these up, but you can always categorise test code into these three partitions – Given, When, Then. This really helps how you look at test code.
Consistency Validation Between ‘Units’
Tooling for Automation
Consider extending / creating the domain model to cover the application itself – the UI, the back end.
Loads of tools are available. Use whatever works and build on it.
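One way to read ‘extend the domain model to cover the application itself’ is to put the UI or back end behind an interface expressed in domain terms, so the tests drive ‘the application’ rather than widgets or HTTP calls. A sketch, with invented names:

```java
// Tests depend on this interface, written in domain language,
// rather than on widgets or endpoints. (All names are invented.)
interface BankingApplication {
    void openAccount(String holder, double openingBalance);
    void withdraw(String holder, double amount);
    double balanceOf(String holder);
}

// One implementation drives the domain model directly in memory;
// another could drive the real UI, or the back end over HTTP.
// The scenarios themselves stay the same either way.
class InMemoryBanking implements BankingApplication {
    private final java.util.Map<String, Double> balances = new java.util.HashMap<>();

    public void openAccount(String holder, double openingBalance) {
        balances.put(holder, openingBalance);
    }

    public void withdraw(String holder, double amount) {
        balances.put(holder, balances.get(holder) - amount);
    }

    public double balanceOf(String holder) {
        return balances.get(holder);
    }
}
```

Swapping the implementation lets the same acceptance tests run quickly against the model or slowly against the full stack.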
Building a Vocabulary
Ubiquitous Language – Start with a shared language. It becomes ubiquitous when it appears everywhere – documents, code, databases, conversations.
You will use different vocabularies in different bounded contexts. A context might be your problem domain, testing domain, software domain, or the user interface domain.
Beware which roles understand you when you’re talking in a particular domain. Often terms will span domains.
<– 1 –><– 2 –><– 3 –>
1 = 3rd Party Provider Domain
2 = Problem Domain
3 = Software Domain
Make your tests tell a story – make it flow. Don’t hide away things in Setup methods that will make the test hard to read. If that means a little bit of duplication, so be it. ‘Damp not DRY’.
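To illustrate ‘Damp not DRY’ (all names here are invented): the test below repeats its setup inline so the whole story reads in one place, where pushing the `open()` calls into a shared setup method would hide half the narrative:

```java
import java.util.HashMap;
import java.util.Map;

// A toy bank, just enough to give the test something to tell a story about.
class Bank {
    private final Map<String, Double> balances = new HashMap<>();

    void open(String holder, double amount) {
        balances.put(holder, amount);
    }

    void transfer(String from, String to, double amount) {
        balances.put(from, balances.get(from) - amount);
        balances.put(to, balances.get(to) + amount);
    }

    double balanceOf(String holder) {
        return balances.get(holder);
    }
}

class TransferTest {
    // DAMP: all the context a reader needs is right here, even though
    // other tests would repeat the two open() calls. Moving them into
    // a shared setup method would save lines but lose the flow.
    static void transferMovesMoney() {
        Bank bank = new Bank();
        bank.open("Joe", 100.0);
        bank.open("Sue", 50.0);

        bank.transfer("Joe", "Sue", 30.0);

        if (bank.balanceOf("Joe") != 70.0) throw new AssertionError();
        if (bank.balanceOf("Sue") != 80.0) throw new AssertionError();
    }

    public static void main(String[] args) {
        transferMovesMoney();
        System.out.println("ok");
    }
}
```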
Syntax – Implementing Your Tests
- write your own
- keep it simple. don’t fart around writing too fancy a DSL. you’ll be surprised what testers / analysts / stakeholders will be prepared to read.
- great way to learn
- training wheels?
- very nice.
- create templates for each given / when / then which you can plug together with parameter values into scenarios
- NBehave – Joe Ocampo
Basically what you need is a way to assemble different permutations and combinations of Given / When / Then with different parameters to make different scenarios.
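A minimal sketch of that assembly idea, with made-up names: each Given / When / Then template takes parameter values and yields a step over a shared context, and a scenario is just a list of instantiated steps:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A scenario is an ordered list of steps sharing one context.
// (All names here are invented; this is not any particular tool.)
class Scenario {
    final Map<String, Double> context = new HashMap<>();
    private final List<Consumer<Map<String, Double>>> steps;

    Scenario(List<Consumer<Map<String, Double>>> steps) {
        this.steps = steps;
    }

    void run() {
        for (Consumer<Map<String, Double>> step : steps) {
            step.accept(context);
        }
    }

    // Step templates: each takes parameter values and returns a step
    // that can be plugged into any scenario in any combination.
    static Consumer<Map<String, Double>> givenBalance(double amount) {
        return ctx -> ctx.put("balance", amount);
    }

    static Consumer<Map<String, Double>> whenWithdraw(double amount) {
        return ctx -> ctx.put("balance", ctx.get("balance") - amount);
    }

    static Consumer<Map<String, Double>> thenBalanceIs(double expected) {
        return ctx -> {
            if (ctx.get("balance") != expected) {
                throw new AssertionError(
                        "Expected " + expected + " but got " + ctx.get("balance"));
            }
        };
    }
}
```

Different scenarios are then just different permutations, e.g. `new Scenario(List.of(givenBalance(100), whenWithdraw(30), thenBalanceIs(70))).run()`.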
Think in terms of narrative, flow. Think in terms of bounded contexts, and who the audience (role) is for that context. Who will understand that vocabulary?
Make sure the intent is clear – that’s the main thing.
Do you want to hook into continuous integration build?
Which version of the code is it going to run against?
Keep the tests in two buckets:
* in progress
* done
Those in the ‘done’ bucket should always pass; those in progress are allowed to fail until you make them pass.
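One lightweight way to enforce the two buckets (names invented): tag each result with its bucket, and let only a failing ‘done’ scenario break the build:

```java
import java.util.List;

// Two buckets of acceptance tests: 'done' scenarios must always pass;
// 'in progress' scenarios may fail until the team makes them pass.
class BucketedResults {
    enum Bucket { DONE, IN_PROGRESS }

    static class Result {
        final String name;
        final Bucket bucket;
        final boolean passed;

        Result(String name, Bucket bucket, boolean passed) {
            this.name = name;
            this.bucket = bucket;
            this.passed = passed;
        }
    }

    // Only a failing 'done' scenario should break the build.
    static boolean buildShouldFail(List<Result> results) {
        for (Result r : results) {
            if (r.bucket == Bucket.DONE && !r.passed) {
                return true;
            }
        }
        return false;
    }
}
```

Hooked into a continuous integration build, this keeps the ‘done’ bucket as a regression net while leaving room for tests written ahead of the code.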
Things you can do today.
- Try it for your next requirement
- Given When Then helps guide the tests
- It’s a collaborative process – get people involved
- Works for bug fixes
- a bug is a scenario that you missed in the first place
use the tools you’re most comfortable with
- doesn’t have to be perfect
Down The Line
What to aim for.
- ALL requirements have acceptance criteria specified up front
- helps with estimation
- acceptance tests are automated where appropriate
- just having thought about it helps – you may come back to automating it later.
- Push button, available to all.
- helps build trust with stakeholders
- Automate pragmatically
- Don’t try to automate what you can’t do manually
- Testing is validating an outcome against intention
- Non-functional requirements
- Plan for false positives
- Quality is a variable
- doesn’t mean you don’t go test first
- doesn’t mean low quality code
- does mean how complete is the solution? – how many scenarios / edge cases are you going to try and meet?
- Have a shared understanding of done
- There is no Golden Hammer
- Be aware of the five aspects of test automation
- Automation, Vocabulary, Syntax, Intent, Harness
- Start simple, then you can benefit now
So I got to SPA yesterday afternoon, but this is my first day proper.
The sessions are longer than at other conferences I’ve been to, which allows for more depth. I’ve been to three today:
– Code Debt (Workshop)
– Is Software Practice Advancing? (Panel Discussion)
– Real Options (Workshop)
The evening continues with more BoF sessions, so there’s no rest!
I have a couple of metaphors of my own for Scrum.
Scrum tells you to build ‘potentially shippable’ changes to your product (let’s call them ‘User Stories’) in fixed-length iterations. By estimating the relative complexity of delivering each of these changes using arbitrary units (let’s call them ‘Story Points’) you can measure how much estimated complexity was turned into ‘potentially shippable’ software over a fixed duration.
So far, so good. Where could it possibly go wrong?
A while back I alerted you to a post Karl Scotland had written on his implementation of a kanban system for producing software. Kenji Hiranabe has posted a very informative and well-researched article on the InfoQ website which also sheds a great deal of light on the practical application of this exciting emerging practice. Well worth a read if you’re open to fresh ideas on how to get stuff done.
I facilitated our regular end-of-iteration retrospective last week, and although the feedback from the team was positive, I was left with a feeling that something wasn’t right.
With our second major live release looming large on the horizon, I focussed the session on the theme of ‘Success’. My aim was to give the team a blueprint for a successful iteration to keep in mind when things were tough, and to help ensure that we were all pulling in the same direction by agreeing as a team what constitutes success for us.
Karl Scotland has posted a great description of how his team solved some issues they were having within their Scrum team by moving over to using a lean-thinking or Kanban system, based on a short buffer or Queue of Minimum Marketable Features (MMFs). It’s probably the clearest explanation I’ve seen yet of why and how to employ this emerging technique, and Karl certainly makes a compelling case for considering this as a progression for teams who are experienced with Scrum and need to be able to adapt rapidly during the development of a story or feature.
One of my key questions about Kanban is how it’s possible to predict long-term delivery dates for specific features, and although Karl goes some way to answering those questions, it looks as though you need, as well as a mature agile team, a fairly mature and trusting organisation to make this work.
I guess you also need to be working on a product that’s already in production and being updated regularly.