Our Consumer Culture and What it Means for Software Craftsmanship

We have an old Kenwood Chef A701 that’s hardly ever out of use in our kitchen. A few days ago, the gearbox went. I bought a reconditioned one on eBay for £23, and this morning I made the time to do the repair.

Kenwood Chef

As I tinkered away with my screwdriver, I reflected on how grateful I was that the designers of this machine had made sure it could be maintained by an idiot like me. I actually made our A701 by cannibalising two broken machines, and I have always been delighted by how easy it is to work on.

Perhaps it’s my confirmation bias, but it seems that machines like this are no longer designed with maintenance in mind. In our consumer society, we’re encouraged to throw things away rather than repair them, and globalisation ensures that this is often the apparently rational economic decision: paying a local repairman to fix your DVD player will cost more than popping down to Tesco and buying a new one. Thank goodness for those cheap Chinese factories, eh?

If we don’t design our manufactured goods for maintainability, perhaps it’s no wonder we often feel like we’re swimming against the tide trying to persuade people that it’s important in software.

Agile / Lean Software Development


Relish Roadmap

I want to give you some news about the future of Relish. A lot has happened since we first started the project just over a year ago. Justin quit the project to concentrate on his new role on the RSpec core team, The Cucumber Book went into beta, and my wife gave birth to our second child. All of those things have meant that Relish hasn’t been able to progress as quickly as I’d have ideally liked, so thanks for sticking with us!

I’m still passionate about the vision for Relish, and as I teach training courses on using Cucumber and BDD, it seems to resonate with a lot of people. With The Cucumber Book pretty much behind us now, Relish will come back into focus and the pace will pick up again. I hope to have it launched by the end of the first quarter of 2012.

Here’s a rough outline of what we plan to do:

Usability & Information Architecture Enhancements

I have an IA specialist on the team now, and you may have noticed the first tweaks we’ve been making this week. There’s a lot to catch up on, so we’ll focus on this until the existing features are polished to his (and hopefully your) satisfaction. Please let us know how we’re doing as we progress, and tell us if any of the changes annoy or delight you.

If there’s anything specific you’d like to see done that isn’t already logged in UserVoice, please add it there or reply to this post.

Test Results

Relish can’t deliver on the promise of living documentation until the non-technical readers can tell, when they’re looking at a scenario, whether it’s passing or not. We’ll be building a plugin for Cucumber that allows developers and testers to send their test results to Relish, so each scenario can be rendered with a big green tick if it’s passing. This will also enable us to start building all kinds of exciting dashboards for project managers to get a high-level overview of what’s going on.

Commenting / Feedback

Just being able to read features is not enough. I want Relish to be a collaboration tool, and that means that stakeholders should be able to comment on and give feedback about the documentation they’re reading.

RSS / Activity Feeds

People who want to stay up to date with changes to a project will be able to get a nice high-level summary via RSS, and possibly other means too.

Enterprise Install

As the codebase stabilises, I’m becoming increasingly comfortable with the idea of a stand-alone installer for customers who are not happy about having their features stored outside of their firewall. In the new year, I’ll be looking for a couple of friendly enterprise customers to help me shape this. If you’re interested, please get in touch.

Plans & Pricing

Yes, it had to happen eventually :)

When we come out of private beta (which is still a few months away), it will be with paid accounts for private projects. This is a hard call to make, and I’ll be working with our beta tester community to get an idea of what they feel is a reasonable price to pay for the service. Public accounts like RSpec and VCR will continue to be free.

As always, I’m keen to hear what you think of these plans. Let me know in the comments.

Agile / Lean Software Development


BDD Training

Update: This training is now available as a public course, starting October 8th in London.

Would you like to learn how Behaviour-Driven Development can help your company get better at software development?

I’ve helped several teams learn BDD, and I’ve started to formalise the training I’ve been doing into a set of course modules. The modules aim to provide the foundations for a team’s successful adoption of BDD.

We start by immersing the whole team in BDD for a day to get everyone enthusiastic about the process. Then I take the programmers and testers and we implement their very first scenario, end-to-end, on their own code. Once we’ve proved it can be done, I work with project managers, product owners, and development leads to streamline their agile process and get the best from BDD. We practise collaborative scenario-writing sessions, learn how to use metrics to track progress, and see how Kanban and BDD can fit into your existing agile process.

Please take a look at the course prospectus and get in touch to see how I can help.

Agile / Lean Software Development
BDD


The Real Point of Planning Poker

It’s funny: you’d think, from reading about planning poker, that the purpose of the exercise is to come up with accurate estimates. I think that’s missing the point.

The estimates are a useful by-product, if your organisation values such things, but actually the most important benefit you get from planning poker is the conversation. As part of the exercise, you explore the story as a team, and uncover any misunderstandings about the scope and depth of the work to be done to satisfy the story. The result of this exploration is a shared understanding of what the story means.

There are other ways to have this same conversation. My favoured practice is to hold a specification workshop where the team explores the scenarios that a user could encounter when using this new functionality. These scenarios are a much more useful product, to me, than an estimate. They give me a starting point for writing my automated acceptance tests, and they also give us all a concrete reference point as to the scope of the story. If my organisation needs estimates to be happy, we can count the number of scenarios to give a realistic feel for the relative size of the story.

Agile / Lean Software Development


Hi-Fidelity Project Management

If the only metric you use for measuring and forecasting your team’s progress is their iteration velocity, you’re missing out on a great deal of richer information that, for just a few extra minutes per day, you could easily be collecting. This is information that the team can use during the iteration to help spot and fix problems that are holding them back.

If you’re using scrum, you’re probably familiar with using a Release Burndown chart to track the team’s progress, iteration by iteration, towards a bigger goal:

Release Burn-Down Chart

Here we can see the team’s velocity appears to be slowing down. They might still release at the end of Sprint #9, but there’s a chance, if the flattening trend in their velocity continues to worsen, that they may never finish the project at all.

We have enough information here to know that something may be wrong. We can use this information to warn stakeholders that our release date may be at risk of slipping, and we may also start looking through the backlog of stories for things we can drop.

The problem with these responses is that they’re purely reactive. Assuming there is a systemic cause at the root of this slow-down, we’re not doing anything to deal with it and get the team back on track. We can go and ask the team what they think might be wrong, and try to help them correct any problems they can identify. This can often work, but sometimes the team may not have the experience or perspective to be able to see what’s going wrong.

Another problem is that we’ve had to wait until the end of the iteration to get this information. For an organisation that’s used to going into the darkness of 9-month waterfall projects, getting iteration velocity figures once a fortnight can seem pretty decent. An awful lot can happen inside an iteration, though, and having it summed up in a single number loses a lot of important detail. Like the signal on an old short-wave radio, this is low-fidelity data.

One crucial piece of data that’s hidden within the burn-down chart is what I call the rate of discovery. The whole point of agile development is that it’s iterative: we build and ship something, get some feedback, learn from that, build and ship some more, and repeat until our stakeholders are happy with it. If we’re doing iterative (as opposed to repetitive) development, we’re going to discover new user stories as we go through the project, perhaps even as we go through the iteration. This is a good thing: it means we’re listening to our customers. We need to make sure our project plans can handle this as a matter of course.

Going back to our release burn-down, we want to separate the rate of discovery from the rate of delivery. A great way to do this is simply to flip the vertical axis of the chart and use a Release Burn-Up instead. On it we can track two lines instead of one: first we draw the number of completed stories (or story points), and then, stacked on top of that, we draw the number of stories still to do. That includes any story not yet done – whether it’s in the backlog or being worked on in the next iteration.
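
Here’s a minimal sketch of how you might plot such a burn-up in Python with matplotlib. The iteration-by-iteration numbers are invented purely for illustration, but they’re arranged to show the shape described below: steady delivery, with steady discovery stacked on top.

```python
# A minimal sketch of a release burn-up chart; the numbers are invented.
import matplotlib.pyplot as plt

iterations = range(1, 8)
done = [5, 11, 17, 24, 30, 36, 42]    # cumulative stories completed
to_do = [45, 44, 43, 43, 42, 41, 41]  # barely falling: new stories keep being discovered
total_scope = [d + t for d, t in zip(done, to_do)]

plt.plot(iterations, done, label="Done")
plt.plot(iterations, total_scope, label="Total scope (done + to do)")
plt.fill_between(iterations, done, alpha=0.3)  # the area of completed work
plt.xlabel("Iteration")
plt.ylabel("Stories")
plt.title("Release Burn-Up")
plt.legend()
plt.show()
```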

I love these charts – they seem to easily map to people’s understanding of the project. When you explain that the area underneath the bottom line represents all the features that have been done, it’s easy for anyone involved with the project to quickly understand what it means. In the case of the chart above, we can identify that while the team are delivering at a pretty steady rate, they’re discovering features at a steady rate too. They’ll probably need to de-scope if they want to meet their release date.

We can add extra fidelity to this chart in two dimensions: We can collect samples more often, and we can collect details about just how done the stories are as they move from ‘not done’ into ‘done’. Let’s start by collecting more detail about how done the stories are. Imagine our task board looks like this at the end of an iteration:

One story (Story A) is done, and 8 other stories are not done. As we track these counts over time, we can draw one line on our Release Burn-Up chart for each category of ‘not done’ and stack the lines:

This chart has another name: a Cumulative Flow Diagram (CFD). We call it this because as stories flow from ‘not done’ to ‘done’ across the task board, we’re drawing the accumulation of that flow on the diagram. There are lots of things we can glean from this diagram.
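
As a rough sketch, a CFD is easy to draw from daily counts of the stories sitting in each state. The numbers here are invented again, arranged to show the kind of design-stage bottleneck discussed next:

```python
# A minimal sketch of a cumulative flow diagram. Each list holds the number
# of stories in that state at the end of each day; the numbers are invented.
import matplotlib.pyplot as plt

days = range(1, 11)
done      = [0, 1, 1, 2, 3, 4, 4, 5, 6, 7]
in_test   = [1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
in_dev    = [2, 2, 2, 3, 3, 3, 3, 3, 4, 4]
in_design = [2, 3, 4, 5, 6, 7, 8, 8, 9, 10]   # a steadily widening band: a bottleneck
backlog   = [15, 14, 13, 11, 10, 9, 8, 8, 7, 6]

# stackplot draws the first series at the bottom, accumulating upwards
plt.stackplot(days, done, in_test, in_dev, in_design, backlog,
              labels=["Done", "In test", "In development", "In design", "Backlog"])
plt.xlabel("Day")
plt.ylabel("Stories")
plt.title("Cumulative Flow Diagram")
plt.legend(loc="upper left")
plt.show()
```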

If we look at the example above, we can see that work is stacking up in the design stage of our process. Because our CFD chart highlights this, we can put more directed effort into relieving the bottleneck on the designers, perhaps by adding an extra analyst to the team to run ahead and do some more detailed analysis of the upcoming stories in the backlog, or by helping the designers to break the existing stories up into smaller ones that are easier to understand.

You can wait until the end of the iteration to count these numbers, but why stay ignorant? If you collect this data every day, you’ll get quick feedback about where bottlenecks are appearing in your team, and be able to try small tweaks to correct them.

Agile / Lean Software Development


Random Notes from SPA2010

Usage-Centric Design

Narrative Journey Maps

  • Duncan prefers the term Usage-Centred Design to User-Centric Design. There was a book reference here but I missed it.
  • Narrative Journey Maps (NJM) are a way to model and visualise the steps a user has to follow as they try to achieve a goal.
  • Each Step is decorated with:
      • Comments
      • Questions
      • Ideas
  • An Emotional Barometer is drawn across the top, which highlights pain-points where the user may be particularly frustrated with the process.
  • NJMs are usually created from a Contextual Study where users are quietly observed trying to reach the goal.
  • They are a way to record the current state of play, and a place to start thinking about how things could change.

Personas

  • Alan Cooper 1985
  • Based on real data: capture and use direct observations of real experience
  • Group and merge those observations to form personas
  • Personas have:
      • Motivations
      • Goals
      • Frustrations
      • Behaviours

It seemed that what had worked for the presenters was to focus on just one persona at a time, when it became obvious who their core user was, and work at satisfying their needs. Once they’d started to make significant inroads into this, it became clearer and more worthwhile to look for other personas.

To Read

  • Luke Hohmann’s Innovation Games
  • Mr Squiggle (seed drawings for workshop exercises)
  • Consensus:

  • Quaker Business Method
  • Sociocracy
  • Formal Consensus (CT Butler)
  • “Participatory Decision Making” – Sam Kaner

  • “The Logical Thinking Process” – H. William Dettmer. A book on conflict clouds.
  • “Round the Bend” – Nevil Shute. Like Zen and the Art of Motorcycle Maintenance, but English.

Other Conferences

Keith Braithwaite was heading out to ‘Next Generation Testing’ and said he was really looking forward to it. He said it was quite easy to stand out from the crowd of big vendors, and that if you have something original to say you’ll likely be asked back. He also mentioned ‘Test Expo’ as being another conference in this vein.

I’m really interested in taking the Cucumber message to these conferences.

Remote Pairing and Wolf-pack programming

I met several people who are making use of remote pairing, or who would like to make more use of it. Many of them were using Ubuntu, so didn’t have access to iChat, and were struggling with slow VNC connections. I suggested screen + emacs/vim to a few people (not that I’ve used it myself, but I’ve heard good things). People mentioned plug-ins for Eclipse, and my old favourite SubEthaEdit came up. It does feel like there’s a product missing here.

Some guys ran a BoF trying out a crazy contraption they had, built on a Smalltalk environment, that allowed a dozen people to edit the same code at the same time, each on their own workstation. It sounded pretty amazing.

My Session

I ran a session, Robot Tournament, at the conference. Despite what I had considered thorough preparation, I had some rather exciting moments when the tournament engine spluttered and needed some running repairs. Overall, though, the feedback I got was positive. Some observations:

  • The (accidental) downtime gave people an opportunity to make build scripts and so on. I wonder whether this could be engineered deliberately another time.
  • More logging and visibility of exactly what’s going on when a player runs would be useful to help participants with debugging.
  • The warm-up should include calling a robot with a command-line argument so that any teething problems with reading the input can be resolved at a relaxed pace.
  • A better explanation (role play?) of how the tournament works would help.
  • Need to limit the number of players to one per team. Although it was worth experimenting with allowing more than one, there were a couple of disadvantages that seemed to outweigh the advantages:
      • when people realised they could write scripts to add several robots, this really slowed down the time to run a round, since the number of matches grows quadratically with the number of players (see the sketch after this list). I guess you could deal with this by using a league system, but for now the simplest thing seems to be to just limit the number of players.
      • there is a strategy (which the winning team used) where you use a patsy player which can recognise a game against another player from the same team and throw the game, thus giving that player an advantage. By releasing several patsy players you leverage that advantage.
  • I was surprised (and a bit disappointed) at how conservative most of the language choices were. I think we had three Ruby robots, two Java ones and one Haskell robot. Sadly I couldn’t get Smalltalk working for the guy who wanted to use it. It seemed clear that, rather than one language being particularly better than another for the problem at hand, teams who used a language they were familiar with did best.
  • It was hard for people to see what was going on when they were getting their robots running. More visibility of exactly what it looks like when their program is run in the server environment would be helpful.
  • More functionality on the UI to slice the results, and look at just how your own robot performed, would also be useful.
  • The problem was so small that tests were hardly needed. Pivoting – changing the rules of the game half-way through the tournament – might have helped here.
  • I would also be interested in trying out variable-length iterations – some long ones, some short ones.
  • Shipping simple solutions early was definitely a strategy that had worked for everyone.
  • People enjoyed the fact that the goal – getting points – was clear, so that rather than it being about writing clean code or writing tests, it was more realistic to a business context.
  • Trying a more open game where you could learn more about your opponent might be interesting.
  • Getting teams to swap code might also be interesting.
  • Doing a code show & tell wasn’t in my plan but worked really well.
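
On the round-length point above, a quick way to see how fast the matches pile up is to count the pairings. Here’s a tiny Python sketch (the player names are invented):

```python
# Why extra robots per team blew up the round length: a round-robin round
# plays one match per pair of entrants, so the match count grows quadratically.
from itertools import combinations

players = ["ruby-1", "ruby-2", "ruby-3", "java-1", "java-2", "haskell-1"]
matches = list(combinations(players, 2))  # every pair plays once
print(len(matches))  # n(n-1)/2: 15 matches for 6 players, 45 for 10, 190 for 20
```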

The session format ended up being something like this:

  • 10 minutes introduction
  • 25 minutes warm-up
  • 30-45 minutes faffing around fixing the engine while people started to build their real robots
  • break
  • 7 rounds x ~7 minutes ≈ 50 minutes of tournament rounds
  • 25 minutes code show & tell
  • 15 minutes retrospective, looking at what we’d learned and any insights

Agile / Lean Software Development


7 Reasons Why Pair Programming Makes Sense

Did I mention that I like pair programming? Here are some reasons why it’s good for you and your boss.

More Trucks

The higher the truck number of your team, the more resilient it is to losing people. Pair programming spreads knowledge of the code around the whole team, meaning each team member has much broader knowledge of the system than on a traditional team.

Free Training

Every time I pair with someone, I learn at least one new trick, be it a keyboard shortcut for my editor, a new command-prompt command, or a language idiom. This training costs nothing and builds the confidence of even the most junior team members who always have something to contribute, to say nothing of how much they will learn from sessions with their more senior colleagues.

Team Bonding

Working together in pairs breaks down barriers between people much more quickly than when they work alone, and provides a natural environment for teams to learn to trust one another and enjoy one another’s company. These are vital attributes that will get your team through the tough times, ensuring they can pull together when necessary.

Better Designs

Experienced programmers know that coding is much more a process of design than of mere implementation. When subjective design decisions are taken by at least two people on the team, those decisions have much more authority and are more likely to be understood and accepted by the rest of the team than if they had been taken alone. When one member of the pair has a good idea for a solution to a problem, this may stimulate an even better one from their pair. Science says so.

Fewer Bugs

The perpetual code review that’s done during a pairing session not only produces better designs, but it also catches a lot more defects as they are written. Pairs will tend to write more thorough automated tests, and are more likely to spot holes in logic that would otherwise have to be picked up by subsequent manual testing.

More Fun

Generally, programming in pairs makes for a more relaxed and enjoyable atmosphere in a team. People become comfortable with their own strengths and weaknesses and are able to talk and joke about their work much more readily, since they are already used to that dialogue. This is obviously terrific for morale and staff retention.

More Focus

Pair programmers do not check their emails when they’re waiting for a script to run, disappear into their inbox for 20 minutes, then come back and have to remind themselves what on earth they were doing. When they hit a little break-point, they review what they’ve done so far and make a plan for their next steps. They stay focussed. Twitter can wait.

Agile / Lean Software Development


Debug Pair Programming with Me in London

I’ll be running a preview of my Agile 2009 workshop ‘Debugging Pair Programming’ at Skills Matter on Monday 10th August.

If you’re on a team that’s using or trying to adopt pair programming, this is a great chance to explore and understand some of the complex reasons why this is such a difficult skill to master.

The session is free, so sign up soon to avoid disappointment: http://skillsmatter.com/event/agile-scrum/debugging-pair-programming

Agile / Lean Software Development


Don’t Confuse Estimates with Commitments

Estimate: an approximate calculation of quantity or degree or worth

Commitment: the act of binding yourself (intellectually or emotionally) to a course of action

  • Estimates are not commitments.
  • Plans based only on estimates are bad plans.
  • Commitments based only on estimates are at best foolish, at worst dishonest.

Estimates are certainly a useful tool for making realistic commitments, but there are many other tools that are equally, if not more, useful. Working at a sustainable pace, measuring data about past performance, and facilitating the genuine consensus of the whole team when committing are the ways to make plans that can actually be achieved.

Agile / Lean Software Development


Acceptance Tests Trump Unit Tests

At work, we have been practising something approximating Acceptance Test Driven Development for several months now. This means that pretty much every feature of the system that a user would expect to be there has an automated test to ensure that it really is.

It has given me a whole new perspective on the value of tests as artefacts produced by a project.

I made a pledge to myself when I started this new job in August that I would not (knowingly) check in a single line of code that wasn’t driven out by a failing test. At the time, I thought this would always mean a failing unit test, but I’m starting to see that this isn’t always necessary, or in fact even wise.

Don’t get me wrong. Unit testing is extremely important, and there’s no doubt that practising TDD helps you to write well-structured, low-defect code in a really satisfying manner. But I do feel that the extent to which TDD, at the level of unit testing alone, allows for subsequent changes to the behaviour of the code has been oversold.

If you think you’re doing TDD, and you’re only writing unit tests, I think you’re doing it wrong.

As new requirements come in, the tensions influencing the design of the code shift. Refactoring eases these tensions, but by definition means that the design has to change. This almost certainly means that some, often significant, portion of the unit tests around that area of the code will have to change too.

I struggled with this for a long time. I had worked hard on those tests, for one thing, and was intuitively resistant to letting go of them. More than that, I knew that somewhere in there, they were testing behaviour that I wanted to preserve: if I threw them out, how would I know it still worked?

Yet those old unit tests were so coupled to the old design that I wanted to change…

Gulliver

In my mind, I have started to picture the tests we write to drive out a system like little strings, each one pulling at the code in a slightly different direction. The sum total of these tensions is, hopefully, the system we want right now.

While these strings are useful to make sure the code doesn’t fall loose and do something unexpected, they can sometimes mean that the code, like Gulliver in the picture above, is too restrained and inflexible to change.

The promise of writing automated tests up front is regression confidence: if every change to the system is covered by a test, then it’s impossible to accidentally reverse that change without being alerted by a failing test. Yet how often do unit tests really give us regression alerts, compared to the number of times they whinge and whine when we simply refactor the design without altering the behaviour at all? Worse still, how often do they fail to let us know when the mocks or stubs for one unit fail to accurately simulate the actual behaviour of that unit?

Enter acceptance tests.

By working at a higher level, acceptance tests give you a number of advantages over unit tests:

  • You get a much larger level of coverage per test
  • You get more space within which to refactor
  • You will test through layers to ensure they integrate correctly
  • They remain valuable even as underlying implementation technology changes
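
To make the contrast concrete, here’s a minimal sketch in Python. Every name in it is invented for illustration, not taken from any real project:

```python
# A deliberately tiny "application" to contrast the two levels of test.
import hashlib

class PasswordHasher:
    """One small piece of the internal design."""
    def __init__(self, salt):
        self.salt = salt

    def hash(self, password):
        return hashlib.sha256((self.salt + password).encode()).hexdigest()

    def verify(self, password, digest):
        return self.hash(password) == digest

class App:
    """The public surface: register and log in."""
    def __init__(self):
        self.hasher = PasswordHasher(salt="s")
        self.users = {}  # email -> password digest

    def register(self, email, password):
        self.users[email] = self.hasher.hash(password)

    def log_in(self, email, password):
        digest = self.users.get(email)
        return digest is not None and self.hasher.verify(password, digest)

# Acceptance test: only touches the public surface, so it survives internal
# refactorings (we could replace PasswordHasher entirely and it still passes).
def test_registered_user_can_log_in():
    app = App()
    app.register("bob@example.com", "s3cret")
    assert app.log_in("bob@example.com", "s3cret")
    assert not app.log_in("bob@example.com", "wrong")

# Unit test: fast and precise, but coupled to the current design.
# Rename or restructure PasswordHasher and this test changes with it.
def test_hasher_rejects_wrong_password():
    hasher = PasswordHasher(salt="s")
    digest = hasher.hash("s3cret")
    assert not hasher.verify("wrong", digest)
```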

Admittedly, the larger level of coverage per test has a downside: when you get a regression failure, the signpost to the point of failure isn’t as clear. This is where unit tests come in: if you haven’t written any yet, you can use something like the Saff squeeze to isolate the fault and cover it with a new test.
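
For the unfamiliar, the Saff squeeze works by repeatedly inlining the code under a failing high-level test and pruning the parts that pass, until the test is small enough to point straight at the defect. Here’s a hedged sketch of one squeeze step, with invented names and a deliberately planted bug:

```python
# One Saff squeeze step; all names are invented, and the bug is planted
# deliberately: the discount is supposed to apply from 10 items upwards.

def price_with_discount(item):
    price, quantity = item
    discount = 0.1 if quantity > 10 else 0.0  # bug: should be >= 10
    return price * quantity * (1 - discount)

def total_price(items):
    return sum(price_with_discount(item) for item in items)

# Step 0: the failing acceptance-level test, far from the fault.
def test_bulk_orders_get_a_discount():
    assert total_price([(5.0, 10)]) == 45.0  # fails: returns 50.0

# Step 1: inline total_price and keep only the item that matters. The test
# still fails, but now it points directly at price_with_discount.
def test_bulk_orders_get_a_discount_squeezed():
    assert price_with_discount((5.0, 10)) == 45.0
```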

Acceptance tests are also much slower to run, which can matter when you’re iterating quickly over changes to a specific part of the system.

To be clear, I’m not advocating that you stop unit testing altogether. I do feel there’s a better balance to strike, though, than forcing yourself to get 100% coverage from unit tests alone. They’re not always the most appropriate tool for the job.

To go back to the metaphor of the pulling strings, I think of acceptance tests as sturdy ropes, anchoring the system to the real world. While sometimes the little strings will need to be cut in order to facilitate a refactoring, the acceptance tests live on.

The main thing is to have the assurance that if you accidentally regress the behaviour of the system, something will let you know. As long as every change you make is driven out by some kind of automated test, be it at the system level or the unit level, I think you’re on the right track.

Agile / Lean Software Development
