Agile / Lean Software Development

Sandi Metz’s Practical Object-Oriented Design comes to London!

I’m delighted to announce that I’ll be joining Sandi Metz to teach two courses on Object-Oriented design this summer in London. We’re nicknaming it POODL.

You have two options:

  1. A two-day class running from July 3-4
  2. A more in-depth three-day class running from June 25-27

Sandi mentoring at a RailsGirls workshop

Sandi is not only one of the most experienced object thinkers I know, she’s also a wonderful teacher. If you’ve read her book, Practical Object-Oriented Design in Ruby, you’ll already know her knack for bringing scary-sounding design principles gently down to earth with simple, practical examples.

Not just for Rubyists

POODL is definitely not just for Rubyists. We’ll teach the examples in Ruby, but any programmer familiar with an Object-Oriented language will be absolutely fine. The challenging stuff will be learning the trade-offs between different design patterns, and how to refactor between them.

The feedback from this course in the US has been amazing, and I’m really looking forward to being a part of it over here in Europe.

Book your ticket now!

Agile / Lean Software Development
BDD
Hexagonal Rails
OOP


What is BDD and why should I care? (Video)

This is the pitch that I give right at the beginning of my BDD Kickstart classes to give everyone an overview of what BDD is, and why I think it matters.

In this video, I cover:

  • How BDD improves communication between developers and stakeholders
  • Why examples are so important in BDD
  • How BDD builds upon Test-Driven Development (TDD)
  • Why business stakeholders need to care about refactoring

If you’d like to learn more, there are still a few tickets left for the next public course in Barcelona on 11th September 2013.

Agile / Lean Software Development
BDD


Half-arsed agile

Transitioning to agile is hard. I don’t think enough people are honest about this.

The other week I went to see a team at a company. This team are incredibly typical of the teams I go to see at the moment. They’d adopted the basic agile practices from a ScrumMaster course, and then coasted along for a couple of years. Here are the practices I saw in action:

  • having daily stand-up meetings
  • working in fixed-length iterations, called sprints
  • tracking their work as things called Stories
  • estimating their work using things called Story Points
  • trying to predict a release schedule using a thing called Velocity

This is where it started to fall down. This team have no handle on their velocity. It seems to vary wildly from sprint to sprint, and the elephant in the room is that it’s steadily dropping.

I see this a lot. Even where the velocity appears to be steady, it’s often because the team have been gradually inflating their estimates as time has gone on. They do this without noticing, because it genuinely is getting harder and harder to add the same amount of functionality to the code.

Why? Sadly, the easiest bits of agile are not enough on their own.

Keeping the code clean

Let’s have a look at what the team are not doing:

  • continuous integration, where everyone on the team knows the current status of the build
  • test-driven development
  • refactoring
  • pair programming

All of these practices are focussed on keeping the quality of the code high, keeping it malleable, and ultimately keeping your velocity under control in the long term.

Yet most agile teams don’t do nearly enough of them. My view is that product owners should demand refactoring, just as the owner of a restaurant would demand their staff keep a clean kitchen. Most of the product owners I meet don’t know the meaning of the term, let alone the business benefit.

So why does this happen?

Reasons

Firstly, everyone on the team needs to understand what these practices are, and how they benefit the business. Without this buy-in from the whole team, it’s a rare developer who has the brass neck to invest enough time in writing tests and refactoring. Especially since these practices are largely invisible to anyone who doesn’t actually read the code.

On top of this, techniques like TDD do, despite the hype, slow a team down initially, as they climb the learning curve. It takes courage, investment, and a bit of faith to push your team up this learning curve. Many teams take one look at it and decide not to bother.

The dirty secret is that without these technical practices, your agile adoption is hollow. Sure, your short iterations and your velocity give you finer control over scope, but you’re still investing huge amounts of money having people write code that will ultimately have to be thrown away and re-written because it can no longer be maintained.

What to do

Like any investment decision, there’s a trade-off. Time and money spent helping developers learn skills like TDD and refactoring is time and money that could be spent paying them to build new features. If those features are urgently needed, then it may be the right choice in the short term to forgo quality and knock them out quickly. If everyone is truly aware of the choice you’re making, and the consequences of it, I think there are situations where this is acceptable.

In my experience though, it’s far more common to see teams sleep-walking into this situation without having really attempted the alternative. If you recognise this as a problem on your team, take the time to explain to everyone what the business benefits of refactoring are. Ask them: would you throw a dinner party every night without doing the washing up?


Does the world need to hear this message? Vote for this article on Hacker News

Agile / Lean Software Development


How much do you refactor?

Refactoring is probably the main benefit of doing TDD. Without refactoring, your codebase degrades, accumulates technical debt, and eventually has to be thrown away and rewritten. But how much refactoring is enough? How do you know when to stop and get back to adding new features?

TDD loop (image credit: Nat Pryce)

I get asked this question a lot when I’m coaching people who are new to TDD. My answers in the past have been pretty woolly. Refactoring is something I do by feel. I rely on my experience and instincts to tell me when I’m satisfied with the design in the codebase and feel comfortable adding more complexity again.

Some people rely heavily on metrics to guide their refactoring. I like the signals I get from metrics, alerting me to problems with a design that I might not have noticed, but I’ll never blindly follow their advice. I can’t imagine metrics ever replacing my design intuition.

So how can I give TDD newbies some clear advice to follow? The advice I’ve been giving them up to now has been this:

There are plenty of codebases that suffer from too little refactoring but not many that suffer from too much. If you’re not sure whether you’re doing enough refactoring, you’re probably not.

I think this is a good general rule, but I’d like something more concrete. So today I did some research.

Cucumber’s new Core

This summer my main coding project has been to re-write the guts of Cucumber. Steve Tooke and I have been pairing on a brand new gem, cucumber-core, which will become the inner hexagon of Cucumber v2.0. We’ve imported some code from the existing project, but the majority is brand new code. We use spikes sometimes, but all the code in the master branch has been written test-first. We generally make small, frequent commits and we’ve been refactoring as much as we can.

There are 160 commits in the codebase. How can I look back over those and work out which ones were refactoring commits?

Git log

My first thought was to use git log --dirstat, which shows where each commit has changed files. If a commit doesn’t change the tests, it must be a refactoring commit.

Of the 160 commits in the codebase, 58 don’t touch the specs. Because we drive all our changes from tests, I’m confident that each of these must be a refactoring commit. So based on this measure alone, at least 36% of all the commits in our codebase are refactorings.
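
The core of that first pass looks something like this. This is a minimal sketch rather than the actual script I used, and it assumes the tests all live under spec/ and that master is the branch being analysed:

    # Flag every commit on master that leaves everything under spec/ untouched.
    commits = `git rev-list master`.split
    refactorings = commits.select do |sha|
      changed = `git diff-tree --no-commit-id --name-only -r #{sha}`.split("\n")
      changed.none? { |path| path.start_with?("spec/") }
    end
    puts "#{refactorings.size} of #{commits.size} commits don't touch the specs"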

Sometimes though, refactorings (renaming something, for example) will legitimately need to change the tests too. How can we identify those commits?

Commit message

One obvious way is to look at the commit message. It turns out that a further 11 (or 7%) of the commits in our codebase contained the word ‘refactor’. Now we know that at least 43% of our commits are refactorings.
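
Counting those commits is simple with git log’s --grep option. A sketch, assuming a case-insensitive match anywhere in the commit message is a good enough signal:

    # Count commits whose message mentions the word 'refactor'.
    count = `git log -i --grep=refactor --pretty=%h master`.split.size
    puts "#{count} commits mention 'refactor' in their message"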

This still didn’t feel like enough. My instinct is that most of our commits are refactorings.

Running tests

One other indication of a refactoring is that the commit doesn’t increase the number of tests. Sure, it’s possible that you change behaviour by swapping one test for another one, but this is pretty unlikely. In the main, adding new features will mean adding new tests.

So to measure this I extended my script to go back over each commit that hadn’t already been identified as a refactoring, check out the code, and run the tests. I then did the same for the previous commit and compared the results. All the tests had to pass; otherwise it didn’t count as a refactoring. If the number of passing tests was unchanged, I counted it as a refactoring.
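
In outline, the check for each commit looked something like this. Again, this is a simplified sketch rather than the actual script: it assumes an RSpec suite whose summary line reads like ‘42 examples, 0 failures’, and it glosses over details like restoring the working copy afterwards:

    # Run the specs at a given commit and return the number of passing
    # examples, or nil if anything failed.
    def passing_examples(sha)
      system("git checkout --quiet #{sha}")
      summary = `bundle exec rspec`[/(\d+) examples?, (\d+) failures?/]
      return nil unless summary && $2.to_i.zero?
      $1.to_i
    end

    # A commit counts as a refactoring if all the tests pass both before and
    # after it, and the number of passing examples is unchanged.
    def refactoring?(sha)
      before = passing_examples("#{sha}^")
      after  = passing_examples(sha)
      before && before == after
    end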

Here are the results now:

Refactoring vs Feature Adding Commits

Wow. So according to this new rule, less than 25% of the commits to our codebase have added features. The rest have either been improving the design or, perhaps, improving the build infrastructure. That feels about right from my memory of our work on the code, but it’s still quite amazing to see the chart.

Conclusions

It looks as though in this codebase, there are about three refactoring commits for every one that adds new behaviour.

There will be some errors in how I’ve collected the data, and I may have made some invalid assumptions about what does or does not constitute a refactoring commit. It’s also possible that this number is artificially high because this is a new codebase, but I’m not so sure about that. We know the Cucumber domain pretty well at this stage, but we are being extremely rigorous about paying down technical debt as soon as we spot it.

We have no commercial pressure on us, so we can take our time and do our best to ensure the design is ready before forcing it to absorb more complexity.

If you’re interested, here’s the script I used to analyse my git repo. I realise it’s a cliché to end your blog post with a question, but I’d love to hear how this figure of 3:1 compares to anything you can mine from your own codebases.

Update 28 July 2013: Corrected ratio from 4:1 to 3:1 – thanks Mike for pointing out my poor maths!

Agile / Lean Software Development
BDD


Two ways to react

Every organisation that makes software, makes mistakes. Sometimes, despite everybody’s best efforts, you end up releasing a bug into production. Customers are confused and angry; stakeholders are panicking.

Despite the pressure, you knuckle down and fix the bug.

Now it gets interesting: you have to deploy your fix to production. Depending on how your organisation works, this could take anywhere between a couple of minutes and a couple of weeks. You might simply run a single command, or you might spend hours shuffling emails around between different managers trying to get your change signed off. Meanwhile, your customers are still confused and angry. Perhaps they’ve started leaving you for your competitors. Perhaps you feel like leaving your job.

With the bug fixed, you sit down and try to learn some lessons. What can you do to prevent this from happening again?

There are two very different ways you can react to this kind of incident. The choice you make speaks volumes about your organisation.

Make it harder to make mistakes

A typical response to this kind of incident is to go on the defensive. Add more testing phases, hire more testers. Introduce mandatory code review into the development cycle. Add more bureaucracy to the release cycle, to make sure nobody could ever release buggy code into production again.

This is a reaction driven by fear.

Make it easier to fix mistakes

The alternative is to accept the reality. People will always make mistakes, and mistakes will sometimes slip through. From that perspective, it’s clear that what matters is to make it easier to fix your mistakes. This means cutting down on bureaucracy, and trusting developers to have access to their production environments. It means investing in test automation, to allow code to be tested quickly, and building continuous delivery pipelines to make releases happen at the push of a button.

This is a reaction driven by courage.

I know which kind of organisation I’d like to work for.

Agile / Lean Software Development


A coding dojo story

It was 2008, and I was at the CITCON conference in Amsterdam. I’d only started going to conferences that year, and was feeling as intimidated as I was inspired by the depth of experience in the people I was meeting. It seemed like everyone at CITCON had written a book, their own mocking framework, or both.

I found myself in a session on refactoring legacy code. The session used a format that was new to me, and to most of the people in the room: a coding dojo.

Our objective, I think, was to take some very ugly, coupled code, add tests to it, and then refactor it into a better design. We had a room full of experts in TDD, refactoring, and code design. What could possibly go wrong?

One thing I learned in that session is the importance of the “no heckling on red” rule. I watched as Experienced Agile Consultant after Experienced Agile Consultant cracked under the pressure of criticism from the baying crowd of Other Experienced Agile Consultants. With so many egos in the room, everyone had an opinion about the right way to approach the problem, and nobody was shy about sharing it. It was chaos!

We got almost nowhere. As each pair switched, the code lurched back and forth between different ideas for the direction it should take. When my turn came around, I tried to shut out the noise from the room, control my quivering fingers, and focus on what my pair was saying. We worked in small steps, inching towards a goal that was being ridiculed by the crowd as we worked.

The experience taught me how much a coding dojo is about collaboration. The rules about when to critique code and when to stay quiet help to keep a coding dojo fun and satisfying, but they teach you bigger lessons about working with each other day to day.

Agile / Lean Software Development


Optimising a slow build? You’re solving the wrong problem

At the time I left Songkick, it took 1.5 hours to run all the cukes and rspec ‘unit’ tests on the big ball of Rails. We were already parallelising over a few in-house VMs at the time to make this manageable, but it still took 20 minutes or so to get feedback. After I left, the team worked around this by getting more slave nodes from EC2, and the build time went down to under 10 minutes.

Then guess what happened?

They added more features to the product, more tests for those features, and the build time went up again. So they added more test slave nodes. In the end, I think the total build time was something like 15 hours. 15 fucking hours! You’re hardly going to run all of that on your laptop before you check in.

The moral of this story: if you optimise your build, all you’ll do is mask the problem. You haven’t changed the trajectory of your project, you’ve just deferred the inevitable.

The way Songkick solved this took real courage. First, they started with heart-to-heart conversations with their stakeholders about removing rarely-used features from the product. Those features were baggage, and once the product team saw what it was costing them to carry that baggage, they were persuaded to remove them.

Then, with a slimmed-down feature set, they set about carving up their architecture, so that many of those slow end-to-end Cucumber scenarios became fast unit tests for simple, decoupled web service components. Now it takes them 15 seconds to run the tests on the main Rails app. That’s more like it!

So by all means, use tricks to optimise and speed up the feedback you get from your test suite. In the short term, it will definitely help. But realise that the real problem is your architecture: if your tests take too long, the code you’re testing has too many responsibilities. The sooner you start tackling this problem head-on, the sooner you can start enjoying the benefits.

Agile / Lean Software Development


Building software backwards

I am utterly dismayed by the number of so-called Agile teams I meet who are still, after all this time, building software backwards.

Toast

What do I mean by that? Let’s defer to the great W. Edwards Deming as he ridiculed the approach of 1970s American manufacturing to quality:

Let’s make toast the American way! I’ll burn it, you scrape it.

What an incredibly wasteful way to make things: Have one team cobble it together, have another team find all the mistakes, then send all the mistakes back to the first team to be fixed. Still, as long as we’re tracking it all with story points on our burn-down chart, and talking about it in our stand-up meetings, we must be Agile, right?

Wrong. A really Agile team would learn how to prevent those mistakes from happening in the first place.

Yet time after time, I come across Scrum teams with a separate test team working an iteration behind the developers, finding all their mistakes.

Here’s why I think this is such a bad idea:

  1. The cost of fixing a defect increases exponentially with time. Defects that fester undetected in the code become harder and harder to remove safely. The longer the defective code is there, the more chance there is that other code will be built on top of it. When the original mistake is rectified, that other code may be broken by the change.

  2. The cheapest fix is the one you never had to make. When you write tests up-front, you often spot edge-cases as you write the tests; then you build in support for those edge-cases as you go. If you build software backwards, leaving a team of testers to discover those edge cases, you’ll pay a large penalty for having to go back and introduce the change to existing code.

  3. You discover requirements too late. When you practice test-driven development, you prepare for any change to the software by first defining how it should behave when you’re done. If you build software backwards, you may never discover the behaviour you really want until the testers get their hands on it and start asking all those awkward questions. The toast has already been burned.

  4. Continuous Delivery will always be a pipe-dream. Many companies, perhaps your competitors, deploy to production several times a day, just minutes after a developer checks in their change. If you have to wait for a separate team of testers to scrape all your burned toast, this will only ever be a pipe-dream for you.

When I talk to people about adopting BDD, the most frequent objection I hear is that it must take longer. This is true, in a sense, because it does take longer for a change to get to the stage where a developer is done with it. If you’re used to burning toast you’ll find this frustrating, because you don’t realise yet that the time and effort you’re putting in to write the tests up-front won’t pay off until later, when you hand the change to your testers and they can’t find anything wrong with it.

To do this takes a leap of faith, so hold your nerve. It will be worth it.


Stop building software backwards: come to BDD Kickstart

If you’d like to hear more about these ideas and learn concrete techniques to make them work in your organisation, I’m teaching a course from March 11-13 in Edinburgh, UK, just for you. Click here to find out more.

Agile / Lean Software Development
BDD


Interview on QCon: BDD, Cucumber and Hexagonal Architectures

I was interviewed recently by QCon’s Peter Bell about various subjects dear to my heart: BDD, Cucumber, Relish and hexagonal architectures.

Click here to watch the video

Agile / Lean Software Development
BDD
Hexagonal Rails
Relish


Thinking outside the shu box

I just got back from the fantastic Lean Agile Scotland conference, where I spoke about why agile fails. I’ve been doing a lot of travelling this year since The Cucumber Book came out, consulting and training different companies in BDD. A pattern I keep seeing is companies who adopted agile a few years ago but have hit a glass ceiling. This talk was a chance for me to reflect on why that happens.

One point I made in the talk was about cargo cults. The term ‘cargo cult agile’ is used to laugh at teams who go through the same ceremonies – daily stand-up meetings, planning poker meetings, story points, velocity etc. – yet in some very fundamental ways, they just don’t seem to be succeeding the same way as organisations who get it. I see this frequently, but I don’t think it’s particularly funny. I think it’s a sad waste of human potential.

I’m curious to understand why it happens.

There are various models for how we learn things, and one of my favourites is Shu-Ha-Ri. This model comes from Japanese martial arts and says that in order to really master something, we need to progress through three stages. The first stage, shu, is about learning the basics. Here’s a quote from Aikido master Seishiro Endo:

In shu, we repeat the forms and discipline ourselves so that our bodies absorb the forms that our forbearers created. We remain faithful to the forms with no deviation.

That’s shu: absorbing the basic mechanics by repetition, without really understanding why.

The next stage is ha:

In the stage of ha, once we have disciplined ourselves to acquire the forms and movements, we make innovations. In this process the forms may be broken and discarded.

Imagine that! Broken and discarded! This is the point where we start to ask why; where we start to understand the value of the practices we’ve been using and have enough experience to make decisions about when they’re actually valuable, and when they could be getting in our way.

The final stage, ri, is transcendence:

Finally, in ri, we completely depart from the forms, open the door to creative technique, and arrive in a place where we act in accordance with what our heart/mind desires, unhindered while not overstepping laws.

I think that sounds like fun.

Anyway, my point is that cargo-culting is just what happens when your team is stuck in the shu stage. If you’ve been on some basic agile training, you might have been led to believe that’s all there is to it; unless you’re aware that the other stages of learning exist, how could you aspire to reach them?

This morning I was asked a great question about the talk: How would I help a team that’s stuck in a cargo cult shu learning stage? I felt like it deserved a blog post, and that’s why we’re here.

My first step with that team would be to understand how well they’d really learned the basics. Have they been given enough support to really get to grips with the tougher agile practices like retrospectives, TDD, or pair programming? What problems are they suffering from, and can those problems be traced back to core agile practices the team need more expertise in?

Then it’s time to help them learn and practice those basic techniques. Coding dojos are a great way to practice TDD in a fun environment, for example. Coaching willing members of the team to facilitate effective retrospectives, and making sure the actions that come out of those retrospectives are followed up, helps to build the culture of empowerment the team need to become confident enough to move beyond the shu learning stage.

It’s also important to have conversations with management to ensure that the team are being given the space to learn. I often see teams who are driven so hard by the demand to deliver features that they simply don’t have the time to invest in self-improvement. Once a product owner is made aware of this situation, and the benefits of having a higher-functioning team, they’re usually keen to make this investment.

Agile / Lean Software Development
