Words Matter: Growing Software

Brian Marick makes a good point that ‘incremental’ and ‘iterative’ just look and sound way too similar to make decent brand names for ‘evil’ and ‘good’ software development practice respectively.

Note to self: say ‘incremental assembly’ (boo!) and ‘iterative growth’ (yeah!).

The more I think about it, the more I like the growth vs assembly metaphor for software: creating a fertile environment etc… nice.

You, Sir, are an Anti-Pattern

Last night in the pub I was introduced to the term ‘corncob’, a label apparently used in some software development circles for a disruptive team member. I’d love to know how on earth that particular word was chosen.

I rather dislike this tendency to put people into boxes, as the next obvious step is to write the person off, but I guess giving these stereotypes a name at least gives you a framework for discussing them and starting to figure out what is driving their dysfunctional behaviour.

Plus we can all recognise a little bit of them in ourselves, and our fellow team-mates. Anyone else lucky enough to have a bit of an EXTRA TERRESTRIAL on their team? (Hi Brett!)

TFS and Renames

Does it really have to suck quite this badly?

I have a trunk branch and a stable branch. When I want to promote a cut of code from trunk to stable, I use ‘merge’, and for every single file in every single folder that’s been renamed in trunk, I have to fucking well confirm, with three mindless bastard clicks, that I want to use the name/path from trunk.

Do the people who wrote this thing use it themselves? How do they sleep at night?

XP Day 2007

…was good fun, and well worth a couple of days off.

There was a mixed crowd – some die-hard extreme programmers, quite a few self-styled (and self-promoting!) ‘coaches’, and a few newbies.

The atmosphere was really friendly – you would quite often find yourself sat in one session right next to the person who had been leading the previous one, which, along with the often fun and interactive sessions, broke the ice pretty quickly.

The first day was a bit flat for me, but after a night on the free beer courtesy of the sponsors, the second day picked up and really made it worth it. I wish I’d had time to stay for another session in the pub on Tuesday evening.

Key things I took from this conference, more detail on these later if you’re lucky:

  • There’s a fairly serious backlash going on against scrum (I heard more than one person drop the ‘r’, although I think one of those was accidental!), certainly within the London / UK agile scene.
  • There’s tension, which as I see it is between the coaching community and the programmer / creator community, who disagree about how much it’s OK to compromise on the original agile manifesto values and XP practices. I heard the phrase ‘valuing pragmatism over orthodoxy’ which summed it up rather well.
  • Nobody, but nobody, works for ThoughtWorks anymore – not even Fred George.
  • You probably need to be working in Java if you want to get decent XP gigs, especially for a bank, and there’s quite a few people working for banks – particularly front office.
  • Kanban is where it’s at, baby.

Avoid Spaghetti Execution with the Judicious Use of Inline Scripting

Rob Conery has kicked up a bit of a stink posting about the use of inline scripting in modern ASP.NET apps.

I may post more on this subject when I have time, but I had to just weigh in with my support for disciplined use of this technique, which can save you hundreds of pointless lines of code and execution cycles in a large app if used wisely.

One trick we use is to replace all those silly asp:LinkButton tags with a good ole anchor tag, and a call to a special utility class called LinkBuilder which knows how the site’s URLs hang together, like so:

<a href="<%= LinkBuilder.Show(catalogueItem) %>">Cancel</a>

This sort of markup can hardly be accused of being spaghetti code (the LinkBuilder method has a single line inside it), and it means we avoid what you might call ‘spaghetti execution’ when you actually come to spin the thing up.

Think about what a sequence diagram of the equivalent use of an asp:LinkButton with a wired OnClick event would look like. Eugh.
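For the curious, a LinkBuilder along these lines only needs a few lines of code. This is a hypothetical sketch rather than our actual class – the CatalogueItem type and the URL format are made up for illustration:

```csharp
using System;

// Illustrative domain type; our real one obviously has more to it than an Id.
public class CatalogueItem
{
    private readonly int id;

    public CatalogueItem(int id)
    {
        this.id = id;
    }

    public int Id
    {
        get { return id; }
    }
}

// One static class that knows how the site's URLs hang together, so no
// individual page has to.
public static class LinkBuilder
{
    public static string Show(CatalogueItem item)
    {
        return String.Format("/catalogue/show/{0}", item.Id);
    }
}
```

With that in place, the `<%= LinkBuilder.Show(catalogueItem) %>` expression in the markup renders a plain href with no view-state, no postback and no event wiring.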

Generics and NMock

Maybe NMock supports this, but I can’t see how.

[Update 20/10/2007 – of course it does! See the comments for Aybars and Rob’s solution.]

Let’s say I have a dependency of my SUT (system under test). That dependency looks like:

public interface IDatabase
{
  void Get<TTypeToFetch>();
}

Now in my test, I want to expect that the SUT will try to fetch a Widget.

NMock gives me something which looks like it could help – a matcher. So I can code something like this:

Mockery mockery = new Mockery();
IDatabase db = mockery.NewMock<IDatabase>();
Expect
  .Once
  .On(db)
  .Method(new GenericMethodMatcher(typeof(Widget)));

So I code up a little GenericMethodMatcher which casts the object passed to it to a MethodInfo and takes a look at the result of a call to GetGenericArguments() to see if it finds a Widget. So far, so exceedingly proper bo.

Unfortunately, the MethodInfo which NMock passes to the matcher is ‘open’, meaning it still contains just the unbound generic type parameters that could be supplied at run-time. What I need in this instance is a ‘closed’ MethodInfo object.

I guess this is hard for NMock to do, since it only works with the interface and not a concrete class. I have no idea what’s going on under the hood… Maybe it is possible but I’m just too stupid… I wonder how other mocking frameworks handle this scenario.
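For reference, here’s roughly what my matcher looked like – reconstructed from memory, so treat the details as a sketch. NMock2’s Matcher base class asks you to implement Matches() and DescribeTo():

```csharp
using System;
using System.IO;
using System.Reflection;
using NMock2;

// A matcher that checks whether the invoked method's generic arguments
// include the type we expect (e.g. Widget).
public class GenericMethodMatcher : Matcher
{
    private readonly Type expectedTypeArgument;

    public GenericMethodMatcher(Type expectedTypeArgument)
    {
        this.expectedTypeArgument = expectedTypeArgument;
    }

    public override bool Matches(object o)
    {
        MethodInfo method = o as MethodInfo;
        if (method == null)
        {
            return false;
        }
        // This is where the problem described above bites: because the
        // MethodInfo arrives 'open', GetGenericArguments() yields the
        // type parameter TTypeToFetch rather than Widget, so this never
        // matches as written.
        return Array.IndexOf(method.GetGenericArguments(), expectedTypeArgument) >= 0;
    }

    public override void DescribeTo(TextWriter writer)
    {
        writer.Write("a generic method closed over " + expectedTypeArgument.Name);
    }
}
```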

Safety in Numbers

How important is it to measure how long something took?

Well, so the received wisdom goes, by comparing how long it took you to complete a task against the estimate you made before starting it, you get an idea of how good your estimate was. So far, so good.

But what if your estimate turned out to be wrong? What are you going to do about that?

One of the culture shocks I think scrum introduces is the idea that we almost always just look forward. I don’t care how long you’ve spent on a task; I just want to know how long you think it will take you to finish it, based on what you know right now.

Tomorrow, if you turned out to be wrong, that’s OK. All we need to know is how much longer it’s going to take to get it done.

Except it’s not OK.

Because if your task, which you originally estimated would take two hours, has taken you two days, and it’s still not done, then something is impeding you.

Fortunately in a team practicing scrum, we have a daily 15-minute meeting where impediments like this are made visible to the entire team, and someone takes away the action to resolve the impediment.

Rather than having to retrospectively scan reports of estimates vs actuals in a spreadsheet, the problem can be highlighted as it’s happening, and hopefully resolved.

Agile teams also have a second line of defence against more stubborn impediments: The retrospective.

With these two facilities in place, it becomes unnecessary to track the details of estimates vs actuals. The administrative overhead on developers is reduced, and they can get back to writing solid code.

Some people find this difficult to accept – there’s a familiarity to the recording of the numbers. We’ve always done it. It makes me feel safe. Somebody will ask us for them.

If somebody asks you how good your estimating is – show them your burn-down chart.

scrum burn-down chart

It’s bloody obvious from the shape of the chart if your estimating is any good. And if they want figures, measure the difference in area beneath your real curve (blue line) and your ‘ideal’ diagonal (red line). Every hour above the diagonal line is an under-estimated hour. Show this to your team. Get them to look at it and reflect on how realistic they’re being when they estimate tasks.
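That area calculation is simple enough to sketch in a few lines. The daily remaining-hours figures below are invented for illustration:

```csharp
using System;

class BurnDownGap
{
    static void Main()
    {
        // Remaining hours at the end of each day of a (made-up) sprint.
        double[] ideal  = { 40, 32, 24, 16,  8, 0 }; // the red diagonal
        double[] actual = { 40, 38, 30, 26, 14, 0 }; // the blue real curve

        double underEstimatedHours = 0;
        for (int day = 0; day < ideal.Length; day++)
        {
            // Every hour the real curve sits above the diagonal is an
            // under-estimated hour; time below the diagonal doesn't
            // cancel it out, so clamp at zero.
            underEstimatedHours += Math.Max(0, actual[day] - ideal[day]);
        }

        Console.WriteLine(underEstimatedHours); // prints 28
    }
}
```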

If somebody asks you why your estimates were too low – listen to people. Look at your impediments log, and the write-up of your retrospectives. You need a culture where impediments get surfaced quickly from the team, where it’s OK to say things aren’t going well, and where problems raised get solved and cleared out of the way.

Printing Your Todo.txt Lists to Index Cards at the Command Line

Like a few other people, I’m over kGTD. In the first flushes of my infatuation with the way of GTD she was good to me, showed me a few tricks I’d never seen before. We had some good times, syncing away. But my iCal started to fill up with billions of pointless calendars, my projects started to indent to the point where I couldn’t find them anymore, and I never quite got the hang of those… unique keyboard combos needed to navigate around Omni Outliner Pro. Sometimes, important things would go missing, and I gradually started to trust her less, and go back to paper and pens for my lists.

Until now. Todo.txt is a series of command-line scripts for slicing and dicing a text-based todo list. If you stick to a few conventions, you can use the scripts to suck out relevant information as and when you need it. Combined with the humble yet awesome power of the bash shell’s pipe, there are a multitude of ways you can shove your action lists in front of your lazy face. Trust me, if you keep or have ever kept your lists in a text file, you owe it to yourself to check the site out.

Something I always wanted to do with kGTD but never managed in a satisfactory manner was to sync my digital lists to index cards for perusing whilst (gasp!) off-line. Enter Linux’s lp command:

todo.sh list | lp -o PageSize=Custom.3x5in -o page-top=10 -o page-bottom=10 -o page-left=5 -o page-right=5 -o lpi=8 -o cpi=15

See here for an explanation of all those crazy lp options.

I just love this stuff. Sometimes it’s almost as good as being back at the terminal of my faithful BBC Micro.

Web-Based Backup… Via a Trickle

A project I’ve been meaning to do for some time is set up a backup of the crucial folders on my home server to somewhere on the web. Preferably somewhere free, like my existing dreamhost space.

What I didn’t really consider is… and I bet you’ve already guessed it, dear reader: the piddly-poor upload speed on my ADSL connection. Quoted at 448 Kbps, by my reckoning that means I’ll get about 3.3GB up the wire in a 24 hour period… which means we may be here for quite some time. Better turn off that pesky Windows Update.

The nice thing is that, because I’m using rsync, once the initial sync is done, only the changes to files will be uploaded, so traffic should drop back to normal… some time in 2008.

SSH on Cygwin

I’m following Gina Trapani’s outstanding tutorials on lifehacker to get me some of that unix command-line joy on the rusty old windows box in the corner.

Note to other linux-naive cygwin users out there. If you want to install the ssh command, look for the package called ‘openssh’. No amount of staring at the packages squid and ssmtp is going to make it appear where you might expect it to.