What Time’s Your Stand-Up?

The stand-up meeting has had quite a lot of attention on my team over the last week. At our last retrospective, it was raised as a problem that we nearly always had at least one person missing when we started the meeting. Obviously this hampers our goal of a rich and motivating exchange of information throughout the whole team. We still get a chance to adapt the plan, but anyone who wasn’t there won’t know about it. I also feel it really drags down morale when the same one or two people (and it’s nearly always the same ones) appear not to be as committed as the rest.

So, like all good self-organising teams, we got together and talked about it. Yep, we had a meeting about a meeting.

During this at-times-heated discussion, it became clear that we were talking about two separate issues. As well as using the stand-up to synchronise our work, we were using it as our starting-point for the day. Those who like a lie-in tend to get themselves in just in time for stand-up, and thus their day begins.

So while the first and most obvious issue for debate was what time to have the stand-up meeting, lurking in the background was the much more thorny issue of how late it is acceptable for a team-member to turn up for work.

Fortunately we spotted this just in time, before the mud-slinging between the early-risers and the alarm-clock-o-phobes began. Reflecting on the benefits of the meeting as a whole, we judged that providing a starting-point for the day was less important than the communication benefits, which require every member of the team to be present. So we decided to abandon the use of the stand-up as our start to the day.

So that we can still harness the motivation that comes out of a good stand-up, we decided to hold it at 2pm, when everyone should be back from lunch and ready to blast through the afternoon.

All we can do is try it and see how it works.

What time do your team have their stand-up, and why?

Software as an Art Form

I just came across an article written five years ago by Richard Gabriel proposing a university course run along the lines of his own Master’s in Fine Arts in poetry. The idea evidently gathered some momentum at the time, but now seems to have come to a halt.

What a shame. I’ve had an idea to get back to university on a ‘some day’ list for a while, and this would have been pretty much exactly my thing, I think. Seems to me like it was just an idea ahead of its time… I wonder how popular something like that would be now, with the increased focus on Quality that is sneaking in the back door behind the noisy razzmatazz of agile adoption.

Casting with ‘as’ in C#

Anyone else hate this?

Doofer doofer = GetWidget(widgetId) as Doofer;

Then, somewhere miles away from this innocuous little line, I find that my doofer is unexpectedly null. This is a great way to introduce hard-to-trace bugs, unless of course it’s complemented with guard clauses everywhere to test for nulls, which most of the time means unnecessary clutter.

What’s wrong with good old…

Doofer doofer = (Doofer)GetWidget(widgetId);

?

In this second case, if the result of GetWidget() can’t be turned into a Doofer, I’ll get an exception thrown right there and then. Assuming that my code here works specifically on doofers, I may as well spit out an exception right up here – how the heck am I supposed to work with this other type of widget you’ve sent me?

I think the only time to use a soft ‘as’ cast is in the tiny minority of cases where you really don’t care whether the result of GetWidget can be turned into a Doofer. It seems to me that 90% of the time you really do care, and you’d like to know as soon as it can’t be. Even in that 10%, you’d probably be better off using a null object.

It’s a shame, because the ‘as’ operator is a bit more readable, and we all hate having to slam all those brackets in everywhere. Fortunately, C# provides a little-known solution to this, known as implicit casting.

The Path to Greatness – Anyone Got a Map?

I just bumbled into a great post by Raganwald on the subject of certification for professional software developers.

It’s something I’ve been giving some thought to lately. I coach my team informally on a daily basis as we work together and more formally at the end of each iteration during our retrospective. I’ve also been asked to start a process of coaching other teams too.

In my other life, I teach people to snowboard. BASI, the organisation who taught me to teach, have developed a clear progression from complete novice through experienced recreational riders, right up to instructor trainer. Similarly, I just took the PADI Open Water Diver course, and again there is a clear progression where you learn the skills required to dive safely on your own.

Like Raganwald, I feel that certifying people for technical skills like expertise in coding a particular language is really missing the point. I think he’s on to something by wanting to prove that a person knows how to produce a bullet-proof product – after all, that’s the whole point, isn’t it?

I’m interested in the series of learnings that people go through in their time at work that take them to the point where they do a lot of these things as second nature. Where they don’t claim something is ‘finished’ when they first manage to get some data up on the screen; where they know exactly when to step away from the keyboard and make a cup of tea to mull something over, or when to ask for some help instead of thrashing themselves, quietly, into a rotting hole. The things that make people valued members of my team are much more the ‘soft’ skills that mean they spend their time at work as effectively as possible, like confidence, humility, and empathy.

In a way it’s a good thing that we don’t just have a simple scale to measure people against; we’re dealing with real human beings, after all. On the other hand, it might be handy to have a map to help us all along the way.

Integration Tests – Good or Evil?

As with most stupid questions like this, the answer is “neither”. There are times when integration tests really help, and there are times when they can be a pain in the neck.

I was prompted to write this post when a colleague pointed me towards this page on the behaviour-driven wiki, which mentions the disadvantages of integration tests: they usually involve some complex (and often slow-to-run) procedure to set up an expected state in the system your code integrates with. This tight coupling with the external system reduces agility and makes the test code brittle.

The page does point out that “even in this state the code is often much more robust and a much better functional fit than code developed under more traditional methods based on large-scale up-front design”.

I agree with the principles in the article, and I believe BDD is a great way to think, but I do think as long as the integration tests are well-factored (and hence easy to change) then the problems highlighted don’t apply – you’re still going to be quick on your feet if requirements change.

The question is whether you’re going to spend more time fixing your unit tests than you would debugging the code – if you’re confident you can write it correctly first time and anybody needing to change it is highly unlikely to introduce bugs in the area you’re coding, it’s a waste of everybody’s time to write a unit test – the test just becomes baggage for the team to drag around.

Conversely, if there’s a risk that future changes could break what you’re coding, or you’re bored of hitting F5 in the browser to test some subtle tweak in a function way down deep in a subsystem, then thinking of an imaginative way to write a lightweight unit test that isolates that function and proves that it works as you want it to is probably going to save you some dull debugging time.
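To make that concrete, here’s a rough sketch of what I mean by a lightweight unit test – the pricing rule and its threshold numbers are entirely made up for illustration:

```ruby
# A hypothetical rule buried way down deep in a subsystem:
# small orders get no discount, bigger orders get progressively more.
def discount_for(order_total)
  return 0.0 if order_total < 100
  order_total >= 500 ? 0.10 : 0.05
end

# The lightweight test that isolates it – no browser, no database.
require 'minitest/autorun'

class DiscountTest < Minitest::Test
  def test_no_discount_below_threshold
    assert_equal 0.0, discount_for(99)
  end

  def test_small_discount_at_threshold
    assert_equal 0.05, discount_for(100)
  end

  def test_big_discount_for_big_orders
    assert_equal 0.10, discount_for(500)
  end
end
```

It runs in milliseconds, and anyone who changes the rule later will hear about it straight away instead of debugging it in the browser.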

The Power of a Good Story

When I sat Mike Cohn’s scrum-master course last year, one of the concepts I learned was the idea of templating all your user stories like this:

As a [type of user], I want to [do something useful with the system], so that [reason / motivation for wanting to do it].

I’ve come to realise that capturing these three perspectives on any deliverable is actually really powerful stuff. When you break it down, what you’re essentially getting is:

  1. who is the stakeholder who wants this work done?
  2. what is the goal for the person trying to deliver it?
  3. why does the stakeholder want it done?

(1) and (3) really help you prioritize: Does this user really need this? If so, do they need it right now, or is there another feature / story we should be delivering first? (3), especially, can really settle disagreements here, or make otherwise opaque choices more obvious.

The whole story, but especially (2), gives the developer (or whoever is delivering the work) a clear goal to work towards.

I think it’s far too easy to dismiss this format as wordy and unnecessary, as I did myself at first. Like so many agile practices, though, it’s when I force myself to use it in a consistent and disciplined way that its power really emerges.

Painless RMagick Install for Ruby on Rails

One of the ironies and frustrations I find of working with open-source software is that things are changing so fast (yay!) that the documentation you find is nearly always out of date (boo!).

Trying to figure out how to get the rmagick component of the handy file_column rails plug-in to play nice, I came across numerous gruesome posts involving three-hour sessions building rmagick from source, sacrificing goats, etc. Yikes.

Then I came across this great piece of news and suddenly things clicked into place. Ahhh.

Ten minutes later, a sprinkle of this, and I have thumbnail images all over my site. Too easy!

Nested Controllers and Broken link_to

So my first little rails app is taking its baby steps in the wild today, and some users should hit the site over the weekend… Exciting stuff.

I’ve written an admin area for the site, and with only a tentative grasp on the whys and wherefores, I’ve copied something I saw in the mephisto source code and used ‘nested controllers’ to keep the admin code tucked away from the rest:


$ script/generate controller admin/users
$ script/generate controller admin/stuff

For want of a better way, I put a base class for the admin controllers into the application.rb file (can’t remember where I picked this tip up) and put a one-line before_filter on that class to check whether the user has the appropriate rights. Sweet.
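For the curious, the shape of that base class is roughly this – sketched here without Rails itself, so the filter plumbing is written out by hand and all the names are illustrative (in a real app you’d just declare the before_filter on the base class and let Rails do the rest):

```ruby
# A Rails-free sketch of a base admin controller with a one-line
# rights check that runs before every action. Names are hypothetical.
class Controller
  def self.before_filters
    @before_filters ||= []
  end

  # Mimics Rails' before_filter declaration.
  def self.before_filter(method_name)
    before_filters << method_name
  end

  # Runs the filters, then the action itself.
  def process(action)
    self.class.before_filters.each do |filter|
      return :access_denied unless send(filter)
    end
    send(action)
  end
end

class AdminController < Controller
  # The one-line guard every admin action inherits.
  before_filter :logged_in_as_admin?

  attr_accessor :current_user

  def logged_in_as_admin?
    current_user && current_user[:admin]
  end

  def index
    'secret admin stuff'
  end
end

admin = AdminController.new
admin.current_user = { admin: true }
admin.process(:index)    # the action runs

visitor = AdminController.new
visitor.process(:index)  # no user: the filter denies access
```

The nice part is that every controller generated under admin/ picks up the check for free, simply by inheriting from the base class.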

The little gotcha with this was that a lot of my existing link_to and redirect_to calls didn’t work from within the admin area, because I guess the :controller value in the options is relative. Adding a forward-slash to the controller name fixes it up:


<%= link_to 'home', :controller => '/home' %>

Looks like I’m not the only one to have hit this little bump.

I think I need a new rails book: I’m starting to leave my trusty pragprog book behind. Any recommendations out there, people?

Model-View-Presenter (MVP) Anti-Patterns

As I work with the MVP / Passive View / Humble View pattern more in ASP.NET, I’m gaining an understanding of some common smells or anti-patterns that can happen when you try to implement this pattern and haven’t quite grokked it yet. Some of these are problems I’ve had to pull out of my own code:

TOO MANY PRESENTERS SPOIL THE BROTH

When you re-use a user-control or even just a view interface on different pages, it’s tempting to re-use a special little presenter inside each of those pages whose sole responsibility is presenting that user-control. If you have two different flavours of page, you might even put the call to the presenter into a BasePage.

This is fine as long as those pages need no other presenter logic, but there are usually other controls on the page too, and they’ll need to be presented, and before you know it you have two or three or a whole crowd of presenters yelling at different bits of your page, telling them what to do.

Lesson: This is the path of darkness and complexity on the tightrope of your untestable UI layer. Re-use presenter logic by all means, but do it inside the presenter layer, where you can test. Everybody say Ahh… Tests…

Maxim: No more than one presenter to be constructed per page / request.

THE SELF-AWARE VIEW

Within a request / response cycle, only the Page should be aware of the existence of a presenter. The HUMBLE VIEW (implemented in a UserControl) does what it’s told without any knowledge of who is dishing out the instructions. Find this smell by looking for UserControls which construct their own private little presenters, or worse still accept them as properties (usually so that they can do filthy things like call methods on them – see ARROGANT VIEW, below).

ARROGANT VIEW

The arrogant view is not only SELF-AWARE, but has the temerity to imagine that it has the right to give the presenter direct instructions. Thus a presenter can easily be relegated to nothing more than a PROXY MODEL, with logic creeping up into the untestable view layer. Find this smell by looking for presenters which offer public methods. Replace these direct method calls from view->presenter with a more humble OBSERVER PATTERN, where the view raises events containing the relevant information in the EventArgs, and the presenter reacts accordingly (or decides not to, as is its privilege).
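The examples here are ASP.NET, but the shape of the fix is language-neutral, so here’s a minimal sketch in Ruby (all names hypothetical): the view exposes an event for the presenter to subscribe to, and raises it without knowing who – if anyone – is listening.

```ruby
# The humble view: it announces what happened, but never calls
# the presenter directly and has no idea who is subscribed.
class HumbleView
  def initialize
    @handlers = Hash.new { |hash, key| hash[key] = [] }
  end

  # Subscribe a handler to a named event (the presenter does this).
  def on(event, &handler)
    @handlers[event] << handler
  end

  # Raise an event with its arguments; react or don't, the view doesn't care.
  def raise_event(event, args = {})
    @handlers[event].each { |handler| handler.call(args) }
  end

  # Somewhere for the presenter to push display state into.
  attr_accessor :message
end

# The presenter subscribes to the view's events and decides
# what to fetch from the service layer, and when.
class Presenter
  def initialize(view, service)
    @view = view
    @service = service
    view.on(:save_clicked) do |args|
      @service.save(args[:name])
      @view.message = "Saved #{args[:name]}"
    end
  end
end

# A stub service layer, just for the sketch.
class FakeService
  attr_reader :saved
  def initialize
    @saved = []
  end

  def save(name)
    @saved << name
  end
end

view = HumbleView.new
service = FakeService.new
Presenter.new(view, service)
view.raise_event(:save_clicked, name: 'widget')
```

Because the only dependency runs from presenter to view, the presenter (and everything it decides) stays testable, and the view stays humble.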

PRESENTER BECOMES PROXY MODEL

When you have an ARROGANT VIEW that commits the crime of directly calling methods on a presenter, you start to see the logic in the presenter thin down to the point where it becomes little more than a proxy for the ‘model’ (in classic MVC terminology) or service layer. The whole idea of MVP is to delegate to the presenter the responsibility of figuring out what to fetch from the service layer and when to fetch it, so that the view just gets on with displaying whatever the presenter decides it should display. When an ARROGANT VIEW gets these kinds of notions, logic that belongs in a (testable) presenter gets tangled up with display logic in the view, and it’s time to roll up your sleeves and refactor.