Narrative Journey Maps
- Duncan prefers the term Usage-Centred Design to User-Centric Design. There was a book reference here but I missed it.
- Narrative Journey Maps (NJM) are a way to model and visualise the steps a user has to follow as they try to achieve a goal.
- Each step is decorated with:
- An Emotional Barometer is drawn across the top, highlighting pain-points where the user may be particularly frustrated with the process.
- NJMs are usually created from a Contextual Study where users are quietly observed trying to reach the goal.
- They are a way to record the current state of play, and a place to start thinking about how things could change.
- Alan Cooper 1985
- Based on real data: capture and use direct observations of real experience
- Group and merge those observations to form personas
- Personas have:
What seemed to have worked for the presenters was to focus on just one persona at a time, once it became obvious who their core user was, and work at satisfying that persona's needs. Once they'd started to make significant inroads into this, it became clearer and more worthwhile to look for other personas.
Luke Hohmann's Innovation Games. Mr Squiggle (seed drawings for workshop exercises).
Consensus:
- Quaker Business Method
- Formal Consensus (CT Butler)
- “Participatory Decision Making” – Sam Kaner
“The Logical Thinking Process” – H. William Dettmer. A book on conflict clouds.
“Round the Bend” – Nevil Shute. Like Zen and the Art of Motorcycle Maintenance, but English.
Keith Braithwaite was heading out to ‘Next Generation Testing’ and said he was really looking forward to it. He said it was quite easy to stand out from the crowd of big vendors, and that if you have something original to say you’ll likely be asked back. He also mentioned ‘Test Expo’ as being another conference in this vein.
It would be really interesting to take the Cucumber message to these conferences.
Remote Pairing and Wolf-pack programming
I met several people who are making use of this, or would like to make more use of it. Many of them were using Ubuntu, so didn't have access to iChat, and were struggling with slow VNC connections. I suggested screen + emacs/vim to a few people (not that I've used it myself, but I've heard good things). People mentioned plug-ins for Eclipse, and my old favourite SubEthaEdit came up. It does feel like there's a product missing here.
Some guys ran a BoF trying out a crazy contraption they had built using a Smalltalk environment that allowed a dozen people to edit the same code at the same time, each on their own workstations. It sounded pretty amazing.
I ran a session, Robot Tournament, at the conference. Despite what I had considered thorough preparation, I had some rather exciting moments when the tournament engine spluttered and needed some running repairs. Overall, though, the feedback I got was positive. Some observations:
- The (accidental) downtime gave people an opportunity to make build scripts and so on. I wonder whether this could be engineered deliberately another time.
- More logging and visibility of exactly what’s going on when a player runs would be useful to help participants with debugging.
- The warm-up should include calling a robot with a command-line argument so that any teething problems with reading the input can be resolved at a relaxed pace.
- A better explanation (role play?) of how the tournament works would help.
- Need to limit the number of players to 1 per team. Although it was worth experimenting with allowing more than one, there were a couple of disadvantages that seemed to outweigh the advantages:
- when people realised they could write scripts to add several robots, this really slowed down the time to run a round due to the number of permutations of matches. I guess here you could deal with this by using a league system, but for now the simplest thing seems to be to just limit the number of players.
- there is a strategy (which the winning team used) where you use a patsy player which can recognise a game against another player from the same team and throw the game, thus giving that player an advantage. By releasing several patsy players you leverage that advantage.
- I was surprised (and a bit disappointed) at how conservative most of the language choices were. I think we had three Ruby robots, two Java ones and one Haskell robot. Sadly I couldn't get Smalltalk working for the guy who wanted to use that. It seemed clear that, rather than one language being particularly better than another for the problem at hand, teams who used a language they were familiar with did best.
- It was hard for people to see what was going on when they were getting their robots running. More visibility of exactly what happens when their program is run on the server environment would be helpful.
- More functionality on the UI to slice the results and look at just how your own robot had performed would also help.
- The problem was so small that tests were hardly needed. Pivoting (changing the rules of the game half-way through the tournament) might have helped here.
- I would also be interested in trying out variable-length iterations – some long ones, some short ones.
- Shipping simple solutions early was definitely a strategy that had worked for everyone.
- People enjoyed the fact that the goal – getting points – was clear, so that rather than it being about writing clean code or writing tests, it was more realistic to a business context.
- Trying a more open game where you could learn more about your opponent might be interesting.
- Getting teams to swap code might also be interesting.
- Doing a code show & tell wasn't in my plan, but worked really well.
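To make that warm-up suggestion concrete, here's a minimal sketch of what a warm-up robot could look like in Ruby. The protocol is an assumption for illustration (input arriving as a command-line argument, move written to stdout, a rock-paper-scissors-style move set), not the engine's actual interface:

```ruby
#!/usr/bin/env ruby
# Hypothetical warm-up robot. Assumes the engine passes input as a
# command-line argument and reads our move from stdout.

MOVES = %w[rock paper scissors]

# Pick a move; this trivial version ignores the opponent's input.
def choose_move(opponent_input)
  MOVES.sample
end

if __FILE__ == $PROGRAM_NAME
  input = ARGV.first || "none"
  warn "robot received: #{input}"  # stderr, so it doesn't pollute the move
  puts choose_move(input)
end
```

Running something like `ruby robot.rb rock` by hand during the warm-up would confirm the input actually reaches the program before the rounds start.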
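On the slowdown caused by scripted extra robots: I'm guessing at the engine's scheduling, but assuming every robot plays every other robot once per round, the match count grows quadratically with the number of entries, which is why a few extra robots per team hurt so much:

```ruby
# Matches per round in a simple round-robin: each pair of robots plays once.
def matches_per_round(robot_count)
  robot_count * (robot_count - 1) / 2
end

matches_per_round(6)   # six teams, one robot each
matches_per_round(18)  # six teams, three robots each
```

Going from one robot per team to three makes each round roughly ten times longer, which matches how painful it felt on the day.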
The session format ended up being something like this:
- 10 minutes introduction
- 25 minutes warm-up
- 30-45 minutes faffing around fixing the engine while people started to build their real robots
- break
- 7 rounds x 7 minutes ≈ 50 minutes of tournament rounds
- 25 minutes code show & tell
- 15 minutes retrospective looking at what we'd learned and any insights