Fixing my testing workflow

Okay I’m bored of this. I need to talk about it.

I love to use Ruby, RSpec, Cucumber and Rails to do test-driven development, but my tools for running tests are just infuriatingly dumb. Here’s what I want:

  • When a test fails, it should be kept on a list until it has been seen to pass
  • When more than one test fails:
    • Show me the list, let me choose one
    • Focus on that one until it passes, or I ask to go ‘back up’ to the list
    • When it passes, go back up to the list and let me choose again
    • When the list is empty, I get a free biscuit
  • When a test case is run, a mapping should be stored from that test case to the source files it covered as it ran, so that:
    • When a file changes, I can use that mapping to guess which test cases to run. Fuck all this naming-convention stuff; it’s full of holes.
    • At any time, I can pipe the git diff through the tool to figure out which test cases to run to cover the entire commit I’m about to make.
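The list-keeping behaviour above could be sketched in a few lines of Ruby. To be clear, `FailureList` and its methods are hypothetical names for illustration, not any existing tool’s API:

```ruby
# A minimal sketch of the failure list described above: a failed test
# stays on the list until it has been seen to pass.
class FailureList
  def initialize
    @failing = []
  end

  # Record a result. A test only leaves the list once it passes.
  def record(test_id, passed)
    if passed
      @failing.delete(test_id)
    else
      @failing << test_id unless @failing.include?(test_id)
    end
  end

  # The list you'd be shown to choose from when more than one test fails.
  def choices
    @failing
  end

  # When this is true: free biscuit.
  def empty?
    @failing.empty?
  end
end
```

A runner built on this would feed results in after each run, drop into a focused loop on whichever entry you pick, and bump you back up to the list when that one goes green.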

When I say test case, I personally mean:

  • An RSpec example
  • A Cucumber scenario

…but it should work for any other testing framework too.
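The coverage-mapping idea could be prototyped with Ruby’s built-in Coverage module, which works regardless of which testing framework is driving. The helper names here (`files_covered_by`, `tests_to_rerun`) are hypothetical sketches; note also that Coverage only tracks files loaded *after* `Coverage.start`, so a real runner would start it at boot and use `Coverage.peek_result` between test cases instead of restarting it each time:

```ruby
require "coverage"

# Run a block (one test case, say) and return the source files in which
# at least one line actually executed while it ran.
def files_covered_by
  Coverage.start
  yield
  Coverage.result.select { |_file, lines| lines.compact.sum > 0 }.keys
end

# Given a persisted { test_case => covered_files } mapping, answer the
# question "these files changed; which test cases should I re-run?"
def tests_to_rerun(mapping, changed_files)
  mapping.select { |_test, files| (files & changed_files).any? }.keys
end
```

Piping `git diff --name-only` into something like `tests_to_rerun` is then all it takes to cover the commit you’re about to make.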

I feel like having a tool like this that I trusted would make a huge difference to me. There are all these scrappy little pieces of the puzzle around: Guard plugins, autotest, cucover, Cucumber’s rerun formatter. None of them quite does it for me. Am I missing something?

Or shall we make one?

Published by Matt

I write software, and love learning how to do it even better.

Join the Conversation


  1. Are you running these tests in your CI? We run all our various automated tests (in Canoo WebTest, FitNesse and JUnit) in Hudson, and the tools we use produce results that lend themselves to plugging into Hudson. That way we can easily investigate failures, and Hudson keeps the full history of each build’s results for us. All our tests are in SVN, so we can see what changed between updates. But I don’t use RSpec or Cucumber; maybe they don’t have Hudson or other CI plug-in capability?

  2. Matt

    Have you tried using autotest for your test runner? It should handle the list of failed tests and only running those failed tests until they pass.

  3. Hi Matt,

    Last time I tried autotest it was really scrappy, but I’ve heard it’s come on a lot recently. Should I give it another shot?

  4. Yeah, I would agree that we have a lot of testing tools, but none of them are really that smart; you’re still left to do the grunt work. I like your autotest idea. My biggest problem with autotest has always been that it runs ALL the tests after a failure, which seems completely backwards to me: that belongs right before you commit, not while you’re testing. I haven’t seen any attempt to make the Cucumber workflow faster, so an improvement there would be welcome, I’m sure. Same goes for Jasmine tests, now that I think about it. And no one’s tried to lump them all together under one roof. Go for it!

  5. Hi Oriol,

    I’ve found Guard comes closest to what I want, but both the RSpec and Cucumber drivers seem to be buggy and don’t always find failing tests. Maybe I should give them some love.

  6. We use it every day (together with Spork) and I haven’t noticed any problems with it; it works well even with minitest/spec. Maybe you should try the newest version and see if it works for you 🙂
