Photoshop Driven Development, Anyone?

This afternoon I came across a presentation by Michael Tamm on his library, Fighting Layout Bugs. This is an idea that I, too, have had bubbling around in my head for some time. Michael, to his credit, has decided to actually do something about it.

Michael’s library takes screenshots of sample pages during a test run and analyses the bitmap, checking for text that’s overflowing a bounding line, for example, and highlighting it. You can even use this to fail the build!

[Screenshots: Fighting Layout Bugs highlighting the layout problems it finds]
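I could imagine a check like this slotting straight into an ordinary test run. Here's a rough sketch in Python with Selenium, not the library's actual API: the URL is made up, and find_overflowing_text is a hypothetical stand-in for the real bitmap analysis.

    # Rough sketch only: the detector below is a hypothetical stand-in for
    # the kind of bitmap analysis Fighting Layout Bugs performs.
    from selenium import webdriver

    def find_overflowing_text(png_bytes):
        """Hypothetical detector: return regions where text crosses a
        bounding line. The real library does this properly."""
        return []

    def test_page_has_no_layout_bugs():
        driver = webdriver.Firefox()
        try:
            driver.get("http://localhost:3000/sample-page")  # made-up URL
            screenshot = driver.get_screenshot_as_png()
            bugs = find_overflowing_text(screenshot)
            # Any detected bug fails the test, and therefore the build.
            assert not bugs, "Layout bugs detected: %r" % bugs
        finally:
            driver.quit()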

I can see these kinds of checks being really useful to HTML / CSS hackers to help catch basic mistakes, but I would still love to see a test-driven approach become available to front-end coders. When we drive out code from a failing test, we test the tests as well as the code, because we see the test fail before we see it pass. What if we could do that for our markup and CSS?

I therefore propose: Photoshop-Driven Development.

We can take a PSD design for a page, and compare the design with the rendered page. If the rendered page doesn’t match the PSD, we fail the build until it does.
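As a minimal sketch, assuming the design has been flattened out of Photoshop into a PNG and that we can grab a screenshot of the rendered page at the same size, the comparison itself is almost trivial. The file names here are placeholders; in Python with Pillow it might look something like this:

    from PIL import Image, ImageChops

    def pages_match(design_png, screenshot_png, tolerance=0):
        design = Image.open(design_png).convert("RGB")
        rendered = Image.open(screenshot_png).convert("RGB")
        if design.size != rendered.size:
            return False
        diff = ImageChops.difference(design, rendered)
        if tolerance == 0:
            # getbbox() is None only when every pixel is identical
            return diff.getbbox() is None
        # otherwise allow small anti-aliasing differences
        return diff.convert("L").getextrema()[1] <= tolerance

    # Fail the build when the rendered page drifts from the design.
    assert pages_match("homepage-design.png", "homepage-rendered.png"), \
        "Rendered page does not match the design"

The tolerance knob is there because pixel-perfect equality is probably too strict once fonts and anti-aliasing get involved.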

Of course it could never be as simple as this. Not every pixel in the PSD needs to be in the rendered page: some of it will just be dummy text that you’d never need or want to validate. Also, while a PSD is a fixed size, the way a page behaves when it resizes is an important (and potentially buggy) piece of the puzzle.

Could there be a way to highlight the parts of a PSD that are important? Could the designer use naming conventions for their layers to indicate whether they are just dummy content to be ignored, bounding boxes that need to flex, or key content that must be there, and precisely there, at all costs?
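Here's one way such a convention might look, sketched as plain bookkeeping. The ignore/flex/strict prefixes are made up for illustration, and the layer names and bounding boxes would come from whatever library you use to read the PSD:

    # Hypothetical naming convention (not an existing standard):
    #   "ignore:"  dummy content, never compared
    #   "flex:"    a bounding box whose contents may reflow
    #   "strict:"  pixels that must match exactly
    RULES = ("ignore", "flex", "strict")

    def classify_layers(layers):
        """layers: iterable of (name, bbox) pairs, bbox = (left, top, right, bottom)."""
        buckets = {rule: [] for rule in RULES}
        for name, bbox in layers:
            prefix, _, label = name.partition(":")
            rule = prefix.strip().lower()
            if rule not in buckets:
                rule = "strict"  # unprefixed layers default to exact matching
                label = name
            buckets[rule].append((label.strip(), bbox))
        return buckets

    layers = [
        ("ignore: lorem ipsum body copy", (40, 200, 600, 480)),
        ("flex: sidebar", (620, 120, 940, 700)),
        ("strict: logo", (40, 20, 220, 90)),
    ]
    print(classify_layers(layers))

The comparison step could then skip the ignore regions entirely, pixel-diff the strict ones, and only check that something sensible ended up inside the flex boxes.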

Crazy? Possible? I know I haven’t got time to play with it right now, but maybe you have?

Published by Matt

I write software, and love learning how to do it even better.

Join the Conversation


  1. With our music typesetter project, GNU LilyPond, we have a similar challenge.

    We have ~1000 regression test snippets (pieces of music notation that describe and show one feature).

    We haven’t found a fully automated way of testing this yet.

    What we do is use ImageMagick's 'compare' command with alpha-channel colouring on PNG images between releases (a rough sketch of this kind of invocation appears after this comment).

    These images are presented on a web page, ordered by their difference. This makes it fairly easy to catch errors while allowing for development.

    This page shows the differences between 2.13.3 and 2.13.0:

    http://lilypond.org/test/v2.13.3-0/compare-v2-13/index.html

    For more test results, see http://lilypond.org/test

    Jan
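The kind of per-image comparison Jan describes might look roughly like this when driven from a script. It uses compare's absolute-error metric rather than the alpha-channel colouring he mentions, it assumes ImageMagick is installed, and the file paths are placeholders:

    import subprocess

    def pixel_difference(old_png, new_png, diff_png):
        """Run ImageMagick's compare with the absolute-error metric.

        compare -metric AE writes the number of differing pixels to stderr
        and saves a visual diff image; exit status 1 just means they differ."""
        result = subprocess.run(
            ["compare", "-metric", "AE", old_png, new_png, diff_png],
            capture_output=True, text=True,
        )
        return float(result.stderr.split()[0])

    score = pixel_difference("v2.13.0/snippet.png", "v2.13.3/snippet.png",
                             "diff/snippet.png")
    print(f"{score:.0f} pixels differ")

Run that over every snippet and sort by the score, and you get the kind of ordered results page Jan links to.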
