As with most stupid questions like this, the answer is “neither”. There are times when integration tests really help, and there are times when they can be a pain in the neck.
I was prompted to write this post when a colleague pointed me towards this page on the behaviour-driven wiki. It mentions the disadvantages of integration tests, which usually involve some complex (and often slow to run) procedure to set up an expected state in the system your code integrates with. That tight coupling with the external system reduces agility and makes the test code brittle.
The page does point out that “even in this state the code is often much more robust and a much better functional fit than code developed under more traditional methods based on large-scale up-front design”.
I agree with the principles in the article, and I believe BDD is a great way to think, but I do think as long as the integration tests are well-factored (and hence easy to change) then the problems highlighted don’t apply – you’re still going to be quick on your feet if requirements change.
The question is whether you’re going to spend more time fixing your unit tests than you would have spent debugging the code. If you’re confident you can write it correctly first time, and anybody needing to change it later is highly unlikely to introduce bugs in the area you’re coding, it’s a waste of everybody’s time to write a unit test – the test just becomes baggage for the team to drag around.
Conversely, if there’s a risk that future changes could break what you’re coding, or you’re bored of hitting F5 in the browser to test some subtle tweak in a function deep down in a subsystem, then finding an imaginative way to write a lightweight unit test that isolates that function and proves it works as you want is probably going to save you some dull debugging time.
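That kind of lightweight test can be tiny. Here’s a sketch in Python, with a made-up `apply_discount` function standing in for the “function deep down in a subsystem” – the point is that the test exercises it directly, with no browser and no external system to set up:

```python
# Hypothetical example: a small function buried deep in a subsystem,
# plus a lightweight unit test that exercises it directly instead of
# re-testing through the browser after every tweak.

def apply_discount(price, loyalty_years):
    """Return the price after a loyalty discount: 2% per year, capped at 20%."""
    discount = min(loyalty_years * 0.02, 0.20)
    return round(price * (1 - discount), 2)

def test_apply_discount():
    assert apply_discount(100.0, 0) == 100.0   # no loyalty, no discount
    assert apply_discount(100.0, 5) == 90.0    # 5 years -> 10% off
    assert apply_discount(100.0, 15) == 80.0   # discount capped at 20%

test_apply_discount()
print("all checks passed")
```

Because the function is isolated, the test runs in milliseconds and breaks only when the behaviour it pins down actually changes – none of the brittleness that comes from coupling to an external system.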