Never trust a passing test

One of the lessons you learn when practising TDD is to never trust a passing test. If you haven't seen the test fail, are you sure it can fail?

Red Green Refactor

Getting used to the red-green-refactor cycle can be difficult. It’s very natural for a developer new to TDD to immediately jump into writing the production code. Even if you’ve written the test first, the natural instinct is to keep typing until the production code is finished, too. But running the test is vital: if you don’t see the test fail, how do you know the test is valid? If you only see it pass, is it passing because of your changes or for some other reason?

For example, maybe the test itself is not correct. A mistake in the test setup could mean we're actually exercising a different branch, one that has already been implemented. In this case, the test would already pass without writing any new code. Only by running the test and seeing it unexpectedly pass can we know that the test itself is wrong.
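As a sketch of how this can happen, here is a hypothetical JUnit 5 example (the DiscountCalculator class, its thresholds, and the loyalty rule are all invented for illustration). The intention is to drive out a new loyalty discount, but a mistake in the setup means the already-implemented bulk-discount branch is exercised instead, so the test passes with no new production code at all:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // Hypothetical production code: the bulk discount (>= 100 items) is
    // already implemented; the loyalty discount is the new behaviour we
    // intend to drive out with this test.
    static class DiscountCalculator {
        int discountFor(int items, boolean loyalCustomer) {
            if (items >= 100) {
                return 10;   // existing, already-implemented branch
            }
            return 0;        // loyalty discount not implemented yet
        }
    }

    @Test
    void loyalCustomersGetADiscount() {
        DiscountCalculator calculator = new DiscountCalculator();

        // Setup mistake: 150 items also triggers the existing bulk-discount
        // branch, so this test passes without a line of new production code.
        // Running it and seeing it unexpectedly pass is what exposes the
        // broken setup.
        assertEquals(10, calculator.discountFor(150, true));
    }
}
```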

Alternatively, there could be an error in the assertions. Ever written assertTrue() instead of assertFalse() by mistake? These kinds of logic errors in tests are very easy to make, and the easiest way to defend against them is to ensure the test fails before you try to make it pass.
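For instance, here is a hypothetical JUnit 5 test (the EmailValidator stub is invented for illustration) where the wrong assertion method makes the test pass against production code that hasn't been written yet; only seeing it fail first would have caught the typo:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class EmailValidatorTest {

    // Hypothetical stub: validation hasn't been written yet, so every input
    // is reported as valid.
    static class EmailValidator {
        boolean isValid(String email) {
            return true;
        }
    }

    @Test
    void rejectsEmailWithoutAtSign() {
        EmailValidator validator = new EmailValidator();

        // The intended assertion was assertFalse(...). Because of the typo,
        // this test passes against the unimplemented stub above; only seeing
        // it fail first would have revealed the mistake.
        assertTrue(validator.isValid("not-an-email"));
    }
}
```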

Failing for the Right Reason

It's not enough to see a test fail. This is another common beginner mistake with TDD: run the test, see a red bar, jump into writing production code. But is the test failing for the right reason? Or is it failing because there's an error in the test setup? For example, a NullReferenceException may not be a valid failure; it may suggest that you need to enhance the test setup, perhaps because there's a missing collaborator. However, if you currently have a function returning null and your intention with this increment is to stop it returning null, then a NullReferenceException may be a perfectly valid failure.
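To make the missing-collaborator case concrete, here is a hypothetical JUnit 5 sketch (the OrderService and PricingRepository names are invented; in Java the equivalent exception is a NullPointerException). The setup omits a collaborator, so the test blows up before it ever reaches the assertion, which is a failure for the wrong reason:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // Hypothetical collaborator and service under test.
    interface PricingRepository {
        int priceOf(String sku);
    }

    static class OrderService {
        private final PricingRepository pricing;

        OrderService(PricingRepository pricing) {
            this.pricing = pricing;
        }

        int totalFor(String sku, int quantity) {
            return pricing.priceOf(sku) * quantity;
        }
    }

    @Test
    void totalIsPriceTimesQuantity() {
        // Setup mistake: no PricingRepository is supplied, so the test dies
        // with a NullPointerException inside totalFor() before it ever
        // reaches the assertion; a failure, but for the wrong reason.
        OrderService service = new OrderService(null);

        assertEquals(30, service.totalFor("SKU-1", 3));
    }
}
```

In this case the fix belongs in the test setup (pass in a stub repository), not in the production code; only once the test fails on the assertion itself are you ready to start making it green.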

This is why determining whether a test is failing for the right reason can be hard: it depends on the production code change you intend to make. Judging that requires not only knowledge of the code but also enough experience of doing TDD to have an instinct for the kind of change each cycle should introduce.

When Good Tests Go Bad

A tragically common occurrence: we see the test fail, we write the production code, and yet the test still fails. We're pretty sure the production code is right. But we were pretty sure the test was right, too. Eventually we realise the test was wrong. What to do now? The obvious thing is to go fix the test. Woohoo! A green bar. Commit. Push.

But wait, did we just trust a passing test? After changing the test, we never actually saw the test fail. At this point, it’s vital to undo your production code changes and re-run the test. Either git stash them or comment them out. Make sure you run the modified test against the unmodified production code: that way you know the test can fail. If the test still passes, your test is still wrong.

TDD done well is a highly disciplined process. This can be hard to appreciate for developers who are just learning it. You'll only internalise these steps once you've seen for yourself why they are vital (and not just read about it on the internets). And only by regularly practising TDD will this discipline become second nature.

Reference: Never trust a passing test from our JCG partner David Green at the Actively Lazy blog.

David Green

David Green is a developer and aspiring software craftsman. He has been programming for 20 years but only getting paid to do it for the last 10; in that time he has worked for a variety of companies from small start-ups to global enterprises.