My pal Justin Searls posted an interesting tweet last week asking for feedback on a slide from the RubyNation conference stating “‘Bad tests’ are 100% more awesome than no tests.” Of course, being Twitter, there was absolutely no context around that slide. Perhaps the presenter was in the midst of a sarcastic, ironic bit on why you need to take care in writing your tests. Being a tester, I also wanted to know more about what was meant by “bad tests,” since one person’s “bad” may be something totally different from another’s.
I thought the straightforward reading of the slide was good fodder for some thoughts around bad tests. I have some strong opinions on the topic.
Also, to set some important context: I’m not focusing solely on automated tests. A test is a test, regardless of whether it’s automated, manual, exploratory, or whatever. Some in the testing domain descend into pedantry debating “testing versus checking” or “sapient versus non-sapient”; I won’t.
What’s a Bad Test?
Good tests give the organization useful, valuable information about the state of the system. Good tests increase the “trust surplus” others in the organization have about the system and about the delivery team. Good tests minimize the cost of building and maintaining the test over time.
Everyone’s vernacular is different, but I’m the one writing this blogpost, so my definition of “bad test” wins.
To me, a bad test is one that:
- Provides no useful information about the system
- Provides false information about the system
- Provides a false sense of security about the system
- Increases cost of building and maintaining the test suite over time
- Decreases trust in the system and the delivery team
Why Are Bad Tests Worse Than No Tests?
With the above definitions in mind, it’s not too hard to see why bad tests might be worse than no tests at all. Bad tests make everyone dig the quality hole deeper and increase everyone’s frustration and stress levels.
False Sense of Security
Worse yet, organizations may be led into a horribly misplaced sense of safety when in reality the system is in a very bad state. Bad tests may be missing huge swaths of functionality, despite impressive but useless metrics. (“But we have 95% code coverage!” Yes, but those were lousy tests that performed no effective checks or reported incorrect information.)
Absolution of Responsibility
A particularly insidious effect of bad tests is fostering a culture that absolves the team of responsibility. Instead of constantly asking “What other information do we need to search for?” teams end up falling back on excuses like “We had 3,342 test cases with a 94.6% pass rate. Someone else missed something.”
Loss of Trust
Worst of all is the downward spiral of trust that bad tests inevitably create. Intermittent failures breed frustration among delivery team members. Bad tests miss critical bugs that your users end up finding. Ask your support folks how they feel when they’re getting hammered by angry customers.
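Those intermittent failures usually trace back to tests with hidden dependencies on timing, ordering, or environment. A small hypothetical sketch (the `fetch_status` function and its simulated latency are my own illustration, not from the original talk) of the pattern and one way out:

```python
import random
import time

def fetch_status(simulated_latency=None):
    # Hypothetical stand-in for a network call whose latency varies.
    delay = simulated_latency if simulated_latency is not None \
        else random.uniform(0.0, 0.2)
    time.sleep(delay)
    return "ok"

def flaky_test():
    # Bad test: passes or fails depending on machine load and luck,
    # because it asserts on wall-clock time instead of behavior.
    start = time.monotonic()
    status = fetch_status()
    elapsed = time.monotonic() - start
    assert status == "ok"
    assert elapsed < 0.1  # fails intermittently

def deterministic_test():
    # Better: control the variable input and assert only on behavior.
    assert fetch_status(simulated_latency=0) == "ok"
```

The fix here isn’t deleting the test; it’s removing the uncontrolled variable so a failure means something again, which is exactly what rebuilds trust.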
Don’t Settle for Bad Tests
Delivering high-value software isn’t an easy thing. Avoid getting sucked into bad tests that mislead your team, organization, and customers into a never-ending spiral of doom. Work hard to have your entire team involved in creating high-value, high-quality tests regardless of whether they’re automated, manual checklists, exploratory sessions, or other tests.
Bad tests aren’t better than no tests. They’re potentially far, far worse. I know, you’re shocked.