
Tests gone bad (part 1) – make sure it does what it does

10 April 2013
Reading time: 3 minutes

Automated testing is mandatory for successful software development in an agile process. However, once a few person-years have been poured into a project, teams often find themselves slowed down by a tangle of tests that no-one understands any more, that cause builds to take a long time, and that fail randomly for no conceivable reason. In this series of blog posts, I’m going to dig up some anti-patterns, and discuss possible remedies.

Part 1: Make sure it does what it does
Part 2: Code coverage cargo cult

This article shows you how adding tests can hurt your quality.

Make sure it does whatever it does

[Fact]
public void App_should_crash_causing_maximum_confusion()
{
    var subject = new Subject();
    Assert.Throws<NullReferenceException>(
        () => subject.DoSomeWork());
}

What this little gem in C# does is check that when the method DoSomeWork is called on an instance of type Subject, a NullReferenceException is thrown. So, what’s wrong with that?

A NullReferenceException is just bad news. It means that some variable has not been initialized, and some property or method has been called on it. This is a typical developer mistake. This kind of exception happens frequently, is totally generic, and normally leaves you with no clue what the heck is going on until you figure out all the preconditions and reproduce the error in a debugger. This is as costly and unpleasant as it sounds. You never ever want a NullReferenceException to occur in your production code.
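To make this concrete, here is a hypothetical implementation of Subject that would make the test above pass. IWorker, the _worker field, and Execute are my inventions for illustration, not code from the original project:

public interface IWorker
{
    void Execute();
}

public class Subject
{
    // This dependency is never assigned anywhere, so it stays null.
    private IWorker _worker;

    public void DoSomeWork()
    {
        // Dereferencing the null field throws the very
        // NullReferenceException the test above happily asserts.
        _worker.Execute();
    }
}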

Now this test asserts that a NullReferenceException does in fact occur. It defines producing an absolutely useless error as the correct, expected behavior of the code. Why on earth has someone written this test?1

Recipe for failure

This test, like in fact a lot of tests, checks that the code does what it does, not what it should do. A workflow that produces this kind of test looks like this:

  1. Implement the feature. Yeah, we love to implement a feature. It feels so good to get something done.
  2. Run the application. Hmm, there’s an error, let me fix this quickly…
  3. Run the application. Oh look, I can click the button, it works!
  4. Write the tests. Automated tests are totally awesome. We’ll have a test for all the code. Great, everything’s passing, on to the next feature!

First writing some code and then running it is the most natural, intuitive way of programming. The problem is that by the time the automated tests are written, the developer has acquired a strong positive bias that the code is working correctly – she’s tried it herself already2. The tests capture this perceived perfect state and make sure that it does not break afterwards. And if in this state the code throws a NullReferenceException, then that’s what the test expects.

Why is this so important?

Any test plan should be aligned with a requirement. This can be a use case, a user story, a technical design, or a verbal agreement. Any automated test specifies a test plan too – it is expected that the application behaves in the defined way. A test that captures the code as-is becomes a part of the specification. It states that a random implementation detail must stay that way – even if it is simply irrelevant, or even incorrect.

In our example, the method is now required to throw the expected exception. If in some later refactoring someone notices that there is an unchecked reference and fixes this bug, the test fails. At this point, there are two possible unpleasant outcomes:

  • Startled, the developer tries to figure out what the test means. Maybe there is a hidden wisdom in it? Do other modules rely on this behavior? Can I safely remove or change it? Let’s check who’s the author… (about 15 minutes have passed by now).
  • The developer reverts the bugfix so the test does not fail.

So surely we’ve lost productivity, and maybe we’ve lost a chance for better software.
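To illustrate the dilemma, here is a sketch of what the bugfix and a corrected test might look like, staying with my hypothetical Subject from above. The guard clause and the message are assumptions; the point is that the test now specifies useful behavior instead of an accident:

public void DoSomeWork()
{
    // The bugfix: fail with a meaningful error instead of
    // an accidental NullReferenceException.
    if (_worker == null)
        throw new InvalidOperationException(
            "Subject needs a worker before it can do any work.");

    _worker.Execute();
}

[Fact]
public void DoSomeWork_without_a_worker_should_fail_with_a_meaningful_error()
{
    var subject = new Subject();

    var ex = Assert.Throws<InvalidOperationException>(
        () => subject.DoSomeWork());
    Assert.Contains("worker", ex.Message);
}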

Now what can we do?

  • Each test should be justified. Before writing a test, ask: what is the behavior that I am testing here? What is the requirement this test helps to implement correctly?
  • Write tests first (Test Driven Development – TDD). Yes, it takes a lot of getting used to, but for a maintainable test suite that actually makes sense, TDD is worth it (a sketch follows this list).
  • Use test names that express the intention. Set up a convention for test names.
  • Tests aren’t cathedrals. If a test is not aligned with any requirement, it should be deleted.

Conclusion

Tests become an extension of the application’s requirements. By writing tests that just mirror the state of the code, the test suite becomes clogged up with implementation details. This can be avoided by aligning tests with requirements, optimally by writing tests before productive code.

In the next part, I’m going to examine why enforcing a high code coverage metric might not be such a good idea.


1 And the test was actually written – not in the shortened form, but yes, it was there in a large, successful project staffed with reasonable people.
2 Of course developers should run their code, and poke around in the application. Just maybe not so early.
