Is code coverage a sufficient metric?
When writing unit tests, we generally aim for the highest possible code coverage, with the ultimate goal of reaching 100%. But once this goal is achieved, is the code actually tested correctly? Can you say there is no bug in your application? Let's look at an example.
public class Foo
{
    private int _add = 42;

    public int Bar(int x)
    {
        x += _add;
        return x;
    }
}
So, naturally, we write the following test:
[TestMethod()]
public void BarTest()
{
    Foo target = new Foo();
    Assert.AreEqual(42, target.Bar(0));
}
With this test, we have 100% code coverage. Nothing extraordinary here. Now let's change the method:
public int Bar(int x)
{
    x += _add;
    _add++;
    return x;
}
We have introduced a bug: the field _add is now incremented on each call. However, the test still passes and the code coverage is still 100%. Checking that the method returns the same result when it is called several times exposes the problem.
[TestMethod()]
public void BarTest()
{
    Foo target = new Foo();

    // With the buggy implementation, the first call returns 42 and the second returns 43,
    // so this assertion fails.
    Assert.AreEqual(target.Bar(0), target.Bar(0));
}
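In practice, the two checks are complementary: one verifies the expected value, the other verifies that the internal state does not drift between calls. As a minimal sketch (the test name BarTest_ReturnsSameValueOnEachCall is my own, not part of the original example), both assertions can be combined in a single test:

[TestMethod()]
public void BarTest_ReturnsSameValueOnEachCall()
{
    Foo target = new Foo();

    // Verify the expected value for a known input...
    Assert.AreEqual(42, target.Bar(0));

    // ...and verify that a second call with the same input returns the same value.
    // The buggy implementation returns 43 here, so this test catches it.
    Assert.AreEqual(42, target.Bar(0));
}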
To conclude, code coverage is a good way to spot pieces of code you forgot to test. However, it is not enough on its own: you can cover every line, but if your assertions are not meaningful, the tests are useless…
Do you have a question or a suggestion about this post? Contact me!