The notion that Test Coverage alone can provide an adequate metric for determining how well a particular piece of code, or an entire codebase, is being tested has always troubled me. In my experience this can be a very misleading assumption.
Like many of my previous themes, with Test Coverage context truly is key. The issue I find is that relying on a predetermined percentage threshold to provide a “true” measure of successful testing simply fails to take into account the numerous factors involved. For example, the perceived thoroughness of a system’s tests can easily be increased – be it mistakenly or intentionally – by testing parts of the system which could be considered irrelevant, or which provide very little tangible value, such as getters, setters and the like.
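To illustrate, here is a minimal sketch in Java with JUnit 4 (the Account class, its fields, and the test are hypothetical, not taken from any real system) of the kind of test that lifts the coverage number while proving very little:

```java
// Hypothetical example: a trivial accessor test that marks lines as
// "covered" without exercising any meaningful behavior.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AccountTest {

    // Simple value holder with plain getters and setters.
    static class Account {
        private String owner;
        public String getOwner() { return owner; }
        public void setOwner(String owner) { this.owner = owner; }
    }

    @Test
    public void setterAndGetterRoundTrip() {
        Account account = new Account();
        account.setOwner("Alice");
        // Passes, and the coverage report now shows these accessors as tested,
        // yet it tells us almost nothing about how the system actually behaves.
        assertEquals("Alice", account.getOwner());
    }
}
```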
Personally, I advocate focusing first on testing the most critical behaviors and state of a particular piece of code. The tests should not be limited to just the expected cases; of equal importance is the testing of exceptional and negative cases.
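As another hedged sketch (the Account/withdraw example and its rules are invented purely for illustration), tests for a critical behavior would cover the expected path alongside the negative and exceptional paths:

```java
// Hypothetical example: exercising a critical behavior through its
// expected, negative and exceptional paths, not just the happy path.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class WithdrawalTest {

    // Minimal domain object, invented purely for illustration.
    static class Account {
        private long balance;
        Account(long openingBalance) { this.balance = openingBalance; }
        long withdraw(long amount) {
            if (amount <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            if (amount > balance) {
                throw new IllegalStateException("insufficient funds");
            }
            balance -= amount;
            return balance;
        }
    }

    @Test
    public void expectedCase_withdrawalReducesBalance() {
        assertEquals(60, new Account(100).withdraw(40));
    }

    @Test(expected = IllegalStateException.class)
    public void negativeCase_overdraftIsRejected() {
        new Account(100).withdraw(150);
    }

    @Test(expected = IllegalArgumentException.class)
    public void exceptionalCase_nonPositiveAmountIsRejected() {
        new Account(100).withdraw(0);
    }
}
```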
I could go on about this in great detail; however, I recently came across a really good post from googletesting (which I found via Mike Labriola) that I think pretty much sums it up.
code coverage, tdd, test coverage, test driven design, test driven development, unit testing