Saturday, April 16, 2011

Test coverage in general and with GWT MVP

I have often been criticized for my view on minimum test coverage of code.

I have always advocated that any test coverage target below 100% tends to lead to the following:
Only the easy-to-cover parts of the software get tested, while the hard parts are left out ("I already have a sufficient part tested, why spend more time?"). The really tricky edge cases are sometimes left out entirely.

To make myself clear: I am not talking about 100% input coverage. Every line of code that will run in production should behave as expected (or as intended by the developer). To make sure of that, you should run through this code at least once and state your intention in a test. This will not in any way prove that your product is free of bugs; it will just reduce their number.

Sometimes I hear the argument that this is fine for backend code, where it's easy to mock out dependencies, but does not translate to UI programming. That's just not true: if you take a look at GWT MVP, you get a very good idea of how to isolate your presentation logic from the UI widgets, and you can then test most of your code in a simple JUnit test case, as the sketch below shows.
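To make that concrete, here is a minimal sketch (the names GreetingPresenter, GreetingView and FakeView are made up for this example, they are not part of GWT): the presenter only talks to a view interface, so a plain JUnit test with a hand-written fake view covers the presentation logic without a browser or the GWT runtime.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreetingPresenterTest {

    /** The presenter only knows this interface, never a concrete GWT widget. */
    interface GreetingView {
        String getName();
        void showGreeting(String text);
    }

    /** Presentation logic with no dependency on GWT UI classes. */
    static class GreetingPresenter {
        private final GreetingView view;

        GreetingPresenter(GreetingView view) {
            this.view = view;
        }

        void onGreetClicked() {
            String name = view.getName();
            if (name == null || name.trim().isEmpty()) {
                view.showGreeting("Hello, stranger!");
            } else {
                view.showGreeting("Hello, " + name.trim() + "!");
            }
        }
    }

    /** Hand-written fake view; a mocking framework would work just as well. */
    static class FakeView implements GreetingView {
        String name;
        String shownGreeting;

        public String getName() { return name; }
        public void showGreeting(String text) { shownGreeting = text; }
    }

    @Test
    public void greetsByName() {
        FakeView view = new FakeView();
        view.name = "Alice";
        new GreetingPresenter(view).onGreetClicked();
        assertEquals("Hello, Alice!", view.shownGreeting);
    }

    @Test
    public void handlesEmptyName() {
        FakeView view = new FakeView();
        view.name = "   ";
        new GreetingPresenter(view).onGreetClicked();
        assertEquals("Hello, stranger!", view.shownGreeting);
    }
}
```

This also covers the tricky edge case (the empty name) that would be tedious to reproduce through the real UI.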

Personally, I like EMMA a lot. It's an open-source toolkit for measuring and reporting Java code coverage, and it integrates nicely with Eclipse as well as Maven. In Eclipse it basically marks each line of your code green, yellow or red: red means the line was never executed during your tests, green means it was executed, and yellow means it was only partially executed (for example, only one branch of a condition). It also sums up the coverage per package.

Using EMMA you can quickly discover the dirty spots in your project and improve their test coverage. I even trigger build failures if the test coverage drops compared to the previous build, so that only tested code can be checked in.
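The exact mechanism depends on your build setup. As a rough sketch of the idea only (the file name coverage-baseline.properties, the property line.coverage and the way the current value is passed in are all invented here, and extracting the number from EMMA's report is left to an earlier build step), a tiny gate class run by the build could compare the current overall line coverage against the previous build's value and fail if it dropped:

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

/**
 * Minimal sketch of a "coverage must not drop" gate. How the current overall
 * line coverage is obtained (e.g. from EMMA's report) is intentionally left
 * out; here it is simply handed in as a system property by the build.
 */
public class CoverageGate {

    public static void main(String[] args) throws IOException {
        // Baseline from the previous build, e.g. a checked-in properties file.
        Properties baseline = new Properties();
        baseline.load(new FileReader("coverage-baseline.properties"));
        double previous = Double.parseDouble(baseline.getProperty("line.coverage", "0"));

        // Current value, extracted from the coverage report by an earlier build step.
        double current = Double.parseDouble(System.getProperty("line.coverage", "0"));

        if (current < previous) {
            System.err.printf("Coverage dropped from %.1f%% to %.1f%% - failing the build%n",
                    previous, current);
            System.exit(1); // non-zero exit code makes the build fail
        }
        System.out.printf("Coverage OK: %.1f%% (previous %.1f%%)%n", current, previous);
    }
}
```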

But if you think about a typical GWT MVP project, you will end up with a few untested places, such as your EntryPoint with all its initialization and the view implementations. They cannot be tested in a plain JUnit test, and therefore not very quickly at all. Quick feedback on your commits is very important, so what can we do about those classes? First, we exclude them from EMMA, so that we only measure the testable code. Second, most of that code is simple UI plumbing, so we can get decent (and sufficient) coverage of it in our UI tests, with Selenium for instance; a sketch of such a test follows below.
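As a hedged sketch of such a UI test with Selenium WebDriver (the URL and element ids are invented for this example, and a real test would also wait explicitly for the GWT module to finish loading), exercising the view end-to-end once is enough to cover the plumbing between widgets and presenter:

```java
import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class GreetingViewSeleniumTest {

    private WebDriver driver;

    @Before
    public void startBrowser() {
        driver = new FirefoxDriver();
    }

    @After
    public void stopBrowser() {
        driver.quit();
    }

    @Test
    public void viewWiresNameFieldAndButtonToPresenter() {
        // URL and element ids are made up for this sketch.
        driver.get("http://localhost:8888/MyGwtApp.html");

        driver.findElement(By.id("nameField")).sendKeys("Alice");
        driver.findElement(By.id("greetButton")).click();

        assertEquals("Hello, Alice!",
                driver.findElement(By.id("greetingLabel")).getText());
    }
}
```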