Maintenance Matters: Code Coverage

Dylan Lederle-Ensign, Former Senior Developer

Article Category: #Code

At Viget we strive to keep every project at 100% test coverage. This post explores what that number really tells you, why we think it’s valuable, and some tactics we use to achieve it.

This article is part of a series focusing on how developers can center and streamline software maintenance. The other articles in the Maintenance Matters series are: Continuous Integration, Documentation, Default Formatting, Building Helpful Logs, Timely Upgrades, Code Reviews, Good Tests, and Monitoring.

What #

A test coverage percentage is a feature of the automated testing tools we use. There are differences between language environments, but they all work approximately the same way: the coverage tool runs the test suite and counts which lines of code were executed during the tests, then divides that count by the total number of executable lines in the project to give a percentage of covered lines. For example, if 950 of a project’s 1,000 executable lines run during the suite, coverage is 95%.

We then enforce this coverage percentage in our Continuous Integration builds. A build will fail if the coverage number dips below the specified threshold, in our case 100. Why 100? It’s an easy, round number and it makes my monkey brain happy to see three digits instead of two.
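As a rough illustration, enforcing that threshold on a Ruby project using SimpleCov might look like the sketch below (other tools have equivalent options; the file path is just an assumption about your setup):

# spec/spec_helper.rb (a sketch, assuming SimpleCov on a Rails project)
require "simplecov"

SimpleCov.start "rails" do
  # Fail the test run, and therefore the CI build, if coverage drops below 100%
  minimum_coverage 100
end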

Now, you might be saying, “Wow, 100% coverage. So everything is tested?” Not quite. That coverage number just tells you the line of code was executed during a test; it says nothing about the code’s behavior, whether the tests are good, or whether they test the right requirements. Full coverage is necessary, but not sufficient, for a well tested application.

It is still the developers' responsibility to think carefully about what they are testing and how. At a minimum, enforcing 100% coverage in your builds gives you a little robot that bugs you to write tests whenever you check in new code.
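To make that distinction concrete, here is a sketch in RSpec (FullName is a hypothetical class): both specs execute the same application code, so both count toward coverage, but only the second asserts anything about behavior.

RSpec.describe FullName do
  it "executes the code but asserts nothing" do
    FullName.for("Ada", "Lovelace") # counted as covered, behavior unchecked
  end

  it "asserts on the result" do
    expect(FullName.for("Ada", "Lovelace")).to eq("Ada Lovelace")
  end
end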

Why #

Legacy code is any code not covered by automated tests.

Michael Feathers, more or less

At Viget we are starting greenfield projects most of the time, not adding coverage to existing code. It’s difficult to go back and raise coverage on an existing app with tons of code, but at the beginning a project only has a few hundred lines of mostly framework code anyway. If we enforce coverage from the start, all subsequent code changes will come with tests, and the whole system will stay covered.

We often start a project, do the first 3-9 months of work to get a prototype out the door, and then someone else takes over maintenance. We always try to write clean, idiomatic code in whatever language or framework we’re working with, and we try to document where maintainers should look for which features. But “clean” and “idiomatic” are in the eye of the beholder, and documentation can get out of date. Executable tests covering the entire application give maintainers the most confidence in their changes. We don’t want to hand off untested “legacy” code that is scary to modify; we want to deliver flexible code bases that can support wherever the business might grow.

When to ignore coverage #

Okay, so we set the bar at 100 percent coverage. What does that really mean? Every code coverage tool gives you some wiggle room in how you define what needs to be covered. At Viget we use them pragmatically. One of the easiest ways to keep coverage high is to learn how to ignore files or lines of code in your tool.

Sometimes, there are things that you don’t want to test, like config files. You can ignore those entire files.
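With SimpleCov, for example, ignoring whole files or directories is a one-line filter; the paths below are just placeholders for wherever your untestable config lives.

SimpleCov.start "rails" do
  # Exclude entire directories from the coverage calculation
  add_filter "/config/"
  add_filter "/spec/"
end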

Sometimes, there are things that you can’t test, like external integrations. You can stub or mock them in your tests, but that may still leave the code that makes the real call unexercised. Wrap those external calls in ignore blocks.
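The stubbing half of that might look like this sketch in RSpec (Checkout and PaymentGateway are hypothetical stand-ins for your code and the external client):

RSpec.describe Checkout do
  it "completes an order without hitting the real gateway" do
    # Stub the external call so the test never leaves the process
    allow(PaymentGateway).to receive(:charge).and_return(true)

    expect(Checkout.new.complete!).to be true
  end
end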

Sometimes, you might have environment-specific code blocks, something like if Rails.env.production?, which won’t get run by your test suite. Ignored.

Sometimes, your code might just be hard to test. Here’s where you have to exercise some judgment. Code that’s hard to test is an indication that the structure of the code isn’t quite right. This is a “smell”, telling you to try again. But if you can’t figure out a cleaner solution, the code meets the client’s requirements, and you’ve got a deadline: wrap that function in an ignore block and file a refactor ticket in your bug tracker of choice. It’s essential that you keep an open line of communication with your PMs so this ticket doesn’t slip forever. Code quality is your responsibility as a developer; advocate for it during planning phases. But don’t let arbitrary coverage requirements block you from shipping useful software.

How to ignore coverage #

On JavaScript projects using Jest, you can ignore entire files (such as your node_modules directory) or individual lines like:

/* istanbul ignore next */

The exact comment syntax depends on your coverage provider.

On Elixir projects, we use parroty/excoveralls with ExUnit, which looks like:

  # coveralls-ignore-start
  def ignored do
    annoying_integration()
  end
  # coveralls-ignore-stop

On Ruby projects, we like SimpleCov, which gives you the ability to "nocov" blocks of code like:

# :nocov:
def untestable_method
  my_perfect_code
end
# :nocov:

Whatever tool you’re using for coverage, keep the docs for how to ignore code handy.

Conclusion #

As we’ve seen, this coverage number is pretty easy to manipulate. You can achieve 100% trivially by ignoring all the source files, but then you’re just burning dollars on CI for no benefit. The coverage number also just says that this line was executed during a test, not that you have assertions on the code’s behavior. Most web applications can achieve high coverage numbers with an end-to-end test (using something like Playwright) that logs in and clicks on the main buttons. Use this number as a floor, and a tool to enforce adding tests with any new code. Don’t let it become an end in itself, and remember that it doesn’t tell you anything about the quality of the tests themselves.

The next article in this series is Maintenance Matters: Documentation.
