
A Tale of Two Qualities


“Here then, as I lay down the pen and proceed to seal up my confession, I bring the life of that unhappy Henry Jekyll to an end.”

— Robert Louis Stevenson, The Strange Case of Dr. Jekyll and Mr. Hyde

The old saying “What gets measured gets done” makes immediate sense: before you can improve anything, you first have to gain insight into it. When you act, you want to see the consequences as soon as possible, compare them to the desired results, and adapt accordingly. Further, measurements not only give you a picture of the past and present; through extrapolation, you get an impression of the future as well.

In modern software development, we constantly measure the quality of our product right from the beginning. We check for build breakers and failing test cases, code coverage, memory consumption and execution times on a check-in basis, as part of our continuous integration process. We always know the quality of our product — there won’t be any big surprises at major milestones or the end of the project. Running a software project like this removes the chance factor — this is quite the opposite of what happens when you follow a waterfall model with its dreaded big-bang integration phases.
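
To make this concrete, here is a minimal sketch of such a per-check-in gate in Python; it assumes pytest and coverage.py are available, and the 90 percent threshold is merely an example value, not a recommendation:

    #!/usr/bin/env python
    """Toy per-check-in quality gate: run the test suite and enforce a
    coverage threshold (assumes pytest and coverage.py are installed)."""
    import subprocess
    import sys

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.call(cmd)

    def main():
        # Any failing test case breaks the build immediately.
        if run(["coverage", "run", "-m", "pytest"]) != 0:
            sys.exit("gate failed: failing test cases")
        # A drop below the agreed coverage threshold breaks it as well.
        if run(["coverage", "report", "--fail-under=90"]) != 0:
            sys.exit("gate failed: line coverage below 90 percent")
        print("gate passed")

    if __name__ == "__main__":
        main()

Hooked into the continuous integration server, a script like this makes the external quality verdict visible on every single check-in.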

But let’s face it: we mainly focus on controlling external quality; that is, everything that is visible to the customer.

The level of external quality is usually (relatively) easy to determine, since external requirements are specified such that they are unambiguously verifiable (at least they should be). Thus, developers and testers can implement automated tests that will reveal any deviations. Because external quality is directly visible to customers and directly influences whether they buy (or return) a product, it is not difficult to get proper funding for people and tools. In this sense, external quality really lives on the sunny side of software life.

Sadly, external quality’s brother, internal quality, lives an unhappy life in the dark: internal quality requirements are usually not nailed down precisely; in fact, they are often fuzzy and neglected. Sure, there are coding guidelines that mandate a certain indentation style and whether to use tabs or spaces (among other things), but are there actually checks against violations? And what about compiler warnings, bad coding practices, the size of classes and methods, cyclomatic complexity, coupling between classes, missing API documentation, and so on? And who is willing to invest in work the customer doesn’t see, anyway?

In most cases the answer is a blatant ‘No’. Internal quality requirements are neither stated in a precise, verifiable way nor given high priority, and they are almost never checked automatically as part of the continuous integration process. Since internal quality is not observable by the user, you can get away without paying attention to it. But then you have to pay for something else: an ever-increasing amount of technical debt and the compound interest that ensues.

Why does all this matter? In order to compete, a software product undergoes a multitude of major and minor modifications over many years, often carried out by developers other than the original authors. Once the first release is shipped, maintainability becomes a crucial factor, and internal quality determines whether future changes will be easy (cheap), hard (expensive), or outright impossible. Don’t underestimate the impact of even the smallest things; a single broken window may be responsible for decay and rising crime levels in a town district.

What we need is a shift in mindset: first, internal quality requirements must be given the same priority as external quality requirements; second, internal quality must be perpetually tracked (and acted upon) as part of the continuous integration process.
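
What might such a check look like? The following sketch (again Python, for a Python code base) fails the build when a function exceeds a length budget; the 50-line limit, the src/ directory, and the choice of metric are assumptions for illustration only:

    #!/usr/bin/env python
    """Toy internal quality gate: reject check-ins that contain overlong
    functions. Requires Python 3.8+ (for ast end_lineno)."""
    import ast
    import pathlib
    import sys

    MAX_LINES = 50  # example budget, to be agreed upon by the team

    def overlong_functions(path):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_LINES:
                    yield node.name, node.lineno, length

    def main():
        found = False
        for path in pathlib.Path("src").rglob("*.py"):
            for name, line, length in overlong_functions(path):
                found = True
                print("%s:%d: '%s' is %d lines long (max %d)"
                      % (path, line, name, length, MAX_LINES))
        sys.exit(1 if found else 0)

    if __name__ == "__main__":
        main()

Whether the team measures function length, cyclomatic complexity, or compiler warnings is secondary; what matters is that the chosen internal quality criteria are checked on every check-in, just like the external ones.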

It is of utmost importance to keep track of internal quality right from the beginning, when the mental distance is low and issues can be corrected with the least amount of work. Further, the whole team immediately benefits from the improved quality, and the learning effect ensures that the effort required drops rapidly. Postponing internal quality work to late phases of the project is a costly and frustrating experience.

For some internal quality metrics it is necessary to build special-purpose analysis tools that cannot be bought off the shelf, but dynamic/scripting languages and heuristic regex-based parsing can go a long way. And even if you need more precision, there are often free/open-source tools and frameworks at your disposal: I know of one C++ project that implemented several clang plugins to ensure that (among other things) identifier names were in line with the coding conventions.
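
Such a heuristic regex-based checker does not need to be big. The sketch below (Python once more) flags class names in C++ headers that do not follow a hypothetical UpperCamelCase convention; the regex, the *.h glob, and the convention itself are illustrative assumptions, not the rules of the project mentioned above:

    #!/usr/bin/env python
    """Heuristic naming-convention check: flag class/struct names in C++
    headers that are not UpperCamelCase. Regex-based, so deliberately
    approximate; good enough to catch the common offenders."""
    import pathlib
    import re
    import sys

    # 'class Foo' / 'struct Foo' followed by ':', '{' or a line break.
    CLASS_DECL = re.compile(r'\b(class|struct)\s+(\w+)\s*(?=[:{\n])')
    UPPER_CAMEL = re.compile(r'^[A-Z][a-zA-Z0-9]*$')

    def violations(text):
        for match in CLASS_DECL.finditer(text):
            name = match.group(2)
            if not UPPER_CAMEL.match(name):
                yield name

    def main():
        found = False
        for path in pathlib.Path(".").rglob("*.h"):
            for name in violations(path.read_text(errors="replace")):
                found = True
                print("%s: '%s' violates the UpperCamelCase convention"
                      % (path, name))
        sys.exit(1 if found else 0)

    if __name__ == "__main__":
        main()

It will not catch every corner case a real parser (or a clang plugin) would, but it turns a fuzzy guideline into something the continuous integration job can actually enforce.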

Every developer has different standards as to what good-enough internal quality is. To make matters worse, day-to-day routine and especially schedule pressure will lure developers into accepting the status quo of their code once testing has proved that external quality criteria are met. But the converse is also true: without indisputable measurements, developers might spend more time than needed, endlessly polishing their beloved code.

We should give both sides of the quality coin equal consideration. While external quality is the necessary basis for the short-term success of a product, it is internal quality that ultimately determines the long-term success of a software company. Hence, we should remove the chance factor from internal quality, too.