PPSD: The Non-Painting Painter And The Non-Painter

“If you put off everything till you’re sure of it, you’ll never get anything done.”
— Norman Vincent Peale

I once read about an individual who wanted to become a painter. But instead of painting, he spent a lot of time and money on finding the right canvases, brushes, and paints. He read dozens of books on painting, went to museums, and studied the great masters, but he never got around to painting a picture.

Inspired by this story, I coined the phrase “Non-Painting Painter” to label individuals who behave in such a manner.

It’s highly likely that you will encounter non-painting painters in your career, so it’s worthwhile to shed more light on their attributes and behaviors, because it’s easy to misdiagnose them. It’s especially common to confuse them with a more extreme form, which I call the “Non-Painter”. Non-painting painters and non-painters have a lot in common (after all, they don’t paint), but while there is hope of a cure for the former, all is lost for the latter.

APPEARANCE

Non-painting painters and non-painters are indistinguishable by sight or sound. You recognize them by their personality traits.

PERSONALITY TRAITS

Both non-painting painters and non-painters procrastinate; they are constantly on the lookout for reasons to evade their primary task. This doesn’t imply that they are lazy — far from it! They spend a lot of effort on side issues. What they don’t do is get the task done that they are supposed to do.

To them, there is always more research to be done, more code samples to study, more design documents to write, more meetings to call for. They cannot start to code unless the coding standards have been approved by QA, and to prevent this from ever happening, they perpetually submit change requests. Then, some day, they hear about a new build system, programming language, or software tool that needs to be evaluated because it would “save us from so many problems in the future”.

Other characteristic attributes of both the non-painting painter and the non-painter: they are usually smart people with above-average social skills. They have a lot to say and seem to be well-versed in many contemporary technologies (at least superficially) as well as everyday topics. They are a naive hiring manager’s dream come true.

To qualify as a non-painting painter, it’s imperative that such an individual still possesses the willingness and ability to actually perform the task they so eagerly avoid. The chief reason why they don’t perform is rooted in fear: fear of making mistakes, fear of looking stupid in front of coworkers. But contrary to the non-painting painter’s belief, it’s no shame to have little experience, as long as there’s a strong desire to learn and improve. In his book “The Passionate Programmer”, Chad Fowler goes as far as giving this advice: “Always be the worst guy in every band you’re in – so you can learn. The people around you affect your performance. Choose your crowd wisely.”

But let’s now turn to the pathological, incurable sibling of the non-painting painter — the non-painter. Non-painters also don’t produce what they’re supposed to; they, too, waste time on unimportant, minute details and side issues. But contrary to the non-painting painter, they have neither the intention nor the ability to get the job done. If that wasn’t bad enough, in almost all cases they go to great lengths to prevent their teammates from getting their jobs done as well, thus multiplying their own unproductivity. Why? Non-painters are in constant fear that well-performing teammates will steal the limelight from them and expose them for what they are: freeloading charlatans.

RATING

According to the Q²S² framework, the rating of a non-painting painter and a non-painter is the same: 2/1/4/2; the Q²S² framework is unfortunately not able to tell them apart.

POLAR OPPOSITE

The Codenator

CONCLUSION

A non-painting painter is still a painter, albeit an inhibited one. My recommendation is to give non-painting people the benefit of the doubt and not label them as non-painters by default. If they don’t have the courage to speak about their fears, a senior developer should approach them and provide support and guidance. Just like Attaboys, non-painting painters are often diamonds in the rough. If they are open about their inexperience and strive hard to improve, they can turn into a valuable asset. However, if their productivity doesn’t improve, relabeling them as non-painters is more than deserved, and non-painters have no place on the team.

Why We Count From Zero

“A zero itself is nothing, but without a zero you cannot count anything; therefore, a zero is something, yet zero.”
— Dalai Lama

If you do a Google search for why programmers typically start counting from zero, you’ll likely find two reasons. Today, I’m going to add another one. But let’s start with the usual explanations.

DIJKSTRA’S HALF-OPEN RANGES

On August 11, 1982, Edsger Dijkstra wrote a short paper on why numbering should start at zero. He first demonstrates that half-open ranges with an excluded upper bound are superior to other alternatives:
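
To denote the subsequence of natural numbers 2, 3, …, 12 without the pernicious three dots, his note (EWD831) considers four conventions:

    a) 2 ≤ i < 13
    b) 1 < i ≤ 12
    c) 2 ≤ i ≤ 12
    d) 1 < i < 13

and argues that convention a), with the lower bound included and the upper bound excluded, is the one to prefer.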

Here’s a quick summary of his reasoning, in case you don’t want to read the paper yourself. The main advantages are that a) you can easily represent empty ranges (i.e., lower bound equals upper bound) and b) you can compute the number of elements in a range by simply subtracting the lower bound from the upper bound.

Based on ranges where the upper bound is excluded, Dijkstra goes on to show that for sequences of N elements, there are only two ways of indexing:
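
    a) 1 ≤ i < N + 1    (indices start at 1)
    b) 0 ≤ i < N        (indices start at 0)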

Obviously, the latter is much more elegant and hence we see code like this everywhere:
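
    for (int i = 0; i < N; ++i) {
        /* work with the i-th element, e.g. a[i] */
    }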

BASE-RELATIVE ADDRESSING

Sequences of homogeneous data in contiguous memory are almost universally laid out like this:
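
    address                    contents
    -------------------------  -----------
    base                       1st element
    base + sizeof(elem)        2nd element
    base + 2 * sizeof(elem)    3rd element
    ...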

A sequence starts with the first element at some base address, the second follows sizeof(elem) bytes after the first element and so on. Computing the start address of the n-th element can be done using this formula:
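
    address(n) = base + n * sizeof(elem)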

However, this formula only applies if you index your elements from 0 to N – 1. If instead you chose to number indices from 1 to N, the formula would need to be adapted:
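
    address(n) = base + (n - 1) * sizeof(elem)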

This alternative is not only less pleasant to look at, but because of the additional subtraction also harder for the CPU to compute. Consequently, the fathers of C employed the zero-based array access syntax that we are all so familiar with:
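
    elem[i]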

which is really just a shorthand notation for
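
    *(elem + i)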

If i is 0, you get the first element; if i is positive, the i-th successor of elem; and if i is negative, the i-th predecessor of elem. The latter fact often surprises developers because they assume either that negative offsets are illegal in the first place or that they yield elements from the end of the array, as in Python.

Incidentally, there’s another secret to C array indexing: since the addition operation is commutative, you can equally well write
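
    i[elem]    /* same element as elem[i] */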

As obvious as this is in hindsight, it’s not well known amongst C programmers and a good opportunity to show off at parties. However, I strongly advise against writing code like this for production use.

MODULAR ARITHMETIC

As you know, applying the mod operator like this
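
    x % N    /* with x being a non-negative integer */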

yields values ranging from 0 to N – 1. This dovetails nicely with zero-based indices into sequences.

Take hash maps for example. To determine the position of an element in a hash map containing N slots, just apply a hash function and take the result modulo N to get the index. That’s it!
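
In C, the slot lookup is a one-liner (hash() and key stand in for whatever hash function and key type you use):

    size_t index = hash(key) % N;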

Another use case is the ring buffer, one of my favorite containers: to advance an index into a ring buffer, just add the desired offset and apply the mod operator to get wrap-around — no need for extra if/else logic. Again, starting indices at 1 instead of 0 would entail extra additions and subtractions.
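
In C, advancing the index boils down to something like this:

    index = (index + offset) % N;    /* wraps around automatically */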

There you have it — one more reason to start numbering from zero. (As if you still needed to be convinced…)

A Neverending Story

“Nothing is lost. Everything is transformed.”
― Michael Ende, The Neverending Story

I explained in an earlier post that I don’t view technical debt as something that is bad per se. Rather, I believe that “good technical debt” — at times — should be employed for strategic reasons. As an example, when developing a feature, you might take shortcuts to unblock stalled teammates who need your changes in order to carry on with their own work.

Let me reiterate: to qualify as good technical debt, it must be

1. taken on consciously
2. managed
3. repaid timely

How can this be achieved in practice?

Once you’ve made a deliberate decision and considered the pros and cons, you implement your makeshift solution. But before you mark your task as done, you create another task which has the goal of removing the technical debt you just introduced. But where should you put this task?

Don’t hide it in the product backlog, as it a) usually contains many issues already, b) is more concerned with externally observable features, and c) is actually owned by the product owner. If you did, technical debt issues would most likely be delayed (or rather ignored) in favor of “real” stories, which would grossly violate good technical debt requirement #3: “repaid timely”. Instead, put it under a story of the current sprint titled “Repay technical debt”.

An immediate advantage of this approach is that it makes your shortcuts visible to the whole team. Further, everybody, including you, can pick up this task and just start working on it. However, in typical cases, such clean-up tasks won’t be done in the current sprint. So what happens with the “repay” story and all attached tasks at the end of a sprint?

You move it to the next sprint, of course! Consider it a story that never ends, which nicely mirrors a well-known software engineering truth: the fight against software entropy goes on forever. Since the story doesn’t go away, it’s a great reminder for the whole team that repaying technical debt (or constantly improving internal software quality) is of super-high importance.

Having this story in place allows you to manage technical debt, as stipulated by good technical debt requirement #2. Are there too many technical debt tasks in this story? Maybe it’s time for a “technical debt sprint” where everyone focuses on shortening the list instead of adding new functionality. Are there no technical debt tasks at all? Maybe the team isn’t really tracking its technical debt. Another possibility is that the team doesn’t use “good technical debt” as a strategic tool and instead makes other people wait for its gold-plated implementations.

In my view, a neverending “repay technical debt” story is a great tool. It’s a quality backlog, maintained by developers, that puts the dirt right in front of everybody’s noses. I believe that this drastically mitigates the risk of creating a maintenance nightmare while still allowing for occasional shortcuts.

Working the Bash Shell Like a Pro, Service Pack 1

“Man is a tool-using animal. Without tools he is nothing, with tools he is all.”
— Thomas Carlyle

“A fool with a tool is still a fool.”
— anonymous

“A fool with a tool is a dangerous fool.”
— anonymous

Five years ago, I wrote a popular post called “Working the Bash Shell Like a Pro”, a short collection of essential tips and tricks for working efficiently with the Bash shell.

Like everyone, I use many “tricks” during my daily interaction with Bash; however, only a few of them qualify to be added to my original list. Now, the time has come for a small update.

1. The Bash Curse

As you already know from the previous post, ‘!#’ selects the “entire command-line typed so far”, ‘:1’ takes the first argument in the “command-line typed so far” and ‘:r’ strips the extension from it. So changing the file extension of a file can be done with a fairly short command-line:
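
For instance, here’s how you could rename a C file to a C++ file (mv and the .cpp extension are picked purely for illustration):

    mv some/long/path/to/file.c !#:1:r.cpp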

which is — depending on the path length — a lot shorter than
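
    mv some/long/path/to/file.c some/long/path/to/file.cpp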

Instead of picking the first argument by employing the ‘:1’ word designator you could equally well take the last argument by specifying ‘:$’. Why? There is only one argument typed so far (“some/long/path/to/file.c”), so the first argument is the last argument. Consequently, this achieves the same effect as the previous command:
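
    mv some/long/path/to/file.c !#:$:r.cpp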

You probably think that this hasn’t gained us much since the number of characters to type hasn’t changed. That’s true but Bash has a special shortcut for ‘!#:$’ which is ‘!#$’. This saves us the colon and hence one character:
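
    mv some/long/path/to/file.c !#$:r.cpp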

Since ‘!#$’ looks like a grawlix, I call this character sequence the “Bash curse”. I use it many times a day, whenever I need to reuse (parts of) the argument preceding the argument I’m currently typing.

2. Skipping History

All of the mentioned tricks depend heavily on Bash’s history feature. After all, we want to save time by reusing something we typed earlier. Sometimes, however, we would rather not mess up our history. Take this command-line, for example:
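
    conan install .. --build=missing

(The exact arguments don’t matter; picture any longish command-line that you execute all the time.)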

Let’s further assume that you need to execute this regularly, many times a day. If you haven’t created an alias for it, you’ll probably pull the CTRL-R stunt*, something which I also explained in my previous post. So you hit CTRL-R and start typing “conan install”. Immediately, the desired command-line shows up and all you have to do is hit “Enter” to execute it.

So far so good. Sometimes, however, you want to fine-tune the command-line, for instance, to do a release build:
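
    conan install .. --build=missing -s build_type=Release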

After you’ve executed this command, your next attempt to retrieve the original command-line via CTRL-R will first find the one containing ‘-s build_type=Release’, which is not what you want, as you only rarely do release builds. So it would have been better if the release-build command-line had never been recorded in Bash’s history.

Another example of when you don’t want something to end up in your Bash history is when you provide passwords or credentials on the command-line, as in
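
    curl -u admin:S3cr3t https://internal.example.com/report

(with made-up credentials, obviously)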

So how do you prevent a command from being remembered in Bash’s history? It’s easy. Just put a single space before the command you are about to execute:
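
     conan install .. --build=missing -s build_type=Release

Note the single space in front of ‘conan’. This works because on most systems HISTCONTROL contains ‘ignorespace’ (or ‘ignoreboth’), which tells Bash not to record commands that begin with a space; if your setup lacks it, add ‘export HISTCONTROL=ignoreboth’ to your .bashrc.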

This concludes service pack 1. Don’t expect service pack 2 anytime soon…

*) If I had to give up all the Bash tricks and could only keep one, I would keep — hands down — the CTRL-R trick.

Why I Still Use std::lock_guard

“Better a lively old epigram than a deadly new one”
— Helen Rowland

If you’re programming in C++17 or above, you can choose among three different RAII-like mutex wrappers: std::lock_guard, std::scoped_lock, and std::unique_lock. But which one should you pick?

Most C++ programmers today advise you to use std::scoped_lock by default, unless you have to use std::unique_lock (for instance, because you want to move locks around or because an API like std::condition_variable forces you to). One answer on StackOverflow goes so far as to say that

“… scoped_lock is a strictly superior version of lock_guard that locks an arbitrary number of mutexes all at once (using the same deadlock-avoidance algorithm as std::lock). In new code, you should only ever use scoped_lock.”

Despite such advice, good ol’ std::lock_guard is still my go-to mutex wrapper. Why? Because most of the time, I

– only need to lock a single lock at a time
– don’t need to move or swap locks
– rarely need to manually lock/unlock the mutex once it’s wrapped
– prefer using features from earlier C++ standards, provided they fulfill my needs
– avoid code that violates Scott Meyers’ “most important design guideline”

Let me elaborate on the last point. Consider code along these lines (the names are made up, but the overall shape is typical):
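
    #include <mutex>

    std::mutex m;
    long shared_counter = 0;

    void do_work()
    {
        // ... lengthy work that doesn't need the lock ...
        {
            std::scoped_lock lock;
            ++shared_counter;
        }
        // ... more lengthy work that doesn't need the lock ...
    }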

This code looks like a typical case for a RAII wrapper: you want to automatically lock and unlock (even if an exception is thrown, for instance) and reduce the time the lock is held to a minimum — hence the extra curly braces. Code like this gets written. I’ve written code like this myself, and I’ve seen similar code when reviewing somebody else’s work.

The problem with this code, however, is that it’s dead wrong. In the heat of the moment, the poor programmer forgot to pass a mutex to std::scoped_lock’s constructor, as in
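
    std::scoped_lock lock(m);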

In the original code sample, nothing is locked. Since the variadic constructor can be called without providing any arguments, the compiler is happy. The application might actually work for quite some time until it suddenly produces “strange results”. Try this with std::lock_guard and the compiler will rightfully reject your code, as std::lock_guard doesn’t have a default constructor.

In my view, the fact that std::scoped_lock’s constructor can be called without arguments is a gross violation of Scott Meyers’ “most important design guideline”, a guideline that I already wrote about recently: “Make interfaces easy to use correctly and hard to use incorrectly”.

To Pluralize, Or Not To Pluralize, That Is The Question

“Consciousness is a singular for which there is no plural”
— Erwin Schrödinger

When it comes to naming directories and files, some folks seem to insist on adding plural ‘s’ letters, as in

docs/, tests/, srcs/, recipes.txt

People like this have a collection-oriented view of the world. To them, “docs” is a label for a folder holding documents, just like a label on a box saying “screws” denotes that there’s a collection of screws in it.

When “pluralists” come across a directory named “doc”, it causes them grief. Why do people do that, they whine, there’s clearly an ‘s’ missing — is it just a typo or was it done deliberately, to save typing?

Let me tell you this: it’s usually done deliberately, but not to save a measly character. It’s done by individuals who have an identity-oriented view of the world and don’t care about containment and multiplicity. To such people, a folder named “doc” is the documentation of a project. It may hold a single text file, multiple PDFs and even some markdown documents. Likewise, “src” is the “source code” and “test” is the corresponding test in its entirety.

So there you have it: if a name lacks an ‘s’, that’s usually no oversight; it’s just a coincidence that the abbreviation of what something is looks like the singular form of an item of a collection. “doc”, mind you, is not the abbreviation of “document” — it’s meant to be the abbreviation of “documentation”. (Incidentally, the Linux kernel project avoids this confusion by keeping all its documentation in a folder named “Documentation”.)

My general advice is to strive hard to name something after what it is, for the sake of better abstraction. Clearly, “essay” is more meaningful than “characters”, and by the same token, I prefer “cookbook.txt” over “recipes.txt”. Only when a container has no higher-level purpose other than containment should it be given the plural form of the items it contains, but this should rarely be the case.