
Circular Adventures IX: The Poor Ring Buffer That Had No Tail

“If you hold a cat by the tail, you learn things you cannot learn any other way.”
― Mark Twain

The typical use case for a ring buffer is a producer/consumer scenario, where the producer sometimes produces data faster than the consumer can consume it. Eventually, however, the consumer will catch up. This typical use case can be summarized as “short-term buffering of bursts of data”.

But what happens if buffered data is not consumed fast enough and the ring buffer becomes full? Answer: adding another element will drop the oldest element, as can be seen by looking at the implementation of the ‘add’ method.
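Here’s a minimal sketch, assuming a class template with element type ‘T’, a fixed-size array ‘buf_’, indices ‘head_’ and ‘tail_’, and a compile-time capacity ‘BUFSIZE’ (names chosen for illustration):

    void add(const T& elem) {
        buf_[head_] = elem;                  // store the new element at the head position
        head_ = (head_ + 1) % BUFSIZE;       // advance the (exclusive) head index
        if (head_ == tail_) {                // buffer full?
            tail_ = (tail_ + 1) % BUFSIZE;   // drop the oldest element
        }
    }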

After the new element is added to the buffer, the head index is incremented (modulo ‘BUFSIZE’). If it now points at the tail index, the tail index is incremented (modulo ‘BUFSIZE’) as well, which effectively removes the oldest element from the ring buffer, making space for a new ring buffer element to be added. Here’s an example of this happening to a ring buffer that already holds 9 elements (why not 10? The head index is ‘exclusive’ and points at the location where the next added element will go):
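    (assuming BUFSIZE == 10, with elements e0 .. e8 added in that order)
    slots 0 .. 8 : e0 .. e8 (e0 is the oldest element)
    tail_ == 0   : points at e0, the oldest element
    head_ == 9   : points at the single free slot, where the next element will go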

Calling ‘add’ in this situation advances the head and the tail index by 1:
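    add(e9): e9 is stored in slot 9, head_ advances to 0 and, since it now equals tail_,
             tail_ advances to 1 as well; e0 has effectively been dropped
    tail_ == 1   : e1 is now the oldest element
    head_ == 0   : slot 0 (still physically holding e0) is where the next element will go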

This will continue for every subsequent ‘add’. Here’s what the ring buffer looks like after six more calls to ‘add’:
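    (after adding e10 .. e15)
    tail_ == 7   : e7 is now the oldest element
    head_ == 6   : slot 6 is where the next element will go
    logical content, oldest to newest: e7 e8 e9 e10 e11 e12 e13 e14 e15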

Why am I telling you all this? Because there’s a ring buffer use case where a producer endlessly produces data that is never consumed. I call this the “history buffer” use case: a record is kept of the last N events, similar to how a flight recorder operates.

If a ring buffer is used as a history buffer, it is always fully loaded. Thus, the ‘add’ operation is unnecessarily inefficient, as the ‘if (head_ == tail_)’ condition is always true. So let’s get rid of it:
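    // same sketch, minus the now-redundant check
    void add(const T& elem) {
        buf_[head_] = elem;
        head_ = (head_ + 1) % BUFSIZE;
        tail_ = (tail_ + 1) % BUFSIZE;   // the buffer is always full, so the tail always moves
    }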

This naturally leads to another optimization: the incremented value of ‘head_’ is the current value of ‘tail_’. Thus, we just set the new ‘head_’ to the current value of ‘tail_’ and only increment ‘tail_’:
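    // same sketch again: the new head_ is exactly the old tail_
    void add(const T& elem) {
        buf_[head_] = elem;
        head_ = tail_;                   // no modulo arithmetic needed here anymore
        tail_ = (tail_ + 1) % BUFSIZE;
    }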

That’s even better, isn’t it? But there’s more! The tail is usually only used when consuming elements, which we don’t do for the history buffer use case. Why maintain its value at all? Plus, if we need it later (as we shall see shortly), we can always compute it from the head, since it is one element ahead of the head (sounds a bit weird, doesn’t it? The tail is always ahead of the head):
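    // the "Manx" version: no tail_ member at all
    void add(const T& elem) {
        buf_[head_] = elem;
        head_ = (head_ + 1) % BUFSIZE;
    }

    // should the tail ever be needed, it can be derived from the head
    size_t tail() const {
        return (head_ + 1) % BUFSIZE;
    }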

Adding to the ring buffer has definitely become simpler and cheaper, but you probably ask yourself how to retrieve the history data stored in our ring buffer. As usual, there are many ways, but the one that I find most useful is through the standard library’s way of doing things, namely iterators. ‘begin()’ and ‘cbegin()’ simply return the tail, which is — as we already know — one element ahead of ‘head_’:
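    // sketch, assuming a nested forward-iterator type that is constructed
    // from a buffer reference and a starting slot
    iterator begin()              { return iterator(*this, (head_ + 1) % BUFSIZE); }
    const_iterator cbegin() const { return const_iterator(*this, (head_ + 1) % BUFSIZE); }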

‘end()’ and ‘cend()’ just return ‘head_’.
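In the sketch, that’s simply:

    iterator end()              { return iterator(*this, head_); }
    const_iterator cend() const { return const_iterator(*this, head_); }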

In the simplest case, the iterators returned are forward iterators, which means that they only provide ‘operator++()’. As limited as forward iterators are, they allow you to use many useful algorithms from the standard library.
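A few examples, assuming a Manx buffer of doubles named ‘history’ (a made-up instance, just for illustration):

    #include <algorithm>   // std::max_element, std::count_if
    #include <numeric>     // std::accumulate

    double largest = *std::max_element(history.cbegin(), history.cend());
    double sum     = std::accumulate(history.cbegin(), history.cend(), 0.0);
    auto   spikes  = std::count_if(history.cbegin(), history.cend(),
                                   [](double d) { return d > 9.81; });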

But there’s probably another nagging question on your mind: what if the ring buffer is not fully loaded? What if, for instance, only 3 elements have been added so far to a 10-element ring buffer holding doubles? In this case, the 7 unused elements will hold default-initialized doubles, which can be easily skipped over:
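    // one way to do it: let cbegin() skip unused slots, which still hold
    // value-initialized elements (T{}, i. e. 0.0 in the case of double)
    const_iterator cbegin() const {
        auto it = const_iterator(*this, (head_ + 1) % BUFSIZE);
        while (it != cend() && *it == T{}) {
            ++it;                        // skip slots that were never written to
        }
        return it;
    }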

Most of the time, this extra work is unnecessary in history buffer use cases, as a) the ring buffer is quickly filled and from that point on always fully loaded and b) default-constructed elements do no harm in typical history buffer use cases.

I like to call this particular variant of ring buffer a “Manx buffer”, named after a peculiar breed of cat, the Manx, whose salient feature is the lack of a tail.

Choosing a Manx buffer over a regular ring buffer may make sense in the following cases:

  • The elements added are never consumed but rather kept to have chronological, historical data.
  • The size of the element type is small, such that the cost of maintaining the tail is significant compared to the overall cost of adding an element.
  • Elements are added at a rapid rate.

An example of when a Manx buffer can prove useful is a data collection system for an aircraft that records the last 10 minutes of accelerometer readings for roll, pitch, and yaw as double values.

You can find my 70-line implementation of a Manx buffer here.

Avoid Static Class Members As Much As You Can

“Most people want to avoid pain, and discipline is usually painful.”
— John C. Maxwell

Ah, static class members! They pop up in every C++ project. In most cases, however, their use is not justified. I try to avoid static members as much as possible, and so should you. Here’s why.

Let’s start by revisiting the mother of all static member textbook examples. It demonstrates that, by using the ‘static’ keyword, state can be shared among all instances of a class.
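A typical rendition looks something like this (the details are assumed, but the pattern is always the same):

    #include <cstddef>
    #include <string>

    class Employee {
    public:
        explicit Employee(const std::string& name) : name_(name) {
            ++s_employeeCount;                       // one more employee
        }
        ~Employee() {
            --s_employeeCount;                       // one employee less
        }
        static std::size_t employeeCount() {
            return s_employeeCount;                  // state shared by all instances
        }
    private:
        std::string name_;
        static std::size_t s_employeeCount;          // the shared counter
    };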

The reasoning in these textbooks usually goes like this: the total number of employees obviously doesn’t belong to a particular employee instance. Such information is shared by all instances, so let’s use ‘static’ here.

Don’t write code like this. It’s so flawed I don’t even know where to start. But I’ll try anyway.

If you believe that something is not the responsibility of an instance, you should not add this responsibility to the instance’s class either. Calling the static member function like so:
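    auto total = Employee::employeeCount();   // the class, not an instance, is asked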

is still a call on class Employee, so it’s still a responsibility of that class. Instead, assign the responsibility to keep track of employees to a higher-level entity, like a Company or an Employer class, which manages Employees in a container, a std::vector, for instance. Such a higher-level class would provide a regular instance method that returns the total number of employees (sketched below with assumed details):
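    // a sketch, assuming #include <vector>
    class Company {
    public:
        void hire(const Employee& e) {
            employees_.push_back(e);
        }
        std::size_t employeeCount() const {    // a regular, const instance method
            return employees_.size();          // the vector keeps count for us
        }
    private:
        std::vector<Employee> employees_;
    };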

With this new design, in order to find out how many employees work for a company, you do the natural thing — you ask the company, not a static member function of class Employee. (Also note that in class Company, employeeCount is declared as ‘const’ to signify that it doesn’t update any data. There’s no such thing as ‘const’ for a static member function.)

What else is uncool about the original Employee class? Like every non-const static data member, ‘s_employeeCount’ needs to be defined exactly once outside the class, which is typically done in the corresponding .cpp file.
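Something like this:

    // Employee.cpp
    std::size_t Employee::s_employeeCount = 0;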

Further, manually incrementing and decrementing the count is cumbersome — the std::vector in Company does this bookkeeping for us automatically. On top of that, the original design is not exception-safe. Imagine that initialization code in the constructor body of Employee throws an exception after the count has already been updated. The Employee instance will never have officially existed, but the count indicates otherwise. You need to add extra exception handling code around the count increment to safeguard against such cases.
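Sketched, the problematic scenario looks like this (‘loadPersonnelRecord’ is a hypothetical call that may throw):

    Employee::Employee(const std::string& name) : name_(name) {
        ++s_employeeCount;           // the count is already updated ...
        loadPersonnelRecord(name);   // ... but if this throws, the Employee never comes to life
    }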

The original design is also sub-optimal as static members impede unit testing. After every unit test case, you have to make sure that the static data is cleaned up (or reinitialized) for the next unit test case. Since the static data is usually private, one often has to add extra public static initialization or setter methods to do the job, thus complicating the class interface even more. In general, static member functions can’t be mocked (at least if you’re using Google Mock) so it might be difficult if not impossible to simulate particular behavior of a static method to obtain code/branch coverage in a client component.

So are there any legitimate uses of static members in C++ classes? In my view, there’s only one: public (and protected) symbolic constants:
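    // e. g. (class name and constants made up for illustration)
    class Physics {
    public:
        static constexpr double SPEED_OF_LIGHT   = 299'792'458.0;   // m/s
        static constexpr double STANDARD_GRAVITY = 9.80665;         // m/s^2
    };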

Private constants or private utility methods that do not touch instance data are usually better defined as anonymous namespace members within the class’ .cpp file — this avoids cluttering up the class definition in the header file with things that bear no relevance to the user of a class (aka implementation details).
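For example (file and names made up for illustration):

    // Widget.cpp
    namespace {
        constexpr int MAX_RETRIES = 3;       // private constant, invisible to other translation units
        bool isValidRetryCount(int n) {      // private utility, touches no instance data
            return n >= 0 && n <= MAX_RETRIES;
        }
    }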

But what about public utility functions that don’t use instance data? Shouldn’t they be declared ‘static’?

The official answer is probably ‘yes’, and there are compilers and tools (like clang-tidy) that suggest that you should declare a method ‘static’ in such situations. These days, however, I tend to ignore/suppress such warnings, based on this reasoning: if a function is declared ‘public’, it’s part of a class’ interface. The fact that its implementation doesn’t touch instance data (today) is just an implementation detail that doesn’t need to be conveyed to the user of a class. I’ll pick a regular (virtual) method that I can mock over a static method any day. Here’s a sketch (the class name is made up for illustration):
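    class Calculator {
    public:
        virtual ~Calculator() = default;
        // a regular, overridable method that is easy to mock,
        // even though it doesn't touch instance data (today)
        virtual unsigned long long factorial(unsigned n) const {
            unsigned long long result = 1;
            for (unsigned i = 2; i <= n; ++i) {
                result *= i;
            }
            return result;
        }
    };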

This design clearly differentiates between interface and implementation and achieves loose coupling between the class and its clients. At runtime, clients can replace one implementation with another (e. g. optimized for speed, with caching, with logging, a mock and so on) without having to change the client code. All this would not have been possible had ‘factorial’ been declared ‘static’.

To sum it up:

  • There’s only one legitimate use-case for static members: symbolic constants
  • State shared by all instances of a class should not be shared in that class, but rather in an instance of a higher-level class
  • Static members prevent late (i. e. runtime) binding
  • Static data and static member functions make unit testing difficult

Bug Hunting Adventures #16: Lame Surveillance

“Under observation, we act less free, which means we effectively are less free.”
― Edward Snowden

Imagine a distributed surveillance system where recorded video files are uploaded to a central server at regular intervals.

Due to limitations of the transport protocol, video files must be split up into chunks, and no chunk may exceed 1 GB (10^9 bytes). On top of that, in high-load scenarios, the server might shorten a chunk even more, in which case instead of N bytes only K bytes are transmitted. Naturally, the N – K bytes that were not transmitted need to be sent with the next chunk upload.

Everything worked fine; all unit and system tests passed. Once the system was deployed, however, sysadmins from the central server team started lamenting that the video files were arriving at a glacial pace. What’s wrong with this code?

Solution

Do They Treat You Like A Superuser?

“A good workman is known by his tools”
— proverb

The process of getting admin rights as a corporate software developer is definitely on a spectrum. Over the last 20+ years, I’ve written code for more than ten companies and boy, do their policies differ!

In one case, I had full admin rights from day one. In more typical cases, I had to start a workflow to request admin rights, which would arrive within hours to days. In one extreme case, I had to take an online training course about the dangers of working with admin rights before I could even start the workflow. After I had passed the exam and once my request was approved (7 days later), I would be granted admin rights only for a limited period of time (180 days at most). Even worse — once they expired, the online training course would need to be taken again as well!

Let’s meditate a little bit on this latter case. To me, it’s an utter catastrophe. As software developers, we constantly need to maintain and tweak our PC, our beloved toolbox. We need to install or upgrade development tools, device drivers and the like, sometimes just for the purpose of experimentation and learning. What if I wanted to switch to a newer version of g++ one day, only to find out that my ‘sudo’ rights had expired? Sure, I could start the workflow again and wait a couple of days for approval, but why? Such processes are nothing but a nuisance that breaks developers’ flow and inspiration while not adding any real security.

A software developer is not a regular user — a software developer is a superuser, literally. If a company has to make its software developers take training courses to ensure that they don’t work in a root shell all day, those developers should not have been hired in the first place. Doesn’t it border on insulting to learn in such a training course that you should not open email attachments from unknown senders, especially while logged in as root? You don’t say!

If a company doesn’t give you unlimited superuser rights within a couple of hours, you’re definitely not treated like a superuser. You’re rather treated like a regular office worker who has no clue about how computers work, let alone computer security.

It’s not just about wasted time. It’s about a lack of empowerment and trust. But it’s mainly about a missing software culture: are you viewed as precious human capital that develops top-notch software products which will make the company thrive, or are you rather viewed as a schmuck who poses a severe risk to the company?

A company with a good software culture understands the chief need of creative makers, which is working on interesting projects in a frictionless, libertarian environment where they can spend most of their time doing what they love most: crafting exciting software.

Restricting software developers in terms of admin rights is just one problem of companies lacking good software culture, but it’s symptomatic. While such shops might manage to lure in great creators, they will certainly not be able to retain them in the long run.

Why We Count From Zero

“A zero itself is nothing, but without a zero you cannot count anything; therefore, a zero is something, yet zero.”
— Dalai Lama

If you do a Google search for why programmers typically start counting from zero, you’ll likely find two reasons. Today, I’m going to add another one. But let’s start with the usual explanations.

DIJKSTRA’S HALF-OPEN RANGES

On August 11, 1982, Edsger Dijkstra wrote a short paper on why numbering should start at zero. He first demonstrates that half-open ranges with an excluded upper bound are superior to the alternatives.
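For the example sequence 2, 3, …, 12, the four candidate notations are:

    a) 2 ≤ i < 13
    b) 1 < i ≤ 12
    c) 2 ≤ i ≤ 12
    d) 1 < i < 13

Dijkstra argues for a), where the lower bound is included and the upper bound is excluded.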

Here’s a quick summary of his reasoning, in case you don’t want to read it yourself. The main advantages are that you can easily represent empty ranges (i. e. lower equals upper) and that you can compute the number of elements in a range by subtracting the lower bound from the upper bound.

Based on ranges where the upper bound is excluded, Dijkstra goes on to show that for sequences of N elements, there are only two ways of indexing:
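    a) 1 ≤ i < N+1   (numbering starts at 1)
    b) 0 ≤ i < N     (numbering starts at 0)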

Obviously, the latter is much more elegant and hence we see code like this everywhere:
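    for (size_t i = 0; i < N; ++i) {
        process(a[i]);   // 'process' and 'a' are placeholders; visits elements 0 .. N-1
    }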

BASE-RELATIVE ADDRESSING

Sequences of homogeneous data in contiguous memory are almost universally laid out like this:
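    | elem 0 | elem 1 | elem 2 | ... | elem N-1 |
    ^
    base address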

A sequence starts with the first element at some base address, the second follows sizeof(elem) bytes after the first element and so on. Computing the start address of the n-th element can be done using this formula:
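    address(n) = base + n * sizeof(elem)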

However, this formula only applies if you index your elements from 0 to N – 1. If instead you choose to number indices from 1 to N, the formula would need to be adapted:
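    address(n) = base + (n - 1) * sizeof(elem)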

This alternative is not only less pleasant to look at, but because of the additional subtraction also harder for the CPU to compute. Consequently, the fathers of C employed the zero-based array access syntax that we are all so familiar with:
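    elem[i]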

which is really just a shorthand notation for
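    *(elem + i)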

If i is 0, you get the first element, if i is positive, the i-th successor of elem, and if i is negative, the i-th predecessor of elem. The latter fact often surprises developers because they either assume that negative offsets are illegal in the first place or that they yield elements from the end of the array, like in Python.
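For instance:

    int a[10] = {0, 10, 20, 30, 40, 50, 60, 70, 80, 90};
    int* p = &a[5];   // p points at a[5]
    int x = p[-1];    // x == 40: the predecessor of *p, not the last array element as in Python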

Incidentally, there’s another secret to C array indexing: since the addition operation is commutative, you can equally well write
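    i[elem]   // exactly the same as elem[i]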

As obvious as this is in hindsight, it’s not well known amongst C programmers and thus a good opportunity to show off at parties. However, I strongly advise against writing code like this for production use.

MODULAR ARITHMETIC

As you know, applying the mod operator like this
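    x % N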

yields values ranging from 0 to N – 1. This dovetails nicely with zero-based indices into sequences.

Take hash maps for example. To determine the position of an element in a hash map containing N slots, just apply a hash function and take the result modulo N to get the index. That’s it!
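In code, that might look like this (‘hash’, ‘key’, and ‘N’ are placeholders):

    std::size_t index = hash(key) % N;   // N slots, index ends up in 0 .. N-1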

Another use case is the ring buffer, one of my favorite containers: to advance an index into a ring buffer, just add the desired offset and apply the mod operator to get wrap-around — no need for extra if/else logic. Again, starting indices at 1 instead of 0 would entail extra additions and subtractions.
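For example:

    index = (index + offset) % N;   // wrap-around for free, no if/else needed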

There you have it — one more reason to start numbering from zero. (As if you still needed to be convinced…)

Technical Debt Is Not Bad!

“There once was a Master Programmer who wrote unstructured programs. A novice programmer, seeking to imitate him, also began to write unstructured programs. When the novice asked the Master to evaluate his progress, the Master criticized him for writing unstructured programs, saying, “What is appropriate for the Master is not appropriate for the novice. You must understand Tao before transcending structure.”
The Tao of Programming, Book 3, Verse 2.

I consider Ward Cunningham’s “technical debt” analogy one of the most important metaphors in software development. According to him, he invented it to explain to his manager why his team needed to do refactorings: like financial debt, technical debt tends to grow over time and the burden gets bigger and bigger due to compound interest.

During the last decade, quality-minded developers have done a great job of educating their managers about technical debt, but it seems to me that there is now a notion that all technical debt is bad and must be avoided. But Ward Cunningham never said such a thing — and rightly so.

In business, not all debt is considered bad. It’s a huge difference if someone takes out a 200k loan to buy a flashy sports car or if the owner of a construction company takes out the same 200k loan to buy a bulldozer. The sports car is a pure liability that makes the owner poorer whereas the bulldozer is an investment that will generate money in the future.

The same holds for technical debt. If you initially implement a limited version of a feature (e. g. slow, bloated, lacking proper error-handling) just to enable other teammates to carry on with their own work, that’s good technical debt.

So when does technical debt qualify as good technical debt? To me, good technical debt is

1. taken on consciously
2. managed
3. repaid in a timely manner

By contrast, bad technical debt arises out of laziness or incompetence; it lurks in the code base and won’t be repaid. Rather, like credit card debt in real life, more debt is accumulated over time and new loans are taken out just to repay the interest on other loans. This is the kind of technical debt that is to be avoided in the first place.

You can use a knife to kill somebody, but you can also use a knife to whittle and prepare food. Surgeons even use knives to save people’s lives. Thus, a knife is neither good nor bad per se, and neither is technical debt.