Category Archives: C/C++/Embedded

How Long Is A Millisecond?

“Time is relative; its only worth depends upon what we do as it is passing.”
― Albert Einstein

Different trades and professions have their favorite units of measurement. For astronomers it’s light years, for farmers it’s acres, and for carpenters it’s inches. For us systems programmers, it’s clearly milliseconds.

Images from a video camera running at 60 FPS arrive every 16.7 milliseconds. An operating system’s time-sliced task scheduler might switch tasks every 50 milliseconds. During full braking, an anti-lock braking system typically applies/releases braking pressure every 75 milliseconds. Milliseconds seem to be perfectly suited for close-to-the-metal programming and for smoothly controlling systems.

But how long is a millisecond, actually? Sure, everybody knows that it’s one thousandth of a second, but I’m rather curious as to how much work can be accomplished in a millisecond. Is it, for instance, a waste of CPU cycles if a thread wakes up every 5 ms to poll a sensor?

People often use “dog years” to better understand the behavior of dogs. We can apply the same idea and use “CPU years” to better understand CPUs.

For us humans, a second is a very important unit of measurement as the human heart beats (roughly) once every second. As an example, let’s use a fictitious modern-day RISC CPU that runs at a clock rate of one GHz and requires one clock cycle per instruction. If we view the clock frequency as the heartbeat of a CPU, then a CPU’s heart beats every nanosecond[1]. Thus, one millisecond corresponds to one million CPU heartbeats.

One million human heartbeats equal roughly 12 days (i. e. 1,000,000 / 60 / 60 / 24), a time-span in which a human being can get a lot of work done[2]. Likewise, for our CPU, a millisecond corresponds to 12 CPU days. A full second corresponds to — lo and behold! — 32 CPU years (i. e. 12,000 CPU days).

Now, you might be coding for an embedded system that runs at “only” 100 MHz, so in this case a millisecond is “just” 1.2 CPU days. Or, you might have a Raspberry Pi 4 running at 1.5 GHz whose Cortex-A72 CPU sports a 3-way out-of-order superscalar pipeline, in which case you can probably multiply the 12 days by three.

While not perfect, this “a millisecond is a dozen CPU days” rule-of-thumb is a nice reminder that on a modern CPU, a millisecond really stands for “a lot of time”.

________________________________

[1] A nanosecond is such a mind-bogglingly short period of time that even the fastest thing in the universe (i. e. “light”) can only move one foot (30 cm) in a nanosecond.

[2] At least theoretically. Definitely not if you work for a large corporation and your schedule is packed with meetings.

Circular Adventures IX: The Poor Ring Buffer That Had No Tail

“If you hold a cat by the tail, you learn things you cannot learn any other way.”
― Mark Twain

The typical use case for a ring buffer is a producer/consumer scenario, where a producer sometimes produces data faster than the consumer can consume it. Eventually, however, the consumer will catch up. This typical use case can be summarized as “short-term buffering of bursts of data”.

But what happens if buffered data is not consumed fast enough and the ring buffer becomes full? Answer: adding another element will drop the oldest element, as can be seen by looking at the implementation of the ‘add’ method:
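The implementation might look something like this (a minimal sketch; the class and member names are assumed for illustration):

    #include <cstddef>

    template <typename T, size_t BUFSIZE>
    class RingBuffer {
    public:
        void add(const T& elem) {
            buf_[head_] = elem;                  // store new element at the head position
            head_ = (head_ + 1) % BUFSIZE;       // advance head, modulo BUFSIZE
            if (head_ == tail_)                  // head caught up with tail: buffer full
                tail_ = (tail_ + 1) % BUFSIZE;   // drop the oldest element
        }
        // ...
    private:
        T buf_[BUFSIZE];
        size_t head_ = 0;   // ‘exclusive’: where the next element will go
        size_t tail_ = 0;   // index of the oldest element
    };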

After the new element is added to the buffer, the head index is incremented (modulo ‘BUFSIZE’). If it now points at the tail index, the tail index is incremented (modulo ‘BUFSIZE’) as well, which effectively removes the oldest element from the ring buffer, making space for a new ring buffer element to be added. Here’s an example of this happening to a ring buffer that already holds 9 elements (why not 10? The head index is ‘exclusive‘ and points at the location where the next added element will go):
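Assuming a ‘BUFSIZE’ of 10 (so at most 9 elements are stored at any time), the situation could be pictured like this:

    index:    0   1   2   3   4   5   6   7   8   9
            | a | b | c | d | e | f | g | h | i |   |
              ^                                   ^
             tail                                head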

Calling ‘add’ in this situation advances the head and the tail index by 1:
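Adding a new element ‘j’ yields:

    index:    0   1   2   3   4   5   6   7   8   9
            | a | b | c | d | e | f | g | h | i | j |
              ^   ^
            head tail      (‘a’ is still in slot 0, but no longer accessible)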

This will continue for every subsequent ‘add’. Here’s what the ring buffer looks like after six more calls to ‘add’ (with elements ‘k’ through ‘p’):
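    index:    0   1   2   3   4   5   6   7   8   9
            | k | l | m | n | o | p | g | h | i | j |
                                      ^   ^
                                    head tail      (the oldest accessible element is ‘h’)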

Why am I telling you all this? Because there’s a ring buffer use case where a producer endlessly produces data that is never consumed. I call this the “history buffer” use case: a record is kept of the last N events, similar to how a flight recorder operates.

If a ring buffer is used as a history buffer, it is always fully loaded. Thus, the ‘add’ operation is unnecessarily inefficient, as the ‘if (head == tail)’ condition is always true. So let’s get rid of it:
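Something like this (continuing the sketch from above):

    void add(const T& elem) {
        buf_[head_] = elem;
        head_ = (head_ + 1) % BUFSIZE;   // the buffer is always fully loaded, ...
        tail_ = (tail_ + 1) % BUFSIZE;   // ... so the tail moves in lockstep
    }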

This naturally leads to another optimization: the incremented value of ‘head_’ is the current value of ‘tail_’. Thus, we just set the new ‘head_’ to the current value of ‘tail_’ and only increment ‘tail_’:
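    void add(const T& elem) {
        buf_[head_] = elem;
        head_ = tail_;                   // the incremented head_ equals the old tail_
        tail_ = (tail_ + 1) % BUFSIZE;   // only a single modulo increment remains
    }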

That’s even better, isn’t it? But there’s more! The tail is usually only used when consuming elements, which we don’t do in the history buffer use case. Why maintain its value at all? Plus, if we need it later (as we shall see shortly), we can always compute it from the head, since it is one element ahead of the head (sounds a bit weird, doesn’t it? The tail is always ahead of the head):
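In a sketch (again assuming the member names from above, with ‘tail_’ removed entirely):

    void add(const T& elem) {
        buf_[head_] = elem;
        head_ = (head_ + 1) % BUFSIZE;   // no tail bookkeeping whatsoever
    }

    size_t tail() const {
        return (head_ + 1) % BUFSIZE;    // computed on demand: one ahead of the head
    }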

Adding to the ring buffer has definitely become simpler and cheaper, but you probably ask yourself how to retrieve the history data stored in our ring buffer. As usual, there are many ways, but the one that I find most useful is through the standard library’s way of doing things, namely iterators. ‘begin()’ and ‘cbegin()’ simply return the tail, which is — as we already know — one element ahead of ‘head_’:
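Here’s one way to flesh that out, using a minimal index-based forward iterator nested inside the buffer class (a sketch; the actual implementation may differ):

    // requires <iterator> and <cstddef>
    class const_iterator {
    public:
        using value_type        = T;
        using reference         = const T&;
        using pointer           = const T*;
        using difference_type   = std::ptrdiff_t;
        using iterator_category = std::forward_iterator_tag;

        const_iterator(const T* buf, size_t idx) : buf_(buf), idx_(idx) {}
        reference operator*() const { return buf_[idx_]; }
        const_iterator& operator++() { idx_ = (idx_ + 1) % BUFSIZE; return *this; }
        bool operator==(const const_iterator& rhs) const { return idx_ == rhs.idx_; }
        bool operator!=(const const_iterator& rhs) const { return idx_ != rhs.idx_; }

    private:
        const T* buf_;
        size_t idx_;
    };

    const_iterator begin() const { return const_iterator(buf_, (head_ + 1) % BUFSIZE); }
    const_iterator cbegin() const { return begin(); }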

‘end()’ and ‘cend()’ just return ‘head_’.
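    const_iterator end() const { return const_iterator(buf_, head_); }
    const_iterator cend() const { return end(); }

Since traversal starts one element past ‘head_’ and stops at ‘head_’, it visits all BUFSIZE - 1 accessible elements exactly once.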

In the simplest case, the iterators returned are forward iterators, which means that they only provide ‘operator++()’. As limited as forward iterators are, they allow you to use many useful algorithms from the standard library, including ‘std::for_each’, ‘std::find’, ‘std::count’, ‘std::copy’, and ‘std::accumulate’.

But there’s probably another nagging question on your mind: what if the ring buffer is not fully loaded? What if, for instance, only 3 elements have been added so far to a 10-element ring buffer holding doubles? In this case, the 7 unused elements will hold default-initialized doubles, which can be easily skipped over:
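For example, given a Manx buffer ‘buf’ of doubles with the iterators sketched above, and assuming that 0.0 never occurs as a genuine reading (which is use-case specific), the unused elements, which come first in iteration order, can be skipped like this (‘processReading’ is a stand-in for whatever you do with a reading):

    #include <algorithm>

    auto first = std::find_if(buf.begin(), buf.end(),
                              [](double d) { return d != 0.0; });
    std::for_each(first, buf.end(), processReading);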

Most of the time, this extra work is unnecessary anyway, as a) the ring buffer fills up quickly and is fully loaded from then on, and b) default-constructed elements do no harm in typical history buffer use cases.

I like to call this particular variant of ring buffer a “Manx buffer”, named after a peculiar breed of cat, the Manx, whose salient feature is the lack of a tail.

Choosing a Manx buffer over a regular ring buffer may make sense in the following cases:

  • The elements added are never consumed but rather kept as chronological, historical data.
  • The size of the element type is small, such that the cost of maintaining the tail is significant compared to the overall cost of adding an element.
  • Elements are added at a rapid rate.

An example of when a Manx buffer can prove useful is a data collection system for an aircraft that records the last 10 minutes of accelerometer readings for roll, pitch, and yaw as double values.
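To put a (hypothetical) number on that example: at an assumed sampling rate of 100 Hz, 10 minutes of three-axis readings amount to 600 s × 100 samples/s × 3 axes × 8 bytes = 1,440,000 bytes, i. e. a Manx buffer of roughly 1.4 MB to which 300 doubles are added every second.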

You can find my 70-line implementation of a Manx buffer here.

Avoid Static Class Members As Much As You Can

“Most people want to avoid pain, and discipline is usually painful.”
— John C. Maxwell

Ah, static class members! They pop up in every C++ project. In most cases, however, their use is not justified. I try to avoid static members as much as possible, and so should you. Here’s why.

Let’s start by revisiting the mother of all static member textbook examples. It demonstrates that by using the ‘static’ keyword, state can be shared among all instances of a class:
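It goes something like this (a condensed sketch; the textbook variants differ only in detail):

    class Employee {
    public:
        Employee() { ++s_employeeCount; }
        ~Employee() { --s_employeeCount; }
        static size_t employeeCount() { return s_employeeCount; }
        // ...
    private:
        static size_t s_employeeCount;   // shared across all instances
    };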

The reasoning in these textbooks usually goes like this: the total number of employees obviously doesn’t belong to a particular employee instance. Such information is shared by all instances, so let’s use ‘static’ here.

Don’t write code like this. It’s so flawed I don’t even know where to start. But I’ll try anyway.

If you believe that something is not the responsibility of an instance, you should not add this responsibility to the instance’s class either. Calling the static member function like so:
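    size_t count = Employee::employeeCount();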

is still a call on Employee, so it’s still a responsibility of the class. Instead, assign the responsibility of keeping track of employees to a higher-level entity, like a Company or an Employer class, which manages Employees in a container, a std::vector, for instance. Such a higher-level class would provide a regular instance method that returns the total number of employees:
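For instance (a sketch; class and member names are invented for illustration):

    #include <vector>

    class Company {
    public:
        void hire(const Employee& employee) { employees_.push_back(employee); }
        size_t employeeCount() const { return employees_.size(); }
        // ...
    private:
        std::vector<Employee> employees_;   // Employee needs no static machinery anymore
    };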

With this new design, in order to find out how many employees work for a company, you do the natural thing — you ask the company, not a static member function of class Employee. (Also note that in class Company, employeeCount is declared as ‘const’ to signify that it doesn’t update any data. There’s no such thing as ‘const’ for a static member function.)

What else is uncool about the original Employee class? Like every non-const static member, ‘s_employeeCount’ needs to be defined exactly once outside the class, which is typically done in the corresponding .cpp file:
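    // Employee.cpp
    size_t Employee::s_employeeCount = 0;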

Further, manually incrementing and decrementing the count is cumbersome — the std::vector in Company will do this automatically for us. On top of that, the original design is not exception-safe. Imagine that initialization code in the constructor body of Employee throws an exception after the count has already been updated. The Employee instance will never have officially existed, but the count indicates otherwise. You need to add extra exception handling code around the count increment to safeguard against such cases.

The original design is also sub-optimal as static members impede unit testing. After every unit test case, you have to make sure that the static data is cleaned up (or reinitialized) for the next unit test case. Since the static data is usually private, one often has to add extra public static initialization or setter methods to do the job, thus complicating the class interface even more. In general, static member functions can’t be mocked (at least if you’re using Google Mock), so it might be difficult, if not impossible, to simulate particular behavior of a static method to obtain code/branch coverage in a client component.

So are there any legitimate uses of static members in C++ classes? In my view, there’s only one: public (and protected) symbolic constants:
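For example (a sketch):

    class Circle {
    public:
        static constexpr double PI = 3.141592653589793;
        // ...
    };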

Private constants or private utility methods that do not touch instance data are usually better defined as anonymous namespace members within the class’ .cpp file — this avoids cluttering up the class definition in the header file with things that bear no relevance to the user of a class (aka implementation details).
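For example, in Circle.cpp (names invented for illustration):

    namespace {
        constexpr double EPSILON = 1e-9;         // private constant, invisible to clients

        bool almostEqual(double a, double b) {   // private helper, touches no instance data
            return (a - b < EPSILON) && (b - a < EPSILON);
        }
    }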

But what about public utility functions that don’t use instance data? Shouldn’t they be declared ‘static’?

The official answer is probably ‘yes’, and there are compilers and tools (like clang-tidy) that suggest that you should declare a method ‘static’ in such situations. These days, however, I rather tend to ignore/suppress such warnings based on this reasoning: if a function is declared ‘public’, it’s part of a class’ interface. The fact that its implementation doesn’t touch instance data (today) is just an implementation detail that doesn’t need to be conveyed to the user of a class. I’ll pick a regular (virtual) method that I can mock over a static method any day:
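Here’s a sketch of what I mean (invented names; the point is the regular, virtual method):

    #include <iostream>

    class Math {
    public:
        virtual ~Math() = default;
        virtual unsigned long long factorial(unsigned n) const {
            return n <= 1 ? 1ULL : n * factorial(n - 1);
        }
    };

    // Clients depend only on the interface and can be handed a different
    // implementation (optimized, caching, logging, a mock) at runtime:
    void printFactorial(const Math& math, unsigned n) {
        std::cout << math.factorial(n) << '\n';
    }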

This design clearly differentiates between interface and implementation and achieves loose coupling between the class and its clients. At runtime, clients can replace one implementation with another (e. g. optimized for speed, with caching, with logging, a mock and so on) without having to change the client code. All this would not have been possible had ‘factorial’ been declared ‘static’.

To sum it up:

  • There’s only one legitimate use-case for static members: symbolic constants
  • State shared by all instances of a class should not be shared in that class, but rather in an instance of a higher-level class
  • Static members prevent late (i. e. runtime) binding
  • Static data and static member functions make unit testing difficult

Bug Hunting Adventures #16: Lame Surveillance

“Under observation, we act less free, which means we effectively are less free.”
― Edward Snowden

Imagine a distributed surveillance system where recorded video files are uploaded to a central server at regular intervals.

Due to limitations of the transport protocol, video files must be split up into chunks, and no chunk may exceed 1 GB (10^9 bytes). On top of that, in high-load scenarios, the server might shorten a chunk even more, in which case instead of N bytes only K bytes are transmitted. Naturally, the N-K bytes that were not transmitted need to be sent with the next chunk upload.

Everything seemed fine and all unit and system tests passed. Once deployed, however, sysadmins from the central server team started lamenting that the video files were arriving at a glacial pace. What’s wrong with this code?

Solution

Why We Count From Zero

“A zero itself is nothing, but without a zero you cannot count anything; therefore, a zero is something, yet zero.”
— Dalai Lama

If you do a Google search for why programmers typically start counting from zero, you’ll likely find two reasons. Today, I’m going to add another one. But let’s start with the usual explanations.

DIJKSTRA’S HALF-OPEN RANGES

On August 11, 1982, Edsger Dijkstra wrote a short paper on why numbering should start at zero. He first demonstrates that half-open ranges with an excluded upper bound are superior to other alternatives:
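Using the sequence 2, 3, …, 12 as an example, the four candidate notations he considers are:

    a)  2 ≤ i < 13
    b)  1 < i ≤ 12
    c)  2 ≤ i ≤ 12
    d)  1 < i < 13

Dijkstra argues that a), the half-open range with an excluded upper bound, wins.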

Here’s a quick summary of his reasoning, in case you don’t want to read it yourself. The main advantages are that a) you can easily represent empty ranges (i. e. lower equals upper) and b) compute the number of elements in a range by subtracting the lower bound from the upper bound.

Based on ranges where the upper bound is excluded, Dijkstra goes on to show that for sequences of N elements, there are only two ways of indexing:
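    a)  1 ≤ i < N + 1
    b)  0 ≤ i < N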

Obviously, the latter is much more elegant and hence we see code like this everywhere:
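    for (size_t i = 0; i < N; ++i) {
        // process element i ...
    }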

BASE-RELATIVE ADDRESSING

Sequences of homogeneous data in contiguous memory are almost universally laid out like this:
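    base + 0 * sizeof(elem):   element 0
    base + 1 * sizeof(elem):   element 1
    base + 2 * sizeof(elem):   element 2
    ...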

A sequence starts with the first element at some base address, the second follows sizeof(elem) bytes after the first element and so on. Computing the start address of the n-th element can be done using this formula:
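    address(n) = base + n * sizeof(elem)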

However, this formula only applies if you index your elements from 0 to N – 1. If instead you chose to number them from 1 to N, the formula would need to be adapted:
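    address(n) = base + (n - 1) * sizeof(elem)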

This alternative is not only less pleasant to look at but, because of the additional subtraction, also harder for the CPU to compute. Consequently, the fathers of C employed the zero-based array access syntax that we are all so familiar with:
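    elem[i]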

which is really just a shorthand notation for
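    *(elem + i)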

If i is 0, you get the first element; if i is positive, the i-th successor of the first element; and if i is negative, its i-th predecessor. The latter fact often surprises developers because they assume that negative offsets either are illegal in the first place or yield elements from the end of the array, like in Python.

Incidentally, there’s another secret to C array indexing: since the addition operation is commutative, you can equally well write
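    i[elem]   /* same element as elem[i], because *(i + elem) == *(elem + i) */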

As obvious as this is in hindsight, it’s not well known amongst C programmers and a good opportunity to show off at parties. However, I strongly advise against writing code like this for production use.

MODULAR ARITHMETIC

As you know, applying the mod operator like this:
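    x % N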

yields values ranging from 0 to N – 1. This dovetails nicely with zero-based indices into sequences.

Take hash maps for example. To determine the position of an element in a hash map containing N slots, just apply a hash function and take the result modulo N to get the index. That’s it!
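In code (with ‘hash’ standing in for whatever hash function you use):

    size_t slot = hash(key) % N;   // result is guaranteed to lie in [0, N - 1]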

Another use case is the ring buffer, one of my favorite containers: to advance an index into a ring buffer, just add the desired offset and apply the mod operator to get wrap-around — no need for extra if/else logic. Again, starting indices at 1 instead of 0 would entail extra additions and subtractions.
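For example:

    index = (index + offset) % N;   // wraps around automatically, no if/else needed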

There you have it — one more reason to start numbering from zero. (As if you still needed to be convinced…)

Why I Still Use std::lock_guard

“Better a lively old epigram than a deadly new one”
— Helen Rowland

If you’re programming in C++17 or above, you can choose among three different RAII-like mutex wrappers: std::lock_guard, std::scoped_lock, and std::unique_lock. But which one should you pick?

Most C++ programmers today advise you to use std::scoped_lock by default, unless you have to use std::unique_lock (because you want to move locks, or because an API like std::condition_variable forces you to). One answer on StackOverflow goes as far as saying that

“… scoped_lock is a strictly superior version of lock_guard that locks an arbitrary number of mutexes all at once (using the same deadlock-avoidance algorithm as std::lock). In new code, you should only ever use scoped_lock.”

Despite such advice, good ol’ std::lock_guard is still my go-to mutex wrapper. Why? Because most of the time, I

– only need to lock a single lock at a time
– don’t need to move or swap locks
– rarely need to manually lock/unlock the mutex once it’s wrapped
– prefer using features from earlier C++ standards, provided they fulfill my needs
– avoid code that violates Scott Meyers’ “most important design guideline”

Let me elaborate on the last point. Consider this code (a condensed reconstruction, with invented names):
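    #include <mutex>

    std::mutex g_mutex;
    long g_counter = 0;

    void update()
    {
        // ... work that doesn't need the lock ...
        {
            std::scoped_lock lock{};   // oops: deduces std::scoped_lock<>, locks nothing!
            ++g_counter;
        }
        // ... more work that doesn't need the lock ...
    }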

This code looks like a typical case for a RAII wrapper: you want to automatically lock and unlock (even if an exception is thrown, for instance) and reduce the time the lock is held to a minimum — hence the extra curly braces. Code like this gets written. I’ve written code like this myself, and I’ve seen similar code when reviewing other people’s work.

The problem with this code, however, is that it’s dead wrong. In the heat of the moment, the poor programmer forgot to pass a mutex to std::scoped_lock’s constructor, as in
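    std::scoped_lock lock{g_mutex};   // deduces std::scoped_lock<std::mutex>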

In the original code sample, nothing is locked. Since the variadic constructor can be called without providing arguments, the compiler is happy. The application might actually work for quite some time until it suddenly produces “strange results”. Try this with std::lock_guard and the compiler will rightfully reject your code, as std::lock_guard doesn’t have a default constructor.

In my view, the fact that std::scoped_lock’s constructor can be called without arguments is a gross violation of Scott Meyers’ “most important design guideline”, a guideline that I already wrote about recently: “Make interfaces easy to use correctly and hard to use incorrectly”.