Circular Adventures VIII: The Eternal Quest For Mod-Like Behavior

“A circle is a round straight line
with a hole in the middle.”
— Mark Twain

In circular situations, we often wish to employ the mod operator to achieve “wrap-around” at both ends of a linear system to mimic a circular system:

Achieving clockwise wrap-around (from n to 0) usually poses no problem:
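A minimal sketch (index and n are assumed to be non-negative integers, with index in the range [0, n)):

    index = (index + 1) % n;    // one step clockwise; n - 1 wraps around to 0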

The % operator is the compiler vendor’s approximation of the mod operator. It ensures that values that reach or exceed n wrap around nicely:
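For instance, with n == 8:

    8 % 8 == 0    // n wraps around to 0
    9 % 8 == 1    // values beyond n wrap around as well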

Alas, most operator % implementations are not so useful when it comes to counter-clockwise wrap-around (from 0 to n):
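The naive counterpart of the clockwise sketch above would be:

    index = (index - 1) % n;    // one step counter-clockwise (broken for index == 0)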

If index - 1 becomes negative, we likely get negative results — which is far from what we desire:
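For instance, on implementations that truncate integer division towards zero (which C99 and C++11 made mandatory):

    (0 - 1) % 8    // yields -1, not the desired 7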

To readers of this column (part I in particular) this is of course old hat. They know that you can’t trust the % operator when negative operands are involved. Circular coders often work around this shortcoming like so:
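One possible form of such a workaround (a sketch, assuming a signed index):

    index = index - 1;
    if (index < 0)        // handle counter-clockwise wrap-around by hand
        index = n - 1;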

Not nice, but no big deal either, you probably think. But what if you need a function that advances an index in either direction, depending on the sign of the offset:
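A sketch of what such a function might look like (parameter names and types are my own):

    unsigned circular_advance(unsigned index, int offset, unsigned n)
    {
        if (offset >= 0) {
            return (index + offset) % n;
        } else {
            offset = -offset;                   // work with the magnitude
            return (index + (n - offset)) % n;
        }
    }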

Now this looks disgusting; this is a big deal! And all just because our wretched % operator doesn’t behave like it should. Remember, if it did, we could simply write:
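If % behaved like the mathematical mod operator, the whole body would shrink to a single line:

    return (index + offset) % n;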

Did you notice that our convoluted circular_advance function actually contains a bug? No? Here’s a hint. The bug is in this line:

What happens if the offset’s magnitude is larger than n? Answer: the expression n - offset will underflow, producing a wrong result. That’s why we would have to write this, at least if we wanted to be 100% correct:
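In the sketch above, that means reducing the offset’s magnitude modulo n before subtracting it:

    return (index + (n - offset % n)) % n;    // safe even when the magnitude exceeds n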

which makes the code even uglier. But do we really need to support such large offsets? If we don’t (which is almost always the case in circular scenarios) and can live with offsets whose magnitude is at most n, we can simplify circular_advance significantly. Here’s how.

We know that adding a multiple of n to the dividend x doesn’t change the result of the modulo operation:
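    (x + i*n) mod n = x mod n    (where i is an arbitrary integer)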

If we include an arbitrary offset, we get this:
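    (x + offset + i*n) mod n = (x + offset) mod n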

This allows us to turn negative arguments to operator % into non-negative arguments by adding i*n, without changing the result. Further, if we ignore cases where the offset’s magnitude is greater than n, we only have to add 1*n to enforce a non-negative argument to operator %. Here’s an example:
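    -2 mod 8 = (-2 + 1*8) mod 8 = 6 mod 8 = 6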

Since all operands are positive, we can use 6 % 8 to compute -2 mod 8. Equipped with this knowledge, we can eradicate the ugliness and circular_advance boils down to this:
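As before, the parameter names are mine; the only requirement is that |offset| <= n:

    unsigned circular_advance(unsigned index, int offset, unsigned n)
    {
        // adding n up front keeps the argument of % non-negative for any |offset| <= n
        return (index + n + offset) % n;
    }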

If we apply the same technique to the ring buffer’s original ring_buffer::size method from the last installment:
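The original method looked roughly like this (member names are my guesses, mirroring the sketch in part VII):

    size_t size() const
    {
        if (m_head >= m_tail)
            return m_head - m_tail;
        else
            return BUFSIZE - (m_tail - m_head);
    }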

we get a simple branch-free solution:
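With the same hypothetical member names:

    size_t size() const
    {
        // adding BUFSIZE up front prevents the subtraction from underflowing
        return (m_head + BUFSIZE - m_tail) % BUFSIZE;
    }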

which can be optimized easily by the compiler, especially if BUFSIZE is a base-2 number.

So here’s the takeaway. To get efficient, branch-free mod-like behavior of a mod n for positive divisors n:

– If n is a base-2 number, use bitwise-AND instead of the % operator.
– If there’s only clockwise wrap-around (i.e. a >= 0), % works just fine.
– If a is either positive or negative, but |a| <= n, add n to the operand before applying the % operator.
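
Minimal sketches of all three cases (function names are mine; a is the value to be wrapped):

    unsigned wrap_base2(unsigned a, unsigned n)     { return a & (n - 1); }   // n is a base-2 number
    unsigned wrap_clockwise(unsigned a, unsigned n) { return a % n; }         // a >= 0 only
    unsigned wrap_either_way(int a, unsigned n)     { return (a + n) % n; }   // |a| <= n, either sign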

More circular adventures…

Flashme

“It is the flash which appears,
the thunderbolt will follow.”
– Voltaire

Flashme is a flashcard program for command-line lovers just like me. To avoid repeating myself, here’s the introduction from the GitHub project’s README file:

“I felt compelled to write flashme because I couldn’t find anything like it. While there are many freeware flashcard programs out there, none met my chief requirement: it must be fully operable from the command-line. My next most important requirement was that cards must be kept in plain text files (I call them “deckfiles”): I want to be able to diff, grep, and edit deckfiles with standard Unix command-line tools and editors. Binary file formats, including zipped XML files, are a nuisance and make it difficult to store flashcards in version control systems.”

I’ve used flashme extensively (and successfully) for more than a year to learn English, Linux command-line options, vi hacks, and much more. I think the time has come to release it to the public.

Share and enjoy!

Let’s Unit-Test Everything!

“More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded – indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.”
– Boris Beizer

Some programmers are definitely hard to convince that unit testing is a good idea. Especially die-hard embedded C coders go to great lengths to “prove” that for their project, unit testing is unthinkable.

One frequently presented argument against unit testing is that the project is real-time and/or deeply embedded and that the code is so low-level and hardware-dependent that it’s impossible to unit test it.

To me, this argument doesn’t hold water. You just need to introduce the right abstractions. Obviously, you can’t unit-test the precision of a timer with a one-microsecond resolution; still, you can test against its (mocked) interface. And you can definitely test the effects of an expired timer, when a “timer expired” callback method is executed. It’s just a matter of isolating hardware (or, in general, things that are difficult to unit test) from business logic. If the abstractions are right, unit testing is not just possible, it’s a breeze! As a bonus, you usually get lots of portability and reuse for free.

Another excuse for not unit testing is lack of time. “We can’t do unit testing because our schedule is so tight. Boohoo!” That’s a fake lament, too. Granted, implementation might initially take longer, but implementation isn’t everything: software integration and system testing contribute major parts to the overall schedule, and both phases can be significantly reduced by employing unit testing, as many studies have demonstrated (refer to Roy Osherove’s book “The Art of Unit Testing, 2nd Edition”, for instance). The later you find a bug, the more expensive it is to fix. And while unit testing obviously can’t find all bugs, it finds a lot of them, and it finds them early in the development life-cycle.

In order to lead by example, I’m going to do something outrageous: I’m going to unit-test one of the simplest pieces of software ever written, the famous “Hello, World!” program. Even though it might sound crazy, I chose “Hello, World” because most developers would think that it’s a) impossible to unit-test and b) hard to get wrong in the first place.

Let’s start by taking a look at the venerable implementation by Kernighan and Ritchie as shown in their famous “The C Programming Language” book:
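It looks essentially like this (with the int return type spelled out so that modern compilers accept it):

    #include <stdio.h>

    int main()
    {
        printf("hello, world\n");
    }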

Actually, “Hello World” does two things: apart from printing the famous words, it (implicitly) returns an exit code of zero to the operating system. My first refactoring makes the original version’s behavior more explicit:
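A sketch of this first refactoring:

    #include <stdio.h>

    int main(void)
    {
        printf("hello, world\n");
        return 0;    /* make the implicit exit code explicit */
    }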

Since calling main from a unit test is not possible, let’s do another quick modification and encapsulate the “Hello World” functionality in a function of its own:
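A sketch of the new layout (function and file names are my own choices):

    /* hello.c */
    #include <stdio.h>

    int hello(void)
    {
        printf("hello, world\n");
        return 0;
    }

    /* main.c */
    int hello(void);

    int main(void)
    {
        return hello();
    }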

Now, if this isn’t an example of “Design for Testability”, I don’t know what is ;-) Contrary to the original code, this new design allows calling/observing the behavior from a unit test. While verifying the return code is easy, ensuring that the right string is printed is a bit more involved and requires some creativity.

My initial idea was to redirect stdout to a file during the test and later check if the file contents are what I expect:
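Sketched, this idea might look like the following (the temporary file name is arbitrary; hello() is the function introduced above):

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    int hello(void);

    static void test_hello_world_via_file(void)
    {
        char line[64] = { 0 };
        FILE* file;

        freopen("hello_output.tmp", "w", stdout);    /* redirect stdout to a file */
        assert(hello() == 0);
        fflush(stdout);

        file = fopen("hello_output.tmp", "r");       /* read the captured output back */
        assert(file != NULL);
        fgets(line, sizeof(line), file);
        fclose(file);

        assert(strcmp(line, "hello, world\n") == 0);
    }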

Even though this approach works, I prefer the more traditional mock-based approach, which uses the preprocessor to redirect calls to printf to a mocked version of printf:
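A sketch of such a mock (file and symbol names are mine): in the test build, every printf call inside hello.c is redirected to mock_printf, which records its argument instead of writing to stdout.

    #include <stdio.h>
    #include <string.h>

    static char g_printf_buffer[256];

    static int mock_printf(const char* format, ...)
    {
        /* good enough here: hello() passes a plain string without conversions */
        strncpy(g_printf_buffer, format, sizeof(g_printf_buffer) - 1);
        return (int)strlen(format);
    }

    #define printf mock_printf    /* from here on, printf means mock_printf */
    #include "hello.c"            /* pull in hello() with printf redirected */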

The test itself is straightforward:
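Something along these lines (a bare-bones, assert-based check; no test framework assumed):

    #include <assert.h>

    static void test_hello_world(void)
    {
        g_printf_buffer[0] = '\0';

        assert(hello() == 0);                                     /* exit code */
        assert(strcmp(g_printf_buffer, "hello, world\n") == 0);   /* output    */
    }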

As you can see, it’s quite simple to create your own mocks, even in plain old C, where no fancy mocking frameworks are available.

But of course, unit testing is impossible if you insist on writing monolithic code with functions comprising 100+ lines that directly do I/O or manipulate device registers. It’s also impossible if you prefer delving through bug reports from field testing when you instead could have found and fixed that off-by-one error from the comfort of your home office, ten minutes after you introduced it.

Unit testing is no panacea; it’s by no means a substitute for system testing — it’s yet another rung on the software quality ladder. Let’s be honest: the real reason for not doing unit testing is not a technical one — it’s pure laziness.

(You can find the source code, tests and makefiles here.)

Simplified III: The Goat Problem

“The solution often turns out more beautiful than the puzzle.”
— Richard Dawkins

The goat problem, or Monty Hall problem, named after the host of the game show “Let’s Make a Deal”, confuses the heck out of people. Here’s the original formulation from Marilyn vos Savant:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

As counter-intuitive as it might seem, the correct answer is — yes, it is!

Everyone I confronted with this answer (unless they already knew the problem), including software developers and physicists, shook their heads in disbelief. They are, however, in good company. According to this Wikipedia article:

Many readers of vos Savant’s column refused to believe switching is beneficial despite her explanation. After the problem appeared in “Parade”, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them claiming vos Savant was wrong (Tierney 1991). Even when given explanations, simulations, and formal mathematical proofs, many people still do not accept that switching is the best strategy (vos Savant 1991a). Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant’s predicted result (Vazsonyi 1999).

Fascinating, isn’t it? The problem (or rather the solution) is so hard to grasp, yet it’s so easy to follow this simplified™ explanation:

– If you initially picked a goat and you switch, you’ll win.
– If you initially picked a car and you switch, you’ll lose.
– It’s twice as likely that you initially picked a goat.
– Thus, it’s twice as likely to win when you switch!

Still not convinced? I wrote a little simulation in Python. Here’s the result after one million experiments:
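Here is a minimal sketch of such a simulation (my own Python code, not the original); it prints the three numbers described below:

    # Simulate the game many times and compare the "stay" and "switch" strategies.
    import random

    def wins(switch):
        car = random.randrange(3)     # door hiding the car
        pick = random.randrange(3)    # contestant's first pick
        # The host always opens a goat door, so switching wins exactly when
        # the first pick was a goat; staying wins when it was the car.
        return pick != car if switch else pick == car

    runs = 1000000
    stay = sum(wins(switch=False) for _ in range(runs))
    switch = sum(wins(switch=True) for _ in range(runs))
    print(runs, 100.0 * stay / runs, 100.0 * switch / runs)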

The first number is the total number of experiments. The second number is the percentage of games won when the contestant stayed, while the third number shows the percentage of games won when the contestant switched.

Grokking Git Merge Commits And Combined Diffs

“A poor workman blames his tools.”
— anon

The other day, I overheard two developers discussing the pros and cons of various version control systems. I only caught this fragment: “… What sucks about Git is that when you look at a merge commit, you can’t really see what changed!”. It wasn’t the first time I heard such complaints. Time to demystify Git merge commits.

I’ll use this repository which contains two merge commits (labelled ‘M’):

In my initial commit I added a file called ‘song_of_the_bell.txt’ which contains the first stanza of an English translation of a famous poem by Friedrich Schiller:

The next commit (ddfebcd) just added blank lines after every other line and was done on a branch called ‘better_formatting’. You don’t see the branch ‘better_formatting’ (graphically as a branch) because the merge to ‘master’ was a so-called fast-forward merge.

Then, I made a change on ‘master’ that replaced the comma on the last line with a period:

and a concurrent change on branch ‘add_title’:

Afterwards, ‘add_title’ was merged into ‘master’. Since there were modifications on both branches, a fast-forward merge was not possible, so there’s an explicit merge commit (fd63230). What do you think you’ll see when you show this merge commit?

Nothing, or rather, not much!

You don’t see the typical diff-like line changes and that’s what the developers lamented about. Other version control systems would give them what they want, namely the delta between the merge commit and the previous commit on ‘master’. This would allow them to easily figure out what was changed on ‘master’. Git can do it as well, but for merge commits you have to be explicit, ‘git show’ doesn’t cut it:

fd63230^- is a shortcut and translates to “show the difference between the predecessor of commit fd63230 and the commit fd63230 itself”. In general, ^- is short for ^-n, where n defaults to 1 (the first parent, a.k.a. the merge-to parent). You can show the delta between the second (merge-from) parent and the merge commit like this:
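For reference, one way to get these two deltas (fd63230 being the merge commit from the example history):

    git diff fd63230^-     # delta between the first (merge-to) parent and the merge commit
    git diff fd63230^-2    # delta between the second (merge-from) parent and the merge commit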

There’s a reason why a regular ‘git show’ doesn’t show much. In order to understand it, we need to talk about combined diffs first. Combined diffs show the delta between the merge commit and both of its parents in a single diff. Let’s produce a combined diff for our merge commit by using the -c option:
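The invocation itself is simply:

    git show -c fd63230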

First of all, notice the line containing “Merge: 69a7968 6be3af1”: The first hash is the hash of the first parent of the merge commit (aka. the merge-to parent, fd63230^1), while the second hash belongs to the second parent of the merge commit (aka. the merge-from parent, fd63230^2). Next comes the diff output, in which the first column is used to show the delta between the merge commit and the first parent, whereas the second column is used to display the delta between the merge commit and the second parent:

The +/- markers are in the first column (i.e. the markers are not indented), which means that “SONG OF THE BELL” and a single blank line were added relative to the first parent (the version on master). This change must have come from the merge-from commit (the branch ‘add_title’).

shows the difference between the second parent (on ‘add_title’) and the merge commit, since the +/- markers are in the second column (i.e. the markers are indented by one space). These are the changes that were done on master.

There’s also a --cc option (think “compact combined”) that gives an even tighter output than -c in that it only shows modifications that occur in the same lines relative to both parents. In other words, it’s a combined diff showing only merge conflicts. Since the changes in the merge commit fd63230 are non-conflicting (they’re on different lines), --cc produces no diff at all:
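    git show --cc fd63230    # only the commit header and message appear, no diff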

Wait a minute! Isn’t this the same output that we got above when we executed a plain ‘git show fd63230’ without the --cc option? Precisely! When showing merge commits, ‘git show’ defaults to the “compact combined” format, which displays only conflicts. That’s why most merge commits appear empty and that’s why there’s so much whining. On the other hand, this little feature makes the life of an integrator much easier, as (s)he can focus on the parts of a merge commit that are critical: conflicts.

Now let’s take a look at the other merge commit, the one at the top of the history:

Here you do have some output, which means there was a conflict. Again, the first column shows what changed between the first parent (merge-to parent) and the merged version, which is just the addition of the author name “Friedrich Schiller”. Obviously, this change originated from the ‘add_author’ branch. The second column shows what has changed between the second parent (merge-from parent) and the merged version. Clearly, the title “SONG OF THE BELL” was indented on ‘master’. But why is the author name “Friedrich Schiller” marked as a change in the second column as well? It shouldn’t appear, because this is the change that was done in the merge-from parent, right? As always, Git is right: it should appear. During the merge, as part of the conflict resolution, the author name “Friedrich Schiller” was indented (in the spirit of the change on the merge-to parent, which indented the title). It’s this indentation that has changed in the merge commit compared to the merge-from parent.

Understanding combined diffs definitely takes a little getting used to. That’s the reason why most people only care about what changed between the merge-to commit and the merge commit. You already know how to obtain these changes painlessly:
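For a merge commit M (M is a placeholder for the hash), that is simply the range shorthand from above:

    git diff M^-    # what the merge changed relative to its first (merge-to) parent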

Circular Adventures VII: A Ring Buffer Implementation

“The Limited Circle Is Pure”
— Franz Kafka

In systems programming, ring buffers are ubiquitous. Like regular queues, they’re first-in, first-out data structures, but contrary to classic queues, they’re fixed in size and don’t grow dynamically. This is especially important in real-time contexts where time and space determinism is paramount.

In today’s circular episode I want to show you my go-to ring buffer implementation. I like it because it’s short and decently efficient. I deliberately used only basic C++ syntax, so it should compile fine even with compilers from the previous millennium (i.e. C++98):
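The listing below is my own sketch along these lines; method and member names are guesses rather than the original ones.

    #include <cassert>
    #include <cstddef>

    template <typename T, std::size_t N>
    class ring_buffer {
    public:
        ring_buffer() : m_head(0), m_tail(0) {}

        // Add an element; if the buffer is already full, drop the oldest one.
        void add(const T& item)
        {
            m_items[m_head] = item;
            m_head = advance(m_head);
            if (m_head == m_tail)          // buffer full
                m_tail = advance(m_tail);  // drop the oldest element
        }

        // Remove and return the oldest element (the buffer must not be empty).
        T remove()
        {
            assert(!is_empty());
            T item = m_items[m_tail];
            m_tail = advance(m_tail);
            return item;
        }

        bool is_empty() const { return m_head == m_tail; }

        // Number of stored elements (simplified to a branch-free form in part VIII).
        std::size_t size() const
        {
            if (m_head >= m_tail)
                return m_head - m_tail;
            else
                return BUFSIZE - (m_tail - m_head);
        }

    private:
        // One extra slot to tell a full buffer from an empty one.
        static const std::size_t BUFSIZE = N + 1;

        std::size_t advance(std::size_t index) const
        {
            return (index + 1) % BUFSIZE;
        }

        T m_items[BUFSIZE];
        std::size_t m_head;   // next write position
        std::size_t m_tail;   // oldest element (next read position)
    };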

You can specify the data type of the ring buffer elements via template parameter T and the total number of elements to track via template parameter N. Here’s a little usage scenario:
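A possible usage sketch (using the hypothetical interface from above):

    ring_buffer<int, 3> rb;    // room for 3 elements

    rb.add(1);
    rb.add(2);
    rb.add(3);
    rb.add(4);                 // buffer full: the oldest element (1) is dropped

    assert(rb.size() == 3);
    assert(rb.remove() == 2);  // elements come out in FIFO order: 2, 3, 4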

A common problem that you face when implementing a ring buffer is discriminating between the empty and full states, as in both cases the head and tail index point to the same location. I solved it by allocating one extra element, which might waste a little bit of space but makes the code easier on the eye and probably faster compared to the alternative approach, which requires you to maintain an extra boolean flag.

When the buffer becomes full during an add operation (head == tail), the tail index is immediately advanced by one, thus dropping the oldest element. As this all happens within the add method, from a ring_buffer user’s point of view the head index is only ever equal to the tail index when the ring buffer is empty.

Contemporary compilers will replace the potentially costly modulo operation in the advance helper method with an AND operation for buffer sizes that are base-2 numbers. However, keep in mind that the advance method uses the internal buffer size (BUFSIZE), which is one greater than the requested ring buffer size (N), so this is most likely an efficient ring buffer:
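For instance, with the sketch above:

    ring_buffer<int, 15> fast;    // internal BUFSIZE == 16, a base-2 number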

while this isn’t:
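    ring_buffer<int, 16> slow;    // internal BUFSIZE == 17, not a base-2 number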

[Update 2020-05-29: Part VIII of this series shows how to optimize/simplify ring_buffer::size().]

More circular adventures…