Simplified III: The Goat Problem

“The solution often turns out more beautiful than the puzzle.”
— Richard Dawkins

The goat problem, better known as the Monty Hall problem, named after the host of the game show “Let’s Make a Deal”, confuses the heck out of people. Here’s the original formulation from Marilyn vos Savant:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

As counter-intuitive as it might seem, the correct answer is — yes, it is!

Everyone I confronted with this answer (unless they already knew the problem), including software developers and physicists, shook their heads in disbelief. They are, however, in good company. According to this Wikipedia article:

Many readers of vos Savant’s column refused to believe switching is beneficial despite her explanation. After the problem appeared in “Parade”, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them claiming vos Savant was wrong (Tierney 1991). Even when given explanations, simulations, and formal mathematical proofs, many people still do not accept that switching is the best strategy (vos Savant 1991a). Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant’s predicted result (Vazsonyi 1999).

Fascinating, isn’t it? The problem (or rather the solution) is so hard to grasp, yet it’s so easy to follow this simplified™ explanation:

– If you initially picked a goat and you switch, you’ll win.
– If you initially picked a car and you switch, you’ll lose.
– It’s twice as likely that you initially picked a goat.
– Thus, you’re twice as likely to win when you switch!

Still not convinced? I wrote a little simulation in Python. Here’s the result after one million experiments:

The first number is the total number of experiments. The second number is the percentage of games won when the contestant stayed, while the third number shows the percentage of games won when the contestant switched.
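
If you want to reproduce the experiment yourself, here’s a minimal sketch of such a simulation (my original was written in Python; the version below uses C++, but the logic is exactly the one from the four bullet points above):

    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    int main() {
        const int EXPERIMENTS = 1000000;
        int stay_wins = 0;
        int switch_wins = 0;

        std::srand(static_cast<unsigned>(std::time(0)));

        for (int i = 0; i < EXPERIMENTS; ++i) {
            int car  = std::rand() % 3;   // door hiding the car
            int pick = std::rand() % 3;   // contestant's initial pick

            if (pick == car)
                ++stay_wins;              // staying wins only if the first pick was the car
            else
                ++switch_wins;            // otherwise the host's reveal makes switching a sure win
        }

        std::printf("%d %.1f%% %.1f%%\n", EXPERIMENTS,
                    100.0 * stay_wins / EXPERIMENTS,
                    100.0 * switch_wins / EXPERIMENTS);
        return 0;
    }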

Grokking Git Merge Commits And Combined Diffs

“A poor workman blames his tools.”
— anon

The other day, I overheard two developers discussing the pros and cons of various version control systems. I only caught this fragment: “… What sucks about Git is that when you look at a merge commit, you can’t really see what changed!”. It wasn’t the first time I had heard such complaints. Time to debunk the myths around Git merge commits.

I’ll use this repository which contains two merge commits (labelled ‘M’):

In my initial commit I added a file called ‘song_of_the_bell.txt’ which contains the first stanza of an English translation of a famous poem by Friedrich Schiller:

The next commit (ddfebcd) just added blank lines after every other line and was done on a branch called ‘better_formatting’. You don’t see ‘better_formatting’ drawn as a separate branch because the merge into ‘master’ was a so-called fast-forward merge.

Then, I made a change on ‘master’ that replaced the comma on the last line with a period:

and a concurrent change on branch ‘add_title’:

Afterwards, ‘add_title’ was merged into ‘master’. Since there were modifications on both branches, a fast-forward merge was not possible, so there’s an explicit merge commit (fd63230). What do you think you’ll see when you show this merge commit?

Nothing, or rather, not much!

You don’t see the typical diff-like line changes, and that’s what the developers were complaining about. Other version control systems would give them what they want, namely the delta between the merge commit and the previous commit on ‘master’. This would allow them to easily figure out what was changed on ‘master’. Git can do it as well, but for merge commits you have to be explicit; a plain ‘git show’ doesn’t cut it:
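
    git show fd63230^-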

fd63230^- is a shortcut and translates to “show the difference between the predecessor of commit fd63230 and the commit fd63230 itself”. In general, hash^-n shows the difference between the n-th parent and the commit, with n defaulting to 1 (the first parent, aka. merge-to parent). You can show the delta between the second (merge-from) parent and the merge commit like this:
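
    git show fd63230^-2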

There’s a reason why a regular ‘git show’ doesn’t show much. In order to understand it, we need to talk about combined diffs first. A combined diff shows the delta between the merge commit and both of its parents in a single diff. Let’s produce a combined diff for our merge commit by using the -c option:
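
    git show -c fd63230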

First of all, notice the line containing “Merge: 69a7968 6be3af1”: The first hash is the hash of the first parent of the merge commit (aka. the merge-to parent, fd63230^1), while the second hash belongs to the second parent of the merge commit (aka. the merge-from parent, fd63230^2). Next comes the diff output, in which the first column is used to show the delta between the merge commit and the first parent, whereas the second column is used to display the delta between the merge commit and the second parent:

The +/- markers are in the first column (i.e. the markers are not indented), which means that “SONG OF THE BELL” and a single blank line were added relative to the first parent (on ‘master’). This change must have come from the merge-from commit (the branch ‘add_title’). Conversely, the hunk whose +/- markers are in the second column (i.e. the markers are indented by one space) shows the difference between the second parent (on ‘add_title’) and the merge commit. These are the changes that were done on ‘master’.

There’s also a --cc option (think “compact combined”) that gives an even tighter output than -c in that it only shows modifications that occur on both parents in the same lines. In other words, it’s a combined diff showing only merge conflicts. Since the changes in the merge commit fd63230 are non-conflicting (they’re on different lines), --cc produces no diff at all:
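
    git show --cc fd63230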

Wait a minute! Isn’t this the same output that we got above when we executed a plain ‘git show fd63230’ without the --cc option? Precisely! When showing merge commits, ‘git show’ defaults to the “compact combined format”, which displays only conflicts. That’s why the output for most merge commits is empty and that’s why there’s so much whining. On the other hand, this little feature makes the life of an integrator much easier, as (s)he can focus on the parts of a merge commit that are critical: conflicts.

Now let’s take a look at the other merge commit, the one at the top of the history:
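
Assuming ‘master’ is checked out, the commit at the top of the history is simply HEAD:

    git show HEAD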

Here you do have some output, which means there was a conflict. Again, the first column shows what changed between the first parent (merge-to parent) and the merged version, which is just the addition of the author name “Friedrich Schiller”. Obviously, this change originated from the ‘add_author’ branch. The second column shows what has changed between the second parent (merge-from parent) and the merged version. Clearly, the title “SONG OF THE BELL” was indented on ‘master’. But why is the author name “Friedrich Schiller” marked as a change in the second column as well? It shouldn’t appear, because this is the change that was done in the merge-from parent, right? As always, Git is right, it should. During the merge, as part of the conflict resolution, the author name “Friedrich Schiller” was indented (in the spirit of the change in the merge-to parent which indented the title). It’s this indentation that has changed in the merge commit compared to the merge-from parent.

Understanding combined diffs definitely takes a little getting used to. That’s the reason why most people only care about what changed between the merge-to commit and the merge commit. You already know how to obtain these changes painlessly:
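
    git show fd63230^-

(or the same with whatever merge commit you happen to be looking at)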

Circular Adventures VII: A Ring Buffer Implementation

“The Limited Circle Is Pure”
— Franz Kafka

In systems programming, ring buffers are ubiquitous. Like regular queues, they’re first-in, first-out data structures, but unlike classic queues they’re fixed in size and don’t grow dynamically. This is especially important in real-time contexts where time and space determinism is paramount.

In today’s circular episode I want to show you my go-to ring buffer implementation. I like it because it’s short and decently efficient. I deliberately used only basic C++ syntax, so it should compile fine even with compilers from the previous millennium (i.e. C++98):
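
What follows is a compact sketch of such a ring buffer. It matches the description below: element type and capacity come in as template parameters T and N, one extra element is allocated internally, and a small advance helper wraps the indices (the names remove, empty, and size are just one possible choice):

    #include <cstddef>

    // Fixed-size FIFO ring buffer: T is the element type, N the capacity.
    template <typename T, std::size_t N>
    class ring_buffer {
    public:
        ring_buffer() : head_(0), tail_(0) {}

        // Append an element; if the buffer is full, the oldest element is dropped.
        void add(const T& item) {
            buf_[head_] = item;
            head_ = advance(head_);
            if (head_ == tail_)           // buffer just ran full ...
                tail_ = advance(tail_);   // ... so drop the oldest element
        }

        // Remove and return the oldest element (precondition: !empty()).
        T remove() {
            T item = buf_[tail_];
            tail_ = advance(tail_);
            return item;
        }

        bool empty() const { return head_ == tail_; }

        std::size_t size() const {
            return (head_ + BUFSIZE - tail_) % BUFSIZE;
        }

    private:
        // One extra element to tell a full buffer from an empty one.
        static const std::size_t BUFSIZE = N + 1;

        std::size_t advance(std::size_t index) const {
            return (index + 1) % BUFSIZE;
        }

        T buf_[BUFSIZE];
        std::size_t head_;
        std::size_t tail_;
    };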

You can specify the data type of the ring buffer elements via template parameter T and the total number of elements to track via template parameter N. Here’s a little usage scenario:
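
For instance (a toy example using the sketch above):

    #include <iostream>

    int main() {
        // A ring buffer holding up to 3 ints; adding a 4th drops the oldest.
        ring_buffer<int, 3> rb;

        rb.add(1);
        rb.add(2);
        rb.add(3);
        rb.add(4);                            // full: the oldest element (1) is dropped

        while (!rb.empty())
            std::cout << rb.remove() << ' ';  // prints: 2 3 4

        std::cout << '\n';
        return 0;
    }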

A common problem that you face when implementing a ring buffer is discriminating between the empty and the full state, as in both cases the head and tail index point to the same location. I solved it by allocating one extra element, which might waste a little bit of space but makes the code easier on the eye and probably faster than the alternative approach, which requires you to maintain an extra boolean flag.

When the buffer becomes full during an add operation (head == tail), the tail index is immediately advanced by one, thus dropping the oldest element. As this all happens within the add method, from a ring_buffer user’s point of view the head index is only ever equal to the tail index when the ring buffer is empty.

Contemporary compilers will replace the potentially costly modulo operation in the advance helper method with an AND operation for buffer sizes that are powers of two. However, keep in mind that the advance method uses the internal buffer size (BUFSIZE), which is one greater than the requested ring buffer size (N), so this is most likely an efficient ring buffer:
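
For example (sticking with the sketch above):

    ring_buffer<int, 255> rb;   // BUFSIZE is 255 + 1 = 256, a power of two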

while this isn’t:
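
    ring_buffer<int, 256> rb;   // BUFSIZE is 256 + 1 = 257, not a power of two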

More circular adventures…

The Happy Path to Modern C++ (C++11 done!)

I’ve finally found time and motivation to resume work on the “Happy Path to Modern C++” project. I’m sure it now contains enough code snippets to give you a good overview of C++11. As a matter of fact, I can’t think of an easier way to learn C++11, provided you’re already well-versed in C++, and C++98 in particular.

Here’s the project on GitHub.

“Dèyè mòn, gen mòn”, as the Haitians say: “Behind mountains there are more mountains”. The next goal is adding plenty of C++14 snippets. Again, contributions are highly appreciated!

Dangerously Confusing Interfaces V: The Erroneous ERRORLEVEL

“Design interfaces that are easy to use correctly and hard to use incorrectly.”
— Scott Meyers

Dangerously confusing interfaces can lurk anywhere, even in the venerable (yuck!) DOS batch scripting language. Some time ago, I burnt my fingers when I made a tiny tweak to an existing batch file, deploy.bat, which was part of a larger build script:

Because we had seen the ‘copy’ command fail in the past, I tried to improve things a little by adding an ‘if’ statement to ensure that we would get a clear error message in such an event:
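
The tweak looked roughly like this (the file names are made up for this write-up):

    copy some_file.txt X:\deploy\some_file.txt
    if %ERRORLEVEL% neq 0 (
        echo deploy.bat: copy failed!
        exit /b 1
    )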

Alas, it didn’t work. There still was no error message produced in case the copy command failed. Worse yet, the outer build script happily continued to run. Puzzled, I opened a DOS box and did some experiments:
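
Roughly along these lines (again, the file names are made up):

    C:\> copy does_not_exist.txt backup.txt
    C:\> echo %ERRORLEVEL%
    1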

Hmm. Everything worked as expected. Why didn’t it work in deploy.bat? Next, I changed deploy.bat to output the exit code:

And tried again:

What? The copy command failed and yet the exit code was zero? How can this be? After some head scratching, I vaguely remembered that there was another (arcane) way of checking the exit code, namely ERRORLEVEL (without the percentage signs), so I tried it out:

I never really liked this style of checking the exit code, because ‘ERRORLEVEL n’ actually doesn’t test whether the last exit code was n; it rather checks whether the last exit code was at least n. Thus, a statement like ‘IF ERRORLEVEL 0 …’ doesn’t check that the exit code is zero (i.e. that no error occurred). What it really does is check whether the exit code is greater than or equal to zero, which is more or less always true, no matter the value of the exit code. That’s pretty confusing, if you ask me.
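
For instance, right after a failed copy (exit code 1), this test still fires:

    C:\> copy does_not_exist.txt backup.txt
    C:\> if ERRORLEVEL 0 echo no error occurred
    no error occurred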

Anyway, for some reason, it seemed to work nicely in deploy.bat:

I could hardly believe my eyes. The copy command obviously failed, %ERRORLEVEL% was obviously zero, and still the if statement detected a non-zero exit code. What was going on? I delved deeply into the documentation of the DOS batch language. After some searching I found this paragraph:

%ERRORLEVEL% will expand into a string representation of the current value of ERRORLEVEL, provided that there is not already an environment variable with the name ERRORLEVEL, in which case you will get its value instead.

Whoa! There are two kinds of ERRORLEVEL, who knew? There’s the internal error level that every command sets when it finishes, and there’s %ERRORLEVEL%, which expands to that internal value, provided there is no environment variable named ERRORLEVEL already. Now I had a suspicion what was going on. I opened the parent batch file and came across the following:
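
The culprit was a line to this effect:

    set ERRORLEVEL=0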

In an attempt to clear the error level, some unlucky developer introduced a variable named ERRORLEVEL which shadowed the value %ERRORLEVEL% from this point on. This can be easily verified:
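
For instance (the exit code 42 is picked arbitrarily):

    C:\> set ERRORLEVEL=0
    C:\> cmd /c exit 42
    C:\> echo %ERRORLEVEL%
    0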

Once the problem was understood, it was easy to fix: clear the error level in an “accepted way” (yuck, again!) instead of wrongly tying it to zero:
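
One common idiom is to run a command that is guaranteed to succeed, for example:

    rem instead of: set ERRORLEVEL=0
    ver > nul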

Even though the interface to DOS exit codes is dangerously confusing (and disgusting as well), it facilitates a nice practical joke: next time a colleague leaves the room without locking the screen, open Windows control panel, create a new global environment variable called ERRORLEVEL and set it to 0.

Grokking Integer Overflow

Even experienced C/C++ programmers often mix up the terms integer overflow and wrap-around. Likewise, they are confused about the ramifications. This post attempts to clear things up a little.

OVERFLOW AND UNDERFLOW

In C (and C++), integer types (like the signed and unsigned versions of char, short, and int) have a fixed bit-size. Due to this fact, integer types can only support certain value ranges. For unsigned int, this range is 0 to UINT_MAX, for (signed) int INT_MIN to INT_MAX. On a typical platform, these constants have the following values:
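
(assuming the common case of 32-bit ints)

    UINT_MAX = 4294967295
    INT_MAX  = 2147483647
    INT_MIN  = -2147483648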

The actual values depend on many factors, for instance the native word size of the platform and whether 2’s complement representation is used for negative values (which is almost universally the case). Consult your compiler’s limits.h header file for details.

Overflow happens when an expression’s value is larger than the largest value supported by a type; conversely, underflow occurs if an expression yields a value that is smaller than the smallest value representable by a type. For instance:
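
Sticking with unsigned int and int:

    #include <limits.h>

    void overflow_underflow(void) {
        unsigned int u = UINT_MAX;
        int i = INT_MIN;

        u = u + 1;   /* exceeds UINT_MAX: overflow (wraps around to 0, see below)     */
        i = i - 1;   /* goes below INT_MIN: underflow (undefined behavior, see below) */
    }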

It’s common among programmers to use the term overflow for both overrun and underrun of a type’s value range. And so shall I, for the rest of this discussion.

WRAP-AROUND

Now that we know what overflow is, we can tackle the question what happens on overflow. One possibility is what is conventionally referred to as wrap-around. Wrap-around denotes that an integer type behaves like a circle; that is, it has no beginning and no end. If you add one to the largest value, you arrive at the smallest; if you subtract one from the smallest value, you get the largest.
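
In code (for unsigned types this behavior is guaranteed, as we’ll see in a minute):

    #include <assert.h>
    #include <limits.h>

    int main(void) {
        unsigned int u = UINT_MAX;
        assert(u + 1u == 0u);          /* one past the largest value yields the smallest  */
        u = 0u;
        assert(u - 1u == UINT_MAX);    /* one below the smallest value yields the largest */
        return 0;
    }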

Wrap-around is, however, only one way to handle integer overflow. Other possibilities exist, ranging from saturation (the overflowing value is set to the largest/smallest value and stays there), to raising an exception, to doing whatever an implementation fancies.

AND THE C LANGUAGE SAYS…

If you want to find out how C (and C++) handles integer overflow, you have to take a look at section 6.2.5 “Types”, the following sentence in particular:

“A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type”

Which means in plain English:

0. Apparently, there is overflow (as defined above) and C-overflow (as used by the standard). C-overflow is more like an error condition that occurs if overflow behavior is not defined for a type.

1. Unsigned integer types wrap around on overflow; “reduced modulo the number that is one greater than the largest value” is just a fancy way of saying that. Thus, unsigned integer overflow is well defined and not called overflow by the language standard.

2. Nothing is explicitly said about signed integer types. There are, however, various hints in the standard that signed integer overflow is undefined, for instance:

3.4.3 Undefined Behavior: An example of undefined behavior is the behavior on integer overflow.
J.2 Undefined behavior: The value of the result of an integer arithmetic or conversion function cannot be represented.

To sum it up: on overflow, unsigned integers wrap around whereas signed integers “overflow” into the realm of undefined behavior (unlike in Java and C#, BTW, where signed integers are guaranteed to wrap around).

SIGNED OVERFLOW

You might have believed (and observed) that in C signed integers also wrap around. For instance, these asserts will hold on many platforms:
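
Something along these lines (2’s complement assumed):

    #include <assert.h>
    #include <limits.h>

    int main(void) {
        int i = INT_MAX;
        int j = INT_MIN;

        assert(i + 1 == INT_MIN);   /* signed overflow: appears to wrap around ... */
        assert(j - 1 == INT_MAX);   /* ... in both directions                      */
        return 0;
    }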

Both asserts hold when this code is compiled on my machine with gcc 7.4; the following one holds only if optimizations are disabled (-O0):
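
    #include <assert.h>
    #include <limits.h>

    int main(void) {
        int i = INT_MAX;
        /* With wrap-around semantics, i + 42 yields a large negative number,
           so the comparison is true. From -O2 on, gcc assumes that signed
           overflow cannot happen, folds the expression to false, and the
           assert fires. */
        assert(i + 42 < i);
        return 0;
    }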

From -O2 on, gcc enables the option -fstrict-overflow, which means that it assumes that signed integer expressions cannot overflow. Thus, the expression i + 42 < i is considered false, regardless of the value of i. You can control how gcc treats signed integer overflow; check out the options -fstrict-overflow, -fwrapv, and -ftrapv. For maximum portability, however, you should always steer clear of signed integer overflow and never assume wrap-around.

SIGNED OVERFLOW THAT ISN’T

What about this code? Does this summon up undefined behavior, too? Doesn’t the resulting sum overflow the value range of the short type?
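
Here’s the gist of it, with x and y both holding SHRT_MAX:

    #include <limits.h>

    void add_shorts(void) {
        short x = SHRT_MAX;
        short y = SHRT_MAX;
        short sum = x + y;   /* does this overflow the range of short? */
        (void)sum;
    }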

The short (pun intended!) answer is: it depends!

It depends because before adding x and y, a conforming C compiler promotes both operands to int. Thus, to the compiler, the code looks like this:
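
Roughly like this (the temporary is only there for illustration):

    #include <limits.h>

    void add_shorts(void) {
        short x = SHRT_MAX;
        short y = SHRT_MAX;
        int   tmp = (int)x + (int)y;   /* both operands promoted to int: no overflow here */
        short sum = (short)tmp;        /* conversion back to short, discussed below       */
        (void)sum;
    }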

Adding two integers that hold a value of SHRT_MAX doesn’t overflow, unless — and that’s why it depends — you are hacking away on an ancient 16-bit platform where sizeof(short) == sizeof(int).

But even on a typical 32- or 64-bit platform, what about the assignment of the large integer result to the short variable sum? This surely overflows, doesn’t it? Doesn’t this yield undefined behavior? The answer in this case is a clear ‘no’. It’s rather ‘implementation-defined’. Let’s see.

SIGNED INTEGER TYPE CONVERSIONS

In the previous example, a larger signed type is converted into a smaller signed type. This is what the C99 standard has to say about it:

6.3.1.3 Signed and unsigned integers
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.

Otherwise, if the new type is unsigned […]

Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

What an implementation will choose to do, in practice, is wrap-around.

Why does a compiler behave like this? You can find an explanation by Linus Torvalds himself:

“Bit-for-bit copy of a 2’s complement value. Anything else would be basically impossible for an optimizing compiler to do unless it actively _tried_ to screw the user over.”

To sum it up:

1. Unsigned integers wrap around on overflow. 100 percent guaranteed.
2. Signed integer overflow means undefined behavior. Don’t rely on wrap-around.
3. Type conversions to smaller signed types will very likely wrap around. Let’s call this “Torvalds-defined behavior”.