
Grokking Integer Overflow

Even experienced C/C++ programmers often mix up the terms integer overflow and wrap-around. Likewise, they are confused about the ramifications. This post attempts to clear things up a little.

OVERFLOW AND UNDERFLOW

In C (and C++), integer types (like the signed and unsigned versions of char, short, and int) have a fixed bit-size. Because of this, integer types can only support certain value ranges. For unsigned int, this range is 0 to UINT_MAX; for (signed) int, it is INT_MIN to INT_MAX. On a typical platform, these constants have the following values:
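
(Assuming 32-bit, 2’s complement ints, which is the typical case these days:)

    UINT_MAX: 4294967295   (2^32 - 1)
    INT_MAX:  2147483647   (2^31 - 1)
    INT_MIN: -2147483648   (-2^31)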

The actual values depend on many factors, for instance, the native word size of a platform and whether 2’s complement representation is used for negative values (which is almost universally the case); consult your compiler’s limits.h header file for details.

Overflow happens when an expression’s value is larger than the largest value supported by a type; conversely, underflow occurs if an expression yields a value that is smaller than the smallest value representable by a type. For instance:
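
A minimal sketch (assuming 32-bit int; the variable and function names are made up):

    #include <limits.h>

    void range_examples(void)
    {
        int too_big   = INT_MAX + 1;       /* overflow: exceeds the largest int value       */
        int too_small = INT_MIN - 1;       /* underflow: falls below the smallest int value */
        unsigned int u = UINT_MAX + 1u;    /* exceeds the unsigned range, too (more below)  */

        (void)too_big; (void)too_small; (void)u;   /* most compilers will warn about the    */
                                                   /* first two lines, and rightly so       */
    }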

It’s common among programmers to use the term overflow for both overrun and underrun of a type’s value range. And so shall I, for the rest of this discussion.

WRAP-AROUND

Now that we know what overflow is, we can tackle the question of what happens on overflow. One possibility is what is conventionally referred to as wrap-around. Wrap-around means that an integer type behaves like a circle; that is, it has no beginning and no end. If you add one to the largest value, you arrive at the smallest; if you subtract one from the smallest value, you get the largest.
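
A quick sketch with unsigned int, whose wrap-around behavior is guaranteed by the language (more on that below):

    #include <assert.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int largest  = UINT_MAX;
        unsigned int smallest = 0u;

        assert(largest + 1u == smallest);    /* one past the largest value wraps to the smallest  */
        assert(smallest - 1u == largest);    /* one below the smallest value wraps to the largest */
        return 0;
    }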

Wrap-around is, however, only one way to handle integer overflow. Other possibilities exist, ranging from saturation (the overflowing value is set to the largest/smallest value and stays there), to raising an exception, to doing whatever an implementation fancies.

AND THE C LANGUAGE SAYS…

If you want to find out how C (and C++) handles integer overflow, you have to take a look at section 6.2.5 “Types” of the C99 standard, the following sentence in particular:

“A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type”

Which means in plain English:

0. Apparently, there is overflow (as defined above) and C-overflow (as used by the standard). C-overflow is more like an error condition that occurs if overflow behavior is not defined for a type.

1. Unsigned integer types wrap around on overflow; “reduced modulo the number that is one greater than the largest value” is just a fancy name for it. Thus, unsigned integer overflow is well defined and not called overflow by the language standard.

2. Nothing is explicitly said about signed integer types. There are, however, various hints in the standard that signed integer overflow is undefined, for instance:

3.4.3 Undefined Behavior: An example of undefined behavior is the behavior on integer overflow.
J.2 Undefined behavior: The value of the result of an integer arithmetic or conversion function cannot be represented.

To sum it up: on overflow, unsigned integers wrap-around whereas signed integers “overflow” into the realm of undefined behavior (contrary to Java and C#, BTW, where signed integers are guaranteed to wrap around).

SIGNED OVERFLOW

You might have believed (and observed) that in C signed integers also wrap around. For instance, these asserts will hold on many platforms:
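
The asserts were presumably along these lines (a sketch, not the original listing):

    #include <assert.h>
    #include <limits.h>

    int main(void)
    {
        int largest  = INT_MAX;
        int smallest = INT_MIN;

        assert(largest + 1 == smallest);    /* signed "wrap-around" -- not guaranteed by the standard */
        assert(smallest - 1 == largest);    /* ditto                                                  */
        return 0;
    }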

Both asserts held when I compiled this code on my machine with gcc 7.4; the following one, however, only held with optimizations disabled (-O0):
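
Presumably something like this:

    #include <assert.h>
    #include <limits.h>

    int main(void)
    {
        int i = INT_MAX;

        assert(i + 42 < i);    /* holds at -O0 (the sum wraps to a negative value), */
                               /* but fails from -O2 on with gcc                    */
        return 0;
    }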

From -O2 on, gcc enables the option -fstrict-overflow, which means that it assumes that signed integer expressions cannot overflow. Thus, the expression i + 42 < i is considered false, regardless of the value of i. You can control signed integer overflow in gcc; check out the options -fstrict-overflow, -fwrapv, and -ftrapv. For maximum portability, however, you should always steer clear of signed integer overflow and never assume wrap-around.

SIGNED OVERFLOW THAT ISN’T

What about this code? Does this summon up undefined behavior, too? Doesn’t the resulting sum overflow the value range of the short type?
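
The snippet went roughly like this (a sketch; x, y, and sum are the names used below):

    #include <limits.h>

    void add_shorts(void)
    {
        short x = SHRT_MAX;
        short y = SHRT_MAX;
        short sum = x + y;    /* doesn't this overflow the range of short? */
        (void)sum;
    }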

The short (pun intended!) answer is: it depends!

It depends because before adding x and y, a conforming C compiler promotes both operands to int. Thus, to the compiler, the code looks like this:
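
(A sketch of the compiler's view, with the promotions spelled out:)

    #include <limits.h>

    void add_shorts(void)
    {
        short x = SHRT_MAX;
        short y = SHRT_MAX;
        short sum = (short)((int)x + (int)y);   /* x and y are promoted to int; the int result */
        (void)sum;                              /* is converted back to short on assignment    */
    }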

Adding two integers that hold a value of SHRT_MAX doesn’t overflow, unless — and that’s why it depends — you are hacking away on an ancient 16-bit platform where sizeof(short) == sizeof(int).

But even on a typical 32- or 64-bit platform, what about the assignment of the large integer result to the short variable sum? This surely overflows, doesn’t it? Doesn’t this yield undefined behavior? The answer in this case is a clear ‘no’. It’s rather ‘implementation-defined’. Let’s see.

SIGNED INTEGER TYPE CONVERSIONS

In the previous example, a larger signed type is converted into a smaller signed type. This is what the C99 standard has to say about it:

6.3.1.3 Signed and unsigned integers
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.

Otherwise, if the new type is unsigned […]

Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

What an implementation will choose to do, in practice, is wrap-around.
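
For instance, assuming 16-bit shorts and 32-bit ints (a sketch, not a guarantee):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        short sum = (short)(SHRT_MAX + SHRT_MAX);   /* 65534 doesn't fit into a 16-bit short     */
        printf("%d\n", sum);                        /* prints -2 in practice: the low 16 bits of */
                                                    /* 65534, interpreted as 2's complement      */
        return 0;
    }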

Why does a compiler behave like this? You can find an explanation by Linus Torvalds himself:

“Bit-for-bit copy of a 2’s complement value. Anything else would be basically impossible for an optimizing compiler to do unless it actively _tried_ to screw the user over.”

To sum it up:

1. Unsigned integers wrap around on overflow. 100 percent guaranteed.
2. Signed integer overflow means undefined behavior. Don’t rely on wrap-around.
3. Type conversions to smaller signed types will very likely wrap around. Let’s call this “Torvalds-defined behavior”.

The Thinking That Cost Me The Fish

“The fishermen know that the sea is dangerous and the storm terrible, but they have never found these dangers sufficient reason for remaining ashore.”
— Vincent Van Gogh

Last December, I had a lot of fun working on “Linux Magazin’s” programming contest. I was both surprised and excited when the editors contacted me in January and told me that I got second place and would even receive a prize! But when I found out about the details, my initial joy quickly subsided—partly because of them, but much more because of me. Let me explain.

THE CHALLENGE

Since “Linux Magazin” is only available in German, I will briefly summarize what the competition was all about.

Imagine there’s a fisherman (or fisherwoman, or fisherperson, whatever you prefer) sitting in a boat on a pond. The pond is rectangular in shape and divided into M x N squares. The initial position of the boat is the square at column 0, row 0. Thus, using x and y as column and row coordinates, the boat’s initial position corresponds to x = 0 and y = 0. The fisherman can steer the boat through the pond by incrementing (or decrementing) either x or y by one with every move he makes.

Each of the squares in the pond may contain a fish. The aim is to write a Python script that picks up all the fish using a minimum number of moves.

Here’s an example of a 7 x 5 pond with the boat ‘B’ at the initial position and three fish, denoted by asterisks:

One straightforward solution is to scan the whole pond, column by column, row by row:

Obviously, this exhaustive scan wouldn’t win you a prize as it requires too many moves (M multiplied by N, to be precise).

GO FOR GOLD, ER, FISH

After scratching their heads for a while, most developers will realize that this is a variant of the traveling salesman (or saleswoman, or salesperson, whatever you prefer) problem. The difference is that the fisherman, unlike the traveling salesman, doesn’t have to return to his origin after having visited all other nodes [1].

After some more head-scratching, some developers will even remember that this is an NP-complete problem and that finding the optimal solution (by trying out all possible routes) requires O(n!) time, where n is the number of fish in the pond.

Due to the exponential nature of this problem, a better (even if not optimal) idea is needed. A reasonable choice to start with is the “nearest neighbor” strategy, which always visits next the node (fish) that is closest to the current position, and so on. This is what I tried first; it required only 30 lines of code and allowed me to implement some initial unit tests [2].

Most developers have learned the hard way that it’s essential to find out about the typical use-cases before optimizing a system. Otherwise, you will end up with what is referred to as “fast slow code”. Unfortunately, there are no boundaries in the pond problem: the pond could consist of millions of squares and there could be millions of fish. How can you differentiate yourself from all the other contestants who will most likely also employ a nearest neighbor search?

IMPROVEMENTS

I wasn’t satisfied with my nearest neighbor approach. All kinds of problems and corner cases came to my mind. Consider the following pond set-up:
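
(The figure isn’t reproduced here; from the description that follows, the set-up was essentially this, with dots marking empty squares: a row of fish right next to the boat, plus a single fish two squares below the starting position.)

    B******
    .......
    *......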

According to the nearest neighbor strategy, the fisherman would first visit x = 1, y = 0 and then process the whole first row by repeatedly moving one square to the right. After reaching x = 6, y = 0, he would move back all the way to x = 0, y = 2. What a waste!

Even though I knew about this problem’s exponential nature, I thought about doing an exhaustive O(n!) search, at least for smaller ponds with few fish. This approach would yield the optimal solution (which would start with picking up the fish at x = 0, y = 2 first, in the example above). When the problem size crossed a certain threshold, I would fall back to the nearest neighbor strategy.

Alas, my measurements showed discouraging results. It took only a couple of seconds to solve ponds with up to ten fish. However, it took hours to find the optimal solution if there were more than 15 fish in the pond [3]. Since I assumed that the ponds against which the submissions would be tested were large and contained lots of fish, I ditched this idea immediately.

Then followed a period of frustration lasting days, with many dead-end experiments. Two days before the submission date, I decided to try out one final idea. I knew from my previous testing that visiting all permutations was not feasible. But instead of always aiming for the closest square containing a fish with my initial move, I could try out every cell containing a fish as the initial target square and use the nearest neighbor strategy only from that square onward. Of all these candidate routes, I would pick the one that required the least number of moves.

This was easy to implement and increased the overall run-time only by a factor of n, the total number of fish. Even though this approach wasn’t always able to find the optimal route, it solved my degenerate pond example nicely, even for large N, M, and n, so I submitted it.

AND THE WINNER IS… NOT ME!

When the editors contacted me about a month later, I couldn’t wait to find out why I didn’t win, what the actual test data was, and so on. What I saw, however, was quite frustrating: none of the 5 ponds they used to test the submitted scripts was large or complex. In fact, all of them were laid out in such a way that a simple nearest neighbor search would yield optimal solutions, so the improvement that I just explained to you had no effect whatsoever. When I fed my degenerate pond to the winner’s script, it required 14 moves, while mine only needed 10.

But I lost anyway. I lost because one pond was set up such that at some point, there were multiple, equally near fish to choose from. While my naive implementation simply picked the first one, the winning algorithm explored all choices and selected the one that resulted in the shortest overall path, which is not only smart, but also easy to implement.

LESSONS LEARNED

There’s no point in sugar-coating it: I was quite disappointed. Why were the ponds so small and simple? I could have used the sluggish O(n!) search for almost all of the ponds and won easily. But I also beat myself up for not even thinking about the possibility of multiple nearest neighbors, which would have naturally led to the improvement done by the winner.

A couple of days later, I finally got over it. After all, I came in second, had a lot of fun, and something to post about. But the biggest benefit is the valuable lesson that I (re-)learned: when working on optimization problems, assume nothing!

I assumed that the ponds were big; that’s why I didn’t include the O(n!) search, even though I had already implemented it. I assumed that the ponds were obscenely complex, which led me to implement the improvement mentioned above. I was so focused on tricky ponds that the problem of simple ponds with multiple nearest neighbors didn’t even occur to me—that’s target fixation at its best (or worst, if you prefer).

Once again, the journey was the real reward. What I got for a prize, you wonder? A 500+ pages “Java for Beginners” tome, in German. ‘Nuff said!

________________________________

[1] Computer scientists would call this a Hamiltonian path with a fixed start point, but no fixed end point.

[2] Why unit tests for a programming contest? Because a fast solution that is incorrect is worth nothing! Some of my first tests revealed that a fish at the initial boat position (x = 0, y = 0) wasn’t picked up. Another test showed that I forgot to update the pond after picking up a fish, which obviously makes a difference when you come across a cell containing a fish twice. Unit tests are always necessary and always good.

[3] Python + O(n!) == no-no

Pointers in C: Part VII, Being Relaxed About The Strict Aliasing Rule

“I am free, no matter what rules surround me. If I find them tolerable, I tolerate them; if I find them too obnoxious, I break them. I am free because I know that I alone am morally responsible for everything I do.”
― Robert A. Heinlein

The largely unknown “Strict Aliasing Rule” (SAR) has the potential to bring tears to the eyes of even the most seasoned C/C++ developers. Why? Because of it, a lot of the code they have written over the years belongs to the realm of “undefined behavior”.

Despite its name, the term “undefined behavior” itself is well-defined by the C language standard: it’s “behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements.” Which means anything can happen: your program could randomly crash or even send suggestive emails to your boss.

THE PROBLEM

Let’s start with the code snippet that I used in my original post on SAR:
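
The snippet went roughly along these lines (a sketch; the struct members are made up for illustration):

    #include <stdint.h>

    struct measurements_t {
        uint16_t temperature;    /* members invented for this sketch */
        uint16_t pressure;
    };

    void convert(uint8_t* data)
    {
        struct measurements_t* measurements = (struct measurements_t*)data;

        /* ... from here on, the received bytes are accessed through 'measurements' ... */
        (void)measurements;
    }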

Here, data that has been received into a buffer (‘data’) is converted into a high-level data structure (‘measurements’). From the compiler’s point of view, what ‘data’ refers to is just a single ‘uint8_t’ but we access it through a pointer to type ‘struct measurements_t’. What we’ve got here is a clear violation of SAR, which entails undefined behavior.

SAFE ALTERNATIVES

“But, Ralf”, you might respond, “this can’t be true. I write code like this every day and it works flawlessly, even in safety-critical systems like medical devices!”

This doesn’t surprise me in the least. “Undefined behavior” can — get this — also mean “works flawlessly”. But there are no guarantees, whatsoever. It might work on one platform, with a particular compiler or compiler version, but might fail on another platform, or with a different compiler version. Hence, to err on the truly safe side (which you should, especially if you work on safety-critical systems), use truly safe alternatives.

One obvious and time-proven approach is to do such so-called type punning through unions. It works by storing data via a member of one type and reading it via another member of a different type:
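
A sketch, reusing the made-up ‘struct measurements_t’ from above (the union’s tag name is invented as well):

    union receive_buffer_t {
        uint8_t data[sizeof(struct measurements_t)];   /* raw bytes, as received      */
        struct measurements_t measurements;            /* the same bytes, interpreted */
    };

    union receive_buffer_t receive_buffer;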

The receiving function would store byte-wise into the ‘receive_buffer.data’ array, while high-level functions would use the ‘receive_buffer.measurements’ member. This will work reliably in any version of C, but it might fail in C++.

Bulletproof type-punning, one that works in both C and C++, uses ‘memcpy’. ‘memcpy’!? ‘memcpy’, that’s right:
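
Along these lines (again a sketch, not the original listing):

    #include <string.h>

    void convert(const uint8_t* data)
    {
        struct measurements_t measurements;

        memcpy(&measurements, data, sizeof(measurements));   /* well-defined in both C and C++ */
        /* ... work with the local copy from here on ... */
    }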

Believe it or not, there’s a high probability that your compiler will optimize out the call to ‘memcpy’. I’ve observed this with ‘gcc’ and ‘clang’, among others, but I’ve also seen compilers always call ‘memcpy’, even for the smallest amounts of data copied, regardless of the optimization level used (Texas Instruments ARM C/C++ compiler 19.6, for instance). Nevertheless, this is my go-to type-punning technique these days, unless performance is paramount. (You first have to prove by profiling that your code really impacts overall performance. Otherwise, your optimizations are like buying Dwayne Johnson an expensive hair brush — it does no real harm, but it’s not of much use, either.)

BUT I REEELLY, REEELLY MUST USE CASTS

Sometimes, you have to use SAR-breaking casts, if only to maintain social peace in your team. So how likely is it that your compiler will do something obscene?

VERY unlikely, at least in this example. Let me explain.

First of all, compiler vendors know that most developers either haven’t heard about SAR or at least don’t give a foo about it. Therefore, they usually don’t aggressively optimize such instances. This is particularly true for compilers that are part of toolchains used in deeply embedded (bare-metal) systems. However, ‘gcc’ as well as ‘clang’, which are used in all kinds of systems, take advantage of SAR from optimization level 2 on. (You can explicitly disable SAR-related optimizations regardless of the optimization level by passing the ‘-fno-strict-aliasing’ option.)

Second, what ‘convert’ is doing is pretty much well-behaved. Sure, it aliases the ‘data’ and ‘measurements’ pointers, but it never accesses them concurrently. Once the ‘measurements’ pointer has been created, the ‘data’ pointer is not used anymore. If the caller (or the whole call-chain) is equally well-behaved, I don’t see a problem (don’t trust me!).

Third, there’s no aliased read/write access. Even if ‘data’ and ‘measurements’ were used concurrently, it wouldn’t be a problem, as long as both are only used for reading data (don’t trust me on this one, either!). By contrast, this I consider harmful:
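
A sketch of what I mean, again with the made-up ‘struct measurements_t’ from the first listing:

    #include <assert.h>
    #include <stdint.h>

    void harmful(uint8_t* data)
    {
        struct measurements_t* measurements = (struct measurements_t*)data;

        if (data[0] == 0u) {
            measurements->temperature = 0xFFFFu;   /* aliased write through 'measurements'     */
            assert(data[0] == 0xFFu);              /* may fail: the compiler may serve data[0] */
                                                   /* from a register still holding the old 0  */
        }
    }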

To the compiler, ‘data’ and ‘measurements’ are two totally unrelated pointers to unrelated memory areas. The original value of ‘data[0]’ might be cached in a register and not refetched from memory, hence the ‘assert’ might fail. In general, this is what will most likely happen when SAR is violated in contexts where it does matter: instead of suggestive emails being sent to your boss, you are much more likely to get stale values (which could of course lead to crashes later on).

NO PUN INTENDED

Let’s get real about SAR. Here are some relaxed, pragmatic rules on how to deal with the Strict Aliasing Rule:

0. Fully understand SAR
1. Try hard to adhere to SAR
2. Type-pun using ‘memcpy’
3. If you can’t, disable SAR-related compiler optimizations
4. If you can’t, avoid concurrent, aliased read/write access

But don’t assume that just because you didn’t get a ticket for speeding in the past, you will never ever get a ticket for speeding. What you’re doing is against the law. If you get busted someday, don’t whine, and don’t say I didn’t warn you. Rather, own your failures and move on.

Breakin’ rocks in the hot sun
I fought the law and the law won
I fought the law and the law won
I USED SOME CASTS FOR TYPE PUN
I fought the law and the law won
I fought the law and the law won

(with apologies to Sonny Curtis)

Bug Hunting Adventures #14: Bitmap [BM]adness (Solution)

It’s a given fact of life that something that’s deemed totally safe in one environment may be totally unsafe in another. Every German who has ever used an American sauna knows what I’m talking about.

Similar (but far less embarrassing!) traps lurk in situations where you reuse perfectly working C++ code in a C environment. Some time ago, I integrated a little home-grown C++ library into a plain C project. However, instead of the expected, proven functionality I got plenty of core dumps. After some assembly-level debugging, I came to the conclusion that I had found a compiler bug. Code along these lines

was compiled to this:

Why the heck did the compiler insert an offset of 4 instead of 1?

The answer to this question, which is also the answer to our bug hunting adventure, can be found here.

Bug Hunting Adventures #14: Bitmap [BM]adness

“What’s the meaning of goodness if there isn’t a little badness to overcome?”
― Anne Revere

The code below is part of a C graphics processing library, which parses data in the venerable bitmap (BMP) file format. A bitmap file consists of two parts: a header and the pixel data block. More specifically, a bitmap file is laid out like this:

Offset Size Content
0 1 Character ‘B’
1 1 Character ‘M’
2 4 Size of the bitmap file
6 4 Reserved
10 4 Offset to the first byte of the pixel data (ofs)
14 n Info block
ofs m Pixel data

All multi-byte integer values (like the bitmap file size and the offset to the pixel data) are stored in little-endian format.

The function ‘bmp_pixel_data’ takes a pointer to bitmap file data and returns a pointer to the pixel data area within the bitmap. The size of the pixel data area is returned via the ‘size’ out parameter. In case the provided bitmap file data is malformed, a NULL pointer is returned and the ‘size’ out parameter is set to zero.
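
(The listing isn’t reproduced here; what follows is merely a guess at its interface based on the description above, not the original code:)

    #include <stddef.h>
    #include <stdint.h>

    /* Reconstructed prototype; the parameter names and exact types are guesses. */
    const uint8_t* bmp_pixel_data(const uint8_t* bmp_data, size_t* size);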

As always, the code compiles cleanly without warnings (at ‘-W -Wall’), but when the function ‘bmp_pixel_data’ was put to use, it failed miserably. Where did the programmer goof?

Solution

Simplified: The Weasel That Teaches Evolution

“Weaseling out of things is important to learn. It’s what separates us from the animals… except the weasel.”
— Matt Groening

It’s Towel Day again! What a great opportunity to reflect upon Life, the Universe, and Everything. In this installment of the “Simplified” series, I want to tackle nothing less than Darwin’s theory of evolution. Why? Because I seriously believe that the world would be a better place if people finally understood it.

One common misconception, for example, is the belief that evolution occurs in a linear fashion, from single-celled organisms to Homo sapiens. People who hold this belief, like the taxi driver that I mentioned in a previous post, think that they can smash the theory of evolution by posing this cunning question:

“If we humans are really descendants of animals like apes and dogs, why are there still apes and dogs around?”

Just in case this reasoning sounds convincing to you as well: in reality, there is no single line of evolution; it’s rather a tree with many, many branches. On these branches, the universe performs experiments and decides, through natural selection, which species survive. Consequently, humans did not evolve from apes; they are rather cousins whose common ancestor was neither ape nor human.

Another fallacy is Fred Hoyle’s famous Boeing 747 analogy. Hoyle was a brilliant scientist, no doubt, yet he refused to accept that complex life-forms can emerge by chance:

“A junkyard contains all the bits and pieces of a Boeing-747, dismembered and in disarray. A whirlwind happens to blow through the yard. What is the chance that after its passage a fully assembled 747, ready to fly, will be found standing there?”

A variant of this argument is the infinite monkey theorem: a monkey typing random letters at a typewriter will never be able to produce Shakespeare’s works.

In 1986, evolutionary biologist Richard Dawkins set out to dispel such misunderstandings by implementing a computer program that works—in Dawkins’ own words—like this:

“It […] begins by choosing a random sequence of 28 letters, […] it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.”

Here’s a more detailed explanation of his weasel program: it starts with a string of 28 random letters. Next, it creates N offspring strings by copying the original 28 random letters N times. When being copied, the chance of a letter being changed into another (random) letter is P. Now that we have a set of N new strings, we compare each string letter by letter against “METHINKS IT IS LIKE A WEASEL” (a quote from Shakespeare’s Hamlet, by the way) and pick the one with the most character matches (the highest match score). This one is deemed the “fittest” and kept as the survivor of the first generation; the other N-1 strings are discarded. We repeat the whole process by creating again N offspring from the survivor, then pick a new survivor and so on until the survivor finally matches “METHINKS IT IS LIKE A WEASEL”.
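
Here is a rough, self-contained C sketch of this procedure (not Dawkins’ program and not the implementation linked below; N, P, and the target phrase are hard-coded):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define TARGET     "METHINKS IT IS LIKE A WEASEL"
    #define OFFSPRING  100      /* N: offspring per generation        */
    #define MUTATION   0.05     /* P: per-letter mutation probability */

    static const char ALPHABET[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";

    static char random_letter(void)
    {
        return ALPHABET[rand() % (sizeof(ALPHABET) - 1)];
    }

    static int score(const char* candidate, size_t len)
    {
        int matches = 0;
        for (size_t i = 0; i < len; ++i) {
            if (candidate[i] == TARGET[i]) {
                ++matches;
            }
        }
        return matches;
    }

    int main(void)
    {
        const size_t len = strlen(TARGET);
        char survivor[sizeof(TARGET)];
        char offspring[sizeof(TARGET)];
        char best[sizeof(TARGET)];
        int generation = 0;

        srand((unsigned)time(NULL));

        /* Generation 0: a completely random string. */
        for (size_t i = 0; i < len; ++i) {
            survivor[i] = random_letter();
        }
        survivor[len] = '\0';

        while (score(survivor, len) < (int)len) {
            int best_score = -1;

            /* Create N mutated copies of the survivor and keep the fittest one. */
            for (int n = 0; n < OFFSPRING; ++n) {
                for (size_t i = 0; i < len; ++i) {
                    double roll = (double)rand() / RAND_MAX;
                    offspring[i] = (roll < MUTATION) ? random_letter() : survivor[i];
                }
                offspring[len] = '\0';

                int s = score(offspring, len);
                if (s > best_score) {
                    best_score = s;
                    strcpy(best, offspring);
                }
            }

            strcpy(survivor, best);
            ++generation;
            printf("%4d: %s\n", generation, survivor);
        }

        return 0;
    }

Compile it with -std=c99 (or newer) and run it; it prints the fittest string of every generation until the target phrase is reached.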

Let’s choose N to be 100 (one hundred offspring per generation) and P to be 5%, as in Dawkins’ original experiment. How many generations would it take until we finally reach “METHINKS IT IS LIKE A WEASEL”? Just guess a number, I’ll wait here…

Answer: about 50 generations! To me, this number was so ridiculously small that I decided to implement the weasel program myself. Please check it out here. Through command-line parameters, you can change the values of N and P, as well as the target phrase. Toying around with this program is so eye-opening that you suddenly realize that evolution is possible, that complexity can emerge by mutation and survival of the fittest.

Interestingly, if you set P to a high value (say 20%), you are likely to never arrive at the target phrase. A high value of P simulates an early universe, or any environment with a high natural radioactivity level. In one experiment where I set P to 20%, it took 800 generations; with a P value of 1%, the target phrase was reached after only 90 generations.

I also played with much longer target phrases with a length of more than 150 letters. What I’ve found is that the longer the phrase gets, the more detrimental a high mutation probability becomes. If you think about it, that’s not surprising, as the likelihood that already matching letters are replaced again with non-matching letters increases. Conclusion: the evolution of complex life-forms requires an environment that provides the right mutation probability—neither too low, nor too high. But while a low mutation probability can always be compensated by time (evolution will just progress slower), a high mutation probability stifles evolution entirely.

Anyway, while the weasel program is a blatant simplification of real life, I think it’s a great tool for demonstrating that random variation combined with non-random cumulative selection can produce complex structures. This is what evolution is all about; not monkeys hacking away at keyboards and winds blowing through airplane junkyards.

Share and Enjoy!