Posts under C/C++/Embedded

Random Casting

Recently, a security-related bug slipped into libcurl 7.52.0.

For those of you who don’t know, libcurl is a popular open source library that supports many protocols and greatly simplifies data transfer over the Internet; countless open- and closed-source projects depend on it.

Because of the bug, this particular version of libcurl doesn’t use random numbers when it should, which is really bad for security:
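(A condensed sketch of the offending code; the signatures are paraphrased and everything not relevant is stripped away.)

```c
static CURLcode randit(struct Curl_easy *data, unsigned int *rnd)
{
  CURLcode result;

  /* intention: fill the caller's integer (*rnd) with random bytes */
  result = Curl_ssl_random(data, (unsigned char *)&rnd, sizeof(rnd));

  /* error handling and the non-crypto fallback are omitted */
  return result;
}
```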

Since all the surrounding code is stripped away, it is pretty easy to see what went wrong, right?

Within ‘randit’ there is an attempt to obtain a random number by calling ‘Curl_ssl_random’. However, ‘Curl_ssl_random’ is not passed the pointer ‘rnd’, but a pointer to ‘rnd’. Hence, the memory pointed to by ‘rnd’ is never filled with a random number; instead, the pointer ‘rnd’ itself is overwritten and ends up pointing to a random memory location.

How did this bug come about? I’m pretty sure that — initially — the unlucky developer had accidentally typed this:
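(The same call, presumably, just without the cast.)

```c
result = Curl_ssl_random(data, &rnd, sizeof(rnd));
```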

When (s)he compiled the code with gcc, the following error message was produced:
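(The original output isn’t preserved here; a reasonably recent gcc reports such a call roughly as follows, with the exact wording varying between versions.)

```
error: passing argument 2 of 'Curl_ssl_random' from incompatible pointer type
       [-Wincompatible-pointer-types]
note: expected 'unsigned char *' but argument is of type 'unsigned int **'
```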

This error message explains the problem exactly, but most likely, the developer only skimmed it and jumped to the wrong conclusion; that is, (s)he thought that a cast was needed because of a simple pointer incompatibility (unsigned int* vs. unsigned char*) when in fact there is a severe pointer incompatibility (pointer to pointer vs. pointer).

I’ve seen this many times before: developers apply casts to get rid of warnings from the compiler (or a static analysis tool) without a second thought. Don’t do this. Listen carefully when your compiler speaks to you; a cast, on the other hand, will silence it forever.

“inline” Is Yet Another Word For “Premature Optimization”

The fact that some C++ developers use the ‘inline’ keyword so much has always been a conundrum to me — I’ve never liked it. Why? First and foremost because it clutters up header files and exposes implementation details to the users of a class.

Most likely, inline aficionados believe that these disadvantages are more than compensated for by the fact that inlining gives them faster code, but this is not necessarily the case: according to the C++ standard (ISO/IEC 14882:2014), the compiler is allowed to silently ignore the ‘inline’ keyword:

“An implementation is not required to perform this inline substitution at the point of call”

Believing is not knowing, as the old saying goes. This is another reason why I don’t like the ‘inline’ keyword: it doesn’t guarantee you anything.

But let’s attack the ‘inline’ keyword from another angle. Even if we knew that declaring a method inline made it faster, shouldn’t we first ask ourselves whether there is actually a performance problem to solve? Without profiling, without a proven need, any optimization is premature optimization, which — according to Donald Knuth — is the root of all evil. The fact that an optimization gives a local improvement doesn’t justify it sufficiently — it’s the overall improvement of the major use cases that matters. Otherwise we would implement all of our functions in inline assembly, wouldn’t we?

In the old days of C programming, developers used the ‘register’ keyword as a hint to tell the compiler which variables should be kept in registers for performance reasons. Nowadays, every compiler is much better at allocating variables to registers than any human being. Consequently, the ‘register’ keyword has outlived its usefulness and was deprecated in C++11 (and removed altogether in C++17).

By the same token, today’s C++ compilers do a much better job of figuring out which functions should be inlined than we are able to do. Therefore, instead of giving hints to the compiler, we should rather rely on automated, transparent inlining that doesn’t clutter up class interfaces.

As an example, at optimization level -O2, the g++ compiler automatically inlines all functions that are small or called only once. Specifying -finline-functions (enabled by default at -O3) uses a heuristic to determine whether it’s worthwhile to inline a function — without the need for any developer intervention.
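As a sketch of what that looks like in practice (class and file names are made up): the header declares the members, the definitions live in the implementation file, and the optimizer decides about inlining on its own; with link-time optimization it can even do so across translation units.

```cpp
// point.h -- interface only, no inline bodies cluttering the header
class Point {
public:
    Point(int x, int y);
    int x() const;
    int y() const;
private:
    int x_;
    int y_;
};

// point.cpp -- definitions; built with 'g++ -O2 -flto', the compiler is
// free to inline these tiny functions wherever it sees fit, without a
// single 'inline' keyword in the interface.
Point::Point(int x, int y) : x_(x), y_(y) {}
int Point::x() const { return x_; }
int Point::y() const { return y_; }
```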

To me, it’s about time that ‘inline’ goes the way of the ‘register’ keyword.

Counting Down Correctly in C

The countdown for the New Year is nearing its end, so I want to take this opportunity to discuss how to implement loops that count down from an upper boundary to a lower boundary. I know it sounds mundane, but I will present a technique that is — at least in my experience — not widely known, not even amongst seasoned C coders (with the notable exception of Chuck Norris, of course).

But first, please take a moment to look at the following routine that employs a countdown for-loop and decide if it works correctly or not:
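(A reconstruction; the function name and the loop body are placeholders, but the loop itself is the one under discussion.)

```c
#include <stdio.h>

/* Print array[upper] down to array[lower], inclusive. */
void print_down(const int *array, size_t upper, size_t lower)
{
    for (size_t i = upper; i >= lower; --i) {
        printf("%d\n", array[i]);
    }
}
```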

This code appears to be fine, but it has a flaw that shows up only when the ‘lower’ index is 0: ‘size_t’ is an unsigned type, and when ‘i’ is 0, subtracting 1 yields a very large positive number (due to integer wrap-around), which in turn causes an out-of-bounds access to the given ‘array’. So what do we need to change such that the code works as expected, even for a lower bound of 0?

Most developers’ knee-jerk reaction is to change the type of the indices to a signed type, like ‘int’, but this is unfortunate, as it limits (at least halves) the available value range. As so often in life, the proper solution is not to fight the enemy but to turn him into a friend: let’s use unsigned wrap-around to our advantage:
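(The reconstructed routine from above, now with the reworked loop condition.)

```c
void print_down(const int *array, size_t upper, size_t lower)
{
    /* 'lower - 1' wraps around to SIZE_MAX when 'lower' is 0, which is
       exactly the sentinel value we need */
    for (size_t i = upper; i != lower - 1; --i) {
        printf("%d\n", array[i]);
    }
}
```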

Instead of using the greater-than-or-equal operator, we now use the not-equals operator, and instead of comparing against ‘lower’ we now compare against one less than ‘lower’. If ‘lower’ happens to be 0, ‘lower’ - 1 (again, due to integer wrap-around) will yield the maximum value representable by type ‘size_t’. The same happens to the loop counter ‘i’ when it has a value of 0 and is decremented once more. As a consequence, the expression ‘i != lower - 1’ becomes false and the loop terminates — as desired.

A Happy New Year to all of my faithful readers!

Dangerously Confusing Interfaces III

Just like the other “Dangerously Confusing Interfaces” posts, this one was also inspired by a real-world blunder that I made.

Here’s the background: usually, routines that accept data via a pointer from the caller either execute synchronously or copy the data into their own internal data structures for later processing. Take the venerable ‘fwrite’ from the C standard library as an example:
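(Here is its prototype.)

```c
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
```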

‘fwrite’ blocks until the data has been written, either to disk or to an internal buffer. In either case, once ‘fwrite’ returns, it doesn’t care about the original data anymore. That’s why it’s safe (and common practice) to pass a pointer to a local buffer on the stack:
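(For instance, with a made-up logging helper.)

```c
#include <stdio.h>
#include <string.h>

void log_line(FILE *fp, const char *text)
{
    char buffer[128];   /* local buffer on the stack */

    snprintf(buffer, sizeof(buffer), "%s\n", text);
    /* safe: fwrite is completely done with 'buffer' once it returns */
    fwrite(buffer, 1, strlen(buffer), fp);
}
```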

All standard library and POSIX APIs behave like ‘fwrite’, which is both safe and convenient. However, with embedded systems, the story is different: in some cases, memory is so tight that additional buffers or internal storage can’t be afforded. Such functions don’t copy the provided data but only store a pointer to it and expect the memory pointed to by that pointer to remain valid long after the function call has returned. Here is an example from the AUTOSAR standard, which is used by almost all embedded automotive products:
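(Roughly; the exact parameter types vary between AUTOSAR releases.)

```c
Std_ReturnType NvM_WriteBlock(NvM_BlockIdType BlockId, const void *NvM_SrcPtr);
```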

‘NvM_WriteBlock’ is used to store data to a given non-volatile memory block. However, what this function does is only enqueue a request for the given block ID together with the data pointer (not a copy of your data). This is done for the sake of efficiency, because there can be multiple write requests in parallel. The queue is later processed in another task, long after any local buffer would have been removed from the stack.

Passing a pointer to a buffer with automatic storage is an easy mistake to make, especially since such “non-copy” interfaces are so rarely encountered. How can “write-like” interfaces that don’t make a copy of the provided data be made safer, such that misuse is less likely? Obviously, just adding documentation is not enough — nobody reads documentation, especially in the heat of the moment.

In my view, the root of the problem is that such functions accept just about any pointer. What if the caller was forced to explicitly cast the pointer to another type? A type with a cunningly chosen typename, one that reminded the caller of the potential pitfall? Here is my approach:
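(A sketch of the idea; ‘SomeWritelikeFunction’ stands in for any write-like routine that doesn’t copy.)

```c
#include <stddef.h>
#include <stdint.h>

/* A pointer to this type is deliberately incompatible with ordinary data
   pointers; the scary name is the actual safety feature. */
typedef struct {
    void *dummy;   /* never accessed; only there for alignment, see below */
} uncopied_memory;

void SomeWritelikeFunction(const uncopied_memory *data, size_t length);
```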

Whenever a pointer is passed to this function, developers have to write something like this to make the compiler happy:
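(Continuing the sketch; the buffer deliberately has static storage duration.)

```c
static uint8_t block_data[64];   /* must stay valid until the request is processed */

void store_block(void)
{
    /* ... fill block_data ... */

    /* the conspicuous cast is the reminder to double-check the buffer's lifetime */
    SomeWritelikeFunction((const uncopied_memory *)block_data, sizeof(block_data));
}
```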

Typing ‘uncopied_memory’ should shake up even the most focused developers and remind them to double-check what they are passing into this function.

Of course, within ‘SomeWritelikeFunction’, the provided pointer needs to be cast back into something more useful, like a ‘const uint8_t*’. Further, note that the ‘dummy’ member within ‘uncopied_memory’ must not be used; it only exists to make sure that the cast to ‘uncopied_memory*’ in the calling function is safe: a pointer to a struct is aligned such that it is compatible with the struct’s most-aligned member, which is ‘void*’, and ‘void*’ is by definition compatible with any other pointer type.

Using the C Preprocessor to Perform Compile-time Computations

Sometimes, it is desirable to perform computations at compile time — either for efficiency or to avoid redundancy. Alas, what a compiler can compute at compile time is rather limited — mostly just a combination of unary and binary operators. What if you need to do something more complex?

For the sake of illustration, I chose computing the (floored) log2 of an integer as an example, but the techniques presented below can be easily adapted to other use cases.
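In plain run-time code, the computation itself is trivial (this baseline is my own formulation):

```c
/* floored log2 of n, for n >= 1 */
unsigned floor_log2(unsigned n)
{
    unsigned result = 0;

    while (n >= 2) {
        n /= 2;
        ++result;
    }
    return result;
}
```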

In C++ — provided you’re brave enough — you can always resort to template metaprogramming:
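(A minimal version; the names are my own.)

```cpp
// Compile-time floor(log2(N)) for N >= 1; StaticLog2<0> deliberately
// has no meaning and will not compile.
template <unsigned N>
struct StaticLog2 {
    static const unsigned value = 1 + StaticLog2<N / 2>::value;
};

template <>
struct StaticLog2<1> {
    static const unsigned value = 0;
};

// usage: an array dimensioned at compile time
char scratch[StaticLog2<4096>::value];   // 12 elements
```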

But since template metaprogramming was not deliberately built into C++ but rather discovered by accident, the resulting code is far from pleasant to look at. If you are lucky and your compiler supports C++11 (or rather C++11’s constexpr feature), you have a better option:
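(Again, the name is my own.)

```cpp
// C++11 constexpr version: a single recursive expression
constexpr unsigned static_log2(unsigned n)
{
    return n < 2 ? 0 : 1 + static_log2(n / 2);
}

static_assert(static_log2(4096) == 12, "floored log2 of 4096 is 12");
```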

It’s still recursive, but at least this solution uses real functions rather than structs to achieve its goal — much easier on the eyes!

But what if you code at the other end of the spectrum? What if you are limited to plain C?

Many years ago, I discovered the following technique that has proven useful in quite a few situations; it is used like this:
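(The ring-buffer scenario is made up.)

```c
#define RING_BUFFER_SIZE 64

/* the "call": compute floor(log2(RING_BUFFER_SIZE)) at compile time */
#define STATIC_LOG2_ARG RING_BUFFER_SIZE
#include "static_log2.h"

/* the "return value" is now available as STATIC_LOG2_VALUE (6 here) */
enum { RING_BUFFER_INDEX_BITS = STATIC_LOG2_VALUE };
```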

The “argument” is passed via a symbolic constant (STATIC_LOG2_ARG), the computation is done by “calling the function” (by including static_log2.h) and the “return value” is stored in another symbolic constant (STATIC_LOG2_VALUE).

Here’s an excerpt of what’s contained in the static_log2.h header file:
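(The original file isn’t reproduced here; this condensed sketch shows the underlying idea, a fully unrolled cascade of preprocessor tests, and only covers arguments below 256.)

```c
/* static_log2.h -- condensed sketch, not the original file */

#ifndef STATIC_LOG2_ARG
#  error "define STATIC_LOG2_ARG before including static_log2.h"
#endif

/* forget the result of any previous "call" */
#undef STATIC_LOG2_VALUE

#if   (STATIC_LOG2_ARG) >= 256
#  error "this sketch only handles arguments below 256"
#elif (STATIC_LOG2_ARG) >= 128
#  define STATIC_LOG2_VALUE 7
#elif (STATIC_LOG2_ARG) >= 64
#  define STATIC_LOG2_VALUE 6
#elif (STATIC_LOG2_ARG) >= 32
#  define STATIC_LOG2_VALUE 5
#elif (STATIC_LOG2_ARG) >= 16
#  define STATIC_LOG2_VALUE 4
#elif (STATIC_LOG2_ARG) >= 8
#  define STATIC_LOG2_VALUE 3
#elif (STATIC_LOG2_ARG) >= 4
#  define STATIC_LOG2_VALUE 2
#elif (STATIC_LOG2_ARG) >= 2
#  define STATIC_LOG2_VALUE 1
#else
#  define STATIC_LOG2_VALUE 0
#endif

/* allow the next "call" to supply a fresh argument */
#undef STATIC_LOG2_ARG
```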

In the C++ examples, iteration is done using recursion, but here everything is unrolled/inlined.

For another case where this approach is employed, check out this post. You probably don’t need this technique very often, but it’s good to have it in your bag of macro tricks.

Bug Hunting Adventures #11: Bad Weather

“It is only in sorrow bad weather masters us;
in joy we face the storm and defy it”
— Amelia Barr

Imagine a weather monitoring system where environmental data is collected by various sensors and distributed via messages to other components for further processing.

In the code below, produce_env_measurement() represents a task that constantly produces messages containing various environmental measurements while another task (represented by process_env_measurement()) consumes them. To ensure data integrity, a Fletcher-16 checksum is appended to every message, but the application nevertheless doesn’t work reliably. Where’s the bug?
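For reference, a textbook Fletcher-16 implementation looks like this; note that this is a generic version, not the puzzle code, which is linked below:

```c
#include <stddef.h>
#include <stdint.h>

/* straightforward Fletcher-16 over a byte buffer */
uint16_t fletcher16(const uint8_t *data, size_t len)
{
    uint16_t sum1 = 0;
    uint16_t sum2 = 0;

    for (size_t i = 0; i < len; ++i) {
        sum1 = (sum1 + data[i]) % 255;
        sum2 = (sum2 + sum1) % 255;
    }
    return (uint16_t)((sum2 << 8) | sum1);
}
```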

Code
Solution