
“inline” Is Yet Another Word For “Premature Optimization”

The fact that some C++ developers use the ‘inline’ keyword so much has always been a conundrum to me — I’ve never liked it. Why? First and foremost because it clutters up header files and exposes implementation details to the users of a class.

Most likely, inline aficionados believe that these disadvantages are more than compensated for by the fact that inlining gives them faster code, but this is not necessarily the case: according to the C++ standard (ISO/IEC 14882:2014), the compiler is allowed to silently ignore the ‘inline’ keyword:

“An implementation is not required to perform this inline substitution at the point of call”

Believing is not knowing, as the old saying goes. This is another reason why I don’t like the ‘inline’ keyword: it doesn’t guarantee you anything.

But let’s attack the ‘inline’ keyword from another angle. Even if we knew that declaring a method inline made it faster, shouldn’t we first ask ourselves whether there actually is a performance problem? Without profiling, without a proven need, any optimization is premature optimization, which — according to Donald Knuth — is the root of all evil. The fact that an optimization gives a local improvement doesn’t justify it sufficiently — it’s the overall improvement of the major use cases that matters. Otherwise we would implement all of our functions with inline assembly, wouldn’t we?

In the old days of C programming, developers used the ‘register’ keyword as a hint to tell the compiler which variables should be kept in registers for performance reasons. Nowadays, every compiler is much better at allocating variables to registers than any human being. Consequently, the ‘register’ keyword has become a meaningless hint; C++ even deprecated it in C++11 and removed it altogether in C++17.

By the same token, today’s C++ compilers do a much better job of figuring out which functions should be inlined than we are able to do. Therefore, instead of giving hints to the compiler, we should rely on automated, transparent inlining that doesn’t clutter up class interfaces.

As an example, at optimization level -O2, the g++ compiler automatically inlines all functions that are small or called only once. Specifying -finline-functions (enabled by default at -O3) uses a heuristic to determine if it’s worthwhile to inline a function or not — without the need for any developer intervention.
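As a quick illustration (my own sketch, not code from the original post), consider a helper that is deliberately not marked ‘inline’:

    // square() carries no 'inline' keyword at all
    static int square(int x)
    {
        return x * x;
    }

    int sum_of_squares(int a, int b)
    {
        // at -O2, g++ will normally inline both calls on its own
        return square(a) + square(b);
    }

Compiling with ‘g++ -O2 -S’ and inspecting the generated assembly is an easy way to convince yourself.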

To me, it’s about time that ‘inline’ goes the way of the ‘register’ keyword.

‘static’ With Array Parameters? Oh My!

There’s a rather obscure feature in C99 which allows you to put the ‘static’ keyword between the brackets when declaring array-like parameters to functions, as in
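A sketch of such a declaration (the element type and the minimum of 42 come from the discussion below; the parameter name and the rest of the signature are my own):

    #include <stdint.h>

    void transform(uint8_t buffer[static 42]);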

What this means, in a nutshell, is that the caller of this function is expected to pass an array of at least 42 uint8_t’s. Calling ‘transform’ like this would be perfectly OK:
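Continuing the sketch above (the buffer size of 50 is arbitrary, as long as it is at least 42):

    uint8_t big_buffer[50];

    void caller(void)
    {
        transform(big_buffer);   /* fine: at least 42 elements are provided */
    }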

whereas this would constitute a breach of contract:
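For example (again my own numbers):

    uint8_t small_buffer[10];

    void caller(void)
    {
        transform(small_buffer); /* breach of contract: fewer than 42 elements */
    }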

Likewise, passing a NULL pointer is not allowed, as it — by definition — doesn’t point to anything, not even a single element:
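That is (sketch):

    void caller(void)
    {
        transform(NULL);         /* breach of contract: no elements at all */
    }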

THE QUEST FOR MEANING

While the semantics of this feature are pretty clear, the purpose isn’t. The C99 language standard doesn’t give a hint, either.

Even when ‘static’ is used, the called function ‘transform’ doesn’t know the exact size of the array. It only knows that it comprises at least 42 elements, so it can only reliably utilize 42 elements. There’s nothing ‘transform’ can do with any (potential) extra elements, because there might be none. So how is this any different from the plain version?
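That is, the plain declaration without ‘static’ (reusing the sketch from above):

    void transform(uint8_t buffer[42]);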

This question haunted me for days (if not weeks) and I just couldn’t find a satisfying answer. But let’s put it aside for a moment, as I still have a lot to criticize regarding the technicalities.

THE SYNTAX

First off, why on earth ‘static’? In C, the keyword ‘static’ already has overloaded meanings: a) module scope (an object or function declared ‘static’ is private to a translation unit) and b) lifetime (a static object defined within a function retains its value across invocations). Thanks to C99, ‘static’ now can also mean “at least”. Yuck!

Had I been forced to specify this feature, I would have used the ‘>=’ operator instead of ‘static’:
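Something along these lines, shown only as a comment because it is of course not valid C:

    /* hypothetical syntax, NOT valid C -- for illustration only:
     *
     *     void transform(uint8_t buffer[>= 42]);
     */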

THE DANGERS

One of the biggest and most infamous blunders in engineering is when you don’t have a single point of truth; that is, the same information is stored in more than one place. Over time, inconsistencies arise in the copies which can lead to all kinds of havoc. All software engineers are reminded again and again to honor the DRY principle.

According to the grammar, ‘static’ can appear in function declarations as well as function definitions. Nobody prevents you from declaring ‘transform’ like this in your header file:
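For instance (same sketch as before; the file names are made up):

    /* transform.h */
    void transform(uint8_t buffer[static 42]);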

and like so in a C file:
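For instance, with a mismatching minimum (the value 24 is my invention):

    /* transform.c */
    void transform(uint8_t buffer[static 24])
    {
        /* ... */
    }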

The C99 standard says nothing about such inconsistencies: whether they constitute an error, which declaration takes precedence, or whether the compiler has to emit a warning.

Another problem is that C99 also introduced variable-length arrays (VLAs) and hence an array’s size is not required to be a compile-time constant, like it used to be in C89 and like it still is in C++. Therefore, the C99 grammar blesses such abominations:
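For instance (a sketch; ‘my_global’ is the variable referred to below, and its initial value of 42 matches the contract discussed so far):

    #include <stdint.h>

    unsigned my_global = 42;

    void transform(uint8_t buffer[static my_global]);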

What’s the contract now? At least 42 elements (the value of ‘my_global’ at program start-up) or the value of ‘my_global’ when ‘transform’ is actually called:
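Continuing the sketch (the new value of 10 and the buffer are made up):

    uint8_t some_buffer[64];

    void caller(void)
    {
        my_global = 10;           /* changed at run time...                  */
        transform(some_buffer);   /* ...so is the promise 42 or 10 elements? */
    }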

Again, if you’d forced me to specify this feature, I’d have required the size specifier to be a compile-time constant:

AN EPIPHANY

While ranting about this feature, I suddenly had an epiphany. Let’s revisit the question I asked above: What’s really the difference between

and

since my doubts regarding the usefulness of this feature are 100% founded in this question. Previously, I believed that in the second case, there are at least 42 elements, while in the first case there are exactly 42 elements. Not a big difference, because what benefit would a compiler be able to derive from this subtle difference?
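For reference, these are the two declarations being compared (using the sketch from above):

    void transform(uint8_t buffer[42]);          /* first case: plain size     */
    void transform(uint8_t buffer[static 42]);   /* second case: 'static' size */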

But I obviously fell into a trap that I warned my readers about a long time ago: believing that array parameters are really arrays, which is mistaken, of course, because arrays cannot be passed directly to functions. From the C language’s point of view, the former interface of ‘transform’ is equivalent to

which means that ‘buffer’ is a pointer to a single ‘uint8_t’, nothing more, nothing less. The compiler can’t make any assumptions beyond that; it’s obliged to ignore the fact that you indicated that there are 42 elements. Specifically, all these declarations are equivalent and effectively declare a function taking a pointer to a ‘uint8_t’:
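That is (a sketch with the names used above; all three spellings declare the exact same function):

    void transform(uint8_t buffer[42]);
    void transform(uint8_t buffer[]);
    void transform(uint8_t *buffer);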

By contrast, the version of ‘transform’ that uses ‘static’ tells the compiler that a pointer to the first of at least 42 elements is passed. Armed with this knowledge, the compiler can now perform certain optimizations, the most likely one being accessing memory in units much larger than a ‘uint8_t’, to save memory access times. Ah!

ADVICE

Even though we now understand the motivation behind this feature, I still wouldn’t recommend using it. As is the case with many other so-called “optimization” features (e.g. ‘inline’ and ‘restrict’), you, as a developer, have to keep a promise but get nothing dependable back from the compiler. As usual, my advice is this: if you think something needs to be optimized, prove it first; if evidence shows that optimization is necessary, optimize in a dependable way — don’t rely on something that might change from one compiler version to another, especially if it comes with a hideous interface that invites inconsistencies.

ONE MORE THING

Thinking about this odd feature once again, what it all boils down to is that ‘static’ with function parameters actually tries to deliver what regular array-like parameters can’t, namely inform the compiler that the pointer that is passed is really an array, not just a pointer to a single element. But there’s already a feature in the C language (available since C89) that achieves that, namely pointers to arrays:
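A sketch of what that interface could look like (again reusing the names from above):

    #include <stdint.h>

    void transform(uint8_t (*buffer)[42]);   /* pointer to an array of exactly 42 elements */

    uint8_t my_buffer[42];

    void caller(void)
    {
        transform(&my_buffer);               /* the type system checks the size */
    }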

Contrary to ‘static’ with array parameters, a pointer to an array is a clean, proven, and type-safe concept. Since the compiler knows how many elements the array comprises, it can apply the same optimizations that the ‘static’ with array parameters feature promises, that is, accessing multiple elements at once when traversing the array. There’s really no reason to put ‘static’ between your array brackets, trust me.

Pointers in C, Part V: The ‘restrict’ Qualifier

Pointers in C

“Le vrai est trop simple, il faut y arriver toujours par le compliqué.”
(“The truth is too simple: one must always get there by a complicated route.”)
― George Sand, Letter to Armand Barbès, 12 May 1867

Exactly one year ago, I started this series on pointers, but what I really wanted to blog about originally was a rather arcane and rarely used keyword that first appeared in the C99 language standard: the ‘restrict’ qualifier. But after trying to digest the formal definition in chapter 6.7.3.1 I decided that taking a little detour would make my readers’ lives (and mine) much easier.

Let me set the stage for ‘restrict’ by summarizing what I wrote in episode 3 about the “strict aliasing rule”:

1. The compiler might optimize code involving multiple pointers, provided the pointers are not aliased; that is, they don’t point to the same object or memory.

2. The compiler assumes that pointers to incompatible types never alias.

3. The compiler assumes that pointers to compatible types (same types, apart from CV-qualification and signedness) potentially alias.

Therefore, a function with this signature is eligible for compiler optimization:
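For instance (my own sketch, not necessarily the signature used in episode 3; what matters is that the two element types are incompatible):

    #include <stddef.h>
    #include <stdint.h>

    /* incompatible element types: the compiler may assume dst and src never alias */
    void transform(uint32_t *dst, const uint16_t *src, size_t n);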

whereas this one is not:
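For instance (again my own sketch; the element types now match, differing only in const-ness):

    /* same element type on both sides (apart from const): aliasing must be assumed */
    void transform(uint32_t *dst, const uint32_t *src, size_t n);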

This is unfortunate, because most likely, the arrays passed to the second version of ‘transform’ are in completely different, non-overlapping memory regions. But the compiler doesn’t know and hence stubbornly adheres to the strict aliasing rule.

The ‘restrict’ qualifier, which — contrary to the ‘const’ and ‘volatile’ qualifiers — can only be applied to pointers, is a promise given by the programmer to the compiler that pointers don’t alias even though they point to objects of the same type. Therefore, this version of ‘transform’ can be optimized by the compiler:
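For instance (the same sketch as above, now with the promise spelled out):

    /* the programmer promises: dst and src never alias */
    void transform(uint32_t * restrict dst, const uint32_t * restrict src, size_t n);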

Let’s put this to the test with the ‘silly’ example from episode 3:
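I’ll use a reconstruction of that example here (the details may differ from the original):

    int silly(int *x, int *y)
    {
        *x = 0;
        *y = 1;
        return *x;   /* can this be folded into 'return 0'? */
    }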

Before knowing about the strict aliasing rule, we were surprised to see that the memory access to ‘x’ in the return statement was not replaced with a simple ‘return 0’. After having learned about the strict aliasing rule, it’s clear: since ‘x’ and ‘y’ point to the same type, the compiler must assume that they may point to the same memory location and hence it loads the value pointed to by ‘x’ from memory afresh:

Now, if we tell the compiler that ‘x’ and ‘y’ never point to the same memory location, optimization is possible:
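A sketch of the ‘restrict’-qualified variant (the name ‘silly3’ is the one used below):

    int silly3(int * restrict x, int * restrict y)
    {
        *x = 0;
        *y = 1;
        return *x;   /* the compiler may now simply return 0 */
    }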

Nice, isn’t it?

If you use the ‘restrict’ qualifier on a pointer, you promise that — at least for the lifetime of the restricted pointer — the object pointed to is only accessed through this pointer. Break that promise and you get undefined behavior. (In the ‘silly3’ example, the lifetimes of the pointers ‘x’ and ‘y’ end once the call to ‘silly3’ returns.)

In the C99 language standard, many functions from the standard library have been revised and now make use of the ‘restrict’ keyword. Take ‘memcpy’, for instance:
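Its C99 declaration (parameter names adjusted to match the discussion; the standard itself uses ‘s1’ and ‘s2’):

    #include <stddef.h>

    void *memcpy(void * restrict dst, const void * restrict src, size_t n);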

As everybody knows, ‘memcpy’ can only copy non-overlapping blocks of memory and this fact is nicely highlighted by the use of the ‘restrict’ keyword: during the call to ‘memcpy’ the memory regions src[0] to src[n-1] as well as dst[0] to dst[n-1] are exclusively owned and may not be accessed by other pointers. Since ‘memmove’ can copy overlapping blocks of memory (with a little speed penalty, of course), ‘memmove’ consequently doesn’t declare restricted pointers:
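Again with adjusted parameter names:

    #include <stddef.h>

    void *memmove(void *dst, const void *src, size_t n);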

Please be aware that ‘restrict’ is not supported by the C++ language standard and it’s unclear whether it ever will be. If you mix C99 and C++ code, you might have to strip the ‘restrict’ keyword from C99 headers to avoid compilation errors:
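One common workaround is a guard like the following sketch, assuming a header that is shared between C99 and C++ translation units (another option is the compiler-specific ‘__restrict’ spelling, which g++ and clang++ accept):

    /* shared_header.h (sketch) */
    #include <stddef.h>
    #include <stdint.h>

    #ifdef __cplusplus
    #  define restrict              /* C++ has no 'restrict': define it away */
    #endif

    void transform(uint32_t * restrict dst, const uint32_t * restrict src, size_t n);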

In general, I’m not a big fan of optimization features that the compiler is free to ignore. If utmost performance is important, you want dependable performance. Most likely, your routine is not on the performance critical path, anyway. If you think it is, carefully profile your code and after you proved that it is, you’d better code that part in assembly language. Without such evidence, sprinkling your code with ‘restrict’ is little short of premature optimization. (I complained here about the unnecessarily overused ‘inline’ keyword for the same reason.)

What I do like about the ‘restrict’ keyword, though, is that by unraveling it, we’ve made a beautiful journey through important everyday programming topics like “pointers vs. arrays”, “type qualifiers”, “pointer conversion rules”, and the “strict aliasing rule”. The journey was the destination.

Pointers in C, Part II: CV-Qualifiers

“A teacher is never a giver of truth; he is a guide, a pointer to the truth that each student must find for himself.”
— Bruce Lee

In part I of this series, I explained what pointers are in general, how they are similar to arrays, and — more importantly — where, when, and why they are different to arrays. Today, I’ll shed some light on the so-called ‘cv qualifiers’ which are frequently encountered in pointer contexts.

CV-QUALIFIER BASICS

CV-qualifiers allow you to supplement a type declaration with the keywords ‘const’ or ‘volatile’ in order to give a type (or rather an object of a certain type) special treatment. Take ‘const’, for instance:
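A minimal sketch (the constant’s name is the one used further below; the value is arbitrary):

    const unsigned MAX_SENSORS = 16U;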

‘const’ is a guarantee that a value isn’t (inadvertently) changed by a developer. On top of that, it gives the compiler some leeway to perform certain optimizations, like placing ‘const’ objects in ROM/non-volatile memory instead of (expensive) RAM, or even not storing the object at all and instead inlining the literal value wherever it’s needed.

‘volatile’, on the other hand, prevents optimizations. It’s a hint to the compiler that the value of an object can change in ways not known by the compiler and thus the value must never be cached in a processor register (or inlined) but instead always loaded from memory. Apart from this ‘don’t optimize’ behavior, there’s little that ‘volatile’ guarantees. In particular — and contrary to common belief — it’s no cure for typical race condition problems. It’s mostly used in signal handlers and to access memory-mapped hardware devices.

Even if it sounds silly at first, it’s possible to combine ‘const’ and ‘volatile’. The following code declares a constant that shall not be inlined/optimized:
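For example (same sketch as above):

    const volatile unsigned MAX_SENSORS = 16U;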

Using both ‘const’ and ‘volatile’ together makes sense when you want to ensure that developers can’t change the value of a constant and at the same time retain the possibility to update the value through some other means, later. In such a setting, you would place ‘MAX_SENSORS’ in a dedicated non-volatile memory section (i.e. flash or EEPROM) that is independent of the code, e.g. a section that only hosts configuration values*. By combining ‘const’ and ‘volatile’ you ensure that the latest configuration values are used and that these configuration values cannot be altered by the programmer (i.e. from within the software).

To sum it up, ‘const’ means “not modifiable by the programmer” whereas ‘volatile’ denotes “modifiable in unforeseeable ways”.

CV-QUALIFIERS COMBINED WITH POINTERS

Like I stated in the intro, cv-qualifiers often appear in pointer declarations. However, this poses a problem because we have to differentiate between cv-qualifying the pointer and cv-qualifying the pointed-to object. There are “pointers to ‘const'” and “‘const’ pointers”, two terms that are often confused. Here’s code involving a pointer to a constant value:
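A sketch (all names are my own):

    #include <stddef.h>

    void example(void)
    {
        int value = 42;
        const int *pc = &value;   /* pointer to const int */

        /* *pc = 0;    error: no writes through a pointer to const */
        pc = NULL;     /* fine: the pointer itself is not const    */
    }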

Since the pointer is declared as pointing to ‘const’, no changes through this pointer are possible, even if it points to a mutable object in reality.

Constant pointers, on the other hand, behave differently. Have a look at this example:
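Again a sketch:

    void example(void)
    {
        int value = 42;
        int other = 0;
        int * const cp = &value;  /* const pointer to (modifiable) int */

        *cp = 1;                  /* fine: the pointed-to int is not const */
        /* cp = &other;              error: cp itself is const             */
    }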

The takeaway is this: if the ‘const’ keyword appears to the left of the ‘*’, the pointed-to value is ‘const’ and hence we are dealing with a pointer to ‘const’; if the ‘const’ keyword is to the right of the ‘*’, the pointer itself is ‘const’. Of course, it’s possible to have the ‘const’ qualifier on both sides at the same time:
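For instance (still a sketch):

    const int value = 42;
    const int * const cpc = &value;   /* neither cpc nor *cpc may be modified */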

The same goes for multi-level pointers:
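For instance, a declaration that matches the description below (my reconstruction):

    const int ** const *v;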

Here, ‘v’ is a regular (non-‘const’) pointer to a ‘const’ pointer to a pointer to a ‘const’ integer.

Yuck! Sometimes, I really wish the inventors of C had used ‘<-‘ instead of ‘*’ for pointer declarations — the resulting code would have been easier on the eyes! Consider:

versus

So

would read from right to left as “v is a POINTER TO const POINTER TO const int”. Life would be so much simpler… but let’s face reality and stop day-dreaming!

Everything I said about ‘const’ equally applies to pointers to ‘volatile’ and ‘volatile’ pointers: pointers to ‘volatile’ ensure that the pointed-to value is always loaded from memory whenever a pointer is dereferenced; with ‘volatile’ pointers, the pointer itself is always loaded from memory (and never kept in registers).
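In code (a sketch):

    volatile int *pv;     /* pointer to volatile int: every access to *pv goes to memory */
    int * volatile vp;    /* volatile pointer: every access to vp itself goes to memory  */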

Things really get complicated when there is a free mix of ‘volatile’ and ‘const’ keywords with pointers involving more than two levels of indirection:
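For example (an invented specimen):

    const int * volatile * const *p;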

We’d better not go there! If you are in multi-level pointer trouble, remember that there’s a little tool called ‘cdecl’, which I showcased in the previous episode. But now let’s move on to the topic of how and when cv-qualified pointers can be assigned to each other.

ASSIGNMENT COMPATIBILITY I

Pointers are assignable if the pointer on the left hand side of the ‘=’ sign is not more capable than the pointer on the right hand side. In other words: you can assign a less constrained pointer to a more constrained pointer, but not vice versa. If you could, the promise made by the constrained pointer would be broken:
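A sketch of both directions (all names are my own):

    void example(void)
    {
        int        i  = 42;
        const int  c  = 42;
        const int *pc = &i;   /* fine: the left hand side is more constrained      */
        int       *p;

        /* p  = &c;    error: the left hand side would be less constrained...      */
        /* *p = 0;     ...and could then be used to overwrite the constant 'c'     */
    }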

If the previous statement was legal, a programmer could suddenly get write access to a read-only variable:

Again, the same restrictions hold for pointers to ‘volatile’. In general, pointers to cv-qualified objects are more constrained than their non-qualified counterparts and hence may not appear on the right hand side of an assignment to a less constrained pointer. By the same token, this is not legal:
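For instance (sketch):

    volatile int flag;
    int *q;

    /* q = &flag;    error: would silently drop the 'volatile' qualifier */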

ASSIGNMENT COMPATIBILITY II

The rule which requires that the right hand side must not be more constrained than the left hand side might lead you to the conclusion that the following code is perfectly kosher:
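Namely something like this (sketch):

    int *p;
    const int **ppc;

    /* ppc = &p;    looks harmless -- but see below */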

However, it’s not, and for good reason, as I will explain shortly. But it’s far from obvious and it’s a conundrum to most — even seasoned — C developers. Why is it possible to assign a pointer to non-const to a pointer to ‘const’:
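Like so (sketch):

    void example(void)
    {
        int i = 42;
        int *p = &i;
        const int *pc = p;   /* fine: merely adds a restriction */
    }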

but not a pointer to a pointer to non-const to a pointer to a pointer to ‘const’?

Here is why. Imagine this example:
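Here is a reconstruction of that thought experiment (the names ‘ppc’, ‘p’, and ‘VALUE’ are the ones used below; the cast is my addition: it forces through exactly the assignment the compiler would reject, so that we can play the scenario out):

    void example(void)
    {
        const int VALUE = 42;
        int *p;

        /* without the cast, the compiler rejects this line -- and rightly so */
        const int **ppc = (const int **)&p;

        *ppc = &VALUE;   /* looks legal: 'const int *' on both sides...       */
                         /* ...but it quietly made p point at VALUE           */

        /* VALUE = 0;       error: VALUE itself is read-only                  */

        *p = 0;          /* compiles without any cast, yet modifies VALUE:    */
                         /* undefined behavior                                */
    }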

Graphically, our situation is this. ‘ppc’ points to ‘p’ which in turn points to some random memory location, as it hasn’t been initialized yet:

Now, when we dereference ‘ppc’ one time, we get to our pointer ‘p’. Let’s point it to ‘VALUE’:

It shouldn’t surprise you that this assignment is valid: the right hand side (pointer to const int) is not less constrained than the left hand side (also pointer to const int). The resulting picture is this:

Everything looks safe. If we attempt to update ‘VALUE’, we won’t succeed:

But we are far from safe. Remember that we also (indirectly) updated ‘p’, which was declared as pointing to a non-const int? The compiler would happily accept the following assignment:

which leads to undefined behavior, as the C language standard calls it.

This example should have convinced you that it’s a good thing that the compiler rejects the assignment from ‘int**’ to ‘const int**’: it would open up a backdoor for granting write access to more constrained objects. Finding the corresponding words in the C language standard is not so easy, however, and requires some digging. If you feel “qualified” enough (sorry for the pun), look at chapter “6.5.16.1 Simple assignment”, which states the rules for assignability, “6.7.5.1 Pointer declarators”, which details pointer type compatibility, as well as “6.7.3 Type qualifiers”, which specifies compatibility of qualified types. Putting this all together into a cohesive picture is left as an exercise to the diligent reader.

________________________________
*) Separating code from configuration values is generally a good idea in an embedded context, as it allows you to replace either of them independently.

Using the C Preprocessor to Perform Compile-time Computations

Sometimes, it is desirable to perform computations at compile time — either for efficiency or to avoid redundancy. Alas, what a compiler can compute at compile time is rather limited — mostly just a combination of unary and binary operators. What if you need to do something more complex?

For the sake of illustration, I chose computing the (floored) log2 of an integer as an example, but the techniques presented below can be easily adapted to other use cases:
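To make the goal concrete, here is a plain run-time version of the computation we would like to move to compile time (my own sketch):

    /* floor(log2(n)): the position of the highest set bit */
    unsigned floor_log2(unsigned n)
    {
        unsigned result = 0;

        while (n >>= 1)
            ++result;

        return result;
    }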

In C++ — provided you’re brave enough — you can always resort to template metaprogramming:
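A sketch of how such a template metaprogram can look (my own variant, not necessarily the code from the original post):

    template<unsigned N>
    struct StaticLog2
    {
        static const unsigned value = 1 + StaticLog2<N / 2>::value;
    };

    template<>
    struct StaticLog2<1>
    {
        static const unsigned value = 0;
    };

    // usage: evaluated entirely at compile time
    char buffer[StaticLog2<4096>::value];   // 12 bytes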

But since template metaprogramming was not deliberately built into C++ but rather discovered by accident, template metaprogramming code is far from pleasant to look at. If you are lucky and your compiler supports C++11 (or rather C++11’s constexpr feature), you have a better option:
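A sketch of the C++11 constexpr counterpart (again my own variant):

    constexpr unsigned static_log2(unsigned n)
    {
        return n <= 1 ? 0 : 1 + static_log2(n / 2);
    }

    static_assert(static_log2(4096) == 12, "compile-time check");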

It’s still recursive, but at least this solution is using real functions and not structs to achieve its goal — much easier on the eyes!

But what if you code at the other end of the spectrum? What if you are limited to plain C?

Many years ago, I discovered the following technique that has proven useful in quite a few situations; it is used like this:
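Roughly like this (the macro names are the ones described below; the argument value is just an example):

    #define STATIC_LOG2_ARG 4096
    #include "static_log2.h"

    /* STATIC_LOG2_VALUE now expands to 12 */
    enum { BUFFER_BITS = STATIC_LOG2_VALUE };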

The “argument” is passed via a symbolic constant (STATIC_LOG2_ARG), the computation is done by “calling the function” (by including static_log2.h) and the “return value” is stored in another symbolic constant (STATIC_LOG2_VALUE).

Here’s an excerpt of what’s contained in the static_log2.h header file:
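The unrolled idea looks roughly like this (a sketch, not the original file):

    /* static_log2.h (sketch): one branch per possible result */
    #if (STATIC_LOG2_ARG) < 2
    #  define STATIC_LOG2_VALUE 0
    #elif (STATIC_LOG2_ARG) < 4
    #  define STATIC_LOG2_VALUE 1
    #elif (STATIC_LOG2_ARG) < 8
    #  define STATIC_LOG2_VALUE 2
    #elif (STATIC_LOG2_ARG) < 16
    #  define STATIC_LOG2_VALUE 3
    /* ... continued up to the width of the largest supported argument ... */
    #endif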

In the C++ examples, iteration is done using recursion, but here everything is unrolled/inlined.

For another case where this approach is employed, check out this post. You probably don’t need this technique very often, but it’s good to have it in your bag of macro tricks.

Circular Adventures VI: When the Winner is Known Beforehand

“Girls have an unfair advantage over men: if they can’t get what they want by being smart, they can get it by being dumb.”
— Yul Brynner

In parts III and IV I discussed solutions for a circular problem where two indices performed a race within a circular buffer; that is, either index could be ahead of the other.

Sometimes, however, life is simpler and it is always known which index is the leader, and thus the winner — from the outset:

In this example, provided we know that b must always be ahead of a, we can deduce that b has wrapped around and the distance between a and b is 4.

Either implementation (the one given in part III and the one given in part IV) of circular_distance will return the same result:

However, both will fail for this case:

Why? Under the premise that b is always ahead of a, the distance is +6, not -4. circular_distance computes the wrong result because it assumes that the leading index is less than half the circular buffer size ahead of the other index. This assumption was made (I called it ‘invariant’ at the time) to be able to compute the circular distance even if it is not known which index is ahead of which.

Based on the prerequisite that b is always ahead of a we can give a simplified version of the circular_distance function from part III:
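A sketch of such a function, assuming indices in the range 0 to N-1 (the original presumably builds on the mod helper from part III, which I don’t repeat here):

    /* how far b is ahead of a on a circular buffer of size N; result is in [0, N) */
    int circular_lead(int a, int b, int N)
    {
        return ((b - a) % N + N) % N;
    }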

I call this function circular_lead instead of circular_distance to emphasize that it returns how much b is ahead of a, which is always a non-negative number. As usual, all the restrictions (and optimizations) regarding the mod operator apply.

In situations where one index is known to be ahead of the other, circular_lead has an edge over circular_distance because it supports distances in the range 0 to N-1, whereas circular_distance only supports distances from 0 to (N-1)/2. This is always the case in “monotonically increasing” scenarios, like run-time measurement, at least until the flux capacitor is invented. Hence, the last example of part IV can be rewritten like this:

If we replace the mod operator with a cast to uint32_t, circular_lead_32bit boils down to this:
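Something like this (sketch):

    #include <stdint.h>

    /* lead of b over a; the wrap-around at 2^32 is handled by unsigned arithmetic */
    uint32_t circular_lead_32bit(uint32_t a, uint32_t b)
    {
        return (uint32_t)(b - a);
    }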

[For the mathematically inclined: What we are really talking about here is residue class rings, a concept used by all contemporary computers. I might explore the number theory behind this and other circular topics in a future post.]

More circular adventures…