Finding and killing latent bugs in embedded software is a difficult business. Heroic efforts and expensive tools are often required to trace backward from an observed crash, hang, or other unplanned run-time behavior to the root cause. In the worst-case scenario, the root cause damages the code or data in such a subtle way that the system still appears to work fine (or mostly fine) for some time before the malfunction.
Too often engineers give up trying to discover the cause of infrequent anomalies--because they can't be easily reproduced in the lab--dismissing them as "user errors" or "glitches." Yet these ghosts in the machine live on.
So here's a guide to the most frequent root causes of difficult-to-reproduce firmware bugs.
#5: Heap Fragmentation
Dynamic memory allocation is not widely used by embedded software developers—and for good reasons. One of those is the problem of fragmentation of the heap.
All data structures created via C’s malloc() standard library routine or C++’s new keyword live on the heap. The heap is a specific area in RAM of a pre-determined maximum size. Initially, each allocation from the heap reduces the amount of remaining “free” space by the same number of bytes. For example, the heap in a particular system might span 10 KB starting from address 0x20200000. An allocation of a pair of 4-KB data structures would leave 2 KB of free space.
The storage for data structures that are no longer needed can be returned to the heap by a call to free() or use of the delete keyword. In theory this makes that storage space available for reuse during subsequent allocations. But the order of allocations and deletions is generally at least pseudo-random—leading the heap to become a mess of smaller fragments.
To see how fragmentation can be a problem, consider what would happen if the first of the two 4-KB data structures above is freed. Now the heap consists of one 4-KB free chunk and another 2-KB free chunk; they are not adjacent and cannot be combined. So our heap is already fragmented. Despite 6 KB of total free space, allocations of more than 4 KB will fail.
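To make the scenario concrete, here is a small sketch using the standard C allocator. The sizes mirror the hypothetical 10-KB heap above (ignoring per-block allocator overhead); on a system with a larger heap the final request would of course succeed, so read the comments as describing that constrained heap.

#include <stdlib.h>

void fragmentation_demo(void)
{
    void *a = malloc(4 * 1024);   /* 4 KB in use, 6 KB of the 10-KB heap free */
    void *b = malloc(4 * 1024);   /* 8 KB in use, 2 KB free */

    free(a);                      /* 6 KB free in total: a 4-KB hole plus 2 KB at the end */

    void *c = malloc(5 * 1024);   /* fails in the 10-KB heap: no single free chunk is 5 KB */

    /* c == NULL even though 6 KB of the heap is free in total. */
    free(b);
    free(c);                      /* free(NULL) is a harmless no-op */
}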
Fragmentation is similar to entropy: both increase over time. In a long-running system (i.e., most every embedded system ever created), fragmentation may eventually cause some allocation requests to fail. And what then? How should your firmware handle the case of a failed heap allocation request?
Best Practice: Avoiding all use of the heap is a sure way of preventing this bug. But if dynamic memory allocation is either necessary or convenient in your system, there is an alternative way of structuring the heap that will prevent fragmentation. The key observation is that the problem is caused by variable-sized requests.
If all of the requests were of the same size, then any free block is as good as any other—even if it happens not to be adjacent to any of the other free blocks. Thus it is possible to use multiple “heaps”—each for allocation requests of a specific size—using a “memory pool” data structure.
If you like, you can write your own fixed-size memory pool API. You’ll just need three functions (a minimal sketch follows the list):
- handle = pool_create(block_size, num_blocks) – to create a new pool (of num_blocks chunks of block_size bytes each);
- p_block = pool_alloc(handle) – to allocate one chunk (from a specified pool); and
- pool_free(handle, p_block) – to return one chunk (to its pool).
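Here is one minimal sketch of such a pool, built on a singly linked free list threaded through the unused blocks. Only the three function names come from the list above; the internals, the handle type, and the single start-up allocation of the pool’s storage are illustrative assumptions (the storage could just as easily be a static array). No locking is shown, so guard these calls with a critical section in a multitasking system.

#include <stdlib.h>
#include <stddef.h>

typedef struct pool
{
    void *free_list;    /* head of the list of free (unallocated) blocks */
} pool_t;

typedef pool_t *pool_handle;

pool_handle pool_create(size_t block_size, size_t num_blocks)
{
    /* Round the block size up so each free block can hold an aligned pointer. */
    if (block_size < sizeof(void *))
    {
        block_size = sizeof(void *);
    }
    block_size = (block_size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);

    /* One allocation at start-up; after this, the heap proper is never touched. */
    pool_t *pool = malloc(sizeof(*pool) + (block_size * num_blocks));
    if (pool == NULL)
    {
        return NULL;
    }

    /* Thread the free list through the blocks that follow the pool header. */
    unsigned char *block = (unsigned char *) (pool + 1);
    pool->free_list = NULL;
    for (size_t i = 0; i < num_blocks; i++)
    {
        *(void **) block = pool->free_list;
        pool->free_list = block;
        block += block_size;
    }
    return pool;
}

void *pool_alloc(pool_handle handle)
{
    void *block = handle->free_list;
    if (block != NULL)
    {
        handle->free_list = *(void **) block;   /* pop the head of the free list */
    }
    return block;    /* NULL if the pool is exhausted */
}

void pool_free(pool_handle handle, void *p_block)
{
    *(void **) p_block = handle->free_list;     /* push the block back onto the list */
    handle->free_list = p_block;
}

Because every block in a given pool is the same size, any free block satisfies any request from that pool, and fragmentation cannot occur.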
But note that many real-time operating systems (RTOSes) feature a fixed-size memory pool API. If you have access to one of those, use it instead of the compiler’s malloc() and free() or your own implementation.
#4: Stack Overflow
Every programmer knows that a stack overflow is a Very Bad Thing™. The effect of each stack overflow varies, though. The nature of the damage and the timing of the misbehavior depend entirely on which data or instructions are clobbered and how they are used. Importantly, the length of time between a stack overflow and its negative effects on the system depends on how long it is before the clobbered bits are used.
Unfortunately, stack overflow afflicts embedded systems far more often than it does desktop computers. This is for several reasons, including:
- embedded systems usually have to get by on a smaller amount of RAM;
- there is typically no virtual memory to fall back on (because there is no disk);
- firmware designs based on RTOS tasks utilize multiple stacks (one per task), each of which must be sized sufficiently to handle that task’s unique worst-case stack depth; and
- interrupt handlers may try to use those same stacks.
Further complicating this issue, there is no amount of testing that can ensure that a particular stack is sufficiently large. You can test your system under all sorts of loading conditions, but you can only test it for so long. A stack overflow that only occurs “once in a blue moon” may not be witnessed by tests that run for only “half a blue moon.” Demonstrating that a stack overflow will never occur can, given certain algorithmic limitations (such as no recursion), be done with a top-down analysis of the control flow of the code. But a top-down analysis will need to be redone every time the code is changed.
Best Practice: On startup, paint an unlikely memory pattern throughout the stack(s). (I like to use hex 23 3D 3D 23, which looks like a fence ‘#==#’ in an ASCII memory dump.) At runtime, have a supervisor task periodically check that none of the paint above some pre-established high water mark has been changed. If something is found to be amiss with a stack, log the specific error (e.g., which stack and how high the flood) in non-volatile memory and do something safe for users of the product (e.g., controlled shut down or reset) before a true overflow can occur. This is a nice additional safety feature to add to the watchdog task.
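A minimal sketch of that approach follows. It assumes a full-descending stack (so the guard region sits at the lowest addresses) and word-aligned stack bounds; all of the names are hypothetical, and the real start address and size of each stack come from your linker script or RTOS task-creation call.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define STACK_PAINT_WORD    0x233D3D23u    /* '#==#' in an ASCII memory dump */

/* Call once at start-up, before the task that owns this stack begins running. */
void stack_paint(uint32_t *stack_lowest, size_t stack_words)
{
    for (size_t i = 0; i < stack_words; i++)
    {
        stack_lowest[i] = STACK_PAINT_WORD;
    }
}

/* Call periodically from a supervisor (e.g., watchdog) task. Returns true if
 * the paint beyond the pre-established high-water mark is still intact; false
 * if the stack has grown dangerously deep, in which case the error should be
 * logged and a controlled shutdown or reset initiated. */
bool stack_check(uint32_t const *stack_lowest, size_t guard_words)
{
    for (size_t i = 0; i < guard_words; i++)
    {
        if (stack_lowest[i] != STACK_PAINT_WORD)
        {
            return false;   /* paint overwritten: too close to overflow */
        }
    }
    return true;
}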
#3: Missing 'volatile' Keyword
Failure to tag certain types of variables with C’s ‘volatile’ keyword can cause a number of symptoms in a system that works properly only when the compiler’s optimizer is set to a low level or disabled. The volatile qualifier is used during variable declarations, where its purpose is to prevent optimization of the reads and writes of that variable.
For example, if you write code that says:
g_alarm = ALARM_ON; // Patient dying--get nurse!
// Other code; with no reads of g_alarm state.
g_alarm = ALARM_OFF; // Patient stable.
the optimizer will generally try to make your program both faster and smaller by eliminating the first line above–to the detriment of the patient. However, if g_alarm is declared as volatile, this optimization will not take place.
Best Practice: The ‘volatile’ keyword should be used to declare any of the following (example declarations appear after the list):
- global variable shared by an ISR and any other code;
- global variable accessed by two or more RTOS tasks (even when race conditions in those accesses have been prevented);
- pointer to a memory-mapped peripheral register (or register set); or
- delay loop counter.
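Declarations covering these four cases might look like the sketch below. The variable names and the register address are made up for illustration; only the placement of ‘volatile’ in each declaration is the point.

#include <stdbool.h>
#include <stdint.h>

/* 1. Global flag shared by an ISR and other code. */
volatile bool g_motion_detected;

/* 2. Global counter accessed by two or more RTOS tasks. */
volatile uint32_t g_packets_received;

/* 3. Pointer to a memory-mapped peripheral register (the address 0x40021000
 *    is fictitious; take the real one from your part's datasheet). */
uint32_t volatile * const p_timer_ctrl = (uint32_t volatile *) 0x40021000u;

/* 4. Delay-loop counter: without volatile, the optimizer may delete the
 *    empty loop entirely. */
void delay_short(void)
{
    for (volatile uint32_t count = 0; count < 10000u; count++)
    {
        /* busy-wait */
    }
}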
Note that in addition to ensuring all reads and writes take place for a given variable, the use of volatile also constrains the compiler by adding “sequence points”: accesses to multiple volatiles must be executed in the order they are written in the code.
#2: Race Condition
A race condition is any situation in which the combined outcome of two or more threads of execution (which can be either RTOS tasks or main() plus an ISR) varies depending on the precise order in which the instructions of each are interleaved.
For example, suppose you have two threads of execution in which one regularly increments a global variable (g_counter += 1;) and the other occasionally resets it (g_counter = 0;). There is a race condition here if the increment cannot always be executed atomically (i.e., in a single instruction cycle). A collision between the two updates of the counter variable may never or only very rarely occur. But when it does, the counter will not actually be reset in memory; its value is henceforth corrupt. The effect of this may have serious consequences for the system, though perhaps not until a long time after the actual collision.
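To see where the race window opens, note that the single statement g_counter += 1; typically compiles to a separate load, modify, and store, roughly equivalent to the sketch below (the expanded function is illustrative, not output from any particular compiler).

#include <stdint.h>

extern volatile uint32_t g_counter;

void counter_increment_expanded(void)
{
    uint32_t temp = g_counter;   /* 1. load the shared value into a register */
    temp = temp + 1u;            /* 2. modify the private copy */
    /* If the other thread executes "g_counter = 0;" at this point, the
     * store below silently overwrites (and thereby loses) that reset. */
    g_counter = temp;            /* 3. store the stale result back */
}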
Best Practice: Race conditions can be prevented by surrounding the “critical sections” of code that must be executed atomically with an appropriate preemption-limiting pair of behaviors. To prevent a race condition involving an ISR, at least one interrupt signal must be disabled for the duration of the other code’s critical section. In the case of a race between RTOS tasks, the best practice is the creation of a mutex specific to that shared object, which each task must acquire before entering the critical section. Note that it is not a good idea to rely on the capabilities of a specific CPU to ensure atomicity, as that only prevents the race condition until a change of compiler or CPU.
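The two idioms might be sketched as follows. DISABLE_INTERRUPTS(), ENABLE_INTERRUPTS(), and the mutex calls are placeholders for whatever your compiler intrinsics and RTOS actually provide; substitute the real names.

#include <stdint.h>

/* Placeholder primitives; substitute your compiler's interrupt-control
 * intrinsics and your RTOS's mutex API. */
void DISABLE_INTERRUPTS(void);
void ENABLE_INTERRUPTS(void);
typedef struct mutex mutex_t;
void mutex_lock(mutex_t *p_mutex);
void mutex_unlock(mutex_t *p_mutex);

static volatile uint32_t g_counter;
extern mutex_t *g_p_counter_mutex;

/* Case 1: the other thread of execution is an ISR. Briefly masking the
 * interrupt makes the reset atomic with respect to that ISR. */
void counter_reset_vs_isr(void)
{
    DISABLE_INTERRUPTS();
    g_counter = 0;
    ENABLE_INTERRUPTS();
}

/* Case 2: both threads of execution are RTOS tasks. Every access to the
 * shared counter is bracketed by the same dedicated mutex. */
void counter_increment(void)
{
    mutex_lock(g_p_counter_mutex);
    g_counter += 1;
    mutex_unlock(g_p_counter_mutex);
}

void counter_reset(void)
{
    mutex_lock(g_p_counter_mutex);
    g_counter = 0;
    mutex_unlock(g_p_counter_mutex);
}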
Shared data and the random timing of preemption are the culprits that cause race conditions. But the error might not occur on every run, making it incredibly difficult to trace such bugs from symptoms back to root causes. It is, therefore, important to be ever-vigilant about protecting all shared objects.
Best Practice: Name all potentially shared objects—including global variables, heap objects, or peripheral registers and pointers to the same—in a way that the risk is immediately obvious to every future reader of the code. Barr Group's Embedded C Coding Standard advocates the use of a ‘g_‘ prefix for this purpose.
Locating all potentially shared objects is the first step in a code audit for race conditions.
#1: Non-Reentrant Function
Technically, the problem of a non-reentrant function is a special case of the problem of a race condition. For that reason, the run-time errors caused by a non-reentrant function are similar and also don’t occur in a reproducible way—making them just as hard to debug. Unfortunately, a non-reentrant function is also more difficult to spot in a code review than other types of race conditions.
Figure 1 shows a typical scenario. Here the software entities subject to preemption are RTOS tasks. But rather than manipulating a shared object directly, they do so by way of function call indirection. For example, suppose that Task A calls a sockets-layer protocol function, which calls a TCP-layer protocol function, which calls an IP-layer protocol function, which calls an Ethernet driver. In order for the system to behave reliably, all of these functions must be reentrant.
Figure 1. Shared resources in a TCP/IP protocol stack
But the functions of the driver module manipulate the same global object in the form of the registers of the Ethernet Controller chip. If preemption is permitted during these register manipulations, Task B may preempt Task A after the Packet A data has been queued but before the transmit is begun. Then Task B calls the sockets-layer function, which calls the TCP-layer function, which calls the IP-layer function, which calls the Ethernet driver, which queues and transmits Packet B. When control of the CPU returns to Task A, it finally requests its transmission. Depending on the design of the Ethernet controller chip, this may either retransmit Packet B or generate an error. Either way, Packet A’s data is lost and does not go out onto the network.
In order for the functions of this Ethernet driver to be callable from multiple RTOS tasks (near-)simultaneously, those functions must be made reentrant. If each function uses only stack variables, there is nothing to do; each RTOS task has its own private stack. But drivers and some other functions will be non-reentrant unless carefully designed.
The key to making functions reentrant is to suspend preemption around all accesses of peripheral registers, global variables (including static local variables), persistent heap objects, and shared memory areas. This can be done either by disabling one or more interrupts or by acquiring and releasing a mutex; the specifics of the type of shared data usually dictate the best solution.
Best Practice: Create and hide a mutex within each library or driver module that is not intrinsically reentrant. Make acquisition of this mutex a pre-condition for the manipulation of any persistent data or shared registers used within the module as a whole. For example, the same mutex may be used to prevent race conditions involving both the Ethernet controller registers and a global (or static local) packet counter. All functions in the module that access this data must follow the protocol to acquire the mutex before manipulating these objects.
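A sketch of this hidden-mutex pattern appears below. The mutex_*() and eth_hw_*() names are placeholders (the real calls come from your RTOS and your Ethernet controller's register interface); the structure, not the names, is the point.

#include <stddef.h>
#include <stdint.h>

/* Placeholder RTOS and hardware-access calls; substitute the real ones. */
typedef struct mutex mutex_t;
mutex_t *mutex_create(void);
void mutex_lock(mutex_t *p_mutex);
void mutex_unlock(mutex_t *p_mutex);
void eth_hw_queue(void const *p_data, size_t len);
void eth_hw_transmit(void);

/* The mutex is hidden inside the driver module, along with the persistent
 * data and the register accesses it protects. */
static mutex_t *p_eth_mutex;
static uint32_t packets_sent;

void eth_driver_init(void)
{
    p_eth_mutex = mutex_create();   /* create and hide the module's mutex */
    packets_sent = 0;
}

/* Every externally callable function follows the same protocol: acquire the
 * module's mutex before touching the controller's registers or the module's
 * persistent data. That makes the function callable from any task. */
void eth_send(void const *p_data, size_t len)
{
    mutex_lock(p_eth_mutex);
    eth_hw_queue(p_data, len);      /* program the controller's registers */
    eth_hw_transmit();
    packets_sent++;
    mutex_unlock(p_eth_mutex);
}

uint32_t eth_get_packet_count(void)
{
    mutex_lock(p_eth_mutex);
    uint32_t const count = packets_sent;
    mutex_unlock(p_eth_mutex);
    return count;
}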
Beware that non-reentrant functions may come into your code base as part of third party middleware, legacy code, or device drivers. Disturbingly, non-reentrant functions may even be part of the standard C or C++ library provided with your compiler. For example, if you are using the GNU compiler to build RTOS-based applications, take note that you should be using the reentrant “newlib” standard C library rather than the default.