Embedded systems engineers face many memory-related challenges, especially constraints on space and cost.
Barr Group Principal Engineer Salomon Singer discusses the challenges faced by embedded engineers when dealing with memory, memory allocation (malloc), and dynamic memory, and steps that can be taken to solve these issues.
Andrew Girson: Hi, I’m here today with Salomon Singer, Principal Engineer at Barr Group and we’re here today to talk about memory and memory allocation in embedded systems. So, Salomon, obviously embedded systems are constrained. Why do embedded software engineers need to be so careful with the use of memory in general in their systems?
Salomon Singer: Right. That is a very good question. Unlike your typical PC that has several gigs of memory
Salomon: – your typical embedded system has maybe only a few kilobytes, mostly due to space and cost. And so, you have to be very careful how you use that memory. You don’t want to allocate memory willy-nilly, because you run the risk of running out of memory.
Andrew: Okay. And so, dynamic memory specifically is an area I have heard you speak about in somewhat negative terms where embedded systems are concerned. So, what is it about dynamic memory that embedded software engineers need to be extra concerned about?
Salomon: Right. So, there are four big issues from my point of view. The first one is a memory leak.
Salomon: A memory leak is a situation where you allocate a chunk of memory, but for some reason - maybe because you forgot or maybe because there was a bug - the memory is not being freed and returned to the system.
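A minimal sketch of the leak pattern Salomon describes, in C; the function name and buffer size are illustrative, not from the interview:

```c
#include <stdlib.h>

/* Hypothetical message handler. The bug is the early return that
 * skips free(): every call on that path leaks 128 bytes, and a
 * long-running system will eventually exhaust its heap. */
int handle_message(int fail)
{
    char *buf = malloc(128);   /* chunk allocated from the heap */
    if (buf == NULL)
        return -1;
    if (fail)
        return -1;             /* BUG: leaks buf on this path */
    free(buf);                 /* only the success path frees */
    return 0;
}
```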
Salomon: That obviously will cause tremendous problems, because sooner or later, regardless of the size of the leak, you will eventually run out of memory. So, if this is a system that needs to be up for a long time, this will be a mega problem. Problem number two is that you now need to be able to handle out-of-memory conditions. And so, every time you do a malloc, you need to test for a null pointer before you dereference the pointer. Otherwise, again, bad things will happen. On some processors, it might cause a reset; on other processors, it might trigger an exception. Either way, not a good thing to happen. Problem number three is fragmentation. Due to the fact that you’re allocating and freeing chunks of memory of different sizes, you create memory that looks a little bit like Swiss cheese.
Salomon: Like there are plenty of holes. And so, there are chunks of memory that are in use and other chunks that are free. And it is entirely possible that you want a chunk that is 2K big, and also that if you added up all the free memory that you have, it amounts to 4K, yet there is no single 2K chunk to satisfy –
Andrew: So, it cannot give you a chunk.
Salomon: Exactly, exactly. And the fourth problem is actually a problem created by fragmentation itself: the fact that you cannot be certain how long malloc is going to take. The more fragmentation you have and the bigger your heap is, the more time malloc is going to take to return to you.
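Problem two above - handling out-of-memory conditions - can be sketched in a few lines of C. The function name is an assumption for illustration; the point is the null check before the pointer is dereferenced:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative helper: every malloc() result must be tested before it
 * is dereferenced. Dereferencing NULL may reset the CPU or trigger an
 * exception, depending on the processor. */
char *dup_string(const char *src)
{
    char *copy = malloc(strlen(src) + 1);
    if (copy == NULL)   /* out of memory: report it, don't crash */
        return NULL;
    strcpy(copy, src);
    return copy;
}
```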
Andrew: So if you have a real-time system with hard deadlines, using malloc in your tasks at runtime can create unpredictable timing and unpredictable behavior.
Salomon: In fact, you couldn’t use malloc, because RMA depends on you being able to measure the worst-case execution time, and because you don’t know how long malloc is going to take, you cannot measure your worst-case execution time.
Andrew: Okay. Well, that is important. But obviously dynamic memory serves a purpose. Sometimes you don’t know upfront, at the beginning of your development, how much memory you are going to need when the program is running. So dynamic memory allocation matters. So, what are the other ways that embedded software engineers can use memory to get the same capability - not necessarily allocating memory at runtime, but still getting the advantages of it?
Salomon: Right. So, as you pointed out, dynamic memory allocation is used because you have no idea a priori how much memory you’re going to need.
Salomon: And so, you’re allocating on the go. And so, the way to solve the majority of the problems created by malloc is to create a pool of buffers - or multiple pools of buffers - of the same size.
Salomon: And because all the buffers are the same size, fragmentation is no longer a problem.
Salomon: You still have to worry about memory leaks and you still have to worry about out-of-memory conditions. But the fragmentation and the time that malloc takes to return are no longer a problem.
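A minimal sketch of the fixed-size buffer pool Salomon describes; the pool size, buffer size, and function names are assumptions for illustration. Because every buffer is the same size, there is no fragmentation, and both allocation and freeing are constant-time list operations:

```c
#include <stddef.h>

#define POOL_BUFS 8    /* number of buffers in the pool (assumed) */
#define BUF_SIZE  64   /* fixed size of every buffer (assumed)    */

static union buf {
    union buf *next;              /* free-list link while unused */
    unsigned char data[BUF_SIZE]; /* payload while allocated     */
} pool[POOL_BUFS];

static union buf *free_list;

/* Link every buffer into the free list once, at startup. */
void pool_init(void)
{
    for (int i = 0; i < POOL_BUFS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[POOL_BUFS - 1].next = NULL;
    free_list = &pool[0];
}

/* O(1): pop the head of the free list, or NULL if exhausted. */
void *pool_alloc(void)
{
    union buf *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;
}

/* O(1): push the buffer back onto the free list. */
void pool_free(void *p)
{
    union buf *b = p;
    b->next = free_list;
    free_list = b;
}
```

Callers still have to check for a NULL return, since a pool can run out, but allocation time is now deterministic, so worst-case execution time can be measured for analyses like RMA.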
Andrew: And we cover aspects of these issues and how to allocate memory and how to use memory in the Embedded Software Boot Camp, correct?
Salomon: I spend a fair amount of time during the boot camp talking about the problem and the solution to the problem.
Andrew: And I also know we have white papers on our website. So hopefully engineers can read about all of that and get what they need.
Andrew: Okay, thank you, Salomon.
Salomon: You’re welcome.