Barr Group's 2016 Embedded Systems Safety & Security Survey had participation from more than 2,500 embedded systems design engineers and revealed a number of interesting trends in the embedded industry. In this webinar, Barr Group executives Michael Barr (CTO) and Andrew Girson (CEO) discussed some of the most interesting findings.

Slide 1: 2016 Embedded Systems Safety & Security Survey

Andrew: Good afternoon. I’m Andrew Girson, CEO of Barr Group, and I want to thank you for joining us for the survey results webinar [Indiscernible] [0:00:18] for today. I’m sitting here with Michael Barr, our CTO. In addition to good afternoon, I want to say good evening to our friends in Europe and Asia, and good morning to our friends on the west coast of North America. We’ve got a fairly big audience here today, and so we’re really excited to present these results to you.

Today’s webinar will go on for approximately an hour; we have a lot of content to go over, and we will have a Q&A session at the end. We would appreciate it if you would wait to submit your questions until we reach the point in the webinar where we start the Q&A session; otherwise, we’ll be getting a tremendous number of questions all during the webinar itself. Please make sure your other programs are closed so they don't affect the audio and video feeds for this webinar. So let’s go ahead and get started. Just switching slides, here. And we’ll get going.

Slide 2: About Barr Group

Okay. First, a little bit about Barr Group. As many of you know, we’re all about training and services around creating safer, more reliable, more secure embedded systems. You see our website there, I’m not going to spend a lot of time doing advertising for our company. I think many of you know who we are and what we do. If you have any questions, certainly let us know.

Slide 3: Webinar Format

First, just kind of the format that we’re going through today. We have a few slides on the methodology. A survey’s only as good as the methodology and the demographics. And so we thought it was appropriate to spend a few minutes up front going through our methodology, and we will describe that to you. We’ll also give you some background on the demographics of the people that filled out the survey that will help as we look at the results to understand better what people are saying and why they're saying it. Then, we’ll get into the meat of the presentation, which will be the bulk of the hour.

We’ll focus on a number of key findings, we have eight key findings and a number of supplemental findings. But we’ll focus on key findings related to safety, security, and then we also have some general industry data that we want to share with you that we think you’ll find interesting. Finally, for those of you that took the survey, which was quite a few, we have a prize drawing that we’re going to announce at the end of the webinar for the logic analyzers and the gift card. And finally, we’ll do Q&A at the very end. And at this point, I’m going to turn it over to Mike.

Slide 4: Survey Goals

Mike: Thank you, Andrew. This is Michael Barr, the CTO of Barr Group. I’d like to talk briefly about the survey goals and methodology before we get into the findings. This is our second annual survey, and we anticipate doing one each year at the beginning of the year going forward. The purpose of this was primarily to deepen our understanding of the market of embedded systems developers, particularly as it relates to safety and security trends and practices. And by this, not only to improve our company’s understanding of the market, but also to help to improve the embedded systems industry as a whole.

So this year’s survey is a little bit of a deeper dive than last year on the subjects of safety and reliability, as well as security. You’ll find, if you compare the results of our survey, as I have, to other industry surveys, that we have similar demographics in the types of engineers taking it, which strongly suggests we’re reaching the same audiences. But a lot of those other surveys are put on by magazines and others and are more vendor-focused. For example, which brand of RTOS are you using, or which model and brand name of ARM chip are you using? And that’s not really what we were interested in. We were interested in the processes, and trends, and practices that revolve around and affect the safety, and reliability, and security of embedded devices.

Slide 5: Survey Methodology

Our methodology here is pretty simple. We had a web-based survey; we used SurveyMonkey, which is a well-known platform for web-based surveys. The survey was designed to take less than five minutes to complete. It had about 30 questions. It was open for about a month, from January 6th to February 8th of this year. And it was accessible via a specific URL, so it wasn’t something that the general public would come across. Rather, it was provided as a link in over 300,000 targeted email invitations. And by that, I mean our own mailing list, as well as partner mailing lists, to reach engineers who are designing embedded systems, specifically. We also promoted it on our website, and in my blog, and on Twitter, and LinkedIn. But generally, most of the traffic came through those targeted emails from ourselves and partners. We did have, as an incentive to participate, a small prize drawing: two USB logic analyzers from Saleae and a triplet of Amazon gift cards. And we’ll talk more about that at the end.

Slide 6: Worldwide Response

The response that we received was overwhelming. In all, we had nearly 3,000 completed responses. We actually had many more responses than that, but some people abandoned the survey or, for other reasons, did not complete it, so we filtered those out first. Then we looked at where in the world the responses came from. Of the 2,953 completed surveys we received, 46% were from the United States and Canada, a third from Europe, and 11% from Asia. And the rest of the world – South America, Latin America, Africa, Australia, etc. – added up to about 10%.

So we have actually a very good distribution. So for example, with nearly 3,000 completed surveys, and less than half from the United States and Canada, we have almost 1,500 – or over 1,500, actually, complete survey responses from the rest of the world.

Slide 7: Qualifications of Respondents

As part of the analysis, the first thing was to qualify the respondents, so that we would not be looking at people, for example, who had never done any paid design work. Some of the people who completed the survey may know about embedded systems and may be on those targeted mailing lists, but they might be students, for example – even grad students. They might know a thing or two about embedded systems. But what we were really looking for, in terms of our understanding of the market, is not what they were up to, but what paid embedded systems professionals were up to.

So we eliminated 239 of those responses because they said they had zero years of paid design work. There were another 147 that were not directly involved in the design. For example, they were a senior executive in a company, or they were involved with testing, or they identified themselves as an academic. So they might’ve had prior professional experience doing this stuff, but they weren’t currently doing it. They were primarily an academic, for example, or primarily an executive. So we eliminated 147 responses based on that. And then we had an additional number, where they were being paid to do this work and currently involved in designs, but when we asked them to identify a specific current project, which mattered because all the rest of the questions were asked about that, their understanding of the current project was vague. Maybe it was simply too early in the project for them to answer those questions, so we eliminated 150 on that basis.

The result of this, actually, is still remarkable. We had 2,452 completed survey responses from active professional engineers. And with such a high number, and having worked as editor-in-chief of Embedded Systems Programming magazine for many years, I know that it’s nearly double the typical survey response in our industry. And these are all the qualified people who are really doing this stuff. Statistically, that means that if we did this study 100 times, 99 times out of 100 we’d get these same results, within a margin of error on any given question of about 2.5%, plus or minus. And with that, I’m going to turn it back to Andrew to start talking about more of the details.
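As a quick sanity check on that figure (this calculation is my own illustration, not part of the survey itself), the worst-case margin of error for a proportion question can be sketched like this, assuming a simple random sample and the most conservative split of p = 0.5:

```python
import math

# Worst-case margin of error for a proportion question at roughly
# 99% confidence (z ~ 2.576), assuming a simple random sample and
# p = 0.5, the most conservative case.
def margin_of_error(n, z=2.576):
    return z * math.sqrt(0.5 * 0.5 / n)

moe = margin_of_error(2452)
print(f"{moe:.3%}")  # about 2.6%, in line with the quoted ~2.5%
```

With n = 2,452 this comes out to roughly 2.6%, which matches the "about 2.5%, plus or minus" quoted above.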

Slide 8: Qualified Respondent Experience

Andrew: Thanks, Mike. So let’s talk a little bit more about some of the interesting demographic information about our respondents before we get into the findings. A question we asked, to help qualify people, was their years of paid experience – this is experience after college. You can see that we’ve got a fairly healthy distribution throughout the range, with, obviously, a lot of people earlier in their career. The rough estimated average of experience is a little under 16 years. And one thing that we found interesting in this, and you’ll see this throughout the presentation today: because we have such a decent response in multiple regions, whenever we see something that might be a little different from one region to another, we’re going to try and note it. So you can see on the chart on the right that, in terms of our survey, the US is older: the average years of experience for the US is closer to, you know, 18 to 20 years, whereas in Europe, it’s under 15, and in Asia, it’s approximately ten years of experience.

Slide 9: Product Categories

Okay, also on [Indiscernible] [0:10:44], we asked a whole lot of questions about your current project. So as we go into these results, I want you to keep in mind that when we’re asking these questions, we’re asking them as it relates to your current project. And so when they're answering these questions, it wasn’t about something they worked on in the past or something they might work on in the future, we asked them about the project they're working on today. And all analysis applies to the current project. So when you see our findings, and you see results today, you’ll see that.

These are the top sectors that were involved in our responses. The top four are industrial automation, automotive, medical devices, and communications equipment. No real surprises there, but you can see the relative distribution of different segments. Now, speaking of regional differences, this was somewhat interesting: overall, defense and aerospace, as an industry, was number six, but in the US, it was number two. So obviously, in the United States, defense and aerospace play a very large role; in Europe and Asia, relatively speaking, a lesser role. They're obviously still big industries overall, but relatively speaking, the US had a higher incidence there.

Slide 10: (Some) Participating Organizations

I’m not going to go through these companies, but I just wanted to give you a sense of the breadth and the number of different organizations that participated in the survey. You can see many of these organizations had multiple individuals that responded, and of course, there are a lot of companies, this is only a partial list.

Slide 11: Team Sizes and Respondent Roles

All right, let’s look at team size and roles of respondents. In this case, we’re focusing on the software team. And the question, as is noted in the lower left corner, at peak effort, how many people were involved in the project in writing embedded software? And you can see that the teams are relatively small. If you look at the bars for one person teams and teams from two to four people, that easily makes up about 60% of the total respondents.

So software teams are relatively small, comparatively speaking. And if you look at primary design role – now, again, this is the primary role; this isn’t necessarily the only thing that these engineers were doing – almost half were primarily doing software, and another one-fourth were doing a combination of hardware and software as their primary role. So software obviously, no surprise, plays a very large role in modern design of embedded systems.

Slide 12: Finding #1: Safety Risks Abound

Okay, now, let’s get into the meat. Finding number one: safety risks abound. This is a very interesting concept. We asked the respondents, what’s the worst thing that could happen if your embedded system failed from a malfunction? If the system you're designing were to malfunction or fail, what’s the worst thing that could happen? And 28%, well over 500 respondents, noted that an injury, a death, or more than one death could occur. I think the point here is that a lot of us are designing systems that have a safety component to them – that if something went wrong, if something malfunctioned, something bad could happen to real human beings.

This is an important point, and because it was such a high number, we really wanted to dive in, and look, and see, what are these engineers that are designing these safety critical systems, what are they doing to protect the welfare of their users?

Slide 13: Relative Importance of Reliability

So this slide has some more information on that, in terms of the relative importance of reliability. We asked questions related to schedule, and the good thing is that when you ask about the importance of reliability versus schedule, you’ll see – in the top bar chart, and also in the bottom bar chart, which shows trends on recent projects – that in general, reliability is considered as important or more important by a lot of people. That’s a very interesting result, it’s very important, and we’re very glad to see it. But let’s see how that pans out as we go through a few more slides on this and see some of the other findings related to it.

Also, I just see that there was a question that was sent in about the number in the upper right corner. It says the response count is 2,444, and the previous slide noted 2,452. So let me just explain that really quick. As Mike noted at the beginning, we had 2,452 qualified responses. On each slide where we’re showing results, we’re going to show the actual number of respondents to which the question related. There are certain questions that were optional, that could be skipped, or where an answer was not required. So eight of these people decided either not to answer this question or skipped it. Obviously, that’s a very small number, but in the interest of accuracy, we want to note that. So you’ll see that on other slides, too. You’ll see some of the numbers maybe jump around a little, and that’s purely because of that. It’s not statistically significant in this case. Where it is statistically significant, we will note it as appropriate.

Slide 14: Finding #2: Too Much Chance in Safety

All right, finding number two. We talked about how reliability was as important as project schedule, but let’s really dive into that a little. This slide is all about those 543 people, 22% of the qualified respondents, who said that serious injury or death could occur. So this isn’t the full 28% that included any injury; this is a slightly smaller group who said serious injury or death could occur. And I think the point here is that if people could be hurt, if people could die, we’d like to see that best practices, such as coding standards, code reviews, and static analysis, are followed in a universal sense.

And we have three charts here that show, frankly, that’s not the case. On the left, 16% of the respondents said they were designing systems without using coding standards. Moving across to the right, 40% said they were doing either no peer code reviews or just partial code reviews, not comprehensive code reviews. And as much as 30% said they were using no static analysis. These are well-known industry practices that are widely available, widely understood, and frankly, we should be seeing 100% coverage here, because we’re dealing with systems in this subset where people can die. And yet, we’re not seeing it. That, in itself, is a troubling result, one that concerns us.
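To make "static analysis" concrete, here is a toy example of the idea (my own sketch, not a tool from the survey): real static analyzers for embedded C, including MISRA checkers, work on the same principle of flagging risky constructs without ever running the code.

```python
import ast

# A toy static-analysis pass: scan Python source for bare "except:"
# clauses, a classic coding-standard violation that can silently
# swallow every error, including fatal ones.
def find_bare_excepts(source):
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

sample = """
try:
    start_motor()
except:
    pass
"""
print(find_bare_excepts(sample))  # reports the line of the bare except
```

The point is that the check is purely mechanical and cheap to run on every build, which is why the standards treat it as a baseline practice rather than an optional extra.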

Slide 15: Test Plans

Okay. Another slide related to that: we asked respondents to select all the different types of testing they apply. But there’s really just one thing I want to note here. Regression testing means re-running tests across successive versions of your software, to make sure that as you make changes in your system, you're not reintroducing old errors or introducing new ones. This also should be close to 100%, if not 100%. And yet, just 59% are doing regression testing.
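As a small illustration of the idea (the function and values here are hypothetical, not from the survey), a regression test pins down known-good behavior so that a later change which breaks it fails immediately:

```python
# A minimal, hypothetical regression test: record today's verified
# behavior so a future change that reintroduces an old bug fails loudly.
def scale_sensor_reading(raw):
    """Convert a hypothetical 10-bit ADC reading (0..1023) to millivolts."""
    return raw * 3300 // 1023

# Known-good cases recorded from a previous, verified release.
REGRESSION_CASES = [(0, 0), (512, 1651), (1023, 3300)]

for raw, expected_mv in REGRESSION_CASES:
    assert scale_sensor_reading(raw) == expected_mv
print("all regression cases pass")
```

Kept under version control and run on every build, a suite like this is what makes the 59% figure above so striking: the practice costs little and catches exactly the class of reintroduced defects the speaker describes.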

That’s – I’m going to turn it back over to Mike to talk a little bit more about some of the analysis, here.

Slide 16: Analysis Re: “Serious Injury/Death” (S)

Mike: Okay, thanks, Andrew. I just wanted to do a little bit deeper dive on who these 543 people are, the 22% whose current projects, if they malfunctioned, could cause serious injury or death. So one thing to look at is, which industries are they in? They're actually in a slightly different mix of industries than the overall 2,452, with the top four industries accounting, in fact, for nearly 75% of all of these people. And the number one, the largest group, 20% of them, are working in the automotive field, 19% in medical devices, 18% in defense and aerospace (we just didn’t have room to put both there), and then the automation and industrial controls category, which was the largest category overall, is number four here in terms of serious injury and death.

And that makes some sense, right? Where do you find embedded systems that can injure and kill people? You find them around workers in factories, you find them in the defense industry, where obviously, things are dangerous by design, and also, in the related aerospace field, where multiple people could die at once. You find them in the medical device field, of course. Not all products that are medical could seriously injure someone or kill someone, but the ones that are are accounted for, here. And then automotive, as well. So these are just sort of, where are the dangerous embedded systems being designed for, what markets?

A related question we asked was, if the current project you're designing could cause someone to be seriously injured or killed, does it meet relevant safety standards? Standards like, you know, the MISRA standards in the automotive space, for example, or the FDA standards if you're making a medical device. And about two-thirds of this subset, the people whose products could kill or seriously injure one or more people, said yes. Now, we didn’t dive down deep to make sure that they're following the safety standards properly. This is just self-reporting, that they are striving, at least, to meet the relevant safety standards.

But interestingly, 22% of the people whose designs could seriously injure or kill one or more people said, no. So either there is no relevant safety standard for them to meet, or they don't care, they're not bothering, or they're not being forced to. And then interestingly, we had another 11% of these people, something like 60 of them, who didn’t know. They answered that, I don't know. So in my experience, generally, if you're working on a product that’s dangerous, and you're trying to meet a safety standard, you generally do know. So the I don't know here suggests to me that it’s more of the no type of answer. But it could also be that some of those folks were early in the design project, not sure. But ultimately, we see too little of this meeting of safety standards.

Slide 17: Finding #3: Standards Use Low in Auto

Our third finding – and we have eight total findings that we’ll present here today – is related to this. We did a deeper dive, comparing the industries within that subset whose products could seriously injure or kill. And an interesting observation in that data is that those of you who are designing systems for automotive applications are the most likely of those industries to risk multiple lives. And you can see, for example, that versus medical, it’s more than two-to-one: nearly 40% versus about 15% there on the graph.

Yet, the same people who are working in the automotive space are much less likely, below a half instead of around two-thirds, to be following a safety standard. So this either means that there’s not a relevant safety standard that applies or that they're not choosing to follow one.

Slide 18: The Safety Landscape

Just thinking about this for a minute, you’ve got the safety landscape – you know, what are the safety standards? Many of you will be familiar with IEC 61508, or ISO 26262, which is a version of it specifically designed for automotive safety. There are obviously other ISO and IEC standards that apply to different industries and different types of systems. And then there are the MISRA standards for automotive safety. Those are all what you can think of as voluntary standards.

And then in some industries, you have more regulation and oversight. For example, the FDA and the FAA have their own policies and procedures for dangerous systems and dangerous software. One thing we don't see, and I think this accounts for some of the data on the prior slide with automotive designs not following safety standards: it appears to be the case, in our eyes, that there’s not an insistence in the automotive industry – no one is overseeing and insisting on enforcement of a safety standard that applies specifically to automotive. Now, there are voluntary standards in that market, but I think that this may account for some of what we’re seeing, at least with the lesser application of safety standards in the automotive space versus, for example, medical.

Slide 19: Risk Should Dictate Process

Just taking a minute to talk about this more generally: whatever the industry is, if the product has the potential to seriously injure or kill, that is a worst-case risk that should dictate greater process than the design of other types of embedded systems – say, a set-top box or a smart watch. And in fact, if you read these safety standards, the ISOs, and the IECs, and the MISRAs, they generally talk about safety integrity levels. Which is this idea that you look at the worst-case thing that could happen with your product, and then, depending on how many people could be killed – you know, zero, one, several, many, like in a bus or a train – you apply an appropriate safety integrity level, or SIL, to that product. And then, on the basis of that, the standards will actually dictate what things you need to do.

So for example, the MISRA automotive standard – and don’t be confused, by the way: many of you may be familiar with the MISRA C and C++ coding guidelines, but this is a broader software development standard, which I’m referring to here by the acronym MISRA-SW, for MISRA software standard; it’s a mid-‘90s standard that actually predates the coding guidelines – specifically says, for example, that at SIL 2 or higher, code review is a necessary process step, and that at SIL 3 or higher, static analysis is a necessary process step. So what we see in the data that we’ve just gone through is people who are, in fact, at these safety integrity levels who are not letting that risk dictate the process that they're using, as they should.
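The "risk dictates process" idea can be sketched in miniature like this. Note that the table below is my own illustration; only the SIL 2 code-review and SIL 3 static-analysis rules paraphrase the MISRA guidance just mentioned, and the other entries are assumptions for the sake of the example.

```python
# Hypothetical mapping from safety integrity level to required process
# steps. Only the SIL 2 (code review) and SIL 3 (static analysis) rules
# paraphrase the MISRA-SW guidance discussed above; the rest is
# illustrative filler.
REQUIRED_PRACTICES = {
    1: {"coding standard"},
    2: {"coding standard", "code review"},
    3: {"coding standard", "code review", "static analysis"},
    4: {"coding standard", "code review", "static analysis"},
}

def missing_practices(sil, practices_in_use):
    """Return the required practices the project is not yet applying."""
    return REQUIRED_PRACTICES[sil] - set(practices_in_use)

# A SIL 3 project doing reviews but no static analysis has a gap:
print(missing_practices(3, ["coding standard", "code review"]))
```

The survey data above amounts to saying this gap set is non-empty for far too many projects whose worst case is serious injury or death.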

And as it says here, and I actually took this from the MISRA document, safety, like justice, needs to be seen to be present. Now, that means that you should be justifying, in writing, and in planning, and in process, that you are making sure that the system is safe, and what will happen if there is a malfunction, and how to make people safe. You need to bake in that software reliability, that system reliability. You can’t bolt it on later.

Slide 20: Finding #4: Connectivity Also Rising

So now, changing gears and starting to look at security, our fourth finding is that connectivity – connections to the internet, actually – is really on the rise. So this is current projects, right? This is the designs that are going on right now. And by the way, we asked what kinds of products they were, and we found out about half of them were new products, either from scratch or from re-use. The other half were a mix of minor and major upgrades to existing products. So some of these products that are currently being engineered are already out there in the marketplace and being improved, and about half of them are brand-new-to-the-world products.

And overall, across that mix, we found that 60% of respondents said that their current project would be online always or sometimes. And by online – I’m going to drill down into this because I think this is important – we had 2,348 people answer this question. That is, some people said, I don't know. And that’s fair. Not everybody knows at every stage of the project what the situation will be. We didn’t want people to guess, so we offered them an I don't know option. You can see that 104 people or so took that option. But of the rest, 62% will have their product always on the internet or sometimes on the internet. And this is very important, and it’s important that we get this data right. Because this has implications, in particular, for security: when a product that, say, could kill someone is put onto the internet, even if it’s just sometimes, that opens up a new attack vector for a hack that could cause a serious injury or death. So I actually want to drill down into this to see exactly what the data means.

Slide 21: Is Internet-Connectivity That High?

Generally, we’ve been talking about the results and findings of our study. We haven’t been talking about the details – you know, how each question was phrased, and things like that. And I can assure you that we’ve done a careful job of being scientific about how we asked the questions and also about how we’re presenting the findings. But I did want to drill down on this one to make sure that we’ve got something that we really understand.

So one of the questions that we asked was, what kind of external connections – or connections to the outside world, if you prefer – does your current project have? And you could select more than one. And you didn’t have to answer at all; it was phrased, "if you know." And some people didn’t answer at all; they didn’t know. Some people picked other, and they could put in their own freeform answer. But the majority of the answers were one of wired, wireless, line of sight, or bus or backplane. And I can tell you, by the way, that 92% of all the qualified people who took this survey have one or more wired connections between their current project and other devices, and more than 50% have a wireless connection. So think about it: that’s your baseline.

And the next question we asked was, when, if at all, will your current project be connected directly or indirectly to the internet? The choices were never, sometimes, always, or I don't know. That’s the data we were just looking at on the previous slide. So let’s suppose that your product is a garage door opener. For decades, you’ve been making garage door openers, and for some of that time, in recent history, they’ve had software inside them. It is a big deal when you make the transition from having software inside controlling the garage door opener to software inside that connects to a Wi-Fi network in the home, so that the garage door opener is connected to the internet sometimes or always. That creates an opportunity for a hack of your product that you haven’t had to worry about before; it is a new attack surface for hackers. And your product may not have been designed with security in mind – you bolted on Wi-Fi, and now you have a security surface that wasn’t there before.

So whether it’s a direct connection to the internet, or an indirect connection to the internet, a sometimes connection to the internet, or an always connection to the internet, we have nearly two-thirds of current designs that are being connected in some way to the internet some of the time. And that’s a big deal. And that changes the security landscape fundamentally.

Slide 22: External Interface Types

And here’s the data, by the way, on that earlier question, 92% saying they had a wired connection, and nearly 50% having a wireless connection.

Slide 23: Finding #5: Processor Count Rising

Finding number five is that processor counts are also on the rise. So this actually is an interesting question. I hadn’t seen this in any other surveys before; we didn’t ask this last year, and the surveys I’ve reviewed of our industry have never touched on this. Often, you see surveys asking, you know, do you use a processor from ARM, or a processor from Intel, or is it a 32-bit processor, is it a 16-bit processor? The question we asked was a simple one: how many processors does your current project have? And the buckets we offered were one, two or three, or four or more. [Audio cuts out] interesting buckets to me when I was designing the survey. If you just have one processor, it’s pretty straightforward: you write software for one processor. If you have two or three processors, it’s a little bit more complicated. If you have four or more processors, that’s a whole different category. And frankly, I wasn’t expecting that one quarter of all current projects have four or more processors. So that’s a big change and something we’re going to be keeping an eye on as a trend this year versus next year.

I should point out here, though, that because of the way the question was phrased, which includes both microcontrollers and cores in the processor count, this data does not distinguish a design with one main CPU on a chip and three microcontrollers hanging off it, doing various I/O or real-time related work, from another design where there’s a four-core main processor on one single die. So at present, we don't have that data. That’s something we may investigate in future years. We weren’t expecting, frankly, that so many of the designs had so many processors.

Slide 24: Security as Design Consideration

Okay, so now, getting into security: how do we connect all this information about the internet, this information about the number of processors, etc.? What does this mean from a security point of view? Well, of our full set of responses, the 2,452, 61% said that they had security as a design requirement on their current project. That’s 1,459, if you're keeping track, and that’s broken out here on the right. So the majority, and the largest group, are making their current project more secure, or need to make it more secure, than their recent projects. And another large chunk, more than 40%, are putting in about the same amount of security. So basically the whole market is putting in at least as much security as before, and often more security features than before.

Slide 25: No One-Size Fits Security

This leads us to our sixth finding, which is, if we look at those 1,459 who are designing with security in mind, there is no one-size-fits-all way to secure these devices. 19% of them have one processor; 40% of them have four or more processors. You’ll note those numbers are different than they were on the earlier slide. That’s because here, we’re just looking at the subset that have security requirements. On the top right, you see the operating system mix. Slightly more than a third of folks with a security requirement are using a real-time operating system, either a commercial paid operating system or one that was provided by their vendor, such as a chip vendor; 21% are using Linux; 13% are using open source, including open source RTOSes; 12% have no operating system; and 9% wrote their own operating system.

And then some of you have one or more wired interfaces, one or more wireless interfaces, maybe some of each plus a backplane. So how do you secure that, with all these different attack surfaces, different operating systems, different design architectures? The answer is, every security project is unique; every embedded design needs its own custom security. It's not as if you could just say, well, the way we secure things is to download such-and-such open source package for our Linux. Maybe that applies to some of you with Linux, but it doesn't apply to those of you using an RTOS, and vice versa. So unfortunately, what we're finding is that this is a problem that's not going to be easy to solve. We need an increasing amount of security for all these embedded devices. More and more, they're on the internet; more and more, they can injure, hurt, or kill. And yet we have a very complicated landscape.

Slide 26: Primary Security Concerns

One of the interesting observations related to this: we asked, why do you have a security requirement? What are the primary security concerns you're trying to address — what kinds of hacks are you trying to prevent? Well, one thing you might be trying to prevent is someone tampering with your product. Let's say you're making an electronic voting machine, and you want to design it to be tamper-proof or tamper-evident; that could be one security concern. Another concern could be that someone would hack the device and kill your customer — you see injury or death a little further down the list. Or maybe even a blackmail or ransom type of motive. For example, we were motivated to put blackmail and ransom on the list because we read an article about some hackers out of Russia or environs who held a utility for ransom. They had hacked into the utility's connected electricity meters and were able to fudge the data so that the utility would mis-bill its customers. And obviously, if a utility mis-bills all of its customers, it's a big nightmare. So the hackers demanded a ransom from the utility in order not to do that.

Now, what's interesting is, I color-coded these choices. This is a select-all-that-apply question. The blue ones, more or less, correspond to hacks that directly affect the maker of the device — someone steals their IP, hacks into their device, and so on. Whereas the orange ones affect the customer, the user of the device: their health data is exposed, which in the United States is a violation of the HIPAA rules, for example; or they're subject to a denial of service — they can't watch their TV when they want to. Well, it turns out that, in aggregate, more designers care about their self-interest than care about their customers' interests. And when their customers' interests could include injury, death, or blackmail — not all devices have that risk, obviously, so we should expect those numbers to be lower — but overall, when you look at the total mix, you see a lot more concern for self than for customer.

Slide 27: Analysis Re: “Injury/Death” Via Internet

Now, just as we drilled down earlier on the safety issues around devices that could seriously injure or kill, here we're looking at the security issues around devices that can both seriously injure or kill and are on the internet. Which is an interesting subset, right? I put together a couple of different graphs on this one slide. We're talking now about a small subset of the total data: 194 respondents who answered both that they're on the internet some or all of the time and that their product, if it malfunctions, could injure, seriously injure, or kill one or more people.

So you see the mix of injury types in the bottom right for that group, and it leans most heavily toward multiple serious injuries or death. You see the online status in the top right: for this mix, one-quarter are always on the internet and 75% are sometimes on the internet. And then you can see, on the left, which industries have these issues. The top three, by a big margin, are medical — and this makes sense, right? You get a hospital device, for example, or maybe a pacemaker, that is in some way connected to a network, and thereby to the internet. You have industrial and other automation equipment at 18%. And you have automotive, also at 18%, with cars increasingly getting internet connections. So when you put this together, that's an interesting mix.

Slide 28: Finding #7: Best Practices Often Skipped

So now, what can we see about these people — just the people whose products can kill and are on the internet? Again, we find that 50% of them aren't applying coding standards. How coding standards are enforced is a separate issue, which we'll talk about later; but here, 15% don't even have a written coding standard. For code reviews, we have 17% who never do them and 24% who, under certain circumstances, review some of the code; we'll talk more about the code review breakdown later. And for static analysis, we have more than a third not doing static analysis at all. As Andrew said earlier in relation to safety, all three of these are simple, well-known best practices: known to be beneficial, relatively inexpensive, and certainly cost-effective in terms of keeping bugs out and making systems more reliable. And here we have systems that are on the internet and could potentially kill, and people are not following these best practices. That, to me, is a travesty.

Now, I made a note here in the slide wording, and it's important to understand: designing a system to be secure is one challenge, and designing a system to be safe is another. But if you draw the Venn diagram of these two challenges, you find that systems that need to be secure are best built on reliable, safe foundations. If you build a system that's safe and reliable and won't malfunction, then a hacker can't force it to malfunction. That's why there's an overlap between this finding and the earlier finding — I think it was number two — about these best practices being skipped. If you simply follow these best practices, it benefits both safety and security. And they're relatively straightforward to implement.
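To make that safety/security overlap concrete, here is a minimal, hypothetical C sketch — the function and buffer names are ours, not anything from the survey — of the kind of defect that is simultaneously a reliability bug and a security hole, and that a written coding standard, a peer code review, or a static analysis tool would each be likely to catch:

```c
/* Hypothetical example: one defect, two consequences. Copying a
 * network-supplied payload without checking its length can corrupt
 * memory by accident (a safety/reliability bug) or on purpose
 * (a classic buffer-overflow attack). */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CMD_BUF_SIZE 16u

/* Unsafe: trusts the sender's length field completely.
 * Undefined behavior if len > CMD_BUF_SIZE. */
int handle_command_unsafe(const uint8_t *payload, size_t len)
{
    uint8_t cmd_buf[CMD_BUF_SIZE];
    memcpy(cmd_buf, payload, len);   /* overflows when len > 16 */
    return cmd_buf[0];               /* first byte is the command code */
}

/* Safe: validates the input first, as a house coding standard
 * (and most static analyzers) would demand. */
int handle_command_safe(const uint8_t *payload, size_t len)
{
    uint8_t cmd_buf[CMD_BUF_SIZE];
    if ((payload == NULL) || (len == 0u) || (len > CMD_BUF_SIZE)) {
        return -1;                   /* reject malformed input */
    }
    memcpy(cmd_buf, payload, len);
    return cmd_buf[0];
}
```

Rejecting the length up front is exactly the sort of rule a coding standard encodes once and a static analysis tool then enforces on every build, which is why these inexpensive practices pay off for safety and security at the same time.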

Slide 29: Finding #8: Heads in the Security Sand!

All right, now our last finding — though we do have some general industry data that Andrew will go through afterwards. Previously, I was talking about people who said security mattered, who were on the internet, and whose products could kill. This is a different subset: people whose current design could injure or kill, intersected with people who are on the internet always or sometimes. That intersection comes to 291 people, and 20% of them are saying security doesn't matter. This makes no sense. One in five people who is on the internet and could injure or kill with their current product doesn't have security as a design requirement. That's actually a larger number than the group I was concerned about for not applying those best practices, who at least knew that security was a requirement.

Slide 30: General Industry Snapshot!

All right, now, I’m going to turn it back over to Andrew, and he’s going to talk about some general industry data before we do the question and answer.

Andrew: Thanks, Mike. We're coming into the home stretch here. We're going to spend a few minutes, as Mike noted, going through a general industry snapshot on the questions we asked about programming languages, operating systems, tools, and coding standards.

Slide 31: “Primary” Programming Language!

First of all, in terms of programming language, I don't think anybody's going to be particularly surprised here, but well over 70% — about three in four engineers — are using C as their primary programming language. The next largest, by far, is C++ at about 20%. So between C and C++, we're well over 90%. And the "other" slice is a combination of languages like C# and Java. So again, I don't think this is a surprise. One thing I will mention about C++: there's a lot of uncertainty about it in embedded systems — how to use it, what the issues are, the pros and cons. We are going to be doing another free webinar on using C++ in embedded systems; that's our next webinar, in June, and I believe it's up on our website now. So if you're interested in attending, go to our website and sign up for the C++ in embedded systems webinar.

Slide 32: “Primary” Operating System!

Now, as far as primary operating system, again, a significant spread: about one in four, 25%, are using a paid or vendor-supplied — essentially a third-party — RTOS. That second slice, the light blue with the little dash, is basically no operating system: writing code straight to the metal, bare-bones embedded systems without any formal operating system of any kind. Around 19%, about one in five, are using a form of Linux. And then open source — including open source RTOSs and other open source operating systems — is a percentage as well. Very few are apparently using Windows, Android, or any of the other minor options out there.

Slide 33: Tool and Process Adoption Rates

Okay. On tool and process adoption rates, we looked earlier at areas such as stack analysis; now let's look at some other common practices. And this is not limited to a subset — you'll notice the response count is 2,452, so this is everybody. We're looking here at version control, TDD, and defect tracking. Again, we think 100% of people should be using all of these — certainly version control and defect tracking. Yet 9% are not using any form of version control. Interestingly, on test-driven development, a lot of people are doing it: almost two in five.

As you know, at Barr Group, we do have some training courses. So for anybody who is interested in test-driven development, there's a course coming up in a couple of weeks here in Maryland that we're doing with James Grenning. If you're interested, I encourage you to take a look at that on our website. As far as defect tracking, the question was, do you use any sort of "formal defect tracking system?" That does not mean you have to be using Bugzilla; even, say, a spreadsheet counts. Are you using a spreadsheet to track defects? And one in five, 20%, were not doing any formal defect tracking of any kind. That was, frankly, a rather interesting and unfortunate result.

Slide 34: Peer Code Reviews: If/When/How?

Peer code reviews: we talked about code reviews in the context of both safety and security, but let's look at them across the overall response base. As a regular process step, 38% — about two in five, which is significant — are doing them all the time, including via pair programming. But over 50% were doing only partial code reviews or rarely doing them at all. Again, this is across the entire response base, so it includes both safety-critical and non-safety-critical systems. But if you're trying to create devices that are secure and reliable, peer code review is an appropriate and necessary step and can be a very effective way of improving the reliability of your code.

I want to note here that there's a regional difference. You'll notice that in the US and in Asia, always doing code reviews is a little bit higher than in Europe. We found that to be a somewhat interesting result.

Slide 35: Coding Standards

Now, let's switch to coding standards. Again, we talked about coding standards in the context of safety and security; here, we're looking at the entire response base. About two-thirds are using a written standard. So coding standards are a common practice, but perhaps not common enough, given their value for creating more maintainable, more reliable, more readable code. We also looked at the basis of the coding standards being used. About half of the people using a coding standard had developed their own. The MISRA standard also enjoys a lot of support — about one-third of respondents. Barr Group has, as many of you know, our own quality-focused coding standard, which is available to everyone, and we see a certain number of people use that. And then there are a number of other standards used at various levels.

Another regional difference: MISRA-based standards seem to be a lot more popular in Europe and Asia than in the US. Obviously, there are a lot of automakers in Europe, and MISRA itself is based in Europe, if I remember correctly, which may have something to do with it as well. But that's just a regional difference we wanted to point out.

Slide 36: Coding Standard Enforcement

All right, let's look at coding standard enforcement. A coding standard is only as good as its enforcement — how you ensure it actually gets followed — and there are static analysis and other tools that can enforce coding standards in a partly or fully automated fashion. You can see that about 37% are fully automated and about 23% partly automated. A significant portion — about two in five — enforce through code reviews. About 29% rely on voluntary compliance, which means you're basically on your honor to follow the standard. And a relatively small percentage are not enforcing the standard at all.

Slide 37: Thank You Survey Partners!

Okay, we're at the end here. I wanted to take a moment to thank our survey partners. To get approximately 2,500 valid responses is pretty amazing — this exceeded our expectations — and the companies, individuals, and media organizations you see up here all helped get the word out, whether through rented mailing lists, tweets and other social media, web announcements, blogs, or newsletters. We just wanted to say thank you to all of these individuals and organizations for helping us.

Slide 38: Winners of Our Prize Drawings

And now, on to the prize drawings. We're giving away two Saleae logic analyzers to survey takers: Michael Breunig in Germany and N. Satyish in India each won one. We're also giving away three Amazon gift cards — and, by the way, I apologize in advance if I'm getting any names wrong. Christophe Pradervand in France, Paul L. Cox in the UK, and Greg Jandl in the United States each won a 25-unit Amazon gift card in their local currency.

Question & Answer

Q: A lot of people are asking how they can get this webinar.

So I think I mentioned this at the beginning, but I'll mention it again here: within the next few days, the webinar will be posted on our website. That will include the audio you're hearing right now from Mike and myself, as well as the slides as a PDF, so you'll be able to replay it for yourself and your colleagues. Just give us a couple of days to get that out, and keep an eye on the webinars page on our website.

Someone was also asking about the EE Times article. I'm not going to go into too much detail here: there's an article on the EE Times website, written by Andrew, describing some of the results of this survey. If you have questions or comments, I encourage you to go to the EE Times website and type your comments in there — it's a great way to continue the dialogue on all of this. All right, Mike's going to take a few more questions, so I'm going to turn it over to him.

Q: Does the data show a trend towards increasing use of C++?

The short answer is yes, but it's a very slow-moving trend. I'd have to look back at the data, but I have data from ten years ago where the C++ share was similar — maybe 15% — and now it's 20%. The phrasing of the question is pretty similar, and the audience is presumably fairly similar. So my guess is yes, we see more use of C++ these days than ever, but it really doesn't seem to be a fast-moving train, if you will. In fact, C remains the lingua franca of the embedded programmer, with more than 75% of survey respondents saying it's the primary programming language on their current project. Of course, there might be some C++ mixed in there, too.

Q: Do you know which safety integrity level the respondents who said they could seriously injure or kill one or more people should be at, so that we can assess whether it’s the 30% who are at lower integrity levels, who aren’t doing it?

No, we don't have data that granular. One of the obvious tradeoffs in survey design is that you need to keep it short in order to collect a lot of responses and get statistically meaningful data. And the more detailed and obscure the questions you drill down into — especially ones that not everybody will know the answer to — the more meaningless the results of those questions can become. So we made the tradeoff we thought best. We can tell that people are putting lives at risk while not following these best practices, and we would advocate that they follow them even if they're not at the higher integrity levels. But the questioner is right that we don't know which integrity levels those 30% are at.

Q: Did you see any relationship between the years of experience of the designers and safety-critical projects, or really anything else?

That's interesting. So Andrew and I both talked about some regional differences as we went through the data. In my analysis — not only of regional differences — I looked only for statistically significant differences; we didn't talk about minor ones. Similarly, I compared, for example, all the people with less than ten years of paid experience against all the people with 20 or 30 or more years. I took those two data sets and did some statistical analysis. They had different roles on projects and were sometimes on different team sizes, but in terms of best practice use, there really was not a statistically significant difference. That surprised me. I went in expecting to find something like what the questioner is asking about, but it wasn't there in our data — only a minor difference, and not one we're sure is outside the margin of error of our statistical analysis.