On May 3, 2017, Barr Group CTO and software expert Michael Barr delivered the keynote address at the Embedded Systems Conference in Boston. In this important presentation, Michael discusses safety and security concerns in the IoT landscape, including designing security into connected safety-critical devices to prevent serious, potentially deadly attacks. Michael also outlines a call to action that we, as an industry, must follow to ensure that we are effectively and responsibly creating a safer, more secure connected world. This is an important talk not to be missed.
Full Presentation Transcript
Michael Barr: I want to talk with you today about a serious subject, about the dangerous world which we are entering into, which we live in. It’s a world where our embedded systems that we build are increasingly a battlefield, a place where anonymous hackers sitting safely in their bedrooms or in their offices are able to remotely injure and potentially kill people on the other side of the world. Because dangerous devices are connected to the internet, and as you know the internet is not always a safe place.
One type of threat that exists is governments hacking into systems. We’ve seen some of that already. Stuxnet, if you haven’t heard about it, was a cyberweapon that targeted the Iranian centrifuges that were being used to process nuclear fuel to build nuclear weapons. The Stuxnet virus first infected more than 100,000 computers around the world, spreading itself from computer to computer as it sought a way into the laboratories in Iran. And indeed, it did do that successfully--even leaping an air gap. The network in the Iranian processing facility was, obviously, not even on the internet.
This virus succeeded in finding its way on to thumb drives which found their way into the lab. And once it found its way in that right place, it found that there were the right types of PLC controllers manufactured by Siemens and it started fiddling with them. And it successfully slowed down the Iranian nuclear fuel processing by quietly disturbing the way in which those centrifuges worked. So it was an attack on an embedded system that wasn’t even directly connected to the internet. And it was successful. Stuxnet damaged several thousand centrifuges. This was very expensive equipment that was hard, obviously, for the Iranians to acquire in the first place given trade embargoes and other situations.
Perhaps you heard also about the more recent power outage in western Ukraine. On December 23rd, 2015 there was an attack--believed to be perpetrated by the Russians--against a Ukrainian power utility. Hundreds of thousands of homes and businesses were affected by a virus that found its way into the computers in the utility company and allowed the remote hackers to log in to those computers, flip switches, and also delete the drivers that allowed those switches to be flipped back on. Again, targeting the electrical utilities, the critical infrastructure.
And you don’t have to even have the budget of a government to attack embedded systems, unfortunately.
Here’s an example of an attack that was perpetrated and written about in the news on an implanted device, a medical device worn by patients with heart problems, an implantable defibrillator. The attack was successful because the wireless protocols that are used to communicate with these devices in doctors’ offices are not secured appropriately and it was possible to talk to the device and actually deliver shocks of nearly 1,000V from a laptop in the room.
And of course, the attack surface for medical devices is increasing all the time. The big thing with these types of devices these days is that when you have a device implanted in your body, it communicates with a watch or it communicates with a home router so that you can deliver information about the functioning of the heart or the functioning of the device to your doctor who can remotely monitor it. So now you have this device in your home that’s attached to this thing in your body which can produce shocks, which could potentially kill you, and it’s all connected to the internet. It’s communicating with your doctor. And so the attack surface has grown for the same attack: you don’t even have to be in the same room; you don’t have to be within range of the wireless protocol in order to accomplish things like this.
Even your car could potentially be attacked.
<Video Clip 1>
Male Speaker 1: From iPhones to websites to cars, Charlie Miller of Ladoo makes it his business to hack the computers that drive modern life.
Chris Valasek: Okay, hold on tight. Hold on.
Male Speaker 1: Tuesday, he made a lot of commuters uncomfortable.
Chris Valasek: I can’t see anything.
Male Speaker 1: In this video report posted on WIRED.com, Miller and his business partner showed how they can exploit the internet-connected infotainment system of a 2014 Jeep Cherokee to take control of the SUV.
Charlie Miller: So I’m going to get on the road so don’t do – don’t kill the engine or the brakes or anything like that, just do like the simple stuff.
Male Speaker 1: With his partner hacking in from another city, Miller showed us how their hack can range from pranks to the potentially dangerous ability to cut power to the brakes. Even taking the wheel.
Charlie Miller: I give that an A plus.
Male Speaker 1: Miller plans to share part of his hack next month at a hacker convention, forcing automakers to take security issues seriously. He has already worked with Chrysler on a fix for the Jeep. But he says there are potentially hundreds of thousands of unsecured cars on the road.
Charlie Miller: It doesn’t matter if you use your navigation or you pay extra for the Wi-Fi, you’re vulnerable if you have one of these vulnerable cars.
Male Speaker 1: Chrysler says it has a team focused on identifying and implementing software best practices. Miller says as cars come with more features, drivers will need more protection.
Charlie Miller: It’s definitely something I’m worried about.
<End Video Clip 1>
Michael Barr: How is this possible? How can someone attack your car remotely? Well, it’s a simple matter. Cars these days are nothing but networked computers. Some cars have as many as a hundred processors in them. They have CAN networks, sometimes two or more networks in the car that are used to communicate between these processors. And you know what the CAN protocols don’t have? They don’t have encryption. They don’t have authentication of the device signals.
So, for example, if you could find your way onto the network, you could replay packets. You could send false packets: claim that things happening in one part of the car aren’t happening, or vice versa, or issue commands. If all it takes to turn the brakes off is a command sent over the CAN bus, then it’s possible to turn off the brakes if you can get onto the network. But how can you get onto the network? Well, one of the biggest computers in the car and one of the easiest to attack (but not the only one) is the infotainment system.
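The spoofing scenario described here can be sketched in a few lines of Python. This is a toy model, not real automotive code: the arbitration ID 0x1A0 and the one-byte "brakes off" payload are invented for illustration. The point is that a classic CAN frame carries only an ID and up to eight data bytes--no sender identity, no signature, no encryption--so a receiver cannot distinguish a legitimate frame from an injected one:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CanFrame:
    arbitration_id: int   # hypothetical: 0x1A0 = "brake command"
    data: bytes           # raw payload, at most 8 bytes on classic CAN

class BrakeEcu:
    """Toy receiver: it acts on any frame carrying the 'right' ID."""
    def __init__(self):
        self.brakes_enabled = True

    def on_frame(self, frame: CanFrame):
        # The ECU has no way to verify WHO put this frame on the bus.
        if frame.arbitration_id == 0x1A0 and frame.data == b"\x00":
            self.brakes_enabled = False

ecu = BrakeEcu()
legit = CanFrame(0x1A0, b"\x00")   # from the real brake controller
spoof = CanFrame(0x1A0, b"\x00")   # injected via a compromised infotainment unit

# The two frames are byte-for-byte indistinguishable to the receiver.
assert legit == spoof
ecu.on_frame(spoof)
assert ecu.brakes_enabled is False   # the spoofed command is accepted
```

Real CAN frames of course carry more low-level fields (CRC, DLC, and so on), but none of them authenticate the sender, which is the property the attack exploits.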
You’ve been in cars these days, perhaps you’ve bought a car in the last couple of years. You see this big computer that looks like a PC in there. You know what it looks like inside? It’s like a hardened PC. It’s running Linux or Windows or QNX or some other operating system, right? And these infotainment devices are increasingly connected. They’re connected so that you can have WiFi in the car. They’re connected so that you can have OnStar-like functionality so someone can remotely unlock your car or find it for you. They are connected so that your car can communicate to the dealer and receive updates and things of that sort.
And so, if your car has a connection--and these Jeep cars had a connection to the Sprint wireless network, not even relying on satellites, just built-in cellular. (You didn’t even know your car had a cellphone account? Well, sometimes it does.) And with the infotainment system being insecure, it was possible to get onto the Jeep's CAN network. And once on the network, it was possible to inject false command packets as well as see what’s going on inside the car and change behaviors: lock the doors (because the door locks are on the network too), turn off the brakes, change the steering direction, all sorts of different things.
And of course there’s more. And we’re going to see more, right? Recently, how many of you heard about Mirai? Not as many as I would hope, since you’re designing embedded systems. How many of you are designing embedded systems that have Linux? More than have heard of Mirai, scarily. So Mirai is a Linux worm that infects primarily embedded systems. It propagates over the Internet, finding its way on to devices that have default root usernames and passwords.
The worm has a list of 60 or so username+password combinations that are known to be, you know, in certain routers and security cameras and other types of devices. And the worm gets into that device, it becomes part of a zombie "botnet." More than a million embedded devices are already infected with Mirai [audience gasps] and more are being infected as I speak. This botnet has successfully been used to attack the Internet and various major websites like Twitter and Facebook. You might have heard about--late last year, I think it was--that this Mirai botnet was used to perform a distributed denial-of-service (DDoS) attack. So instead of PCs being hijacked, now it’s embedded systems that are being hijacked. Your router at home may be infected with this and its extra spare cycles are being used to attack other computers. And the only thing you might notice is that maybe your device is not operating as efficiently as it used to.
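Mirai's dictionary approach can be illustrated with a short sketch. The credential pairs below are examples of the kind of factory defaults the worm's list contained (published analyses of the list include pairs like root/xc3511); an audit function of this shape is a defensive use of the same idea:

```python
# A handful of factory-default credential pairs of the sort Mirai's
# dictionary contains (the real worm shipped with roughly 60 pairs).
DEFAULT_CREDENTIALS = {
    ("root", "root"),
    ("root", "admin"),
    ("admin", "admin"),
    ("root", "12345"),
    ("root", "xc3511"),   # a widely reported IP-camera default
}

def is_vulnerable(username: str, password: str) -> bool:
    """True if a device still accepts a credential pair on the default list.

    Mirai simply tries each pair over telnet; a device that answers to
    any of them joins the botnet. Shipping unique per-device credentials
    defeats this entire class of worm.
    """
    return (username, password) in DEFAULT_CREDENTIALS

assert is_vulnerable("root", "xc3511")
assert not is_vulnerable("root", "a-unique-per-device-secret")
```

The fix is correspondingly simple: never ship a fleet of devices that all answer to the same username and password.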
And there are others too. And this one is particularly scary if you’re a manufacturer of these devices... the latest one is BrickerBot. BrickerBot uses a similar technique to find its way into Linux systems--specifically, embedded Linux systems built around BusyBox, a compact suite of Unix utilities. And once it gets there, BrickerBot kills the device. It wipes out the filesystem in what I call permanent denial-of-service. It’s not clear what the motive of the hacker even is. It’s not like they’re taking control of a million computers; instead they’re shutting down a million computers. Maybe they just don’t like BusyBox?
And of course, we’re going to see many more to come. So we’re living in this scary world and obviously it’s a dangerous world, an interesting time to be an embedded systems developer.
But this is not the digital utopia we were told and promised, was it? What were we told? We were told that if we put more intelligence into more things--more smart systems--that our homes would be more energy efficient, anticipating our needs, ordering remote grocery deliveries because we’re out of eggs, for example, right?
And our doctors and our medical and our health situation would improve because we'd all have devices that we wear on our wrists or maybe inside our bodies or on our hips that are monitoring us continually and communicating with our doctors. And maybe our doctors can even issue remote treatments.
Cars, right? We see increasingly smart cars that self-drive. The idea is smart highways. The idea being that we can be zipped along in a congestion-free environment from home to work or work to home. And all along the way, we can just be texting our friends because we don’t have anything else to do.
And even some people suggested that with John Deere tractors that are robotic and can go through the fields planting crops and harvesting them, and the factories that are automated so that there is a surplus of stuff--food, energy, materials--so that all of our basic needs are met, we are about to live in a digital utopia where no one will have to work. We will all just live a lifestyle of leisure and art, pursuing wonderful things, and maybe the government will even have to provide us what’s called a basic income, right?
And you hear about these ideas, and this is the promise. This is where we’re headed sort of, kind of, right, we must be? All these things are getting smarter more and more every year.
In fact, it’s already begun in a big way. Gartner, an organization that’s heavily involved in forecasting the future of certain markets and the economy, says that in just three years, they expect the count of Internet-connected sensors and smart embedded systems--"smart devices"--to reach 25 billion, across a range of industries: utilities, transportation (the government’s smart highways, for example), medical, consumer, and, scarily, defense. Other forecasters say that just five years after that, there’ll be 100 billion of these smart devices. That’s in addition to our PCs and smartphones and servers. That’s just the embedded devices down at the node.
So, what’s going wrong, right? What is the problem here? There is some sort of disconnect. We are living in a world where the attacks are clearly increasing and going to increase, when we should be living in a world where these systems are benefiting us.
So, one of the pieces of data that we have to look at is a survey that the Barr Group conducted in January of this year, our third survey of this, and that’s the 2017 Embedded Systems Safety and Security Survey. I’m not going to talk about the whole survey which you can find the results of online, but I want to talk about a piece of it, a piece of it that’s related to this internet-connected dangerous things.
This year, just to give you a little background of the survey, we had a little over 1,700 engineers like you who are paid professional engineers who are working on real projects. (We weeded out all the academics and people who were one of these.) Seventeen hundred really working on systems, about half in the U.S. and Canada and the rest around the world, predominantly in Europe and Asia, of course. And you can see here – I’m not going to read the names – so you can see that these engineers are working for a variety of different companies, more than 1,000 companies in all but you can see there are some big names here too. And so this is what engineers are actually doing, the engineers who are designing these systems.
One of the questions we asked them was what’s the worst thing that can happen with your product? And we found actually that 28% of the designers of embedded systems currently working on things could, with their product, potentially kill or injure someone. We also found that 60% of new designs were going on the internet. Sixty percent, sometimes or always on the Internet!
Now, we put together a few statistics here and we saw that 60% were on the Internet, and of those a slightly smaller proportion, 25% (not 28, because it’s just the ones on the Internet), were potentially dangerous. I call this “The Internet of DANGEROUS Things.”
I want to talk about what those designers are doing with respect to security for example. What practices are they engaged in to secure these dangerous devices that are connected to the internet, that they are putting on the internet? What do you think they are doing? Somebody venture a guess. What do you think the engineers who are designing dangerous things and putting them on the internet are doing with security?
[Off-mic Conversation]
Michael Barr: He says it’s a secondary item, less important than meeting the schedule, for example.
Any other thoughts about what some engineers might be doing? Yeah?
[Off-mic Conversation]
Michael Barr: They used a particular security device and they think they’re secure. For example, he gave a particular example that they’re doing AES encryption for something so they assume the whole system is secure.
Any other thoughts?
[Off-mic Conversation]
Michael Barr: Nothing. This guy – actually a really good lead-in, a great segue for this. So he says, sometimes the customer is not willing to pay for security. And the designers are thinking about that.
So we asked – now this is a different survey, but we did some man on the street interviews to find out what everyday people (not you and I, not engineers), what everyday people think should be happening with security in devices that are going on the Internet that could kill people. This is what they had to say:
<Video Clip 2>
Female Speaker 1: When internet-connected products like cars and medical devices could injure or kill people, should the engineers include security in their designs to prevent hacking?
Female Speaker 2: Yes, absolutely.
Female Speaker 1: Do you think that the engineer should include security in their designs to prevent hacking?
Male Speaker 2: Oh, definitely. I think that’s a – it’s a big risk. When you buy these products, you cannot assume that it would be protected from outside interference. I think it would be pretty – an important thing to have.
Female Speaker 3: It seems to me like that should be the first priority if you’re risking people’s lives.
Male Speaker 3: I’d be surprised if they didn’t already.
<End Video Clip 2>
Michael Barr: Well guess what? Twenty-two percent--more than one in five--of the designers of current embedded systems that will be on the Internet and could kill or injure one or more people are doing nothing. They have zero requirements related to security. They’re not implementing anything. They’re not even putting in an AES encryption engine. They’re doing nothing. These are products you could buy, your doctor could prescribe to you, you could buy at the auto dealer--or will buy in a year or two when those designs are done.
We also found a lot of other, more specific, simple things that were ignored. And this is not just by the 22%; these answers came also from the people who said they were doing things about security. If you add up the people who don’t have a coding standard and the people who don’t enforce the one they have, that’s more than a third of designers not paying attention to coding standards. More than 40% are not doing code reviews at all, or not doing them in any reliable manner. And 36% are not doing static analysis.
Now our survey goes further and asks other questions and gets into some more detail. I don’t have time, unfortunately, today to talk about that.
But just these simple best practices are inexpensive to implement and known to reduce bugs--for example, the buffer overflows through which attacks can take place. A code review could reveal that we’re using a dumb username and password combination and need to give every device its own. Or it could reveal that you are using encryption but the key is right there on the screen of the device, or the key is the code printed on the back, right?
Some commonsense steps in the software development process and the system design could make systems more secure. I’m not saying that’s all you have to do to secure a system--not at all. But it is something you should be doing. And too many engineers are not.
We asked those same people to react to the situation of security being ignored. And here is what they had to say:
<Video Clip 3>
Female Speaker 1: If an engineer who’s designing an internet-connected car or medical device for you told you that they were ignoring designing security into the device, what would your first reaction be?
Male Speaker 2: I think I’d be pretty upset.
Female Speaker 3: That seems absurd.
Female Speaker 2: I’d be very worried if they are leaving devices vulnerable. There definitely should be security.
Male Speaker 4: I would be very worried about my own safety.
<End Video Clip 3>
Michael Barr: Now look, obviously, there’s no magic bullet for securing these systems. The simple facts of the design tell us that, right? Embedded systems are, by their very nature--by the very way we define embedded systems--a fragmented environment. There’s not one company, or even two companies, that rules the operating system space for embedded systems.
In fact, there is not one processor company, or two processor companies, either. You hear a lot about ARM--people say, “Oh, I’m using an ARM,” and lots of people are using ARMs. But do you know that the number of 8-bit processors sold every year is still going up? That the number of 16-bit processors sold every year is still going up? That more than 20% of new designs now have four or more processors in them?
And then when we talk about having an Internet connection, there are all kinds of different connections that these devices have. Some have wired connections, some have wireless connections. Sometimes the wireless connection is 802.11a or b or g or n, or all four; or maybe it’s Bluetooth, or Bluetooth 2.0, or Bluetooth LE; or maybe it’s something else like ZigBee. And maybe they have both--maybe there’s an Ethernet port too. The point being that no two of these systems look the same, right? A medical device that goes in somebody’s heart has security issues, but there’s no one-size solution that’s going to secure that and also secure the infotainment system in the car, which is a different type of embedded system.
And even if you back up for a minute, stop thinking about the individual system, and just think about the strategic challenges here: security is always an arms race. Attackers get stronger; they learn more. Suzanne, who introduced me, talked about my work testifying in relation to satellite TV piracy and hacking, and I’ve worked on a number of cases related to that. What you always see is that the security of the satellite TV system provider goes up over time, but the knowledge and skills of the attackers who want free TV--or to make money off of free TV, ironically--go up as well. And then the defenders improve their security more, and that arms race is always out there.
The strategic problem for designers of embedded systems is the longevity of the stuff we make. Let’s say you’re making a red light controller. You’re going to put it on the Internet--and it’s dangerous--but it's on the Internet for the convenience of the government officials who are operating the smart highway.
Well, it’s going to be out there on the road for a long time. Meanwhile, the attackers have more computing power at their disposal each year. The attackers have increasing knowledge. There are more types of viruses. They learn about the hacks. You’re not even patching your system to upgrade the Linux operating system or whatever. And so, you have this static device living in a dynamic security-challenge environment.
And there are already however many billions of products connected to the Internet, and many of them don’t have any mechanism to be upgraded at all, other than physical replacement.
Furthermore, there are embedded systems that have to be cheap or they won’t happen, right? Eventually that chair could have a processor in it. Why? So that the chair maker can sell more chairs than the guy who is selling "just a chair." What’s special about the one with the processor in it? Well, maybe it can raise and lower, or maybe it automatically detects your butt sitting in it and adjusts to your preferences. Think about Aeron chairs, right, $700 a chair! Who thought that would sell? But it did. Start putting processors into cheap chairs. I don’t know what it’s for, but it will happen. And when it does, what will make it possible is a cheap BOM cost--a cheap cost of putting the processor and the software and everything inside that device, whatever it is. Otherwise that chair wouldn’t exist as a smart product.
And so, if you are going to build a product like that, you want the cheapest processor (that can do the job). You don’t want to spend money on security. You don’t want to have, you know, exotic software packages or AES encryption packages and things like that in there.
Just as one example from the car scenario: one of the attack surfaces in cars is the tire pressure monitoring system (TPMS). If you have a car that’s less than 10 years old--and I presume that most of us do--it has four TPMS sensors, one in each wheel, detecting the tire pressure and relaying it wirelessly (because it’s in the wheel) to another device on the CAN network.
Well, this has been shown to be a potential attack vector, a way to attack the car. What’s one simple thing you could do--not the only thing? Let's say you wanted to rob someone or attack someone. You could drive down the road and spoof that they had a flat tire: send a wireless packet that’s stronger than the one coming from their front tire, and now they think they have a flat tire and they pull over. You’re able to rob them, steal their car, you know, hurt them in some other way.
But there’s no way that we would all be driving around in cars that had four sensors in them if every one of them had to be a $100, rock-solid security processor. So this is a challenge that is not going to go away.
So what are we going to do, we the designers of embedded systems? For one possibility--although not a near-term solution--I want to turn to an analogy for a minute. I owe this analogy to some book I read and I can’t remember which, so forgive me for stealing the idea, but it’s a great analogy: there are multiple ways of preventing car theft, and they don’t always occur to us at first. We might think, “Oh, I’ll alarm my car,” or back in the day, “I’ll buy an alarm to upgrade the security of my car.” Or, as the manufacturer of the car, “I’ll put in a locking steering wheel, so even if they can start the car, they can’t move the car.” Maybe some of you at one time--or maybe even now--have The Club, an add-on security device. All of these are mechanisms that merely say, “Don’t steal my car; steal the car next to my car,” right? They shift the security risk down the line. Even if 80% of us get The Club or have the alarm and the locking system, there are still enough cars that aren’t secure to keep car thieves happy, because all the cars would have to be secure in order to put the thieves out of business.
So we’re making all of us pay to upgrade our cars, but we’re not truly eliminating the problem. That’s like saying that every embedded system has to be made rock-solid secure in order for us all to live in a safe environment, in a safe world.
Another way that people thought of to prevent car theft is LoJack. Anybody ever have a LoJack on a car? Okay, fewer than have heard of Mirai. LoJack is completely different. It turns the problem on its head. These days, being able to see where your car is doesn’t seem like such a big deal, right, because we all have GPS in our phones and in our cars and we can find our cars. But when LoJack first came out, it was a device that you attached to your car in a secret location. This is part of the importance of this analogy.
That raised the risk to the car thieves, because it made it possible to find your car when you called the police. You could call the police and say, “My car has been stolen!” And they'd say, “Oh, okay, well you have LoJack and we have the system for finding it.” And so the police would log in to their system, enter your car’s ID, and there on their map is the chop shop the guy drove it to--his theft business, right? So then you can shut down or respond to the attacks in a stronger way that raises the risks for the attackers.
We need something like this for embedded systems ultimately, I contend. We need something where from my hospital network that I want to protect or my body or my car, I have the option of upgrading its security as do you. But even if all of us don’t do it, it makes us all safer. For example, maybe we can have a network firewall that goes in our car and monitors the traffic on that network. So why is this important? It’s important because of course there’s lots of ways to attack embedded systems, right? I can see here that there’s an embedded system in the back of a speaker. I can use a screwdriver and take it apart and I can probe electronically and all that stuff.
But I’m talking to you today about remote attacks over the internet, right? Someone who is physically able to take apart the computer in my car is also physically able to cut the brake line. That’s a different problem. But for a remote attack, they have to get a packet onto that network, and it has to be received in the intended way--to shut off the brakes, for example, at the right time.
And so, the first piece we could have is a network device, installed by the car manufacturer for example, to make sure that the traffic on the network makes sense. And we could put that into a broader system that also detects attacks as they emerge--you know, the next Mirai, the next Stuxnet, the next BrickerBot--learning in the same way that the spam filtering systems on, for example, Gmail and Microsoft’s mail learn about spam in real time and upgrade their defenses. We could have device makers--say, medical device companies--watching out for and learning about attacks on their devices and submitting them to a cloud-based system that updates the security of all of our networks.
So we have this one device that’s getting smarter and smarter over time in our car or in our hospital, et cetera, that’s enabling us to protect that environment, prevent those remote attack packets from arriving, whether they’re automated or whether they’re from some hacker working for a government or whatever.
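A minimal sketch of what such an in-vehicle network monitor might do, assuming a simple allowlist of expected CAN arbitration IDs and a per-second message rate limit (both the IDs and the rate here are invented for illustration):

```python
from collections import defaultdict

class CanMonitor:
    """Toy allowlist monitor: flags frames whose arbitration ID is unknown,
    or which arrive far faster than that ID's normal rate (a common symptom
    of injection, since spoofed frames must out-shout the real sender)."""

    def __init__(self, allowed_ids, max_per_second):
        self.allowed_ids = set(allowed_ids)
        self.max_per_second = max_per_second
        self.counts = defaultdict(int)   # (id, second) -> frames seen

    def check(self, arbitration_id, second):
        if arbitration_id not in self.allowed_ids:
            return "ALERT: unknown ID"
        self.counts[(arbitration_id, second)] += 1
        if self.counts[(arbitration_id, second)] > self.max_per_second:
            return "ALERT: message flood"
        return "ok"

mon = CanMonitor(allowed_ids={0x1A0, 0x2B4}, max_per_second=10)
assert mon.check(0x1A0, second=0) == "ok"
assert mon.check(0x7FF, second=0) == "ALERT: unknown ID"
for _ in range(10):
    mon.check(0x2B4, second=1)           # normal traffic, within budget
assert mon.check(0x2B4, second=1) == "ALERT: message flood"
```

A production system would learn these profiles per vehicle model and pull updated rules from the cloud, which is exactly the self-updating loop described above; this sketch only shows the local checking half.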
And then we need two kinds of reactions, of course, when these threats do emerge. The first is something we as engineers can control: we need the firewall system to be self-updating, so that when the cloud learns about new attacks, the devices are protected. This kind of system already exists, right? It’s like the automatic network virus scanners that people use in offices and big businesses. But we need it for embedded systems. We need it in the networks that are keeping us safe.
And the other thing we need is coordination of this knowledge with law enforcement: the attacks are coming from here; the study of those attacks leads to arrests and extraditions, which does sometimes happen, right, but it doesn’t happen enough to scare away that guy in his bedroom, safely behind his locked door, who is fiddling around and hacking.
And you might say, “Isn’t encryption good enough?” To this gentleman’s point in the back, “Hey, I got encryption, aren’t I secure?” Well, no. Actually, it’s not that simple. Sadly, it’s not that simple. Encryption is amazing stuff. I don’t want to undersell it. It’s awesome. The math works. And at least for a few decades, a particular algorithm, particular key length, et cetera, can provide real security.
But the problem is, encryption is always part of a protocol. It always has to be coded. And there can be bugs in the implementation. There can be weaknesses in the protocol, and those can be exploited. As well, it’s great that you’re encrypting this part over here and that you bought a strong chain, but wherever the weakest link is, that’s where the hacker is going to find their way in. So security by encryption alone is never enough. It’s not the only thing you’re going to need to do.
One important thing I want you to think about is that hackers don’t think about the system the way we do. They think about it in a different way. (This, by the way, is called an attack tree, if you haven’t ever seen one of these.) The basic idea is, “How can I open the safe and get all the money?” Well, if you’ve watched a lot of heist movies, you know: they famously pick the lock, cut open the safe, et cetera. Maybe in some sort of grand scheme they were involved in the installation of the safe and made a secret trap door to get in there.
But most attackers of the safe can't do those things, right? They have a certain amount of time, they have a certain amount of access. Probably the easiest path for most people would be to figure out a way to learn the combo, maybe to corrupt a particular person who works at the bank so that you can open the lock whenever you want to. Because that's the easiest path, and the one most resistant to getting caught. Attackers and hackers are real people: they don't want to go to jail, they don't want to be exposed and caught. And it's the perception that there is no risk of that which increases the prevalence of hacking.
Now, of course, we might not ever stop a nation-state attacker, because they don't care if they're caught. It's their job. But we can prevent a guy with a vendetta against someone from shutting down that person's car while they're driving down the road.
So hackers do think about how they could get caught, and they also think about what's the worst that happens to them if they get caught. They think about it when they're plotting out this attack tree. And you have to do the same when you're designing your system. You have to think, "Who could attack? Why would they attack? And what skills and tools and motivations would they have?" Then you can practice what's called defense in depth. Defense in depth is, for example, saying, "You know what, one of the things that scares me is that the money could get stolen because an employee is threatened somehow"--their life, their spouse's life, et cetera--and so they open the safe and let the money out because they feel threatened.
Well, how could you, as a clever system designer, design your safe to have an additional layer of security there? How about this? Every safe has two combinations: the combination they use to open the safe, and the combination they use when they're under duress, which also opens the safe and saves their life, but triggers automatic notification of the police, so when the guy comes walking out with the money, he gets caught. If an attacker knows that your system is like that, the attack becomes much more challenging, because they have no way to know which one of those codes is going to be entered. And if they don't know, then there's a likelihood of getting caught. You have to think like this at each layer, and that's called practicing defense in depth: what can I do at each layer to add additional security, so that there is no one weak link?
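The two-combination idea above can be sketched in a few lines of C. This is an illustrative sketch, not a real lock controller: the combinations, the `check_combination` function, and the result flags are all hypothetical names, and a real product would store the codes as salted hashes in protected memory, never as plaintext.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical combinations -- illustration only.  A real device would
 * store salted hashes of these in protected memory, not plaintext. */
#define NORMAL_COMBO "4-8-15"
#define DURESS_COMBO "16-23-42"

typedef struct {
    bool open;          /* unlock the safe? */
    bool silent_alarm;  /* also notify law enforcement? */
} unlock_result;

/* Both valid codes open the safe, so an attacker watching the door
 * cannot tell which one was entered; only the duress code raises
 * the silent alarm. */
unlock_result check_combination(const char *entered)
{
    unlock_result r = { false, false };
    if (strcmp(entered, NORMAL_COMBO) == 0) {
        r.open = true;
    } else if (strcmp(entered, DURESS_COMBO) == 0) {
        r.open = true;
        r.silent_alarm = true;
    }
    return r;
}
```

The point of the design is that the two code paths are indistinguishable from the attacker's side: both return `open == true`, and only the system knows which one also raised the alarm.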
So, to wrap up, I want to give you an action plan. You are designers of dangerous things potentially now or in the future. What are you going to do differently so that next year when we take our survey, there’s not 22% ignoring security and all these people ignoring best practices?
First, we’re not going to ignore security. You know why? Because we have an ethical duty. How many of you are members of the ACM, the Association for Computing Machinery? Or the IEEE? Did you know that they both have rules of ethics, codes of ethics?
The ACM has a code of ethics whose Rule 1.2 is titled "Avoid harm to others." But it goes on at great length about using best software practices, about blowing the whistle if you know that someone could get injured or killed using your product and management isn't taking it seriously, and about a variety of other professional ethics that relate to designing potentially dangerous things.
The IEEE code, in fact, makes it Rule 1 that you accept personal and professional responsibility for the safety, health, and welfare of society--or at least of those people who are using your product and potentially endangered by it. So the number one thing we're not going to do is ignore security anymore, at least not when we're designing dangerous things.
The rest are all prescriptive--things that you should do. First, do adopt software best practices. I'm not limiting this to the three I mentioned earlier, but those can help, and they are relatively inexpensive to get started with and to use on a continuing basis. They will reduce bugs, and by reducing bugs they will reduce the weak links in the chain--potentially eliminating buffer overflow attacks, to name just one example.
If you're not familiar with it, there is a coding standard--we have our own coding standard at Barr Group, but there's also a coding standard put out by the Computer Emergency Response Team, CERT--that is specifically focused on security: on the ways in which the lines of code that every engineer writes can make the system less secure or more secure. Everyone who works on the project needs to do their job better.
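As a concrete taste of what a secure-coding standard asks for, here is a sketch of one of the most common fixes: replacing an unbounded string copy, which invites exactly the buffer overflow attacks mentioned above, with a bounded copy that can never write past its destination. The function name `copy_field` is mine, not CERT's; this is a minimal illustration of the pattern, not a quote from any standard.

```c
#include <stddef.h>
#include <string.h>

/* Unsafe pattern flagged by secure-coding standards:
 *     char dest[16];
 *     strcpy(dest, untrusted_input);   // overflows if input >= 16 chars
 *
 * A bounded alternative that can never write past the buffer and
 * always leaves it NUL-terminated.  Returns 0 on success, -1 on
 * rejected input. */
int copy_field(char *dest, size_t dest_size, const char *src)
{
    if (dest == NULL || src == NULL || dest_size == 0) {
        return -1;                /* reject bad arguments, don't crash */
    }
    size_t n = strlen(src);
    if (n >= dest_size) {
        return -1;                /* too long: refuse rather than silently truncate */
    }
    memcpy(dest, src, n + 1);     /* copies the terminating NUL too */
    return 0;
}
```

The design choice worth noting is that over-long input is rejected outright instead of truncated, so a caller can never end up operating on silently mangled data.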
Do also use cryptography. It's important. Sadly, we found in our survey that less than half of those who were concerned about security in their systems were encrypting anything at all. Particularly if you are designing something dangerous, you should by default be encrypting your data going back and forth over the network--to the cloud, for example.
But also--and this is really important, and I want all of you to write this down and do this--when you make a bootloader for your system, a way of booting up the system and downloading potentially new software to it, you should make it possible to update the firmware in there so that you can respond to attacks and threats in real time. And that bootloader needs to be secured. It can't just accept any download. If you haven't read about it, there is a paper about the so-called flaming-printer attack that you might find interesting. It shows how certain HP printers would accept a download of new firmware that essentially just looked like--and maybe still does, for some--a PostScript file, with no authentication that it even came from HP.
And so it was possible to download new firmware into the printer and thereby, for example, to monitor everything it printed. Let's say it's a printer used by a foreign government, or one where you'd like a copy of every file that goes through it. Well, you could attack the printer in that way. Build a secure bootloader. A secure bootloader means probably that the firmware image is encrypted, but definitely that it is signed and authenticated as coming from you. It's not perfect, but you have to do it. It's the best thing you can do.
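The core gate of a secure bootloader--refuse any image whose authentication tag does not verify--can be sketched as follows. Everything here is illustrative: a real design would use a proper asymmetric signature (so the key on the device cannot be used to forge images) or at least HMAC-SHA-256, and `toy_mac` below is a stand-in with no cryptographic strength whatsoever. The constant-time comparison, however, is a real practice: an early-exit `memcmp` leaks timing information to an attacker guessing the tag byte by byte.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAC_LEN 8

/* Toy keyed checksum standing in for a real MAC or signature check.
 * Do NOT use anything like this in a product -- it only exists so the
 * sketch is self-contained and runnable. */
static void toy_mac(const uint8_t *key, size_t key_len,
                    const uint8_t *img, size_t img_len,
                    uint8_t out[MAC_LEN])
{
    for (size_t i = 0; i < MAC_LEN; i++) out[i] = (uint8_t)(i * 31u);
    for (size_t i = 0; i < key_len; i++) out[i % MAC_LEN] ^= (uint8_t)(key[i] + i);
    for (size_t i = 0; i < img_len; i++)
        out[i % MAC_LEN] = (uint8_t)(out[i % MAC_LEN] * 131u + img[i]);
}

/* Constant-time tag comparison: always examines every byte. */
static bool mac_equal(const uint8_t a[MAC_LEN], const uint8_t b[MAC_LEN])
{
    uint8_t diff = 0;
    for (size_t i = 0; i < MAC_LEN; i++) diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

/* The bootloader's gate: flash an image only if its tag verifies under
 * the key provisioned into the device at manufacture. */
bool accept_firmware(const uint8_t *key, size_t key_len,
                     const uint8_t *img, size_t img_len,
                     const uint8_t tag[MAC_LEN])
{
    uint8_t expected[MAC_LEN];
    toy_mac(key, key_len, img, img_len, expected);
    return mac_equal(expected, tag);
}
```

With this gate in place, a tampered image--or one built by someone without the signing key--simply never gets written to flash, which is exactly the check the HP printers in the story were missing.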
Also, practice defense in depth like I talked about. And finally, get and stay educated about security. You have to pay attention. You should know about Mirai. Set up Google Alerts; find publications that write about computer security. Pay attention to what's going on in the desktop computing world and in the smartphone computing world, too, because that's where the trends are emerging. Whoever thought that embedded systems would become botnets? Well, I did. But you were probably thinking, "Oh, I see botnets happening. That's interesting. My dad's computer is probably being used to attack other people. Not much I can do about that." But guess what: maybe the product you designed could become one of those systems if you're not paying attention to what's going on in the news.
So that’s my best advice for you and I hope that you all follow it. Thank you for coming.