
In this webinar we present an overview of Android software architecture, hardware requirements, licensing terms, and security considerations. We also examine a few case studies that illustrate the pros and cons of building embedded systems around the Android operating system.

Once upon a time, customers were happy to control their embedded devices with nothing more than buttons and a couple of LEDs. Over a decade ago, that began to change: customers started preferring embedded devices with more sophisticated human interfaces, including touch-enabled LCDs. Since then, customer expectations have continued to grow—embedded devices should now have the same sort of slick, intuitive GUI that customers see on their smartphones and tablets—leaving devices that fail to measure up at a competitive disadvantage.

These developments pose significant challenges for embedded device developers. How to create low-cost devices with the performance, memory, and storage requirements needed by modern user interfaces? How to do this while maximizing battery life? And how to quickly create embedded applications that provide the sort of user experience that customers have come to expect?

One increasingly attractive solution is to build embedded devices with the same operating system used on most smartphones and tablets, Google's Android OS. This approach makes it possible for embedded developers to leverage the huge ecosystem of tools, software frameworks, and processor support packages available for Android devices, as well as the millions of Android app developers worldwide. Furthermore, through Google's Android Open Source Project, embedded developers have access to full source code for the Android OS—fully modifiable and available for use in any application—without licensing fees. Of course, using Android also comes with a unique set of challenges related to code complexity, footprint, real-time support, and licensing.

The remainder of this page is a transcript of a 1-hour webinar. A recording of the webinar is available at https://vimeo.com/121385638.

Slide 1: UI Convenience at What Cost?

Good afternoon and thank you for attending Barr Group's webinar on UI Convenience at What Cost? The Embedded Android Dilemma. My name is Jennifer and I will be moderating today's webinar. Our presenters today will be Michael Barr, Chief Technical Officer at Barr Group, and Nathan Tennies, Principal Engineer at Barr Group.

Today's presentation will last for about 40 to 45 minutes, after which there will be a moderated question and answer session. To submit a question during the event, type your questions into the rectangular panel at the bottom right of the WebEx and click the send button. Questions will be addressed after the presentation is complete.

At the start of the webinar check to be sure that your audio speakers are on and the volume is turned up. Please make sure to close out all background programs and turn off anything that might affect your audio feed during the presentation.

I am pleased to present Michael Barr and Nathan Tennies' webinar, UI Convenience at What Cost? The Embedded Android Dilemma.

Slide 2: Michael Barr, CTO

Welcome everyone, thank you for coming to today's webinar. My name is Michael Barr; I'm the Chief Technical Officer and the Cofounder of Barr Group.

My background is as an electrical engineer and embedded software developer, with more than 20 years of experience doing actual embedded software development. A number of those years have been spent consulting with companies in a variety of industries working on a variety of systems, and also training other engineers who develop embedded software.

I've also taught at the University of Maryland and at Johns Hopkins in Baltimore. In addition, I was the Editor in Chief of Embedded Systems Programming magazine for several years, and a monthly columnist there for a longer period of time; that's the magazine that is now just Embedded.com. I've also been involved with a number of conferences and invited to speak at private companies all around the world, including the Embedded Systems Conferences in Europe, in India, and all over the United States for many years, going back to about 1997.

In addition to my work consulting with and training engineers who write embedded software, and writing it myself, I've also been an expert witness, testifying before judges and juries. The disputes have included product liability and safety issues as well as patents and other related matters. I'm also the author of three books and more than 70 articles and papers about embedded software.

Slide 3: Barr Group - The Embedded Systems Experts

Our company, in a nutshell: what we do is help companies make their embedded systems safer and more secure. By safer I mean that the systems are more reliable and also designed to fail safely if they do fail. And by secure I mean secure against theft of service, against hackers, against IP theft, and the other types of threats that products face in the market. We don't have any products of our own; rather, one of the primary ways we do this is product development, where we take on some or all of a system's design, from the electronics and mechanical enclosure up through and including the embedded software, and sometimes associated software on a desktop computer or a smartphone. We also take on parts of that work, and the most common area we get involved in is the embedded software specifically.

We also provide consulting services or engineering guidance, often getting involved with issues of software process improvement with software and system architecture and re-architecture and generally providing higher level value than just doing the engineering. And then finally we also do training such as this webinar and such as our public boot camps and private trainings as well.

Slides 4-5: Upcoming Public Courses

Before we get to today’s course, I just want to alert you that we regularly do trainings both at private companies and you can see on our website a list of all the courses that we offer. And we also have public trainings including some upcoming boot camps. These are our weeklong, four and a half day, hands-on, intensive software training courses.

The course titles, dates, and other details for all of our upcoming public courses are available at the Training Calendar area of our website.

Slide 6: Survey! Please Help ...

One more quick note before I introduce today's speaker, and that is we are currently conducting an online survey. This is just a quick survey of embedded systems developers to better understand what's currently going on in the market.

There are only about 15 questions and it should take just a few minutes to complete. There will be a drawing at the end: three people who complete the survey will win a smart watch, either an Apple Watch or a Samsung Gear smart watch of your choosing, up to 500 dollars. So three chances to win, and all you need to do is give a couple minutes of your time. Here is the URL; you might want to go ahead and load it up in your web browser right now and complete it, for example, after today's webinar.

Slide 7: Nathan Tennies, Principal Engineer

Okay, so without further ado, let me introduce today's speaker. Mr. Nathan Tennies is a principal engineer here at Barr Group. Like me, he has been working for more than 20 years in the embedded systems design field, and like me he has worked in a number of different industries: for him specifically, medical devices, transportation, wireless and handheld devices, consumer products, and sensors. He has a ton of expertise, particularly in the area of embedded Linux, although it's not limited to that; he has also worked with Windows CE and other operating systems, all the way down to the driver level, the bootloader level, and the kernel itself, and up at the application level as well, as you'll hear with Android in today's webinar and in the upcoming boot camp.

For Barr Group he serves as an engineer on projects; he teaches, including this webinar and the boot camp, for which he also developed the materials; and he is one of our top consultants. He's worked with lots of different real-time operating systems as well as, as I mentioned, Linux and Android, and with different processors and platforms. So what I'd like to do now is turn the course over to Nathan. Nathan, if you could please take it away, and again thank you everyone for coming.

Slide 8: Overview of Today's Webinar

Thank you Michael. Android is a great operating system, and having it as an option for certain types of embedded projects is exciting. What we're going to do today is take a look at the advantages that Android brings as an embedded operating system. We're also going to explore the challenges of using Android that may make it an inappropriate choice for some embedded projects. And we'll wrap up with a few hypothetical embedded projects for which we might consider using Android.

Along the way we're going to take a quick look at some of the growing influence that the mobile market is having on the design of user interfaces for embedded devices. And we're also going to dive into Android architecture in order to understand how you go about modifying it for use on embedded devices and how this impacts the tradeoffs of using Android.

Slide 9: Evolution of Embedded User Interfaces

One of the main reasons to consider using Android, of course, is that you are designing a product with a modern user interface. One way to understand what we mean by that is to consider the evolution of embedded user interfaces over the last 30 years. In the lower left you can see a watch and a thermostat that represent the sort of user interface that was pretty standard on a lot of embedded devices over 20 years ago and, of course, is still used on many products today. These are characterized by segment-based LEDs or LCDs and usually handle user input using a lot of buttons.

Starting in the late 1990s we began to see a second generation of embedded devices such as those in the middle column: devices with graphical user interfaces based on monochrome or color bitmap LCD panels, sometimes with touch screens. These user interfaces were directly influenced either by desktop computers, complete with menus and windows and even scroll bars, or by what was happening on the web. The watch in this case is a Microsoft SPOT watch from 2004, and the thermostat shows the sort of user interface that could've been built with Windows CE or Qt Embedded even 15 years ago.

The debut of the iPhone 8 years ago, and Android the following year, marked the introduction of a revolutionary new type of graphical user interface, one that has really changed the landscape not only for mobile devices in general but is now having influence on the web, on desktop computers, and on other embedded devices.

In the upper right we see two consumer embedded devices, the Nest thermostat and the upcoming Apple Watch, that use exactly this sort of modern user interface. What's driving the introduction of these modern user interfaces into traditional embedded devices is the intense competition in the huge mobile market. The competition between Apple and Google and Microsoft, and between thousands of app developers, is continuing to drive advances in these interfaces and in the tools you need to create them. And the competition between phone manufacturers and also between processor vendors has resulted in greatly improved technology at a dramatically lower cost.

Today you can buy a no-contract Android phone on Amazon for under $50, which is pretty amazing. And this makes it easier to add these sorts of modern user interfaces to a wider range of embedded devices. None of which is to suggest that the Nest and the Apple Watch are better or will be more successful than other thermostats and watches, although the Nest is doing pretty well: it's reportedly selling over 1,000,000 units per year, and Google bought the company last year for $3 billion.

The embedded market is obviously very large. It can support a wide variety of products at different price points and focused on different market niches, but I believe these products are evidence of a trend in which improvements in mobile devices eventually generate demand that is felt more widely in the embedded marketplace.

Slide 10: What Constitutes a Modern UI?

But what are we really referring to when we say something has a modern user interface? Well first of all this is not an established term by any means. If there is an established term I haven't been able to find it. And just to confuse matters more, Microsoft started referring to its Windows 8 user interface as modern UI style a few years back. But we're definitely not talking about Windows 8 in particular. Even worse, there's no agreed upon definition for what elements are required for this sort of modern user interface.

For example, the sort of adjectives that customers use to describe the Nest thermostat, things like "elegant," "sleek," "clean," "simple," "cool," "fluid," and "iPhone-like," point in a certain direction, but they don't provide a concrete definition. Still, I think most people know it when they see it. So I'm going to take a stab at providing some definition.

First, there is no question that user-centered design is an important aspect, but that term has been in use for decades so it really doesn't tell us that much. My experience is that these modern UIs are simpler, with less on-screen clutter, yet they are still highly functional. They often rely on fewer input methods than their predecessors. The Nest, for example, relies on nothing but a rotary encoder and a single button. But the associated user interface is designed to invite users to explore. Your parents can figure them out without using a manual. And they also show greater sophistication in their use of advanced graphical features, including typography and layout.

But I think the real defining characteristic of these modern user interfaces is their use of fluid animation as a pervasive and intrinsic element of the user interface; something that actually improves the user experience instead of just being decoration. The four bullets here are the titles of the major sections of Google's new material design language for Android that deal with animation, and I think they capture the sense of how animation is being used in this new class of embedded devices.

Slide 11: Features of Modern UI Frameworks

In order to support these sorts of modern user interfaces, application developers rely on a user interface framework.

While there may not be agreement on precisely what constitutes a modern user interface, there is substantial agreement on the feature set required by the underlying framework. One of the most important features is compositing, the ability to create a composition made up of layers of bitmaps or other graphics primitives that are automatically combined when written to the screen's frame buffer. Compositing is even more powerful when combined with 2D affine transformations. These are matrix transformations that allow the graphics primitives in each layer to be scaled, rotated or translated as part of the composition process. Compositing usually also supports transparency and an alpha channel for each layer as well as automatic anti-aliasing when graphics primitives are drawn, scaled or rotated.
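To make the role of 2D affine transforms concrete, here is a minimal sketch in plain Java using the standard library's `java.awt.geom.AffineTransform`. A compositor or GPU applies the same mathematics per layer; the desktop class used here is just a convenient stand-in, not an embedded framework's API.

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class LayerTransform {
    // Compose the per-layer matrix a compositor might apply: first rotate
    // the layer 90 degrees, then scale it 2x about the origin.
    static Point2D compose(double x, double y) {
        AffineTransform t = new AffineTransform();
        t.scale(2, 2);          // applied second to each point
        t.rotate(Math.PI / 2);  // applied first (matrices compose right-to-left)
        return t.transform(new Point2D.Double(x, y), null);
    }

    public static void main(String[] args) {
        // (1,0) -> rotate -> (0,1) -> scale -> (0,2)
        Point2D p = compose(1, 0);
        System.out.printf("(%.1f, %.1f)%n", p.getX(), p.getY());
    }
}
```

The key point is that scaling, rotation, and translation are all just one matrix multiply per point, which is why GPUs can apply them to entire layers essentially for free.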

An equally important feature is an animation engine. While the animation engine may support traditional frame-based animation what's more important is that it supports the ability to animate the transparency, affine transforms, and other properties of the layers and graphics primitives. If you look at the animation happening on an iPhone or an Android device, the various transitions, scrolling and other visual effects that make these modern user interfaces so compelling, you'll find that they are primarily made up of these sorts of animations.
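The property-animation idea can be sketched in a few lines of plain Java. This is a hypothetical illustration of what an animation engine does conceptually, not Android's actual animator API: a property such as a layer's alpha is interpolated between start and end values through an easing curve, frame by frame.

```java
public class PropertyAnimator {
    // Smoothstep easing: accelerates then decelerates, similar in spirit
    // to the ease-in/ease-out curves used by mobile animation engines.
    static double ease(double t) {
        return t * t * (3 - 2 * t);
    }

    // Interpolate a property between start and end at normalized time t in [0,1].
    static double animate(double start, double end, double t) {
        return start + (end - start) * ease(t);
    }

    public static void main(String[] args) {
        // Fade a layer's alpha from 1.0 to 0.0 across 5 frames.
        for (int frame = 0; frame <= 4; frame++) {
            double t = frame / 4.0;
            System.out.printf("frame %d: alpha=%.2f%n", frame, animate(1.0, 0.0, t));
        }
    }
}
```

In a real framework the same interpolation drives transparency, affine transforms, and position simultaneously, and the per-frame evaluation is handed to the compositor or GPU rather than redrawing the layer contents.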

In a well-designed framework the animation engine should have fluid performance even when rendering full-screen animations. This is not an easy requirement, especially on embedded devices with relatively limited memory and processor horsepower. Other features commonly included in these modern UI frameworks are things like: sophisticated layout and typographical controls, support for vector fonts that can be drawn on a path or modified using affine transforms, support for internationalization, and support for the rendering of 3D objects. In practice, 3D rendering is usually reserved for drawing specific UI elements or producing certain effects, not for rendering the entire user interface in 3D.

Slide 12: Modern UI Framework Implementation

So how do these modern UI frameworks get implemented? Well, one option is to do it all in software, which is how the Nest thermostat works. Teardowns of the Nest show that it is running a Texas Instruments AM3703 processor, which has an ARM Cortex-A8 core but lacks a GPU or other graphics acceleration in hardware.

The fact that the Nest uses a relatively small 320x320 pixel display makes it easier to generate fluid full-screen animation using the ARM core alone. But it also points to a well-designed user interface framework and a highly optimized animation engine. The more common approach, however, is to use a user interface framework that is designed around a GPU, a graphics processing unit.

A GPU is capable of handling compositing of graphics primitives, using affine transforms and transparency, entirely in hardware, which offloads these tasks from the device's main processor. Many GPUs are also capable of using hardware to accelerate the rendering of vector fonts or 3D graphics. And GPUs are capable of holding user interface layers in a backing store, allowing them to be animated without much impact on the main processor. The benefit of this approach is that modern user interfaces can be supported on resource-constrained embedded devices.

Compared to the Nest, the original iPhone had a display that was 50% larger, a much more complex user interface, and much more demanding applications such as Google Maps. But it could still handle fluid, full-screen animation even though it was using an ARM core that was 25% slower than the Nest's. This is the advantage offered by a GPU-centric user interface framework.

Slide 13: Modern UI Frameworks

Since Android and the iPhone helped introduce these sorts of modern user interfaces, it's clear that Android has an application framework capable of supporting them. And while we'll discuss a number of other advantages that Android can bring to embedded projects, it's also the case that some of the challenges of using Android will make it a poor choice for many embedded devices. So given that, what are the alternatives to Android for creating these sorts of modern user interfaces on an embedded device?

Well, there are a number of products that claim to support modern user interfaces, but I can't vouch for any of them, and I've yet to see a demo that makes it clear they can replicate the sort of interface we see on the Nest or the Apple Watch. Qt, for example, has been around for a long time, is widely used in the embedded world, and has been upgraded to support most of the modern UI features we discussed earlier. Several years ago they rolled out Qt Quick, which is an application framework aimed at modern UIs like those found on mobile devices. It's interesting to note that Qt now requires OpenGL, which really means it's now targeted at devices with GPUs.

In the case of Windows CE, Microsoft promotes Silverlight, their alternative to Adobe Flash driven by the .NET framework, as their solution for a modern UI. And of course you can always build your own framework layered on top of open source APIs like OpenGL ES, DirectFB, and OpenVG, all of which support hardware-accelerated graphics.

Slide 14: Advantages of Android Marketshare

Another important advantage of using Android in embedded projects stems from its market share. Google hasn't released numbers for 2014 yet, but reported that over 500 million Android devices were activated in 2013 alone, making it the largest smartphone and tablet operating system worldwide. Last summer Google reported that there were over 1 billion active Android users in the 30 days prior to their announcement. And the 450,000 app developers that Google announced were developing for Android in 2011 has, according to most estimates, swollen to between 1 and 3 million worldwide. So why do we care?

Well, first, because every embedded product requires development of an embedded application. And that in turn requires a ready pool of experienced application developers. Ten years ago this was a good argument for using Windows CE, since many of the millions of Win32 or .NET framework developers could be immediately productive writing Windows CE applications. Today that same calculus applies to Android app developers.

Second, the huge Android market means that there is a vibrant software ecosystem, with many companies providing tools and libraries aimed at Android app developers. In addition, commercial Android applications themselves offer a potential source of intellectual property that can be licensed to reduce time to market for embedded devices. Third, processor vendors and other chip vendors are actively competing to have their products used in Android devices, which means they are providing software to help integrate their products into Android.

If you make an embedded processor with an ARM core and a GPU today, chances are you already have a processor support package for Android or have one in the works. Likewise, other embedded chip vendors are supplying drivers and libraries targeted at reducing integration time for their products with Android.

Slide 15: Android Architecture

To better explain some of the other pros and cons of using Android, let's take a quick look at the Android software architecture.

Like any Linux distribution, the heart of Android is the Linux kernel running in privileged mode. On top of that, Google has replaced the GNU OS found in most Linux distributions with its custom Android OS. Underlying the Android OS is a hardware abstraction layer and a collection of native apps, many of which were built by the open source community.

Android applications gain access to operating system services through the Android frameworks, which include the user interface framework we discussed earlier. Android phones and tablets also include a bunch of built-in Google applications such as Google Maps, Gmail, Chrome, etc. Many Google and third-party apps also make use of an optional component called Google Play services, which provides an ever-expanding API of extended services.

In terms of code, the upper layers of Android are all written in Java; the lower layers and the Linux kernel are all written in C. And almost everything is available through open source licenses, which represents another advantage for Android over commercial alternatives: no royalty payments and no restrictions on how Android can be used. The exceptions are the Google applications and Google Play services, which can be licensed from Google at no cost, but not without a large number of restrictions and requirements.

But in practice, if you're like most embedded developers, you don't care about licensing Google's applications or even telling your customers that your device is running Android. You'll take Google's open source software, customize it as you see fit, replace their apps with your custom embedded app, and ship your product without ever having to deal with Google at all.

Slide 16: Why Does Android Need a HAL?

One thing that may seem surprising is that Android has a hardware abstraction layer at all. For one thing the hardware on your board is going to typically require a driver in the Linux kernel anyway. And for another, the Linux kernel already has established interfaces that Android could rely on for the types of hardware you typically find on phones and tablets. Yet Google had several good reasons for creating a hardware abstraction layer that sits under the Android OS but on top of the Linux kernel.

Foremost, Google really wanted to prevent Android phone and tablet OEMs from modifying the Android OS or Android frameworks in any way at all. So they needed to establish a code layer that OEMs could use to implement device-specific modifications to the behavior of Android. And while some of that could have been done at the kernel level, Google wanted to let them use off-the-shelf kernel drivers wherever possible. Given these requirements, a hardware abstraction layer makes a lot of sense.

Note that embedded Android devices don't really follow this model. While it's naturally wise to limit the number of changes to the Android OS or Android frameworks as much as possible, in practice you should expect you'll have to make them, for the simple reason that your embedded device is not a phone or a tablet. You are going to want to modify the way Android behaves in more profound ways than can be supported by the hardware abstraction layer alone.

Another argument for a HAL is that support for some types of hardware is typically implemented in user space code, but Google wanted to give Android a standardized way to interface to that hardware. A great example is software support for GPS. This software typically relies on a UART driver or other kernel driver for communication with the GPS receiver, but the parsing happens in user space code that may have vendor-specific requirements. For Android it makes sense for this GPS parsing code to sit in the hardware abstraction layer.
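As a rough illustration of the kind of user-space parsing such a HAL module performs, here is a simplified sketch in plain Java that converts an NMEA `$GPGGA` sentence into decimal degrees. Real vendor code validates checksums, handles many sentence types, and often speaks binary protocols; the class and method names here are purely illustrative.

```java
public class NmeaParser {
    // Convert NMEA "ddmm.mmmm" (or "dddmm.mmmm") plus a hemisphere
    // indicator into signed decimal degrees. Minutes always occupy the
    // two digits immediately before the decimal point.
    static double toDegrees(String field, String hemisphere) {
        int degDigits = field.indexOf('.') - 2;
        double deg = Double.parseDouble(field.substring(0, degDigits));
        double min = Double.parseDouble(field.substring(degDigits));
        double val = deg + min / 60.0;
        return (hemisphere.equals("S") || hemisphere.equals("W")) ? -val : val;
    }

    // Extract {latitude, longitude} from a $GPGGA fix sentence
    // (checksum verification omitted for brevity).
    static double[] parseGga(String sentence) {
        String[] f = sentence.split(",");
        if (!f[0].equals("$GPGGA")) {
            throw new IllegalArgumentException("not a GGA sentence");
        }
        return new double[] { toDegrees(f[2], f[3]), toDegrees(f[4], f[5]) };
    }
}
```

Because this logic is ordinary user-space code with no kernel dependencies, it slots naturally into a HAL module rather than a kernel driver.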

The final reason is that Google wanted to provide chip vendors with a way to support their hardware using proprietary software, which is difficult to do in the Linux kernel given the constraints of the GNU General Public License. The GPL is considered a copyleft license, which in practice means that, in most cases, you are required to release source code for any modifications or additions that you make to the Linux kernel under the same license. With Android, chip vendors will still need a Linux kernel driver to access their hardware, but they can move their secret sauce into the Android HAL, where it can be protected by the vendor's proprietary license.

Slide 17: Android Architecture Takeaways

Given this high-level review of the Android architecture, what does this mean for embedded developers? Well, first, consider the implications of the kernel being licensed under the GNU General Public License. The requirement that you release your modifications and additions to the Linux kernel can be a tough pill to swallow if you are unused to working with open source code. But it's important to remember that the GPL is one of the reasons that Linux can be brought up on new hardware so quickly and is often used to improve time to market.

This is because the standard Linux kernel code base includes drivers for hundreds of processors and thousands of peripherals as well as actual board support packages for thousands of boards. With even more available through private code repositories across the web. In this way the GPL acts like a double-edged sword. While it may be true that your competitors can view and even use code from the Linux kernel on your device, it's just as true that you can use code from the Linux kernel on theirs or anyone else’s.

In contrast, the Android Open Source Project, which encompasses all of Android except for the Google apps and Google Play services, is distributed by Google under the Apache license or other so-called permissive licenses that don't require any disclosure of your modifications. As with the GPL, however, this is a double-edged sword.

On one hand, the Apache license makes it possible to keep your modifications and additions to Android proprietary. But on the other, it makes it very difficult to find sample code for the one part of Android you are definitely going to need to modify, the hardware abstraction layer, beyond what is provided by Google for their Nexus devices or by your processor vendor in their Android support package.

It's also important to recognize that Android is fairly unusual as embedded operating systems go. On one hand, unlike Linux or many other open source projects, Android is controlled entirely by Google. And the fact that Google uses permissive licenses for this code means that future modifications and additions don't have to be released. While highly unlikely, Google could decide tomorrow that future versions of Android will no longer be published as open source, or that some portions of the code will be restricted to official licensees only.

On the other hand, unlike commercial embedded operating system vendors, Google couldn't really care less if we decide to use Android for our embedded products, because we don't help them sell eyeballs to advertisers, which is really what Android is all about for Google. One implication of this is that the internals of Android are not well documented, again because they don't expect phone and tablet OEMs to be making modifications there. Another implication is that Google only cares about maintaining compatibility at the Android application layer, which means they won't hesitate to make architecture changes under the hood that might make your life difficult if you decide to upgrade to a new version of Android.

Slide 18: Android Security

One common concern about using Android for embedded projects is security. There is barely a phone or tablet on the market that hasn't been rooted and we hear regular news reports of Android devices being compromised in various ways. But in fact you really can't judge Android security by what happens with consumer mobile devices because the security on those devices is compromised in ways that would not necessarily happen on a well-designed embedded device. For instance consumer Android devices have to support the Google Play store, which allows third party applications and operating system updates to be downloaded and executed on the device, which significantly increases the attack surface for the device.

Likewise, consumer mobile devices also have to support software development and debugging as well as some level of access to the Android shell using ADB, the Android Debug Bridge that runs over USB. All of which makes security more challenging. And some OEMs have made mistakes, like failing to change the default Android security keys, which can further weaken security. This is why we cover how to lock down an Android device as part of the Embedded Android Boot Camp.

Of course it is definitely the case that Android has security holes and will undoubtedly continue to have them in the future. That's hard to avoid with an operating system this large. But what Android has going for it is that it runs on the Linux kernel, which is widely considered to have good security, and that Google designed the Android software architecture to provide a higher level of security than you might find in many embedded Linux distributions.

Specifically, Google utilized the multi-user security features of Linux, which prevent one user from accessing another user's data, in a unique way: they force the Android OS, many of the native apps, and each Android app to run as a separate user without root access, effectively sandboxing the Java and native code in each process. As a result of this architecture, Android apps can make direct calls to the Linux kernel for services like memory allocation or thread creation using Bionic, Google's implementation of the standard C library. But in order to make calls to the Android operating system, which is running as a separate user, Android apps have to use inter-process communication, and Android provides an efficient form of IPC called Binder.

Slide 19: Supporting Unique Hardware in Android

Another question embedded developers frequently ask is, "How does an Android application go about working with unique or custom hardware that isn't supported by Android out of the box?" Common examples include FPGAs or other programmable logic devices, CAN or other embedded buses, wireless interfaces other than Wi-Fi and Bluetooth, data acquisition hardware, etc.

As we've already noted, Android apps are written in Java, and only code running in the Linux kernel can directly access hardware. How do we go about exposing a kernel driver to user space in such a way that it can be accessed directly from an Android app running in a Java virtual machine?

Well, one option is shown by the left-hand fork in this diagram. It involves adding a new system service to the Android OS that can handle interfacing to your custom driver and wrapping it in a new Java framework that can be used by your Android application. This is not as hard as it sounds; it's something students learn to do as part of Barr Group's Embedded Android Boot Camp. But it does require modifications to a number of files within the Android Open Source Project and has the side effect of adding new calls to the standard Android SDK.

There is of course nothing wrong with extending the Android SDK, and generating a custom SDK that can be used by application developers in Eclipse or Android Studio is straightforward. But it is the sort of thing that gives Google nightmares. What's vitally important to Google is that all Android applications are able to run on all Android devices, and that the app market is not fragmented by phones and tablets with custom APIs, which is exactly what Amazon has done with its Kindle Fire products. To this end, Google prohibits OEMs who license the Google apps or Google Play services from doing anything that would fragment the Android market. But there is not much Google can do about everybody else.

But there is another option, shown by the right-hand fork in this diagram. In Linux, all of your custom kernel driver's interfaces to user space will look just like files within the file system, which application code can open and read from or write to in order to communicate with the driver. And Java supports opening, reading, and writing files, so these sorts of drivers can be accessed directly from an Android application, assuming the driver's permissions are set to allow this. This approach doesn't require any modification to the Android OS or Android frameworks, but it may be less secure. It is also important to note that Android applications written in Java can call native code in shared libraries written in C, using a technology called JNI, the Java Native Interface.
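To make the file-style access concrete, here is a minimal native-side sketch in C. The device path passed in would be whatever name your driver registers under /dev or /sys (any readable file behaves the same way), and the function name is our own; a Java app would do the equivalent with its file APIs:

```c
#include <fcntl.h>
#include <unistd.h>

/* Read up to 'len' bytes from a driver's device node (or any file).
 * Returns the number of bytes read, or -1 on error. A real call
 * might look like read_from_driver("/dev/custom_radio", buf, 64),
 * where the path is whatever your driver exposes. */
long read_from_driver(const char *path, char *buf, long len) {
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    long n = (long)read(fd, buf, (size_t)len);
    close(fd);
    return n;
}
```

Writing to the driver is symmetric: open with O_WRONLY and call write. The driver's file permissions (and, on Android, SELinux policy in later versions) determine which apps are allowed to do this.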

Beyond reading from and writing to drivers, Linux also supports more sophisticated ways of interacting with them, such as watching a group of drivers to see if any of them has new data, or directly accessing a buffer exposed by a driver. While these sorts of operations aren't supported in Java, they can easily be supported in a shared library called by an Android application.
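On Linux, "watching a group of drivers for new data" is the poll family of system calls, and directly accessing a driver's buffer is mmap. A minimal sketch of the polling half, as it might appear in such a shared library (the function name is our own; any file descriptor, including a device node's, works the same way):

```c
#include <poll.h>
#include <unistd.h>

/* Wait up to timeout_ms for the given descriptor (e.g. an open
 * driver node) to have data available. Returns 1 if readable,
 * 0 on timeout, -1 on error. poll() accepts an array, so the same
 * call can watch a whole group of drivers at once. */
int wait_for_data(int fd, int timeout_ms) {
    struct pollfd p = { .fd = fd, .events = POLLIN };
    int r = poll(&p, 1, timeout_ms);
    if (r < 0)
        return -1;
    if (r == 0)
        return 0;
    return (p.revents & POLLIN) ? 1 : -1;
}
```

A Java app would call a wrapper like this through JNI, keeping the blocking wait out of the Java layer.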

Slide 20: Real-Time Considerations: Pros

Real-time software requirements can introduce additional challenges when using either Android or Linux on embedded systems. When determining whether embedded Android or Linux can meet your real-time requirements, an important first step is to determine which tasks have hard real-time deadlines and how tight those deadlines are. For instance, many embedded devices may find that their hard real-time deadlines apply only to lower-level code, with no such deadlines at all for their user interface code.

While Linux was not originally designed for use in real-time systems, over the last decade a lot of effort has been put into upgrading the standard Linux kernel to support real-time software. Although this effort is not complete, the level of real-time support offered by Linux today may be sufficient for many embedded projects. For example, recent versions of the Linux kernel support two special fixed-priority scheduling policies. These can be used for real-time tasks that are capable of blocking all lower-priority threads on the system.
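The two fixed-priority policies are SCHED_FIFO and SCHED_RR. A minimal sketch of requesting one of them from C follows; note that switching policies requires root (or CAP_SYS_NICE) on a stock kernel, so the call is expected to fail for an ordinary user, and the function names are our own:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <string.h>

/* Ask the kernel to move the calling thread to the SCHED_FIFO
 * fixed-priority policy. Returns 0 on success; returns -1 (EPERM)
 * when run without root/CAP_SYS_NICE. */
int make_fifo(int priority) {
    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = priority;
    return sched_setscheduler(0, SCHED_FIFO, &sp);
}

/* The valid fixed-priority range can be queried portably
 * (typically 1..99 on Linux). */
int fifo_max(void) { return sched_get_priority_max(SCHED_FIFO); }
int fifo_min(void) { return sched_get_priority_min(SCHED_FIFO); }
```

A SCHED_FIFO thread runs until it blocks or yields, preempting every normal-priority thread on the system, which is exactly why it must be used sparingly.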

The kernel now also has an improved architecture for interrupt handling that supports interrupt service routines paired with separate interrupt service threads. These ISTs run at real-time priority levels and make it easier to move lengthy interrupt handling operations out of an interrupt context and into a thread context.

The kernel now also supports priority inheritance for mutexes and a new blocking sleep function that can be scheduled with microsecond resolution. It's important to note that, while Linux applications can be scheduled at real-time priority levels, the rest of these features are restricted to code running in the Linux kernel. In practice this means that Android may only be a reasonable choice if your real-time code can be restricted to the driver level.
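For native code, these two features surface through standard POSIX calls: priority-inheritance mutexes via pthread_mutexattr_setprotocol, and high-resolution blocking sleeps via clock_nanosleep. A sketch, assuming a glibc- or Bionic-style C library (function names are our own):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <time.h>

/* Initialize a mutex with priority inheritance enabled, so a
 * low-priority holder is temporarily boosted while a
 * higher-priority thread is blocked on the lock. Returns 0 on
 * success. */
int init_pi_mutex(pthread_mutex_t *m) {
    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setprotocol(&a, PTHREAD_PRIO_INHERIT);
    int r = pthread_mutex_init(m, &a);
    pthread_mutexattr_destroy(&a);
    return r;
}

/* High-resolution blocking sleep against the monotonic clock;
 * the argument is in microseconds. Returns 0 on success. */
int sleep_us(long usec) {
    struct timespec ts = { .tv_sec  = usec / 1000000,
                           .tv_nsec = (usec % 1000000) * 1000 };
    return clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL);
}
```

Under the hood both are thin wrappers over the kernel facilities discussed above (PI futexes and high-resolution timers), which is where the real-time behavior actually lives.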

Slide 21: Real-Time Considerations: Cons

There are, however, some very real challenges to meeting real-time requirements with Android. One is that Android apps themselves can't be scheduled at real-time priority levels, both because of limitations in Java and because the extensive use of thread pools for inter-process communication with Binder would make that impractical.

Setting Android aside, even the standard Linux kernel presents a number of challenges to writing real-time software. One is that kernel services are not guaranteed to complete within a specific amount of time. Another is that the standard Linux kernel is not yet fully preemptible. But the biggest challenge may be that many off-the-shelf Linux drivers have not yet been updated to work well in a real-time environment. In particular, they may spend too much time in a state in which interrupts are disabled, either in an interrupt service routine or when holding spin locks, which are a kind of lightweight mutex. Of course, off-the-shelf drivers used in any RTOS aren't completely immune from these issues, but they are going to be more common in Linux given its history.

Even in cases in which you have hard real-time requirements that cannot be met by Android, that's not necessarily a showstopper. Some products today use a virtualization hypervisor to allow an RTOS to run alongside Linux, but at a higher priority, all on a single core. Other products use a dual-core solution in which Android runs on one core to provide the device's user interface, and a traditional RTOS runs on another core and handles the real-time software. In fact, it's not unusual today for ARM Cortex-A processor vendors to include an ARM Cortex-M core on the same chip for exactly this reason.

Slide 22: Other Android Challenges

But ultimately the biggest impediment that many embedded projects will encounter in using Android is its high resource requirements. As we've already noted, Android requires a reasonably fast processor with a GPU. For versions since Android 4.0, Ice Cream Sandwich, a processor with an ARM Cortex-A8 core running at 800 MHz probably represents a realistic minimum. 256 MB of flash is pretty typical, though naturally this will depend on your application's resource requirements. For comparison, except for the GPU, these specs exactly match those of the Nest thermostat. But whereas the Nest only needed 64 MB of SDRAM, Android really requires 256 MB at a minimum.

In the very unlikely event that your embedded product would need Google apps or Google Play services to be installed, you'll also have to meet the requirements of Google's CDD, the Compatibility Definition Document, which dictates a large number of requirements that your device's hardware and software must meet. For example, according to the CDD for the most recent version of Android (version 5), your device will need at least 512 MB of SDRAM and 3 GB of storage.

Another daunting challenge of using Android for embedded projects is the not-insignificant learning curve. Both Linux and the Android Open Source Project are very large code bases. To make matters worse, official documentation for the Linux kernel is poor by industry standards. And while there is excellent documentation for Android application developers, documentation is pretty much nonexistent for anyone working at the Android operating system level. This may sound like an insurmountable challenge ("How on Earth can anyone deal with code bases this large, especially without documentation?"), but in practice it is not. The reason is that Linux and Android are built on software frameworks, and once you understand how these frameworks work in the general case, you can apply this knowledge, using the code itself as documentation, to understand the particulars.

Incidentally, this is precisely what we do at Barr Group's Embedded Android Boot Camp. In a little under a week we teach students the frameworks that all Linux kernel drivers are based on and, likewise, the frameworks that make up the Android operating system. By the fourth day of class, students are expected to write a custom Linux driver on their own from scratch and modify the Android System Server to support custom data from this driver. While this is just a first step, it gives students the fundamentals they'll need to start exploring Linux and Android independently, so they can learn to bring up these code bases on their own custom hardware.

Slide 23: Case Study #1: Wireless GPS

Next we're going to look at the suitability of Android for a few different types of fictional embedded devices. In our first case we are going to build a ruggedized handheld GPS with a modern user interface and mapping features.

Because we expect this GPS to be used by teams in remote areas without cell reception, it also includes a long-range 900 MHz wireless chipset, which will allow team members to track each other's locations and even send text messages to one another. The device also supports Wi-Fi and Bluetooth for downloading maps or syncing data between units.

Because this is a ruggedized unit designed to be used by people wearing gloves, it doesn't have a touch screen, relying solely on a three-axis joystick and a few buttons for user input, which raises the stakes for the modern user interface created by the application development team. To augment these input devices for entering text messages, the device will also have audio capabilities and an offline speech recognition engine.

Slide 24: Big-Ticket Software Components

Based on these requirements, if we were going to implement this in Android, which software components are likely to prove most challenging? Obviously, creating a GPU driver and the Android hardware abstraction layer components for the graphics services would be extremely challenging. But we'll eliminate the need to build those ourselves by selecting a processor vendor that already has an Android support package with these implemented.

In the kernel we'll need drivers for our audio codec and our Wi-Fi chipset. Again, we don't want to create these from scratch, so we'll limit our chip selection to those that have good Linux support. Likewise, we'll choose a Bluetooth chipset that is already well supported in the version of Android we are using. The long-range wireless driver, on the other hand, will need to incorporate a custom frequency-hopping algorithm, so we'll end up creating that driver ourselves, using sample code from the chip vendor and borrowing code from similar drivers created for other Linux projects and released under the GPL.

At the Android OS level we'll need to modify the audio service HAL library to support the custom audio features of our device. If our GPS vendor provides a location HAL library we'll use that; otherwise we'll be on the hook for creating one on our own. And we may decide to create a new Android service for the long-range wireless driver, although we could also access that driver directly from our application.

At the app layer we will probably need to license third-party mapping and speech recognition libraries. While it is true that Android apps can incorporate Google Maps, we'll need to support topographical maps and other custom maps not supported by Google Maps. And while Android does have a speech recognition library that can be used offline, it's very possible that this won't meet our requirements.

And of course the biggest software component we'll need to create is the embedded application itself. While it is possible that a single software engineer could handle the modifications and additions that we need to make to both the Linux kernel and the Android operating system, the size and complexity of embedded applications often requires a team of engineers, especially if the project involves advanced features like modern user interfaces, mapping, and speech recognition.

Slide 25: Android is a Strong Fit

So is Android a good fit for this project? Definitely. The popularity of Android and Linux means that we'll have a good selection of processors, audio codecs, and Wi-Fi and Bluetooth chipsets that are already well supported by Linux and Android. For the long-range wireless driver, the popularity of Linux will make it easier to find developers who can tackle it. And the real-time capabilities of the Linux kernel would be sufficient for our frequency-hopping algorithm.

At the application layer, there is no question the Android framework can support our requirement for a modern user interface, and the popularity of Android should make it easier to find application developers with Android experience. The result should be a reduced time to market for this product.

Our biggest challenge will likely be finding a developer who knows how to work with the Android Open Source Project, who knows how to create hardware abstraction layer libraries and modify the Android OS. Training an existing embedded engineer on our team to work with Linux and Android may be the best option. The good news is that because this product is a handheld device, it will hopefully reduce the number of customizations we need to make to the standard behavior of Android.

Slide 26: Case Study #2: Low-Cost Nest

For our second case, we're going to consider whether it might make sense to build a low-cost competitor to the Nest using Android. This will require some suspension of disbelief, since in the real world Nest Labs would likely sue us for violating their design patents. In this hypothetical scenario, however, our goal is to create a device that looks and behaves just like the Nest but that sells for maybe half of the $250 price tag of the Nest today.

To reduce the bill of materials and manufacturing costs, one major change we're going to make is to replace the smart back plate, which on the Nest has a Cortex-M processor and the battery, with a dumb one, which will allow us to rely on a single processor for the entire device. But doing this means we need to handle the sensors and the HVAC circuitry alongside our GUI on the main processor. We're also going to leave out the 802.15.4 wireless chipset, but we obviously need to support Wi-Fi. We'll also work from the assumption that our margins are likely to be less than those enjoyed by Nest Labs and, likewise, that our industrial designers and mechanical engineers are able to shave some cost off the package.

But even given those reductions we're still going to need to figure out how to reduce the cost of the bill of materials needed to duplicate the modern user interface of the Nest.

Slide 27: Android is Probably Not a Good Fit

There's no question that the Android user interface framework is sophisticated enough to allow us to create an Android application that could reproduce the modern user interface of the Nest. And, as we've discussed previously, the benefits of both Android and Linux would likely speed our time to market. For all of these reasons it would be very tempting to use Android for our Nest clone.

One challenge of using Android in this case is that the real-time software requirements for our Nest clone might be hard enough, or tight enough, that we can't support them easily on Linux. As we discussed earlier, using a virtualization hypervisor or a dual-core solution could allow us to work around this limitation and use a traditional RTOS for those real-time tasks. But there's no way to work around Android's resource requirements. Android is going to require four times the 64 megabytes of RAM included in the Nest. And Android doesn't provide us with any relief on storage requirements or processor speed relative to the Nest. In addition, unlike the Nest, Android will require our processor to also have a GPU. These resource requirements alone are enough to make Android a poor fit for our project.

Slide 28: Better Alternatives

If we don't use Android, what alternatives might we consider that would still let us build the Nest-style user interface? Well, one would be to still use Linux but rely on one of the other modern UI frameworks that we discussed earlier, or build our own. This is exactly the approach that the Nest uses. But in our case we're going to further reduce our BOM cost by choosing a processor with a GPU.

This may seem counterintuitive given that one of the reasons we excluded Android was that it requires a processor with a GPU. But what is important to keep in mind is that Android carries enough of a performance penalty, due to its extensive use of Java and the need for inter-process communication whenever the application makes calls to the operating system, that we still need a fast processor.

If we get rid of Android, we can write our Linux application in C or C++ and rely on a slower processor. In fact, just as was the case with the original iPhone, if we use a modern UI framework that can leverage the capabilities of the GPU, we can rely on the GPU to match the graphics performance of the Nest while relying on a much slower and cheaper ARM core for everything else.

For example, instead of the 800 MHz TI AM3703 processor in the Nest, we might choose the 300 MHz version of the TI AM3354 processor, which is around $10 less but still includes a PowerVR GPU. Interestingly, this processor also includes an ARM Cortex-M3 core that we might use for our real-time tasks. Beyond the processor, it's likely that we can streamline Linux in order to significantly reduce the 64 megabytes of RAM and 256 megabytes of storage required for the Nest. And switching from Linux to a traditional RTOS might let us reduce this footprint even further, assuming that we could still manage good support for Wi-Fi and a modern UI framework.

Of course these sorts of changes, made to reduce the cost of the bill of materials, will likely end up adding to the time required to bring the product to market. And if we continued down the same path, it's worth asking whether we might be able to reproduce the Nest user interface on something as small as an ARM Cortex-M4 microcontroller. Although these processors currently lack a full GPU, some vendors are now including 2D graphics acceleration features. My hunch is that you might not be able to reproduce all aspects of the Nest user interface, but I think that with some work you might be able to get pretty close.

Slide 29: Start with UI Design

One of the things we stress at Barr Group is the need to understand your system requirements before you get too far along on designing your hardware and software architecture. You obviously don't want to end up with a system that is not capable of delivering what your product needs. But by the same token you also don't want to include hardware or software components that aren't really needed and won't provide a good return on your investment. This philosophy certainly applies to user interface design as well.

What I think we're seeing today is that the incredible success of mobile phones and tablets is starting to change our expectations for how we interact with all of the embedded devices we come into contact with. And that means that a modern user interface, however you define that for your product, can be an important competitive advantage, but one that comes with a cost. Which is why it's important to start your user interface design process early, beginning with storyboards and animatics and moving on to simple prototypes, and allow this to drive your hardware and application framework requirements.

Android is a great operating system and has plenty of advantages besides its application framework, but it would be a mistake to use it if your product could be built with something simpler and smaller.

Question & Answer

Q: Do Embedded Android devices have to license Google Play?

No, most embedded Android devices would not need to license either Google Play or Google Apps from Google. In order to do that, you have to meet the requirements of something called the CDD—the Compatibility Definition Document—which places a lot of requirements on your device from both a hardware and software standpoint, and most embedded devices would not be able to meet that full set of requirements.

Q: Why didn’t the list of GUI application frameworks include ones that are commonly used in the embedded arena, like emWin and PEG?

The reason is that, while these are perfectly fine application frameworks, they are more first-generation GUI frameworks that lack a lot of the features that you need to have in order to create the sorts of modern user interfaces discussed in the webinar. Win32 and the .NET Framework would fall into the same category.

If you can create the GUI you want for your product using these sorts of first-generation application frameworks, you probably shouldn't be using Android. At the very least, the benefit of using Android goes way down. There certainly still are some benefits in terms of access to developers and some architectural benefits, but you'll have more complexity and size issues to deal with using Android versus using one of these more lightweight application frameworks if they can give you what you need.

Q: What is an example of a "traditional RTOS" that is an alternative to Embedded Android/Linux?

A "traditional RTOS" would be an operating system designed for use in real-time embedded systems, including resource-constrained devices that may be based on a microcontroller. There are many open source and commercial examples: FreeRTOS, MicroC/OS, QNX, ThreadX, VxWorks, OSEK, and Nucleus, to name a few.

Q: What about power consumption / battery life when Android is used?

Because Android was designed for mobile phones and tablets, minimizing power consumption was obviously a critical goal for its architecture. For instance, Google designed Android to go into low-power states automatically following a period of user inactivity, unless apps specifically request that the device stay in a high-power state. This stands in contrast to the traditional Linux model, which leaves the device in a high-power state unless an application specifically requests that it enters a low-power state.

What Android can't directly impact, however, is whether application software and drivers are written with power consumption in mind. If you design your application or drivers to poll for long periods, or your drivers handle high-rate data using interrupts instead of DMA, power consumption is going to suffer.

Q: What support does Android have for updating Android and application software in the field?

Android supports a recovery mode, which is a special root filesystem that your device can boot into and which supports two modes. One mode allows your device to boot to a graphical menu that you can customize to offer options like erasing the flash, restoring factory settings, or applying a software update using some sort of external media like USB or microSD. The other mode allows your device to boot using a Google-defined update package, which contains an update script and the software components that are to be updated. Using this update package, you can update everything from the bootloader to the Linux kernel to the Android OS to the application, either as individual pieces or as an integrated package.

Typically, there’s going to be some work involved in getting that implemented. For instance, recovery mode doesn’t use the Google Play Services, so you’re going to have to handle downloading the update package to the device, which will likely mean writing some sort of application-level software to handle that piece. But once the update package is loaded on the device and you configure the device to boot into update mode, the rest of the update is handled automatically.

Q: What are the options for porting a C application to Android?

It is possible to run native applications written in C underneath Android. These are typically referred to as daemons in the Linux world or low-level services in Android. In this way, you can certainly take a C application and port it to Android and run it directly on top of the Linux kernel underneath Android. The downside to this approach is that your application won't have access to any of the Android application frameworks, so you can’t draw to the screen or do anything GUI-related using those frameworks.

There is another option, however, which is that Android also supports the NDK—the Native Development Kit—that gives you the ability to write shared libraries in C that can be called directly from an Android application. The Native Development Kit is an SDK that provides some additional features that you wouldn’t have available if you were just running directly on top of the Linux kernel. It’s important to remember that Bionic—the Android standard C library—is not completely POSIX compliant, so you may run into some issues porting code depending on the specifics of your application.

Q: How do Embedded Android and Embedded Linux fare against each other?

Embedded Android is, in a sense, an Embedded Linux distribution because it is based on the Linux kernel. The main difference is what happens up in user space: Google has replaced the GNU OS, which is the user space code for most Embedded Linux products, with the custom Android OS.

In terms of pros and cons, you can obviously build a more streamlined device using Embedded Linux, but there really isn't a standard there for application development like there is for Android, so you'll have to go out and find an app framework that does what you want. Options include some of the application frameworks mentioned in the webinar and some that are more traditionally Linux-oriented, like GTK. So with Embedded Linux, it's a matter of shopping around and finding the pieces that work for your project.

Q: What topics do you cover in your Embedded Android Boot Camp?

The Embedded Android Boot Camp is split into a couple of different sub-courses.

The first course, which takes about a day and a half, focuses on Linux and especially Linux driver development. Almost every Embedded Android or Embedded Linux project requires customizing, writing, or configuring drivers for your specific hardware, so it's important to understand how those drivers work and what the driver architecture is. Because this is a very hands-on and coding-intensive boot camp, you will write a driver from scratch using actual hardware in that first day and a half.

In the second day and a half, we focus on the Android operating system level. We dive into the architecture, look at how to customize Android for specific hardware, go into the hardware abstraction layer, and take a close look at the System Server. Even if you don’t know Java, Java is close enough to C that, given the introduction we provide, you’ll still be able to pick up Java quickly as long as you know C well.

In general, we provide a lot of information about Android, including how to do the sorts of customizations that are typical for embedded devices. This includes things like setting Android up in kiosk mode, so that your device boots into your application and that's the only application that runs on the device.

The fourth day of the course consists of a Capstone Project, which is a daylong coding project where you apply everything that you’ve learned up until that point. This includes writing another Linux driver from scratch and then modifying Android to allow the custom data from that driver to be exposed through the Android software stack to an Android application. In the last half-day, we cover Embedded Android security and also information about licensing, covering the various open source licenses in a lot more detail.

Needless to say, this is a very coding-intensive and challenging Boot Camp in which everyone learns a lot, even if they are very experienced developers.

Q: How is the Embedded Software Boot Camp different from the Embedded Android Boot Camp?

The Embedded Software Boot Camp is also an intensive hands-on training week for embedded programmers, but the big difference is that we're really looking at how to use the C programming language, with or without a real-time operating system, to build reliable embedded software. We start all the way down at the driver level and look at hardware interaction from C and then move to looking at real-time operating systems and their alternatives. There is also a capstone project in that course where you build an actual system in a team. That's what's different, mostly the subject matter but not the format.

Q: What should I do to prepare for the Embedded Android Boot Camp?

You don’t really have to do very much. The main requirement is that you have to know the C programming language well. If you have that, and you have some experience with embedded software in general, especially how to work with hardware, then you’re going to be in good shape.

There are a few things you can do to further prepare. For instance, if you want to play around with a Linux command line, such as the bash shell, and get up to speed on that, you'll be ahead of the game. Likewise, taking a look at Java will make things easier, but we provide enough material in the course that people who come in without Java experience are still fine.
