Chip Design Magazine





Dual Core Embedded Processors Bring Benefits And Challenges

By John Blyler
The embedded processor market has now fully embraced the multicore world with the recent introduction of the dual core option for Intel’s Atom devices. Dual-core embedded processors offer designers many new benefits while presenting new challenges. How will the multicore option affect low power designs, virtualization, and single-threaded legacy software? Will these devices lead to more connectivity? Is the embedded processor market looking like the ASSP market of the future?

To answer these questions, Low-Power Engineering talked with Jonathan Luse, Director of Marketing for the Low-Power Embedded Products Division of Intel.

LPE: How does dual-core affect power consumption?
Luse: It’s best to think of the Atom as roughly split into two vectors: performance and power. The performance vector is a little less power constrained and a little more performance oriented, but still low power compared to Intel’s Core family of processors. The other major vector is low power. At the winter Embedded World Conference in Nürnberg, Germany, we introduced our entry-level performance processors, which included a dual-core option at about 13 watts thermal design power (TDP), compared to 5.5 watts for the single-core kit at 1.6 GHz. This vector was designed to have a little more tolerance for power, with the expectation that input/output (I/O) interfaces and performance would increase over time.

LPE: Is there a target wattage for future embedded Atom processors? Low power competition is stiff, especially in the mobile markets.
Luse: The low-power vector is a strategic imperative for Intel. But the low-power roadmap is a journey, not a destination. The minute that I have 5 W products, the 4 W market calls me up saying, “You’re so close to our needs that if you just string another watt out, then we’ll start consuming your products.” But the minute that I have 4 W processors, the 3 W market will call me, and so it goes. Ideally, you could go to the spaces below Atom, i.e., into the application-specific standard product (ASSP) and microcontroller spaces, where power is measured in milliwatts. Strategically, I look at that as a direction to move, provided we can solve the performance and technology challenges needed to match the low-power goals.

LPE: Does scalability remain intact with the new dual-core Atom?
Luse: Yes, it’s completely instruction-set compatible up and down the processor chain, from the embedded Xeon to the Atom. Obviously, there are some advanced functions in the higher-end processors that won’t execute on the lower-end ones.

LPE: How about virtualization?
Luse: The standard uses of virtualization remain applicable. Nowadays, the trend has been to allocate functions to a core, as opposed to splitting virtual machines across the same core, such as trying to emulate a quad-core system on a dual-core processor. Today, many discussions focus on blending real-time operating systems (RTOSes) with traditional operating systems using virtualization techniques. The goal is to mesh applications that have historically run on physically discrete systems into a virtualized environment.

This goal comes from vendor sensitivity about their RTOS performance being adversely affected by a general purpose OS. Historically, mission-critical applications like safety systems have a real time, deterministic operating system that is physically separate from a supervisor type of controller. However, today’s customers are both form factor and cost constrained in their applications. This has encouraged designers to be creative in the way they use virtualization, such as with the blending of RTOS and OS applications. This is a virtualization phenomenon, not a processor one.

LPE: Let’s turn to the software side of design. How are legacy single-threaded applications being addressed?
Luse: The readiness of software for multicore systems is an ongoing challenge. Embedded systems carry a long history of code, vast numbers of lines, all single threaded and written to run on a single-core processor. Most programmers and their companies don’t want to recode everything just to make it multithreaded so it will run better on multicore systems. But these programmers do want to take advantage of the extra processors. That is where virtualization techniques can increase the processor’s compute density, i.e., take advantage of multiple cores in applications that use existing single-threaded software.
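Luse doesn’t describe a specific mechanism here, but one common way to get this effect on a dual-core part is CPU affinity: each unmodified single-threaded workload is pinned to its own core, so the two run side by side without any rewriting. The sketch below is purely illustrative (the `legacy_task` workload stands in for an existing single-threaded program); it assumes a Linux host, where `os.sched_setaffinity` is available, with at least two cores.

```python
import multiprocessing
import os

def legacy_task(core_id, result_queue):
    # Pin this worker process to one core; the workload itself stays
    # single threaded and unmodified (sched_setaffinity is Linux-specific).
    os.sched_setaffinity(0, {core_id % os.cpu_count()})
    # Stand-in for an existing single-threaded computation.
    result_queue.put((core_id, sum(i * i for i in range(100_000))))

def run_partitioned(num_tasks=2):
    # Launch one "legacy" process per core of a dual-core part.
    queue = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=legacy_task, args=(c, queue))
               for c in range(num_tasks)]
    for w in workers:
        w.start()
    results = sorted(queue.get() for _ in workers)
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    print(run_partitioned())
    # → [(0, 333328333350000), (1, 333328333350000)]
```

In a production system the partitioning would more likely be done by a hypervisor or the RTOS/OS blend Luse describes, but the principle is the same: each core is dedicated to one workload rather than one workload being rewritten to span cores.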

LPE: What are some of the more interesting applications that you’ve seen?
Luse: There is no way to predict all of the innovative ways in which customers create applications. For example, one customer is developing a smart energy harvester that supplies power to a wheel-bearing monitoring system in a rail car. This system monitors and manages the wheel-bearing motion to make sure the bearings are solid. It’s powered indirectly by the motion of the rail car itself. Like the self-winding mechanism in a Rolex watch, the energy harvester uses a weighted pendulum that swings back and forth, charging the system. The battery is charged by the motion of the rail car!

LPE: Do you see any emerging trends in the embedded space?
Luse: Perhaps the biggest trend is toward the connectivity of embedded devices. The cost of embedded connectivity and intelligence continues to go down. The next move for devices will be a growing awareness of their surroundings. Consider Amazon’s Kindle. Today, it’s completely unaware if another Kindle is nearby. Next-generation Kindles or similar devices may be more aware, and more creative in the ways they use that awareness.

LPE: Many connected devices require new sensors. Is Intel considering the addition of embedded MEMS and sensors in its devices?
Luse: The classic challenge is what to integrate and what to keep discrete. What types of sensors might be included? If you include those sensors on the die, then you affect the cost models. But that is the business challenge of the future. If you want those sensors to be close to the CPU, then you must add more specialization to the chip itself. It’s getting to the point where, in addition to general-purpose CPUs, there will also be a market for more application-specific features and derivatives that almost look like application-specific standard products (ASSPs). Looking at the ASSP market, ten years from now it starts to look like the CPU market, i.e., the amount of processing horsepower being put into an ASSP keeps increasing.
