Without being aware of it, an American encounters on average hundreds, if not thousands, of embedded systems every day. Just consider a modern car, where dozens of microprocessors control everything from engine functions, the lighting system and seat positioning to the activation of the windshield wipers when it rains. Add to this cell phones, digital cameras, entertainment systems, memory sticks, microwaves and coffee makers.
A key feature of these systems, often presented in the form of system-on-chip (SoC) devices, is that they contain an intimate mixture of hardware and embedded software. In many cases, the way in which these systems function is mission critical. As a result, the hardware and software must undergo extreme levels of co-verification to ensure that they perform to the highest standards possible.
In the not-so-distant past, the software content of embedded systems was relatively small compared to the hardware portion of the design. Thus, verification solutions, such as simulation and traditional big-box hardware emulation, were focused more on testing hardware.
Today, however, the software content of embedded products is increasing dramatically; a major system design house reports a year-to-year software growth rate of 200 percent compared to only 50 percent growth rate in the hardware portions of designs.
It is not uncommon for an embedded system to feature several million lines of code written by squadrons of software engineers, a development effort that vastly exceeds that of the hardware, with obvious implications for the associated budget.
Limitations of conventional verification solutions mean that about half of all embedded systems development projects come in months behind schedule, and less than half of all designs meet their original feature and performance objectives.
Without sufficiently detailed verification, the lost market share and revenue associated with late product introduction can be immense. Even worse, costs associated with the bad publicity that would arise from releasing a product that intermittently fails in the field are incalculable.
Let’s consider the advantages and disadvantages associated with conventional verification solutions, along with a new class of hardware emulation solutions that addresses the needs of embedded system hardware and software engineers.
Instruction Set Simulators
First, an observation is worth making here. Software developers and hardware engineers use different performance metrics. Software developers rate speed in instructions per second, whereas hardware engineers measure the pace of execution in cycles per second, or hertz (Hz). Since most embedded processors execute roughly one instruction per cycle, one million instructions per second (1 MIPS) equates to one million cycles per second (1 MHz).
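The one-instruction-per-cycle relationship above can be captured in a few lines. This is illustrative arithmetic only; the cycles-per-instruction (CPI) figure is an assumed approximation for simple embedded cores, and real processors vary.

```python
# Converting between the software metric (MIPS) and the hardware metric (MHz).
# CPI = 1.0 is an assumption: most simple embedded cores execute roughly
# one instruction per cycle, but superscalar or stalled pipelines differ.

CPI = 1.0  # assumed cycles per instruction

def mips_to_mhz(mips: float, cpi: float = CPI) -> float:
    """Millions of instructions per second -> equivalent clock rate in MHz."""
    return mips * cpi

def mhz_to_mips(mhz: float, cpi: float = CPI) -> float:
    """Clock rate in MHz -> millions of instructions per second."""
    return mhz / cpi

print(mips_to_mhz(100))  # a 100 MIPS ISS corresponds to roughly a 100 MHz core
```

With a CPI other than 1.0 (say, 1.5 for a core that stalls frequently), the two metrics diverge, which is why the equivalence in the text is only approximate.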
One common verification option used by software developers is the software-based instruction set simulator (ISS), which features a high-level behavioral model of the processor (usually written in C/C++). An instruction-accurate ISS can execute at 10 to 100 MIPS when used in stand-alone mode, without any other part of the design.
A standalone ISS provides visibility into what the software is doing. However, in order for a verification solution to be really useful to software engineers, it needs to allow them to develop and verify everything from the system initialization routines and hardware abstraction layer, through the real-time operating system (RTOS) and its associated device drivers, all the way up to the embedded application code itself.
As such, the entire design must be modeled either at the register transfer level (RTL) or at a higher level of abstraction. If everything but the processor is modeled at the RTL, the ISS is coupled to a hardware description language (HDL) software simulator or a hardware emulator that handles the peripherals and the rest of the design.
Unfortunately, coupling an ISS to an HDL simulator causes dramatic performance degradation to no more than a few kilohertz (kHz) at best due to the switching between the ISS and the RTL at every cycle. Higher performance of at most 500 kHz can be reached by interfacing the ISS to a traditional big-box emulation system.
If the verification system runs at anything less than 5 MHz, it's of no use to software developers in the context of an edit-compile-run-debug cycle. When running on an off-the-shelf, high-end PC, an ISS coupled to an emulator falls well below this level.
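A back-of-the-envelope calculation shows why these speeds matter for the edit-compile-run-debug cycle. The one-billion-cycle figure for booting an RTOS to a breakpoint is a hypothetical number chosen purely for illustration:

```python
# Wall-clock time to reach a software breakpoint at various verification
# speeds. BOOT_CYCLES is an assumed, illustrative figure.

BOOT_CYCLES = 1_000_000_000  # hypothetical cycles to boot an RTOS and hit a breakpoint

def run_time_seconds(cycles: int, speed_hz: float) -> float:
    """Wall-clock seconds to execute `cycles` at a sustained rate of `speed_hz`."""
    return cycles / speed_hz

for label, hz in [("ISS + HDL simulator (~10 kHz)", 10e3),
                  ("ISS + big-box emulator (~500 kHz)", 500e3),
                  ("fast FPGA emulator (~5 MHz)", 5e6)]:
    t = run_time_seconds(BOOT_CYCLES, hz)
    print(f"{label}: {t/3600:.1f} hours" if t >= 3600 else f"{label}: {t/60:.1f} minutes")
```

At a few kilohertz a single run takes more than a day, while at 5 MHz the same run completes in minutes, which is the difference between an unusable and a workable debug loop.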
Software Virtual Prototyping Platforms
A software virtual prototyping platform models the entire system at a high level of abstraction. Depending on the level of abstraction and the degree of accuracy built into the model, an entire SoC can be simulated at speeds ranging from roughly 100 kHz, if the model is cycle accurate, all the way up to 10 MHz, if it is completely untimed.
An SoC model described at a high level of abstraction can be useful for the early development of embedded software, before the RTL code becomes available. Also, this technique provides a useful way to develop non-timing-critical portions of the design, such as the graphical user interface (GUI). However, these virtual platforms trade off accuracy for speed, which means they are of limited use when it comes to detecting and identifying subtle problems in hardware/software integration.
Furthermore, many embedded systems run in a real-time mode, where the correctness of computations not only depends on their logical results, but also on the times at which those results are produced. This means that the highest quality verification has to be performed using the RTL code because this provides the closest representation possible to the actual silicon. Changing a single line of RTL code typically requires re-verification of the entire design. This level of detail is not available in the software virtual prototyping platform.
Traditional Big-Box Hardware Emulators
A common verification approach for large SoC designs with tens of millions of equivalent logic gates is to use a conventional processor-based or custom FPGA-based hardware emulator housed in large chassis. However, in addition to being expensive, the size and complexity of these solutions have a significant impact on performance.
The maximum speed a big-box hardware emulator can achieve is in the ballpark of 1 MHz, and this assumes that everything (including the testbench) is represented in the emulator or that the input/output is connected to a real-world (physical) system environment. If the testbench is running in an HDL software simulator, the emulator slows down to a few kilohertz.
These speeds allow the simulation to run sufficient cycles to be of use to hardware design engineers, but they are of little use to software engineers who may require the ability to simulate hundreds of billions, if not trillions, of clock cycles.
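The gap between hardware and software cycle budgets is easy to quantify. The workload sizes below are illustrative, but the arithmetic shows why a 1 MHz emulator that satisfies hardware engineers leaves software engineers waiting for weeks:

```python
# Rough wall-clock estimates for the cycle counts quoted in the text.
# The cycle counts are illustrative orders of magnitude, not measurements.

SECONDS_PER_DAY = 86_400

def days_to_run(cycles: float, speed_hz: float) -> float:
    """Days of wall-clock time to execute `cycles` at `speed_hz`."""
    return cycles / speed_hz / SECONDS_PER_DAY

print(f"{days_to_run(1e12, 1e6):.1f} days")   # one trillion cycles on a 1 MHz big-box emulator
print(f"{days_to_run(1e12, 50e6):.2f} days")  # the same workload on a 50 MHz FPGA prototype
```

One trillion cycles at 1 MHz is well over a week of continuous run time, whereas a 50 MHz platform gets through the same workload in a few hours.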
In-House Hardware Prototypes
Many design teams opt to create an in-house FPGA-based prototype of their design to achieve the required simulation speeds while avoiding the costs associated with traditional hardware emulators. The common perception is that this is an inexpensive solution, because the final board for a five-million-gate application-specific integrated circuit (ASIC) design can be produced for around $10,000.
Developing such a board demands a high level of expertise and can require a fair amount of engineering time and resources. Although these boards can provide the required speeds of 30-50 MHz, they provide limited visibility into the hardware. The restricted hardware debug capability associated with this poor visibility can result in weeks or months of additional development time, which equates to lost market opportunities.
A New Class of FPGA-Based Verification Solutions
The introduction of high-density FPGAs that accommodate millions of ASIC-equivalent gates has paved the way for a new generation of off-the-shelf fast FPGA emulation platforms.
For example, the ZeBu (for Zero Bugs) platform from EVE offers the best of the big-box emulator at higher speeds of execution at a fraction of the cost.
Figure 1: EVE’s ZeBu represents a new class of fast emulation.
Coupled with robust and comprehensive software, this class of emulation provides the high-level visibility into the design necessary to satisfy the requirements of conventional hardware verification. Further, these platforms typically execute at speeds of 5 to 10 MHz, the speeds necessary to address the requirements of hardware/software co-verification.
With regard to real-time embedded systems, these new platforms support a transaction-level interface, based on the Accellera industry standard, that allows the design to be tested in the context of its real-world environment. And the affordable nature of these solutions makes it possible to equip a software engineering team with a set of platforms to dramatically accelerate the development of the embedded software portion of the design.
As one example of the use of these platforms, consider a high-definition video decoder such as H.264, a popular design in Blu-ray players and game consoles. In real silicon, such a decoder would process 20 to 30 (1920x1080) video frames per second. A traditional big-box emulator would handle the video stream at a rate of one (1920x1080) frame every 10 seconds or longer. A fast emulator can process a video stream at one or more (1920x1080) frames per second, fast enough to run a movie in slow motion.
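The frame rates above follow directly from a per-frame cycle budget. The 10-million-cycles-per-frame decode cost below is an assumed figure chosen so the numbers line up with the rates quoted in the text; real H.264 decoder implementations vary widely.

```python
# Frame-rate arithmetic for the H.264 example. CYCLES_PER_FRAME is an
# assumed, illustrative decode cost for one 1920x1080 frame.

CYCLES_PER_FRAME = 10_000_000

def frames_per_second(speed_hz: float) -> float:
    """Sustained decoded frames per second at a given clock rate."""
    return speed_hz / CYCLES_PER_FRAME

print(frames_per_second(1e6))    # big-box emulator at 1 MHz: 0.1 fps (one frame per 10 s)
print(frames_per_second(10e6))   # fast FPGA emulator at 10 MHz: 1.0 fps
print(frames_per_second(300e6))  # hypothetical real silicon at 300 MHz: 30 fps
```

The same budget explains the full spectrum: the emulation platform's clock rate scales linearly into visible video throughput.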
Equally important, these fast emulators offer unlimited waveform generation of any net, register and memory in the SoC design, without having to compile visibility into the design. Triggers, monitors, checkers and SystemVerilog assertions augment the debugging capabilities essential for tracing hardware bugs that may be buried deep in the design.
Speed and interactive debugging features are also essential for tracing design errors across the hardware/software boundary. In fact, a software bug may show up as a hardware malfunction, or a hardware bug may lead to a software failure.
Every verification solution has its advantages and disadvantages. In some cases, it is appropriate to use multiple solutions. For example, a software virtual prototype might be used to start creating embedded software early in the development cycle prior to RTL availability. Then, some form of hardware emulation could be used to ensure good hardware/software integration as the RTL code becomes available.
Some designs are so huge that they demand the use of multi-million-dollar hardware emulation systems. But a vast number of embedded system designs fall in the one-million-gate to 50-million-gate range, and these designs can easily be addressed by the new generation of affordable, fast FPGA-based emulation platforms.
These platforms have the ability to simulate the hardware and embedded software at high speed while still providing source-level debugging capability to the software engineers and complete visibility into the design for the hardware engineers. The ability to model the external environment, such as an HDMI video port, using industry-standard transaction-level interfaces means that these platforms fully satisfy the requirements associated with verifying real-time systems. And their affordability means that a software engineering team can have access to a pool of fast emulation platforms.
Lauro Rizzatti is general manager of EVE-USA. He has more than 30 years of experience in EDA and ATE, where he held responsibilities in top management, product marketing, technical marketing and engineering. email@example.com