Posts Tagged ‘Verilog’

Specialists and Generalists Needed for Verification

Friday, December 16th, 2016

Gabe Moretti, Senior Editor

Verification continues to take up a huge portion of the project schedule. Designs are getting more complex and with complexity comes what appears to be an emerging trend –– the move toward generalists and specialists. Generalists manage the verification flow and are knowledgeable about simulation and the UVM. Specialists with expertise in formal verification, portable stimulus and emulation are deployed when needed.  I talked with four specialists in the technology:

David Kelf, Vice President of Marketing, OneSpin Solutions,

Harry Foster, Chief Scientist Verification at Mentor Graphics,

Lauro Rizzatti, Verification Consultant, Rizzatti LLC, and

Pranav Ashar, CTO, Real Intent

I asked each of them the following questions:

- Is this a real trend or a short-term aberration?

- If it is a real trend, how do we make complex verification tools and methodologies suitable for mainstream verification engineers?

- Are verification tools too complicated for a generalist to become an expert?

David: Electronics design has always had its share of specialists. A good argument could be made that CAD managers were specialists in the IT department, and that the notion of separate verification teams was driven by emerging specialists in testbench automation approaches. Now we are seeing something else. That is, the breakup of verification experts into specialized groups, some project based, and others that operate across different projects. With design complexity comes verification complexity. Formal verification and emulation, for example, were little-used tools and only then for the most difficult designs. That’s changed with the increase in size, complexity and functionality of modern designs.

Formal verification, in particular, found its way into mainstream flows through "apps," where the entire use model is automated and the product is focused on specific, high-value verification functions. Formal is also applied manually through hand-written assertions, a task often left to specialist formal users, which creates apparently independent groups within companies that may be deployed across different projects. The emergence of these teams, while providing a valuable function, can limit the proliferation of this technology as they become the keepers of the flame, if you like. The generalist engineers come to rely on them rather than exploring the use of the technology for themselves. This, in turn, limits the growth of the technology and the realization of its full potential as an alternative to simulation.
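
To make the "hand-written assertions" David mentions concrete, here is a minimal SystemVerilog Assertions sketch of the kind of property a formal specialist might add; the checker module, the signal names (clk, rst_n, req, gnt) and the four-cycle bound are hypothetical, chosen purely for illustration:

    // A minimal hand-written property, of the sort a formal specialist might
    // bind onto a design block. All names and the 4-cycle bound are hypothetical.
    module handshake_checker (input logic clk, rst_n, req, gnt);
      // Every request must be answered by a grant within 1 to 4 cycles.
      a_req_gets_gnt: assert property (
        @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt);
    endmodule

    // Typically attached to the RTL without modifying it, e.g.:
    // bind some_dut_block handshake_checker u_chk (.clk, .rst_n, .req, .gnt);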

Harry: It’s true, design is getting more complex. However, as an industry, we have done a remarkable job of keeping up with design, which we can measure by the growth in demand for design engineers. In fact, between 2007 and 2016 the industry has gone through about four iterations of Moore’s Law. Yet, the demand for design engineers has only grown at a 3.6 percent compounded annual growth rate.

Figure 1

During this same period, the demand for verification engineers has grown at a 10.4 percent compounded annual growth rate. In other words, verification complexity is growing at a faster rate than design complexity. This should not be too big a surprise since it is generally accepted in the academic world that design complexity grows at a Moore’s Law rate, while verification complexity grows at a much steeper rate (i.e., double exponential).
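
A quick back-of-the-envelope check, using the figures above and assuming nine compounding periods between 2007 and 2016, makes the gap concrete:

    $(1 + 0.036)^{9} \approx 1.4 \qquad \text{versus} \qquad (1 + 0.104)^{9} \approx 2.4$

That is, demand for design engineers grew by roughly 40 percent over the period, while demand for verification engineers grew to roughly 2.4 times its 2007 level.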

One contributing factor to growing verification complexity is the emergence of new layers of verification requirements that did not exist years ago. For example, beyond the traditional functional domain, we have added clock domains, power domains, security domains, safety requirements, software, and then obviously, overall performance requirements.

Figure 2

Each of these new layers of requirements requires specialized domain knowledge. Hence, domain expertise is now a necessity in both the design and verification communities to effectively address emerging new layers of requirements.

For verification, a one-size-fits-all approach is no longer sufficient to completely verify an SoC. There is a need for specialized tools and methodologies specifically targeted at each of these new (and continually emerging) layers of requirements. Hence, in addition to domain-knowledge expertise, verification process specialists are required to address growing verification complexity.

The emergence of verification specialization is not a new trend, although it has perhaps become more obvious due to growing verification complexity. For example, after the famous floating-point bug in the 1990s, it became apparent that theorem proving and other formal technologies would be necessary to fill the gaps left by traditional simulation-based verification approaches. These techniques require full-time dedication, and generalists are unlikely to master them because their focus is spread across so many other tools and methodologies. One could make the same argument about the adoption of constrained-random, coverage-driven testbenches using UVM (requiring object-oriented programming skills, which I do not consider generalist skills), emulation, and FPGA prototyping. These technologies have become indispensable in today's SoC verification/validation tool box, and to get the most out of the project's investment, specialists are required.
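
As a small illustration of why Harry does not count these among generalist skills, a constrained-random stimulus item in UVM looks roughly like the sketch below; the class name, fields and constraints are hypothetical:

    // Minimal UVM sequence item in the object-oriented, constrained-random
    // style; fields and constraints are hypothetical.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class bus_item extends uvm_sequence_item;
      rand bit [31:0] addr;
      rand bit [7:0]  len;
      rand bit        write;

      // Keep addresses inside a legal window and bias toward short bursts.
      constraint c_legal {
        addr inside {[32'h0000_0000 : 32'h0000_FFFF]};
        len  dist {[1:4] := 8, [5:255] := 1};
      }

      `uvm_object_utils_begin(bus_item)
        `uvm_field_int(addr,  UVM_ALL_ON)
        `uvm_field_int(len,   UVM_ALL_ON)
        `uvm_field_int(write, UVM_ALL_ON)
      `uvm_object_utils_end

      function new(string name = "bus_item");
        super.new(name);
      endfunction
    endclass

A sequence then randomizes and sends thousands of such items at the DUT, with functional coverage closing the loop; a workflow quite different from writing directed tests.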

So the question is how do we make complex tools and methodologies suitable for mainstream verification engineers? We are addressing this issue today by developing verification apps that solve a specific, narrowly focused problem and require minimal tool and methodology expertise. For example, we have seen numerous formal apps emerge that span a wide spectrum of the design process from IP development into post-silicon validation. These apps no longer require the user to write assertions or be an expert in formal techniques. In fact, the formal engines are often hidden from the user, who then focuses on "what" they want to verify, versus the "how." A few examples include: connectivity check, used during IP integration; register check, used to exhaustively verify control and status register behavior against its CSV or IP-XACT register specification; and security check, used to exhaustively verify that only the paths you specify can reach security- or safety-critical storage elements. Perhaps one of the best-known formal apps is clock-domain crossing (CDC) checking, which is used to identify metastability issues due to the interaction of multiple clock domains.
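
Under the hood, an app such as the connectivity check reduces a connection specification to properties of roughly the following shape; the checker, the select signal and the port names here are hypothetical, intended only to show what the tool generates on the user's behalf:

    // The style of property a connectivity app might generate internally from
    // a connection spreadsheet; all names are hypothetical.
    module conn_check (input logic clk, pad_sel_uart,
                       input logic uart_rx_pad, uart_rx_in);
      // When the pad mux selects the UART, the pad value must reach the
      // UART IP's RX input unchanged.
      a_conn_uart_rx: assert property (
        @(posedge clk) pad_sel_uart |-> (uart_rx_in == uart_rx_pad));
    endmodule

The user supplies only the connection list; the property writing, proof setup and debug are handled by the app.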

Emulation is another area where we are seeing the emergence of verification apps. One example is deterministic ICE, which overcomes the unpredictability of traditional ICE environments by adding 100 percent visibility and repeatability for debugging, and provides access to other "virtual-based" use models. Another is the DFT emulation app, which accelerates Design for Test (DFT) verification prior to tape-out to minimize the risk of catastrophic failure, while significantly reducing run times when verifying designs after DFT insertion.

In summary, the need for verification specialists today is driven by two demands: (1) specialized domain knowledge driven by new layers of verification requirements, and (2) verification tool and methodology expertise. This is not a bad thing. If I had a brain aneurysm, I would prefer that my doctor had mastered the required skills in endoscopy and other brain surgery techniques rather than being a general practitioner with a broad set of skills. Don't get me wrong, both are required.

Lauro: In my mind, it is a trend, but the distinction may blur its contours soon. Let's take hardware emulation. Hardware emulation has always required specialists for its deployment and, even more so, to optimize it to its fullest capacity. As they used to say, it came with a team of application engineers in the box to keep time-to-emulation from exceeding time-to-first-silicon. Today, hardware emulation is still a long way from being a plug-and-play verification tool, but recent developments by emulation vendors are making it easier and more accessible for generalists to use and deploy. The move from the in-circuit-emulation (ICE) mode, driven by a physical target system, to transaction-based communication, driven by a virtual testbed, gives it the status of a data center resource available to all types of verification engineers without specialist intervention. I see that as a huge step forward in the evolution of hardware emulation and its role in the design verification flow.

Pranav: The generalist vs. specialist discussion fits right into the shifting paradigm in which generic verification tools are being replaced by tools that are essentially verification solutions for specific failure modes.

The failure modes addressed in this manner are typically due to intricate phenomena that are hard to specify and model in simulation or general-purpose Assertion-Based Verification (ABV), hard to resolve for a simulator or unguided ABV tool, increasingly likely to occur as SOC size and integration complexity grow, and often insidious or hard to isolate. Such failure modes are a common cause of respins and redesign, with the result that sign-off and bug-hunting for them based on solution-oriented tools has become ubiquitous in the design community.

Good examples are failures caused by untimed paths on an SOC, common sources of which are asynchronous clock-domain crossings, interacting reset domains and Static Timing Analysis (STA) exceptions. It has become common practice to address these scenarios using solution-oriented verification tools.
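
For reference, the structural fix that CDC-focused tools look for on a single-bit asynchronous crossing is a two-flop synchronizer along the lines of the sketch below; this is a generic, minimal example, and real CDC sign-off also has to cover multi-bit buses, handshakes, FIFOs and reset crossings:

    // Generic two-flop synchronizer for a single bit entering clk_dst's
    // domain; the structure CDC tools expect on asynchronous crossings.
    // Multi-bit data needs gray coding, a handshake or an async FIFO instead.
    module sync_2ff (
      input  logic clk_dst,
      input  logic rst_n,
      input  logic d_async,   // driven from another clock domain
      output logic q_sync
    );
      logic meta;   // first stage; may go metastable, is never used directly
      always_ff @(posedge clk_dst or negedge rst_n)
        if (!rst_n) {q_sync, meta} <= '0;
        else        {q_sync, meta} <= {meta, d_async};
    endmodule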

In the absence of recent advances by EDA companies in developing solution-oriented verification tools, SOC design houses would have been reliant on in-house design verification (DV) specialists to develop and maintain homegrown strategies for complex failure modes. In the new paradigm, the bias has shifted back toward the generalist DV engineer with the heavy lifting being done by an EDA tool. The salutary outcome of this trend for design houses is that the verification of SOCs for these complex failures is now more accessible, more automatic, more robust, and cheaper.

My Conclusions

It is hard to disagree with the comments from my interlocutors.  Everything said is true.  But I think they have been too kind and simply answered the questions without objecting to their limitations.  In fact, the way to simplify verification is to improve the way circuits are designed.  What is missing from design methodology is validation of what has been implemented before it is deemed ready for verification.  Designers are so pressed for time, due to design complexity and short schedules, that they must find ways to cut corners.  They reuse whenever possible and rely on their experience to assume that the circuit does what it is supposed to do.  Unfortunately, in most cases where a bug is found during design integration, they have neglected to check that the circuit does not do what it is not supposed to do.  That is not always the fault of EDA tools.  The most glaring example is the choice by the electronics industry to use Verilog over VHDL.  VHDL is a much more robust language, with built-in checks that exclude design errors that can be made using Verilog.  But VHDL takes longer to write, and design engineers decided that schedule time took precedence over error avoidance.

The issue is always the same, no matter how simple or complex the design is: the individual self-assurance that he or she knows what he or she is doing.  The way to make designs easier to verify is to create them better.  That means that the design should be semantically correct and that the implementation of all required features should be completely validated by the designers themselves before the design is handed to a verification engineer.

I do not think that I have just demanded that a design engineer also be a verification engineer.  What may be required is a UDM: a Unified Design Methodology.  The industry is, perhaps unconsciously, already moving in that direction in two ways: the increased use of third-party IP and the increasing volume of design rules issued by each foundry.  I can see these two trends growing stronger with each successive technology iteration; it is time to stop ignoring them.

Formal, Logic Simulation, Hardware Emulation/Acceleration: Benefits and Limitations

Wednesday, July 27th, 2016

Stephen Bailey, Director of Emerging Technologies, Mentor Graphics

Verification and validation are key terms with the following differentiation: verification (specifically, hardware verification) ensures the design matches R&D’s functional specification for a module, block, subsystem or system; validation ensures the design meets the market requirements, that is, that it will function correctly within its intended usage.

Software-based simulation remains the workhorse for functional design verification. Its advantages in this space include:

-          Cost:  SW simulators run on standard compute servers.

-          Speed of compile & turn-around-time (TAT):  When verifying the functionality of modules and blocks early in the design project, software simulation has the fastest turn-around-time for recompiling and re-running a simulation.

-          Debug productivity:  SW simulation is very flexible and powerful in debug. If a bug requires interactive debugging (perhaps due to a potential UVM testbench issue with dynamic – stack and heap memory based – objects), users can debug it efficiently & effectively in simulation. Users have very fine level controllability of the simulation – the ability to stop/pause at any time, the ability to dynamically change values of registers, signals, and UVM dynamic objects.

-          Verification environment capabilities: Because it is software simulation, a verification environment can easily be created that peeks and pokes into any corner of the DUT. Stimulus, including traffic generation / irritators can be tightly orchestrated to inject stimulus at cycle accuracy.

-          Simulation’s broad and powerful verification and debug capabilities are why it remains the preferred engine for module and block verification (the functional specification & implementation at the “component” level).

If software-based simulation is so wonderful, then why would anyone use anything else?  Simulation’s biggest negative is performance, especially when combined with capacity (very large, as well as complex designs). Performance, getting verification done faster, is why all the other engines are used. Historically, the hardware acceleration engines (emulation and FPGA-based prototyping) were employed latish in the project cycle when validation of the full chip in its expected environment was the objective. However, both formal and hardware acceleration are now being used for verification as well. Let’s continue with the verification objective by first exploring the advantages and disadvantages of formal engines.

-          Formal’s number one advantage is its comprehensive nature. When provided a set of properties, a formal engine can verify, either exhaustively (for all of time) or for a typically broad but bounded number of clock cycles, that the design will not violate the property(ies). The prototypical example is verifying the functionality of a 32-bit wide multiplier. In simulation, it would take far too many years to exhaustively check every possible legal combination of multiplicand and multiplier inputs against the expected product for it to be feasible. Formal can do it in minutes to hours.

-          At one point, a negative for formal was that it took a PhD to define the properties and run the tool. Over the past decade, formal has come a long way in usability. Today, formal-based verification applications package properties for specific verification objectives with the application. The user simply specifies the design to verify and, if needed, provides additional data that they should already have available; the tool does the rest. There are two great examples of this approach to automating verification with formal technology:

  • CDC (Clock Domain Crossing) Verification:  CDC verification uses the formal engine to identify clock domain crossings and to assess whether the (right) synchronization logic is present. It can also create metastability models for use with simulation to ensure no metastability across the clock domain boundary is propagated through the design. (This is a level of detail that RTL design and simulation abstract away. The metastability models add that level of detail back to the simulation at the RTL instead of waiting for and then running extremely long full-timing, gate-level simulations.)
  • Coverage Closure:  During the course of verification, formal, simulation and hardware-accelerated verification will generate functional and code coverage data. Most organizations require full (or nearly 100%) coverage completion before signing off the RTL. But today’s designs contain highly reusable blocks that are also very configurable. Depending on the configuration, functionality may or may not be included in the design. If it isn’t included, then coverage related to that functionality will never be closed. Formal engines analyze the design in the actual configuration(s) that apply and perform a reachability analysis for any code or (synthesizable) functional coverage point that has not yet been covered. If it can be reached, the formal tool will provide an example waveform to guide development of a test to achieve coverage. If it cannot be reached, the manager has a very high level of certainty in approving a waiver for that coverage point.

-          With comprehensiveness being its #1 advantage, why doesn’t everyone use and depend fully on formal verification?

  • The most basic shortcoming of formal is that you cannot simulate or emulate the design’s dynamic behavior. At its core, formal simply compares one specification (the RTL design) against another (a set of properties written by the user or incorporated into an automated application or VIP). Both are static specifications. Human beings need to witness dynamic behavior to ensure the functionality meets marketing or functional requirements. There remains no substitute for “visualizing” the dynamic behavior to avoid the GIGO (Garbage-In, Garbage-Out) problem. That is, the quality of your formal verification is directly proportional to the quality (and completeness) of your set of properties. For this reason, formal verification will always be a secondary verification engine, albeit one whose value rises year after year.
  • The second constraint on broader use of formal verification is capacity or, in the vernacular of formal verification:  State Space Explosion. Although research on formal algorithms is very active in academia and industry, formal’s capacity is directly related to the state space it must explore. Higher design complexity equals more state space. This constraint limits formal usage to module, block, and (smaller or well pruned/constrained) subsystems, and potentially chip levels (including as a tool to help isolate very difficult to debug issues).

The use of hardware acceleration has a long, checkered history. Back in the “dark ages” of digital design and verification, gate-level emulation of designs had become a big market in the still young EDA industry. Zycad and Ikos dominated the market in the late 1980’s to mid/late-1990’s. What happened?  Verilog and VHDL plus automated logic synthesis happened. The industry moved from the gate to the register-transfer level of golden design specification; from schematic based design of gates to language-based functional specification. The jump in productivity from the move to RTL was so great that it killed the gate-level emulation market. RTL simulation was fast enough. Zycad died (at least as an emulation vendor) and Ikos was acquired after making the jump to RTL, but had to wait for design size and complexity to compel the use of hardware acceleration once again.

Now, 20 years later, it is clear to everyone in the industry that hardware acceleration is back. All 3 major vendors have hardware acceleration solutions. Furthermore, there is no new technology able to provide a similar jump in productivity as did the switch from gate-level to RTL. In fact, the drive for more speed has resulted in emulation and FPGA prototyping sub-markets within the broader market segment of hardware acceleration. Let’s look at the advantages and disadvantages of hardware acceleration (both varieties).

-          Speed:  Speed is THE compelling reason for the growth in hardware acceleration. In simulation today, the average performance (of the DUT) is perhaps 1 kHz. Emulation expectations are for +/- 1 MHz and for FPGA prototypes 10 MHz (or at least 10x that of emulation). The ability to get thousands more verification cycles done in a given amount of time is extremely compelling. What began as the need for more speed (and effective capacity) to do full-chip, pre-silicon validation, driven by Moore’s Law and the increase in size and complexity enabled by RTL design and design reuse, continues to push into earlier phases of the verification and validation flow – AKA “shift-left.”  Let’s review a few of the key drivers for speed:

  • Design size and complexity:  We are well into the era of billion gate plus design sizes. Although design reuse addressed the challenge of design productivity, every new/different combination of reused blocks, with or without new blocks, creates a multitude (exponential number) of possible interactions that must be verified and validated.
  • Software:  This is also the era of the SoC. Even HW compute intensive chip applications, such as networking, have a software component to them. Software engineers are accustomed to developing on GHz speed workstations. One MHz or even 10’s of MHz speeds are slow for them, but simulation speeds are completely intolerable and infeasible to enable early SW development or pre-silicon system validation.
  • Functional Capabilities of Blocks & Subsystems:  It can be the size of the input data / stimuli required to verify a block’s or subsystem’s functionality, the complexity of the functionality itself, or a combination of both that drives the need for huge numbers of verification cycles. Compute power is so great today that smartphones are able to record 4K video and replay it. Consider the compute power required to enable Advanced Driver Assistance Systems (ADAS) – the car of the future. ADAS requires vision and other data acquisition and processing horsepower, software systems capable of learning from mistakes (artificial intelligence), and high fault tolerance and safety. Multiple blocks in an ADAS system will require verification horsepower that would stress the hardware-accelerated performance available even today.

-          As a result of these trends which appear to have no end, hardware acceleration is shifting left and being used earlier and earlier in the verification and validation flows. The market pressure to address its historic disadvantages is tremendous.

  • Compilation time:  Compilation in hardware acceleration requires logic synthesis and implementation / mapping to the hardware that is accelerating the simulation of the design. Synthesis, placement, routing, and mapping are all compilation steps that are not required for software simulation. Various techniques are being employed to reduce the time to compile for emulation and FPGA prototype. Here, emulation has a distinct advantage over FPGA prototypes in compilation and TAT.
  • Debug productivity:  Although simulation remains available for debugging purposes, you’d be right in thinking that falling back on a (significantly) slower engine as your debug solution doesn’t sound like the theoretically best debug productivity. Users want a simulation-like debug productivity experience with their hardware acceleration engines. Again, emulation has advantages over prototyping in debug productivity. When you combine the compilation and debug advantages of emulation over prototyping, it is easy to understand why emulation is typically used earlier in the flow, when bugs in the hardware are more likely to be found and design changes are relatively frequent. FPGA prototyping is typically used as a platform to enable early SW development and, at least some system-level pre-silicon validation.
  • Verification capabilities:  While hardware acceleration engines were used primarily or solely for pre-silicon validation, they could be viewed as laboratory instruments. But as their use continues to shift to earlier in the verification and validation flow, the need for them to become 1st class verification engines grows. That is why hardware acceleration engines are now supporting:
    • UPF for power-managed designs
    • Code and, more appropriately, functional coverage
    • Virtual (non-ICE) usage modes which allow verification environments to be connected to the DUT being emulated or prototyped. While a verification environment might be equated with a UVM testbench, it is actually a far more general term, especially in the context of hardware accelerated verification. The verification environment may consist of soft models of things that exist in the environment the system will be used in (validation context). For example, a soft model of a display system or Ethernet traffic generator or a mass storage device. Soft models provide advantages including controllability, reproducibility (for debug) and easier enterprise management and exploitation of the hardware acceleration technology. It may also include a subsystem of the chip design itself. Today, it has become relatively common to connect a fast model written in software (usually C/C++) to an emulator or FPGA prototype. This is referred to as hybrid emulation or hybrid prototyping. The most common subsystem of a chip to place in a software model is the processor subsystem of an SoC. These models usually exist to enable early software development and can run at speeds equivalent to ~100 MHz. When the processor subsystem is well verified and validated, typically a reused IP subsystem, then hybrid mode can significantly increase the verification cycles of other blocks and subsystems, especially driving tests using embedded software and verifying functionality within a full chip context. Hybrid mode can rightfully be viewed as a sub-category of the virtual usage mode of hardware acceleration.
    • As with simulation and formal before it, hardware acceleration solutions are evolving targeted verification “applications” to facilitate productivity when verifying specific objectives or target markets. For example, a DFT application accelerates and facilitates the validation of test vectors and test logic which are usually added and tested at the gate-level.

In conclusion, it may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).

Rapid Prototyping is an Enduring Methodology

Thursday, September 24th, 2015

Gabe Moretti, Senior Editor

When I started working in the electronics industry, hardware development had prototyping on printed circuit boards (PCBs) as its only verification tool.  The method was not “rapid,” since it involved building and maintaining one or more PCBs.  With the development of the EDA industry, simulators became an alternative method, although they only achieved popularity in the ‘80s with the introduction of hardware description languages like Verilog and VHDL.

Today the majority of designs are developed using software tools, but rapid prototyping is still used in a significant portion of designs.  In fact, hardware-based prototyping is a growing methodology, mostly due to the increased power and size of FPGA devices.  It can now truly be called “rapid prototyping.”

Rapid Prototyping Defined

Lauro Rizzatti, a noted expert on the subject of hardware-based development of electronics, reinforces the idea in this way: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future.”

Saba Sharifi, VP Business Development, Logic Business Unit, System LSI Group at Toshiba America Electronic Components, describes the state of rapid prototyping as follows: “While traditional virtual prototyping involves using CAD and CAE tools to validate a design before building a physical prototype, rapid prototyping is growing in popularity as a method for prototyping SoC and ASIC designs on an FPGA for hardware verification and early software development. In a rapid prototyping environment, a user may start development using an FPGA-based system, and then choose either to keep the design in the FPGA, or to transfer it into a hard-coded solution such as an ASIC. There are a number of different ways to achieve this end.”

To support the hardware-based prototyping methodology, Toshiba has introduced a new type of device: Toshiba’s Fast Fit Structured Array (FFSA).  The FFSA technology utilizes metal-configurable standard cell (MCSC) SoC platform technology for designing ASICs and ASSPs, and for replacing FPGAs. Designed with FPGA capabilities in mind, FFSA provides pre-developed wrappers that can realize some of the key FPGA functionality, as well as pre-defined master sizes, to facilitate the conversion process.

According to Saba “In a sense, it’s an extension to traditional FPGA-to-ASIC rapid prototyping – the second portion of the process can be achieved significantly faster using the FFSA approach.  The goal with FFSA technology is to enable developers to reduce their time to market and non-recurring engineering (NRE) costs by minimizing customizable layers (to four metal layers) while delivering the performance, power and lower unit costs associated with standard-cell ASICs.   An FFSA device speeds front-end development due to faster timing closure compared to a traditional ASIC, while back-end prototyping is improved via the pre-defined master sizes. In some cases, customers pursue concurrent development – they get the development process started in the FPGA and begin software development, and then take the FPGA into the FFSA platform. The platform takes into consideration the conversion requirements to bring the FPGA design into the hard-coded FFSA so that the process can be achieved more quickly and easily.  FFSA supports a wide array of interfaces and high speed Serdes (up to 28G), making it well suited for wired and wireless networking, SSD (storage) controllers, and a number of industrial consumer applications. With its power, speed, NRE and TTM benefits, FFSA can be a good solution anywhere that developers have traditionally pursued an FPGA-based approach, or required customized silicon for purpose-built applications with moderate volumes.”

According to Troy Scott, Product Marketing Manager, Synopsys: “FPGA-based prototypes are a class of rapid prototyping methods that are very popular for the promise of high performance with a relatively low cost to get started. Prototyping with FPGAs is now considered a mainstream verification method, implemented in a myriad of ways, from relatively simple low-cost boards consisting of a single FPGA with memory and interface peripherals, to very complex chassis systems that integrate dozens of FPGAs in order to host billion-gate ASIC and SoC designs. The sophistication of the development tools, debug environment, and connectivity options varies as much as the equipment architectures do. This article examines the trends in rapid prototyping with FPGAs, the advantages they provide, how they compare to other simulation tools like virtual prototypes, and the EDA standards that influence the methodology.

According to prototyping specialists surveyed by Synopsys, the three most common goals are to speed up RTL simulation and verification, validate hardware/software systems, and enable software development. These goals influence the design of a prototype in order to address the unique needs of each use case.”

Figure 1. Top 3 Goals for a Prototype  (Courtesy of Synopsys, Inc)

Advantages of Rapid Prototyping

Frank Schirrmeister, Group Director for Product Marketing, System Development Suite, Cadence, provides a business description of the advantages: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy and at a reasonable replication cost.”

Stephen Bailey, Director of Emerging Technologies, DVT at Mentor Graphics, puts it as follows: “Performance, more verification cycles in a given period of time, especially for software development, drives the use of rapid prototypes.  Rapid prototyping typically provides a 10x (~10 MHz) performance advantage over emulation, which is 1,000x (~1 MHz) faster than RTL software simulation (~1 kHz).
Once a design has been implemented in a rapid prototype, that implementation can be easily replicated across as many prototype hardware platforms as the user has available.  No recompilation is required.  Replicating or cloning prototypes provides more platforms for software engineers, who appreciate having their own platform but definitely want a platform available whenever they need to debug.”
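
As a rough sense of scale (a back-of-the-envelope estimate using the round numbers Stephen quotes above), the wall-clock time needed to run one billion DUT cycles on each engine is:

    $\frac{10^{9}\ \text{cycles}}{1\ \text{kHz}} \approx 11.6\ \text{days}, \qquad \frac{10^{9}\ \text{cycles}}{1\ \text{MHz}} \approx 17\ \text{minutes}, \qquad \frac{10^{9}\ \text{cycles}}{10\ \text{MHz}} \approx 100\ \text{seconds}$

A job that would monopolize a simulator for nearly two weeks fits into a coffee break on a prototype, which is why replicating prototypes across software teams matters so much.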

Lauro points out that: “The main advantage of rapid prototyping is very fast execution speed that comes at the expense of a very long setup time that may take months on very large designs. Partitioning and clock mapping require an uncommon expertise. Of course, the assumption is that the design fits in the maximum configuration of the prototyping board; otherwise the approach would not be applicable. The speed of FPGA prototyping makes it viable for embedded software development. They are best used for validating application software and for final system validation.”

Troy sees the advantages of the methodology as: “To address verification tasks the prototype helps to characterize timing and pipeline latencies that are not possible with more high-level representations of the design and perhaps more dramatically the prototype is able to reach execution speeds in the hundreds of megahertz. Typical prototype architectures for verification tasks rely on a CPU-based approach where traffic generation for the DUT is written as a software program. The CPU might be an external workstation or integrated with the prototype itself. Large memory ICs adjacent to the FPGAs store input traffic and results that is preloaded and read back for analysis. Prototypes that can provide an easy to implement test infrastructure that includes memory ICs and controllers, a high-bandwidth connection to the workstation, and a software API will accelerate data transfer and monitoring tasks by the prototyping team.

Software development and system validation tasks will influence the prototype design as well. The software developer is seeking an executable representation to support porting of legacy code and develop new device drivers for the latest interface protocol implementation. In some cases the prototype serves as a way for a company to deploy an architecture design and software driver examples to partners and customers. Both schemes demand high execution speed and often real world interface PHY connections. For example, consumer product developers will seek USB, HDMI, and MIPI interfaces while an industrial product will often require ADC/DAC or Ethernet interfaces. The prototype then must provide an easy way to connect to accessory interfaces and ideally a catalog of controllers and reference designs. And because protocol validation may require many cycles to validate, a means to store many full milliseconds of hardware trace data helps compliance check-out and troubleshooting.”

Rapid Prototyping versus Virtual Prototyping

According to Steve “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped on an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, either based on C/C++/SystemC or SystemVerilog languages, that is not synthesizable.  Its rather limited support for hardware debugging hinders its ability to verify drivers and operating systems, where hardware emulation excels.”


Lauro kept his answer short and to the point: “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped on an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely a synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, based on either C/C++/SystemC or SystemVerilog languages, that is not synthesizable.”

Cadence’s position is represented by Frank in his usual thorough style.  “Once RTL has become sufficiently stable, it can be mapped into an array of FPGAs for execution. This essentially requires a remapping from the design’s target technology into the FPGA fabric and often needs memories remodeled, different clock domains managed, and smart partitioning before the mapping into the individual FPGAs happens using standard software provided by the FPGA vendors. The main driver for the use of FPGA-based prototyping is software development, which has changed the dynamics of electronics development quite fundamentally over the last decade. Its key advantage is its ability to provide a hardware platform for software development and system validation that is fast enough to satisfy software developers. The software can reach a range of tens of MHz up to 100MHz and allows connections to external interfaces like PCIe, USB, Ethernet, etc. in real time, which leads to the ability to run system validation within the target environments.

When time to availability is a concern, virtual prototyping based on transaction-level models (TLM) can be the clear winner because virtual prototypes can be provided independently of the RTL that the engines on the continuum require. Everything depends on model availability, too. A lot of processor models today, like ARM Fast Models, are available off-the-shelf. Creating models for new IP often delays the application of virtual prototypes due to their sometimes extensive development effort, which can eliminate the time-to-availability advantage during a project. While virtual prototyping can run in the speed range of hundreds of MIPS, not unlike FPGA-based prototypes, the key differences between them are the model fidelity, replication cost, and the ability to debug the hardware.

Model fidelity often determines which prototype to use. There is often no hardware representation available earlier than virtual prototypes, so they can be the only choice for early software bring-up and even initial driver development. They are, however, limited by model fidelity – TLMs are really an abstraction of the real thing as expressed in RTL. When full hardware accuracy is required, FPGA-based prototypes are a great choice for software development and system validation at high speed. We have seen customers deliver dozens if not hundreds of FPGA-based prototypes to software developers, often three months or more prior to silicon being available.

Two more execution engines are worth mentioning. RTL simulation is the more accurate, slower version of virtual prototyping. Its low speed in the Hz or KHz range is really prohibitive for efficient software development. In contrast, due to the high speed of both virtual and FPGA-based prototypes, software development is quite efficient on both of them. Emulation is the slower equivalent of FPGA-based prototyping that can be available much earlier because its bring-up is much easier and more automated, even from not-yet-mature RTL. It also offers almost simulation-like debug and, since it also provides speed in the MHz range, emulation is often the first appropriate engine for software and OS bring-up used for Android, Linux and Windows, as well as for executing benchmarks like AnTuTu. Of course, on a per project basis, it is considered more expensive than FPGA-based prototyping, even though it can be more cost efficient from a verification perspective when considering multiple projects and a large number of regression workloads.”

Figure 2: Characteristics of the two methods (Courtesy of Synopsys Inc.)

Growth Opportunities

For Lauro the situation boils down to this: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future. The dramatic improvement in FPGA technology that made possible the manufacturing of devices of monstrous capacity will enable rapid prototyping for ever larger designs. Rapid prototyping is an essential tool in the modern verification/validation toolbox of chip designs.”
Troy thinks that there are growth opportunities and explained: “A prototype’s high performance and the relatively low investment to fabricate one have led prototypes to proliferate among IP and ASIC development teams. Market size estimates show that 70% of all ASIC designs are prototyped to some degree using an FPGA-based system today. Given this demand, several commercial offerings have emerged that address the limitations exhibited by custom-built boards. Almost immediate availability, better quality, modularity for better reuse, and the ability to out-source support and maintenance are big benefits. Well-documented interfaces and usage guidelines make end-users largely self-sufficient. A significant trend for commercial systems now is development and debugging features of the EDA software being integrated or co-designed along with the hardware system. Commercial systems can demonstrate superior performance, debug visibility, and bring-up efficiency as a result of the development tools using hardware characterization data, being able to target unique hardware functions of the system, and employing communication infrastructure to the DUT. Commercial high-capacity prototypes are often made available as part of the IT infrastructure so various groups can share or be budgeted prototype resources as project demands vary. Network accessibility, independent management of individual FPGA chains in a “rack” installation, and job queue management are common value-added features of such systems.

Another general trend in rapid prototyping is to mix transaction-level model (TLM) and RTL model abstractions in order to blend the best of both and accelerate the validation task. How do virtual and physical prototypes differ? The biggest contrast is often the model’s availability during the project.  In practice the latest-generation CPU architectures are not available as synthesizable RTL. License and deployment restrictions can limit access, or the design is so new that the RTL is simply not yet available from the vendor. For these reasons virtual prototypes of key CPU subsystems are a practical alternative.  For best performance, and thus for the role of software development tasks, hybrid prototypes typically join an FPGA-based prototype, a cycle-accurate implementation in hardware, with a TLM prototype using a loosely-timed (LT) coding style. TLM abstracts away the individual events and phases of the behavior of the system and instead focuses on the communication transactions. This may be a perfectly acceptable model for the commercial IP block of a CPU, but may not be for a new custom interface controller-to-PHY design that is being tailored for a particular application. The team integrating the blocks of the design will assess whether the abstraction is appropriate to satisfy the verification or validation scenarios.”

Steve described his opinion as follows: “Historically, rapid prototyping has been utilized for designs sized in the tens of millions of gates, with some advanced users pushing capacity into the low 100M-gate range.  This has limited the use of rapid prototyping to full chips on the smaller end of the size range and to IP blocks or subsystems of larger chips.  For IP blocks/subsystems, it is relatively common to combine virtual prototypes of the processor subsystem with a rapid prototype of the IP block or subsystem.  This is referred to as “hybrid prototyping.”
With the next generation of FPGAs, such as Xilinx’s UltraScale and Altera’s Stratix 10, and the continued evolution of prototyping solutions, creating larger rapid prototypes will become practical.  This should result in expanded use of rapid prototyping to cover more full-chip pre-silicon validation uses.
In the past, limited silicon visibility made debugging difficult and analysis of various aspects of the design virtually impossible with rapid prototypes.  Improvements in silicon visibility and control will improve debug productivity when issues in the hardware design escape to the prototype.  Visibility improvements will also provide insight into chip and system performance and quality that were previously not possible.”

Frank concluded that: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy at a reasonable replication cost. Second, the rollout of Cadence’s multi-fabric compiler that maps RTL both into the Palladium emulation platform and into the Protium FPGA-based prototyping platform significantly eases the trade-offs with respect to speed and hardware debug between emulation and FPGA-based prototyping. This gives developers even more options than they ever had before and widens the applicability of FPGA-based prototyping. The third driver for growth in prototyping is the advent of hybrid usage of, for example, virtual prototyping with emulation, combining fast execution for some portions of the design (like the processors) with accuracy for other aspects of the design (like graphics processing units).

Overall, rapid or FPGA-based prototyping has its rightful place in the continuum of development engines, offering users high-speed execution of an accurate representation. This advantage makes rapid or FPGA-based prototyping a great platform for software development that requires hardware accuracy, as well as for system validation.”

Conclusion

All four of the contributors painted a positive picture of rapid prototyping.  The growth of FPGA devices, both in size and speed, has been critical in keeping this type of development and verification method applicable to today’s designs.  It is often the case that a development team will need to use a variety of tools as it progresses through the task, and rapid prototyping has proven to be useful and reliable.

Results from the RF and Analog/Mixed-Signal (AMS) IC Survey

Wednesday, October 2nd, 2013

A summary of the results of a survey for developers of products in RF and analog/mixed-signal (AMS) ICs.

This summary details the results of a survey for developers of products in RF and analog/mixed-signal (AMS) ICs. A total of 129 designers responded to this survey. Survey questions focused on job area, company information, end-user application markets, product development types, programming languages, tool vendors, foundries, processes and other areas.

Key Findings

  • More respondents are using Cadence’s EDA tools for RFIC designs. In order, respondents also listed Agilent EESof, Mentor, Ansys/Ansoft, Rohde & Schwarz and Synopsys.
  • More respondents are using Cadence’s EDA tools for AMS IC design. Agilent EESof, Mentor, Anritsu, Synopsys and Ansys/Ansoft were behind Cadence.
  • Respondents had the most expertise with C/C++. Regarding expertise with programming languages, C/C++ had the highest rating, followed in order by Verilog, Matlab-RF, Matlab-Simulink, Verilog-AMS, VHDL, SystemVerilog, VHDL-AMS and SystemC.
  • For RF design-simulation-verification tools, more respondents in order listed that they use Spice, Verilog, Verilog-AMS, VHDL and Matlab/RF-Simulink. For planned projects, more respondents in order listed SystemC, VHDL-AMS, SystemVerilog, C/C++ and Matlab/RF-Simulink.
  • Regarding the foundries used for RF and/or MMICs, most respondents in order listed TSMC, IBM, TowerJazz, GlobalFoundries, RFMD and UMC.
  • Silicon-based technology is predominantly used for current RF/AMS designs. GaAs and SiGe are also widely used. But for future designs, GaAs will lose ground; GaN will see wider adoption.
  • RF and analog/mixed-signal ICs still use fewer transistors than their digital counterparts. Some 30% of respondents are developing designs of less than 1,000 transistors. Only 11% are doing designs of more than 1 million transistors.
  • Digital pre-distortion is still the favorite technique to improve the efficiency of a discrete power amp. Envelope tracking has received a lot of attention in the media. But surprisingly, envelope tracking ranks low in terms of priorities for power amp development.

Implications

  • Cadence continues to dominate the RFIC/AMS EDA environment. Virtuoso remains a favorite among designers. RF/AMS designers will continue to have other EDA tool choices as well.
  • The large foundries, namely TSMC and IBM, will continue to have a solid position in RF/AMS. But the specialty foundries will continue to make inroads. Altis, Dongbu, Magnachip, TowerJazz, Vanguard and others are expanding in various RF/AMS fronts.
  • There is room for new foundry players in RF/AMS. GlobalFoundries and Altis are finding new customers in RF, RF SOI and RF CMOS.
  • The traditional GaAs foundries—TriQuint, RFMD, Win Semi and others—are under pressure in certain segments. The power amp will remain a GaAs-based device, but other RF components are moving to RF SOI, SiGe and other processes.

Detailed Summary

  • Job Function Area-Part 1: A large percentage of respondents are involved in the development of RF and/or AMS ICs. More respondents are currently involved in the development of RF and/or AMS ICs (55%). A smaller percentage said they were involved in the last two years (13%). A significant portion are not involved in the development of RF or AMS ICs (32%).
  • Job Function Area-Part 2: Respondents listed one or a combination of functions. More respondents listed analog/digital designer (30%), followed in order by engineering management (22%), corporate management (12%) and system architect (10%). The remaining respondents listed analog/digital verification, FPGA designer/verification, software, test, student, RF engineer, among others.
  • Company Information: Respondents listed one or a combination of industries. More respondents listed a university (23%), followed in order by systems integrator (18%), design services (14%), fabless semiconductor (13%) and semiconductor manufacturer (10%). The category “other” represented a significant group (13%). The remaining respondents work for companies involved in ASICs, ASSPs, FPGAs, software and IP.
  • Company Revenue (Annual): More respondents listed less than $25 million (27%), followed in order by $100 million to $999 million (24%) and $1 billion and above (22%). Others listed $25 million to $99 million (8%). Some 19% of respondents did not know.
  • Location: More respondents listed North America (60%), followed in order by Europe (21%) and Asia-Pacific (10%). Other respondents listed Africa, China, Japan, Middle East and South America.
  • Primary End-User Application for Respondent’s ASIC/ASSP/SoC design: More respondents listed communications (67%), followed in order by industrial (28%), consumer/multimedia (24%), computer (21%), medical (15%) and automotive (12%).
  • Primary End Market for Respondent’s Design. For wired communications, more respondents listed networking (80%), followed by backhaul (20%). For wireless communications, more respondents listed handsets (32%) and basestations (32%), followed in order by networking, backhaul, metro area networks and telephony/VoIP.
  • Primary End Market If Design Is Targeted for Consumer Segment. More respondents listed smartphones (34%), followed in order by tablets (24%), displays (18%), video (13%) and audio (11%).

Programming Languages Used With RF/AMS Design Tools:

  • Respondents had the most expertise with C and C++. Regarding expertise with programming languages, C/C++ had an overall rating of 2.47 in the survey, followed in order by Verilog (2.32), Matlab-RF (2.27), Matlab-Simulink (2.17), Verilog-AMS (2.03), VHDL (1.99), SystemVerilog (1.84), VHDL-AMS (1.70) and SystemC (1.68).
  • Respondents said they had “professional expertise” (19%) with C/C++. Respondents were “competent” (27%) or were “somewhat experienced” (37%) with C/C++. Some 17% said they had “no experience” with C/C++.
  • Respondents said they had “professional expertise” with Verilog-AMS (13%), were “competent” (15%) or were “somewhat experienced” (35%). Some 38% said they had “no experience” with Verilog-AMS (a brief Verilog-AMS sketch follows this list).
  • Respondents said they had “professional expertise” with Verilog (12%), or were “competent” (30%) or were “somewhat experienced” (36%). Some 22% said they had “no experience” with Verilog.
  • Respondents said they had “professional expertise” with Matlab-RF (10%), or were “competent” (27%) or “somewhat experienced” (42%). Some 21% said they had “no experience” with the technology.
  • Respondents also had “professional expertise” with VHDL (10%), SystemVerilog (9%), SystemC (7%), Matlab-Simulink (6%) and VHDL-AMS (3%).
  • Respondents had “no experience” with SystemC (55%), VHDL-AMS (51%), SystemVerilog (49%), Verilog-AMS (38%), VHDL (36%), Matlab-Simulink (26%), Verilog (22%), Matlab-RF (21%) and C/C++ (17%).
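For readers less familiar with Verilog-AMS, the mixed-signal language rated above, the fragment below is a minimal, hypothetical sketch of the kind of behavioral model the language is typically used for: a first-order RC low-pass filter described with analog branch contributions. The module name, port names and component values are illustrative assumptions, not drawn from the survey.

    // Hypothetical Verilog-AMS sketch: behavioral first-order RC low-pass filter.
    `include "disciplines.vams"

    module rc_lowpass (in, out);
      inout in, out;
      electrical in, out;

      parameter real r = 1.0e3;   // series resistance, ohms (illustrative)
      parameter real c = 1.0e-9;  // shunt capacitance, farads (illustrative)

      analog begin
        // Resistor between the input and output nodes
        I(in, out) <+ V(in, out) / r;
        // Capacitor from the output node to ground
        I(out) <+ c * ddt(V(out));
      end
    endmodule

A model like this can be simulated alongside ordinary digital Verilog blocks in a mixed-signal solver, which is the role reflected in the Verilog-AMS figures above.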

Types of Programming Languages and RF Design-Simulation-Verification Tools Used

  • For current projects, more respondents listed Spice (85%), Verilog (85%), Verilog-AMS (79%), VHDL (76%), Matlab/RF-Simulink (71%), C/C++ (64%), SystemVerilog (56%), VHDL-AMS (44%) and SystemC (21%).
  • For planned projects, more respondents listed SystemC (79%), VHDL-AMS (56%), SystemVerilog (44%), C/C++ (36%), Matlab/RF-Simulink (29%), VHDL (24%), Verilog-AMS (21%), Verilog (15%) and Spice (15%).

Which Tool Vendors Are Used in RFIC Development

  • More respondents listed Cadence (60), followed in order by Agilent EESof (43), Mentor (38), Ansys/Ansoft (29), Rohde & Schwarz (26) and Synopsys (25). Others listed were Anritsu, AWR, Berkeley Design, CST, Dolphin, EMSS, Helic, Hittite, Remcom, Silvaco, Sonnet and Tanner.
  • The respondents for Cadence primarily use the company’s tools for RF design (68%), simulation (73%), layout (67%) and verification (43%). The company’s tools were also used for EM analysis (27%) and test (22%).
  • The respondents for Agilent EESof primarily use the company’s tools for RF design (54%) and simulation (65%). The company’s tools were also used for EM analysis, layout, verification and test.
  • The respondents for Mentor Graphics primarily use the company’s tools for verification (55%), layout (37%) and design (34%). Meanwhile, the respondents for Rohde & Schwarz primarily use the company’s tools for test (69%). The respondents for Synopsys primarily use the company’s tools for design (40%), simulation (60%) and verification (48%).

Which Tool Vendors Are Used in AMS IC Development

  • More respondents listed Cadence (48), followed in order by Agilent EESof (26), Mentor (22), Anritsu (19), Synopsys (18) and Ansys/Ansoft (15). Others listed were AWR, Berkeley Design, CST, Dolphin, EMSS, Helic, Hittite, Remcom, Rohde & Schwarz, Silvaco, Sonnet and Tanner.
  • The respondents for Cadence primarily use the company’s tools for AMS design (79%), simulation (71%), layout (71%) and verification (48%). The company’s tools were also used for EM analysis and test.
  • The respondents for Agilent EESof primarily use the company’s tools for design (42%), simulation (69%) and EM analysis (54%).
  • The respondents for Mentor Graphics primarily use the company’s tools for design (50%), simulation (46%) and verification (55%). The respondents for Anritsu primarily use the company’s tools for test (47%). The respondents for Synopsys primarily use the company’s tools for design (61%) and simulation (67%).

Areas of Improvement for Verification and Methodologies

  • Respondents had a mix of comments.

Foundry and Processes

  • Foundry Used for RFICs and/or MMICs: More respondents listed TSMC (32), followed in order by IBM (27), TowerJazz (19), GlobalFoundries (17), RFMD (13) and UMC (13). The next group was Win Semi (12), ST (11), TriQuint (11) and GCS (10). Other respondents listed Altis, Cree, IHP, LFoundry, OMMIC, SMIC, UMS and XFab.
  • Of the respondents for TSMC, 87% use TSMC for RF foundry work and 55% for MMICs. Of the respondents for IBM, 81% use IBM for RF foundry work and 41% for MMICs. Of the respondents for TowerJazz, 84% use TowerJazz for RF foundry work and 42% for MMICs. Of the respondents for GlobalFoundries, 76% use GF for RF foundry work and 41% for MMICs.
  • Complexity of Respondent’s Designs (Transistor Count): More respondents listed less than 1,000 transistors (30%), followed in order by 10,000-99,000 transistors (14%) and 100,000-999,000 transistors (14%). Respondents also listed 1,000-4,900 transistors (11%), greater than 1 million transistors (11%) and 5,000-9,900 transistors (10%).
  • Process Technology Types: For current designs, more respondents listed silicon (66%), followed in order by GaAs (32%), SiGe (27%), GaN (23%) and InP (10%). For future designs, more respondents listed silicon (66%), followed in order by SiGe (31%), GaN (28%), GaAs (16%) and InP (13%).

Technology Selections:

  • Which Baseband Processor Does Design Interface With: More respondents listed TI (35%), ADI (22%) and Tensilica/Cadence (18%). Respondents also listed other (26%).
  • Technique Used To Improve Discrete Power Amplifier Efficiency: In terms of priorities, more respondents listed digital pre-distortion (38%), followed in order by linearization (27%), envelope tracking (14%) and crest factor reduction (10%). The techniques given the lowest priority were envelope tracking (37%), crest factor reduction (21%) and linearization (14%).

Test and Measurement

  • Importance of Test and Measurement: More respondents listed very important (34%), followed in order by important (24%), extremely important (20%), somewhat important (19%) and unimportant (3%).

Mark LaPedus has covered the semiconductor industry since 1986, including five years in Asia when he was based in Taiwan. He has held senior editorial positions at Electronic News, EBN and Silicon Strategies. In Asia, he was a contributing writer for Byte Magazine. Most recently, he worked as the semiconductor editor at EE Times.

Hardware-Software Tops Priority List For ASIC Prototypers

Thursday, August 23rd, 2012

By John Blyler
The most important decision facing chip prototyping designers this year (2012) concerned the completeness of the combined hardware and software platform (see Fig. 1). Cost and boot time followed as the next most important issues. Close to 200 qualified respondents participated in the annual Chip Design Trends (CDT) “ASIC/ASSP Prototyping with FPGAs” survey.

Fig. 1: Prototyping priorities listed in the 2012 CDT survey.

The concern over a complete hardware-software prototyping solution was in stark contrast to results from previous years, when key concerns revolved around the flexibility and expandability of the system, as well as cost, performance and ease-of-use factors (see chart below).

Another surprising finding in this year’s survey was the importance of system “bring-up time,” which ranked among the top three concerns for software development-based prototyping systems. The importance of software-related issues was further confirmed by another survey question, which found that an overwhelming 65.5% of respondents used a combination of software and hardware execution (i.e., simulation plus FPGA prototyping).

What was the language of choice for these hardware-software co-designers? C/C++ beat out Verilog, VHDL and their derivatives. (see Fig. 2)

Fig. 2: Software languages used in software simulation and hardware (FPGA) based prototyping systems. (Source: 2012 CDT Survey)

Most designers who used a combination of software simulation and hardware FPGA-based prototyping did so to achieve early verification results (42.7%) and to accelerate simulation speed with software processor models (35.4%).

The most common hardware configuration on the FPGA prototyping board consisted of four to nine clocks, with the fastest clock running at 50 to 125 MHz.

What interfaces were used to connect the software simulation and FPGA-based prototyping/emulation/acceleration platforms? Not surprisingly, ARM-based interfaces were the most popular (56.4%), including the AMBA ACE, AXI, AHB and APB variations (see Fig. 3). OCP was the interface of choice for 17.3% of designers. Many software developers simply didn’t know which interface was used (36.4%).
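To make the bus discussion concrete, the fragment below is a minimal, hypothetical Verilog sketch of an APB-style register slave, the simplest of the AMBA variants cited by respondents. The signal names follow the usual APB convention, but the module name, address map and register widths are illustrative assumptions rather than anything taken from the survey.

    // Hypothetical sketch: zero-wait-state APB-style register slave in Verilog.
    module apb_reg_slave (
      input  wire        pclk,
      input  wire        presetn,
      input  wire        psel,
      input  wire        penable,
      input  wire        pwrite,
      input  wire [7:0]  paddr,
      input  wire [31:0] pwdata,
      output reg  [31:0] prdata,
      output wire        pready
    );
      reg [31:0] ctrl_reg;     // illustrative control register at 0x00
      reg [31:0] scratch_reg;  // illustrative scratch register at 0x04

      assign pready = 1'b1;    // always ready: no wait states

      // Combinational read mux: return the addressed register
      always @(*) begin
        case (paddr)
          8'h00:   prdata = ctrl_reg;
          8'h04:   prdata = scratch_reg;
          default: prdata = 32'h0;
        endcase
      end

      // Commit writes on the access phase of an APB write transfer
      always @(posedge pclk or negedge presetn) begin
        if (!presetn) begin
          ctrl_reg    <= 32'h0;
          scratch_reg <= 32'h0;
        end else if (psel && penable && pwrite) begin
          case (paddr)
            8'h00: ctrl_reg    <= pwdata;
            8'h04: scratch_reg <= pwdata;
          endcase
        end
      end
    endmodule

An interface this small is easy to drive from a software simulation testbench as well as from the FPGA side of a prototyping board, which is consistent with ARM-based interfaces topping the responses above.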

