Posts Tagged ‘prototyping’

Blog Review – Monday, June 12, 2017

Monday, June 12th, 2017

This week, we find traffic systems for drones and answers to the questions ‘What’s the difference between safe and secure?’ and ‘Can you hear voice control calling?’

An interesting foray into semantics is conducted by Andrew Hopkins, ARM, as he looks at what makes a system secure, what makes a system safe, and whether the two adjectives can be interchanged in terms of SoC design. (With a little plug for ARM at DAC later this month.)

It had to happen: a traffic system designed to restore order to the skies as commercial drones increase in number. Ken Kaplan, Intel, looks at what NASA scientists and technology leaders have come up with to make sense of the skies.

Voice control is ready to bring voice automation to the smart home, says Kjetil Holstad, Nordic Semiconductor. He traces the line of voice activation’s predecessors and looks to the future with context-awareness.

More word play, this time from Tom De Schutter, Synopsys, who discusses verification and validation and their role in prototyping.

Tackling two big announcements from Mentor Graphics, Mike Santarini looks at the establishment of the outsourced assembly and test (OSAT) Alliance program, and the company’s Xpedition high-density advanced packaging (HDAP) flow. He educates without patronizing on why the latter in particular is good news for fabless companies and where it fits in the company’s suite of tools. He also manages to flag up technical sessions on the topic at next month’s DAC.

Reporting from IoT DevCon, Christine Young, Maxim Integrated, highlights the theme of security in a connected world. She reviews the presentations “Shifting the IoT Mindset from Security to Trust,” by Bill Diotte, CEO of Mocana, and “Zero-Touch Device Onboarding for IoT,” by Jennifer Gilburg, director of strategy, Internet of Things Identity at Intel, exploring many of the pitfalls and perils along with approaches to solving them.

Anticipating a revolution in transportation, Alyssa, Dassault Systemes, previews this week’s Movin’On in Montreal, Canada, with an interview with colleague and keynote speaker, Guillaume Gerondeau, Senior Director Transportation and Mobility Asia. He looks at how smart mobility will impact cities and how 3D virtual tools can make the changes accessible and acceptable.

Caroline Hayes, Senior Editor

Cadence Launches New Verification Solutions

Tuesday, March 14th, 2017

Gabe Moretti, Senior Editor

During this year’s DVCon U.S., Cadence introduced two new verification solutions: the Xcelium Parallel Simulator and the Protium S1 FPGA-Based Prototyping Platform, which incorporates innovative implementation algorithms to boost engineering productivity.

Xcelium Parallel Simulator

The new simulation engine is based on innovative multi-core parallel computing technology, enabling systems-on-chip (SoCs) to get to market faster. On average, customers can achieve 2X improved single-core performance and more than 5X improved multi-core performance versus previous-generation Cadence simulators. The Xcelium simulator is production proven, having been deployed to early adopters across mobile, graphics, server, consumer, internet of things (IoT) and automotive projects.

The Xcelium simulator offers the following benefits aimed at accelerating system development:

  • Multi-core simulation improves runtime while also reducing project schedules: The third generation Xcelium simulator is built on the technology acquired from Rocketick. It speeds runtime by an average of 3X for register-transfer level (RTL) design simulation, 5X for gate-level simulation and 10X for parallel design for test (DFT) simulation, potentially saving weeks to months on project schedules.
  • Broad applicability: The simulator supports modern design styles and IEEE standards, enabling engineers to realize performance gains without recoding.
  • Easy to use: The simulator’s compilation and elaboration flow assigns the design and verification testbench code to the ideal engines and automatically selects the optimal number of cores for fast execution speed.
  • Incorporates several new patent-pending technologies to improve productivity: New features that speed overall SoC verification time include SystemVerilog testbench coverage for faster verification closure and parallel multi-core build.

“Verification is often the primary cost and schedule challenge associated with getting new, high-quality products to market,” said Dr. Anirudh Devgan, senior vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence. “The Xcelium simulator combined with JasperGold Apps, the Palladium Z1 Enterprise Emulation Platform and the Protium S1 FPGA-Based Prototyping Platform offers customers the strongest verification suite on the market.”

The new Xcelium simulator further extends the innovation within the Cadence Verification Suite and supports the company’s System Design Enablement (SDE) strategy, which enables system and semiconductor companies to create complete, differentiated end products more efficiently. The Verification Suite is comprised of best-in-class core engines, verification fabric technologies and solutions that increase design quality and throughput, fulfilling verification requirements for a wide variety of applications and vertical segments.

Protium S1

The Protium S1 platform provides front-end congruency with the Cadence Palladium Z1 Enterprise Emulation Platform. By using Xilinx Virtex UltraScale FPGA technology, the new Cadence platform features 6X higher design capacity and an average 2X performance improvement over the previous-generation platform. The Protium S1 platform has already been deployed by early adopters in the networking, consumer and storage markets.

Protium S1 is fully compatible with the Palladium Z1 emulator

To increase designer productivity, the Protium S1 platform offers the following benefits:

  • Ultra-fast prototype bring-up: The platform’s advanced memory modeling and implementation capabilities allow designers to reduce prototype bring-up from months to days, thus enabling them to start firmware development much earlier.
  • Ease of use and adoption: The platform shares a common compile flow with the Palladium Z1 platform, which enables up to 80 percent re-use of the existing verification environment and provides front-end congruency between the two platforms.
  • Innovative software debug capabilities: The platform offers firmware and software productivity-enhancing features including memory backdoor access, waveforms across partitions, force and release, and runtime clock control.

“The rising need for early software development with reduced overall project schedules has been the key driver for the delivery of more advanced emulation and FPGA-based prototyping platforms,” said Dr. Anirudh Devgan, senior vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence. “The Protium S1 platform offers software development teams the required hardware and software components, a fully integrated implementation flow with fast bring-up and advanced debug capabilities so they can deliver the most compelling end products, months earlier.”

Like the Xcelium simulator, the Protium S1 platform extends the innovation within the Cadence Verification Suite and supports the company’s System Design Enablement (SDE) strategy.

Blog Review – Monday, September 28 2015

Monday, September 28th, 2015

ARM Smart Design competition winners; Nordic Semiconductor Global Tour details; Emulation alternative; Bloodhound and bridge-building drones; Imagination Summit in Taiwan; Monolithic 3D ‘game changer’; Cadence and collaboration; What size is wearable technology?

Winners of this year’s ARM Smart Product Design competition had no prior experience of using ARM tools, yet managed, in just three months, to produce a sleep apnea observer app (by first prize winner Clemente di Caprio), an amateur radio satellite finder, a water meter, an educational platform for IoT applications and a ‘CamBot’ camera-equipped robot, marvels Brian Fuller, ARM.

This year’s Nordic Semiconductor Global Tech Tour will start next month, and John Leonard, ARM, has details of how to register and more about this year’s focus – the nRF52 Series Bluetooth Smart SoC.

Offering an alternative to the ‘big box’ emulation model, Doug Amos, Aldec, explains FPGA-based emulation.

Justin Nescott, Ansys, has dug out some great stories from the world of technology, from the UK’s Bloodhound project and the sleek vehicle’s speed record attempt, to a story published by Gizmag about how drones created a bridge – with video proof that it is walkable.

A review of the 2015 Imagination Summit in Taiwan earlier this month is provided by Vicky Hewlett. The report includes some photos from the event, of attendees and speakers at Hsinchu and Taipei.

It is with undeniable glee that Zvi Or-Bach, MonolithIC 3D, announces that the company has been invited to a panel session titled “Monolithic 3D: Will it Happen and if so…” at the IEEE 3D-Test Workshop on Oct. 9th, 2015. It is not all about the company, but a discussion of the technology challenge, with the teaser of the unveiling of a ‘game changer’ technology.

A review of this month’s TSMC Open Innovation Platform (OIP) Ecosystem Forum is presented in the blog by Christine Young, Cadence, including some observations from Rick Cassidy, TSMC North America, from the Thursday sessions, on automotive, IoT and foundry collaboration.

How big is wearable technology, ponders Ricardo Anguiano, Mentor Graphics. Unwrapping a development kit, he provides a link to Nucleus RTOS and wearable devices to help explain what’s wearable and what’s not.

A brief history of Calypto Design Systems, recently acquired by Mentor Graphics, is presented by Graham Bell, Real Intent, along with what the change of ownership means for existing partners.

Beginning a mini-series of blogs about the HAPS-80 with ProtoCompiler, Michael Posner, Synopsys, starts with a focus on the design flow and time constraints. He provides many helpful illustrations. (The run-on piece about a visit to the tech museum in Shanghai shows how he spends his free time: seeking out robots!)

Caroline Hayes, Senior Editor

Blog Review – Monday, December 15 2014

Monday, December 15th, 2014

Rolling up her sleeves and getting down to some hard work – not just words – Carissa Labriola, ARM, opens a promised series of posts with an intelligent and thorough analysis of the Arduino Due, and there is even the chance to win one. This is a refreshingly interactive, focused blog for the engineering community.

It’s coming to the end of the year, so it is only to be expected that there is a blog round-up. Real Intent does not disappoint, and Graham Bell provides his ‘Best of’ with links to blog posts, an interview at TechCon and a survey.

There is a medical feel to the blog by Shelly Stalnake, Mentor Graphics, beginning with a biology textbook image of an organism to lead into an interesting discussion of parasitic extraction. She lists some advice and, more importantly, links to resources to beat the ‘pests’.

Always considerate of his readers, Michael Posner, Synopsys, opens his blog with a warning that it contains technical content. He goes on to unlock the secrets of ASIC clock conversion, referencing Synopsys of course, but also some other sources to get to grips with this prototyping tool. And in the spirit of Christmas, he also has a giveaway, a signed copy of an FPGA-Based Prototyping Methodology Manual if you can answer a question about HAPS shipments.

Another list is presented by Steve Carlson, Cadence, but his is no wishlist or ‘best of’; in fact, it’s a worst-of, with the top five issues that can cause mixed-signal verification misery. This blog is one of the liveliest and most colorful this week, with some quirky graphics to accompany the sound advice that he shares on this topic.

Formal, Logic Simulation, Hardware Emulation/Acceleration: Benefits and Limitations

Wednesday, July 27th, 2016

Stephen Bailey, Director of Emerging Technologies, Mentor Graphics

Verification and validation are key terms with the following differentiation: verification (specifically, hardware verification) ensures the design matches R&D’s functional specification for a module, block, subsystem or system; validation ensures the design meets the market requirements, that is, that it will function correctly within its intended usage.

Software-based simulation remains the workhorse for functional design verification. Its advantages in this space include:

- Cost: SW simulators run on standard compute servers.

- Speed of compile and turn-around time (TAT): When verifying the functionality of modules and blocks early in the design project, software simulation has the fastest turn-around time for recompiling and re-running a simulation.

- Debug productivity: SW simulation is very flexible and powerful in debug. If a bug requires interactive debugging (perhaps due to a potential UVM testbench issue with dynamic – stack- and heap-based – objects), users can debug it efficiently and effectively in simulation. Users have very fine-grained control of the simulation: the ability to stop or pause at any time, and the ability to dynamically change the values of registers, signals and UVM dynamic objects.

- Verification environment capabilities: Because it is software simulation, a verification environment can easily be created that peeks and pokes into any corner of the DUT. Stimulus, including traffic generators and irritators, can be tightly orchestrated to inject stimulus with cycle accuracy. (A minimal sketch of such hierarchical access appears after this list.)

- Simulation’s broad and powerful verification and debug capabilities are why it remains the preferred engine for module and block verification (the functional specification and implementation at the “component” level).
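
As a small illustration of that “peek and poke” flexibility, here is a minimal sketch in plain SystemVerilog. The DUT module, hierarchy and signal names are hypothetical; the point is the fine-grained hierarchical access that simulation makes trivial.

// Minimal sketch: hierarchical "peek" and "poke" from a simulation testbench.
// The DUT (soc_top) and its internal hierarchy/signals are assumed for illustration.
module tb;
  logic clk = 0;
  always #5 clk = ~clk;

  soc_top u_dut (.clk(clk));   // hypothetical DUT instance

  initial begin
    #1us;
    // Peek: read an internal state register directly through the hierarchy.
    $display("ctrl_state = %0h", u_dut.u_core.ctrl_state);
    // Poke: temporarily override an internal interrupt line to irritate the DUT.
    force u_dut.u_core.irq = 1'b1;
    #100ns;
    release u_dut.u_core.irq;
  end
endmodule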

If software-based simulation is so wonderful, then why would anyone use anything else? Simulation’s biggest negative is performance, especially when combined with capacity (very large, as well as complex, designs). Performance – getting verification done faster – is why all the other engines are used. Historically, the hardware acceleration engines (emulation and FPGA-based prototyping) were employed late in the project cycle, when validation of the full chip in its expected environment was the objective. However, both formal and hardware acceleration are now being used for verification as well. Let’s continue with the verification objective by first exploring the advantages and disadvantages of formal engines.

- Formal’s number one advantage is its comprehensive nature. When provided with a set of properties, a formal engine can verify that the design will not violate those properties, either exhaustively (for all time) or for a typically broad but bounded number of clock cycles. The prototypical example is verifying the functionality of a 32-bit-wide multiplier. In simulation it would take far too many years to exhaustively check every possible legal multiplicand and multiplier input against the expected product to be feasible; formal can do it in minutes to hours. (A sketch of such a property appears after this list.)

- At one point, a negative for formal was that it took a PhD to define the properties and run the tool. Over the past decade, formal has come a long way in usability. Today, formal-based verification applications package properties for specific verification objectives with the application. The user simply specifies the design to verify and, if needed, provides additional data that they should already have available; the tool does the rest. There are two great examples of this approach to automating verification with formal technology:

  • CDC (Clock Domain Crossing) Verification:  CDC verification uses the formal engine to identify clock domain crossings and to assess whether the (right) synchronization logic is present. It can also create metastability models for use with simulation to ensure no metastability across the clock domain boundary is propagated through the design. (This is a level of detail that RTL design and simulation abstract away. The metastability models add that level of detail back to the simulation at the RTL instead of waiting for and then running extremely long full-timing, gate-level simulations.)
  • Coverage Closure:  During the course of verification, formal, simulation and hardware-accelerated verification will generate functional and code coverage data. Most organizations require full (or nearly 100%) coverage completion before signing off the RTL. But today’s designs contain highly reusable blocks that are also very configurable. Depending on the configuration, functionality may or may not be included in the design. If it isn’t included, then coverage related to that functionality will never be closed. Formal engines analyze the design in the actual configuration(s) that apply and perform a reachability analysis for any code or (synthesizable) functional coverage point that has not yet been covered. If it can be reached, the formal tool will provide an example waveform to guide development of a test to achieve coverage. If it cannot be reached, the manager has a very high level of certainty in approving a waiver for that coverage point.

- With comprehensiveness being its #1 advantage, why doesn’t everyone use and depend fully on formal verification?

  • The most basic shortcoming of formal is that you cannot simulate or emulate the design’s dynamic behavior. At its core, formal simply compares one specification (the RTL design) against another (a set of properties written by the user or incorporated into an automated application or VIP). Both are static specifications. Human beings need to witness dynamic behavior to ensure the functionality meets marketing or functional requirements. There remains no substitute for “visualizing” the dynamic behavior to avoid the GIGO (Garbage-In, Garbage-Out) problem. That is, the quality of your formal verification is directly proportional to the quality (and completeness) of your set of properties. For this reason, formal verification will always be a secondary verification engine, albeit one whose value rises year after year.
  • The second constraint on broader use of formal verification is capacity or, in the vernacular of formal verification:  State Space Explosion. Although research on formal algorithms is very active in academia and industry, formal’s capacity is directly related to the state space it must explore. Higher design complexity equals more state space. This constraint limits formal usage to module, block, and (smaller or well pruned/constrained) subsystems, and potentially chip levels (including as a tool to help isolate very difficult to debug issues).
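
To make the multiplier example above concrete, here is a minimal sketch of the kind of property a formal engine would prove exhaustively. It is plain SystemVerilog assertions; the module and port names are hypothetical, and the DUT product is assumed to be combinational.

// Minimal sketch: a checker for the 32-bit multiplier example.
// A formal engine proves this for every possible a/b value; constrained-random
// simulation could only ever sample a vanishing fraction of the 2^64 input space.
module mult32_checker (
  input logic        clk,
  input logic [31:0] a, b,
  input logic [63:0] product   // assumed combinational output of the DUT
);
  assert property (@(posedge clk) product == 64'(a) * 64'(b));
endmodule

// Typically attached to the DUT without modifying it, e.g.:
//   bind mult32 mult32_checker u_chk (.clk(clk), .a(a), .b(b), .product(product));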

The use of hardware acceleration has a long, checkered history. Back in the “dark ages” of digital design and verification, gate-level emulation of designs had become a big market in the still-young EDA industry. Zycad and Ikos dominated the market from the late 1980s to the mid/late 1990s. What happened? Verilog and VHDL plus automated logic synthesis happened. The industry moved from the gate level to the register-transfer level of golden design specification; from schematic-based design of gates to language-based functional specification. The jump in productivity from the move to RTL was so great that it killed the gate-level emulation market. RTL simulation was fast enough. Zycad died (at least as an emulation vendor) and Ikos was acquired after making the jump to RTL, but had to wait for design size and complexity to compel the use of hardware acceleration once again.

Now, 20 years later, it is clear to everyone in the industry that hardware acceleration is back. All 3 major vendors have hardware acceleration solutions. Furthermore, there is no new technology able to provide a similar jump in productivity as did the switch from gate-level to RTL. In fact, the drive for more speed has resulted in emulation and FPGA prototyping sub-markets within the broader market segment of hardware acceleration. Let’s look at the advantages and disadvantages of hardware acceleration (both varieties).

- Speed: Speed is THE compelling reason for the growth in hardware acceleration. In simulation today, the average performance (of the DUT) is perhaps 1 kHz. Emulation expectations are around 1 MHz, and FPGA prototypes around 10 MHz (or at least 10x that of emulation). The ability to get thousands more verification cycles done in a given amount of time is extremely compelling. What began as the need for more speed (and effective capacity) to do full-chip, pre-silicon validation – driven by Moore’s Law and the increase in size and complexity enabled by RTL design and design reuse – continues to push into earlier phases of the verification and validation flow, AKA “shift-left.” Let’s review a few of the key drivers for speed:

  • Design size and complexity:  We are well into the era of billion gate plus design sizes. Although design reuse addressed the challenge of design productivity, every new/different combination of reused blocks, with or without new blocks, creates a multitude (exponential number) of possible interactions that must be verified and validated.
  • Software:  This is also the era of the SoC. Even HW compute-intensive chip applications, such as networking, have a software component to them. Software engineers are accustomed to developing on GHz-speed workstations. One MHz or even tens of MHz is slow for them, but simulation speeds are completely intolerable and infeasible for enabling early SW development or pre-silicon system validation.
  • Functional Capabilities of Blocks & Subsystems:  It can be the size of the input data / stimuli required to verify a block’s or subsystem’s functionality, the complexity of the functionality itself, or a combination of both that drives the need for huge numbers of verification cycles. Compute power is so great today that smartphones are able to record 4k video and replay it. Consider the compute power required to enable Advanced Driver Assistance Systems (ADAS) – the car of the future. ADAS requires vision and other data acquisition and processing horsepower, software systems capable of learning from mistakes (artificial intelligence), and high fault tolerance and safety. Multiple blocks in an ADAS system will require verification horsepower that would stress the hardware-accelerated performance available even today.

- As a result of these trends, which appear to have no end, hardware acceleration is shifting left and being used earlier and earlier in the verification and validation flows. The market pressure to address its historic disadvantages is tremendous.

  • Compilation time:  Compilation in hardware acceleration requires logic synthesis and implementation / mapping to the hardware that is accelerating the simulation of the design. Synthesis, placement, routing, and mapping are all compilation steps that are not required for software simulation. Various techniques are being employed to reduce the time to compile for emulation and FPGA prototype. Here, emulation has a distinct advantage over FPGA prototypes in compilation and TAT.
  • Debug productivity:  Although simulation remains available for debugging purposes, you’d be right in thinking that falling back on a (significantly) slower engine as your debug solution doesn’t sound like the theoretically best debug productivity. Users want a simulation-like debug productivity experience with their hardware acceleration engines. Again, emulation has advantages over prototyping in debug productivity. When you combine the compilation and debug advantages of emulation over prototyping, it is easy to understand why emulation is typically used earlier in the flow, when bugs in the hardware are more likely to be found and design changes are relatively frequent. FPGA prototyping is typically used as a platform to enable early SW development and, at least some system-level pre-silicon validation.
  • Verification capabilities:  While hardware acceleration engines were used primarily or solely for pre-silicon validation, they could be viewed as laboratory instruments. But as their use continues to shift to earlier in the verification and validation flow, the need for them to become 1st class verification engines grows. That is why hardware acceleration engines are now supporting:
    • UPF for power-managed designs
    • Code and, more appropriately, functional coverage
    • Virtual (non-ICE) usage modes, which allow verification environments to be connected to the DUT being emulated or prototyped. While a verification environment might be equated with a UVM testbench, it is actually a far more general term, especially in the context of hardware-accelerated verification. The verification environment may consist of soft models of things that exist in the environment the system will be used in (the validation context) – for example, a soft model of a display system, an Ethernet traffic generator or a mass storage device. Soft models provide advantages including controllability, reproducibility (for debug) and easier enterprise management and exploitation of the hardware acceleration technology. (A minimal sketch of a soft-model stimulus source appears after this list.) The environment may also include a subsystem of the chip design itself. Today, it has become relatively common to connect a fast model written in software (usually C/C++) to an emulator or FPGA prototype. This is referred to as hybrid emulation or hybrid prototyping. The most common subsystem of a chip to place in a software model is the processor subsystem of an SoC. These models usually exist to enable early software development and can run at speeds equivalent to ~100 MHz. When the processor subsystem is well verified and validated, typically a reused IP subsystem, then hybrid mode can significantly increase the verification cycles of other blocks and subsystems, especially when driving tests using embedded software and verifying functionality within a full-chip context. Hybrid mode can rightfully be viewed as a sub-category of the virtual usage mode of hardware acceleration.
    • As with simulation and formal before it, hardware acceleration solutions are evolving targeted verification “applications” to facilitate productivity when verifying specific objectives or target markets. For example, a DFT application accelerates and facilitates the validation of test vectors and test logic which are usually added and tested at the gate-level.
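
As a flavor of the virtual usage mode described above, here is a minimal sketch in which a simple driver running on the acceleration engine pulls stimulus from a soft traffic-generator model through DPI-C instead of from a physical cable. The imported C function and all names are hypothetical; real flows use the vendor-supplied transactor/SCE-MI infrastructure behind such calls.

// Minimal sketch of a "virtual" (non-ICE) stimulus source.
// The C-side soft model behind the imported function is assumed.
import "DPI-C" function int get_next_traffic_byte();   // soft traffic generator in C

module virtual_traffic_source (
  input  logic       clk,
  input  logic       ready,      // back-pressure from the emulated DUT
  output logic [7:0] data,
  output logic       valid
);
  always_ff @(posedge clk) begin
    if (ready) begin
      data  <= 8'(get_next_traffic_byte());   // pull stimulus from the soft model
      valid <= 1'b1;
    end else begin
      valid <= 1'b0;
    end
  end
endmodule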

In conclusion, it may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).

FPGAs for ASIC Prototyping Bridge Global Development

Wednesday, July 20th, 2016
[YouTube video]

A Prototyping with FPGA Approach

Thursday, February 12th, 2015

Frank Schirrmeister, Group Director for Product Marketing of the System Development Suite, Cadence.

In general, the industry is experiencing the need for what has now come to be called the “shift left” in the design flow, as shown in Figure 1. Complex hardware stacks, starting from IP assembled into sub-systems, assembled into Systems on Chips (SoCs) and eventually integrated into systems, are combined with complex software stacks, integrating bare-metal software and drivers with operating systems, middleware and eventually the end applications that determine the user experience.

From a chip perspective, about 60% of the way into a project, three main issues have to be resolved. First, the error rate in the hardware has to be low enough that design teams are confident enough to commit to a tape-out. Second, the chip has to be validated enough within its environment to be sure that it works within the system. Third, and perhaps most challenging, significant portions of the software have to be brought up to be confident that software/hardware interactions work correctly. In short, hardware verification, system validation and software development have to be performed as early as possible, requiring a “shift left” of development tasks so that they happen as early as possible.

Figure 1: A Hardware/Software Development Flow.

Prototyping today happens at two abstraction levels – using transaction-level models (TLM) and register transfer models (RTL) – using five basic engines.

  • Virtual prototyping based on TLM models can start from specifications earliest in the design flow and works well for software development, but it falls short when more detailed hardware models are required and is plagued by model availability and the cost and effort of model creation.
  • RTL simulation – which, by the way, today is usually integrated with SystemC-based capabilities for TLM execution – allows detailed hardware execution but is limited in speed to the low kHz or even Hz range, and as such is not suitable for software execution that may require billions of cycles just to boot an operating system (see the back-of-envelope figures after this list). Hardware-assisted techniques come to the rescue.
  • Emulation is used for both hardware verification and lower-level software development, as speeds can reach the MHz domain. Emulation is separated into processor-based and FPGA-based emulation: the former allows excellent at-speed debug and fast bring-up times, as long FPGA routing times can be avoided; the latter excels at speed once the design has been brought up.
  • FPGA-based prototyping is typically limited in capacity and can take months to bring up due to modifications required to the design itself and the subsequent required verification. The benefit, once brought up, is speed in the tens-of-MHz range, which is sufficient for software development.
  • The actual prototype silicon is the fifth engine used for bring-up. Post-silicon debug and test techniques are finding their way into pre-silicon given the ongoing shift-left. Using software for verification holds the promise of better re-using verification across the five engines, all the way into post-silicon.
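
To put those speed figures in perspective, here is a rough back-of-envelope calculation, assuming on the order of one billion cycles just to boot an operating system (the figure implied above):

  • RTL simulation at ~1 kHz: 10^9 / 10^3 = 10^6 seconds, roughly 11–12 days per boot.
  • Emulation at ~1 MHz: 10^9 / 10^6 = 1,000 seconds, roughly 17 minutes.
  • FPGA-based prototyping at ~10 MHz: 10^9 / 10^7 = 100 seconds, under two minutes.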

Advantages Of Using FPGAs For ASIC Prototyping

FPGA providers have been pursuing aggressive roadmaps. Single FPGA devices now nominally hold up to 20 million ASIC gates; with utilization rates of 60%, eight-FPGA systems promise to hold almost 100 MG, which makes them large enough for a fair share of design starts out there. The key advantage of FPGA-based systems is the speed that can be achieved, and the main volume of FPGA-based prototypes today is shipped to enable software development and sub-system validation. They are also relatively portable, so we have seen customers use FPGA-based prototypes successfully to interact with their own customers, delivering pre-silicon representations of the design for demonstration and software development purposes.
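
As a quick sanity check on those capacity figures: 20 MG per FPGA at 60% utilization gives roughly 12 MG of usable capacity per device, and 8 × 12 MG ≈ 96 MG, which is where the “almost 100 MG” figure for an eight-FPGA system comes from.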

Factors That May Limit The Growth Of The Technique

There certainly is a fair amount of growth out there for FPGA-based prototyping, but the challenge of long bring-up times often defeats the purpose of early availability. For complex designs, requiring careful partitioning and timing optimization, we have seen cases in which the FPGA-based prototype did not become available until silicon was back. Another limitation is that debug insight into the hardware is very limited compared to simulation and processor-based emulation. While hardware probes can be inserted, they will then reduce the speed of execution because of data logging. Consequently, FPGA-based prototypes find most adoption in the later stages of projects, when the RTL has already become stable and the focus can shift to software development.

The Future For Such Techniques

All prototyping techniques are more and more used in combination. Emulation and RTL simulation are combined to achieve “Simulation Acceleration.” Emulation and transaction-level models with Fast Models from ARM are combined to accelerate operating system bring-up and software-driven testing. Emulation and FPGA-based prototyping are combined to pair the speed of bring-up for new portions of the design in emulation with the speed of execution for stable portions of the design in FPGA-based prototyping. As in the recent introduction of the Cadence Protium FPGA-based prototyping platform, processor-based emulation and FPGA-based prototyping can share the same front end to significantly accelerate FPGA-based prototyping bring-up. At this point all major EDA vendors have announced a suite of connected engines (Cadence in May 2011, Mentor in March 2014 and Synopsys in September 2014). It will be interesting to see how the continuum of engines grows further together to enable the most efficient prototyping at different stages of a development project.

ASIC Prototypes Take the Express Lane for Faster System Validation

Thursday, February 12th, 2015

Troy Scott, Product Marketing Manager, Synopsys Inc.
Demand for earlier availability of ASIC prototypes during a SoC design project is increasing because of the effort and cost to develop software drivers, firmware, and applications. Industry surveys show that design teams now spend up to 50% of engineering budget on software development. This urgency is pushing commercial vendors of FPGA-based prototypes to field products that can demonstrate improved productivity for the engineers that are responsible for hardware/software integration and system validation. In this article we’ll examine the state-of-the-art in FPGA-based prototyping tools and the benefits they deliver.

Design teams that adopt commercial FPGA-based prototyping systems, rather than building custom boards or adapting FPGA evaluation boards, typically do so to stretch their investment dollars over multiple chip designs. Commercial systems tend to be modular and flexible, by stacking or tiling FPGA modules, so that capacity can be scaled up or down to a project’s resource demand, and peripheral boards featuring interface PHYs can be selectively assembled around the FPGA modules. The best commercial systems provide embedded system control elements for rapid programming, FPGA module chaining, clock and reset distribution, heat mitigation, and fault monitoring. All of these contribute to reliability and uptime that is superior to custom-built systems.

One of the major tasks of a physical prototype project is to map ASIC RTL and IP into the hardware resources of an FPGA, which, in comparison to an ASIC design, provides a limited number of low-skew clock trees and dedicated memory resources. Replacement and substitution to make the RTL “FPGA-friendly” can be a time-consuming effort. This is where FPGA logic synthesis tailored for FPGA-based prototypes helps speed the conversion task. Synopsys’s ProtoCompiler design and debug automation tool provides two clock conversion techniques for the Synopsys HAPS Series. The first, called HAPS clock optimization (HCO), is typically applied very early in the bring-up phase when it’s urgent to get the design operational quickly on the live hardware. HCO automatically chooses a master synchronizing clock and all registered elements are synchronized to it. The conversion is quick and easy since it does not depend on careful identification of clock and data signals or constraints by the prototype developer. When higher performance or asynchronous relationships must be modeled, ProtoCompiler provides advanced clock conversion that logically separates the gating from the clock and routes the gating to the dedicated clock-enable inputs of the FPGA’s sequential elements. Separating the gating from the clock allows a single global clock tree to be used for all gated clocks that reference the same base clock. (A simplified before/after sketch of this conversion follows.)
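
The sketch below illustrates the idea behind that gated-clock conversion. The gating expression is deliberately simplified for illustration; production clock-gating cells and the tool-generated logic are more involved, and the module names are hypothetical.

// ASIC-style RTL: a gated clock drives the register. Gated clocks map poorly
// onto the limited low-skew clock trees of an FPGA.
module byte_reg_asic (input logic clk, en, input logic [7:0] d, output logic [7:0] q);
  logic gclk;
  assign gclk = clk & en;              // simplified clock gate, for illustration only
  always_ff @(posedge gclk) q <= d;
endmodule

// FPGA-friendly equivalent after gated-clock conversion: the gating term moves to
// the flop's clock-enable, so one global clock tree serves every register that
// references the same base clock.
module byte_reg_fpga (input logic clk, en, input logic [7:0] d, output logic [7:0] q);
  always_ff @(posedge clk)
    if (en) q <= d;
endmodule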

The Verilog HDL language, in the context of a prototyping flow, can help address module replacement or the exclusion of non-logic elements of the ASIC design. In ProtoCompiler’s design flow for HAPS, Verilog Force replaces the existing drivers of internal signals in the hierarchy with new drivers, and Verilog Bind inserts a module instance into the design hierarchy. These constructs are ideal for substituting circuitry, changing a clock tree, or stubbing out part of an ASIC design that is not needed in the FPGA prototype. The commands, when collected into a new file, override the RTL of the design, which allows the prototype engineer to make surgical changes to the logic without touching the “golden” source of the RTL drop. (A small sketch of this pattern follows.)
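
As a rough sketch of that pattern using the standard SystemVerilog constructs these commands build on (ProtoCompiler’s own command syntax may differ, and every module, instance and signal name here is hypothetical), the overrides live in their own file and never touch the golden RTL:

// A prototype-only monitor, bound into every instance of the DDR controller
// without editing its source.
module proto_ddr_monitor (input logic clk, input logic ecc_err);
  always @(posedge clk)
    if (ecc_err) $display("%t: DDR ECC error flagged in prototype", $time);
endmodule

bind ddr_ctrl proto_ddr_monitor u_proto_mon (.clk(core_clk), .ecc_err(ecc_err));

// Force new drivers onto internal signals so an analog macro that is not needed
// in the FPGA prototype can be stubbed out. Instantiate this module alongside
// the design top so the initial block executes.
module proto_overrides;
  initial begin
    force soc_top.u_analog_wrap.calib_done = 1'b1;  // pretend calibration finished
    force soc_top.u_analog_wrap.adc_valid  = 1'b0;  // keep the unused ADC quiet
  end
endmodule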

Another significant trend in the physical prototyping space is for vendors to deliver prototypes integrated with a CPU software development platform. These combinations are a popular architecture for hardware/software integration scenarios and ideal for software-driven testing and driver development.  The reprogrammable FPGA allows for testing of various IP configurations, connecting to analog PHYs implemented on test ICs, and clock, reset, and power management circuit integration between control and PHY.

Figure 1 illustrates a commercial implementation of an FPGA-based prototype with a CPU subsystem. The Synopsys DesignWare IP Prototyping Kits take this integration paradigm even further by pre-packaging various IP subsystems from the DesignWare IP catalog with reference drivers and an example application running on a Linux OS. The kits feature popular interfaces like USB, PCIe, and MIPI and can be assembled, powered on, and running within a few minutes, making them ideal for rapid delivery to software developers.

Figure 2. Synopsys DesignWare IP Prototyping Kit Architecture

The demand for shorter bring-up schedules and more efficient work flows is driving innovation by commercial providers of FPGA-based prototyping tools. Many of the benefits come from the co-design of prototype hardware, firmware, and software elements that help expedite the migration from raw ASIC RTL and IP. Today, state-of-the-art ASIC prototyping and software development tools join software development platforms running reference designs with pre-packaged IP configurations. Prototyping kits are operational out of the box and allow hardware and software developers to immediately engage in the integration and validation tasks necessary to ship the next great SoC design.

ASIC Prototyping With FPGA

Thursday, February 12th, 2015

Zibi Zalewski, General Manager of the Hardware Products Division, Aldec

When I began my career as a verification products manager, ASIC/SoC verification was less integrated and the separation among verification stages, tools, and engineering teams was more obvious. At that time the verification process started with simulation, especially for the development phase of the hardware, using relatively short test cases. As the design progressed and more advanced tests became necessary, the move to emulation was quite natural. This was especially true with the availability of emulators with debugging capabilities that enabled running longer tests in less time and debugging issues as they arose. The last stage of this methodology was usually prototyping, when the highest execution speed was required and there was less need for debugging. Of course, with the overhead of circuit setup, this took longer and was more complicated.

Today’s ASIC designs have become huge in comparison to the early days, making the process of verification extremely complicated. This is the reason RTL simulation is used only early in the process, mostly for single-module verification, since it is simply too slow.

The size of the IC being developed makes even the usage of FPGA prototyping boards an issue, since porting designs of 100+ million gates takes months and requires boards that include at least several programmable devices. In spite of the fact that FPGAs are getting bigger and bigger in terms of capacity and I/O, SoC projects are growing much faster. In the end, even a very large prototyping board may not be sufficient.

To add further complication, parts of modern SoCs, like processor subsystems, are developed using virtual platforms, with the ability to exchange different processor models depending on the application requirements. Verifying all of the elements within such a complicated system takes massive amounts of time and resources – engineering, software and hardware tools. Considering design size and sophistication, even modular verification becomes a not-so-trivial task, especially during final testing and SoC firmware verification.

In order to reach maximum productivity and decrease development cost, the team must integrate as early as possible to be able to test not only at the module level, but also at the SoC level. The resolution, unfortunately, is not that simple. Let’s consider two test cases.

1. SoC design with UVM testbench.

The requirement is to reuse the UVM testbench, but the design needs to run at MHz speed, with part of it connected using a physical interface running at speed.

To fulfill such requirements the project team needs an emulator supporting SystemVerilog DPI-C and function-based SCE-MI in order to connect the UVM testbench and DUT. Since part of the design needs to communicate with a physical interface, such an emulator needs to support a special adapter module to synchronize the emulator speed with the faster physical interface (e.g. an Ethernet port). The result is that the UVM testbench in the simulator can still be reused, while the main design runs at MHz speed on the emulator and, thanks to a speed adapter, also communicates through an external physical interface running at speed. (A minimal sketch of the simulator side of such a DPI-based link appears after the second case.)

2. SoC design partially developed in a virtual platform and partially written in an RTL HDL. Here the requirements are to reuse the virtual platform and to synchronize it with the rest of the hardware system running at MHz speed. This approach also assumes that part of the design has already been optimized to run in prototyping mode.

Since virtual platforms usually interface with external tools using TLM, the natural way is to connect the platform to a transaction-level emulator equipped with an SCE-MI API that also provides the required MHz speed. To connect the part of the design optimized for prototyping, most likely running at a higher speed than the main emulator clock, a speed adapter is required, as in the case already discussed. If it is possible to connect the virtual platform with an emulator running at two speeds (the main emulation clock and the higher prototyping clock), the result is that design parts already tested separately can now be tested together as one SoC, with the benefit that both software and hardware teams are working on the same DUT.
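
For case 1, here is a minimal sketch of what the simulator side of such a function-based link might look like. The imported call stands in for the vendor-supplied SCE-MI/DPI-C bridge into the emulator; the transaction fields and all names are hypothetical.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Stand-in for the vendor-provided bridge that carries the call into the emulator.
import "DPI-C" context task emu_send_pkt(input int unsigned addr, input int unsigned data);

class bus_txn extends uvm_sequence_item;
  rand int unsigned addr, data;
  `uvm_object_utils(bus_txn)
  function new(string name = "bus_txn"); super.new(name); endfunction
endclass

// The UVM driver stays in the simulator and ships each transaction across the
// function-based link instead of wiggling DUT pins cycle by cycle.
class emu_driver extends uvm_driver #(bus_txn);
  `uvm_component_utils(emu_driver)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      emu_send_pkt(req.addr, req.data);   // transaction crosses into the emulated DUT
      seq_item_port.item_done();
    end
  endtask
endclass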

Figure 1: Integrated verification platform for modern ASIC/SoC.

In both cases we have different tools integrated together (Figure 1): a testbench, simulated in an RTL simulator or in the form of a virtual platform, is connected with an emulator via an SCE-MI API, providing integration between the software and hardware tools. Next, we have two hardware domains connected via a special speed-adapter bridge – the emulator domain and the prototyping domain (or external interface) – synchronized and implemented on the FPGA-based board(s). All these elements create a hybrid verification platform for modern SoC/ASIC design that lets all the teams involved work on the whole project from the same source.