
Posts Tagged ‘System Verilog’

Cadence Launches New Verification Solutions

Tuesday, March 14th, 2017

Gabe Moretti, Senior Editor

During this year’s DVCon U.S., Cadence introduced two new verification solutions: the Xcelium Parallel Simulator and the Protium S1 FPGA-Based Prototyping Platform, which incorporates innovative implementation algorithms to boost engineering productivity.

Xcelium Parallel Simulator

The new simulation engine is based on innovative multi-core parallel computing technology, enabling systems-on-chip (SoCs) to get to market faster. On average, customers can achieve 2X improved single-core performance and more than 5X improved multi-core performance versus previous-generation Cadence simulators. The Xcelium simulator is production proven, having been deployed to early adopters across mobile, graphics, server, consumer, internet of things (IoT) and automotive projects.

The Xcelium simulator offers the following benefits aimed at accelerating system development:

  • Multi-core simulation improves runtime while also reducing project schedules: The third generation Xcelium simulator is built on the technology acquired from Rocketick. It speeds runtime by an average of 3X for register-transfer level (RTL) design simulation, 5X for gate-level simulation and 10X for parallel design for test (DFT) simulation, potentially saving weeks to months on project schedules.
  • Broad applicability: The simulator supports modern design styles and IEEE standards, enabling engineers to realize performance gains without recoding.
  • Easy to use: The simulator’s compilation and elaboration flow assigns the design and verification testbench code to the ideal engines and automatically selects the optimal number of cores for fast execution speed.
  • Incorporates several new patent-pending technologies to improve productivity: New features that speed overall SoC verification time include SystemVerilog testbench coverage for faster verification closure and parallel multi-core build (a generic sketch of such coverage code follows below).
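
For readers less familiar with SystemVerilog functional coverage, the sketch below shows the kind of covergroup code such a feature operates on. It is a generic, hypothetical example built only from standard IEEE 1800 constructs — the packet fields and bin choices are invented for illustration — and is not a Cadence-specific feature or example.

    // Generic, hypothetical SystemVerilog functional coverage sketch (IEEE 1800
    // constructs only); the packet fields and bin choices are illustrative.
    typedef struct packed {
      logic [1:0] kind;    // e.g. 0 = read, 1 = write, 2 = atomic
      logic [7:0] length;  // burst length in beats
    } packet_t;

    class packet_coverage;
      packet_t pkt;

      covergroup cg;
        kind_cp   : coverpoint pkt.kind   { bins read = {0}; bins write = {1}; bins atomic = {2}; }
        length_cp : coverpoint pkt.length { bins short_b = {[1:4]}; bins long_b = {[5:255]}; }
        kind_x_len: cross kind_cp, length_cp;   // which kinds were seen at which lengths
      endgroup

      function new();
        cg = new();   // embedded covergroups must be constructed explicitly
      endfunction

      // Called by the testbench each time a packet is observed on the interface.
      function void sample(packet_t p);
        pkt = p;
        cg.sample();
      endfunction
    endclass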

“Verification is often the primary cost and schedule challenge associated with getting new, high-quality products to market,” said Dr. Anirudh Devgan, senior vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence. “The Xcelium simulator combined with JasperGold Apps, the Palladium Z1 Enterprise Emulation Platform and the Protium S1 FPGA-Based Prototyping Platform offer customers the strongest verification suite on the market.”

The new Xcelium simulator further extends the innovation within the Cadence Verification Suite and supports the company’s System Design Enablement (SDE) strategy, which enables system and semiconductor companies to create complete, differentiated end products more efficiently. The Verification Suite comprises best-in-class core engines, verification fabric technologies and solutions that increase design quality and throughput, fulfilling verification requirements for a wide variety of applications and vertical segments.

Protium S1

The Protium S1 platform provides front-end congruency with the Cadence Palladium Z1 Enterprise Emulation Platform. By using Xilinx Virtex UltraScale FPGA technology, the new Cadence platform features 6X higher design capacity and an average 2X performance improvement over the previous generation platform. The Protium S1 platform has already been deployed by early adopters in the networking, consumer and storage markets.

Protium S1 is fully compatible with the Palladium Z1 emulator

To increase designer productivity, the Protium S1 platform offers the following benefits:

  • Ultra-fast prototype bring-up: The platform’s advanced memory modeling and implementation capabilities allow designers to reduce prototype bring-up from months to days, thus enabling them to start firmware development much earlier.
  • Ease of use and adoption: The platform shares a common compile flow with the Palladium Z1 platform, which enables up to 80 percent re-use of the existing verification environment and provides front-end congruency between the two platforms.
  • Innovative software debug capabilities: The platform offers firmware and software productivity-enhancing features including memory backdoor access, waveforms across partitions, force and release, and runtime clock control.

“The rising need for early software development with reduced overall project schedules has been the key driver for the delivery of more advanced emulation and FPGA-based prototyping platforms,” said Dr. Anirudh Devgan, senior vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence. “The Protium S1 platform offers software development teams the required hardware and software components, a fully integrated implementation flow with fast bring-up and advanced debug capabilities so they can deliver the most compelling end products, months earlier.”

The Protium S1 platform further extends the innovation within the Cadence Verification Suite and supports the company’s System Design Enablement (SDE) strategy, which enables system and semiconductor companies to create complete, differentiated end products more efficiently. The Verification Suite comprises best-in-class core engines, verification fabric technologies and solutions that increase design quality and throughput, fulfilling verification requirements for a wide variety of applications and vertical segments.

Blog Review – Monday, March 21, 2016

Monday, March 21st, 2016

Coffee breaks and C layers; Ideas for IoT security; Weather protection technology; Productivity boost; Shining a light on dark silicon

Empathizing with his audience, Jacek Majkowski sees the need for coffee but not necessarily a C layer in the Standard Co-Emulation Modelling Interface (SCE-MI).

At last week’s Bluetooth World in Santa Clara, CA, there was a panel discussion – Is the IoT hype or hope? Brian Fuller, ARM, reports on the to-and-fro of ideas among experts from ARM and Google, in a session moderated by Mark Powell, executive director of the Bluetooth SIG.

Of all the things to do on a sabbatical, Matt Du Puy, ARM, chose to climb Dhaulagiri (26,795 feet / 8,161 m), described as one of the most dangerous 8,000 m mountains. Brian Fuller, ARM, reports that he is armed with a GPS watch with cached terrain data and some questionable film choices on a portable WiDi disk station.

Still with extremes of weather, the Atmel team enthuses about a Kickstarter project for the Oombrella, a smart umbrella that uses sensors to analyse temperature, pressure, humidity and light, to let you know if you will need it because rain is coming your way. Very clever, as long as you remember to bring it with you. Not so appealing is the capacity to share via social media the type of weather you are experiencing – and they say the Brits are obsessed with the weather!

IoT protection is occupying an unidentified blogger at Rambus, who longs for a Faraday cage to shield it. The blog has some interesting comments about the make-up of, and security measures for, the IoT, while promoting the company’s CryptoManager.

Still with IoT security, Richard Anguiano, Mentor Graphics, examines a gateway using ARM TrustZone and a heterogeneous operating system configuration running Nucleus RTOS and Mentor Embedded Linux. A link is provided to the Secure Converged IoT Gateway and the complete end-to-end IoT solution.

Europe is credited as the birthplace of the Workplace Transformation, writes Thomas Garrison, Intel. Ahead of CeBIT, he discusses the role of Intel’s 6th Generation Core vPro processor and what it could mean for a PC’s battery life, compute performance and the user’s productivity.

The prospects for MIPI and future uses in wearables, machine learning, virtual reality and automotive ADAS are uppermost in the mind of Hezi Saar, Synopsys, following MIPI Alliance meetings. He was particularly taken with a Movidius vision processor unit, and includes a video in the blog.

Examining dark silicon, Paul McLellan, Cadence Design Systems, wonders what will supersede Dennard scaling to overcome the power limitations on large SoCs.

Caroline Hayes, Senior Editor

Blog Review – Monday, January 11, 2016

Monday, January 11th, 2016

In this week’s review, as one blog has predictions for what 2016 holds, another reviews 2015. Others cover an autonomous flight drone; a taster of DesignCon 2016 and a bionic leg development.

Insisting it’s not black magic or fortune telling but a retelling of notes from past press announcements, Dick James, Chipworks, thinks 2016 will be a year of mixed fortunes, with a low profile for leading edge processes and plenty of activity in memory and sensors as the sectors reap the rewards of developments being realized in the marketplace.

Looking back on 2015, Tom De Schutter, Synopsys, is convinced that the march of software continues and world domination is but a clock cycle away. His questions prompted some interesting feedback on challenges, benefits and working lives.

Looking ahead to autonomous drone flight, Steve Leibson, Xilinx, reports on the beta release of Aerotenna’s OCPoC (Octagonal Pilot on Chip) ready-to-fly drone control, based on a Zynq Z-7010 All Programmable SoC with integrated IMU (inertial measurement unit) sensors and a GPS receiver.

Bigger isn’t always better, explains Doug Perry, Doulos, in a guest blog for Aldec. As well as outlining the issues facing those verifying larger FPGAs, he provides a comprehensive, and helpful, checklist to tackle this increasingly frequent problem, while throwing in a plug for two webinars on the subject.

Some people have barely unpacked from CES, and ANSYS is already preparing for DesignCon 2016. Margaret Schmitt outlines the company’s plan for ‘designing without borders’ with previews of what, and who, can be seen there.

A fascinating case study is related by Karen Schulz, Gumstix, on the ARM Community blog site. The Rehabilitation Institute of Chicago (RIC) has developed the first neural-controlled bionic leg without using nerve redirection surgery or implanted sensors. The revolution is powered by the Gumstix Overo Computer-on-Module.

Showing empathy for engineers struggling with timing closure, Joe Hupcey III, Mentor Graphics, has some sound advice and diagnoses CDC problems. It’s not as serious as it sounds: CDC, or clock domain crossing, can be addressed with the IEEE 1801 low power standard. Just what the doctor ordered.

Caroline Hayes, Senior Editor

Rapid Prototyping is an Enduring Methodology

Thursday, September 24th, 2015

Gabe Moretti, Senior Editor

When I started working in the electronics industry, hardware development had prototyping with printed circuit boards (PCBs) as its only verification tool. The method was not “rapid,” since it involved building and maintaining one or more PCBs. With the development of the EDA industry, simulators became an alternative method, although they really only achieved popularity in the ’80s with the introduction of hardware description languages like Verilog and VHDL.

Today the majority of designs are developed using software tools, but rapid prototyping is still used in a significant portion of designs. In fact, hardware-based prototyping is a growing methodology, mostly due to the increased power and size of FPGA devices. It can now really be called “rapid prototyping.”

Rapid Prototyping Defined

Lauro Rizzatti, a noted expert on the subject of hardware-based development of electronics, reinforces the idea in this way: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future.”

Saba Sharifi, VP Business Development, Logic Business Unit, System LSI Group at Toshiba America Electronic Components, describes the state of rapid prototyping as follows: “While traditional virtual prototyping involves using CAD and CAE tools to validate a design before building a physical prototype, rapid prototyping is growing in popularity as a method for prototyping SoC and ASIC designs on an FPGA for hardware verification and early software development. In a rapid prototyping environment, a user may start development using an FPGA-based system, and then choose either to keep the design in the FPGA, or to transfer it into a hard-coded solution such as an ASIC. There are a number of different ways to achieve this end.”

To support the hardware-based prototyping methodology, Toshiba has introduced a new type of device: Toshiba’s Fast Fit Structured Array (FFSA). The FFSA technology utilizes metal-configurable standard cell (MCSC) SoC platform technology for designing ASICs and ASSPs, and for replacing FPGAs. Designed with FPGA capabilities in mind, FFSA provides pre-developed wrappers that can realize some of the key FPGA functionality, as well as pre-defined master sizes, to facilitate the conversion process.

According to Saba “In a sense, it’s an extension to traditional FPGA-to-ASIC rapid prototyping – the second portion of the process can be achieved significantly faster using the FFSA approach.  The goal with FFSA technology is to enable developers to reduce their time to market and non-recurring engineering (NRE) costs by minimizing customizable layers (to four metal layers) while delivering the performance, power and lower unit costs associated with standard-cell ASICs.   An FFSA device speeds front-end development due to faster timing closure compared to a traditional ASIC, while back-end prototyping is improved via the pre-defined master sizes. In some cases, customers pursue concurrent development – they get the development process started in the FPGA and begin software development, and then take the FPGA into the FFSA platform. The platform takes into consideration the conversion requirements to bring the FPGA design into the hard-coded FFSA so that the process can be achieved more quickly and easily.  FFSA supports a wide array of interfaces and high speed Serdes (up to 28G), making it well suited for wired and wireless networking, SSD (storage) controllers, and a number of industrial consumer applications. With its power, speed, NRE and TTM benefits, FFSA can be a good solution anywhere that developers have traditionally pursued an FPGA-based approach, or required customized silicon for purpose-built applications with moderate volumes.”

According to Troy Scott, Product Marketing Manager, Synopsys: “FPGA-based prototypes are a class of rapid prototyping methods that are very popular for the promise of high-performance with a relatively low cost to get started. Prototyping with FPGAs is now considered a mainstream verification method which is implemented in a myriad of ways from relatively simple low-cost boards consisting of a single FPGA and memory and interface peripherals, to very complex chassis systems that integrate dozens of FPGAs in order to host billion-gate ASIC and SoC designs. The sophistication of the development tools, debug environment, and connectivity options vary as much as the equipment architectures do. This article examines the trends in rapid prototyping with FPGAs, the advantages they provide, how they compare to other simulation tools like virtual prototypes, and EDA standards that influence the methodology.

According to prototyping specialists surveyed by Synopsys, the three most common goals they have are to speed up RTL simulation and verification, validate hardware/software systems, and enable software development. These goals influence the design of a prototype in order to address the unique needs of each use case.”

Figure 1. Top 3 Goals for a Prototype (Courtesy of Synopsys, Inc.)

Advantages of Rapid Prototyping

Frank Schirrmeister, Group Director for Product Marketing, System Development Suite, Cadence, provides a business description of the advantages: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy and at a reasonable replication cost.”

Stephen Bailey, Director of Emerging Technologies, DVT at Mentor Graphics, puts it as follows: “Performance – more verification cycles in a given period of time, especially for software development – drives the use of rapid prototypes. Rapid prototyping typically provides a 10x (~10 MHz) performance advantage over emulation, which is 1,000x (~1 MHz) faster than RTL software simulation (~1 kHz).
Once a design has been implemented in a rapid prototype, that implementation can be easily replicated across as many prototype hardware platforms as the user has available. No recompilation is required. Replicating or cloning prototypes provides more platforms for software engineers, who appreciate having their own platform but definitely want a platform available whenever they need to debug.”

Lauro points out that: “The main advantage of rapid prototyping is very fast execution speed that comes at the expense of a very long setup time that may take months on very large designs. Partitioning and clock mapping require an uncommon expertise. Of course, the assumption is that the design fits in the maximum configuration of the prototyping board; otherwise the approach would not be applicable. The speed of FPGA prototyping makes it viable for embedded software development. They are best used for validating application software and for final system validation.”

Troy sees the advantages of the methodology as: “To address verification tasks, the prototype helps to characterize timing and pipeline latencies that are not possible with more high-level representations of the design, and, perhaps more dramatically, the prototype is able to reach execution speeds in the hundreds of megahertz. Typical prototype architectures for verification tasks rely on a CPU-based approach where traffic generation for the DUT is written as a software program. The CPU might be an external workstation or integrated with the prototype itself. Large memory ICs adjacent to the FPGAs store input traffic and results that are preloaded and read back for analysis. Prototypes that can provide an easy-to-implement test infrastructure that includes memory ICs and controllers, a high-bandwidth connection to the workstation, and a software API will accelerate data transfer and monitoring tasks by the prototyping team.

Software development and system validation tasks will influence the prototype design as well. The software developer is seeking an executable representation to support porting of legacy code and develop new device drivers for the latest interface protocol implementation. In some cases the prototype serves as a way for a company to deploy an architecture design and software driver examples to partners and customers. Both schemes demand high execution speed and often real world interface PHY connections. For example, consumer product developers will seek USB, HDMI, and MIPI interfaces while an industrial product will often require ADC/DAC or Ethernet interfaces. The prototype then must provide an easy way to connect to accessory interfaces and ideally a catalog of controllers and reference designs. And because protocol validation may require many cycles to validate, a means to store many full milliseconds of hardware trace data helps compliance check-out and troubleshooting.”
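
As a rough illustration of the preload-and-read-back flow Troy describes, the hypothetical harness below loads stimulus from a file into a memory, streams it toward a DUT, captures the responses, and writes them back out for offline analysis. The file names, memory depth, and DUT ports are all assumptions made for the sketch; a real prototype would route this through the board’s memory controllers and host link rather than a plain testbench.

    // Minimal preload/read-back harness sketch; the stimulus file, depth and DUT
    // ports are hypothetical, and the DUT instance is left commented out.
    module proto_harness;
      logic        clk = 0;
      logic [31:0] stim_mem   [0:1023];   // preloaded input traffic
      logic [31:0] result_mem [0:1023];   // captured DUT output
      logic [31:0] dut_in, dut_out;

      always #5 clk = ~clk;               // free-running clock

      // dut u_dut (.clk(clk), .din(dut_in), .dout(dut_out));  // hypothetical DUT

      initial begin
        $readmemh("stimulus.hex", stim_mem);        // preload the traffic
        for (int i = 0; i < 1024; i++) begin
          dut_in = stim_mem[i];
          @(posedge clk);
          result_mem[i] = dut_out;                  // capture the response
        end
        $writememh("results.hex", result_mem);      // read back for offline analysis
        $finish;
      end
    endmodule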

Rapid Prototyping versus Virtual Prototyping

According to Steve, “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped onto an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely a synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, either based on C/C++/SystemC or SystemVerilog languages, that is not synthesizable. Its rather limited support for hardware debugging hinders its ability to verify drivers and operating systems, where hardware emulation excels.”

Lauro kept his answer short and to the point: “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped onto an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely a synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, either based on C/C++/SystemC or SystemVerilog languages, that is not synthesizable.”

Cadence’s position is represented by Frank in his usual thorough style.  “Once RTL has become sufficiently stable, it can be mapped into an array of FPGAs for execution. This essentially requires a remapping from the design’s target technology into the FPGA fabric and often needs memories remodeled, different clock domains managed, and smart partitioning before the mapping into the individual FPGAs happens using standard software provided by the FPGA vendors. The main driver for the use of FPGA-based prototyping is software development, which has changed the dynamics of electronics development quite fundamentally over the last decade. Its key advantage is its ability to provide a hardware platform for software development and system validation that is fast enough to satisfy software developers. The software can reach a range of tens of MHz up to 100MHz and allows connections to external interfaces like PCIe, USB, Ethernet, etc. in real time, which leads to the ability to run system validation within the target environments.

When time to availability is a concern, virtual prototyping based on transaction-level models (TLM) can be the clear winner because virtual prototypes can be provided independently of the RTL that the engines on the continuum require. Everything depends on model availability, too. A lot of processor models today, like ARM Fast Models, are available off-the-shelf. Creating models for new IP often delays the application of virtual prototypes due to their sometimes extensive development effort, which can eliminate the time-to-availability advantage during a project. While virtual prototyping can run in the speed range of hundreds of MIPS, not unlike FPGA-based prototypes, the key differences between them are the model fidelity, replication cost, and the ability to debug the hardware.

Model fidelity often determines which prototype to use. There is often no hardware representation available earlier than virtual prototypes, so they can be the only choice for early software bring-up and even initial driver development. They are, however, limited by model fidelity – TLMs are really an abstraction of the real thing as expressed in RTL. When full hardware accuracy is required, FPGA-based prototypes are a great choice for software development and system validation at high speed. We have seen customers deliver dozens if not hundreds of FPGA-based prototypes to software developers, often three months or more prior to silicon being available.

Two more execution engines are worth mentioning. RTL simulation is the more accurate, slower version of virtual prototyping. Its low speed in the Hz or KHz range is really prohibitive for efficient software development. In contrast, due to the high speed of both virtual and FPGA-based prototypes, software development is quite efficient on both of them. Emulation is the slower equivalent of FPGA-based prototyping that can be available much earlier because its bring-up is much easier and more automated, even from not-yet-mature RTL. It also offers almost simulation-like debug and, since it also provides speed in the MHz range, emulation is often the first appropriate engine for software and OS bring-up used for Android, Linux and Windows, as well as for executing benchmarks like AnTuTu. Of course, on a per project basis, it is considered more expensive than FPGA-based prototyping, even though it can be more cost efficient from a verification perspective when considering multiple projects and a large number of regression workloads.”

Figure 2: Characteristics of the two methods (Courtesy of Synopsys Inc.)

Growth Opportunities

For Lauro the situation boils down to this: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future. The dramatic improvement in FPGA technology that made possible the manufacturing of devices of monstrous capacity will enable rapid prototyping for ever larger designs. Rapid prototyping is an essential tool in the modern verification/validation toolbox of chip designs.”

Troy thinks that there are growth opportunities and explains: “A prototype’s high performance and relatively low investment to fabricate have led them to proliferate among IP and ASIC development teams. Market size estimates show that 70% of all ASIC designs are prototyped to some degree using an FPGA-based system today. Given this demand, several commercial offerings have emerged that address the limitations exhibited by custom-built boards. Almost immediate availability, better quality, modularity for better reuse, and the ability to out-source support and maintenance are big benefits. Well documented interfaces and usage guidelines make end-users largely self-sufficient. A significant trend for commercial systems now is development and debugging features of the EDA software being integrated or co-designed along with the hardware system. Commercial systems can demonstrate superior performance, debug visibility, and bring-up efficiency as a result of the development tools using hardware characterization data, being able to target unique hardware functions of the system, and employing communication infrastructure to the DUT. Commercial high-capacity prototypes are often made available as part of the IT infrastructure so various groups can share or be budgeted prototype resources as project demands vary. Network accessibility, independent management of individual FPGA chains in a “rack” installation, and job queue management are common value-added features of such systems.

Another general trend in rapid prototyping is to mix transaction-level model (TLM) and RTL model abstractions in order to blend the best of both to accelerate the validation task. How do virtual and physical prototypes differ? The biggest contrast is often the model’s availability during the project. In practice the latest generation CPU architectures are not available as synthesizable RTL. License and deployment restrictions can limit access, or the design is so new the RTL is simply not yet available from the vendor. For these reasons virtual prototypes of key CPU subsystems are a practical alternative. For best performance, and thus for the role of software development tasks, hybrid prototypes typically join an FPGA-based prototype – a cycle-accurate implementation in hardware – with a TLM prototype using a loosely-timed (LT) coding style. TLM abstracts away the individual events and phases of the behavior of the system and instead focuses on the communication transactions. This may be a perfectly acceptable model for the commercial IP block of a CPU but may not be for a new custom interface controller-to-PHY design that is being tailored for a particular application. The team integrating the blocks of the design will assess whether the abstraction is appropriate to satisfy the verification or validation scenarios.”

Steve described his opinion as follows: “Historically, rapid prototyping has been utilized for designs sized in the tens of millions of gates, with some advanced users pushing capacity into the low 100M-gate range. This has limited the use of rapid prototyping to full chips at the smaller end of the size range and to IP blocks or subsystems of larger chips. For IP blocks/subsystems, it is relatively common to combine virtual prototypes of the processor subsystem with a rapid prototype of the IP block or subsystem. This is referred to as “hybrid prototyping.”
With the next generation of FPGAs such as Xilinx’s UltraScale and Altera’s Stratix-10 and the continued evolution of prototyping solutions, creating larger rapid prototypes will become practical.  This should result in expanded use of rapid prototyping to cover more full chip pre-silicon validation uses.
In the past, limited silicon visibility made debugging difficult and analysis of various aspects of the design virtually impossible with rapid prototypes.  Improvements in silicon visibility and control will improve debug productivity when issues in the hardware design escape to the prototype.  Visibility improvements will also provide insight into chip and system performance and quality that were previously not possible.”

Frank concluded that: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy at a reasonable replication cost. Second, the rollout of Cadence’s multi-fabric compiler that maps RTL both into the Palladium emulation platform and into the Protium FPGA-based prototyping platform significantly eases the trade-offs with respect to speed and hardware debug between emulation and FPGA-based prototyping. This gives developers even more options than they ever had before and widens the applicability of FPGA-based prototyping. The third driver for growth in prototyping is the advent of hybrid usage of, for example, virtual prototyping with emulation, combining fast execution for some portions of the design (like the processors) with accuracy for other aspects of the design (like graphics processing units).

Overall, rapid or FPGA-based prototyping has its rightful place in the continuum of development engines, offering users high-speed execution of an accurate representation. This advantage makes rapid or FPGA-based prototyping a great platform for software development that requires hardware accuracy, as well as for system validation.”

Conclusion

All four of the contributors painted a positive picture of rapid prototyping. The growth of FPGA devices, both in size and speed, has been critical in keeping this type of development and verification method applicable to today’s designs. It is often the case that a development team will need to use a variety of tools as it progresses through the task, and rapid prototyping has proven to be useful and reliable.

ASIC Prototyping With FPGA

Thursday, February 12th, 2015

Zibi Zalewski, General Manager of the Hardware Products Division, Aldec

When I began my career as a verification products manager, ASIC/SoC verification was less integrated, and the separation among verification stages, tools, and engineering teams was more obvious. At that time the verification process started with simulation, especially for the development phase of the hardware, using relatively short test cases. As the design progressed and more advanced tests became necessary, the move to emulation was quite natural. This was especially true with the availability of emulators with debugging capabilities that enabled running longer tests in shorter time and debugging issues as they arose. The last stage of this methodology was usually prototyping, when the highest execution speed was required and there was less need for debugging. Of course, with the overhead of circuit setup, this took longer and was more complicated.

Today’s ASIC designs have become huge in comparison to the early days, making the process of verification extremely complicated. This is the reason that RTL simulation is used only early in the process, mostly for single-module verification, since it is simply too slow.

The size of the ICs being developed makes even the use of FPGA prototyping boards an issue, since porting designs of 100+ million gates takes months and requires boards that include at least several programmable devices. In spite of the fact that FPGAs are getting bigger and bigger in terms of capacity and I/O, SoC projects are growing much faster. In the end, even a very large prototyping board may not be sufficient.

To add further complication, parts of modern SoCs, like processor subsystems, are developed using virtual platforms, with the ability to exchange different processor models depending on the application requirements. Verifying all of the elements within such a complicated system takes massive amounts of time and resources – engineering, software and hardware tools. Considering design size and sophistication, even modular verification becomes a not-so-trivial task, especially during final testing and SoC firmware verification.

In order to reach maximum productivity and decrease development cost, the team must integrate as early as possible to be able to test not only at the module level, but also at the SoC level. The resolution, unfortunately, is not that simple. Let’s consider two test cases.

1. SoC design with UVM testbench.

The requirement is to reuse the UVM testbench, but the design needs to run at MHz speed, with part of it connected via a physical interface running at speed.

To fulfill such requirements, the project team needs an emulator supporting SystemVerilog DPI-C and the SCE-MI function-based use model in order to connect the UVM testbench and the DUT. Since part of the design needs to communicate with a physical interface, such an emulator needs to support a special adapter module to synchronize the emulator speed with the faster physical interface (e.g. an Ethernet port). The result is that the UVM testbench in the simulator can still be reused, since the main design is running at MHz speed on the emulator and communicating through an external interface that is running at speed – thanks to a speed adapter – with the testbench.
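
A minimal sketch of the kind of DPI-C boundary implied here is shown below: a UVM driver hands packed transactions to imported C functions that would, in a real flow, forward them to the emulator-side transactor. The function names, the 64-bit payload layout, and the bus_item fields are hypothetical; an actual SCE-MI 2 function-based setup adds its own infrastructure on the C and hardware sides.

    // Hypothetical DPI-C boundary between a UVM driver and an emulator-side
    // transactor; the C function names and payload layout are illustrative only.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    import "DPI-C" context function int hw_send_transaction(input  bit [63:0] payload);
    import "DPI-C" context function int hw_get_response   (output bit [63:0] payload);

    class bus_item extends uvm_sequence_item;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      `uvm_object_utils(bus_item)
      function new(string name = "bus_item"); super.new(name); endfunction
      function bit [63:0] pack_payload(); return {addr, data}; endfunction
    endclass

    class emu_driver extends uvm_driver #(bus_item);
      `uvm_component_utils(emu_driver)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction

      task run_phase(uvm_phase phase);
        bit [63:0] rsp;
        forever begin
          seq_item_port.get_next_item(req);
          void'(hw_send_transaction(req.pack_payload())); // cross into C, then into the emulator
          void'(hw_get_response(rsp));                    // fetch the DUT response back
          seq_item_port.item_done();
        end
      endtask
    endclass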

2. SoC design partially developed in a virtual platform and partially written in an RTL HDL. Here the requirements are to reuse the virtual platform and to synchronize it with the rest of the hardware system running at MHz speed. This approach also requires that part of the design has already been optimized to run in prototyping mode.

Since virtual platforms usually interface with external tools using TLM, the natural way is to connect the platform with a transaction-level emulator equipped with an SCE-MI API that also provides the required MHz speed. To connect the part of the design optimized for prototyping, most likely running at a higher speed than the main emulator clock, it is necessary to use a speed adapter, as in the case already discussed. If it is possible to connect the virtual platform with an emulator running at two speeds (the main emulation clock and a higher prototyping clock), the result is that design parts already tested separately can now be tested together as one SoC, with the benefit that both software and hardware teams are working on the same DUT.
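
Conceptually, the speed adapter is a clock-domain-crossing bridge between the slower emulation clock and the faster prototyping or interface clock. The module below is a generic, single-word toggle-handshake sketch of that idea, written only to illustrate the concept; it is not Aldec’s actual adapter, and a production bridge would add back-pressure, FIFOs and wider buses.

    // Generic single-word CDC bridge from a slow (emulation) clock domain to a
    // faster (prototype/interface) clock domain; conceptual sketch only, and it
    // assumes the fast clock consumes each word before the next slow update.
    module speed_adapter #(parameter int W = 32) (
      input  logic         slow_clk, fast_clk, rst_n,
      input  logic         slow_valid,          // pulse in the slow domain
      input  logic [W-1:0] slow_data,
      output logic         fast_valid,          // one-cycle pulse in the fast domain
      output logic [W-1:0] fast_data
    );
      logic         req_toggle;                 // toggles once per transfer
      logic [2:0]   sync;                       // 2-flop synchronizer + edge detect
      logic [W-1:0] data_hold;

      always_ff @(posedge slow_clk or negedge rst_n)
        if (!rst_n) begin
          req_toggle <= 1'b0;
          data_hold  <= '0;
        end else if (slow_valid) begin
          req_toggle <= ~req_toggle;            // announce new data
          data_hold  <= slow_data;              // stable until the next transfer
        end

      always_ff @(posedge fast_clk or negedge rst_n)
        if (!rst_n) sync <= '0;
        else        sync <= {sync[1:0], req_toggle};

      assign fast_valid = sync[2] ^ sync[1];    // detect the toggle in the fast domain
      assign fast_data  = data_hold;
    endmodule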

Figure-1. Integrated verification platform for modern ASIC/SoC.

In both cases we have different tools integrated together (Figure-1): a testbench, simulated in an RTL simulator or in the form of a virtual platform, is connected with an emulator via the SCE-MI API, providing integration between the software and hardware tools. Next, we have two hardware domains connected via a special speed-adapter bridge – the emulator domain and the prototyping domain (or external interface) – synchronized and implemented in the FPGA-based board(s). All these elements create a hybrid verification platform for modern SoC/ASIC designs that gives all the teams involved the ability to work on the whole, and the same, source of the project.

Hardware-Software Tops Priority List For ASIC Prototypers

Thursday, August 23rd, 2012

By John Blyler
The most important decision facing chip prototyping designers this year (2012) concerned the completeness of the combined hardware and software platform (see Fig. 1). Cost and boot time followed as the next most important issues. Close to 200 qualified respondents participated in the annual Chip Design Trends (CDT) “ASIC/ASSP Prototyping with FPGAs” survey.

Fig. 1: Prototyping priorities listed in the 2012 CDT survey.

The concern over a complete hardware-software prototyping solution was in stark contrast to results from previous years, when key concerns revolved around the flexibility and expandability of the system, as well as cost, performance and ease-of-use factors (see chart below).

Another surprising finding in this year’s survey is the importance of system “bring-up-time,” which ranked in the top three concerns for software development-based prototyping systems. The importance of software-related issues was further verified by another survey question that found an overwhelming 65.5% of respondents used a combination of software and hardware execution (i.e. simulation plus FPGA prototyping).

What was the language of choice for these hardware-software co-designers? C/C++ beat out Verilog, VHDL and their derivatives. (see Fig. 2)

Fig. 2: Software languages used in software simulation and hardware (FPGA) based prototyping systems. (Source: 2012 CDT Survey)

Most designers who used a combination of software simulation and hardware FPGA-based prototyping did so to achieve early verification results (42.7%) and to accelerate the (simulation) speed with software processor models (35.4%).

The most common hardware configuration on the FPGA prototyping board consisted of between 4 and 9 clocks, with the fastest clock running at 50 to 125 MHz.

What interfaces were used to connect the software simulation and FPGA-based prototyping/emulation/acceleration platforms? Not surprisingly, ARM-based interfaces were the most popular (56.4%), including ACE, AMBA, AXI, AHB, or APB variations (see Fig. 3). OCP was the interface of choice for 17.3% of designers. Many software developers simply didn’t know what interface was used (36.4%).

Yikes! Why Is My SystemVerilog Testbench So Slooooow?

Thursday, August 23rd, 2012

It turns out that SystemVerilog != Verilog. OK, we all figured that out a few years ago as we started to build verification environments using IEEE 1800 SystemVerilog. While it did add design features like new ways to interface code, it also added verification features like classes, dynamic data types, and randomization that have no analog (pardon the pun) in the IEEE 1364 Verilog language. But the syntax was a reasonable extension, many more designs needed advanced verification, and we had the Open Verification Methodology (OVM) followed by the standardized Accellera Systems Initiative Universal Verification Methodology (UVM), so thousands of engineers got trained on object-oriented programming. Architectures were created, templates were followed, and the verification IP components were built. Then they were integrated and the simulation speed took a nose dive. Yikes, why did that happen?
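
As a reminder of what those constructs look like, here is a small, generic snippet using only standard IEEE 1800 features – a class with constrained-random fields, a dynamic array, and an associative array – none of which exist in IEEE 1364 Verilog. The names and constraints are invented for illustration.

    // Generic illustration of SystemVerilog-only verification constructs;
    // the class, constraints and array contents are hypothetical.
    class bus_txn;
      rand bit [31:0] addr;
      rand bit [7:0]  payload[];                 // dynamic array, sized at randomization
      constraint c_len  { payload.size() inside {[1:16]}; }
      constraint c_addr { addr[1:0] == 2'b00; }  // word-aligned addresses only

      function void show();
        $display("txn addr=%h len=%0d", addr, payload.size());
      endfunction
    endclass

    module tb;
      initial begin
        bus_txn t = new();
        int unsigned seen[bit [31:0]];           // associative array keyed by address
        repeat (5) begin
          if (!t.randomize()) $fatal(1, "randomization failed");
          seen[t.addr]++;
          t.show();
        end
      end
    endmodule

Each of these conveniences – dynamic allocation, constraint solving, object-oriented dispatch – carries a runtime cost in simulation, which is part of why a fully assembled class-based environment can feel dramatically slower than plain Verilog RTL.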

To view this white paper, click here.