
Posts Tagged ‘testbench’

Verification Joins the Adults’ Table

Tuesday, January 24th, 2017

Adam Sherer, Group Director, Product Management, System & Verification Group, Cadence

As we plan for our family gatherings this holiday season, it's time to welcome Verification to the adults' table. Design and Implementation are already at the table, having established their own families consisting of architects with the comprehensive experience to manage the overall flow and specialists who provide the deep knowledge needed to make each project succeed. Verification has matured with the realization that it needs its own family of architects and specialists who have the experience and knowledge to rapidly and repeatedly verify complex projects.

Figure 1: The family table

This maturation of Verification occurred as complexity drove the need for the architect's role. Designs pushed past a billion gates, and systems grew functionally dependent on the fusion of analog, software, digital, and power. Meanwhile, the teams verifying these designs became distributed around the globe. A holistic view of verification became necessary, and it was rooted in a more rigorous verification planning process. When we listen to the architect at our holiday dinner this year, we'll hear how she wished for and got verification management automation with Cadence's vManager solution. In order to close her verification plan, she needs to reuse verification IP (VIP), specify new Cadence VIP protocols, and direct the internal development of new VIP running on a range of verification engines. She also realizes that traditional methods will not scale to the complex scenarios that must be verified across the complete SoC, so she is excited by the new portable stimulus standard work in Accellera and is piloting a project using Cadence's Perspec System Verifier to gain an efficiency edge over her company's competitors.

Design and Implementation were impressed by the automation that Verification was able to access. They asked Verification if that meant she had resources to spare for their families. She couldn't help but laugh, but then calmed down and explained how her family is growing with the specialists needed to implement the verification plans. She also discussed how those experts are already working with experts from Design and Implementation to achieve verification closure.

Figure 2: The Cadence Verification Family

Verification is a multi-engine, multi-abstraction, multi-domain task that starts and finishes with the entire development team. At the start of development, design experts and verification experts apply JasperGold formal analysis with coverage to both raise quality and mark the block-level features as verified in the overall plan. UVM experts then step in to complete comprehensive IP/subsystem verification using high-performance digital and mixed-signal simulation with the Incisive Enterprise Simulator. While randomization and four-state simulation are critical at this stage, the UVM testbench can consume as much as 50% of the simulation time, which lengthens runtime as the project moves to subsystem and SoC integration. The verification experts then apply acceleration techniques to reduce time spent in the testbench, develop new scenarios with the Perspec System Verifier to enable fast four-state RTL simulation with the Cadence RocketSim Parallel Simulation Engine, and accelerate with the Cadence Palladium Z1 Enterprise Emulation System.

As the project moves to the performance, capacity, coverage, and accessibility of the Palladium Z1 engine, new experts are able to address system features dependent on bare-metal software and in-circuit data. Since the end customer interacts with the system through application software, the verification experts work with software teams using the Cadence Protium Rapid Prototyping Platform, which provides the performance needed to support the verification needs of this team.

With all of these experts around the world, the verification architect explains that she needs fabrics that enable them to communicate. She uses the Cadence Indago Debug Platform and vManager to provide unified debug across the engines, and multi-engine metrics to help her automate the verification plan. More and more of the engines provide verification metrics, such as coverage from simulation and emulation, that can be merged together and rolled up to the vManager solution. Even the implementation teams are working together with the verification experts to simulate post-PG netlists using the Incisive Enterprise Simulator XL and RocketSim solutions, enabling final signoff on the project.
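To make the coverage-merging idea concrete, here is a minimal, hypothetical SystemVerilog sketch (the module, signal names, and bins are invented for illustration). The covergroup is collected in simulation, while the synthesizable cover property can also be hit in emulation; the resulting metrics are then merged and rolled up by the management tools rather than by user code.

```systemverilog
// Hypothetical coverage module for a packet interface. The covergroup is
// sampled in simulation; the cover property is synthesizable, so its hits
// can also be gathered in emulation and merged with simulation results.
module pkt_coverage (
  input logic        clk,
  input logic        rst_n,
  input logic        pkt_valid,
  input logic [10:0] pkt_len
);

  // Functional coverage on packet length, sampled only on accepted packets
  covergroup cg_len @(posedge clk);
    option.per_instance = 1;
    cp_len : coverpoint pkt_len iff (pkt_valid) {
      bins small  = {[1:63]};
      bins medium = {[64:511]};
      bins large  = {[512:1518]};
    }
  endgroup

  cg_len cov = new();

  // Synthesizable cover property: two packets observed back to back
  cov_back_to_back : cover property (
    @(posedge clk) disable iff (!rst_n) pkt_valid ##1 pkt_valid);

endmodule
```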

As Design and Implementation pass dessert around the table, they are very impressed with Verification. They've seen the growing complexity in their own families and have been somewhat perplexed by how verification gets done. Verification has talked about new tools, standards, and methodologies for years, and they assumed those productivity enhancements meant that verification engineers could remain generalists by accessing more automation. Hearing more about the breadth and depth of the verification challenge has helped them realize that there is an absolute need for a complete verification family with architects and experts. Raising a toast to the newest member of the electronic design adults' table, the family knows that 2017 is going to be a great year.

Verification Choices: Formal, Simulation, Emulation

Thursday, July 21st, 2016

Gabe Moretti, Senior Editor

Lately there have been articles and panels about the best type of tools to use to verify a design. Most of the discussion has centered on the choice between simulation and emulation, but, of course, formal techniques should also be considered. I did not include FPGA-based verification in this article because I felt it to be a choice equal to emulation, but at a different price point.

I invited a few representatives of EDA companies to answer questions about the topic.  The respondents are:

Steve Bailey, Director of Emerging Technologies at Mentor Graphics

Dave Kelf, Vice President of Marketing at OneSpin Solutions

Frank Schirrmeister, Senior Product Management Director at Cadence

Seena Shankar, Technical Marketing Manager at Silvaco

Vigyan Singhal, President and CEO at Oski Technology

Lauro Rizzatti, Verification Consultant

A Search for the Best Technique

I first wanted an opinion on what each technology does best. Of course, the question is ambiguous because the choice of tool, as Lauro Rizzatti points out, depends on the characteristics of the design to be verified. “As much as I favor emulation, when design complexity does not stand in the way, simulation and formal are superior choices for design verification. Design debugging in simulation is unmatched by emulation. Not only is simulation interactive, flexible, and versatile, it also supports four-state and timing analysis.
However, design complexity growth is here to stay, and the curve will only get more challenging into the future. And we not only have to deal with complexity measured in more transistors or gates in hardware, but also with complexity measured in more code in embedded software. Tasked to address this trend, both simulation and formal would hit the wall. This is where emulation comes in to rule the day. Performance is not the only criterion to measure the viability of a verification engine.”

Vigyan Singhal wrote: “Both formal and emulation are becoming increasingly popular. Why use a chain saw (emulation) when you can use a scalpel (formal)? Every bug that is truly a block-level bug (and most bugs are) is most cost-effective to discover with formal. True system-level bugs, like bandwidth or performance for representative traffic patterns, are best left for emulation. Too often, we make the mistake of not using formal early enough in the design flow.”

Seena Shankar provided a different point of view. “Simulation gives full visibility into the RTL and testbench. Earlier in the development cycle, it is easier to fix bugs and rerun a simulation. But we are definitely gated by the number of cycles that can be run. A basic test exercising a couple of functional operations could take up to 12 hours for a design with 100 million gates.

Emulation takes longer to set up because all RTL components need to be in place before a test run can begin. The upside is that millions of operations can be run in minutes. However, debug is difficult and time consuming compared to simulation. Formal verification needs a different kind of expertise. It is only effective for smaller blocks but can really find corner-case bugs through assumptions and constraints provided to the tool.”
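As a concrete illustration of the assumptions and constraints Shankar mentions, the sketch below shows a hypothetical block-level formal setup for a FIFO. The module and signal names are assumed for illustration; the assume properties constrain the environment, the assert property captures a corner case the tool can check exhaustively, and the cover property guards against over-constraining.

```systemverilog
// Hypothetical formal property module for a FIFO block, intended to be
// bound to (or instantiated alongside) the design under test.
module fifo_props (
  input logic clk,
  input logic rst_n,
  input logic push,
  input logic pop,
  input logic full,
  input logic empty
);

  // Constraints on the environment: never push when full, never pop when
  // empty. The tool explores every other input sequence exhaustively.
  asm_no_push_when_full : assume property (
    @(posedge clk) disable iff (!rst_n) full |-> !push);

  asm_no_pop_when_empty : assume property (
    @(posedge clk) disable iff (!rst_n) empty |-> !pop);

  // Corner-case check: the FIFO must never report full and empty together.
  ast_not_full_and_empty : assert property (
    @(posedge clk) disable iff (!rst_n) !(full && empty));

  // Reachability check: the constrained environment can still fill the FIFO,
  // so the assumptions are not over-constraining the proof.
  cov_can_fill : cover property (
    @(posedge clk) disable iff (!rst_n) full);

endmodule
```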

Steve Bailey concluded: “It may seem that simulation is being used less today. But it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-value (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift left and be used earlier in the verification and validation flow, causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).”

If I had my choice, I would use formal tools to develop an executable specification as early as possible in the design process, making sure that all functional characteristics of the intended product will be implemented and that the execution parameters will be respected. I agree that the choice between simulation and emulation depends on the size of the block being verified, and I also think that hardware/software co-simulation will most often require the use of an emulation/acceleration device.

Limitations to Cooperation Among the Techniques

Since all three techniques have value in some circumstance, can designers easily move from one to another?

Frank Schirrmeister provided a thorough response to the question, including a helpful figure.

“The following figure shows some of the connections that exist today. The limitations on cooperation between the engines are often of a less technical nature. Instead, they tend to result from gaps in cross-knowledge between the different disciplines.

Figure 1: Techniques Relationships (Courtesy of Cadence)

Some example integrations include:

  • Simulation acceleration, combining RTL simulation and emulation. The technical challenges have mostly been overcome using transactors that connect testbenches, which often run at the transaction level on simulation hosts, to the hardware holding the design under test (DUT), which executes at much higher speed. This allows users to combine the expressiveness of simulated testbenches with the speed of synthesizable DUTs in emulation, increasing verification efficiency (a minimal transactor sketch follows this list).

  • At this point, we have even enabled hot swap between simulation and emulation. For example, we can run gate-level netlists without timing in emulation at faster speeds. This allows users to reach a point of interest late in an execution that would take hours or days to reach in simulation. Once the point of interest is reached, users can switch (hot swap) back into simulation, add the timing back in, and continue with gate-level timing simulation.

  • Emulation and FPGA-based prototyping can share a common front-end, such as in the Cadence System Development Suite, to allow faster bring-up using multi-fabric compilation.

  • Formal and simulation also combine nicely for assertions, X-propagation, etc., and, when assertions are synthesizable and can be mapped into emulation, formal techniques are linked even with hardware-based execution.”
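For the simulation-acceleration item above, the following hypothetical sketch shows the structural idea of a transactor: the class-based testbench issues whole transactions, while a pin-level bus-functional task next to the DUT turns each one into cycle-by-cycle signal activity. Interface, signal, and class names are invented; a real accelerated flow layers this on tool-specific transactor infrastructure rather than a plain task call.

```systemverilog
// Hypothetical transactor sketch: the class-based testbench stays at the
// transaction level on the simulation host, while the pin-level BFM task
// could sit on the emulator side of the link, next to the DUT.
interface simple_bus_if (input logic clk);
  logic        valid;
  logic [31:0] addr;
  logic [31:0] data;

  // Pin-level BFM task: converts one write transaction into
  // cycle-by-cycle signal activity.
  task automatic do_write(input logic [31:0] a, input logic [31:0] d);
    @(posedge clk);
    valid <= 1'b1;
    addr  <= a;
    data  <= d;
    @(posedge clk);
    valid <= 1'b0;
  endtask
endinterface

// Transaction-level driver: one task call per transaction, so the traffic
// between testbench and DUT is per transaction, not per clock.
class bus_driver;
  virtual simple_bus_if vif;

  function new(virtual simple_bus_if vif);
    this.vif = vif;
  endfunction

  task run(int unsigned num_writes);
    repeat (num_writes)
      vif.do_write($urandom(), $urandom());
  endtask
endclass
```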

Vigyan Singhal noted: “Interchangeability of databases and poorly architected testbenches are limitations. There is still no unified coverage database standard enabling integration of results across formal, simulation, and emulation. Often, formal or simulation testbenches are not architected for reuse, even though they can almost always be. All constraints in formal testbenches should be simulatable and emulatable; if checkers and bus functional models (BFMs) are separated in simulation, checkers can sometimes be used in formal and in emulation.”
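One way to get the separation Singhal describes is to keep checkers as standalone assertion modules and attach them with SystemVerilog bind, leaving the BFMs in the class-based testbench. The sketch below is hypothetical: the block name my_block, its port names, and the handshake protocol (req held until ack, ack within eight cycles) are assumptions made for illustration.

```systemverilog
// Hypothetical checker kept separate from any BFM: pure assertions on DUT
// ports, attached with bind, so the same module can be reused in simulation,
// in formal, and (being synthesizable) in emulation.
module handshake_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  // Assumed protocol: req is held high until ack arrives, within 8 cycles.
  ast_req_gets_ack : assert property (
    @(posedge clk) disable iff (!rst_n) req |-> ##[1:8] ack);

  // Acknowledge is only asserted while a request is pending.
  ast_ack_only_with_req : assert property (
    @(posedge clk) disable iff (!rst_n) ack |-> req);
endmodule

// Attach the checker to existing instances without modifying the design.
// 'my_block' and its port names are assumed for illustration.
bind my_block handshake_checker u_handshake_chk (
  .clk  (clk),
  .rst_n(rst_n),
  .req  (req),
  .ack  (ack)
);
```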

Dave Kelf concluded: “The real question here is: How do we describe requirements and design specs in machine-readable forms, use this information to produce a verification plan, translate them into test structures for different tools, and extract coverage information that can be checked against the verification plan? It is this top-down, closed-loop environment that is generally accepted as ideal, but we have yet to see it realized in the industry. We are limited fundamentally by the ability to create a machine-readable specification.”

Portable Stimulus

Accellera has formed a study group to explore the possibility of developing a portable stimulus methodology. The group is very active, and progress is being made in that direction. Since the group has yet to publish a first proposal, it was difficult to ask any specific questions, although I thought a judgement on the desirability of such an effort was important.

Frank Schirrmeister wrote: “At the highest level, the portable stimulus project allows designers to create tests to verify SoC integration, including items like low-power scenarios and cache coherency. By keeping the tests as software routines executing on processors that are available in the design anyway, the stimulus becomes portable between the different dynamic engines, specifically simulation, emulation, and FPGA prototyping. The difference in usage with the same stimulus then really lies in execution speed – regressions can run on faster engines with less debug – and on debug insight once a bug is encountered.”

Dave Kelf also has a positive opinion about the effort. “Portable Stimulus is an excellent effort to abstract the key part of the UVM test structures such that they may be applied to both simulation and emulation. This is a worthy effort in the right direction, but it is just scraping the surface. The industry needs to bring assertions into this process, and consider how this stimulus may be better derived from high-level specifications.”

SystemVerilog

The SystemVerilog language is considered by some to be the best language to use for SoC development. Yet the language has limitations, according to some of the respondents.

Seena Shankar answered the question “Is SystemVerilog the best we can do for system verification?” as follows: “Sort of. SystemVerilog encapsulates the best features from software and hardware paradigms for verification. It is a standard that is very easy to follow but may not be the best in performance. If the performance hit can be managed with a combination of SystemC/C++, Verilog, or other verification languages, the solution might be limited in terms of portability across projects or simulators.”
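One common mechanism for the mixed-language combination Shankar alludes to is the SystemVerilog Direct Programming Interface (DPI). The sketch below is illustrative only: golden_crc32 is an assumed C reference model that would be compiled and linked with the simulator, and the surrounding testbench code is hypothetical.

```systemverilog
// Hypothetical use of the DPI to keep a compute-heavy reference model in C
// while the testbench stays in SystemVerilog.
import "DPI-C" function int unsigned golden_crc32(
  input byte unsigned data[],
  input int unsigned  len
);

module crc_check;
  byte unsigned payload[];

  initial begin
    payload = new[64];
    foreach (payload[i]) payload[i] = byte'($urandom());

    // The DUT result (not shown here) would be compared against the value
    // returned by the C reference model.
    $display("reference crc = %h", golden_crc32(payload, payload.size()));
  end
endmodule
```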

Dave Kelf wrote: “One of the most misnamed languages is SystemVerilog. Possibly the only thing this language was not designed to do was any kind of system specification. The name was produced in a misguided attempt to compete or compare with SystemC, and that was clearly a mistake. Now it is possible to use SystemVerilog at the system level, but it is clear that a C-derived language is far more effective.
What is required is a format that allows untimed algorithmic design with enough information for it to be synthesized, virtual platforms that provide a hardware/software test capability at an acceptable level of performance, and general system structures to be analyzed and specified. C++ is the only language close to this requirement.”

And Frank Schirrmeister observed: “SystemVerilog and technologies like universal verification methodology (UVM) work well at the IP and sub-system level, but seem to run out of steam when extended to full system-on-chip (SoC) verification. That’s where the portable stimulus project comes in, extending what is available in UVM to the SoC level and allowing vertical re-use from IP to the SoC. This approach overcomes the issues for which UVM falls short at the SoC level.”

Conclusion

Both design engineers and verification engineers are still waiting for help from EDA companies. They have to deal with differing methodologies and imperfect languages while tackling ever more complex designs. It is not surprising, then, that verification is the most expensive portion of a development project. Designers must be careful to ensure that what they write is verifiable, while verification engineers need to not only understand the requirements and architecture of the design, but also be familiar with the characteristics of the language used by developers to describe both the architecture and the functionality of the intended product. I believe that one way to improve the situation is for both EDA companies and system companies to approach a new design not just as a piece of silicon but as a product that integrates hardware, software, mechanical, and physical characteristics. Then both development and verification plans can choose the most appropriate tools that can co-exist and provide coherent results.

Horizontal and Vertical Flow Integration for Design and Verification

Thursday, August 20th, 2015

Frank Schirrmeister, senior group director for product marketing of the System Development Suite at Cadence.

System design and verification are a critical component for making products successful in an always-on and always-connected world. For example, I wear a device on my wrist that constantly monitors my activities and buzzes to remind me that I’ve been sitting for too long. The device transmits my activity to my mobile phone that serves as a data aggregator, only to forward it on to the cloudy sky from where I get friendly reminders about my activity progress. I’m absolutely hoping that my health insurance is not connected to my activity progress because my premium payments could easily fluctuate daily. How do we go about verifying our personal devices and the system interaction across all imaginable scenarios? It sounds like an impossibly complex task.

From personal experience, it is clear to me that flows need to be connected both in horizontal and vertical directions. Bear with me for a minute while I explain.

Rolling back about 25 years, I was involved in my first chip design. To optimize area, I designed a three-transistor dynamic memory cell for what we would today call 800nm (0.8 micron) technology. The layout was designed manually from gate-level schematics that had been entered manually as well. In order to verify throughput for the six-chip system that my chip was part of, I developed a model at the register-transfer level (RTL) using this new thing at the time called VHSIC Hardware Description Language (VHDL) (yep, I am European). What I would call vertical integration today was clunky at best 25 years ago. I was stubbing data out from VHDL into files that would be re-used to verify the gate level. My colleagues and I would write scripts to extract layout characteristics to determine the speed of the memory cell and annotate that to the gate level for verification. No top-down automation was used, i.e., no synthesis of any kind.

About five to seven years after my first chip design (we are now late in the ‘90s if you are counting), everything in the flow had moved upward and automation was added. My team designed an MPEG-2 decoder fully in RTL and used logic synthesis for implementation. The golden reference data came from C-models—vertically going upward—and was not directly connected to the RTL. Instead, we used file-based verification of the RTL against the C-model. Technology data from the 130nm technology that we used at the time was annotated back into logic synthesis for timing simulation and to drive placement. Here, vertical integration really started to work. And the verification complexity had risen so much that we needed to extend horizontally, too. We verified the RTL using both simulation and emulation with a System Realizer M250. We took drops of the RTL, froze them, cross-mapped them manually to emulation, and ran longer sequences—specifically around audio/video synchronization, for which we needed seconds of actual real-time video decoding to be executed. We used four levels vertically: layout to gate to RTL (automated with annotations back to the RTL) and the C-level on top for reference. Horizontally, we used both simulation and emulation.

Now fast-forward another 10 years or so. At that point, I had switched to the EDA side of things. Using early electronic system-level (ESL) reference flows, we annotated .lib technology information all the way up into virtual platforms for power analysis. Based on the software driving the chip, the technology impact on power consumption could be assessed. Accuracy was a problem, and that’s why I think that flows may have been a bit too early for their time back in 2010.

So where are we today?

Well, the automation between the four levels has been greatly increased vertically. Users take .lib information all the way up into emulation using tools like the Cadence Palladium® Dynamic Power Analysis (DPA), which enables engineers using emulation to also analyze software in a system-level environment. This tool allows designers to achieve up to 90% accuracy compared to actual chip power consumption, as reported by TI and, most recently, Realtek. High-level synthesis (HLS) has become mainstream for parts of the chip. That means the fourth level above the RTL is getting more and more connected as design entry moves upward, and with it, verification is more and more connected as well.

And horizontally, we are now using at least four engines: formal, RTL simulation, emulation, and field-programmable gate array (FPGA)-based prototyping. These engines are increasingly integrated; a couple of examples include:

  • Simulation acceleration – combining simulation and emulation
  • Simulation/emulation hot swap – stopping in simulation and starting in emulation, as well as vice versa
  • Virtual platform/emulation hybrids – combining virtual platforms and emulation
  • Multi-fabric compilation – same flow for emulation and FPGA-based prototyping
  • Unified Power Format (UPF)/Common Power Format (CPF) low-power verification – using the same setup for simulation and emulation
  • Simulation/emulation coverage merge – combining data collected in simulation and emulation

Arguably, with the efforts to shift post-silicon verification even further to the left, the actual chip becomes the fifth engine.

So what’s next? It looks like we have the horizontal pillar engines complete now when we add in the chip. Vertically, integration will become even closer to allow a more accurate prediction prior to actual implementations. For example, the recent introduction of the Cadence Genus™ Synthesis Solution delivers improved productivity during RTL design and improved quality of results (QoR) in final implementation. In addition, the introduction of the Cadence Joules™ RTL Power Solution provides a more accurate measure of RTL power consumption, which greatly improves the top-down estimation flow from the RTL downstream. This further increases accuracy for the Palladium DPA and the Cadence Incisive® Enterprise Simulator that automates testbench creation and performs coverage-driven functional verification, analysis, and debug—from the system level to the gate level—boosting verification productivity and predictability.

Horizontal and vertical flow integration is really the name of the game for today’s chip designer and future chip designers.

Yikes! Why Is My SystemVerilog Testbench So Slooooow?

Thursday, August 23rd, 2012

It turns out that SystemVerilog != Verilog. OK, we all figured that out a few years ago as we started to build verification environments using IEEE 1800 SystemVerilog. While it did add design features like new ways to interface code, it also added verification features like classes, dynamic data types, and randomization that have no analog (pardon the pun) in the IEEE 1364 Verilog language. But the syntax was a reasonable extension, many more designs needed advanced verification, and we had the Open Verification Methodology (OVM) followed by the standardized Accellera Systems Initiative Universal Verification Methodology (UVM), so thousands of engineers got trained on object-oriented programming. Architectures were created, templates were followed, and the verification IP components were built. Then they were integrated and the simulation speed took a nose dive. Yikes, why did that happen?
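A minimal, hypothetical example of where those cycles go: each transaction below allocates an object, invokes the constraint solver, and manipulates a dynamic array, all zero-delay testbench activity that consumes real CPU time before the DUT advances at all. Class and variable names are invented for illustration.

```systemverilog
// Hypothetical transaction class: every randomize() call invokes the
// constraint solver, and every transaction allocates a new object, so the
// testbench can claim a large share of wall-clock time on its own.
class packet;
  rand bit [31:0] addr;
  rand bit [7:0]  payload[];

  constraint c_len  { payload.size() inside {[1:256]}; }
  constraint c_addr { addr[1:0] == 2'b00; }
endclass

module tb_cost_demo;
  initial begin
    packet p;
    longint unsigned total_bytes = 0;

    // Generating 100,000 packets is pure testbench work: object allocation,
    // constraint solving, and dynamic-array handling, all in zero simulation
    // time but very real CPU time.
    repeat (100_000) begin
      p = new();
      if (!p.randomize())
        $fatal(1, "randomization failed");
      total_bytes += p.payload.size();
    end
    $display("generated %0d payload bytes", total_bytes);
  end
endmodule
```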

To view this white paper, click here.