Posts Tagged ‘testbench’

The Verification Times are Changing

Monday, April 17th, 2017

Adnan Hamid, CEO, Breker Verification Systems

If you have been an ASIC designer for a couple of decades, you know how much your job has evolved during that time, not only in the way chips get designed, using large numbers of IP blocks, but also in the way they are verified.

Back in 1996, there were about 10,000 design starts and the average size was under 30,000 gates. Designs were composed of a small number of blocks, almost all developed in-house, and most were designed to integrate with an external processor, unless the chip itself was a processor. It is likely that you were using a directed test methodology.

The high-end processor of the time was the Pentium Pro, which came in at 5.5 million gates, implemented in 500-nanometer (nm) technology, and the ARM7, released in 1994, was beginning to gain some attention at one tenth the size of the Pentium Pro. Also gaining significant attention were new languages and tools that enabled pseudo-random test generation.

The design process became more efficient over that time period through the introduction of higher-level design languages and corresponding synthesis tools, but most of the gains have come from increasing amounts of reuse. Today, most designs rely on reused blocks for more than 90% of the chip area and use tens or even a hundred different IP blocks.

One of the primary value statements of IP reuse is that the verification of those blocks is done by the IP provider and, since each block is used in multiple designs, its overall quality is likely to be higher than that of an in-house developed block. In the early days of IP, that may have been a questionable claim because many of the IP suppliers were nothing more than two people hacking away at code in a garage. Today, however, most IP suppliers are trusted partners and, even though their designs are still not 100% verified, they no longer pose the largest threat to the overall success of a design.

The primary verification methodologies being used today are still the same as those that were emerging 20 years ago. The languages have been improved and standardized and the methodologies that go along with them have become highly developed. The fact remains that those methodologies were targeted at what we would consider to be a block today.

A typical SoC design team will design one or two custom blocks. These differentiate their design from others in the industry, and it is likely that those blocks will continue to use existing verification methodologies. The larger problem today is: how do you verify the system-level functionality of the chip? Existing methodologies are highly inefficient for this task, meaning that most design teams revert to directed test strategies at the system level.

A system-level test can be viewed as the execution of a scenario that corresponds to a typical user-level function. In a cell phone, this could be making a call while watching a video. For a smart TV, it could be watching a TV station from the antenna while streaming an Internet video in an inset window. These are the types of functions that must be proven to work before a tapeout can be considered.

For people tasked with this problem, solutions are rapidly emerging. You may have heard about a development within Accellera called the Portable Stimulus Working Group. This group is in the process of bringing together ideas from several EDA companies that have created tools to solve the integration verification problem. Most of them are based on the idea of graphs that define the valid data and control flows of the design. From these graphs, the tools can randomly generate testcases that exercise those paths through the design. [Flowgraphs have been used in verification since 1968, in the SNAP simulator built at TRW Systems. – Editor]
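
To make the flow-graph idea concrete, here is a minimal sketch in plain SystemVerilog, assuming invented scenario names rather than any vendor's actual input format: the legal paths of a small video scenario are encoded as constrained-random choices, and each randomization picks one walk through the graph.

// Hypothetical sketch only: a tiny scenario "graph" encoded as constrained-random
// choices. Each rand field is a decision node; the constraints are the legal edges.
class video_scenario;
  typedef enum {ANTENNA, HDMI, NET_STREAM} source_e;
  typedef enum {MPEG2, H264, NONE}         codec_e;
  typedef enum {FULLSCREEN, INSET}         window_e;

  rand source_e source;
  rand codec_e  codec;
  rand window_e window;

  constraint legal_paths {
    (source == ANTENNA)    -> codec inside {MPEG2, H264};
    (source == NET_STREAM) -> codec == H264;
    (source == HDMI)       -> codec == NONE;       // already-decoded input
    (codec  == NONE)       -> window == FULLSCREEN;
  }
endclass

module scenario_gen;
  initial begin
    video_scenario s = new();
    repeat (10) begin
      if (!s.randomize()) $fatal(1, "randomization failed");
      $display("path: %s -> %s -> %s", s.source.name(), s.codec.name(), s.window.name());
    end
  end
endmodule

A real graph-based tool would go further, scheduling the chosen actions on the processors and resources of the design and generating the corresponding checks, which plain randomization does not do.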

The biggest change in this methodology, compared to existing ones, is that it is not focused solely on stimulus generation. With SystemVerilog, the randomization helps generate stimulus, but the user is responsible for defining constraints, generating the necessary checkers, creating the coverage model and, in some cases, showing that coverage actually corresponds to the detection of faults. With Portable Stimulus, the user creates a verification intent model, a unified model for the entire act of verification. From it, both stimulus and checkers can be created, and constraints and coverage are annotated directly on the graph.
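
For contrast, here is an illustrative sketch, with hypothetical names, of the separate pieces a SystemVerilog user maintains by hand today: stimulus constraints, a coverage model, and a checker, each written and kept in sync independently.

package tb_pieces;
  class bus_txn;
    rand bit [31:0] addr;
    rand bit [7:0]  len;
    constraint c_legal { addr[1:0] == 2'b00; len inside {[1:16]}; }  // hand-written stimulus constraints
  endclass

  covergroup bus_cov with function sample(bus_txn t);                // hand-written coverage model
    cp_len : coverpoint t.len { bins short_b = {[1:4]}; bins long_b = {[5:16]}; }
  endgroup
endpackage

module bus_checker (input logic clk, req, gnt);                      // hand-written checker
  // Every request must be granted within 8 cycles.
  a_grant : assert property (@(posedge clk) req |-> ##[1:8] gnt)
    else $error("request not granted within 8 cycles");
endmodule

With an intent model, these three views are derived from one description instead of being written and maintained separately.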

Figure 1: Portable stimulus enables a graph-based verification approach where users are able to generate stimuli from a graph.

Source: IBM, from DVCon India 2015 User Track Presentation

What this means is that verification is about to become very similar to design in that the user creates a high-level verification model and then has a synthesis engine generate the testbench from that model. The user will no longer hand-craft low-level pieces of verification, and the tests that are created will span multiple IP blocks and the connectivity that binds them together.

Several other advantages come from the notion of a model and a synthesis engine. How often have you struggled with the adaptation of a test originally targeted for a simulator, which now needs to be run on an emulator? How often have you been given a testbench developed for standalone verification of a block and been asked to integrate that into sub-system verification? How often have you had testbenches from a previous design that you want to adapt for a new design where only things such as interrupts or the address map have changed? Portable Stimulus addresses all of these issues because it has the notion of reuse fundamentally built into it.

Some languages were standardized before having been fully proven. That is not the case with graph-based verification. Breker, for example, has worked in this area for over a decade and, while we may have been ahead of our time, that means several of our customers have been successfully turning out chips based on this emerging methodology.

Accellera should release the first version of the graph-based verification standard by the end of 2017. If you are wondering about adopting tools today, an easy migration path will be provided from existing specification languages to those expected to be contained within the released standard.

About Adnan Hamid

Adnan Hamid is the founder and CEO of Breker and the inventor of its core technology. Under his leadership, Breker has become a market leader in functional verification technologies for complex systems-on-chips (SoCs), and Portable Stimulus in particular. Breker is an active Accellera member on the Portable Stimulus Working Group, taking a lead in defining the specifications of the upcoming Portable Stimulus Standard. Breker's expertise in the automation of self-verifying testcases is setting the bar for the completeness of verification for SoCs.

Verification Joins the Adults’ Table

Tuesday, January 24th, 2017

Adam Sherer, Group Director, Product Management, System & Verification Group, Cadence

As we plan for our family gatherings this holiday season, it’s time to welcome Verification to the adults’ table. Design and Implementation are already at the table, having established their own families consisting of architects with the comprehensive experience to manage the overall flow and specialists who provide the deep knowledge needed to make each project succeed. Verification has matured with the realization that it needs its own family of architects and specialists who have the experience and knowledge to rapidly and repeatedly verify complex projects.

Figure 1 The family table

This maturation of Verification occurred as complexity drove the need for the architect’s role. Designs pushed through a billion gates and systems grew their functional dependency on the fusion of analog, software, digital, and power. Meanwhile, the teams verifying these designs became distributed around the globe. A holistic view of verification became necessary and it was rooted in a more rigorous verification planning process. When we listen to the architect at our holiday dinner this year, we’ll hear how she wished for and got verification management automation with Cadence’s vManager solution. In order to close her verification plan, she needs to reuse verification IP (VIP), specify new Cadence VIP protocols, and direct the internal development of new VIP running on a range of verification engines. She also realizes that traditional methods will not scale to complex scenarios that must be verified across the complete SoC, so she is excited by the new portable stimulus standard work in Accellera and is piloting a project using Cadence’s Perspec System Verifier to gain an efficiency edge over her company’s competitors.

Design and Implementation were impressed by the automation that Verification was able to access. They asked Verification if that meant she had resources to spare for their families. She couldn’t help but laugh but then calmed down and explained how her family is growing with the specialists needed to implement the verification plans. She also discussed how those experts are actually already working with experts from Design and Implementation to achieve verification closure.

Figure 2 The Cadence Verification Family

Verification is a multi-engine, multi-abstraction, multi-domain task that starts and finishes with the entire development team. At the start of development, design experts and verification experts apply JasperGold formal analysis with coverage to both raise quality and mark the block-level features as verified in the overall plan. UVM experts then step in to complete comprehensive IP/subsystem verification using high-performance digital and mixed-signal simulation with the Incisive Enterprise Simulator. While randomization and four-state simulation are critical at this stage, the UVM testbench can consume as much as 50% of the simulation time, which lengthens runtime as the project moves to subsystem and SoC integration.

The verification experts then apply acceleration techniques to reduce time spent in the testbench, develop new scenarios with the Perspec System Verifier to enable fast four-state RTL simulation with the Cadence RocketSim Parallel Simulation Engine, and accelerate with the Cadence Palladium Z1 Enterprise Emulation System. As the project moves to the performance, capacity, coverage, and accessibility of the Palladium Z1 engine, new experts are able to address system features dependent on bare-metal software and in-circuit data. Since the end customer interacts with the system through application software, the verification experts work with software teams using the Cadence Protium Rapid Prototyping Platform, which provides the performance needed to support the verification needs of this team.

With all of these experts around the world, the verification architect explains that she needs fabrics that enable them to communicate. She uses the Cadence Indago Debug Platform and vManager to provide unified debug across the engines, and multi-engine metrics to help her automate the verification plan. More and more of the engines provide verification metrics, such as coverage from simulation and emulation, that can be merged together and rolled up to the vManager solution. Even the implementation teams are working together with the verification experts to simulate post-PG netlists using the Incisive Enterprise Simulator XL and RocketSim solutions, enabling final signoff on the project.

As Design and Implementation pass dessert around the table, they are very impressed with Verification. They’ve seen the growing complexity in their own families and have been somewhat perplexed by how verification gets done. Verification has talked about new tools, standards, and methodologies for years, and they assumed those productivity enhancements meant that verification engineers could remain generalists by accessing more automation. Hearing more about the breadth and depth of the verification challenge has helped them realize that there is an absolute need for a complete verification family with architects and experts. Raising a toast to the newest member of the electronic design adults’ table, the family knows that 2017 is going to be a great year.

Verification Choices: Formal, Simulation, Emulation

Thursday, July 21st, 2016

Gabe Moretti, Senior Editor

Lately there have been articles and panels about the best type of tools to use to verify a design.  Most of the discussion has been centered on the choice between simulation and emulation, but, of course, formal techniques should also be considered.  I did not include FPGA-based verification in this article because I consider it a choice equivalent to emulation, but at a different price point.

I invited a few representatives of EDA companies to answer questions about the topic.  The respondents are:

Steve Bailey, Director of Emerging Technologies at Mentor Graphics

Dave Kelf, Vice President of Marketing at OneSpin Solutions

Frank Schirrmeister, Senior Product Management Director at Cadence

Seena Shankar, Technical Marketing Manager at Silvaco

Vigyan Singhal, President and CEO at Oski Technology

Lauro Rizzatti, Verification Consultant

A Search for the Best Technique

I first wanted an opinion of what each technology does better.  Of course, the question is ambiguous because the choice of tool, as Lauro Rizzatti points out, depends on the characteristics of the design to be verified.  “As much as I favor emulation, when design complexity does not stand in the way, simulation and formal are superior choices for design verification. Design debugging in simulation is unmatched by emulation. Not only is simulation interactive, flexible and versatile, it also supports four-state and timing analysis.
However, design complexity growth is here to stay, and the curve will only get more challenging in the future. And we not only have to deal with complexity measured in more transistors or gates in hardware, but also complexity measured in more code in embedded software. Tasked to address this trend, both simulation and formal would hit the wall. This is where emulation comes in to rule the day.  Performance is not the only criterion to measure the viability of a verification engine.”

Vigyan Singhal wrote: “Both formal and emulation are becoming increasingly popular. Why use a chain saw (emulation) when you can use a scalpel (formal)? Every bug that is truly a block-level bug (and most bugs are) is most cost-effectively discovered with formal. True system-level bugs, like bandwidth or performance for representative traffic patterns, are best left to emulation.  Too often, we make the mistake of not using formal early enough in the design flow.”

Seena Shankar provided a different point of view. “Simulation gives full visibility into the RTL and testbench. Earlier in the development cycle, it is easier to fix bugs and rerun a simulation. But we are definitely gated by the number of cycles that can be run. A basic test exercising a couple of functional operations could take up to 12 hours for a design with 100 million gates.

Emulation takes longer to set up because all RTL components need to be in place before a test run can begin. The upside is that millions of operations can be run in minutes. However, debug is difficult and time consuming compared to simulation.  Formal verification needs a different kind of expertise. It is only effective for smaller blocks but can really find corner-case bugs through the assumptions and constraints provided to the tool.”
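
As a minimal sketch of that last point, assuming a hypothetical FIFO interface and invented signal names: assumptions constrain the environment, and the formal tool exhaustively checks the assertion, including corner cases a directed test might never hit.

// Hypothetical formal property set; signal names and DEPTH are invented.
module fifo_formal_props #(parameter DEPTH = 4) (
  input logic                   clk, rst_n, push, pop,
  input logic [$clog2(DEPTH):0] count
);
  // Assumptions constrain the environment: no push when full, no pop when empty.
  asm_no_push_full : assume property (@(posedge clk) disable iff (!rst_n)
                                      (count == DEPTH) |-> !push);
  asm_no_pop_empty : assume property (@(posedge clk) disable iff (!rst_n)
                                      (count == 0) |-> !pop);

  // The assertion the tool proves exhaustively: occupancy can never over- or underflow.
  ast_count_legal : assert property (@(posedge clk) disable iff (!rst_n)
                                     count <= DEPTH);
endmodule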

Steve Bailey concluded: “It may seem that simulation is being used less today. But it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-value (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift left and be used earlier in the verification and validation flow, causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).”

If I had my choice I would like to use formal tools to develop an executable specification as early as possible in the design, making sure that all functional characteristics of the intended product will be implemented and that the execution parameters will be respected.  I agree that the choice between simulation and emulation depends on the size of the block being verified, and I also think that hardware/software co-simulation will most often require the use of an emulation/acceleration device.

Limitations to Cooperation Among the Techniques

Since all three techniques have value in some circumstance, can designers easily move from one to another?

Frank Schirrmeister provided a very exhaustive response to the question, including a good figure.

“The following figure shows some of the connections that exist today. The limitations on cooperation between the engines are often of a less technical nature; instead, they tend to result from gaps in cross-domain knowledge between the different disciplines.

Figure 1: Techniques Relationships (Courtesy of Cadence)

Some example integrations include:

- Simulation acceleration, combining RTL simulation and emulation. The technical challenges have mostly been overcome using transactors to connect testbenches, often at the transaction level, running on simulation hosts to the hardware holding the design under test (DUT) and executing at higher speed. This allows users to combine the expressiveness of simulated testbenches with the speed of synthesizable DUTs in emulation to increase verification efficiency (a minimal transactor sketch follows this list).

- At this point, we have even enabled hot-swap between simulation and emulation. For example, we can run gate-level netlists without timing in emulation at faster speeds. This allows users to reach a point of interest late in the execution that would take hours or days to reach in simulation. Once the point of interest is reached, users can switch (hot swap) back into simulation, adding back the timing, and continue the gate-level timing simulation.

- Emulation and FPGA-based prototyping can share a common front-end, such as in the Cadence System Development Suite, to allow faster bring-up using multi-fabric compilation.

- Formal and simulation also combine nicely for assertions, X-propagation, etc., and, when assertions are synthesizable and can be mapped into emulation, formal techniques are linked even with hardware-based execution.”
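
As referenced in the first bullet, here is a hedged sketch of what a transactor looks like, with invented interface and signal names: the transaction-level testbench calls one task per bus operation, and the task hides the cycle-level pin activity on the DUT side.

// Illustrative only: a minimal task-based transactor (BFM).
interface simple_bus_if (input logic clk);
  logic        valid;
  logic [31:0] addr, wdata;

  // One call = one bus write; the cycle-level detail stays on this side,
  // so the testbench above it can remain at the transaction level.
  task automatic write(input logic [31:0] a, input logic [31:0] d);
    @(posedge clk);
    valid <= 1'b1; addr <= a; wdata <= d;
    @(posedge clk);
    valid <= 1'b0;
  endtask
endinterface

In an acceleration flow, the synthesizable side of such a transactor sits with the DUT in the emulator while the calling testbench stays on the simulation host.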

Vigyan Singhal noted: “Interchangeability of databases and poorly architected testbenches are limitations. There is still no unified coverage database standard enabling integration of results between formal, simulation, and emulation. Often, formal or simulation testbenches are not architected for reuse, even though they almost always can be. All constraints in formal testbenches should be simulatable and emulatable; if checkers and bus functional models (BFMs) are separated in simulation, the checkers can sometimes be reused in formal and in emulation.”
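
A small sketch of that separation, with a hypothetical DUT name: the protocol checker lives in its own module, so the same assertions can be bound to the design in simulation, handed to a formal tool, or, if synthesizable, mapped into emulation.

// Hypothetical checker module, kept separate from any BFM or stimulus code.
module handshake_checker (input logic clk, rst_n, req, ack);
  // req must stay asserted until ack arrives; ack must follow a req.
  ast_req_held  : assert property (@(posedge clk) disable iff (!rst_n)
                                   req && !ack |=> req);
  ast_ack_cause : assert property (@(posedge clk) disable iff (!rst_n)
                                   ack |-> $past(req));
endmodule

// Attach the checker without touching the DUT source ("my_dut" is a placeholder):
bind my_dut handshake_checker u_chk (.clk(clk), .rst_n(rst_n), .req(req), .ack(ack));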

Dave Kelf concluded: “The real question here is: how do we describe requirements and design specs in machine-readable form, use this information to produce a verification plan, translate that plan into test structures for different tools, and extract coverage information that can be checked against the verification plan? This top-down, closed-loop environment is generally accepted as ideal, but we have yet to see it realized in the industry. We are limited fundamentally by the ability to create a machine-readable specification.”

Portable Stimulus

Accellera has formed a study group to explore the possibility of developing a portable stimulus methodology.  The group is very active and progress is being made in that direction.  Since the group has yet to publish a first proposal, it was difficult to ask specific questions, although I thought that a judgment on the desirability of such an effort was important.

Frank Schirrmeister wrote: “At the highest level, the portable stimulus project allows designers to create tests to verify SoC integration, including items like low-power scenarios and cache coherency. By keeping the tests as software routines executing on processors that are available in the design anyway, the stimulus becomes portable between the different dynamic engines, specifically simulation, emulation, and FPGA prototyping. The difference in usage with the same stimulus then really lies in execution speed – regressions can run on faster engines with less debug – and on debug insight once a bug is encountered.”

Dave Kelf also has a positive opinion of the effort. “Portable Stimulus is an excellent effort to abstract the key parts of the UVM test structures such that they may be applied to both simulation and emulation. This is a worthy effort in the right direction, but it is just scratching the surface. The industry needs to bring assertions into this process, and consider how this stimulus may be better derived from high-level specifications.”

SystemVerilog

The SystemVerilog language is considered by some to be the best language to use for SoC development.  Yet, the language has limitations, according to some of the respondents.

Seena Shankar answered the question “Is SystemVerilog the best we can do for system verification?” as follows: “Sort of. SystemVerilog encapsulates the best features from the software and hardware paradigms for verification. It is a standard that is very easy to follow but may not be the best in performance. If the performance hit can be managed with a combination of SystemC/C++, Verilog, or any other verification languages, the solution might be limited in terms of portability across projects or simulators.”

Dave Kelf wrote: “One of the most misnamed languages is SystemVerilog. Possibly the only thing this language was not designed to do was any kind of system specification. The name was produced in a misguided attempt to compete or compare with SystemC, and that was clearly a mistake. Now it is possible to use SystemVerilog at the system level, but it is clear that a C-derived language is far more effective.
What is required is a format that allows untimed algorithmic design with enough information for it to be synthesized, virtual platforms that provide a hardware/software test capability at an acceptable level of performance, and general system structures to be analyzed and specified. C++ is the only language close to this requirement.”

And Frank Schirrmeister observed: “SystemVerilog and technologies like universal verification methodology (UVM) work well at the IP and sub-system level, but seem to run out of steam when extended to full system-on-chip (SoC) verification. That’s where the portable stimulus project comes in, extending what is available in UVM to the SoC level and allowing vertical re-use from IP to the SoC. This approach overcomes the issues for which UVM falls short at the SoC level.”

Conclusion

Both design engineers and verification engineers are still waiting for help from EDA companies.  They have to deal with differing methodologies and imperfect languages while tackling ever more complex designs.  It is not surprising, then, that verification is the most expensive portion of a development project.  Designers must be careful to ensure that what they write is verifiable, while verification engineers need not only to understand the requirements and architecture of the design, but also to be familiar with the characteristics of the language used by developers to describe both the architecture and the functionality of the intended product.  I believe that one way to improve the situation is for both EDA companies and system companies to approach a new design not just as a piece of silicon but as a product that integrates hardware, software, mechanical, and physical characteristics.  Then both development and verification plans can choose the most appropriate tools that can co-exist and provide coherent results.

Horizontal and Vertical Flow Integration for Design and Verification

Thursday, August 20th, 2015

Frank Schirrmeister, senior group director for product marketing of the System Development Suite at Cadence.

System design and verification are a critical component for making products successful in an always-on and always-connected world. For example, I wear a device on my wrist that constantly monitors my activities and buzzes to remind me that I’ve been sitting for too long. The device transmits my activity to my mobile phone that serves as a data aggregator, only to forward it on to the cloudy sky from where I get friendly reminders about my activity progress. I’m absolutely hoping that my health insurance is not connected to my activity progress because my premium payments could easily fluctuate daily. How do we go about verifying our personal devices and the system interaction across all imaginable scenarios? It sounds like an impossibly complex task.

From personal experience, it is clear to me that flows need to be connected both in horizontal and vertical directions. Bear with me for a minute while I explain.

Rolling back about 25 years, I was involved in my first chip design. To optimize area, I designed a three-transistor dynamic memory cell for what we would today call 800nm (0.8 micron) technology. The layout was designed manually from gate-level schematics that had been entered manually as well. In order to verify throughput for the six-chip system that my chip was part of, I developed a model at the register-transfer level (RTL) using this new thing at the time called VHSIC Hardware Description Language (VHDL) (yep, I am European). What I would call vertical integration today was clunky at best 25 years ago. I was stubbing data out from VHDL into files that would be re-used to verify the gate level. My colleagues and I would write scripts to extract layout characteristics to determine the speed of the memory cell and annotate that to the gate level for verification. No top-down automation was used, i.e., no synthesis of any kind.

About five to seven years after my first chip design (we are now late in the ‘90s if you are counting), everything in the flow had moved upward and automation was added. My team designed an MPEG-2 decoder fully in RTL and used logic synthesis for implementation. The golden reference data came from C-models—vertically going upward—and was not directly connected to the RTL. Instead, we used file-based verification of the RTL against the C-model. Technology data from the 130nm technology that we used at the time was annotated back into logic synthesis for timing simulation and to drive placement. Here, vertical integration really started to work. And the verification complexity had risen so much that we needed to extend horizontally, too. We verified the RTL using both simulation and emulation with a System Realizer M250. We took drops of the RTL, froze them, cross-mapped them manually to emulation and ran longer sequences—specifically around audio/video synchronization, for which we needed seconds of actual real-time video decoding to be executed. We used four levels vertically: layout to gate to the RTL (automated with annotations back to the RTL) and the C-level on top for reference. Horizontally, we used both simulation and emulation.

Now fast-forward another 10 years or so. At that point, I had switched to the EDA side of things. Using early electronic system-level (ESL) reference flows, we annotated .lib technology information all the way up into virtual platforms for power analysis. Based on the software driving the chip, the technology impact on power consumption could be assessed. Accuracy was a problem, and that’s why I think those flows may have been a bit too early for their time back in 2010.

So where are we today?

Well, the automation between the four levels has been greatly increased vertically. Users take .lib information all the way up into emulation using tools like the Cadence Palladium® Dynamic Power Analysis (DPA), which enables engineers using emulation to also analyze software in a system-level environment. This tool allows designers to achieve power estimates with up to 90% accuracy relative to actual chip power consumption, as reported by TI and, most recently, Realtek. High-level synthesis (HLS) has become mainstream for parts of the chip. That means the fourth level above the RTL is getting more and more connected as design entry moves upward, and with it, verification is more and more connected as well.

And horizontally, we are now using at least four engines, formal, RTL simulation, emulation, and field-programmable gate array (FPGA)-based prototyping, which are increasingly integrated. A couple of examples include:

  • Simulation acceleration – combining simulation and emulation
  • Simulation/emulation hot swap – stopping in simulation and starting in emulation, as well as vice versa
  • Virtual platform/emulation hybrids – combining virtual platforms and emulation
  • Multi-fabric compilation – same flow for emulation and FPGA-based prototyping
  • Unified Power Format (UPF)/Common Power Format (CPF) low-power verification – using the same setup for simulation and emulation
  • Simulation/emulation coverage merge – combining data collected in simulation and emulation

Arguably, with the efforts to shift post-silicon verification even further to the left, the actual chip becomes the fifth engine.

So what’s next? It looks like we have the horizontal pillar engines complete now when we add in the chip. Vertically, integration will become even closer to allow a more accurate prediction prior to actual implementations. For example, the recent introduction of the Cadence Genus™ Synthesis Solution delivers improved productivity during RTL design and improved quality of results (QoR) in final implementation. In addition, the introduction of the Cadence Joules™ RTL Power Solution provides a more accurate measure of RTL power consumption, which greatly improves the top-down estimation flow from the RTL downstream. This further increases accuracy for the Palladium DPA and the Cadence Incisive® Enterprise Simulator that automates testbench creation and performs coverage-driven functional verification, analysis, and debug—from the system level to the gate level—boosting verification productivity and predictability.

Horizontal and vertical flow integration is really the name of the game for today’s chip designer and future chip designers.

Yikes! Why Is My SystemVerilog Testbench So Slooooow?

Thursday, August 23rd, 2012

It turns out that SystemVerilog != Verilog. OK, we all figured that out a few years ago as we started to build verification environments using IEEE 1800 SystemVerilog. While it did add design features like new ways to interface code, it also had verification features like classes, dynamic data types, and randomization that have no analog (pardon the pun) in the IEEE 1364 Verilog language. But the syntax was a reasonable extension, many more designs needed advanced verification, and we had the Open Verification Methodology (OVM) followed by the standardized Accellera Systems Initiative Universal Verification Methodology (UVM), so thousands of engineers got trained on object-oriented programming. Architectures were created, templates were followed, and the verification IP components were built. Then they were integrated and the simulation speed took a nose dive. Yikes, why did that happen?
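
The white paper covers the details, but as a hypothetical illustration of one classic culprit (not taken from the paper): class-based code that polls the DUT on every clock forces the simulator back into testbench code thousands of times per transaction, while an edge-triggered wait lets it stay in the compiled RTL. Interface and signal names below are invented.

interface dut_if (input logic clk);
  logic done;
endinterface

// Slow pattern: wake the testbench on every clock edge just to test a flag.
task automatic wait_for_done_polling(virtual dut_if vif);
  while (vif.done !== 1'b1) @(posedge vif.clk);
endtask

// Faster pattern: block once until the signal itself rises.
task automatic wait_for_done_event(virtual dut_if vif);
  @(posedge vif.done);
endtask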

To view this white paper, click here.