
Posts Tagged ‘Portable Stimulus’

The Verification Times are Changing

Monday, April 17th, 2017

Adnan Hamid, CEO, Breker Verification Systems

If you have been an ASIC designer for a couple of decades, you know how much your job has evolved during that time, not only in the way chips get designed, using large numbers of IP blocks, but also in the way they are verified.

Back in 1996, there were about 10,000 design starts, and the average design size was under 30,000 gates. Designs were composed of a small number of blocks, almost all developed in-house, and most were designed to integrate with an external processor, unless the chip itself was a processor. It is likely that you were using a directed test methodology.

The high-end processor of the time was the Pentium Pro, which came in at 5.5 million gates and was implemented in 500-nanometer (nm) technology. The ARM7, released in 1994, was beginning to gain attention at one tenth the size of the Pentium Pro. Also gaining significant attention were new languages and tools that enabled pseudo-random test generation.

The design process became more efficient over that period through the introduction of higher-level design languages and corresponding synthesis tools, but most of the gains have come from increasing amounts of reuse. Today, most designs count on reused blocks to fill more than 90% of the chip area, and many use tens or even a hundred different IP blocks.

One of the primary value statements of IP reuse is that verification of those blocks is done by the IP provider and, since a given block is used in multiple designs, its overall quality is likely to be higher than that of an in-house developed block. In the early days of IP, that may have been a questionable claim, because many IP suppliers were little more than two people hacking away at code in a garage. Today, however, most IP suppliers are trusted partners and, even though their designs are still not 100% verified, they no longer pose the largest threat to the overall success of a design.

The primary verification methodologies being used today are still the same as those that were emerging 20 years ago. The languages have been improved and standardized and the methodologies that go along with them have become highly developed. The fact remains that those methodologies were targeted at what we would consider to be a block today.

A typical SoC design team will design one or two custom blocks. These differentiate its design from others in the industry, and it is likely that those blocks will continue to use existing verification methodologies. The larger problem today is how to verify the system-level functionality of the chip. Existing methodologies are highly inefficient for this task, which means that most design teams revert to directed test strategies at the system level.

A system-level test can be viewed as the execution of a scenario that corresponds to a typical user-level function. In a cell phone, this could be making a call while watching a video. For a smart TV, it could be watching a station from the antenna while streaming an Internet video in an inset window. These are the types of functions that must be proven to work before a tapeout can be considered.

For people tasked with this problem, solutions are rapidly emerging. You may have heard about a development within Accellera called the Portable Stimulus Working Group. This group is bringing together ideas from several EDA companies that have created tools to solve the integration verification problem. Most of these tools are based on graphs that define the valid data and control flows of the design. From a graph, they can randomly generate test cases that exercise those paths through the design. [Flowgraphs have been used in verification since 1968, in the SNAP simulator built at TRW Systems. Editor]

The biggest change in this methodology, compared to existing ones, is that it is not focused on stimulus generation. With SystemVerilog, randomization helps generate stimulus, but the user is responsible for defining constraints, generating the necessary checkers, creating the coverage model and, in some cases, showing that coverage actually corresponds to detection of faults. With Portable Stimulus, the user creates a verification intent model, a unified model for the entire act of verification. From it, both stimulus and checkers can be created, and constraints and coverage are annotated directly on the graph.
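To make the contrast concrete, the sketch below shows the kind of block-level constrained-random code that the SystemVerilog approach requires the user to write by hand: the constraints and the coverage model live in the testbench, separate from any checkers. The class, constraint, and signal names are hypothetical and purely illustrative.

```systemverilog
// Hypothetical block-level item: with constrained-random verification the
// user hand-writes the constraints and the coverage model.
class axi_read_item;
  rand bit [31:0] addr;
  rand bit [3:0]  burst_len;

  // User-written constraint: keep reads in a legal window and limit
  // burst length (illustrative values only).
  constraint c_legal {
    addr inside {[32'h1000_0000 : 32'h1FFF_FFFF]};
    burst_len inside {[1:8]};
  }

  // User-written coverage model, maintained separately from the stimulus.
  covergroup cg;
    cp_len : coverpoint burst_len {
      bins short_bursts = {[1:2]};
      bins long_bursts  = {[3:8]};
    }
  endgroup

  function new();
    cg = new();
  endfunction

  function void sample_cov();
    cg.sample();
  endfunction
endclass

module tb;
  axi_read_item item;
  initial begin
    item = new();
    repeat (10) begin
      if (!item.randomize()) $error("randomization failed");
      item.sample_cov();
      $display("addr=%h burst_len=%0d", item.addr, item.burst_len);
    end
  end
endmodule
```

In a graph-based intent model, by contrast, the legal paths, checks, and coverage goals are captured once on the graph, and the equivalent of the code above is generated.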

Figure 1: Portable stimulus enables a graph-based verification approach where users are able to generate stimuli from a graph.

Source: IBM, from DVCon India 2015 User Track Presentation

What this means is that verification is about to become very similar to design: the user creates a high-level verification model, and a synthesis engine generates the testbench from that model. The user no longer writes low-level pieces of verification, and the tests that are created span multiple IP blocks and the connectivity that binds them together.

Several other advantages come from the notion of a model and a synthesis engine. How often have you struggled to adapt a test originally targeted at a simulator so that it can run on an emulator? How often have you been given a testbench developed for standalone verification of a block and been asked to integrate it into sub-system verification? How often have you had testbenches from a previous design that you want to adapt to a new design in which only things such as interrupts or the address map have changed? Portable Stimulus addresses all of these issues because the notion of reuse is fundamentally built into it.

Some languages were standardized before having been fully proven. That is not the case with graph-based verification. As an example, Breker has worked in this area for over a decade and while we may have been ahead of our time, it means that several of our customers have been successfully turning out chips based on this emerging methodology.

Accellera should release the first version of the graph-based verification methodology standard by the end of 2017. If you are wondering about adopting tools today, an easy migration path will be provided from existing tool-specific specification languages to those expected to be contained in the released standard.

About Adnan Hamid

Adnan Hamid is the founder and CEO of Breker and the inventor of its core technology. Under his leadership, Breker has come to be a market leader in functional verification technologies for complex systems-on-chips (SoCs), and Portable Stimulus in particular. Breker is an active Accellera member on the Portable Stimulus Working Group, taking a lead in defining the specifications of the upcoming Portable Stimulus Standard. Breker's expertise in the automation of self-verifying test cases is setting the bar for the completeness of verification for SoCs.

Specialists and Generalists Needed for Verification

Friday, December 16th, 2016

Gabe Moretti, Senior Editor

Verification continues to take up a huge portion of the project schedule. Designs are getting more complex, and with complexity comes what appears to be an emerging trend: the move toward generalists and specialists. Generalists manage the verification flow and are knowledgeable about simulation and the UVM. Specialists with expertise in formal verification, portable stimulus and emulation are deployed when needed. I talked with four specialists in the technology:

David Kelf, Vice President of Marketing, OneSpin Solutions,

Harry Foster, Chief Scientist Verification at Mentor Graphics

Lauro Rizzatti, Verification Consultant, Rizzatti LLC, and

Pranav Ashar, CTO, Real Intent

I asked each of them the following questions:

- Is this a real trend or a short-term aberration?

- If it is a real trend, how do we make complex verification tools and methodologies suitable for mainstream verification engineers?

- Are verification tools too complicated for a generalist to become an expert?

David: Electronics design has always had its share of specialists. A good argument could be made that CAD managers were specialists in the IT department, and that the notion of separate verification teams was driven by emerging specialists in testbench automation approaches. Now we are seeing something else: the breakup of verification experts into specialized groups, some project based, and others that operate across different projects. With design complexity comes verification complexity. Formal verification and emulation, for example, were little-used tools, and then only for the most difficult designs. That has changed with the increase in size, complexity and functionality of modern designs.

Formal verification, in particular, found its way into mainstream flows through "apps" in which the entire use model is automated and the product is focused on specific high-value verification functions. Formal is also applied manually through hand-written assertions, and this task is often left to specialist formal users, creating apparently independent groups within companies that may be applied to different projects. The emergence of these teams, while providing a valuable function, can limit the proliferation of this technology as they become the keepers of the flame, if you like. The generalist engineers come to rely on them rather than exploring the use of the technology for themselves. This, in turn, limits the growth of the technology and the realization of its full potential as an alternative to simulation.

Harry: It’s true, design is getting more complex. However, as an industry, we have done a remarkable job of keeping up with design, which we can measure by the growth in demand for design engineers. In fact, between 2007 and 2016 the industry has gone through about four iterations of Moore’s Law. Yet, the demand for design engineers has only grown at a 3.6 percent compounded annual growth rate.

Figure 1

During this same period, the demand for verification engineers has grown at a 10.4 percent compounded annual growth rate. In other words, verification complexity is growing at a faster rate than design complexity. This should not be too big a surprise since it is generally accepted in the academic world that design complexity grows at a Moore’s Law rate, while verification complexity grows at a much steeper rate (i.e., double exponential).

One contributing factor to growing verification complexity is the emergence of new layers of verification requirements that did not exist years ago. For example, beyond the traditional functional domain, we have added clock domains, power domains, security domains, safety requirements, software and, of course, overall performance requirements.

Figure 2

Each of these new layers of requirements requires specialized domain knowledge. Hence, domain expertise is now a necessity in both the design and verification communities to effectively address emerging new layers of requirements.

A one-size-fits-all approach is no longer sufficient to completely verify an SoC. There is a need for specialized tools and methodologies specifically targeted at each of these new (and continually emerging) layers of requirements. Hence, in addition to domain expertise, verification process specialists are required to address growing verification complexity.

The emergence of verification specialization is not a new trend, although it has perhaps become more obvious due to growing verification complexity. For example, to address the famous floating-point bug of the 1990s, it became apparent that theorem proving and other formal technology would be necessary to fill the gap left by traditional simulation-based verification approaches. These techniques require full-time dedication and are ones that generalists are unlikely to master because their focus is spread across so many other tools and methodologies. One could make the same argument about the adoption of constrained-random, coverage-driven testbenches using UVM (requiring object-oriented programming skills, which I do not consider generalist skills), emulation, and FPGA prototyping. These technologies have become indispensable in today's SoC verification/validation tool box, and to get the most out of the project's investment, specialists are required.

So the question is: how do we make complex tools and methodologies suitable for mainstream verification engineers? We are addressing this issue today by developing verification apps that solve a specific, narrowly focused problem and require minimal tool and methodology expertise. For example, we have seen numerous formal apps emerge that span a wide spectrum of the design process, from IP development into post-silicon validation. These apps no longer require the user to write assertions or be an expert in formal techniques. In fact, the formal engines are often hidden from the user, who then focuses on "what" they want to verify versus the "how." A few examples include: connectivity check, used during IP integration; register check, used to exhaustively verify control and status register behavior against its CSV or IP-XACT register specification; and security check, used to exhaustively verify that only the paths you specify can reach security- or safety-critical storage elements. Perhaps one of the best-known formal apps is clock-domain crossing (CDC) checking, which is used to identify metastability issues due to the interaction of multiple clock domains.
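Under the hood, such apps typically reduce the user's intent to machine-generated properties. As a rough, hand-written SystemVerilog illustration of what a connectivity check and a security-path check boil down to, consider the sketch below; the module and signal names are hypothetical.

```systemverilog
// Hypothetical examples of the properties a connectivity or security app
// would generate from a CSV or IP-XACT description (names invented).
module soc_connectivity_checks (
  input logic clk,
  input logic rst_n,
  input logic uart_irq,      // interrupt at the IP boundary
  input logic irq_at_cpu,    // same interrupt as seen at the CPU
  input logic dbg_unlocked,  // debug unlock state
  input logic secure_reg_wr  // write strobe of a security-critical register
);
  // Connectivity check: the interrupt seen at the CPU equals the IP's
  // interrupt output delayed by one flop (illustrative latency).
  assert property (@(posedge clk) disable iff (!rst_n)
    irq_at_cpu == $past(uart_irq));

  // Security-style check: the security-critical register is never written
  // while the debug path is unlocked.
  assert property (@(posedge clk) disable iff (!rst_n)
    secure_reg_wr |-> !dbg_unlocked);
endmodule
```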

Emulation is another area where we are seeing the emergence of verification apps. One example is deterministic ICE, which overcomes the unpredictability of traditional ICE environments by adding 100 percent visibility and repeatability for debugging, and which provides access to other "virtual-based" use models. Another is the DFT emulation app, which accelerates Design for Test (DFT) verification prior to tape-out, minimizing the risk of catastrophic failure while significantly reducing run times when verifying designs after DFT insertion.

In summary, the need for verification specialists today is driven by two demands: (1) specialized domain knowledge driven by new layers of verification requirements, and (2) verification tool and methodology expertise. This is not a bad thing. If I had a brain aneurysm, I would prefer that my doctor had mastered the required skills in endoscopy and other brain surgery techniques rather than being a general practitioner with a broad set of skills. Don't get me wrong, both are required.

Lauro: In my mind, it is a trend, but the distinction may blur its contours soon. Let's take hardware emulation. Hardware emulation has always required specialists for its deployment and, even more so, to optimize it to its fullest capacity. As they used to say, it came with a team of application engineers in the box to ensure that time-to-emulation did not exceed time-to-first-silicon. Today, hardware emulation is still a long way from being a plug-and-play verification tool, but recent developments by emulation vendors are making it easier for generalists to use and deploy. The move from the in-circuit-emulation (ICE) mode, driven by a physical target system, to transaction-based communication, driven by a virtual testbench, gives it the status of a data-center resource available to all types of verification engineers without specialist intervention. I see that as a huge step forward in the evolution of hardware emulation and its role in the design verification flow.

Pranav: The generalist vs. specialist discussion fits right into the shifting paradigm in which generic verification tools are being replaced by tools that are essentially verification solutions for specific failure modes.

The failure modes addressed in this manner are typically due to intricate phenomena that are hard to specify and model in simulation or general-purpose Assertion-Based Verification (ABV), hard to resolve for a simulator or unguided ABV tool, increasingly likely to occur as SOC size and integration complexity grow, and often insidious and hard to isolate. Such failure modes are a common cause of respins and redesign, with the result that sign-off and bug-hunting for them based on solution-oriented tools has become ubiquitous in the design community.

Good examples are failures caused by untimed paths on an SOC, common sources of which are asynchronous clock-domain crossings, interacting reset domains and Static Timing Analysis (STA) exceptions. It has become common practice to address these scenarios using solution-oriented verification tools.

In the absence of recent advances by EDA companies in developing solution-oriented verification tools, SOC design houses would have been reliant on in-house design verification (DV) specialists to develop and maintain homegrown strategies for complex failure modes. In the new paradigm, the bias has shifted back toward the generalist DV engineer with the heavy lifting being done by an EDA tool. The salutary outcome of this trend for design houses is that the verification of SOCs for these complex failures is now more accessible, more automatic, more robust, and cheaper.
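As a rough illustration of the first failure mode Ashar mentions, asynchronous clock-domain crossings, the sketch below shows the classic structure that CDC sign-off tools look for when a signal moves between unrelated clocks. The module and signal names are hypothetical.

```systemverilog
// Illustrative clock-domain crossing: a single-bit flag passing from the
// clk_a domain into the clk_b domain through a two-flop synchronizer.
module cdc_sync (
  input  logic clk_b,
  input  logic rst_b_n,
  input  logic flag_a,   // generated in the clk_a domain
  output logic flag_b    // safe to consume in the clk_b domain
);
  logic meta;

  // Two-flop synchronizer: the first flop may go metastable, the second
  // gives it a full cycle to settle before the value is consumed.
  always_ff @(posedge clk_b or negedge rst_b_n) begin
    if (!rst_b_n) begin
      meta   <= 1'b0;
      flag_b <= 1'b0;
    end else begin
      meta   <= flag_a;
      flag_b <= meta;
    end
  end
endmodule
```

A CDC tool checks structurally that every crossing goes through such a synchronizer (or an equivalent scheme for multi-bit buses) and flags paths that feed unsynchronized asynchronous signals into downstream logic.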

My Conclusions

It is hard to disagree with the comments by my interlocutors. Everything said is true. But I think they have been too kind and simply answered the questions without objecting to their limitations. In fact, the way to simplify verification is to improve the way circuits are designed. What is missing from design methodology is validation of what has been implemented before it is deemed ready for verification. Designers are so pressed for time, due to design complexity and short schedules, that they must find ways to cut corners. They reuse whenever possible and rely on their experience to assume that the circuit does what it is supposed to do. Unfortunately, in most cases where a bug is found during design integration, they have neglected to check that the circuit does not do what it is not supposed to do. That is not always the fault of EDA tools. The most glaring example is the choice by the electronics industry to use Verilog over VHDL. VHDL is a much more robust language, with built-in checks that exclude design errors that can be made using Verilog. But VHDL takes longer to write, and design engineers decided that schedule time took precedence over error avoidance.

The issue is always the same, no matter how simple or complex the design is: the individual self-assurance that he or she knows what he or she is doing. The way to make designs easier to verify is to create them better. That means that the design should be semantically correct and that the implementation of all required features should be completely validated by the designers themselves before the design is handed to a verification engineer.

I do not think that I have just demanded that a design engineer also be a verification engineer. What may be required is a UDM: a Unified Design Methodology. The industry is, maybe unconsciously, already moving in that direction in two ways: the increased use of third-party IP and the increasing volume of design rules issued by each foundry. I can see these two trends growing stronger with each successive technology iteration; it is time to stop ignoring them.

Verification Choices: Formal, Simulation, Emulation

Thursday, July 21st, 2016

Gabe Moretti, Senior Editor

Lately there have been articles and panels about the best type of tools to use to verify a design. Most of the discussion has centered on the choice between simulation and emulation but, of course, formal techniques should also be considered. I did not include FPGA-based verification in this article because I consider it a choice equivalent to emulation, but at a different price point.

I invited a few representatives of EDA companies to answer questions about the topic.  The respondents are:

Steve Bailey, Director of Emerging Technologies at Mentor Graphics,

Dave Kelf, Vice President of Marketing at OneSpin Solutions

Frank Schirrmeister, Senior Product Management Director at Cadence

Seena Shankar, Technical Marketing Manager at Silvaco

Vigyan Singhal, President and CEO at Oski Technology

Lauro Rizzatti, Verification Consultant

A Search for the Best Technique

I first wanted an opinion on what each technology does best. Of course, the question is ambiguous because the choice of tool, as Lauro Rizzatti points out, depends on the characteristics of the design to be verified. "As much as I favor emulation, when design complexity does not stand in the way, simulation and formal are superior choices for design verification. Design debugging in simulation is unmatched by emulation. Not only interactive, flexible and versatile, simulation also supports four-state and timing analysis.
However, design complexity growth is here to stay, and the curve will only get more challenging into the future. And we not only have to deal with complexity measured in more transistors or gates in hardware, but also measured in more code in embedded software. Tasked to address this trend, both simulation and formal would hit the wall. This is where emulation comes in to rule the day. Performance is not the only criterion by which to measure the viability of a verification engine."

Vigyan Singhal wrote: "Both formal and emulation are becoming increasingly popular. Why use a chain saw (emulation) when you can use a scalpel (formal)? Every bug that is truly a block-level bug (and most bugs are) is most cost-effective to discover with formal. True system-level bugs, like bandwidth or performance for representative traffic patterns, are best left for emulation. Too often, we make the mistake of not using formal early enough in the design flow."
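A minimal sketch of the kind of block-level formal target Singhal has in mind might look like the following, where a couple of interface assumptions and safety properties let a formal tool exhaustively check a FIFO controller. The module, parameter, and signal names are hypothetical.

```systemverilog
// Hypothetical block-level formal harness for a FIFO controller.
module fifo_props #(parameter int DEPTH = 16) (
  input logic                   clk,
  input logic                   rst_n,
  input logic                   push,
  input logic                   pop,
  input logic                   full,
  input logic                   empty,
  input logic [$clog2(DEPTH):0] count   // occupancy exposed by the design
);
  // Environment assumptions: the interface contract forbids pushing when
  // full and popping when empty.
  assume property (@(posedge clk) disable iff (!rst_n) full  |-> !push);
  assume property (@(posedge clk) disable iff (!rst_n) empty |-> !pop);

  // Safety properties a formal tool can prove exhaustively on the block:
  // occupancy never exceeds the depth and the flags track the count.
  assert property (@(posedge clk) disable iff (!rst_n) count <= DEPTH);
  assert property (@(posedge clk) disable iff (!rst_n) full  == (count == DEPTH));
  assert property (@(posedge clk) disable iff (!rst_n) empty == (count == 0));
endmodule
```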

Seena Shankar provided a different point of view. "Simulation gives full visibility into the RTL and testbench. Earlier in the development cycle, it is easier to fix bugs and rerun a simulation. But we are definitely gated by the number of cycles that can be run. A basic test exercising a couple of functional operations could take up to 12 hours for a design with 100 million gates.

Emulation takes longer to set up because all RTL components need to be in place before a test run can begin. The upside is that millions of operations can be run in minutes. However, debug is difficult and time consuming compared to simulation. Formal verification needs a different kind of expertise. It is only effective for smaller blocks, but it can really find corner-case bugs through the assumptions and constraints provided to the tool."

Steve Bailey concluded that: "It may seem that simulation is being used less today. But it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-value (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift left, being used earlier in the verification and validation flow and causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines)."

If I had my choice I would like to use formal tools to develop an executable specification as early as possible in the design, making sure that all functional characteristics of the intended product will be implemented and that the execution parameters will be respected.  I agree that the choice between simulation and emulation depends on the size of the block being verified, and I also think that hardware/software co-simulation will most often require the use of an emulation/acceleration device.

Limitations to Cooperation Among the Techniques

Since all three techniques have value in some circumstance, can designers easily move from one to another?

Frank Schirrmeister provided a very exhaustive response to the question, including a good figure.

“The following figure shows some of the connections that exist today. The limitations of cooperation between the engines are often of a less technical nature. Instead, they tend to result from the gaps between different disciplines in terms of cross knowledge between them.

Figure 1: Techniques Relationships (Courtesy of Cadence)

Some example integrations include:

- Simulation acceleration, combining RTL simulation and emulation (a rough sketch follows this list). The technical challenges have mostly been overcome using transactors to connect testbenches, often at the transaction level, running on simulation hosts to the hardware holding the design under test (DUT) and executing at higher speed. This allows users to combine the expressiveness of simulated testbenches, to increase verification efficiency, with the speed of synthesizable DUTs in emulation.

- At this point, we have even enabled hot-swap between simulation and emulation. For example, we can run gate-level netlists without timing in emulation at faster speeds. This allows users to reach a point of interest late in the execution that would take hours or days to reach in simulation. Once the point of interest is reached, users can switch (hot swap) back into simulation, adding back the timing, and continue the gate-level timing simulation.

- Emulation and FPGA-based prototyping can share a common front-end, such as in the Cadence System Development Suite, to allow faster bring-up using multi-fabric compilation.

- Formal and simulation also combine nicely for assertions, X-propagation, etc., and, when assertions are synthesizable and can be mapped into emulation, formal techniques are linked even with hardware-based execution.”
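To illustrate the first integration Schirrmeister lists, transaction-based acceleration, here is a rough, tool-neutral sketch of the common dual-top arrangement in which an untimed testbench calls a transaction-level task on a synthesizable BFM that could run in an emulator. All module, task, and signal names are hypothetical, and the DUT instance is elided.

```systemverilog
// HDL top (synthesizable side): a simple BFM that drives DUT pins and
// exposes a transaction-level task to the testbench.
module hdl_top;
  logic clk = 0;
  always #5 clk = ~clk;        // free-running clock for this sketch

  logic [31:0] dut_addr, dut_data;
  logic        dut_wr;
  // dut u_dut (.clk(clk), .addr(dut_addr), .data(dut_data), .wr(dut_wr));

  // Transaction-level entry point: one call drives a whole bus write.
  task automatic do_write(input logic [31:0] addr, input logic [31:0] data);
    @(posedge clk);
    dut_addr <= addr;
    dut_data <= data;
    dut_wr   <= 1'b1;
    @(posedge clk);
    dut_wr   <= 1'b0;
  endtask
endmodule

// HVL top (untimed side, running on the simulation host): generates
// transactions and calls the BFM task across the host/emulator boundary.
module hvl_top;
  initial begin
    for (int i = 0; i < 4; i++) begin
      // Only transactions cross the boundary, not individual pin wiggles.
      hdl_top.do_write(32'h1000_0000 + i*4, $urandom());
    end
    $finish;
  end
endmodule
```

The key point is that only transaction-level calls cross the host/emulator boundary, which is what makes the acceleration worthwhile.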

Vigyan Singhal noted that: "Interchangeability of databases and poorly architected testbenches are the limitations. There is still no unified coverage-database standard enabling integration of results across formal, simulation and emulation. Often, formal or simulation testbenches are not architected for reuse, even though they almost always can be. All constraints in formal testbenches should be simulatable and emulatable; if checkers and bus functional models (BFMs) are kept separate in simulation, the checkers can sometimes be reused in formal and in emulation."
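One common way to achieve the separation Singhal describes is to keep checkers in standalone modules and attach them with bind, so the same properties can run in simulation, be proven formally and, if kept synthesizable, be mapped into emulation. The sketch below is a hypothetical example with invented names.

```systemverilog
// Standalone checker kept separate from any BFM so it can be reused
// across simulation, formal, and emulation (names invented).
module handshake_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  // Once asserted, req must hold until it is acknowledged.
  assert property (@(posedge clk) disable iff (!rst_n)
    req && !ack |=> req);

  // ack is only ever seen in response to a request.
  assert property (@(posedge clk) disable iff (!rst_n)
    ack |-> req);
endmodule

// Attached non-intrusively to the design in any of the three engines:
// bind bus_master handshake_checker u_chk (.clk(clk), .rst_n(rst_n),
//                                          .req(req), .ack(ack));
```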

Dave Kelf concluded that: "The real question here is: how do we describe requirements and design specs in machine-readable form, use this information to produce a verification plan, translate them into test structures for different tools, and extract coverage information that can be checked against the verification plan? It is this top-down, closed-loop environment that is generally accepted as ideal, but we have yet to see it realized in the industry. We are limited fundamentally by the ability to create a machine-readable specification."

Portable Stimulus

Accellera has formed a study group to explore the possibility of developing a portable stimulus methodology. The group is very active, and progress is being made in that direction. Since the group has yet to publish a first proposal, it was difficult to ask any specific questions, although I thought that a judgment on the desirability of such an effort was important.

Frank Schirrmeister wrote: “At the highest level, the portable stimulus project allows designers to create tests to verify SoC integration, including items like low-power scenarios and cache coherency. By keeping the tests as software routines executing on processors that are available in the design anyway, the stimulus becomes portable between the different dynamic engines, specifically simulation, emulation, and FPGA prototyping. The difference in usage with the same stimulus then really lies in execution speed – regressions can run on faster engines with less debug – and on debug insight once a bug is encountered.”

Dave Kelf also has a positive opinion about the effort. "Portable Stimulus is an excellent effort to abstract the key parts of the UVM test structures such that they may be applied to both simulation and emulation. This is a worthy effort in the right direction, but it is just scratching the surface. The industry needs to bring assertions into this process, and consider how this stimulus may be better derived from high-level specifications."

SystemVerilog

The language SystemVerilog is considered by some to be the best language to use for SoC development. Yet the language has limitations, according to some of the respondents.

Seena Shankar answered the question "Is SystemVerilog the best we can do for system verification?" as follows: "Sort of. SystemVerilog encapsulates the best features of the software and hardware paradigms for verification. It is a standard that is very easy to follow but may not be the best in performance. If the performance hit is managed with a combination of SystemC/C++, Verilog, or other verification languages, the solution might be limited in terms of portability across projects or simulators."

Dave Kelf wrote: "One of the most misnamed languages is SystemVerilog. Possibly the only thing this language was not designed to do was any kind of system specification. The name was produced in a misguided attempt to compete or compare with SystemC, and that was clearly a mistake. Now it is possible to use SystemVerilog at the system level, but it is clear that a C-derived language is far more effective.
What is required is a format that allows untimed algorithmic design with enough information for it to be synthesized, virtual platforms that provide a hardware/software test capability at an acceptable level of performance, and general system structures to be analyzed and specified. C++ is the only language close to this requirement."

And Frank Schirrmeister observed: "SystemVerilog and technologies like the Universal Verification Methodology (UVM) work well at the IP and sub-system level, but seem to run out of steam when extended to full system-on-chip (SoC) verification. That's where the portable stimulus project comes in, extending what is available in UVM to the SoC level and allowing vertical re-use from IP to SoC. This approach addresses the areas in which UVM falls short at the SoC level."

Conclusion

Both design engineers and verification engineers are still waiting for help from EDA companies. They have to deal with differing methodologies and imperfect languages while tackling ever more complex designs. It is not surprising, then, that verification is the most expensive portion of a development project. Designers must be careful to ensure that what they write is verifiable, while verification engineers need to not only understand the requirements and architecture of the design, but also be familiar with the characteristics of the language used by the developers to describe both the architecture and the functionality of the intended product. I believe that one way to improve the situation is for both EDA companies and system companies to approach a new design not just as a piece of silicon but as a product that integrates hardware, software, mechanical, and physical characteristics. Then both development and verification plans can choose the most appropriate tools, tools that can co-exist and provide coherent results.