
The Verification Times are Changing

Monday, April 17th, 2017

Adnan Hamid, CEO, Breker Verification Systems

If you have been an ASIC designer for a couple of decades, you know how much your job has evolved during that time, not only in the way chips get designed, using large numbers of IP blocks, but also in the way they are verified.

Back in 1996, there were about 10,000 design starts, and the average size was under 30,000 gates. Designs were composed of a small number of blocks, almost all developed in-house, and most were designed to integrate with an external processor, unless the chip itself was a processor. It is likely that you were using a directed test methodology.

The high-end processor of the time was the Pentium Pro, which came in at 5.5 million gates, implemented in 500-nanometer (nm) technology, and the ARM7, released in 1994, was beginning to gain some attention at one-tenth the size of the Pentium Pro. Also gaining significant attention were new languages and tools that enabled pseudo-random test generation.

The design process became more efficient over that time period through the introduction of higher-level design languages and corresponding synthesis tools, but most of the gains have come from increasing amounts of reuse. Today, more than 90% of a typical chip's area is filled with reused design blocks, and most designs use tens or even a hundred different IP blocks.

One of the primary value statements of IP reuse is that the verification of those blocks is done by the IP provider and, since each block is used in multiple designs, its overall quality is likely to be higher than that of an in-house developed block. In the early days of IP, that may have been a questionable statement because many of the IP suppliers were nothing more than two people hacking away at code in a garage. Today, however, most IP suppliers are trusted partners and, even though their designs are still not 100% verified, they no longer pose the largest threat to the overall success of a design.

The primary verification methodologies being used today are still the same as those that were emerging 20 years ago. The languages have been improved and standardized, and the methodologies that go along with them have become highly developed. The fact remains, however, that those methodologies were targeted at what we would today consider to be a block.

A typical SoC design team will design one or two custom blocks. These blocks differentiate their design from others in the industry, and it is likely that they will continue to be verified with existing methodologies. The larger problem today is how to verify the system-level functionality of the chip. Existing methodologies are highly inefficient for this task, meaning that most design teams revert to directed test strategies at the system level.

A system-level test can be viewed as the execution of a scenario that corresponds to a typical user-level function. In a cell phone, this could be making a call while watching a video. For a smart TV, it could be watching a TV station from the antenna while streaming an Internet video in an inset. These are the types of functions that must be proven to work before a tapeout can be considered.

For people tasked with this problem, solutions are rapidly emerging. You may have heard about a development within Accellera called the Portable Stimulus Working Group. This group is in the process of bringing together ideas from several EDA companies that have created tools to solve the integration verification problem. Most of these tools are based on the idea of graphs that define the valid data and control flows of the design. From these graphs, they can randomly generate testcases that exercise paths through the design. [Flowgraphs have been used in verification since 1968, in the SNAP simulator built at TRW Systems. Editor]
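To make the notion of a flowgraph concrete, here is a minimal sketch in Python (deliberately not in any Portable Stimulus syntax, since the standard's language was still being defined) of a scenario graph and a random walk over it that yields testcases. The graph, its node names, and the walk are illustrative assumptions, not taken from any particular tool.

import random

# A scenario flowgraph: each node is an action, and its edges list
# the legal next actions. All node names are illustrative only.
SCENARIO_GRAPH = {
    "start":        ["config_dma", "config_uart"],
    "config_dma":   ["dma_transfer"],
    "config_uart":  ["uart_send"],
    "dma_transfer": ["check_memory", "uart_send"],
    "uart_send":    ["check_uart"],
    "check_memory": ["end"],
    "check_uart":   ["end"],
    "end":          [],
}

def random_testcase(graph, node="start"):
    """Walk from 'start' until a node with no successors is reached,
    picking a legal successor at random; the path is one testcase."""
    path = [node]
    while graph[node]:
        node = random.choice(graph[node])
        path.append(node)
    return path

for _ in range(3):
    print(" -> ".join(random_testcase(SCENARIO_GRAPH)))

Each walk from start to end is one legal path through the design's data and control flows; a production tool would also solve constraints along the path and track which paths have already been covered.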

The biggest change in this methodology, compared to existing ones, is that it is not focused on stimulus generation. With SystemVerilog, the randomization helps generate stimulus, but the user is responsible for defining constraints, generating the necessary checkers, creating the coverage model and, in some cases, showing that coverage actually corresponds to detection of faults. With Portable Stimulus, the user creates a verification intent model, and this is a unified model for the entire act of verification. From it, stimulus and checkers can be created. Constraints and coverage are annotated directly on the graph.
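As a rough illustration of what a unified intent model buys you, the following Python sketch attaches a constraint, a coverage record, and an expected-result function to a single scenario action, so that stimulus and checker both fall out of the same description. The Action class and the DMA example are hypothetical, invented for this article's argument, and are not the standard's syntax.

import random

class Action:
    """One node of an intent model: the constraint, the coverage
    annotation, and the golden result all live on the node."""
    def __init__(self, name, constraint, expected):
        self.name = name
        self.constraint = constraint   # the legal input space
        self.expected = expected       # golden model for the checker
        self.covered = set()           # coverage annotated on the node

    def generate(self):
        # Pick a legal input, record coverage, return stimulus + check.
        value = random.choice(self.constraint)
        self.covered.add(value)
        return value, self.expected(value)

# e.g. a DMA burst whose legal lengths are powers of two up to 64
# and whose expected result is bytes_moved == burst length
dma = Action("dma_transfer",
             constraint=[1, 2, 4, 8, 16, 32, 64],
             expected=lambda burst: burst)

stimulus, expect = dma.generate()
print(f"drive burst={stimulus}, then check bytes_moved == {expect}")
print(f"coverage so far: {sorted(dma.covered)} of {dma.constraint}")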

Figure 1: Portable stimulus enables a graph-based verification approach in which stimuli are generated directly from a graph.

Source: IBM, from DVCon India 2015 User Track Presentation

What this means is that verification is about to become very similar to design: the user creates a high-level verification model and then has a synthesis engine generate the testbench from that model. The user will no longer hand-craft low-level pieces of verification, and the tests that are created will span multiple IP blocks and the connectivity that binds them together.

Several other advantages come from the notion of a model and a synthesis engine. How often have you struggled to adapt a test originally targeted at a simulator so that it can run on an emulator? How often have you been given a testbench developed for standalone verification of a block and been asked to integrate it into sub-system verification? How often have you had testbenches from a previous design that you wanted to adapt for a new design where only things such as interrupts or the address map have changed? Portable Stimulus addresses all of these issues because the notion of reuse is fundamentally built into it.
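To illustrate that kind of reuse, here is a toy "synthesis engine" in Python that emits the same scenario as simulator transactions and as embedded C for an emulator, under two different address maps. The backends, the address maps, and the operation names are all invented for this sketch; a real Portable Stimulus tool is far more elaborate.

# Two hypothetical address maps: chip_v2 remaps both blocks.
ADDRESS_MAPS = {
    "chip_v1": {"uart": 0x10000000, "dma": 0x20000000},
    "chip_v2": {"uart": 0x40000000, "dma": 0x50000000},
}

# One scenario, written once against block names, not addresses.
SCENARIO = [("dma", "start_transfer"), ("uart", "send_status")]

def emit(scenario, target, address_map):
    """Synthesize the scenario for the chosen platform and address map."""
    for block, op in scenario:
        addr = address_map[block]
        if target == "simulator":
            print(f"  bus_write(0x{addr:08x});        // {block}.{op}")
        else:  # emulator: generate embedded C for the processor to run
            print(f"  *(volatile int *)0x{addr:08x} = 1;  /* {block}.{op} */")

for chip, amap in ADDRESS_MAPS.items():
    for target in ("simulator", "emulator"):
        print(f"-- {chip} on {target}")
        emit(SCENARIO, target, amap)

The scenario itself never changes; only the backend and the address map do, which is the essence of the portability argument.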

Some languages were standardized before having been fully proven. That is not the case with graph-based verification. As an example, Breker has worked in this area for over a decade and, while we may have been ahead of our time, several of our customers have been successfully turning out chips based on this emerging methodology.

Accellera should release the first version of the graph-based verification methodology standard by the end of 2017. If you are considering adopting tools today, an easy migration path will be provided from the existing specification languages to those expected to be contained within the released standard.

About Adnan Hamid

Adnan Hamid is the founder and CEO of Breker and the inventor of its core technology. Under his leadership, Breker has become a market leader in functional verification technologies for complex systems-on-chips (SoCs), and for Portable Stimulus in particular. Breker is an active Accellera member on the Portable Stimulus Working Group, taking a lead in defining the specification of the upcoming Portable Stimulus Standard. Breker's expertise in the automation of self-verifying testcases is setting the bar for the completeness of verification for SoCs.