Gabe Moretti, Contributing Editor
As we approach the DVCon conference, it is timely to look at how our industry manages design verification.
Much has been said about the tools, but I think not enough resources have been dedicated to the issue of managing and measuring verification. John Brennan, Product Director in the Verification Group at Cadence, observed that verification used to be a whole lot easier. It used to be that you sent some stimulus to your design, viewed a few waveforms, collected some basic data from the simulator log, and then moved on to the next part of the design to verify. The problem with all of this today is that there is simply too much information, and with randomness comes a lack of clarity about what has actually been tested and what has not. He continued by stating that you cannot verify every state and transition in your design; it is simply impossible, because the magnitude is too large. So what do you verify, and how are IP and chip suppliers addressing the challenge? We at Cadence see several trends emerging that will help users with this daunting task: use collaboration-based environments, use the right tool for the job, have deep analytics and visibility, and deploy feature-based verification.
My specific questions to the panelists follow, each paired with a representative answer from one of them.
* How does a verification group manage the verification process and assess risk?
Dave Kelf, Marketing Director at OneSpin Solutions, opened the detailed discussion by describing the present situation. Whereas design management follows a reasonably predictable path, verification management is still based on the subjective, unpredictable assessment of when enough testing is enough.
Verification management is all about predicting the time and resources required to reach the moving target of verification closure. However, there is still no concrete method available to predict when a design is fully, exhaustively, 100% tested. Today's techniques all have an element of uncertainty, which translates into the risk of an undetected bug. The best a verification manager can do is to assess the progress point at which the probability of a remaining bug is infinitesimally small.
For a large design block, a combination of test coverage results, a comparison of the test specification against the simulations actually performed, the time since the last bug was discovered, the verification time spent, and the approaching end of the schedule may all play into this decision. For a complete SoC, running the entire system, including software, on an emulator for days on end might be the only way, today, to inspire confidence in a working design.
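To make that kind of judgment concrete, here is a small, purely illustrative Python sketch of how such disparate signals might be blended into a single confidence indicator. The metric names, weights, and thresholds are my own invention for the example, not any panelist's methodology; every team would tune or replace such a formula.

```python
from dataclasses import dataclass

@dataclass
class ClosureMetrics:
    """Hypothetical snapshot of the signals described above."""
    functional_coverage: float   # 0.0-1.0, from the coverage database
    spec_items_simulated: int    # test-spec items exercised in simulation
    spec_items_total: int
    days_since_last_bug: int
    days_to_schedule_end: int

def closure_confidence(m: ClosureMetrics) -> float:
    """Blend the metrics into a rough 0-1 confidence score (illustrative weights only)."""
    spec_ratio = m.spec_items_simulated / max(m.spec_items_total, 1)
    bug_quiet = min(m.days_since_last_bug / 14.0, 1.0)     # two quiet weeks saturate this term
    schedule = 1.0 if m.days_to_schedule_end > 0 else 0.5   # penalize a slipped schedule
    return (0.4 * m.functional_coverage
            + 0.3 * spec_ratio
            + 0.2 * bug_quiet
            + 0.1 * schedule)

snapshot = ClosureMetrics(functional_coverage=0.92, spec_items_simulated=180,
                          spec_items_total=200, days_since_last_bug=10,
                          days_to_schedule_end=5)
print(f"closure confidence: {closure_confidence(snapshot):.2f}")
```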
If we could solve just one remaining problem in verification, it should be achieving a deep and meaningful understanding of verification coverage as it pertains to the original functional specification.
* What is the role of verification coverage in providing metrics toward verification closure, and is this proving useful?
Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, answered that coverage is, frankly, all that the verification team has to assess how well the chip has been exercised. Code coverage is a given, but in recent years, functional coverage has gained much more prominence. The most recent forms of coverage are derived automatically, for example, from assertions or graph-based scenario models, and so provide much return for little investment.
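The mechanics behind such metrics are simple to picture. The sketch below, a hypothetical Python example of my own (the bin names and hit counts are invented), merges coverage results from several regression runs, whether the bins came from code coverage, hand-written covergroups, or automatically derived points, and reports the fraction of bins hit.

```python
from collections import defaultdict

def merge_coverage(runs):
    """Union hit counts per coverage bin across regression runs."""
    merged = defaultdict(int)
    for run in runs:
        for bin_name, hits in run.items():
            merged[bin_name] += hits
    return merged

def coverage_score(merged, goal_hits=1):
    """Fraction of bins hit at least goal_hits times."""
    if not merged:
        return 0.0
    covered = sum(1 for hits in merged.values() if hits >= goal_hits)
    return covered / len(merged)

# Invented results from two runs; bins mix code, functional, and assertion-derived coverage.
runs = [
    {"code.line.alu": 120, "func.burst_len.max": 0, "assert.fifo_no_overflow": 3},
    {"code.line.alu": 45,  "func.burst_len.max": 2, "assert.fifo_no_overflow": 0},
]
merged = merge_coverage(runs)
print(f"merged coverage: {coverage_score(merged):.0%}")
```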
* How has design evolution affected verification management? Examples include IP usage and SoC trends.
Rajeev Ranjan, CTO of Jasper Design Automation, observed that as designs get bigger in general, and as they incorporate more and more IPs developed by multiple internal and external parties, integration verification becomes a very large concern. Specific concerns include interface validation, connectivity checking, functional verification of IPs in the face of hierarchical power management strategies, and ensuring that the hardware coherency protocols do not cause any deadlock in the overall system. Additionally, depending on the end market for the system, security path verification can also be a significant, system-wide challenge.
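Connectivity checking, one of the integration tasks Ranjan lists, amounts to comparing the connections the integration spec requires against those actually present in the assembled design. The toy Python sketch below illustrates the idea; the signal names and the mis-wired interrupt are invented for the example, and real tools of course work on RTL rather than Python sets.

```python
# Intended connections from a (hypothetical) integration spec.
intended = {
    ("cpu.irq_out", "intc.irq_in[0]"),
    ("dma.done",    "intc.irq_in[1]"),
    ("uart.tx",     "pads.uart_tx"),
}
# Connections extracted from the (hypothetical) assembled design.
extracted = {
    ("cpu.irq_out", "intc.irq_in[0]"),
    ("uart.tx",     "pads.uart_tx"),
    ("dma.done",    "intc.irq_in[2]"),   # mis-wired interrupt
}

missing  = intended - extracted   # required by the spec but absent from the design
spurious = extracted - intended   # present in the design but not in the spec

for src, dst in sorted(missing):
    print(f"MISSING : {src} -> {dst}")
for src, dst in sorted(spurious):
    print(f"SPURIOUS: {src} -> {dst}")
```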
* What should be the first step in preparing a verification plan?
Tom Fitzpatrick, verification evangelist at Mentor Graphics, has dedicated many years to the study and solution of verification issues. He noted that the first step in preparing a verification plan is to understand what the design is supposed to do and under what conditions it's expected to do it. Verification is really the art of modeling the "real world" in which the device is expected to operate, so it's important to have that understanding. After that, it's important to understand the difference between "block-level" and "system-level" behaviors that you want to test. Yes, the entire system must be able to, for example, boot an RTOS and process data packets or whatever, but there are a lot of specifics that can be verified separately at the block or subsystem level before you just throw everything together and see if it works. Understanding which pieces of the lower-level environments can be reused and will prove useful at the system level, and being able to reuse those pieces effectively and efficiently, is one key to verification productivity.
Another key is the ability to verify specific pieces of functionality as early as possible in the process and to use that information to avoid targeting that functionality again at higher levels. For example, using automated tools at the block level to identify reset or X-propagation issues, or state-machine deadlock conditions, eliminates the need to create stimulus scenarios to uncover these issues. Similarly, being able to verify all aspects of a block's protocol implementation at the block level means that you don't need to waste time creating system-level scenarios to try to get specific blocks to use different protocol modes. Identifying where best to verify each piece of your verification plan allows every phase of your verification to be more efficient.
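One way to picture Fitzpatrick's point is a verification plan that records, for each feature, the level at which it is closed and the technique used, so higher-level testing can skip what is already covered below. The Python sketch that follows is a hypothetical illustration of mine; the feature names, levels, and techniques are invented examples.

```python
# Hypothetical verification-plan entries: (feature, level where verified, technique).
PLAN = [
    ("reset / X-propagation",        "block",     "automated structural analysis"),
    ("state-machine deadlock",       "block",     "formal property check"),
    ("protocol modes",               "block",     "constrained-random + assertions"),
    ("interconnect arbitration",     "subsystem", "constrained-random"),
    ("boot RTOS, process packets",   "system",    "emulation with production software"),
]

def items_for_level(plan, level):
    """Return the features planned to be closed at the given level."""
    return [feature for feature, lvl, _ in plan if lvl == level]

print("Closed below system level:",
      items_for_level(PLAN, "block") + items_for_level(PLAN, "subsystem"))
print("Left for system level:", items_for_level(PLAN, "system"))
```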
* Are criteria available to determine which tools need to be considered for various project phases? Which tools are proving effective? Is budget a consideration?
Yuan Lu, Chief Verification Architect at Atrenta Inc., contributed the following. Verification teams deploy a variety of tools to address various categories of verification issues, depending on how you break your design into multiple blocks and what you want to test at each level of hierarchy. At a macro level, comprehensive, exhaustive verification is expected at the block/IP level. At the SoC level, however, functions such as connectivity checking, heartbeat verification, and hardware/software co-verification are performed.
Over the years, some level of consensus has emerged within the industry as to what types of tools need to be used for verification at the IP and SoC levels. But, so far, there is no perfect way to hand off IPs to the SoC team. The ultimate goal is to ensure that the IP team communicates to the SoC team what has been tested, and that the SoC team can use this information to determine whether the IP-level verification was sufficient to meet the SoC's needs.
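One could imagine that hand-off as a small, machine-readable manifest the SoC team checks against the configuration it actually instantiates. The Python sketch below is purely hypothetical; the IP name, configurations, and sign-off thresholds are invented to illustrate the kind of information Lu says needs to flow from the IP team to the SoC team.

```python
# Hypothetical IP-to-SoC hand-off manifest: which configurations were verified and how well.
ip_handoff = {
    "name": "ddr_ctrl",
    "verified_configs": {
        ("x16", "1600MT/s"): {"functional_coverage": 0.98, "open_bugs": 0},
        ("x32", "1600MT/s"): {"functional_coverage": 0.91, "open_bugs": 1},
    },
}

soc_config = ("x32", "2133MT/s")   # configuration actually instantiated on the SoC

record = ip_handoff["verified_configs"].get(soc_config)
if record is None:
    print(f"{ip_handoff['name']}: configuration {soc_config} was never verified at IP level")
elif record["functional_coverage"] < 0.95 or record["open_bugs"]:
    print(f"{ip_handoff['name']}: configuration {soc_config} needs additional IP-level work")
else:
    print(f"{ip_handoff['name']}: IP-level sign-off is sufficient for {soc_config}")
```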
* Not long ago, the Universal Verification Methodology (UVM) was unveiled with the promise of improving verification management, among other advantages. How has that worked?
Herve Alexanian, Engineering Director of the Advanced Dataflow Group at Sonics, Inc., pointed out that as an internal protocol is thoroughly specified, including its configurable options, a set of assertions can naturally be written or generated, depending on the degree of configurability. Along the same lines, functional coverage points and reference (UVM) sequences are also defined. These definitions are the best entry point into modern verification approaches, allowing the most benefit from formal techniques and verification planning. Although some may see such definitions as too rigid to accommodate changes in requirements, making a change to a fundamental interface is intentionally costly, just as it is in software. It forces additional scrutiny of how architectural changes are implemented, in a way that tends to minimize the functional corners that later prove so costly to verify.
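To illustrate the generation step Alexanian describes, here is a small, hypothetical Python sketch, not Sonics' actual flow, that emits SystemVerilog-style assertion and coverage text from a protocol configuration. The option names, signal names, and property wording are invented for the example.

```python
def gen_checks(cfg):
    """Emit assertion/coverage strings matching a (hypothetical) protocol configuration."""
    checks = [
        # Every configuration gets a request/acknowledge latency assertion.
        "assert property (@(posedge clk) req |-> ##[1:{0}] ack);".format(cfg["max_ack_latency"])
    ]
    if cfg["supports_burst"]:
        # Only burst-capable configurations get the burst-length coverage point.
        checks.append(
            "cover property (@(posedge clk) req && burst_len == {0});".format(cfg["max_burst_len"])
        )
    return checks

config = {"max_ack_latency": 4, "supports_burst": True, "max_burst_len": 16}
for line in gen_checks(config):
    print(line)
```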
* What areas of verification need to be improved to reduce verification risk and ease the management burden?
Vigyan Singhal, President and CEO of Oski Technology, said that for the most part, current verification methodology relies on simulation and emulation for functional verification. As shown consistently in the 2007, 2010, and 2012 Wilson Research Group surveys sponsored by Mentor Graphics, two thirds of projects are behind schedule, and functional bugs are still the main culprit for chip respins. This shows that the current methodology carries significant verification risk.
Verification teams today spend most of their simulation time at the subsystem (63.9%) and full-chip (36.1%) levels, and the largest share of that time goes to debugging (36%). This is not surprising, as debugging at the subsystem and chip level, with traces thousands of cycles long, can take a long time.
The solution to the challenge is to improve block-level design quality so as to reduce the verification and management burden at the subsystem and chip level. Formal property verification is a powerful technique for block-level verification. It is exhaustive and can catch all corner-case bugs. While formal verification adds another step in the verification flow with additional management tasks to track its progress, the time and effort spent will lead to reduced time and effort at the subsystem and chip level, and improve overall design quality. With short time-to-market windows, design teams need to guarantee first-silicon success. We believe increased formal usage in the verification flow will reduce verification risks and ease management burden.
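The exhaustiveness Singhal points to is what distinguishes formal from simulation: every reachable state is examined, so corner cases cannot slip through unsampled. The toy Python sketch below illustrates the principle with explicit-state reachability on an invented credit-counter model; real formal property tools work symbolically on RTL, so this is only a conceptual analogy.

```python
from collections import deque

DEPTH = 4  # hypothetical credit capacity

def next_states(credits):
    """All successor states under any combination of grant/return events (no guards: buggy on purpose)."""
    for grant in (0, 1):
        for ret in (0, 1):
            yield credits + grant - ret

def find_violation(start=0):
    """Exhaustively explore reachable states; return one violating 0 <= credits <= DEPTH, if any."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if not (0 <= state <= DEPTH):
            return state
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

bad = find_violation()
if bad is None:
    print("property holds on all reachable states")
else:
    print(f"counterexample: reachable state credits={bad} violates the property")
```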
John Brennan, who had opened the discussion, closed it by noting that functional verification has no single silver bullet: it takes multiple engineers, operating across multiple heterogeneous engines, with multiple analytics. This multi-specialist verification is here now, and the VPM tools that support it are needed now.