How do we cut the verification problem down to size – or can we?

Cutting the Verification Problem Down to Size – a Two-step Process

Yunshan Zhu, PhD
VP of New Technologies
Atrenta Inc.
San Jose, CA

We all know the verification problem is exploding. Sophisticated power management techniques are making it worse. Recognizing that a significant portion of current system-on-chip (SoC) design effort goes into IP selection and IP integration, I would like to offer a two-step process to address the verification challenge. The proposed methodology requires a collaborative approach between the IP and SoC teams.

A state-of-the-art SoC design typically has 50-60 IPs. At the next advanced technology node (14nm), an SoC is forecast to contain, on average, more than 200 IPs. In theory, SoC design should be simple, given the wide variety of third-party and internal IPs available. In practice, verification has become the dominant problem in SoC design – and with the increasing number of IPs on an SoC, it is getting worse. Why?

The potential interactions grow exponentially with the number of IPs on an SoC. The state space of an SoC is so large that trying to catch a corner-case functional bug with system-level testing is like trying to find a needle in a haystack. Unfortunately, SoC verification is the last step before chip-level RTL freeze and therefore has the most direct impact on the design schedule. To make SoC verification tractable, a two-step approach should be adopted.

First, SoC engineers must trust the quality of IPs and must have a mechanism to verify that quality. The burden of proof is on the IP engineers. In today’s environment, such proof is often based on trial and error or on an IP vendor’s reputation. Most IPs are delivered as RTL only; no tests or testbenches are included as part of the IP deliverable. With billions of dollars of potential revenue on the line, SoC teams will need the ability to independently verify the quality of IPs.

As testbenches are notoriously hard to migrate, it is unlikely that IP vendors can ship them to SoC teams. Instead, IP vendors could ship functional coverage points. These functional coverage points capture what has or has not been hit in the IP test environment. If a functional coverage point hit by an SoC test is also hit in the IP tests, it is a confirmation of IP test quality. If a functional coverage point hit by an SoC test is missing in the IP tests, it should be a cause for alarm, and the IP tests should be augmented to cover that case.
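
To make the cross-check concrete, here is a minimal sketch in Python of how an SoC team might compare its own coverage hits against the points exercised in the IP vendor’s test environment. The file names, coverage-point names and plain-text report format are hypothetical; a real flow would read a coverage database.

    # Minimal sketch: cross-check SoC coverage hits against the coverage points
    # the IP vendor reports as exercised. The point names and file format here
    # are hypothetical; a real flow would read a coverage database.

    def load_hits(path):
        """Read one coverage-point name per line; return the set of hit points."""
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def cross_check(ip_hits, soc_hits):
        """Classify SoC-hit points by whether the IP tests also hit them."""
        confirmed = soc_hits & ip_hits   # confirmation of IP test quality
        alarming  = soc_hits - ip_hits   # IP tests should be augmented
        return confirmed, alarming

    if __name__ == "__main__":
        ip_hits  = load_hits("ip_coverage_hits.txt")    # shipped with the IP
        soc_hits = load_hits("soc_coverage_hits.txt")   # from SoC-level tests
        confirmed, alarming = cross_check(ip_hits, soc_hits)
        print(f"{len(confirmed)} points confirmed by IP tests")
        for point in sorted(alarming):
            print(f"ALARM: '{point}' hit at SoC level but never hit in IP tests")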

Second, SoC engineers must also verify that the integration of the IPs is done properly. Since SoC engineers often have little visibility into the implementation of individual IPs, IP engineers can help the integration process by shipping IPs with functional coverage points to guide SoC test development and assertions to guide SoC debug. Note that these functional coverage points serve a different purpose from the ones in the previous paragraph: their goal is to help the SoC engineer identify key integration tests. The assertions should capture design assumptions made at the IP level; if an assertion is violated, it indicates that the IP is not properly integrated. An assertion can check something as simple as a port being properly connected, or something as complex as a FIFO never overflowing under channel back pressure.
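
As an illustration only, the toy Python model below (not RTL, and all names are invented) captures the flavor of such an integration assertion: a push into a full FIFO while the downstream channel applies back pressure flags an integration error.

    # Toy model (not RTL) of an integration assertion: pushing into a full FIFO
    # while the downstream channel applies back pressure indicates the IP was
    # not integrated correctly. All names here are illustrative.

    class FifoOverflowAssertion:
        def __init__(self, depth):
            self.depth = depth
            self.occupancy = 0

        def on_cycle(self, push, pop, back_pressure):
            """Check one cycle of FIFO activity against the IP-level assumption."""
            if pop and self.occupancy > 0 and not back_pressure:
                self.occupancy -= 1
            if push:
                assert self.occupancy < self.depth, (
                    "FIFO overflow: producer pushed while the FIFO was full "
                    "under channel back pressure")
                self.occupancy += 1

    # Example: a producer that ignores back pressure violates the assertion.
    check = FifoOverflowAssertion(depth=2)
    for cycle in range(4):
        check.on_cycle(push=True, pop=True, back_pressure=True)  # raises on the third cycle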

Some design teams have started shipping IPs with assertions and coverage points, but the practice is still in the nice-to-have category and lacks the consistency of an IP signoff criterion. In contrast, today’s IPs are often documented with area or power information. We should note that power consumption, for example, is incremental: if one IP uses 100% more power, the SoC uses roughly 100/N% more power, where N is the number of IPs on the SoC (assuming the IPs draw comparable power). Functional failure, on the other hand, is not incremental. If an IP fails, it kills the entire chip.
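
A quick back-of-the-envelope check of that arithmetic, assuming for simplicity that all N IPs draw roughly equal power:

    # Back-of-the-envelope check (assumes all N IPs draw roughly equal power).
    N = 50                               # IPs on the SoC
    per_ip = 1.0                         # arbitrary power unit per IP
    baseline = N * per_ip                # total SoC power
    one_ip_doubled = baseline + per_ip   # one IP uses 100% more power
    increase_pct = 100 * (one_ip_doubled - baseline) / baseline
    print(increase_pct)                  # 2.0, i.e. 100/N percent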

For next-generation SoC designs, rigorous IP functional signoff will become a must-have. It will require some effort from IP teams to capture assertions and coverage points, but doing so will increase confidence in SoC correctness, which in turn will enable more aggressive IP designs. Two steps to tame the process – not easily achieved, but well worth the effort.

Cutting the Verification Problem Down to Size…

Cary Chin
Director of Technical Marketing
Synopsys Inc.

In modern electronics design, the “verification problem” is well known to be growing faster than our ability to simply run our simulations on faster machines. The big issue is that we must use current-generation machines to design, verify and debug next-generation designs, and the complexity of our designs is increasing beyond the compute capability of each generation of hardware. So one generation might require n machines, the next 2n machines, the one after that 4n machines, and so on. Given limited budgets, verification time must take up the slack, stretching our product development cycles to the point where a late-identified bug can completely wipe out the project schedule.

On top of all this, over the last decade we have been adding non-traditional functionality to our designs, driven by the world’s thirst for mobile computing devices, from notebooks to smartphones to tablets. To meet the need for reasonable battery life without extra product bulk, we have aggressively adopted low power design alongside our traditional design goals of speed and area. To build designs that consume less energy, one of our main strategies has been the reduction of operating voltage in non-timing-critical areas of the design. Since voltage has a squared effect on dynamic power, the potential power savings are huge – and the limiting case of reducing voltage to zero is even more compelling. With unused portions of a design shut down (unpowered), there is no power dissipation at all, dynamic (due to switching) or static (due to leakage). Eureka! We have solved the power problem!
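
A rough sketch of that scaling argument, using the familiar dynamic-power relation P ≈ alpha · C · V² · f; the numbers below are illustrative only.

    # Illustrative only: dynamic power scales with the square of supply voltage,
    # P_dyn ~ alpha * C * V**2 * f. Values are arbitrary, chosen to show the trend.
    def dynamic_power(alpha, cap, voltage, freq):
        return alpha * cap * voltage**2 * freq

    nominal = dynamic_power(alpha=0.1, cap=1e-9, voltage=1.0, freq=1e9)
    scaled  = dynamic_power(alpha=0.1, cap=1e-9, voltage=0.8, freq=1e9)
    print(f"savings from 1.0 V -> 0.8 V: {100 * (1 - scaled / nominal):.0f}%")  # ~36%
    print(dynamic_power(alpha=0.1, cap=1e-9, voltage=0.0, freq=1e9))            # 0.0: shut off
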
But, as usual, the panacea brings its share of hard work: reducing voltage and shutting down circuits also has a huge impact on our design and verification methodologies. From the design implementation standpoint, providing multiple voltage sources, and being able to shut them off selectively, has significantly raised the level of complexity of the logical and physical implementation and optimization processes. Similarly, on the verification front, the traditional assumption that “everything is powered on” is no longer true; this creates an entirely new dimension of verification complexity, one that is growing independently of the traditional “verification problem.”

The only way to deal with this complexity growth is to adopt new tools and methodologies that allow us to change the way we approach the verification problem. One essential change in functional verification is for the tools to understand the fundamental concepts of power – “on vs. off” and the analog idea of voltage. Only by completely rethinking our tools and flows with these in mind can we really address the power verification problem without completely exploding the computational complexity of verification. In recent years, we have adopted low power design flows that allow us to deal with power shutdown, multiple and varying voltages, and complex power states. But we are just beginning. Today, power state complexity is limited in its definition, implementation, and verification.
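
Purely as an illustrative sketch (a toy Python model, not any particular tool’s flow), the new concern is that a signal driven from a powered-down domain carries no valid value unless an isolation cell clamps it to a known safe state:

    # Toy model (not any actual tool flow): once a power domain is shut off, its
    # outputs are unknown ('X') unless an isolation cell clamps them to a safe value.

    class PowerDomain:
        def __init__(self, name):
            self.name = name
            self.powered = True
            self._value = 0

        def shut_off(self):
            self.powered = False

        def drive(self, value):
            if self.powered:
                self._value = value

        def output(self):
            return self._value if self.powered else "X"   # unknown when unpowered

    def isolate(signal, clamp=0):
        """Isolation cell: clamp an 'X' from an off domain to a known safe value."""
        return clamp if signal == "X" else signal

    dsp = PowerDomain("dsp")
    dsp.drive(1)
    dsp.shut_off()
    print(dsp.output())           # 'X' -- always-on logic would otherwise see garbage
    print(isolate(dsp.output()))  # 0   -- isolation keeps the SoC-level view sane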

As we move forward, the granularity of power control will no doubt become finer and finer. Every point (in space and in time) at which we can lower voltage or shut down power is an opportunity to decrease the energy consumption of the device, leading to better-performing products. With battery technology improving only marginally (and no major breakthrough in sight), the burden of delivering additional “wow” functionality in our mobile devices falls squarely on design and verification engineers, and on the tools they use to measure and minimize energy usage. Our sense of energy conservation and efficiency has permeated every level of our lives, from national parks to recycling, from solar panels to smartphones, and now to each transistor and electron inside.

Cary Chin, Director of Technical Marketing, Synopsys
Cary Chin is the Director of Technical Marketing at Synopsys Inc., and has 25 years of technical management experience in computer hardware and software design. Cary holds undergraduate and graduate degrees in Electrical Engineering from Stanford University, and has taught computer science and programming classes from second grade computer lab up to the collegiate level at Stanford. In his spare time he teaches violin and viola, serves as President of the Palo Alto Chamber Orchestra, and enjoys playing and coaching chamber music.

Yunshan Zhu, PhD, Vice President of New Technologies
Yunshan Zhu is Vice President of New Technologies at Atrenta Inc.  Before its acquisition by Atrenta, Dr. Zhu co-founded NextOp Software and led the company through the development, delivery and production implementation of its flagship assertion synthesis product for multiple semiconductor companies. Prior to NextOp, Dr. Zhu was a member of the Advanced Technology Group at Synopsys. Dr. Zhu also worked as a visiting scientist and a post-doc at Carnegie Mellon University where he co-invented the bounded model checking algorithm. Dr. Zhu did his undergraduate study at the University of Science and Technology of China and received his PhD in Computer Science from the University of North Carolina at Chapel Hill.

