Published on May 25th, 2011
Over the last forty years, EDA advancements and breakthroughs have made it possible for semiconductor companies to keep pace with Moore's Law. Although most design steps have been automated, one significant task remains primitive: functional debugging of RTL code. Several recent independent surveys show that debug consumes 30-35% of the entire design process. In other words, engineers spend roughly one third of their time understanding why failures occur, finding their root causes and fixing the problems. This calls for a shift in how we view verification, where the bottleneck has traditionally been thought to be discovering bugs, not fixing them.
Unless this task receives specific attention, design debugging will continue to add tremendous cost and risk for an electronics design industry that faces ever-shrinking time-to-market deadlines. The reasons are numerous.
Functional failure debugging manifests itself at every level of the verification cycle. At a high level, verification engineers must bin the error messages produced by overnight regression tests in order to route each failure to the appropriate engineer for further investigation.
This process is commonly known as failure triage. Today, triage is performed in an ad hoc manner based only on the messages available in simulation log files. Design teams waste valuable resources when problems are handed to the wrong engineers, when failures are incorrectly dismissed as duplicates, or when duplicate bugs are treated as distinct ones.
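The binning step described above can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration, not any vendor's implementation: it normalizes failure messages into signatures (masking run-specific numbers such as timestamps and seeds), groups likely duplicates into bins, and routes each bin to an owner. The log format, block names and engineer names are all invented for the example.

```python
import re
from collections import defaultdict

# Hypothetical mapping from design block to owning engineer (illustrative only).
OWNERS = {"alu": "alice", "fifo": "bob", "unknown": "triage-queue"}

def signature(log_line):
    """Normalize a failure message into a bin signature by masking
    run-specific details such as simulation times and random seeds."""
    return re.sub(r"\b\d+\b", "<N>", log_line).strip()

def triage(failures):
    """Group failure messages into bins of likely duplicates and
    route each bin to an owner based on the block named in it."""
    bins = defaultdict(list)
    for line in failures:
        bins[signature(line)].append(line)
    routed = {}
    for sig, lines in bins.items():
        block = next((b for b in OWNERS if b in sig), "unknown")
        routed[sig] = (OWNERS[block], len(lines))
    return routed

failures = [
    "ERROR @ 1200 ns: fifo overflow, seed 42",
    "ERROR @ 3400 ns: fifo overflow, seed 7",
    "ERROR @ 900 ns: alu result mismatch, seed 42",
]
for sig, (owner, count) in triage(failures).items():
    print(f"{owner}: {count} failure(s) -> {sig}")
```

Even this toy version shows why ad hoc triage goes wrong: if the signature is too coarse, distinct bugs collapse into one bin; if it is too fine, duplicates scatter across engineers.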
At the lower level, verification and design engineers must manually explore the design source files, navigate through waveforms, and iteratively find drivers and back-trace through blocks until the bug source is found. This can add weeks or months of uncertainty and cost to a project.
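The "find the driver, then repeat" loop described above amounts to a backward traversal of the design's fanin graph. As a rough sketch, assuming a toy netlist represented as a dictionary (the design and signal names are invented for illustration), the set of candidate bug locations is the fanin cone of the failing signal:

```python
from collections import deque

# Toy fanin graph: each signal maps to the signals that drive it.
# Names are illustrative, not taken from any real design.
FANIN = {
    "out_bad": ["mux_y"],
    "mux_y":   ["sel", "add_s"],
    "add_s":   ["a", "b"],
    "sel":     ["ctrl"],
}

def back_trace(failing_signal):
    """Breadth-first backward traversal from a failing signal,
    collecting every signal in its fanin cone -- the loop an
    engineer performs by hand in a waveform viewer."""
    seen, frontier = set(), deque([failing_signal])
    while frontier:
        sig = frontier.popleft()
        for drv in FANIN.get(sig, []):
            if drv not in seen:
                seen.add(drv)
                frontier.append(drv)
    return seen

print(sorted(back_trace("out_bad")))  # every signal here is a suspect
```

On a toy graph the cone is six signals; on a real design it can span thousands, which is why doing this traversal by hand consumes weeks.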
Significant verification advances over the past decade have targeted the discovery of bugs. For example, constrained-random stimulus generation and intelligent testbenches have become more effective through methodologies such as the Universal Verification Methodology (UVM) and the use of SystemVerilog. A wide range of linting, clock domain crossing (CDC), property checking and other advanced verification tools has also improved the efficiency of bug discovery.
And yet, once the existence of a bug is confirmed, the verification engineer has little automation at his or her disposal to help localize the root cause of the failure. Engineers still use the same manual waveform-viewing techniques from decades ago to tackle problems that have grown orders of magnitude larger in complexity.
A new kind of tool changes this practice by automating circuit debugging and error localization. Once verification fails, the tool analyzes the design and returns the root cause of the errors with no intervention by the engineer. Its diagnostic capabilities can be applied to failure triage, where it bins regression failures, identifies the general problem area and points to the engineer best suited to fix each bug. During low-level root-cause analysis, it helps engineers find the exact line of erroneous code and determine the proper fix.
A prime example of this new class of tool is OnPoint debug automation from Vennsa Technologies. It can improve productivity each time debugging is required, save months of manual effort and speed the path to design closure. There will be no more need to wonder, "Why did my design fail verification, dude?"
Dr. Andreas Veneris is president and CEO of Vennsa Technologies. As a professor at the University of Toronto, he has published more than 70 papers on debugging. He earned his Ph.D. at the University of Illinois.