Published on June 18th, 2009
Formal verification technology can be a valuable addition to an existing IC verification flow, and can help reduce the risks associated with increasing design complexity. With selective deployment, verification teams can discover where the return on investment (ROI) for formal technology is the greatest. Hewlett-Packard (HP) engineers recently undertook such a project as part of the company's ProCurve ProVision ASIC development effort.
This article shows how we applied formal verification to blocks that were identified as possible candidates for formal, including an arbiter, transaction conversion block, credit management system, transaction funnel, and cyclical redundancy check (CRC) system. It explains the properties that were used, the bugs that were found, and the challenges that were overcome, including the verification of large state spaces. It also describes lessons learned, includes suggestions for formal deployment, and notes future directions for our use of formal technology.
Our ASIC verification effort used JasperGold, a property checking tool from Jasper Design Automation. The deployment was a collaborative effort to realize "targeted ROI" – that is, to deploy formal verification where it was projected to have the greatest value as a complement to simulation. As part of the verification effort, formal verification helped verify critical functionality and showed high ROI for tasks such as packet integrity analysis and flow control. Jasper also suggested methodologies for proof convergence, as the tool itself contains a number of proof-enabling features that reduce the need for manual effort.
Starting with little experience in formal verification, the team found real bugs, including some that probably would not have been found with simulation. We found that formal verification can improve RTL quality, reduce lengthy random simulation efforts, and execute proofs of high-level end-to-end behavior. This deployment also showed how the value proposition of formal technology varies from one block to another.
We ran formal proofs on a next-generation design; this ASIC has built-in wire speed intelligence, offers an engineering balance between feature capabilities, performance and price, and is scalable across a number of products. It contains a mix of 10G and 1G ports and approaches 1 billion transistors in size.
Using formal verification allowed engineers to evaluate how the technology complements simulation. One goal was to prove properties on blocks that were difficult to simulate, while another was to discover bugs missed by simulation. We also wanted to learn how to enable proof convergence by reducing large state spaces and reducing sequential depth.
The effort was aimed at blocks that were identified as good candidates for formal verification. This included blocks composed of new RTL code, blocks with limited controllability in simulation, and blocks exhibiting a high simulation bug rate. The deployment included the following testcases:
The arbiter block was buried deep within the design, and was difficult to verify with simulation. It would have been difficult for a verification engineer to build a simulation testbench to exercise all the requestors in a way that would have flushed out all possible bugs. A formal proof, however, can quickly find arbitration problems that random simulation might miss.
The team wrote a safety property to ensure that any requestor should receive its grant within n grants. This property resulted in the discovery of a bug that would not have been caught with simulation – a bug in which the arbiter skipped a requestor. The property was written using SystemVerilog Assertions (SVA) as follows:
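A minimal sketch of such a safety property follows; the signal and parameter names (`req`, `gnt`, `gnt_any`, `N_REQ`, requestor index `i`) are illustrative assumptions, not the original HP code. An auxiliary counter tracks how many grants are issued while requestor i waits, and the assertion bounds that count:

```systemverilog
// Illustrative sketch, not the original listing.
// Count grants issued to others while requestor i is waiting.
logic [$clog2(N_REQ+1)-1:0] wait_gnts;
always_ff @(posedge clk or negedge rst_n)
  if (!rst_n)                 wait_gnts <= '0;
  else if (gnt[i])            wait_gnts <= '0;
  else if (req[i] && gnt_any) wait_gnts <= wait_gnts + 1'b1;

// Safety: a waiting requestor never sees more than N_REQ grants pass it by.
a_bounded_wait: assert property (@(posedge clk) disable iff (!rst_n)
  wait_gnts <= N_REQ);
```

A skipped requestor drives `wait_gnts` past the bound, so the skip bug surfaces as a short counterexample trace.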
Later, using a liveness property, formal verification produced a counterexample showing how a requestor could be skipped indefinitely, resulting in starvation. The team fixed the bug and reran the proof to verify the revised implementation. The value of formal verification was extremely high in Testcase 1, given that the bug probably would not have been caught by simulation, and that detection and debug in silicon might have taken at least an engineering month of effort.
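The liveness check can be sketched in one assertion (again with illustrative names); the strong `s_eventually` operator means an infinite skip is reported as a counterexample rather than silently passing:

```systemverilog
// Liveness sketch: a persistent request must eventually be granted.
a_no_starvation: assert property (@(posedge clk) disable iff (!rst_n)
  req[i] |-> s_eventually gnt[i]);
```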
The transaction conversion block was a major enhancement to a legacy routing chip that enabled it to interface to a next generation backplane. The transaction conversion block was selected for formal verification because it was implemented with entirely new RTL code.
To ease the proof burden, the arbiter in this block was verified separately as Testcase 1.
In addition, the transaction conversion block contained a FIFO which was 64 entries deep, and the team remodeled it as a 4-entry deep FIFO. Since this FIFO propagated control information, we could not use the Jasper FIFO proof accelerator.
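When a FIFO's depth is parameterized, this kind of reduction can be as simple as overriding the parameter in the formal testbench; the instance and port names below are illustrative assumptions:

```systemverilog
// Same RTL, reduced depth for formal only: full/empty and ordering
// behavior is preserved while the state space shrinks dramatically.
fifo #(.WIDTH(64), .DEPTH(4)) u_fifo (
  .clk   (clk),
  .rst_n (rst_n),
  .push  (push),
  .pop   (pop),
  .din   (din),
  .dout  (dout),
  .full  (full),
  .empty (empty)
);
```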
We chose to prove a property asserting that transactions traversing from port n to port m have entered legally through port n, and that no transactions were re-ordered, corrupted, or lost. To prove the property, engineers developed a formal testbench to determine when a legal transaction entered the block. When such a condition occurred, they pushed the translated version of the transaction that was anticipated to egress the block onto a Jasper Scoreboard Proof Accelerator (PA).
The PA contains built-in properties to verify that the transactions entering the block emerged in the correct order, without corruption and without any drops. The team achieved proof convergence by eliminating all but two bits in the transaction data field which were overloaded with special control information. This modeling helped determine when data should be pushed onto and popped off the scoreboard.
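The Jasper Scoreboard PA itself is proprietary, but the underlying idea can be sketched with the generic symbolic-transaction idiom; every name here is an illustrative assumption, not the PA's interface. The testbench tracks one arbitrary but fixed data value and asserts it can egress only if it first ingressed:

```systemverilog
// Generic data-integrity sketch, not the Jasper PA itself.
logic [63:0] sym_data;                        // free symbolic value
assume property (@(posedge clk) $stable(sym_data));

logic in_flight;
always_ff @(posedge clk or negedge rst_n)
  if (!rst_n)                               in_flight <= 1'b0;
  else if (in_vld  && in_data  == sym_data) in_flight <= 1'b1;
  else if (out_vld && out_data == sym_data) in_flight <= 1'b0;

// The symbol may egress only while in flight: no spurious, duplicated,
// or corrupted transactions can appear at the output.
a_integrity: assert property (@(posedge clk) disable iff (!rst_n)
  (out_vld && out_data == sym_data) |-> in_flight);
```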
Using formal verification to prove these built-in properties, a verification engineer identified six design under test (DUT) issues, including bugs and specification ambiguities as illustrated in Figure 1.
Figure 1. Formal verification uncovered several issues in the transaction conversion block.
Most of these issues could potentially have been found with simulation, if the team had spent the time to find them. However, it would have taken a great deal of effort to verify all illegal scenarios in this block with simulation. Since this block was in the critical path of the project schedule, anything that reduced the verification effort required for the block was a good investment. Thanks to formal verification, the overall verification effort was reduced and better quality RTL was delivered for simulation.
The DUT in Testcase 3 was a complex credit management system that spanned multiple chips. The motivation for applying formal technology was the relatively high pre-silicon bug rate in the credit management logic.
We developed two sub-properties. The intent was to verify that a credit check transaction, which queries the available credit in the system, returns the correct response. For example, the agent queried should always return the actual number of credits that it possesses.
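Sub-property 1 might be sketched roughly as follows; the names and the fixed response latency `RESP_LAT` are assumptions, since the article does not show the actual protocol:

```systemverilog
// Illustrative sketch: the queried agent reports its true credit count.
a_credit_query: assert property (@(posedge clk) disable iff (!rst_n)
  query_vld |-> ##RESP_LAT
    (resp_vld && resp_credits == $past(credit_cnt, RESP_LAT)));
```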
In order to understand how the credit management system worked, Jasper engineers used Visualize, an interactive waveform tool in JasperGold, to explore credit migration and other design functions. Visualize produced a trace showing a returned credit. Verification engineers then interactively added three more constraints. With this small set of constraints, Visualize produced a waveform that showed a credit being consumed, sent around the system, and eventually returned (Figure 2).
Figure 2. The Visualize function provides an interactive waveform environment that helps verification engineers understand designs.
To ease modeling of the credit management logic, the team developed a slimmed-down "replica" of the multi-chip system. The intent was to model only the logic relevant to the credit management functionality. As it turned out, the replica excluded the buggy portion of the design. This modeling error was discovered when sub-property 1 was unable to rediscover a known failure from simulation. Furthermore, a cover property representing the bug signature was proven unreachable.
Had the replica contained the bug, the cover property describing the bug signature would have been proven reachable; it thus became clear that the replica did not contain the buggy logic. The recommended methodology, by contrast, is to use safe abstractions when extracting a portion of a larger design. Had that methodology been used, the buggy logic would not have been excluded and the bug would have been found.
State space tunneling helped achieve a proof of sub-property 2. This feature of the formal technology helps a verification engineer identify unreachable states being considered by the tool and write assertions to rule them out.
However, the value of this proof was questioned when it was clear that the replica did not contain the original bug. In particular, the completeness of the replica was questioned and consequently the value of any proof on an incomplete replica was deemed to be low. Nevertheless, running formal verification on the credit management system provided an extremely valuable learning experience about formal technology.
Further, had a bug been found, the proof effort would have been very important. Losing or gaining credits in a system can be catastrophic, and proving functionality whose failure would be catastrophic to the design is an excellent use of formal verification.
One of the most valuable applications of formal verification technology involved a transaction funnel with three transaction streams converging into a single interface (Figure 3). With such a block, it's important that protocols are handled correctly, so data doesn't get re-ordered, corrupted or lost. But the transaction funnel would have been extremely difficult to thoroughly verify with simulation because of low controllability – it's hard for a pseudo-random simulation to set up all possible transactions.
Figure 3. The transaction funnel merges three transaction streams into a single stream.
The properties that were verified stated that a transaction is never driven into the funnel when the funnel is not ready. The challenges were the huge state space – more than 12,000 flip-flops in the property's cone of influence (COI) – as well as the huge sequential depth due to many wide counters.
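Stated as SVA with illustrative names (one stream shown), the property is a simple handshake check:

```systemverilog
// A transaction may be driven into the funnel only when it is ready.
a_no_push_when_busy: assert property (@(posedge clk) disable iff (!rst_n)
  txn_vld |-> funnel_ready);
```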
The EDA vendor suggested several steps we could take to enable proof convergence. First, a verification engineer used the tool command "get_design_info" to list flops associated with the property's fan-in cone, reviewed the flop list with designers to identify irrelevant flops, and finally added a cut point for each irrelevant flop. This part of the proof setup took about two days. However, the total effort on the proof spanned a couple of weeks, since the proof was not converging due to a flaw in how the initial value abstraction was set up.
Further, because of the huge sequential depth due to many wide counters, the team used the tool to abstract the initial value of the counters. This modeling strategy effectively let the design emerge from reset into any legal "mature" state. This approach is extremely effective in skipping over the natural reset sequence and a long stream of activity to get into an "interesting" state that exposes a potential bug.
The result: a proof was achieved after around 40 minutes of run time, with a proof depth of 73 clock cycles. The ROI impact was high. Formal technology avoided a sizeable random simulation effort that would have had very low controllability and would have concluded with only limited confidence that a catastrophic failure could not happen. Moreover, the formal verification effort produced 27 assumptions coded in SystemVerilog Assertions (SVA) that were added to the RTL simulation environment.
This block was in the critical path for the chip, and it was formally verified with about two weeks of engineering effort. A comparable simulation-based effort might have taken three months for a junior engineer or six weeks for a senior engineer. The use of formal technology with the funnel driver may have shaved a week off the whole SoC development program, avoiding the lost revenue that a week's delay in time to market would have cost.
Testcase 5 included a CRC generator, noisy channel, and CRC checker. The datapath was 256 bits wide and the CRC signature was 16 bits wide, compiled over 8 clock cycles (Figure 4).
Figure 4. CRC generator includes a noisy channel in which bits can be flipped.
CRCs are not generally regarded as good candidates for formal proofs, but a decision was made to try to formally verify this CRC block anyway. The property stipulated that any single bit corruption on the noisy channel would result in a CRC error. The SVA property was written as follows:
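A sketch of the property, with illustrative names (the original listing is not reproduced here): if the channel injected noise anywhere in the 8-cycle packet, the checker must flag an error when the signature is compared.

```systemverilog
// Illustrative sketch: any injected corruption must be detected within
// the remainder of the 8-cycle packet.
a_crc_detects: assert property (@(posedge clk) disable iff (!rst_n)
  |noise |-> ##[1:8] crc_error);
```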
The interesting part of the proof setup was modeling the noisy channel:
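In essence (signal names are illustrative), the channel XORs a free `noise` vector into the data seen by the checker:

```systemverilog
// Noisy channel model: the checker sees the generator's data with a
// free 256-bit noise vector XORed in.
assign chk_data = gen_data ^ noise;
```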
where ^ denotes a bitwise XOR, and noise was constrained with the assumption:
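A constraint to that effect can be sketched as follows (illustrative names; a sticky flag limits corruption to a single bit on a single cycle):

```systemverilog
// At most one bit of noise on any given cycle...
assume property (@(posedge clk) $onehot0(noise));

// ...and once any noise has been injected, the channel stays clean.
logic noise_seen;
always_ff @(posedge clk or negedge rst_n)
  if (!rst_n)      noise_seen <= 1'b0;
  else if (|noise) noise_seen <= 1'b1;

assume property (@(posedge clk) noise_seen |-> noise == '0);
```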
The strategy was to choose any one of m clock cycles and corrupt any one of n bits on the chosen clock cycle. The proof thus considered the number of unique packets times the number of possible bit corruptions: with a 256-bit datapath compiled over 8 clock cycles, that is 2^2048 possible packets, each subject to 8 × 256 = 2,048 possible single-bit corruptions.
The proof was achieved, and the exercise won support from one of our senior architects to use formal technology for architectural studies.
Results from the formal technology deployment were impressive. First, the design team found real DUT issues. In Testcase 1, for example, a round-robin arbiter skipped a requestor. In Testcase 2, a block contained input flops without a reset value. Also in Testcase 2, engineers found a specification ambiguity – a signal had a periodic nature that was not clearly spelled out in the spec. And in this same test case, an aborted transaction was improperly accepted.
We executed proofs of high-level, end-to-end behavior, verifying that transactions were not re-ordered, corrupted, or lost; that a transaction is never driven into the funnel when the funnel is not ready; and that any single-bit corruption on the noisy channel results in a CRC error.
The team also successfully learned and used proof convergence enablers such as proof accelerators, initial value abstraction, state space tunneling, parallel proof engines, and the ability to dump the list of flip-flops in a property's fan-in cone. These enablers made it possible to safely abstract, and thus simplify, the cone of influence.
There were also some important lessons learned. One key lesson is that verification engineers need to have a clear agreement with their management on objectives for formal verification. Engineers can apply formal technology several ways – to find known bugs in a buggy design, to hunt bugs in a presumably "bug free" design, and to achieve a complete proof of a property. It's thus important to sit down with your management and agree upon what you're trying to do and how long it will take.
In addition, there's a myth that you do not need a formal testbench to run formal verification. This may be true for very small designs, but a sizable block that requires clean transactions will need a testbench to drive legal transactions. In practice, a formal testbench is not as complicated as a fully fledged simulation testbench. Plan to implement a formal testbench for this purpose.
Also, be careful when remodeling the original design for verification. As noted with Testcase 3, unintentionally excluding the buggy portion of the design made it impossible to find a known bug. The methodology recommended by our formal supplier uses safe abstractions. Had that been done, the buggy logic would not have been excluded.
Another lesson learned was a counterexample debug shortcut. Instead of the standard, rigorous approach of back-tracing a failure starting at the outputs, first check the primary input stimulus that the proof engine devised to determine whether illegal input stimulus was applied. Then sharpen your assumptions or modeling logic to rule out the illegal stimulus.
As a result of this initial deployment, we are seeking to expand our use of formal verification. One direction is to use formal techniques to study blocks for new chip architectures, and identify difficult-to-verify functionality. The group would also like to develop a team of three-to-four formal verification engineers, and deploy formal tools to help design engineers explore functionality.
Elsewhere in the company, formal verification is used to check end-to-end properties in high-end servers, for bug hunting in low-end servers, and for RTL debug and high-level properties in storage and tape drives.
The team proved high-level, end-to-end properties, improved RTL quality, and won support for the use of formal verification at the architectural level. Further, the engineers successfully used advanced features to reduce large state spaces and improve proof convergence. As a result, formal verification will be an important part of our verification methodology in the future.
Jim Kasak has been a Senior ASIC Verification Engineer with the HP ProCurve Networking Business group in Roseville, CA for two years. He is currently developing verification methodologies for next generation ASICs with a focus on formal verification tools. During the previous 18 years, Jim worked with Intel Corporation in Folsom, CA in design, verification, CAD development, and management roles.
Ross Weber is a Senior Applications Engineer at Jasper Design Automation. Prior to joining Jasper in 2008, he was an ASIC Design and Verification Engineer at Unisys for 10 years.
Holly Stump, Vice President of Marketing at Jasper Design Automation, has more than 20 years experience in EDA and high technology. Since her early days as a chip designer at HP, she has seen verification evolve from early gate-level simulation to today's advanced formal technologies. Holly is a strong advocate for leveraging formal technology across a wide spectrum of applications, from architectural analysis through silicon debug.