Published on October 17th, 2005

A Wake-Up Call for FPGA Verification

"I don't need to worry about verification; I can just re-program to fix any bugs."

For decades, this has been the mantra for most designers of FPGAs and other programmable devices. To a considerable extent, this statement has been true. Certainly these designers have a big advantage in not having to go through the full mask-generation and fabrication process for standard cell or custom design, or even the abbreviated process for gate arrays, in order to fix bugs. Indeed, the ability to fix bugs almost instantly is one of the most seductive features of FPGAs.

I've been marketing verification IP and EDA verification tools for ten years now, and the vast majority of my customers have been designing ASICs or custom chips. Sure, we'd try to convince the FPGA users, and they were usually willing to listen to a presentation, but nine times out of ten they would just thank us and repeat the mantra.

However, in the last few years I have seen a gradual but noteworthy shift in thinking. Nothing has changed the underlying value of FPGAs: it's still the case that bugs can be fixed quickly and inexpensively. The problem, of course, is that one needs to detect and diagnose bugs in order to fix them. Ay, there's the rub.

Detecting the bugs is the lesser of the two problems, since running FPGAs in prototype systems tends to stress the designs well. There's always a risk that the lab tests won't hit every corner case that might have been exercised by an effective verification environment. A good example is an error condition that might not occur in a particular prototype setup but that would be easy to inject in a modern simulation testbench.
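To make that concrete, here is a minimal sketch of how a simulation testbench might inject such an error on demand. It is purely illustrative: the packet fields, the toy CRC, and the interface name pkt_if are assumptions, not drawn from any particular design.

    // Illustrative only: deliberately corrupt a packet's CRC so the
    // DUT's error-handling path is exercised on every simulation run.
    // The fields and the interface name (pkt_if) are assumptions.
    class packet;
      rand bit [31:0] data;
      bit      [7:0]  crc;
      function void compute_crc();
        crc = data[7:0] ^ data[15:8] ^ data[23:16] ^ data[31:24]; // toy CRC
      endfunction
    endclass

    task automatic send_packet(virtual pkt_if vif, bit inject_error);
      packet p = new();
      void'(p.randomize());
      p.compute_crc();
      if (inject_error)
        p.crc = ~p.crc;        // corrupt the CRC on purpose
      vif.data <= p.data;
      vif.crc  <= p.crc;
      @(posedge vif.clk);
    endtask

In the lab, a corrupted packet might never happen to arrive; in simulation, it can be forced on every run and the design's response checked automatically.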

Diagnosing the detected bugs can be a really hard problem. Although FPGAs may inherently offer more visibility into internal signals than fixed-silicon ASICs, the process of going from a detected system-level error to isolating the actual bug in the design RTL source code (or, perish the thought, a gate-level schematic) is challenging.

I recall a visit I made a few years ago to a prospective customer designing systems in the networking space. They had a very simple verification environment—a few PCs running an inexpensive simulator with only some basic, hand-crafted "sanity tests." They relied on programming their FPGAs early in the project and doing the bulk of their verification in their prototype systems, an approach that had always served them well in the past.

They were trying to get their next-generation system working in the lab. The engineers were finding plenty of bugs, but they were going crazy trying to diagnose them. Whenever they found a problem, they would take their best guess as to where the bug might be and re-program the FPGAs to bring out some internal signals to watch on their logic analyzers. Of course, they didn't always guess right and so the process was highly iterative.

They had been debugging in the lab for more than six months and were nowhere near having a working system. They were at severe risk of missing their market window and ceding the next generation to their competitors, so they were desperately talking to EDA vendors for ideas on how to get out of their jam. I offered some suggestions, but they decided that they could not afford the investment in workstations and more advanced tools, so they soldiered on. I never learned what ultimately happened to their project, but the company seems to have disappeared.

This story is surely not unique. As FPGA-based systems grow in complexity, the size of the devices grows and the ability to diagnose bugs found in the lab diminishes. In response, FPGA design teams are increasingly adopting verification environments that look more like those of ASIC and custom chip projects than those of their past.

Some FPGA designers are using code coverage and functional coverage to get a handle on what is actually happening in their designs. Assertions in their code, sometimes synthesized into the FPGAs as well, are improving bug diagnosis by helping to pinpoint unexpected behavior. A few teams have even adopted a full constrained-random, coverage-driven verification approach that rivals the sophistication of modern ASIC verification. The adoption of all these techniques is being accelerated by the availability of SystemVerilog, the standard language that extends Verilog with assertion and testbench constructs, unifying design and verification in a single language.
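For a flavor of what these techniques look like in code, here is a minimal SystemVerilog sketch; the signal names (clk, req, gnt, burst_len) and the particular constraint are illustrative assumptions. It shows the three ingredients side by side: an assertion that pinpoints a protocol violation at the cycle it occurs, a covergroup that records which cases have actually been exercised, and a constrained-random transaction class that generates legal but varied stimulus.

    // Illustrative sketch only; signal and class names are assumptions.
    // These declarations would live inside a module or interface.

    // Assertion: every request must be granted within 1 to 4 cycles.
    property req_gets_gnt;
      @(posedge clk) req |-> ##[1:4] gnt;
    endproperty
    assert property (req_gets_gnt)
      else $error("req not granted within 4 cycles");

    // Functional coverage: which burst lengths were actually seen?
    covergroup burst_cg @(posedge clk);
      coverpoint burst_len {
        bins single      = {1};
        bins short_burst = {[2:4]};
        bins long_burst  = {[5:16]};
      }
    endgroup

    // Constrained-random stimulus: legal but varied transactions.
    class bus_txn;
      rand bit [4:0]  burst_len;
      rand bit [31:0] addr;
      constraint legal { burst_len inside {[1:16]}; addr[1:0] == 2'b00; }
    endclass

Run enough constrained-random transactions and the stimulus wanders into corner cases no hand-crafted sanity test would reach, while the coverage report shows exactly which bins remain unhit.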

The more I talk with customers, the more I'm convinced that FPGA designers can no longer risk their schedules by sticking their heads in the sand when it comes to verification. Detecting, diagnosing, and fixing bugs in million-gate chips is a challenge regardless of the silicon technology. Fortunately, FPGA designers can draw on the considerable experience of their peers, as documented in the recently published Verification Methodology Manual for SystemVerilog. Facility with advanced verification techniques is a requirement for today's FPGA design teams.

Comments? Share your thoughts by contacting the Editorial Director at jblyler@extensionmedia.com.

