The Visibility Challenge

By Jim Hogan

Gary Smith, the noted EDA industry analyst, wrote an interesting blog post recently in which he discussed the progress FPGA suppliers have made in adopting new processes, thus offering a more robust design platform.

In fact, new product families such as the Virtex-6 from Xilinx are taking market share from traditional ASIC design starts and becoming a more viable end-product alternative. There are many reasons for this. One of the most significant is the cost of delivering a leading edge ASIC. If FPGAs can meet the market requirements in terms of cost, performance and power, they are a great alternative to the ASIC without the silicon risk. Their capacity, performance and even lower costs have made more than a few companies think seriously about leveraging the benefits of programmability in a more widespread way and not just looking at FPGAs as prototyping platforms.

But Gary sounds a note of caution: “There is no free ride for the FPGA designer.” It is true that by upping the ante on the capabilities of their devices, the silicon suppliers have thrown down the gauntlet from a design standpoint. New methodologies and tools, true system-level approaches like those we have seen for hardwired gate arrays, will be required. Gone are the days when the free FPGA tools from Xilinx and Altera could do the job. We are now talking about massive, highly integrated SoCs with complex embedded software, and this level of design needs a far more sophisticated strategy. The old “blow-and-go” days of FPGA design are in the rear view mirror.

As with the ASIC days in years gone by, it will take some time for the industry to respond to all the needs of complex FPGAs. But remember what the advent of HDLs and logic synthesis did for the gate array designer? It not only removed a huge burden from the silicon suppliers, who gradually exited the tool business, but also gave chip companies a leap forward in their ability to do application-specific IC design. This, for example, was the foundation on which Cisco Systems built its routers.

In the FPGA world, designers are accustomed to using free tools from the FPGA suppliers and perhaps a handful of lower-end commercial EDA tools. But those days are quickly disappearing when you are faced with a device containing millions of gates, multiple embedded IP cores and performance requirements that demand a precision, timing-driven design approach.

To emphasize Gary’s point, FPGA Journal recently conducted a survey of FPGA designers. To no one’s real surprise, it found that close to a quarter of designers are in ‘crisis’ mode when it comes to the amount of time spent debugging FPGAs. They complain of the huge amount of time required to verify a complex FPGA, find bugs and complete endless loops through synthesis and place-and-route. The implication is clear: the progress FPGAs have made is threatening to undermine their biggest advantage, time to market, because of inadequate design tools and methodologies.

As with most areas of design automation, verification has become the major bottleneck. For FPGAs it comes down to two issues: simulation performance and the efficiency of visibility into the design itself. Today, 40% of all FPGAs include embedded processors. Most of them are soft cores, but as mainstream embedded processors are added to FPGA devices this percentage will increase and the complexity of these designs will skyrocket. Modern communications and video applications have an insatiable appetite for bandwidth, and to deliver that performance, new applications are moving algorithms that previously executed in software into the FPGA hardware. With this change comes a significant verification and debug problem. We have good tools to debug software, but when the associated hardware does not work, the system as a whole does not work. In order to debug these new embedded designs, the designer needs the ability to incrementally compile the hardware code alongside the processor in the chip and debug them together.
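To put a rough number on that appetite for bandwidth, the Python sketch below works through a hypothetical video example; the resolution, frame rate and cycles-per-pixel figures are illustrative assumptions, not measurements from any particular design.

```python
# Back-of-envelope estimate (illustrative numbers only): why a video
# algorithm that runs comfortably in software can be pushed into FPGA fabric.

pixels_per_frame = 1920 * 1080   # assumed 1080p frame
frames_per_sec = 60              # assumed frame rate
cycles_per_pixel = 50            # assumed CPU cost of the processing kernel

pixel_rate = pixels_per_frame * frames_per_sec        # ~124 Mpixel/s
cpu_demand_hz = pixel_rate * cycles_per_pixel         # ~6.2 GHz of CPU cycles

print(f"Pixel rate:        {pixel_rate / 1e6:.1f} Mpixel/s")
print(f"CPU demand:        {cpu_demand_hz / 1e9:.1f} GHz of cycles")

# A fully pipelined FPGA datapath handles one pixel per clock, so the same
# workload needs only a modest fabric clock:
print(f"Fabric clock need: {pixel_rate / 1e6:.1f} MHz (one pixel per clock)")
```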

Let’s start with the performance challenge. Because of the sheer size and complexity of modern FPGAs, a software simulation run of the full-chip RTL that once completed in hours can now take days or weeks. The solution is to migrate as much of the design into a physical FPGA as soon as possible, because this allows those portions to run at speed and dramatically reduces the load on the software simulator. Most large-scale FPGA design groups utilize some combination of FPGA board-based solutions from the FPGA manufacturers or EDA suppliers, or they roll their own. This can add significant time to the development cycle because the board-based solutions themselves must be designed and debugged, but it has proven to be a successful way to increase verification throughput. Native FPGA simulation, as demonstrated by companies like GateRocket with its hardware-based RocketDrive product, is a very interesting and promising approach.
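A simple way to see why migration helps is an Amdahl-style estimate of the verification run. The Python sketch below assumes, purely for illustration, a 48-hour full-chip RTL simulation in which a growing fraction of the design moves into the physical FPGA and runs at speed, so that only the portion left in the software simulator still costs wall-clock time.

```python
# Rough Amdahl-style estimate (assumed numbers): wall-clock time of a
# full-chip verification run as blocks migrate from the software simulator
# into a physical FPGA, where they execute at speed.

baseline_hours = 48.0  # assumed full-chip RTL simulation time in software

for fraction_in_hardware in (0.0, 0.5, 0.8, 0.95):
    # Only the portion still in the simulator costs wall-clock time; the
    # hardware-resident portion is treated as effectively free (at speed).
    remaining = baseline_hours * (1.0 - fraction_in_hardware)
    speedup = baseline_hours / remaining if remaining else float("inf")
    print(f"{fraction_in_hardware:>4.0%} in hardware -> "
          f"{remaining:5.1f} h in the simulator ({speedup:.0f}x faster)")
```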

But the real hidden time sink is debug, and this has as much to do with the length of time required to re-run synthesis and place-and-route as it does with the actual process of finding bugs. For example, full-chip logic synthesis and place-and-route (PAR) runs that used to complete during lunch can now exceed 18 hours. This means that whenever a bug slips through to the system test lab and requires a change to the FPGA design, it can take more than a day to get the device re-programmed with a fix ready for testing.
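The arithmetic behind that delay is worth spelling out. The short Python sketch below uses assumed figures, an 18-hour synthesis-plus-PAR turn, some lab bring-up time and a handful of re-spins, to show how quickly bug-fix iterations consume a schedule; none of the numbers come from a specific project.

```python
# Iteration-cost arithmetic with assumed figures: each bug that escapes to
# the system test lab forces a full synthesis + place-and-route turn before
# the fix can even be tried.

synth_par_hours = 18          # assumed full-chip synthesis + PAR run
reprogram_and_lab_hours = 6   # assumed reprogramming and lab re-test time
respins = 8                   # assumed number of bug-fix iterations

hours_per_iteration = synth_par_hours + reprogram_and_lab_hours
total_hours = respins * hours_per_iteration
print(f"{hours_per_iteration} hours per iteration; "
      f"{total_hours} hours ({total_hours / 24:.0f} days) for {respins} re-spins")
```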

In many cases, actually identifying the source of a bug can be virtually impossible, because bugs can be introduced at any stage of the design process. Often you can only simulate or debug where you think the bug is, which makes coverage a huge issue.

Since one bug may mask several others, it is not uncommon to re-spin the FPGA and re-test it in the system, only to discover that additional changes are required. It’s easy to see how this slow, iterative process can become unwieldy, and can lead to weeks or months of project delays. Imported IP presents challenges as well. In the FPGA domain, it’s common to be presented with two models: a high-level representation containing behavioral constructs for use in simulation, and a gate-level representation to be incorporated into the FPGA. The problem is that there may be subtle differences between the behavioral and gate-level representations, and these differences only manifest themselves when the FPGA design is deployed in its target system.

As with the performance challenge, the solution to the ‘visibility’ challenge, which translates directly into lost productivity, is to target the design in-system, a la ASIC emulation. With an integrated hardware/software debug system (GateRocket calls it ‘device native’), once each new block is verified at the RTL (or behavioral) level in the context of the full-chip design, its synthesized, gate-level equivalent can be moved into the physical FPGA. As soon as a problem manifests itself, the verification run can be incrementally repeated with the RTL version of the suspect block resident in the simulation world, running in parallel with the gate-level version realized in the physical FPGA. The signals at the peripheries of these blocks (along with any designated signals internal to the blocks) can be compared “on-the-fly.” This is not only a huge time saver; it changes the nature of design itself at this level.
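The mechanics of that on-the-fly comparison can be sketched abstractly. The Python fragment below is only a conceptual illustration, not GateRocket’s implementation or API: it assumes two per-cycle traces of the same periphery signal, one captured from the RTL block in the simulator and one from its gate-level twin in the FPGA, and reports the first cycle at which they diverge.

```python
# Conceptual sketch (not any vendor's API): compare the periphery signals of
# a block's RTL version (software simulator) against its gate-level version
# (physical FPGA), cycle by cycle, stopping at the first mismatch.

from itertools import zip_longest

def first_divergence(rtl_trace, fpga_trace):
    """Return (cycle, rtl_value, fpga_value) of the first mismatch, or None."""
    for cycle, (rtl_val, fpga_val) in enumerate(zip_longest(rtl_trace, fpga_trace)):
        if rtl_val != fpga_val:
            return cycle, rtl_val, fpga_val
    return None

# Tiny illustrative traces: the gate-level block drops a bit at cycle 3.
rtl_trace = [0x0, 0x1, 0x3, 0x7, 0xF]
fpga_trace = [0x0, 0x1, 0x3, 0x5, 0xF]

mismatch = first_divergence(rtl_trace, fpga_trace)
if mismatch:
    cycle, rtl_val, fpga_val = mismatch
    print(f"First divergence at cycle {cycle}: RTL={rtl_val:#x}, FPGA={fpga_val:#x}")
else:
    print("Traces match over the compared window")
```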

Using this technology–combining conventional simulation with physical hardware and an appropriate debugging environment–it is possible to very quickly detect, isolate, and identify bugs, irrespective of where they originated in the FPGA design flow. Once a bug has been isolated to one block of the design, a change can be made to the RTL representation of that block, which can then be re-run along with the hardware representation of the other blocks. In this way, a fix can be immediately tested and verified without re-running synthesis and place-and-route, and with only the suspect block running in the software simulator.

Given the progress FPGA silicon providers are making in keeping pace with Moore’s Law and coming to market with phenomenal capabilities, it would be a shame to allow design tools and methodologies to slow us down. The EDA industry must meet the challenge. Advancements like giving designers the ability to see how a design behaves in the physical chip (often the actual target FPGA itself) by running it in-system, while still having access to all the capabilities and flexibility of a software simulator (like those provided by GateRocket), are breakthroughs that will help FPGAs make continued inroads into the traditional ASIC market and bring the power of programmability to more companies.
