
Horizontal and Vertical Flow Integration for Design and Verification

Frank Schirrmeister, senior group director for product marketing of the System Development Suite at Cadence.

System design and verification are critical components of making products successful in an always-on, always-connected world. For example, I wear a device on my wrist that constantly monitors my activities and buzzes to remind me when I've been sitting for too long. The device transmits my activity data to my mobile phone, which serves as an aggregator and forwards the data on to the cloud, from where I get friendly reminders about my activity progress. I'm absolutely hoping that my health insurance is not connected to my activity progress, because my premium payments could easily fluctuate daily. How do we go about verifying our personal devices and the system interaction across all imaginable scenarios? It sounds like an impossibly complex task.

From personal experience, it is clear to me that flows need to be connected both in horizontal and vertical directions. Bear with me for a minute while I explain.

Rolling back about 25 years, I was involved in my first chip design. To optimize area, I designed a three-transistor dynamic memory cell in what we would today call 800nm (0.8 micron) technology. The layout was drawn manually from gate-level schematics that had been entered manually as well. To verify throughput for the six-chip system that my chip was part of, I developed a model at the register-transfer level (RTL) using a new thing at the time called the VHSIC Hardware Description Language (VHDL) (yep, I am European). What I would call vertical integration today was clunky at best 25 years ago. I stubbed data out from VHDL into files that would be re-used to verify the gate level. My colleagues and I wrote scripts to extract layout characteristics, determine the speed of the memory cell, and annotate it to the gate level for verification. No top-down automation was used, i.e., no synthesis of any kind.

About five to seven years after my first chip design (we are now in the late '90s if you are counting), everything in the flow had moved up a level and automation had been added. My team designed an MPEG-2 decoder entirely at the RTL and used logic synthesis for implementation. The golden reference data came from C models (one level up vertically) and was not directly connected to the RTL; instead, we used file-based verification of the RTL against the C model. Technology data from the 130nm process we used at the time was annotated back into logic synthesis for timing simulation and to drive placement. Here, vertical integration really started to work. And verification complexity had risen so much that we needed to extend horizontally, too. We verified the RTL using both simulation and emulation with a System Realizer M250. We took drops of the RTL, froze them, cross-mapped them manually to emulation, and ran longer sequences, specifically around audio/video synchronization, for which we needed seconds of actual real-time video decoding to be executed. We used four levels vertically: layout to gate to RTL (automated, with annotations back to the RTL) and the C level on top for reference. Horizontally, we used both simulation and emulation.
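
For readers who never lived through a file-based flow, the mechanics are mundane: the reference model and the RTL simulation each dump their outputs to files, and a small script declares victory or defeat by comparing them. Below is a minimal sketch in Python; the file names and the one-hex-word-per-line format are hypothetical stand-ins for whatever the C model and testbench actually wrote.

```python
#!/usr/bin/env python3
"""Sketch of file-based golden-model verification. The C reference model
and the RTL simulation each dump one hex word per line (as a typical
textio/$fdisplay dump would); the mismatch report is the pass/fail."""

import sys

def load_vectors(path):
    # One hex word per line; blank lines are ignored.
    with open(path) as f:
        return [int(line.strip(), 16) for line in f if line.strip()]

def compare(golden_path, rtl_path, tolerance=0):
    golden = load_vectors(golden_path)
    rtl = load_vectors(rtl_path)
    if len(golden) != len(rtl):
        print(f"length mismatch: golden={len(golden)} rtl={len(rtl)}")
    mismatches = []
    for i, (g, r) in enumerate(zip(golden, rtl)):
        # A small tolerance can absorb rounding differences, e.g. in a
        # fixed-point IDCT versus a floating-point reference.
        if abs(g - r) > tolerance:
            mismatches.append((i, g, r))
    for i, g, r in mismatches[:10]:  # report the first few offenders
        print(f"sample {i}: golden={g:#06x} rtl={r:#06x}")
    return not mismatches and len(golden) == len(rtl)

if __name__ == "__main__":
    # Hypothetical dump files from the C model and the RTL simulation.
    ok = compare("golden_cmodel.out", "rtl_sim.out")
    sys.exit(0 if ok else 1)
```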

Now fast-forward another 10 years or so. By that point, I had switched to the EDA side of things. Using early electronic system-level (ESL) reference flows, we annotated .lib technology information all the way up into virtual platforms for power analysis. Based on the software driving the chip, the technology's impact on power consumption could be assessed. Accuracy was a problem, which is why I think those flows were a bit ahead of their time back in 2010.

So where are we today?

Well, automation between the four levels has increased greatly in the vertical direction. Users take .lib information all the way up into emulation using tools like Cadence Palladium® Dynamic Power Analysis (DPA), which enables engineers using emulation to analyze software in a system-level environment as well. The tool allows designers to achieve power estimates that correlate to within 90% of actual chip power consumption, as reported by TI and, most recently, Realtek. High-level synthesis (HLS) has become mainstream for parts of the chip. That means the fourth level, above the RTL, is getting more and more connected as design entry moves upward, and with it, verification becomes more and more connected as well.
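
Under the hood, all such flows rest on the same classic relationship: dynamic power scales with switching activity, capacitance, voltage squared, and frequency. The sketch below shows that textbook model in Python; it is not Palladium DPA's implementation, and every net name and number in it is invented for illustration. A real flow substitutes measured emulation activity and per-cell .lib capacitance data.

```python
"""Back-of-the-envelope sketch of activity-based dynamic power estimation
using the textbook formula P = alpha * C * V^2 * f per net. All net names
and values below are made up for illustration."""

# (net_name, alpha, capacitance_F): alpha is the fraction of clock cycles
# on which the net toggles, as an emulator or simulator would record it.
nets = [
    ("cpu_core/alu_out",    0.18, 12e-15),
    ("cpu_core/regfile_we", 0.05,  3e-15),
    ("ddr_if/dq_bus",       0.40, 45e-15),
]

VDD = 0.9      # supply voltage in volts (assumed)
F_CLK = 1.0e9  # clock frequency in hertz (assumed)

def dynamic_power_watts(nets, vdd, f_clk):
    # Classic CV^2f model: each net dissipates alpha * C * V^2 * f.
    return sum(alpha * cap * vdd**2 * f_clk for _, alpha, cap in nets)

print(f"estimated dynamic power: "
      f"{dynamic_power_watts(nets, VDD, F_CLK) * 1e3:.3f} mW")
```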

And horizontally, we are now using at least four engines: formal, RTL simulation, emulation, and field-programmable gate array (FPGA)-based prototyping, all of which are increasingly integrated. Examples include:

  • Simulation acceleration – combining simulation and emulation
  • Simulation/emulation hot swap – stopping in simulation and starting in emulation, as well as vice versa
  • Virtual platform/emulation hybrids – combining virtual platforms and emulation
  • Multi-fabric compilation – same flow for emulation and FPGA-based prototyping
  • Unified Power Format (UPF)/Common Power Format (CPF) low-power verification – using the same setup for simulation and emulation
  • Simulation/emulation coverage merge – combining data collected in simulation and emulation (see the sketch after this list)
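
The last item is easy to picture concretely: a coverage merge is, at its core, a per-bin sum of hit counts across engines, followed by a recount of how many bins have been hit at all. The Python sketch below assumes a hypothetical dump format in which each engine lists every bin, including zero-hit ones, with its hit count; real coverage databases are far richer, but the merge is the same idea.

```python
"""Sketch of a simulation/emulation coverage merge. Assumed (hypothetical)
dump format: one 'bin_name hit_count' pair per line, with every bin listed
even when its count is zero, so totals are comparable across engines."""

from collections import Counter

def load_coverage(path):
    hits = Counter()
    with open(path) as f:
        for line in f:
            name, count = line.split()
            hits[name] += int(count)
    return hits

def merge(*runs):
    # Per-bin sum of hit counts across all runs/engines.
    merged = Counter()
    for run in runs:
        merged.update(run)
    return merged

def coverage_pct(hits):
    # A bin counts as covered once it has been hit at least once.
    covered = sum(1 for c in hits.values() if c > 0)
    return 100.0 * covered / len(hits) if hits else 0.0

sim = load_coverage("sim_coverage.txt")  # short, directed simulation runs
emu = load_coverage("emu_coverage.txt")  # long emulation runs (e.g. A/V sync)
merged = merge(sim, emu)
print(f"sim {coverage_pct(sim):.1f}% + emu {coverage_pct(emu):.1f}% "
      f"-> merged {coverage_pct(merged):.1f}%")
```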

Arguably, with the efforts to shift post-silicon verification even further to the left, the actual chip becomes the fifth engine.

So what’s next? With the chip added in, the horizontal pillar engines now look complete. Vertically, integration will become even closer, allowing more accurate prediction prior to actual implementation. For example, the recently introduced Cadence Genus™ Synthesis Solution delivers improved productivity during RTL design and improved quality of results (QoR) in the final implementation. In addition, the Cadence Joules™ RTL Power Solution provides a more accurate measure of RTL power consumption, which greatly improves the top-down estimation flow from the RTL downstream. This further increases accuracy for Palladium DPA and for the Cadence Incisive® Enterprise Simulator, which automates testbench creation and performs coverage-driven functional verification, analysis, and debug, from the system level to the gate level, boosting verification productivity and predictability.

Horizontal and vertical flow integration is really the name of the game for today's and tomorrow's chip designers.
