Posts Tagged ‘Palladium’

Verification Joins the Adults’ Table

Tuesday, January 24th, 2017

Adam Sherer, Group Director, Product Management, System & Verification Group, Cadence

As we plan for our family gatherings this holiday season, it’s time to welcome Verification to the adults’ table. Design and Implementation are already at the table, having established their own families consisting of architects with the comprehensive experience to manage the overall flow and specialists who provide the deep knowledge needed to make each project succeed. Verification has matured with the realization that it needs its own family of architects and specialists that have the experience and knowledge to rapidly and repeatedly verify complex projects.

Figure 1: The family table

This maturation of Verification occurred as complexity drove the need for the architect’s role. Designs pushed through a billion gates and systems grew their functional dependency on the fusion of analog, software, digital, and power. Meanwhile, the teams verifying these designs became distributed around the globe. A holistic view of verification became necessary and it was rooted in a more rigorous verification planning process. When we listen to the architect at our holiday dinner this year, we’ll hear how she wished for and got verification management automation with Cadence’s vManager solution. In order to close her verification plan, she needs to reuse verification IP (VIP), specify new Cadence VIP protocols, and direct the internal development of new VIP running on a range of verification engines. She also realizes that traditional methods will not scale to complex scenarios that must be verified across the complete SoC, so she is excited by the new portable stimulus standard work in Accellera and is piloting a project using Cadence’s Perspec System Verifier to gain an efficiency edge over her company’s competitors.

Design and Implementation were impressed by the automation that Verification was able to access. They asked Verification whether that meant she had resources to spare for their families. She couldn’t help but laugh, but then calmed down and explained how her family is growing with the specialists needed to implement the verification plans. She also discussed how those experts are already working with experts from Design and Implementation to achieve verification closure.

Figure 2: The Cadence Verification Family

Verification is a multi-engine, multi-abstraction, multi-domain task that starts and finishes with the entire development team. At the start of development, design experts and verification experts apply JasperGold formal analysis with coverage to both raise quality and mark the block-level features as verified in the overall plan. UVM experts then step in to complete comprehensive IP/subsystem verification using high-performance digital and mixed-signal simulation with the Incisive Enterprise Simulator. While randomization and four-state simulation are critical at this stage, the UVM testbench can consume as much as 50% of the simulation time, which lengthens runtime as the project moves to subsystem and SoC integration.

The verification experts then apply acceleration techniques to reduce time spent in the testbench, develop new scenarios with the Perspec System Verifier to enable fast four-state RTL simulation with the Cadence RocketSim Parallel Simulation Engine, and accelerate with the Cadence Palladium Z1 Enterprise Emulation System. As the project moves to the performance, capacity, coverage, and accessibility of the Palladium Z1 engine, new experts are able to address system features that depend on bare-metal software and in-circuit data. Since the end customer interacts with the system through application software, the verification experts work with software teams using the Cadence Protium Rapid Prototyping Platform, which provides the performance needed to support that team’s verification needs.

With all of these experts around the world, the verification architect explains that she needs fabrics that enable them to communicate. She uses the Cadence Indago Debug Platform and vManager to provide unified debug across the engines, and multi-engine metrics to help her automate the verification plan. More and more of the engines provide verification metrics, such as coverage from simulation and emulation, that can be merged and rolled up to the vManager solution. Even the implementation teams are working with the verification experts to simulate post-PG netlists using the Incisive Enterprise Simulator XL and RocketSim solutions, enabling final signoff on the project.
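
As a concrete flavor of the block-level formal step described above, the sketch below pairs one assertion with one cover item, the basic currency a formal tool such as JasperGold consumes: the assertion is proven exhaustively, while the cover item shows the condition is reachable, so the proof is not vacuous and the feature can be checked off in the plan. It is a minimal illustration with hypothetical FIFO status signals, not a Cadence-specific flow.

```systemverilog
// Minimal SVA sketch: one safety property plus one cover item for a
// hypothetical FIFO block. A formal tool proves the assertion over all
// reachable states and uses the cover item to rule out vacuity.
module fifo_props (
  input logic clk,
  input logic rst_n,
  input logic full,
  input logic empty
);
  // Feature: the FIFO never reports full and empty at the same time
  a_no_full_empty: assert property (
    @(posedge clk) disable iff (!rst_n) !(full && empty));

  // Coverage: the full condition is actually reachable
  c_full_reachable: cover property (
    @(posedge clk) disable iff (!rst_n) full);
endmodule
```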

As Design and Implementation pass dessert around the table, they are very impressed with Verification. They’ve seen the growing complexity in their own families and have been somewhat perplexed by how verification gets done. Verification has talked about new tools, standards, and methodologies for years, and they assumed those productivity enhancements meant that verification engineers could remain generalists by accessing more automation. Hearing more about the breadth and depth of the verification challenge has helped them realize that there is an absolute need for a complete verification family with architects and experts. Raising a toast to the newest member of the electronic design adults’ table, the family knows that 2017 is going to be a great year.

Rapid Prototyping is an Enduring Methodology

Thursday, September 24th, 2015

Gabe Moretti, Senior Editor

When I started working in the electronics industry, hardware development had prototyping using printed circuit boards (PCBs) as its only verification tool. The method was not “rapid,” since it involved building and maintaining one or more PCBs. With the development of the EDA industry, simulators became an alternative method, although they only achieved popularity in the ’80s with the introduction of hardware description languages like Verilog and VHDL.

Today the majority of designs are developed using software tools, but rapid prototyping is still used in a significant portion of designs. In fact, hardware-based prototyping is a growing methodology, mostly due to the increased power and size of FPGA devices. It can now truly be called “rapid prototyping.”

Rapid Prototyping Defined

Lauro Rizzatti, a noted expert on the subject of hardware-based development of electronics, reinforces the idea in this way: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future.”

Saba Sharifi, VP Business Development, Logic Business Unit, System LSI Group at Toshiba America Electronic Components, describes the state of rapid prototyping as follows: “While traditional virtual prototyping involves using CAD and CAE tools to validate a design before building a physical prototype, rapid prototyping is growing in popularity as a method for prototyping SoC and ASIC designs on an FPGA for hardware verification and early software development. In a rapid prototyping environment, a user may start development using an FPGA-based system, and then choose either to keep the design in the FPGA, or to transfer it into a hard-coded solution such as an ASIC. There are a number of different ways to achieve this end.”

To support the hardware-based prototyping methodology, Toshiba has introduced a new type of device: the Fast Fit Structured Array (FFSA). The FFSA technology utilizes metal-configurable standard cell (MCSC) SoC platform technology for designing ASICs and ASSPs, and for replacing FPGAs. Designed with FPGA capabilities in mind, FFSA provides pre-developed wrappers that can realize some of the key FPGA functionality, as well as pre-defined master sizes, to facilitate the conversion process.

According to Saba, “In a sense, it’s an extension to traditional FPGA-to-ASIC rapid prototyping – the second portion of the process can be achieved significantly faster using the FFSA approach. The goal with FFSA technology is to enable developers to reduce their time to market and non-recurring engineering (NRE) costs by minimizing customizable layers (to four metal layers) while delivering the performance, power and lower unit costs associated with standard-cell ASICs. An FFSA device speeds front-end development due to faster timing closure compared to a traditional ASIC, while back-end prototyping is improved via the pre-defined master sizes. In some cases, customers pursue concurrent development – they get the development process started in the FPGA and begin software development, and then take the FPGA into the FFSA platform. The platform takes into consideration the conversion requirements to bring the FPGA design into the hard-coded FFSA so that the process can be achieved more quickly and easily. FFSA supports a wide array of interfaces and high-speed SerDes (up to 28G), making it well suited for wired and wireless networking, SSD (storage) controllers, and a number of industrial and consumer applications. With its power, speed, NRE and TTM benefits, FFSA can be a good solution anywhere that developers have traditionally pursued an FPGA-based approach, or required customized silicon for purpose-built applications with moderate volumes.”

According to Troy Scott, Product Marketing Manager, Synopsys: “FPGA-based prototypes are a class of rapid prototyping methods that are very popular for the promise of high performance with a relatively low cost to get started. Prototyping with FPGAs is now considered a mainstream verification method, implemented in a myriad of ways, from relatively simple low-cost boards consisting of a single FPGA with memory and interface peripherals, to very complex chassis systems that integrate dozens of FPGAs in order to host billion-gate ASIC and SoC designs. The sophistication of the development tools, debug environment, and connectivity options varies as much as the equipment architectures do. This article examines the trends in rapid prototyping with FPGAs, the advantages they provide, how they compare to other simulation tools like virtual prototypes, and EDA standards that influence the methodology.

According to prototyping specialists surveyed by Synopsys, the three most common goals are to speed up RTL simulation and verification, validate hardware/software systems, and enable software development. These goals influence the design of a prototype in order to address the unique needs of each use case.”

Figure 1: Top 3 Goals for a Prototype (Courtesy of Synopsys, Inc.)

Advantages of Rapid Prototyping

Frank Schirrmeister, Group Director for Product Marketing, System Development Suite, Cadence, provides a business description of the advantages: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy and at a reasonable replication cost.”

Stephen Bailey, Director of Emerging Technologies, DVT at Mentor Graphics, puts it as follows: “Performance, meaning more verification cycles in a given period of time, especially for software development, drives the use of rapid prototypes. Rapid prototyping typically provides a 10x (~10 MHz) performance advantage over emulation, which is itself 1,000x (~1 MHz) faster than RTL software simulation (~1 kHz).
Once a design has been implemented in a rapid prototype, that implementation can easily be replicated across as many prototype hardware platforms as the user has available. No recompilation is required. Replicating or cloning prototypes provides more platforms for software engineers, who appreciate having their own platform and definitely want one available whenever they need to debug.”
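
As a back-of-the-envelope illustration of those ratios, assume a hypothetical SoC with a 1 GHz target clock, so one second of real operation is $10^9$ design cycles. At the typical speeds quoted above, the wall-clock time to execute that single second is roughly:

\[
\frac{10^9\ \text{cycles}}{10^3\ \text{cycles/s}} \approx 10^6\ \text{s} \approx 11.6\ \text{days (RTL simulation)},
\]
\[
\frac{10^9}{10^6} = 1{,}000\ \text{s} \approx 17\ \text{minutes (emulation)},
\]
\[
\frac{10^9}{10^7} = 100\ \text{s} \approx 1.7\ \text{minutes (rapid prototype)}.
\]

Actual throughput varies widely with design size, partitioning, and setup, but the orders of magnitude explain why software workloads migrate toward emulation and prototyping.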

Lauro points out: “The main advantage of rapid prototyping is very fast execution speed, which comes at the expense of a very long setup time that may take months on very large designs. Partitioning and clock mapping require uncommon expertise. Of course, the assumption is that the design fits in the maximum configuration of the prototyping board; otherwise the approach would not be applicable. The speed of FPGA prototyping makes it viable for embedded software development. Prototypes are best used for validating application software and for final system validation.”

Troy sees the advantages of the methodology as follows: “To address verification tasks, the prototype helps to characterize timing and pipeline latencies that are not possible with more high-level representations of the design and, perhaps more dramatically, is able to reach execution speeds in the hundreds of megahertz. Typical prototype architectures for verification tasks rely on a CPU-based approach where traffic generation for the DUT is written as a software program. The CPU might be an external workstation or integrated with the prototype itself. Large memory ICs adjacent to the FPGAs store input traffic and results that are preloaded and read back for analysis. Prototypes that can provide an easy-to-implement test infrastructure that includes memory ICs and controllers, a high-bandwidth connection to the workstation, and a software API will accelerate data transfer and monitoring tasks by the prototyping team.

Software development and system validation tasks will influence the prototype design as well. The software developer is seeking an executable representation to support porting of legacy code and to develop new device drivers for the latest interface protocol implementation. In some cases the prototype serves as a way for a company to deploy an architecture design and software driver examples to partners and customers. Both schemes demand high execution speed and often real-world interface PHY connections. For example, consumer product developers will seek USB, HDMI, and MIPI interfaces, while an industrial product will often require ADC/DAC or Ethernet interfaces. The prototype then must provide an easy way to connect to accessory interfaces and ideally a catalog of controllers and reference designs. And because protocol validation may require many cycles to validate, a means to store many full milliseconds of hardware trace data helps compliance check-out and troubleshooting.”
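
As a rough sketch of the memory-backed traffic scheme described above, the SystemVerilog below preloads a stimulus memory from a file, streams it into a DUT one word per cycle, and captures results for later readback and analysis. The DUT, its single-word interface, and the file names are hypothetical placeholders, not any vendor’s API.

```systemverilog
// Stand-in DUT so the sketch is self-contained: one word in, one out.
module dut (
  input  logic        clk,
  input  logic [31:0] din,
  output logic [31:0] dout
);
  always_ff @(posedge clk) dout <= din ^ 32'hDEAD_BEEF; // placeholder behavior
endmodule

// Harness: preload input traffic, drive the DUT, read results back.
module traffic_harness;
  logic        clk = 0;
  logic [31:0] stim_mem   [0:1023];  // preloaded input traffic
  logic [31:0] result_mem [0:1023];  // captured outputs for analysis
  logic [31:0] din, dout;

  always #5 clk = ~clk;              // free-running clock

  dut u_dut (.clk(clk), .din(din), .dout(dout));

  initial begin
    $readmemh("stimulus.hex", stim_mem);    // preload the traffic image
    for (int i = 0; i < 1024; i++) begin
      din = stim_mem[i];        // drive while the clock is low
      @(negedge clk);           // DUT samples din at the rising edge
      result_mem[i] = dout;     // capture the registered response
    end
    $writememh("results.hex", result_mem);  // read back for analysis
    $finish;
  end
endmodule
```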

Rapid Prototyping versus Virtual Prototyping

According to Steve, “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped onto an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely a synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, either based on C/C++/SystemC or SystemVerilog languages, that is not synthesizable. Its rather limited support for hardware debugging hinders its ability to verify drivers and operating systems, where hardware emulation excels.”

Lauro kept his answer short and to the point: “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped onto an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely a synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, either based on C/C++/SystemC or SystemVerilog languages, that is not synthesizable.”
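
To make the distinction both of them draw concrete, here is a minimal, hypothetical SystemVerilog sketch. The first fragment is plain synthesizable RTL that an FPGA flow can map directly; the second models similar behavior at a higher level of abstraction with classes and queues, which simulators execute happily but no synthesis tool accepts.

```systemverilog
// Synthesizable RTL: maps directly onto FPGA fabric.
module counter (
  input  logic       clk,
  input  logic       rst_n,
  output logic [7:0] count
);
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) count <= '0;
    else        count <= count + 8'd1;
endmodule

// Transaction-level description of similar behavior: classes and
// queues execute in simulation only; synthesis tools reject them.
class counter_model;
  byte unsigned trace[$];      // queue of observed values
  byte unsigned value = 0;
  function void tick();        // one "transaction" per call
    value += 1;
    trace.push_back(value);
  endfunction
endclass
```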

Cadence’s position is represented by Frank in his usual thorough style: “Once RTL has become sufficiently stable, it can be mapped into an array of FPGAs for execution. This essentially requires a remapping from the design’s target technology into the FPGA fabric, and it often needs memories remodeled, different clock domains managed, and smart partitioning before the mapping into the individual FPGAs happens using standard software provided by the FPGA vendors. The main driver for the use of FPGA-based prototyping is software development, which has changed the dynamics of electronics development quite fundamentally over the last decade. Its key advantage is its ability to provide a hardware platform for software development and system validation that is fast enough to satisfy software developers. Execution speed can reach tens of MHz, up to 100 MHz, and the prototype allows connections to external interfaces like PCIe, USB, Ethernet, etc. in real time, which leads to the ability to run system validation within the target environments.

When time to availability is a concern, virtual prototyping based on transaction-level models (TLM) can be the clear winner because virtual prototypes can be provided independently of the RTL that the engines on the continuum require. Everything depends on model availability, too. A lot of processor models today, like ARM Fast Models, are available off-the-shelf. Creating models for new IP often delays the application of virtual prototypes due to their sometimes extensive development effort, which can eliminate the time-to-availability advantage during a project. While virtual prototyping can run in the speed range of hundreds of MIPS, not unlike FPGA-based prototypes, the key differences between them are the model fidelity, replication cost, and the ability to debug the hardware.

Model fidelity often determines which prototype to use. There is often no hardware representation available earlier than virtual prototypes, so they can be the only choice for early software bring-up and even initial driver development. They are, however, limited by model fidelity – TLMs are really an abstraction of the real thing as expressed in RTL. When full hardware accuracy is required, FPGA-based prototypes are a great choice for software development and system validation at high speed. We have seen customers deliver dozens if not hundreds of FPGA-based prototypes to software developers, often three months or more prior to silicon being available.

Two more execution engines are worth mentioning. RTL simulation is the more accurate, slower version of virtual prototyping. Its low speed in the Hz or kHz range is really prohibitive for efficient software development. In contrast, due to the high speed of both virtual and FPGA-based prototypes, software development is quite efficient on both of them. Emulation is the slower equivalent of FPGA-based prototyping that can be available much earlier because its bring-up is much easier and more automated, even from not-yet-mature RTL. It also offers almost simulation-like debug and, since it also provides speed in the MHz range, emulation is often the first appropriate engine for software and OS bring-up, used for Android, Linux and Windows, as well as for executing benchmarks like AnTuTu. Of course, on a per-project basis, it is considered more expensive than FPGA-based prototyping, even though it can be more cost-efficient from a verification perspective when considering multiple projects and a large number of regression workloads.”

Figure 2: Characteristics of the two methods (Courtesy of Synopsys Inc.)

Growth Opportunities

For Lauro the situation boils down to this: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future. The dramatic improvement in FPGA technology that made possible the manufacturing of devices of monstrous capacity will enable rapid prototyping for ever larger designs. Rapid prototyping is an essential tool in the modern verification/validation toolbox of chip designs.”

Troy thinks there are growth opportunities, and explained: “A prototype’s high performance and the relatively low investment to fabricate one have led prototypes to proliferate among IP and ASIC development teams. Market size estimates show that 70% of all ASIC designs are prototyped to some degree using an FPGA-based system today. Given this demand, several commercial offerings have emerged that address the limitations exhibited by custom-built boards. Almost immediate availability, better quality, modularity for better reuse, and the ability to out-source support and maintenance are big benefits. Well-documented interfaces and usage guidelines make end-users largely self-sufficient. A significant trend for commercial systems now is development and debugging features of the EDA software being integrated or co-designed along with the hardware system. Commercial systems can demonstrate superior performance, debug visibility, and bring-up efficiency as a result of the development tools using hardware characterization data, being able to target unique hardware functions of the system, and employing communication infrastructure to the DUT. Commercial high-capacity prototypes are often made available as part of the IT infrastructure so various groups can share or be budgeted prototype resources as project demands vary. Network accessibility, independent management of individual FPGA chains in a “rack” installation, and job queue management are common value-added features of such systems.

Another general trend in rapid prototyping is to mix transaction-level model (TLM) and RTL model abstractions in order to blend the best of both and accelerate the validation task. How do virtual and physical prototypes differ? The biggest contrast is often the model’s availability during the project. In practice, the latest-generation CPU architectures are not available as synthesizable RTL. License and deployment restrictions can limit access, or the design is so new that the RTL is simply not yet available from the vendor. For these reasons, virtual prototypes of key CPU subsystems are a practical alternative. For best performance, and thus for software development tasks, hybrid prototypes typically join an FPGA-based prototype, a cycle-accurate implementation in hardware, with a TLM prototype using a loosely-timed (LT) coding style. TLM abstracts away the individual events and phases of the behavior of the system and instead focuses on the communication transactions. This may be a perfectly acceptable model for the commercial IP block of a CPU, but may not be for a new custom interface controller-to-PHY design that is being tailored for a particular application. The team integrating the blocks of the design will assess whether the abstraction is appropriate to satisfy the verification or validation scenarios.”

Steve described his opinion as follows: “Historically, rapid prototyping has been utilized for designs sized in the tens of millions of gates, with some advanced users pushing capacity into the low 100M-gate range. This has limited the use of rapid prototyping to full chips on the smaller end of the size range and to IP blocks or subsystems of larger chips. For IP blocks/subsystems, it is relatively common to combine virtual prototypes of the processor subsystem with a rapid prototype of the IP block or subsystem. This is referred to as ‘hybrid prototyping.’
With the next generation of FPGAs, such as Xilinx’s UltraScale and Altera’s Stratix-10, and the continued evolution of prototyping solutions, creating larger rapid prototypes will become practical. This should result in expanded use of rapid prototyping to cover more full-chip pre-silicon validation uses.
In the past, limited silicon visibility made debugging difficult and analysis of various aspects of the design virtually impossible with rapid prototypes. Improvements in silicon visibility and control will improve debug productivity when issues in the hardware design escape to the prototype. Visibility improvements will also provide insight into chip and system performance and quality that were previously not possible.”

Frank concluded: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy and at a reasonable replication cost. Second, the rollout of Cadence’s multi-fabric compiler that maps RTL both into the Palladium emulation platform and into the Protium FPGA-based prototyping platform significantly eases the trade-offs with respect to speed and hardware debug between emulation and FPGA-based prototyping. This gives developers even more options than they ever had before and widens the applicability of FPGA-based prototyping. The third driver for growth in prototyping is the advent of hybrid usage of, for example, virtual prototyping with emulation, combining fast execution for some portions of the design (like the processors) with accuracy for other aspects of the design (like graphics processing units).

Overall, rapid or FPGA-based prototyping has its rightful place in the continuum of development engines, offering users high-speed execution of an accurate representation. This advantage makes rapid or FPGA-based prototyping a great platform for software development that requires hardware accuracy, as well as for system validation.”

Conclusion

All four contributors painted a positive picture of rapid prototyping. The growth of FPGA devices, both in size and speed, has been critical in keeping this type of development and verification method applicable to today’s designs. It is often the case that a development team will need to use a variety of tools as it progresses through a project, and rapid prototyping has proven to be useful and reliable.

Horizontal and Vertical Flow Integration for Design and Verification

Thursday, August 20th, 2015

Frank Schirrmeister, Senior Group Director for Product Marketing of the System Development Suite at Cadence

System design and verification are critical components for making products successful in an always-on and always-connected world. For example, I wear a device on my wrist that constantly monitors my activities and buzzes to remind me that I’ve been sitting for too long. The device transmits my activity to my mobile phone, which serves as a data aggregator, only to forward it on to the cloudy sky from where I get friendly reminders about my activity progress. I’m absolutely hoping that my health insurance is not connected to my activity progress because my premium payments could easily fluctuate daily. How do we go about verifying our personal devices and the system interaction across all imaginable scenarios? It sounds like an impossibly complex task.

From personal experience, it is clear to me that flows need to be connected both in horizontal and vertical directions. Bear with me for a minute while I explain.

Rolling back about 25 years, I was involved in my first chip design. To optimize area, I designed a three-transistor dynamic memory cell in what we would today call 800nm (0.8 micron) technology. The layout was designed manually from gate-level schematics that had been entered manually as well. In order to verify throughput for the six-chip system that my chip was part of, I developed a model at the register-transfer level (RTL) using this new thing at the time called the VHSIC Hardware Description Language (VHDL) (yep, I am European). What I would call vertical integration today was clunky at best 25 years ago. I was stubbing data out from VHDL into files that would be re-used to verify the gate level. My colleagues and I would write scripts to extract layout characteristics to determine the speed of the memory cell and annotate it to the gate level for verification. No top-down automation was used, i.e., no synthesis of any kind.

About five to seven years after my first chip design (we are now late in the ’90s if you are counting), everything in the flow had moved upward and automation was added. My team designed an MPEG-2 decoder fully in RTL and used logic synthesis for implementation. The golden reference data came from C-models (vertically going upward) and was not directly connected to the RTL. Instead, we used file-based verification of the RTL against the C-model. Technology data from the 130nm process that we used at the time was annotated back into logic synthesis for timing simulation and to drive placement. Here, vertical integration really started to work. And verification complexity had risen so much that we needed to extend horizontally, too. We verified the RTL using both simulation and emulation with a System Realizer M250. We took drops of the RTL, froze them, cross-mapped them manually to emulation, and ran longer sequences, specifically around audio/video synchronization, for which we needed seconds of actual real-time video decoding to be executed. We used four levels vertically: layout to gate to RTL (automated, with annotations back to the RTL) and the C-level on top for reference. Horizontally, we used both simulation and emulation.

Now fast-forward another 10 years or so. At that point, I had switched to the EDA side of things. Using early electronic system-level (ESL) reference flows, we annotated .lib technology information all the way up into virtual platforms for power analysis. Based on the software driving the chip, the technology impact on power consumption could be assessed. Accuracy was a problem, and that’s why I think that flows may have been a bit too early for their time back in 2010.

So where are we today?

Well, the automation between the four levels has been greatly increased vertically. Users take .lib information all the way up into emulation using tools like the Cadence Palladium® Dynamic Power Analysis (DPA), which enables engineers using emulation to also analyze software in a system-level environment. This tool allows designers to achieve power estimates up to 90% accurate compared with actual chip power consumption, as reported by TI and, most recently, Realtek. High-level synthesis (HLS) has become mainstream for parts of the chip. That means the fourth level above the RTL is getting more and more connected as design entry moves upward, and with it, verification is more and more connected as well.

And horizontally, we are now using at least four engines: formal verification, RTL simulation, emulation, and field-programmable gate array (FPGA)-based prototyping, which are increasingly integrated. A couple of examples include:

  • Simulation acceleration – combining simulation and emulation
  • Simulation/emulation hot swap – stopping in simulation and starting in emulation, as well as vice versa
  • Virtual platform/emulation hybrids – combining virtual platforms and emulation
  • Multi-fabric compilation – same flow for emulation and FPGA-based prototyping
  • Unified Power Format (UPF)/Common Power Format (CPF) low-power verification – using the same setup for simulation and emulation
  • Simulation/emulation coverage merge – combining data collected in simulation and emulation (a minimal coverage-model sketch follows this list)
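
As a rough illustration of that last item, the sketch below shows the kind of functional coverage model whose hits, whether collected in a simulation run or in an emulation run that supports coverage, can later be merged into one rolled-up metric view. The bus signals and bins are hypothetical, and the merge itself is performed by the verification-management tooling rather than by this code.

```systemverilog
// Minimal functional-coverage sketch: the same covergroup can be
// sampled across runs, and the resulting databases merged into one
// report by the verification-management tools.
module cov_example;
  logic       clk = 0;
  logic [1:0] burst_len;
  logic       is_write;

  covergroup bus_cg @(posedge clk);
    option.per_instance = 1;
    cp_len: coverpoint burst_len;   // four automatic bins
    cp_wr:  coverpoint is_write;
    x_lw:   cross cp_len, cp_wr;    // eight cross bins
  endgroup

  bus_cg cg = new();

  always #5 clk = ~clk;

  initial begin
    repeat (40) begin               // randomized toy traffic
      burst_len = $urandom_range(0, 3);
      is_write  = $urandom_range(0, 1);
      @(negedge clk);
    end
    $display("functional coverage = %0.1f%%", cg.get_inst_coverage());
    $finish;
  end
endmodule
```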

Arguably, with the efforts to shift post-silicon verification even further to the left, the actual chip becomes the fifth engine.

So what’s next? It looks like we have the horizontal pillar engines complete now when we add in the chip. Vertically, integration will become even closer to allow more accurate prediction prior to actual implementation. For example, the recent introduction of the Cadence Genus™ Synthesis Solution delivers improved productivity during RTL design and improved quality of results (QoR) in final implementation. In addition, the introduction of the Cadence Joules™ RTL Power Solution provides a more accurate measure of RTL power consumption, which greatly improves the top-down estimation flow from the RTL downstream. This further increases accuracy for the Palladium DPA and for the Cadence Incisive® Enterprise Simulator, which automates testbench creation and performs coverage-driven functional verification, analysis, and debug, from the system level to the gate level, boosting verification productivity and predictability.

Horizontal and vertical flow integration is really the name of the game for today’s chip designer and future chip designers.