
Posts Tagged ‘simulation’


Cadence Launches New Verification Solutions

Tuesday, March 14th, 2017

Gabe Moretti, Senior Editor

During this year’s DVCon U.S., Cadence introduced two new verification solutions: the Xcelium Parallel Simulator and the Protium S1 FPGA-Based Prototyping Platform, which incorporates innovative implementation algorithms to boost engineering productivity.

Xcelium Parallel Simulator

The new simulation engine is based on innovative multi-core parallel computing technology, enabling systems-on-chip (SoCs) to get to market faster. On average, customers can achieve 2X improved single-core performance and more than 5X improved multi-core performance versus previous-generation Cadence simulators. The Xcelium simulator is production proven, having been deployed to early adopters across mobile, graphics, server, consumer, internet of things (IoT) and automotive projects.

The Xcelium simulator offers the following benefits aimed at accelerating system development:

  • Multi-core simulation improves runtime while also reducing project schedules: The third generation Xcelium simulator is built on the technology acquired from Rocketick. It speeds runtime by an average of 3X for register-transfer level (RTL) design simulation, 5X for gate-level simulation and 10X for parallel design for test (DFT) simulation, potentially saving weeks to months on project schedules.
  • Broad applicability: The simulator supports modern design styles and IEEE standards, enabling engineers to realize performance gains without recoding.
  • Easy to use: The simulator’s compilation and elaboration flow assigns the design and verification testbench code to the ideal engines and automatically selects the optimal number of cores for fast execution speed.
  • Incorporates several new patent-pending technologies to improve productivity: New features that speed overall SoC verification time include SystemVerilog testbench coverage for faster verification closure and parallel multi-core build.

“Verification is often the primary cost and schedule challenge associated with getting new, high-quality products to market,” said Dr. Anirudh Devgan, senior vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence. “The Xcelium simulator combined with JasperGold Apps, the Palladium Z1 Enterprise Emulation Platform and the Protium S1 FPGA-Based Prototyping Platform offer customers the strongest verification suite on the market.”

The new Xcelium simulator further extends the innovation within the Cadence Verification Suite and supports the company’s System Design Enablement (SDE) strategy, which enables system and semiconductor companies to create complete, differentiated end products more efficiently. The Verification Suite comprises best-in-class core engines, verification fabric technologies and solutions that increase design quality and throughput, fulfilling verification requirements for a wide variety of applications and vertical segments.

Protium S1

The Protium S1 platform provides front-end congruency with the Cadence Palladium Z1 Enterprise Emulation Platform. By using Xilinx Virtex UltraScale FPGA technology, the new Cadence platform features 6X higher design capacity and an average 2X performance improvement over the previous-generation platform. The Protium S1 platform has already been deployed by early adopters in the networking, consumer and storage markets.

Protium S1 is fully compatible with the Palladium Z1 emulator

To increase designer productivity, the Protium S1 platform offers the following benefits:

  • Ultra-fast prototype bring-up: The platform’s advanced memory modeling and implementation capabilities allow designers to reduce prototype bring-up from months to days, thus enabling them to start firmware development much earlier.
  • Ease of use and adoption: The platform shares a common compile flow with the Palladium Z1 platform, which enables up to 80 percent re-use of the existing verification environment and provides front-end congruency between the two platforms.
  • Innovative software debug capabilities: The platform offers firmware and software productivity-enhancing features including memory backdoor access, waveforms across partitions, force and release, and runtime clock control.

“The rising need for early software development with reduced overall project schedules has been the key driver for the delivery of more advanced emulation and FPGA-based prototyping platforms,” said Dr. Anirudh Devgan, senior vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence. “The Protium S1 platform offers software development teams the required hardware and software components, a fully integrated implementation flow with fast bring-up and advanced debug capabilities so they can deliver the most compelling end products, months earlier.”

The Protium S1 platform further extends the innovation within the Cadence Verification Suite and supports the company’s System Design Enablement (SDE) strategy, which enables system and semiconductor companies to create complete, differentiated end products more efficiently.

EDA in the year 2017 – Part 2

Tuesday, January 17th, 2017

Gabe Moretti, Senior Editor

The first part of this article, published last week, covered design methods and standards in EDA, together with industry predictions that affect the entire industry. This part covers automotive, design verification and FPGAs. I found it interesting that David Kelf, VP of Marketing at OneSpin Solutions, thought that machine learning will begin to penetrate the EDA industry as well. He stated: “Machine Learning hit a renaissance and is finding its way into a number of market segments. Why should design automation be any different? 2017 will be the start of machine learning to create a new breed of design automation tool, equipped with this technology and able to configure itself for specific designs and operations to perform them more efficiently. By adapting algorithms to suit the input code, many interesting things will be possible.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence, touched on an issue that is being talked about more and more: security. He noted: “In 2016, IoT bot-net attacks brought down large swaths of the Internet – the first time the security impact of IoT was felt by many. Private and nation-state attacks compromised personal/corporate/government email throughout the year.

“In 2017, we have the potential for security concerns to start a retreat from always-on social media and a growing value on private time and information. I don’t see a silver bullet for security on our horizon. Instead, I anticipate an increasing focus for products to include security managers (like their safety counterparts) on the design team and to consider security from the initial concept through the design/production cycle.”

Figure 1.  Just one of the many electronics systems found in an automobile (courtesy of Mentor)

Automotive

The automotive industry has increased its use of electronics year over year for a long time. At this point an automobile is a true intelligent system, at least as far as what the driver and passengers can see and hear: the infotainment system. Late-model cars also offer collision avoidance and stay-in-lane functions, but more is coming.

Here is what Wally Rhines thinks: “Automotive and aerospace designers have traditionally been driven by mechanical design.  Now the differentiation and capability of cars and planes is increasingly being driven by electronics.  Ask your children what features they want to see in a new car.  The answer will be in-vehicle infotainment.  If you are concerned about safety, the designers of automobiles are even more concerned.  They have to deal with new regulations like ISO 26262, as well as other capabilities, in addition to environmental requirements and the basic task of “sensor fusion” as we attach more and more visual, radar, laser and other sensors to the car.  There is no way to reliably design vehicles and aircraft without virtual simulation of electrical behavior.

“In addition, total system simulation has become a requirement. How do you know that the wire bundle will fit through the hole in the door frame? EDA tools can tell you the answer, but only after seeking out the data from the mechanical design. Wiring in a car or plane is a three-dimensional problem; EDA tools traditionally worry about two-dimensional routing problems. The world is changing. We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.”

David Kelf, VP of Marketing at OneSpin Solutions, observed: “OneSpin called it last year and I’ll do it again: automotive will be the “killer app” of 2017. With new players entering the market all the time, we will see impressive designs featured in advanced cars, which themselves will move toward a driverless future. All automotive designs currently being designed for safety will need to be built to be as secure as possible. The ISO 26262 committee is working on security as well as safety, and I predict security will feature in the standard in 2017. Tools to help predict vulnerabilities will become more important. Formal, of course, is the perfect platform for this capability. Watch for advanced security features in formal.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence noted: “In 2016, autonomous vehicle technology reached an inflection point. We started seeing more examples of private companies operating SAE 3 in America and abroad (Singapore, Pittsburgh, San Francisco).  We also saw active participation by the US and world governments to help guide tech companies in the proliferation and safety of the technology (ex. US DOT V2V/V2I standard guidelines, and creating federal ADAS guidelines to prevent state-level differences). Probably the most unique example was also the first drone delivery by a major retailer, something which was hinted at 3 years prior and seemingly just a flight of fancy then.

“Looking ahead to 2017, both the breadth and depth are expected to expand, including the first operation of SAE level 4/5 in limited use on public streets outside the US, and on private roads inside the US. Outside of ride sharing and city driving, I expect to see the increasing spread of ADAS technology to long-distance trucking and non-urban transportation. To enable this, additional investments from traditional vehicle OEMs partnering with both software and silicon companies will be needed to enable high levels of autonomous functions. To help bring these to reality, I also expect the release of new standards to guide both the functional safety and reliability of automotive semiconductors. Even though the pace of government standards can lag, for ADAS technology to reach its true potential, it will require both standards and innovation.”

FPGA

The IoT market is expected to provide a significant opportunity for the electronics industry to grow revenue and open new markets. I think the use of FPGAs in IoT devices will increase the adoption of these devices in system design.

I asked Geoff Tate, CEO of FlexLogix, his opinions on the subject.  He offered four points that he expects to become reality in 2017:

1. the first customer chip will be fabricated using embedded FPGA from an IP supplier

2. the first customer announcements will be made of customers adopting embedded FPGA from an IP supplier

3. embedded FPGAs will be proven in silicon running at 1GHz+

4. the number of customers doing chip design using embedded FPGA will go from a handful to dozens.

Zibi Zalewski, Hardware Division General Manager at Aldec, also addressed the FPGA subject.

“I believe FPGA devices are an important technology player to mention when talking about what to expect in 2017. With the growth of embedded electronics driven by the Automotive, Embedded Vision and/or IoT markets, FPGA technology becomes a core element, particularly in products that require low power and re-programmability.

Features of FPGAs such as pipelining and the ability to execute and easily scale parallel instances of the implemented function allow FPGAs to be used for more than just the traditionally understood embedded markets. FPGA computing power is exploding in High Performance Computing (HPC), where FPGA devices are used to accelerate different scientific algorithms and big data processing and to complement CPU-based data centers and clouds. We can’t talk about FPGAs these days without mentioning SoC FPGAs, which merge a microprocessor (quite often an ARM core) with reprogrammable space. Thanks to such configurations, it is possible to combine the software and hardware worlds in one device, with the benefits of both.

All those activities have led to solid growth in FPGA engineering, which is pushing further growth of FPGA development and verification tools. This includes not only typical solutions for simulation and implementation. We should also observe solid growth in tools and services that simplify the usage of FPGAs for those who don’t even know the technology, such as high-level synthesis or engineering services to port C/C++ sources into FPGA-implementable code. The demand for development environments like compilers supporting both software and hardware platforms will only grow, with the main goal focused on ease of use by a wide group of engineers who were not even considering the FPGA platform for their target application.

At the other end of the FPGA rainbow are the fast-growing, largest FPGAs, offered by both Xilinx and Intel/Altera. ASIC design emulation and prototyping will push harder and harder on the so-called big-box emulators, offering higher performance and a significantly lower price per gate and so becoming more affordable for even smaller SoC projects. This is especially true when partnered with high-quality design mapping software that handles multi-FPGA partitioning, interconnections, clocks and memories.”

Figure 2. Verification can look like a maze at times

Design Verification

There are many methods to verify a design, and companies will quite often use more than one on the same design. Each method (simulation, formal analysis, and emulation) has its strong points.

For many years, logic simulation was essentially the only verification tool, although hardware acceleration of logic simulation was also available.

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence, submitted a thorough analysis of verification issues. He wrote: “From a verification perspective, we will see further market specialization in 2017 – mobile, server, automotive (especially ADAS) and aero/defense markets will further create specific requirements for tools and flows, including ISO 26262 TCL1 documentation and support for other standards. The Internet of Things (IoT) with its specific security and low power requirements really runs across application domains.  Continuing the trend in 2016, verification flows will continue to become more application-specific in 2017, often centered on specific processor architectures. For instance, verification solutions optimized for mobile applications have different requirements than for servers and automotive applications or even aerospace and defense designs. As application-specific requirements grow stronger and stronger, this trend is likely to continue going forward, but cross-impact will also happen (like mobile and multimedia on infotainment in automotive).

Traditionally, ecosystems have been centered on processor architectures. Mobile and server are key examples, with their respective leading architectures holding the lion’s share of their respective markets. The IoT is mixing this up a little as more processor architectures can play and offer unique advantages, with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even open-source hardware architectures look like they will be very relevant, judging from the recent momentum, which eerily reminds me of the early Linux days. It’s one of the most entertaining spaces to watch in 2017 and for years to come.

Verification will become a whole lot smarter. The core engines themselves continue to compete on performance and capacity. Differentiation increasingly lies in how smart the applications running on top of the core engines are, and in how smartly the engines are used in conjunction.

For the dynamic engines in software-based simulation, the race towards increased speed and parallel execution will accelerate together with flows and methodologies for automotive safety and digital mixed-signal applications.

In the hardware emulation world, differentiation between the two basic ways of emulating – processor-based and FPGA-based – will be more and more determined by how the engines are used. Specifically, the various use models for core emulation, such as verification acceleration, low-power verification, dynamic power analysis and post-silicon validation – often driven by the ever-growing software content – will extend further, with more virtualization joining real-world connections. Yes, there will also be competition on performance, which clearly varies between processor-based and FPGA-based architectures – depending on design size and how much debug is enabled – as well as on the versatility of use models, which determines the ROI of emulation.

FPGA-based prototypes address the designer’s performance needs for software development, using the same core FPGA fabrics. Therefore, differentiation moves into the software stacks on top, while the congruency between emulation and FPGA-based prototyping, using multi-fabric compilation, allows the same design to be mapped into both emulation and FPGA-based prototyping.

All this is complemented by smart connections into formal techniques and cross-engine verification planning, debug and software-driven verification (i.e. software becoming the test bench at the SoC level). Based on standardization driven by the Portable Stimulus working group in Accellera, verification reuse between engines and cross-engine optimization will gain further importance.

Besides horizontal integration between engines—virtual prototyping, simulation, formal, emulation and FPGA-based prototyping—the vertical integration between abstraction levels will become more critical in 2017 as well. For low power specifically, activity data created from RTL execution in emulation can be connected to power information extracted from .lib technology files using gate-level representations or power estimation from RTL. This allows designers to estimate hardware-based power consumption in the context of software using deep cycles over longer timeframes that are emulated.”

Anyone who knows Frank will not be surprised by the length of the answer.

Wally Rhines, Chairman and CEO of Mentor Graphics, was less verbose. He said: “Total system simulation has become a requirement. How do you know that the wire bundle will fit through the hole in the door frame? EDA tools can tell you the answer, but only after seeking out the data from the mechanical design. Wiring in a car or plane is a three-dimensional problem; EDA tools traditionally worry about two-dimensional routing problems. The world is changing. We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.

“This will create a new market for EDA.  It will be larger than the traditional IC design market for EDA.  But it will be based upon the basic simulation, verification and analysis tools of IC design EDA.  Sometime in the near future, designers of complex systems will be able to make tradeoffs early in the design cycle by using virtual simulation.  That know-how will come from integrated circuit design.  It’s no longer feasible to build prototypes of systems and test them for design problems.  That approach is going away.  In its place will be virtual prototyping.  This will be made possible by basic EDA technology.  Next year will be a year of rapid progress in that direction.  I’m excited by the possibilities as we move into the next generation of electronic design automation.”

The increasing size of chips has made emulation a more popular tool than in the past. Lauro Rizzatti, Principal at Lauro Rizzatti LLC, is a pioneer in emulation and continues to be regarded as a leading expert in the method. He noted: “Expect new use models for hardware emulation in 2017 that will support traditional market segments such as processor, graphics, networking and storage, as well as emerging markets currently underserved by emulation: safety and security, along with automotive and IoT.

Chips will continue to be bigger and more complex, and include an ever-increasing amount of embedded software. Project groups will increasingly turn to hardware emulation because it is the only verification tool that can debug the interaction between the embedded software and the underlying hardware. It is also the only tool capable of estimating power consumption in a realistic environment, when the chip design is booting an OS and processing software apps. More to the point, hardware emulation can thoroughly test the integrity of a design after the insertion of DFT logic, since it can verify gate-level netlists of any size, a virtually impossible task for logic simulators.

Finally, its move to the data center solidifies its position as a foundational verification tool that offers a reasonable cost of ownership.”

Formal verification tools, sometimes referred to as “static analysis tools,” have seen their use increase year over year once vendors found human interface methods that did not require a highly trained user. Roger Sabbagh, VP of Application Engineering at Oski Technology, pointed out: “The world is changing at an ever-increasing pace and formal verification is one area of EDA that is leading the way. As we stand on the brink of 2017, I can only imagine what great new technologies we will experience in the coming year. Perhaps it’s having a package delivered to our house by a flying drone or riding in a driverless car or eating food created by a 3-D printer. But one thing I do know is that in the coming year, more people will have the critical features of their architectural design proven by formal verification. That’s right. System-level requirements, such as coherency, absence of deadlock, security and safety will increasingly be formally verified at the architectural design level. Traditionally, we relied on RTL verification to test these requirements, but the coverage and confidence gained at that level is insufficient. Moreover, bugs may be found very late in the design cycle where they risk generating a lot of churn. The complexity of today’s systems of systems on a chip dictates that a new approach be taken. Oski is now deploying architectural formal verification with design architects very early in the design process, before any RTL code is developed, and it’s exciting to see the benefits it brings. I’m sure we will be hearing a lot more about this in the coming year and beyond!”
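
To make the idea of architecture-level formal checks such as absence of deadlock a little more concrete, here is a minimal sketch of the kind of liveness property a formal engine can prove over all reachable states; the module and handshake signal names are hypothetical and are not taken from any particular Oski flow.

// Minimal sketch: an "absence of deadlock" style liveness check written as a
// SystemVerilog assertion. The handshake signals (req, grant) are hypothetical.
module arch_liveness_props (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic grant
);
  // Every request must eventually be granted. A formal tool can prove this for all
  // reachable states; simulation can only observe it on the traces it happens to run.
  assert property (@(posedge clk) disable iff (!rst_n)
                   req |-> s_eventually grant);
endmodule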

Finally, David Kelf, VP of Marketing at OneSpin Solutions, observed: “We will see tight integrations between simulation and formal that will drive adoption among simulation engineers in greater numbers than before. The integration will include the tightening of coverage models, joint debug, and functionality where the formal method can pick up from simulation and even emulation with key scenarios for bug hunting.”


Conclusion

The two combined articles are indeed quite long. But the EDA industry serves a multi-faceted set of customers with varying and complex requirements; to do the subject justice, length is unavoidable.

Formal, Logic Simulation, Hardware Emulation/Acceleration: Benefits and Limitations

Wednesday, July 27th, 2016

Stephen Bailey, Director of Emerging Technologies, Mentor Graphics

Verification and validation are key terms, and they are differentiated as follows: verification (specifically, hardware verification) ensures the design matches R&D’s functional specification for a module, block, subsystem or system; validation ensures the design meets the market requirements, i.e., that it will function correctly within its intended usage.

Software-based simulation remains the workhorse for functional design verification. Its advantages in this space include:

-          Cost:  SW simulators run on standard compute servers.

-          Speed of compile & turn-around-time (TAT):  When verifying the functionality of modules and blocks early in the design project, software simulation has the fastest turn-around-time for recompiling and re-running a simulation.

-          Debug productivity:  SW simulation is very flexible and powerful in debug. If a bug requires interactive debugging (perhaps due to a potential UVM testbench issue with dynamic – stack and heap memory based – objects), users can debug it efficiently and effectively in simulation. Users have very fine-grained control of the simulation – the ability to stop or pause at any time, and the ability to dynamically change the values of registers, signals, and UVM dynamic objects.

-          Verification environment capabilities: Because it is software simulation, a verification environment can easily be created that peeks and pokes into any corner of the DUT. Stimulus, including traffic generators and irritators, can be tightly orchestrated to inject activity with cycle accuracy (a minimal sketch follows this list).

-          Simulation’s broad and powerful verification and debug capabilities are why it remains the preferred engine for module and block verification (the functional specification & implementation at the “component” level).
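
As a concrete illustration of the controllability described above, here is a minimal, self-contained sketch of a plain SystemVerilog testbench that peeks into the design hierarchy and forces a value at an exact cycle; the toy DUT and its signals are invented for the example and stand in for a real block.

// Minimal sketch: the kind of cycle-accurate control and hierarchical access
// simulation gives a testbench. The DUT here is a trivial stand-in.
module toy_dut (input logic clk, input logic rst_n, output logic [3:0] count);
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) count <= '0;
    else        count <= count + 1;
endmodule

module tb;
  logic clk = 0, rst_n;
  logic [3:0] count;
  always #5 clk = ~clk;

  toy_dut u_dut (.clk, .rst_n, .count);

  initial begin
    rst_n = 0;
    repeat (2) @(posedge clk);
    rst_n = 1;

    // Peek into the hierarchy and poke a value at an exact cycle.
    @(posedge clk) $display("count = %0d", tb.u_dut.count);
    force tb.u_dut.count = 4'hF;   // inject a corner-case value
    @(posedge clk);
    release tb.u_dut.count;

    repeat (4) @(posedge clk);
    $finish;
  end
endmodule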

If software-based simulation is so wonderful, then why would anyone use anything else? Simulation’s biggest negative is performance, especially when combined with capacity (very large, as well as complex, designs). Performance, getting verification done faster, is why all the other engines are used. Historically, the hardware acceleration engines (emulation and FPGA-based prototyping) were employed late in the project cycle, when validation of the full chip in its expected environment was the objective. However, both formal and hardware acceleration are now being used for verification as well. Let’s continue with the verification objective by first exploring the advantages and disadvantages of formal engines.

-          Formal’s number one advantage is its comprehensive nature. When provided with a set of properties, a formal engine can verify, either exhaustively (for all time) or for a typically broad but bounded number of clock cycles, that the design will not violate the property (or properties). The prototypical example is verifying the functionality of a 32-bit-wide multiplier. In simulation, it would take far too many years to exhaustively check every possible legal combination of multiplicand and multiplier inputs against the expected product for that to be feasible. Formal can do it in minutes to hours (a minimal property sketch appears after these bullet points).

-          At one point, a negative for formal was that it took a PhD to define the properties and run the tool. Over the past decade, formal has come a long way in usability. Today, formal-based verification applications package properties for specific verification objectives with the application. The user simply specifies the design to verify and, if needed, provides additional data that they should already have available; the tool does the rest. There are two great examples of this approach to automating verification with formal technology:

  • CDC (Clock Domain Crossing) Verification:  CDC verification uses the formal engine to identify clock domain crossings and to assess whether the right synchronization logic is present (a minimal sketch follows these two examples). It can also create metastability models for use with simulation to ensure that no metastability across the clock domain boundary is propagated through the design. (This is a level of detail that RTL design and simulation abstract away. The metastability models add that level of detail back to the simulation at the RTL, instead of waiting for and then running extremely long full-timing, gate-level simulations.)
  • Coverage Closure:  During the course of verification, formal, simulation and hardware-accelerated verification will generate functional and code coverage data. Most organizations require full (or nearly 100%) coverage completion before signing off the RTL. But today’s designs contain highly reusable blocks that are also very configurable. Depending on the configuration, functionality may or may not be included in the design. If it isn’t included, then coverage related to that functionality will never be closed. Formal engines analyze the design in the actual configuration(s) that apply and perform a reachability analysis for any code or (synthesizable) functional coverage point that has not yet been covered. If a point can be reached, the formal tool will provide an example waveform to guide development of a test to achieve coverage. If it cannot be reached, the manager has a very high level of certainty in approving a waiver for that coverage point.
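
To give a flavor of what such a CDC app actually checks, here is a minimal sketch of a two-flop synchronizer together with the kind of stability assumption a CDC tool typically derives or asks the user to confirm; all module and signal names here are hypothetical.

// Minimal sketch: single-bit clock domain crossing through a two-flop synchronizer.
module sync2 (input logic dst_clk, input logic d, output logic q);
  logic meta;
  always_ff @(posedge dst_clk) begin
    meta <= d;    // first flop may go metastable; its output is never used combinationally
    q    <= meta; // second flop presents a settled value to the destination logic
  end
endmodule

// The kind of protocol assumption a CDC flow relies on: the crossing signal is held
// stable for at least two destination-clock samples, so a change cannot be missed.
module cdc_props (input logic dst_clk, input logic d);
  assert property (@(posedge dst_clk) !$stable(d) |=> $stable(d));
endmodule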

-          With comprehensiveness being its #1 advantage, why doesn’t everyone use and depend fully on formal verification?

  • The most basic shortcoming of formal is that you cannot simulate or emulate the design’s dynamic behavior. At its core, formal simply compares one specification (the RTL design) against another (a set of properties written by the user or incorporated into an automated application or VIP). Both are static specifications. Human beings need to witness dynamic behavior to ensure the functionality meets marketing or functional requirements. There remains no substitute for “visualizing” the dynamic behavior to avoid the GIGO (Garbage-In, Garbage-Out) problem. That is, the quality of your formal verification is directly proportional to the quality (and completeness) of your set of properties. For this reason, formal verification will always be a secondary verification engine, albeit one whose value rises year after year.
  • The second constraint on broader use of formal verification is capacity or, in the vernacular of formal verification:  State Space Explosion. Although research on formal algorithms is very active in academia and industry, formal’s capacity is directly related to the state space it must explore. Higher design complexity equals more state space. This constraint limits formal usage to module, block, and (smaller or well pruned/constrained) subsystems, and potentially chip levels (including as a tool to help isolate very difficult to debug issues).
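
To make the multiplier example given earlier concrete, here is a minimal sketch of the kind of property a formal engine can prove for every one of the 2^64 operand combinations; the port names are hypothetical and the check assumes a purely combinational multiplier whose result is valid in the same cycle as its operands.

// Minimal sketch: exhaustive functional check of a 32-bit multiplier.
// The DUT port names (a, b, product) are hypothetical placeholders.
module mult32_props (
  input logic        clk,
  input logic [31:0] a, b,
  input logic [63:0] product   // assumed combinational: valid in the same cycle as a and b
);
  // A formal engine proves this for all 2^64 operand combinations, something
  // simulation could never enumerate; a counterexample pinpoints the failing inputs.
  assert property (@(posedge clk) product == 64'(a) * 64'(b));
endmodule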

The use of hardware acceleration has a long, checkered history. Back in the “dark ages” of digital design and verification, gate-level emulation of designs had become a big market in the still-young EDA industry. Zycad and Ikos dominated the market from the late 1980s to the mid/late 1990s. What happened? Verilog and VHDL plus automated logic synthesis happened. The industry moved from the gate level to the register-transfer level as the golden design specification; from schematic-based design of gates to language-based functional specification. The jump in productivity from the move to RTL was so great that it killed the gate-level emulation market. RTL simulation was fast enough. Zycad died (at least as an emulation vendor) and Ikos was acquired after making the jump to RTL, but had to wait for design size and complexity to compel the use of hardware acceleration once again.

Now, 20 years later, it is clear to everyone in the industry that hardware acceleration is back. All 3 major vendors have hardware acceleration solutions. Furthermore, there is no new technology able to provide a similar jump in productivity as did the switch from gate-level to RTL. In fact, the drive for more speed has resulted in emulation and FPGA prototyping sub-markets within the broader market segment of hardware acceleration. Let’s look at the advantages and disadvantages of hardware acceleration (both varieties).

-          Speed:  Speed is THE compelling reason for the growth in hardware acceleration. In simulation today, the average performance (of the DUT) is perhaps 1 kHz. Emulation expectations are for roughly 1 MHz, and for FPGA prototypes 10 MHz (or at least 10x that of emulation). The ability to get thousands more verification cycles done in a given amount of time is extremely compelling. What began as the need for more speed (and effective capacity) to do full-chip, pre-silicon validation, driven by Moore’s Law and the increase in size and complexity enabled by RTL design & design reuse, continues to push into earlier phases of the verification and validation flow – AKA “shift-left.”  Let’s review a few of the key drivers for speed:

  • Design size and complexity:  We are well into the era of billion gate plus design sizes. Although design reuse addressed the challenge of design productivity, every new/different combination of reused blocks, with or without new blocks, creates a multitude (exponential number) of possible interactions that must be verified and validated.
  • Software:  This is also the era of the SoC. Even HW-compute-intensive chip applications, such as networking, have a software component to them. Software engineers are accustomed to developing on GHz-speed workstations. Speeds of 1 MHz or even tens of MHz are slow for them, but simulation speeds are completely intolerable, making early SW development or pre-silicon system validation infeasible.
  • Functional Capabilities of Blocks & Subsystems:  It can be the size of the input data / stimuli required to verify a block’s or subsystem’s functionality, the complexity of the functionality itself, or a combination of both that drives the need for huge numbers of verification cycles. Compute power is so great today that smartphones are able to record 4k video and replay it. Consider the compute power required to enable Advanced Driver Assistance Systems (ADAS) – the car of the future. ADAS requires vision and other data acquisition and processing horsepower, software systems capable of learning from mistakes (artificial intelligence), and high fault tolerance and safety. Multiple blocks in an ADAS system will require verification horsepower that would stress the hardware-accelerated performance available even today.

-          As a result of these trends which appear to have no end, hardware acceleration is shifting left and being used earlier and earlier in the verification and validation flows. The market pressure to address its historic disadvantages is tremendous.

  • Compilation time:  Compilation in hardware acceleration requires logic synthesis and implementation / mapping to the hardware that is accelerating the simulation of the design. Synthesis, placement, routing, and mapping are all compilation steps that are not required for software simulation. Various techniques are being employed to reduce the time to compile for emulation and FPGA prototype. Here, emulation has a distinct advantage over FPGA prototypes in compilation and TAT.
  • Debug productivity:  Although simulation remains available for debugging purposes, you’d be right in thinking that falling back on a (significantly) slower engine as your debug solution doesn’t sound like the theoretically best debug productivity. Users want a simulation-like debug productivity experience with their hardware acceleration engines. Again, emulation has advantages over prototyping in debug productivity. When you combine the compilation and debug advantages of emulation over prototyping, it is easy to understand why emulation is typically used earlier in the flow, when bugs in the hardware are more likely to be found and design changes are relatively frequent. FPGA prototyping is typically used as a platform to enable early SW development and, at least some system-level pre-silicon validation.
  • Verification capabilities:  While hardware acceleration engines were used primarily or solely for pre-silicon validation, they could be viewed as laboratory instruments. But as their use continues to shift to earlier in the verification and validation flow, the need for them to become 1st class verification engines grows. That is why hardware acceleration engines are now supporting:
    • UPF for power-managed designs
    • Code and, more appropriately, functional coverage
    • Virtual (non-ICE) usage modes which allow verification environments to be connected to the DUT being emulated or prototyped. While a verification environment might be equated with a UVM testbench, it is actually a far more general term, especially in the context of hardware accelerated verification. The verification environment may consist of soft models of things that exist in the environment the system will be used in (validation context). For example, a soft model of a display system or Ethernet traffic generator or a mass storage device. Soft models provide advantages including controllability, reproducibility (for debug) and easier enterprise management and exploitation of the hardware acceleration technology. It may also include a subsystem of the chip design itself. Today, it has become relatively common to connect a fast model written in software (usually C/C++) to an emulator or FPGA prototype. This is referred to as hybrid emulation or hybrid prototyping. The most common subsystem of a chip to place in a software model is the processor subsystem of an SoC. These models usually exist to enable early software development and can run at speeds equivalent to ~100 MHz. When the processor subsystem is well verified and validated, typically a reused IP subsystem, then hybrid mode can significantly increase the verification cycles of other blocks and subsystems, especially driving tests using embedded software and verifying functionality within a full chip context. Hybrid mode can rightfully be viewed as a sub-category of the virtual usage mode of hardware acceleration.
    • As with simulation and formal before it, hardware acceleration solutions are evolving targeted verification “applications” to facilitate productivity when verifying specific objectives or target markets. For example, a DFT application accelerates and facilitates the validation of test vectors and test logic which are usually added and tested at the gate-level.
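
As an illustration of the soft-model idea described in the virtual-usage bullet above, here is a minimal sketch of how a host-resident traffic generator might feed an emulated or prototyped DUT through DPI-C; the C function name, the frame size and the DUT-facing ports are all hypothetical, and a production flow would normally use the emulator vendor’s transactor library rather than hand-written code like this.

// Minimal sketch: a host-side "soft model" supplies byte traffic to the DUT.
// eth_gen_next_frame is a hypothetical C routine compiled for the host workstation.
import "DPI-C" function int eth_gen_next_frame(output byte unsigned frame [1514],
                                               output int unsigned  len);

module soft_eth_source (
  input  logic       clk,
  input  logic       rst_n,
  output logic [7:0] tx_data,
  output logic       tx_valid
);
  byte unsigned frame [1514];
  int unsigned  len, idx;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      idx      <= 0;
      tx_valid <= 1'b0;
    end else begin
      if (idx == 0)
        void'(eth_gen_next_frame(frame, len)); // pull a fresh frame from the host model
      tx_data  <= frame[idx];
      tx_valid <= 1'b1;
      idx      <= (idx + 1 >= len) ? 0 : idx + 1;
    end
  end
endmodule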

In conclusion, it may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).

Verification Choices: Formal, Simulation, Emulation

Thursday, July 21st, 2016

Gabe Moretti, Senior Editor

Lately there have been articles and panels about the best type of tools to use to verify a design. Most of the discussion has centered on the choice between simulation and emulation but, of course, formal techniques should also be considered. I did not include FPGA-based verification in this article because I consider it a choice equivalent to emulation, but at a different price point.

I invited a few representatives of EDA companies to answer questions about the topic.  The respondents are:

Steve Bailey, Director of Emerging Technologies at Mentor Graphics

Dave Kelf, Vice President of Marketing at OneSpin Solutions

Frank Schirrmeister, Senior Product Management Director at Cadence

Seena Shankar, Technical Marketing Manager at Silvaco

Vigyan Singhal, President and CEO at Oski Technology

Lauro Rizzatti, Verification Consultant

A Search for the Best Technique

I first wanted an opinion of what each technology does better.  Of course the question is ambiguous because the choice of tool, as Lauro Rizzatti points out, depends on the characteristics of the design to be verified.  “As much as I favor emulation, when design complexity does not stand in the way, simulation and formal are superior choices for design verification. Design debugging in simulation is unmatched by emulation. Not only interactive, flexible and versatile, simulation also supports four-state and timing analysis.
However, design complexity growth is here to stay, and the curve will only get more challenging in the future. And we not only have to deal with complexity measured in more transistors or gates in hardware, but also with complexity measured in more code in embedded software. Tasked to address this trend, both simulation and formal would hit the wall. This is where emulation comes in to rule the day.  Performance is not the only criterion by which to measure the viability of a verification engine.”

Vigyan Singhal wrote: “Both formal and emulation are becoming increasingly popular. Why use a chain saw (emulation) when you can use a scalpel (formal)? Every bug that is truly a block-level bug (and most bugs are) is most cost effective to discover with formal. True system-level bugs, like bandwidth or performance for representative traffic patterns, are best left for emulation.  Too often, we make the mistake of not using formal early enough in the design flow.”

Seena Shankar provided a different point of view: “Simulation gives full visibility into the RTL and testbench. Earlier in the development cycle, it is easier to fix bugs and rerun a simulation. But we are definitely gated by the number of cycles that can be run. A basic test exercising a couple of functional operations could take up to 12 hours for a design with 100 million gates.

Emulation takes longer to set up because all RTL components need to be in place before a test run can begin. The upside is that millions of operations can be run in minutes. However, debug is difficult and time consuming compared to simulation. Formal verification needs a different kind of expertise. It is only effective for smaller blocks but can really find corner-case bugs through assumptions and constraints provided to the tool.”

Steve Bailey concluded: “It may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).”

If I had my choice I would like to use formal tools to develop an executable specification as early as possible in the design, making sure that all functional characteristics of the intended product will be implemented and that the execution parameters will be respected.  I agree that the choice between simulation and emulation depends on the size of the block being verified, and I also think that hardware/software co-simulation will most often require the use of an emulation/acceleration device.

Limitations to Cooperation Among the Techniques

Since all three techniques have value in some circumstances, can designers easily move from one to another?

Frank Schirrmeister provided a very exhaustive response to the question, including a good figure.

“The following figure shows some of the connections that exist today. The limitations of cooperation between the engines are often of a less technical nature. Instead, they tend to result from the gaps between different disciplines in terms of cross knowledge between them.

Figure 1: Techniques Relationships (Courtesy of Cadence)

Some example integrations include:

-          Simulation acceleration, combining RTL simulation and emulation. The technical challenges have mostly been overcome by using transactors to connect testbenches, often written at the transaction level and running on simulation hosts, to the hardware holding the design under test (DUT) and executing at higher speed. This allows users to combine the expressiveness of simulated testbenches, to increase verification efficiency, with the speed of synthesizable DUTs in emulation.

-          At this point, we have even enabled hot-swap between simulation and emulation. For example, we can run gate-level netlists without timing in emulation at faster speeds. This allows users to reach a point of interest that would take hours or days to reach in simulation. Once the point of interest is reached, users can switch (hot swap) back into simulation, adding back the timing, and continue the gate-level timing simulation.

-          Emulation and FPGA-based prototyping can share a common front-end, such as in the Cadence System Development Suite, to allow faster bring-up using multi-fabric compilation.

-          Formal and simulation also combine nicely for assertions, X-propagation, etc., and, when assertions are synthesizable and can be mapped into emulation, formal techniques are linked even with hardware-based execution.
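
As a small illustration of that last point, here is a minimal sketch of a simple, synthesizable-style assertion that the same property source could serve in formal proof, in simulation, and, because it avoids unbounded temporal operators, in an emulator as well; the FIFO signal names are hypothetical.

// Minimal sketch: never push into a full FIFO (hypothetical signals). Formal can prove it,
// simulation checks it on every run, and a bounded property like this can map into emulation.
module fifo_props #(parameter int DEPTH = 16) (
  input logic                   clk,
  input logic                   rst_n,
  input logic                   push,
  input logic [$clog2(DEPTH):0] count
);
  assert property (@(posedge clk) disable iff (!rst_n)
                   (count == DEPTH) |-> !push)
    else $error("push into a full FIFO");
endmodule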

Vigyan Singhal noted: “Interchangeability of databases and poorly architected testbenches are limitations. There is still no unified coverage database standard enabling integration of results between formal, simulation and emulation. Often, formal or simulation testbenches are not architected for reuse, even though they almost always can be. All constraints in formal testbenches should be simulatable and emulatable; if checkers and bus functional models (BFMs) are separated in simulation, checkers can sometimes be reused in formal and in emulation.”

Dave Kelf concluded: “The real question here is: how do we describe requirements and design specs in machine-readable forms, use this information to produce a verification plan, translate them into test structures for different tools, and extract coverage information that can be checked against the verification plan? This top-down, closed-loop environment is generally accepted as ideal, but we have yet to see it realized in the industry. We are limited fundamentally by the ability to create a machine-readable specification.”

Portable Stimulus

Accellera has formed a study group to explore the possibility of developing a portable stimulus methodology. The group is very active and progress is being made in that direction. Since the group has yet to publish a first proposal, it was difficult to ask any specific questions, although I thought that a judgment on the desirability of such an effort was important.

Frank Schirrmeister wrote: “At the highest level, the portable stimulus project allows designers to create tests to verify SoC integration, including items like low-power scenarios and cache coherency. By keeping the tests as software routines executing on processors that are available in the design anyway, the stimulus becomes portable between the different dynamic engines, specifically simulation, emulation, and FPGA prototyping. The difference in usage with the same stimulus then really lies in execution speed – regressions can run on faster engines with less debug – and on debug insight once a bug is encountered.”

Dave Kelf also has a positive opinion about the effort: “Portable Stimulus is an excellent effort to abstract the key part of the UVM test structures such that they may be applied to both simulation and emulation. This is a worthy effort in the right direction, but it is just scraping the surface. The industry needs to bring assertions into this process, and consider how this stimulus may be better derived from high-level specifications.”

SystemVerilog

The SystemVerilog language is considered by some to be the best language to use for SoC development. Yet the language has limitations, according to some of the respondents.

Seena Shankar answered the question “Is SystemVerilog the best we can do for system verification?” as follows: “Sort of. SystemVerilog encapsulates the best features from the software and hardware paradigms for verification. It is a standard that is very easy to follow but may not be the best in performance. If the performance hit can be managed with a combination of SystemC/C++ or Verilog or any other verification languages, the solution might be limited in terms of portability across projects or simulators.”

Dave Kelf wrote: “One of the most misnamed languages is SystemVerilog. Possibly the only thing this language was not designed to do was any kind of system specification. The name was produced in a misguided attempt to compete or compare with SystemC, and that was clearly a mistake. Now it is possible to use SystemVerilog at the system level, but it is clear that a C derived language is far more effective.
What is required is a format that allows untimed algorithmic design with enough information for it to be synthesized, virtual platforms that provide a hardware/software test capability at an acceptable level of performance, and general system structures to be analyzed and specified. C++ is the only language close to this requirement.”

And Frank Schirrmeister observed: “SystemVerilog and technologies like universal verification methodology (UVM) work well at the IP and sub-system level, but seem to run out of steam when extended to full system-on-chip (SoC) verification. That’s where the portable stimulus project comes in, extending what is available in UVM to the SoC level and allowing vertical re-use from IP to the SoC. This approach overcomes the issues for which UVM falls short at the SoC level.”

Conclusion

Both design engineers and verification engineers are still waiting for help from EDA companies. They have to deal with differing methodologies and imperfect languages while tackling ever more complex designs. It is not surprising, then, that verification is the most expensive portion of a development project. Designers must be careful to ensure that what they write is verifiable, while verification engineers need not only to understand the requirements and architecture of the design, but also to be familiar with the characteristics of the language used by developers to describe both the architecture and the functionality of the intended product. I believe that one way to improve the situation is for both EDA companies and system companies to approach a new design not just as a piece of silicon but as a product that integrates hardware, software, mechanical, and physical characteristics. Then both development and verification plans can choose the most appropriate tools that can co-exist and provide coherent results.

A Two-Tier EDA Industry

Thursday, June 16th, 2016

Gabe Moretti, Senior Editor

Talking to Lucio Lanza, you must be open to ideas that appear strange and wrong at first sight. I had just such a talk with him during DAC. I enjoy talking to Lucio because I too have strange ideas, certainly not as powerful as his, but strange enough to keep my brain flexible.

So we were talking about the industry when suddenly Lucio said: “You know the EDA industry needs to divide itself in two: design and manufacturing are different things.”

The statement does not make much sense from a historical perspective; in fact, it is contrary to how EDA does business today. But you must think about it from today’s and the future’s point of view. The industry was born and grew under the idea that a company would want to develop its own product totally in house, growing knowledge and experience not only of its own market, but also of semiconductor capabilities. The EDA industry provides a service that replaces what companies would otherwise have to do internally when designing and developing an IC or a PCB: it provides all the required tools that would otherwise have been developed internally. But with the IoT as the prime factor for growth, dealing with the vagaries of optimizing a design for a given process is something most companies are either unprepared to do or find too costly given the sale price of the finished product. I think that a majority of IoT products will not be sensitive to a specific process’s characteristics.

The Obstacles

So why not change, as Lucio forecasts? The problem is design methodology. Unfortunately, given the design flow supported today, a team is supposed to take the design through synthesis before it can analyze the design for physical characteristics. This approach is based on the assumption that the design team is actively engaged in the layout phase of the die. But product developers should not, in general, be concerned with how the die is laid out. A designer should have the tools to predict leakage, power consumption, noise, and thermal behavior at the system level. The tools need to be accurate, but not precise. It should be possible to predict the physical behavior of the design given the characteristics of the final product and of the chosen process. Few companies producing a leading-edge product that will sell in large volume will need to be fully involved in the post-synthesis work, and the number of these companies continues to shrink in direct proportion to the cost of using the process.

EDA startups should not look at post-synthesis markets; they should target system-level design and verification. The EDA industry must start thinking in terms of the products its customers are developing, not the silicon used to implement them. A profound change in both the technological and business approach to our market is needed if we want to grow. But change is hard, almost always uncomfortable, and new problems require not just new tools but new thinking.

Software development and debug must be supported by a true hardware/software co-design and co-development system.  At present there are co-verification tools, but true co-development is still not possible, at least not within the EDA industry.

As I have said many times before, “chips don’t float,” so tier one of the new EDA must also provide packaging tools, printed circuit board (PCB) design tools, and mechanical design tools to create the product. In other words, we must develop true system-level design and not be so myopic as to believe that our goal is Electronic System Level support. The electronic part is a partial solution that does not yield a product, just a piece of a product.

The Pioneers

I know of a company that has already taken a business approach similar to what Lucio is thinking about. The company had always exhibited at DAC, but since adopting its new business approach it was not there this year. Most customers of eSilicon do not go to DAC; they go to shows and conferences that deal with their end products’ markets. The business approach of the company, as described to me by Mike Gianfagna, VP of Marketing at eSilicon, is to partner with a customer to implement a product, not a design. eSilicon provides the EDA know-how and the relationship with the chosen foundry, while the customer provides the knowledge of the end market. When the product is ready, both companies share in the revenue following a previously agreed formula. This apparently small change in the business model takes EDA out of the service business and into the full electronic industry opportunity. It also relieves companies of the burden of understanding and working the transformation of a design into silicon.

Figure 2: Idealized eSilicon Flow (Courtesy of eSilicon)

What eSilicon offers is not what Lucio has in mind, but it comes very close in most aspects, especially in its business approach to the development of a product, not just a die.

Existing Structure

Not surprisingly, there are consortia that already provide structure to help the development of a two-tier EDA industry.  The newly renamed ESDA can help define and form the new industry, while its marketing agreement with SEMICO can foster a closer discourse with the IP industry.  Accellera Systems Initiative, or simply Accellera, already specializes in design and verification issues and also focuses on IP standards, thus fitting one of the two tiers perfectly.  The SI2 consortium, on the other hand, focuses mostly on post-synthesis and fabrication issues, providing support for the second tier.  Accellera, therefore, provides standards and methodology for the first tier, SI2 for the second tier, while ESDA straddles both.

The Future

In the past, using the latest process was a demonstration that a company was not only a leader in its market but also an electronics technology leader.  This is no longer the case.  A company can develop and sell a leading product using a 90 or 65 nm process, for example, and still be considered a leader in its own market.  Most IoT products will be price sensitive, so minimizing both development and production costs will be imperative.

Having a partner that provides the know-how to transform the description of the electronic circuit into a layout ready for manufacturing will reduce development costs, since the company no longer has to employ designers solely dedicated to post-synthesis analysis, layout, and TCAD.

EDA companies that target these markets will see their market size shrink significantly but the customers’ knowledge of the requirements and technological characteristics of the tools will significantly improve.

The most significant impact will be that the revenue available to EDA will increase, since EDA companies will be able to earn revenue from every unit sold of a specific product.

Future Challenges in Design Verification and Creation

Wednesday, March 23rd, 2016

Gabe Moretti, Senior Editor

Dr. Wally Rhines, Chairman and CEO of Mentor Graphics, delivered the keynote address at the recently concluded DVCon U.S. in San Jose.  The title of the presentation was: “Design Verification Challenges: Past, Present, and Future”.  Although one must know the past and recognize the present challenges, the future ones were those that interested me the most.

But let’s start from the present.  As can be seen in Figure 1, designers today use five major techniques to verify a design.  The techniques are not integrated with each other; they act as five separate silos within the verification methodology.  The near-term goal, as explained by Wally, is to integrate the verification process.  The work of the Portable Stimulus Working Group within the Accellera Systems Initiative is addressing the problem.  The goal, according to Bill Hodges of Intel, is: “Users should not be able to tell if their job was executed on a simulator, emulator, or prototype”.

Figure 1.  Verification Silos

The present EDA development work addresses the functionality of the design at both the logical and the physical level.  But, especially with the growing introduction of Internet of Things (IoT) devices and applications, security and safety are becoming requirements, and we have not yet learned how to verify a device’s robustness in these areas.

Security

Figure 2, courtesy of Mentor Graphics, encapsulates the security problem.  The number of security breaches seems to increase with every passing day, and the financial and privacy losses are significant.

Figure 2

Chip designers must worry about malicious logic inside the chip, counterfeit chips, and side-channel attacks.  Malicious logic is normally inserted dynamically into the chip by Trojan malware; it must be detected and disabled.  The first thing designers need to do is implement countermeasures within the chip: logic that analyzes runtime activity and recognizes foreign-induced activity through a combination of hardware and firmware.  Although simulation can be used for verification, static checks that the chip performs as specified, and does not execute unspecified functions, should also be applied during the development process.  Well-formed and complete assertions can approximate a specification document for the design.
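
As a small illustration of that last point, the SystemVerilog sketch below turns one sentence of a hypothetical specification ("the key register may only be written during secure boot") into an executable check that simulation or formal tools can use to hunt for any unspecified path to the register.  The signal names secure_boot_mode and key_reg_wr_en are illustrative assumptions, not taken from any real design.

// Hypothetical security check: the crypto key register may only be written
// while the chip is in secure boot mode. The assertion makes that sentence
// of the specification executable.
module key_access_check (
  input logic clk,
  input logic rst_n,
  input logic secure_boot_mode,
  input logic key_reg_wr_en
);
  property p_key_write_only_in_secure_boot;
    @(posedge clk) disable iff (!rst_n)
      key_reg_wr_en |-> secure_boot_mode;
  endproperty

  a_key_write: assert property (p_key_write_only_in_secure_boot)
    else $error("Key register written outside secure boot mode");
endmodule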

Another security threat, the side-channel attack, is similar to the Trojan attack but differs in that it takes advantage of back doors left open, intentionally or not, by the developers.  Back doors are built into systems to deal with special security circumstances by the developers’ institution, but they can be used criminally once discovered by unauthorized third parties.  To defend against such an eventuality, designers can use hardened IP or special logic to verify authenticity.  Clearly, during development these countermeasures must be verified and weaknesses discovered.  The question to be answered is: “Is the desired path the only path possible?”

Safety

As the use of electronic systems grows at an increasing pace in all sorts of products, the reliability of such systems grows in importance.  Although many products can be replaced when they fail without serious consequences for their users, an increasing number of system failures put human safety in great jeopardy.  Dr. Rhines identified in particular systems in the automotive, medical, and aerospace industries.  Safety standards that cover electronic systems have been developed in these industries: ISO 26262 in the automotive industry, IEC 60601 in the medical field, and DO-254 in aerospace applications.  These certification standards aim to ensure that no harm will come to systems, their operators, or bystanders by verifying the functional robustness of the implementation.

Clearly, no one would want a heart pacemaker (Figure 3) that is not fail-safe to be implanted in a living organism.

Figure 3. Implementation subject to IEC 60601 requirements

The certification standards address the safe-system development process by requiring evidence that all reasonable system safety objectives are satisfied.  The goal is to avoid the risk of systematic failures or random hardware failures by establishing appropriate requirements and processes.  Before a system is certified, auditors check that each applicable requirement in the standard has been implemented and verified.  They must identify the specific tests used to verify compliance with each requirement and must also be assured that automatic requirements tracking will be available for a number of years.

Dr. Rhines presented a slide that dealt with the following question: “Is your system safe in the presence of a fault?”.

To answer the question, verification engineers must inject faults into the verification stream.  Doing this helps determine whether the response of the system matches the specification despite the presence of faults.  It also helps developers understand the effects of faults on target system behavior and assess the overall risk.  Wally noted that formal-based fault injection/verification can exhaustively verify the safety aspects of the design in the presence of faults.
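
For readers who have not seen fault injection in practice, the self-contained SystemVerilog sketch below shows the simulation flavor of the idea: a stuck-at-0 fault is forced onto a node that stands in for internal logic, and an assertion checks that a toy safety mechanism flags the fault within a bounded number of cycles.  Everything here (the node, the alarm, the 16-cycle limit) is an illustrative assumption; formal fault-injection tools perform the equivalent analysis exhaustively rather than one forced fault at a time.

module tb_fault_injection;
  logic clk = 0;
  logic rst_n;
  logic alu_bit;                 // stands in for an internal node of the design
  logic expected_bit = 1'b1;     // value the node should carry in this scenario
  logic safety_alarm;            // stands in for a safety-mechanism output

  always #5 clk = ~clk;

  // Toy "safety mechanism": flag any disagreement with the expected value.
  // A real design would use lockstep cores, ECC, parity, etc.
  assign safety_alarm = (alu_bit !== expected_bit);

  // Safety requirement as an assertion: a fault forced onto the node must
  // be flagged within 16 clock cycles.
  a_fault_detected: assert property (
    @(posedge clk) disable iff (!rst_n)
      $fell(alu_bit) |-> ##[0:16] safety_alarm);

  initial begin
    rst_n   = 1'b0;
    alu_bit = 1'b1;
    repeat (2) @(posedge clk);
    rst_n   = 1'b1;
    repeat (4) @(posedge clk);
    force alu_bit = 1'b0;        // inject a stuck-at-0 fault
    repeat (8) @(posedge clk);
    release alu_bit;             // remove the fault
    alu_bit = 1'b1;
    repeat (4) @(posedge clk);
    $finish;
  end
endmodule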

Conclusion

Dr. Rhines focused on the verification aspects during his presentation and his conclusions covered four points.

  • Despite design re-use, verification complexity continues to increase at 3-4X the rate of design creation
  • Increasing verification requirements drive new capabilities for each type of verification engine
  • Continuing verification productivity gains require EDA to abstract the verification process from the underlying engines; develop common environments, methodologies, and tools; and separate the “what” from the “how”
  • Verification for security and safety is providing another major wave of verification requirements.

I would like to point out that developments in verification alone are not enough.  What EDA really needs is to develop a system approach to the problem of developing and verifying a system.  The industry has given lip service to system design, and the tools available so far still maintain a “silo” approach to the problem.  What is really required is the ability to work at the architectural level and evaluate a number of possible solutions against a well-specified requirements document.  Formal tools provide good opportunities to approximate, if not totally implement, an executable requirements document.  Designers need to be able to evaluate a number of alternatives that include the use of mixed hardware and software implementations, analog and mixed-signal solutions, IP re-use, and electro-mechanical devices such as MEMS.

It is useless, or even dangerous, to begin development under false assumptions whose impact will be discovered, if at all, only when designers are well into the implementation stage.  The EDA industry is still focusing too much on fault identification and not enough on fault avoidance.

How to Drive a Successful IoT Application Design Project

Monday, December 14th, 2015

Mladen Nizic and Brad Griffin, Cadence Design Systems

Internet of Things (IoT) applications are changing the way we live. They are changing how we manufacture and transport goods, deliver healthcare and other services, manage energy distribution and consumption, and even how we travel and communicate. The edge node is an essential element of an IoT application, providing an interface between the digital and analog worlds. Despite the diversity of IoT applications, a typical edge node includes sensors to collect information from the outside world, some amount of processing power and memory, the ability to receive and transmit information, and the ability to control devices in the immediate vicinity. Although modest in device count compared to large systems on chips (SoCs), edge-node devices are very complex systems that integrate analog and digital functions in silicon, package, and board, are controlled by software, and must operate for many years harvesting energy or running from a coin battery.

Engineers need to design, verify, and implement these edge-node systems rapidly to meet tight market windows. To achieve aggressive timelines, they need a flow that enables system prototyping, hardware/software verification, mixed-signal chip design and manufacturing, and chip/IC package/board integration. In this article, we will focus on two critical steps in the flow that impact the design cycle and the success of an entire project: 1) simulation/verification of the system/chip and 2) signal integrity analysis in chip-package-board integration.

Simulation/Verification

Verification is the biggest design challenge today, particularly when analog functionality is involved, and IoT devices are no exception. High-performance analog, digital and mixed-signal simulation is indispensable but not sufficient and must be complemented by a model-based, metric-driven methodology. Key elements of the methodology are as follows:

Verification planning and management: Engineers develop verification plans and manage the plan execution carefully to filter out issues as early as possible. A typical IoT device operates in many different modes (standby, active sensing, recharging, data processing, transmitting/receiving, test, etc.), and the functional verification plan must verify all modes and their transitions in a well-defined sequence. Since operations are controlled by embedded software, the software is ideally verified in conjunction with the hardware. It is important to understand which tests can be performed at a higher level of abstraction and which require transistor-level simulation. For example, a high-level abstraction can verify that the software algorithm running on the processor applies the correct controls to a multiplexer selecting an analog input. However, transistor-level simulation is required to verify that a built-in A-to-D converter operates correctly over a specified temperature range.
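
To make the planning point concrete, a plan item such as "exercise every operating mode and the key mode transitions" can be captured directly as functional coverage. The SystemVerilog sketch below does this for a hypothetical mode register; the mode names, their encoding, and the one transition sequence shown are illustrative assumptions, not taken from any particular device.

// Mode and mode-transition coverage for a hypothetical IoT edge node.
module mode_coverage (
  input logic clk,
  input logic [2:0] mode   // 0:standby 1:sense 2:recharge 3:process 4:tx_rx 5:test
);
  covergroup cg_mode @(posedge clk);
    cp_mode: coverpoint mode {
      bins standby  = {3'd0};
      bins sense    = {3'd1};
      bins recharge = {3'd2};
      bins process_ = {3'd3};
      bins tx_rx    = {3'd4};
      bins test     = {3'd5};
      // One sequence the verification plan might require:
      // standby -> sense -> process -> transmit -> standby.
      bins wake_and_report = (3'd0 => 3'd1 => 3'd3 => 3'd4 => 3'd0);
    }
  endgroup

  cg_mode cov = new();   // coverage instance sampled on every clock edge
endmodule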

Behavioral modeling: Due to the complexity of IoT designs, executing the verification plan using transistor-level simulations is practically impossible and needs to be reserved for verifying specific electrical characteristics that require a high level of accuracy and correlation to silicon. For most functional verification planning, the investment in abstracting analog components using Verilog or VHDL behavioral models pays off by making verification much more efficient in thoroughly covering the entire system. Recent advancements in Real Number Modeling (RNM) using Verilog-AMS/wreal or SystemVerilog IEEE 1800 have made the simulation of analog, digital, and software components of an IoT system practical. Of course, modeling has to be done with a clear purpose as required by the verification plan, and the models must be in alignment with the specifications or transistor-level circuit in the case of a bottom-up design.
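
As an example of what such an abstraction can look like, the sketch below is a minimal SystemVerilog real-number model of an 8-bit ADC front end: the analog input travels through the digital simulator as a real value, so the whole system simulates at digital speed with no electrical solver involved. The reference voltage, resolution, and names are illustrative assumptions rather than a model of any specific converter.

// Minimal real-number model (RNM) of an 8-bit ADC front end.
module adc8_rnm (
  input  logic       clk,
  input  logic       enable,
  input  real        vin,            // analog input carried as a real number
  output logic [7:0] dout
);
  localparam real VREF = 1.2;        // assumed full-scale reference voltage
  real code;

  always @(posedge clk) begin
    if (!enable)
      dout <= '0;
    else begin
      code = (vin / VREF) * 255.0;                    // scale to 8-bit range
      if (code < 0.0)        dout <= 8'd0;            // clamp below range
      else if (code > 255.0) dout <= 8'd255;          // clamp above range
      else                   dout <= $rtoi(code + 0.5); // round to nearest
    end
  end
endmodule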

Coverage metrics: To assess the success of the verification of IoT designs, which are, by default, mixed-signal in nature, digital concepts of coverage metrics need to be extended to analog and mixed-signal—at least when it comes to functional verification. Using property specification language (PSL) or SystemVerilog assertions (SVAs) in conjunction with RNM simulations gives designers the ability to collect coverage, set pass/fail criteria, and evaluate the quality and completeness of the testbench, which can be used to drive improvement. This feedback loop is a major methodology improvement in comparison with the traditional direct test method.
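
A small sketch of that idea, assuming a real-number model of a regulated supply and illustrative limits: the assertion fails the test if the modeled supply does not settle, while the companion cover directive records when the scenario has actually been exercised, so the result can be rolled up into the coverage metrics.

// Mixed-signal check against a real-number model: after enable rises, the
// modeled supply must settle to within +/-5% of 1.0 V inside 20 cycles.
module vreg_check (
  input logic clk,
  input logic rst_n,
  input logic reg_enable,
  input real  vdd_core          // real-number model of the regulated supply
);
  function automatic bit in_band (real v);
    return (v > 0.95) && (v < 1.05);
  endfunction

  property p_settle;
    @(posedge clk) disable iff (!rst_n)
      $rose(reg_enable) |-> ##[1:20] in_band(vdd_core);
  endproperty

  a_settle: assert property (p_settle)
    else $error("vdd_core did not settle into the +/-5 percent band");
  c_settle: cover  property (p_settle);
endmodule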

Low-power verification: IoT devices must be extremely power efficient. To minimize power consumption, designers use advanced low-power techniques such as multiple power domains and supply voltages and power shutoffs, which help reduce active and leakage currents or completely turn off parts of the design when not needed. Power specifications captured in standard formats (like CPF or UPF-1801) can be used to ensure that power intent is implemented correctly. Designers should pay particular attention when it comes to managing the switching of power supplies to different power domains and handling analog/digital signal-crossing during power shutoffs. Dynamic CPF/UPF1801-aware mixed-signal simulation and static methods are becoming a standard part of verification methodology.
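
Power-aware static and dynamic tools do the heavy lifting here, but one slice of the power intent can also be expressed as a plain assertion for simulation: while a domain is shut off, its isolated outputs must hold their clamp value. The sketch below assumes hypothetical signal names (sensor_pwr_on, sensor_data_iso) and a clamp-to-0 isolation strategy; it only illustrates the idea the CPF/UPF-aware flow automates.

// Hand-written isolation-clamp check for one power domain.
module iso_clamp_check (
  input logic clk,
  input logic sensor_pwr_on,    // power switch control for the sensor domain
  input logic sensor_data_iso   // isolated output seen by the always-on logic
);
  a_clamp_low: assert property (
    @(posedge clk) !sensor_pwr_on |-> (sensor_data_iso == 1'b0))
    else $error("Isolation clamp violated while sensor domain is off");
endmodule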

Mixed-signal simulation: High-performance, tightly integrated SPICE/FastSPICE transistor-level and digital engines supporting analog behavioral languages, including RNM, are at the core of the verification flow. For example, Cadence® Virtuoso® AMS Designer can mix different levels of abstraction within one design hierarchy and understands low-power specifications, making it a simulator of choice for verifying IoT designs.

The outlined methodology is well-supported by the Cadence flow as shown in Figure 1 below.

Fig. 1. Cadence flow for an IoT design

Signal Integrity Analysis

When you first consider designing an IoT device, signal and power integrity may not be the first thing that comes to mind. The focus will likely be on how this unique device will collect input, what it will produce for output, and what kinds of bells and whistles distinguish this device from competitors. However, any modern-day system, including edge-node IoT devices, must be fast, economical, and low power.

Therefore, it is a given that signals will be switching at high rates on a system that is the lowest possible cost and consumes minimal power. Like it or not, signal and power integrity is going to become part of the design challenge at some point.

Design considerations engineers need to keep in mind include:

Power management: Most IoT devices are powered by a battery, and the need to recharge or replace that battery may make the difference between a product succeeding or failing.  The device must be designed to deliver sufficient power to all components (e.g., microcontrollers and memory) in an efficient manner while keeping the low-voltage power rails stable during operation.

The power delivery network (PDN) must be designed to take into account the current return path of switching signals and to reduce any voltage drop due to power being choked off by congestion from signal vias, mounting holes, and the various other obstacles that carve up the PDN. Maintaining stable power is a challenge. Decoupling capacitors (decaps) are used to ensure PDN stability, but space requirements and product cost create a desire to minimize their use.
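
A common first-pass sizing rule, independent of any particular tool, is to budget a target impedance for the PDN: Z_target = (supply voltage × allowed ripple) / transient current. Assuming, purely for illustration, a 1.2 V rail with a 5% ripple budget and a 0.5 A transient load step, Z_target = (1.2 × 0.05) / 0.5 = 0.12 ohms; the plane geometry and decap network then have to keep the PDN impedance below that value across the frequency range in which the load actually switches.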

The path to a successful IoT PDN design rests in utilizing analysis tools for both DC and AC analysis.  A tightly integrated design and analysis environment, as provided by the Cadence Allegro® Sigrity™ products, delivers design efficiency that saves time and engineering cost while optimizing the IoT PDN for cost and performance.

Fig. 2. Integrated side-by-side PCB design and power integrity analysis as seen in Allegro Sigrity PI Base

Memory interfaces: While sensors provide much of the input, at the heart of a typical IoT device is a microcontroller and system memory. Storing and recalling data quickly and accurately is essential to IoT functions. Dynamic RAM and some of the faster static RAM components utilize parallel bus interfaces to store and retrieve data. The data bus and the address bus provide design challenges. Simultaneous switching signals with fast-edge rates and small voltage swings create a perfect storm of opportunity for simultaneous switching noise (SSN) to impact signal quality. An IoT device used for medical assessment or a device used for military applications such as threat analysis certainly cannot afford to have unreliable data storage and retrieval.

To ensure these devices have reliable data storage and retrieval, controlled impedance and delay-tuned signal routing must be performed during design, and timing analysis must also be performed to ensure that all setup and hold conditions are met.
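
As a simplified illustration of what that timing analysis has to confirm (ignoring derating, jitter, and crosstalk for clarity), the routed interface must satisfy both a setup budget and a hold budget:

  t_clk_period >= t_co(max) + t_flight(max) + t_setup + t_clock_skew
  t_co(min) + t_flight(min) >= t_hold + t_clock_skew

Delay-tuning the data lines against the clock or strobe is what keeps the flight times matched closely enough for both inequalities to hold at the target data rate.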

The path to successful memory interface design is through a constraint-driven design environment that sets both physical and electrical constraints at the logic stage of design. As physical implementation begins, dynamic rule checking that validates length and spacing rules can ensure that data signals, clock signals, address bus signals, and various control signals are routed to meet complicated timing specifications.

However, with the miniaturized size of many IoT devices (e.g., wearable devices), memory interface signals transition from layer to layer through vias that produce impedance discontinuities. Power-aware signal integrity analysis is required to ensure the tiny timing margins are not eroded by signal ringing, overshoot, and rippling ground reference voltages.

When signal quality issues are discovered through the analysis process, a quick path to resolution through the physical implementation tools is the key to keeping predictable IoT product development schedules.

SerDes interfaces: Many IoT devices communicate to the outside world through wireless interfaces. However, some wearable devices have a physical connector that transfers collected data to a host system. Data transfers must be fast and follow a standard interface protocol such as USB. Designing an interface so that it meets electrical compliance testing becomes part of the design requirements. The USB Implementers Forum (USB-IF) offers an integrator’s list of products that meet a set of compliance tests.  While designing these high-speed interfaces (current USB specs allow transfer speeds of up to 10Gbps), simulating compliance tests is a way to make sure designs will pass the first time.

To meet compliance specifications at high data transfer rates, reflections, crosstalk, interconnect loss, and equalization must be assessed and analyzed.

For serial links, substrate and PCB vias often create the largest impedance discontinuities on the link, causing potential reflections along the channel and crosstalk between channels. It can be difficult to maintain signal quality while weaving routes through via fields and transitioning layers through signal vias, and it takes special care to balance routing density against “best practice” signal integrity. Crafting via transitions that appear virtually transparent, while routing signals through dense via fields, requires detailed extraction and simulation as these physical implementation trade-offs are refined.

At gigabit data rates, USB links are likely to utilize advanced equalization techniques such as feed-forward equalization (FFE) or continuous-time linear equalization (CTLE).  FFE and CTLE are complex signal-processing functions that are implemented within semiconductor I/Os. To simulate these functions, the algorithms are mimicked in software models and implemented within simulation tools using the Algorithmic Modeling Interface (AMI) extension to the IBIS (I/O Buffer Information Specification) standard. For USB multi-gigabit SerDes, many component vendors supply IBIS-AMI models.  For those vendors that do not, model creation software is available that uses predefined algorithms that can be customized through parameterization to match the performance of the component’s USB interface.
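
For readers unfamiliar with these terms, an FFE stage is, at its core, a short finite impulse response filter: for a three-tap example, the equalized value at bit n is y[n] = c(-1)·x[n+1] + c(0)·x[n] + c(1)·x[n-1], with the pre- and post-cursor coefficients chosen to cancel the channel's inter-symbol interference, while CTLE achieves a similar effect with a continuous-time peaking filter. An IBIS-AMI model wraps this kind of arithmetic in a standard executable interface so that a compliant channel simulator can run it; the three-tap form above is only an illustrative assumption, as real transmitters may use more taps.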

Serial links require compliance with a specific bit error rate (BER); the target BER is typically less than one error for every 10 billion bits received. Since it is not practical to simulate tens of billions of bits with traditional circuit simulation, high-capacity channel simulation has become part of any serial link analysis methodology. This approach characterizes the serial channel with an impulse response and then applies advanced methods to achieve the required throughput.

Having an analysis environment that can perform compliance testing while directly integrating with the implementation tools enables rapid tuning. With the ability to efficiently maximize performance of serial links during the design stage, IoT products can quickly be prototyped, tested at compliance meetings, and completed to meet aggressive time-to-market requirements.

Summary

With IoT devices being designed for a number of industries—consumer, medical, industrial, and military, just to name a few—each IoT device design team must consider the signal and power requirements  and recognize that signal and power integrity must become part of the design and analysis methodology. The competitive nature of this emerging industry means that time to market and rapid prototyping are essential to the success of a design team. Utilizing an integrated design and a signal/power analysis environment can provide IoT product creation with the highest probability of success.

ASIC Prototyping With FPGA

Thursday, February 12th, 2015

Zibi Zalewski, General Manager of the Hardware Products Division, Aldec

When I began my career as a verification products manager, ASIC/SoC verification was less integrated, and the separation among verification stages, tools, and engineering teams was more obvious. At that time the verification process started with simulation, especially during the development phase of the hardware, using relatively short test cases. As the design progressed and more advanced tests became necessary, the move to emulation was quite natural, especially with the availability of emulators with debugging capabilities that enabled running longer tests in shorter time and debugging issues as they arose. The last stage of this methodology was usually prototyping, when the highest execution speed was required and there was less need for debugging.  Of course, with the overhead of circuit setup, this stage took longer and was more complicated.

Today’s ASIC designs have become huge in comparison to the early days, making the verification process extremely complicated. This is why RTL simulation is now used only early in the process, mostly for single-module verification: it is simply too slow for anything larger.

The size of the ICs being developed makes even the use of FPGA prototyping boards an issue, since porting designs of 100+ million gates takes months and requires boards that include at least several programmable devices. Despite FPGAs getting bigger and bigger in terms of capacity and I/O, SoC projects are growing much faster.  In the end, even a very large prototyping board may not be sufficient.

To add further complication, parts of modern SoCs, such as processor subsystems, are developed using virtual platforms, with the ability to exchange different processor models depending on the application requirements. Verifying all of the elements within such a complicated system takes massive amounts of time and resources – engineering, software, and hardware tools. Considering design size and sophistication, even modular verification becomes a not-so-trivial task, especially during final testing and SoC firmware verification.

To reach maximum productivity and decrease development cost, the teams must integrate as early as possible so that testing can happen not only at the module level but also at the SoC level. Unfortunately, the solution is not that simple.  Let’s consider two cases.

1. SoC design with UVM testbench.

The requirement is to reuse the UVM testbench, but the design needs to run at MHz speed, with part of it connected to a physical interface running at speed.

To fulfill such requirements, the project team needs an emulator supporting SystemVerilog DPI-C and function-based SCE-MI in order to connect the UVM testbench and the DUT.  Since part of the design needs to communicate with a physical interface, the emulator must also support a special adapter module that synchronizes the emulator speed with the faster physical interface (e.g., an Ethernet port). The result is that the UVM testbench in the simulator can still be reused: the main design runs at MHz speed on the emulator, the testbench drives it through the transaction-level link, and the external interface runs at speed thanks to the speed adapter.
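
The SystemVerilog side of such a transaction-level link usually boils down to a thin DPI-C layer that the UVM driver calls instead of wiggling pins. The sketch below only shows the shape of that code under assumed names (emu_send_txn, emu_recv_rsp, do_write); a real deployment would route these calls through the emulator vendor's SCE-MI infrastructure and its C library.

module emu_bridge (input logic tb_clk);
  // Imported C functions that forward transactions to the emulated DUT.
  // The names and the C implementation behind them are assumptions.
  import "DPI-C" function void emu_send_txn(input int unsigned addr,
                                            input int unsigned data);
  import "DPI-C" function int  emu_recv_rsp(output int unsigned data);

  // The UVM driver calls tasks like these, so the same sequences run
  // whether the DUT sits in the simulator or on the emulator.
  task automatic do_write(input int unsigned addr, input int unsigned data);
    emu_send_txn(addr, data);
    @(posedge tb_clk);          // keep testbench time advancing
  endtask

  task automatic do_read(output int unsigned data);
    void'(emu_recv_rsp(data));
    @(posedge tb_clk);
  endtask
endmodule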

2. SoC design partially developed in a virtual platform and partially written in RTL. Here the requirements are to reuse the virtual platform and to synchronize it with the rest of the hardware system running at MHz speed.  This approach also assumes that part of the design has already been optimized to run in prototyping mode.

Since virtual platforms usually interface with external tools using TLM, the natural approach is to connect the platform to a transaction-level emulator equipped with a SCE-MI API that also provides the required MHz speed. To connect the part of the design optimized for prototyping, most likely running at a higher speed than the main emulation clock, a speed adapter is required, as in the case already discussed.  If the virtual platform can be connected to an emulator running at two speeds (the main emulation clock and the higher prototyping clock), then design parts already tested separately can now be tested together as one SoC, with the benefit that both the software and hardware teams are working on the same DUT.

Figure-1  Integrated verification platform for modern ASIC/SoC.

In both cases different tools are integrated (Figure 1): the testbench, simulated in an RTL simulator or in the form of a virtual platform, is connected to an emulator via the SCE-MI API, providing integration between the software and hardware tools. Next, two hardware domains – the emulator domain and the prototyping domain (or external interface) – are connected via a special speed-adapter bridge, synchronized, and implemented on the FPGA-based board(s). Together these elements create a hybrid verification platform for modern SoC/ASIC designs that lets all of the teams involved work on the same, complete source of the project.

Newer Processes Raise ESL Issues

Wednesday, August 13th, 2014

Gabe Moretti, Senior Editor

In June I wrote about how EDA changed its traditional flow in order to support advanced semiconductor manufacturing.  Although the changes are significant and meaningful, I do not think they are enough to sustain the increase in productivity required by financial demands.  What is necessary, in my opinion, is better support for system-level developers.

Leaving the solution to design and integration problems to a later stage of the development process creates more complexity since the network impacted is much larger.  Each node in the architecture is now a collection of components and primitive electronic elements that dilute and thus hide the intended functional architecture.

Front End Design Issues

Changes in the way front-end design is done are being implemented.  Anand Iyer, Calypto’s Director of Product Marketing, focused on the need to plan power at the system level.  He observed: “Addressing DFP issues needs to be done in the front-end tools, as the RTL logic structure and architecture choices determine 80% of the power. Designers need to minimize the activity/clock frequency across their designs, since this is the only metric that controls dynamic power. They can achieve this in several ways: (1) reducing activity permanently in their design; (2) reducing activity temporarily during the active mode of the design.”  Anand went on to cover the two points: “The first point requires a sequential analysis of the entire design to identify opportunities where we can save power. These opportunities need to be evaluated against possible timing and area impact. We need automation when it comes to large and complex designs. PowerPro can help designers optimize their designs for activity.”
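
One familiar RTL-level example of the first point, removing switching that does no useful work, is qualifying register updates with a genuine data-valid condition so that synthesis can map the enable onto integrated clock-gating cells. The sketch below is a generic illustration of that structure, not an example of any particular tool's output; the module and signal names are assumptions.

// Enable-qualified accumulator: when valid_in is low, the register and its
// fanout stop toggling, and synthesis can infer a clock gate from the enable.
module mac_stage #(parameter int W = 16) (
  input  logic                  clk,
  input  logic                  rst_n,
  input  logic                  valid_in,   // only then is a new product needed
  input  logic signed [W-1:0]   a, b,
  output logic signed [2*W-1:0] acc
);
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)
      acc <= '0;
    else if (valid_in)
      acc <= acc + a * b;      // no activity when no new sample arrives
  end
endmodule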

As for the other point he said: “The second issue requires understanding the interaction of hardware and software. Techniques like power gating and DVFS fall under this category.”

Anand also recognized that high-level synthesis can be used to achieve low-power designs.  Starting from C++ or SystemC, architects can produce alternative microarchitectures and see the power impact of their choices (with physically aware RTL power analysis).  This is hugely powerful for exploration, because doing it only at RTL is time consuming, and it is unrealistic to actually try multiple implementations of a complex design that way.  Plus, the RTL low-power techniques are automatically considered and implemented once you have selected the best architecture that meets your power, performance, and cost constraints.

Steve Carlson, Director of Marketing at Cadence, pointed out that about a decade ago design teams had their choice of about four active process nodes when planning their designs.  He noted: “In 2014 there are ten or more active choices for design teams to consider.  This means that the solution space for product design has become a lot richer.  It also means that design teams need a more fine-grained approach to planning and vendor/node selection.  It follows that the assumptions made during the planning process need to be tested as early and as often as possible, and with as much accuracy as possible at each stage. The power/performance and area trade-offs create end-product differentiation.  One area that can certainly be improved is the connection to trade-offs between hardware architecture and software.  Getting more accurate insight into power profiles can enable trade-offs at the architectural and micro-architectural levels.

Perhaps less obvious is the need for process accurate early physical planning (i.e., understands design rules for coloring, etc.).”

As shown in the following figure, designers have to be aware that parts of the design come from different suppliers, and thus Steve states: “It is essential for the front-end physical planning/prototyping stages of design to be process-aware to prevent costly surprises down the implementation road.”

Simulation and Verification

One of the major recent changes in IC design is the growing number of mixed-signal designs.  They present new design and verification challenges, particularly when new advanced processes are targeted for manufacturing.  On the standards development side, Accellera has responded by releasing a new version of its Verilog-AMS.  It is a mature standard, originally released in 2000, built on top of the Verilog subset of IEEE 1800-2012 SystemVerilog.  The standard defines how analog behavior interacts with event-based functionality, providing a bridge between the analog and digital worlds. To model continuous-time behavior, Verilog-AMS is defined to be applicable to both electrical and non-electrical system descriptions.  It supports conservative and signal-flow descriptions and can also be used to describe discrete (digital) systems and the resulting mixed-signal interactions.

The revised standard, Verilog-AMS 2.4, includes extensions to benefit verification, behavioral modeling and compact modeling. There are also several clarifications and over 20 errata fixes that improve the overall quality of the standard. Resources on how best to use the standard and a sample library with power and domain application examples are available from Accellera.

Scott Little, chair of the Verilog AMS WG stated: “This revision adds several features that users have been requesting for some time, such as supply sensitive connect modules, an analog event type to enable efficient electrical-to-real conversion and current checker modules.”

The standard continues to be refined and extended to meet the expanding needs of various user communities. The Verilog-AMS WG is currently exploring options to align Verilog-AMS with SystemVerilog in the form of a dot standard to IEEE 1800. In addition, work is underway to focus on new features and enhancements requested by the community to improve mixed-signal design and verification.

Clearly, another aspect of verification that has grown significantly in the past few years is the availability of verification IP modules.  Together with the new version of the UVM 1.2 (Universal Verification Methodology) standard just released by Accellera, they represent a significant increase in the verification power available to designers.

Jonah McLeod, Director of Corporate Marketing Communications at Kilopass, is also concerned about analog issues.  He said: “Accelerating SPICE has to be a major tool development of this generation of tools. The biggest problem designers face in complex SoCs is getting corner cases to converge. This can be time consuming and imprecise with current-generation tools.  Start-ups claiming Monte Carlo SPICE acceleration, like Solido Design Automation and CLK Design Automation, are attempting to solve the problem. Both promise to achieve SPICE-level accuracy on complex circuits within a couple of percentage points in a fraction of the time.”

One area of verification that is not often covered is its relationship with manufacturing test.  Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, told me: “The enormous complexity of a deep submicron (32, 28, 20, 14 nm) SoC has a profound impact on manufacturing test. Today, many test engineers treat the SoC as a black box, applying stimulus and checking results only at the chip I/O pins. Some write a few simple C tests to download into the SoC’s embedded processors and run as part of the manufacturing test process. Such simple tests do not validate the chip well, and many companies are seeing returns with defects missed by the tester. Test time limitations typically prohibit the download and run of an operating system and user applications, but clearly a better test is needed. The answer is available today: automatically generated C test cases that run on “bare metal” (no operating system) while stressing every aspect of the SoC. These run realistic user scenarios in multi-threaded, multi-processor mode within the SoC while coordinating with the I/O pins. Such test cases validate far more functionality and performance before the SoC ever leaves the factory, greatly reducing return rates while improving the customer experience.”

Digital Designers Grapple with Analog Mixed Signal Designs

Tuesday, June 10th, 2014

By John Blyler, Chief Content Officer

Today’s growth of analog and mixed signal circuits in the Internet of Things (IoT) applications raises questions about compiling C-code, running simulations, low power designs, latency and IP integration.

Often, the most valuable portion of a technical seminar is found in the question-and-answer (Q&A) session that follows the actual presentation. For me, that was true during a recent talk on the creation of mixed signal devices for smart analog and the Internet of Things (IoT) applications. The speakers included Diya Soubra, CPU Product Marketing Manager and Joel Rosenberg, Platform Marketing Director at ARM; and Mladen Nizic, Engineering Director at Cadence. What follows is my paraphrasing of the Q&A session with reference to the presentation where appropriate. – JB

Question: Is it possible to run C and assembly code on an ARM® Cortex®-M0 processor in Cadence’s Virtuoso for custom IC design? Is there a C-compiler within the tool?

Nizic: The C compiler comes from Keil®, ARM’s software development kit. The ARM DS-5 Development Studio is an Eclipse-based tool suite for the company’s processors and SoCs. Once the code is compiled, it is run together with the RTL in our (Cadence) Incisive Mixed Signal simulator. The result is a simulation of the processor driven by an instruction set, with all digital peripherals simulated in RTL or at the gate level. The analog portions of the design are simulated at the appropriate level of abstraction, i.e., SPICE transistor level, electrical behavioral Verilog-A, or a real number model. [See the mixed-signal trends section of “Moore’s Cycle, Fifth Horseman, Mixed Signals, and IP Stress.”]

You can use electrical behavioral models, like Verilog-A and VHDL-A/-AMS, to simulate the analog portions of the design. But real number models have become increasingly popular for this task. With real number models, you can model analog signals with variable amplitudes but discrete time steps, just as required by digital simulation. Simulations with a real number model representation for analog run at almost the same speed as the digital simulation and with very little penalty (in accuracy). For example, here (see Figure 1) are the results of a system simulation where we verify how quickly the Cortex-M0 would use a regulation signal to bring pressure to a specified value. It takes some 28 clock cycles. Other testbench scenarios might be explored, e.g., sending the Cortex-M0 into sleep mode if no changes in pressure are detected, or waking up the processor in a few clock cycles to stabilize the system. The point is that you can swap these real number models for electrical models in Verilog-A, or for transistor models, and redo your simulation to verify that the transistor model performs as expected.

Figure 1: The results of a Cadence simulation to verify the accuracy of a Cortex-M0 to regulate a pressure monitoring system. (Courtesy of Cadence)

Question: Can you give some examples of applications where products are incorporating analog capabilities and how they are using them?

Soubra: Everything related to motor control, power conversion, and power control is a good example of where adding a little bit of (processor) smarts next to the mixed-signal input can make a big difference. This is a clear case of how the industry is shifting toward this analog integration.

Question: What capabilities does ARM offer to support the low power requirement for mixed signal SoC design?

Rosenberg: The answer to this question has both a memory and logic component. In terms of memory, we offer the extended range register file compilers which today can go up to 256k bits. Even though the performance requirement for a given application may be relatively low, the user will want to boot from the flash into the SRAM or the register file instance. Then they will shut down the flash and execute out of the RAM as the RAM offers significantly lower active as well as stand-by power compared to executing out of flash.

On the logic side, we offer a selection of 7-, 9-, and 12-track libraries. Within each, there are three Vt options – one each for high, nominal, and lower speeds. Beyond that, we also offer power management kits that provide things like level shifters and power gating so the user can shut down inactive parts of the SoC circuit.

Question: What are the latency numbers for waking up different domains that have been put to sleep?

Soubra: The numbers that I shared during the presentation do not include any peripherals, since I have no way of knowing what peripherals will be added. In terms of who is consuming what power, the normal progression tends to be the processor, peripherals, bus, and then the flash block. The “wake-up” latency depends upon the implementation itself. You can go from tens of cycles to multiples of tens, depending upon how the clocks and phase-locked loops (PLLs) are implemented. If we shut everything down, then a few cycles are required before everything goes off and before we can restart the processor. But we are talking about tens, not hundreds, of cycles.

Question: And for the wake-up clock latency?

Soubra: Wake-up is the same thing, because when the wake-up controller says “lets go,” it has to restart all the clocks before it starts the processor. So it is exactly the same amount.

ARM Cortex-M low power technologies.

Question: What analog intellectual property (IP) components are offered by ARM and Cadence? How can designers integrate their own IP in the flow?

Nizic: At Cadence, through the acquisition of Cosmic, we have a portfolio of applicable analog and mixed signal IP, e.g., converters, sensors and the like. We support all design views that are necessary for this kind of methodology including model abstracts from real number to behavioral models. Like ARM’s physical IP, all of ours are qualified for the various foundry nodes so the process of integrating IP and silicon is fairly smooth.

Soubra: From ARM’s point of view, we are totally focused on the digital part of the SoC, including the processors, bus infrastructure components, peripherals, and memory controllers, as well as the physical IP (standard cell libraries, I/O cells, SRAM, etc.). Designers integrate the digital parts (processors, bus components, peripherals, and memory controller) during the RTL design stages. They can also add the functional simulation models of memories and I/O cells to simulations, together with models of analog components from Cadence. The actual physical IP is integrated during the various implementation stages (synthesis, placement and routing, etc.).

Question: How can designers integrate their own IP into the SoC?

Nizic: Some of the capabilities and flows that we described are actually used to create customer IP for later reuse in SoC integration. There is an IP-centric flow that can be used whether the customer’s IP is pure analog or contains a small amount of standard-cell digital. For example, the behavioral modeling capabilities help package this IP for functional simulation in full-chip verification. But getting the IP ready is only one aspect of the flow.

From a physical abstract it’s possible to characterize the IP for use in timing driven mode. This approach would allow you to physically verify the IP on the SoC for full chip verification.
