
Posts Tagged ‘Emulation’


The Verification Times are Changing

Monday, April 17th, 2017

Adnan Hamid, CEO, Breker Verification Systems

If you have been an ASIC designer for a couple of decades, you know how much your job has evolved during that time. It has changed not only in the way chips get designed, using large numbers of IP blocks, but also in the way they are verified.

Back in 1996, there were about 10,000 design starts, and the average design size was under 30,000 gates. Designs were composed of a small number of blocks, almost all developed in-house, and most were designed to integrate with an external processor, unless the chip itself was a processor. It is likely that you were using a directed test methodology.

The high-end processor of the time was the Pentium Pro, which came in at 5.5 million gates, implemented in 500-nanometer (nm) technology, and the ARM7, released in 1994, was beginning to gain some attention at one tenth the size of the Pentium Pro. Also gaining significant attention were new languages and tools that enabled pseudo-random test generation.

The design process became more efficient over that time period through the introduction of higher-level design languages and corresponding synthesis tools, but most of the gains have come from increasing amounts of reuse. Today, most designs count on more than 90% of the chip area being filled with reused design blocks, and many use tens or even a hundred different IP blocks.

One of the primary value statements of IP reuse is that the verification of those blocks is done by the IP provider and, since that design is used in multiple designs, its overall quality is likely to be higher than that of an in-house developed block. In the early days of IP, that may have been a questionable statement because many of the IP suppliers were nothing more than two people hacking away at code in a garage. However, today, most IP suppliers are trusted partners and, even though their designs are still not 100% verified, they no longer pose the largest threat to the overall success of a design.

The primary verification methodologies being used today are still the same as those that were emerging 20 years ago. The languages have been improved and standardized and the methodologies that go along with them have become highly developed. The fact remains that those methodologies were targeted at what we would consider to be a block today.

A typical SoC design team will design one or two custom blocks. These differentiate their design from others in the industry, and it is likely that those blocks will continue to use existing verification methodologies. The larger problem today is how to verify the system-level functionality of the chip. Existing methodologies are highly inefficient for this task, meaning that most design teams revert to directed test strategies at the system level.

A system-level test can be viewed as the execution of a scenario that corresponds to a typical user-level function. In a cell phone, this could be making a call while watching a video. For a smart TV, it could be watching a TV station from the antenna while streaming an Internet video in an inset window. These are the types of functions that must be proven to work before a tapeout can be considered.

For people tasked with this problem, solutions are rapidly emerging. You may have heard about a development within Accellera called the Portable Stimulus Working Group. This group is in the process of bringing together ideas from several EDA tool companies that have created tools to solve the integration verification problem. Most of them are based on the idea of graphs that define the valid data and control flows of the design. From this, they can randomly generate testcases that exercise those paths through the design. [Flowgraphs have been used in verification since 1968, in the SNAP simulator built at TRW System. Editor]

The biggest change in this methodology, compared to existing ones, is that it is not focused solely on stimulus generation. With SystemVerilog, the randomization helps generate stimulus, but the user is responsible for defining constraints, generating the necessary checkers, creating the coverage model and, in some cases, showing that coverage actually corresponds to detection of faults. With Portable Stimulus, the user creates a verification intent model, and this is a unified model for the entire act of verification. From it, stimulus and checkers can be created. Constraints and coverage are annotated directly on the graph.
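To make the idea concrete, here is a minimal sketch, in Python, of how a graph of legal flows can be walked to synthesize testcases. The smart-TV scenario graph and all node names are invented for illustration; a real Portable Stimulus tool would derive checkers, constraints and coverage from the same model rather than just paths.

```python
import random

# Hypothetical scenario graph for a smart-TV SoC: nodes are user-level actions,
# edges are the legal control/data flows a graph-based tool could traverse.
SCENARIO_GRAPH = {
    "power_on":       ["tune_antenna", "open_stream"],
    "tune_antenna":   ["decode_video", "record_to_disk"],
    "open_stream":    ["decode_video"],
    "decode_video":   ["display_main", "display_inset"],
    "record_to_disk": ["done"],
    "display_main":   ["done"],
    "display_inset":  ["done"],
}

def generate_testcase(graph, start="power_on", end="done", seed=None):
    """Randomly walk one legal path through the graph; the path is the testcase."""
    rng = random.Random(seed)
    path, node = [start], start
    while node != end:
        node = rng.choice(graph[node])  # only edges present in the graph are legal
        path.append(node)
    return path

if __name__ == "__main__":
    for s in range(3):
        print(generate_testcase(SCENARIO_GRAPH, seed=s))
```

Each generated path corresponds to one system-level scenario of the kind described above, which is why reuse across blocks and execution platforms falls out of the model almost for free.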

Figure 1: Portable stimulus enables a graph-based verification approach where users are able to generate stimuli from a graph.

Source: IBM, from DVCon India 2015 User Track Presentation

What this means is that verification is about to become very similar to design in that the user creates a high-level verification model and then has a synthesis engine generate the testbench from that model. The user will not generate low-level pieces of verification anymore and tests that are created will span multiple IP blocks and the connectivity that binds them together.

Several other advantages come from the notion of a model and a synthesis engine. How often have you struggled with the adaptation of a test originally targeted for a simulator, which now needs to be run on an emulator? How often have you been given a testbench developed for standalone verification of a block and been asked to integrate that into sub-system verification? How often have you had testbenches from a previous design that you want to adapt for a new design where only things such as interrupts or the address map have changed? Portable Stimulus addresses all of these issues because it has the notion of reuse fundamentally built into it.

Some languages were standardized before having been fully proven. That is not the case with graph-based verification. As an example, Breker has worked in this area for over a decade, and while we may have been ahead of our time, it means that several of our customers have been successfully turning out chips based on this emerging methodology.

Accellera should release the first version of the graph-based verification methodology standard by the end of 2017. If you are wondering about adopting tools today, an easy migration path will be provided from existing specification languages to those expected to be contained within the released standard.

About Adnan Hamid

Adnan Hamid is the founder and CEO of Breker and the inventor of its core technology. Under his leadership, Breker has come to be a market leader in functional verification technologies for complex systems-on-chips (SoCs), and Portable Stimulus in particular. Breker is an active Accellera member on the Portable Stimulus Working Group, taking a lead in defining the specifications of the upcoming Portable Stimulus Standard.  Breker's expertise in the automation of self-verifying testcases is setting the bar for the completeness of verification for SoCs.

EDA in the year 2017 – Part 2

Tuesday, January 17th, 2017

Gabe Moretti, Senior Editor

The first part of the article, published last week, covered design methods and standards in EDA, together with industry predictions that impacted all of our industry.  This part will cover automotive, design verification and FPGA.  I found it interesting that David Kelf, VP of Marketing at OneSpin Solutions, thought that machine learning would begin to penetrate the EDA industry as well.  He stated: “Machine Learning hit a renaissance and is finding its way into a number of market segments. Why should design automation be any different?  2017 will see machine learning start to create a new breed of design automation tool, equipped with this technology and able to configure itself for specific designs and operations to perform them more efficiently. By adapting algorithms to suit the input code, many interesting things will be possible.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence, touched on an issue that has been getting more attention recently: security.  He noted: “In 2016, IoT bot-net attacks brought down large swaths of the Internet – the first time the security impact of IoT was felt by many. Private and nation-state attacks compromised personal/corporate/government email throughout the year.

“In 2017, we have the potential for security concerns to start a retreat from always-on social media and a growing value on private time and information. I don’t see a silver bullet for security on our horizon. Instead, I anticipate an increasing focus for products to include security managers (like their safety counterparts) on the design team and to consider safety from the initial concept through the design/production cycle.”

Figure 1.  Just one of the many electronics systems found in an automobile (courtesy of Mentor)

Automotive

The automotive industry has increased its use of electronics year over year for a long time.  At this point an automobile is a true intelligent system, at least as far as what the driver and passengers can see and hear: the “infotainment system”.  Late-model cars also offer collision avoidance and stay-in-lane functions, but more is coming.

Here is what Wally Rhines thinks: “Automotive and aerospace designers have traditionally been driven by mechanical design.  Now the differentiation and capability of cars and planes is increasingly being driven by electronics.  Ask your children what features they want to see in a new car.  The answer will be in-vehicle infotainment.  If you are concerned about safety, the designers of automobiles are even more concerned.  They have to deal with new regulations like ISO 26262, as well as other capabilities, in addition to environmental requirements and the basic task of “sensor fusion” as we attach more and more visual, radar, laser and other sensors to the car.  There is no way to reliably design vehicles and aircraft without virtual simulation of electrical behavior.

In addition, total system simulation has become a requirement.  How do you know that the wire bundle will fit through the hole in the door frame?  EDA tools can tell you the answer, but only after seeking out the data from the mechanical design.  Wiring in a car or plane is a three-dimensional problem.  EDA tools traditionally worry about two-dimensional routing problems.  The world is changing.  We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.”

David Kelf, VP of Marketing at OneSpin Solutions, observed: “OneSpin called it last year and I’ll do it again –– Automotive will be the “killer app” of 2017. With new players entering the market all the time, we will see impressive designs featured in advanced cars, which themselves will move toward a driverless future.  All automotive designs currently being designed for safety will need to be built to be as secure as possible. The ISO 26262 committee is working on security as well as safety, and I predict security will feature in the standard in 2017. Tools to help predict vulnerabilities will become more important. Formal, of course, is the perfect platform for this capability. Watch for advanced security features in formal.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence, noted: “In 2016, autonomous vehicle technology reached an inflection point. We started seeing more examples of private companies operating SAE Level 3 in America and abroad (Singapore, Pittsburgh, San Francisco).  We also saw active participation by the US and world governments to help guide tech companies in the proliferation and safety of the technology (e.g., US DOT V2V/V2I standard guidelines, and creating federal ADAS guidelines to prevent state-level differences). Probably the most unique example was also the first drone delivery by a major retailer, something which was hinted at 3 years prior and seemingly just a flight of fancy then.

Looking ahead to 2017, both the breadth and depth are expected to expand, including the first operation of SAE Level 4/5 in limited use on public streets outside the US, and on private roads inside the US. Outside of ride sharing and city driving, I expect to see the increasing spread of ADAS technology to long-distance trucking and non-urban transportation. To enable this, additional investments from traditional vehicle OEMs partnering with both software and silicon companies will be needed to enable high levels of autonomous functions. To help bring these to reality, I also expect the release of new standards to guide both the functional safety and reliability of automotive semiconductors. Even though the pace of government standards can lag, for ADAS technology to reach its true potential, it will require both standards and innovation.”

FPGA

The IoT market is expected to provide a significant opportunity for the electronics industry to grow revenue and open new markets.  I think the use of FPGAs in IoT devices will increase the role of these devices in system design.

I asked Geoff Tate, CEO of FlexLogix, his opinions on the subject.  He offered four points that he expects to become reality in 2017:

1. the first customer chip will be fabricated using embedded FPGA from an IP supplier

2. the first customer announcements will be made of customers adopting embedded FPGA from an IP supplier

3. embedded FPGAs will be proven in silicon running at 1GHz+

4. the number of customers doing chip design using embedded FPGA will go from a handful to dozens.

Zibi Zalewski, Hardware Division General Manager at Aldec also addressed the FPGA subject.

“I believe FPGA devices are an important technology player to mention when talking about what to expect in 2017. With the growth of embedded electronics driven by the Automotive, Embedded Vision and/or IoT markets, FPGA technology becomes a core element, particularly in products that require low power and re-programmability.

Features of FPGAs such as pipelining and the ability to execute and easily scale parallel instances of the implemented function allow FPGAs to be used beyond the traditionally understood embedded markets. The use of FPGA computing power is exploding in High Performance Computing (HPC), where FPGA devices are used to accelerate different scientific algorithms and big data processing, and to complement CPU-based data centers and clouds. We can’t talk about FPGAs these days without mentioning SoC FPGAs, which merge a microprocessor (quite often ARM) with reprogrammable space. Thanks to such configurations, it is possible to combine the software and hardware worlds in one device with the benefits of both.

All those activities have led to solid growth in FPGA engineering, which is pushing further growth of FPGA development and verification tools. This includes not only typical solutions in simulation and implementation; we should also observe solid growth in tools and services that simplify the use of FPGAs for those who don’t even know the technology, such as high-level synthesis or engineering services to port C/C++ sources into FPGA-implementable code. The demand for development environments like compilers supporting both software and hardware platforms will only grow, with the main goal focused on ease of use by a wide group of engineers who were not even considering the FPGA platform for their target application.

At the other end of the FPGA rainbow are the fast-growing, largest FPGAs offered by both Xilinx and Intel/Altera. ASIC design emulation and prototyping will push harder and harder on the so-called big-box emulators, offering higher performance and a significantly lower price per gate and so becoming more affordable for even smaller SoC projects. This is especially true when partnered with high-quality design mapping software that handles multi-FPGA partitioning, interconnections, clocks and memories.”

Figure 2. Verification can look like a maze at times

Design Verification

There are many methods to verify a design, and companies will quite often use more than one on the same design.  Each method (simulation, formal analysis, and emulation) has its strong points.

For many years, logic simulation was the workhorse; hardware acceleration of logic simulation was available but far less widely used.

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence, submitted a thorough analysis of verification issues.  He wrote: “From a verification perspective, we will see further market specialization in 2017 – mobile, server, automotive (especially ADAS) and aero/defense markets will further create specific requirements for tools and flows, including ISO 26262 TCL1 documentation and support for other standards. The Internet of Things (IoT), with its specific security and low-power requirements, really runs across application domains.  Continuing the trend in 2016, verification flows will continue to become more application-specific in 2017, often centered on specific processor architectures. For instance, verification solutions optimized for mobile applications have different requirements than for servers and automotive applications or even aerospace and defense designs. As application-specific requirements grow stronger and stronger, this trend is likely to continue going forward, but cross-impact will also happen (like mobile and multimedia on infotainment in automotive).

Traditionally, ecosystems have been centered on processor architectures. Mobile and Server are key examples, with their respective leading architectures holding the lion’s share of their respective markets. The IoT is mixing this up a little as more processor architectures can play and offer unique advantages, with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even open-source hardware architectures look like they will be very relevant, judging from the recent momentum, which eerily reminds me of the early Linux days. It’s one of the most entertaining spaces to watch in 2017 and for years to come.

Verification will become a whole lot smarter. The core engines themselves continue to compete on performance and capacity. Differentiation moves further into how smart the applications running on top of the core engines are, and how smartly the engines are used in conjunction.

For the dynamic engines in software-based simulation, the race towards increased speed and parallel execution will accelerate together with flows and methodologies for automotive safety and digital mixed-signal applications.

In the hardware emulation world, differentiation for the two basic ways of emulating – processor-based and FPGA-based – will be more and more determined by how the engines are used. Specifically, the various use models for core emulation, such as verification acceleration, low-power verification, dynamic power analysis and post-silicon validation—often driven by the ever-growing software content—will extend further, with more virtualization joining real-world connections. Yes, there will also be competition on performance, which clearly varies between processor-based and FPGA-based architectures—depending on design size and how much debug is enabled—as well as the versatility of use models, which determines the ROI of emulation.

FPGA-based prototypes address the designer’s performance needs for software development, using the same core FPGA fabrics. Differentiation therefore moves into the software stacks on top, and into the congruency between emulation and FPGA-based prototyping, where multi-fabric compilation allows mapping the same design into both emulation and FPGA-based prototyping.

All this is complemented by smart connections into formal techniques and cross-engine verification planning, debug and software-driven verification (i.e. software becoming the test bench at the SoC level). Based on standardization driven by the Portable Stimulus working group in Accellera, verification reuse between engines and cross-engine optimization will gain further importance.

Besides horizontal integration between engines—virtual prototyping, simulation, formal, emulation and FPGA-based prototyping—the vertical integration between abstraction levels will become more critical in 2017 as well. For low power specifically, activity data created from RTL execution in emulation can be connected to power information extracted from .lib technology files using gate-level representations or power estimation from RTL. This allows designers to estimate hardware-based power consumption in the context of software, using deep cycles over the longer timeframes that are emulated.”
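As a rough illustration of the flow Schirrmeister describes, the sketch below combines per-cell-group activity (standing in for emulator toggle data) with energy and leakage numbers of the kind a .lib characterization provides. All names and values are invented; production power-analysis tools work at far finer granularity and directly from library data.

```python
# Hypothetical per-cell-group data: toggle counts collected over an emulated
# window (stand-in for emulator activity data) plus energy/leakage figures of
# the kind that .lib characterization would supply. All values are invented.
cells = [
    # (name, toggles_in_window, energy_per_toggle_joules, leakage_watts)
    ("cpu_cluster", 4.2e11, 1.5e-12, 0.020),
    ("ddr_phy",     1.1e11, 6.0e-12, 0.015),
    ("video_dec",   2.7e11, 2.0e-12, 0.010),
]
window_s = 2.0  # seconds of emulated activity, e.g. a slice of an OS boot

dynamic_w = sum(toggles * energy for _, toggles, energy, _ in cells) / window_s
leakage_w = sum(leak for *_, leak in cells)

print(f"dynamic ~ {dynamic_w:.2f} W")
print(f"leakage ~ {leakage_w:.2f} W")
print(f"total   ~ {dynamic_w + leakage_w:.2f} W")
```

The point is not the numbers but the structure: long emulated windows supply realistic activity, and the library supplies the per-toggle cost, which is exactly the vertical integration being described.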

Anyone who knows Frank will not be surprised by the length of the answer.

Wally Rhines, Chairman and CEO of Mentor Graphics, was less verbose.  He said: “Total system simulation has become a requirement.  How do you know that the wire bundle will fit through the hole in the door frame?  EDA tools can tell you the answer, but only after seeking out the data from the mechanical design.  Wiring in a car or plane is a three-dimensional problem.  EDA tools traditionally worry about two-dimensional routing problems.  The world is changing.  We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.

This will create a new market for EDA.  It will be larger than the traditional IC design market for EDA.  But it will be based upon the basic simulation, verification and analysis tools of IC design EDA.  Sometime in the near future, designers of complex systems will be able to make tradeoffs early in the design cycle by using virtual simulation.  That know-how will come from integrated circuit design.  It’s no longer feasible to build prototypes of systems and test them for design problems.  That approach is going away.  In its place will be virtual prototyping.  This will be made possible by basic EDA technology.  Next year will be a year of rapid progress in that direction.  I’m excited by the possibilities as we move into the next generation of electronic design automation.”

The increasing size of chips has made emulation a more popular tool than in the past.  Lauro Rizzatti, Principal at Lauro Rizzatti LLC, is a pioneer in emulation and continues to be thought of as a leading expert in the method.  He noted: “Expect new use models for hardware emulation in 2017 that will support traditional market segments such as processor, graphics, networking and storage, and emerging markets currently underserved by emulation –– safety and security, along with automotive and IoT.

Chips will continue to be bigger and more complex, and include an ever-increasing amount of embedded software. Project groups will increasingly turn to hardware emulation because it’s the only verification tool able to debug the interaction between the embedded software and the underlying hardware. It is also the only tool capable of estimating power consumption in a realistic environment, when the chip design is booting an OS and processing software apps. More to the point, hardware emulation can thoroughly test the integrity of a design after the insertion of DFT logic, since it can verify gate-level netlists of any size, a virtually impossible task with logic simulators.

Finally, its move to the data center solidifies its position as a foundational verification tool that offers a reasonable cost of ownership.”

Formal verification tools, sometimes referred to as “static analysis tools,” have seen their use increase year over year once vendors found human interface methods that did not require a highly trained user.  Roger Sabbagh, VP of Application Engineering at Oski Technology, pointed out: “The world is changing at an ever-increasing pace and formal verification is one area of EDA that is leading the way. As we stand on the brink of 2017, I can only imagine what great new technologies we will experience in the coming year. Perhaps it’s having a package delivered to our house by a flying drone or riding in a driverless car or eating food created by a 3-D printer. But one thing I do know is that in the coming year, more people will have the critical features of their architectural design proven by formal verification. That’s right. System-level requirements, such as coherency, absence of deadlock, security and safety will increasingly be formally verified at the architectural design level. Traditionally, we relied on RTL verification to test these requirements, but the coverage and confidence gained at that level is insufficient. Moreover, bugs may be found very late in the design cycle where they risk generating a lot of churn. The complexity of today’s systems of systems on a chip dictates that a new approach be taken. Oski is now deploying architectural formal verification with design architects very early in the design process, before any RTL code is developed, and it’s exciting to see the benefits it brings. I’m sure we will be hearing a lot more about this in the coming year and beyond!”

Finally, David Kelf, VP of Marketing at OneSpin Solutions, observed: “We will see tight integrations between simulation and formal that will drive adoption among simulation engineers in greater numbers than before. The integration will include the tightening of coverage models, joint debug, and functionality where the formal method can pick up from simulation and even emulation with key scenarios for bug hunting.”


Conclusion

The two combined articles are indeed quite long.  But the EDA industry is serving a multi-faceted set of customers with varying and complex requirements.  To do it justice, length is unavoidable.

Specialists and Generalists Needed for Verification

Friday, December 16th, 2016

Gabe Moretti, Senior Editor

Verification continues to take up a huge portion of the project schedule. Designs are getting more complex and with complexity comes what appears to be an emerging trend –– the move toward generalists and specialists. Generalists manage the verification flow and are knowledgeable about simulation and the UVM. Specialists with expertise in formal verification, portable stimulus and emulation are deployed when needed.  I talked with four specialists in the technology:

David Kelf, Vice President of Marketing, OneSpin Solutions,

Harry Foster, Chief Scientist Verification at Mentor Graphics

Lauro Rizzatti, Verification Consultant, Rizzatti LLC, and

Pranav Ashar, CTO, Real Intent

I asked each of them the following questions:

- Is this a real trend or a short-term aberration?

- If it is a real trend, how do we make complex verification tools and methodologies suitable for mainstream verification engineers?

- Are verification tools too complicated for a generalist to become an expert?

David: Electronics design has always had its share of specialists. A good argument could be made that CAD managers were specialists in the IT department, and that the notion of separate verification teams was driven by emerging specialists in testbench automation approaches. Now we are seeing something else. That is, the breakup of verification experts into specialized groups, some project based, and others that operate across different projects. With design complexity comes verification complexity. Formal verification and emulation, for example, were little-used tools and only then for the most difficult designs. That’s changed with the increase in size, complexity and functionality of modern designs.

Formal Verification, in particular, found its way into mainstream flows through “apps” where the entire use model is automated and the product is focused on specific high-value verification functions. Formal is also applied manually through the use of hand-written assertions, and this task is often left to specialist formal users, creating an apparently independent group within companies that may be applied to different projects. The emergence of these teams, while providing a valuable function, can limit the proliferation of this technology as they become the keepers of the flame, if you like. The generalist engineers come to rely on them rather than exploring the use of the technology for themselves. This, in turn, has a limiting effect on the growth of the technology and the realization of its full potential as an alternative to simulation.

Harry: It’s true, design is getting more complex. However, as an industry, we have done a remarkable job of keeping up with design, which we can measure by the growth in demand for design engineers. In fact, between 2007 and 2016 the industry has gone through about four iterations of Moore’s Law. Yet, the demand for design engineers has only grown at a 3.6 percent compounded annual growth rate.

Figure 1

During this same period, the demand for verification engineers has grown at a 10.4 percent compounded annual growth rate. In other words, verification complexity is growing at a faster rate than design complexity. This should not be too big a surprise since it is generally accepted in the academic world that design complexity grows at a Moore’s Law rate, while verification complexity grows at a much steeper rate (i.e., double exponential).
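Foster's point is easy to check with simple compounding over the nine years from 2007 to 2016. The growth rates are his; the arithmetic below is just an illustration.

```python
# Compounded growth of engineer demand over 2007-2016 (nine yearly steps),
# using the CAGR figures quoted above.
years = 2016 - 2007
design_cagr, verification_cagr = 0.036, 0.104

design_growth = (1 + design_cagr) ** years              # ~1.37x
verification_growth = (1 + verification_cagr) ** years  # ~2.44x

print(f"design engineer demand grew about       {design_growth:.2f}x")
print(f"verification engineer demand grew about {verification_growth:.2f}x")
```

Roughly 1.4x growth in design demand against almost 2.5x in verification demand over the same period is the gap Foster is describing.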

One contributing factor to growing verification complexity is the emergence of new layers of verification requirements that did not exist years ago. For example, beyond the traditional functional domain, we have added clock domains, power domains, security domains, safety requirements, software, and then obviously, overall performance requirements.

Figure 2

Each of these new layers of requirements requires specialized domain knowledge. Hence, domain expertise is now a necessity in both the design and verification communities to effectively address emerging new layers of requirements.

For verification, a one-size-fits-all approach to verification is no longer sufficient to completely verify an SoC. There is a need for specialized tools and methodologies specifically targeted at each of these new (and continually emerging) layers of requirements. Hence, in addition to domain knowledge expertise, verification process specialists are required to address growing verification complexity.

The emergence of verification specialization is not a new trend, although perhaps it has become more obvious due to growing verification complexity. For example, to address the famous floating-point bug in the 1990s, it became apparent that theorem proving and other formal technology would be necessary to fill the gap left by traditional simulation-based verification approaches. These techniques require full-time dedication that generalists are unlikely to master because their focus is spread across so many other tools and methodologies. One could make the same argument about the adoption of constrained-random, coverage-driven testbenches using UVM (requiring object-oriented programming skills, which I do not consider generalist skills), emulation, and FPGA prototyping. These technologies have become indispensable in today’s SoC verification/validation tool box, and to get the most out of the project’s investment, specialists are required.

So the question is how do we make complex tools and methodologies suitable for mainstream verification engineers? We are addressing this issue today by developing verification apps that solve a specific, narrowly focused problem and require minimal tool and methodology expertise. For example, we have seen numerous formal apps emerge that span a wide spectrum of the design process from IP development into post-silicon validation.  These apps no longer require the user to write assertions or be an expert in formal techniques. In fact, the formal engines are often hidden from the user, who then focuses on “what” they want to verify, versus the “how.” A few examples include: connectivity check, used during IP integration; register check, used to exhaustively verify control and status register behavior against its CSV or IP-XACT register specification; and security check, used to exhaustively verify that only the paths you specify can reach security- or safety-critical storage elements. Perhaps one of the best-known formal apps is clock-domain crossing (CDC) checking, which is used to identify metastability issues due to the interaction of multiple clock domains.
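To give a feel for the “what, not how” style of such apps, here is a toy sketch: the user supplies only a register description (a small CSV string below; the fields and register names are invented), and the checks are generated rather than hand-written. A real register-check app would read full CSV or IP-XACT files and discharge the checks with formal engines underneath.

```python
import csv
import io

# Toy register specification (invented registers and fields).
SPEC = """name,offset,access,reset
CTRL,0x00,RW,0x0000
STATUS,0x04,RO,0x0001
IRQ_MASK,0x08,RW,0xFFFF
"""

def generate_register_checks(spec_text):
    """Turn each row of the spec into a list of checks a tool could prove."""
    checks = []
    for row in csv.DictReader(io.StringIO(spec_text)):
        checks.append(f"{row['name']} reads back {row['reset']} after reset")
        if row["access"] == "RW":
            checks.append(f"{row['name']} write/read-back matches at offset {row['offset']}")
        elif row["access"] == "RO":
            checks.append(f"writes to {row['name']} at offset {row['offset']} have no effect")
    return checks

if __name__ == "__main__":
    for check in generate_register_checks(SPEC):
        print(check)
```

The user never states how the proofs are run; the specification itself drives what gets verified, which is the essence of the app approach Foster describes.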

Emulation is another area where we are seeing the emergence of verification apps. One example is deterministic ICE in emulation, which overcomes unpredictability in traditional ICE environments by adding 100 percent visibility and repeatability for debugging and provides access to other “virtual-based” use models. Another is DFT emulation apps that accelerate Design for Test (DFT) verification prior to tape-out to minimize the risk of catastrophic failure while significantly reducing run times when verifying designs after DFT insertion.

In summary, the need for verification specialists today is driven by two demands: (1) specialized domain knowledge driven by new layers of verification requirements, and (2) verification tool and methodology expertise. This is not a bad thing. If I had a brain aneurysm, I would prefer that my doctor has mastered the required skills in endoscopy and other brain surgery techniques versus a general practitioner with a broad set of skills. Don’t get me wrong, both are required.

Lauro: In my mind, it is a trend, but the distinction may blur its contours soon. Let’s take hardware emulation. Hardware emulation always required specialists for its deployment and, even more so, to optimize it to its fullest capacity. As they used to say, it came with a team of application engineers in the box to keep the time-to-emulation from exceeding the time-to-first-silicon. Today, hardware emulation is still a long way from being a plug-and-play verification tool, but recent developments by emulation vendors are making it easier and more accessible for generalists to use and deploy. The move from the in-circuit-emulation (ICE) mode, driven by a physical target system, to transaction-based communications, driven by a virtual testbed, gives it the status of a data center resource available to all types of verification engineers without specialist intervention. I see that as a huge step forward in the evolution of hardware emulation and its role in the design verification flow.

Pranav: The generalist vs. specialist discussion fits right into the shifting paradigm in which generic verification tools are being replaced by tools that are essentially verification solutions for specific failure modes.

The failure modes addressed in this manner are typically due to intricate phenomena that are hard to specify and model in simulation or general-purpose Assertion Based Verification (ABV), hard to resolve for a simulator or unguided ABV tool, increasingly likely to occur as SOC size and integration complexity grow, and often insidious or hard to isolate. Such failure modes are a common cause of respins and redesign, with the result that sign-off and bug-hunting for them based on solution-oriented tools has become ubiquitous in the design community.

Good examples are failures caused by untimed paths on an SOC, common sources of which are asynchronous clock-domain crossings, interacting reset domains and Static Timing Analysis (STA) exceptions. It has become common practice to address these scenarios using solution-oriented verification tools.

In the absence of recent advances by EDA companies in developing solution-oriented verification tools, SOC design houses would have been reliant on in-house design verification (DV) specialists to develop and maintain homegrown strategies for complex failure modes. In the new paradigm, the bias has shifted back toward the generalist DV engineer with the heavy lifting being done by an EDA tool. The salutary outcome of this trend for design houses is that the verification of SOCs for these complex failures is now more accessible, more automatic, more robust, and cheaper.

My Conclusions

It is hard to disagree with the comments by my interlocutors.  Everything said is true.  But I think they have been too kind and just answered the questions without objecting to their limitations.  In fact, the way to simplify verification is to improve the way circuits are designed.  What is missing from design methodology is validation of what has been implemented before it is deemed ready for verification.  Designers are so pressed for time, due to design complexity and short schedules, that they must find ways to cut corners.  They reuse whenever possible and rely on their experience to assume that the circuit does what it is supposed to do.  Unfortunately, in most cases where a bug is found during design integration, they have neglected to check that the circuit does not do what it is not supposed to do.  That is not always the fault of EDA tools.  The most glaring example is the choice by the electronics industry to use Verilog over VHDL.  VHDL is a much more robust language, with built-in checks that exclude design errors that can be made using Verilog.  But VHDL takes longer to write, and design engineers decided that schedule time took precedence over error avoidance.

The issue is always the same, no matter how simple or complex the design is: the individual self-assurance that he or she knows what he or she is doing.  The way to make designs easier to verify is to create them better.  That means that the design should be semantically correct and that the implementation of all required features be completely validated by the designers themselves before handing the design to a verification engineer.

I do not think that I have just demanded that a design engineer also be a verification engineer.  What may be required is a UDM: a Unified Design Methodology.  The industry is, maybe unconsciously, already moving in that direction in two ways: the increased use of third-party IP and the increasing volume of Design Rules from each foundry.  I can see these two trends growing stronger with each successive technology iteration: it is time to stop ignoring them.

Formal, Logic Simulation, Hardware Emulation/Acceleration: Benefits and Limitations

Wednesday, July 27th, 2016

Stephen Bailey, Director of Emerging Technologies, Mentor Graphics

Verification and validation are key terms with the following differentiation:  Verification (specifically, hardware verification) ensures the design matches R&D’s functional specification for a module, block, subsystem or system. Validation ensures the design meets the market requirements, that it will function correctly within its intended usage.

Software-based simulation remains the workhorse for functional design verification. Its advantages in this space include:

-          Cost:  SW simulators run on standard compute servers.

-          Speed of compile & turn-around-time (TAT):  When verifying the functionality of modules and blocks early in the design project, software simulation has the fastest turn-around-time for recompiling and re-running a simulation.

-          Debug productivity:  SW simulation is very flexible and powerful in debug. If a bug requires interactive debugging (perhaps due to a potential UVM testbench issue with dynamic – stack and heap memory based – objects), users can debug it efficiently and effectively in simulation. Users have very fine-grained control over the simulation – the ability to stop/pause at any time, the ability to dynamically change values of registers, signals, and UVM dynamic objects.

-          Verification environment capabilities: Because it is software simulation, a verification environment can easily be created that peeks and pokes into any corner of the DUT. Stimulus, including traffic generation and irritators, can be tightly orchestrated to inject stimulus with cycle accuracy.

-          Simulation’s broad and powerful verification and debug capabilities are why it remains the preferred engine for module and block verification (the functional specification & implementation at the “component” level).

If software-based simulation is so wonderful, then why would anyone use anything else?  Simulation’s biggest negative is performance, especially when combined with capacity (very large, as well as complex designs). Performance, getting verification done faster, is why all the other engines are used. Historically, the hardware acceleration engines (emulation and FPGA-based prototyping) were employed relatively late in the project cycle, when validation of the full chip in its expected environment was the objective. However, both formal and hardware acceleration are now being used for verification as well. Let’s continue with the verification objective by first exploring the advantages and disadvantages of formal engines.

-          Formal’s number one advantage is its comprehensive nature. When provided with a set of properties, a formal engine can verify, either exhaustively (for all of time) or for a typically broad but bounded number of clock cycles, that the design will not violate the properties. The prototypical example is verifying the functionality of a 32-bit wide multiplier. In simulation, it would take far too many years to exhaustively validate every possible legal combination of multiplicand and multiplier inputs against the expected product for it to be feasible (a back-of-the-envelope calculation follows this list). Formal can do it in minutes to hours.

-          At one point, a negative for formal was that it took a PhD to define the properties and run the tool. Over the past decade, formal has come a long way in usability. Today, formal-based verification applications package properties for specific verification objectives with the application. The user simply specifies the design to verify and, if needed, provides additional data that they should already have available; the tool does the rest. There are two great examples of this approach to automating verification with formal technology:

  • CDC (Clock Domain Crossing) Verification:  CDC verification uses the formal engine to identify clock domain crossings and to assess whether the (right) synchronization logic is present. It can also create metastability models for use with simulation to ensure no metastability across the clock domain boundary is propagated through the design. (This is a level of detail that RTL design and simulation abstract away. The metastability models add that level of detail back to the simulation at the RTL instead of waiting for and then running extremely long full-timing, gate-level simulations.)
  • Coverage Closure:  During the course of verification, formal, simulation and hardware accelerated verification will generate functional and code coverage data. Most organizations require full (or nearly 100%) coverage completion before signing-off the RTL. But, today’s designs contain highly reusable blocks that are also very configurable. Depending on the configuration, functionality may or may not be included in the design. If it isn’t included, then coverage related to that functionality will never be closed. Formal engines analyze the design, in the actual configuration(s) that apply, and do a reachability analysis for any code or (synthesizable) functional coverage point that has not yet been covered. If it can be reached, the formal tool will provide an example waveform to guide development of a test to achieve coverage. If it cannot be reached, the manager has a very high level of certainty in approving a waiver for that coverage point.

-          With comprehensiveness being its #1 advantage, why doesn’t everyone use and depend fully on formal verification?

  • The most basic shortcoming of formal is that you cannot simulate or emulate the design’s dynamic behavior. At its core, formal simply compares one specification (the RTL design) against another (a set of properties written by the user or incorporated into an automated application or VIP). Both are static specifications. Human beings need to witness dynamic behavior to ensure the functionality meets marketing or functional requirements. There remains no substitute for “visualizing” the dynamic behavior to avoid the GIGO (Garbage-In, Garbage-Out) problem. That is, the quality of your formal verification is directly proportional to the quality (and completeness) of your set of properties. For this reason, formal verification will always be a secondary verification engine, albeit one whose value rises year after year.
  • The second constraint on broader use of formal verification is capacity or, in the vernacular of formal verification:  State Space Explosion. Although research on formal algorithms is very active in academia and industry, formal’s capacity is directly related to the state space it must explore. Higher design complexity equals more state space. This constraint limits formal usage to module, block, and (smaller or well pruned/constrained) subsystems, and potentially chip levels (including as a tool to help isolate very difficult to debug issues).
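Here is the back-of-the-envelope calculation referenced above for the 32-bit multiplier example. The input space is exact; the assumed simulation throughput of one million vectors per second is an illustrative (and optimistic) figure, not a measured one.

```python
# Exhaustive input space of a 32-bit x 32-bit multiplier.
combinations = 2**32 * 2**32            # 2^64, about 1.8e19 input pairs

# Assume an optimistic one million test vectors per second in simulation.
vectors_per_second = 1_000_000
seconds = combinations / vectors_per_second
years = seconds / (3600 * 24 * 365)

print(f"{combinations:.2e} combinations -> roughly {years:,.0f} years of simulation")
```

That works out to several hundred thousand years of simulation, which is why exhaustive proof of such a block is only practical with formal engines.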

The use of hardware acceleration has a long, checkered history. Back in the “dark ages” of digital design and verification, gate-level emulation of designs had become a big market in the still-young EDA industry. Zycad and Ikos dominated the market from the late 1980s to the mid/late 1990s. What happened?  Verilog and VHDL plus automated logic synthesis happened. The industry moved from the gate to the register-transfer level of golden design specification; from schematic-based design of gates to language-based functional specification. The jump in productivity from the move to RTL was so great that it killed the gate-level emulation market. RTL simulation was fast enough. Zycad died (at least as an emulation vendor) and Ikos was acquired after making the jump to RTL, but had to wait for design size and complexity to compel the use of hardware acceleration once again.

Now, 20 years later, it is clear to everyone in the industry that hardware acceleration is back. All 3 major vendors have hardware acceleration solutions. Furthermore, there is no new technology able to provide a similar jump in productivity as did the switch from gate-level to RTL. In fact, the drive for more speed has resulted in emulation and FPGA prototyping sub-markets within the broader market segment of hardware acceleration. Let’s look at the advantages and disadvantages of hardware acceleration (both varieties).

-          Speed:  Speed is THE compelling reason for the growth in hardware acceleration. In simulation today, the average performance (of the DUT) is perhaps 1 kHz. Emulation expectations are for +/- 1 MHz and for FPGA prototypes 10 MHz (or at least 10x that of emulation). The ability to get thousands of more verification cycles done in a given amount of time is extremely compelling. What began as the need for more speed (and effective capacity) to do full chip, pre-silicon validation driven by Moore’s Law and the increase in size and complexity enabled by RTL design & design reuse, continues to push into earlier phases of the verification and validation flow – AKA “shift-left.”  Let’s review a few of the key drivers for speed:

  • Design size and complexity:  We are well into the era of billion gate plus design sizes. Although design reuse addressed the challenge of design productivity, every new/different combination of reused blocks, with or without new blocks, creates a multitude (exponential number) of possible interactions that must be verified and validated.
  • Software:  This is also the era of the SoC. Even HW compute-intensive chip applications, such as networking, have a software component to them. Software engineers are accustomed to developing on GHz-speed workstations. One MHz or even tens of MHz is slow for them, but simulation speeds are completely intolerable and infeasible for enabling early SW development or pre-silicon system validation.
  • Functional Capabilities of Blocks & Subsystems:  It can be the size of input data / stimuli required to verify a block’s or subsystem’s functionality, the complexity of the functionality itself, or a combination of both that drives the need for huge numbers of verification cycles. Compute power is so great today that smartphones are able to record 4k video and replay it. Consider the compute power required to enable Advanced Driver Assistance Systems (ADAS) – the car of the future. ADAS requires vision and other data acquisition and processing horsepower, software systems capable of learning from mistakes (artificial intelligence), and high fault tolerance and safety. Multiple blocks in an ADAS system will require verification horsepower that would stress even the hardware-accelerated performance available today.

-          As a result of these trends which appear to have no end, hardware acceleration is shifting left and being used earlier and earlier in the verification and validation flows. The market pressure to address its historic disadvantages is tremendous.

  • Compilation time:  Compilation in hardware acceleration requires logic synthesis and implementation / mapping to the hardware that is accelerating the simulation of the design. Synthesis, placement, routing, and mapping are all compilation steps that are not required for software simulation. Various techniques are being employed to reduce the time to compile for emulation and FPGA prototype. Here, emulation has a distinct advantage over FPGA prototypes in compilation and TAT.
  • Debug productivity:  Although simulation remains available for debugging purposes, you’d be right in thinking that falling back on a (significantly) slower engine as your debug solution doesn’t sound like the theoretically best debug productivity. Users want a simulation-like debug productivity experience with their hardware acceleration engines. Again, emulation has advantages over prototyping in debug productivity. When you combine the compilation and debug advantages of emulation over prototyping, it is easy to understand why emulation is typically used earlier in the flow, when bugs in the hardware are more likely to be found and design changes are relatively frequent. FPGA prototyping is typically used as a platform to enable early SW development and, at least some system-level pre-silicon validation.
  • Verification capabilities:  While hardware acceleration engines were used primarily or solely for pre-silicon validation, they could be viewed as laboratory instruments. But as their use continues to shift to earlier in the verification and validation flow, the need for them to become 1st class verification engines grows. That is why hardware acceleration engines are now supporting:
    • UPF for power-managed designs
    • Code and, more appropriately, functional coverage
    • Virtual (non-ICE) usage modes which allow verification environments to be connected to the DUT being emulated or prototyped. While a verification environment might be equated with a UVM testbench, it is actually a far more general term, especially in the context of hardware accelerated verification. The verification environment may consist of soft models of things that exist in the environment the system will be used in (validation context). For example, a soft model of a display system or Ethernet traffic generator or a mass storage device. Soft models provide advantages including controllability, reproducibility (for debug) and easier enterprise management and exploitation of the hardware acceleration technology. It may also include a subsystem of the chip design itself. Today, it has become relatively common to connect a fast model written in software (usually C/C++) to an emulator or FPGA prototype. This is referred to as hybrid emulation or hybrid prototyping. The most common subsystem of a chip to place in a software model is the processor subsystem of an SoC. These models usually exist to enable early software development and can run at speeds equivalent to ~100 MHz. When the processor subsystem is well verified and validated, typically a reused IP subsystem, then hybrid mode can significantly increase the verification cycles of other blocks and subsystems, especially driving tests using embedded software and verifying functionality within a full chip context. Hybrid mode can rightfully be viewed as a sub-category of the virtual usage mode of hardware acceleration.
    • As with simulation and formal before it, hardware acceleration solutions are evolving targeted verification “applications” to facilitate productivity when verifying specific objectives or target markets. For example, a DFT application accelerates and facilitates the validation of test vectors and test logic which are usually added and tested at the gate-level.

In conclusion, it may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).

Verification Choices: Formal, Simulation, Emulation

Thursday, July 21st, 2016

Gabe Moretti, Senior Editor

Lately there have been articles and panels about the best type of tools to use to verify a design.  Most of the discussion has been centered on the choice between simulation and emulation but, of course, formal techniques should also be considered.  I did not include FPGA-based verification in this article because I consider it a choice equivalent to emulation, but at a different price point.

I invited a few representatives of EDA companies to answer questions about the topic.  The respondents are:

Steve Bailey, Director of Emerging Technologies at Mentor Graphics,

Dave Kelf, Vice President of Marketing at OneSpin Solutions

Frank Schirrmeister, Senior Product Management Director at Cadence

Seena Shankar, Technical Marketing Manager at Silvaco

Vigyan Singhal, President and CEO at Oski Technology

Lauro Rizzatti, Verification Consultant

A Search for the Best Technique

I first wanted an opinion of what each technology does better.  Of course the question is ambiguous because the choice of tool, as Lauro Rizzatti points out, depends on the characteristics of the design to be verified.  “As much as I favor emulation, when design complexity does not stand in the way, simulation and formal are superior choices for design verification. Design debugging in simulation is unmatched by emulation. Not only interactive, flexible and versatile, simulation also supports four-state and timing analysis.
However, design complexity growth is here to stay, and the curve will only get more challenging into the future. And we not only have to deal with complexity measured in more transistors or gates in hardware, but also complexity measured in more code in embedded software. Tasked to address this trend, both simulation and formal would hit the wall. This is where emulation comes in to rule the day.  Performance is not the only criterion by which to measure the viability of a verification engine.”

Vigyan Singhal wrote: “Both formal and emulation are becoming increasingly popular. Why use a chain saw (emulation) when you can use a scalpel (formal)? Every bug that is truly a block-level bug (and most bugs are) is most cost effective to discover with formal. True system-level bugs, like bandwidth or performance for representative traffic patterns, are best left for emulation.  Too often, we make the mistake of not using formal early enough in the design flow.”

Seena Shankar provided a different point of view. “Simulation gives full visibility into the RTL and testbench. Earlier in the development cycle, it is easier to fix bugs and rerun a simulation. But we are definitely gated by the number of cycles that can be run. A basic test exercising a couple of functional operations could take up to 12 hours for a design with 100 million gates.

Emulation takes longer to setup because all RTL components need  to be in place before a test run can begin. The upside is that millions of operations can be run in minutes. However, debug is difficult and time consuming compared to simulation.  Formal verification needs a different kind of expertise. It is only effective for smaller blocks but can really find corner case bugs through assumptions and constraints provided to the tool.”

Steve Bailey concluded that: “It may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).”

If I had my choice, I would use formal tools to develop an executable specification as early as possible in the design process, making sure that all functional characteristics of the intended product will be implemented and that the execution parameters will be respected.  I agree that the choice between simulation and emulation depends on the size of the block being verified, and I also think that hardware/software co-simulation will most often require the use of an emulation/acceleration device.

Limitations to Cooperation Among the Techniques

Since all three techniques have value in some circumstance, can designers easily move from one to another?

Frank Schirrmeister provided a thorough response to the question, including a helpful figure.

“The following figure shows some of the connections that exist today. The limitations of cooperation between the engines are often of a less technical nature. Instead, they tend to result from the gaps between different disciplines in terms of cross knowledge between them.

Figure 1: Techniques Relationships (Courtesy of Cadence)

Some example integrations include:

• Simulation acceleration combining RTL simulation and emulation. The technical challenges have mostly been overcome using transactors to connect testbenches, often at the transaction level that runs on simulation hosts to the hardware holding the design under test (DUT) and executing at higher speed. This allows users to combine the expressiveness in simulated testbenches to increase verification efficiency with the speed of synthesizable DUTs in emulation.

• At this point, we even have enabled hot-swap between simulation and emulation. For example, we can run gate-level netlists without timing in emulation at faster speeds. This allows users to reach a point of interest at a later point of the execution that would take hours or days in simulation. Once the point of interest is reached, users can switch (hot swap) back into simulation, adding back the timing and continue the gate-level timing simulation.

• Emulation and FPGA-based prototyping can share a common front-end, such as in the Cadence System Development Suite, to allow faster bring-up using multi-fabric compilation.

• Formal and simulation also combine nicely for assertions, X-propagation, etc., and, when assertions are synthesizable and can be mapped into emulation, formal techniques are linked even with hardware-based execution.”
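The first integration above, connecting a transaction-level testbench to a DUT running in emulation through transactors, can be pictured with a small conceptual model. The sketch below is only a toy: the class names and pin-level protocol are invented, and a production flow would use a co-emulation interface such as Accellera's SCE-MI rather than a Python callback.

```python
# Conceptual model of a transactor: the testbench thinks in transactions,
# while the (emulated) DUT sees cycle-by-cycle pin activity.  All names are
# hypothetical stand-ins for a real co-emulation interface.

from dataclasses import dataclass

@dataclass
class WriteTxn:
    addr: int
    data: int

class BusTransactor:
    """Expands one transaction into the pin-level cycles the DUT expects."""
    def __init__(self, drive_pins):
        self.drive_pins = drive_pins   # callback standing in for the emulated DUT interface

    def send(self, txn: WriteTxn):
        # cycle 1: address phase, cycle 2: data phase, cycle 3: idle
        self.drive_pins(valid=1, addr=txn.addr, data=0)
        self.drive_pins(valid=1, addr=txn.addr, data=txn.data)
        self.drive_pins(valid=0, addr=0, data=0)

def fake_dut_pins(valid, addr, data):
    print(f"pins: valid={valid} addr={addr:#06x} data={data:#06x}")

tb = BusTransactor(fake_dut_pins)
tb.send(WriteTxn(addr=0x1000, data=0xBEEF))   # the testbench stays at the transaction level
```

The testbench issues one high-level write; the transactor, not the testbench, is responsible for the cycle-accurate pin wiggling, which is what lets the testbench run on a simulation host while the DUT executes at emulation speed.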

Vigyan Singhal noted that: “Interchangeability of databases and poorly architected testbenches are limitations. There is still no unification of coverage database standard enabling integration of results between formal, simulation and emulation. Often, formal or simulation testbenches are not architected for reuse, even though they can almost always be. All constraints in formal testbenches should be simulatable and emulatable; if checkers and bus functional models (BFMs) are separated in simulation, checkers can sometimes be used in formal and in emulation.”

Dave Kelf concluded that: “the real question here is: How do we describe requirements and design specs in machine-readable forms, use this information to produce a verification plan, translate them into test structures for different tools, and extract coverage information that can be checked against the verification plan? It is this top-down, closed-loop environment generally accepted as ideal, but we have yet to see it realized in the industry. We are limited fundamentally by the ability to create a machine-readable specification.”

Portable Stimulus

Accellera has formed a study group to explore the possibility of developing a portable stimulus methodology.  The group is very active and progress is being made in that direction.  Since the group has yet to publish a first proposal, it was difficult to ask any specific questions, although I thought that a judgement on the desirability of such an effort was important.

Frank Schirrmeister wrote: “At the highest level, the portable stimulus project allows designers to create tests to verify SoC integration, including items like low-power scenarios and cache coherency. By keeping the tests as software routines executing on processors that are available in the design anyway, the stimulus becomes portable between the different dynamic engines, specifically simulation, emulation, and FPGA prototyping. The difference in usage with the same stimulus then really lies in execution speed – regressions can run on faster engines with less debug – and on debug insight once a bug is encountered.”
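The graph-based idea behind portable stimulus can be sketched in a few lines: a scenario is a legal path through a graph of SoC-level operations, and the same randomly chosen path can then be rendered as a test for whichever engine is running. The graph, node names and rendering step below are invented for illustration and are not the Accellera syntax.

```python
import random

# A toy scenario graph: each node is an SoC-level operation, each edge a
# legal ordering.  The graph and operation names are invented.
SCENARIO_GRAPH = {
    "boot":         ["config_dma", "enable_cache"],
    "config_dma":   ["dma_transfer"],
    "enable_cache": ["dma_transfer"],
    "dma_transfer": ["check_result"],
    "check_result": [],
}

def random_scenario(start="boot"):
    """Walk the graph randomly to produce one legal operation sequence."""
    path, node = [start], start
    while SCENARIO_GRAPH[node]:
        node = random.choice(SCENARIO_GRAPH[node])
        path.append(node)
    return path

# The same abstract sequence could be emitted as a C routine running on an
# on-chip processor, a UVM sequence for simulation, or a bare-metal test for
# an FPGA prototype, which is what makes the stimulus "portable".
print(random_scenario())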

Dave Kelf also has a positive opinion about the effort. “Portable Stimulus is an excellent effort to abstract the key part of the UVM test structures such that they may be applied to both simulation and emulation. This is a worthy effort in the right direction, but it is just scraping the surface. The industry needs to bring assertions into this process, and consider how this stimulus may be better derived from high-level specifications.”

SystemVerilog

The SystemVerilog language is considered by some to be the best language to use for SoC development.  Yet, according to some of the respondents, the language has limitations.

Seena Shankar answered the question “Is SystemVerilog the best we can do for system verification?” as follows: “Sort of. SystemVerilog encapsulates the best features from software and hardware paradigms for verification. It is a standard that is very easy to follow but may not be the best in performance. If the performance hit can be managed with a combination of SystemC/C++ or Verilog or any other verification languages, the solution might be limited in terms of portability across projects or simulators.”

Dave Kelf wrote: “One of the most misnamed languages is SystemVerilog. Possibly the only thing this language was not designed to do was any kind of system specification. The name was produced in a misguided attempt to compete or compare with SystemC, and that was clearly a mistake. Now it is possible to use SystemVerilog at the system level, but it is clear that a C derived language is far more effective.
What is required is a format that allows untimed algorithmic design with enough information for it to be synthesized, virtual platforms that provide a hardware/software test capability at an acceptable level of performance, and general system structures to be analyzed and specified. C++ is the only language close to this requirement.”

And Frank Schirrmeister observed: “SystemVerilog and technologies like universal verification methodology (UVM) work well at the IP and sub-system level, but seem to run out of steam when extended to full system-on-chip (SoC) verification. That’s where the portable stimulus project comes in, extending what is available in UVM to the SoC level and allowing vertical re-use from IP to the SoC. This approach overcomes the issues for which UVM falls short at the SoC level.”

Conclusion

Both design engineers and verification engineers are still waiting for help from EDA companies.  They have to deal with differing methodologies and imperfect languages while tackling ever more complex designs.  It is not surprising, then, that verification is the most expensive portion of a development project.  Designers must be careful to ensure that what they write is verifiable, while verification engineers need to not only understand the requirements and architecture of the design, but also be familiar with the characteristics of the language used by developers to describe both the architecture and the functionality of the intended product.  I believe that one way to improve the situation is for both EDA companies and system companies to approach a new design not just as a piece of silicon but as a product that integrates hardware, software, mechanical, and physical characteristics.  Then both development and verification plans can choose the most appropriate tools that can co-exist and provide coherent results.

Future Challenges in Design Verification and Creation

Wednesday, March 23rd, 2016

Gabe Moretti, Senior Editor

Dr. Wally Rhines, Chairman and CEO of Mentor Graphics, delivered the keynote address at the recently concluded DVCon U.S. in San Jose.  The title of the presentation was: “Design Verification Challenges: Past, Present, and Future”.  Although one must know the past and recognize the present challenges, the future ones were those that interested me the most.

But let’s start from the present.  As can be seen in Figure 1, designers today use five major techniques to verify a design.  The techniques are not integrated with each other; they operate as five separate silos within the verification methodology.  The near-term goal, as explained by Wally, is to integrate the verification process.  The work of the Portable Stimulus Working Group within the Accellera Systems Initiative is addressing the problem.  The goal, according to Bill Hodges of Intel, is: “Users should not be able to tell if their job was executed on a simulator, emulator, or prototype”.

Figure 1.  Verification Silos

The present EDA development work addresses the functionality of the design, both at the logical and at the physical level.  But, especially with the growing introduction of Internet of Things (IoT) devices and applications, the issues of security and safety are becoming requirements, and we have not yet learned how to verify device robustness in these areas.

Security

Figure 2, courtesy of Mentor Graphics, encapsulates the security problem.  The number of security breaches seems to increase with every passing day, and the financial and privacy losses are significant.

Figure 2

Chip designers must worry about malicious logic inside the chip, counterfeit chips, and side-channel attacks.  Malicious logic is normally inserted dynamically into the chip using Trojan malware; such Trojans must be detected and disabled.  The first thing designers need to do is to implement countermeasures within the chip.  Designers must implement logic that analyzes runtime activity to recognize foreign-induced activity through a combination of hardware and firmware.  Although simulation can be used for verification, static tests to determine that the chip performs as specified and does not execute unspecified functions should be used during the development process.  Well-formed and complete assertions can approximate a specification document for the design.
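One concrete form such runtime countermeasures can take is a monitor that compares on-chip traffic against the behavior the specification allows. The sketch below is a deliberately simplified software model with an invented address map and transaction format; in silicon the check would be implemented in hardware and firmware, and in verification it would typically be expressed as assertions.

```python
# Simplified model of a runtime activity monitor: flag any bus access that
# falls outside the ranges the specification allows.  The address map and
# transaction format are invented for illustration.

ALLOWED_RANGES = [
    (0x0000_0000, 0x0FFF_FFFF),   # DRAM
    (0x4000_0000, 0x4000_FFFF),   # peripheral registers
]

def is_specified(addr: int) -> bool:
    return any(lo <= addr <= hi for lo, hi in ALLOWED_RANGES)

def monitor(transactions):
    """Yield every access that the specification does not cover."""
    for master, addr in transactions:
        if not is_specified(addr):
            yield (master, addr)

traffic = [("cpu0", 0x0000_1000), ("dma", 0x4000_0010), ("???", 0xDEAD_0000)]
for master, addr in monitor(traffic):
    print(f"unspecified access by {master} at {addr:#010x}")
```

The same check, phrased as an assertion over a specified address map, is one way “well-formed and complete assertions” can stand in for a specification document during verification.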

Another security threat, the “side-channel attack,” is similar to the Trojan attack but differs in that it takes advantage of back doors left open, either intentionally or not, by the developers.  Back doors are built into systems to deal with special security circumstances by the developers’ institution, but can be used criminally when discovered by unauthorized third parties.  To defend against such an eventuality, designers can use hardened IP or special logic to verify authenticity.  Clearly, during development these countermeasures must be verified and weaknesses discovered.  The question to be answered is: “Is the desired path the only path possible?”

Safety

As the use of electronic systems grows at an increasing pace in all sorts of products, the reliability of such systems grows in importance.  Although many products can be replaced when they fail without serious consequences for the users, an increasing number of system failures puts the safety of human beings in jeopardy.  Dr. Rhines identified in particular systems in the automotive, medical, and aerospace industries.  Safety standards have been developed in these industries that cover electronic systems: specifically, ISO 26262 in the automotive industry, IEC 60601 in the medical field, and DO-254 in aerospace applications.  These certification standards aim to ensure that no harm will occur to systems, their operators, or bystanders by verifying the functional robustness of the implementation.

Clearly no one would want a heart pacemaker (Figure 3) that is not fail-safe to be implanted in a living organism.

Figure 3. Implementation subject to IEC 60601 requirements

The certification standards address the safe system development process by requiring evidence that all reasonable system safety objectives are satisfied.  The goal is to avoid the risk of systematic failures or random hardware failures by establishing appropriate requirements and processes.  Before a system is certified, auditors check that each applicable requirement in the standard has been implemented and verified.  They must identify the specific tests used to verify compliance with each requirement and must also be assured that automatic requirements tracking is available for a number of years.

Dr. Rhines presented a slide that dealt with the following question: “Is your system safe in the presence of a fault?”

To answer the question, verification engineers must inject faults into the verification stream.  Doing this helps determine whether the response of the system matches the specification despite the presence of faults.  It also helps developers understand the effects of faults on target system behavior, and it assesses the overall risk.  Wally noted that formal-based fault injection/verification can exhaustively verify the safety aspects of the design in the presence of faults.
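A toy example shows the mechanics of such a fault campaign: corrupt one bit of a protected value and check whether the safety mechanism, here a simple parity bit, still catches the corruption. The model, fault list and checker are invented stand-ins; a real ISO 26262 flow would run the campaign on the RTL with a fault simulator or a formal tool.

```python
# Toy fault-injection campaign: flip one bit of a parity-protected register
# and check that the safety mechanism detects the corruption.  Everything
# here is a stand-in for an RTL-level fault campaign.
from typing import Optional

def parity(value: int) -> int:
    return bin(value).count("1") & 1

def run_with_fault(data: int, fault_bit: Optional[int]):
    stored, stored_parity = data, parity(data)    # golden write
    if fault_bit is not None:
        stored ^= 1 << fault_bit                  # inject a single-bit flip
    detected = parity(stored) != stored_parity    # does the safety checker fire?
    corrupted = stored != data
    return corrupted, detected

for fault in [None] + list(range(8)):             # fault-free run plus 8 single-bit faults
    corrupted, detected = run_with_fault(0x5A, fault)
    dangerous = corrupted and not detected        # undetected corruption is the unsafe case
    print(f"fault={fault}: corrupted={corrupted} detected={detected} dangerous={dangerous}")
```

The interesting outcome of any such campaign is the “dangerous” bucket: faults that corrupt behavior without being detected, which is exactly what the formal-based exhaustive approach Wally described is meant to rule out.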

Conclusion

Dr. Rhines focused on the verification aspects during his presentation and his conclusions covered four points.

  • Despite design re-use, verification complexity continues to increase at 3-4X the rate of design creation
  • Increasing verification requirements drive new capabilities for each type of verification engine
  • Continuing verification productivity gains require EDA to abstract the verification process from the underlying engines, develop common environments, methodologies and tools, and separate the “what” from the “how”
  • Verification for security and safety is providing another major wave of verification requirements.

I would like to point out that developments in verification alone are not enough.  What EDA really needs is to develop a system approach to the problem of developing and verifying a system.  The industry has given lip service to system design, and the tools available so far still maintain a “silos” approach to the problem.  What is really required is the ability to work at the architectural level and evaluate a number of possible solutions against a well-specified requirements document.  Formal tools provide good opportunities to approximate, if not totally implement, an executable requirements document.  Designers need to be able to evaluate a number of alternatives that include the use of mixed hardware and software implementations, analog and mixed-signal solutions, IP re-use, and electro-mechanical devices, such as MEMS.

It is useless or even dangerous to begin development under false assumptions whose impact will be found, if ever, once designers are well into the implementation stage.  The EDA industry is still focusing too much on fault identification and not enough on fault avoidance.

Technology Implications for 2016

Monday, February 1st, 2016

Gabe Moretti, Senior Editor

Although it is logical to expect that all sectors of the EDA industry will see improvements in 2016, some sectors will be more active as they are more directly connected with the market forces that fuel the consumer electronics market.

Verification

Michael Sanie, Senior Director of Verification Marketing at Synopsys, points to changes in the requirements for EDA tools:

“With the rise of the Internet of Things (IoT), Web 2.0 applications and social media comes the demand for devices that are smaller, faster and consume lower power, despite being equipped with increasing amounts of software content. As a result, SoC designs have grown tremendously in complexity. In addition, advanced verification teams are now faced with the challenge of not only reducing functional bugs, but also accelerating both software bring-up and time to market. The process of finding and fixing functional bugs and performing software bring-up involves intricate verification flows including virtual platforms, static and formal verification, simulation, emulation and finally, FPGA-based prototyping.  Up until very recently, each step in the verification flow was isolated, time-consuming and tedious, in addition to requiring several disjoint technologies and methodologies.

In 2016, the industry will continue to strive towards greater levels of verification productivity and early software bring-up.  This will be achieved through the introduction of larger, more unified platform solutions that feature a continuum of technologies enabling faster engines, native integrations and unified compile, debug, coverage and verification IP.  With this continuum of technologies being integrated into a unified platform solution, each step in the verification flow is further streamlined, and little time is spent in transitioning between steps. The rise of such platforms will continue to enable further dramatic increases in SoC verification productivity and earlier software bring-up.”

Semiconductor Processes

The persistent effort by EDA companies to follow the predictions of Gordon Moore, commonly known as Moore’s Law, is continuing in spite of the ever-growing optical challenges that the present lithography process faces.

Vassilios Gerousis, Distinguished Engineer, Cadence points out that: “While few expect 10nm production, we will definitely see 10nm test chip products this year. Some will even hit production timelines and become actual product designs. At the same time, we will see more products go into production at the 14nm and 16nm process nodes. Designers are definitely migrating from 28nm, and even skipping over 20nm.”

Zhihong Liu, Executive Chairman, ProPlus Design Solutions also thinks that advanced process nodes will be utilized this year.  “In 2016, we’ll see more designs at advanced process technologies such as FinFET at 16/14nm, and even trial projects at 10nm and 7nm. It becomes necessary for the EDA community to develop one common tool platform for process development, CAD and circuit design to help designers evaluate, select and implement new processes. However, such tool platforms did not exist before. An ongoing concern within the transistor-level design community is EDA tools such as FastSPICE simulators for verification and signoff that are not as accurate or reliable as they should be. It’s becoming a critical need as project teams move to advanced nodes and larger scale designs that require higher accuracy and incur greater risk and higher costs to fabricate the chip.”

3DIC and Thermal

Michael Buehler-Garcia points to the increased use of 3D-ICs in design.  “3D IC packaging has historically been the domain of packaging and OSAT’s. Newer offerings are driving 3D implementation from the chip design perspective. With this change, chip design techniques are being used to analyze and verify the chip stack to ensure we eliminate integration issues, especially considering that chips in a 3D stack often come from different foundries, and are verified using different processes. In 2016, we project increased “chip out” physical and circuit verification that can be performed independently on each die, and at the interfacing level (die-to-die, die-to-package, etc.). In concert with this change, we are seeing a customer-driven desire for a standardized verification process – used by chip design companies and assembly houses to ensure the manufacturability and performance of IC packages – that will significantly reduce risk of package failure, while also reducing turnaround time for both the component providers and assembly houses. By implementing a repeatable, industry-wide supported process, all participants can improve both their first-time success rate and overall product quality. Initial work with customers and package assembly houses has proven the feasibility of this approach.  As we all know, standards take a long time, but 2016 is the year to start the process.”

Dr. John Parry, Electronics Industry Vertical Manager, Mentor Graphics Mechanical Analysis Division, adds that thermal considerations are also increasing in importance in system design.  “The trend we see in the chip thermal-mechanical space is a stronger need for qualifying new materials, methods, packaging technologies and manufacturing processes. Our advanced thermal design software, thermal characterization and active power cycling hardware is helping customers to meet this expanding need.”

Emulation

One of the areas showing significant growth is hardware-based emulation.  Lauro Rizzatti, a noted verification expert, stated: “In 2016 and beyond, new generations of hardware emulators will be brought to market. They will have added capabilities that are more powerful than currently available, as well as support new applications.

Hardware emulation will continue to be the foundation of a verification strategy in 2016. This isn’t so much of a prediction but a fact as standard designs routinely exceed the 100-million gate mark and processor, graphics and networking designs approach one-billion gates. Hardware emulation is the only verification tool able to take on and verify these oversized, complex designs. It’s also true as new capabilities enable emulation to be a datacenter resource, enabling worldwide project teams access to this remarkable verification tool. In 2016, this will become the new normal as project teams better leverage their investment in hardware emulation and work to avoid risk.”

The importance of emulation was also underscored by Jean-Marie Brunet, Director of Marketing, Mentor Graphics Emulation Division, especially in the area of hardware/software co-verification and software debug.  “Emulation is going mainstream. In 2016, its use will continue to grow faster than the overall EDA industry. Customers are starting to look beyond traditional emulation criteria – such as capacity, speed, compile time, and easy hardware and software debugging – to an expanding list of new criteria: transaction-based verification, multi-user / multi-project access, live and off-line embedded software development and validation, and data-center operation as a centrally managed resource rather than a standalone box in the testing lab. Enterprise management applications help automate emulation by maximizing uptime and supporting intelligent job queuing. This approach not only balances the workload, but also shuffles the queued workload to prioritize critical jobs.

Software applications will continue to shape emulation’s future. For example, emulation is driving a methodology shift in the way power is analyzed and measured. Now designers can boot an OS and run real software applications on SoC designs in an emulator. Real-time switching activity information generated during emulation runs is passed to power analysis tools where power issues can be evaluated. In 2016 there will be a steady stream of new emulator applications aligning with customers’ verification needs such as design for test, coverage closure and visualization.”

Power Management

Last year Sonics introduced a hardware-based method for power management.  Grant Pierce, the company’s CEO, believes that this year the technology should see significant acceptance.  “SoC designs have hit the power wall and the time for dynamic power management solutions is here. This marks the beginning of a multi-year, SoC development trend where system architects and sub-system engineers must consider power as a primary design constraint. Reducing power consumption is now every electrical engineer’s concern–both to enable product success as well as to support reducing greenhouse gas emissions as a global issue. SoC designers can no longer afford to ignore power management without suffering serious consequences, especially in untethered applications where users expect all-day, all-month or even all-year battery life, even with increasing functionality and seemingly constant connectivity.

The majority of SoC design teams, which don’t have the luxury of employing dedicated power engineering staff, will look to purchase third-party IP solutions that deliver orders of magnitude greater energy savings than traditional software-based approaches to power management and control. While designs for mobile phones, tablets, and the application processors that operate them grow more power sensitive with each successive generation of highly capable devices, Sonics expects dynamic, hardware-based power management solutions to be extremely attractive to a broad set of designers building products for automotive, machine vision, smart TV, data center, and IoT markets.


Figure 1. Representation of an SoC partitioned into many Power Grains with a power controller synthesized to simultaneously control each grain. The zoomed view shows the local control element on each grain – highlighting the modular and distributed construction of the ICE-Grain architecture.

As our CTO Drew Wingard has stated, hardware-based power management solutions offer distinct advantages over software-based approaches in terms of speed and parallelism. SoC designers incorporating this approach can better manage the distributed nature of their heterogeneous chips and decentralize the power management to support finer granularity of power control. They can harvest shorter periods of idle time more effectively, which means increasing portions of their chips can stay turned off longer. The bottom line is that these solutions will achieve substantial energy savings to benefit both product and societal requirements.”

High Level Synthesis

Raik Brinkmann, president and CEO of OneSpin Solutions, noted:

”High-level synthesis will get more traction because the demand is there. As more designs get comfortable using the SystemC/C++ level, demand for EDA tools supporting the task will increase, including formal verification.  Additionally, algorithmic design will be driven further in 2016 by applications on FPGAs to reduce power and increase performance over GPU. That suggests FPGA implementation and verification flows will require more automation to improve turnaround time, a viable opportunity for EDA vendors.  Finally, verification challenges on the hardware/firmware interface will increase as more complex blocks are generated and need firmware to access and drive their functions.”

As it has done throughout its existence, the EDA industry will continue to be the engine that propels the growth of the electronics industry.  We have seen in the past a propensity in the electronics industry to think ahead and prepare itself to offer new products as soon as demand materializes, so I expect that the worst that can happen is a mild slowdown in the demand for EDA tools in 2016.

Improving Emulation Throughput for Multi-Project SoC Designs

Monday, November 30th, 2015

Frank Schirrmeister, Cadence Design Systems

Introduction

While the average design size in 2014 was about 180 million gates, according to Semico, that size is expected to grow to an average of 340 million gates in 2018. Of course, since these are simply average figures, there are designs that also have more than a billion gates. Large SoCs typically consist of multiple heterogeneous and homogenous integrated processor cores as well as hundreds of clock domains and IP blocks. All of these components need to be verified to ensure that they work as intended. The software executing on the cores adds additional verification challenges and complexity. So while overall design size growth is important, you also need to consider the varied sizes of each of these components. On top of all this, design schedules are as aggressive as ever, given today’s market pressures. See Figure 1 for an example of an advanced SoC.

In reaction to shrinking design cycles, we’re seeing much more hardware design reuse as well as the application of advanced power management techniques, where functions are instead implemented in embedded software. Design reuse and power management add to the verification challenge, as you need to consider everything from IP interaction at different levels to the interaction between the hardware and software components.

With these considerations, you need a hardware/software verification platform that can simultaneously handle tasks of different sizes and execution lengths, from the smaller IP blocks to sub-systems up to the SoC level. And, you’ll need to extend this verification effort across your enterprise to the other designs that you’re concurrently developing.

Figure 1: Advanced SoC architecture

The Emulation Throughput Loop

An effective system verification flow integrates a number of techniques:

• Virtual prototyping supports early software development prior to RTL availability, with validation of system hardware/software interfaces as the RTL becomes available and transaction-level models (TLMs) can be run in hybrid configurations with RTL.

• Advanced verification of RTL using simulation is the golden reference today, used in every project. It is focused on hardware verification as it is limited in speed, but unbeatable in bring-up time and advanced debug.

• Simulation acceleration verifies RTL hardware as a combination of software-based simulation and hardware-assisted execution, allowing you to reuse your existing simulation verification environment.

• Emulation has proven to be effective for hardware/software co-verification with the fastest design bring-up and simulation-like debug, executing jobs of varying sizes and lengths in parallel.

• FPGA-based prototyping is applied when RTL matures and provides a pre-silicon platform for early software development, system validation, and throughput regressions.

When considering the emulation throughput of a platform and its ROI, it’s important to keep in mind the four major steps in an emulation throughput loop (Figure 2), along with the questions you should ask yourself at each step:

• In the build process, you’re building your verification job, compiling the databases to address different workloads and tasks. Some questions to consider: How fast is the compiler? What is its efficiency and degree of automation for RTL vs. gate-level netlist? How user-friendly is it? How many constructs does it support? Speed in building these databases is just one parameter, as effective use of resources is also important, particularly since the use model will change as you move through different scenarios in your design.

• Allocation is about allocating the workload as efficiently as possible given a certain resource. Questions to consider here: how many parallel jobs can the system handle? How quickly can you allocate from one resource to another without much user intervention? How do you prioritize and swap these tasks according to demand load? All these significantly impact the workload efficiency with which a compute resource can execute the verification workloads a team has to verify.

• The run step is about execution, but while it may be tempting to focus on how fast the system can run, that’s not the whole story here. This step also pertains to the use model and the interface solutions for that particular use model. Questions to consider: Is the system running jobs based on your design priorities? How many jobs can your system execute in parallel? Can jobs be suspended and resumed? Can a specific point of interest be saved to be resumed from later?

• Debug focuses on detection of pre- and post-silicon bugs. Questions to consider: How well can you debug using the verification platform while minimizing the need to re-run the tests to isolate the bug? How efficiently can you adjust your triggers dynamically without re-compiling? Do you have sufficient debug traces for both HW and SW analysis to allow you to capture a large enough window of debug interest, or do you have to execute multiple times to get the right data to debug?

Figure 2: The Emulation Throughput Loop

Each of these steps impacts overall emulation throughput in its own way. For example, debug analysis, considering its data capture and generation requirements, can slow down FPGA-based prototyping and FPGA-based emulation significantly. So you need to decide how much debug is needed for each phase in each particular design. And you might make some adjustments, like using processor-based emulation, during which debug runs non-intrusively at emulation speeds.

As illustrated in Figure 3, this emulation throughput loop will be repeated multiple times in parallel from initial bring-up to silicon tapeout as you integrate and validate each feature set in your design. So each engineer on your team would go through the process of editing their source, checking, compiling, checking in, and integrating their work with that of their peers. Eventually, and later in the flow, the system becomes deterministic and coherent. Once you’ve settled on your feature set and move into the regression run, you might undergo the emulation loop less often until tapeout.

Figure 3: The verification productivity loop is repeated multiple times in parallel through the design cycle.

Why a Holistic View of Parameters for Emulation Throughput Matters

It is also important to take a holistic view of your parameters when you are evaluating emulation throughput. Consider energy cost (some may calculate this value as intrinsic power over time). However, intrinsic power is merely one portion of the power equation. When comparing, for example, an FPGA-based emulator with a processor-based emulator, you’ll likely find that the total processor-based emulator power can be lower than in an FPGA-based emulator when you calculate all of the power consumed in a verification task, i.e. adding up compilation (which often needs large parallel server farms for FPGA-based systems), allocation (processor-based emulation may allow better workload efficiency if the granularity of emulation resources is fine enough), execution, and potentially multiple debug runs required to generate comparable amounts of debug information if trace buffers are not appropriately sized.

Also important considerations are job granularity and gate utilization; some systems are simply better designed than others to minimize wasted emulation space and provide faster turnaround times. It’s not only about the number of jobs supported. For example, a system with fine-grained capacity can support more jobs in parallel while minimizing the resources required. To calculate total turnaround time, you need to consider the number of jobs, the number of parallel jobs that your system can execute, and the run and debug times for this.
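These turnaround and energy arguments reduce to simple bookkeeping once numbers are attached. The values below are assumptions chosen only to show the calculation; they are not measurements of any particular emulator.

```python
# Total turnaround time and energy for a regression, with assumed numbers.
# The point is the bookkeeping across build, allocation, run and debug,
# not the specific values.

jobs            = 40        # verification jobs in the regression
parallel_slots  = 8         # jobs the system can run concurrently (fine-grained capacity)
build_h         = 1.5       # compile time per job, hours (builds assumed to run in parallel on a farm)
run_h           = 2.0       # execution time per job, hours
debug_reruns    = 0.5       # average extra runs needed to capture enough debug data
power_kw        = 120.0     # average system power while running, kW (assumed)

waves = -(-jobs // parallel_slots)                 # ceiling division: scheduling "waves"
runtime_h = waves * run_h * (1 + debug_reruns)     # execution including debug reruns
turnaround_h = build_h + runtime_h
energy_kwh = power_kw * runtime_h

print(f"turnaround = {turnaround_h:.1f} h, energy = {energy_kwh:.0f} kWh")
```

With these assumptions the regression needs five scheduling waves, so doubling the number of parallel slots, or shrinking the debug reruns through deeper trace buffers, cuts both the turnaround time and the energy bill roughly in proportion, which is the holistic trade-off the text describes.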

How to Support Enterprise-Level Verification Computing Needs?

If you have multiple ongoing design projects involving multiple teams distributed across the globe, not just any verification computing system will do. Enterprise-level verification computing calls for a system that:

• Can scale to billion-gate designs while running within your performance/power targets

• Has the ability to support a high number of daily design turns

• Offers debug productivity with sufficient trace depth to generate the required debug information and fast analysis and compile times

• Can provide the flexibility to allocate a given job to any other domain on the logic board to make the best use of computing resources

• Can offer the efficiency of dynamic resource allocation by breaking up a job in a non-consecutive manner to match the available resource in the system

• Does not lock your jobs to given physical interfaces, instead providing the ability to switch virtually between targets and jobs

Advantages of an End-to-End Flow

Given these parameters, there certainly are advantages to using an end-to-end flow. When an emulation system is part of a set of connected platforms, you can quickly migrate between the interoperable platforms and between hardware/software domains to complete your system-level design, integration, and verification tasks. An integrated flow could provide a substantial boost to design team productivity, while minimizing design risks.

An example of an integrated set of development platforms to support hardware/software design and verification can be found in Cadence’s System Development Suite. The suite’s connected platforms reduce system integration time by up to 50%, covering early pre-silicon software development, IP design and verification, subsystem and SoC verification, netlist validation, hardware/software integration and validation, and system and silicon validation.

Summary

Clearly, if your design team isn’t using your hardware resources as efficiently or effectively as it could, the team loses valuable productivity. With today’s large, complex SoCs, achieving high levels of multi-user verification throughput and productivity is an essential element of an overall competitive advantage. Many traditional compute resources aren’t equipped to meet these parameters. Instead, today’s designs call for capabilities including high capacity, efficient job allocation, fast runtime, and extensive debug to help you swiftly take your designs to tapeout.

New Markets for EDA

Wednesday, December 3rd, 2014

Brian Derrick, Vice President Corporate Marketing, Mentor Graphics

EDA grows by solving new problems as discontinuities occur and design cannot proceed as usual.  Often these are incremental, but occasionally problems or transitions occur that create new markets for our industry.

Discontinuities in Traditional EDA

One of the most pressing challenges today is the escalating complexity of hardware verification and the need to verify software earlier in the design cycle. Emulation is rapidly becoming a mainstream methodology. As part of an integrated enterprise verification solution, it allows designers to perform pre-silicon testing and debug at accelerated hardware speeds, using real-world stimulus within a unified verification environment that seamlessly moves data and transactors between simulation and emulation.   Enterprise verification utilizing emulation delivers performance and productivity improvements ranging from 400X to 10,000X.

Performance alone did not enable emulation to become mainstream.  There has been a transformation from project-bound engineering lab instrument to datacenter-hosted global resource.  This transformation begins by eliminating the In-Circuit Emulation (ICE) tangle of cables, speed adaptors and physical devices, replacing them with virtual devices.  The latest generation of emulators can be installed in most standard data centers, making emulators similar to any other server installation.

What’s equally exciting is the number of software engineers who have moved their embedded software development and debug to emulators. With the accelerated throughput, developers feel as though they are debugging embedded software on the actual silicon product.  All of this explains why the emulation market has doubled in the past five years, with a three year compounded annual growth rate of 23%.

Another discontinuity in traditional EDA is physical testing at 20nm and below.  As FinFET technology becomes pervasive at these nodes, there is strong potential for increased defects within the standard cells.  Transistor-level automatic test pattern generation targets undetected internal faults based on the actual cell layout.  It improves the quality of wafer-level test, reducing the need for system-level functional test.  This “cell-aware” capability has been well qualified on FinFET parts and will become pervasive in leading-edge physical design verification, keeping the Design for Test market on an accelerated growth rate that is 4X the overall EDA market growth.

EDA Growth Opportunities in New Markets

Other important growth opportunities for our industry can be found in markets that are in transition, with emerging requirements for automated design.  There is no doubt that automotive electronics is one of the most promising segments.  As the automotive industry transitions from mechanical to electronic differentiation of their products, the need for electronic design automation is accelerating.

Automobiles are complex electronic systems: leading-edge vehicles have up to 150 electronic networks, 200 microprocessors, nearly 100 electric motors, and hundreds of LEDs, all connected by nearly 3 miles of wiring.  And the embedded software responsible for managing all of this can reach upwards of 65 million lines of code (Figure 1).  Automotive ICs are already a $20 billion market and are the fastest growing segment according to IC Insights.  Electronics now account for 35-40% of a car’s cost and that number is expected to increase to 50% in the future.

Figure 1:  Complexity is driving the automation of the electronic design of automotive electronics

Automotive suppliers are adopting EDA solutions to address the unique electronic systems and software challenges in this rapidly developing segment.  Simple wiring tools are being replaced with complete enterprise solutions spanning concept through design, manufacturing, costing and after-sales service.  These tools and flows are enabling the industry to handle the requirements of a highly regulated environment, while increasing quality, minimizing costs, reducing weight, and managing power across literally thousands of options for an OEM platform.

The rapid expansion of electronic control units and networks in nearly all new automotive platforms has accelerated the demand for AUTOSAR development tools and solutions.   AUTOSAR is an open, standardized automotive software architecture, jointly developed by automobile manufacturers, suppliers and tool developers.  Now add to that the safety-critical embedded software requirements and standards such as ISO 26262, regulations for fuel efficiency, and environmental emissions, and the opportunity for design automation is just beginning.

Driver experience is now a crucial differentiator, with in-vehicle infotainment (IVI), advanced driver assistance (ADAS), and driver information becoming the major selling points for new automobiles.  Active noise cancellation, high-speed hi-def video, smart mirrors, head-up displays, proximity information, over-the-air updates, and animated graphics are just a few of the capabilities being deployed and developed for automobiles today.

There is a strong demand for EDA solutions that combine system design and software development for heterogeneous systems. Embedded Linux, AUTOSAR and real time operating systems are deployed across diverse multi-core SOCs in a growing number of in-vehicle networks.

Many of the EDA solutions developed for the automotive industry are being adopted by other markets with similar challenges.  Electronic systems interconnect tools are enabling the optimization of the cable/harness systems in aerospace, defense, heavy equipment, off-road, agriculture,  and other transportation-related markets.   As the automotive industry develops and deploys driver convenience and information systems, it will make them affordable for many of these adjacent markets.

New markets for EDA are emerging as the complexity of SoCs increase and the world we interact with becomes more connected.  Solving these new problems and applying EDA solutions to markets in transition, like automotive, aerospace, the broader transportation industry, and the Internet of Things, will fuel the growth of the design automation industry into the future.

System Level Power Budgeting

Wednesday, March 12th, 2014

Gabe Moretti, Contributing Editor

I would like to start by thanking Vic Kulkarni, VP and GM at Apache Design, a wholly owned subsidiary of ANSYS; Bernard Murphy, Chief Technology Officer at Atrenta; and Steve Brown, Product Marketing Director at Cadence, for contributing to this article.

Steve began by noting that defining a system-level power budget for an SoC starts from chip package selection and the power supply or battery life parameters. This sets the power/heat constraint for the design, and is selected while balancing the functionality of the device, the performance of the design, and the area of the logic and on-chip memories.

Unfortunately, as Vic points out, semiconductor design engineers must meet power specification thresholds, or power budgets, that are dictated by the electronic system vendors to whom they sell their products.  Bernard wrote that accurate pre-implementation IP power estimation is almost always required. Since almost all design today is IP-based, accurate estimation for IPs is half the battle. Today you can get power estimates for RTL with accuracy within 15% of silicon as long as you are modeling representative loads.

With the insatiable demand for handling multiple scenarios (i.e., large FSDB files) like GPS, searches, music, extreme gaming, streaming video, download data rates and more on mobile devices, the dynamic power consumed by SoCs continues to rise in spite of strides made in reducing static power consumption at advanced technology nodes. As shown in Figure 1, the end-user demand for higher-performance mobile devices that have longer battery life or a higher thermal limit is expanding the “power gap” between power budgets and estimated power consumption levels.

A typical “chip power budget” for a mobile application could be as follows (ref.: mobile companies): active power budget = 700 mW @ 100 Mbps for download with MIMO, 100 mW in IDLE mode; leakage power < 5 mW with all power domains off, etc.

Accurate power analysis and optimization tools must be employed during all the design phases from system level, RTL-to-gate level sign-off to model and analyze power consumption levels and provide methodologies to meet power budgets.

Figure 1. Skyrocketing performance vs. limited battery and thermal limit (ref. Samsung, Apache Tech Forum)

The challenge is to find ways to abstract with reasonable accuracy for different types of IP and different loads. Reasonable methods to parameterize power have been found for single and multiple processor systems, but not for more general heterogeneous systems. Absent better models, most methods used today are based on quite simple lookup tables, representing average consumption. Si2 is doing work in defining standards in this area.

Vic is convinced that careful power budgeting at a high level also enables design of the power delivery network in the downstream design flow. Power delivery with reliable and consistent power to all components of ICs and electronic systems while meeting power budgets is known as power delivery integrity.  Power delivery integrity is analogous to the way in which an electric power grid operator ensures that electricity is delivered to end users reliably, consistently and in adequate amounts while minimizing loss in the transmission network.  ICs and electronic systems designed with inadequate power delivery integrity may experience large fluctuations in supply voltage and operating power that can cause system failure. For example, these fluctuations particularly impact ICs used in mobile handsets and high performance computers, which are more sensitive to variations in supply voltage and power.  Ensuring power delivery integrity requires accurate modeling of multiple individual components, which are designed by different engineering teams, as well as comprehensive analysis of the interactions between these components.

Methods To Model System Behavior With Power

At present engineers have a few approaches at their disposal.  Vic points out that the designer must translate the power requirements into block-level power budgets expressed as specific metrics.

These metrics include dynamic power estimation per operating power mode, leakage and sleep power estimation at RTL, power distribution at a glance, identification of high-power-consuming areas, power domains, frequency-scaling feasibility for each IP, retention flop design trade-offs, power-delivery network planning, required current consumption per voltage source, and so on.

Bernard thinks that spreadsheet modeling is probably the most common approach. The spreadsheet captures typical application use-cases, broken down into IP activities, determined from application simulations/emulations. It also represents, for each IP in the system, a power lookup table or set of curves. Power estimation simply sums across IP values in a selected use-case. An advantage is no limitation in complexity – you can model a full smart phone including battery, RF and so on. Disadvantages are the need to understand an accurate set of use-cases ahead of deployment, and the abstraction problem mentioned above.  But Steve points out that these spreadsheets are difficult to create and maintain, and fall short for identifying outlier conditions that are critical for the end user’s experience.
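The spreadsheet approach Bernard describes boils down to a lookup table of per-IP power numbers summed over the IPs active in a use case. The table and use cases below are invented placeholders; in practice the entries would come from IP-level characterization and the use cases from application profiling.

```python
# Spreadsheet-style power estimate: per-IP average power per operating state,
# summed over the IPs active in a given use case.  All numbers are invented.

POWER_MW = {                      # IP -> {state: average power in mW}
    "cpu_cluster": {"off": 0, "idle": 15, "active": 400},
    "gpu":         {"off": 0, "idle": 10, "active": 550},
    "modem":       {"off": 0, "idle": 8,  "active": 250},
    "display":     {"off": 0, "idle": 5,  "active": 120},
}

USE_CASES = {                     # use case -> operating state of each IP
    "idle_screen_off": {"cpu_cluster": "idle", "gpu": "off", "modem": "idle", "display": "off"},
    "video_call":      {"cpu_cluster": "active", "gpu": "active", "modem": "active", "display": "active"},
}

def estimate_mw(use_case: str) -> float:
    """Sum the lookup-table power of every IP in its use-case state."""
    return sum(POWER_MW[ip][state] for ip, state in USE_CASES[use_case].items())

for uc in USE_CASES:
    print(f"{uc}: {estimate_mw(uc):.0f} mW")
```

The simplicity is the appeal and the weakness at once: the model scales to a full phone, but it only reports averages per use case, which is why it misses the outlier conditions Steve mentions.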

Steve also points out that some companies are adapting virtual platforms to measure dynamic power and improve hardware/software partitioning decisions. The main barrier to this solution remains creation of the virtual platform models, and then also adding the notion of power to the models. Reuse of IP enables reuse of existing models, but they still require effort to maintain and adapt power calculations for new process nodes.

Bernard has seen engineers run the full RTL against realistic software loads, dump activity for all (or a large number of) nodes, and compute power based on the dump. An advantage is that they can skip the modeling step and still get an estimate as good as for RTL modeling. Disadvantages include needing the full design (making it less useful for planning) and a significant slowdown in emulation when dumping all nodes, making it less feasible to run extensive application experiments.  Steve concurs.  Dynamic power analysis is a particularly useful technique, available in emulation and simulation. The emulator provides MHz performance, enabling analysis of many cycles, oftentimes with test driver software to focus on the most interesting use cases.
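The activity-dump approach computes dynamic power from switching activity with the familiar αCV²f relation. In the sketch below the per-node capacitances, toggle counts and operating point are all invented; a real flow reads them from the design database and from the activity file (for example SAIF or FSDB) produced by simulation or emulation.

```python
# Activity-based dynamic power: P_dyn = sum over nodes of  a * C * Vdd^2 * f,
# where a is the node's toggle (activity) rate.  Capacitances and toggle
# counts below are invented; real flows extract them from the netlist and an
# activity dump produced by simulation or emulation.

VDD = 0.9          # supply voltage, volts (assumed)
F_CLK = 1.0e9      # clock frequency, Hz (assumed)

# node -> (effective capacitance in farads, toggles observed, cycles observed)
NODES = {
    "cpu/alu/result[0]": (2.0e-15, 450_000, 1_000_000),
    "cpu/alu/result[1]": (2.0e-15, 310_000, 1_000_000),
    "noc/req_valid":     (5.0e-15,  90_000, 1_000_000),
}

def dynamic_power_w(nodes=NODES, vdd=VDD, f_clk=F_CLK) -> float:
    total = 0.0
    for cap, toggles, cycles in nodes.values():
        activity = toggles / cycles              # average toggles per cycle
        total += activity * cap * vdd * vdd * f_clk
    return total

print(f"estimated dynamic power: {dynamic_power_w() * 1e6:.2f} uW")
```

The accuracy comes from using the design's real activity under real software, which is exactly why dumping every node during emulation is both attractive and, as Bernard notes, expensive.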

Bernard is of the opinion that while C/C++/SystemC modeling seems an obvious target, it also suffers from the abstraction problem. Steve thinks that a likely architecture in this scenario has the virtual platform containing the processing and memory subsystems and executing at hundreds of MHz, while the emulator contains the rest of the SoC and a replica of the memory subsystem, executing at higher speeds and providing cycle-accurate power analysis and functional debugging.

Again, Bernard wants to underscore, progress has been made for specialized designs, such as single and multiple processors, but these approaches have little relevance for more common heterogeneous systems. Perhaps Si2’s work in this area will help.
