
Posts Tagged ‘OneSpin Solutions’

EDA in the year 2017 – Part 2

Tuesday, January 17th, 2017

Gabe Moretti, Senior Editor

The first part of the article, published last week, covered design methods and standards in EDA together with industry predictions that impacted all of our industry.  This part will cover automotive, design verification and FPGA.  I found it interesting that David Kelf, VP of Marketing at OneSpin Solutions, thought that Machine learning will begin to penetrate the EDA industry as well.  He stated: “Machine Learning hit a renaissance and is finding its way into a number of market segments. Why should design automation be any different?  2017 will be the start of machine learning to create a new breed of design automation tool, equipped with this technology and able to configure itself for specific designs and operations to perform them more efficiently. By adapting algorithms to suit the input code, many interesting things will be possible.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence, touched on an issue that has been getting more attention recently: security.  He noted: “In 2016, IoT bot-net attacks brought down large swaths of the Internet – the first time the security impact of IoT was felt by many. Private and nation-state attacks compromised personal/corporate/government email throughout the year.

In 2017, we have the potential for security concerns to start a retreat from always-on social media and a growing value on private time and information. I don’t see a silver bullet for security on our horizon. Instead, I anticipate an increasing focus for products to include security managers (like their safety counterparts) on the design team and to consider security from the initial concept through the design/production cycle.”

Figure 1.  Just one of the many electronics systems found in an automobile (courtesy of Mentor)

Automotive

The automotive industry has increased its use of electronics year over year for a long time.  At this point an automobile is a true intelligent system, at least as far as what the driver and passengers can see and hear: the “infotainment system”.  Late-model cars also offer collision avoidance and stay-in-lane functions, but more is coming.

Here is what Wally Rhines thinks: “Automotive and aerospace designers have traditionally been driven by mechanical design.  Now the differentiation and capability of cars and planes is increasingly being driven by electronics.  Ask your children what features they want to see in a new car.  The answer will be in-vehicle infotainment.  If you are concerned about safety, the designers of automobiles are even more concerned.  They have to deal with new regulations like ISO 26262, as well as other capabilities, in addition to environmental requirements and the basic task of “sensor fusion” as we attach more and more visual, radar, laser and other sensors to the car.  There is no way to reliably design vehicles and aircraft without virtual simulation of electrical behavior.

In addition, total system simulation has become a requirement.  How do you know that the wire bundle will fit through the hole in the door frame?  EDA tools can tell you the answer, but only after seeking out the data from the mechanical design.  Wiring in a car or plane is a three-dimensional problem.  EDA tools traditionally worry about two-dimensional routing problems.  The world is changing.  We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.”

David Kelf, VP of Marketing at OneSpin Solutions, observed: “OneSpin called it last year and I’ll do it again –– Automotive will be the “killer app” of 2017. With new players entering the market all the time, we will see impressive designs featured in advanced cars, which themselves will move toward a driverless future.  All automotive designs currently being designed for safety will also need to be built to be as secure as possible. The ISO 26262 committee is working on security as well as safety, and I predict security will feature in the standard in 2017. Tools to help predict vulnerabilities will become more important. Formal, of course, is the perfect platform for this capability. Watch for advanced security features in formal.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence noted: “In 2016, autonomous vehicle technology reached an inflection point. We started seeing more examples of private companies operating SAE 3 in America and abroad (Singapore, Pittsburgh, San Francisco).  We also saw active participation by the US and world governments to help guide tech companies in the proliferation and safety of the technology (ex. US DOT V2V/V2I standard guidelines, and creating federal ADAS guidelines to prevent state-level differences). Probably the most unique example was also the first drone delivery by a major retailer, something which was hinted at 3 years prior and seemingly just a flight of fancy then.

Looking ahead to 2017, both the breadth and depth are expected to expand, including the first operation of SAE level 4/5 in limited use on public streets outside the US, and on private roads inside the US. Outside of ride sharing and city driving, I expect to see the increasing spread of ADAS technology to long distance trucking and non-urban transportation. To enable this, additional investments from traditional vehicle OEMs partnering with both software and silicon companies will be needed to enable high levels of autonomous functions. To help bring these to reality, I also expect the release of new standards to guide both the functional safety and reliability of automotive semiconductors. Even though the pace of government standards can lag, for ADAS technology to reach its true potential, it will require both standards and innovation.”

FPGA

The IoT market is expected to provide a significant opportunity for the electronics industry to grow revenue and open new markets.  I think the use of FPGAs in IoT devices will increase the role these devices play in system design.

I asked Geoff Tate, CEO of FlexLogix, his opinions on the subject.  He offered four points that he expects to become reality in 2017:

1. the first customer chip will be fabricated using embedded FPGA from an IP supplier

2. the first customer announcements will be made of customers adopting embedded FPGA from an IP supplier

3. embedded FPGAs will be proven in silicon running at 1GHz+

4. the number of customers doing chip design using embedded FPGA will go from a handful to dozens.

Zibi Zalewski, Hardware Division General Manager at Aldec, also addressed the FPGA subject.

“I believe FPGA devices are an important technology player to mention when talking about what to expect in 2017. With the growth of embedded electronics driven by the Automotive, Embedded Vision and/or IoT markets, FPGA technology becomes a core element, particularly in products that require low power and re-programmability.

Features of FPGAs such as pipelining and the ability to execute and easily scale parallel instances of the implemented function allow for the use of FPGAs beyond the traditionally understood embedded markets. The use of FPGA computing power is exploding in High Performance Computing (HPC), where FPGA devices are used to accelerate different scientific algorithms and big data processing, and to complement CPU-based data centers and clouds. We can’t talk about FPGAs these days without mentioning SoC FPGAs, which merge a microprocessor (quite often ARM) with reprogrammable space. Thanks to such configurations, it is possible to combine the software and hardware worlds in one device, with the benefits of both.

All those activities have led to solid growth in FPGA engineering, which is pushing further growth of FPGA development and verification tools. This includes not only typical simulation and implementation solutions. We should also observe solid growth in tools and services that simplify the usage of FPGAs for those who don’t even know the technology, such as high-level synthesis or engineering services to port C/C++ sources into FPGA-implementable code. The demand for development environments, like compilers supporting both software and hardware platforms, will only grow, with the main goal being ease of use by a wide group of engineers who were not even considering the FPGA platform for their target application.

At the other end of the FPGA rainbow are the fast-growing, largest FPGAs offered by both Xilinx and Intel/Altera. ASIC design emulation and prototyping based on these devices will push harder and harder on the so-called big-box emulators, offering higher performance and a significantly lower price per gate, and so becoming affordable for even smaller SoC projects. This is especially true when partnered with high-quality design mapping software that handles multi-FPGA partitioning, interconnections, clocks and memories.”

Figure 2. Verification can look like a maze at times

Design Verification

There are many methods to verify a design, and companies will quite often use more than one on the same design.  Each method (simulation, formal analysis, and emulation) has its strong points.

For many years, logic simulation was the primary tool available, complemented only by hardware acceleration of logic simulation.

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence, submitted a thorough analysis of verification issues.  He wrote: “From a verification perspective, we will see further market specialization in 2017 – mobile, server, automotive (especially ADAS) and aero/defense markets will further create specific requirements for tools and flows, including ISO 26262 TCL1 documentation and support for other standards. The Internet of Things (IoT) with its specific security and low power requirements really runs across application domains.  Continuing the trend in 2016, verification flows will continue to become more application-specific in 2017, often centered on specific processor architectures. For instance, verification solutions optimized for mobile applications have different requirements than for servers and automotive applications or even aerospace and defense designs. As application-specific requirements grow stronger and stronger, this trend is likely to continue going forward, but cross-impact will also happen (like mobile and multimedia on infotainment in automotive).

Traditionally, ecosystems have been centered on processor architectures. Mobile and server are key examples, with their respective leading architectures holding the lion’s share of their respective markets. The IoT is mixing this up a little as more processor architectures can play and offer unique advantages, with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even open-source hardware architectures look like they will be very relevant, judging from the recent momentum, which eerily reminds me of the early Linux days. It’s one of the most entertaining spaces to watch in 2017 and for years to come.

Verification will become a whole lot smarter. The core engines themselves continue to compete on performance and capacity. Differentiation is moving further into how smart the applications running on top of the core engines are, and how smartly the engines are used in conjunction.

For the dynamic engines in software-based simulation, the race towards increased speed and parallel execution will accelerate together with flows and methodologies for automotive safety and digital mixed-signal applications.

In the hardware emulation world, differentiation for the two basic ways of emulating – processor-based and FPGA-based – will be more and more determined by how the engines are used. Specifically, the various use models for core emulation, like verification acceleration, low power verification, dynamic power analysis, and post-silicon validation—often driven by the ever growing software content—will extend further, with more virtualization joining real-world connections. Yes, there will also be competition on performance, which clearly varies between processor-based and FPGA-based architectures—depending on design size and how much debug is enabled—as well as the versatility of use models, which determines the ROI of emulation.

FPGA-based prototypes address the designer’s performance needs for software development, using the same core FPGA fabrics. Therefore, differentiation moves into the software stacks on top, and the congruency between emulation and FPGA-based prototyping achieved with multi-fabric compilation allows mapping the same design into both emulation and FPGA-based prototyping.

All this is complemented by smart connections into formal techniques and cross-engine verification planning, debug and software-driven verification (i.e. software becoming the test bench at the SoC level). Based on standardization driven by the Portable Stimulus working group in Accellera, verification reuse between engines and cross-engine optimization will gain further importance.

Besides horizontal integration between engines—virtual prototyping, simulation, formal, emulation and FPGA-based prototyping—the vertical integration between abstraction levels will become more critical in 2017 as well. For low power specifically, activity data created from RTL execution in emulation can be connected to power information extracted from .lib technology files using gate-level representations or power estimation from RTL. This allows designers to estimate hardware-based power consumption in the context of software using deep cycles over longer timeframes that are emulated.”
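One piece of arithmetic sits behind the low-power flow Schirrmeister describes. As a reminder (a standard first-order approximation on my part, not a description of any particular vendor’s engine), the switching activity captured over those emulated cycles feeds the familiar dynamic power estimate:

    P_{\mathrm{dyn}} \approx \sum_{i \in \mathrm{nets}} \alpha_i \, C_i \, V_{dd}^{2} \, f

where \alpha_i is the toggle rate of net i observed during emulation, C_i is its capacitance from the .lib or extraction data, V_{dd} is the supply voltage and f is the clock frequency. Running deep, realistic software workloads simply makes the \alpha_i values far more representative than short directed tests can.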

Anyone who knows Frank will not be surprised by the length of the answer.

Wally Rhines, Chairman and CEO of Mentor Graphics, was less verbose.  Returning to the theme of total system simulation quoted in the automotive section above, he continued:

“This will create a new market for EDA.  It will be larger than the traditional IC design market for EDA.  But it will be based upon the basic simulation, verification and analysis tools of IC design EDA.  Sometime in the near future, designers of complex systems will be able to make tradeoffs early in the design cycle by using virtual simulation.  That know-how will come from integrated circuit design.  It’s no longer feasible to build prototypes of systems and test them for design problems.  That approach is going away.  In its place will be virtual prototyping.  This will be made possible by basic EDA technology.  Next year will be a year of rapid progress in that direction.  I’m excited by the possibilities as we move into the next generation of electronic design automation.”

The increasing size of chips has made emulation a more popular tool than in the past.  Lauro Rizzatti, Principal at Lauro Rizzatti LLC, is a pioneer in emulation and continues to be thought of as a leading expert in the method.  He noted: “Expect new use models for hardware emulation in 2017 that will support traditional market segments such as processor, graphics, networking and storage, and emerging markets currently underserved by emulation –– safety and security, along with automotive and IoT.

Chips will continue to be bigger and more complex, and include an ever-increasing amount of embedded software. Project groups will increasingly turn to hardware emulation because it’s the only verification tool to debug the interaction between the embedded software and the underlying hardware. It is also the only tool capable of estimating power consumption in a realistic environment, when the chip design is booting an OS and processing software apps. More to the point, hardware emulation can thoroughly test the integrity of a design after the insertion of DFT logic, since it can verify gate-level netlists of any size, a virtually impossible task with logic simulators.

Finally, its move to the data center solidifies its position as a foundational verification tool that offers a reasonable cost of ownership.”

Formal verification tools, sometimes referred to as “static analysis tools,” have seen their use increase year over year once vendors found human interface methods that did not require a highly trained user.  Roger Sabbagh, VP of Application Engineering at Oski Technology, pointed out: “The world is changing at an ever-increasing pace and formal verification is one area of EDA that is leading the way. As we stand on the brink of 2017, I can only imagine what great new technologies we will experience in the coming year. Perhaps it’s having a package delivered to our house by a flying drone or riding in a driverless car or eating food created by a 3-D printer. But one thing I do know is that in the coming year, more people will have the critical features of their architectural design proven by formal verification. That’s right. System-level requirements, such as coherency, absence of deadlock, security and safety will increasingly be formally verified at the architectural design level. Traditionally, we relied on RTL verification to test these requirements, but the coverage and confidence gained at that level is insufficient. Moreover, bugs may be found very late in the design cycle where they risk generating a lot of churn. The complexity of today’s systems of systems on a chip dictates that a new approach be taken. Oski is now deploying architectural formal verification with design architects very early in the design process, before any RTL code is developed, and it’s exciting to see the benefits it brings. I’m sure we will be hearing a lot more about this in the coming year and beyond!”

Finally, David Kelf, VP of Marketing at OneSpin Solutions, observed: “We will see tight integrations between simulation and formal that will drive adoption among simulation engineers in greater numbers than before. The integration will include the tightening of coverage models, joint debug, and functionality where the formal method can pick up from simulation and even emulation with key scenarios for bug hunting.”


Conclusion

The two combined articles are indeed quite long.  But the EDA industry is serving a multi-faceted set of customers with varying and complex requirements.  To do it justice, length is unavoidable.

Specialists and Generalists Needed for Verification

Friday, December 16th, 2016

Gabe Moretti, Senior Editor

Verification continues to take up a huge portion of the project schedule. Designs are getting more complex and with complexity comes what appears to be an emerging trend –– the move toward generalists and specialists. Generalists manage the verification flow and are knowledgeable about simulation and the UVM. Specialists with expertise in formal verification, portable stimulus and emulation are deployed when needed.  I talked with four specialists in the technology:

David Kelf, Vice President of Marketing, OneSpin Solutions,

Harry Foster, Chief Scientist Verification at Mentor Graphics,

Lauro Rizzatti, Verification Consultant, Rizzatti LLC, and

Pranav Ashar, CTO, Real Intent

I asked each of them the following questions:

- Is this a real trend or a short-term aberration?

- If it is a real trend, how do we make complex verification tools and methodologies suitable for mainstream verification engineers?

- Are verification tools too complicated for a generalist to become an expert?

David: Electronics design has always had its share of specialists. A good argument could be made that CAD managers were specialists in the IT department, and that the notion of separate verification teams was driven by emerging specialists in testbench automation approaches. Now we are seeing something else. That is, the breakup of verification experts into specialized groups, some project based, and others that operate across different projects. With design complexity comes verification complexity. Formal verification and emulation, for example, were little-used tools and only then for the most difficult designs. That’s changed with the increase in size, complexity and functionality of modern designs.

Formal Verification, in particular, found its way into mainstream flows through “apps” where the entire use model is automated and the product is focused on specific high-value verification functions. Formal is also applied manually through the use of hand-written assertions, and this task is often left to specialist Formal users, creating apparently independent groups within companies that may be applied to different projects. The emergence of these teams, while providing a valuable function, can limit the proliferation of this technology as they become the keepers of the flame, if you like. The generalist engineers come to rely on them rather than exploring the use of the technology for themselves. This, in turn, has a limiting effect on the growth of the technology and the realization of its full potential as an alternative to simulation.

Harry: It’s true, design is getting more complex. However, as an industry, we have done a remarkable job of keeping up with design, which we can measure by the growth in demand for design engineers. In fact, between 2007 and 2016 the industry has gone through about four iterations of Moore’s Law. Yet, the demand for design engineers has only grown at a 3.6 percent compounded annual growth rate.

Figure 1

During this same period, the demand for verification engineers has grown at a 10.4 percent compounded annual growth rate. In other words, verification complexity is growing at a faster rate than design complexity. This should not be too big a surprise since it is generally accepted in the academic world that design complexity grows at a Moore’s Law rate, while verification complexity grows at a much steeper rate (i.e., double exponential).
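A back-of-the-envelope way to see why the curves diverge (my sketch, not Foster’s own derivation): if the number of state elements in a design doubles roughly every two years per Moore’s Law, the space a verification environment must in principle cover is bounded by the states those elements can encode,

    n(t) \approx n_0 \, 2^{t/2}, \qquad |S(t)| \le 2^{n(t)} \approx 2^{\,n_0 \, 2^{t/2}},

so the state count grows exponentially while the bound on the state space grows double exponentially. Foster continued: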

One contributing factor to growing verification complexity is the emergence of new layers of verification requirements that did not exist years ago. For example, beyond the traditional functional domain, we have added clock domains, power domains, security domains, safety requirements, software, and then obviously, overall performance requirements.

Figure 2

Each of these new layers of requirements requires specialized domain knowledge. Hence, domain expertise is now a necessity in both the design and verification communities to effectively address emerging new layers of requirements.

For verification, a one-size-fits-all approach is no longer sufficient to completely verify an SoC. There is a need for specialized tools and methodologies specifically targeted at each of these new (and continually emerging) layers of requirements. Hence, in addition to domain knowledge expertise, verification process specialists are required to address growing verification complexity.

The emergence of verification specialization is not a new trend, although perhaps it has become more obvious due to growing verification complexity. For example, to address the famous floating point bug in the 1990s it became apparent that theorem proving and other formal technology would be necessary to fill the gap of traditional simulation-based verification approaches. These techniques require full-time dedication that generalists are unlikely to master because their focus is spread across so many other tools and methodologies. One could make the same argument about the adoption of constrained-random, coverage-driven testbenches using UVM (requiring object-oriented programming skills, which I do not consider generalist skills), emulation, and FPGA prototyping. These technologies have become indispensable in today’s SoC verification/validation tool box, and to get the most out of the project’s investment, specialists are required.

So the question is how do we make complex tools and methodologies suitable for mainstream verification engineers? We are addressing this issue today by developing verification apps that solve a specific, narrowly focused problem and require minimal tool and methodology expertise. For example, we have seen numerous formal apps emerge that span a wide spectrum of the design process from IP development into post-silicon validation.  These apps no longer require the user to write assertions or be an expert in formal techniques. In fact, the formal engines are often hidden from the user, who then focuses on “what” they want to verify, versus the “how.” A few examples include: connectivity check, used during IP integration; register check, used to exhaustively verify control and status register behavior against its CSV or IP-XACT register specification; and security check, used to exhaustively verify that only the paths you specify can reach security or safety-critical storage elements. Perhaps one of the best-known formal apps is clock-domain crossing (CDC) checking, which is used to identify metastability issues due to the interaction of multiple clock domains.
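To make Foster’s “what” versus “how” distinction concrete, here is a minimal, hypothetical sketch of the register-check idea in Python. The CSV columns and the toy register model are my own inventions for illustration, and a real formal app would prove these properties exhaustively on the RTL rather than run directed checks; the point is only that the spec, not hand-written assertions, is the user’s sole input.

    import csv
    import io

    # Toy CSV register spec: the "what" the user supplies (an illustrative format, not IP-XACT).
    SPEC_CSV = ("name,address,access,reset\n"
                "CTRL,0x00,RW,0x0\n"
                "STATUS,0x04,RO,0x1\n"
                "KEY,0x08,WO,0x0\n")

    class ToyRegisterModel:
        """Stand-in for the design under test: a tiny register file honoring access types."""
        def __init__(self, spec_rows):
            self.value = {int(r["address"], 16): int(r["reset"], 16) for r in spec_rows}
            self.access = {int(r["address"], 16): r["access"] for r in spec_rows}

        def read(self, addr):
            return self.value[addr]

        def write(self, addr, data):
            if self.access[addr] in ("RW", "WO"):   # a buggy design might skip this gating
                self.value[addr] = data

    def register_check(spec_rows, dut):
        """The "how": derive the checks from the spec and run them; the user writes nothing else."""
        failures = []
        for row in spec_rows:
            addr, reset = int(row["address"], 16), int(row["reset"], 16)
            # Check 1: reset value matches the spec (write-only registers are skipped).
            if row["access"] != "WO" and dut.read(addr) != reset:
                failures.append(row["name"] + ": reset value mismatch")
            # Check 2: read-only registers must ignore writes.
            if row["access"] == "RO":
                dut.write(addr, reset ^ 0xFFFF)
                if dut.read(addr) != reset:
                    failures.append(row["name"] + ": RO register modified by a write")
        return failures

    rows = list(csv.DictReader(io.StringIO(SPEC_CSV)))
    print(register_check(rows, ToyRegisterModel(rows)) or "all register checks passed")

Foster continued: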

Emulation is another area where we are seeing the emergence of verification apps. One example is deterministic ICE in emulation, which overcomes unpredictability in traditional ICE environments by adding 100 percent visibility and repeatability for debugging, and provides access to other “virtual-based” use models. Another is DFT emulation apps that accelerate Design for Test (DFT) verification prior to tape-out to minimize the risk of catastrophic failure while significantly reducing run times when verifying designs after DFT insertion.

In summary, the need for verification specialists today is driven by two demands: (1) specialized domain knowledge driven by new layers of verification requirements, and (2) verification tool and methodology expertise. This is not a bad thing. If I had a brain aneurysm, I would prefer that my doctor has mastered the required skills in endoscopy and other brain surgery techniques versus a general practitioner with a broad set of skills. Don’t get me wrong, both are required.

Lauro: In my mind, it is a trend, but the distinction may blur its contours soon. Let’s take hardware emulation. Hardware emulation has always required specialists for its deployment and, even more so, to optimize it to its fullest capacity. As they used to say, it came with a team of application engineers in the box to ensure that the time-to-emulation would not exceed the time-to-first-silicon. Today, hardware emulation is still a long way from being a plug-and-play verification tool, but recent developments by emulation vendors are making it easier and more accessible for generalists to use and deploy. The move from the in-circuit-emulation (ICE) mode, driven by a physical target system, to transaction-based communication, driven by a virtual testbed, gives it the status of a data center resource available to all types of verification engineers without specialist intervention. I see that as a huge step forward in the evolution of hardware emulation and its role in the design verification flow.

Pranav: The generalist vs. specialist discussion fits right into the shifting paradigm in which generic verification tools are being replaced by tools that are essentially verification solutions for specific failure modes.

The failure modes addressed in this manner are typically due to intricate phenomena that are hard to specify and model in simulation or general-purpose Assertion Based Verification (ABV), that are hard to resolve for a simulator or unguided ABV tool, whose propensity for occurrence increases with SOC size and integration complexity, and that are often insidious or hard to isolate. Such failure modes are a common cause of respins and redesign, with the result that sign-off and bug-hunting for them based on solution-oriented tools has become ubiquitous in the design community.

Good examples are failures caused by untimed paths on an SOC, common sources of which are asynchronous clock-domain crossings, interacting reset domains and Static Timing Analysis (STA) exceptions. It has become common practice to address these scenarios using solution-oriented verification tools.

In the absence of recent advances by EDA companies in developing solution-oriented verification tools, SOC design houses would have been reliant on in-house design verification (DV) specialists to develop and maintain homegrown strategies for complex failure modes. In the new paradigm, the bias has shifted back toward the generalist DV engineer with the heavy lifting being done by an EDA tool. The salutary outcome of this trend for design houses is that the verification of SOCs for these complex failures is now more accessible, more automatic, more robust, and cheaper.

My Conclusions

It is hard to disagree with the comments of my interlocutors.  Everything said is true.  But I think they have been too kind and just answered the questions without objecting to their limitations.  In fact, the way to simplify verification is to improve the way circuits are designed.  What is missing from design methodology is validation of what has been implemented before it is deemed ready for verification.  Designers are so pressed for time, due to design complexity and short schedules, that they must find ways to cut corners.  They reuse whenever possible and rely on their experience to assume that the circuit does what it is supposed to do.  Unfortunately, in most cases where a bug is found during design integration, they have neglected to check that the circuit does not do what it is not supposed to do.  That is not always the fault of EDA tools.  The most glaring example is the choice by the electronics industry to use Verilog over VHDL.  VHDL is a much more robust language, with built-in checks that exclude design errors that can be made using Verilog.  But VHDL takes longer to write, and design engineers decided that schedule time took precedence over error avoidance.

The issue is always the same, no matter how simple or complex the design is: the individual self-assurance that he or she knows what he or she is doing.  The way to make designs easier to verify is to create them better.  That means that the design should be semantically correct and that the implementation of all required features should be completely validated by the designers themselves before handing the design to a verification engineer.

I do not think that I have just demanded that a design engineer also be a verification engineer.  What may be required is a UDM: a Unified Design Methodology.  The industry is, maybe unconsciously, already moving in that direction in two ways: the increased use of third-party IP and the increasing volume of Design Rules published by each foundry.  I can see these two trends growing stronger with each successive technology iteration: it is time to stop ignoring them.

Security Issues in IoT

Tuesday, September 27th, 2016

Gabe Moretti, Senior Editor

Security is one of the essential ingredients of the success of IoT architectures.  There are two major sides to security: data security and functional security.  Data security refers to preventing the data contained in a system from being appropriated illegally.  Functional security refers to preventing an outside actor from making a system function in a manner it was not intended to.  Architects must prepare for both eventualities when designing a system, devising ways to intercept such attacks or to isolate the system from such actors.

Data Protection

The major threat to data contained in a system is the always-connected model.  Many systems today are always connected to the internet, whether or not they need to be.  Connection to the internet is the default state for the vast majority of computing systems, whether or not what is being executed requires such a connection.  For example, the system on which I am typing this article is connected to the internet while I am using Microsoft Word: it does not need to be, since all I need is already installed or saved on the local system.  It is clear that we need to architect modems that connect when needed and that use “not connected” as the default state.  This will significantly diminish the opportunity for an attacking agent to upload malware or appropriate data stored on the system.  Firewalls are fine, but clearly they can be defeated, as demonstrated in many highly publicized cases.

Encryption is another useful tool for data protection.  Data stored in a system should always be encrypted.  Using the data should require dynamic decryption and re-encryption as the data is used locally or transmitted to another system.  The resulting execution overhead is a very small price to pay given the execution speed of modern systems.
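As a minimal sketch of this encrypt-at-rest, decrypt-only-in-use pattern, here is a Python example built on the third-party cryptography package; the library choice and the sample record are my assumptions for illustration, not a requirement of any of the systems discussed below.

    # pip install cryptography   (third-party package, assumed here purely for illustration)
    from cryptography.fernet import Fernet

    # In a real device the key would live in secure storage (OTP, a secure element),
    # never alongside the data it protects.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Data is encrypted before it is written to local storage...
    record = b"thermostat schedule: 21C weekdays, 18C nights"
    stored_blob = cipher.encrypt(record)

    # ...and decrypted only transiently, at the moment it is actually used.
    in_use = cipher.decrypt(stored_blob)
    assert in_use == record

    # Anything sent to another node travels in its encrypted form.
    print("ciphertext length:", len(stored_blob), "bytes")

The encrypt/decrypt pair is exactly the small overhead referred to above, and it buys the property that anything read directly from storage, or intercepted in transit, is ciphertext.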

I asked Lee Sun, Field Application Engineer at Kilopass Technology, what his company is doing to ensure security.  Here is what he said:

“Designers of chips for secure networks have begun to conclude that the antifuse one-time programmable (OTP) technology is one of the most secure embedded non-volatile memory (eNVM) technologies available today.  Antifuse OTP is a configurable on-chip memory implemented in standard CMOS logic with no additional masks or processing steps. Most importantly, antifuse OTP offers exceptional data protection because information stored in an antifuse bitcell provides virtually no evidence of the content inside. The bitcell does not store a charge, which means there’s no electrical state of the memory bit cell. The programming of the bitcell is beneath the gate oxide so the breakdown is not visible with SEM imaging. This protection at the physical layer prevents the antifuse eNVM from being hacked by invasive or semi-invasive attacks. Additional logic is available with the Kilopass IP to prevent passive attacks such as voltage tampering, glitching or differential power analysis. To date, there have been no reports of any successful attempts to extract the contents of an antifuse OTP using any of these techniques.”

Of course, such a level of protection comes at a price, but a large part of IoT systems do not need to store data locally.  For example, a home management system that controls temperature, lighting, intrusion prevention, food management and more uses a number of computing devices that can be controlled by a central system.  Only the central node needs to have data protection, and commands to the subordinate nodes can be encrypted.
Robert Murphy, System Engineer at Cypress Semiconductor, addressed the problem by recommending a secure processor or MCU.  He explained:

“These are general purpose or fixed function devices, used in applications such as home automation or mobile payment authentication. They provide Digital Security functions and are occasionally encapsulated in a package that offers Physical Security. Because various applications require different levels of security, the FIPS 140-2 standard was created to put security standards and requirements in place for hardware and software products. FIPS 140-2 provides third-party assurance of security claims, ranging from level 1 through level 4. Certification for systems can be obtained through test facilities that have been certified by the National Institute of Standards and Technology.

Securing data in the system is mostly accomplished through Digital Security, which includes a combination of cryptography and memory protection. Cryptography secures a system through confidentiality, integrity, non-repudiation and authentication. Confidentiality refers to keeping a message secret from all but authorized parties through the use of cryptographic ciphers that employ the latest symmetric (secret-key) and asymmetric (public-key) standards.  Integrity ensures that a message has not been modified during transfer through the use of a hash function such as SHA. Non-repudiation is the process by which the recipient of a message is assured that the message came from the stated sender, through the use of asymmetric encryption. Authentication provides confirmation that the message came from the expected sender. Authentication can be addressed using either a Message Authentication Code (MAC), which relies on symmetric encryption and provides message integrity; or a digital signature, which relies on asymmetric encryption and provides both message integrity and non-repudiation. Attackers will attempt to circumvent cryptography through brute-force attacks or through side-channel attacks. Considering that cryptography is centered around symmetric or asymmetric keys, protecting those keys from being altered or extracted is critical. This is where Secure MCUs utilize Physical Security and the detection methods described earlier. For added security, devices can incorporate a Physical Unclonable Function (PUF). These are circuits that rely on the uniqueness of the physical microarchitecture of a device that is inherent to the manufacturing process.  This uniqueness can then be applied to cryptographic functions, such as key generation.
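To illustrate the MAC option Murphy describes, here is a short Python sketch using the standard library’s hmac and hashlib modules; the shared key and the messages are, of course, made up for the example.

    import hashlib
    import hmac
    import secrets

    # A symmetric key shared by the two endpoints, e.g. provisioned at manufacture.
    shared_key = secrets.token_bytes(32)

    def mac_message(key: bytes, message: bytes) -> bytes:
        """Sender side: compute an HMAC-SHA256 tag so the receiver can check the message."""
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
        """Receiver side: recompute the tag and compare in constant time."""
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    msg = b"unlock front door"
    tag = mac_message(shared_key, msg)

    print(verify_message(shared_key, msg, tag))                  # True: authentic and intact
    print(verify_message(shared_key, b"unlock ALL doors", tag))  # False: message was altered

As Murphy notes, a MAC gives integrity and authentication between parties that already share the key; only a digital signature adds non-repudiation, because the signing key never leaves the sender. His explanation of memory protection follows: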

Memory protection is comprised of several aspects, which work in layers. The first layer is JTAG and SWD access to the device. This is the mechanism used to program the device initially and, if left exposed, can be used to reveal memory contents and other critical information. Therefore, it is important to disable or lock out this interface before deploying a system to production. A Secure MCU can permanently remove the interface through the use of fuse bits, which are one-time programmable (OTP) memory where an unblown bit is represented by a logical value of zero and a blown bit is represented by a logical value of one.  The next layer is the Secure Boot process. As discussed earlier, Secure Boot consists of a Root-of-Trust that verifies the integrity of the device firmware and can prevent uncertified firmware from having a chance to execute. Since the root of trust cannot be modified by firmware, it is immune to malicious firmware.  Next are Memory Protection Units (MPUs), which are hardware blocks in a device that are used to control access rights to areas of device memory. For example, using MPUs, a Secure MCU can limit access to crypto key storage. In the event an attack can circumvent the secure boot procedure or initiate a software attack through a communication interface, MPUs can limit the resources that the firmware has access to.
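The Secure Boot layer in that list reduces to a signature check performed before any firmware is allowed to run. Below is a deliberately simplified Python sketch of the idea using the cryptography package’s Ed25519 primitives; the key handling, firmware blob and function names are hypothetical, and on a real MCU this check is done by ROM code with the public key, or its hash, anchored in OTP storage.

    # pip install cryptography   (third-party package, assumed here purely for illustration)
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Manufacturer side: sign the firmware image at release time.
    manufacturer_key = Ed25519PrivateKey.generate()
    firmware_image = b"\x7fELF...application firmware v1.2..."   # stand-in for a real image
    signature = manufacturer_key.sign(firmware_image)

    # Device side: the boot code holds only the public key, the root of trust.
    trusted_public_key = manufacturer_key.public_key()

    def secure_boot(image: bytes, sig: bytes) -> None:
        """Run the image only if its signature verifies against the root of trust."""
        try:
            trusted_public_key.verify(sig, image)
        except InvalidSignature:
            print("boot halted: firmware signature invalid")
            return
        print("signature OK, jumping to firmware")

    secure_boot(firmware_image, signature)             # boots
    secure_boot(firmware_image + b"\x90", signature)   # tampered image: boot halts

Murphy concluded: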

When employing any security solution, identify what needs to be secured and focus on that. Otherwise, you run the risk of creating a backdoor. By implementing these Digital Security functions, and layering them under a solid Physical Security solution, one can have a reasonable level of confidence that the data in the system is secured.”

Robert Gates, Chief Safety Officer, ESD, at Mentor Graphics, described the data security requirements in automotive systems.

“Critical data in the automotive context will be handled in a similar way. A Trusted Execution Environment will be established to define and enforce policies for data that must be secured, which will apply to reading sensitive data, creating new sensitive data, and overwriting this data. This data will take on several forms; customer information such as financial and location information, as well as information that is managed by the manufacturer such as calibration settings and other forms of data.”

Inhibiting alien functionality

The collective security of a system requires that it uses the correct data and performs the required functions.  The first requirement is the one most often covered and understood by the public at large, but the second one is equally important with consequences just as negative.

Angela Raucher, Product Line Manager, ARC EM Processors at Synopsys, summarized the problem and required solution this way: “With security there is no magic bullet, however taking care to include as many security features as is practical will help extend the amount of time and effort required by potential hackers to the point where it isn’t feasible for them to perform their attacks.”  Ms. Raucher continued: “One method to protect highly sensitive operations such as cryptography and generation of secret keys is to add a separate secure core, such as an ARC SEM security processor, with its own memory resources, that can perform operations in a tamper-resistant isolated environment (see Figure 1).  The ARC SEM processor uses multiple privilege levels of access control, a bus state signal denoting whether the processor is in a secure mode, and a memory protection unit that can allocate and protect memory regions based on the privilege level to separate the trusted and non-trusted worlds.  For the case of ARC SecureShield technology, there is also a unique feature enabling each memory region to be scrambled or encrypted independently, offering an additional layer of protection.”

Figure 1. Trusted Execution Environment using ARC SecureShield Technology

Robert Gates from Mentor takes the same direction and points out that “The root of trust starts with the microprocessor, which will generally have some kind of secure storage capabilities such as ARM® TrustZone or Intel Secure Guard Extensions (SGX). This secure storage, which is part of a trusted hardware component (an important topic in itself beyond the scope of the current discussion) contains the signature of a boot-loader (placed in secure storage by the device manufacturer) and the crypto key to be used to unlock and enable its operation; assuming these are as expected by the microprocessor, the second stage loader is allowed to execute, establishing it as a trusted component. A similar exchange occurs between the loader and the operating system on its boot-up (whether this is an RTOS or a more fully featured OS like Linux or Android), establishing the kernel as a trusted component (Figure 2).”

Figure 2. Establishing the kernel as a trusted component

Robert Murphy, System Engineer at Cypress, points out that there are ways to steal information without executing code.  “Non-invasive attacks are the simplest and most inexpensive to perform; however, they often can be difficult to detect as they leave no tamper evidence. Side-channel attacks, the most common type, consist of Simple Power Analysis (SPA), Differential Power Analysis (DPA) and Electromagnetic Analysis (EMA). SPA and DPA attacks are effective at determining information about general functionality or cryptographic functions, as the power consumed by a processor or MCU varies based on the operation being performed. For example, the squaring and multiplication operations of RSA encryption exhibit different power profiles and can be distinguished using an oscilloscope. Similarly, with EMA an attacker can reach the same outcome by studying the electromagnetic radiation from a device. Due to the passive nature of these attacks, countermeasures are fairly open-loop.  EMA attacks can be prevented through proper shielding. DPA and SPA attacks can be prevented by increasing the amount of power supply noise, varying algorithm timing and randomly inducing instructions, such as NOPs, that have no effect on the algorithm but impact power consumption.”
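As a toy illustration of the “randomly inducing instructions” countermeasure, the Python sketch below hides a secret-dependent operation behind a random amount of dummy work so that successive executions do not line up in time. Everything here is invented for illustration; real countermeasures live in hardware or hand-tuned firmware, and random delays only raise the number of traces an attacker needs rather than eliminate leakage.

    import hashlib
    import secrets

    def random_dummy_work(max_ops: int = 32) -> None:
        """Hiding countermeasure (toy): a random number of do-nothing operations, so the
        sensitive work below does not sit at the same point in time on every execution."""
        for _ in range(secrets.randbelow(max_ops)):
            _ = 0x5A ^ 0xA5   # dummy operation standing in for a NOP

    def sensitive_operation(key: bytes, message: bytes) -> bytes:
        random_dummy_work()                               # random pre-delay
        digest = hashlib.sha256(key + message).digest()   # the secret-dependent work
        random_dummy_work()                               # random post-delay
        return digest

    key = secrets.token_bytes(16)
    print(sensitive_operation(key, b"sensor reading 42").hex())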

Developing a secure system

Both Imperas and OneSpin Solutions have pointed out to me that it is important to check the level of security of a system while the system is being developed.

David Kelf, Vice President of Marketing at OneSpin Solutions, observed that “Many common hardware vulnerabilities take the form of enabling an unexpected path for data transfer or operational control. This can happen through the regular operational conduits of a design, or ancillary components such as the scan path, debug interfaces or an unexpected memory transfer. Even the power rails in a device may be manipulated to track private data access. The only sensible way to verify that such a path does not exist is to check every state that various blocks can get into and make sure that none of these states allows either a confidentiality breach or an operational integrity problem.

An ideal technology for this purpose is Formal Verification, which allows questions to be asked of a design, such as “can this private key ever get to an output except the one intended” and have this question matched against all possible design states. Indeed, Formal is now being used for this purpose in designs that require a high degree of security.”
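A small Python sketch makes “check every state” concrete: explore every reachable state of a toy block and ask whether any of them exposes a key on an unintended output. The state machine and the property are invented for illustration; a formal tool works on the actual RTL and proves the property over all states symbolically instead of enumerating them.

    from collections import deque

    # Toy block: a state is (mode, bus_source). The security property: the bus must
    # never carry "key" unless the mode is "crypto_internal".
    INITIAL = ("idle", "zero")

    def next_states(state):
        """Explicit-state model of the block; invented for illustration."""
        mode, bus = state
        if mode == "idle":
            return [("crypto_internal", "key"), ("debug", bus)]
        if mode == "crypto_internal":
            return [("idle", "zero"), ("debug", bus)]   # bug: the key can linger on the bus
        if mode == "debug":
            return [("idle", "zero")]
        return []

    def violates_property(state):
        mode, bus = state
        return bus == "key" and mode != "crypto_internal"

    # Breadth-first exploration of every reachable state.
    seen, frontier = {INITIAL}, deque([INITIAL])
    while frontier:
        state = frontier.popleft()
        if violates_property(state):
            print("counterexample: key visible in state", state)
            break
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    else:
        print("property holds over all", len(seen), "reachable states")

Running the sketch prints a counterexample state, which is exactly the kind of witness a formal tool returns when a security property fails.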

Larry Lapides, VP of Sales at Imperas, remarks that the use of virtual platforms can significantly contribute to finding and fixing possible vulnerabilities in a system.  “The virtual platform based software development tools allow the running of the exact executables that would run on the hardware, but with additional controllability, observability and determinism not available when using the hardware platforms.

Among the approaches being used for security, hypervisors show excellent initial results.  Hypervisors are a layer of software that sits between the processor and the operating system and applications.  A hypervisor allows guest virtual machines (VMs) to be configured on the hardware, with each VM isolated from the others.  This isolation enables one or more guest VMs to run secure operating systems or secure applications.

Hypervisors have become increasingly common in different sectors in the industry: mil-aero, factory automation, automotive, artificial intelligence and the IoT. Embedded hypervisors face evolving demands for isolation, robustness and security in the face of more diverse and complex hardware, while at the same time needing to have minimal power and performance overhead.”

Imperas is a founding member of the prpl Foundation Security Working Group that has brought together companies and individuals with expertise in various aspects of embedded systems and security to document current best practices and derive recommended new security practices for embedded systems. Learn more at https://prplfoundation.org.

Conclusion

Due to the scope of the subject, this article is only an overview.  Follow-up articles will present more details on the matters covered.

The EDA Industry Macro Projections for 2016

Monday, January 25th, 2016

Gabe Moretti, Senior Editor

How the EDA industry will fare in 2016 will be influenced by the worldwide financial climate. Instability in oil prices, the Middle East wars and the unpredictability of the Chinese market will indirectly influence the EDA industry.  EDA has seen significant growth since 1996, but the growth is indirectly influenced by the overall health of the financial community (see Figure 1).

Figure 1. EDA Quarterly Revenue Report from EDA Consortium

China has been a growing market for EDA tools, and Chinese consumers have purchased a significant number of semiconductor-based products in the recent past.  Consumer products demand is slowing, and China’s financial health is being questioned.  The result is that demand for EDA tools may be lower than in 2015.   I have received so many forecasts for 2016 that I have decided to break the subject into two articles.  The first article will cover the macro aspects, while the second will focus more on specific tools and market segments.

Economy and Technology

EDA itself is changing.  Here is what Bob Smith, executive director of the EDA Consortium, has to say:

“Cooperation and competition will be the watchwords for 2016 in our industry. The ecosystem and all the players are responsible for driving designs into the semiconductor manufacturing ecosystem. Success is highly dependent on traditional EDA, but we are realizing that there are many other critical components, including semiconductor IP, embedded software and advanced packaging such as 3D-IC. In other words, our industry is a “design ecosystem” feeding the manufacturing sector. The various players in our ecosystem are realizing that we can and should work together to increase the collective growth of our industry. Expect to see industry organizations serving as the intermediaries to bring these various constituents together.”

Bob Smith’s words acknowledge that the term “system” has taken on a new meaning in EDA.  We are no longer talking about developing a hardware system, or even a hardware/software system.  A system today includes digital and analog hardware, software both at the system and application level, MEMS, third-party IP, and connectivity and co-execution with other systems.  EDA vendors are morphing in order to accommodate these new requirements.  Change is difficult because it implies errors as well as successes, and 2016 will be a year of changes.

Lucio Lanza, managing director of Lanza techVentures and a recipient of the Phil Kaufman award, describes it this way:

“We’ve gone from computers talking to each other to an era of PCs connecting people using PCs. Today, the connections of people and devices seem irrelevant. As we move to the Internet of Things, things will get connected to other things and won’t go through people. In fact, I call it the World of Things not IoT and the implications are vast for EDA, the semiconductor industry and society. The EDA community has been the enabler for this connected phenomenon. We now have a rare opportunity to be more creative in our thinking about where the technology is going and how we can assist in getting there in a positive and meaningful way.”

Ranjit Adhikary, director of Marketing at Cliosoft, acknowledges the growing need for tools integration in his remarks:

“The world is currently undergoing a quiet revolution akin to the dot com boom in the late 1990s. There has been a growing effort to slowly but surely provide connectivity between various physical objects and enable them to share and exchange data and manage the devices using smartphones. The labors of these efforts have started to bear fruit and we can see that in the automotive and consumables industries. What this implies from a semiconductor standpoint is that the number of shipments of analog and RF ICs will grow at a remarkable pace and there will be increased efforts from design companies to have digital, analog and RF components in the same SoC. From an EDA standpoint, different players will also collaborate to share the same databases. An example of this would be Keysight Technologies and Cadence Design Systems on OpenAccess libraries. Design companies will seek to improve the design methodologies and increase the use of IPs to ensure a faster turnaround time for SoCs. From an infrastructure standpoint a growing number of design companies will invest more in the design data and IP management to ensure better design collaboration between design teams located at geographically dispersed locations as well as to maximize their resources.”

Michiel Ligthart, president and chief operating officer at Verific Design Automation, points to the need to integrate tools from various sources to achieve the most effective design flow:

“One of the more interesting trends Verific has observed over the last five years is the differentiation strategy adopted by a variety of large and small CAD departments. Single-vendor tool flows do not meet all requirements. Instead, IDMs outline their needs and devise their own design and verification flow to improve over their competition. That trend will only become more pronounced in 2016.”

New and Expanding Markets

The focus toward IoT applications has opened up new markets as well as expanded existing ones.  For example, the automotive market is looking at new functionalities in both in-car and car-to-car applications.

Raik Brinkmann, president and chief executive officer at OneSpin Solutions, wrote:

“OneSpin Solutions has witnessed the push toward automotive safety for more than two years. Demand will further increase as designers learn how to apply the ISO26262 standard. I’m not sure that security will come to the forefront in 2016 because there are no standards as yet and ad hoc approaches will dominate. However, the pressure for security standards will be high, just as ISO26262 was for automotive.”

Michael Buehler-Garcia, Senior Director of Marketing, Mentor Graphics Calibre Design Solutions, notes that many established process nodes now thought of as obsolete will instead see increased volume due to the technologies required to implement IoT architectures.

“As cutting-edge process nodes entail ever higher non-recurring engineering (NRE) costs, ‘More than Moore’ technologies are moving from the “press release” stage to broader adoption. One consequence of this adoption has been a renewed interest in more established processes. Historical older process node users, such as analog design, RFCMOS, and microelectromechanical systems (MEMS), are now being joined by silicon photonics, standalone radios, and standalone memory controllers as part of a 3D-IC implementation. In addition, the Internet of Things (IoT) functionality we crave is being driven by a “milli-cents for nano-acres of silicon,” which aligns with the increase in designs targeted for established nodes (130 nm and older). New physical verification techniques developed for advanced nodes can simplify life for design companies working at established nodes by reducing the dependency on human intervention. In 2016, we expect to see more adoption of advanced software solutions such as reliability checking, pattern matching, “smart” fill, advanced extraction solutions, “chip out” package assembly verification, and waiver processing to help IC designers implement more complex designs on established nodes. We also foresee this renewed interest in established nodes driving tighter capacity access, which in turn will drive increased use of design optimization techniques, such as DFM scoring, filling analysis, and critical area analysis, to help maximize the robustness of designs in established nodes.”

Warren Kurisu, Director of Product Management, Mentor Graphics Embedded Systems Division, points to wearables, another sector within the IoT market, as an opportunity for expansion.

“We are seeing multiple trends. Wearables are increasing in functionality and complexity enabled by the availability of advanced low-power heterogeneous multicore architectures and the availability of power management tools. The IoT continues to gain momentum as we are now seeing a heavier demand for intelligent, customizable IoT gateways. Further, the emergence of IoT 2.0 has placed a new emphasis on end-to-end security from the cloud and gateway right down to the edge device.”

Power management is one of the areas that has seen significant attention on the part of EDA vendors.  But not much has been said about battery technology.  Shreefal Mehta, president and CEO of Paper Battery Company, offered the following observations.

“The year 2016 will be the year we see tremendous advances in energy storage and management.   The gap between the rate of growth of our electronic devices and the battery energy that fuels them will increase to a tipping point.   On average, battery energy density has only grown 12% while electronic capabilities have more than doubled annually.  The need for increased energy and power density will be a major trend in 2016.  More energy-efficient processors and sensors will be deployed into the market, requiring smaller, safer, longer-lasting and higher-performing energy sources. Today’s batteries won’t cut it.

Wireless devices and sensors that need pulses of peak power to transmit, compute and/or perform analog functions will continue to create a tension between the need for peak power pulses and long energy cycles. For example, cell phone transmission and Bluetooth peripherals are, as a whole, low power but the peak power requirements are several orders of magnitude greater than the average power consumption.  Hence, new, hybrid power solutions will begin to emerge, especially where energy-efficient delivery is needed with peak power and as the ratio of average to peak grows significantly.

Traditional batteries will continue to improve in offering higher energy at lower prices, but current lithium ion will reach a limit in the balance between energy and power in a single cell with new materials and nanostructure electrodes being needed to provide high power and energy.  This situation is aggravated by the push towards physically smaller form factors where energy and power densities diverge significantly. Current efforts in various companies and universities are promising but will take a few more years to bring to market.

The supercapacitor market is poised for growth in 2016, with an expected CAGR of 19% through 2020.  Driven by the need for more efficient form factors, high energy density and peak power performance, a new form of supercapacitor will power the ever-increasing demands of portable electronics. The hybrid supercapacitor is the bridge between high-energy batteries and high-power supercapacitors. Because these devices offer higher energy than traditional supercapacitors and higher power than batteries, they may either be used in conjunction with battery systems or replace them entirely. Given the way we use our smartphones, supercapacitors will find a good use model there, as well as in applications ranging from transportation to enterprise storage.

Memory subsystems in smartphones and tablets containing solid state drives (SSDs) will increasingly adopt architectures that manage a non-volatile cache in a manner that preserves content in the event of power failure. These devices handle large amounts of video, and the media data will be stored in RAM (backed by flash), which allows frequent overwrites in these mobile devices without the wear-out degradation that would significantly reduce the life of the flash memory if it were used for all storage. To meet the data integrity concerns of this shadowed memory, supercapacitors will take a prominent role in supplying bridge power in the event of an energy-depleted battery, thereby adding significant value and performance to mobile entertainment and computing devices.

Finally, safety issues with lithium-ion batteries have become front and center and will continue to plague the industry and manufacturing environments.  Flaming hoverboards and shipment and air-travel restrictions on lithium batteries render the future of personal battery power questionable. Improved testing and more regulation will come to pass; however, because of the widespread use of battery-powered devices, safety will remain a key factor.   What we will see in 2016 is the emergence of the hybrid supercapacitor, which offers a high-capacity, power-efficient alternative to lithium batteries. These devices can operate over a wide temperature range, have long cycle lives and – most importantly – are safe.”

Greg Schmergel, CEO, Founder and President of memory-maker Nantero, Inc., points out that, just as new power storage devices will open new opportunities, so will new memory devices.

“With the traditional memories, DRAM and flash, nearing the end of the scaling roadmap, new memories will emerge and change memory from a standard commodity to a potentially powerful competitive advantage.  As an example, NRAM products such as multi-GB high-speed DDR4-compatible nonvolatile standalone memories are already being designed, giving new options to designers who can take advantage of the combination of nonvolatility, high speed, high density and low power.  The emergence of next-generation nonvolatile memory which is faster than flash will enable new and creative systems architectures to be created which will provide substantial customer value.”

Jin Zhang, Vice President of Marketing and Customer Relations at Oski Technology, is of the opinion that the formal methods sector is an excellent prospect to increase the EDA market.

“Formal verification adoption is growing rapidly worldwide and that will continue into 2016. Not surprisingly, the U.S. market leads the way, with China a close second. Usage is especially apparent in China, where heavy investment has been made in the semiconductor industry, particularly in CPU designs. Many companies are starting to build internal formal groups. Chinese project teams are discovering the benefits of improving design quality using a Formal Sign-off Methodology.”

These market forces are fueling the growth of specific design areas that are supported by EDA tools.  In the companion article some of these areas will be discussed.

The Various Faces of IP Modeling

Friday, January 23rd, 2015

Gabe Moretti, Senior Editor

Given their complexity, the vast majority of today’s SoC designs contain a large number of third-party IP components.  These can be developed outside the company or by another division of the same company.  In either case they present the same type of obstacle to easy integration and require one or more types of models in order to minimize the integration cost in the final design.

Generally one thinks of models when talking about verification, but in fact, as Frank Schirrmeister, Product Marketing Group Director at Cadence, reminded me, there are three major purposes for modeling IP cores.  Each purpose requires different models.  In fact, Bernard Murphy, Chief Technology Officer at Atrenta, identified even more uses of models during our interview.

Frank Schirrmeister listed performance analysis, functional verification, and software development support as the three major uses of IP models.

Performance Analysis

Frank points out that one of the activities performed during this type of analysis is examining the interconnect between the IP and the rest of the system.  This activity does not require a complete model of the IP.  Cadence’s Interconnect Workbench creates the model of the component interconnect by running different scenarios against the RT-level model of the IP.  Given the size of an RTL simulation of this scope, a tool like Palladium is clearly needed.  So to analyze, for example, an ARM AMBA interconnect, engineers will use simulations representing what the traffic of a peripheral may be and what the typical processor load may be, and apply the resulting behavior models to the details of the interconnect to analyze the performance of the system.

Drew Wingard, CTO at Sonics, remarked that “From the perspective of modeling on-chip network IP, Sonics separates functional verification from performance verification. The model of on-chip network IP is much more useful in a performance verification environment because in functional verification the network is typically abstracted to its address map. Sonics’ verification engineers develop cycle-accurate SystemC models for all of our IP to enable rapid performance analysis and validation.

For purposes of SoC performance verification, the on-chip network IP model cannot be a true black box because it is highly configurable. In the performance verification loop, it is very useful to have access to some of the network’s internal observation points. Sonics IP models include published observation points to enable customers to look at, for example, arbitration behaviors and queuing behaviors so they can effectively debug their SoC design.  Sonics also supports the capability to ‘freeze’ the on-chip network IP model which turns it into a configured black box as part of a larger simulation model. This is useful in the case where a semiconductor company wants to distribute a performance model of its chip to a system company for evaluation.”

Bernard Murphy, Chief Technology Officer at Atrenta, noted: “Hierarchical timing modeling is widely used on large designs, but cannot comprehensively cover timing exceptions which may extend beyond the IP. So you have to go back to the implementation model.”  Standards, of course, make engineers’ jobs easier.  He continued: “SDC for constraints and ILM for timing abstraction are probably largely fine as-is (apart from continuing refinements to deal with shrinking geometries).”

Functional Verification

Tom De Schutter, Senior Product Marketing Manager, Virtualizer – VDK, at Synopsys, said that “the creation of a transaction-level model (TLM) representing commercial IP has become a well-accepted practice. In many cases these transaction-level models are being used as the golden reference for the IP along with a verification test suite based on the model. The test suite and the model are then used to verify the correct functionality of the IP.  SystemC TLM-2.0 has become the standard way of creating such models. Most commonly a SystemC TLM-2.0 LT (Loosely Timed) model is created as reference model for the IP, to help pull in software development and to speed up verification in the context of a system.”
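
As a rough illustration of what such a reference model looks like in practice, here is a minimal SystemC TLM-2.0 LT target for a hypothetical two-register peripheral. The register map, names and 10 ns access delay are invented for the example; a vendor’s golden model would add full register semantics, side effects and a verification test suite around it.

    // Minimal sketch of a SystemC TLM-2.0 LT target modeling a hypothetical
    // two-register peripheral. Names and the memory map are illustrative only.
    #include <cstdint>
    #include <cstring>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    struct SimpleIpModel : sc_core::sc_module {
        tlm_utils::simple_target_socket<SimpleIpModel> socket;
        uint32_t ctrl_reg   = 0;  // offset 0x0, hypothetical control register
        uint32_t status_reg = 1;  // offset 0x4, hypothetical "ready" status

        SC_CTOR(SimpleIpModel) : socket("socket") {
            socket.register_b_transport(this, &SimpleIpModel::b_transport);
        }

        // Loosely-timed blocking transport: decode the address, move the data,
        // and add a nominal access delay.
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            uint32_t* reg = (trans.get_address() == 0x0) ? &ctrl_reg : &status_reg;
            if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
                std::memcpy(reg, trans.get_data_ptr(), sizeof(uint32_t));
            else if (trans.get_command() == tlm::TLM_READ_COMMAND)
                std::memcpy(trans.get_data_ptr(), reg, sizeof(uint32_t));
            delay += sc_core::sc_time(10, sc_core::SC_NS);  // nominal access time
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    int sc_main(int, char*[]) {
        SimpleIpModel ip("ip");  // an initiator (CPU model or testbench) would drive ip.socket
        sc_core::sc_start();
        return 0;
    }

In a real flow the same module would sit behind a processor model or a testbench initiator, so that one reference model serves both early software development and verification of the RTL against it.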

Frank Schirrmeister noted that verification requires the definition of the IP at the IP-XACT level to drive the different verification scenarios.  Cadence’s Interconnect Workbench generates the appropriate RTL models from a description of the architecture of the interconnects.

IEEE 1685, “Standard for IP-XACT, Standard Structure for Packaging, Integrating and Re-Using IP Within Tool-Flows,” describes an XML Schema for meta-data documenting Intellectual Property (IP) used in the development, implementation and verification of electronic systems and an Application Programming Interface (API) to provide tool access to the meta-data. This schema provides a standard method to document IP that is compatible with automated integration techniques. The API provides a standard method for linking tools into a System Development framework, enabling a more flexible, optimized development environment. Tools compliant with this standard will be able to interpret, configure, integrate and manipulate IP blocks that comply with the proposed IP meta-data description.

David Kelf, Vice President of Marketing at OneSpin Solutions, said: “A key trend for both design and verification IP is the increased configurability required by designers. Many IP vendors have responded to this need through the application of abstraction in their IP models and synthesis to generate the required end code. This, in turn, has increased the use of languages such as SystemC and of High-Level Synthesis – AdaptIP is an example of a company doing this – which enable a broad range of configuration options as well as tailoring for specific end devices. As this level of configuration increases, together with synthesis, the verification requirements of these models also change. It is vital that the final model to be used matches the original pre-configured source that will have been thoroughly verified by the IP vendor. This in turn drives the use of a range of verification methods, and Equivalency Checking (EC) is a critical technology in this regard. A new breed of EC tools is necessary for this purpose, able to process multiple languages at higher levels of abstraction and deal with the various synthesis optimizations applied to the block.  As such, advanced IP configuration requirements have an effect across many tools and design flows.”

Bernard Murphy pointed out that “Assertions are in a very real sense an abstracted model of an IP. These are quite important in formal analyses and also in quality/coverage analysis at the full-chip level.  There is the SVA standard for assertions, but beyond that there is a wide range of expressions, from very complex assertions to quite simple ones, with no real bounds on complexity, scope, etc. It may be too early to suggest any additional standards.”

Software Development

Tom De Schutter pointed out that “As SystemC TLM-2.0 LT has been accepted by IP providers as the standard, it has become a lot easier to assemble systems using models from different sources. The resulting model is called a virtual prototype and enables early software development alongside the hardware design task. Virtual prototypes have also become a way to speed up verification, either of a specific custom IP under test or of an entire system setup. In both scenarios the virtual prototype is used to speed up software execution as part of a so-called software-driven verification effort.

A model is typically provided as a configurable executable, thus avoiding the risk of creating an illegal copy of the IP functionality. The IP vendor can decide the internal visibility and typically limits it to whatever is required to enable software development, which usually means visibility into certain registers and memories.”

Frank Schirrmeister pointed out that these models are hard to create or, if they exist, may be hard to obtain.  Pure virtual models like ARM Fast Models connected to TLM models can be used to obtain a fast simulation of a system boot.  Hybrid use models serve developers of lower-level software, such as drivers. To build a software development environment, engineers will use, for example, an ARM Fast Model and plug in the actual RTL connected to a transactor to enable driver development.  ARM Fast Models connected with, say, a graphics subsystem running in emulation on a Palladium system are an example of such an environment.

ARM Fast Models are virtual platforms used mostly by software developers without the need for expensive development boards.  They also comply with the TLM-2.0 interface specification for integration with other components in the system simulation.

Other Modeling Requirements

Although there are three main modeling requirements, complex IP components require further analysis in order to be used in designs implemented in advanced processes.  A discussion with Steve Brown, Product Marketing Director, IP Group at Cadence, covered power analysis requirements.  Steve’s observations can be summed up thus: “For power analysis, designers need power consumption information during the IP selection process.  How does the IP match the design criteria, and how does it differentiate itself from other IP with respect to power use?  Here engineers even need SPICE models to understand how I/O signals work.  Signal integrity is crucial in integrating the IP into the whole system.”

Bernard Murphy added: “Power intent (UPF) is one component, but what about power estimation? Right now we can only run slow emulations of the full-chip implementation, then roll the results up into a power calculation.  Although we have UPF as a standard, estimation is in its early stages. IEEE 1801 (UPF) is working on extensions.  Also, there are two emerging activities – P2415 and P2416 – working respectively on energy proportionality modeling at the system level and modeling at the chip/IP level.”

IP Marketplace, a recently introduced web portal from eSilicon, makes power estimation of a particular IP over a range of processes very easy and quick.  “The IP MarketPlace environment helps users avoid complicated paperwork; find which memories will best help meet their chip’s power, performance or area (PPA) targets; and easily isolate key data without navigating convoluted data sheets” said Lisa Minwell, eSilicon’s senior director of IP product marketing.

Brad Griffin, Product Marketing Director, Sigrity Technology at Cadence, talked about the physical problems that can arise during integration, especially where memories are concerned.  “The PHY and the controller can come from the same vendor or from different ones.  The problem is to get the signal integrity and power integrity required for a particular PHY.  So, for example, a cell phone using an LPDDR4 interface on a 64-bit bus means a lot of simultaneous switching.  So IP vendors, including Cadence of course, provide IBIS models.  But Cadence goes beyond that.  We have created virtual reference designs, and using the Sigrity technology we can simulate and show that we can match the actual reference design.  And then the designer can also evaluate types of chip package and choose the correct one.  It is important to be able to simulate the chip, the package, and the board together, and Cadence can do that.”

Another problem facing SoC designers is Clock Domain Crossing (CDC).  Bernard Murphy noted that: “Full-chip flat CDC has been the standard approach but is very painful on large designs. There is a trend toward hierarchical analysis (just as happened in STA), which requires hierarchical models. There are no standards for CDC.  Individual companies have individual approaches; e.g., Atrenta has its own abstraction models. Some SDC standardization around CDC-specific constraints would be welcome, but this area is still evolving rapidly.”

Conclusion

Although on the surface the problem of providing models for an IP component may appear straightforward and well defined, in practice it is neither well defined nor standardized.  Each IP vendor has its own set of deliverable models and often its own formats.  The task of companies like Cadence and Synopsys, which sell their own IP and also provide EDA tools to support other IP vendors, is quite complex.  Clearly, although some standards development work is ongoing, accommodating present offerings and future requirements under one standard is challenging and will certainly require compromises.

Next Year in EDA: What Will Shape 2015

Thursday, December 4th, 2014

Gabe Moretti, Senior Editor

The Big Picture

Having talked to Cadence, Mentor and Synopsys I think it is very important to hear what the rest of the EDA industry has to say about the coming year.  After all, in spite of the financial importance of the “big three” a significant amount of innovation comes from smaller companies focused on one or just a few sectors of the market.

Piyush Sancheti, VP of Marketing at Atrenta, pointed out that users drive the market and that users worry about time to market.  In the companion article, Chi-Ping Hsu of Cadence stated the same.  To meet the market window they need predictability in product development, and therefore must manage design size and complexity, handle IP quality and integration risks, and avoid surprises during development.  He observed that “The EDA industry as a whole is still growing in the single digits. However, certain front-end domains like RTL design and verification are growing much faster. The key drivers for growth are emulation, static and assertion-based verification, and power intent verification.

As the industry matures, consolidation around the big 3 vendors will continue to be a theme. Innovation is still fueled by startups, but EDA startup activity is not quite as robust.  In 2014 Synopsys, Cadence and Mentor continued to drive growth with their investment and acquisitions in the semiconductor IP space, which is a good trend.”

Harnhua Ng, CEO of Plunify, said: “There is much truth in the saying, ‘Those who don’t learn from history are doomed to repeat it,’ especially in the data-driven world that we live in today. It seems like every retailer, social network and financial institution is analyzing and finding patterns in the data that we generate. To businesses, being able to pick out trends from consumer behavior and quickly adapt products and services to better address customer requirements will result in significant cost savings and quality improvements.

Intuitively, chip design is an ideal area to apply these data analysis techniques because of the abundance of data generated in the process and the sheer cost and expertise required in realizing a design from the drawing board all the way to silicon. If only engineers could learn useful lessons – for instance, what worked well and what didn’t – from all the chips ever designed, what insights we would have today. Many companies already have processes in place for reviewing past projects and extracting information.”

While talking about design complexity, Bill Neifert, CTO at Carbon Design Systems, noted that: “Initially targeted more at the server market, ARM’s 64-bit ARMv8 architecture has been thrust into mobile since Apple announced that it was using it for the iPhone 5s.  Since then, we’ve seen a mad dash as semiconductor companies start developing mobile SoC designs containing multiple clusters of multicore processors.

Mobile processors have a large amount of complexity in both hardware and software. Coping with this move to 64 bits has placed a huge amount of stress on the hardware, software and systems teams.

With the first generation of 64-bit designs, many companies are handling this migration by changing as few variables as possible. They’ll take reference implementations and heavily leverage third-party IP in order to get something out the door. For this next generation of designs though, teams are starting to add more of their own differentiating IP and software. This raises a host of new verification and validation issues especially when addressing the complications being introduced with hardware cache coherency.”

Internet of Things (IoT)

IoT is expected to drive much of the growth in the electronics industry and therefore in EDA.  One can begin to see a few examples of products designed to work in the IoT architecture, even if the architecture is not yet completely finalized.  There are wearable products that at the moment work only locally but have the potential to be connected via a cell phone to a central data processing system that generates information.  Intelligent cars are at the moment self-contained IoT architectures that collect data, generate information, and in some cases, act on the information in real time.

David Kelf, VP of Marketing at OneSpin Solutions, talked about the IoT in the automotive area. “2015 is yet again destined to be an exciting year. We see some specific IoT applications taking off, particularly in the automotive space and with other designs of a safety-critical nature. It is clear that automotive electronics is accelerating. Of particular interest is the concept of various automotive “apps” running on the central computer that interfaces with sensors around the car. This leads to a complex level of interaction, which must be fully verified from a safety-critical and security point of view, and this will drive the leading edge of verification technology in 2015. Safety-critical standards will continue to be key verification drivers, not just in this industry sector but in others as well.”

Drew Wingard, CTO of Sonics said that: “The IoT market opportunity is top-of-mind for every company in the electronics industry supply chain including EDA tool vendors. For low-cost IoT devices, systems companies cannot afford to staff 1000-person SoC design teams. Furthermore, why do system design companies need two verification engineers for every designer? From an EDA tools and methodology perspective, today’s approach doesn’t work well for IoT designs.

SoC designers need to view their IoT designs in a more modular way and accept that components are “known good” across levels of abstraction. EDA tools and the verification environments that they support must eliminate the need to re-verify components whenever they are integrated into the next level up. It boils down to verification reuse. Agile design methodologies have a focus on automated component testing that SoC designers should consider carefully. IoT will drive the EDA industry trend toward a more agile methodology that delivers faster time-to-market. EDA’s role in IoT is to help lower the cost of design and verification to meet the requirements of this new market.”

Verification

Verification continues to be a hot topic.  The emphasis has shifted from logic verification to system verification, where the system is understood to contain both hardware and software components.  As the level of abstraction of the design under test (DUT) has increased, the task of verification has become more demanding.

Michael Sanie, Senior Director of Verification Marketing at Synopsys talked about the drivers that will influence verification progress in 2015.

“SoCs are growing in unprecedented complexity, employing a variety of advanced low power techniques and an increasing amount of embedded software.  While both SoC verification and software development/validation traditionally have been the long poles of project timelines, they are now inseparable and together have a significant impact on time-to-market.  Advanced SoC verification teams are now driven not only by reducing functional bugs, but also by how early they can achieve software bring-up for the SoCs.  The now-combined process of finding/debugging functional bugs and software bring-up often comprises complex flows with several major steps, including virtual platforms, static and formal verification, simulation, emulation and FPGA-based prototyping, with tedious and lengthy transitions between each step taking as long as weeks. Further complicating matters, each step requires a different technology/methodology for debug, coverage, verification IP, etc.

In 2015, the industry will continue its journey into new levels of verification productivity and early software bring-up by looking at how these steps can be approached universally, with the introduction of larger platforms built from the industry’s fastest engines for each of these steps and further integration and unification of compile, setup, debug, verification IP and coverage. Such an approach creates a continuum of technologies leveraging virtual platforms, static and formal verification, simulation, emulation and FPGA-based prototyping, enabling a much shorter transition time between each step.  It further creates a unified debug solution across all domains and abstraction levels.  The emergence of such platforms will then enable dramatic increases in SoC verification productivity and earlier software bring-up/development.”

Bill Neifert of Carbon says that: “In order to enable system differentiation, design teams need to take a more system-oriented approach.  Verification techniques that work well at the block level start falling apart when applied to complex SoC systems. There needs to be a greater reliance upon system-level validation methodologies to keep up with the next generation of differentiated 64-bit designs. Accurate virtual prototypes can play a huge role in this validation task, and we’ve seen an enormous upswing in the adoption of our Carbon Performance Analysis Kits (CPAKs) to perform exactly this task. A CPAK from Carbon System Exchange, for example, can be customized quickly to accurately model the behavior of the SoC design and then exercised using system-level benchmarks or verification software. This approach enables teams to spend far less time developing their validation solution and a lot more time extracting value from it.”

We hear a lot about design reuse, especially in terms of IP use.  Drew Wingard of Sonics points to a lack of reuse in verification.  “One of the biggest barriers to design reuse is the lack of verification reuse. Verification remains the largest and most time-consuming task in SoC design, in large part due to the popularity of constrained-random simulation techniques and the lack of true, component-based verification reuse. Today, designers verify a component at a very small unit level, then re-verify it at the IP core level, then re-verify the IP core at the IP subsystem level, then re-verify the IP subsystem at the SoC level and then, re-verify the SoC in the context of the system.

They don’t always use the same techniques at every one of those levels, but there is significant effort spent and test code developed at every level to check the design. Designers run and re-write the tests at every level of abstraction because when they capture the test the first time, they don’t abstract the tests so that they could be reused.”

Piyush Sancheti of Atrenta acknowledges that front-end design and verification tools are growing, driven by more complex designs and shorter time-to-market.  But design verification difficulty continues to increase even as schedules shrink.  Companies are turning more and more to static verification, formal techniques and emulation.  The goal is RTL debug and signoff, paving the way for more automatic, knowledge-based place and route.

Regarding formal tools, David Kelf of OneSpin noted: “Formal techniques, in general, continue to proliferate through many verification flows. We see an increase in the use of the technology by designers to perform initial design investigation, and greater linkage into the simulation flow. Combining the advances in FPGAs, the safety-critical driver for high-reliability verification and the advances in formal technology, we believe that this year will bring fundamental shifts.”

Jin Zhang, Senior Director of Marketing at Oski Technology, had an interesting perspective on formal verification, based on the feedback she received recently from the Decoding Formal Club.  Here is what she said: “In October, Oski Technology hosted the quarterly Decoding Formal Club, where more than 40 formal enthusiasts gathered to talk about Formal Sign-off and processor verification using formal technology. The sheer energy and enthusiasm of Silicon Valley engineers speaks to the growing adoption of formal verification.

Several experts on formal technology who attended the event view the future of formal verification similarly. They echoed the trends we have been seeing –– formal adoption is in full bloom and could soon replace simulation in verification sign-off.

What’s encouraging is not just the adoption of formal technology in simple use models, such as formal lint or formal apps, but in an expert use model as well. For years, expert-level use has been regarded as academic and not applicable to solving real-world challenges without the aid of a doctoral degree. Today, End-to-End formal verification, as the most advanced formal methodology, leads to complete verification of design blocks with no bugs left behind. With ever-increasing complexity and daunting verification tasks, the promise and realization of signing off a design block using formal alone is the core driver of the formal adoption trend.

The trend is global. Semiconductor companies worldwide are recognizing the value of End-to-End formal verification and working to apply it to critical designs, as well as staffing in-house formal teams. Formal verification has never been so well regarded.

While 2015 may not be the year when every semiconductor company has adopted formal verification, it won’t be long before formal becomes as normal as simulation, and shoulders as much responsibility in verification sign off.”

Advanced Processes Challenges

Although the number of system companies that can afford to use advanced processes is diminishing, their challenges are an important indicator of future requirements for a larger set of users.

Mary Ann White, Director of Product Marketing, Galaxy Design Platform at Synopsys, points out how timing and power requirement analysis are critical elements of design flows.

“The endurance of Moore’s law drives design and EDA trends where consumer appetites for all things new and shiny continue to be insatiable. More functional consolidation into a single SoC pushes ultra-large designs more into the norm, propelling the need for more hierarchically oriented implementation and signoff methodologies.  While 16- and 14-nm FinFET technologies become a reality by moving into production for high-performance applications, the popularity of the 28-nm node will persevere, especially for mobile and IoT (Internet of Things) devices.

Approximately 25% of all designs today are ≥50 million gates in size according to Synopsys’ latest Global User Survey. Nearly half of those devices are easily approaching one billion transistors. The sheer size of these designs compels adoption of the latest hierarchical implementation techniques with either black boxes or block abstract models that contain timing information with interfaces that can be further optimized. The Galaxy Design Platform has achieved several successful tapeouts of designs with hundreds of millions of instances, and newer technologies such as IC Compiler II have been architected to handle even more.  In addition, utilization of a sign-off based hierarchical approach, such as PrimeTime HyperScale technology, saves time to closure, allowing STA completion of 100+ million instances in hours vs. days while also providing rapid ECO turnaround time.

The density of FinFET processes is quite attractive, especially for high-performance designs which tend to be very large multi-core devices. FinFET transistors have brought dynamic, rather than leakage (static) power to the forefront as the main concern. Thanks to 20-nm, handling of double patterning is now well established for FinFET. However, the next-generation process nodes are now introduced at a much faster pace than ever before, and test chips for the next 10-nm node are already occurring. Handling the varying multi-patterning requirements for these next-generation technologies will be a huge focus over the next year, with early access and ecosystem partnerships between EDA vendors, foundries and customers.

Meanwhile, as mobile devices continue their popularity and IoT devices (such as wearables) become more prevalent, extending battery life and power conservation remain primary requirements. Galaxy has a plethora of different optimization techniques to help mitigate power consumption. Along with the need for more silicon efficiency to lower costs, the 28-nm process node is ideal for these types of applications. Already accounting for more than a third of revenue for foundries, 28-nm (planar and FD-SOI) is poised to last a while even as FinFET processes come online.”

Dr. Bruce McGaughy, CTO and VP of Engineering at ProPlus was kind enough to provide his point of view on the subject.

“The challenges of moving to sub-20nm process technologies are forcing designers to look far more closely at their carefully constructed design flows. The trend in 2015 could be a widespread retooling effort to stave off these challenges as the most leading-edge designs start using FinFET technology, introducing a complication at every step in the design flow.

Observers point to the obvious: Challenges facing circuit designers are mounting as the tried-and-true methodologies and design tools fall farther and farther behind. It’s especially apparent with conventional SPICE and FastSPICE simulators, the must-have tools for circuit design.

FastSPICE is faltering and the necessity of using Giga-scale SPICE is emerging. At the sub-28nm process node, for example, designers need to consider layout dependent effects (LDE) and process variations. FastSPICE tricks and techniques, such as isomorphism and table models, do not work.

As we move further into the realm of FinFET at the sub-20nm nodes, FastSPICE’s limitations become even more pronounced. FinFET design requires a new SPICE model called BSIM-CMG, more complicated than the BSIM3 and BSIM4 models, the industry-standard CMOS models used for the past 20 years. New FinFET physical effects, including strong Miller capacitance, break FastSPICE’s partitioning and event-driven schemes. Typically, FinFET models have over 1,000 parameters per transistor and more than 20,000 lines of C code, posing a tremendous computational challenge to SPICE simulators.

Furthermore, the latest advanced processes pose new and previously undetected challenges. With reduced supply voltage and increased process variations, circuits now are more sensitive to small currents and charges, such as SRAM read cycles and leakage currents. FastSPICE focuses on event-driven piecewise linear (PWL) voltage approximations rather than continuous waveforms of currents, charges and voltages. More delay and noise issues are appearing in the interconnect, requiring post layout simulation with high accuracy, and multiple power domains and ramp-up/ramp-down cycles are more common.

All are challenging for FastSPICE, but can be managed by Giga-scale SPICE simulators. FastSPICE simulators lack accuracy for sensitive currents, voltage regulators and leakage, which is where Giga-scale SPICE simulators can really shine.

Time to market and high mask costs demand tools that always give accurate and reliable results, and catch problems before tapeout.  Accordingly, designers and verification engineers are demanding a tool that does not require the “tweaking” of options to suit their situations, such as different sets of simulation options for different circuit types, and assigned accuracy where the simulator thinks it’s needed. Rather, they should get accuracy everywhere without option tweaks.

Often, verification engineers are not as familiar with the circuits as the designers, and may inadvertently choose a set of options that causes the FastSPICE simulator to ignore important effects, such as shorting out voltage regulator power nets. Weak points could be lurking in those overlooked areas of the chip. With Giga-scale SPICE, such approximations are not used and unnecessary.

Here’s where Giga-scale SPICE simulators take over, being perfectly suited for the new process technologies, including 16/14nm FinFET. They offer pure SPICE accuracy and deliver capacity and performance comparable to FastSPICE simulators.

For the first time, Giga-scale SPICE makes it possible for designers to use one simulation engine to design both small and large circuit blocks, and simultaneously use the same simulator for full-chip verification with SPICE accuracy, eliminating glitches or inconsistencies. We are at the point where retooling the simulation tools, including making investments in parallel processing hardware, is the right investment to make to improve time to market and reduce the risk of respins. At the same time, tighter margins can be achieved in design, resulting in better performance and yield.”

Memory and FPGA

With the increased use of embedded software in SoC designs, memories and memory controllers are gaining in importance.  Bob Smith, Senior VP of Marketing and Business Development at Uniquify, presents a compelling argument.

“The term ‘EDA’ encompasses tools that help automate the design process (think synthesis or place and route) as well as the analysis process (such as timing analysis or signal integrity analysis). A new area of opportunity for EDA analysis is emerging to support the understanding and characterization of high-speed DDR memory subsystems.

The high-performance demands placed on DDR memory systems require that design teams thoroughly analyze and understand the system margins and variations encountered during system operation. At rated DDR4 speeds, there is only about 300ps of allowable timing margin across the entire system (ASIC or SoC, package, PCB or other interconnect media, and the DDR SDRAM itself). Both static (process-related) and dynamic variations (due to environmental variables such as temperature) must also be factored into this tight margin. The goal is straightforward: optimize the complete DDR system to achieve the highest possible performance while minimizing any negative impacts on memory performance due to anticipated variations, and leave enough margin such that unanticipated variations don’t cause the memory subsystem to fail.
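
A simple way to picture this budget is to roll the contributions up against one bit time, as in the sketch below. The data rate and every line item are placeholder assumptions, not JEDEC or Uniquify figures; the point is only how quickly static and dynamic terms consume the unit interval.

    // Hypothetical DDR timing-margin roll-up. All contributions are invented
    // placeholders used to illustrate how a unit interval gets consumed.
    #include <cstdio>

    int main() {
        const double data_rate_mtps   = 2400.0;                  // assumed DDR4 speed grade
        const double unit_interval_ps = 1.0e6 / data_rate_mtps;  // one bit time in ps (~417 ps)

        struct Item { const char* name; double ps; };
        const Item budget[] = {
            {"SoC/PHY output uncertainty",  120.0},
            {"Package and PCB skew",         60.0},
            {"DRAM setup/hold window",      100.0},
            {"Dynamic (voltage/temp) drift", 80.0},
        };

        double remaining = unit_interval_ps;
        for (const Item& it : budget) {
            remaining -= it.ps;
            std::printf("%-32s -%6.1f ps (remaining %6.1f ps)\n", it.name, it.ps, remaining);
        }
        std::printf("Unit interval %.1f ps, margin left %.1f ps\n", unit_interval_ps, remaining);
        return 0;
    }

At an assumed 2400 MT/s the unit interval is only about 417 ps, so a handful of optimistic assumptions in any one contribution is enough to erase whatever margin remains.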

However, understanding the available margin and how it will be impacted by static and dynamic variation is challenging. Historically, there has been no visibility into the DDR subsystem itself, and the JEDEC specifications only address the behavior of the DDR-SDRAM devices and not the components outside of the device. Device characterization helps, but only accounts for part-to-part variation. Ideally, DDR system designers would like to be able to measure timing margins in-situ, with variations present, to fully and accurately understand system behavior and gain visibility into possible issues.

A new tool for DDR system analysis provides this visibility. A special interface in the DDR PHY allows it to run numerous different analyses to check the robustness of the entire DDR system, including the board and DDR-SDRAM device(s). The tool can be used to determine DDR system margins, identify board or DDR component peculiarities, and help tune various parameters to compensate for issues discovered and maximize DDR performance in a given system. Since it is essentially a window into the DDR subsystem, it can also be used to characterize and compare the performance of different boards and board layouts, and even compare the performance and margins of different DDR-SDRAM components.

Figure 1: A new tool for DDR System analysis can check bit-level margins on a DDR read.

Charlie Cheng, CEO of Kilopass, talks about the need for new memory technology.  “For the last few years, the driver for chip and EDA tool development has been the multicore SoC that is central to all smartphones and tablets. Memory dominates these devices with 85% of the chip area and an even larger percentage of the leakage power consumption. Overlay on this the strong friction that is slowing down the transition to 14/16nm from 28nm –– the most cost-effective node. It quickly becomes obvious that a new high-density, power-thrifty memory technology is needed at the 28nm node. Memory design has been sorely lacking in innovation for the last 20 years, with all the resources being invested on the process side. 2015 will be the year of major changes in this area as the industry begins to take a better look at support for low-power, security and mobile applications.”

The use of FPGA devices in SoCs also increased in 2014.  David Kelf thinks that: “The significant advancement in FPGA technology will lead to a new wave of FPGA designs in 2015. New device geometries from the leading FPGA vendors are changing the ASIC-to-FPGA cost/volume curve, and this will have an effect on the size and complexity of these devices. In addition, we will see more specialized synthesis tools from all the vendors, which provide for greater, device-targeted optimizations. This in turn drives further advancement of new verification flows for larger FPGAs, and it is our prediction that most of the larger devices will make use of a formal-based verification flow to increase overall QoR (Quality of Results).”

The need for better tools to support the use of FPGAs is also acknowledged by Harnhua Ng at Plunify.  “FPGA software tools contain switches and parameters that influence synthesis optimizations and place-and-route quality of results. Combined with user-specified timing and location constraints, timing, area and power results can vary by as much as 70% without even modifying the design’s source code. Experienced FPGA designers intuitively know good switches and parameters through years of experience, but have to manually refine this intuition as design techniques, chip architectures and software tools rapidly evolve. Continuing refinement and improvement are better managed using data analysis and machine learning algorithms.”
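
A minimal sketch of the kind of automated exploration Plunify describes might look like the following. The switch names, value sets and the evaluate() stub are hypothetical stand-ins; in a real flow each candidate configuration would launch synthesis and place-and-route, and the score would come from the timing report.

    // Sketch of data-driven exploration of FPGA tool switches. Switch names and
    // the evaluate() stub are placeholders for real synthesis/P&R runs.
    #include <cstdio>
    #include <map>
    #include <random>
    #include <string>
    #include <vector>

    using Config = std::map<std::string, std::string>;

    // Placeholder cost function: in practice this would run the vendor tools
    // with the given switches and return, e.g., worst negative slack in ns.
    double evaluate(const Config& cfg) {
        static std::mt19937 rng(7);
        std::uniform_real_distribution<double> noise(-0.5, 0.5);
        double slack = noise(rng);
        if (cfg.at("retiming") == "on")   slack += 0.3;  // pretend retiming helps timing
        if (cfg.at("effort")   == "high") slack += 0.2;
        return slack;
    }

    int main() {
        const std::map<std::string, std::vector<std::string>> space = {
            {"effort",   {"low", "medium", "high"}},
            {"retiming", {"off", "on"}},
            {"seed",     {"1", "2", "3", "4"}},
        };

        std::mt19937 rng(42);
        Config best;
        double best_slack = -1e9;

        for (int trial = 0; trial < 50; ++trial) {  // random search over the space
            Config cfg;
            for (const auto& [name, values] : space) {
                std::uniform_int_distribution<size_t> pick(0, values.size() - 1);
                cfg[name] = values[pick(rng)];
            }
            const double slack = evaluate(cfg);
            if (slack > best_slack) { best_slack = slack; best = cfg; }
        }

        std::printf("Best slack %.2f ns with:", best_slack);
        for (const auto& [name, value] : best) std::printf(" %s=%s", name.c_str(), value.c_str());
        std::printf("\n");
        return 0;
    }

Random search is only the simplest strategy; the same loop accommodates smarter samplers or a learned model that predicts which configurations are worth running at all.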

Conclusion

It should not be surprising that EDA vendors see many financial and technical opportunities in 2015.  Consumers’ appetite for electronic gadgets, together with the growth of cloud computing and new IoT implementations, provides markets for new EDA tools.  How many vendors will hit the proper market windows remains to be seen, and time to market will be the defining characteristic of the EDA industry in 2015.

Internet of Things (IoT) and EDA

Tuesday, April 8th, 2014

Gabe Moretti, Contributing Editor

A number of companies contributed to this article.  In particular: Apache Design Solutions, ARM, Atrenta, Breker Verification Systems, Cadence, Cliosoft, Dassault Systèmes, Mentor Graphics, OneSpin Solutions, Oski Technology, and Uniquify.

In his keynote speech at the recent CDNLive Silicon Valley 2014 conference, Lip-Bu Tan, Cadence CEO, cited mobility, cloud computing, and Internet of Things as three key growth drivers for the semiconductor industry. He cited industry studies that predict 50 billion devices by 2020.  Of those three, IoT is the latest area attracting much conversation.  Is EDA ready to support its growth?

The consensus is that in many respects EDA is ready to provide the tools required for IoT implementation.  David Flynn, an ARM Fellow, put it best:  “For the most part, we believe EDA is ready for IoT.  Products for IoT are typically not designed on ‘bleeding-edge’ technology nodes, so implementation can benefit from all the years of development of multi-voltage design techniques applied to mature semiconductor processes.”

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systèmes, observed that, conversely, the companies that will be designing devices for the IoT may not be ready.  “Traditional EDA is certainly ready for the core design, verification, and implementation of the devices that will connect to the IoT.  Many of the devices that will connect to the IoT will not be the typical designs that are pushing Moore’s Law.  Many of the devices may be smaller, lower-performance devices that do not necessarily need the latest and greatest process technology.  To be cost effective at producing these devices, companies will rely heavily on IP in order to assemble devices quickly to meet consumer and market demands.  In fact, we may begin to see companies that traditionally have not been silicon developers getting into chip design. We will see explosive growth in the ecosystem of companies producing IP to support these new devices.”

Vic Kulkarni, Senior VP and GM, Apache Design, Inc., put it as follows: “There is nothing ‘new or different’ about the functionality of EDA tools for IoT applications, and EDA tool providers have to think of this market opportunity from the perspective of mainstream users: newer licensing and pricing models for the ‘mass market’, i.e. low-cost and low-touch technical support, data and IP security, and the overall ROI.”

But IoT also requires new approaches to design and offers new challenges.  David Kelf, VP of Marketing at OneSpin Solutions, provided a picture of what a generalized IoT component architecture is likely to be.

Figure 1: Generalized IoT component architecture (courtesy of OneSpin Solutions)

He went on to state: “The included graphic shows an idealized projection of the main components in a general-purpose IoT platform. At a minimum, this platform will include several analog blocks and a processor able to handle the protocol stacks for wireless communication and the Internet Protocol (IP). It will need some processing for its sensors, an extremely effective power control solution and, possibly, other capabilities such as GPS or RFID, and even a Long Term Evolution (LTE) 4G baseband.”

Jin Zhang, Senior Director of Marketing at Oski Technology, observed that “If we parse the definition of IoT, we can identify three key characteristics:

  1. IoT can sense and gather data automatically from the environment
  2. IoT can interact and communicate among themselves and the environment
  3. IoT can process all the data and perform the right action with or without human interaction

These imply that sensors of all kinds (for temperature, light, movement and human vitals), fast, stable and extensive communication networks, light-speed processing power, and massive data storage devices and centers will become the backbone of this infrastructure.

The realization of IoT relies on the semiconductor industry to create even larger and more complex SoC or Network-on-Chip devices to support all the capabilities. This, in turn, will drive the improvement and development of EDA tools to support the creation, verification and manufacturing of these devices, especially verification where too much time is spent on debugging the design.”

Power Management

IoT will require advanced power management, and EDA companies are addressing the problem.  Rob Aitken, also an ARM Fellow, said: “We see an opportunity for dedicated flows around near-threshold and low-voltage operation, especially in clock tree synthesis and hold time measurement. There’s also an opportunity for per-chip voltage delivery solutions that determine on a chip-by-chip basis what the ideal operating voltage should be and enable that voltage to be delivered via a regulator, ideally on-chip but possibly off-chip as well. The key is that existing EDA solutions can cope, but better designs can be obtained with improved tools.”

Kamran Shah, Director of Marketing for Embedded Software at Mentor Graphics, noted: “SoC suppliers are investing heavily in introducing power-saving features, including Dynamic Voltage and Frequency Scaling (DVFS), hibernate power-saving modes, and peripheral clock gating techniques. Early in the design phase, it’s now possible to use transaction-level modeling (TLM) tools such as Mentor Graphics Vista to iteratively evaluate the impact of hardware and software partitioning, bus implementations, memory control management, and hardware accelerators in order to optimize for power consumption.”

Figure 2: IoT Power Analysis (courtesy of Mentor Graphics)

Bernard Murphy, Chief Technology Officer at Atrenta, pointed out that: “Getting to ultra-low power is going to require a lot of dark silicon, and that will require careful scenario modeling to know when functions can be turned off. I think this is going to drive a need for software-based system power modeling, whether in virtual models, TLM (transaction-level modeling), or emulation. Optimization will also create demand for power sensitivity analysis – which signals / registers most affect power and when. Squeezing out picoAmps will become as common as squeezing out microns, which will stimulate further automation to optimize register and memory gating.”

Verification and IP

Verifying either one component or a subset of connected components will be more challenging.  Components in general will have to be designed so that they can be “fixed” remotely, which means either fixing a real bug or downloading an upgrade.  Intel is already marketing such a solution, which is not restricted to IoT applications.  Also, networks will be heterogeneous by design, thus significantly complicating verification.

Ranjit Adhikary, Director of Marketing at Cliosoft, noted that “From a SoC designer’s perspective, “Internet of Things” means an increase in configurable mixed-signal designs. Since devices now must have a larger life span, they will need to have a software component associated with them that could be upgraded as the need arises over their life spans. Designs created will have a blend of analog, digital and RF components and designers will use tools from different EDA companies to develop different components of the design. The design flow will increasingly become more complex and the handshake between the digital and analog designers in the course of creating mixed-signal designs has to become better. The emphasis on mixed-signal verification will only increase to ensure all corner cases are caught early on in the design cycle.”

Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, has a similar perspective but is more pessimistic.  He noted that “Many IoT nodes will be located in hard-to-reach places, so replacement or repair will be highly unlikely. Some nodes will support software updates via the wireless network, but this is a risky proposition since there’s not much recourse if something goes wrong. A better approach is a bulletproof SoC whose hardware, software, and combination of the two have been thoroughly verified. This means that the SoC verification team must anticipate, and test for, every possible user scenario that could occur once the node is in operation.”

One solution, according to Mr. Anderson, is “automatic generation of C test cases from graph-based scenario models that capture the design intent and the verification space. These test cases are multi-threaded and multi-processor, running realistic user scenarios based on the functions that will be provided by the IoT nodes containing the SoC. These test cases communicate and synchronize with the UVM verification components (UVCs) in the testbench when data must be sent into the chip or sent out of the chip and compared with expected results.”
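
The toy example below illustrates the general idea of walking a scenario graph to emit a test sequence; it is not Breker’s technology, and the operation names are invented. A production tool would additionally solve constraints across the graph, schedule operations across multiple threads and processors, and coordinate with the UVM testbench as Mr. Anderson describes.

    // Toy illustration of a graph-based scenario model: a small directed graph
    // of SoC operations is walked at random to emit one test sequence.
    #include <cstdio>
    #include <map>
    #include <random>
    #include <string>
    #include <vector>

    int main() {
        // Scenario graph: each node lists the operations that may legally follow it.
        const std::map<std::string, std::vector<std::string>> graph = {
            {"power_on",     {"boot_cpu"}},
            {"boot_cpu",     {"init_dma", "init_radio"}},
            {"init_dma",     {"dma_transfer"}},
            {"init_radio",   {"send_packet"}},
            {"dma_transfer", {"check_result"}},
            {"send_packet",  {"check_result"}},
            {"check_result", {}},  // terminal node
        };

        std::mt19937 rng(std::random_device{}());
        std::string node = "power_on";

        std::puts("/* generated scenario */");
        while (true) {
            std::printf("do_%s();\n", node.c_str());  // emit one C test step per node
            const std::vector<std::string>& next = graph.at(node);
            if (next.empty()) break;
            std::uniform_int_distribution<size_t> pick(0, next.size() - 1);
            node = next[pick(rng)];
        }
        return 0;
    }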

Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify, noted that “Connecting the unconnected is no small challenge and requires complex and highly sophisticated SoCs. Yet, at the same time, unit costs must be small so that high volumes can be achieved. Arguably, the most critical IP for these SoCs to operate correctly is the DDR memory subsystem. In fact, it is ubiquitous in SoCs –– where there’s a CPU and the need for more system performance, there’s a memory interface. As a result, it needs to be fast, low power and small to keep costs low.  The SoC’s processors spend the majority of cycles reading and writing to DDR memory. This means that all of the components, including the DDR controller, PHY and I/O, need to work flawlessly, as must the external DRAM memory device(s). If there’s a problem with the DDR memory subsystem, such as jitter, data/clock skew, setup/hold time or complicated physical implementation issues, the IoT product may work intermittently or not at all. Consequently, system yield and reliability are of utmost concern.”

He went on to say: “The topic may be the Internet of Things and EDA, but the big winners in the race for IoT market share will be providers of all kinds of IP. The IP content of SoC designs often reaches 70% or more, and SoCs are driving IoT, connecting the unconnected. The big three EDA vendors know this, which is why they have gobbled up some of the largest and best known IP providers over the last few years.”

Conclusion

Things that seem simple often turn out not to be.  Implementing IoT will not be simple because as the implementation goes forward, new and more complex opportunities will present themselves.

Vic Kulkarni said: “I believe that EDA solution providers have to go beyond their “comfort zone” of being hardware design tool providers and participate in the hierarchy of IoT above the “Devices” level, especially in the “Gateway” arena. There will be opportunities for providing big data analytics, security stack, efficient protocol standard between “Gateway” and “Network”, embedded software and so on. We also have to go beyond our traditional customer base to end-market OEMs.”

Frank Schirrmeister, product marketing group director at Cadence, noted that “The value chain for the Internet of Things consists not only of the devices that create data. The IoT also includes the hubs that collect data and upload data to the cloud. Finally, the value chain includes the cloud and the big data analytics it stores.  Wired/wireless communications glues all of these elements together.”

Verification Management

Tuesday, February 11th, 2014

Gabe Moretti, Contributing Editor

As we approach the DVCon conference it is timely to look at how our industry approaches managing design verification.

Much has been said about the tools, but I think not enough resources have been dedicated to the issue of management and measurement of verification.  John Brennan, Product Director in the Verification Group at Cadence, observed that verification used to be a whole lot easier. It used to be that you sent some stimulus to your design, viewed a few waveforms, collected some basic data from the simulator log, and then moved on to the next part of the design to verify.   The problem with all of this is that it’s simply too much information, and with randomness comes a lack of clarity about what is actually tested and what is not.  He continued by stating that you cannot verify every state and transition in your design; it is simply impossible, the magnitude is too large.  So what do you verify, and how are IP and chip suppliers addressing the challenge?  We at Cadence see several trends emerging that will help users with this daunting task: use collaboration-based environments, use the right tool for the job, have deep analytics and visibility, and deploy feature-based verification.

My specific questions to the panelists follow.  I chose a representative one from each of them.

* How does a verification group manage the verification process and assess risk?

Dave Kelf, Marketing Director at OneSpin Solutions, opened the detailed discussion by describing the present situation. Whereas design management follows a reasonably predictable path, verification management is still based on the subjective, unpredictable assessment of when enough testing is enough.

Verification management is all about predicting the time and resources required to reach the moving target of verification closure. However, there is still no concrete method to predict when a design is fully, exhaustively, 100% tested. Today's techniques all have an element of uncertainty, which translates into the risk of an undetected bug. The best a verification manager can do is to assess the point at which the probability of a remaining bug is vanishingly small.

For a large design block, a combination of test coverage results, a comparison of the test specification against the simulations actually performed, the time since the last bug was discovered, the verification time spent and the proximity of the end of the schedule may all play into this decision. For a complete SoC, running the entire system, including software, on an emulator for days on end might be the only way, today, to inspire confidence in a working design.
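
Kelf's point, that closure is a judgment call built from several imperfect signals, can be illustrated with a small sketch. The metric names, weights and thresholds below are invented for illustration; a real team would calibrate such a heuristic against its own project history rather than use anything this simplistic.

    # Hypothetical sketch of a multi-signal "closure risk" heuristic.
    def closure_risk(functional_cov, code_cov, days_since_last_bug,
                     spec_items_total, spec_items_simulated):
        """Return a 0.0 (low risk) .. 1.0 (high risk) readiness score."""
        cov_gap   = 1.0 - min(functional_cov, code_cov)            # worst coverage gap
        spec_gap  = 1.0 - spec_items_simulated / spec_items_total  # untested spec items
        bug_decay = 1.0 / (1.0 + days_since_last_bug)              # recent bugs -> risky
        # Equal weights purely for illustration.
        return (cov_gap + spec_gap + bug_decay) / 3.0

    risk = closure_risk(functional_cov=0.97, code_cov=0.99,
                        days_since_last_bug=14,
                        spec_items_total=420, spec_items_simulated=401)
    print(f"estimated residual risk: {risk:.2f}")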

If we were to solve just one remaining problem in verification, achieving a deep and meaningful understanding of verification coverage pertaining to the original functional specification should be it.

*  What is the role of verification coverage in providing metrics toward verification closure, and is it proving useful?

Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, answered that coverage is, frankly, all the verification team has to assess how well the chip has been exercised. Code coverage is a given, but in recent years functional coverage has gained much more prominence. The most recent forms of coverage are derived automatically, for example from assertions or graph-based scenario models, and so provide much return for little investment.
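
For readers less familiar with the distinction, the toy model below (plain Python, not UVM or SystemVerilog) illustrates what functional coverage measures: each interesting combination of transaction attributes is a bin, and the testbench records which bins the stimulus actually hit. The attributes and bins are invented for illustration.

    from itertools import product
    import random

    SIZES      = ("small", "medium", "large", "jumbo")
    PRIORITIES = ("low", "high")

    bins = {combo: 0 for combo in product(SIZES, PRIORITIES)}  # cross-coverage bins

    def sample(size, priority):
        """Called once per observed transaction, like a covergroup sample."""
        bins[(size, priority)] += 1

    # Stand-in for constrained-random stimulus.
    random.seed(1)
    for _ in range(50):
        sample(random.choice(SIZES), random.choice(PRIORITIES))

    hit = sum(1 for count in bins.values() if count > 0)
    print(f"functional coverage: {100.0 * hit / len(bins):.1f}% "
          f"({hit}/{len(bins)} bins hit)")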

*  How has design evolution affected verification management? Examples include IP usage and SoC trends.

Rajeev Ranjan, CTO of Jasper Design Automation, observed that as designs get bigger and incorporate more and more IPs developed by multiple internal and external parties, integration verification becomes a very large concern.  Specific tasks include interface validation, connectivity checking, functional verification of IPs in the face of hierarchical power-management strategies, and ensuring that hardware coherency protocols do not cause deadlock in the overall system.  Additionally, depending on the end market for the system, security path verification can also be a significant, system-wide challenge.

*  What should be the first step in preparing a verification plan?

Tom Fitzpatrick, verification evangelist at Mentor Graphics, has dedicated many years to the study and solution of verification issues.  He noted that the first step in preparing a verification plan is to understand what the design is supposed to do and under what conditions it's expected to do it. Verification is really the art of modeling the "real world" in which the device is expected to operate, so it's important to have that understanding. After that, it's important to understand the difference between the "block-level" and "system-level" behaviors that you want to test. Yes, the entire system must be able to, for example, boot an RTOS and process data packets, but there are a lot of specifics that can be verified separately at the block or subsystem level before you throw everything together and see if it works. Understanding which pieces of the lower-level environments can be reused and will prove useful at the system level, and being able to reuse those pieces effectively and efficiently, is one key to verification productivity.

Another key is the ability to verify specific pieces of functionality as early as possible in the process and to use that information to avoid targeting that functionality again at higher levels. For example, using automated tools at the block level to identify reset or X-propagation issues, or state-machine deadlock conditions, eliminates the need to create stimulus scenarios to uncover these issues. Similarly, being able to verify all aspects of a block's protocol implementation at the block level means that you don't need to waste time creating system-level scenarios just to get specific blocks to exercise different protocol modes. Identifying where best to verify each piece of your verification plan allows every phase of your verification to be more efficient.

*  Are criteria available to determine which tools need to be considered for various project phases? Which tools are proving effective? Is budget a consideration?

Yuan Lu, Chief Verification Architect at Atrenta Inc., contributed the following. Verification teams deploy a variety of tools to address various categories of verification issues, depending on how the design is broken into blocks and what needs to be tested at each level of hierarchy. At a macro level, comprehensive, exhaustive verification is expected at the block/IP level. However, at the SoC level, functions such as connectivity checking, heartbeat verification, and hardware/software co-verification are performed.

Over the years, some level of consensus has emerged within the industry as to what type of tools need to be used for verification at the IP and SoC levels. But, so far, there is no perfect way to hand off IPs to the SoC team. The ultimate goal is to ensure that the IP team communicates to the SoC team what has been tested, and that the SoC team can use this information to determine whether the IP-level verification was sufficient to meet the SoC's needs.
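
One way to picture the hand-off Lu describes is as a structured report that travels with the IP: which configurations were closed at the IP level, how well each feature was covered, and which gaps were knowingly waived. The sketch below is hypothetical; the field names and thresholds are invented and do not represent any vendor's format.

    from dataclasses import dataclass, field

    @dataclass
    class IpVerificationReport:
        ip_name: str
        verified_configs: set = field(default_factory=set)    # e.g. parameter tags
        feature_coverage: dict = field(default_factory=dict)  # feature -> % covered
        waivers: set = field(default_factory=set)              # known, accepted gaps

        def gaps_for(self, soc_config, required_features, threshold=95.0):
            """Return the issues the SoC team must close or re-verify."""
            issues = []
            if soc_config not in self.verified_configs:
                issues.append(f"config {soc_config!r} never verified at IP level")
            for feat in required_features:
                cov = self.feature_coverage.get(feat, 0.0)
                if cov < threshold and feat not in self.waivers:
                    issues.append(f"feature {feat!r} only {cov:.1f}% covered")
            return issues

    report = IpVerificationReport(
        ip_name="ddr_ctrl",
        verified_configs={"x32_1600", "x16_1066"},
        feature_coverage={"self_refresh": 98.2, "zq_cal": 71.0})
    print(report.gaps_for("x32_1866", {"self_refresh", "zq_cal"}))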

*  Not long ago, the Universal Verification Methodology (UVM) was unveiled with the promise of improving verification management, among other advantages. How has that worked?

Herve Alexanian, Engineering Director of the Advanced Dataflow Group at Sonics, Inc., pointed out that once an internal protocol is thoroughly specified, including its configurable options, a set of assertions can naturally be written or generated, depending on the degree of configurability. Along the same lines, functional coverage points and reference (UVM) sequences are also defined. These definitions are the best entry point into modern verification approaches, allowing the most benefit from formal techniques and verification planning. Although some may see such definitions as too rigid to accommodate changes in requirements, making a change to a fundamental interface is intentionally costly, just as it is in software: it implies additional scrutiny on how architectural changes are implemented, in a way that tends to minimize the functional corners that later prove so costly to verify.
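
A minimal sketch of the generation Alexanian describes might look like the following, where checks are derived from a protocol configuration. The option names and rules are invented, and a real flow would emit SystemVerilog assertions and covergroups rather than plain strings.

    # Hypothetical: derive protocol checks from a configurable spec.
    PROTOCOL_CONFIG = {
        "data_width": 64,
        "max_outstanding": 4,
        "supports_bursts": True,
    }

    def generate_checks(cfg):
        checks = [
            f"payload width is always {cfg['data_width']} bits",
            f"no more than {cfg['max_outstanding']} outstanding requests",
        ]
        if cfg["supports_bursts"]:
            checks.append("a burst, once started, is never interleaved")
        return checks

    for rule in generate_checks(PROTOCOL_CONFIG):
        print("assert:", rule)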

*  What areas of verification need to be improved to reduce verification risk and ease the management burden?

Vigyan Singhal, President and CEO of Oski Technology, said that, for the most part, current verification methodology relies on simulation and emulation for functional verification. As shown consistently in the 2007, 2010 and 2012 Wilson Research Group surveys sponsored by Mentor Graphics, two thirds of projects are behind schedule and functional bugs are still the main culprit for chip respins. This shows that the current methodology carries significant verification risk.

Verification teams today spend most of their simulation time at the subsystem (63.9%) and full-chip (36.1%) levels, and the largest share of that effort goes to debugging (36%). This is not surprising, as debugging at the subsystem and chip level with thousands of long-cycle traces can take a long time.

The solution to the challenge is to improve block-level design quality so as to reduce the verification and management burden at the subsystem and chip level. Formal property verification is a powerful technique for block-level verification. It is exhaustive and can catch all corner-case bugs. While formal verification adds another step in the verification flow with additional management tasks to track its progress, the time and effort spent will lead to reduced time and effort at the subsystem and chip level, and improve overall design quality. With short time-to-market windows, design teams need to guarantee first-silicon success. We believe increased formal usage in the verification flow will reduce verification risks and ease management burden.
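
Singhal's claim that formal verification is exhaustive comes down to exploring every reachable state rather than sampling a few of them with random stimulus. The toy model checker below, a hypothetical two-client arbiter with an invented mutual-exclusion property, makes the idea concrete by breadth-first searching the complete state space and checking the property in every state it reaches.

    from collections import deque

    def next_states(state):
        """Toy arbiter: (grant_a, grant_b) -> possible next states for any request mix."""
        succ = set()
        for req_a in (0, 1):
            for req_b in (0, 1):
                grant_a = req_a and not state[1]                  # never grant A while B holds
                grant_b = req_b and not req_a and not state[0]    # A has priority over B
                succ.add((int(grant_a), int(grant_b)))
        return succ

    def holds_everywhere(prop, start=(0, 0)):
        """Exhaustive reachability check: return (True, None) or (False, counterexample)."""
        seen, frontier = {start}, deque([start])
        while frontier:
            s = frontier.popleft()
            if not prop(s):
                return False, s
            for t in next_states(s) - seen:
                seen.add(t)
                frontier.append(t)
        return True, None

    ok, cex = holds_everywhere(lambda s: not (s[0] and s[1]))  # never both grants at once
    print("mutual exclusion proven" if ok else f"counterexample: {cex}")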

Having opened the discussion, John Brennan also closed it, noting that functional verification has no single silver bullet: it takes multiple engineers, operating across multiple heterogeneous engines, with multiple analytics.  Multi-specialist verification is here now, and the VPM tools that support it are needed now.