Posts Tagged ‘Breker Verification Systems’

The Verification Times are Changing

Monday, April 17th, 2017

Adnan Hamid, CEO, Breker Verification Systems

If you have been an ASIC designer for a couple of decades, you know how much your job has evolved during that time, not only in the way chips get designed, using large numbers of IP blocks, but in the way in which they are verified.

Back in 1996, there were about 10,000 design starts and the average design size was under 30,000 gates. Designs were composed of a small number of blocks, almost all developed in-house, and most were designed to integrate with an external processor, unless the chip itself was a processor. It is likely that you were using a directed test methodology.

The high-end processor of the time was the Pentium Pro, which came in at 5.5 million gates implemented in 500-nanometer (nm) technology, and the ARM7, released in 1994, was beginning to gain some attention at one-tenth the size of the Pentium Pro. Also gaining significant attention were new languages and tools that enabled pseudo-random test generation.

The design process became more efficient over that time period through the introduction of higher level design languages and corresponding synthesis tools, but most of the gains have come from increasing amounts of reuse. Today, most designs count on more than 90% of the chip area being filled with reused design blocks and most designs use tens or even a hundred different IP blocks.

One of the primary value statements of IP reuse is that the verification of those blocks is done by the IP provider and, since each block is used in multiple designs, its overall quality is likely to be higher than that of an in-house developed block. In the early days of IP, that may have been a questionable statement because many of the IP suppliers were nothing more than two people hacking away at code in a garage. Today, however, most IP suppliers are trusted partners and, even though their designs are still not 100% verified, they no longer pose the largest threat to the overall success of a design.

The primary verification methodologies being used today are still the same as those that were emerging 20 years ago. The languages have been improved and standardized and the methodologies that go along with them have become highly developed. The fact remains that those methodologies were targeted at what we would consider to be a block today.

A typical SoC design team will design one or two custom blocks. These differentiate their design from others in the industry, and it is likely that those blocks will continue to use existing verification methodologies. The larger problem today is how to verify the system-level functionality of the chip. Existing methodologies are highly inefficient for this task, meaning that most design teams revert to directed test strategies at the system level.

A system-level test can be viewed as the execution of a scenario that corresponds to a typical user-level function. In a cell phone, this could be making a call while watching a video. For a smart TV, it could be watching a TV station from the antenna while streaming an Internet video in an inset. These are the types of functions that must be proven to work before a tapeout can be considered.

For people tasked with this problem, solutions are rapidly emerging. You may have heard about a development within Accellera called the Portable Stimulus Working Group. This group is in the process of bringing together ideas from several EDA tool companies that have created tools to solve the integration verification problem. Most of them are based on the idea of graphs that define the valid data and control flows of the design. From this, they can randomly generate testcases that exercise those paths through the design. [Flowgraphs have been used in verification since 1968, in the SNAP simulator built at TRW Systems. Editor]

The biggest change in this methodology, compared to existing ones, is that it is not focused solely on stimulus generation. With SystemVerilog, the randomization helps generate stimulus, but the user is responsible for defining constraints, generating the necessary checkers, creating the coverage model and, in some cases, showing that coverage actually corresponds to detection of faults. With Portable Stimulus, the user creates a verification intent model, a unified model for the entire act of verification. From it, both stimulus and checkers can be created, and constraints and coverage are annotated directly on the graph.
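To make the idea concrete, here is a rough, hypothetical Python sketch of a graph-based scenario model and the testcases generated from it. The actions, graph, expected results and coverage bookkeeping are all invented for illustration; they are not tied to any particular tool or to the eventual standard's syntax.

```python
import random

# A minimal sketch of the idea, not Portable Stimulus syntax: nodes are
# actions, edges are the legal control flows between them, and every name
# here is invented for illustration.
SCENARIO_GRAPH = {
    "start":        ["config_dma", "config_uart"],
    "config_dma":   ["dma_transfer"],
    "config_uart":  ["uart_send"],
    "dma_transfer": ["check_result"],
    "uart_send":    ["check_result"],
    "check_result": [],
}

# Expected results attached to the model double as checkers.
EXPECTED = {"dma_transfer": "memory block copied", "uart_send": "byte seen on wire"}

def generate_testcase(graph, seed):
    """Randomly walk the graph from 'start' to produce one legal scenario."""
    rng = random.Random(seed)
    path, node = [], "start"
    while graph[node]:                      # stop at a node with no successors
        node = rng.choice(graph[node])
        path.append(node)
    return path

def run_testcase(path, coverage):
    """Each step yields stimulus plus its expected result; coverage is simply
    which graph nodes the generated tests have visited."""
    for action in path:
        coverage[action] = coverage.get(action, 0) + 1
        if action in EXPECTED:
            print(f"stimulate {action:<12} and check for: {EXPECTED[action]}")

coverage = {}
for seed in range(4):
    run_testcase(generate_testcase(SCENARIO_GRAPH, seed), coverage)
print("node coverage:", coverage)
```

The point of the sketch is that the one graph drives stimulus, checking and coverage together, which is the contrast with a constraint-random testbench where those pieces are written separately.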

Figure 1: Portable stimulus enables a graph-based verification approach where users are able to generate stimuli from a graph.

Source: IBM, from DVCon India 2015 User Track Presentation

What this means is that verification is about to become very similar to design, in that the user creates a high-level verification model and then has a synthesis engine generate the testbench from that model. The user will no longer hand-craft low-level pieces of verification, and the tests that are created will span multiple IP blocks and the connectivity that binds them together.

Several other advantages come from the notion of a model and a synthesis engine. How often have you struggled with the adaptation of a test originally targeted for a simulator, which now needs to be run on an emulator? How often have you been given a testbench developed for standalone verification of a block and been asked to integrate that into sub-system verification? How often have you had testbenches from a previous design that you want to adapt for a new design where only things such as interrupts or the address map have changed? Portable Stimulus addresses all of these issues because it has the notion of reuse fundamentally built into it.
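As a loose illustration of that reuse, the hypothetical sketch below keeps one abstract scenario fixed and retargets it to different platforms and execution engines simply by swapping configuration. None of the names, addresses or "backends" correspond to a real tool flow; they only show that the intent itself never changes.

```python
# All names, address values and the two backends below are hypothetical;
# the point is only that the scenario itself never changes.
SCENARIO = ["config_dma", "dma_transfer", "check_result"]

PLATFORM_A = {"dma_base": 0x40000000, "irq": 5}   # original design
PLATFORM_B = {"dma_base": 0x50000000, "irq": 9}   # derivative with a new address map

def realize(scenario, platform, backend):
    """Render the same abstract scenario for a given platform and engine."""
    for action in scenario:
        if backend == "simulation":
            print(f"testbench sequence: {action} @ 0x{platform['dma_base']:08X}")
        elif backend == "emulation":
            print(f"embedded C call:    {action}(0x{platform['dma_base']:08X}, irq={platform['irq']})")

realize(SCENARIO, PLATFORM_A, "simulation")   # block-level simulation
realize(SCENARIO, PLATFORM_B, "emulation")    # same intent on an emulator, new address map
```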

Some languages were standardized before having been fully proven. That is not the case with graph-based verification. As an example, Breker has worked in this area for over a decade and, while we may have been ahead of our time, it means that several of our customers have been successfully turning out chips based on this emerging methodology.

Accellera should release the first version of the graph-based verification methodology standard by the end of 2017. If you are considering adopting tools today, an easy migration path will be provided from existing specification languages to those expected to be contained within the released standard.

About Adnan Hamid

Adnan Hamid is the founder and CEO of Breker and the inventor of its core technology. Under his leadership, Breker has become a market leader in functional verification technologies for complex systems-on-chips (SoCs), and Portable Stimulus in particular. Breker is an active Accellera member on the Portable Stimulus Working Group, taking a lead in defining the specifications of the upcoming Portable Stimulus Standard. Breker's expertise in the automation of self-verifying testcases is setting the bar for the completeness of verification for SoCs.

Internet of Things (IoT) and EDA

Tuesday, April 8th, 2014

Gabe Moretti, Contributing Editor

A number of companies contributed to this article, in particular: Apache Design Solutions, ARM, Atrenta, Breker Verification Systems, Cadence, Cliosoft, Dassault Systemes, Mentor Graphics, Onespin Solutions, Oski Technologies, and Uniquify.

In his keynote speech at the recent CDNLive Silicon Valley 2014 conference, Lip-Bu Tan, Cadence CEO, cited mobility, cloud computing, and Internet of Things as three key growth drivers for the semiconductor industry. He cited industry studies that predict 50 billion devices by 2020.  Of those three, IoT is the latest area attracting much conversation.  Is EDA ready to support its growth?

The consensus is that in many aspects EDA is ready to provide the tools required for IoT implementation. David Flynn, an ARM Fellow, put it best: “For the most part, we believe EDA is ready for IoT. Products for IoT are typically not designed on ‘bleeding-edge’ technology nodes, so implementation can benefit from all the years of development of multi-voltage design techniques applied to mature semiconductor processes.”

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systèmes, observed that, conversely, companies that will be designing devices for the IoT may not be ready. “Traditional EDA is certainly ready for the core design, verification, and implementation of the devices that will connect to the IoT. Many of the devices that will connect to the IoT will not be the typical designs that are pushing Moore’s Law. Many may be smaller, lower-performance devices that do not necessarily need the latest and greatest process technology. To be cost effective at producing these devices, companies will rely heavily on IP in order to assemble devices quickly to meet consumer and market demands. In fact, we may begin to see companies that traditionally have not been silicon developers getting into chip design. We will see explosive growth in the ecosystem of companies producing IP to support these new devices.”

Vic Kulkarni, Senior VP and GM, Apache Design, Inc., put it as follows: “There is nothing ‘new or different’ about the functionality of EDA tools for IoT applications, and EDA tool providers have to think of this market opportunity from the perspective of mainstream users: newer licensing and pricing models for the ‘mass market,’ i.e., low-cost and low-touch technical support, data and IP security, and the overall ROI.”

But IoT also requires new approaches to design and offers new challenges.  David Kelf, VP of Marketing at Onespin Solutions provided a picture of what a generalized IoT component architecture is likely to be.

Figure 1: Generalized IoT component architecture (courtesy Onespin Solutions)

He went on to state: “The included graphic shows an idealized projection of the main components in a general-purpose IoT platform. At a minimum, this platform will include several analog blocks and a processor able to handle protocol stacks for wireless communication and the Internet Protocol (IP). It will need some sensor-related processing, an extremely effective power control solution and, possibly, another capability such as GPS or RFID, or even a Long Term Evolution (LTE) 4G baseband.”

Jin Zhang, Senior Director of Marketing at Oski Technologies, observed that “If we parse the definition of IoT, we can identify three key characteristics:

  1. IoT devices can sense and gather data automatically from the environment
  2. IoT devices can interact and communicate among themselves and with the environment
  3. IoT devices can process all the data and perform the right action with or without human interaction

These imply that sensors of all kinds for temperature, light, movement and human vitals; fast, stable and extensive communication networks; light-speed processing power; and massive data storage devices and centers will become the backbone of this infrastructure.

The realization of IoT relies on the semiconductor industry to create even larger and more complex SoC or Network-on-Chip devices to support all the capabilities. This, in turn, will drive the improvement and development of EDA tools to support the creation, verification and manufacturing of these devices, especially verification where too much time is spent on debugging the design.”

Power Management

IoT will require advanced power management, and EDA companies are addressing the problem. Rob Aitken, also an ARM Fellow, said: “We see an opportunity for dedicated flows around near-threshold and low-voltage operation, especially in clock tree synthesis and hold time measurement. There’s also an opportunity for per-chip voltage delivery solutions that determine on a chip-by-chip basis what the ideal operating voltage should be and enable that voltage to be delivered via a regulator, ideally on-chip but possibly off-chip as well. The key is that existing EDA solutions can cope, but better designs can be obtained with improved tools.”

Kamran Shah, Director of Marketing for Embedded Software at Mentor Graphics, noted: “SoC suppliers are investing heavily in introducing power-saving features including Dynamic Voltage and Frequency Scaling (DVFS), hibernate power-saving modes, and peripheral clock gating techniques. Early in the design phase, it’s now possible to use transaction-level modeling (TLM) tools such as Mentor Graphics Vista to iteratively evaluate the impact of hardware and software partitioning, bus implementations, memory control management, and hardware accelerators in order to optimize for power consumption.”

Figure 2: IoT Power Analysis (courtesy of Mentor Graphics)
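For readers unfamiliar with DVFS, the toy Python sketch below illustrates the basic trade-off Shah refers to: dynamic power scales roughly with activity, capacitance, voltage squared and frequency, so dropping to a slower operating point when the workload allows saves energy. The operating points, utilization figures and governor policy are invented; this is a conceptual illustration only, not how Vista or any real SoC governor works.

```python
# Operating points and workloads below are invented; dynamic power is taken
# to scale roughly with activity * C * V^2 * f.
OPERATING_POINTS = [      # (frequency in MHz, supply voltage in V)
    (1000, 1.10),
    (600,  0.95),
    (200,  0.80),
]
F_MAX = max(f for f, _ in OPERATING_POINTS)

def dynamic_power(freq_mhz, volts, cap=1.0, activity=0.2):
    return activity * cap * volts ** 2 * freq_mhz      # arbitrary units

def pick_operating_point(utilization):
    """Crude governor: choose the slowest point that still meets demand."""
    for freq, volts in sorted(OPERATING_POINTS):       # ascending frequency
        if freq >= utilization * F_MAX:
            return freq, volts
    return max(OPERATING_POINTS)                       # fall back to the fastest point

for util in (0.15, 0.50, 0.90):
    f, v = pick_operating_point(util)
    print(f"utilization {util:.2f} -> {f:4d} MHz @ {v:.2f} V, "
          f"power ~ {dynamic_power(f, v):.0f} units")
```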

Bernard Murphy, Chief Technology Officer at Atrenta, pointed out that: “Getting to ultra-low power is going to require a lot of dark silicon, and that will require careful scenario modeling to know when functions can be turned off. I think this is going to drive a need for software-based system power modeling, whether in virtual models, TLM (transaction-level modeling), or emulation. Optimization will also create demand for power sensitivity analysis – which signals / registers most affect power and when. Squeezing out picoAmps will become as common as squeezing out microns, which will stimulate further automation to optimize register and memory gating.”
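A crude sketch of the kind of power sensitivity ranking Murphy describes might look like the following. The register names, toggle rates and capacitances are invented; a real flow would draw them from simulation activity data and extracted parasitics, but the ranking idea, switching energy of roughly 0.5·C·V² per transition weighted by activity, is the same.

```python
# Register names, toggle rates and capacitances are invented; the ranking
# uses the standard E ~ 0.5 * C * V^2 per transition relationship.
VDD = 0.9                      # volts
REGISTERS = {                  # name: (toggles per 1,000 cycles, load cap in fF)
    "fifo_wr_ptr":   (900, 12.0),
    "dma_status":    ( 40,  8.0),
    "crc_shift_reg": (980, 20.0),
    "dbg_counter":   (999,  5.0),
}

def switching_energy(toggles, cap_ff):
    """Approximate switching energy (fJ) over 1,000 cycles."""
    return 0.5 * cap_ff * VDD ** 2 * toggles

ranked = sorted(REGISTERS.items(),
                key=lambda item: switching_energy(*item[1]),
                reverse=True)

print("best candidates for register/clock gating:")
for name, (toggles, cap) in ranked:
    print(f"  {name:<14} ~{switching_energy(toggles, cap):7.1f} fJ per 1k cycles")
```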

Verification and IP

Verifying either one component or a subset of connected components will be more challenging. Components in general will have to be designed so that they can be “fixed” remotely, meaning either fixing a real bug or downloading an upgrade. Intel is already marketing such a solution, which is not restricted to IoT applications. Also, networks will be heterogeneous by design, thus significantly complicating verification.

Ranjit Adhikary, Director of Marketing at Cliosoft, noted that “From a SoC designer’s perspective, the ‘Internet of Things’ means an increase in configurable mixed-signal designs. Since devices now must have a longer life span, they will need to have a software component associated with them that can be upgraded as the need arises over their life spans. Designs created will have a blend of analog, digital and RF components, and designers will use tools from different EDA companies to develop different components of the design. The design flow will increasingly become more complex, and the handshake between the digital and analog designers in the course of creating mixed-signal designs has to become better. The emphasis on mixed-signal verification will only increase to ensure all corner cases are caught early in the design cycle.”

Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, has a similar perspective, but he is more pessimistic. He noted that “Many IoT nodes will be located in hard-to-reach places, so replacement or repair will be highly unlikely. Some nodes will support software updates via the wireless network, but this is a risky proposition since there’s not much recourse if something goes wrong. A better approach is a bulletproof SoC whose hardware, software, and combination of the two have been thoroughly verified. This means that the SoC verification team must anticipate, and test for, every possible user scenario that could occur once the node is in operation.”

One solution, according to Mr. Anderson, is “automatic generation of C test cases from graph-based scenario models that capture the design intent and the verification space. These test cases are multi-threaded and multi-processor, running realistic user scenarios based on the functions that will be provided by the IoT nodes containing the SoC. These test cases communicate and synchronize with the UVM verification components (UVCs) in the testbench when data must be sent into the chip or sent out of the chip and compared with expected results.”
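Conceptually, such a generator walks per-processor scenario paths and emits C code that drives them. The hypothetical Python sketch below shows the shape of that output, a test function per processor whose steps could synchronize with testbench UVCs; it is only an illustration of the idea, not a reflection of Breker's actual implementation, and every name in it is made up.

```python
# Everything here is hypothetical: the scenario paths, the function names
# and the header they would come from.
SCENARIOS = {
    "cpu0": ["init_mac", "send_packet", "check_tx_status"],
    "cpu1": ["init_sensor", "read_sample", "check_sample_range"],
}

def emit_c_tests(scenarios):
    """Turn each per-processor scenario path into a C test function."""
    lines = ['#include "soc_test_api.h"   /* hypothetical test API */', ""]
    for cpu, path in scenarios.items():
        lines.append(f"void test_{cpu}(void) {{")
        for step in path:
            lines.append(f"    {step}();   /* may sync with testbench UVCs via a mailbox */")
        lines.append("}")
        lines.append("")
    return "\n".join(lines)

print(emit_c_tests(SCENARIOS))
```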

Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify, noted that “Connecting the unconnected is no small challenge and requires complex and highly sophisticated SoCs. Yet, at the same time, unit costs must be small so that high volumes can be achieved. Arguably, the most critical IP for these SoCs to operate correctly is the DDR memory subsystem. In fact, it is ubiquitous in SoCs: where there’s a CPU and the need for more system performance, there’s a memory interface. As a result, it needs to be fast, low power and small to keep costs low. The SoC’s processors spend the majority of their cycles reading and writing to DDR memory. This means that all of the components, including the DDR controller, PHY and I/O, need to work flawlessly, as does the external DRAM memory device(s). If there’s a problem with the DDR memory subsystem, such as jitter, data/clock skew, setup/hold time or complicated physical implementation issues, the IoT product may work intermittently or not at all. Consequently, system yield and reliability are of utmost concern.”

He went on to say: “The topic may be the Internet of Things and EDA, but the big winners in the race for IoT market share will be providers of all kinds of IP. The IP content of SoC designs often reaches 70% or more, and SoCs are driving IoT, connecting the unconnected. The big three EDA vendors know this, which is why they have gobbled up some of the largest and best known IP providers over the last few years.”

Conclusion

Things that seem simple often turn out not to be.  Implementing IoT will not be simple because as the implementation goes forward, new and more complex opportunities will present themselves.

Vic Kulkarni said: “I believe that EDA solution providers have to go beyond their “comfort zone” of being hardware design tool providers and participate in the hierarchy of IoT above the “Devices” level, especially in the “Gateway” arena. There will be opportunities for providing big data analytics, security stack, efficient protocol standard between “Gateway” and “Network”, embedded software and so on. We also have to go beyond our traditional customer base to end-market OEMs.”

Frank Schirrmeister, product marketing group director at Cadence, noted that “The value chain for the Internet of Things consists not only of the devices that create data. The IoT also includes the hubs that collect data and upload data to the cloud. Finally, the value chain includes the cloud and the big data analytics it stores.  Wired/wireless communications glues all of these elements together.”

Verification Management

Tuesday, February 11th, 2014

Gabe Moretti, Contributing Editor

As we approach DVCon, it is timely to look at how our industry approaches managing design verification.

Much has been said about the tools, but I think not enough resources have been dedicated to the issue of managing and measuring verification. John Brennan, Product Director in the Verification Group at Cadence, observed that verification used to be a whole lot easier. It used to be that you sent some stimulus to your design, viewed a few waveforms, collected some basic data by looking at the simulator log, and then moved on to the next part of the design to verify. The problem with all of this is that it is simply too much information, and with randomness comes a lack of clarity about what is actually tested and what is not. He continued by stating that you cannot verify every state and transition in your design; it is simply impossible, the magnitude is too large. So what do you verify, and how are IP and chip suppliers addressing the challenge? He said Cadence sees several trends emerging that will help users with this daunting task: using collaboration-based environments, using the right tool for the job, having deep analytics and visibility, and deploying feature-based verification.

My specific questions to the panelists follow. I chose a representative answer from each of them.

* How does a verification group manage the verification process and assess risk?

Dave Kelf, Marketing Director at OneSpin Solutions, opened the detailed discussion by describing the present situation: whereas design management follows a reasonably predictable path, verification management is still based on the subjective, unpredictable assessment of when enough testing is enough.

Verification management is all about predicting the time and resources required to reach the moving target of verification closure. However, there is still no concrete method available to predict when a design is fully, exhaustively, 100% tested. Today’s techniques all have an element of uncertainty, which translates to the risk of an undetected bug. The best a verification manager can do is to assess the progress point at which the probability of a remaining bug is infinitesimally small.

For a large design block, a combination of test coverage results, a comparison of the test spec against the simulations performed, time since the last bug discovery, verification time spent and the end of the schedule may all play into this decision. For a complete SoC, running the entire system, including software, on an emulator for days on end might be the only way, today, to inspire confidence in a working design.

If we were to solve just one remaining problem in verification, achieving a deep and meaningful understanding of verification coverage pertaining to the original functional specification should be it.

*  What is the role of verification coverage in providing metrics toward verification closure, and is this proving useful?

Thomas L. Anderson, Vice President of Marketing, Breker Verification Systems answered that coverage is, frankly, all that the verification team has to assess how well the chip has been exercised. Code coverage is a given, but in recent years, functional coverage has gained much more prominence. The most recent forms of coverage are derived automatically, for example, from assertions or graph-based scenario models, and so provide much return for little investment.
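As an illustration of coverage that comes essentially for free from a scenario model, the hedged sketch below treats every start-to-end path through a small, invented graph as a coverage goal and reports which ones the generated tests reached. Real tools track far richer cross-coverage, but the principle that the model itself defines the coverage targets is the same.

```python
# The graph and the executed paths are invented; real tools track far richer
# cross-coverage, but the principle is the same.
GRAPH = {
    "start":  ["read", "write"],
    "read":   ["verify"],
    "write":  ["verify"],
    "verify": [],
}

def all_paths(graph, node="start", prefix=()):
    """Enumerate every start-to-end path; each path is one coverage goal."""
    prefix = prefix + (node,)
    if not graph[node]:
        return [prefix]
    return [p for nxt in graph[node] for p in all_paths(graph, nxt, prefix)]

goals = set(all_paths(GRAPH))                    # coverage model, derived automatically
executed = {("start", "read", "verify")}         # paths the generated tests actually hit

print(f"scenario path coverage: {len(goals & executed)}/{len(goals)}")
for missing in sorted(goals - executed):
    print("  uncovered:", " -> ".join(missing))
```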

*  How has design evolution affected verification management? Examples include IP usage and SoC trends.

Rajeev Ranjan, CTO of Jasper Design Automation, observed that as designs get bigger in general, and as they incorporate more and more IP blocks developed by multiple internal and external parties, integration verification becomes a very large concern. Specific verification tasks include interface validation, connectivity checking, functional verification of IPs in the face of hierarchical power management strategies, and ensuring that the hardware coherency protocols do not cause any deadlock in the overall system. Additionally, depending on the end market for the system, security path verification can also be a significant, system-wide challenge.

*  What should be the first step in preparing a verification plan?

Tom Fitzpatrick, verification evangelist at Mentor Graphics, has dedicated many years to the study and solution of verification issues. He noted that the first step in preparing a verification plan is to understand what the design is supposed to do and under what conditions it’s expected to do it. Verification is really the art of modeling the “real world” in which the device is expected to operate, so it’s important to have that understanding. After that, it’s important to understand the difference between “block-level” and “system-level” behaviors that you want to test. Yes, the entire system must be able to, for example, boot an RTOS and process data packets or whatever, but there are a lot of specifics that can be verified separately at the block or subsystem level before you just throw everything together and see if it works. Understanding what pieces of the lower-level environments can be reused and will prove useful at the system level, and being able to reuse those pieces effectively and efficiently, is one key to verification productivity.

Another key is the ability to verify specific pieces of functionality as early as possible in the process and use that information to avoid targeting that functionality at higher levels. For example, using automated tools at the block level to identify reset or X-propagation issues, or state machine deadlock conditions, eliminates the need to try and create stimulus scenarios to uncover these issues. Similarly, being able to verify all aspects of a block’s protocol implementation at the block level means that you don’t need to waste time creating system-level scenarios to try and get specific blocks to use different protocol modes. Identifying where best to verify the pieces of your verification plan allows every phase of your verification to be more efficient.

*  Are criteria available to determine which tools need to be considered for various project phases? Which tools are proving effective? Is budget a consideration?

Yuan Lu, Chief Verification Architect at Atrenta Inc., contributed the following. Verification teams deploy a variety of tools to address various categories of verification issues, depending on how you break your design into multiple blocks and what you want to test at each level of hierarchy. At a macro level, comprehensive, exhaustive verification is expected at the block/IP level. However, at the SoC level, functions such as connectivity checking, heartbeat verification, and hardware/software co-verification are performed.

Over the years, there has emerged some level of consensus within the industry as to what type of tools need to be used for verification at the IP and SoC levels. But, so far, there is no perfect way to hand off IPs to the SoC team. The ultimate goal is to ensure that the IP team communicates to the SoC team about what has been tested and how the SoC team can use this information to figure out if the IP level verification was sufficient to meet the SoC needs.

*  Not long ago, the Universal Verification Methodology (UVM) was unveiled with the promise of improving verification management, among other advantages. How has that worked?

Herve Alexanian, Engineering Director, Advanced Dataflow Group at Sonics, Inc., pointed out that as an internal protocol is thoroughly specified, including configurable options, a set of assertions can naturally be written or generated, depending on the degree of configurability. Along the same lines, functional coverage points and reference (UVM) sequences are also defined. These definitions are the best way to enter modern verification approaches, allowing the most benefit from formal techniques and verification planning. Although some may see such definitions as too rigid to accommodate changes in requirements, making a change to a fundamental interface is intentionally costly, as it is in software. It implies additional scrutiny of how architectural changes are implemented, in a way that tends to minimize the functional corners that later prove so costly to verify.
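The following hypothetical sketch shows the flavor of what Alexanian describes: a small, invented protocol spec from which assertion and coverage strings are produced mechanically. It is not Sonics' methodology and the emitted SVA-style strings are illustrative only; the point is that once the protocol and its options are captured, checks and coverage follow from the spec rather than being written by hand.

```python
# The protocol, its options and the emitted SVA-like strings are all made up;
# the point is that checks and coverage follow mechanically from the spec.
PROTOCOL = {
    "name": "bus_lite",
    "data_width": 64,
    "has_parity": True,
    "max_wait_cycles": 8,
}

def generate_checks(spec):
    checks = [
        # a request must be granted within the configured wait budget
        f"assert property (req |-> ##[1:{spec['max_wait_cycles']}] gnt);",
        # data must be known whenever it is marked valid
        f"assert property (valid |-> !$isunknown(data[{spec['data_width'] - 1}:0]));",
    ]
    if spec["has_parity"]:
        checks.append("assert property (valid |-> parity == ^data);")
    return checks

def generate_coverage(spec):
    # observe how long grants actually take, one cover point per legal latency
    return [f"cover property (req ##{n} gnt);"
            for n in range(1, spec["max_wait_cycles"] + 1)]

for line in generate_checks(PROTOCOL) + generate_coverage(PROTOCOL):
    print(line)
```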

*  What areas of verification need to be improved to reduce verification risk and ease the management burden?

Vigyan Singhal, President and CEO, Oski Technology, said that for the most part, current verification methodology relies on simulation and emulation for functional verification. As shown consistently in the 2007, 2010 and 2012 Wilson Research Group surveys sponsored by Mentor Graphics, two thirds of projects are behind schedule and functional bugs are still the main culprit for chip respins. This shows that the current methodology carries significant verification risk.

Verification teams today spend most of their time in subsystem (63.9%) and full-chip (36.1%) simulation, and most of that time is spent in debugging (36%). This is not surprising, as debugging at the subsystem and chip level with thousands of long-cycle traces can take a long time.

The solution to the challenge is to improve block-level design quality so as to reduce the verification and management burden at the subsystem and chip level. Formal property verification is a powerful technique for block-level verification. It is exhaustive and can catch all corner-case bugs. While formal verification adds another step in the verification flow with additional management tasks to track its progress, the time and effort spent will lead to reduced time and effort at the subsystem and chip level, and improve overall design quality. With short time-to-market windows, design teams need to guarantee first-silicon success. We believe increased formal usage in the verification flow will reduce verification risks and ease management burden.

As he had opened the discussion, John Brennan closed it by noting that functional verification has no single silver bullet; it takes multiple engineers, operating across multiple heterogeneous engines, with multiple analytics. This multi-specialist verification is here now, and the VPM tools that support multi-specialist verification are needed now.