
Posts Tagged ‘Cadence’


Cadence to Acquire Jasper Design Automation

Wednesday, April 23rd, 2014

By Hamilton Carter, Verification-Low Power Editor

Earlier this week, Cadence announced that it has entered into a definitive agreement to purchase Jasper Design Automation. Both companies focus on SoC verification, and they cited their shared customer base, along with the fact that verification now accounts for more than 70% of the budget on most SoC development projects, as reasons for the transaction. Cadence will pay $170 million for the privately held Jasper, which currently shows $24 million in cash and cash equivalents on its balance sheet. Cadence will fund the purchase using cash on hand and an existing revolving line of credit.

Jasper is a market leader in the growing formal analysis verification sector.  It has been very successful in distributing verification solutions, termed Verification Apps, built on top of its formal engine, JasperGold.  Cadence hopes to expand the formal analysis sector more quickly by combining Jasper’s expertise with Cadence’s existing worldwide sales team.

Both companies are excited about the potential for integrating Jasper’s tools with the metric driven verification flow provided by Cadence.  Charlie Huang, senior vice president of the System & Verification Group and Worldwide Field Operations at Cadence, said about the acquisition, “Jasper’s formal analysis solutions are used by customers today alongside Cadence’s metric-driven verification flow to form a broad verification solution. We look forward to welcoming Jasper’s strong formal development expertise and skilled team to Cadence.”

Kathryn Kranen, president and CEO of Jasper,  commented, “The verification technologies, when combined, will benefit customers through a comprehensive metric-driven verification approach that unites formal and dynamic techniques, realizing the strength of each and leveraging the integration between them.”

The acquisition is expected to close in the second quarter of fiscal 2014, subject to typical closing conditions and regulatory approvals.  There is no word yet as to when customers can expect a production-ready integration of Jasper and Cadence technologies.

Blog Review – Mon. April 21 2014

Monday, April 21st, 2014

Post silicon preview; Apps to drive for; Motivate to educate; Battery warning; Break it up, bots. By Caroline Hayes, Senior Editor.

Gabe Moretti attended the Freescale Technology Forum and found the ARM Cortex-A57 Carbon Performance Analysis Kit (CPAK), which previews post-silicon performance, pre-silicon.

In a considered blog post, Joel Hoffmann, Intel, looks at the top four car apps and what they mean for system designers. He knows what he is talking about: he is preparing for the panel “Automotive Suppliers: Collaborate or Die” at Open Automotive 14 in Sweden next month.

How to get the next generation of EDA-focused students to commit is the topic of a short keynote at this year’s DAC by Rob Rutenbar, professor of Computer Science at the University of Illinois. Richard Goering, Cadence, reports on progress so far with industry collaboration and looks ahead.

Consider managing power in SoCs above all else, urges Scott Seiden, Sonics, who sounds a little frustrated with his cell phone.

Michael Posner, Synopsys, revels in a good fight – between robots in the FIRST student robot design competition. Engaging and educational.

Internet of Things (IoT) and EDA

Tuesday, April 8th, 2014

Gabe Moretti, Contributing Editor

A number of companies contributed to this article, in particular: Apache Design Solutions, ARM, Atrenta, Breker Verification Systems, Cadence, Cliosoft, Dassault Systèmes, Mentor Graphics, OneSpin Solutions, Oski Technologies, and Uniquify.

In his keynote speech at the recent CDNLive Silicon Valley 2014 conference, Lip-Bu Tan, Cadence CEO, cited mobility, cloud computing, and Internet of Things as three key growth drivers for the semiconductor industry. He cited industry studies that predict 50 billion devices by 2020.  Of those three, IoT is the latest area attracting much conversation.  Is EDA ready to support its growth?

The consensus is that in many respects EDA is ready to provide the tools required for IoT implementation.  David Flynn, an ARM Fellow, put it best: “For the most part, we believe EDA is ready for IoT.  Products for IoT are typically not designed on ‘bleeding-edge’ technology nodes, so implementation can benefit from all the years of development of multi-voltage design techniques applied to mature semiconductor processes.”

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systèmes, observed that, conversely, the companies designing devices for IoT may not be ready.  “Traditional EDA is certainly ready for the core design, verification, and implementation of the devices that will connect to the IoT.  Many of the devices that will connect to the IoT will not be the typical designs that are pushing Moore’s Law.  Many of the devices may be smaller, lower performance devices that do not necessarily need the latest and greatest process technology.  To be cost effective at producing these devices, companies will rely heavily on IP in order to assemble devices quickly in order to meet consumer and market demands.  In fact, we may begin to see companies that traditionally have not been silicon developers getting in to chip design. We will see an explosive growth in the IP ecosystem of companies producing IP to support these new devices.”

Vic Kulkarni, Senior VP and GM, Apache Design, Inc., put it as follows: “There is nothing ‘new or different’ about the functionality of EDA tools for IoT applications, but EDA tool providers have to think of this market opportunity from the perspective of mainstream users: newer licensing and pricing models for the ‘mass market’, i.e. low-cost and low-touch technical support, data and IP security, and the overall ROI.”

But IoT also requires new approaches to design and presents new challenges.  David Kelf, VP of Marketing at OneSpin Solutions, provided a picture of what a generalized IoT component architecture is likely to look like.

Figure 1: Generalized IoT component architecture (courtesy of OneSpin Solutions)

He went on to state: “The included graphic shows an idealized projection of the main components in a general purpose IoT platform. At a minimum, this platform will include several analog blocks, a processor able to handle protocol stacks for wireless communication and the Internet Protocol (IP). It will need some sensor-required processing, an extremely effective power control solution, and possibly, another capability such as GPS or RFID and even a Long Term Evolution (LTE) 4G Baseband.”
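To make Kelf’s block list concrete, here is a minimal Python sketch of such a platform description as a designer might capture it during early planning. The block names, domains and attributes are invented for illustration; they are not from OneSpin.

    from dataclasses import dataclass

    @dataclass
    class Block:
        name: str
        domain: str      # "analog", "digital" or "mixed"
        always_on: bool  # must the block survive the deepest sleep state?

    # Hypothetical inventory of the generalized platform in Figure 1
    iot_platform = [
        Block("sensor_afe",   "analog",  True),   # analog front-ends
        Block("adc",          "mixed",   True),
        Block("protocol_cpu", "digital", False),  # wireless + IP stacks
        Block("sensor_dsp",   "digital", False),  # sensor-required processing
        Block("power_ctrl",   "digital", True),   # power control solution
        Block("lte_baseband", "mixed",   False),  # optional LTE 4G
        Block("gps_rfid",     "mixed",   False),  # optional GPS or RFID
    ]

    # Example early-planning query: what must stay powered in deep sleep?
    print([b.name for b in iot_platform if b.always_on])
    # ['sensor_afe', 'adc', 'power_ctrl']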

Jin Zhang, Senior Director of Marketing at Oski Technologies, observed that “If we parse the definition of IoT, we can identify three key characteristics:

  1. IoT devices can sense and gather data automatically from the environment
  2. IoT devices can interact and communicate among themselves and with the environment
  3. IoT devices can process all the data and perform the right action with or without human interaction

These imply that sensors of all kinds (for temperature, light, movement and human vitals), fast, stable and extensive communication networks, light-speed processing power, and massive data storage devices and centers will become the backbone of this infrastructure.

The realization of IoT relies on the semiconductor industry to create even larger and more complex SoC or Network-on-Chip devices to support all the capabilities. This, in turn, will drive the improvement and development of EDA tools to support the creation, verification and manufacturing of these devices, especially verification where too much time is spent on debugging the design.”

Power Management

IoT will require advanced power management, and EDA companies are addressing the problem.  Rob Aitken, also an ARM Fellow, said: “We see an opportunity for dedicated flows around near-threshold and low voltage operation, especially in clock tree synthesis and hold time measurement. There’s also an opportunity for per-chip voltage delivery solutions that determine on a chip-by-chip basis what the ideal operation voltage should be and enable that voltage to be delivered via a regulator, ideally on-chip but possibly off-chip as well. The key is that existing EDA solutions can cope, but better designs can be obtained with improved tools.”
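A toy Python sketch makes the per-chip voltage idea concrete: read an on-chip process monitor, then pick the lowest regulator setting predicted to meet timing. The linear speed model, frequencies and voltage steps below are invented for illustration, not ARM’s.

    def select_vdd(ring_osc_mhz, required_mhz=100.0,
                   vdd_steps=(0.60, 0.70, 0.80, 0.90, 1.00)):
        """Pick the lowest supply voltage predicted to close timing
        for this particular die (invented linear speed model)."""
        for vdd in vdd_steps:
            predicted_fmax = ring_osc_mhz * vdd  # scaled from nominal 1.0 V
            if predicted_fmax >= required_mhz:
                return vdd
        return vdd_steps[-1]  # slow silicon: fall back to nominal voltage

    print(select_vdd(150.0))  # fast die -> 0.7 (near-threshold territory)
    print(select_vdd(105.0))  # slow die -> 1.0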

Kamran Shah, Director of Marketing for Embedded Software at Mentor Graphics, noted: “SoC suppliers are investing heavily in introducing power saving features including Dynamic Voltage Frequency Scaling (DVFS), hibernate power saving modes, and peripheral clock gating techniques. Early in the design phase, it’s now possible to use Transaction Level Models (TLM) tools such as Mentor Graphics Vista to iteratively evaluate the impact of hardware and software partitioning, bus implementations, memory control management, and hardware accelerators in order to optimize for power consumption.”

Figure 2: IoT Power Analysis (courtesy of Mentor Graphics)
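As a rough illustration of the DVFS technique Shah mentions, the Python sketch below scales an operating point to the observed load. The operating points and the 80% utilization target are invented, and real governors are considerably more subtle.

    # (frequency MHz, voltage V) pairs; invented for illustration
    OPERATING_POINTS = [(100, 0.70), (400, 0.85), (800, 1.00)]

    def next_operating_point(utilization_at_fmax):
        """Pick the slowest point that keeps projected utilization < 80%."""
        fmax = OPERATING_POINTS[-1][0]
        for freq, vdd in OPERATING_POINTS:
            if utilization_at_fmax * (fmax / freq) < 0.80:
                return freq, vdd
        return OPERATING_POINTS[-1]

    print(next_operating_point(0.05))  # light load -> (100, 0.7)
    print(next_operating_point(0.50))  # heavy load -> (800, 1.0)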

Bernard Murphy, Chief Technology Officer at Atrenta, pointed out: “Getting to ultra-low power is going to require a lot of dark silicon, and that will require careful scenario modeling to know when functions can be turned off. I think this is going to drive a need for software-based system power modeling, whether in virtual models, TLM (transaction-level modeling), or emulation. Optimization will also create demand for power sensitivity analysis – which signals / registers most affect power and when. Squeezing out picoAmps will become as common as squeezing out microns, which will stimulate further automation to optimize register and memory gating.”
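Murphy’s power sensitivity analysis can be caricatured in a few lines of Python: rank registers by their estimated contribution to dynamic power, so the best gating candidates surface first. The toggle counts and load factors are invented for illustration.

    # toggles per 1,000 cycles and relative capacitive load (both invented)
    activity = {
        "fifo_wr_ptr":   {"toggles": 900, "cap_load": 1.0},
        "dma_state":     {"toggles": 120, "cap_load": 0.6},
        "debug_counter": {"toggles": 990, "cap_load": 0.3},
    }

    # P_dyn is proportional to activity * C * V^2 * f; V and f are shared
    # here, so registers can be ranked by activity * capacitance alone.
    def score(sig):
        return sig["toggles"] * sig["cap_load"]

    ranked = sorted(activity, key=lambda n: score(activity[n]), reverse=True)
    print(ranked)  # best gating candidates first: ['fifo_wr_ptr', ...]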

Verification and IP

Verifying either one component or a subset of connected components will be more challenging.  Components in general will have to be designed so that they can be “fixed” remotely, which means either fixing a real bug or downloading an upgrade.  Intel is already marketing such a solution, which is not restricted to IoT applications.  Also, networks will be heterogeneous by design, significantly complicating verification.

Ranjit Adhikary, Director of Marketing at Cliosoft, noted that “From a SoC designer’s perspective, “Internet of Things” means an increase in configurable mixed-signal designs. Since devices now must have a larger life span, they will need to have a software component associated with them that could be upgraded as the need arises over their life spans. Designs created will have a blend of analog, digital and RF components and designers will use tools from different EDA companies to develop different components of the design. The design flow will increasingly become more complex and the handshake between the digital and analog designers in the course of creating mixed-signal designs has to become better. The emphasis on mixed-signal verification will only increase to ensure all corner cases are caught early on in the design cycle.”

Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, has a similar perspective, but he is more pessimistic.  He noted that “Many IoT nodes will be located in hard-to-reach places, so replacement or repair will be highly unlikely. Some nodes will support software updates via the wireless network, but this is a risky proposition since there’s not much recourse if something goes wrong. A better approach is a bulletproof SoC whose hardware, software, and combination of the two have been thoroughly verified. This means that the SoC verification team must anticipate, and test for, every possible user scenario that could occur once the node is in operation.”

One solution, according to Mr. Anderson, is “automatic generation of C test cases from graph-based scenario models that capture the design intent and the verification space. These test cases are multi-threaded and multi-processor, running realistic user scenarios based on the functions that will be provided by the IoT nodes containing the SoC. These test cases communicate and synchronize with the UVM verification components (UVCs) in the testbench when data must be sent into the chip or sent out of the chip and compared with expected results.”
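The flavor of such graph-based generation can be sketched in Python: enumerate the paths through a graph of user-visible actions and emit one C test case per path. The graph below is invented and vastly simpler than a real scenario model; it is not Breker’s implementation.

    # invented scenario graph: each edge is a legal next user action
    SCENARIO_GRAPH = {
        "boot":          ["config_radio", "config_sensor"],
        "config_radio":  ["send_packet"],
        "config_sensor": ["read_sensor"],
        "send_packet":   ["enter_sleep"],
        "read_sensor":   ["enter_sleep"],
        "enter_sleep":   [],
    }

    def paths(node="boot", prefix=()):
        """Yield every root-to-leaf path; each path is one user scenario."""
        prefix += (node,)
        if not SCENARIO_GRAPH[node]:
            yield prefix
            return
        for nxt in SCENARIO_GRAPH[node]:
            yield from paths(nxt, prefix)

    for i, path in enumerate(paths()):
        calls = "\n".join(f"    {step}();" for step in path)
        print(f"void test_case_{i}(void) {{\n{calls}\n}}\n")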

Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify, noted that “Connecting the unconnected is no small challenge and requires complex and highly sophisticated SoCs. Yet, at the same time, unit costs must be small so that high volumes can be achieved. Arguably, the most critical IP for these SoCs to operate correctly is the DDR memory subsystem. In fact, it is ubiquitous in SoCs –– where there’s a CPU and the need for more system performance, there’s a memory interface. As a result, it needs to be fast, low power and small to keep costs low.  The SoC’s processors spend the majority of cycles reading and writing to DDR memory. This means that all of the components, including the DDR controller, PHY and I/O, need to work flawlessly as does the external DRAM memory device(s). If there’s a problem with the DDR memory subsystem, such as jitter, data/clock skew, setup/hold time or complicated physical implementation issues, the IoT product may work intermittently or not at all. Consequently, system yield and reliability are of utmost concern.”

He went on to say: “The topic may be the Internet of Things and EDA, but the big winners in the race for IoT market share will be providers of all kinds of IP. The IP content of SoC designs often reaches 70% or more, and SoCs are driving IoT, connecting the unconnected. The big three EDA vendors know this, which is why they have gobbled up some of the largest and best known IP providers over the last few years.”

Conclusion

Things that seem simple often turn out not to be.  Implementing IoT will not be simple because as the implementation goes forward, new and more complex opportunities will present themselves.

Vic Kulkarni said: “I believe that EDA solution providers have to go beyond their “comfort zone” of being hardware design tool providers and participate in the hierarchy of IoT above the “Devices” level, especially in the “Gateway” arena. There will be opportunities for providing big data analytics, security stack, efficient protocol standard between “Gateway” and “Network”, embedded software and so on. We also have to go beyond our traditional customer base to end-market OEMs.”

Frank Schirrmeister, product marketing group director at Cadence, noted that “The value chain for the Internet of Things consists not only of the devices that create data. The IoT also includes the hubs that collect data and upload data to the cloud. Finally, the value chain includes the cloud and the big data analytics it stores.  Wired/wireless communications glues all of these elements together.”

Blog Review – April 07 2014

Monday, April 7th, 2014

Interesting comments from PADS users are teasingly outlined by John McMillan, Mentor Graphics.

An impressed Dominic Pajak, ARM, relates hopes for the IoT, describing the use of ARM mbed to control an industrial tank that can be monitored through Google Glass.

The cultural gap between engineers is examined by Brian Fuller, Cadence, as he reviews DesignCon 2014 and group director Frank Schirrmeister’s call for a “new species” of system designer.

More IoT thoughts, this time a warning from Divya Naidu Kol, Intel, with ideas of how to welcome the IoT without losing control of our information.

The Wheels of Industry Roll On

Wednesday, March 19th, 2014


Apart from pretzels and weissbier, Embedded World in Nuremberg was distinguished by the car, the factory and the IoT. By Caroline Hayes, Senior Editor.

On reflection, it should be no surprise that automotive themes were all around the exhibition halls of Embedded World 2014 in Nuremberg, Germany. The country produces 44% of the cars and light trucks manufactured in western Europe (defined as Europe apart from Turkey and the former communist bloc).

Software
My Embedded World began with a breakfast meeting at Cadence, where Senior Director, Frank Schirrmeister, outlined that the classic EDA model is evolving to what has been termed System Design Enablement. This shift is largely due to the growing electronics content in vehicles, he explained, with the proliferation of CAN (Controller Area Network), LIN (Local Interconnect Network), MOST (Media Oriented Systems Transport), Ethernet and FlexRay protocols. “For the consumer, it is satellite navigation systems; it is under-the-hood with software and there is an automotive element to the IoT (Internet of Things),” he said.
Traditionally, he explained, Cadence’s domain was the SoC (System on Chip), augmented by last year’s acquisitions of Tensilica, Cosmic Circuits and Evatronix. System Design Enablement is based around adding the software stack for applications: “We are broadening EDA to take the subsystem in the chip, and enable interaction between the hardware and software to broaden the system design environment.” Listing the company’s attributes for automotive, he ran down the list: Ethernet IP; virtual platforms, such as automated driver assist systems; FPGA platforms for prototyping; and mixed-signal for under-the-hood. The last two are also used in IoT. For embedded control, noted Schirrmeister, virtualization needs to adapt, with a virtual hardware methodology to meet the complexities and variants of cars today. “We need more simulation than ever before – automotive is the second largest growth area in the USA, Europe and Japan.”

Industry
More rugged modes of transport were occupying Daniel Piper, Senior Marketing Manager, EMEA, Kontron. The company was highlighting embedded boards based on the Intel Atom E3800 and COM Express modules based on the Intel Atom E3800 and Intel Celeron N2900/J1900.
There were also the first SMARC (Smart Mobility ARChitecture) CoMs (Computer on Modules), the German company’s first design based on an x86 Atom processor. The SMARC-sXBTi CoMs are scalable, said Piper, so that the same look and feel for software development can be shared across the industrial space, reaching from the automated shop floor to the connected tablet. For this, the benefits of SMARC, namely low power consumption and a small footprint, are exploited. Power consumption is 5 to 10W, and the low-profile, mini-computer form factor measures 82x50mm. “As well as long-term availability [an average of seven years], the software services, low profile and low power consumption suit the mobile applications and interfaces, such as eye camera interfaces, that are adapted from the ARM world but on an x86.” This, he says, extends SMARC into x86 modules and markets, using the same form factor, which brings ease of use for the end customer.

Industrial virtualization
Perhaps the most targeted announcement of Embedded World came from Intel, which unveiled virtualization platforms for industrial systems, with software and tools to create industrial embedded systems.
Jim Robinson, General Manager, Segments and Broad Market Division, Internet of Things Solutions Group, Intel, echoed Piper’s sentiment, that the industrial sector is looking for new, innovative ways to connect and build: “Bringing together what have typically been different sub-systems into a single computing platform, makes it easier – and more affordable – for OEMs, machine builders and system integrators to deliver consolidated, virtualized platforms,” he said.
The Intel Industrial Solutions Systems Consolidation Series bundles an embedded computer with an Intel Core i7 processor and a pre-integrated virtualization software stack, which includes Wind River Hypervisor, pre-configured to support three partitions, running Wind River VxWorks for real-time applications and Wind River Linux 5.0 for non-real-time applications. Robinson explained that the role of virtualization is to partition important workloads using multiple virtual machines. With it, developers can consolidate multiple discrete sub-systems onto a single device, reducing costs, increasing flexibility and reducing factory space, he said.
Intel System Studio software tools were also announced. They are designed for the Industrial Solutions System Consolidation Series to build and analyse industrial, embedded systems.

The IoT network
The software and tool suite are part of the company’s Developer Program for Internet of Things. This was another phrase often heard, and seen, around the halls in Nuremberg.
Dan Demers, Director of Marketing, congatec, was enthusiastic about IoT for its role in bridging mobile connectivity into the industrial space. “The longevity of industrial applications, and the IoT, from mobile devices into the industrial area demands customised solutions. Markets are opening up, because silicon innovations are bringing the power consumption down and the performance levels up. Historically, we could not put an x86 processor on such a board – now we can. Off-the-shelf is the easiest option, but not all off-the-shelf offerings have the I/O requirements, so the next best thing is a CoM.” There are, he says, many advantages to employing a CoM, such as reduced time to market, with around six to nine months development time.
ARM took a different approach to the IoT. Its IP dominates phones today, with 20 billion ARM-based mobile phone chips shipped to date, but the IoT market could be 100 billion connected devices. Chris Turner, Senior Product Marketing Manager, Processor Division, ARM, talked me through the software layer of the IoT and security. “The ARM ecosystem demonstrates the underlying code – the hypervisors – the protocol stack and the partnerships within the ecosystems. It is a long product plan to build a hypervisor for an architecture and this is where the ecosystem co-operation comes into play.” The ARM ecosystem includes more than 1,000 partners, developers and engineers developing ARM-based solutions and providing support.
Still with the IoT, Wind River had a creative booth, which demonstrated device connectivity in many forms, supported by its Intelligent Device Platform. As well as IoT protocol support, the scalable, secure development environment supports WiFi, Bluetooth and ZigBee.
It can be customized, which reduces development time. Provisioning and device management are handled via a web-based tool. Security in any connected network is critical, from public transport, where signaling has to be accurate and timely, to utilities, where companies not only want to prevent fraud or theft but also to protect against outages. A customized secure remote management feature provides encrypted communication between a device and cloud-based management. Lua and Java programming environments are supported to allow engineers to build gateway applications, connect, and send and receive data from the cloud.

Although there were several themes at this year’s exhibition and conference, the healthy competition between processors of choice, and the co-operation among software, chip and board companies to integrate and innovate, were encouraging, both in terms of the economy and in terms of the potential for the diverse, engaging design initiatives of tomorrow.

Blog Review – March 17 2014

Monday, March 17th, 2014

Robotic vehicles; CDNLive’s system focus; Rubik challenge; Design data stats; vManager renovation. By Caroline Hayes, Senior Editor

Start with a Tamiya tracked-vehicle chassis kit, add a gearbox and a PCB loaded with a Spartan-6 FPGA, present it to a boy-racer, and you have the contents of sleibso’s blog at Cadence. He takes a look at the Logitraxx tracked robotic vehicle on Kickstarter and revs up an explanatory video.

Brian Fuller reports on Cadence’s CDNLive event and Imagination president Krishna Yarlagadda’s keynote vision for a system-driven approach to power, area, cost, software, and security in IoT design.

David Gilday and Mike Dobson broke a Guinness World Record in the UK at the Big Bang Fair in Birmingham, with the CubeStormer 3 robot solving a Rubik’s cube in 3.25s – smashing the existing 5.27s record, set by the CubeStormer’s predecessor. Lorikate anticipates the event in the ARM Community blog, with videos of past escapades by the duo.

Sage advice from Gabe Moretti, on the news that DVCon Europe has been announced. He advocates new ideas and opportunities for growth and suggests the conference spreads its wings further afield.

Interesting stats from Graham Bell, Real Intent, with survey data about verification technology adoption, clock domain bugs, and other design issues.

Hamilton Carter commemorates 10 years since the introduction of vManager and looks forward to the latest version, even if it is unimaginatively named. His blog delves into the new tool’s features.

System Level Power Budgeting

Wednesday, March 12th, 2014

Gabe Moretti, Contributing Editor

I would like to start by thanking Vic Kulkarni, VP and GM at Apache Design, a wholly owned subsidiary of ANSYS; Bernard Murphy, Chief Technology Officer at Atrenta; and Steve Brown, Product Marketing Director at Cadence, for contributing to this article.

Steve began by noting that defining a system-level power budget for a SoC starts from chip package selection and the power supply or battery life parameters. This sets the power/heat constraint for the design, and is selected while balancing functionality of the device, performance of the design, and area of the logic and on-chip memories.

Unfortunately, as Vic points out, semiconductor design engineers must meet power specification thresholds, or power budgets, that are dictated by the electronic system vendors to whom they sell their products.  Bernard wrote that accurate pre-implementation IP power estimation is almost always required. Since almost all design today is IP-based, accurate estimation for IPs is half the battle. Today you can get power estimates for RTL with accuracy within 15% of silicon, as long as you are modeling representative loads.

With the insatiable demand for handling multiple scenarios (GPS, searches, music, extreme gaming, streaming video, rising download data rates and more, each captured in large FSDB files) on mobile devices, the dynamic power consumed by SoCs continues to rise in spite of the strides made in reducing static power consumption at advanced technology nodes. As shown in Figure 1, end-user demand for higher-performance mobile devices with longer battery life or a higher thermal limit is expanding the “power gap” between power budgets and estimated power consumption levels.

A typical “chip power budget” for a mobile application could be as follows (ref: mobile companies): an active power budget of 700mW @ 100Mbps for download with MIMO and 100mW in IDLE mode; leakage power below 5mW with all power domains off.
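Checking estimates against such a budget is simple arithmetic; the short Python sketch below compares per-mode estimates (invented placeholder numbers) against the figures quoted above.

    budget_mw = {
        "active_download": 700.0,  # @ 100 Mbps with MIMO
        "idle":            100.0,
        "leakage_all_off":   5.0,
    }

    estimate_mw = {                # e.g. from RTL power analysis (invented)
        "active_download": 742.5,
        "idle":             88.0,
        "leakage_all_off":   3.2,
    }

    for mode, limit in budget_mw.items():
        est = estimate_mw[mode]
        verdict = "OK" if est <= limit else f"OVER by {est - limit:.1f} mW"
        print(f"{mode:16s} {est:6.1f} mW vs budget {limit:6.1f} mW -> {verdict}")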

Accurate power analysis and optimization tools must be employed during all design phases, from system level and RTL to gate-level sign-off, to model and analyze power consumption levels and to provide methodologies that meet power budgets.

Figure 1: Skyrocketing performance vs. limited battery and thermal limit (ref. Samsung, Apache Tech Forum)

The challenge is to find ways to abstract with reasonable accuracy for different types of IP and different loads. Reasonable methods to parameterize power have been found for single and multiple processor systems, but not for more general heterogeneous systems. Absent better models, most methods used today are based on quite simple lookup tables, representing average consumption. Si2 is doing work in defining standards in this area.

Vic is convinced that careful power budgeting at a high level also enables design of the power delivery network in the downstream design flow. Power delivery with reliable and consistent power to all components of ICs and electronic systems while meeting power budgets is known as power delivery integrity.  Power delivery integrity is analogous to the way in which an electric power grid operator ensures that electricity is delivered to end users reliably, consistently and in adequate amounts while minimizing loss in the transmission network.  ICs and electronic systems designed with inadequate power delivery integrity may experience large fluctuations in supply voltage and operating power that can cause system failure. For example, these fluctuations particularly impact ICs used in mobile handsets and high performance computers, which are more sensitive to variations in supply voltage and power.  Ensuring power delivery integrity requires accurate modeling of multiple individual components, which are designed by different engineering teams, as well as comprehensive analysis of the interactions between these components.

Methods To Model System Behavior With Power

At present engineers have a few approaches at their disposal.  Vic points out that the designer must translate the power requirements into block-level power budgets with specific metrics:

  • dynamic power estimation per operating power mode
  • leakage and sleep power estimation at RTL
  • power distribution at a glance, with identification of high-power-consuming areas
  • power domains and frequency-scaling feasibility for each IP
  • retention flop design trade-offs
  • power-delivery network planning, including required current consumption per voltage source

Bernard thinks that spreadsheet modeling is probably the most common approach. The spreadsheet captures typical application use-cases, broken down into IP activities determined from application simulations/emulations. It also represents, for each IP in the system, a power lookup table or set of curves. Power estimation simply sums across IP values in a selected use-case. An advantage is that there is no limitation on complexity: you can model a full smart phone including battery, RF and so on. Disadvantages are the need to understand an accurate set of use-cases ahead of deployment, and the abstraction problem mentioned above.  But Steve points out that these spreadsheets are difficult to create and maintain, and fall short for identifying the outlier conditions that are critical to the end user’s experience.
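The mechanics of the spreadsheet method reduce to a lookup-and-sum; here is a minimal Python sketch with invented IPs, activity states and power numbers.

    POWER_LUT_MW = {  # per-IP average power per activity state (invented)
        "cpu":   {"off": 0.0, "idle": 12.0, "active": 180.0},
        "modem": {"off": 0.0, "idle":  8.0, "active": 310.0},
        "gps":   {"off": 0.0, "idle":  2.0, "active":  45.0},
    }

    USE_CASES = {  # fraction of time each IP spends in each state
        "navigation": {"cpu":   {"active": 0.4, "idle": 0.6},
                       "modem": {"idle": 1.0},
                       "gps":   {"active": 1.0}},
        "standby":    {"cpu":   {"idle": 1.0},
                       "modem": {"idle": 1.0},
                       "gps":   {"off": 1.0}},
    }

    def use_case_power_mw(name):
        """Sum average power across all IPs for one use-case."""
        return sum(POWER_LUT_MW[ip][state] * frac
                   for ip, states in USE_CASES[name].items()
                   for state, frac in states.items())

    print(f"navigation: {use_case_power_mw('navigation'):.1f} mW")  # 132.2
    print(f"standby:    {use_case_power_mw('standby'):.1f} mW")     # 20.0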

Steve also points out that some companies are adapting virtual platforms to measure dynamic power, and improve hardware / software partitioning decisions. The main barrier to this solution remains creation of the virtual platform models, and then also adding the notion of power to the models. Reuse of IP enables reuse of existing models, but they still require effort to maintain and adapt power calculations for new process nodes.

Bernard has seen engineers run the full RTL against realistic software loads, dump activity for all (or a large number) of the nodes, and compute power based on the dump. An advantage is that they can skip the modeling step and still get an estimate as good as RTL modeling provides. Disadvantages include needing the full design (making it less useful for planning) and a significant slowdown in emulation when dumping all nodes, making it less feasible to run extensive application experiments.  Steve concurs: dynamic power analysis is a particularly useful technique, available in emulation and simulation. The emulator provides MHz-class performance, enabling analysis of many cycles, often with test-driver software to focus on the most interesting use cases.

Bernard is of the opinion that while C/C++/SystemC modeling seems an obvious target, it also suffers from the abstraction problem. Steve thinks that a likely architecture in this scenario has the virtual platform containing the processing subsystem and memory subsystem, executing at 100s of MHz, while the emulator contains the rest of the SoC and a replica of the memory subsystem, executing at higher speeds and providing cycle-accurate power analysis and functional debugging.

Again, Bernard wants to underscore, progress has been made for specialized designs, such as single- and multiple-processor systems, but these approaches have little relevance for more common heterogeneous systems. Perhaps the Si2 work in this area will help.

Are Best Practices Resulting in a Verification Gap?

Tuesday, March 4th, 2014

By John Blyler, Chief Content Officer

A panel of experts from Cadence, Mentor, NXP, Synopsys and Xilinx debate the reality and causes of the apparently widening verification gap in chip design.

Does a verification gap exist in the design of complex system-on-chips (SoCs)? This is the focus of a panel of experts at DVCon 2014, which will include Janick Bergeron, Fellow at Synopsys; Jim Caravella, VP of Engineering at NXP; Harry Foster, Chief Verification Technologist at Mentor Graphics; John Goodenough, VP at ARM; Bill Grundmann, Fellow at Xilinx; and Mike Stellfox, Fellow at Cadence. JL Gray, a Senior Architect at Cadence, organized the panel. What follows are position statements from the panelists in preparation for this discussion. – JB

Panel Description: “Did We Create the Verification Gap?”

According to industry experts, the “Verification Gap” between what we need to do and what we’re actually able to do to verify large designs is growing worse each year. According to these experts, we must do our best to improve our verification methods and tools before our entire project schedule is taken up by verification tasks.

But what if the Verification Gap is actually occurring as a result of continued adoption of industry-standard methods? Are we blindly following industry best practices without keeping in mind that the actual point of our efforts is to create a product with as few bugs as possible, as opposed to simply trying to find as many bugs as we can?

Are we blindly following industry best practices …

Panelists will explore how verification teams interact with broader project teams and examine the characteristics of a typical verification effort, including the wall between design and verification, verification involvement (or lack thereof) in the design and architecture phase, and reliance on constrained-random in the absence of robust planning and prioritization, to determine the reasons behind today’s Verification Gap.

Panelist Responses:

Grundmann: Here are my key points:

  • Methodologies and tools for constructing and implementing hardware have dramatically improved, while verification processes have not kept pace.  As hardware construction is simplified, there is a trend toward fewer resources building hardware but the same or more resources performing verification.  Design teams with a 3X ratio of verification to hardware design engineers are not unrealistic, and that ratio is trending higher.

    … we have to expect to provide a means to make in-field changes …

  • As it gets easier to build hardware, hardware verification is approaching software-development levels of resourcing in a project.
  • As of now, it is very easy to quickly construct all sorts of hardware “crap”, but it is very hard to prove any of it is what you want.
  • It is possible that we can never be thoroughly verification “clean” without delivering some version of the product with a reasonable quality level of verification.  This may mean we have to expect to provide a means to make in-field changes to the products through software-like patches.

Stellfox: Most chips are developed today based on highly configurable modular IP cores with many embedded CPUs and a large amount of embedded SW content, and I think a big part of the “verification gap” is due to the fact that most development flows have not been optimized with this in mind.  To address the verification gap, design and verification teams need to focus more on the following:

  • IP teams need to develop and deliver the IP in a way that it is more optimized for SoC HW and SW integration.  While the IP cores need to be high quality, it is not sufficient to only deliver high quality IP since much of the work today is spent in integrating the IP and enabling earlier SW bring-up and validation.

    There needs to be more focus on integrating and verifying the SW modules with HW blocks …

  • There needs to be more focus on integrating and verifying the SW modules with HW blocks early and often, starting at the IP level to Subsystem to SoC.  After all, the SW APIs largely determine how the HW can be used in a given application, so time might be wasted “over-verifying” designs for use cases which may not be applicable in a specific product.
  • Much of the work in developing a chip is about integrating design IPs, VIPs, and SW, but most companies do not have a systematic, automated approach with supporting infrastructure for this type of development work.

Foster: No, the industry as a whole did not create the verification challenge.  To suggest so lacks an understanding of the problem.  While design grows at a Moore’s Law rate, verification grows at a double-exponential rate. Compounding the complexity due to Moore’s Law are the additional dimensions of hardware-software interaction validation, complex power management schemes, and other physical effects that now directly affect functional correctness.  Emerging solutions, such as constrained-random, formal property checking and emulation, didn’t emerge because they were just cool ideas.  They emerged to address specific problems. Many design teams are looking for a single hammer that they can use to address today’s verification challenges. Unfortunately, we are dealing with an NP-hard problem, which means there will never be a single solution that solves all classes of problems.

Many design teams are looking for a single hammer that they can use to address today’s verification challenges.

Historically, the industry has always addressed complexity through abstraction (e.g., the move from transistors to gates, the move from gates to RTL, etc.). Strategically, the industry will be forced to move up in abstraction to address today’s challenges. However, there is still a lot of work to be done (in terms of research and tool development) to make this shift in design and verification a reality.

Caravella: The verification gap is a broad topic so I’m not exactly sure what you’re looking for, but here’s a good guess.

Balancing resource and budget for a product must be done across much more than just verification.

Verification is only a portion of the total effort, resources and investment required to develop products and release them to market. Balancing resource and budget for a product must be done across much more than just verification. Bringing a chip to market (and hence revenue) requires design, validation, test, DFT, qualification and yield optimization. Given this and the insatiable need for more pre-tape-out verification, what is the best balance? I would say that the chip does not need to be perfect when it comes to verification/bugs; it must be “good enough”. Spending 2x the resources/budget to identify bugs that do not impact the system or the customer is a waste of resources. These resources could be better spent elsewhere in the product development food chain, or used to develop more products and grow the business. The main challenge is how best to quantify the risk to maximize the ROI of any verification effort.

Jasper: [Editor’s Note: Although not part of the panel, Jasper provided an additional perspective on the verification gap.]

  • Customers are realizing that UVM is very “heavy” for IP verification.  Specifically, writing and debugging a UVM testbench for block- and unit-level IP is a very time-consuming task in and of itself, plus it incurs ongoing overhead in regressions when the UVCs are effectively “turned off” and/or simply used as passive monitors for system-level verification.  Increasingly, we see customers ditching the low-level UVM testbench and exhaustively verifying their IPs with formal-based techniques.  In this way, users can focus on system integration verification and not have to deal with bugs that should have been caught much sooner.

    UVM is very “heavy” for IP verification.

  • Speaking of system-level verification: we see customers applying formal at this level as well.  In addition to the now familiar SoC connectivity and register validation flows, we see formal replacing simulation in architectural design and analysis.  In short, even without any RTL or SystemC, customers can use an architectural spec to feed formal analysis under the hood and exhaustively verify that a given architecture or protocol is correct by construction, won’t deadlock, etc.
  • The need for sharing coverage data between multiple vendors’ tool chains is increasing, yet companies appear to be ignoring the UCIS interoperability API.  This is creating a big gap in customers’ verification closure processes because it’s a challenge to compare verification metrics across multi-vendor flows, and they are none too happy about it.

Blog Review – March 03 2014

Monday, March 3rd, 2014

By Caroline Hayes, Senior Editor

Cadence’s Brian Fuller admits to some skullduggery in reducing last week’s Mobile World Congress in Barcelona down to a single theme of IP, but his blog makes a good case for why IP is the answer to the industry’s challenges.

Also from Barcelona, Wendy Boswell submits her report on the Mobile World Congress, focusing on Intel news such as the Intel Galileo Hackathon, which is not a challenge to bring down whole networks, but an invitation “to create cool stuff” with the new board. Inevitably, the Internet of Things is also covered in the three-day blog, along with new mobile development tools.

As this year’s DVCon (Design and Verification Conference) begins this week, Real Intent’s Graham Bell reflects on last year’s event. His blog provides links to video recordings of some questions posed there about assertion synthesis during the company-sponsored panel “Where Does Design End And Verification Begin?”

Reaching for the stars, J Van Domelen, Mentor Graphics, considers the selection process for the one-way journey to Mars and finds a near-neighbour of Mentor Graphics is on the ‘short list’ of 1,058 applicants.

Verification Management

Tuesday, February 11th, 2014

Gabe Moretti, Contributing Editor

As we approach the DVCon conference it is timely to look at how our industry approaches managing design verification.

Much has been said about the tools, but I think not enough resources have been dedicated to the issue of managing and measuring verification.  John Brennan, Product Director in the Verification Group at Cadence, observed that verification used to be a whole lot easier. It used to be that you sent some stimulus to your design, viewed a few waveforms, collected some basic data by looking at the simulator log, and then moved on to the next part of the design to verify.  The problem now is that there is simply too much information, and with randomness comes a lack of clarity about what is actually tested and what is not.  He continued by stating that you cannot verify every state and transition in your design; it is simply impossible, the magnitude is too large.  So what do you verify, and how are IP and chip suppliers addressing the challenge?  We at Cadence see several trends emerging that will help users with this daunting task: use collaboration-based environments, use the right tool for the job, have deep analytics and visibility, and deploy feature-based verification.

My specific questions to the panelists follow.  I chose a representative one from each of them.

* How does a verification group manage the verification process and assess risk?

Dave Kelf, Marketing Director at OneSpin Solutions, opened the detailed discussion by describing the present situation: whereas design management follows a reasonably predictable path, verification management is still based on a subjective, unpredictable assessment of when enough testing is enough.

Verification management is all about predicting the time and resources required to reach the moving target of verification closure. However, there is still no concrete method available to predict when a design is fully, exhaustively, 100% tested. Today’s techniques all have an element of uncertainty, which translates into the risk of an undetected bug. The best a verification manager can do is to assess the point at which the probability of a remaining bug is infinitesimally small.

For a large design block, a combination of test coverage results, a comparison of the test spec against the simulations actually performed, the time since the last bug was discovered, the verification time spent and the end of the schedule may all play into this decision. For a complete SoC, running the entire system, including software, on an emulator for days on end might be the only way, today, to inspire confidence in a working design.
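That judgment call can be caricatured as a weighted checklist. The Python sketch below folds several closure signals into one residual-risk number; the weights and the one-month saturation are invented, and no real methodology prescribes these values.

    def closure_risk(code_cov, func_cov, spec_cov, days_since_last_bug):
        """Heuristic 0..1 residual-bug risk (lower is better); invented weights."""
        quiet = min(days_since_last_bug / 30.0, 1.0)  # saturate after a month
        return (0.2 * (1 - code_cov) +
                0.3 * (1 - func_cov) +
                0.3 * (1 - spec_cov) +
                0.2 * (1 - quiet))

    # 98% code coverage, 90% functional, 85% of spec items checked,
    # and three quiet weeks since the last bug was found:
    print(round(closure_risk(0.98, 0.90, 0.85, 21), 3))  # 0.139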

If we were to solve just one remaining problem in verification, achieving a deep and meaningful understanding of verification coverage pertaining to the original functional specification should be it.

*  What is the role of verification coverage in providing metrics toward verification closure, and is this proving useful?

Thomas L. Anderson, Vice President of Marketing, Breker Verification Systems answered that coverage is, frankly, all that the verification team has to assess how well the chip has been exercised. Code coverage is a given, but in recent years, functional coverage has gained much more prominence. The most recent forms of coverage are derived automatically, for example, from assertions or graph-based scenario models, and so provide much return for little investment.

*  How has design evolution affected verification management? Examples include IP usage and SoC trends.

Rajeev Ranjan, CTO of Jasper Design Automation, observed that as designs get bigger in general, and as they incorporate more and more IPs developed by multiple internal and external parties, integration verification becomes a very large concern: specifically, verification tasks such as interface validation, connectivity checking, functional verification of IPs in the face of hierarchical power management strategies, and ensuring that the hardware coherency protocols do not cause any deadlock in the overall system.  Additionally, depending on the end market for the system, security path verification can also be a significant, system-wide challenge.

*  What should be the first step in preparing a verification plan?

Tom Fitzpatrick, verification evangelist at Mentor Graphics, has dedicated many years to the study of verification issues and their solutions.  He noted that the first step in preparing a verification plan is to understand what the design is supposed to do and under what conditions it’s expected to do it. Verification is really the art of modeling the “real world” in which the device is expected to operate, so it’s important to have that understanding. After that, it’s important to understand the difference between “block-level” and “system-level” behaviors that you want to test. Yes, the entire system must be able to, for example, boot an RTOS and process data packets or whatever, but there are a lot of specifics that can be verified separately at the block or subsystem level before you just throw everything together and see if it works. Understanding which pieces of the lower-level environments can be reused and will prove useful at the system level, and being able to reuse those pieces effectively and efficiently, is one key to verification productivity.

Another key is the ability to verify specific pieces of functionality as early as possible in the process and use that information to avoid targeting that functionality at higher levels. For example, using automated tools at the block level to identify reset or X-propagation issues, or state machine deadlock conditions, eliminates the need to try and create stimulus scenarios to uncover these issues. Similarly, being able to verify all aspects of a block’s protocol implementation at the block level means that you don’t need to waste time creating system-level scenarios to try and get specific blocks to use different protocol modes. Identifying where best to verify the pieces of your verification plan allows every phase of your verification to be more efficient.

*  Are criteria available to determine which tools need to be considered for various project phases? Which tools are proving effective? Is budget a consideration?

Yuan Lu, Chief Verification Architect, Atrenta Inc., contributed the following. Verification teams deploy a variety of tools to address various categories of verification issues, depending on how you break your design into multiple blocks and what you want to test at each level of hierarchy. At a macro level, comprehensive, exhaustive verification is expected at the block/IP level. However, at the SoC level, functions such as connectivity checking, heartbeat verification, and hardware/software co-verification are performed.

Over the years, there has emerged some level of consensus within the industry as to what type of tools need to be used for verification at the IP and SoC levels. But, so far, there is no perfect way to hand off IPs to the SoC team. The ultimate goal is to ensure that the IP team communicates to the SoC team about what has been tested and how the SoC team can use this information to figure out if the IP level verification was sufficient to meet the SoC needs.

*  Not long ago, the Universal Verification Methodology (UVM) was unveiled with the promise of improving verification management, among other advantages. How has that worked?

Herve Alexanian, Engineering Director, Advanced Dataflow Group at Sonics, Inc., pointed out that as an internal protocol is thoroughly specified, including configurable options, a set of assertions can naturally be written or generated, depending on the degree of configurability. Along the same lines, functional coverage points and reference (UVM) sequences are also defined. These definitions are the best entry point into modern verification approaches, allowing the most benefit from formal techniques and verification planning. Although some may see such definitions as too rigid to accommodate changes in requirements, making a change to a fundamental interface is intentionally costly, as it is in software. It implies additional scrutiny of how architectural changes are implemented, in a way that tends to minimize the functional corners that later prove so costly to verify.
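The generation step Alexanian describes can be sketched as templated assertion emission, where a protocol’s configuration drives which properties are produced. The Python below is a hypothetical illustration, not Sonics’ flow; the protocol fields and SVA templates are invented.

    PROTOCOL_CONFIG = {  # invented configurable options of an internal protocol
        "data_width": 64,
        "has_burst":  True,
        "max_burst":  16,
    }

    def emit_assertions(cfg):
        """Emit SVA assertion strings appropriate to this configuration."""
        sva = [
            # classic handshake property: valid holds until ready
            "assert property (@(posedge clk) valid && !ready |=> valid);",
        ]
        if cfg["has_burst"]:
            sva.append("assert property (@(posedge clk) "
                       f"burst_len <= {cfg['max_burst']});")
        return sva

    for a in emit_assertions(PROTOCOL_CONFIG):
        print(a)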

*  What areas of verification need to be improved to reduce verification risk and ease the management burden?

Vigyan Singhal, President and CEO, Oski Technology, said that, for the most part, current verification methodology relies on simulation and emulation for functional verification. As shown consistently in the 2007, 2010 and 2012 Wilson Research Group surveys sponsored by Mentor Graphics, two thirds of projects are behind schedule and functional bugs are still the main culprit for chip respins. This shows that the current methodology carries significant verification risk.

Verification teams today spend most of their time in subsystem (63.9%) and full-chip simulation (36.1%), and most of that time is spent debugging (36%). This is not surprising, as debugging at the subsystem and chip level, with thousands of long cycle traces, can take a long time.

The solution to the challenge is to improve block-level design quality so as to reduce the verification and management burden at the subsystem and chip level. Formal property verification is a powerful technique for block-level verification. It is exhaustive and can catch all corner-case bugs. While formal verification adds another step in the verification flow with additional management tasks to track its progress, the time and effort spent will lead to reduced time and effort at the subsystem and chip level, and improve overall design quality. With short time-to-market windows, design teams need to guarantee first-silicon success. We believe increased formal usage in the verification flow will reduce verification risks and ease management burden.

As he had opened the discussion, so John Brennan closed it, noting that functional verification has no single silver bullet: it takes multiple engineers, operating across multiple heterogeneous engines, with multiple analytics.  This multi-specialist verification is here now, and the VPM (verification planning and management) tools that support it are needed now.
