
Posts Tagged ‘Xilinx’


Blog Review – Monday May 18, 2015

Monday, May 18th, 2015

Zynq detects pedestrians; ARMv8-A explained; Product development demands test; Driving connectivity; Celebrating Constellations; Chip challenges

Michael Thomas, ARM, helpfully points readers to The Cortex-A Series Programmer’s Guide for ARMv8-A and introduces what is in the guide as a taster of the architecture’s features.

The Embedded Vision Summit gives many bloggers material for posts. The first is Steve Leibson, Xilinx, who includes a video MathWorks presented there, with a description of a real-time pedestrian detector running on a Zynq-based workflow using MathWorks’ Simulink and HDL Coder.

Another attendee was Brian Fuller, Cadence, who took away the secrets to successful product development, which he sums up as: test, test, test. (He does elaborate beyond that in his detailed blog, reviewing the keynote by Dyson’s Mike Aldred.)

Anticipating another event, DAC, Ravi Ravikumar, Ansys, looks at the connected car and the role of design in intelligent vehicles.

Also with an eye on DAC, Rupert Baines, UltraSoC, has a guest blog at IPextreme, praising the Constellations initiative with some solid support – and some restrained back-slapping.

Continuing a verification series, Harry Foster, Mentor, looks at the FPGA space and reflects on how the industry makes choices in formal technology.

A guest blog at Chip Design by Dr. Bruce McGaughy, ProPlus Design Solutions, looks at what innovative chip designs mean for chip designers. His admiration for the changing pace of design is balanced with identifying the drivers for low-power design to meet the demands of the portable IoT phase.

Why do we need HDCP 2.2 and what do we need to do to ensure cryptography and security? These are addressed, and answered, by VIP Experts, Synopsys, in this informative blog.

By Caroline Hayes, Senior Editor

Blog Review – Monday, May 4, 2015

Monday, May 4th, 2015

Steve Leibson, Xilinx, reports on an interesting academic program to ‘look at, poke, modify, and experiment with’ the MIPS RISC processor RTL using a simplified Imagination Technologies microAptiv processor core. The MIPSfpga program provides university CS and EECS departments access to a fully-validated, current generation MIPS CPU. There are also plans for the Imagination University Programme to expand this university program to the PowerVR graphics processors and FlowCloud IoT technology.

Two ARM server hardware platforms used in cloud-based set-top box systems are explained in detail by Karthik Ranjan, ARM. The blog looks back at early cable TV systems and ahead to the IoT and cloud use in virtual network functions (VNF), ahead of the VNF World Congress in San Jose this week.

In praise of an overlooked object-oriented language, Ruby, Michael Cizl, IPextreme, presents a strong case and urges readers to rethink their choices.

Excitement is growing for the advent of Windows 10. Rambus speculates on the inclusion of a universal sensor driver set for environmental, biometric, proximity and motion sensing. Adam Shah, IDG, speculates on what this will mean for the functionality of devices running the OS.

Wrestling with power, the panel of experts at the Electronic Design Processes Symposium discussed whether the industry needs to rethink tackling power for IoT devices in particular. Brian Fuller, Cadence, reports from the Monterey event.

Distilling a report from IHS Automotive, John Day, Mentor Graphics, identifies apps and trends that smartphones will bring to the in-car experience, from Apple to Android, with a graph of consumers’ preferences, from Bluetooth for hands-free use and touchscreens to an auxiliary hook-up for an MP3 player or phone.

Returning to a familiar blog-topic, Michael Posner, Synopsys, compares hybrid prototyping vs prototyping bridges, using the company’s latest DesignWare Hybrid IP Prototyping Kits as a starting point for the IP prototyping discussion.

Caroline Hayes, Senior Editor.

Blog Review – Monday April 20, 2015

Monday, April 20th, 2015

Half a century and still quoted as relevant is more than most of us could hope to achieve, so the 50th anniversary of Gordon Moore’s pronouncement, which we call Moore’s Law, is celebrated by Gaurav Jalan as he reviews the observation first published on April 19, 1965, crediting it with the birth of the EDA industry and the fabless ecosystem, among other things.

Another celebrant is Axel Scherer, Cadence, who reflects on not just shrinking silicon size but the speed of the passing of time.

On the same theme of what Moore’s Law means today for FinFETs and nano-wire logic libraries, Navraj Nandra, Synopsys, also commemorates the anniversary, with an example of what the CAD team has been doing with quantum effects at lower nodes.

At NAB (National Association of Broadcasters) 2015, in Las Vegas, Steve Leibson, Xilinx, had an ‘eye-opening’ experience at the CoreEL Technologies booth, where the company’s FPGA evaluation kits were the subject of some large-screen demos.

Reminiscing about the introduction of the HSA Foundation, Alexandru Voica, Imagination Technologies, provides an update on why heterogeneous computing is one step closer now.

Dr. Martin Scott, the senior VP and GM of Rambus’ Cryptography Research Division, recently participated in a Silicon Summit Internet of Things (IoT) panel hosted by the Global Semiconductor Alliance (GSA). In this blog he discusses the security of the IoT, its vulnerabilities, and its opportunities for good.

An informative blog by Paul Black, ARM, examines the ARM architecture and DS-5 v5.21 DSTREAM support for debug, discussing power in the core domain and how to manage it for effective debug and design.

Caroline Hayes, Senior Editor

Blog Review – Monday, March 23, 2015

Monday, March 23rd, 2015

Warren Savage, IPextreme, has some sage, timely advice that applies to crossword solving, meeting scheduling and work flows.

At the recent OpenPOWER Summit, Convey Computer announced the Coherent Accelerator Processor Interface (CAPI) development kit based on its Eagle PCIe coprocessor board. Steve Leibson, Xilinx, has a vested interest in telling more, as the accelerator is based on the Xilinx Virtex-7 980T FPGA.

Gloomy predictions from Zvi Or-Bach, MonolithIC 3D, who draws a line in the sand at the 28nm node as smartphone and tablet growth slows.

Saying you can see unicorns is not advisable in commerce, but Ramesh Dewangan, Real Intent, has spotted some at Confluence 2015. Where, he wonders, are the unicorns of the EDA industry?

ARM’s use of Cadence’s Innovus Implementation System software to design the ARM Cortex-A72 is discussed by Richard Goering, Cadence. As well as the collaboration, the virtues of ARM’s ‘highest performance and most advanced processor’ are highlighted.

ARM has partnered with the BBC, reveals Gary Atkinson, ARM, in the Make it Digital initiative by the broadcasting corporation. One element of the campaign is the Microbit project, in which every child in school year 7 (11-12 years old) will be given a small ARM-based development board that they can program using a choice of software editor. Teachers will be trained and there will be a suite of training materials and tutorials for every child to program their first IoT device.

Mentor Graphics is celebrating a win at the first annual LEDs Magazine Sapphire Award in the category of SSL Tools and Test. Nazita Saye, Mentor Graphics, is in Hollywood Report mode and reviews the awards.

Responding to feedback from readers, Satyapriya Acharya, Synopsys, posts a very interesting blog about verifying the AMBA system-level environment. It is well thought out and informative, with the promise of more capabilities needed in a system monitor to perform checks.

Blog Review – Monday, February 09, 2015

Monday, February 9th, 2015

Arthur C Clarke interview; Mastering Zynq; The HAPS and the HAPS-nots; Love thy customer; What designers want; The butterfly effect for debug

A nostalgic look at an AT&T and MIT conference by Artie Beavis, ARM, has a great video interview with Arthur C Clarke. It is fascinating to see the man himself envisage mobile connectivity: ‘devices that send information to friends’, the exchange of pictorial information and data, and the ‘completely mobile’ telephone, as well as looking forward to receiving signals from outer space.

A video tutorial presented by Dr Mohammad S Sadri, Microelectronic Systems Design Research Group at Technische Universität Kaiserslautern, Germany, shows viewers how to create AXI-based peripherals in the Xilinx Zynq SoC programmable logic. Steve Leibson, Xilinx, posts the video. Dr Sadri may appear a little awkward with the camera rolling, but he clearly knows his stuff and the 23-minute video is informative.

Showing a little location envy, Michael Posner, Synopsys, visited his Californian counterparts and, in between checking out gym and cafeteria facilities, caught up on FPGA-based prototype debug and HAPS.

Good news from the Semiconductor Industry Association as Falan Yinug reports on record-breaking sales in 2014 and quarterly growth. Who bought what makes interesting – and reassuring – reading.

Although hit with the love bug, McKenzie Mortensen, IPextreme, does not let her heart rule her head when it comes to customer relations. She presents the company’s good (customer) relationship guide in this blog.

A teaser of survey results from Neha Mittal, Arrow Devices, shows what design and verification engineers want. Although the survey is open to more respondents until February 15, the results received so far are a mix of the predictable and some surprises, all with the option to see disaggregated, or specific, responses for each question.

From bugs to butterflies, Doug Koslow, Cadence, considers the butterfly effect in verification and presents some sound information and graphics to show the benefits of the company’s SimVision.

Caroline Hayes, Senior Editor

Blog Review – Monday, September 8, 2014

Monday, September 8th, 2014

Semiconductor sales – good news; Namibia’s Internet project; Rambus fellow considers IoT security; Intel fashions a year of wearables.
by Caroline Hayes, Senior Editor


If you are wondering what Intel’s New Devices Group has been doing since its formation 12 months ago, Michael A. Bell reveals all in this blog. All the obvious wearable candidates are there, but with a twist – biometric earbuds, a bracelet computer… And there is also an insight into business news, an indication of where this business unit is headed.

A fair bit of name-dropping kicks off this Rambus blog, celebrating Rich Page’s IoT projects. This interview with the company’s fellow is an interesting take on IoT, as Page predicts a shift of emphasis to security and the ensuing design challenges.

Citizen Connect is a worthy project, supported by MyDigitalBridge Foundation, Microsoft, and Adaptrum. Steve Leibson, Xilinx, reports on the latest project for broadband access in Africa, wirelessly connecting 28 schools in Namibia with Ethernet over a 62x152km area using TVWS (TV White Space) spectrum.

It’s been a while since a journalist could write this, but it looks like good news for the semiconductor market. Falan Yinug explains some of the statistics behind the SIA report of record-setting Q2 sales, for a feelgood read.

Blog Review – Monday, August 11, 2014

Monday, August 11th, 2014

VW e-Golf; Cadence’s power signoff launch; summer viewing; power generation research.
By Caroline Hayes, Senior Editor

VW’s plans for all-electric Golf models have captured the interest of John Day, Mentor Graphics. He attended the Management Briefing Seminar and reports on the carbon offsetting and a solar panel co-operation with SunPower. I think I know what Day will be travelling in to get to work.

Cadence has announced plans to tackle power signoff this week and Richard Goering elaborates on the Voltus-Fi Custom Power Integrity launch and provides a detailed and informative blog on the subject.

Grab some popcorn (or not) and these summer blockbusters, as lined up by Scott Knowlton, Synopsys. Perhaps not the next Harry Potter series, but certainly a must-see for anyone who missed the company’s demos at PCI-SIG DevCon. This humorous blog continues the cinema analogy for “Industry First: PCI Express 4.0 Controller IP”, “DesignWare PHY IP for PCI Express at 16Gb/s”, “PCI PHY and Controller IP for PCI Express 3.0” and “Synopsys M-PCIe Protocol Analysis with Teledyne LeCroy”.

Fusion energy could be the answer to growing energy demands, and Steve Leibson, Xilinx, shares Dave Wilson’s (National Instruments) report of a fascinating NI project with the UK company Tokamak Solutions to monitor and control a compact spherical tokamak (used as a neutron source).

Deeper Dive – Is IP reuse good or bad?

Friday, May 30th, 2014

To buy or to reuse, that is the question. Caroline Hayes, Senior Editor, asked four industry experts – Carsten Elgert (CE), Product Marketing Director, IPG (IP Group), Cadence; Tom Feist (TF), Senior Marketing Director, Design Methodology, Xilinx; Dave Tokic (DT), Senior Director, Partner Ecosystems and Alliances, Xilinx; and Warren Savage (WS), President and CEO, IPextreme – about the pros and cons of IP reuse versus third party IP.

What are the advantages and disadvantages when integrating or re-using existing IP?

WS: This is sort of analogous to asking what are the advantages/disadvantages of living a healthy lifestyle? The disadvantages are few, the advantages myriad. But in essence it’s all about practicality. If you can re-use a piece of technology, you don’t have to spend money developing something different, which includes a huge cost of verification. Today’s chips are simply too large to functionally verify every gate. Part of every chip’s verification strategy assumes that pre-existing IP was already verified during its development, and if it is silicon-proven, that further decreases the risk of any latent defects being discovered. The only reason not to reuse an IP is that the IP itself is lacking in some way that makes the case for creating a new one instead.

TF: Improved productivity. Reuse can have disadvantages when using older IP on new technologies: it is not always possible to target the newer features of a device with old IP.

DT: IP reuse is all about improving productivity and can significantly shrink design time, especially with configurable IP. Challenges arise when the IP itself needs to be modified from the original, which then requires additional design and verification time. Verification in general can be more challenging, as most IP is verified in isolation and not in the context of the system. For example, if the IP is being used in a way the provider didn’t “think of” in their verification process, it may have bugs that are discovered during the integration verification phase, which then have to be traced back to determine which IP has the issue, and that IP corrected and re-verified.

CE: The benefits of using your own in-house IP are that you know what the IP is doing and can use it again – and, since it is not available as third party IP, it differentiates your design. The disadvantage is that it is rarely documented well enough to be used by other departments. The same engineers know what they are getting when they reuse their own IP, but properly documenting it for a neighboring department can be time-consuming, and unwanted behavior is typically not verified. However, it is cheaper and it works.

What are the advantages and disadvantages of using third party IP?

WS: The advantage of using third party IP is usually that the supplier is a domain expert in a certain field. By using IP from that company, you are in fact licensing expertise from it and putting it to use in an effective way. Think of it like going to dinner at a Michelin-starred restaurant: the ingredients may be ordinary, but how they are assembled is far beyond what the ordinary person can achieve on their own.

TF: Xilinx cannot cover all areas. Having a third party ecosystem allows us to increase the reach to customers. We have qualification processes in place to ensure the quality of the IP is up to the required standard.

DT: Xilinx and its ecosystem provide more than 600 cores across all markets, with over 130 IP providers in our Alliance Program. These partners not only provide more “fundamental” standards-based IP, but also a very rich set of domain- and application-specific IP that would be difficult for Xilinx to develop and support. This allows Xilinx technology to be adopted more easily and quickly in hundreds of applications. Some of the challenges come in terms of consistency of deliverables, quality, and business models. Xilinx has a mature partner-qualification process and also works with the partner to expose IP quality metrics when we promote a partner IP product, which helps customers make smarter decisions when choosing a provider or IP core.

CE: Third party IP means there is no rewriting. Without it, it could take hundreds of man-years to develop a function that is not a major selling point for a design. Third party IP is compatible as well as bug-free, or nearly so. Developing it yourself is like spending hundreds of hours redesigning the steering wheel of a car – it is not a differentiating selling point of the vehicle.

Is third party IP a justifiable risk in terms of cost and/or compatibility?

DT: The third party IP ecosystem for ASIC and programmable technologies has been around for decades. There are many respected and high quality providers out there, from smaller specialized IP core partners such as Xylon, OmniTek, Northwest Logic, and PLDA, to name a few, up to industry giants like ARM and Synopsys generating hundreds of millions of dollars in annual revenue. But ultimately the customer is responsible for determining the system, cost, and schedule requirements, evaluating IP options, and making the “build vs. buy” decision.

CE: I would reformulate that question: can [the industry] live without IP? Then, the risk is justifiable.

How important are industry standards in IP integration? What else would you like to see?

TF: IP standards are very important. For example, Xilinx is on the IEEE P1735 working group for IP security. This is very important to protect customer, third-party and Xilinx IP throughout flows that may contain third-party EDA tools and still allow everyone to interoperate on the IP. We hope to see the 2.0 revision of this standard ratified this year so all tool vendors and Xilinx can adopt it and make IP truly portable, yet protected.

DT: Another example is AMBA AXI4, where Xilinx worked closely with ARM to define this high speed interconnect standard to be optimized for integrating IP on both ASIC and programmable logic platforms.

WS: Today, not so much. There has been considerable discussion on this topic for the last 15 years, and various industry initiatives have come and gone over the years; the most successful one to date has been IP-XACT. There is a massive level of IP reuse today, and the lack of standards has not slowed it. The way the industry is handling this problem is through the deployment of pre-integrated subsystems that include both a collection of IP blocks and the embedded software that drives them. I think that within another five years, the idea of “Lego-like” construction tools will die, as they do nothing to solve the verification problem associated with such constructions.

Have you any statistics you can share on the value or TAM (Total Available Market) for EDA IP?
WS: I assume you mean IP, which I like to point out is distinct from EDA? True, many EDA players are adopting an IP strategy, but that is primarily because the growth in IP and the stagnation of the EDA markets are forcing those players to find new growth areas. Sustained double-digit growth is hard to ignore.

To the TAM (total available market) question, a lot of market research says the market is around $2 billion today. I have long postulated that the real IP market is at least twice the size of the stated market, rather like the “dark matter” theories in astrophysics. But even this ignores considerable amounts of patent licensing, embedded software licensing and such, which dwarf the $2 billion number.

TF: According to EDAC, semiconductor IP revenue totaled $486 million in Q4 2013, a 4.2% increase compared to Q4 2012, while the four-quarter moving average increased 9.6%.

CE: For interface IP – excluding processor IP, no ARM – I can see the market growing 20% to $400 million [analyst IPnest].

IP Integration to accelerate SoCs

Thursday, May 29th, 2014

The accelerating market for SoCs inevitably means a faster rate of adoption, and system-level designers are also faced with fragmenting markets, newly adopted standards and multiple types of complex requirements in a system. The integration of IP (Intellectual Property) can be a boon, but there are caveats, as Caroline Hayes, Senior Editor, reports.

There are many types of IP in use today: processor, DSP, high-speed IP, clocking IP, even whole systems. There are also a variety of high-speed I/O and infrastructure IP (such as the AXI4 interconnect), used to support protocols, interfaces, controllers and processors on the core.

Dave Tokic, Senior Director, Partner Ecosystems and Alliances, Xilinx, believes that IP is invaluable. “It just isn’t possible to create the systems being developed without substantial IP integration,” he says. He goes on to explain the appeal of IP. “More and more, designers are being asked to create increasingly complex designs much faster on programmable devices. As such, we’ve seen tremendous growth in the number of available IP cores and their use and reuse on programmable devices. Typically, we see the use of a broad range of connectivity and high-speed I/O such as Ethernet, PCI Express, SDI, MIPI, and HDMI; general purpose and specialized processors, from compact Xilinx MicroBlaze microcontrollers and high-speed DSP to the powerful multi-core ARM A9 application processors we’ve integrated into the Zynq All Programmable SoC; optimized DMA and memory controllers, all the way up to the highest performance Hybrid Memory Cube controllers; application-specific IP, from compression such as HEVC and JPEG-2000 to high-performance image and video processing pipeline IP; and a broad range of clocking and infrastructure IP, such as the AMBA AXI4 interconnect.”
For Carsten Elgert, Product Marketing Director, IPG (IP Group), Cadence, this diversity in system design is why the IP market is growing – according to EDAC, semiconductor IP revenue totaled $486 million in Q4 2013, a 4.2% increase compared to Q4 2012.

Tom Feist, Senior Marketing Director, Design Methodology, Xilinx, summarizes it thus: “As geometries shrink, typically parts and designs get larger and more complex yet design teams do not. This means higher productivity is required to produce larger designs. A key factor to address this is IP reuse.”
Connections to an outside DDR controller, flash memory controller, display output or other display standards, and communications protocols like USB are all defined, and the designer has to meet the standards of protocol committees. Elgert says: “When designing an SoC, there is no time, and [the designer] usually does not have the knowledge to design the standard protocol by himself – the end product will be sold because of a unique functionality, not because of an excellent USB protocol.”
It is, he argues, far easier to use IP blocks as components, buying a peripheral I²C bus or UART protocol rather than building 20k gates. The availability of IP helps – there is interface IP, communications IP, microprocessor IP and IP from third party vendors, such as EDA companies and ARM, for example, says Elgert.
He points out that IP is bought to ensure code compatibility, and includes high-end versions, such as ARM, and low-end 8-bit processors in SoCs with fewer than 3k gates. There is also EDA IP and legacy processors. “And do not forget analog IP,” counsels Elgert. “The SoC analog or mixed-signal portion needs PLLs and sensors using ADCs and DACs – this is very common if the SoC is analog.”
With so much to choose from, I asked what the ‘rules’, if any, are for choosing IP. Quality and ease of integration top Tokic’s list. “Some of the key factors for successful IP use are the quality of the IP and ease of integration,” he says, referring to the company’s functional verification and validation. “We work closely with our ecosystem of qualified IP providers to provide visibility to our customers on various IP quality metrics… [We] led the industry in automating IP integration with the IP Integrator technology built into the Vivado Design Suite, which shortens the design cycle up to 10x over more manual processes.”

Elgert breaks it down into technology nodes, posing the question for engineers to ask: “What is available on the process node that I’m forgetting? This can be an extensive shopping list for complex systems,” he says. He also advises narrowing the choice down to a preferred silicon vendor.
For example, he says, the PHY and the layer stack are relevant, being specific to each available technology node. “I would consider as many silicon technologies as possible,” is his advice.
Other items on Elgert’s list are to consider the technical performance and the PPA (power, performance, area). “What is the power consumption, and how big is the silicon? Silicon still costs money. Compare PPA, which is often confidential information,” he says, “for responsiveness and clarity of data – EDA support is important here.”
To implement the IP, Feist observes that customers prefer to use industry standards and avoid proprietary bus/interconnect. “By leveraging ARM’s AMBA AXI4 protocol, [customers] can leverage state-of-the-industry interconnect that enables point-to-point connections versus a bus structure. This accelerates their design development and the designs are more portable.” (The vast majority of internal bus structures use AMBA AXI, confirms Elgert.)
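
To make the point concrete, here is a toy C++ model (my own sketch, not ARM or Xilinx code) of the VALID/READY handshake that underpins every AXI4 channel: a beat moves from source to sink only on a cycle where both signals are high, which is what lets point-to-point channels be composed and stalled safely.

    #include <cstdio>
    #include <cstdint>

    // Toy model of one AXI-style channel. Real AXI4 has five such
    // channels (read/write address, read/write data, write response);
    // all share this one handshake rule.
    struct Channel {
        bool valid = false;      // source: "payload is on the wires"
        bool ready = false;      // sink:   "I can accept this cycle"
        uint32_t payload = 0;
    };

    // One simulated clock edge: the beat transfers only when
    // VALID and READY are both asserted in the same cycle.
    bool clock_edge(Channel& ch, uint32_t* dst) {
        if (ch.valid && ch.ready) { *dst = ch.payload; return true; }
        return false;
    }

    int main() {
        Channel ch;
        uint32_t received = 0;

        ch.valid = true;
        ch.payload = 0xABCD;     // source offers a beat
        ch.ready = false;        // sink stalls, so nothing moves
        bool moved = clock_edge(ch, &received);
        printf("stalled cycle:   transferred=%d\n", moved);

        ch.ready = true;         // sink accepts on the next cycle
        moved = clock_edge(ch, &received);
        printf("accepting cycle: transferred=%d data=0x%X\n", moved, received);
        return 0;
    }

Because neither side may act until both are willing, any two blocks that obey the rule can be wired point to point without a shared bus arbiter, which is the portability Feist describes.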

Xilinx relies on an ecosystem for IP integration support. Feist explains the role of the Vivado design suite. “In Vivado IPI, all parts of the design should be packaged as IP. If just using third party or Xilinx IP, this is done for you. For customer IP, [users] need to package this up, and we provide a packager for this. To get the most out of IPI, the IP should be designed in a way that uses interfaces where possible, like the Xilinx IP. By using AXI, customers can leverage IPs that come with Vivado – bus monitors, bandwidth checkers, JTAG stimulus, bus functional models etc. – that should speed up development of the IP.”
By providing an extensible IP catalog, which can contain Xilinx, third-party, and intra-company IP that can be shared across a design team, division, or company, the suite leverages industry standards such as ARM’s AXI interconnect and IP-XACT metadata when packaging IP. With IP Integrator, Vivado facilitates rapid development of smarter systems, which can include embedded, DSP, video, analog, networking, interface, building block, and connectivity IP consolidated into a hierarchical system and IP-centric view.
Users can also package their own RTL, or C/C++/SystemC and MATLAB/Simulink algorithms into the IP catalog using High-Level Synthesis (HLS) or System Generator for DSP with the IP packager.
Xilinx describes IPI as providing automated IP subsystem generation to speed configuration. For Feist, IPI provides the second step, allowing a user to quickly integrate an IP created in HLS into a full design. “The first step happens earlier, in the HLS tool, which adds AXI interfaces automatically to the C code. With both these steps it is possible to connect an IP generated in HLS to an ARM core pretty quickly.”
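
For flavor, here is a minimal sketch of the kind of C function an HLS flow can wrap with AXI interfaces. The function, names and array sizes are invented for illustration; the pragma spellings follow Vivado HLS conventions as I understand them and should be checked against the tool’s documentation.

    // Illustrative HLS-ready function (hypothetical example). The
    // INTERFACE pragmas ask the tool to generate AXI wrappers so the
    // block drops into IP Integrator: bulk data over AXI4 masters,
    // scalars and control/status over an AXI4-Lite slave.
    void scale(const int in[64], int out[64], int gain) {
    #pragma HLS INTERFACE m_axi     port=in
    #pragma HLS INTERFACE m_axi     port=out
    #pragma HLS INTERFACE s_axilite port=gain
    #pragma HLS INTERFACE s_axilite port=return
        for (int i = 0; i < 64; ++i) {
            out[i] = in[i] * gain;   // the algorithm being packaged as IP
        }
    }

The C body stays plain software; the interface directives are what turn it into a block a processor can program through memory-mapped registers, which is Feist’s “first step”.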
Synopsys is also helping customers reduce configuration and debug time with a ‘plug and play’ IP accelerator program, to be announced at DAC next month. That it will package its DesignWare IP together with hardware IP prototyping kits and software development kits, to modify IP with a fast iteration flow and without the usual time penalties, was all Dr Johannes Stahl, Director, Product Marketing, Virtual Prototyping, Synopsys, would reveal on a visit to the UK ahead of DAC.

Turning to EDA’s role in IP, Elgert says the same challenges occur as when designing an SoC. The GDSII or RTL has to match the RTL of an in-house design. The IP vendor has to support the IP flow, he says, but there is a delivery burden on EDA companies to support customers and ensure that the various tool flows and verification platforms are enabled.
All agree that as geometries shrink, parts and designs get larger and more complex. This creates demands on small (even shrinking) design teams, but IP use and reuse can accelerate design, verification and, eventually, time to market, with designs differentiated by functions specific to the application.

Are Best Practices Resulting in a Verification Gap?

Tuesday, March 4th, 2014

By John Blyler, Chief Content Officer

A panel of experts from Cadence, Mentor, NXP, Synopsys and Xilinx debate the reality and causes of the apparently widening verification gap in chip design.

Does a verification gap exist in the design of complex systems-on-chip (SoCs)? This is the focus of a panel of experts at DVCon 2014, which will include Janick Bergeron, Fellow at Synopsys; Jim Caravella, VP of Engineering at NXP; Harry Foster, Chief Verification Technologist at Mentor Graphics; John Goodenough, VP, ARM; Bill Grundmann, a Fellow at Xilinx; and Mike Stellfox, a Fellow at Cadence. JL Gray, a Senior Architect at Cadence, organized the panel. What follows is a position statement from the panelists in preparation for this discussion. – JB

Panel Description: “Did We Create the Verification Gap?”

According to industry experts, the “Verification Gap” between what we need to do and what we’re actually able to do to verify large designs is growing worse each year. According to these experts, we must do our best to improve our verification methods and tools before our entire project schedule is taken up by verification tasks.

But what if the Verification Gap is actually occurring as a result of continued adoption of industry-standard methods? Are we blindly following industry best practices without keeping in mind that the actual point of our efforts is to create a product with as few bugs as possible, as opposed to simply trying to find as many bugs as we can?

Are we blindly following industry best practices …

Panelists will explore how verification teams interact with broader project teams and examine the characteristics of a typical verification effort, including the wall between design and verification, verification involvement (or lack thereof) in the design and architecture phase, and reliance on constrained random in absence of robust planning and prioritization to determine the reasons behind today’s Verification Gap.

Panelist Responses:

Grundmann: Here are my key points:

  • Methodologies and tools for constructing and implementing hardware have dramatically improved, while verification processes appear not to have kept pace. As hardware construction is simplified, there is a trend to have fewer resources building hardware but the same or more resources performing verification. Design teams with a 3X ratio of verification to hardware design are not unrealistic, and that ratio is trending higher.

    … we have to expect to provide a means to make in-field changes …

  • As it gets easier to build hardware, hardware verification is approaching software-development levels of resourcing in a project.
  • As of now, it is very easy to quickly construct various hardware “crap”, but it is very hard to prove any of it is what you want.
  • It is possible that we can never be thoroughly verification-“clean”, and will have to deliver some version of the product with a reasonable quality level of verification. This may mean we have to expect to provide a means to make in-field changes to the products through software-like patches.

Stellfox: Most chips are developed today based on highly configurable modular IP cores with many embedded CPUs and a large amount of embedded SW content, and I think a big part of the “verification gap” is due to the fact that most development flows have not been optimized with this in mind.  To address the verification gap, design and verification teams need to focus more on the following:

  • IP teams need to develop and deliver IP in a way that is more optimized for SoC HW and SW integration. While the IP cores need to be high quality, it is not sufficient only to deliver high quality IP, since much of the work today is spent integrating the IP and enabling earlier SW bring-up and validation.

    There needs to be more focus on integrating and verifying the SW modules with HW blocks …

  • There needs to be more focus on integrating and verifying the SW modules with HW blocks early and often, starting at the IP level to Subsystem to SoC.  After all, the SW APIs largely determine how the HW can be used in a given application, so time might be wasted “over-verifying” designs for use cases which may not be applicable in a specific product.
  • Much of the work in developing a chip is about integrating design IPs, VIPs, and SW, but most companies do not have a systematic, automated approach with supporting infrastructure for this type of development work.

Foster: No, the industry as a whole did not create the verification challenge. To say so lacks an understanding of the problem. While design grows at a Moore’s Law rate, verification grows at a double-exponential rate. Compounding the increased complexity due to Moore’s Law are the additional dimensions of hardware-software interaction validation, complex power management schemes, and other physical effects that now directly affect functional correctness. Emerging solutions, such as constrained-random, formal property checking, emulation (and so on) didn’t emerge because they were just cool ideas. They emerged to address specific problems. Many design teams are looking for a single hammer that they can use to address today’s verification challenges. Unfortunately, we are dealing with an NP-hard problem, which means that there will never be a single solution that solves all classes of problems.

Many design teams are looking for a single hammer that they can use to address today’s verification challenges.

Historically, the industry has always addressed complexity through abstraction (e.g., the move from transistors to gates, the move from gates to RTL, etc.). Strategically, the industry will be forced to move up in abstraction to address today’s challenges. However, there is still a lot of work to be done (in terms of research and tool development) to make this shift in design and verification a reality.
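
A back-of-the-envelope sketch of Foster’s growth claim, with purely illustrative numbers of my own: if a design’s state-holding bits grow at a Moore’s Law rate (doubling per generation), the state space a verifier must reason about is 2 to that power, i.e., double exponential in time.

    #include <cstdio>
    #include <cmath>

    int main() {
        double bits = 1000.0;      // assumed starting point: 1k flip-flops
        for (int gen = 0; gen <= 4; ++gen) {
            // Print the exponent: log10(2^bits) = bits * log10(2).
            // The raw state count overflows any native type immediately.
            printf("gen %d: %6.0f state bits -> ~10^%.0f states\n",
                   gen, bits, bits * std::log10(2.0));
            bits *= 2.0;           // Moore's Law: design size doubles
        }
        return 0;
    }

Five generations take the exponent from roughly 10^301 to 10^4816 states, which is why no single tool, and no amount of brute-force simulation, can close the gap on its own.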

Caravella: The verification gap is a broad topic so I’m not exactly sure what you’re looking for, but here’s a good guess.

Balancing resource and budget for a product must be done across much more than just verification.

Verification is only a portion of the total effort, resources and investment required to develop products and release them to market. Balancing resources and budget for a product must be done across much more than just verification. Bringing a chip to market (and hence revenue) requires design, validation, test, DFT, qualification and yield optimization. Given this, and the insatiable need for more pre-tapeout verification, what is the best balance? I would say that the chip does not need to be perfect when it comes to verification/bugs; it must be “good enough”. Spending 2x the resources/budget to identify bugs that do not impact the system or the customer is a waste of resources. Those resources could be better spent elsewhere in the product development food chain, or used to do more products and grow the business. The main challenge is how best to quantify the risk to maximize the ROI of any verification effort.

Jasper: [Editor’s Note: Although not part of the panel, Jasper provided an additional perspective on the verification gap.]

  • Customers are realizing that UVM is very “heavy” for IP verification. Specifically, writing and debugging a UVM testbench for block- and unit-level IP is a very time-consuming task in and of itself, plus it incurs an ongoing overhead in regressions when the UVCs are effectively “turned off” and/or simply used as passive monitors for system-level verification. Increasingly, we see customers ditching the low-level UVM testbench and exhaustively verifying their IPs with formal-based techniques. In this way, users can focus on system integration verification and not have to deal with bugs that should have been caught much sooner.

    UVM is very “heavy” for IP verification.

  • Speaking of system-level verification: we see customers applying formal at this level as well. In addition to the now-familiar SoC connectivity and register validation flows, we see formal replacing simulation in architectural design and analysis. In short, even without any RTL or SystemC, customers can use an architectural spec to feed formal under the hood and exhaustively verify that a given architecture or protocol is correct by construction, won’t deadlock, etc. (a toy illustration of this kind of exhaustive check follows this list).
  • The need for sharing coverage data between multiple vendors’ tool chains is increasing, yet companies appear to be ignoring the UCIS interoperability API. This is creating a big gap in customers’ verification closure processes, because it is a challenge to compare verification metrics across multi-vendor flows, and they are none too happy about it.
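
As a toy illustration of that deadlock point (my own sketch, unrelated to Jasper’s engines): a formal tool proves a protocol cannot deadlock by exhaustively exploring every reachable state of an abstract model and checking that each one has a successor, rather than sampling behavior the way simulation does. The protocol below is invented for the example.

    #include <cstdio>
    #include <queue>
    #include <set>
    #include <vector>

    // Abstract protocol state: requester 0=idle, 1=waiting, 2=done;
    // responder 0=idle, 1=busy.
    struct State {
        int req, rsp;
        bool operator<(const State& o) const {
            return req != o.req ? req < o.req : rsp < o.rsp;
        }
    };

    int main() {
        std::set<State> seen;       // states already explored
        std::queue<State> work;     // frontier of the search
        work.push({0, 0});
        seen.insert({0, 0});
        bool deadlock = false;

        while (!work.empty()) {
            State s = work.front(); work.pop();
            std::vector<State> next;
            if (s.req == 0)               next.push_back({1, s.rsp}); // issue request
            if (s.req == 1 && s.rsp == 0) next.push_back({1, 1});     // responder accepts
            if (s.req == 1 && s.rsp == 1) next.push_back({2, 0});     // response returns
            if (s.req == 2)               next.push_back({0, s.rsp}); // requester retires
            if (next.empty()) { deadlock = true; break; }             // stuck state found
            for (const State& n : next)
                if (seen.insert(n).second) work.push(n);
        }
        if (deadlock) printf("deadlock reachable\n");
        else printf("no deadlock: all %zu reachable states have successors\n",
                    seen.size());
        return 0;
    }

Because the search visits every reachable state, a pass is a proof over the model, not a statistical argument; production formal tools do the same thing symbolically over vastly larger spaces.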