
Posts Tagged ‘Mentor Graphics’


Blog Review – Mon April 14 2014

Monday, April 14th, 2014

A warning about the static keyword in the C language; wearable electronics; more power to the user interface; IP sales – where and when to shop around; EDA consolidation concerns. By Caroline Hayes, Senior Editor

Using the static keyword in the C language can cause mayhem and confusion, but Jacob Beningo, ARM, has helpful advice in his blog about when and where to declare it.

With an eye on the aesthetics of wearable electronics, Ansys’s Sudhir Sharma writes about cool, wearable electronics design, with some interesting examples and details of a related webinar using Synapse for engineering services.

As a follow-up to his web seminar, Create Compelling User Interfaces for Embedded with Qt Framework, Phil Brumby sits in the guest blogger seat at Mentor Graphics. He uses it as a platform to complete unfinished business, answering questions not covered in the seminar and helping to assess the processor power required for a particular project.

When and what to buy, and whether to buy at all, is the focus of a well-constructed blog by Neha Mittal, Arrow Devices. It defines the four IP development models and lists the advantages and disadvantages of each.

John Blyler looks at the EDA market’s recent merger activity and considers the future, with a Consolidation Curve and the effect consolidation has on the industry and its innovation.

Blog Review – April 07 2014

Monday, April 7th, 2014

Interesting comments from PADS users are teasingly outlined by John McMillan, Mentor Graphics.

An impressed Dominic Pajak, ARM, relates hopes for the IoT with a demonstration that uses ARM mbed and Google Glass to view and control an industrial tank.

The cultural gap between engineers is examined by Brian Fuller, Cadence, as he reviews DesignCon 2014 and group director Frank Schirrmeister’s call for a “new species” of system designer.

More IoT thoughts, this time a warning from Divya Naidu Kol, Intel, with ideas of how to welcome the IoT without losing control of our information.

Blog Review – Mar 24 2014 Horse-play; games to play; multi-core puzzles; Moore pays

Monday, March 24th, 2014

Cadence’s Virtuoso migration path explained; Dassault reaches giddy-up heights in showjumping; an enthusiastic review of the Game Developers Conference 2014; Mentor offers hope for embedded developers coping with complexity; and MonolithIC 3D believes the era of smaller transistors without cost penalties is nearing its end. By Caroline Hayes, Senior Editor.

Advocating design migration, Tom Volden, Cadence, presents an informative blog explaining the company’s Virtuoso design migration flow.

Last week, Paris, France, hosted the Saut Hermès international showjumping event, and Aurelien Dassault reports on a 3D Experience that lets TV viewers learn more about the artistry.

Creating a whole new game plan, Ellie Stone, ARM, reviews some of the highlights from GDC 14 (the Game Developers Conference), with news of partner projects, the Sports Car Challenge and the Artist Competition wall.

The joy for a consumer is the bane of a developer’s working day: high complexity means developing multi-threaded applications with multiple operating systems in mind, laments Anil Khanna, Mentor. He does offer hope, though, with this blog and a link to more information.

Do not mourn the demise of Moore’s Law without counting the cost, warns Zvi Or-Bach, MonolithIC 3D. His blog has some interesting illustrations for the end of smaller transistors without price increases.

Blog Review – March 10 2014

Monday, March 10th, 2014

By Caroline Hayes, Senior Editor

Dassault takes to the stage; DVCon keynote urges apps take-up; Getting smart about smart products; Mentor’s Automotive Open Source event; ARM graphic is a water cooler moment; advocating early software development.

First the Oscars, now the SXSW Interactive Festival. Elena tells how Dassault Systèmes will appear in Austin, Texas for the second year in a row, showcasing 3D technologies in a theatre stage setting. Lights, camera, action!

At this year’s DVCon, Brian Fuller relates how Cadence CEO Lip-Bu Tan advocates social media and apps development in the engineering ecosystem in his keynote speech at the event.

Be warned, Todd McDevitt, Ansys, has a lot to say about Systems Engineering for Smart Products, and he makes a great start in this first of a series of blogs explaining the evolution of MBSE (model-based systems engineering).

If you have a window in your diary for tomorrow, Matt Radochonski, Mentor Graphics, encourages you to attend the Silicon Valley Automotive Open Source event Security and Performance Benefits of Automotive Virtualization. He helpfully provides an abstract from colleague Felix Baum’s presentation to whet your appetite.

Someone at ARM has clearly had trouble communicating with engineers. Karthik Ranian supplies a humorous pie chart summarizing their foibles. Never has a single graphic attracted so many comments – and judgments.

For those struggling with early software development, single-minded Michael Posner offers advice on Synopsys virtual prototyping and hybrid prototyping tools to get the message across that you can never plan too early. It may not be an all-encompassing round up but some of the graphics are effective.

Blog Review – March 03 2014

Monday, March 3rd, 2014

By Caroline Hayes, Senior Editor

Cadence’s Brian Fuller admits to some skullduggery in reducing last week’s Mobile World Congress in Barcelona down to a single theme of IP, but his blog makes a good case for why IP is the answer to the industry’s challenges.

Also from Barcelona, Wendy Boswell submits her report on the Mobile World Congress, focusing on Intel news such as the Intel Galileo Hackathon, which is not a challenge to bring down whole networks, but an invitation “to create cool stuff” with the new board. Inevitably, the Internet of Things is also covered in the three-day blog, as are new mobile development tools.

As this year’s DVCon (Design and Verification Conference) begins this week, Real Intent’s Graham Bell reflects on last year’s event. His blog provides links to video recordings of some questions posed there about assertion synthesis during the company-sponsored panel “Where Does Design End And Verification Begin?”

Reaching for the stars, J Van Domelen, Mentor Graphics, considers the selection process for the one-way journey to Mars and finds a near-neighbour of Mentor Graphics is on the ‘short list’ of 1,058 applicants.

Security Challenges in a Connected World

Thursday, February 20th, 2014

By Caroline Hayes, Senior Editor

One of the joys of today’s electronic devices is that they are connected – endlessly connected. Anyone can tweet, engage in social media, and share data, video, audio and graphic files wherever they happen to be. However, with the freedom of connectivity comes vulnerability to security breaches.

Access to applications and interfaces can bring us closer together – but it can also leave devices, and their users’ data, vulnerable to exploitation, attack and theft.

The business risk
The rise of connected devices is exposing networks to security breaches and cyber-attacks. In industrial automation networks, for example, wireless connections bring a serious threat of malware attack, says IHS. It recalls the Stuxnet computer worm that hit industrial control systems in Iran, which was designed to subvert, and engage in surveillance of, supervisory control and data acquisition (SCADA) systems made by the German manufacturer Siemens.

As well as factory automation, many businesses, large and small, employ staff who bring their own devices to the workplace. The risk is that tablets and smartphones may lack sufficient security and can expose a network, allowing hackers to access data or to spread malware throughout an international network.

The embedded revolution
Felix Baum, Senior Product Manager, Runtime Solutions, Mentor Embedded Software Division, agrees that unfettered connectivity is a double-edged sword. “We find ourselves in the midst of an embedded market undergoing revolutionary change. Unlike devices of yesterday that had limited access to the Internet and were mostly purpose-built, today’s embedded devices, with more powerful processing power and numerous built-in connectivity options, run on Linux and/or other modern operating systems side-by-side, which allows these devices to extend features and functionality via upgrades by device manufacturers or by downloaded third party applications. These embedded devices are capable of handling massive amounts of data of increasing value such as personal health records and banking/credit credentials putting them in a position of high risk to be exploited,” he warns.

Most attacks on embedded devices exploit vulnerabilities in software (for example, where Linux runs side-by-side with another operating system), weaknesses in hardware interfacing, multi-tasking and timing, or Internet access via the built-in connectivity options, rather than general data processing or network security issues.

Addressing these issues, Baum expands on what Mentor Graphics offers in the way of embedded protection for today’s and the next generation of embedded, mobile devices. The company’s portfolio spans from general-purpose operating systems, such as Mentor Embedded Linux, to the Nucleus RTOS (Real-Time Operating System), an OpenSSL-based solution with security facilities such as encryption protocols and a set of cryptographic algorithms. It supports cryptographic APIs (Application Programming Interfaces) and the AES (Advanced Encryption Standard) 128 and AES 256, DES (Data Encryption Standard), 3DES (Triple DES), Blowfish and CAST-128 algorithms.

“These operating systems undergo network penetration testing, offer customers the ability to run kernel and application code in separate isolated areas and offer encryption capabilities. When a design relies on the multi-core ARM devices, customers can utilize the Mentor Embedded Hypervisor for additional separation and isolation capabilities to enhance design robustness. The Hypervisor also fully supports ARM TrustZone technology allowing designers to protect sensitive data and code by placing them into Secure World”.
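For readers unfamiliar with these building blocks, the sketch below shows what AES-128 encryption looks like through the OpenSSL EVP interface, which is the kind of cryptographic API referred to above. It is a generic illustration, not Mentor's Nucleus security API, and the hard-coded key and IV are placeholders only; a real device would derive its keys from secure storage.

```cpp
// Minimal AES-128-CBC encryption sketch using the OpenSSL EVP API (link with -lcrypto).
// Illustrative only: error checks are omitted and the key/IV are placeholders.
#include <openssl/evp.h>
#include <cstdio>
#include <vector>

int main() {
    unsigned char key[16] = {0};   // 128-bit key (placeholder; never hard-code real keys)
    unsigned char iv[16]  = {0};   // initialization vector (placeholder)
    const unsigned char msg[] = "sensitive sensor reading";

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    std::vector<unsigned char> out(sizeof(msg) + 16);  // room for block padding
    int len = 0, total = 0;

    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), nullptr, key, iv);
    EVP_EncryptUpdate(ctx, out.data(), &len, msg, sizeof(msg));
    total = len;
    EVP_EncryptFinal_ex(ctx, out.data() + total, &len);   // flush final padded block
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    std::printf("ciphertext bytes: %d\n", total);
    return 0;
}
```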

Prevention is better than cure
Rob Coombs, Security Systems Marketing Director, ARM, considers that a little forethought will go a long way. “Security engineering is a specialized topic where developers need to think about how a malicious adversary would attack the system, not just “does it work”. Typically a specialized secure Trusted OS is needed to provide secure services that live in hardware isolation to the main code. In ARM designs the Trusted OS normally exists in the Secure World that TrustZone architecture provides,” he says.

For Coombs, the process begins with consideration for where an attack may come from. “System designers benefit from thinking from the start how they are going to protect the system from software attack. Security needs to be designed into the hardware with roots of trust and secure boot and then build outwards from there”.

Consumers and business people alike will not give up their connected worlds, so what can be done to design safe, secure embedded devices?

Both Baum and Coombs agree that a holistic approach is necessary. For Baum, this includes hardware, software and the development process “to develop robust and reliable devices. Only by doing so will they be able to ensure that the chain of trust is not broken,” he says. Devices that are booted into a trusted state, with application code that has been authenticated, provide a degree of security, he adds.
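As a conceptual illustration of that chain of trust, the sketch below walks through a boot sequence in which each stage is authenticated before it runs. The digest check is a deliberately simplified stand-in for the hardware-rooted signature verification a real secure-boot flow would use, and all names here are hypothetical.

```cpp
// Conceptual chain-of-trust sketch: each boot stage is authenticated before control
// is handed over. The "digest" is NOT a cryptographic hash; it only stands in for a
// hardware-rooted public-key signature check in a real secure-boot implementation.
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <vector>

struct Image {
    std::vector<uint8_t> code;
    uint32_t expected_digest;   // provisioned in trusted storage at manufacture (illustrative)
};

static uint32_t digest(const std::vector<uint8_t>& code) {
    return std::accumulate(code.begin(), code.end(), 0u);   // placeholder checksum
}

static bool authenticate(const Image& img) {
    return digest(img.code) == img.expected_digest;
}

int main() {
    Image bootloader{{0x12, 0x34}, 0x46};
    Image application{{0x01, 0x02, 0x03}, 0x06};

    // Boot ROM verifies the bootloader; the bootloader verifies the application.
    for (const Image* stage : {&bootloader, &application}) {
        if (!authenticate(*stage)) {
            std::printf("boot halted: stage failed authentication\n");
            return 1;
        }
    }
    std::printf("all stages authenticated; device booted into a trusted state\n");
    return 0;
}
```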

The same holistic approach is advised by Coombs. He points out that the problem with mobile devices is that their very nature means that they are made up of elements that need security yet are accessed by other parties. For example, a cell phone’s SIM (Subscriber Identity Module) will be provided by the OEM, which may need to access the operating system and other secure elements for holding keys and performing system integrity checks. Making the secure elements tamperproof resists physical attacks.

“ARM has a four compartment model of security providing a hierarchy of trust. System designers can decide which assets are best protected in which compartment e.g. hypervisor or TrustZone based TEE (Trusted Execution Environment)”. Coombs describes the latter as an important component in delivering secure services and applications.

Isolation policy
The first compartment is the Normal World, or user/system mode (as opposed to the Trusted World). This is where processes or applications are isolated from each other by the operating system and the MMU (Memory Management Unit). Each process has its own addressable memory and a set of capabilities and permissions, administered by the operating system kernel, which executes with system-level privilege.

Another weapon in the security armory is Hypervisor Mode, where multiple instances of the same or different operating systems execute on the same processor as virtual machines. Each virtual machine can be isolated through the use of a system MMU, which also virtualizes other bus masters. By separating them, resources and assets in each virtual machine can be protected from the others.

In the Trusted World secure state, the company’s TrustZone security extensions allow the system to be physically partitioned into secure and non-secure components. Again, this serves to isolate assets and ensures that software cannot directly access secure memory or secure peripherals.

Finally, the SecurCore processors enable physically separate, tamper-proof ICs, delivering secure processing and storage that is protected against physical attack, against loss through improperly secured devices, and against software attack.

Given all these elements, how can designers balance speed and accuracy in system design? Coombs again points to TrustZone, by which SoC designers can be guided to select the security hardware features needed to address different markets. “ARM Trusted Firmware provides an open source base of critical low level code that the industry can align with. Other security code can then be ported on top”. The benefits of this are reduced time to market and reduced fragmentation, says Coombs, as well as easier porting of Secure World software and the ability to support new features in the latest 64bit platforms.

The next stage is to look to the future. I asked what can be done to future-proof designs for authentication. “ARM has recently joined the FIDO (Fast Identity Online) Alliance and views it is a good place to create a verification framework that works for website owners and device manufacturers,” says Coombs. “A TrustZone based TEE can support secure peripherals (such as a touchscreen) and this can be integrated to create a strong authentication of person and device.

“For Crypto and key stores this is ideally managed from the TrustZone-based TEE to provide hardware isolation from malicious code. If the TEE provider offers Over The Air provisioning of Secure World code then updates can be delivered to future-proof the design”.

Blog Review – Feb. 18 2014

Tuesday, February 18th, 2014

Grand prizes in Paris design; variability pitfalls; snap happy; volume vs innovation

By Caroline Hayes, senior editor

One of the most visually arresting blogs this week is from Neno Horvat at Dassault Systèmes: a fashion parade of projects set against the backdrop of the Hôtel National des Invalides in Paris. The occasion? The Festival de l’Automobile International (FAI), with the Creativ Experience award and the Grand Prix for research into the intelligent car.

Using her blog as a real community jumping-off point and information service, Shelly Stalnaker directs us to fellow Mentor Graphics author Sudhakar Jilla’s article about the variability pitfalls of advanced-node design and manufacturing.

Happy, snappy days are conjured up in the blog by ARM’s rmijat, in which he recounts his smartphone photography presentation at the Electronic Imaging Conference. In one of the week’s most detailed blogs, he takes us through the history of the camera phone to computational photography and future prospects.

Jack Harding, eSilicon, left Las Vegas a richer man, not from a big win, but by reflecting on the prospect of how few companies can bring to market the ICs needed for all the innovation that CES promised.

High Level Synthesis (HLS) Splits EDA Market

Friday, February 14th, 2014

Recent acquisitions and spin-offs by the major electronic design automation companies reveal key differences in the design of complex chips.

Last week, Cadence Design Systems announced the acquisition of Forte Design. This announcement brought renewed interest to the high-level synthesis (HLS) of semiconductor chips. But the acquisition also raises questions about emerging changes in the electronic design automation (EDA) industry. Before looking at these wide-ranging changes, let’s see how this acquisition may affect the immediate HLS market.

At first glance, it seems that Cadence has acquired a redundant tool. Both Forte’s Cynthesizer and Cadence’s C-to-Silicon are SystemC-based applications that help chip designers create complex system-on-chip (SoC) designs from higher levels of abstraction. “High-level synthesis (HLS) tools synthesize C/C++/SystemC code targeting hardware implementation, after the hardware-software trade-offs and partitioning activities have been performed upstream in the design flow,” explained Gary Dare, General Manager at Space Codesign, a provider of front-end architectural EDA design tools.
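For readers who have not seen HLS input before, the sketch below shows the style of SystemC source such tools consume: a small clocked multiply-accumulate module with a clear control, written behaviorally rather than as hand-coded RTL. Whether Cynthesizer or C-to-Silicon would accept this exact coding style is an assumption; the module and signal names are purely illustrative.

```cpp
// Illustrative SystemC sketch of an HLS-style datapath: a multiply-accumulate
// unit with a synchronous clear. An HLS tool would schedule this behavior and
// map the arithmetic onto datapath resources; names here are made up.
#include <systemc.h>

SC_MODULE(mac_unit) {
    sc_in<bool>         clk, rst;
    sc_in<sc_int<16> >  a, b;      // operands (data path)
    sc_in<bool>         clear;     // control: reset the accumulator
    sc_out<sc_int<40> > acc;       // accumulated result

    sc_int<40> sum;

    void compute() {
        if (rst.read() || clear.read())
            sum = 0;
        else
            sum += a.read() * b.read();   // the MAC an HLS tool maps to DSP/ALU resources
        acc.write(sum);
    }

    SC_CTOR(mac_unit) : sum(0) {
        SC_METHOD(compute);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<bool> rst, clear;
    sc_signal<sc_int<16> > a, b;
    sc_signal<sc_int<40> > acc;

    mac_unit mac("mac");
    mac.clk(clk); mac.rst(rst); mac.clear(clear);
    mac.a(a); mac.b(b); mac.acc(acc);

    a = 3; b = 4;
    sc_start(50, SC_NS);           // accumulate 3*4 over a few clock cycles
    std::cout << "acc = " << acc.read() << std::endl;
    return 0;
}
```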

Although both Cadence’s and Forte’s HLS tools are based on SystemC, they are not identical in function.

Forte’s strength lies in the optimization of data path design, i.e., the flow of data on a chip. This strength comes from Forte’s previous acquisition of the Arithmetic IP libraries, which focuses on mathematical expressions and related data types, e.g. floating-point calculations.

How do the data bits from arithmetic computations move through a SoC? That’s where the C-to-Silicon tool takes over. “Forte’s arithmetic and data focus will complement Cadence’s C-to-Silicon strength in control logic synthesis domain,” notes Craig Cochran, VP of Corporate Marketing at Cadence. The control plane serves to route and control the flow of information and arithmetic computations from the data plane world.

Aside from complementary data and control plane synthesis, the primary difference between the two tools is that C-to-Silicon was built on top of a register-transfer level (RTL) compiler, thus allowing chip designers to synthesize from the high-level SystemC description down to the hardware-specific gate level.

The emphasis on the SystemC support for both tools is important. “Assuming that Cadence keeps the Forte Design team, it will be possible to enhance C-to-Silicon with better SystemC support based on Cynthesizer technology,” observed Nikolaos Kavvadias, CEO, Ajax Compilers. “However, for the following 2 or 3 years both tools will need to be offered.”

From a long-term perspective, Cadence’s acquisition of Forte’s tools should enhance their position in classic high-level synthesis (HLS). “Within 2013, Cadence acquired Tensilica’s and Evatronix’ IP businesses,” notes Kavvadias. “Both moves make sense if Cadence envisions selling the platform and the tools to specialize (e.g. add hardware accelerators), develop and test at a high level.”

These last two process areas – design and verification – are key strategies in Cadence’s recent push into the IP market. Several acquisitions beyond Tensilica and Evatronix over the last few years have strengthened the company’s portfolio of design and verification IP. Further, the acquisition of Forte’s HLS tool should give Cadence greater opportunities to drive the SystemC design and verification standards.

Enablement versus Realization

Does this acquisition of another HLS company support Cadence’s long-term EDA360 vision? When first introduced several years ago, the vision acknowledged the need for EDA tools to do more than automate the chip development process. It shifted focus to the development of a hardware and software system in which the hardware development was driven by the needs of the software application.

“Today, the company is looking beyond the classic definition of EDA – which emphasizes automation – to the enablement of the full system, from hardware, software and IP on chips and boards to interconnections and verification of the complete system,” explains Cochran. “And this fits into that system (HLS) context.”

The system design enablement approach was first introduced by Cadence during last month’s earnings report. The company has not yet detailed how the “enablement” approach relates to its previous “realization” vision. But Cochran explains it this way: “Enablement goes beyond automation. Enablement includes our content contribution to our customer’s design in the form of licensable IP and software. The software comes in many forms, from the drivers and applications that run on the Tensilica (IP) processors to other embedded software and codecs.”

This change in semantics may reflect the change in the way EDA tool companies interface with the larger semiconductor supply chain. According to Cochran and others, design teams from larger chip companies are relying more on HLS tools for architectural development and verification of larger and larger chips.  In these ever growing SoC designs, RTL synthesis has become a bottleneck. This means that chip designers must synthesize much larger portions of their chips in a way that reduces human error and subsequent debug and verification activities. That’s the advantage offered by maturing high-level synthesis tools.

Cadence believes that SystemC is the right language for HLS development. But what is the alternative?

HLS Market Fragments

The other major high-level synthesis technology in the EDA market relies on ANSI-C and C++ implementations that involve proprietary libraries and data types, explained Cochran. “These proprietary libraries and data types are needed to define the synthesis approach in terms of mathematical functions, communication between IP blocks and to represent concurrency.” The ANSI-C approach appeals to designers writing software algorithms rather than designing chip hardware.

Kavvadias agrees, but adds this perspective. “Given Synopsys’s recent acquisition of Target Compiler Technologies (TCT), it appears that the big three have different HLS market orientations: Cadence with a SystemC to ASIC/FPGA end-to-end flow, Synopsys moving on to application-specific instruction-set processor (ASIP) synthesis technology, while Mentor has offloaded its HLS business.”

“Further, Synopsys now has two totally distinct ASIP synthesis technologies, LISATek’s Processor Designer and TCT’s IP Designer,” notes Kavvadias. “They are based on different formalisms (LISA and nML) and have different code and model generation approaches. In order to appeal to ASIP synthesis tool users, Cadence will have to focus on the XPRES toolset. But I’m not sure this will happen.”

A few years ago, Mentor Graphics spun out its HLS technology to Calypto. But Mentor still owns a stake in the spin-off company. That’s why long-time EDA analyst Gary Smith believes that the Forte acquisition puts Cadence and Mentor-Calypto’s CatapultC way ahead of Synopsys’s Synfora Synphony C Compiler. “The Synopsys HLS tool pretty much only does algorithmic mapping to RTL, whereas the Forte and Mentor-Calypto tools can do algorithmic mapping, control logic, data paths, registers, memory interfaces, etc. — a whole design.”

What does the Future hold?

Forte’s focus on data path synthesis and associated arithmetic IP should mean few integration issues with Cadence’s existing HLS tool, C-to-Silicon. However, Kavvadias notes that the acquisition makes floating-point IP increasingly important. “It is relevant to algorithmists (e.g. using MATLAB or NumPy/SciPy) wishing to push-button algorithms to hardware.” The efficient implementation of floating-point functions is not a trivial task.

Kavvadias predicts that, “if CDNS buys a matrix-processor oriented IP portfolio, then their next step is definitely a MATLAB- or Python-to-hardware HLS tool and maybe the MATLAB/Python platform beyond that.” Matrix processors are popular in digital signal processing (DSP) applications that require massive multiply-accumulate (MAC) data operations.
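As a rough illustration of the kind of kernel in question, the sketch below is a plain C++ floating-point dot product, the sort of MAC-dominated loop an algorithmist working in MATLAB or NumPy would hope to push-button to hardware. The function name and types are illustrative only.

```cpp
// Sketch of the kind of MAC-heavy kernel DSP-oriented HLS flows target: a
// floating-point dot product. An HLS tool would pipeline this loop and map
// each multiply-accumulate onto DSP/MAC resources.
#include <cstdio>
#include <cstddef>

static float dot_product(const float *a, const float *b, std::size_t n) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        acc += a[i] * b[i];            // one MAC operation per element
    return acc;
}

int main() {
    const float a[] = {1.0f, 2.0f, 3.0f};
    const float b[] = {4.0f, 5.0f, 6.0f};
    std::printf("dot product = %f\n", dot_product(a, b, 3));   // 32.0
    return 0;
}
```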

Today’s sensor and mobile designs require the selection of the most energy-efficient platforms available. In turn, this mandates the need for early, high-level power trade-off studies – perfect for High-Level Synthesis (HLS) tools.

Verification Management

Tuesday, February 11th, 2014

Gabe Moretti, Contributing Editor

As we approach the DVCon conference it is timely to look at how our industry approaches managing design verification.

Much has been said about the tools, but I think not enough resources have been dedicated to the management and measurement of verification. John Brennan, Product Director in the Verification Group at Cadence, observed that verification used to be a whole lot easier: you sent some stimulus to your design, viewed a few waveforms, collected some basic data by looking at the simulator log, and then moved on to the next part of the design to verify. The problem now is that there is simply too much information, and with randomness comes a lack of clarity about what is actually tested and what is not. He continued by stating that you cannot verify every state and transition in your design; it is simply impossible, the magnitude is too large. So what do you verify, and how are IP and chip suppliers addressing the challenge? We at Cadence see several trends emerging that will help users with this daunting task: collaboration-based environments, using the right tool for the job, deep analytics and visibility, and feature-based verification.

My specific questions to the panelists follow.  I chose a representative one from each of them.

* How does a verification group manage the verification process and assess risk?

Dave Kelf, Marketing Director at OneSpin Solutions, opened the detailed discussion by describing the present situation. Whereas design management follows a reasonably predictable path, verification management is still based on a subjective, unpredictable assessment of when enough testing is enough.

Verification management is all about predicting the time and resources required to reach the moving target of verification closure. However, there is still no concrete method available to predict when a design is fully, exhaustively, 100% tested. Today’s techniques all have an element of uncertainty, which translates to the risk of an undetected bug. The best a verification manager can do is to assess the progress point at which the probability of a remaining bug is infinitesimally small.

For a large design block, a combination of test coverage results, a comparison of the test spec against the simulations performed, the time since the last bug was discovered, the verification time spent and the end of the schedule may all play into this decision. For a complete SoC, running the entire system, including software, on an emulator for days on end might be the only way, today, to inspire confidence of a working design.

If we were to solve just one remaining problem in verification, achieving a deep and meaningful understanding of verification coverage pertaining to the original functional specification should be it.

*  What is the role of verification coverage in providing metrics toward verification closure, and is this proving useful?

Thomas L. Anderson, Vice President of Marketing, Breker Verification Systems answered that coverage is, frankly, all that the verification team has to assess how well the chip has been exercised. Code coverage is a given, but in recent years, functional coverage has gained much more prominence. The most recent forms of coverage are derived automatically, for example, from assertions or graph-based scenario models, and so provide much return for little investment.
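To make the idea of functional coverage concrete, here is a small C++ sketch that classifies stimulus values into bins and reports how many bins have been hit. Real flows express this with SystemVerilog covergroups or tool-generated coverage models, so treat the example, and its bin boundaries, as conceptual only.

```cpp
// Conceptual functional-coverage sketch: count how many of the interesting
// "bins" of a packet-length field the generated stimulus has exercised.
#include <cstdio>
#include <set>

enum class LenBin { Small, Medium, Large, Jumbo };

static LenBin classify(unsigned len) {
    if (len <= 64)   return LenBin::Small;
    if (len <= 512)  return LenBin::Medium;
    if (len <= 1518) return LenBin::Large;
    return LenBin::Jumbo;
}

int main() {
    std::set<LenBin> hit;
    const unsigned stimulus[] = {40, 300, 1500, 128, 60};   // lengths the testbench happened to generate
    for (unsigned len : stimulus) hit.insert(classify(len));

    // Coverage = fraction of bins exercised by the stimulus so far.
    std::printf("functional coverage: %zu of 4 bins hit (%.0f%%)\n",
                hit.size(), 100.0 * hit.size() / 4);
    return 0;
}
```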

*  How has design evolution affected verification management? Examples include IP usage and SoC trends.

Rajeev Ranjan, CTO of Jasper Design Automation, observed that as designs get bigger in general, and as they incorporate more and more IPs developed by multiple internal and external parties, integration verification becomes a very large concern. Specific concerns include verification tasks such as interface validation, connectivity checking, functional verification of IPs in the face of hierarchical power management strategies, and ensuring that the hardware coherency protocols do not cause any deadlock in the overall system. Additionally, depending on the end market for the system, security path verification can also be a significant, system-wide challenge.

*  What should be the first step in preparing a verification plan?

Tom Fitzpatrick, verification evangelist, Mentor Graphics, has dedicated many years to the study and solution of verification issues. He noted that the first step in preparing a verification plan is to understand what the design is supposed to do and under what conditions it’s expected to do it. Verification is really the art of modeling the “real world” in which the device is expected to operate, so it’s important to have that understanding. After that, it’s important to understand the difference between “block-level” and “system-level” behaviors that you want to test. Yes, the entire system must be able to, for example, boot an RTOS and process data packets or whatever, but there are a lot of specifics that can be verified separately at the block- or subsystem-level before you just throw everything together and see if it works. Understanding what pieces of the lower level environments can be reused and will prove useful at the system level, and being able to reuse those pieces effectively and efficiently, is one key to verification productivity.

Another key is the ability to verify specific pieces of functionality as early as possible in the process and use that information to avoid targeting that functionality at higher levels. For example, using automated tools at the block level to identify reset or X-propagation issues, or state machine deadlock conditions, eliminates the need to try and create stimulus scenarios to uncover these issues. Similarly, being able to verify all aspects of a block’s protocol implementation at the block level means that you don’t need to waste time creating system-level scenarios to try and get specific blocks to use different protocol modes. Identifying where best to verify the pieces of your verification plan allows every phase of your verification to be more efficient.

*  Are criteria available to determine what tools need to be considered for various project phases? Which tools are proving effective? Is budget a consideration?

Yuan Lu, Chief Verification Architect, Atrenta Inc., contributed the following. Verification teams deploy a variety of tools to address various categories of verification issues, depending on how you break your design into multiple blocks and what you want to test at each level of hierarchy. At a macro level, comprehensive, exhaustive verification is expected at the block/IP level. However, at the SoC level, functions such as connectivity checking, heartbeat verification, and hardware/software co-verification are performed.

Over the years, there has emerged some level of consensus within the industry as to what type of tools need to be used for verification at the IP and SoC levels. But, so far, there is no perfect way to hand off IPs to the SoC team. The ultimate goal is to ensure that the IP team communicates to the SoC team about what has been tested and how the SoC team can use this information to figure out if the IP level verification was sufficient to meet the SoC needs.

*  Not long ago, the Universal Verification Methodology (UVM) was unveiled with the promise of improving verification management, among other advantages. How has that worked?

Herve Alexanian, Engineering Director, Advanced Dataflow Group at Sonics, Inc. pointed out that as an internal protocol is thoroughly specified, including configurable options, a set of assertions can naturally be written or generated depending on the degree of configurability. Along the same lines, functional coverage points and reference (UVM) sequences are also defined. These definitions are the best way to enter modern verification approaches, allowing the most benefit from formal techniques and verification planning. Although some may see such definitions as too rigid to accommodate changes in requirements, making a change in a fundamental interface is intentionally costly as it is in software. It implies additional scrutiny on how architectural changes are implemented in a way that tends to minimize functional corners that later prove so costly to verify.

*  What areas of verification need to be improved to reduce verification risk and ease the management burden?

Vigyan Singhal, President and CEO, Oski Technology, said that for the most part, current verification methodology relies on simulation and emulation for functional verification. As shown consistently in the 2007, 2010 and 2012 Wilson Research Group surveys sponsored by Mentor Graphics, two thirds of projects are behind schedule and functional bugs are still the main culprit for chip respins. This shows that the current methodology carries significant verification risk.

Verification teams today spend most of their time in subsystem (63.9%) and full-chip simulation (36.1%), and most of that time is spent in debugging (36%). This is not surprising, as debugging at the subsystem and chip level with thousands of long cycle traces can take a long time.

The solution to the challenge is to improve block-level design quality so as to reduce the verification and management burden at the subsystem and chip level. Formal property verification is a powerful technique for block-level verification. It is exhaustive and can catch all corner-case bugs. While formal verification adds another step in the verification flow with additional management tasks to track its progress, the time and effort spent will lead to reduced time and effort at the subsystem and chip level, and improve overall design quality. With short time-to-market windows, design teams need to guarantee first-silicon success. We believe increased formal usage in the verification flow will reduce verification risks and ease management burden.

Having opened the discussion, John Brennan also closed it, noting that functional verification has no single silver bullet; it takes multiple engineers, operating across multiple heterogeneous engines, with multiple analytics. This multi-specialist verification is here now, and the VPM tools that support it are needed now.

Blog Review: Feb. 10 – Vehicle chat; the firmware-software divide; Webinars; Nanomanufacturing

Monday, February 10th, 2014

This week, vehicle-to-vehicle communication could be a safety technology; there is a call for firmware engineers to be trained as software engineers to avoid catastrophic mistakes; Ansys announces a series of webinars, but hurry, they start tomorrow! And the Semiconductor Industry Association calls for nanomanufacturing action.

There are ways to express dissatisfaction with another driver’s performance, usually involving hand gestures behind glass, but John Day, Mentor Graphics, looks at how vehicle-to-vehicle technology could be the way forward for automotive safety improvements – and perhaps nicer, calmer drivers too.

We’ve missed Sandy Adam, Ansys, but she is back with some solid news in her blog. Ansys is hosting a series of webinars, each of which will be available in the company’s Resource Library within a week of the live event. The list of webinar topics, which kicks off tomorrow, and details of how to register are included.

Who listens to engineers, asks Bob Scaccia, ARM. Firmware engineers and engineering managers alike are overlooked – and at what cost.

The science behind nanomanufacturing cannot be ignored, says Falan Yinug, SIA (Semiconductor Industry Association), as he outlines the high points but also the challenges.
