
Posts Tagged ‘UVM’


Blog Review – Monday, February 27, 2017

Monday, February 27th, 2017

Intel and the IoT at Mobile World Congress; Hardware rallying cry; Space race; UVM update; Keeping track of heritage with archaeological tools

It’s Mobile World Congress this week, in Barcelona, Spain (February 27 to March 2). Alison Challman, Intel, rounds up some of the IoT highlights at the show, encompassing automated driving, smart cities, and smart and connected homes.

Putting some pep into hardware design, Dave Pursley, Cadence, advocates that hardware designers adopt a higher level of abstraction and then synthesize to RTL implementations via high-level synthesis, and be happier for it.

Lamenting the slow pace of space electronics technology compared with commercial products, Ross Bannatyne, Vorago Technologies, reports on the company’s ARM Cortex-M0 MCU in the SpaceX Falcon 9, which headed for the International Space Station.

Reflecting on the development of UVM 1.2, Tom Fitzpatrick, Mentor Graphics, charts the progress of the Universal Verification Methodology (UVM) and how to tackle compatibility with earlier versions. More can be discussed at the company’s booth at this week’s DVCon US.

Exploring Android using Qt tools is the topic taken up by Laszlo Agocs, Qt, with an example of how to develop Android TV Vulkan content. His blog is a guide to building qtbase for Android, targeting the 64-bit architecture of the Tegra X1-based NVIDIA Shield TV and using the QtGui and QtQuick modules.

Digital preservation captures important physical sites, which may be at risk, or lost completely, through earthquakes, floods, the passage of time and human threats. Alyssa, Dassault Systemes, has found some examples of how CyArk is preserving sites and how virtual reality headsets can make them accessible.

Caroline Hayes, Senior Editor

Blog Review – Monday, November 16, 2015

Monday, November 16th, 2015

ARM TechCon 2015 highlights: IoT, mbed and magic; vehicle monitoring systems; the road ahead for automotive design

It’s crunch time for IoT, announced ARM CEO Simon Segars at ARM TechCon. Christine Young, Cadence, reports on what Segars believes is needed to get the IoT right.

Posing as a ‘booth babe’, Richard Solomon, Synopsys, was also at ARM TechCon demonstrating the latest iteration of DesignWare IP for PCI Express 4.0. As usual, there are pictures illustrating some of the technology, this time around switch port IP and Gen2 PCI, and quirky pictures from the show floor, to give readers a flavor of the event.

Tracking the progress of mbed OS, Chris Ciufo, eecatalog, prowled the mbed Zone at this year’s ARM TechCon, finding IoT ‘firsts’ and updates of wearables.

Enchanted by IoT, Eric Gowland, ARM, found ARM TechCon full of wonder and magic – or, to paraphrase Arthur C. Clarke, technology that was indistinguishable from magic. There are some anecdotes from the event – words and pictures – of how companies are using the cloud and the IoT and inspiring the next generation of magicians.

Spotting where Zynq devices are used in booth displays might become an interesting distraction when I am visiting some lesser shows in future. I got the idea from Steve Leibson, Xilinx, who happened upon the Micrium booth at ARM TechCon, where one was being used; stopping to investigate, he found out about the free μC/OS for Makers.

Back to Europe, where DVCon Europe was held in Munich, Germany (November 11-12). John Aynsley, Doulos, was pleased that UVM is alive and well and that companies like Aldec are realising that help and support are needed.

Identifying the move from behavior-based driver monitoring systems to inward-looking, camera-based systems, John Day, Mentor Graphics, looks at what this use of sensors will mean for automakers who want to combine value and safety features. Deciding how many functions to offer will be increasingly important for automakers, he advises.

Still with the automotive industry, Tomvanvu, Atmel, addresses anyone designing automotive embedded systems and looks at what is driving progress toward the inevitable self-driving cars.

Caroline Hayes, Senior Editor

Blog Review – Tuesday, August 18, 2015

Monday, August 17th, 2015

Where will the future of embedded software lead; Manufacturing success; DDR memory IP – a personal view; Untangling the IoT protocols; The battle of virtual prototyping; Accellera SI update; Smart buildings; SoC crisis management

The rise of phones, GPS, tablets and cars means embedded software increases in complexity, muses Colin Walls, Mentor Graphics. He traces the route of hardware and software in simple systems to ones that have to work harder and smarter.

Managing to avoid sounding smug, Falan Yinug reports on the SIA (Semiconductor Industry Association) paper confirming the semiconductor industry is the USA’s most innovative manufacturing industry, and looks at its role in the economy.

Less about being a woman in technology and more about the nitty gritty of DDR controller memory IP: Anne Hughes, DDR IP Engineering, Cadence, talks to Christine Young.

Fitting protocols like CoAP and IPSO Smart Objects into the IoT structure can be daunting, but Pratulsharma, ARM, has written an illustrated blog that can lead readers through the wilderness.

Clearly taken with the Battlebots TV show, Tom De Schutter, Synopsys, considers how to minimise design risks to avoid destruction, or at least to ensure designs behave as intended.

A considered view of the Accellera Systems Initiative is given by Gabe Moretti, Chip Design Magazine. He elaborates on what the UVM standardization will mean for the wider EDA industry.

Where the IoT is used, and how, for smart buildings, is examined by Rob Sheppard, Intel.

Alarming in his honesty, Gadge Panesar, Ultrasoc, says no-one knows how SoCs operate and urges others to be as honest as he is and seek help – with some analytics and IP independence.

Accellera Systems Initiative Continues to Grow

Thursday, October 17th, 2013

By Gabe Moretti

The convergence of system, software and semiconductor design activities to meet the increasing challenges of creating complex system-on-chips (SoCs) has brought to the forefront the need for a single organization to create new EDA and IP standards.

As one of the founders of Accellera, and the one responsible for its name, it gives me great pleasure to see how the consortium has grown and widened its interests.  Through the mergers and acquisitions of the Open SystemC Initiative (OSCI), Virtual Sockets Interface Alliance (VSIA), The SPIRIT Consortium, and now assets of OCP-IP, Accellera is the leading standards organization that develops language-based standards used by system, semiconductor, IP and EDA companies.

As its original name implies (accellera is Italian for “accelerate”), its activities target EDA tools and methods with the aim of fostering efficiency and portability.

Created to develop standards for design and verification languages and methods, Accellera has grown by merging or acquiring other consortia, expanding its role to Electronic System Level standards and IP standards.  It now has forty-one member companies from industries such as EDA, IP, semiconductors, and electronics systems.  As a result of its wider activities, even its name has grown: it is now “Accellera Systems Initiative”.

In addition to the corporate members, Accellera has formed three User Communities to educate engineers and increase the use of standards.  The Communities are OCP, SystemC, and UVM.  The first deals with IP standards and issues, the second supports the SystemC modeling and verification language, and the third supports the Universal Verification Methodology.

Accellera has 17 active Technical Committees.  Their work to date has resulted in 7 IEEE standards.  Accellera sponsors a yearly conference, DVCon, generally held in February, and also collaborates with engineering conferences in Europe and Japan.  With the growth of electronics activities in nations like India and China, Accellera is considering a more active presence in those countries as well.

Verification Joins the Adults’ Table

Tuesday, January 24th, 2017

Adam Sherer, Group Director, Product Management, System & Verification Group, Cadence

As we plan for our family gatherings this holiday season, it’s time to welcome Verification to the adults’ table. Design and Implementation are already at the table, having established their own families consisting of architects with the comprehensive experience to manage the overall flow and specialists who provide the deep knowledge needed to make each project succeed. Verification has matured with the realization that it needs its own family of architects and specialists that have the experience and knowledge to rapidly and repeatedly verify complex projects.

Figure 1 The family table

This maturation of Verification occurred as complexity drove the need for the architect’s role. Designs pushed through a billion gates and systems grew their functional dependency on the fusion of analog, software, digital, and power. Meanwhile, the teams verifying these designs became distributed around the globe. A holistic view of verification became necessary and it was rooted in a more rigorous verification planning process. When we listen to the architect at our holiday dinner this year, we’ll hear how she wished for and got verification management automation with Cadence’s vManager solution. In order to close her verification plan, she needs to reuse verification IP (VIP), specify new Cadence VIP protocols, and direct the internal development of new VIP running on a range of verification engines. She also realizes that traditional methods will not scale to complex scenarios that must be verified across the complete SoC, so she is excited by the new portable stimulus standard work in Accellera and is piloting a project using Cadence’s Perspec System Verifier to gain an efficiency edge over her company’s competitors.

Design and Implementation were impressed by the automation that Verification was able to access. They asked Verification if that meant she had resources to spare for their families. She couldn’t help but laugh but then calmed down and explained how her family is growing with the specialists needed to implement the verification plans. She also discussed how those experts are actually already working with experts from Design and Implementation to achieve verification closure.

Figure 2 The Cadence Verification Family

Verification is a multi-engine, multi-abstraction, multi-domain task that starts and finishes with the entire development team. At the start of development, design experts and verification experts apply JasperGold formal analysis with coverage to both raise quality and mark the block-level features as verified in the overall plan. UVM experts then step in to complete comprehensive IP/subsystem verification using high-performance digital and mixed-signal simulation with the Incisive Enterprise Simulator. While the randomization and four-state simulation are critical at this stage, the UVM testbench can consume as much as 50% of the simulation time, which lengthens runtime as the project moves to subsystem and SoC integration.

The verification experts then apply acceleration techniques to reduce time spent in the testbench, develop new scenarios with the Perspec System Verifier to enable fast four-state RTL simulation with the Cadence RocketSim Parallel Simulation Engine, and accelerate with the Cadence Palladium Z1 Enterprise Emulation System. As the project moves to the performance, capacity, coverage, and accessibility of the Palladium Z1 engine, new experts are able to address system features dependent on bare metal software and in-circuit data. Since the end customer interacts with the system through application software, the verification experts work with software teams using the Cadence Protium Rapid Prototyping Platform, which provides the performance needed to support the verification needs of this team.

With all of these experts around the world, the verification architect explains that she needs fabrics that enable them to communicate. She uses the Cadence Indago Debug Platform and vManager to provide unified debug across the engines, and multi-engine metrics to help her automate the verification plan. More and more of the engines provide verification metrics like coverage from simulation and emulation that can be merged together and rolled up to the vManager solution. Even the implementation teams are working together with the verification experts to simulate post-PG netlists using the Incisive Enterprise Simulator XL and RocketSim solutions, enabling final signoff on the project.

As Design and Implementation pass dessert around the table, they are very impressed with Verification. They’ve seen the growing complexity in their own families and have been somewhat perplexed by how verification gets done. Verification has talked about new tools, standards, and methodologies for years, and they assumed those productivity enhancements meant that verification engineers could remain generalists by accessing more automation. Hearing more about the breadth and depth of the verification challenge has helped them realize that there is an absolute need for a complete verification family with architects and experts. Raising a toast to the newest member of the electronic design adults’ table, the family knows that 2017 is going to be a great year.

Formal, Logic Simulation, Hardware Emulation/Acceleration: Benefits and Limitations

Wednesday, July 27th, 2016

Stephen Bailey, Director of Emerging Technologies, Mentor Graphics

Verification and validation are key terms with the following differentiation: verification (specifically, hardware verification) ensures the design matches R&D’s functional specification for a module, block, subsystem or system; validation ensures the design meets the market requirements, that is, that it will function correctly within its intended usage.

Software-based simulation remains the workhorse for functional design verification. Its advantages in this space include:

-          Cost:  SW simulators run on standard compute servers.

-          Speed of compile & turn-around-time (TAT):  When verifying the functionality of modules and blocks early in the design project, software simulation has the fastest turn-around-time for recompiling and re-running a simulation.

-          Debug productivity:  SW simulation is very flexible and powerful in debug. If a bug requires interactive debugging (perhaps due to a potential UVM testbench issue with dynamic – stack and heap memory based – objects), users can debug it efficiently & effectively in simulation. Users have very fine-grained control of the simulation – the ability to stop/pause at any time, and the ability to dynamically change values of registers, signals, and UVM dynamic objects.

-          Verification environment capabilities: Because it is software simulation, a verification environment can easily be created that peeks and pokes into any corner of the DUT. Stimulus, including traffic generation / irritators, can be tightly orchestrated to inject stimulus with cycle accuracy (a minimal sketch follows this list).

-          Simulation’s broad and powerful verification and debug capabilities are why it remains the preferred engine for module and block verification (the functional specification & implementation at the “component” level).
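
To make the controllability and peek/poke points concrete, here is a minimal, self-contained sketch (hypothetical module and signal names, not tied to any particular simulator): the testbench forces an internal node deep inside the DUT for an exact number of cycles and then reads internal state directly, the kind of fine-grained access that only software simulation offers this cheaply.

```systemverilog
// Minimal sketch of simulation-only controllability (hypothetical names).
module dut (input logic clk, rst_n);
  logic [7:0] err_cnt;
  wire        almost_full = 1'b0;            // normally quiet internal flag
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)           err_cnt <= '0;
    else if (almost_full) err_cnt <= err_cnt + 8'd1;
endmodule

module tb;
  logic clk = 0, rst_n;
  always #5 clk = ~clk;                      // free-running testbench clock

  dut u_dut (.clk(clk), .rst_n(rst_n));

  initial begin
    rst_n = 0;
    repeat (2) @(negedge clk);               // cycle-accurate reset sequencing
    rst_n = 1;
    @(negedge clk);

    // "Poke": force an internal node and hold it for exactly ten clock cycles.
    force u_dut.almost_full = 1'b1;
    repeat (10) @(negedge clk);
    release u_dut.almost_full;

    // "Peek": read internal state directly and check it.
    if (u_dut.err_cnt !== 8'd10)
      $error("expected 10 forced cycles, saw %0d", u_dut.err_cnt);
    else
      $display("peeked err_cnt = %0d, as expected", u_dut.err_cnt);
    $finish;
  end
endmodule
```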

If software-based simulation is so wonderful, then why would anyone use anything else?  Simulation’s biggest negative is performance, especially when combined with capacity (very large, as well as complex designs). Performance, getting verification done faster, is why all the other engines are used. Historically, the hardware acceleration engines (emulation and FPGA-based prototyping) were employed late in the project cycle, when validation of the full chip in its expected environment was the objective. However, both formal and hardware acceleration are now being used for verification as well. Let’s continue with the verification objective by first exploring the advantages and disadvantages of formal engines.

-          Formal’s number one advantage is its comprehensive nature. When provided a set of properties, a formal engine can verify, either exhaustively (for all of time) or for a typically broad but bounded number of clock cycles, that the design will not violate the property(ies). The prototypical example is verifying the functionality of a 32-bit wide multiplier. In simulation, it would take far too many years to exhaustively check every possible pair of legal multiplicand and multiplier inputs against the expected product for it to be feasible. Formal can do it in minutes to hours (see the property sketch after these bullets).

-          At one point, a negative for formal was that it took a PhD to define the properties and run the tool. Over the past decade, formal has come a long way in usability. Today, formal-based verification applications package properties for specific verification objectives with the application. The user simply specifies the design to verify and, if needed, provides additional data that they should already have available; the tool does the rest. There are two great examples of this approach to automating verification with formal technology:

  • CDC (Clock Domain Crossing) Verification:  CDC verification uses the formal engine to identify clock domain crossings and to assess whether the (right) synchronization logic is present. It can also create metastability models for use with simulation to ensure no metastability across the clock domain boundary is propagated through the design. (This is a level of detail that RTL design and simulation abstract away. The metastability models add that level of detail back to the simulation at the RTL instead of waiting for and then running extremely long full-timing, gate-level simulations.)
  • Coverage Closure:  During the course of verification, formal, simulation and hardware accelerated verification will generate functional and code coverage data. Most organizations require full (or nearly 100%) coverage completion before signing-off the RTL. But, today’s designs contain highly reusable blocks that are also very configurable. Depending on the configuration, functionality may or may not be included in the design. If it isn’t included, then coverage related to that functionality will never be closed. Formal engines analyze the design, in the configuration(s) that actually apply, and perform a reachability analysis for any code or (synthesizable) functional coverage point that has not yet been covered. If it can be reached, the formal tool will provide an example waveform to guide development of a test to achieve coverage. If it cannot be reached, the manager has a very high level of certainty in approving a waiver for that coverage point.

-          With comprehensiveness being its #1 advantage, why doesn’t everyone use and depend fully on formal verification?

  • The most basic shortcoming of formal is that you cannot simulate or emulate the design’s dynamic behavior. At its core, formal simply compares one specification (the RTL design) against another (a set of properties written by the user or incorporated into an automated application or VIP). Both are static specifications. Human beings need to witness dynamic behavior to ensure the functionality meets marketing or functional requirements. There remains no substitute for “visualizing” the dynamic behavior to avoid the GIGO (Garbage-In, Garbage-Out) problem. That is, the quality of your formal verification is directly proportional to the quality (and completeness) of your set of properties. For this reason, formal verification will always be a secondary verification engine, albeit one whose value rises year after year.
  • The second constraint on broader use of formal verification is capacity or, in the vernacular of formal verification:  State Space Explosion. Although research on formal algorithms is very active in academia and industry, formal’s capacity is directly related to the state space it must explore. Higher design complexity equals more state space. This constraint limits formal usage to module, block, and (smaller or well pruned/constrained) subsystems, and potentially chip levels (including as a tool to help isolate very difficult to debug issues).
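
As a concrete illustration of the multiplier example above, here is a minimal sketch (hypothetical module and property names) of the kind of assertion a formal engine can prove exhaustively; simulation could never enumerate all 2^64 operand pairs, but formal can discharge a property like this in minutes to hours.

```systemverilog
// Minimal sketch of an exhaustively provable multiplier property (hypothetical names).
module mult32 (
  input  logic        clk, rst_n,
  input  logic [31:0] a, b,
  output logic [63:0] p
);
  // Registered implementation under test: one-cycle-latency full product.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) p <= '0;
    else        p <= a * b;

  // Golden property: the output must equal the product of the previous cycle's
  // operands for every possible input combination. A formal engine proves this
  // for all 2^64 operand pairs; simulation can only sample a tiny fraction.
  a_product_correct: assert property (
    @(posedge clk) disable iff (!rst_n)
      $past(rst_n) |-> p == $past(a) * $past(b)
  );
endmodule
```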

The use of hardware acceleration has a long, checkered history. Back in the “dark ages” of digital design and verification, gate-level emulation of designs had become a big market in the still young EDA industry. Zycad and Ikos dominated the market in the late 1980’s to mid/late-1990’s. What happened?  Verilog and VHDL plus automated logic synthesis happened. The industry moved from the gate to the register-transfer level of golden design specification; from schematic based design of gates to language-based functional specification. The jump in productivity from the move to RTL was so great that it killed the gate-level emulation market. RTL simulation was fast enough. Zycad died (at least as an emulation vendor) and Ikos was acquired after making the jump to RTL, but had to wait for design size and complexity to compel the use of hardware acceleration once again.

Now, 20 years later, it is clear to everyone in the industry that hardware acceleration is back. All 3 major vendors have hardware acceleration solutions. Furthermore, there is no new technology able to provide a similar jump in productivity as did the switch from gate-level to RTL. In fact, the drive for more speed has resulted in emulation and FPGA prototyping sub-markets within the broader market segment of hardware acceleration. Let’s look at the advantages and disadvantages of hardware acceleration (both varieties).

-          Speed:  Speed is THE compelling reason for the growth in hardware acceleration. In simulation today, the average performance (of the DUT) is perhaps 1 kHz. Emulation expectations are for +/- 1 MHz and for FPGA prototypes 10 MHz (or at least 10x that of emulation). The ability to get thousands of times more verification cycles done in a given amount of time is extremely compelling. What began as the need for more speed (and effective capacity) to do full chip, pre-silicon validation driven by Moore’s Law and the increase in size and complexity enabled by RTL design & design reuse, continues to push into earlier phases of the verification and validation flow – AKA “shift-left.”  Let’s review a few of the key drivers for speed:

  • Design size and complexity:  We are well into the era of billion gate plus design sizes. Although design reuse addressed the challenge of design productivity, every new/different combination of reused blocks, with or without new blocks, creates a multitude (exponential number) of possible interactions that must be verified and validated.
  • Software:  This is also the era of the SoC. Even HW compute intensive chip applications, such as networking, have a software component to them. Software engineers are accustomed to developing on GHz speed workstations. One MHz or even 10’s of MHz speeds are slow for them, but simulation speeds are completely intolerable and infeasible to enable early SW development or pre-silicon system validation.
  • Functional Capabilities of Blocks & Subsystems:  It can be the size of input data / stimuli required to verify a block’s or subsystem’s functionality, the complexity of the functionality itself, or a combination of both that drives the need for huge numbers of verification cycles. Compute power is so great today, that smartphones are able to record 4k video and replay it. Consider the compute power required to enable Advanced Driver Assistance Systems (ADAS) – the car of the future. ADAS requires vision and other data acquisition and processing horsepower, software systems capable of learning from mistakes (artificial intelligence), and high fault tolerance and safety. Multiple blocks in an ADAS system will require verification horsepower that would stress the hardware accelerated performance available even today.

-          As a result of these trends which appear to have no end, hardware acceleration is shifting left and being used earlier and earlier in the verification and validation flows. The market pressure to address its historic disadvantages is tremendous.

  • Compilation time:  Compilation in hardware acceleration requires logic synthesis and implementation / mapping to the hardware that is accelerating the simulation of the design. Synthesis, placement, routing, and mapping are all compilation steps that are not required for software simulation. Various techniques are being employed to reduce the time to compile for emulation and FPGA prototype. Here, emulation has a distinct advantage over FPGA prototypes in compilation and TAT.
  • Debug productivity:  Although simulation remains available for debugging purposes, you’d be right in thinking that falling back on a (significantly) slower engine as your debug solution doesn’t sound like the theoretically best debug productivity. Users want a simulation-like debug productivity experience with their hardware acceleration engines. Again, emulation has advantages over prototyping in debug productivity. When you combine the compilation and debug advantages of emulation over prototyping, it is easy to understand why emulation is typically used earlier in the flow, when bugs in the hardware are more likely to be found and design changes are relatively frequent. FPGA prototyping is typically used as a platform to enable early SW development and, at least some system-level pre-silicon validation.
  • Verification capabilities:  While hardware acceleration engines were used primarily or solely for pre-silicon validation, they could be viewed as laboratory instruments. But as their use continues to shift to earlier in the verification and validation flow, the need for them to become 1st class verification engines grows. That is why hardware acceleration engines are now supporting:
    • UPF for power-managed designs
    • Code and, more appropriately, functional coverage
    • Virtual (non-ICE) usage modes which allow verification environments to be connected to the DUT being emulated or prototyped. While a verification environment might be equated with a UVM testbench, it is actually a far more general term, especially in the context of hardware accelerated verification. The verification environment may consist of soft models of things that exist in the environment the system will be used in (validation context). For example, a soft model of a display system or Ethernet traffic generator or a mass storage device. Soft models provide advantages including controllability, reproducibility (for debug) and easier enterprise management and exploitation of the hardware acceleration technology. It may also include a subsystem of the chip design itself. Today, it has become relatively common to connect a fast model written in software (usually C/C++) to an emulator or FPGA prototype. This is referred to as hybrid emulation or hybrid prototyping. The most common subsystem of a chip to place in a software model is the processor subsystem of an SoC. These models usually exist to enable early software development and can run at speeds equivalent to ~100 MHz. When the processor subsystem is well verified and validated, typically a reused IP subsystem, then hybrid mode can significantly increase the verification cycles of other blocks and subsystems, especially driving tests using embedded software and verifying functionality within a full chip context. Hybrid mode can rightfully be viewed as a sub-category of the virtual usage mode of hardware acceleration. (A generic DPI-C sketch of this software-to-hardware connection follows this list.)
    • As with simulation and formal before it, hardware acceleration solutions are evolving targeted verification “applications” to facilitate productivity when verifying specific objectives or target markets. For example, a DFT application accelerates and facilitates the validation of test vectors and test logic which are usually added and tested at the gate-level.
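
The virtual/hybrid connection described above is vendor-specific in practice, but the underlying idea can be sketched generically with DPI-C. This is not any vendor’s SCE-MI or co-model API; the function and module names are hypothetical, and the C side of the soft model is assumed to exist and be linked in.

```systemverilog
// Generic sketch only: a soft traffic generator written in C/C++ is reached
// from the hardware side through a DPI-C function call. In a real flow this
// boundary is managed by the emulator's co-modeling / SCE-MI infrastructure.
import "DPI-C" function int unsigned traffic_gen_next_byte();  // C model, assumed

module virtual_stimulus (
  input  logic       clk, rst_n,
  output logic       valid,
  output logic [7:0] data
);
  int unsigned nxt;

  // Once per cycle, ask the software model for the next stimulus byte.
  // By convention here, any return value above 0xFF means "idle this cycle".
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      valid <= 1'b0;
      data  <= '0;
    end else begin
      nxt   = traffic_gen_next_byte();
      valid <= (nxt <= 32'hFF);
      data  <= nxt[7:0];
    end
  end
endmodule
```

The same function-call boundary is what lets a fast processor model running on the workstation drive the emulated design in hybrid mode.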

In conclusion, it may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).

ASIC Prototyping With FPGA

Thursday, February 12th, 2015

Zibi Zalewski, General Manager of the Hardware Products Division, Aldec

When I began my career as a verification products manager, ASIC/SoC verification was less integrated and the separation among verification stages, tools, and engineering teams was more obvious. At that time the verification process started with simulation, especially for the development phase of the hardware, using relatively short test cases. As the design progressed and more advanced tests became necessary, the move to emulation was quite natural. This was especially true with the availability of emulators with debugging capabilities that enabled running longer tests in a shorter time and debugging issues as they arose. The last stage of this methodology was usually prototyping, when the highest execution speed was required and there was less need for debugging.  Of course, with the overhead of circuit setup, this took longer and was more complicated.

Today’s ASIC designs have become huge in comparison to the early days, making the process of verification extremely complicated. This is the reason that RTL simulation is used only early in the process, mostly for single-module verification, since it is simply too slow.

The size of the IC being developed makes even the use of FPGA prototyping boards an issue, since porting designs of 100+ million gates takes months and requires boards that include at least several programmable devices. In spite of the fact that FPGAs are getting bigger and bigger in terms of capacity and I/O, SoC projects are growing much faster.  In the end, even a very large prototyping board may not be sufficient.

To add further complication, parts of modern SoCs, like processor subsystems, are developed using virtual platforms with the ability to exchange different processor models depending on the application requirements. Verifying all of the elements within such a complicated system takes massive amounts of time and resources – engineering, software and hardware tools. Considering design size and sophistication, even modular verification becomes a not-so-trivial task, especially during final testing and SoC firmware verification.

In order to reach maximum productivity and decrease development cost, the team must integrate as early as possible to be able to test not only at the module level, but also at the SoC level. The resolution, unfortunately, is not that simple.  Let’s consider two test cases.

1. SoC design with UVM testbench.

The requirement is to reuse the UVM testbench, but the design needs to run at MHz speed, part of it connected using a physical interface running at speed.

To fulfill such requirements the project team needs an emulator supporting SystemVerilog DPI-C and SCE-MI function-based mode in order to connect the UVM testbench and the DUT.  Since part of the design needs to communicate with a physical interface, such an emulator needs to support a special adapter module to synchronize the emulator speed with a faster physical interface (e.g. an Ethernet port). The result is that the UVM testbench in a simulator can still be reused, since the main design is running at MHz speed on the emulator and communicating through an external interface that is running at speed – thanks to a speed adapter – with the testbench.
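
A minimal sketch of this function-based connection pattern (not Aldec’s actual API; class and method names are hypothetical): the UVM driver hands transactions to a transactor proxy through a task call rather than wiggling pins, so the same testbench can target either a pure simulation or an emulated DUT sitting behind the proxy.

```systemverilog
// Sketch only: UVM driver talking to an emulated BFM through a proxy task call.
import uvm_pkg::*;
`include "uvm_macros.svh"

class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `uvm_object_utils(bus_item)
  function new(string name = "bus_item"); super.new(name); endfunction
endclass

// Proxy to the bus functional model (BFM) that is compiled onto the emulator.
interface class bus_bfm_proxy;
  pure virtual task send(bit [31:0] addr, bit [31:0] data);
endclass

class bus_driver extends uvm_driver #(bus_item);
  `uvm_component_utils(bus_driver)
  bus_bfm_proxy bfm;  // concrete implementation is set by the environment

  function new(string name, uvm_component parent); super.new(name, parent); endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      bfm.send(req.addr, req.data);  // untimed call crosses to the emulated BFM
      seq_item_port.item_done();
    end
  endtask
endclass
```

In simulation the proxy’s send task can drive a virtual interface directly; for emulation it forwards the call to a synthesizable transactor, which is roughly the boundary that the SCE-MI function-based mode standardizes.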

2. SoC design partially developed in a virtual platform and partially written in RTL. Here the requirements are to reuse the virtual platform and to synchronize it with the rest of the hardware system running at MHz speed.  This approach also requires that part of the design has already been optimized to run in prototyping mode.

Since virtual platforms usually interface with external tools using TLM, the natural way is to connect the platform with a transaction level emulator equipped with SCE-MI API that also provides the required MHz speed. To connect the part of the design optimized for prototyping, most likely running at a higher speed than the main emulator clock, it is required to use a speed adapter as in the case already discussed.  If it is possible to connect the virtual platform with an emulator running at two speeds (main emulation clock and higher prototyping clock) the result is that design parts already tested separately can now be tested together as one SoC with the benefit that both software and hardware teams are working on the same DUT.

Figure-1  Integrated verification platform for modern ASIC/SoC.

In both cases we have different tools integrated together (Figure-1): a testbench, simulated in an RTL simulator or in the form of a virtual platform, is connected with an emulator via the SCE-MI API, providing integration between software and hardware tools. Next, we have two hardware domains, the emulator domain and the prototyping domain (or external interface), connected via a special speed-adapter bridge, synchronized, and implemented in the FPGA-based board(s). All these elements create a hybrid verification platform for modern SoC/ASIC design that gives all the teams involved the ability to work on the same, complete source of the project.

Hot Trends for 2015

Tuesday, December 2nd, 2014

Chi-Ping Hsu, Senior Vice President, Chief Strategy Officer, EDA and Chief of Staff to the CEO at Cadence

The new system-design imperative

We’re at a tipping point in system design. In the past, the consumer hung on every word from technology wizards, looking longingly at what was to come. But today, the consumer calls the shots and drives the pace and specifications of future technology directions. This has fostered, in part, a new breed of system design companies that has taken direct control over the semiconductor content.

These systems companies are reaping business (pricing, availability), technical (broader scope of optimization) and strategic (IP protection, secrecy) benefits.  This is clearly a trend in which the winning systems companies are partaking.

They’re less interested in plucking components from shelves and soldering them to boards and much more interested in conceiving, implementing and verifying their systems holistically, from application software down to chip, board and package. To this end, they are embracing the marriage of EDA and IP as a speedy and efficient means of enabling their system visions. For companies positioned with the proper products and services, the growth opportunities in 2015 are enormous.

The shift left

Time-to-market pressures and system complexity force another reconsideration in how systems are designed. Take verification for example. Systems design companies are increasingly designing at higher levels, which requires understanding and validating software earlier in the process. This has led to the “shift left” phenomenon.

The simple way to think about this trend is that everything that was done “later” in the design flow is now being started “earlier” (e.g., software development begins before hardware is completed).  Another way to visualize this macroscopic change is to think about the familiar system development “V-diagram” (Figure 1 below). The essence of this evolution is the examination of any and all dependencies in the product planning and development process to understand how they can be made to overlap in time.

This overlap creates the complication of “more moving parts” but it also enables co-optimization across domains.  Thus, the right side of the “V” shifts left (Figure 2 below) to form more of an accelerated flow. (Note: for all of the engineers in the room, don’t be too literal or precise; it is meant to be thematic of the trend).

FIGURE 1

Prime examples of the shift left are the efforts in software development that are early enough to contemplate hardware changes (i.e., hardware optimization and hardware dependent software optimization), while at the other end of the spectrum we see early collaboration between the foundry, EDA tool makers and IP suppliers to co-optimize the overall enablement offering to maximize the value proposition of the new node.

A by-product of the early software development is the enablement of software-driven verification methodologies that can be used to verify that the integration of sub-systems does not break the design. Another benefit is that performance and energy can be optimized in the system context with both hardware and software optimizations possible.  And, it is no longer just performance and power – quality, security and safety are also moving to the top level of concerns.

FIGURE 2

Chip-package-board interdependencies

Another design area being revolutionized is packaging. Form factors, price points, performance and power are drivers behind squeezing out new ideas.  The lines between PCB, package, interposer and chip are being blurred.

Having design environments that are familiar to the principals in system interconnect creation, regardless of whether the flow is PCB, package or die centric by nature, provides a cockpit from which the cross-fabric structures can be created and optimized.  Being able to provide all of the environments also means that interoperable data sharing is smooth between the domains.  Possessing analysis tools that operate independent of the design environment offers consistent results for all parties incorporating the cross-fabric interface data.  In particular, power and signal integrity are critical analyses to ensure design tolerances without risking the cost penalties of overdesign.

The rise of mixed-signal design

In general, but especially driven by the rise of Internet of Things (IoT) applications, mixed-signal design has soared in recent years. Some experts estimate that as much as 85% of all designs have at least some mixed-signal elements on board.

Figure 3: IBS Mixed-signal design start forecast (source: IBS)

Being able to leverage high quality, high performance mixed signal IP is a very powerful solution to the complexity of mixed signal design in advanced nodes. Energy-efficient design features are also pervasive.  Standards support for power reduction strategies (from multi-supply voltage, voltage/frequency scaling, and power shut-down to multi-threshold cells) can be applied across the array of analysis, verification and optimization technologies.

To verify these designs, the industry has been a little slower to migrate. The reality is that there is only so much tool and methodology change that can be digested by a design team while it remains immersed in the machine that cranks out new designs.  So, offering a step-by-step progression that lends itself to incremental progress is what has been devised.  “Beginning with the end in mind” has been the mantra of the legions of SoC verification teams that start with a sketch of the outcome desired in the planning and management phase at the beginning of the program. The industry best practices are summarized as: MD-UVM-MS – that is, metrics-driven unified verification methodology with mixed signal.

Figure 4: Path to MS Verification Greatness

Newer Processes Raise ESL Issues

Wednesday, August 13th, 2014

Gabe Moretti, Senior Editor

In June I wrote about how EDA changed its traditional flow in order to support advanced semiconductor manufacturing.  I do not think that the changes, although significant and meaningful, are enough to sustain the increase in productivity required by financial demands.  What is necessary, in my opinion, is better support for system-level developers.

Leaving the solution to design and integration problems to a later stage of the development process creates more complexity since the network impacted is much larger.  Each node in the architecture is now a collection of components and primitive electronic elements that dilute and thus hide the intended functional architecture.

Front End Design Issues

Changes in the way front-end design is done are being implemented.  Anand Iyer, Calypto’s Director of Product Marketing, focused on the need to plan power at the system level.  He observed that: “Addressing DFP issues needs to be done in the front-end tools, as the RTL logic structure and architecture choices determine 80% of the power. Designers need to minimize the activity/clock frequency across their designs since this is the only metric to control dynamic power. They can achieve this in many ways: (1) reducing activity permanently from their design, (2) reducing activity temporarily during the active mode of the design.”  Anand went on to cover the two points: “The first point requires a sequential analysis of the entire design to identify opportunities where we can save power. These opportunities need to be evaluated against possible timing and area impact. We need automation when it comes to large and complex designs. PowerPro can help designers optimize their designs for activity.”

As for the other point he said: “The second issue requires understanding the interaction of hardware and software. Techniques like power gating and DVFS fall under this category.”

Anand also recognized that high-level synthesis can be used to achieve low-power designs.  Starting from C++ or SystemC, architects can produce alternative microarchitectures and see the power impact of their choices (with physically aware RTL power analysis).  This is hugely powerful for enabling exploration because, if it is done only at RTL, it is time consuming and unrealistic to actually try multiple implementations of a complex design.  Plus, the RTL low-power techniques are automatically considered and automatically implemented once you have selected the best architecture that meets your power, performance, and cost constraints.
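
To make the activity-reduction idea concrete, here is a minimal hand-written sketch (hypothetical names, not PowerPro output): qualifying a register with an enable means it only toggles when its result is actually consumed, which is exactly the kind of opportunity a sequential analysis looks for and which a clock-gating cell can then implement with no functional change.

```systemverilog
// Hand-written illustration of RTL-level activity reduction (hypothetical names).
module mac_stage (
  input  logic        clk, rst_n,
  input  logic        in_valid,      // downstream only consumes results when high
  input  logic [15:0] a, b,
  output logic [31:0] acc
);
  // An unconditioned version would update acc every cycle, burning dynamic
  // power even when the inputs are idle; the enable below removes that activity.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)        acc <= '0;
    else if (in_valid) acc <= acc + a * b;
endmodule
```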

Steve Carlson, Director of Marketing at Cadence, pointed out that about a decade ago design teams had their choice of about four active process nodes when planning their designs.  He noted that: “In 2014 there are ten or more active choices for design teams to consider.  This means that the solution space for product design has become a lot richer.  It also means that design teams need a more fine-grained approach to planning and vendor/node selection.  It follows that the assumptions made during the planning process need to be tested as early and as often as possible, and with as much accuracy as possible at each stage. The power/performance and area trade-offs create end-product differentiation.  One area that can certainly be improved is the connection to trade-offs between hardware architecture and software.  Getting more accurate insight into power profiles can enable trade-offs at the architectural and microarchitectural levels.

Perhaps less obvious is the need for process-accurate early physical planning (i.e., planning that understands design rules for coloring, etc.).”

As shown in the following figure, designers have to be aware that parts of the design are coming from different suppliers, and thus Steve states that: “It is essential for the front-end physical planning/prototyping stages of design to be process-aware to prevent costly surprises down the implementation road.”

Simulation and Verification

One of the major recent changes in IC design is the growing number of mixed-signal designs.  They present new design and verification challenges, particularly when new advanced processes are targeted for manufacturing.  On the standards development side, Accellera has responded by releasing a new version of its Verilog-AMS.  It is a mature standard, originally released in 2000 and built on top of the Verilog subset of IEEE 1800-2012 SystemVerilog.  The standard defines how analog behavior interacts with event-based functionality, providing a bridge between the analog and digital worlds. To model continuous-time behavior, Verilog-AMS is defined to be applicable to both electrical and non-electrical system descriptions.  It supports conservative and signal-flow descriptions and can also be used to describe discrete (digital) systems and the resulting mixed-signal interactions.

The revised standard, Verilog-AMS 2.4, includes extensions to benefit verification, behavioral modeling and compact modeling. There are also several clarifications and over 20 errata fixes that improve the overall quality of the standard. Resources on how best to use the standard and a sample library with power and domain application examples are available from Accellera.

Scott Little, chair of the Verilog AMS WG stated: “This revision adds several features that users have been requesting for some time, such as supply sensitive connect modules, an analog event type to enable efficient electrical-to-real conversion and current checker modules.”

The standard continues to be refined and extended to meet the expanding needs of various user communities. The Verilog-AMS WG is currently exploring options to align Verilog-AMS with SystemVerilog in the form of a dot standard to IEEE 1800. In addition, work is underway to focus on new features and enhancements requested by the community to improve mixed-signal design and verification.

Clearly another aspect of verification that has grown significantly in the past few years is the availability of Verification IP modules.  Together with the new version of the UVM 1.2 (Universal Verification Methodology) standard just released by Accellera, they represent a significant increment in the verification power available to designers.

Jonah McLeod, Director of Corporate Marketing Communications at Kilopass, is also concerned about analog issues.  He said: “Accelerating SPICE has to be a major tool development of this generation of tools. The biggest problem designers face in complex SoCs is getting corner cases to converge. This can be time consuming and imprecise with current-generation tools.  Start-ups claiming Monte Carlo SPICE acceleration, like Solido Design Automation and CLK Design Automation, are attempting to solve the problem. Both promise to achieve SPICE-level accuracy on complex circuits within a couple of percentage points in a fraction of the time.”

One area of verification that is not often covered is its relationship with manufacturing test.  Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems told me that: “The enormous complexity of a deep submicron (32, 28, 20, 14 nm) SoC has a profound impact on manufacturing test. Today, many test engineers treat the SoC as a black box, applying stimulus and checking results only at the chip I/O pins. Some write a few simple C tests to download into the SoC’s embedded processors and run as part of the manufacturing test process. Such simple tests do not validate the chip well, and many companies are seeing returns with defects missed by the tester. Test time limitations typically prohibit the download and run of an operating system and user applications, but clearly a better test is needed. The answer is available today: automatically generated C test cases that run on “bare metal” (no operating system) while stressing every aspect of the SoC. These run realistic user scenarios in multi-threaded, multi-processor mode within the SoC while coordinating with the I/O pins. These test cases validate far more functionality and performance before the SoC ever leaves the factory, greatly reducing return rates while improving the customer experience.”

Blog Review – Mon. July 14 2014

Monday, July 14th, 2014

Accellera prepares UVM; Shades of OpenGL ES; Healthy heart in 3D; Webinar for SoC-IoT; Smart watch tear-down. By Caroline Hayes, Senior Editor.

An informative update on Universal Verification Methodology (UVM) 1.2 is set out by Dennis Brophy, Mentor Graphics, on Accellera’s announcement of the update. Ahead of the final review process, which will end October 31, the author sets out what the standard may mean for current and future projects.

The addition of compute shaders to the OpenGL ES mobile API is one of the most notable changes, says Tim Hartley, ARM. He explains what these APIs do and where to use them for maximum effectiveness.

Dassault Systemes was part of The Living Heart Project, producing a video with the BBC to advertise the world’s first realistic 3D simulation model of a human heart, developed with Simulia software. The blog, by Alyssa, adds some background and context to how it can be used.

A webinar on Tuesday, July 22, covering SoC verification challenges in the IoT will be hosted by ARM and Cadence. Brian Fuller flags up why the presenters in ‘SoC Verification Challenges in the IoT Age’ will help those migrating from 8- and 16-bit systems, with a focus on using an ARM Cortex-M0 processor.

Inside the Galaxy Gear, the truly wearable smart watch, is an ARM Cortex-M4 powered STMicroelectronics device. Chris Ciufo cannot pretend to be taken off-guard by the ABI Research teardown.
