Chip Design Magazine



Posts Tagged ‘SystemC’

Portable Stimulus

Thursday, March 23rd, 2017

Gabe Moretti, Senior Editor

Portable Stimulus (PS) is not a new sex toy, and it is not an Executable Specification either.  So what is it?  It is a method, or rather it will be once the work is finished, for defining verification inputs independently of the verification tool used.

As the complexity of a system increases, the cost of its functional verification increases at an even more rapid pace.  Verification engineers must consider not only wanted scenarios but also erroneous ones, and increased complexity multiplies the number of unwanted scenarios.  To perform all the required tests, engineers use different tools, including logic simulators, accelerators and emulators, and FPGA prototyping tools as well.  Transporting a test from one tool to another is a very time-consuming job, and one prone to errors.  The reason is simple: not only does each class of tools use a different syntax, in some cases it also uses different semantics.

The Accellera Systems Initiative, known commonly as simply Accellera, is working on a solution.  It formed a Working Group to develop a way to define tests that is independent of the tool used to perform the verification.  The group, made up of engineers rather than marketing professionals, chose as its name exactly what it is supposed to deliver: a Portable Stimulus, since verification tests are made up of stimuli applied to the device under test (DUT), and those stimuli will be portable among verification tools.

Adnan Hamid, CEO of Breker, gave me a demo at DVCon U.S. this year.  Breker's product is trying to solve the same problem, though the standard being developed will only be similar to it, that is, based on the same concept.  Both are descriptive languages, Breker's based on SystemC and PS based on SystemVerilog, but the approach is the same: the verification team develops a directed network in which each node represents a test.  The Accellera work must, of course, be vendor independent, so its task is more complex.  The figure below may give you an idea of that complexity.

Once the working group is finished, and it expects to be done no later than the end of 2017, each EDA vendor can develop a generator that translates a test described in the PS language into the appropriate sequence of commands and stimuli required to actually perform the test with the tool in question.
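To make the idea concrete, here is a minimal sketch in plain C++ (the names `TestNode`, `Backend`, and the command strings are all hypothetical, and this is not the actual PS syntax): one tool-independent description of the test network, plus one small generator per tool class that renders it into tool-specific commands.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical illustration (not the actual PS syntax): one abstract,
// tool-independent test description, rendered per tool by a generator.

// A node in the directed test network; `next` holds successor indices.
struct TestNode {
    std::string name;
    std::vector<int> next;
};

// Each verification tool class supplies its own generator back end.
struct Backend {
    virtual ~Backend() = default;
    virtual std::string emit(const std::string& step) const = 0;
};

struct SimBackend : Backend {   // logic simulator
    std::string emit(const std::string& s) const override {
        return "sim: run_test " + s;
    }
};

struct EmuBackend : Backend {   // emulator / FPGA prototype
    std::string emit(const std::string& s) const override {
        return "emu: load_and_exec " + s;
    }
};

// Walk the network (a simple linear path here) and emit the commands
// the chosen tool needs; the test description itself never changes.
std::vector<std::string> generate(const std::vector<TestNode>& graph,
                                  const Backend& tool) {
    std::vector<std::string> out;
    for (const TestNode& n : graph) out.push_back(tool.emit(n.name));
    return out;
}
```

The point of the sketch is only the separation of concerns: the test network is written once, and porting to another tool means swapping the generator, not rewriting the tests.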

The approach, of course, is such that the product of the Accellera work can later be easily submitted to the IEEE for standardization, since it will obey the IEEE requirements.

My question is: What about Formal Verification?  I believe that it would be possible to derive assertions from the PS language.  If this can be done, it would be a wonderful result for the industry.  An IP vendor, for example, would then be able to provide a single definition of the tests used to verify the IP, and the customer would be able to use it readily no matter which tool is appropriate at the time of acceptance and integration of the IP.

Accellera Relicenses SystemC Reference Implementation under the Apache 2.0 License

Monday, August 15th, 2016

Gabe Moretti, Senior Editor

SystemC is, strictly speaking, not a language of its own but a class library built on top of C++, a language widely used by software developers.  A subset of SystemC is synthesizable, that is, useful to describe hardware components and designs.  SystemC is used mainly by designers working at the system level, especially when it is necessary to simulate both hardware and software concurrently.  An algorithmic description in SystemC of a hardware block generally simulates faster than the same description implemented in a traditional hardware description language.
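The speed advantage of algorithmic models can be seen with a toy comparison.  The sketch below is plain C++ rather than actual SystemC, and the "event" counters are only an illustrative stand-in for simulator kernel activity: an untimed functional model computes its result in a single call, while a cycle-accurate model pays one event per simulated clock.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Plain-C++ stand-in (not actual SystemC) for the speed difference
// between an untimed algorithmic model and a cycle-accurate one.
// `events` counts simulated kernel activations.

// Untimed algorithmic model: the whole block is one function call.
long algo_sum(const std::vector<int>& in, std::size_t& events) {
    ++events;                       // a single transaction-level event
    long s = 0;
    for (int v : in) s += v;
    return s;
}

// Cycle-accurate model: one event per simulated clock cycle.
long cycle_sum(const std::vector<int>& in, std::size_t& events) {
    long acc = 0;
    for (int v : in) {
        ++events;                   // one clock tick per input sample
        acc += v;
    }
    return acc;
}
```

Both models produce the same answer; the cycle-accurate one simply costs the simulator a thousand times more activations for a thousand-sample input, which is the essence of why algorithmic descriptions simulate faster.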

Accellera Systems Initiative (Accellera), the electronics industry organization focused on the creation and adoption of electronic design automation (EDA) and intellectual property (IP) standards, just announced that all SystemC supplemental material, including material contributed under the SystemC Open Source License Agreement prior to the merger of the Open SystemC Initiative (OSCI) and Accellera in 2011, has now been re-licensed under the Apache License, version 2.0.

The SystemC Open Source License used for the supplemental material required a lengthier contribution process that will no longer be necessary under Apache 2.0. Other Accellera supplemental material already available under Apache 2.0 includes the Universal Verification Methodology (UVM) base class library.

“This is a significant milestone for Accellera and the SystemC community,” stated Shishpal Rawat, Accellera Systems Initiative chair. “Having all SystemC supplemental material, such as proof-of-concept and reference implementations, user guides and examples, under the widely used and industry-preferred Apache 2.0 license will make it easier for companies to more readily contribute further improvements to the supplemental material.  We have been working with all of the contributing companies over the past 18 months to ensure that we could offer SystemC users a clear path to use and improve the SystemC supplemental material, and we are very proud of the efforts of our team to make this happen.”

The supplemental material forms the basis for possible future revisions of the standard as new methods and possible extensions to the language are adopted by a significant majority of users.  It is important to keep in mind that a modeling language is a “living” language, which means that it is subject to periodic changes.  For example, the IEEE specifies that an EDA modeling language standard be reaffirmed every five years.  This institutionalizes the possibility of a new version of the standard at regular intervals.

DVCon U.S. 2016 Is Around the Corner

Thursday, February 18th, 2016

Gabe Moretti, Senior Editor

Within the EDA industry, the Design & Verification Conference and Exhibition (DVCon) has created one of the most successful communities of the 21st century.  Started as a conference dealing with two design languages, Verilog and VHDL, DVCon has grown to cover all aspects of design and verification.  Beginning as a conference based in Silicon Valley, the conference is now held on three continents: America, Asia and Europe.  Both DVCon Europe and DVCon India have shown significant growth, and plans are well under way to offer a DVCon in China as well.  As Yatin Trivedi, General Chair of this year’s DVCon U.S., says, “DVCon continues to be the premier conference for design and verification engineers of all experience levels. Compared to larger and more general conferences, DVCon affords attendees a concentrated menu of technical sessions – tutorials, papers, poster sessions and panels – focused on design and verification hot topics. In addition to participation in high quality technical sessions, DVCon attendees have the opportunity to take part in the many informal, but often intense, technical discussions that pop up around the conference venue among more than 800 design and verification engineers and engineering managers. This networking opportunity among peers is possibly the greatest benefit to DVCon attendees.”

Professionals attend DVCon to learn and to share, not just to show off their research achievements as a community.  The conference is focused on providing its attendees with the opportunity to learn by offering two days of tutorials as well as frequent networking opportunities.  The technical program offers engineers examples of how today’s problems have been solved under demanding development schedules and budgets.  Ambar Sarkar, Program Chair, offers this advice on the DVCon U.S. 2016 web site: “Find what your peers are working on and interact with the thought leaders in our industry. Learn where the trends are and become a thought leader yourself.”

Grown from the need to verify digital designs, verification technology now faces the need to verify heterogeneous systems that include analog, software, MEMS, and communication hardware and protocols.  Adapting to these new requirements is a task that the industry has not yet solved.

At the same time, methods and tools for mixed-signal or system-level design still need maturing.  The concept of system-level design is being revolutionized as architectures like those required for IoT applications demand heterogeneous systems.

Attendees to DVCon U.S. will find ample opportunity to consider, debate, and compare both requirements and solutions that impact near term projects.

Tutorials and Papers

As part of its mission to provide a learning venue for designers and verification engineers, DVCon U.S. offers two full days of tutorials.  The presentations of the 12 tutorial sessions are divided between Monday and Thursday, separate from the rest of the technical program so they do not conflict and force attendees to make difficult attendance choices.

Accellera has a unique approach to putting together its technical program.  I am slightly paraphrasing this year’s Program Chair, Ambar Sarkar, by stating that DVCon U.S. lets the industry set the agenda, rather than the conference asking for papers on selected topics.  He told me that the basic question is: “Can a practicing engineer get new ideas and try to use them in his or her upcoming project?”  For this reason, the call for papers asks only for abstracts, and those that do not meet the request are eliminated.  After a further selection, the authors of the chosen abstracts are asked to submit a full paper.  Those papers are then grouped by common subject area into sessions.  The sessions that emerge automatically reflect the latest trends in the industry.

The paper presentations during Tuesday and Wednesday take the majority of the conference’s time and form the technical backbone of the event.

Of the 127 papers submitted, 36 were chosen to be presented in full.  There will be 13 sessions covering the following areas: UVM, Design and Modeling, Low Power, SystemVerilog, Fault Analysis, Emulation, Mixed-Signal, Resource Management, and Formal Techniques.  Each session offers three to four individual papers.


Poster presentations are selected in the same manner as papers.  A poster presentation is less formal but has the advantage of giving the author the opportunity to interact with a small audience, so the learning process can be bilateral.  There have been occasions in the past when an abstract submitted as a poster was switched to an oral presentation with the consent of the author.  Such a switch is possible because the submission and selection processes are similar, and thus the poster has already been judged to present an approach that will be useful to attendees.


This year’s keynote will be delivered by Wally Rhines, the 2015 recipient of the Phil Kaufman Award.  Wally is well known in the EDA industry for both his insight and his track record as the Chairman and CEO of Mentor Graphics.  The title of his address is Design Verification Challenges: Past, Present, and Future.  Dr. Rhines will review the history of each major phase of verification evolution and then concentrate on the challenges of newly emerging problems. While functional verification still dominates the effort, new requirements for security and safety are becoming more important and will ultimately involve challenges that could be more difficult than those we have faced in the past.

Panels: One Good and One Suspect

There are two panels on the conference schedule.  One panel, “Emulation + Static Verification Will Replace Simulation”, scheduled for Wednesday, March 2nd at 1:30 in the afternoon, looks at near-future verification methods.  Use of both emulation and static verification has been increasing significantly.  Maybe the verification paradigm of the future is to invest in high-end targeted static verification tools to get the design to a very high quality level, followed by very high-speed emulation or FPGA prototyping for system-level functional verification. Where does that leave RTL simulation? Between a rock and a hard place! Gate-level simulation has already been marginalized to doing basic sanity checks. Maybe RTL simulation will follow. Or will it?

The other panel, scheduled for 8:30 in the morning of the same day, concerns me a lot.  The title is “Redefining ESL”, and the description of the panel is taken from a blog that Brian Bailey, the panel moderator, published on September 24, 2015.

In the blog Brian holds the point of view that ESL is not a design flow but a verification flow, and that it will not take off until the industry recognizes that. Only now are we beginning to define what ESL verification means, but is it too little, too late?  There are a few problems with the panels committee accepting this panel.  To begin with, ESL is an outdated concept.  Today’s systems include much more than digital design.  Modern SoCs, even small ones like those found in IoT applications, include analog, firmware, and MEMS blocks.  All of these are outside of the ESL definition and fall within the System Level Design (SLD) market.

The statement made by Brian that ESL would not be made viable by the introduction of viable High Level Synthesis (HLS) tools is simply false.  ESL verification became a valuable tool only when designers began to use HLS products to automatically derive RTL models from higher-level descriptions, even though HLS has covered mostly algorithms expressed in C/C++ or SystemC rather than in Verilog or VHDL.

High Power Tools for Low Power

Tuesday, November 4th, 2014

Gabe Moretti, Senior Editor

The subject of design for low power (it is a miracle we do not yet have DLP in the vocabulary) continues to be on the minds of EDA people.  I talked about it with Krishna Balachandran, product marketing director for low power at Cadence.

I asked Krishna how pervasive the need to conserve power is among the Cadence user community.

Krishna responded: “The need to conserve power is pervasive in IP and chips targeting mobile applications. Interest in conserving power has spread to include plug-in-the-wall target applications because of government regulations regarding energy usage and the need to be green. Almost all integrated circuits designed today exercise some power management techniques to conserve power. Power conservation is so pervasive that it has now become the third optimization metric in addition to area and speed. Another noticeable trend is that while power management techniques are widespread in both purely analog and purely digital designs, mixed-signal designs are increasingly becoming power-managed.”

Given how pervasive the focus on reducing power seems to be, I wondered how tools for low power help in deciding tradeoffs.  I know it is quite an open-ended question, and I got a very long answer from Krishna.

“Power estimation and analysis must be done early and accurately for maximizing the possible tradeoffs. If a design is only optimized for power for the hardware without comprehending how the software will take advantage of the power management in the hardware, power estimates will be inaccurate and problems will be found too late in the product development cycle. Power estimation is increasingly done employing hardware emulation platforms which are excellent at identifying peak-power issues by running real-life software application scenarios on the hardware being designed that allows appropriate power architecture modifications at an early stage.

Thorough functional verification of the register-transfer level (RTL) is very important to weed out any power-related design bugs. Metric-driven power-aware simulation, simulation acceleration and tools providing powerful low power debug capabilities are deployed to ensure that the design is functional in all the power modes and help shorten the overall verification time. Low power static and exhaustive formal verification complement simulation to find functional design bugs as well as issues with the power intent specification.

Smart leakage and dynamic power optimization options in logic synthesis and Place-and-Route (P&R) tools work hard by using power as a cost function in addition to area and speed. It is important that P&R tools do not undo any intelligent optimizations done for power at the logic synthesis stage. For example, placement-aware logic synthesis tools can swap out single bit flip flops and replace them with multi-bit flops, referred to as Multi-Bit Cell Inferencing (MBCI), which can significantly reduce the load on the clock tree, a major power consumer in today’s System-On-Chip (SoC) designs. A Multi-Mode Multi-Corner (MMMC) optimization capability is a must to simultaneously optimize power while meeting timing. A rich set of power switch and enable chaining options and integration with a power-analysis and signoff engine are needed in your P&R tool to help identify the appropriate number and locations of power switches. Power switch analysis, insertion and placement capabilities trade off ramp time for the power domain that is waking up vs. the rush current and IR drop in the neighboring power domains that were already on. Coupled with this, Graphical User Interface (GUI) capabilities within the P&R system need to allow the designer to interactively specify the switch and enable chaining topology to deliver a power efficient design that takes the tradeoffs between ramp time, IR drop and rush currents into consideration. All successive refinements of the design need to be verified vs. the original design intent (the RTL), and low power equivalence and static and formal verification tools do just that.

Mixed-signal low power designs pose unique challenges for both implementation and functional verification. Tools must be able to stitch together the digital blocks in an analog schematic environment taking into account the disparate representations of power in the two domains. Verification solutions must be able to check the interfaces of the analog/digital domains that are frequently a source of errors such as missing level shifters. Without employing such tools, the process is manual and error prone with design bugs creeping into silicon.”
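The clock-tree saving that Krishna attributes to multi-bit cell inferencing follows from the standard dynamic-power relation P = activity · C · Vdd² · f.  A back-of-the-envelope sketch in plain C++ (every capacitance, voltage, and frequency figure below is an illustrative assumption of mine, not Cadence data):

```cpp
#include <cassert>
#include <cmath>

// Back-of-the-envelope sketch of the standard dynamic-power relation
//   P = activity * C * Vdd^2 * f
// applied to a clock tree before and after multi-bit flop merging.
// All numeric values used with it are illustrative assumptions.
double dyn_power(double activity, double cap_farads,
                 double vdd_volts, double freq_hz) {
    return activity * cap_farads * vdd_volts * vdd_volts * freq_hz;
}
```

Since the clock toggles every cycle (activity near 1), shrinking the total clock-pin capacitance the tree must drive, as merging four single-bit flops into one multi-bit flop does, reduces clock power in direct proportion.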

Since it is important to plan power optimization at the system level, I asked what approach Cadence envisions as the most productive.  I knew Krishna had covered this somewhat in his previous answer, but I wanted to focus on the subject more, and he was very polite in focusing his reply.

“ES-level tools provide the biggest bang for the buck when it comes to power optimization. The sooner you optimize, the more options you have for making excellent trade-offs. A good example of this is a High Level Synthesis tool that generates multiple micro-architectures from a given high-level description of an algorithm in C/C++/SystemC and helps the designer make the trade-off between area, speed and power for the target design. Since it operates at a pre-RTL stage, the resulting power/area/speed trade-offs are very impactful. Furthermore, it is desirable to integrate power estimation and logic synthesis engines within the High Level Synthesis tool, thus ensuring a high degree of correlation with downstream implementation tools.”
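The micro-architecture exploration Krishna describes can be sketched with a toy model.  The cost model here (latency of ceil(N/U) cycles for unroll factor U, area roughly proportional to U) is my illustrative assumption, not how any particular HLS tool actually costs designs:

```cpp
#include <cassert>
#include <vector>

// Toy model of HLS design-space exploration: for a loop of n_iters
// iterations, unroll factor u gives latency ceil(n_iters/u) cycles at
// the cost of roughly u datapath copies (area, and with it switching
// power). The cost model is an illustrative assumption.
struct Candidate {
    int unroll;
    int latency_cycles;
    int area_units;
};

std::vector<Candidate> explore(int n_iters, const std::vector<int>& unrolls) {
    std::vector<Candidate> out;
    for (int u : unrolls)
        out.push_back({u, (n_iters + u - 1) / u, u});
    return out;
}
```

Presenting the designer with this list of candidates, rather than a single RTL implementation, is what makes the pre-RTL trade-off between area, speed and power so impactful.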

I think we will see increased attention to power optimization tools at the system level in the short term.  Feedback from designers and architects will help define the problem better.  And it is my hope that hardware engineers will be able to teach software developers how to use hardware more efficiently.  In my professional lifetime we have gone from having to count how many bytes the code produced to considering hardware an unlimited asset.  The time might be here to start considering how software code impacts hardware again.