Chip Design Magazine
Experts Roundtable: Verification and Power vs. Performance

By Hamilton Carter

Low-Power Engineering sat down with this month’s roundtable participants, Lawrence Loh, VP of Applications Engineering at Jasper Design Automation, and Gary Smith of Gary Smith EDA, to discuss low-power engineering issues. What follows are excerpts from those interviews.

LPE: Power and thermal issues have been identified as key concerns in the mobile market. What trends exist in the adoption of mobile system-level power modeling? Have you observed trends that emphasize power modeling over performance modeling? How is the modeling work distributed among system-level (including OS and software), chip-level, and block-level design and verification within companies that use power modeling? Are most of the chip vendors you’re aware of using some sort of power and/or thermal modeling? Are there still chip houses that are doing seat-of-the-pants, power-unaware design and hoping for the best?

Loh: Chips can run slower when power is too high or thermal conditions are too harsh, sacrificing performance by dropping to the next lower power level. The software must have the option to run the same tasks at a different power or speed level without breaking overall functionality. Low-power capability has been one of our hottest issues over the last year and a half.

People have been using low-power techniques for some time. Most SoCs have low-power features, even in home entertainment and wall-plug devices. Look at TVs, for example. One of the main reasons power has become important is that TVs today are far more capable and much higher resolution, but their power consumption cannot go up accordingly. A lot of the specifications require maintaining a certain power level. How do you keep energy consumption down, to adhere to the level of power a TV should consume? There is a lot of recognition of, and push toward, making them process lots of information without being power hungry.

Verifying low-power functionality has become a priority. Engineers at the higher levels, such as the system level, decide how to prioritize it.

Smith: No, power has become the number one problem.  All design targets are being constrained by power.  If you don’t meet your power budget you must either slow down your design (parallel computing) or restrict the size of your design.  That’s after you use all of the design tricks to lower your power consumption (power gating, clock gating, etc.).
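The trade-offs Smith describes follow from the textbook dynamic-power relation, P ≈ α·C·V²·f. The sketch below (illustrative numbers only, not from the interview) shows why clock gating, which cuts the switching activity α, and voltage/frequency scaling, which attacks the V²·f term, are the standard levers:

```python
def dynamic_power(alpha, cap_farads, vdd_volts, freq_hz):
    """Estimate dynamic power in watts: P = alpha * C * V^2 * f,
    where alpha is switching activity, C is switched capacitance,
    V is supply voltage, and f is clock frequency."""
    return alpha * cap_farads * vdd_volts ** 2 * freq_hz

# Illustrative baseline: 1 nF of switched capacitance at 1.0 V, 1 GHz.
baseline = dynamic_power(alpha=0.2, cap_farads=1e-9, vdd_volts=1.0, freq_hz=1e9)

# Clock gating suppresses switching in idle logic, halving alpha here.
gated = dynamic_power(alpha=0.1, cap_farads=1e-9, vdd_volts=1.0, freq_hz=1e9)

# Voltage/frequency scaling: 0.8 V at 700 MHz trades speed for power.
scaled = dynamic_power(alpha=0.2, cap_farads=1e-9, vdd_volts=0.8, freq_hz=0.7e9)

print(baseline, gated, scaled)  # 0.2 W, 0.1 W, ~0.09 W
```

The quadratic dependence on voltage is why slowing the design down (and lowering V with it), as Smith notes, buys more than any single circuit trick.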

LPE: IP power requirements change based on the silicon process node the block is deployed in. What efforts are being made in the EDA industry to automatically associate IP power usage with foundry process nodes? What level of detail from the foundries is sufficient? How involved should the foundries be in the process? What kinds of optimizations can be used to estimate the power consumption of a given block in a given usage mode without resorting to transistor-level power calculations?

Loh: A lot of the time, low-power work is done after integration: people provide IP, and the integrators have a high-level net that turns power on and off. It’s not enough for IP to simply provide power functionality. Power consumption is now one of the key factors in choosing an IP, and a lot of IP comes with the capability to turn many power domains on and off.

What we have seen, in a more direct sense, is that one company’s foundry may have very power-efficient memories, while another’s is better at getting down to a smaller footprint. Designers use the foundry’s available options to inform their choice of which power methods to implement.

There are certain areas where Jasper can help more than others, such as making sure that functionality doesn’t change while gating the clock. We have sequence checkers to verify correct sequencing, and we have the capability to verify the challenges that our different customers face.
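The property Loh describes, that gating the clock must not change functionality, can be illustrated with a toy model. This is a hypothetical sketch of the equivalence a formal tool would check exhaustively, not Jasper’s actual flow:

```python
# Toy model: a clock-gated register must stay cycle-equivalent to the
# ungated enable-flop it replaces. Hypothetical illustration only.
import itertools

def ungated_reg(enable, data, q):
    # Reference: an enable-flop loads data when enabled, else holds q.
    return data if enable else q

def gated_reg(enable, data, q):
    # Implementation: an ICG cell suppresses the clock when enable is
    # low, so the flop never toggles and simply holds its state.
    clock_active = enable          # ICG cell: clk gated by enable
    return data if clock_active else q

# Exhaustively compare both models over every input/state combination,
# the spirit of what a formal sequence/equivalence check does.
for enable, data, q in itertools.product([0, 1], repeat=3):
    assert ungated_reg(enable, data, q) == gated_reg(enable, data, q)
print("gated and ungated registers are equivalent")
```

Real designs add retention, reset, and multi-cycle wakeup sequencing, which is where dedicated sequence checkers earn their keep.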

LPE:  OK, so, you’re prioritizing engine development based on what customers are using.

Loh: Yes.

Smith: The outcome of all of the silicon tricks is handed to the design engineer, and to the EDA tools, through the SPICE models. There is no magic there; you need extremely accurate transistor information. Then you have to push that information as high as you can into the design flow. So far we have sufficiently accurate power models up into the architectural design area. We still need to get power models into the behavioral area where the system architect works. Keep in mind: the further up the flow you go, the more power savings you can make.

LPE: Re-usable IP blocks created in isolation, without sufficient knowledge of their target usage, can be over-designed, leading to excess power consumption. Are there standards-based or EDA-based trends to address this?

Loh: Most people have done power in multiple hierarchies. In the end, the system engineer has to figure out how to make everything work together, and what he relies on are accurate power models to determine how much power is used. At the IP level, they try to characterize the IP power consumption as accurately as possible. Ideally, they want some kind of power budget describing what the IP uses in different situations. They need as accurate a model of the power pattern as possible, so it can be used to decide what to turn on and off at any given moment. Anyone can break the chain here with inaccurate information that causes the system engineer to make the wrong choice.

At the IP level, the EDA community has a responsibility to help.  For us, we’re trying to make sure that whatever power functionality is inserted doesn’t break the design.

Not as much is done at the firmware level yet, though a lot more tricks can be done there. Jasper’s job is to make sure engineers are given as much flexibility in the hardware as possible.

Smith: IP blocks cover the entire spectrum of designs. We have power standards that reach into the architectural area; we still need behavioral standards. The further up the flow you go, the more application-specific the IP blocks become, until they come with their own software bundled into the package. So it’s not so much the information as it is the necessary standards, and the trade-off between accuracy and speed of execution, that need to be worked out for the process to work.

LPE: At DAC 2013, a shortage of system-level engineers capable of bringing an application-down-to-transistor-level worldview to their design and modeling activities was identified. Can power/thermal-aware system modeling be utilized in high-level operating system and application development? Are there plans to incorporate modeling at these levels? Are there engineers/programmers who can use these models if they are provided? Is there demand for EDA tools to make this information more accessible to higher-level engineers/programmers?

Loh: I’m not convinced one person who knows everything is even necessary. It’s a team project. People have their own domains and need to make sure, for example, that when a logic designer makes his block, he gets it correct, and similarly at the transistor level. Knowing how to optimize the entire design is useful, but someone doesn’t need to know how each bit at each level works.

The system-level engineer can’t know every detail, and he won’t try to. He knows the high-level view of how the pieces should work together and can help bridge the gap. He’ll know about tools that close the gap between IP and integration, and maybe he’ll know something about the system and system software that are necessary. This is how we’ve been able to scale so far, using a hierarchical organization.

Smith: Well, we sure need more tall, thin engineers. However, once we have all of the standards in place, we will have the tools needed for the hardware and software engineers to get the job done. The actual application programmers will be given the necessary pass/fail types of tools (plus some analytical tools) to develop power-aware software.
