Posts Tagged ‘Jasper’

Expert Interviews: Jasper’s Lawrence Loh: IP Power Specs

Tuesday, February 4th, 2014

LPD got to speak briefly with Lawrence Loh of Jasper Design Automation about IP power specifications recently.  Here’s what we learned.

Q: “What can IP providers do to provide better models to more accurately represent power in various operating states?”

SoCs today are built from IP; they aren’t designed from scratch. Most of the power description is done at the SoC level, though. You rarely see power-domain descriptions for IPs.
There are two types of power models that are very important to provide for IP. Power estimation is not just a number; it’s about determining which usage scenarios use what power. The first set of models is built by looking at which signals are switching during a particular functional behavior of the IP. Power estimation for each of these behaviors is then performed. With this model, the SoC team can perform its own power estimation based on which functional behaviors of these IPs are used, and how often.
The second set of models is the functional model that maps the behavior of the SoC in terms of integrated functionality. These models need to accurately model the behavior of the IP, not just for normal functionality but also for power-up and power-down behavior. The SoC team will then be able to verify the overall SoC functionality, including power sequencing and other low-power behavior.
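
To make the first kind of model concrete, here is a minimal Python sketch of how an SoC team might combine per-scenario IP power figures with an expected usage profile to get a chip-level estimate. The IP names and milliwatt numbers are invented placeholders, not figures from the interview.

```python
# Minimal sketch: scenario-based power estimation from IP power models.
# All IP names and mW figures below are illustrative placeholders.

# Per-IP power model: average power (mW) characterized for each functional
# behavior, e.g. derived from the signals switching during that behavior.
ip_power_models = {
    "video_decoder": {"idle": 2.0, "decode_1080p": 85.0, "decode_4k": 240.0},
    "dma_engine":    {"idle": 0.5, "burst_transfer": 30.0},
}

# SoC-level usage profile: fraction of time each IP spends in each behavior.
usage_profile = {
    "video_decoder": {"idle": 0.70, "decode_1080p": 0.25, "decode_4k": 0.05},
    "dma_engine":    {"idle": 0.90, "burst_transfer": 0.10},
}

def estimate_soc_power(models, profile):
    """Weighted-average power per IP, summed over all IPs (mW)."""
    total = 0.0
    for ip, scenarios in profile.items():
        ip_avg = sum(frac * models[ip][scenario]
                     for scenario, frac in scenarios.items())
        print(f"{ip}: {ip_avg:.1f} mW average")
        total += ip_avg
    return total

print(f"Estimated SoC average power: "
      f"{estimate_soc_power(ip_power_models, usage_profile):.1f} mW")
```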

Q:  Who is the customer of the power description information?

The SoC team usually has a group that is specifically in charge of low power. They determine what power each IP block uses, compare this with the power budget they have, and determine how to partition the IP into different power domains to accomplish their goals. Once the power domains are determined, they need to define a proper power sequence to enable and disable power for each domain. The next level of customers for IP power specifications is the people who do the implementation. They create a file alongside the RTL code that describes how IP blocks connect to each other. They try to include enough information so that if, for example, blocks that were initially connected to each other directly are placed in different domains, it will be apparent where rails need to be placed to isolate the new domains from each other. These are usually the people who write in a proprietary or standard format that describes the power. Finally, there are the verification people, who need to make sure that the different power behaviors work as expected. With so many different groups needing to work with the power domains, it’s important to have a way to capture this information.
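
As a rough illustration of the power-sequencing side, a verification group could express a power-up order as simple precedence constraints and check recorded sequences against them. The sketch below is hypothetical: the domain names and ordering rules are invented for the example, not taken from any real design or tool.

```python
# Minimal sketch: checking that a recorded power-up sequence respects
# simple ordering constraints between power domains.
# Domains and rules are illustrative, not from any particular design.

# Each rule says: the first domain must be powered up before the second.
ordering_rules = [
    ("always_on", "cpu_domain"),
    ("cpu_domain", "gpu_domain"),
    ("always_on", "io_domain"),
]

def check_power_up_order(sequence, rules):
    """Return a list of violated (before, after) rules for a given sequence."""
    position = {domain: idx for idx, domain in enumerate(sequence)}
    violations = []
    for before, after in rules:
        # A rule is violated if 'after' appears but 'before' is missing
        # or comes later in the recorded sequence.
        if after in position and (before not in position
                                  or position[before] > position[after]):
            violations.append((before, after))
    return violations

observed = ["always_on", "gpu_domain", "cpu_domain", "io_domain"]
print(check_power_up_order(observed, ordering_rules))
# -> [('cpu_domain', 'gpu_domain')]: the GPU domain came up before its parent.
```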

From the solution-space point of view, obviously many vendors are providing solutions to different aspects of low-power engineering projects. Having a power specification language that describes power consumption, power domains, and interconnects allows different tools to enter into the flow. Low power is a big aspect of SoC design projects, spanning many activities and solution spaces. A power specification language gives different vendors a way to integrate their tools together so that they can implement a complete low-power solution. There are some standards around; standards have advantages and disadvantages, but the advantages are more universal. IP-XACT is one way to standardize; SystemRDL is another. The jury is still out on which format is best.

Lawrence Loh, Vice President of Worldwide Applications Engineering, Jasper Design
Lawrence Loh holds overall management responsibility for the company’s applications engineering and methodology development. Loh has been with the company since 2002, and was formerly Jasper’s Director of Application Engineering. He holds four U.S. patents on formal technologies. His prior experience includes verification and emulation engineering for MIPS, and verification manager for Infineon’s successful LAN Business Unit. Loh holds a BSEE from California Polytechnic State University and an MSEE from San Diego State.

Low Power News, Jan. 6, 2014

Monday, January 6th, 2014

Happy New Year and welcome to the inaugural edition of the Low Power News for 2014.  Not too surprisingly, not a lot happens over the holidays when most tech companies have forced shutdowns.  Consequently, this week we’ll have a single  news story followed by a look back at the Low Power News of 2013.

It looks like the ‘Vegas CES announcements have begun! Silicon Laboratories announced that Magellan has chosen Silicon Labs’ EFM32™ Giant Gecko microcontroller (MCU) for its Echo smart sports watch. Clark Weber, senior director of Fitness and Wearable Products at Magellan, says,

“Since sophisticated multiple functions potentially require a lot of energy, we chose the EFM32 Giant Gecko and companion Simplicity Studio design tools as our 32-bit low-energy platform, enabling us to maximize battery life without compromising the end user experience or future functionality.”

And now for a last look at 2013

ARM made a move into graphics last year, purchasing Cadence’s PANTA display controller cores, and acquiring Geomerics.

Women’s bras tweeted on the IoT.

It seemed like security was a big issue for everyone last year, and Jasper was no exception, announcing their Security Path Verification App.

ST Microelectronics developed technology that can help detect concussive blows during football, helping to improve the safety of the game.

Progress was made on the ever-elusive invisibility cloak, including an old-school invisibility demonstration from the University of Rochester.

Apache’s Aveek Sarkar filled us in on choosing low power design methodologies.

And finally, researchers in Chicago worked to offload some of the GPS work from satellite receivers to the accelerometers in your smartphone.

Connected IP Blocks: Busses or Networks?

Tuesday, November 26th, 2013

Gabe Moretti

David Shippy of Altera, Lawrence Loh from Jasper Design, Mentor’s Steve Bailey, and Drew Wingard from Sonics got together to discuss the issues inherent in connecting IP blocks, whether in an SoC or in a stacked-die architecture.

SLD: On chip connectivity uses area and power and also generates noise. Have we addressed these issues sufficiently?

Wingard: The communication problem is fundamental to the high degree of integration. A significant portion of the cost benefit of going to a higher level of integration comes from the ability to share critical resources, like off-chip memory or a single control processor, across a wide variety of elements. We cannot make the communication take zero area, and we cannot make it have zero latency, so we must think about how we partition the design so that we can get things done with sufficiently high performance and low power.

Loh: Another dimension that is challenging is the verification aspect. We need to continue to innovate in order to address it.

Shippy: The number of transistors in FPGAs is growing, but I do not see the number of transistors dedicated to connectivity growing at the same pace as in other technologies. At Altera we use a network on chip to efficiently connect various processing blocks together. It turns out that both the area and the power used by the network are very small compared to the rest of the die. In general, we can say that the resources dedicated to interconnect are around 3% to 5% of the die area.

Wingard: Connectivity tends to use a relatively large portion of the “long wires” on the chip, and so even if it may be a relatively modest part of the die area it runs the risk of presenting a higher challenge in the physical domain.

SLD: When people talk about the future they talk about the “internet of things” or “smart everything.” SoCs are becoming more complex. There are two things we can do: knowing the nature of the IP that we have to connect, we could develop standard protocols that use smaller busses, or we could use a network on chip. Which do you think is most promising?

Wingard: I think you cannot assume that you know what type of IP you are going to connect. There will be a wide variety of applications and a number of wireless protocols used. We must start with an open model that is independent of the IP in the architecture. I am strongly in favor of the decoupled network approach.

SLD: What is the overhead we are prepared to pay for a solution?

Shippy: The solution needs to be a distributed system that uses a narrower interface with fewer wires.

SLD: What type of work do we need to do to be ready to have a common, if not standard verification method for this type of connectivity?

Loh: Again, as Drew stated, we can have different types of connectivity, some optimized for power, others for throughput, for example. People are creating architectural-level protocols. When you have a well-defined method to describe things, then you can derive a verification methodology.

Bailey: If we are looking for an equivalent to UVM, you start from the protocol. A protocol has various levels. You must consider what type of interconnect is used; then you can verify whether the blocks are connected correctly and control aspects like arbitration. Then you can move to the functional level. To do that you must be able to generate traffic, and the only way to do this is to either mimic I/O through files or have a high-level model to generate the traffic, so that we can replicate what will happen under stress and make sure the network can handle the traffic. It all starts with basic verification IP. The protocol used will determine the properties of its various levels of abstraction, and the IP can provide ways to move across these levels to create a required verification method.
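
As a rough sketch of that traffic-generation idea, one could drive an abstract interconnect model with random requests and count how often the offered load oversubscribes a shared link. The masters, targets, and link capacity below are hypothetical stand-ins, not a real verification IP or protocol model.

```python
# Minimal sketch: random traffic generation against an abstract interconnect,
# counting cycles where the offered load exceeds an assumed link capacity.
# Masters, targets, and capacity are illustrative assumptions.
import random

MASTERS = ["cpu", "gpu", "dma"]
TARGETS = ["ddr", "sram", "periph"]
LINK_CAPACITY_BEATS_PER_CYCLE = 4   # assumed shared-link capacity

def generate_cycle_traffic(rng, burst_prob=0.3, max_beats=4):
    """One cycle of randomly generated requests: (master, target, beats)."""
    return [(m, rng.choice(TARGETS), rng.randint(1, max_beats))
            for m in MASTERS if rng.random() < burst_prob]

def run_stress(cycles=1000, seed=0):
    rng = random.Random(seed)
    overloads = 0
    for _ in range(cycles):
        requests = generate_cycle_traffic(rng)
        offered = sum(beats for _, _, beats in requests)
        if offered > LINK_CAPACITY_BEATS_PER_CYCLE:
            overloads += 1   # cycles where arbitration/back-pressure is exercised
    print(f"{overloads} of {cycles} cycles oversubscribed the shared link")

run_stress()
```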

Wingard: Our customers expect that we will provide the verification system that proves to them that we can provide the type of communication they want to implement. The state of the art today is that there will be surprises when the communication subsystem is connected to the various IP blocks. Protocol verification IP helps, but what we see today is that the problems tend to be system-interaction challenges, and we do not yet have a complete way of describing how to gather the appropriate information to verify the entire system.

Bailey: Yes, there is a difference between verifying the interconnect and doing system verification. There is still work that needs to be done to capture the stress points of the entire system; it is a big challenge.

SLD: What do you think should be the next step?

Wingard: I think the trend is pretty clear. More and more application areas are reaching the level of complexity where it makes sense to adopt network-based communication solutions. As developers start addressing these issues, they will start to appreciate that there may be a need for a system-wide solution for other things, like power management, for example. As communication mechanisms become more sophisticated, we need to address the issue of security at the system level as well.

Loh: In order to address the issue of system level verification we must understand that the design needs to be analyzed from the very beginning so that we can determine the choices available. Standardization can no longer be just how things are connected, but it must grow to cover how things are defined.

Shippy: One of the challenges is how we are going to build heterogeneous systems that interact correctly while also dealing with all of the physical characteristics of the circuit. With many processors, and lots of DSPs and accelerators, we need to figure out a topology to interconnect all of those blocks and at the same time deal with the data coming from, or being output to, the outside environment. The problem is how to verify and optimize the system, not just the communication flow.

Bailey: As the design part of future products evolves, so too will the verification methods have to evolve. There has to be a level of coherency that covers the functionality of the system so that designers can understand the level of stress they are simulating for the entire system. Designers also need to be able to isolate a problem when it is found. Where does it originate: an IP subsystem, the connectivity network, or a logic problem in the aggregate circuitry?

Contributors Biographies:

David Shippy is currently the Director of System Architecture for Altera Corporation where he manages System Architecture and Performance Modeling. Prior to that he was Chief Architect for low power X86 CPUs and SOCs at AMD. Before that he was Vice President of Engineering at Intrinsity where he led the development of the ARM CPU design for the Apple iPhone 4 and iPad. Prior to that he spent most of his career at IBM leading PowerPC microprocessor designs including the role of Chief Architect and technical leader of the PowerPC CPU microprocessors for the Xbox 360 and PlayStation 3 game machines. His experience designing high-performance microprocessor chips and leading large teams spans more than 30 years. He has over 50 patents in all areas of high performance, low power microprocessor technology.

Lawrence Loh, Vice President of Worldwide Applications Engineering, Jasper Design
Lawrence Loh holds overall management responsibility for the company’s applications engineering and methodology development. Loh has been with the company since 2002, and was formerly Jasper’s Director of Application Engineering. He holds four U.S. patents on formal technologies. His prior experience includes verification and emulation engineering for MIPS, and verification manager for Infineon’s successful LAN Business Unit. Loh holds a BSEE from California Polytechnic State University and an MSEE from San Diego State.

Stephen Bailey is the director of emerging technologies in the Design Verification and Test Division of Mentor Graphics. Steve chaired the Accellera and IEEE 1801 working group efforts that resulted in the UPF standard. He has been active in EDA standards for over two decades and has served as technical program chair and conference chair for industry conferences, including DVCon. Steve began his career designing embedded software for avionics systems before moving into the EDA industry in 1990. Since then he has worked in R&D, applications, and technical and product marketing. Steve holds BSCS and MSCS degrees from Chapman University.

Drew Wingard co-founded Sonics in September 1996 and is its chief technical officer, secretary, and board member. Before co-founding Sonics, Drew led the development of advanced circuits and CAD methodology for MicroUnity Systems Engineering. He co-founded and worked at Pomegranate Technology, where he designed an advanced SIMD multimedia processor. He received his BS from the University of Texas at Austin and his MS and PhD from Stanford University, all in electrical engineering.

Experts Roundtable: Verification and Power vs. Performance

Wednesday, October 9th, 2013

By Hamilton Carter

Low-Power Engineering sat down with this month’s roundtable participants, Lawrence Loh, VP of Applications Engineering from Jasper Design Automation, and Gary Smith of Gary Smith EDA, to discuss low-power engineering issues. What follows are excerpts from those interviews.

LPE: Power and thermal issues have been identified as key concerns in the mobile market. What trends exist in the adoption of mobile system-level power modeling? Have you observed trends that are emphasizing power modeling over performance modeling? How is the modeling work distributed between system-level (including OS and software), chip-level, and block-level design and verification within companies that are using power modeling? Are most of the chip vendors you’re aware of using some sort of power and/or thermal modeling? Are there still chip houses that are just doing seat-of-the-pants, power-unaware design and hoping for the best?

Loh: Devices can run slower when power is too high or thermal conditions are too harsh, sacrificing performance by going to the next lower power level. An option must exist for the software to operate the same tasks while changing power or speed, without breaking the overall functionality. Low-power capability has been one of our hottest issues in the last year and a half.

People have been using low power techniques for some time.  Most SoCs have low power features, even in home entertainment and wall plug devices.  Look at TVs for example.  One of the main reasons that power becomes important is that TVs today are much more capable and much higher resolution, but the power consumption cannot go up accordingly.  A lot of the specifications require maintaining a certain power level.  How do you keep the energy consumption down, to adhere to the level of power that a TV should consume?  There is a lot of recognition and push to make them not very power hungry and still process lots of information.

Verifying low-power functionality has become a priority. Engineers at the higher levels, such as the system level, decide how to prioritize it.

Smith: No, power has become the number one problem.  All design targets are being constrained by power.  If you don’t meet your power budget you must either slow down your design (parallel computing) or restrict the size of your design.  That’s after you use all of the design tricks to lower your power consumption (power gating, clock gating, etc.).

LPE: IP power requirements change based on the silicon process node the block is deployed in. What efforts are being made in the EDA industry to automatically associate IP power usage with foundry process nodes? What level of detailed information from the foundries is sufficient? How involved should the foundries be in the process? What kinds of optimizations can be used to estimate the power consumption of a given block in a given usage mode without resorting to transistor-level power calculations?

Loh: A lot of the time, low power is handled after integration: people provide IP, and the integrators have a high-level net that turns power on and off. It’s not enough for IP to just provide power functionality. Power consumption of IP is one of the major considerations. Now, a lot of IP comes with the capability to turn lots of power domains on and off. Power consumption is one of the key factors in choosing an IP.

What we have seen, in a more direct sense, is that one company’s foundry may have very power-efficient memories, while another’s may be better at getting down to a smaller footprint. Design teams use the foundry’s available options to inform their decision about which power methods to implement.

There are certain areas where Jasper can help more than others, such as making sure that functionality doesn’t change while gating the clock. We have sequence checkers to check for correct sequences. We have the capability to address the verification challenges that our different customers have.

LPE:  OK, so, you’re prioritizing engine development based on what customers are using.

Loh: Yes

Smith: The outcome of all of the silicon tricks is handed to the design engineer, and the EDA tools, through the SPICE models. There is no magic there; you need to have extremely accurate transistor information. Then you have to get that information up as high as you can into the design flow. So far we have the necessary accurate power models up into the architectural design area. We still need to get power models into the behavioral area, where the system architect works. Keep in mind, the further up in the flow you get, the more power savings you can make.

LPE: Re-usable IP blocks created in isolation, without sufficient knowledge of their target usage, can be over-designed, leading to excess power consumption. Are there standards-based or EDA-based trends to address this?

Loh: Most people have done power in multiple hierarchies. In the end, the system engineer has to figure out how to make everything work together. What they rely on are accurate power models to determine how much power is used. At the IP level, they try to characterize the IP power consumption as accurately as possible. Ideally, they want to have some kind of power budget to know what power the IP uses in different situations. They need as accurate a model of the power pattern as possible, so it can be used to decide what to turn on and off at any given moment. Anyone can break the chain here with inaccurate information that will cause the system engineer to make the wrong choice.

At the IP level, the EDA community has a responsibility to help.  For us, we’re trying to make sure that whatever power functionality is inserted doesn’t break the design.

Not as much is done at the firmware level, where a lot more tricks can be applied. Jasper’s job is to make sure engineers are provided with as much flexibility in the hardware as possible.

Smith: IP blocks cover the entire spectrum of designs. We have power standards that reach into the architectural area. We still need behavioral standards. The further up the flow you are in the design, the more application-specific the IP blocks become, until they come with their own software bundled into the package. So it’s not so much the information as it is the necessary standards, and the trade-off between accuracy and speed of execution, that needs to be worked out for the process to work.

LPE: At DAC 2013, a shortage was identified of system-level engineers capable of encompassing an application-down-to-transistor-level worldview in their design and modeling activities. Can power/thermal-aware system modeling be utilized in high-level operating system and application development activities? Are there plans to incorporate modeling at these levels? Are there engineers/programmers who can use these models if they are provided? Is there demand for EDA tools to make this information more accessible to higher-level engineers/programmers?

Loh: I’m not convinced one person who knows everything is even necessary. It’s a team project. People have their own domains and need to make sure, for example, that when a logic designer makes his block, he gets it correct, and similarly at the transistor level. Knowing how to optimize the entire design is useful, but someone doesn’t need to know how every bit at every level works.

The system-level guy can’t know every detail, and he won’t try to. He knows the high-level view of how the pieces should work together and can help to bridge the gap. He’ll know about tools that help close the gap between IP and integration, and maybe he’ll know something about the system and the system software that are necessary. This is how we’ve been able to scale so far, using a hierarchical organization.

Smith: Well, we sure need more tall, thin engineers. However, once we have all of the standards in place, we will have the tools needed for the HW and SW engineers to get the job done. The actual applications programmers will be given necessary pass/fail types of tools (plus some analytical tools) to develop power-aware software.

Has Power Trumped Performance?

Wednesday, October 2nd, 2013

By Hamilton Carter

About a year ago, John Blyler reported on several talks at the Ansys-Apache Design “Dimensions of Electronic Design” seminar. Those talks indicated that power-consumption design considerations were inexorably inching toward becoming the key concern at mobile SoC and IP design houses. It’s all fine and dandy that you can track your stock portfolio and video-record your kid’s ballet recital while catching up with Grandma—all on your smartphone. But it’s all for naught if the heat generated by the phone’s multi-tasking hardware burns your hand just before your battery dies.

I checked in with a few industry experts this month to see how things had progressed in the ensuing year. While the reduction of raw power consumption by physical means is both a concern and a valuable tool, this month’s roundtable participants—Lawrence Loh, VP of Applications Engineering from Jasper Design Automation, and Gary Smith of Gary Smith EDA—focused on how EDA tools are successfully dealing with power concerns at the functional level.

Power vs. Performance

Once upon a time, verification of low-power functionality was a low priority for many design teams. If all of the power features didn’t work on the first tapeout, it wasn’t the end of the world. Apparently, those days are long gone.

When asked if there were still design houses that did seat-of-their-pants power design and verification, Gary Smith’s initial response was a simple “No.” He then expanded on the situation: “No, power has become the number-one problem. All design targets are being constrained by power. If you don’t meet your power budget, you must either slow down your design (parallel computing) or restrict the size of your design. That’s after you use all of the design tricks to lower your power consumption (power gating, clock gating, etc.).”

Power considerations are even turning up in rather fundamental condensed-matter physics research. Researchers are working on new types of electron-spin-enabled memories. This field is still in its adolescence. At a colloquium earlier this month, the presenter indicated that one of the key motivators for the research had been to develop new versions of dense memories that consumed less power.

First, Do No Harm

Low-power verification has two key considerations. Of course, the first one is don’t let your end customers burn their hands, laps, or furniture or wind up with a 15-min. battery life. Second, while you’re taking care of the first consideration, make sure you don’t break any of the device’s functionality on which the customers depend. It’s a thankless job; your customers probably won’t swoon over the extra 5 min. of battery life you were able to squeeze in. But calls to the help lines will be fast and furious if they lose their contact list every time their smartphone goes into sleep mode.

When I asked Lawrence Loh if he knew of any engineering teams that were skimping on power verification, he couldn’t think of a single one. Instead, he pointed out that the indicators they’ve had point in the exact opposite direction. Jasper’s low-power verification app was one of its most inquired-about and most-used products. The app works with Jasper’s other formal checkers to ensure that power optimizations haven’t corrupted other functionality.

Low-Power Design in Stay-at-Home Products

While power consumption is certainly a prominent concern in mobile designs, it’s become an issue in stationary devices like set-top boxes and desktop PCs as well. With ever-increasing amounts of graphical and audio resolution available—as well as the attendant ramp in available processing power—it’s natural that consumers should expect to reap benefits in the form of visually and stereophonically stunning output. You might think that we’ve just about hit the limit of human perception at our current resolutions. But keep in mind that developments like those made by Daniel Smalley of MIT earlier this year promise that bandwidth-hungry technologies like holographic projection are just around the corner. Power has become the fly in the ointment here as well. No matter the quality of a device’s video and audio output, the typical consumer—as well as all of the power-conscious certification boards (think polar bears)—still expects televisions to consume no more power than their resolution-hobbled (relatively speaking) predecessors from a few years ago.

Looking at the Bottom from the Top

Everyone agreed that plenty of good power-consumption modeling information is available from the fabs. Lawrence Loh pointed out that the data is so good that chip-level architects are leveraging it to decide which portions of their designs could benefit most from power optimizations, based on the power information available for each type of logic cell.

Tall and Touch It All or Divide and Conquer?

At the Design Automation Conference (DAC) in Austin this summer, more than one speaker pointed out that there’s a shortage of engineers with a rock-solid grasp of the entire SoC hierarchy from application code down through block-level register transfer level (RTL) and transistor-level power consumption. While a few of these folks have been identified—and they’re certainly a valuable addition to any team—most chip-design projects and the EDA industry at large are following the tried and true hierarchical methodology of divide and conquer.

Lawrence Loh mentioned that one engineer probably won’t have knowledge of the entire chip and its intended application. Yet each level of engineer (transistor, block, sub-block, SoC, RTOS…), when provided with a power budget, can rather easily ensure that their particular design portion meets requirements. This methodology has been used successfully for the last few decades in the functional verification of large chip-design projects. Crucial functionality is verified at a given level, and that level of the design is then passed to the next design team up. The higher-level design team depends on the lower-level verification to be both adequate and complete, as it focuses its efforts on verifying the functionality added by its own level. Loh said this technique is already being applied quite effectively to the verification of low-power functionality as well.
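
A minimal sketch of that budgeting idea, with invented hierarchy paths and milliwatt figures (not drawn from any real project), might roll block-level estimates up each level of the hierarchy and flag any budget overruns:

```python
# Minimal sketch: rolling block-level power estimates up a design hierarchy
# and flagging levels that exceed the budget handed down to them.
# Paths, budgets, and estimates are illustrative placeholders.
budgets_mw = {"soc": 500.0, "soc/cpu_cluster": 220.0, "soc/gpu": 180.0}

block_estimates_mw = {
    "soc/cpu_cluster/core0": 95.0,
    "soc/cpu_cluster/core1": 95.0,
    "soc/cpu_cluster/l2":    40.0,
    "soc/gpu":               170.0,
}

def rollup(estimates):
    """Sum leaf estimates into every ancestor level of the hierarchy."""
    totals = {}
    for path, power in estimates.items():
        parts = path.split("/")
        for depth in range(1, len(parts) + 1):
            prefix = "/".join(parts[:depth])
            totals[prefix] = totals.get(prefix, 0.0) + power
    return totals

totals = rollup(block_estimates_mw)
for level, budget in budgets_mw.items():
    used = totals.get(level, 0.0)
    status = "OK" if used <= budget else "OVER BUDGET"
    print(f"{level}: {used:.0f} mW of {budget:.0f} mW ({status})")
```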

For his part, Gary Smith presented a vision of the future that included both hierarchical organization and automated tools. “Well, we sure need more tall, thin engineers. However, once we have all of the standards in place, we will have the tools needed for the HW and SW engineers to get the job done. The actual application programmers will be given necessary pass/fail types of tools (plus some analytical tools) to develop power-aware software.”

The hierarchical division of power management hasn’t percolated all the way to the top of the application stack in terms of standards yet. Gary pointed out that while we have good information on how much power devices consume, power standards currently only reach into the architectural area. Standards that reach all the way into the behavioral level would allow system architects to reap additional power savings.

“We have power standards that reach into the architectural area. We still need behavioral standards. The further up the flow you are in the design, the more the IP blocks become more application-specific until they come with their own software bundled into the package. So it’s not as much the information as it is the necessary standards and the tradeoff between accuracy and speed of execution that needs to be worked out for the process to work.”

Conclusions

Power and thermal constraints are at least as important as performance gains in mobile designs. The advent of WiFi-enabled wristwatches, for example, provides a whole new level of nightmarish scenarios when devices overheat. For the moment, though, it looks like the tried and true verification methodology of dividing the chip into hierarchical levels—combined with smart engineers and clever EDA tools—has once again turned a giant engineering problem into one that is at least tractable.

