
System Level Power Budgeting

Gabe Moretti, Contributing Editor

I would like to start by thanking Vic Kulkarni, VP and GM at Apache Design, a wholly owned subsidiary of ANSYS; Bernard Murphy, Chief Technology Officer at Atrenta; and Steve Brown, Product Marketing Director at Cadence, for contributing to this article.

Steve began by noting that defining a system-level power budget for an SoC starts from chip package selection and the power supply or battery life parameters. This sets the power/heat constraint for the design, and is selected while balancing the functionality of the device, the performance of the design, and the area of the logic and on-chip memories.

Unfortunately, as Vic points out, semiconductor design engineers must meet power specification thresholds, or power budgets, dictated by the electronic system vendors to whom they sell their products. Bernard wrote that accurate pre-implementation IP power estimation is almost always required. Since almost all design today is IP-based, accurate estimation for IPs is half the battle. Today you can get power estimates for RTL with accuracy within 15% of silicon, as long as you are modeling representative loads.

With the insatiable demand for handling multiple scenarios (i.e., large FSDB files) such as GPS, search, music, extreme gaming, streaming video, high download data rates and more on mobile devices, the dynamic power consumed by SoCs continues to rise in spite of the strides made in reducing static power consumption at advanced technology nodes. As shown in Figure 1, end-user demand for higher-performance mobile devices with longer battery life or higher thermal limits is expanding the “power gap” between power budgets and estimated power consumption levels.

A typical chip power budget for a mobile application could be as follows (ref: mobile companies): an active power budget of 700mW @ 100Mbps for download with MIMO; 100mW in IDLE mode; and leakage power below 5mW with all power domains off.
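
To make the arithmetic concrete, the minimal Python sketch below encodes these quoted budgets as thresholds and compares them against per-mode power estimates. Only the budget figures come from the example above; the estimate numbers are purely hypothetical.

```python
# Hypothetical check of estimated power against the quoted mobile budgets.
# Budget figures are from the article; the "estimated" numbers are made up.
BUDGET_MW = {
    "active_download_100mbps_mimo": 700.0,  # active budget @ 100 Mbps with MIMO
    "idle": 100.0,                          # IDLE-mode budget
    "leakage_all_domains_off": 5.0,         # leakage with all power domains off
}

estimated_mw = {
    "active_download_100mbps_mimo": 742.5,  # hypothetical RTL-level estimate
    "idle": 88.0,
    "leakage_all_domains_off": 3.2,
}

for mode, budget in BUDGET_MW.items():
    est = estimated_mw[mode]
    status = "OK" if est <= budget else f"OVER by {est - budget:.1f} mW"
    print(f"{mode:32s} budget={budget:6.1f} mW  estimate={est:6.1f} mW  {status}")
```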

Accurate power analysis and optimization tools must be employed during all design phases, from system level through RTL to gate-level sign-off, to model and analyze power consumption levels and provide methodologies to meet power budgets.

Figure 1: Skyrocketing performance vs. limited battery and thermal limits (ref. Samsung, Apache Tech Forum)

The challenge is to find ways to abstract power with reasonable accuracy for different types of IP and different loads. Reasonable methods to parameterize power have been found for single- and multi-processor systems, but not for more general heterogeneous systems. Absent better models, most methods used today are based on quite simple lookup tables representing average consumption. Si2 is doing work on defining standards in this area.
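
As an illustration of the kind of parameterization that has been worked out for processor systems, the sketch below scales dynamic power from a characterized reference point with frequency, voltage, and activity. The constants are placeholders, not data for any real core.

```python
# Illustrative parameterized power model for a single processor core.
# Dynamic power scales with activity, frequency, and Vdd^2 relative to a
# characterized reference point; all constants below are made-up placeholders.
def core_power_mw(freq_mhz, vdd, activity,
                  ref_freq_mhz=1000.0, ref_vdd=0.9,
                  ref_dyn_mw=300.0, leak_mw=20.0):
    """Return estimated core power in mW.

    activity: average switching activity relative to the reference
              workload (1.0 = same as the characterization run).
    """
    dyn = ref_dyn_mw * activity * (freq_mhz / ref_freq_mhz) * (vdd / ref_vdd) ** 2
    return dyn + leak_mw

# Example: the same workload at a lower operating point.
print(f"{core_power_mw(600.0, 0.8, activity=0.7):.1f} mW")
```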

Vic is convinced that careful power budgeting at a high level also enables design of the power delivery network in the downstream design flow. Delivering reliable and consistent power to all components of ICs and electronic systems while meeting power budgets is known as power delivery integrity. Power delivery integrity is analogous to the way in which an electric power grid operator ensures that electricity is delivered to end users reliably, consistently, and in adequate amounts while minimizing loss in the transmission network. ICs and electronic systems designed with inadequate power delivery integrity may experience large fluctuations in supply voltage and operating power that can cause system failure. These fluctuations particularly impact ICs used in mobile handsets and high-performance computers, which are more sensitive to variations in supply voltage and power. Ensuring power delivery integrity requires accurate modeling of multiple individual components, which are designed by different engineering teams, as well as comprehensive analysis of the interactions between these components.

Methods To Model System Behavior With Power

At present, engineers have a few approaches at their disposal. Vic points out that the designer must translate the power requirements into block-level power budgets to arrive at specific metrics: dynamic power estimation per operating power mode; leakage and sleep power estimation at RTL; power distribution at a glance; identification of high power-consuming areas; power domains; frequency-scaling feasibility for each IP; retention-flop design trade-offs; power-delivery network planning; required current consumption per voltage source; and so on.
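
A minimal sketch of the first step, translating a chip-level budget into block-level budgets, might look like the following. The block names and allocation weights are invented for illustration.

```python
# Hypothetical translation of a chip-level active-power budget into
# block-level budgets; block names and weights are illustrative only.
CHIP_ACTIVE_BUDGET_MW = 700.0   # from the mobile example above

# Fraction of the chip budget allocated to each block.
block_weights = {
    "cpu_cluster": 0.35,
    "gpu":         0.25,
    "modem":       0.20,
    "memory_ctrl": 0.10,
    "always_on":   0.10,
}

# Allocations must cover exactly 100% of the chip budget.
assert abs(sum(block_weights.values()) - 1.0) < 1e-9

block_budgets_mw = {blk: CHIP_ACTIVE_BUDGET_MW * w
                    for blk, w in block_weights.items()}

for blk, mw in block_budgets_mw.items():
    print(f"{blk:12s} {mw:6.1f} mW")
```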

Bernard thinks that spreadsheet modeling is probably the most common approach. The spreadsheet captures typical application use-cases, broken down into IP activities determined from application simulations or emulations. It also represents, for each IP in the system, a power lookup table or set of curves. Power estimation simply sums across IP values in a selected use-case. An advantage is that there is no limitation in complexity: you can model a full smart phone including battery, RF and so on. Disadvantages are the need to understand an accurate set of use-cases ahead of deployment, and the abstraction problem mentioned above. But Steve points out that these spreadsheets are difficult to create and maintain, and fall short for identifying outlier conditions that are critical for the end user's experience.
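
The spreadsheet calculation itself is simple enough to sketch: a per-IP lookup table of average power by operating state, plus use-cases expressed as the fraction of time each IP spends in each state, summed across IPs. All figures below are illustrative, not characterization data.

```python
# Sketch of the spreadsheet approach: per-IP average-power lookup tables
# combined with time-weighted use-case activity. Numbers are illustrative.
ip_power_mw = {               # average power per state, per IP
    "cpu":   {"active": 250.0, "idle": 30.0, "off": 0.5},
    "gpu":   {"active": 180.0, "idle": 15.0, "off": 0.3},
    "modem": {"active": 120.0, "idle": 10.0, "off": 0.2},
}

use_cases = {                 # fraction of time each IP spends in each state
    "video_streaming": {
        "cpu":   {"active": 0.4, "idle": 0.6},
        "gpu":   {"active": 0.7, "idle": 0.3},
        "modem": {"active": 0.9, "idle": 0.1},
    },
    "standby": {
        "cpu":   {"idle": 0.1, "off": 0.9},
        "gpu":   {"off": 1.0},
        "modem": {"idle": 0.2, "off": 0.8},
    },
}

def use_case_power_mw(use_case):
    """Sum time-weighted average power across all IPs for one use-case."""
    total = 0.0
    for ip, states in use_cases[use_case].items():
        total += sum(ip_power_mw[ip][s] * frac for s, frac in states.items())
    return total

for uc in use_cases:
    print(f"{uc:16s} {use_case_power_mw(uc):7.1f} mW")
```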

Steve also points out that some companies are adapting virtual platforms to measure dynamic power and improve hardware/software partitioning decisions. The main barrier to this solution remains creating the virtual platform models, and then also adding the notion of power to them. Reuse of IP enables reuse of existing models, but they still require effort to maintain and to adapt power calculations for new process nodes.
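
A hedged sketch of what "adding the notion of power" to a virtual platform model can look like follows: each transaction type charges a per-event energy cost, with a scaling factor standing in for re-characterization at a new process node. The class, event names, and numbers are hypothetical, not any vendor's API.

```python
# Hypothetical sketch of annotating a transaction-level (virtual platform)
# model with energy: each event type charges a per-event energy cost,
# optionally scaled for a new process node. Names and numbers are invented.
class PowerAnnotatedModel:
    def __init__(self, energy_pj_per_event, node_scale=1.0):
        self.energy_pj_per_event = energy_pj_per_event  # e.g. {"read": 12.0}
        self.node_scale = node_scale                    # re-characterize per node
        self.energy_pj = 0.0

    def record(self, event, count=1):
        # Accumulate energy as the virtual platform executes transactions.
        self.energy_pj += self.energy_pj_per_event[event] * count * self.node_scale

ddr = PowerAnnotatedModel({"read": 12.0, "write": 15.0}, node_scale=0.8)
ddr.record("read", 1_000_000)
ddr.record("write", 400_000)
print(f"DDR energy for this scenario: {ddr.energy_pj / 1e6:.2f} uJ")
```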

Bernard has seen engineers run the full RTL against realistic software loads, dump activity for all (or a large number of) nodes, and compute power from the dump. An advantage is that they can skip the modeling step and still get an estimate as good as RTL modeling. Disadvantages include needing the full design (making it less useful for planning) and a significant slowdown in emulation when dumping all nodes, making it less feasible to run extensive application experiments. Steve concurs: dynamic power analysis is a particularly useful technique, available in emulation and simulation. The emulator provides MHz performance, enabling analysis of many cycles, often with test driver software to focus on the most interesting use cases.
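
The computation behind the activity-dump approach is the classic dynamic power sum, P_dyn = sum over nodes of alpha * C * Vdd^2 * f. A minimal sketch follows, with invented activity and capacitance values standing in for a real FSDB dump.

```python
# Activity-based dynamic power: P_dyn = sum(alpha * C * Vdd^2 * f) over nodes.
# The per-node values below are illustrative, not from a real activity dump.
VDD = 0.9          # supply voltage (V)
FREQ_HZ = 500e6    # clock frequency (Hz)

# (switching activity per cycle, node capacitance in farads) per node,
# as would be extracted from an activity dump such as an FSDB file.
nodes = [
    (0.15, 2.0e-15),
    (0.40, 1.2e-15),
    (0.05, 5.0e-15),
]

p_dyn_w = sum(alpha * cap * VDD ** 2 * FREQ_HZ for alpha, cap in nodes)
print(f"Dynamic power: {p_dyn_w * 1e6:.2f} uW")
```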

Bernard is of the opinion that while C/C++/SystemC modeling seems an obvious target, it also suffers from the abstraction problem. Steve thinks that a likely architecture in this scenario has the virtual platform containing the processing subsystem and memory subsystem and executing at hundreds of MHz, while the emulator contains the rest of the SoC along with a replica of the memory subsystem, executes at higher speeds, and provides cycle-accurate power analysis and functional debugging.

Again, Bernard wants to underscore that progress has been made for specialized designs, such as single- and multi-processor systems, but these approaches have little relevance for the more common heterogeneous systems. Perhaps the Si2 work in this area will help.
