
Posts Tagged ‘Sonics’


Blog Review – Monday, May 22, 2017

Monday, May 22nd, 2017

This week’s collection looks at what’s needed for autonomous cars, Qt’s approach to flaky tests, Sonics’ search for wonderment, and blogs offering design advice.

Just as drivers choose their cars to meet their needs, so driverless cars need an assortment of processors, argues Intel’s Kathy Winter. She likens the designer’s toolbox to a golf bag with something for every dilemma encountered.

Reporting from the bi-annual GENIVI meeting in Birmingham, England, Andrew Pattersen, Mentor Graphics, learns that big data ownership could be a bone of contention in the next business model for the automotive industry.

Autonomous automotive development requires a thorough understanding of a variety of protocols for automation, electronics control and software. Jaspreet Singh Gambhir, Synopsys, explains how verification offerings can accelerate design.

It is always fun to hear about design mishaps, and Sudhir Sharma, ANSYS, entertains with some he has come across to explain why digital twins and physics-based simulation not only meet design objectives but can also save costs and boost profitability.

Where’s the wonder? wonders Randy Smith, Sonics, asking why more people weren’t awestruck at the Machine Learning Developers Conference as he learned about Wave Computing’s dataflow for deep learning.

Consistency is key for Frederik Gladhorn, Qt, as he investigates a metrics infrastructure for what he calls flaky tests, which hamper a design’s progress, offering some practical advice and examples.

Speaking directly to anyone struggling with multiple-layer design, Parul Agarwal, Cadence Design Systems, has some thoughts and advice on how to use a multi-layer bus. The blog is illustrated with some useful images as a practical guide to layer patterns.

Caroline Hayes, Senior Editor

Blog Review – Monday, April 24, 2017

Monday, April 24th, 2017

This week’s blogs are concerned with AI and intelligent, connected vehicles, sometimes both. There are quests to find the facts behind myths, as well as searches for answers on power management and software security.

Is an effective verification tool the stuff of legends? Gabe Moretti, Chip Design Magazine, seeks the truth behind Pegasus – no, not the winged horse, but the more earthly verification engine from Cadence.

A power strategy is one thing, but a free trial adds a new dimension to energy management. Don Dingee, Sonics, elaborates on the company’s plan to bring power to the masses, using hardware IP and ICE-Grain Power architecture.

If you are unsure about USB, Senad Lomigora’s blog for ON Semiconductor should help. It looks at what USB is for, and why we can’t get enough of USB Type-C, USB 3.1, connectors and re-drivers.

Every vehicle’s ADAS relies on good visuals and good connectivity, observes Jim Harrison, guest blogger at Maxim Integrated. He looks at the securely connected autonomous car, then homes in on how Maxim Integrated exploits GMSL, an alternative to Ethernet, in its MAX96707 and MAX96708 chips to create an effective in-car communication network.

Still with the connected car, Pete Decher, Mentor Graphics, is fresh from the Autotech Council meeting in San Jose. The company’s DRS360 Autonomous Driving Platform launch was high on the list of discussion topics, along with the role of artificial intelligence (AI) in the future of driving.

Still with AI, Evens Pan, ARM, provides an in-depth blog on Chinese start-up Peceptin’s embedded deep learning. The case study is fascinating and well reported in this comprehensive essay.

Making a software engineer feel insecure about software security is an everyday occurrence; helping them out is a little more out-of-the-ordinary. So it is refreshing to see a post from the Synopsys editorial team letting the put-upon software engineer know there is a webinar coming soon (May 2) to enlighten them on the Building Security In Maturity Model (BSIMM), with a link to register to attend.

Caroline Hayes, Senior Editor

Behold the Intrinsic Value of IP

Monday, March 13th, 2017

By Grant Pierce, CEO

Sonics, Inc.

Editor’s Note [this article was written in response to questions about IP licensing practices. A follow-up article will be published in the next 24 hours with the title “Determining a Fair Royalty Value for IP”].

[Figure: IP Value chart]

The intrinsic value of Intellectual Property is like beauty: it is in the eye of the beholder. The beholder of IP value is ultimately the user/consumer of that IP – the buyer. Buyers tend to value IP based upon their ability to utilize that IP to create competitive advantage, and therefore higher value, for their end product. The IP Value figure above was created to capture this concept.

To be clear, this view is NOT about the relative bargaining power between the buyer and the supplier of IP – the seller – that is built on the basis of patents. Mounds of court cases and textbooks explore the question of patent strength. What I am positing is that viewing IP value as a matter of a buyer’s perception is a useful way to think about the intrinsic value of IP.

Position A on the value chart is a classification of IP that allows little differentiation by the buyer, but addresses a more elastic market opportunity. This would likely be a Standard IP type that implements an open standard. IP in this category would likely have multiple sources and therefore competitive pricing. Although compliance with the standard would be valued by the buyer, the price of the IP itself would likely be lower, reflecting its commodity nature. Here, the value might be equated to the cost of internally creating equivalent IP. Since few, if any, buyers in this category would see an advantage in making this IP themselves, and because there are likely many sellers, the intrinsic value of this IP is determined on a “buy vs buy” basis. Buyers are going to buy this IP regardless, so they will look for the seller with the proposition most favorable to them – which often comes down to price.

Position B on the value chart is a classification of IP that allows for differentiation by the buyer, but addresses a more elastic market. IP in this category might be less constrained by standards requirements. It is likely that buyers would implement unique instantiations of this IP type and as a result command some competitive advantage in their end product. Buyers in this category could make this IP themselves, but because there are commercial alternatives, the intrinsic value is determined by applying a “make vs buy” analysis. Because the value propositions of sellers of this type of IP often include important but soft elements (e.g., ease of re-use, time-to-market, esoteric features), the make vs buy determination is highly variable and often buyer-specific. This in part explains the variability of pricing for this type of IP.

Position C on the value chart is a classification of IP that serves a less elastic market and empowers buyers to differentiate through their unique implementations of that IP. This classification of IP supports license fees and larger, more consistent royalty rates. IP in this category becomes the competitive differentiation that sways large market share to the winning products incorporating that IP. This category supports some of the larger IP companies in the marketplace today. Buyers in this category are not going to make the IP themselves because developing the product and its ecosystem would be prohibitively costly and risky. The intrinsic value really comes down to what the seller charges.

This is a “buy vs not make” decision – meaning one either buys the IP or does not make the product at all. A unique hallmark of IP in this position is that, so long as the seller applies pricing consistently, all buyers know at the very least that they are not disadvantaged relative to the competition and will continue to buy. Sellers will often give some technology away to encourage long-term lock-in. For these reasons, pricing of IP in this space tends to be quite stable. That pricing level must, subjectively, stay below the point at which customers begin to perform unnatural acts and explore unusual alternatives. So long as it does, the price charged probably represents the intrinsic value accurately.

Position D on the value chart is a classification of IP that requires adherence to a standard. As in category A, adherence to the standard does not necessarily allow differentiation by the buyer. The buyer of this category of IP might be required to use it in order to gain access to the market itself. Though the lack of end-product differentiation available to the buyer might suggest a lower license fee and/or a low-to-zero royalty rate, we see a significantly less elastic market for this IP type.

This IP category tends to comprise products adhering to closed and/or proprietary standards. IP products built on such closed and/or proprietary standards have given rise to several significant IP business franchises in the marketplace today. The IP in position D is in part characterized by the need to spend significant time and money to develop, market and maintain (defend) their position, in addition to spending on IP development. For this reason, teasing out the intrinsic value of this IP is not as straightforward as “make vs buy.” Pricing is really viewed more as a tax. So the intrinsic value determination is based on a “Fair Tax” basis. If buyers think the tax is no longer “fair,” for any reason, they will make the move to a different technology.

Examples:

Position A:  USB, PCI, memory interfaces (Synopsys)

Position B:  Configurable Processors, Analog IP cores (Synopsys, Cadence)

Position C:  General Purpose Processors, Graphics, DSP, NoC, EPU (ARM, Imagination, CEVA, Sonics)

Position D: CDMA, Noise Reduction, DDR (Qualcomm, Dolby, Rambus)

Why Customer Success is Paramount

Sonics is an IP supplier whose products tend to reside in the Type C category. Sonics sets its semiconductor IP pricing as a function of the value of the SoC design/chip that uses the IP. There is a spectrum of value functions for the Sonics IP depending upon the type of chip, complexity of design, target power/performance, expected volume, and other factors. Defining the upper and lower bounds of the value spectrum depends upon an approximation of these factors for each particular chip design and customer.

Royalties are one component of the price of IP and are a way of sharing risk, allowing customers to bring their products to market without having to pay the full value of the incorporated IP up front. The benefit is that the creator and supplier of the IP is essentially investing in the overall success of the user’s product by accepting the deferred royalty payment. Sonics views the royalty component of its IP pricing as “customer success fees.”

With its recently introduced EPU technology, Sonics has adopted an IP business model based upon an annual technology access fee and a per-power-grain usage fee due at chip tapeout. Under this model, customers have unlimited use of the technology to explore power control for as many designs as they want, but only pay for their actual IP usage in a completed design. The tapeout fee is calculated from the number of power grains used in the design on a sliding scale: the more power grains customers use, the lower the cost per grain, and the more energy their chip saves – buyers increase the market value of their chips using Sonics’ EPU technology. The bottom line is that Sonics’ IP business model depends on customers successfully completing their designs using Sonics IP.
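For readers who like to see the arithmetic, here is a minimal sketch of how a sliding-scale, per-power-grain tapeout fee could be computed. The tier boundaries, per-grain prices and units below are hypothetical placeholders invented purely for illustration; they are not Sonics’ actual pricing.

```python
# Hypothetical sketch of a sliding-scale, per-power-grain tapeout fee.
# Tier boundaries and prices are invented for illustration only.

HYPOTHETICAL_TIERS = [
    (100, 50.0),            # first 100 grains at 50 (arbitrary units) each
    (400, 35.0),            # next 400 grains at 35 each
    (float("inf"), 20.0),   # all remaining grains at 20 each
]

def tapeout_fee(power_grains_used: int) -> float:
    """Return the usage fee due at tapeout for a completed design."""
    fee, remaining = 0.0, power_grains_used
    for tier_size, price_per_grain in HYPOTHETICAL_TIERS:
        grains_in_tier = min(remaining, tier_size)
        fee += grains_in_tier * price_per_grain
        remaining -= grains_in_tier
        if remaining == 0:
            break
    return fee

if __name__ == "__main__":
    for grains in (50, 300, 1200):
        total = tapeout_fee(grains)
        print(f"{grains:5d} grains -> fee {total:9.1f} "
              f"({total / grains:.1f} per grain)")
```

Run over a few design sizes, the effective cost per grain falls as usage grows, which matches the incentive the model describes.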

Blog Review – Monday, January 23, 2017

Monday, January 23rd, 2017

This week’s blogs show the human face of automated driving, and why energy should be taken seriously. There is lift-off for SpaceX to bring more satellite comms, and a poetic turn in the style of Rudyard Kipling’s classic poem.

There is a human element to automated driving, namely Human Machine Interface (HMI) and Jack Weast, Intel, uses his second blog post to examine how and why it can be used. He promises more in part three into the company’s research.

Energy is a serious business, says Grant Pierce, Sonics, and the electronics industry must shoulder some responsibility for power savings. The company, with Semico Research, is conducting a survey and wants your help in understanding today’s and tomorrow’s power requirements.

The SpaceX Falcon 9 rocket put the first 10 Iridium NEXT satellites, which provide point-to-point communications, into Low Earth Orbit (LEO); they are equipped with Xilinx space-grade Virtex-5QV FPGAs to implement the satellites’ On Board Processor (OBP) hardware. Steve Liebson, Xilinx, includes a link to a video describing the constellation and the launch.

Celebrating the relationship with Ericsson, Dassault Systèmes’ Olivier Ribet, looks at how the latter’s Networked Society will transform the way we interact with the world around us and meet technology challenges, such as 5G, IoT and the cloud.

Moving to 10nm and lower process geometries pushes the boundaries of FinFET and the custom layout flow, and this means trouble ahead, warns Graham Etchells.

A touch of culture, with a poem “wot I wrote” by Keith Hanna, Mentor Graphics. He deftly tackles Computational Fluid Dynamics (CFD) as Rudyard Kipling might.

Image data and the mysteries of how to create, access and use a QImage to greatest effect are detailed by Laszlo Agocs, Qt, with three case studies to illustrate what can be done.

A sharp video addressing the interconnect verification challenges is hosted by Nimrod Reiss. Cadence’s Corrie Callenbach has found and highlighted the video.

Caroline Hayes, Senior Editor

Blog Review – Tuesday, January 10, 2017

Tuesday, January 10th, 2017

Moving on from 4K and 8K, Simon Forrest, Imagination Technologies, reports on 360° video, as seen at this year’s CES in Las Vegas. That, together with High Dynamic Range (HDR) could re-energize the TV broadcasting industry in general and the set-top box in particular.

The IoT is responsible for explosive growth in smart homes with connectivity at their centre. Dan Artusi, Intel, considers what technologies and disciplines are coming together as the company introduces Intel Home Wireless Infrastructure at CES 2017.

Announcing a partnership with Renault and OSVehicle, ARM will work with the companies to develop an open source platform for cars, cities and transportation. Soshun Arai, ARM, explains how the ‘stripped down’ Twizy can release the brakes on CAN development.

Some Christmas reading has brought enlightenment to Gabe Moretti, Chip Design, as he unravels the mysteries of CEO comings and goings, and why the EDA industry could learn a thing or two from the boards of spy plane and stealth bomber manufacturers.

Still with EDA, Brian Derrick, Mentor Graphics, likens the automotive industry to sports teams, where big names dominate and capture consumers’ interest, eclipsing all others. This is changing as electric vehicles become a superpower to turbocharge the industry.

It’s always good to welcome new blogs, and Sonics delivers with its announcement that it is addressing power management. Grant Pierce, Sonics, introduces the technology and product portfolio to enhance design methods.

Caroline Hayes, Senior Editor

Blog Review – Monday Oct. 20, 2014

Monday, October 20th, 2014

OCP-onwards and upwards; infotainment in Paris; lend a hand to ARM; Cadence anticipates 10nm FinFET process node.

By Caroline Hayes, Senior Editor

An online tutorial from Accellera Systems Initiative, “OCP: The Journey Continues,” is a five-part tutorial spanning the past, present and future of the OCP (Open Core Protocol) IP interface socket standard. Drew Wingard, one of the presenters, draws it to our attention; discussing OCP in SoC designs – including verification IP support, TLM 2.0 SystemC support and IP-XACT support – are Herve Alexanian, Sonics, and Wingard himself, along with Steve Masters, Synopsys, and Prashant Karandikar, Texas Instruments.

Elektrobit and Nuance have integrated voice with natural language understanding (NLU) in the virtual cockpit in Audi’s TT Roadster which is being shown at the Paris Motor Show. John Day, Mentor Graphics marvels at the results and speculates on its practical uses.

A plea from ARM’s Brad Nemire, to the community which celebrates its first anniversary this month. He invites comment on the community and proposed changes, all designed to make the community interactive and responsive. He promises the survey will only take five minutes of your time.

An informative review of technical presentations at the recent TSMC Open Innovation Platform (OIP) Ecosystem Forum prepares the reader for the 10nm FinFET process node. Richard Goering, Cadence, includes some graphics from two keynotes for those who could not make the Forum, with some warnings of what it will mean for design.

Low Power Is The Norm, Not The Exception

Friday, September 26th, 2014

Gabe Moretti, Senior Editor

The issue of power consumption took center stage with the introduction of portable electronic devices. It became necessary for the semiconductor industry, and thus the EDA industry, to develop new methods and new tools to confront the challenges and provide solutions. Low Power thus became a separate segment of the industry. EDA vendors developed tools specifically addressing the problem of minimizing power consumption at the architecture, synthesis and pre-fabrication stages of IC development. Companies instituted new design methodologies that focused specifically on power distribution and consumption.

Today the majority of devices are designed and fabricated with low power as a major requirement.  As we progress toward a world that uses more wearable devices and more remote computational capabilities, low power consumption is a must.  I am not sure that dedicating a segment to low power is relevant: it makes more sense to have a sector of the industry devoted to unrestricted power use instead.

The contributions I received in preparing this article are explicit in supporting this point of view.

General Considerations

Mary Ann White, Director of Product Marketing, Galaxy Design Platform, at Synopsys concurs with my position.  She says: “Power conservation occurs everywhere, whether in mobile applications, servers or even plug-in-the-wall items.  With green initiatives and the ever-increasing cost of power, the ability to save power for any application has become very important.  In real-world applications for home consumer items (e.g. stereo equipment, set-top boxes, TVs, etc.), it used to be okay to have items go into standby mode. But, that is no longer enough when smart-plug strips that use sensors to automatically turn off any power being supplied after a period of non-usage are now populating many homes and Smart Grids are being deployed by utility companies. This trend follows what commercial companies have done for many years now, namely using motion sensors for efficient energy management throughout the day.”

Vic Kulkarni, Senior VP and GM, RTL Power Business Unit, at Apache Design, Inc., a wholly-owned subsidiary of ANSYS, Inc., approaches the problem from a different point of view but also points out wasted power.

“Dynamic power consumed by SoCs continues to rise in spite of strides made in reducing the static power consumption in advanced technology nodes.

There are many reasons for dynamic power consumption waste – redundant data signal activity when clocks are shut off, excessive margin in the library characterization data leading to inefficient implementation, large active logic cones feeding deselected mux inputs, lack of sleep or standby mode for analog circuits, and even insufficient software-driven controls to shut down portions of the design. Another aspect is the memory sub-system organization. Once the amount of memory required is known, how should it be partitioned? What types of memories should be used? How often do they need to be accessed? All of these issues greatly affect power consumption. Therefore, designers must perform power-performance-area tradeoffs for various alternative architectures to make an informed decision.”

The ubiquity of low power design was also pointed out by Guillaume Boillet, Technical Marketing Manager at Atrenta Inc. He told me: “Motivations for reducing the power consumed by chips are multiple. They range from purely technical considerations (i.e., ensuring integrity and longevity of the product), to differentiation factors (i.e., extending battery life or reducing the cost of cooling), to simply being more socially responsible. As a result, power management techniques, which were once only deployed for wireless applications, have now become ubiquitous. The vast majority of IC designers are now making a conscious effort to configure their RTL for efficient power partitioning and to reduce power consumption, in particular the dynamic component, which is increasingly becoming more dominant at advanced technology nodes.” Of course, engineers have found from experience that minimizing power is not easy. Guillaume continued: “The task is vast and far from being straightforward. First, there is a multitude of techniques available to designers: power gating, use of static and variable voltage domains, Dynamic Voltage and Frequency Scaling (DVFS), biasing, architectural tradeoffs, coarse and fine-grain clock gating, micro-architectural optimizations, memory management, and light sleep are only some examples. When you try combining all of these, you soon realize the permutations are endless. Second, those techniques cannot be applied blindly and can have serious implications during floor planning, timing convergence activities, supply distribution, Clock Tree Synthesis (CTS), Clock Domain Crossing management, Design For Test (DFT) or even software development.”
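To see why techniques such as clock gating and DVFS are worth the trouble, a back-of-the-envelope sketch using the standard dynamic-power relation P ≈ α·C·V²·f may help; the activity factor, capacitance, voltage and frequency values below are assumed numbers chosen only to show the shape of the trade-off, not measurements from any real design.

```python
# Back-of-the-envelope dynamic power: P_dyn ~ alpha * C * V^2 * f.
# All numbers below are assumed for illustration only.

def dynamic_power(activity: float, cap_farads: float,
                  volts: float, freq_hz: float) -> float:
    """Switching (dynamic) power of a clocked block, in watts."""
    return activity * cap_farads * volts ** 2 * freq_hz

# A hypothetical block with 1 nF of effective switched capacitance.
nominal = dynamic_power(activity=0.15, cap_farads=1e-9,
                        volts=1.0, freq_hz=1.0e9)

# DVFS: drop to 0.8 V and 600 MHz when full speed is not needed.
dvfs = dynamic_power(activity=0.15, cap_farads=1e-9,
                     volts=0.8, freq_hz=0.6e9)

# Clock gating: the same block, but clocked only 40% of the time.
gated = nominal * 0.4

print(f"nominal     : {nominal * 1e3:6.1f} mW")
print(f"DVFS (0.8V) : {dvfs * 1e3:6.1f} mW  ({dvfs / nominal:.0%} of nominal)")
print(f"clock gated : {gated * 1e3:6.1f} mW  ({gated / nominal:.0%} of nominal)")
```

Because voltage enters the relation squared, even a modest supply reduction combined with a lower clock pays off disproportionately, which is why DVFS sits near the top of most designers’ lists.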

Low power considerations have also been at the forefront of IP design. Dr. Roddy Urquhart, Vice President of Marketing at Cortus, a licensor of controllers, noted that: “A major trend in the electronics industry now is the emergence of connected intelligent devices implemented as systems-on-chip (SoC) – the ‘third wave’ of computational devices. This wave consists of the use of locally connected smart sensors in vehicles, the emergence of “smart homes” and “smart buildings” and the growing Internet of Things. The majority of these types of devices will be manufactured in large volumes, and will face stringent power constraints. While users may accept charging their smartphones on a daily basis, many sensor-based devices for industrial applications, environmental monitoring or smart metering rely on the battery to last months or even a number of years. Achieving this requires a focus on radically reducing power and a completely different approach to SoC design.”

Architectural Considerations

Successful power management starts at the architectural level. Designers cannot decide on a tactic to conserve power once the system has already been designed, since power consumption is the result of architectural decisions aimed at meeting functional requirements. These tradeoffs are made very early in the development of an IC.

Jon McDonald, Senior Technical Marketing Engineer, at Mentor Graphics noted that: “Power analysis needs to begin at the system level in order to fix a disconnect between the measurement of power and the decisions that affect power consumption. The current status quo forces architectural decisions and software development to typically occur many months before implementation-based power measurement feedback is available. We’ve been shooting in the dark too long.  The lack of visibility into the impact of decisions while they are being made incurs significant hidden costs for most hardware and software engineers. System engineers have no practical way of measuring the impact of their design decisions on the system power consumption. Accurate power information is usually not available until RTL implementation, and the bulk of power feedback is not available until the initial system prototypes are available.”

Patrick Sheridan, Senior Staff Product Marketing Manager, Solutions Group, at Synopsys went into more detail.

“Typical questions that the architect can answer are:

1) How to partition the SoC application into fixed hardware accelerators and software executing on processors, determining the optimal number and type of each CPU, GPU, DSP and accelerator.

2) How to partition SoC components into a set of power domains to adjust voltage and frequency at runtime in order to save power when components are not needed.

3) How to confirm the expected performance/power curve for the optimal architecture.

To help expand industry adoption, the IEEE 1801 Working Group’s charter has been updated recently to include extending the current UPF low power specification for use in system level power modeling. A dedicated system level power sub-committee of the 1801 (UPF) Working Group has been formed, led by Synopsys, which includes good representation from system and power architects from the major platform providers. The intent is to extend the UPF language where necessary to support IP power modeling for use in energy aware system level design.”  But he pointed out that more is needed from the software developers.

“In addition, power efficiency continues to be a major product differentiator – and quality concern – for the software manager. Power management functions are distributed across firmware, operating system, and application software in a multi-layered framework, serving a wide variety of system components – from multicore CPUs to hard-disks, sensors, modems, and lights – each consuming power when activated. Bringing up and testing power management software is becoming a major bottleneck in the software development process.

Virtual prototypes for software development enable the early bring-up and test of power management software and enable power-aware software development, including the ability to:

- Quickly reveal fundamental problems such as a faulty regulation of clock and voltages

- Gain visibility for software developers, to make them aware of problems that will cause major changes in power consumption

- Simulate real world scenarios and systematically test corner cases for problems that would otherwise only be revealed in field operation

This enables software developers to understand the consequences of their software changes on power sooner, improving the user-experience and accelerating software development schedules.”
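As a toy illustration of the kind of bug a virtual prototype can catch long before silicon, consider a simple power-state model of one domain that flags faulty regulation when software powers the domain down but leaves its clock enabled. The domain name and checking rule below are invented for illustration and do not represent any particular vendor’s prototype or API.

```python
# Toy virtual-prototype-style model of one power domain.  The goal is
# to catch power-management software bugs (e.g. a clock left enabled
# in an "off" domain); names and rules are illustrative only.

class PowerDomain:
    def __init__(self, name: str):
        self.name = name
        self.powered = True
        self.clock_enabled = True

    def set_power(self, on: bool):
        self.powered = on

    def set_clock(self, on: bool):
        self.clock_enabled = on

    def check(self):
        # Rule: a powered-down domain must not have its clock enabled.
        if not self.powered and self.clock_enabled:
            raise AssertionError(
                f"faulty regulation: clock still enabled in off domain {self.name}")

# Scripted scenario, standing in for the power-management software.
gpu = PowerDomain("gpu")
gpu.set_power(False)          # software powers the domain down...
# ...but forgets gpu.set_clock(False) - the model flags it immediately.
try:
    gpu.check()
except AssertionError as err:
    print("caught:", err)
```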

Drew Wingard, CTO at Sonics, also answered my question about the importance of architectural analysis of power consumption.

“All the research shows that the most effective place to do power optimization is at the architectural level where you can examine, at the time of design partitioning, what are the collections of components which need to be turned on or can afford to be turned off. Designers need to make power partitioning choices from a good understanding of both the architecture and the use cases they are trying to support on that architecture. They need tooling that combines the analysis models together in a way that allows them to make effective tradeoffs about partitioning versus design/verification cost versus power/energy use.”

Dr. Urquhart underscored the importance of architectural planning in the development of licensable IP.  “Most ‘third wave’ computational devices will involve a combination of sensors, wireless connectivity and digital control and data processing. Managing power will start at the system level identifying what parts of the device need to be always on or always listening and which parts can be switched off when not needed. Then individual subsystems need to be designed in a way that is power efficient.

A minimalist 32-bit core saves silicon area and in smaller geometries also helps reduce static power. In systems with more complex firmware the power consumed by memory is greater than the power in the processor core. Thus a processor core needs to have an efficient instruction set so that the size of the instruction memory is minimized. However, an overly complex instruction set would result in good code density but a large processor core. Thus overall system power efficiency depends on balancing power in the processor core and memory.”

Implementation Considerations

Although there is still a need for new and more powerful architectural tools for power planning, implementation tools that help designers deal with issues of power distribution and use are reaching maturity and can be counted on as reliable by engineers.

Guillaume Boillet observed that: “Fine-grain sequential clock gating and removal of redundant memory accesses are techniques that are now mature enough for EDA tools to decide what modifications are best suited based on specific usage scenarios (simulation data). For these techniques, it is possible to generate optimized RTL automatically, while guaranteeing its equivalence vs. the original RTL, thanks to formal techniques. EDA tools can even prevent modifications that generate new unsynchronized crossings and ensure proper coding style provided that they have a reliable CDC and lint engine.”

Vic Kulkarni provided me with an answer based on sound and detailed technical theory that led to the following: “There are over 20 techniques to reduce power consumption, which must be employed during all the design phases – from system level (Figure 1) and RTL to gate-level sign-off – to model and analyze power consumption levels and provide methodologies to meet power budgets, while at the same time balancing the trade-offs associated with each technique used throughout the design flow. Unfortunately, there is NO single silver bullet to reduce power!

Fig. 1. A holistic approach for low-power IP and IP-based SoC design from system to final sign-off with associated trade-offs [Source: ANSYS-Apache Design]

To successfully reduce power, increase signal bandwidth, and manage cost, it is essential to simultaneously optimize across the system, chip, package, and the board. As chips migrate to sub-20 nanometer (nm) process nodes and use stacked-die technologies, the ability to model and accurately predict the power/ground noise and its impact on ICs is critical for the success of advanced low-power designs and associated systems.

Design engineers must meet power budgets for a wide variety of operating conditions.  For example, a chip for a smart phone must be tested to ensure that it meets power budget requirements in standby, dormant, charging, and shutdown modes.  A comprehensive power budgeting solution is required to accurately analyze power values in numerous operating modes (or scenarios) while running all potential applications of the system.”
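A minimal sketch of such per-mode power budgeting follows, using the smartphone operating modes mentioned above; the budget and estimate figures are placeholder values, not data from any real chip or tool.

```python
# Minimal per-mode power budget check for a hypothetical smartphone chip.
# Budgets and estimates (in milliwatts) are placeholder values.

budget_mw = {"standby": 5, "dormant": 1, "charging": 400, "shutdown": 0.1}
estimate_mw = {"standby": 4.2, "dormant": 1.3, "charging": 380, "shutdown": 0.05}

def check_budgets(budget: dict, estimate: dict) -> list:
    """Report every operating mode that exceeds its power budget."""
    failures = []
    for mode, limit in budget.items():
        value = estimate[mode]
        status = "OK" if value <= limit else "OVER BUDGET"
        if value > limit:
            failures.append(mode)
        print(f"{mode:9s}: {value:7.2f} mW (budget {limit:7.2f} mW)  {status}")
    return failures

over = check_budgets(budget_mw, estimate_mw)
if over:
    print("modes needing rework:", ", ".join(over))
```

In this invented example the dormant mode misses its budget, the kind of result that would send the team back to revisit retention strategy or leakage in that mode.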

Jon McDonald described Mentor’s approach. He highlighted the need for a feedback loop between architectural analysis and implementation. “Implementation optimizations focus on the most efficient power implementation of a specific architecture. This level of optimization can find a localized minimum in power usage, but is limited by its inability to make system-wide architectural trade-offs and run real world scenarios.

Software optimizations involve efforts by software designers to use the system hardware in the most power efficient manner. However, as the hardware is fixed there are significant limitations on the kinds of changes that can be made. Also, since the prototype is already available, completing the software becomes the limiting factor to completing the system. As well, software often has been developed before a prototype is available or is being reused from prior generations of a design. Going back and rewriting this software to optimize for power is generally not possible due to time constraints on completing the system integration.

Both of these areas of power optimization focus can be vastly improved by investing more in power analysis at the system level – before architectural decisions have been locked into an implementation. Modeling power as part of a transaction-level model provides quantitative feedback to design architects on the effect their decisions have on system power consumption. It also provides feedback to software developers regarding how efficiently they use the hardware platform. Finally, the data from the software execution on the platform can be used to refine the architectural choices made in the context of the actual software workloads.

Being able to optimize the system-level architecture with quantitative feedback tightly coupled to the workload (Figure 2) allows the impact of hardware and software decisions to be measured when those decisions are made. Thus, system-level power analysis exposes the effect of decisions on system wide power consumption, making them obvious and quantifiable to the hardware and software engineers.”

Figure 2. System Level Power Optimization (Courtesy of Mentor Graphics)
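As a rough sketch of the transaction-level power accounting described above, the following toy model charges each component an assumed energy cost per transaction plus leakage over the run time, so two candidate architectures can be compared on the same workload. The component names, transaction counts and energy numbers are all assumptions for illustration, not figures from Mentor’s tools.

```python
# Sketch of transaction-level energy accounting: each component charges
# an assumed energy cost per transaction plus leakage over the run time.
# All energy numbers are illustrative, not measured data.

COST_NJ = {"cpu": 2.0, "dram": 15.0, "accel": 5.0}     # nJ per transaction
LEAKAGE_MW = {"cpu": 30.0, "dram": 80.0, "accel": 10.0}

def workload_energy(transactions: dict, runtime_s: float) -> dict:
    """transactions: component -> transaction count for one workload run."""
    energy_mj = {}
    for comp, leak_mw in LEAKAGE_MW.items():
        dynamic_mj = transactions.get(comp, 0) * COST_NJ[comp] * 1e-6
        static_mj = leak_mw * runtime_s            # mW * s = mJ
        energy_mj[comp] = dynamic_mj + static_mj
    return energy_mj

# Two candidate architectures running the same workload.
sw_only   = workload_energy({"cpu": 5_000_000, "dram": 2_000_000}, runtime_s=0.8)
offloaded = workload_energy({"cpu": 500_000, "dram": 1_200_000,
                             "accel": 1_500_000}, runtime_s=0.3)

for name, breakdown in (("software only", sw_only), ("with accelerator", offloaded)):
    print(f"{name:17s}: total {sum(breakdown.values()):7.1f} mJ  {breakdown}")
```

Even at this crude level, the breakdown gives architects and software developers the quantitative feedback the article calls for: which component dominates, and how an architectural change shifts the total.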

Drew Wingard of Sonics underscored the advantage of having in-depth knowledge of the dynamics of Network On Chip (NOC) use.

“Achieving the required levels of power savings, especially in battery-powered SOC devices, can be simplified by exploiting the knowledge the on-chip network fabric inherently contains about the transactional state of the system and applying it to effective power management (Figure 3). Advanced on-chip networks provide the capability for hardware-controlled, safe shutdown of power domains without reliance on driver software probing the system. A hardware-controlled power management approach leveraging the on-chip network intelligence is superior to a software approach that potentially introduces race conditions and delays in power shutdown.”

Figure 3. On-Chip Network Power Management (courtesy of Sonics)

“The on-chip network has the address decoders for the system, and therefore is the first component in the system to know the target when a transaction happens. The on-chip network provides early indication to the SOC Power Manager that a transaction needs to use a resource, for example, in a domain that’s currently not being clocked or completely powered off. The Power Manager reacts very quickly and recovers domains rapidly enough that designers can afford to set up components in a normally off state (Dark Silicon) where they are powered down until a transaction tries to access them.

Today’s SOC integration is already at levels where designers cannot afford to have power to all the transistors available at the same time because of leakage. SOC designers should view the concept of Dark Silicon as a practical opportunity to achieve the highest possible power savings. Employing the intelligence of on-chip networks for active power management, SOC designers can set up whole chip regions with the power normally off and then, transparently wake up these chip domains from the hardware.”
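A small sketch of the wake-on-access idea follows: the network’s address decoder identifies the target domain of each transaction, and a power manager wakes normally-off (dark) domains on demand, powering them down again after an idle period. The address map, idle timeout and domain names are invented for illustration and do not describe Sonics’ implementation.

```python
# Sketch of wake-on-access power management: the on-chip network decodes
# the target of each transaction, and the power manager wakes dark
# domains on demand.  Address map and idle timeout are illustrative.

ADDRESS_MAP = [                       # (base, size, domain) - hypothetical
    (0x0000_0000, 0x1000_0000, "dram"),
    (0x4000_0000, 0x0010_0000, "video"),
    (0x5000_0000, 0x0010_0000, "crypto"),
]
IDLE_TIMEOUT = 100                    # cycles before a domain goes dark again

class PowerManager:
    def __init__(self):
        self.powered = {d: False for _, _, d in ADDRESS_MAP}  # dark by default
        self.idle = {d: 0 for d in self.powered}

    def decode(self, addr: int) -> str:
        for base, size, domain in ADDRESS_MAP:
            if base <= addr < base + size:
                return domain
        raise ValueError(f"unmapped address {addr:#x}")

    def transaction(self, addr: int):
        domain = self.decode(addr)
        if not self.powered[domain]:
            self.powered[domain] = True       # hardware wake, no driver call
            print(f"waking {domain}")
        self.idle[domain] = 0

    def tick(self):
        for domain in self.powered:
            if self.powered[domain]:
                self.idle[domain] += 1
                if self.idle[domain] >= IDLE_TIMEOUT:
                    self.powered[domain] = False
                    print(f"powering down idle domain {domain}")

pm = PowerManager()
pm.transaction(0x4000_0040)       # first access wakes the video domain
for _ in range(IDLE_TIMEOUT):     # no further traffic -> domain goes dark again
    pm.tick()
```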

Conclusion

The Green movement should be proud of its success in underlining the importance of energy conservation. Low power design, I am sure, was not one of its main objectives, yet the vast majority of electronic circuits today are designed with the goal of minimizing power consumption. All is possible, or nearly so, when consumers demand it and, importantly, are willing to pay for it.

Blog Review – Mon. June 02 2014

Monday, June 2nd, 2014

In case you didn’t know, DAC is upon us, and ARM has some sightseeing tips – within the confines of the show-space. Electric vehicles are being taken seriously in Europe and North America, Dassault Systemes has some manufacturing-design tips and Sonics looks back over 20 years of IP integration. By Caroline Hayes, Senior Editor.

Electric vehicles – it’s an easy sell for John Day, Mentor Graphics, but his blog has some interesting examples from Sweden of electric transport and infrastructure ideas.

Thanks are due to Leah Schuth, ARM, who can save you some shoe leather if you are in San Francisco this week. She has been very considerate and lumped together all the best bits to see at this week’s DAC. OK, the list may be a bit ARM-centric, but if you want IoT, wearable electronics and energy management, you know where to go.

We all want innovation but can the industry afford it? Hoping to instill best practice, Eric, Dassault Systemes, writes an interesting, detailed piece on design-manufacturing collaboration for a harmonious development cycle.

A tutorial on your PC is a great way to learn – in this case, the low power advantage of LPDDR4 over earlier LPDDR memory. Corrie Callenbach brings this whiteboard tutorial by Kishote Kasamsetty to our attention in Whiteboard Wednesdays—Trends in the Mobile Memory World.

A review of IP integration is presented by Drew Wingard, Sonics, who asks what has been learned over the last two decades, what matters and why.

Blog Review – Mon. May 19

Monday, May 19th, 2014

IPextreme runs down its top 10 reuse tips; Arrow questions if re-verification is necessary when using third party IP; Sonics celebrates the fact that IoT brings hardware and software teams together – whether they like it or not; and Synopsys looks at lab to silicon in 24 hours, and how the race is won. By Caroline Hayes, Senior Editor

Offering a helping hand, Warren Savage, IP-extreme, has consolidated the top 10 reasons for IP reuse failure. This informative post has some light-hearted illustrations but a serious message and real insights for IP reuse.

It’s almost like I planned this – but by coincidence (honest), Anand Shirahatti, Arrow, posts an informative blog about re-verification of externally bought IP. Despite assurances by third party vendors, there are other factors to consider, some more apparent than others.

A rather cynical Scott Seiden, Sonics, may have entered the Multicore Developers Conference, but he came away with new ideas about how IoT encourages hardware and software developers to collaborate around heterogeneous multicore technologies.

There is a lot going on in Michael Posner’s blog, with a 24-hour chip-to-lab video experience, a link to a video marveling at bringing up a chip in 24 hours, and another on reducing project risk. Finally, he relishes alternative horse power and shares an invitation to meet at DAC next month.

Blog Review – Mon. April 21 2014

Monday, April 21st, 2014

Post silicon preview; Apps to drive for; Motivate to educate; Battery warning; Break it up, bots. By Caroline Hayes, Senior Editor.

Gabe Moretti attended the Freescale Technology Forum and found the ARM Cortex-A57 Carbon Performance Analysis Kit (CPAK) that previews post silicon performance, pre-silicon.

In a considered blog post, Joel Hoffmann, Intel, looks at the top four car apps and what they mean for system designers. He knows what he is talking about: he is preparing for the panel at Open Automotive 14 – Automotive Suppliers: Collaborate or Die – in Sweden next month.

How to get the next generation of EDA-focused students to commit is the topic of a short keynote at this year’s DAC by Rob Rutenbar, professor of Computer Science, University of Illinois. Richard Goering, Cadence, reports on progress so far with industry collaboration and looks ahead.

Consider managing power in SoCs above all else, urges Scott Seiden, Sonics, who sounds a little frustrated with his cell phone.

Michael Posner, Synopsys, revels in a good fight – between robots in the FIRST student robot design competition. Engaging and educational.
