Posts Tagged ‘Sonics’


Blog Review – Tuesday, January 10, 2017

Tuesday, January 10th, 2017

Moving on from 4K and 8K, Simon Forrest, Imagination Technologies, reports on 360° video, as seen at this year’s CES in Las Vegas. That, together with High Dynamic Range (HDR), could re-energize the TV broadcasting industry in general and the set-top box in particular.

The IoT is responsible for explosive growth in smart homes with connectivity at their centre. Dan Artusi, Intel, considers what technologies and disciplines are coming together as the company introduces Intel Home Wireless Infrastructure at CES 2017.

Announcing a partnership with Renault and OSVehicle, ARM will work with the companies to develop an open source platform for cars, cities and transportation. Soshun Arai, ARM, explains how the ‘stripped down’ Twizy can release the brakes on CAN development.

Some Christmas reading has brought enlightenment to Gabe Moretti, Chip Design, as he unravels the mysteries of CEO comings and goings, and why the EDA industry could learn a thing or two from the boards of spy plane and stealth bomber manufacturers.

Still with EDA, Brian Derrick, Mentor Graphics, likens the automotive industry to sports teams, where big names dominate and capture consumers’ interest, eclipsing all others. This is changing as electric vehicles become a superpower to turbocharge the industry.

It’s always good to welcome new blogs, and Sonics delivers with its announcement that it is addressing power management. Grant Pierce, Sonics, introduces the technology and product portfolio to enhance design methods.

Caroline Hayes, Senior Editor

Blog Review – Monday Oct. 20, 2014

Monday, October 20th, 2014

OCP-onwards and upwards; infotainment in Paris; lend a hand to ARM; Cadence anticipates 10nm FinFET process node.

By Caroline Hayes, Senior Editor

“OCP: The Journey Continues”, an online tutorial from Accellera Systems Initiative, spans the past, present and future of the OCP (Open Core Protocol) IP interface socket standard in five parts. Drew Wingard draws it to our attention; among the presenters discussing OCP in SoC designs, including verification IP support, TLM 2.0 SystemC support and IP-XACT support, are Herve Alexanian, Sonics, and Wingard himself, as well as Steve Masters, Synopsys, and Prashant Karandikar, Texas Instruments.

Elektrobit and Nuance have integrated voice with natural language understanding (NLU) in the virtual cockpit of Audi’s TT Roadster, which is being shown at the Paris Motor Show. John Day, Mentor Graphics, marvels at the results and speculates on its practical uses.

A plea from ARM’s Brad Nemire to the community, which celebrates its first anniversary this month. He invites comment on the community and proposed changes, all designed to make the community interactive and responsive. He promises the survey will only take five minutes of your time.

An informative review of technical presentations at the recent TSMC Open Innovation Platform (OIP) Ecosystem Forum prepares the reader for the 10nm FinFET process node. Richard Goering, Cadence, includes some graphics from two keynotes for those who could not make the Forum, with some warnings of what it will mean for design.

Low Power Is The Norm, Not The Exception

Friday, September 26th, 2014

Gabe Moretti, Senior Editor

The issue of power consumption took front stage with the introduction of portable electronic devices.  It became necessary for the semiconductor industry, and thus the EDA industry, to develop new methods and new tools to confront the challenges and provide solutions.  Thus Low Power became a separate segment of the industry.  EDA vendors developed tools specifically addressing the problem of minimizing power consumption at the architecture, synthesis, and pre-fabrication stages of IC development.  Companies instituted new design methodologies that focused specifically on power distribution and consumption.

Today the majority of devices are designed and fabricated with low power as a major requirement.  As we progress toward a world that uses more wearable devices and more remote computational capabilities, low power consumption is a must.  I am not sure that dedicating a segment to low power is relevant: it makes more sense to have a sector of the industry devoted to unrestricted power use instead.

The contributions I received in preparing this article are explicit in supporting this point of view.

General Considerations

Mary Ann White, Director of Product Marketing, Galaxy Design Platform, at Synopsys concurs with my position.  She says: “Power conservation occurs everywhere, whether in mobile applications, servers or even plug-in-the-wall items.  With green initiatives and the ever-increasing cost of power, the ability to save power for any application has become very important.  In real-world applications for home consumer items (e.g. stereo equipment, set-top boxes, TVs, etc.), it used to be okay to have items go into standby mode. But, that is no longer enough when smart-plug strips that use sensors to automatically turn off any power being supplied after a period of non-usage are now populating many homes and Smart Grids are being deployed by utility companies. This trend follows what commercial companies have done for many years now, namely using motion sensors for efficient energy management throughout the day.”

Vic Kulkarni, Senior VP and GM, RTL Power Business Unit, at Apache Design, Inc., a wholly-owned subsidiary of ANSYS, Inc., approaches the problem from a different point of view but also points to wasted power.

“Dynamic power consumed by SoCs continues to rise in spite of strides made in reducing the static power consumption in advanced technology nodes.

There are many reasons for dynamic power consumption waste – redundant data signal activity when clocks are shut off, excessive margin in the library characterization data leading to inefficient implementation, large active logic cones feeding deselected mux inputs, lack of sleep or standby mode for analog circuits, and even insufficient software-driven controls to shut down portions of the design. Another aspect is the memory sub-system organization. Once the amount of memory required is known, how should it be partitioned? What types of memories should be used? How often do they need to be accessed? All of these issues greatly affect power consumption. Therefore, designers must perform power-performance-area tradeoffs for various alternative architectures to make an informed decision.”
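
To make the power-performance-area tradeoff concrete, the sketch below applies the standard first-order dynamic power relation, P = alpha * C * V^2 * f, to two hypothetical memory organizations of the same capacity. The relation is textbook material rather than anything from Kulkarni’s contribution, and every number in it is invented purely for illustration.

    # Toy illustration of the dynamic-power relationship behind these tradeoffs.
    # The configurations and figures are invented, not taken from the article.

    def dynamic_power(alpha, c_eff_farads, vdd_volts, freq_hz):
        """First-order dynamic power: P = alpha * C * V^2 * f."""
        return alpha * c_eff_farads * vdd_volts ** 2 * freq_hz

    # One large monolithic SRAM vs. four smaller banks where only one bank is
    # active per access (lower switched capacitance per access).
    configs = {
        "monolithic_1x64KB": {"alpha": 0.20, "c_eff": 40e-12, "vdd": 0.9, "freq": 500e6},
        "banked_4x16KB":     {"alpha": 0.20, "c_eff": 14e-12, "vdd": 0.9, "freq": 500e6},
    }

    for name, p in configs.items():
        mw = dynamic_power(p["alpha"], p["c_eff"], p["vdd"], p["freq"]) * 1e3
        print(f"{name}: ~{mw:.2f} mW dynamic")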

The ubiquity of low power designs was also pointed out by Guillaume Boillet, Technical Marketing Manager, at Atrenta Inc.  He told me that: “Motivations for reducing the power consumed by chips are multiple. They range from purely technical considerations (i.e. ensuring integrity and longevity of the product), to differentiation factors (i.e. extend battery life or reduce cost of cooling) to simply being more socially responsible. As a result, power management techniques, which were once only deployed for wireless applications, have now become ubiquitous. The vast majority of IC designers are now making a conscious effort to configure their RTL for efficient power partitioning and to reduce power consumption, in particular the dynamic component, which is increasingly becoming more dominant at advanced technology nodes.”  Of course, engineers have found from experience that minimizing power is not easy.  Guillaume continued: “The task is vast and far from being straight-forward. First, there is a multitude of techniques which are available to designers: Power gating, use of static and variable voltage domains, Dynamic Voltage and Frequency Scaling (DVFS), biasing, architectural tradeoffs, coarse and fine-grain clock gating, micro-architectural optimizations, memory management, and light sleep are only some examples. When you try combining all of these, you soon realize the permutations are endless. Second, those techniques cannot be applied blindly and can have serious implications during floor planning, timing convergence activities, supply distribution, Clock Tree Synthesis (CTS), Clock Domain Crossing management, Design For Test (DFT) or even software development.”

Low power considerations have also been at the forefront of IP designs.  Dr. Roddy Urquhart, Vice President of Marketing at Cortus, a licensor of controllers, noted that: “A major trend in the electronics industry now is the emergence of connected intelligent devices implemented as systems-on-chip (SoC) – the ‘third wave’ of computational devices.  This wave consists of the use of locally connected smart sensors in vehicles, the emergence of “smart homes” and “smart buildings” and the growing Internet of Things.  The majority of these types of devices will be manufactured in large volumes, and will face stringent power constraints. While users may accept charging their smartphones on a daily basis, many sensor-based devices for industrial applications, environmental monitoring or smart metering rely on the battery to last months or even a number of years. Achieving this requires a focus on radically reducing power and a completely different approach to SoC design.”

Architectural Considerations

Successful power management starts at the architectural level.  Designers cannot decide on a tactic to conserve power once the system has already been designed, since power consumption is the result of architectural decisions aimed at meeting functional requirements.  These tradeoffs are made very early in the development of an IC.

Jon McDonald, Senior Technical Marketing Engineer, at Mentor Graphics noted that: “Power analysis needs to begin at the system level in order to fix a disconnect between the measurement of power and the decisions that affect power consumption. The current status quo forces architectural decisions and software development to typically occur many months before implementation-based power measurement feedback is available. We’ve been shooting in the dark too long.  The lack of visibility into the impact of decisions while they are being made incurs significant hidden costs for most hardware and software engineers. System engineers have no practical way of measuring the impact of their design decisions on the system power consumption. Accurate power information is usually not available until RTL implementation, and the bulk of power feedback is not available until the initial system prototypes are available.”

Patrick Sheridan, Senior Staff Product Marketing Manager, Solutions Group, at Synopsys went into more details.

“Typical questions that the architect can answer are:

1) How to partition the SoC application into fixed hardware accelerators and software executing on processors, determining the optimal number and type of each CPU, GPU, DSP and accelerator.

2) How to partition SoC components into a set of power domains to adjust voltage and frequency at runtime in order to save power when components are not needed.

3) How to confirm the expected performance/power curve for the optimal architecture.

To help expand industry adoption, the IEEE 1801 Working Group’s charter has been updated recently to include extending the current UPF low power specification for use in system level power modeling. A dedicated system level power sub-committee of the 1801 (UPF) Working Group has been formed, led by Synopsys, which includes good representation from system and power architects from the major platform providers. The intent is to extend the UPF language where necessary to support IP power modeling for use in energy aware system level design.”  But he pointed out that more is needed from the software developers.

“In addition, power efficiency continues to be a major product differentiator – and quality concern – for the software manager. Power management functions are distributed across firmware, operating system, and application software in a multi-layered framework, serving a wide variety of system components – from multicore CPUs to hard-disks, sensors, modems, and lights – each consuming power when activated. Bringing up and testing power management software is becoming a major bottleneck in the software development process.

Virtual prototypes for software development enable the early bring-up and test of power management software and enable power-aware software development, including the ability to:

- Quickly reveal fundamental problems such as faulty regulation of clocks and voltages

- Gain visibility for software developers, to make them aware of problems that will cause major changes in power consumption

- Simulate real world scenarios and systematically test corner cases for problems that would otherwise only be revealed in field operation

This enables software developers to understand the consequences of their software changes on power sooner, improving the user-experience and accelerating software development schedules.”

Drew Wingard, CTO at Sonics, also answered my question about the importance of architectural analysis of power consumption.

“All the research shows that the most effective place to do power optimization is at the architectural level where you can examine, at the time of design partitioning, what are the collections of components which need to be turned on or can afford to be turned off. Designers need to make power partitioning choices from a good understanding of both the architecture and the use cases they are trying to support on that architecture. They need tooling that combines the analysis models together in a way that allows them to make effective tradeoffs about partitioning versus design/verification cost versus power/energy use.”

Dr. Urquhart underscored the importance of architectural planning in the development of licensable IP.  “Most ‘third wave’ computational devices will involve a combination of sensors, wireless connectivity and digital control and data processing. Managing power will start at the system level identifying what parts of the device need to be always on or always listening and which parts can be switched off when not needed. Then individual subsystems need to be designed in a way that is power efficient.

A minimalist 32-bit core saves silicon area and in smaller geometries also helps reduce static power. In systems with more complex firmware the power consumed by memory is greater than the power in the processor core. Thus a processor core needs to have an efficient instruction set so that the size of the instruction memory is minimized. However, an overly complex instruction set would result in good code density but a large processor core. Thus overall system power efficiency depends on balancing power in the processor core and memory.”
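
A quick back-of-the-envelope calculation illustrates this balance between core size and instruction memory. The figures below are placeholders chosen only to show the effect, not Cortus data: a denser instruction set can lower total power even if the core itself grows slightly.

    # Toy arithmetic for the core-vs-memory power balance described above.
    # All numbers are invented placeholders.

    def system_power(core_mw, bytes_of_code, uw_per_kb_leak, mem_dynamic_mw):
        """Total = core power + instruction-memory leakage + memory dynamic power."""
        mem_leak_mw = (bytes_of_code / 1024.0) * uw_per_kb_leak / 1000.0
        return core_mw + mem_leak_mw + mem_dynamic_mw

    # A tiny core with sparse code vs. a slightly larger core whose denser
    # instruction set halves the instruction memory footprint.
    small_core = system_power(core_mw=0.8, bytes_of_code=64 * 1024,
                              uw_per_kb_leak=5.0, mem_dynamic_mw=1.2)
    denser_isa = system_power(core_mw=1.0, bytes_of_code=32 * 1024,
                              uw_per_kb_leak=5.0, mem_dynamic_mw=0.8)

    print(f"small core, sparse code : {small_core:.2f} mW")
    print(f"larger core, dense code : {denser_isa:.2f} mW")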

Implementation Considerations

Although there is still a need for new and more powerful architectural tools for power planning, implementation tools that help designers deal with issues of power distribution and use are reaching maturity and can be counted on by engineers as reliable.

Guillaume Boillet observed that: “Fine-grain sequential clock gating and removal of redundant memory accesses are techniques that are now mature enough for EDA tools to decide what modifications are best suited based on specific usage scenarios (simulation data). For these techniques, it is possible to generate optimized RTL automatically, while guaranteeing its equivalence vs. the original RTL, thanks to formal techniques. EDA tools can even prevent modifications that generate new unsynchronized crossings and ensure proper coding style provided that they have a reliable CDC and lint engine.”

Vic Kulkarni provided me with an answer based on sound and detailed technical theory that led to the following: “There are over 20 techniques to reduce power consumption which must be employed during all the design phases, from system level (Figure 1) and RTL to gate-level sign-off, to model and analyze power consumption levels and provide methodologies to meet power budgets, while at the same time doing the balancing act of managing the trade-offs associated with each technique used throughout the design flow. Unfortunately there is NO single silver bullet to reduce power!

Fig. 1. A holistic approach for low-power IP and IP-based SoC design from system to final sign-off with associated trade-offs [Source: ANSYS-Apache Design]

To successfully reduce power, increase signal bandwidth, and manage cost, it is essential to simultaneously optimize across the system, chip, package, and the board. As chips migrate to sub-20 nanometer (nm) process nodes and use stacked-die technologies, the ability to model and accurately predict the power/ground noise and its impact on ICs is critical for the success of advanced low-power designs and associated systems.

Design engineers must meet power budgets for a wide variety of operating conditions.  For example, a chip for a smart phone must be tested to ensure that it meets power budget requirements in standby, dormant, charging, and shutdown modes.  A comprehensive power budgeting solution is required to accurately analyze power values in numerous operating modes (or scenarios) while running all potential applications of the system.”
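
As a simple illustration of the mode-by-mode budgeting described here, the sketch below compares power estimates against budgets for the modes named in the smart phone example. The budget and estimate figures are hypothetical, not taken from any contributor.

    # Minimal sketch of power budgeting across operating modes.
    # Budgets and estimates are hypothetical numbers in milliwatts.

    budgets_mw   = {"standby": 5.0, "dormant": 1.0, "charging": 250.0, "shutdown": 0.1}
    estimates_mw = {"standby": 6.2, "dormant": 0.7, "charging": 235.0, "shutdown": 0.08}

    for mode, budget in budgets_mw.items():
        est = estimates_mw[mode]
        status = "OK" if est <= budget else "OVER BUDGET"
        print(f"{mode:9s}: estimate {est:7.2f} mW / budget {budget:7.2f} mW -> {status}")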

Jon McDonald described Mentor’s approach.  He highlighted the need for a feedback loop between architectural analysis and implementation. “Implementation optimizations focus on the most efficient power implementation of a specific architecture. This level of optimization can find a localized minimum power usage, but is limited by its inability to make system-wide architectural trade-offs and run real world scenarios.

Software optimizations involve efforts by software designers to use the system hardware in the most power efficient manner. However, as the hardware is fixed there are significant limitations on the kinds of changes that can be made. Also, since the prototype is already available, completing the software becomes the limiting factor to completing the system. As well, software often has been developed before a prototype is available or is being reused from prior generations of a design. Going back and rewriting this software to optimize for power is generally not possible due to time constraints on completing the system integration.

Both of these areas of power optimization focus can be vastly improved by investing more in power analysis at the system level – before architectural decisions have been locked into an implementation. Modeling power as part of a transaction-level model provides quantitative feedback to design architects on the effect their decisions have on system power consumption. It also provides feedback to software developers regarding how efficiently they use the hardware platform. Finally, the data from the software execution on the platform can be used to refine the architectural choices made in the context of the actual software workloads.

Being able to optimize the system-level architecture with quantitative feedback tightly coupled to the workload (Figure 2) allows the impact of hardware and software decisions to be measured when those decisions are made. Thus, system-level power analysis exposes the effect of decisions on system wide power consumption, making them obvious and quantifiable to the hardware and software engineers.”

Figure 2. System Level Power Optimization (Courtesy of Mentor Graphics)
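
To give a feel for what modeling power as part of a transaction-level model can mean, here is a deliberately simplified sketch that accumulates energy per component as a workload trace plays out. It is a conceptual illustration only; the component names, states and power figures are invented and do not represent Mentor’s tooling.

    # Simplified transaction-level power accounting driven by a workload trace.
    # All components, states and numbers are hypothetical.

    class Component:
        def __init__(self, name, power_mw_by_state):
            self.name = name
            self.power_mw_by_state = power_mw_by_state  # e.g. {"active": 120, "idle": 4}
            self.energy_uj = 0.0

        def advance(self, state, duration_us):
            """Accumulate energy for 'duration_us' microseconds spent in 'state'."""
            self.energy_uj += self.power_mw_by_state[state] * duration_us / 1000.0

    cpu = Component("cpu", {"active": 120.0, "idle": 4.0})
    dsp = Component("dsp", {"active": 60.0, "idle": 1.5})

    # A tiny workload trace: (component, state, duration in microseconds).
    trace = [(cpu, "active", 400), (dsp, "idle", 400),
             (cpu, "idle", 600), (dsp, "active", 600)]

    for comp, state, dur in trace:
        comp.advance(state, dur)

    for comp in (cpu, dsp):
        print(f"{comp.name}: {comp.energy_uj:.1f} uJ over the trace")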

Drew Wingard of Sonics underscored the advantage of having in-depth knowledge of the dynamics of Network On Chip (NOC) use.

“Achieving the required levels of power savings, especially in battery-powered SOC devices, can be simplified by exploiting knowledge the on-chip network fabric inherently contains about the transactional state of the system and applying it to effective power management (Figure 3). Advanced on-chip networks provide the capability for hardware-controlled, safe shutdown of power domains without reliance on driver software probing the system. A hardware-controlled power management approach leveraging the on-chip network intelligence is superior to a software approach that potentially introduces race conditions and delays in power shut down.”

Figure 3. On-Chip Network Power Management (courtesy of Sonics)

“The on-chip network has the address decoders for the system, and therefore is the first component in the system to know the target when a transaction happens. The on-chip network provides early indication to the SOC Power Manager that a transaction needs to use a resource, for example, in a domain that’s currently not being clocked or completely powered off. The Power Manager reacts very quickly and recovers domains rapidly enough that designers can afford to set up components in a normally off state (Dark Silicon) where they are powered down until a transaction tries to access them.

Today’s SOC integration is already at levels where designers cannot afford to have power to all the transistors available at the same time because of leakage. SOC designers should view the concept of Dark Silicon as a practical opportunity to achieve the highest possible power savings. Employing the intelligence of on-chip networks for active power management, SOC designers can set up whole chip regions with the power normally off and then, transparently wake up these chip domains from the hardware.”
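
The wake-on-access behavior Wingard describes can be pictured with a toy model: the network’s address decode identifies the target domain first, and the power manager brings that domain up before the transaction is delivered. The sketch below is a conceptual illustration of that idea only, not a representation of Sonics’ hardware.

    # Toy model of hardware "wake on access" for normally-off (Dark Silicon) domains.
    # Domain names and behavior are illustrative only.

    class PowerManager:
        def __init__(self):
            self.domain_on = {}

        def register(self, domain, initially_on=False):
            self.domain_on[domain] = initially_on   # Dark Silicon: default off

        def ensure_on(self, domain):
            """Called when the fabric decodes a transaction targeting 'domain'."""
            if not self.domain_on[domain]:
                print(f"power manager: waking domain '{domain}'")
                self.domain_on[domain] = True

    def route_transaction(pm, target_domain, payload):
        # The network's address decode knows the target first, so it can request
        # power-up before the transaction is delivered.
        pm.ensure_on(target_domain)
        print(f"delivering {payload!r} to '{target_domain}'")

    pm = PowerManager()
    pm.register("video_codec")                     # normally-off region
    pm.register("always_on_io", initially_on=True)

    route_transaction(pm, "always_on_io", "status read")
    route_transaction(pm, "video_codec", "frame descriptor")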

Conclusion

The Green movement should be proud of its success in underscoring the importance of energy conservation.  Low power design, I am sure, was not one of its main objectives, yet the vast majority of electronic circuits today are designed with the goal of minimizing power consumption.  All is possible, or nearly so, when consumers demand it and, importantly, are willing to pay for it.

Blog Review – Mon. June 02 2014

Monday, June 2nd, 2014

In case you didn’t know, DAC is upon us, and ARM has some sightseeing tips – within the confines of the show-space. Electric vehicles are being taken seriously in Europe and North America, Dassault Systemes has some manufacturing-design tips and Sonics looks back over 20 years of IP integration. By Caroline Hayes, Senior Editor.

Electric vehicles – it’s an easy sell for John Day, Mentor Graphics, but his blog has some interesting examples from Sweden of electric transport and infrastructure ideas.

Thanks are due to Leah Schuth, ARM, who can save you some shoe leather if you are in San Francisco this week. She has been very considerate and lumped together all the best bits to see at this week’s DAC. OK, the list may be a bit ARM-centric, but if you want IoT, wearable electronics and energy management, you know where to go.

We all want innovation but can the industry afford it? Hoping to instill best practice, Eric, Dassault Systemes, writes an interesting, detailed piece on design-manufacturing collaboration for a harmonious development cycle.

A tutorial on your PC is a great way to learn – in this case, the low power advantage of LPDDR4 over earlier LPDDR memory. Corrie Callenbach brings this whiteboard tutorial by Kishote Kasamsetty to our attention in Whiteboard Wednesdays—Trends in the Mobile Memory World.

A review of IP integration is presented by Drew Wingard, Sonics, who asks what has been learned over the last two decades, what matters and why.

Blog Review – Mon. May 19

Monday, May 19th, 2014

IPextreme runs down its top 10 reuse tips; Arrow questions if re-verification is necessary when using third party IP; Sonics celebrates the fact that IoT brings hardware and software teams together – whether they like it or not; and Synopsys looks at lab to silicon in 24 hours, and how the race is won. By Caroline Hayes, Senior Editor

Offering a helping hand, Warren Savage, IPextreme, has consolidated the top 10 reasons for IP reuse failure. This informative post has some light-hearted illustrations but a serious message and real insights for IP reuse.

It’s almost like I planned this – but by coincidence (honest), Anand Shirahatti, Arrow, posts an informative blog about re-verification of externally bought IP. Despite assurances by third party vendors, there are other factors to consider, some more apparent than others.

A rather cynical Scott Seiden, Sonics, may have entered the Multicore Developers Conference, but he came away with new ideas about how IoT encourages hardware and software developers to collaborate around heterogeneous multicore technologies.

There is a lot going on in Michael Posner’s blog, with a 24-hour chip-to-lab video experience, a link to a video marveling at bringing up a chip in 24 hours, and another on reducing project risk. Finally, he relishes alternative horse power and shares an invitation to meet at DAC next month.

Blog Review – Mon. April 21 2014

Monday, April 21st, 2014

Post silicon preview; Apps to drive for; Motivate to educate; Battery warning; Break it up, bots. By Caroline Hayes, Senior Editor.

Gabe Moretti attended the Freescale Technology Forum and found the ARM Cortex-A57 Carbon Performance Analysis Kit (CPAK) that previews post silicon performance, pre-silicon.

In a considered blog post, Joel Hoffmann, Intel, looks at the top four car apps and what they mean for system designers. He knows what he is talking about: he is preparing for the panel at Open Automotive 14 – Automotive Suppliers: Collaborate or Die in Sweden next month.

How to get the next generation of EDA-focused students to commit is the topic of a short keynote at this year’s DAC by Rob Rutenbar, professor of Computer Science, University of Illinois. Richard Goering, Cadence reports on progress so far with industry collaboration and looks ahead.

Consider managing power in SoCs above all else, urges Scott Seiden, Sonics, who sounds a little frustrated with his cell phone.

Michael Posner, Synopsys, revels in a good fight – between robots in the FIRST student robot design competition. Engaging and educational.

Accellera Systems Initiative has taken over OCP-IP

Tuesday, October 15th, 2013

By Gabe Moretti

Accellera has been taking over multiple standards organizations in the industry for several years and this is only the latest.  The acquisition includes the current OCP 3.0 standard and supporting infrastructure for reuse of IP blocks used in semiconductor design. OCP-IP and Accellera have been working closely together for many years, but OCP-IP lost corporate and member financial support steadily over the past five years and membership virtually flatlined. Combining the organizations may be the best way to continue to address interoperability of IP design reuse and jumpstart adoption.

“Our acquisition of OCP assets benefits the worldwide electronic design community by leveraging our technical strengths in developing and delivering standards,” said Shishpal Rawat, Accellera Chair. “With its broad and diverse member base, OCP-IP will complement Accellera’s current portfolio and uniquely position us to further develop standards for the system-level design needs of the electronics industry.”

OCP-IP was originally started by Sonics, Inc. in December 2001 as a means to proliferate its network-on-chip approach.  Sonics CTO Drew Wingard has been a primary driver of the organization.  It has long been perceived as the primary marketing tool of the company and it will be interesting to see how the company (which has been on and off the IPO trail several times since its founding) fares without being the “big dog” in the discussion.

A comprehensive list of FAQs about the asset acquisition is available.

Power Analysis and Management

Thursday, August 25th, 2016

Gabe Moretti, Senior Editor

As transistor sizes shrink and device structures change, power management becomes more critical.  As I was polling various EDA vendors, it became clear that most were offering solutions for the analysis of power requirements and software-based methods to manage power use; at least one was offering a hardware-based solution.  I struggled to find a way to coherently present their responses to my questions, but decided that extracting significant pieces of their written responses would not be fair.  So, I organized a type of virtual round table, and I will present their complete answers in this article.

The companies submitting responses are: Cadence, Flex Logix, Mentor, Silvaco, and Sonics.  Some of the companies presented their own understanding of the problem.  I am including that portion of their contribution as well to provide better context for the description of the solution.

Cadence

Krishna Balachandran, product management director for low power solutions at Cadence, provided the following contribution.

Not too long ago, low power design and verification involved coding a power intent file and driving a digital design from RTL to final place-and-route and having each tool in the flow understand and correctly and consistently interpret the directives specified in the power intent file. Low power techniques such as power shutdown, retention, standby and Dynamic Voltage and Frequency Scaling (DVFS) had to be supported in the power formats and EDA tools. Today, the semiconductor industry has coalesced around CPF and the IEEE 1801 standard that evolved from UPF and includes the CPF contributions as well. However, this has not equated to problem solved and case closed. Far from it! Challenges abound. Power reduction and low power design which was the bailiwick of the mobile designers has moved front-and-center into almost every semiconductor design imaginable – be it a mixed-signal device targeting the IoT market or large chips targeting the datacenter and storage markets. With competition mounting, differentiation comes in the form of better (lower) power-consuming end-products and systems.

There is an increasing realization that power needs to be tackled at the earliest stages in the design cycle. Waiting to measure power after physical implementation is usually a recipe for multiple, non-converging iterations because power is fundamentally a trade-off vs. area or timing or both. The traditional methodology of optimizing for timing and area first and then dealing with power optimization is causing power specifications to be non-convergent and product schedules to slip. However, having a good handle on power at the architecture or RTL stage of design is not a guarantee that the numbers will meet the target after implementation. In other words, it is becoming imperative to start early and stay focused on managing power at every step.

It goes without saying that what can be measured accurately can be well-optimized. Therefore, the first and necessary step to managing power is to get an accurate and consistent picture of power consumption from RTL to gate level. Most EDA flows in use today use a combination of different power estimation/analysis tools at different stages of the design. Many of the available power estimation tools at the RTL stage of design suffer from inaccuracies because physical effects like timing, clock networks, library information and place-and-route optimizations are not factored in, leading to overly optimistic or pessimistic estimates. Popular implementation tools (synthesis and place-and-route) perform optimizations based on measures of power using built-in power analysis engines. There is poor correlation between these disparate engines leading to unnecessary or incorrect optimizations. In addition, mixed EDA-vendor flows are plagued by different algorithms to compute power, making the designer’s task of understanding where the problem is and managing it much more complicated. Further complications arise from implementation algorithms that are not concurrently optimized for power along with area and timing. Finally, name-mapping issues prevent application of RTL activity to gate-level netlists, increasing the burden on signoff engineers to re-create gate-level activity to avoid poor annotation and incorrect power results.

To get a good handle on the power problem, the industry needs a highly accurate but fast power estimation engine at the RTL stage that helps evaluate and guide the design’s micro-architecture. That requires the tool to be cognizant of physical effects – timing, libraries, clock networks, even place-and-route optimizations at the RTL stage. To avoid correlation problems, the same engine should also measure power after synthesis and place-and-route. An additional requirement to simplify and shorten the design flow is for such a tool to be able to bridge the system-design world with signoff and to help apply RTL activity to a gate-level netlist without any compromise. Implementation tools, such as synthesis and place-and-route, need to have a “concurrent power” approach – that is, consider power as a fundamental cost-factor in each optimization step side-by-side with area and timing. With access to such tools, semiconductor companies can put together flows that meet the challenges of power at each stage and eliminate iterations, leading to a faster time-to-market.

Flex Logix

Geoff Tate, Co-founder and CEO of Flex Logix, is the author of the following contribution.  Our company is a relatively new entry in the embedded FPGA market.  It uses TSMC as a foundry.  Microcontrollers and IoT devices being designed in TSMC’s new ultra-low power 40nm process (TSMC 40ULP) need

•             The flexibility to reconfigure critical RTL, such as I/O

•          The ability to achieve performance at lowest power

Flex Logix has designed a family of embedded FPGAs to meet this need. The validation chip to prove out the IP is in wafer fab now.

Many products fabricated with this process are battery operated: there are brief periods of performance-sensitive activity interspersed with long periods of very low power mode while waiting for an interrupt.

Flex Logix’s embedded FPGA core provides options to enable customers to optimize power and performance based on their application requirements.

To address this requirement, the following architectural enhancements were included in the embedded FPGA core:

•             Power Management containing 5 different power states (sketched after the list below):

  • Off state where the EFLX core is completely powered off.
  • Deep Sleep state where VDDH supply to the EFLX core can be lowered from nominal of 0.9V/1.1V to 0.5V while retaining state
  • Sleep state, gates the supply (VDDL) that controls all the performance logic such as the LUTs, DSP and interconnect switches of the embedded FPGA while retaining state. The latency to exit Sleep is shorter than that to exit from Deep Sleep
  • Idle state, idles the clocks to cut power but is ready to move into dynamic mode quicker than the Sleep state
  • Dynamic state, where power is the highest of the five power states but latency is the shortest; used during periods of performance-sensitive activity
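
The sketch below lays these five states out as a simple table, ordered by how quickly the core can return to full operation. The exit-latency and relative-power numbers are placeholders used to show the shape of the tradeoff; they are not Flex Logix specifications.

    # The five EFLX power states as a simple table; all figures are placeholders.

    from enum import Enum

    class EflxPowerState(Enum):
        DYNAMIC    = 1   # full power, shortest latency
        IDLE       = 2   # clocks idled
        SLEEP      = 3   # VDDL gated, state retained
        DEEP_SLEEP = 4   # VDDH lowered while retaining state
        OFF        = 5   # core fully powered off

    # Hypothetical characteristics: (exit latency in microseconds, relative power).
    characteristics = {
        EflxPowerState.DYNAMIC:    (0.0, 1.00),
        EflxPowerState.IDLE:       (0.1, 0.30),
        EflxPowerState.SLEEP:      (1.0, 0.05),
        EflxPowerState.DEEP_SLEEP: (10.0, 0.01),
        EflxPowerState.OFF:        (100.0, 0.00),
    }

    for state, (latency_us, rel_power) in characteristics.items():
        print(f"{state.name:10s}: exit latency ~{latency_us:6.1f} us, relative power {rel_power:.2f}")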

The other architectural features available in the EFLX-100 embedded FPGA to optimize power-performance are:

•             State retention for all flip flops and configuration bits at voltages well below the operating range.

•          Ability to directly control body bias voltage levels (Vbp, Vbn). Controlling the body bias further controls leakage power

•             5 combinations of threshold voltage (VT) devices to optimize power and performance for the static/performance logic of the embedded FPGA. The higher the threshold voltage (eHVT, HVT), the lower the leakage power and the lower the performance; the lower the threshold voltage (SVT), the higher the leakage and the higher the performance.

•             eHVT/eHVT

•             HVT/HVT

•             HVT/SVT

•             eHVT/SVT

•             SVT/SVT

In addition to the architectural features, various EDA flows and tools are used to optimize the Power, Performance and Area (PPA) of the Flex Logix embedded FPGA:

•             The embedded FPGA was implemented using a combination of standard floor-planning and P&R tools to place and route the configuration cells, DSP and LUT macros and network fabric switches. This resulted in higher density, reducing IR drops and the need for larger drive strengths, thereby optimizing power

•          Design and use of longer (non-minimum) channel length devices, which further helps reduce leakage power with minimal to no impact on performance

•          The EFLX-100 core was designed with an optimized power grid to effectively use metal resources for power and signal routing. Optimal power grids reduce DC/AC supply drops, which further increases performance.

Mentor

Arvind Narayanan, Architect, Product Marketing, Mentor Graphics, contributed the following viewpoint.

One of the biggest challenges in IC design at advanced nodes is the complexity inherent in effective power management. Whether the goal is to reduce on-chip power dissipation or to provide longer battery life, power is taking its place alongside timing and area as a critical design dimension.

While low-power design starts at the architectural level, the low-power design techniques continue through RTL synthesis and place and route. Digital implementation tools must interpret the power intent and implement the design correctly, from power aware RTL synthesis, placement of special cells, routing and optimization across power domains in the presence of multiple corners, modes, and power states.

With the introduction of every new technology node, existing power constraints are also tightened to optimize power consumption and maximize performance. 3D transistors (FinFETs) that were introduced at smaller technology nodes have higher input pin capacitance compared to their planar counterparts, resulting in the dynamic power component being higher than the leakage component.

Power Reduction Strategies

A good strategy to reduce power consumption is to perform power optimization at multiple levels during the design flow, including software optimization, architecture selection, RTL-to-GDS implementation and process technology choices. The biggest power savings are usually obtained early in the development cycle, at the ESL and RTL stages (Fig. 1). During the physical implementation stage there is less opportunity for power optimization in comparison, and hence choices made earlier in the design flow are critical. Technology selection such as the device structure (FinFET, planar), choice of device material (HiK, SOI) and technology node selection all play a key role.

Figure 1. Power reduction opportunities at different stages of the design flow

Architecture selection

Studies have shown that only optimizations applied early in the design cycle, when a design’s architecture is not yet fixed, have the potential for radical power reduction.  To make intelligent decisions in power optimization, the tools have to simultaneously consider all factors affecting power, and be applied early in the design cycle. Finding the best architecture enables designers to properly balance functionality, performance and power metrics.

RTL-to-GDS Power Reduction

There are a wide variety of low-power optimization techniques that can be utilized during RTL to GDS implementation for both dynamic and leakage power reduction. Some of these techniques are listed below.

RTL Design Space Exploration

During the early stages of the design, the RTL can be modified to employ architectural optimizations, such as replacing a single instantiation of a high-powered logic function with multiple instantiations of low-powered equivalents. A power-aware design environment should facilitate “what-if” exploration of different scenarios to evaluate the area/power/performance tradeoffs.

Multi-VDD Flow

Multi-voltage design, a popular technique to reduce total power, is a complex task because many blocks are operating at different voltages, or intermittently shut off. Level shifter and isolation cells need to be used on nets that cross domain boundaries if the supply voltages are different or if one of the blocks is being shut down. DVFS is another technique where the supply voltage and frequency can vary dynamically to save power. Power gating using multi-threshold CMOS (MTCMOS) switches involves switching off certain portions of an IC when that functionality is not required, then restoring power when that functionality is needed.

Figure 2. Multi-voltage layout shown in a screen shot from the Nitro-SoC™ place and route system.
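
The bookkeeping behind a multi-voltage flow can be illustrated with a toy rule check: a net that crosses between different supply voltages needs a level shifter, and a net leaving a domain that can be shut off needs isolation. The domain table below is invented for illustration; a real flow derives this information from the power intent (UPF/CPF) rather than from hand-written tables.

    # Toy check for special cells on nets that cross power domains.
    # Domains, voltages and nets are invented for illustration.

    domains = {
        "cpu":    {"vdd": 0.9, "switchable": True},
        "periph": {"vdd": 1.2, "switchable": False},
        "aon":    {"vdd": 1.2, "switchable": False},   # always-on
    }

    nets = [("cpu", "periph"), ("periph", "aon"), ("cpu", "aon")]

    for src, dst in nets:
        needs = []
        if domains[src]["vdd"] != domains[dst]["vdd"]:
            needs.append("level shifter")
        if domains[src]["switchable"]:
            needs.append("isolation cell")
        print(f"{src} -> {dst}: " + (", ".join(needs) if needs else "no special cell"))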

MCMM Based Power Optimization

Because each voltage supply and operational mode implies different timing and power constraints on the design, multi-voltage methodologies cause the number of design corners to increase exponentially with the addition of each domain or voltage island. The best solution is to analyze and optimize the design for all corners and modes concurrently. In other words, low-power design inherently requires true multi-corner/multi-mode (MCMM) optimization for both power and timing. The end result is that the design should meet timing and power requirements for all the mode/corner scenarios.
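
A rough count shows why the corner and mode space grows so quickly. The figures below are arbitrary examples, not from Mentor, but they convey the scale of a concurrent MCMM closure problem.

    # Back-of-the-envelope scenario count for a multi-voltage design.
    # Assumes each voltage island can independently be on or off; all counts are examples.

    process_corners = 3        # e.g. slow / typical / fast
    temperatures    = 2        # e.g. -40C / 125C
    power_modes     = 4        # e.g. run / standby / dormant / shutdown
    voltage_islands = 3

    scenarios = process_corners * temperatures * power_modes * (2 ** voltage_islands)
    print(f"mode/corner scenarios to close concurrently: {scenarios}")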

FinFET aware Power Optimization

FinFET aware power optimization flow requires technologies such as activity driven placement, multi-bit flop support, clock data optimization, interleaved power optimization and activity driven routing to ensure that the dynamic power reduction is optimal. The tools should be able to use transforms with objective costing to make trade-offs between dynamic power, leakage power, timing, and area for best QoR.

Using the strategy of optimizing power at all stages of the design flow, especially at the architecture stage, is critical for optimal power reduction.  Architecture selection, along with the complete set of technologies for RTL-to-GDS implementation, greatly impacts the ability to effectively manage power.

Silvaco

Seena Shankar, Technical Marketing Manager, is the author of this contribution.

Problem:

Analysis of IR-drop, electro-migration and thermal effects has traditionally been a significant bottleneck in the physical verification of transistor-level designs like analog circuits, high-speed IOs, custom digital blocks, memories and standard cells. Starting from the 28 nm node and below, all designers are concerned about power, EM/IR and thermal issues. Even at the 180 nm node, if you are doing high-current designs in LDMOS, then EM effects, rules and thermal issues need to be analyzed. FinFET architecture has increased concerns regarding EM, IR and thermal effects. This is because of complex DFM rules and increased current and power density. There is a higher probability of failure. Even more so, EM/IR effects need to be carefully analyzed and managed. This kind of analysis and testing usually occurs at the end of the design flow. Discovering these issues at that critical time makes it difficult to stick to the schedule and causes expensive rework. How can we resolve this problem?

Solution:

Power integrity issues must be addressed as early in the design cycle as possible, to avoid expensive design and silicon iterations. Silvaco’s InVar Prime is an early design stage power integrity analysis solution for layout engineers. Designers can estimate EM, IR and thermal conditions before the sign-off stage. It performs checks like early IR-drop analysis, checking of resistive parameters of supply networks, point-to-point resistance checks, and also estimates current densities. It also helps in finding and fixing issues that are not detectable with a regular LVS check, like missing vias, isolated metal shapes, inconsistent labeling, and detour routing.

InVar Prime can be used for a broad range of designs including processors, wired and wireless network ICs, power ICs, sensors and displays. Its hierarchical methodology accurately models IR-drop, electro-migration and thermal effects for designs ranging from a single block to full-chip. Its patented concurrent electro-thermal analysis performs simulation of multiple physical processes together. This is critical for today’s designs in order to capture important interactions between power and thermal 2D/3D profiles. The result is physical measurement-like accuracy with high speed even on extremely large designs, and applicability to all process nodes including FinFET technologies.

InVar Prime requires the following inputs:

●      Layout- GDSII

●      Technology- ITF or iRCX

●      Supplementary data- Layer mapping file for GDSII, Supply net names, Locations and nominal values of voltage sources, Area based current consumption for P/G nets

Figure 3. Reliability Analysis provided by InVar Prime

InVar Prime enables three types of analysis on a layout database: EM, IR and Thermal. A layout engineer could start using InVar to help in the routing and planning of the power nets, VDD and VSS. IR analysis with InVar provides early analysis of how good the power routing is at that point. This type of early analysis flags potential issues that might otherwise appear after fabrication and result in silicon re-spins.

InVar EM/IR engine provides comprehensive analysis and retains full visibility of supply networks from top-level connectors down to each transistor. It provides a unique approach to hierarchical block modeling to reduce runtime and memory while keeping accuracy of a true flat run. Programmable EM rules enable easy adaptation to new technologies.

InVar Thermal engine scales from single cell design to full chip and provides lab-verified accuracy of thermal analysis. Feedback from thermal engine to EM/IR engines provides unprecedented overall accuracy. This helps designers understand and analyze various effects across design caused by how thermal 2D/3D profiles affect IR drop and temperature dependent EM constraints.

The main benefits of InVar Prime are:

●      Accuracy verified in lab and foundries

●      Full chip sign-off with accurate and high performance analysis

●      Analysis available early in the back end design, when more design choices are available

●      Pre-characterization not required for analysis

●      User-friendly environment designed to assist quick turn-around-times

●      Effective prevention of power integrity issues

●      Broad range of technology nodes supported

●      Reduces backend verification cycle time

●      Improves probability of first silicon success

Sonics

Scott Seiden contributed his company’s viewpoint.  Sonics has developed a dynamic power management solution that is hardware based.

Sonics has developed the industry’s first Energy Processing Unit (EPU), based on the ICE-Grain Power Architecture.  ICE stands for Instant Control of Energy.

Sonics’ ICE-G1 product is a complete EPU enabling rapid design of system-on-chip (SoC) power architecture and implementation and verification of the resulting power management subsystem.

No amount of wasted energy is affordable in today’s electronic products. Designers know that their circuits are idle a significant fraction of time, but have no proven technology that exploits idle moments to save power. An EPU is a hardware subsystem that enables designers to better manage and control circuit idle time. Where the host processor (CPU) optimizes the active moments of the SoC components, the EPU optimizes the idle moments of the SoC components. By construction, an EPU delivers lower power consumption than software-controlled power management. EPUs possess the following characteristics:

  • Fine-grained power partitioning maximizes SoC energy savings opportunities
  • Autonomous hardware-based control provides orders of magnitude faster power up and power down than software-based control through a conventional processor
  • Aggregation of architectural power savings techniques ensures minimum energy consumption
  • Reprogrammable architecture supports optimization under varying operating conditions and enables observation-driven adaptation to the end system.

About ICE-G1

The Sonics’ ICE-G1 EPU accelerates the development of power-sensitive SoC designs using configurable IP and an automated methodology, which produces EPUs and operating results that improve upon the custom approach employed by expert power design teams. As the industry’s first licensable EPU, ICE-G1 makes sophisticated power savings techniques accessible to all SoC designers in a complete subsystem solution. Using ICE-G1, experienced and first-time SoC designers alike can achieve significant power savings in their designs.

Markets for ICE-G1 include:

- Application and Baseband Processors
- Tablets, Notebooks
- IoT
- Datacenters
- EnergyStar compliant systems
- Form factor constrained systems—handheld, battery operated, sealed case/no fan, wearable.

ICE-G1 key product features are:

- Intelligent event and switching controllers–power grain controllers, event matrix, interrupt controller, software register interface—configurable and programmable hardware that dynamically manages both active and leakage power.

- SonicsStudio SoC development environment—graphical user interface (GUI), power grain identification (import IEEE-1801 UPF, import RTL, described directly), power architecture definition, power grain controller configuration (power modes and transition events), RTL and UPF code generation, and automated verification test bench generation tools. A single environment that streamlines the EPU development process from architectural specification to physical implementation.

- Automated SoC power design methodology integrated with standard EDA functional and physical tool flows (top down and bottom up)—abstracts the complete set of power management techniques and automatically generates EPUs to enable architectural exploration and continuous iteration as the SoC design evolves.

- Technical support and consulting services—including training, energy savings assessments, architectural recommendations, and implementation guidance.

Conclusion

As can be seen from the contributions, analysis and management of power is multi-faceted.  Dynamic control of power, especially in battery-powered IoT devices, is critical, since some of these devices will be in locations that are not readily reachable by an operator.

Improved Power Management With Sonics’ ICE-Grain

Friday, May 22nd, 2015

Gabe Moretti, Senior Editor

It was not long ago that the issue of Low Power was handled as an exception by the EDA industry.  Today every electronic circuit designer must minimize power consumption, although the reasons for such a requirement may vary.  Battery life, thermal management, energy conservation due to economic factors, regulatory concerns like Energy Star from the U.S. Department of Energy, and remote execution environments like space or under oceans are some of the reasons to minimize power consumption.

EDA vendors provide power analysis tools as well as power-aware development tools like synthesis and place and route, but it has been shown by many academic papers that the best results are obtained at the architectural level.  At that point designers have a better sense of what can be either shut down or taken to a slower state of execution.

The Power Management Challenge

Designers must worry about both active power and leakage power.  Engineers address the problem from a functional point of view by segmenting the circuit into domains dedicated to the execution of functions that are somewhat independent from each other.  Figure 1 shows the most popular techniques for power management together with their execution latency, that is, the time it takes to transition from the normal operating state to the power-saving state.  Latency can vary with each technique from a few nanoseconds to over a millisecond.  Latency can render a technique ineffective, especially when the execution latency is approximately as long as the available idle time.

Figure 1

There is a tradeoff between techniques when working on dynamic power management.  Sonics has been working on this issue with its customers and recognized the challenges involved.  In general, one has to figure out when the block is idle, and the idle time has to be long enough to cover the transition latency; otherwise one can run into power state thrashing, which not only fails to save power but actually uses more power, because energy is spent changing states.
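
The break-even condition can be made concrete with a small calculation: powering down only pays off when the idle interval is long enough to recover the energy spent entering and exiting the low-power state. The sketch below uses made-up numbers purely to illustrate the thrashing hazard just described.

    # Break-even check for entering a low-power state during an idle interval.
    # All timing and power figures are illustrative placeholders.

    def worth_powering_down(idle_us, entry_us, exit_us, active_mw, sleep_mw, transition_mw):
        """Return True if spending this idle interval in the low-power state saves net energy."""
        if idle_us <= entry_us + exit_us:
            return False                   # cannot even complete the transition
        stay_active_uj = active_mw * idle_us / 1000.0
        sleep_uj = (transition_mw * (entry_us + exit_us)
                    + sleep_mw * (idle_us - entry_us - exit_us)) / 1000.0
        return sleep_uj < stay_active_uj

    for idle in (5, 50, 500):              # idle interval in microseconds
        ok = worth_powering_down(idle, entry_us=30, exit_us=30,
                                 active_mw=10.0, sleep_mw=0.5, transition_mw=12.0)
        print(f"idle {idle:4d} us -> worth powering down: {ok}")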

Different tasks must be accomplished in designing and developing a power saving strategy.

  • Identify and exploit idle moments long enough to cover the latency.
  • Select optimum power domain boundaries.
  • Avoid deadlock due to domain hierarchies and interdependencies.
  • Choose the appropriate techniques for power control for each domain.

There are significant challenges with each of the above tasks.

  • Avoid power state thrashing
  • Scaling voltage and frequency to match loading and thermal limits
  • Finer domains offer more savings, but demand a bigger implementation cost
  • Define how many power state transitions per second can be handled

Verification is of course a major challenge.  Designers need not only to verify power control circuitry correctness, especially absence of deadlock states, but also must evaluate the effectiveness of dynamic power control.

The ICE-Grain Solution

Because current power management solutions either fall short or are complex to implement and verify, Sonics is attacking power management holistically to create the industry’s most effective solution.  The company has created the ICE-Grain Power Architecture, which saves far more power than ad hoc power management approaches.  It is a complete power management subsystem consisting of hardware and software IP that manages clock and power control, external voltage sources, operating points for DVFS, and other techniques, providing an easy to use, unified, and automated solution.

With the ICE-Grain Power Architecture, SoC designers partition their chips into much finer “grains,” which enables up to 10x faster and more precise power control. Power “grains” are very small sections of an SoC that include functional logic that can be individually power controlled using one or more savings methods. A grain is connected to one or more clock domains and attached to at least one power domain, and includes defined signals and conditions for power control. Grains are often an order of magnitude smaller than conventionally independent power or clocking domains, and multiple grains can be composed into larger hierarchical grains. The ICE-Grain Power Architecture automates the tasks of grain connection and management by synthesizing both central and local control circuitry blocks for the greatest total SoC power reduction (see Figure 2).

Figure 2 Representation of an SoC partitioned into many Power Grains with a power controller synthesized to simultaneously control each grain.

The zoomed view shows the local control element on each grain – highlighting the modular and distributed construction of the ICE-Grain architecture.

The ICE-Grain architecture is scalable and modular, to fit all design requirements.  Each grain has its own local, independent, tightly coupled controller.  For example, if a number of grains are power switched from the same external supply, then this supply cannot be turned off unless all of the grains are able to power down.  For this reason the associated local controllers must be tightly coupled with a Central Controller that manages the shared resource.  A voltage supply, the output of a phase lock loop, or the interface to a software-based power management system are all examples of a shared resource.  The Central Controller distributes the job of controlling every grain to Local Grain Controllers.  This architecture allows ICE-Grain to support a highly scalable number of grains autonomously without imposing any loading on the host CPU – but it will interface to the host to take power directives from the hardware or operating system.  For example, the host CPU may tell the Central Controller to change system state from “Awake” to “Doze”, and then Sonics-provided drivers will program the Central Controller to determine all of the legal power states that each Local Grain Controller can exploit to achieve the lowest power state (Figure 3).

Figure 3 Architecture of Central Controller and associated Local Grain Controllers

The ICE-Grain Power Architecture leverages but is independent of the SonicsGN configurable On-chip Network that provides a high-performance network for the transportation of packetized data, utilizing routers as the fundamental switching elements.  The ICE-Grain Power Architecture automates the tasks of grain connection and management by analyzing every grain and then synthesizing both central and local control circuitry blocks for the greatest total SoC power reduction.

Advantages Over Conventional Approaches

Compared to conventional, software-controlled approaches, Sonics’ scalable architecture helps designers partition their designs into much smaller power grains, providing many more opportunities to turn individual grains to the “off” state. Its hardware-controlled state transitions enable the architecture to exploit shorter idle periods by reducing the execution latency so power grains can be taken to deeper  ”off” and low-power states than can be achieved using software-controlled approaches.

ICE-Grain is a scalable, distributed, and modular architecture that provides the most efficient power management precisely where it is needed.  It is the first commercial technology that manages and controls all common power techniques in a unified environment.  It does so while complying with standard EDA tools, flows, and formats.

To developers, ICE-Grain provides a complete, integrated solution that does not require any special power expertise. The result is an easy to use, worry-free implementation of hardware control and fine-grain power reduction.

ICE-Grain saves power in three main ways: it enables designers to reliably create very fine-grained power objects, it executes power state transitions in hardware up to 10x faster than software, and it utilizes a "wake on demand" technique that detects when a grain that has been powered down needs to wake up, and does so automatically.
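
The "wake on demand" mechanism can be illustrated with a short behavioral sketch; the names below are hypothetical and the model is deliberately simplified, but it shows the essential idea that an access arriving at a powered-down grain is detected at the grain's interface and triggers an automatic wake-up before the transaction is delivered.

```python
# Behavioral illustration of "wake on demand": an access to a powered-down grain
# triggers an automatic wake-up before the transaction is delivered.
# All names are hypothetical.

class Grain:
    def __init__(self, name):
        self.name = name
        self.powered = True
        self.pending = []

    def power_down(self):
        self.powered = False

    def wake(self):
        self.powered = True

def deliver(grain, transaction):
    """Interface-side logic: detect accesses to a sleeping grain and wake it first."""
    if not grain.powered:
        grain.wake()                      # wake-up handled locally, no host CPU involved
    grain.pending.append(transaction)

dsp = Grain("dsp")
dsp.power_down()
deliver(dsp, "read 0x4000")
print(dsp.powered, dsp.pending)           # True ['read 0x4000']
```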

Figure 4 Comparison of results between ICE-Grain and CPU centric approaches

Figure 4 shows a comparison between the result of a software-driven power management system and what ICE-Grain will achieve. The dark green area represents a region of the SoC that can be idle for a certain length of time at a specific execution point. Total power will be saved by taking this hardware to a lower power state. To get to the lower power state using a software-controlled technique, the host processor needs to execute a sequence of functions. During that time no power is saved; in fact, extra power is consumed by the required processing. When the hardware that had been taken to the lower state must resume normal execution, another sequence of functions, again requiring additional power, must be performed. In the example shown, the CPU-based system will consume more energy entering and exiting the low-power state than is saved in the low-power state itself.

In contrast, the ICE-Grain switching occurs in hardware and does not involve the host processor, so both the time it takes to transition and the amount of power needed to effect the transition in either direction are far less, as shown by the light green areas representing additional power savings compared to the CPU-based controller. In this example, expending the energy to enter and exit the low-power state will lower overall system power. Thus, ICE-Grain can save power over conventional approaches in all cases, and it can exploit idle periods too short for conventional approaches, saving additional power.
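
The trade-off in Figure 4 reduces to simple arithmetic. As a hypothetical worked example (the numbers are illustrative, not measured data): if entering and exiting a low-power state costs a fixed overhead energy and the state saves a given power while resident, the idle period must exceed the ratio of overhead energy to power saved before the transition pays off, so cutting the transition overhead by an order of magnitude shrinks the break-even idle time by the same factor.

```python
# Break-even arithmetic for a power-state transition (illustrative numbers only).
def breakeven_idle_time(entry_exit_energy_uj, power_saved_mw):
    """Minimum idle time (ms) for which entering the low-power state saves energy."""
    return entry_exit_energy_uj / power_saved_mw   # uJ / mW = ms

# Software-controlled transition: the host CPU runs the entry/exit sequences.
sw_overhead_uj = 50.0
# Hardware-controlled transition: assumed here to be an order of magnitude cheaper.
hw_overhead_uj = 5.0
power_saved_mw = 10.0

print(breakeven_idle_time(sw_overhead_uj, power_saved_mw))  # 5.0 ms of idle needed
print(breakeven_idle_time(hw_overhead_uj, power_saved_mw))  # 0.5 ms of idle needed
# Idle periods between 0.5 ms and 5 ms save power only with the hardware approach.
```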

Conclusion

The history of the electronics industry offers a number of instances where software techniques used early to achieve specific targets are later replaced with hardware based technology that yields more efficient results.

Compared to conventional, software-controlled approaches, Sonics' fine-grain, hardware-controlled state transitions enable the architecture to exploit many more "off" and low-power states. Minimizing the size of the controlled block is key because it frees designers from having to account for other regions of a larger block whose logic states would otherwise inhibit the application of power management to the smaller hardware region. Controlling blocks as small as the grains in the ICE-Grain architecture would require so much software overhead as to make a software-controlled approach impractical.

IoT, Definition, Standards, and Security

Thursday, April 30th, 2015

Gabe Moretti, Senior Editor

Almost every day you can read something about the Internet of Things (IoT). This market segment is defined as the next big opportunity for the electronics industry and thus for EDA. Yet many questions remain, and some of them, if not answered correctly, will become stumbling blocks. Not surprisingly, I found that there are professionals who share my concerns.

There are a number of issues to be solved in IoT, and unfortunately they will be solved piecemeal, as problems arise, since humans are better at solving problems than at avoiding them. Addressing all of the possible issues and potential solutions would have turned this article into a book, so I decided to limit the topics discussed to just three for now.

IoT Still Needs a Full Definition

I asked Drew Wingard, CTO of Sonics, if IoT was sufficiently well defined. His answer: "Yes and no. If you think of IoT as an umbrella, it covers an incredibly wide variety of disparate applications. What is needed is some kind of characterization or taxonomy below the umbrella. For example, the IIoT is the Industrial Internet of Things. IoT also includes wearables–medical wearables, other wearables…it's multi-dimensional." This is a great answer. It is great because it is honest, with no marketing spin. What it says to me is that at the local node level IoT is well defined. We can design and build devices that collect information, make decisions, and provide feedback in the form of controls at the local level. What is missing is the experience, and thus a strongly defined architecture, that defines how and where the heterogeneous data is processed into actionable information. We also need to define whether, and how, diverse segments of IoT, such as wearables and automotive, should communicate and interact.

Developers of IP have, of course, turned their attention to the needs of IoT systems. The Tensilica group of Cadence has just announced a new product, Fusion DSP, to deal with communication and computational requirements in local IoT nodes. During the discussion they showed me how diverse the projected IoT opportunities for IP are, as shown in Figure 1.

Figure 1. The various applications of IP devices in IoT systems

Lauro Rizzatti, a well-known verification consultant, told me that "If you ask 10 people for a definition of the Internet of Things, you'll get 10 different descriptions. One thing is certain: IoT chip designs will need sophisticated verification tools to fully test the functionality." This is, unfortunately, no surprise. We are still at the point where verification is a key component of design. It is human nature to fix problems, not avoid them.

Omri Lachman, co-founder & CEO of Israeli wireless charging startup Humavox, points out that “IoT is a term that is defined yet still far from resonating with end users. People may be using IoT associated devices or products but for most, the term IoT means nothing. History shows that in order to bring a revolution you need to have all relevant stakeholders in line with one overall objective in mind. IoT is probably one of the biggest life changers we’re going to see in the coming time. It is all about personalization and optimization of technologies/products/services for us as people. Connecting humans, homes, transportation with medical, industrial and enterprise environments is a huge objective to take on. In order for this revolution to succeed, all stakeholders should be involved in the education of the people. Visual aids should be created to help individuals of all life categories to easily connect the dots. Consumers need to be better educated about the endless opportunities that can fill their life by adopting IoT. Ultimately, the key here is better consumer education, better selling of the vision that IoT is expected to deliver and the creation of visual aids so anyone can easily grasp the concept.”

Vic Kulkarni, SVP & GM, RTL Power Business, ANSYS-Apache Business Unit, described the IoT architecture by breaking the system into three functional parts: Sensing and Processing, Connectivity, and Storage and Analytics. The first part must deal with MEMS and RFID issues, the second with network, gateway, and supervisory logic design and verification, and the third with processing at the cloud and data center level. Vic thinks that revenue from IoT will divide almost equally between consumer and industrial segments, with a slight advantage to the industrial sector (52% to 48%). In the consumer segment Vic places wearables, connected cars, and connected homes, while connected cities, healthcare, oil and gas, transportation, and the industrial internet make up the bulk of the industrial segment. ANSYS is addressing the market in its electronics and semiconductor business units by providing design and analysis tools for IC, PCB, MEMS/antenna, thermal, and physical impact.

From what Dr. Kulkarni is saying, it is clear that IoT is not just an electronics system but a heterogeneous collection of diverse parts that must be assembled into a system in order to design, verify, and build the product. EDA already provides tools for power and signal integrity, but it has either not yet addressed, or not completely addressed, structural reliability, thermal, and regulatory compliance.
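
Dr. Kulkarni's three-part split can be pictured as a simple data pipeline. The sketch below is only a schematic of that partitioning, with invented function names: sensing and local processing happen at the node, the connectivity layer carries and tags the data, and storage and analytics run in the cloud or data center.

```python
# Schematic of the three functional parts described above (names are illustrative).

def sense_and_process(raw_sample):
    """Sensing and Processing: acquire a reading and do local filtering at the node."""
    return {"sensor": "temp0", "value": round(raw_sample, 1)}

def connect(message):
    """Connectivity: the gateway tags, batches, or translates protocols before upload."""
    message["gateway"] = "gw-01"
    return message

def store_and_analyze(messages):
    """Storage and Analytics: aggregate in the data center or cloud."""
    values = [m["value"] for m in messages]
    return sum(values) / len(values)

readings = [sense_and_process(v) for v in (21.34, 21.71, 22.05)]
uploaded = [connect(m) for m in readings]
print(store_and_analyze(uploaded))   # average temperature across the batch
```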

IoT Needs Standards

One of the most creative portions of my engineering career was spent creating standards within consortia and the IEEE. As the discussion about IoT heated up, I became concerned with the absence of standards to interconnect the "things" to the conglomerating nodes, and these to the cloud. And then I heard about the Open Interconnect Consortium (OIC).

International Data Corporation expects that the installed base of IoT will be approximately 212 billion "things" globally by the end of 2020. This is expected to include 30.1 billion installed "connected (autonomous)" things. Today, these devices are connecting to each other using multiple, and often incompatible, approaches. The members of the Open Interconnect Consortium believe that in order to achieve this scale, the industry will need both the collaboration of the open source community and industry standards to drive interoperability of these devices.

Guy Martin of Samsung describes the purpose of the consortium this way: “There’s a lot of great work going on in different areas of the IoT - you’ve got digital health, obviously smart home is huge, you’ve got in-vehicle – but there’s nothing that does a really good job of connecting all of those things together. We believe that while you may have a lot of good things going on in those individual communities, the next big thing in IoT is going to be the applications that span multiple verticals. What we’re really trying to develop is the framework for that.”

OIC is the sponsor of the IoTivity Project, an open source software framework enabling seamless device-to-device connectivity to address the emerging needs of the IoT.  The Consortium is recruiting other industry leaders to collaborate and join the efforts.   The goal is to define a comprehensive communications framework to enable emerging applications in all key vertical markets.  You can read more about the consortium at http://openinterconnect.org.

Ron Lowman, strategic marketing manager for IoT at Synopsys, believes that standardization of communication protocols, especially between things and local conglomerator nodes, is either already here or will happen in a short time. He thinks that: "Everyone has their own definition of the concept of IoT, and the market has a lot of great semiconductor products for IoT including many microcontrollers with mixed-signal IP, such as 12-bit 5Msps ADCs, Bosch Sensortec & PNI's sensor hubs, and Intel's Curie module, all of which will be used in everything from wearables, smart homes and cities, and building and factory automation.  Kickstarter is a great example of where to find a sample of the limitless opportunity that IoT creates.  What will actually define IoT, and what is currently missing, is the massive adoption of connected products and we're just on the brink of this larger adoption in 2015."

It is curious that the architecture uses the term "Internet," since it does not look like the internet protocol will be used locally, as in the intelligent home, and certainly not in wearables. The natural question for Ron was: "Are local protocols already standardized? If so, what are they?"

Ron responded: “Wearables obviously have seen the adoption of Bluetooth Smart as a de facto standard for a couple reasons.  Companies such as EMMicro have benefited from that with their low power Bluetooth capabilities.  The cost of implementation including die size, stack size and power budget, is significantly better in Bluetooth Smart than WiFi and it’s available on our most personal devices (mobile phones and tablets).  Ethernet and WiFi protocols weren’t initially designed for “things” and the protocols defining “the field bus wars”, such as Modbus, were not designed to be streamed to websites, however there are a myriad of standards organizations that are tackling this problem very proactively.  The important thing to note is that these standards organization’s efforts will provide an open source platform and open source abstraction layer that will enable developers and designers to focus on their key value generation to the market.  Interoperability will be a reality.  It will not be a single solution but a small array of solutions to fit the different needs for each IoT subsegment.”

IoT Needs Security

No one disputes that security is of paramount importance in IoT applications. When everything is connected, the opportunities for mischievous and illegal activities are just too great. During my discussion with Vic Kulkarni he recalled how in 2008 it was shown that pacemaker devices could be hacked at a range of a few centimeters, that is, less than one foot; recently, MIT graduate students hacked a pacemaker device at a range of 1,524 centimeters, or approximately 50 feet. Such capability enables electronic murder perpetrated by a totally anonymous killer.

Two of the most obvious reasons for hacking are the collection of information and the illegal control of functionality. Vic also provided information on automobile vulnerabilities, both in the control of an individual vehicle and in car-to-car communication for collision avoidance.

Jason Oberg, CEO at Tortuga Logic, observes that: “With the advent of IoT, we are going to see a drastic shift in the security landscape. Attacks have already been demonstrated on embedded devices such as pace makers, automobiles, baby monitors, and even refrigerators. Most companies are trying to solve this problem purely with software security, but this is a constant cat-and-mouse game we cannot win. As IoT grows, we are seeing more software being pushed down into hardware and our modern chipsets are growing in complexity. This is driving attackers to begin focusing on hardware and, without ensuring our chipsets are built in a secure manner, these attackers will continue to succeed.”

When thinking about security, I generally think about software-based hacking, but breaches that use physical techniques are just as damaging. The Athena Group, Inc., a provider of security, cryptography, anti-tamper, and signal processing IP cores, has introduced a comprehensive portfolio of IP cores with side-channel attack (SCA) countermeasures, based on advanced differential power analysis (DPA) countermeasure approaches pioneered by the Cryptography Research Division of Rambus.

DPA is a type of SCA that involves monitoring variations in the electrical power consumption or electromagnetic emissions from a target device. DPA attacks are non-invasive, easily automated, and can be mounted without knowing the design of the target device. Unlike invasive tampering, electromagnetic attacks can even be performed at a distance. As an example, attacks on cell phones have been demonstrated at a range of 30 feet. DPA countermeasures are essential to protect devices that use cryptographic keys, especially sensitive defense applications that require strong anti-tamper protection of advanced electronics and commercial devices that perform high-value processing, including mobile devices and IoT endpoints.
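
To see why an unprotected implementation leaks in the first place, consider a toy, correlation-based variant of power analysis. The simulation below is purely pedagogical, uses a deliberately crude leakage model, and has nothing to do with the countermeasure technology mentioned above; it only illustrates that when power consumption tracks the data being manipulated, statistics over many traces single out the correct key guess.

```python
# Toy correlation-based power analysis of a simulated 8-bit XOR operation.
# Purely illustrative of the leakage principle: real attacks and real
# countermeasures (masking, hiding, etc.) are far more involved.
import random
from statistics import correlation   # requires Python 3.10+

SECRET_KEY = 0x5A
random.seed(1)

def hamming_weight(x):
    return bin(x).count("1")

# Simulated measurements: "power" tracks the Hamming weight of the handled data, plus noise.
plaintexts = [random.randrange(256) for _ in range(500)]
traces = [hamming_weight(p ^ SECRET_KEY) + random.gauss(0.0, 0.5) for p in plaintexts]

def score(key_guess):
    """Correlation between the guess-dependent prediction and the measured traces."""
    predictions = [hamming_weight(p ^ key_guess) for p in plaintexts]
    return correlation(predictions, traces)

recovered = max(range(256), key=score)
print(hex(recovered))   # expected to print 0x5a: data-dependent power draw betrays the key
```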

Although I am not privy to any official information from government agencies, I can develop an example of security threats from published articles, both in print and on the net. The network of cell phones is a good example of a candidate IoT. If one wants to gather information on the location and use of individual cell phones, and on the relationship between and among two or more such devices, one can use the cell phone networks. My cell phone, for example, gathers environmental information, location, and a behavioral profile as I go about my daily activities. It also records and submits to my service provider who I call, how long I talk, what data I download, what pictures I upload, and so on. Without security, such information is available to anyone capable of, and willing to, build and use a tracking system to collect and analyze all that data. Can my cell phone be disabled remotely? Can an app be installed on it without my knowledge? The answer is yes to both questions.

Conclusion

In spite of what some editors and analysts have written, there is not a clear, generally shared definition of IoT that can be used as a basis for architectural design at all hierarchical levels of IoT. So in this article I chose to write about the definition of IoT as well as two issues that are not much talked about: standards and security. Obviously there is much more to say about IoT, and I am grateful to all those who have sent a large volume of input for this article. What I have learned I will not keep for myself; I will share more information about IoT in the near future.
