
Posts Tagged ‘power’

China’s Bold Strategy for Semiconductors

Thursday, October 20th, 2016

Gabe Moretti, Senior Editor

The East-West Center is a research organization established by the U.S. Congress in 1960. The Center serves as a resource for information and analysis on critical issues of common concern, bringing people together to exchange views, build expertise, and develop policy options. The Center is an independent, public, nonprofit organization with funding from the U.S. government, and additional support provided by private agencies, individuals, foundations, corporations, and governments in the region.

The Center’s 21-acre Honolulu campus, adjacent to the University of Hawai‘i at Mānoa, is located midway between Asia and the U.S. mainland and features research, residential, and international conference facilities.  A few years ago I became acquainted with Dr. Dieter Ernst, a Senior Fellow at the Center.  He has recently published a paper titled “China’s Bold Strategy for Semiconductors – Zero Sum Game or Catalyst for Cooperation?”

Abstract of the Paper

This paper explores whether China’s bold strategy for semiconductors will give rise to a zero-sum game or whether it will enhance cooperation that will benefit from increased innovation in China.  As the world’s largest producer and exporter of electronic products, China is by far the top market for integrated circuits (ICs), accounting for nearly a third of global demand. Yet its ability to design and produce this critical input remains seriously constrained. Despite decades and many billions of dollars of state-led investment, China’s domestic production of semiconductors covers less than 13% of the country’s demand.

As a result, China’s IC trade deficit has more than doubled since 2005, and now has surpassed crude oil to become China’s biggest import item. To correct this unsustainable imbalance, China seeks to move from catching up to forging ahead in semiconductors through progressive import substitution. The “National Semiconductor Industry Development Guidelines (Guidelines)” and the ”Made in China 2025″ (MIC 2025) plan were published by China’s State Council in June 2014 and May 2015, respectively. Both plans are backed by huge investments and a range of support policies covering intellectual property, cybersecurity, procurement, standards, rules of competition (through the “Anti-Monopoly Law”), and the negotiation of trade agreements, like the Information Technology Agreement. The objective is to strengthen simultaneously advanced manufacturing, product development and innovation capabilities in China’s semiconductor industry as well as in strategic industries that are heavy consumers of semiconductors.

Until recently, China has focused primarily on logic semiconductors and mixed-signal integrated circuits for mobile communication equipment (including smart phones), and on the assembly, testing and packaging of chips. Since the start of the 13th FYP, China’s semiconductor industry strategy now covers a much broader range of products and value chain stages, while at the same time increasing the depth and sophistication of its industrial upgrading efforts.

Based on a review of policy documents and interviews with China-based industry experts, Dr. Ernst describes the key policy initiatives and stakeholders involved in the current strategy; highlights important recent adjustments in the strategy to broaden China’s semiconductor product mix; and assesses the potential for success of China’s ambitious efforts to diversify into memory semiconductors, analog semiconductors, and new semiconductor materials (compound semiconductors). The chances for success are real, giving rise to widespread worries in the US and across Asia that China’s bold strategy for semiconductors may result in a zero-sum game with disruptive effects on markets and value chains. However, Chinese semiconductor firms still have a long way to go to catch up with global industry leaders. Hence, global cooperation to integrate China into the semiconductor value chain makes more sense than ever, both for the incumbents and for China.

More About the Plan

Dr. Ernst goes into great detail in his paper to describe the latest Chinese effort in semiconductors.  To begin with, the present leadership team includes, unlike in the past, internationally recognized scientists and technical leaders.  The effort is focused on a few areas of the industry and seems well managed.  One focus area is the design and fabrication of power and analog semiconductors, especially with regard to the requirements of robotic applications.  In the paper Dr. Ernst writes: “On the demand side, China’s well funded programs to develop both electric vehicles and smart autonomous buses and cars will create a huge demand for analog semiconductors.”  Other areas that need analog devices are the smart grid, alternative energy technologies, and IoT systems.

On the supply side, Dr. Ernst points out, “analog semiconductors offer substantial advantages – they use mature process technologies, and thus are much more cost effective than digital fabs.”  This and other related advantages over digital IC design and fabrication make the choice an intelligent one, especially with respect to manufacturing costs.

Dr. Ernst states: “Of particular interest will be China’s push into compound semiconductors.  While still at an early stage, there are serious efforts under way to develop an integrated compound semiconductor value chain, drawing on the demand pull from China’s huge market for lighting/LED and power electronics.”  The paper details the names of the companies, not all of them Chinese by the way, that are part of the effort.

Memory is a new sector of interest to the Chinese government.  In the past this segment of the industry had been neglected, but the new plan now considers it important, with significant investment planned for both flash memory and DRAM products.

In short, the present Chinese plan is very serious, focused, and, so far, well managed.  China in a few years could become a serious disruptor of present semiconductor commerce.  American companies, as well as Taiwanese, Japanese, and South Korean ones, need to pay particular attention to Chinese efforts in semiconductors.  China could not only cover most of its internal needs but could in fact develop into an international exporter of ICs.

Conclusion

Dr. Ernst’s paper goes into great detail about the Chinese strategy for semiconductors; what I have done here is just provide highlights.  I strongly believe that the paper is a must-read for all those in the EDA, systems, and foundry businesses, not just to follow what the Chinese government is doing, but also to extract possible ideas on what US companies might need to do to maintain their commercial and technological lead.  The entire paper can be found at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2836331.

Power Analysis and Management

Thursday, August 25th, 2016

Gabe Moretti, Senior Editor

As transistors shrink and change in structure, power management becomes more critical.  As I was polling various EDA vendors, it became clear that most offer solutions for the analysis of power requirements and software-based methods to manage power use, while at least one offers a hardware-based solution.  I struggled to find a way to coherently present their responses to my questions, but decided that extracting significant pieces of their written responses would not be fair.  So I organized a type of virtual round table, and I will present their complete answers in this article.

The companies submitting responses are: Cadence, Flex Logix, Mentor, Silvaco, and Sonics.  Some of the companies presented their own understanding of the problem.  I am including that portion of their contribution as well, to give better context to the description of their solutions.

Cadence

Krishna Balachandran, product management director for low power solutions at Cadence, provided the following contribution.

Not too long ago, low power design and verification involved coding a power intent file and driving a digital design from RTL to final place-and-route and having each tool in the flow understand and correctly and consistently interpret the directives specified in the power intent file. Low power techniques such as power shutdown, retention, standby and Dynamic Voltage and Frequency Scaling (DVFS) had to be supported in the power formats and EDA tools. Today, the semiconductor industry has coalesced around CPF and the IEEE 1801 standard that evolved from UPF and includes the CPF contributions as well. However, this has not equated to problem solved and case closed. Far from it! Challenges abound. Power reduction and low power design, which were once the bailiwick of mobile designers, have moved front and center into almost every semiconductor design imaginable – be it a mixed-signal device targeting the IoT market or large chips targeting the datacenter and storage markets. With competition mounting, differentiation comes in the form of better (lower) power-consuming end-products and systems.

There is an increasing realization that power needs to be tackled at the earliest stages in the design cycle. Waiting to measure power after physical implementation is usually a recipe for multiple, non-converging iterations because power is fundamentally a trade-off vs. area or timing or both. The traditional methodology of optimizing for timing and area first and then dealing with power optimization is causing power specifications to be non-convergent and product schedules to slip. However, having a good handle on power at the architecture or RTL stage of design is not a guarantee that the numbers will meet the target after implementation. In other words, it is becoming imperative to start early and stay focused on managing power at every step.

It goes without saying that what can be measured accurately can be well-optimized. Therefore, the first and necessary step to managing power is to get an accurate and consistent picture of power consumption from RTL to gate level. Most EDA flows in use today use a combination of different power estimation/analysis tools at different stages of the design. Many of the available power estimation tools at the RTL stage of design suffer from inaccuracies because physical effects like timing, clock networks, library information and place-and-route optimizations are not factored in, leading to overly optimistic or pessimistic estimates. Popular implementation tools (synthesis and place-and-route) perform optimizations based on measures of power using built-in power analysis engines. There is poor correlation between these disparate engines leading to unnecessary or incorrect optimizations. In addition, mixed EDA-vendor flows are plagued by different algorithms to compute power, making the designer’s task of understanding where the problem is and managing it much more complicated. Further complications arise from implementation algorithms that are not concurrently optimized for power along with area and timing. Finally, name-mapping issues prevent application of RTL activity to gate-level netlists, increasing the burden on signoff engineers to re-create gate-level activity to avoid poor annotation and incorrect power results.

To get a good handle on the power problem, the industry needs a highly accurate but fast power estimation engine at the RTL stage that helps evaluate and guide the design’s micro-architecture. That requires the tool to be cognizant of physical effects – timing, libraries, clock networks, even place-and-route optimizations at the RTL stage. To avoid correlation problems, the same engine should also measure power after synthesis and place-and-route. An additional requirement to simplify and shorten the design flow is for such a tool to be able to bridge the system-design world with signoff and to help apply RTL activity to a gate-level netlist without any compromise. Implementation tools, such as synthesis and place-and-route, need to have a “concurrent power” approach – that is, consider power as a fundamental cost-factor in each optimization step side-by-side with area and timing. With access to such tools, semiconductor companies can put together flows that meet the challenges of power at each stage and eliminate iterations, leading to a faster time-to-market.
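
Balachandran’s point about measurement can be grounded in the standard first-order model: dynamic power scales as activity times switched capacitance times the square of the supply voltage times frequency, plus a leakage term. The sketch below only illustrates that arithmetic with invented cell data; it is not Cadence’s estimation engine, and a real RTL-stage estimator would derive capacitance, clocks and activity from the design, the libraries and the physical context.

```python
# Minimal first-order power estimate, for illustration only.
# Dynamic power per cell: alpha * C * VDD^2 * f; leakage is a per-cell constant.
# The cell data below is invented for the example.

CELLS = [
    # (name, switched capacitance in F, activity factor, leakage in W)
    ("alu_datapath", 4.0e-12, 0.15, 2.0e-6),
    ("ctrl_fsm",     0.6e-12, 0.25, 0.3e-6),
    ("clk_tree",     2.5e-12, 1.00, 0.1e-6),
]

def estimate_power(vdd: float, freq_hz: float) -> dict:
    """Return dynamic, leakage and total power for one supply/frequency point."""
    dynamic = sum(alpha * c * vdd ** 2 * freq_hz for _, c, alpha, _ in CELLS)
    leakage = sum(leak for *_, leak in CELLS)
    return {"dynamic_w": dynamic, "leakage_w": leakage, "total_w": dynamic + leakage}

if __name__ == "__main__":
    for vdd, f in [(0.9, 500e6), (0.72, 300e6)]:
        p = estimate_power(vdd, f)
        print(f"VDD={vdd:.2f} V, f={f / 1e6:.0f} MHz -> "
              f"dyn={p['dynamic_w'] * 1e3:.2f} mW, leak={p['leakage_w'] * 1e6:.1f} uW")
```

Running the same arithmetic with the same engine and the same activity before and after implementation is what avoids the correlation gaps described above.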

Flex Logix

Geoff Tate, Co-founder and CEO of Flex Logix, is the author of the following contribution.  Our company is a relatively new entry in the embedded FPGA market.  It uses TSMC as a foundry.  Microcontrollers and IoT devices being designed in TSMC’s new ultra-low power 40nm process (TSMC 40ULP) need:

•             The flexibility to reconfigure critical RTL, such as I/O

•          The ability to achieve performance at lowest power

Flex Logix has designed a family of embedded FPGAs to meet this need. The validation chip to prove out the IP is in wafer fab now.

Many products fabricated with this process are battery operated: there are brief periods of performance-sensitive activity interspersed with long periods of very low power mode while waiting for an interrupt.

Flex Logix’s embedded FPGA core provides options to enable customers to optimize power and performance based on their application requirements.

To address this requirement, the following architectural enhancements were included in the embedded FPGA core:

•             Power Management containing 5 different power states:

  • Off state, where the EFLX core is completely powered off.
  • Deep Sleep state, where the VDDH supply to the EFLX core can be lowered from a nominal 0.9V/1.1V to 0.5V while retaining state.
  • Sleep state, which gates the supply (VDDL) that feeds all the performance logic, such as the LUTs, DSP and interconnect switches of the embedded FPGA, while retaining state. The latency to exit Sleep is shorter than that to exit Deep Sleep.
  • Idle state, which idles the clocks to cut power but is ready to move into the Dynamic state more quickly than the Sleep state.
  • Dynamic state, where power is the highest of the listed states but latency is the shortest; it is used during periods of performance-sensitive activity (a schematic state table follows this list).
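
To make the latency-versus-power ordering of these states concrete, the sketch below tabulates them and picks the deepest state whose wake-up latency fits a given budget. It is a schematic illustration only: the state names follow the list above, but the power fractions and latency figures are invented placeholders, not Flex Logix specifications.

```python
# Illustrative power-state table for an embedded FPGA core, ordered from lowest
# power / longest wake-up latency to highest power / shortest latency.
# All numeric values are placeholders, not Flex Logix data.
from dataclasses import dataclass

@dataclass(frozen=True)
class PowerState:
    name: str
    relative_power: float     # fraction of Dynamic-state power
    wakeup_latency_us: float  # time to return to the Dynamic state
    retains_state: bool

STATES = [
    PowerState("Off",        0.00, 1000.0, False),
    PowerState("Deep Sleep", 0.02,  100.0, True),
    PowerState("Sleep",      0.05,   10.0, True),
    PowerState("Idle",       0.30,    1.0, True),
    PowerState("Dynamic",    1.00,    0.0, True),
]

def deepest_state(max_latency_us: float) -> PowerState:
    """Pick the lowest-power state whose wake-up latency fits the budget."""
    candidates = [s for s in STATES if s.wakeup_latency_us <= max_latency_us]
    return min(candidates, key=lambda s: s.relative_power)

if __name__ == "__main__":
    for budget in (0.5, 20.0, 5000.0):
        print(f"latency budget {budget:>7.1f} us -> {deepest_state(budget).name}")
```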

The other architectural features available in the EFLX-100 embedded FPGA to optimize power-performance are:

•             State retention for all flip flops and configuration bits at voltages well below the operating range.

•          Ability to directly control body bias voltage levels (Vbp, Vbn). Controlling the body bias further controls leakage power.

•             5 combinations of threshold voltage (VT) devices to optimize power and performance for the static/performance logic of the embedded FPGA. The higher the threshold voltage (eHVT, HVT), the lower the leakage power and the lower the performance; the lower the threshold voltage (SVT), the higher the leakage and the higher the performance:

•             eHVT/eHVT

•             HVT/HVT

•             HVT/SVT

•             eHVT/SVT

•             SVT/SVT

In addition to the architectural features, various EDA flows and tools are used to optimize the Power, Performance and Area (PPA) of the Flex Logix embedded FPGA:

•             The embedded FPGA was implemented using a combination of standard floor-planning and P&R tools to place and route the configuration cells, DSP and LUT macros and network fabric switches. This resulted in higher density, reducing IR drops and the need for larger drive strengths, thereby optimizing power.

•          Design and use of longer (non-minimum) channel length devices, which further helps reduce leakage power with minimal to no impact on performance.

•          The EFLX-100 core was designed with an optimized power grid to effectively use metal resources for power and signal routing. Optimal power grids reduce DC/AC supply drops, which further improves performance.

Mentor

Arvind Narayanan, Architect, Product Marketing, Mentor Graphics contributed the following viewpoint.

One of the biggest challenges in IC design at advanced nodes is the complexity inherent in effective power management. Whether the goal is to reduce on-chip power dissipation or to provide longer battery life, power is taking its place alongside timing and area as a critical design dimension.

While low-power design starts at the architectural level, the low-power design techniques continue through RTL synthesis and place and route. Digital implementation tools must interpret the power intent and implement the design correctly, from power aware RTL synthesis, placement of special cells, routing and optimization across power domains in the presence of multiple corners, modes, and power states.

With the introduction of every new technology node, existing power constraints are also tightened to optimize power consumption and maximize performance. 3D transistors (FinFETs), introduced at smaller technology nodes, have higher input pin capacitance than their planar counterparts, resulting in the dynamic power component being higher relative to leakage.

Power Reduction Strategies

A good strategy to reduce power consumption is to perform power optimization at multiple levels during the design flow, including software optimization, architecture selection, RTL-to-GDS implementation and process technology choices. The biggest power savings are usually obtained early in the development cycle, at the ESL and RTL stages (Fig 1). During the physical implementation stage there is comparatively less opportunity for power optimization, and hence the choices made earlier in the design flow are critical. Technology selection, such as the device structure (FinFET, planar), the choice of device material (HiK, SOI) and the technology node, all play a key role.

Figure 1. Power reduction opportunities at different stages of the design flow

Architecture selection

Studies have shown that only optimizations applied early in the design cycle, when a design’s architecture is not yet fixed, have the potential for radical power reduction.  To make intelligent decisions in power optimization, the tools have to simultaneously consider all factors affecting power, and be applied early in the design cycle. Finding the best architecture makes it possible to properly balance functionality, performance and power metrics.

RTL-to-GDS Power Reduction

There are a wide variety of low-power optimization techniques that can be utilized during RTL to GDS implementation for both dynamic and leakage power reduction. Some of these techniques are listed below.

RTL Design Space Exploration

During the early stages of the design, the RTL can be modified to employ architectural optimizations, such as replacing a single instantiation of a high-powered logic function with multiple instantiations of low-powered equivalents. A power-aware design environment should facilitate “what-if” exploration of different scenarios to evaluate the area/power/performance tradeoffs.

Multi-VDD Flow

Multi-voltage design, a popular technique to reduce total power, is a complex task because many blocks are operating at different voltages, or intermittently shut off. Level shifter and isolation cells need to be used on nets that cross domain boundaries if the supply voltages are different or if one of the blocks is being shut down. DVFS is another technique where the supply voltage and frequency can vary dynamically to save power. Power gating using multi-threshold CMOS (MTCMOS) switches involves switching off certain portions of an IC when that functionality is not required, then restoring power when that functionality is needed.

Figure 2. Multi-voltage layout shown in a screen shot from the Nitro-SoC™ place and route system.
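
As a rough illustration of why DVFS pays off, dynamic power scales with the square of the supply voltage times the clock frequency, so the lowest operating point that still meets a workload’s deadline is the one to choose. The sketch below uses invented operating points and an invented switched capacitance; it is a generic illustration, not Mentor’s implementation flow.

```python
# Rough DVFS illustration: dynamic power ~ C * VDD^2 * f, so pick the
# lowest-power operating point that still finishes the work before its deadline.
# Operating points and capacitance are invented for the example.

OPERATING_POINTS = [
    # (name, supply in V, clock in Hz)
    ("turbo",   1.05, 1.2e9),
    ("nominal", 0.90, 0.8e9),
    ("eco",     0.72, 0.4e9),
]

SWITCHED_CAPACITANCE = 1.5e-9  # effective F switched per cycle, placeholder

def dynamic_power(vdd: float, freq: float) -> float:
    return SWITCHED_CAPACITANCE * vdd ** 2 * freq

def pick_point(cycles_needed: float, deadline_s: float):
    """Choose the lowest-power point that still meets the deadline."""
    feasible = [(dynamic_power(v, f), name, v, f)
                for name, v, f in OPERATING_POINTS
                if cycles_needed / f <= deadline_s]
    return min(feasible) if feasible else None

if __name__ == "__main__":
    for cycles, deadline in [(4e8, 1.0), (4e8, 0.35)]:
        power, name, v, f = pick_point(cycles, deadline)
        print(f"{cycles:.0e} cycles in {deadline} s -> {name} "
              f"({v} V, {f / 1e9:.1f} GHz), ~{power:.2f} W dynamic")
```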

MCMM Based Power Optimization

Because each voltage supply and operational mode implies different timing and power constraints on the design, multi-voltage methodologies cause the number of design corners to increase exponentially with the addition of each domain or voltage island. The best solution is to analyze and optimize the design for all corners and modes concurrently. In other words, low-power design inherently requires true multi-corner/multi-mode (MCMM) optimization for both power and timing. The end result is that the design should meet timing and power requirements for all the mode/corner scenarios.
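
The scenario growth is easy to quantify: every combination of per-domain power states, crossed with every PVT corner, is a scenario that timing and power sign-off must cover. The toy enumeration below uses invented domains and corners purely to show the multiplication.

```python
# Toy MCMM scenario count: per-domain power states crossed with PVT corners.
# The domains, states and corners below are invented for the example.
from itertools import product

DOMAIN_STATES = {
    "cpu_cluster": ["0.9V", "0.72V", "off"],
    "gpu":         ["0.9V", "off"],
    "always_on":   ["0.8V"],
}
PVT_CORNERS = ["ss_0.81V_125C", "tt_0.90V_25C", "ff_0.99V_-40C"]

modes = list(product(*DOMAIN_STATES.values()))
scenarios = [(mode, corner) for mode in modes for corner in PVT_CORNERS]

print(f"{len(modes)} power modes x {len(PVT_CORNERS)} corners "
      f"= {len(scenarios)} MCMM scenarios")
```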

FinFET aware Power Optimization

A FinFET-aware power optimization flow requires technologies such as activity-driven placement, multi-bit flop support, clock data optimization, interleaved power optimization and activity-driven routing to ensure that the dynamic power reduction is optimal. The tools should be able to use transforms with objective costing to make trade-offs between dynamic power, leakage power, timing, and area for the best QoR.

Optimizing power at all stages of the design flow, especially at the architecture stage, is critical for optimal power reduction.  Architecture selection, along with the complete set of technologies for RTL-to-GDS implementation, greatly impacts the ability to effectively manage power.

Silvaco

Seena Shankar, Technical Marketing Manager, is the author of this contribution.

Problem:

Analysis of IR-drop, electro-migration and thermal effects has traditionally been a significant bottleneck in the physical verification of transistor-level designs such as analog circuits, high-speed IOs, custom digital blocks, memories and standard cells. From the 28 nm node and below, all designers are concerned about power, EM/IR and thermal issues. Even at the 180 nm node, if you are doing high-current designs in LDMOS, then EM effects, rules and thermal issues need to be analyzed. FinFET architecture has increased concerns regarding EM, IR and thermal effects because of complex DFM rules and increased current and power density. There is a higher probability of failure, so EM/IR effects need to be analyzed and managed even more carefully. This kind of analysis and testing usually occurs at the end of the design flow, and discovering these issues at that critical time makes it difficult to stick to the schedule and causes expensive rework. How can we resolve this problem?

Solution:

Power integrity issues must be addressed as early in the design cycle as possible to avoid expensive design and silicon iterations. Silvaco’s InVar Prime is an early-design-stage power integrity analysis solution for layout engineers. Designers can estimate EM, IR and thermal conditions before the sign-off stage. It performs checks such as early IR-drop analysis, checks of the resistive parameters of supply networks, point-to-point resistance checks, and current density estimates. It also helps in finding and fixing issues that are not detectable with a regular LVS check, such as missing vias, isolated metal shapes, inconsistent labeling, and detour routing.
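
As a back-of-the-envelope picture of the IR-drop problem this kind of early analysis targets, the voltage lost along a supply rail is simply the current drawn by downstream taps multiplied by the cumulative rail resistance. The sketch below uses invented segment resistances and tap currents; it is a generic illustration, not Silvaco’s solver.

```python
# Back-of-the-envelope IR drop along a single supply rail: each segment adds
# resistance, and every downstream tap's current flows through it.
# Segment resistances and tap currents are invented for the example.

RAIL_SEGMENTS = [
    # (segment resistance in ohms, current drawn at the tap at its far end in A)
    (0.05, 0.010),
    (0.08, 0.015),
    (0.12, 0.020),
]

def voltage_at_taps(vdd_nominal: float) -> list:
    """Return the supply voltage seen at each tap, walking away from the pad."""
    voltages, v = [], vdd_nominal
    for i, (r_segment, _) in enumerate(RAIL_SEGMENTS):
        # every tap at or beyond this segment pulls its current through it
        downstream_current = sum(i_tap for _, i_tap in RAIL_SEGMENTS[i:])
        v -= r_segment * downstream_current
        voltages.append(v)
    return voltages

if __name__ == "__main__":
    for tap, v in enumerate(voltage_at_taps(0.9), start=1):
        print(f"tap {tap}: {v * 1000:.1f} mV")
```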

InVar Prime can be used for a broad range of designs including processors, wired and wireless network ICs, power ICs, sensors and displays. Its hierarchical methodology accurately models IR-drop, electro-migration and thermal effects for designs ranging from a single block to full-chip. Its patented concurrent electro-thermal analysis performs simulation of multiple physical processes together. This is critical for today’s designs in order to capture important interactions between power and thermal 2D/3D profiles. The result is physical measurement-like accuracy with high speed even on extremely large designs and applicability to all process nodes including FinFET technologies.

InVar Prime requires the following inputs:

●      Layout- GDSII

●      Technology- ITF or iRCX

●      Supplementary data- Layer mapping file for GDSII, supply net names, locations and nominal values of voltage sources, area-based current consumption for P/G nets

Figure 3. Reliability Analysis provided by InVar Prime

InVar Prime enables three types of analysis on a layout database: EM, IR and thermal. A layout engineer could start using InVar to help in the routing and planning of the power nets, VDD and VSS. IR analysis with InVar provides early analysis of how good the power routing is at that point. This type of early analysis flags potential issues that might otherwise appear after fabrication and result in silicon re-spins.

The InVar EM/IR engine provides comprehensive analysis and retains full visibility of supply networks from top-level connectors down to each transistor. It provides a unique approach to hierarchical block modeling to reduce runtime and memory while keeping the accuracy of a true flat run. Programmable EM rules enable easy adaptation to new technologies.

The InVar Thermal engine scales from single-cell designs to full chip and provides lab-verified accuracy of thermal analysis. Feedback from the thermal engine to the EM/IR engines provides unprecedented overall accuracy. This helps designers understand and analyze various effects across the design caused by the way thermal 2D/3D profiles affect IR drop and temperature-dependent EM constraints.

The main benefits of InVar Prime are:

●      Accuracy verified in lab and foundries

●      Full chip sign-off with accurate and high performance analysis

●      Analysis available early in the back end design, when more design choices are available

●      Pre-characterization not required for analysis

●      User-friendly environment designed to assist quick turnaround times

●      Effective prevention of power integrity issues

●      Broad range of technology nodes supported

●      Reduces backend verification cycle time

●      Improves probability of first silicon success

Sonics

Scott Seiden contributed his company’s viewpoint.  Sonics has developed a dynamic power management solution that is hardware-based.

Sonics has developed the industry’s first Energy Processing Unit (EPU), based on the ICE-Grain Power Architecture.  ICE stands for Instant Control of Energy.

Sonics’ ICE-G1 product is a complete EPU enabling rapid design of system-on-chip (SoC) power architecture and implementation and verification of the resulting power management subsystem.

No amount of wasted energy is affordable in today’s electronic products. Designers know that their circuits are idle a significant fraction of the time, but have no proven technology that exploits idle moments to save power. An EPU is a hardware subsystem that enables designers to better manage and control circuit idle time. Where the host processor (CPU) optimizes the active moments of the SoC components, the EPU optimizes the idle moments of the SoC components. By construction, an EPU delivers lower power consumption than software-controlled power management. EPUs possess the following characteristics (a schematic illustration of idle-driven control follows the list):

  • Fine-grained power partitioning maximizes SoC energy savings opportunities
  • Autonomous hardware-based control provides orders of magnitude faster power up and power down than software-based control through a conventional processor
  • Aggregation of architectural power savings techniques ensures minimum energy consumption
  • Reprogrammable architecture supports optimization under varying operating conditions and enables observation-driven adaptation to the end system.
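
The idle-driven control mentioned above can be pictured with a trivial policy model: a power grain steps down into a lower-power state after a threshold number of idle cycles and wakes immediately on activity. The sketch below is a generic illustration with invented state names and thresholds; it is not Sonics’ ICE-Grain architecture.

```python
# Trivial idle-timer power-down policy for one "power grain": step down after a
# threshold of idle cycles, wake immediately on activity. Generic illustration
# only; states and thresholds are invented, not Sonics' ICE-Grain design.

STATES = ["active", "clock_gated", "power_gated"]
IDLE_THRESHOLDS = {"active": 16, "clock_gated": 256}  # idle cycles before stepping down

def simulate(activity):
    """Return the grain's power state at each cycle for a list of busy flags."""
    state, idle_cycles, trace = "active", 0, []
    for busy in activity:
        if busy:
            state, idle_cycles = "active", 0              # immediate hardware wake-up
        else:
            idle_cycles += 1
            threshold = IDLE_THRESHOLDS.get(state)
            if threshold is not None and idle_cycles >= threshold:
                state = STATES[STATES.index(state) + 1]   # step down one state
                idle_cycles = 0
        trace.append(state)
    return trace

if __name__ == "__main__":
    trace = simulate([True] * 4 + [False] * 300 + [True] * 2)
    print("state at the end of the idle stretch:", trace[-3])
    print("state after the wake-up event:", trace[-1])
```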

About ICE-G1

Sonics’ ICE-G1 EPU accelerates the development of power-sensitive SoC designs using configurable IP and an automated methodology, which produces EPUs and operating results that improve upon the custom approach employed by expert power design teams. As the industry’s first licensable EPU, ICE-G1 makes sophisticated power savings techniques accessible to all SoC designers in a complete subsystem solution. Using ICE-G1, experienced and first-time SoC designers alike can achieve significant power savings in their designs.

Markets for ICE-G1 include:

- Application and Baseband Processors
- Tablets, Notebooks
- IoT
- Datacenters
- EnergyStar compliant systems
- Form factor constrained systems—handheld, battery operated, sealed case/no fan, wearable.

ICE-G1 key product features are:

- Intelligent event and switching controllers–power grain controllers, event matrix, interrupt controller, software register interface—configurable and programmable hardware that dynamically manages both active and leakage power.

- SonicsStudio SoC development environment—graphical user interface (GUI), power grain identification (import IEEE-1801 UPF, import RTL, described directly), power architecture definition, power grain controller configuration (power modes and transition events), RTL and UPF code generation, and automated verification test bench generation tools. A single environment that streamlines the EPU development process from architectural specification to physical implementation.

- Automated SoC power design methodology integrated with standard EDA functional and physical tool flows (top down and bottom up)—abstracts the complete set of power management techniques and automatically generates EPUs to enable architectural exploration and continuous iteration as the SoC design evolves.

- Technical support and consulting services—including training, energy savings assessments, architectural recommendations, and implementation guidance.

Conclusion

As can be seen from the contributions, the analysis and management of power is multi-faceted.  Dynamic control of power, especially in battery-powered IoT devices, is critical, since some of these devices will be in locations that are not readily reachable by an operator.

The EDA Industry Macro Projections for 2016

Monday, January 25th, 2016

Gabe Moretti, Senior Editor

How the EDA industry will fare in 2016 will be influenced by the worldwide financial climate. Instability in oil prices, the Middle East wars and the unpredictability of the Chinese market will indirectly influence the EDA industry.  EDA has seen significant growth since 1996, but the growth is indirectly influenced by the overall health of the financial community (see Figure 1).

Figure 1. EDA Quarterly Revenue Report from EDA Consortium

China has been a growing market for EDA tools and Chinese consumers have purchased a significant number of semiconductor-based products in the recent past.  Consumer product demand is slowing, and China’s financial health is being questioned.  The result is that demand for EDA tools may be less than in 2015.  I have received so many forecasts for 2016 that I have decided to break the subject into two articles.  The first article will cover the macro aspects, while the second will focus more on specific tools and market segments.

Economy and Technology

EDA itself is changing.  Here is what Bob Smith, executive director of the EDA Consortium, has to say:

“Cooperation and competition will be the watchwords for 2016 in our industry. The ecosystem and all the players are responsible for driving designs into the semiconductor manufacturing ecosystem. Success is highly dependent on traditional EDA, but we are realizing that there are many other critical components, including semiconductor IP, embedded software and advanced packaging such as 3D-IC. In other words, our industry is a “design ecosystem” feeding the manufacturing sector. The various players in our ecosystem are realizing that we can and should work together to increase the collective growth of our industry. Expect to see industry organizations serving as the intermediaries to bring these various constituents together.”

Bob Smith’s words acknowledge that the term “system” has taken on a new meaning in EDA.  We are no longer talking about developing a hardware system, or even a hardware/software system.  A system today includes digital and analog hardware, software both at the system and application level, MEMS, third party IP, and connectivity and co-execution with other systems.  EDA vendors are morphing in order to accommodate these new requirements.  Change is difficult because it implies errors as well as successes, and 2016 will be a year of changes.

Lucio Lanza, managing director of Lanza techVentures and a recipient of the Phil Kaufman award, describes it this way:

“We’ve gone from computers talking to each other to an era of PCs connecting people using PCs. Today, the connections of people and devices seem irrelevant. As we move to the Internet of Things, things will get connected to other things and won’t go through people. In fact, I call it the World of Things not IoT and the implications are vast for EDA, the semiconductor industry and society. The EDA community has been the enabler for this connected phenomenon. We now have a rare opportunity to be more creative in our thinking about where the technology is going and how we can assist in getting there in a positive and meaningful way.”

Ranjit Adhikary, director of Marketing at Cliosoft, acknowledges the growing need for tools integration in his remarks:

“The world is currently undergoing a quiet revolution akin to the dot com boom in the late 1990s. There has been a growing effort to slowly but surely provide connectivity between various physical objects and enable them to share and exchange data and manage the devices using smartphones. The labors of these efforts have started to bear fruit and we can see that in the automotive and consumables industries. What this implies from a semiconductor standpoint is that the number of shipments of analog and RF ICs will grow at a remarkable pace and there will be increased efforts from design companies to have digital, analog and RF components in the same SoC. From an EDA standpoint, different players will also collaborate to share the same databases. An example of this would be Keysight Technologies and Cadence Design Systems on OpenAccess libraries. Design companies will seek to improve the design methodologies and increase the use of IPs to ensure a faster turnaround time for SoCs. From an infrastructure standpoint a growing number of design companies will invest more in the design data and IP management to ensure better design collaboration between design teams located at geographically dispersed locations as well as to maximize their resources.”

Michiel Ligthart, president and chief operating officer at Verific Design Automation, points to the need to integrate tools from various sources to achieve the most effective design flow:

“One of the more interesting trends Verific has observed over the last five years is the differentiation strategy adopted by a variety of large and small CAD departments. Single-vendor tool flows do not meet all requirements. Instead, IDMs outline their needs and devise their own design and verification flow to improve over their competition. That trend will only become more pronounced in 2016.”

New and Expanding Markets

The focus toward IoT applications has opened up new markets as well as expanded existing ones.  For example, the automotive market is looking at new functionalities in both in-car and car-to-car applications.

Raik Brinkmann, president and chief executive officer at OneSpin Solutions, wrote:

“OneSpin Solutions has witnessed the push toward automotive safety for more than two years. Demand will further increase as designers learn how to apply the ISO26262 standard. I’m not sure that security will come to the forefront in 2016 because there are no standards as yet and ad hoc approaches will dominate. However, the pressure for security standards will be high, just as ISO26262 was for automotive.”

Michael Buehler-Garcia, Senior Director of Marketing, Mentor Graphics Calibre Design Solutions, notes that many established process nodes once thought of as obsolete will instead see increased volume due to the technologies required to implement IoT architectures.

“As cutting-edge process nodes entail ever higher non-recurring engineering (NRE) costs, ‘More than Moore’ technologies are moving from the “press release” stage to broader adoption. One consequence of this adoption has been a renewed interest in more established processes. Historical older process node users, such as analog design, RFCMOS, and microelectromechanical systems (MEMS), are now being joined by silicon photonics, standalone radios, and standalone memory controllers as part of a 3D-IC implementation. In addition, the Internet of Things (IoT) functionality we crave is being driven by a “milli-cents for nano-acres of silicon,” which aligns with the increase in designs targeted for established nodes (130 nm and older). New physical verification techniques developed for advanced nodes can simplify life for design companies working at established nodes by reducing the dependency on human intervention. In 2016, we expect to see more adoption of advanced software solutions such as reliability checking, pattern matching, “smart” fill, advanced extraction solutions, “chip out” package assembly verification, and waiver processing to help IC designers implement more complex designs on established nodes. We also foresee this renewed interest in established nodes driving tighter capacity access, which in turn will drive increased use of design optimization techniques, such as DFM scoring, filling analysis, and critical area analysis, to help maximize the robustness of designs in established nodes.”

Warren Kurisu, Director of Product Management, Mentor Graphics Embedded Systems Division, points to wearables, another sector within the IoT market, as an opportunity for expansion.

“We are seeing multiple trends. Wearables are increasing in functionality and complexity enabled by the availability of advanced low-power heterogeneous multicore architectures and the availability of power management tools. The IoT continues to gain momentum as we are now seeing a heavier demand for intelligent, customizable IoT gateways. Further, the emergence of IoT 2.0 has placed a new emphasis on end-to-end security from the cloud and gateway right down to the edge device.”

Power management is one of the areas that has seen significant concentration on the part of EDA vendors.  But not much has been said about battery technology.  Shreefal Mehta, president and CEO of Paper Battery Company, offered the following observations.

“The year 2016 will be the year we see tremendous advances in energy storage and management.   The gap between the rate of growth of our electronic devices and the battery energy that fuels them will increase to a tipping point.   On average, battery energy density has only grown 12% while electronic capabilities have more than doubled annually.  The need for increased energy and power density will be a major trend in 2016.  More energy-efficient processors and sensors will be deployed into the market, requiring smaller, safer, longer-lasting and higher-performing energy sources. Today’s batteries won’t cut it.

Wireless devices and sensors that need pulses of peak power to transmit, compute and/or perform analog functions will continue to create a tension between the need for peak power pulses and long energy cycles. For example, cell phone transmission and Bluetooth peripherals are, as a whole, low power but the peak power requirements are several orders of magnitude greater than the average power consumption.  Hence, new, hybrid power solutions will begin to emerge especially where energy-efficient delivery is needed with peak power and as the ratio of average to peak grows significantly.

Traditional batteries will continue to improve in offering higher energy at lower prices, but current lithium ion will reach a limit in the balance between energy and power in a single cell with new materials and nanostructure electrodes being needed to provide high power and energy.  This situation is aggravated by the push towards physically smaller form factors where energy and power densities diverge significantly. Current efforts in various companies and universities are promising but will take a few more years to bring to market.

The Supercapacitor market is poised for growth in 2016 with an expected CAGR of 19% through 2020.  Between the need for more efficient form factors, high energy density and peak power performance, a new form of supercapacitors will power the ever increasing demands of portable electronics. The Hybrid supercapacitor is the bridge between the high energy batteries and high power supercapacitors. Because these devices are higher energy than traditional supercapacitors and higher power than batteries they may either be used in conjunction with or completely replace battery systems. Due to the way we are using our smartphones, supercapacitors will find a good use model there as well as applications ranging from transportation to enterprise storage.

Memory in smartphones and tablets containing solid state drives (SSDs) will become more and more accustomed to architectures which manage non-volatile cache in a manner which preserves content in the event of power failure. These devices will use large swaths of video and the media data will be stored on RAM (backed with FLASH) which can allow frequent overwrites in these mobile devices without the wear-out degradation that would significantly reduce the life of the FLASH memory if used for all storage. To meet the data integrity concerns of this shadowed memory, supercapacitors will take a prominent role in supplying bridge power in the event of an energy-depleted battery, thereby adding significant value and performance to mobile entertainment and computing devices.

Finally, safety issues with lithium ion batteries have just become front and center and will continue to plague the industry and manufacturing environments.  Flaming hoverboards, shipment and air travel restrictions on lithium batteries render the future of personal battery power questionable. Improved testing and more regulations will come to pass, however because of the widespread use of battery-powered devices safety will become a key factor.   What we will see in 2016 is the emergence of the hybrid supercapacitor, which offers a high-capacity alternative to Lithium batteries in terms of power efficiency. This alternative can operate over a wide temperature range, have long cycle lives and – most importantly are safe. “

Greg Schmergel, CEO, Founder and President of memory-maker Nantero, Inc., points out that just as new power storage devices will open new opportunities, so will new memory devices.

“With the traditional memories, DRAM and flash, nearing the end of the scaling roadmap, new memories will emerge and change memory from a standard commodity to a potentially powerful competitive advantage.  As an example, NRAM products such as multi-GB high-speed DDR4-compatible nonvolatile standalone memories are already being designed, giving new options to designers who can take advantage of the combination of nonvolatility, high speed, high density and low power.  The emergence of next-generation nonvolatile memory which is faster than flash will enable new and creative systems architectures to be created which will provide substantial customer value.”

Jin Zhang, Vice President of Marketing and Customer Relations at Oski Technology, is of the opinion that the formal methods sector is an excellent prospect to increase the EDA market.

“Formal verification adoption is growing rapidly worldwide and that will continue into 2016. Not surprisingly, the U.S. market leads the way, with China following a close second. Usage is especially apparent in China where a heavy investment has been made in the semiconductor industry, particularly in CPU designs. Many companies are starting to build internal formal groups. Chinese project teams are discovering the benefits of improving design qualities using Formal Sign-off Methodology.”

These market forces are fueling the growth of specific design areas that are supported by EDA tools.  In the companion article some of these areas will be discussed.

Design Virtualization and Its Impact on SoC Design

Wednesday, July 15th, 2015

Mike Gianfagna, VP Marketing, eSilicon

Executive Summary

At advanced technology nodes (40nm and below), the number of options that a system-on-chip (SoC) designer faces is exploding. Choosing the correct combination of these options can have a dramatic impact on the quality, performance, cost and schedule of the final SoC. Using conventional design methodologies, it is very difficult to know if the correct options have been chosen. There is simply no way to run the required number of trial implementations to ensure the best possible option choices.

This document outlines the strategic customer benefits of applying a new technology called design virtualization to optimize SoC designs.

Design Challenges

Coupled with exploding option choices, the cost to design an SoC is skyrocketing as well. According to Semico Research Corporation, total SoC design costs increased 48 percent from the 28nm node to the 20nm node and are expected to increase 31 percent again at the 14nm node and 35 percent at the 10nm node.

Rising costs and extreme time-to-market pressures exist for most product development projects employing SoC technology. Getting the optimal SoC implemented in the shortest amount of time, with the lowest possible cost is often the margin of victory for commercial success. Since it is difficult to find the optimal choice of technology options, IP, foundation libraries, memory and operating conditions to achieve an optimal SoC, designers struggle to get the results they need with the choices they have made. These choices are made without sufficient information and so they are typically not optimal.

Figure 1. Option Choices Faced by the SoC Designer

In many cases, there is time for only one major design iteration for the SoC. Taking longer will result in a missed market window and dramatically lower market share and revenue. For many companies, there is only funding for one major design iteration as well. If you don’t get it right, the enterprise could fail. This situation demands getting the best result on the first try. All SoC design teams know this, and there is substantial effort expended to achieve the all-important first-time-right SoC project.

This backdrop creates a rich set of opportunities for technology that can reduce risk and improve results. Commercial electronic design automation (EDA) tools are intended to build the best SoC possible given a fixed set of choices. What is needed to address this problem is the ability to optimize these choices before design begins and throughout the design process as well. This will allow EDA technology and SoC design teams to improve the chances of delivering the best result possible.

Design Virtualization Defined

Design virtualization addresses SoC design challenges in a unique and novel way. The technology focuses on optimizing the recipe for an SoC using cloud-based, big data analytics and deep machine learning. In this context, recipe refers to the combined choices of process technology options, operating conditions, IP, foundation libraries and memory architectures. Design virtualization allows the optimal starting point for chip implementation to be found.

The technology frees the SoC designer from the negative effects of early, sub-optimal decisions regarding the development of a chip implementation recipe. As we’ve discussed, decisions regarding the chip implementation recipe have substantial and far-reaching implications for the schedule, cost and ultimate quality of the SoC. A correctly defined chip implementation recipe will maximize the return-on-investment (ROI) from the costly and risky SoC design process.

Figure 2. Computer Virtualization (Source: VMware)

In traditional design methodologies, the chip implementation recipe is typically defined at the beginning of the process, often with insufficient information. As the design progresses, the ability to explore the implications of changing the implementation recipe is more difficult, resulting in longer schedules, higher design costs and sub-optimal performance for the intended application.

Design virtualization changes all that. Through an abstraction layer, the ability to explore the implications of various chip implementation recipes now becomes possible. For the first time, SoC designers have “peripheral vision” regarding their decisions. They are able to explore a very broad array of implementation recipes before design begins and throughout the design process. This creates valuable insights into the consequences of their decisions and facilitates, for the first time, discovery of the optimal implementation recipe in a deterministic way.

Figure 3. Design Virtualization: Traditional vs. Virtualized Design Flows (Source: eSilicon)

In many ways, the process is similar to the virtualization concepts made popular in the computer and enterprise software industries. Computer/network/storage virtualization facilitates the delivery of multiple and varied services with the same hardware through an abstraction layer. This abstraction layer maps the physical computing environment into a logical implementation, allowing flexible deployment of the resources. The result is a more efficient use of the underlying hardware and the delivery of multiple optimized user experiences.

Regarding SoC design, design virtualization creates an abstraction layer that maps actual physical results into predicted, logical results to assist in finding the best possible implementation recipe. The result is a more efficient use of the underlying process and design resources and delivery of an optimized SoC.

Design Virtualization – How it Works

At its core, design virtualization utilizes big data strategies to capture engineering knowledge from suppliers worldwide regarding how process options, IP, foundation libraries, memory architectures and operating conditions interact with each other to impact the power, performance and area (PPA) of an SoC design. Machine learning is then applied to this data to allow exploration of design options. The information is accessed through the cloud and real-time, predictive analysis is provided to guide the optimal choice for all these variables.

Figure 4. Architecture of Design Virtualization

As discussed, all the choices contributing to the implementation recipe for an SoC are exploding below 40nm. Understanding how these choices interact to impact the final PPA of the SoC requires extensive trial implementations, consuming large amounts of time and resources, in terms of both staff and EDA tools.

Design virtualization solves this problem with a massive parametric database of options for semiconductor value chain suppliers, worldwide. A cloud-based front-end query system is provided that facilitates real-time, predictive analysis from this database. Because all data is pre-generated, exploration of various options can be done instantly, without the need for expensive EDA tools or time-consuming trial implementations. This approach creates a new-to-the-industry capability.
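
One simplified way to picture such a pre-generated database is as a table of recipes with already-characterized PPA, so that a “what-if” query is just filtering and ranking. The sketch below invents a tiny in-memory table and a query helper; it is a conceptual illustration, not eSilicon’s system, and the process, library and memory labels are placeholders.

```python
# Conceptual sketch of querying a pre-characterized recipe/PPA table: because
# the results already exist, exploration is filtering and ranking rather than
# new trial implementations. All recipes and numbers are invented for the example.

RECIPE_DB = [
    {"process": "28HPM", "lib": "9T",  "memory": "HD", "power_mw": 310, "fmax_mhz": 950,  "area_mm2": 4.1},
    {"process": "28HPM", "lib": "12T", "memory": "HS", "power_mw": 420, "fmax_mhz": 1150, "area_mm2": 4.9},
    {"process": "28HPC", "lib": "9T",  "memory": "HD", "power_mw": 260, "fmax_mhz": 820,  "area_mm2": 4.0},
    {"process": "28HPC", "lib": "7T",  "memory": "HD", "power_mw": 210, "fmax_mhz": 700,  "area_mm2": 3.6},
]

def explore(min_fmax_mhz: float, max_area_mm2: float) -> list:
    """Return recipes meeting frequency and area constraints, lowest power first."""
    hits = [r for r in RECIPE_DB
            if r["fmax_mhz"] >= min_fmax_mhz and r["area_mm2"] <= max_area_mm2]
    return sorted(hits, key=lambda r: r["power_mw"])

if __name__ == "__main__":
    for r in explore(min_fmax_mhz=800, max_area_mm2=4.5):
        print(f'{r["process"]}/{r["lib"]}/{r["memory"]}: '
              f'{r["power_mw"]} mW at {r["fmax_mhz"]} MHz, {r["area_mm2"]} mm2')
```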

Design Implementation

There is a significant gap between what EDA vendors and IP vendors deliver when compared with the new issues facing every SoC designer today. The gap can be characterized by two key observations:

EDA focuses on creating an optimal solution for one, or a limited number, of design implementation recipes

The approximately $4B EDA industry is focused on logic optimization and not memory optimization

Figure 5. Quickly Identifying Optimal Implementations

Regarding chip implementation recipes, the ability to explore the broader solution space for each design is now within reach of all design teams. Thanks to the big data, cloud-based machine learning employed by design virtualization, designers may now perform “what if” exercises for their design recipe options in real time, creating a palette of solutions that has been previously unavailable.

Using this technology, the design team can start with the desired PPA target and quickly identify the implementation recipe required to hit that target. This essentially reverses the typical time-consuming design exploration process. The result is an optimized implementation recipe that balances the PPA requirements of the SoC with the commercial options offered by the worldwide semiconductor supply chain.
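
“Reversing” the exploration in this sense just means stating the PPA target first and returning a characterized recipe that satisfies it. A minimal sketch, again with invented recipes and numbers rather than real characterization data:

```python
# Toy reverse lookup: start from a PPA target and return the lowest-power
# characterized recipe that meets it. Invented data; conceptual illustration only.

RECIPES = [
    ("28HPM/9T/HD", {"power_mw": 310, "fmax_mhz": 950}),
    ("28HPC/9T/HD", {"power_mw": 260, "fmax_mhz": 820}),
    ("28HPC/7T/HD", {"power_mw": 210, "fmax_mhz": 700}),
]

def recipe_for_target(fmax_mhz: float, power_budget_mw: float):
    """Pick the recipe that meets the frequency target within the power budget."""
    feasible = [(ppa["power_mw"], name) for name, ppa in RECIPES
                if ppa["fmax_mhz"] >= fmax_mhz and ppa["power_mw"] <= power_budget_mw]
    return min(feasible)[1] if feasible else None

print(recipe_for_target(fmax_mhz=800, power_budget_mw=300))  # -> 28HPC/9T/HD
```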

With regard to memory optimization, 50 percent or more of the total area of today’s SoCs can contain on-chip memory. The detailed configuration of these memories can have a substantial impact on the final PPA of the chip, but most design teams choose a series of compiled memories early in the design process and never revisit those choices during design implementation. The result is often lost performance, wasted chip area and sub-optimal ROI for a bet-your-company SoC design project.

Design virtualization provides a way to explore all possible memory configurations for a given implementation recipe in real time. Memory customization opportunities can also be identified. Using generic memory models that can be provided by eSilicon, further refinement and optimization of the memory architecture of the SoC is possible, right up to tapeout. This design implementation flexibility is commonplace for the logic portion of the chip, but is new for the memory portion.

The EDA design flow is as important as ever, but now the starting point for design implementation can be an optimized implementation recipe, resulting in an SoC with superior PPA and optimized cost and schedule. An example of the impact of exploring implementation recipes for a 28nm design is shown below.

Conclusion

We have discussed a new approach to improve the results of SoC design called design virtualization. We believe the approaches outlined in this document provide new-to-the-industry capabilities with the opportunity for significant strategic differentiation.

Design virtualization can substantially improve the PPA and schedule of an SoC, resulting in an improved ROI for the massive cost and high risk associated with these projects.

The techniques described here are used daily inside eSilicon for all customer designs. We have achieved significant power reduction and broad design implementation improvements by analyzing customer designs and employing design virtualization techniques.

eSilicon is developing a robust product roadmap to make selected design virtualization capabilities available to all design teams worldwide, regardless of size.

For more information contact info@esilicon.com or visit www.esilicon.com.

Design Challenges Continue as New Market Opportunities Arise

Tuesday, December 10th, 2013

By Dave Bursky, Technology Editor

Designs, power and security for a new world of applications combine perspectives from ARM, Sonics, Skyworks, Inside Secure and Infinitedge (moderator) at a recent Semico IP panel.

At last month’s Semico IP Impact conference in San Jose, Calif., participants in a panel titled “Designing for New World Applications” discussed design challenges, security, and power-efficiency issues related to creating the “Internet of Things” (IoT) and other new applications.

The panel kicked off with a question from moderator Kent Shimasaki, a Managing Partner at Infinitedge. He asked what customers have been asking for from the various panelists and their companies.

Responding to that question, Grant Pierce, CEO of Sonics, stated that the customers he has talked to are mostly from large companies. They are chasing large market opportunities or going after new markets like the IoT. He explained, “They want us to make their business easier and less risky for them to pursue. We can do this by integrating complex solutions of tens, hundreds, or even thousands of IP cores, and exploit every opportunity to reduce power consumption to extend battery life. Another aspect that comes up is the need to raise the security levels appropriate to the actual application. Thus, different levels of security and different access permissions are key requirements.”

Ron Moore, Director of Strategic Accounts and Marketing for Physical IP at ARM, explained that low power is by far the largest demand across the company’s entire breadth of CPU IP. The availability of a wide range of manufacturing process technologies from 180 nm all the way down to 14 nm allows the company to offer multiple performance and power options. In addition, ARM can redesign the cores to take advantage of the latest design techniques to lower power and maintain or improve performance. Security is also a concern and ARM has been seeing more interest in its ARM® TrustZone® technology.

John O’Neill, VP of Skyworks, agreed. He said that power and risk mitigation are key demands that Skyworks is addressing, thanks to its vertically integrated structure that covers design, fabrication, and system design. This structure allows the company to tailor products to meet customer needs. Servicing a different type of customer, Steve Singer, Director of Systems Engineering for Embedded Security Solutions at Inside Secure, said that his customers were asking for software stacks, drivers, and semiconductor IP for embedded security applications to handle IPSEC, SSL, and other popular standards. Protecting data in motion and providing device protection to prevent hacking are key customer issues.

Another major question the panel focused on was how to determine the best technology for a given application. It’s a mix of technologies and packaging options, according to Moore. “The IoT designers need processing power, sensors, and low-power radios—technologies that do not readily integrate on a single chip.” Addressing the same question, Pierce indicated that his customers wanted chips with very power-efficient functions. In addition, they requested that those functions be enabled with very fine-grained power management. To that end, Sonics has developed a design approach that creates multiple power domains, which can be switched on and off very quickly. Managing such domains, however, requires activity information from all of the blocks on the chip, so the controller knows when each block can be turned off or back on.
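To make the fine-grained power management Pierce describes more concrete, here is a minimal Python sketch of a hypothetical power-domain controller (the class, domain, and block names are invented for illustration and are not Sonics’ implementation): a domain stays powered only while at least one of its blocks reports activity.

# Hypothetical sketch of fine-grained power gating: a domain stays
# powered only while at least one of its blocks is active.
class PowerDomainController:
    def __init__(self, domains):
        # domains: mapping of domain name -> list of block names it contains
        self.domains = domains
        self.active_blocks = {d: set() for d in domains}
        self.powered = {d: False for d in domains}  # everything gated off initially

    def _domain_of(self, block):
        for domain, blocks in self.domains.items():
            if block in blocks:
                return domain
        raise KeyError(block)

    def block_active(self, block):
        domain = self._domain_of(block)
        self.powered[domain] = True            # fast power-up on first activity
        self.active_blocks[domain].add(block)

    def block_idle(self, block):
        domain = self._domain_of(block)
        self.active_blocks[domain].discard(block)
        if not self.active_blocks[domain]:
            self.powered[domain] = False       # gate the whole domain off

ctrl = PowerDomainController({"video": ["decoder", "scaler"], "radio": ["wifi_mac"]})
ctrl.block_active("decoder")
ctrl.block_idle("decoder")
print(ctrl.powered)   # {'video': False, 'radio': False}

The sketch also shows why, as Pierce notes, the controller needs visibility into every block: without the idle/active reports there is no safe moment to gate a domain off.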

According to Singer, a lot of parallel processing will be needed to handle the high data rates now possible on WiFi, cable, and fiber-optic networks. Keep in mind that crypto engines typically have to handle multiple protocols. Because only one protocol typically runs at a time, however, power consumption stays relatively low. But there could be thousands to a million tunnels running in cellular systems, which means that a lot has to be done in parallel just to move the data. On top of that, there’s the need for security to mitigate any threats.
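As a rough illustration of the parallelism Singer describes, the following Python sketch (the engine count and function names are assumptions, not Inside Secure’s design) shows one common way to spread many tunnels across parallel crypto engines: hash the tunnel identifier so that packets for the same tunnel always reach the same engine and stay in order.

# Illustrative only: spreading a very large number of tunnels across
# parallel crypto engines by hashing the tunnel identifier.
NUM_ENGINES = 8   # assumed number of parallel crypto engines

def select_engine(tunnel_id):
    return hash(tunnel_id) % NUM_ENGINES

def dispatch(packets):
    # packets: iterable of (tunnel_id, payload) pairs
    queues = [[] for _ in range(NUM_ENGINES)]
    for tunnel_id, payload in packets:
        queues[select_engine(tunnel_id)].append(payload)
    return queues

queues = dispatch([(101, "pkt-a"), (101, "pkt-b"), (202, "pkt-c")])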

The power budget for the radio portion of a wireless system is a key concern at Skyworks, explained O’Neill. A system may typically have half a dozen or more RF radios: multiple channels of WiFi, Bluetooth®, LTE, GSM, GPS, and still other radios. Unlike the encryption processor, where only one algorithm runs at a time, multiple radios are often active simultaneously, which increases power consumption. New techniques that modulate supply power using voltage scaling based on the actual signal modulation can reduce instantaneous power consumption, but this approach requires a very complex control scheme. In sensor nodes, short-burst communications of small packets (under 10 kbits) can draw supplemental power from supercapacitors, which collect and store energy harvested from the environment to reduce dependence on the battery. The supercaps can then power the communications circuits for short data bursts.
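A quick back-of-the-envelope calculation shows why supercapacitors suit such short bursts; every number below (capacitance, voltage window, burst power, burst duration) is an assumption for illustration, not a value from O’Neill’s remarks.

# Back-of-the-envelope check of whether a supercapacitor can supply a
# short transmit burst.  All values are assumed for illustration.
C = 0.1                     # farads: supercapacitor capacitance
V_full, V_min = 3.0, 1.8    # volts: usable voltage window
usable_energy = 0.5 * C * (V_full**2 - V_min**2)   # joules, from E = 1/2 C V^2

burst_power = 0.05          # watts drawn while transmitting a small packet
burst_time = 0.02           # seconds per burst
burst_energy = burst_power * burst_time

print(f"usable energy: {usable_energy:.3f} J")                        # ~0.29 J
print(f"bursts per full charge: {usable_energy / burst_energy:.0f}")  # ~288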

Keeping all of these radio- and line-based devices secure is a challenge for customers in all areas. Simple protection schemes, such as ARM TrustZone, provide a starting point, explained Moore, but additional, more secure solutions are needed as hackers get more sophisticated. Software-only schemes provide limited security, stated Singer, emphasizing that “many other aspects need additional hardware support to provide trusted solutions in the silicon.” For some devices, such as pacemakers (which can now be updated over a wireless link), Singer asked: where is the boundary that protects the device? Anyone can write an AES algorithm, he noted; in addition to the software, designers need a good hardware implementation. Pierce also felt that on-chip firewalling, perhaps coupled with ARM TrustZone, will be needed to protect content.
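To illustrate the kind of on-chip firewalling Pierce mentions, here is a toy Python model (the region map, initiator names, and security levels are made up): each protected address region records which initiators may touch it and whether secure-world access is required, with a default-deny policy for unmapped addresses.

# Toy model of an on-chip firewall: each protected address region lists
# which initiators may access it and whether secure access is required.
REGIONS = [
    # (start, end, allowed initiators, required security level)
    (0x0000_0000, 0x0FFF_FFFF, {"cpu", "dma"}, "non_secure"),
    (0x1000_0000, 0x1000_FFFF, {"cpu"},        "secure"),      # e.g. key storage
]

def access_allowed(initiator, level, address):
    for start, end, allowed, required in REGIONS:
        if start <= address <= end:
            if initiator not in allowed:
                return False
            if required == "secure" and level != "secure":
                return False
            return True
    return False   # default deny for unmapped addresses

assert access_allowed("cpu", "secure", 0x1000_0010)
assert not access_allowed("dma", "non_secure", 0x1000_0010)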

These aspects of design, power, and security are all parts of the solution. Few companies have the resources to address them all. Thus, many companies have set up an ecosystem of multiple partners, which cooperate to support customers. As Singer put it: Companies must sell solutions, and that means they need partners with solutions that can plug into various infrastructures.

Connected IP Blocks: Busses or Networks?

Tuesday, November 5th, 2013

Gabe Moretti

David Shippy of Altera, Lawrence Loh from Jasper Design, Mentor’s Steve Bailey, and Drew Wingard from Sonics got together to discuss the issues inherent in connecting IP blocks, whether in an SoC or in a stacked-die architecture.

SLD: On-chip connectivity consumes area and power and also generates noise.  Have we addressed these issues sufficiently?

Wingard: The communication problem is fundamental to this high degree of integration.  A significant portion of the cost benefit of moving to a higher level of integration comes from the ability to share critical resources, such as off-chip memory or a single control processor, across a wide variety of elements.  We cannot make the interconnect take zero area, and we cannot make it have zero latency, so we must think about how we partition the design so that we can get things done with sufficiently high performance and low power.
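Wingard’s point about sharing a critical resource can be illustrated with a toy round-robin arbiter in Python (purely a sketch under assumed names, not any vendor’s implementation): only one requester is granted the shared off-chip memory port per cycle, which is why sharing saves area but can never have zero latency.

# Toy round-robin arbiter for a shared resource such as an off-chip
# memory port: one requester is granted per cycle.
def round_robin(requests, last_grant, num_requesters):
    # requests: set of requester indices asserting a request this cycle
    for offset in range(1, num_requesters + 1):
        candidate = (last_grant + offset) % num_requesters
        if candidate in requests:
            return candidate
    return None  # nobody is requesting this cycle

print(round_robin({0, 2, 3}, last_grant=1, num_requesters=4))  # -> 2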

Loh: Another dimension that is challenging is the verification aspect.  We need to continue to innovate in order to address it.

Shippy: The number of transistors in FPGAs is growing, but I do not see the number of transistors dedicated to connectivity growing at the same pace as in other technologies.  At Altera we use a network on chip to efficiently connect the various processing blocks together.  It turns out that both the area and the power used by the network are very small compared to the rest of the die.  In general, the resources dedicated to interconnect are around 3% to 5% of the die area.

Wingard: Connectivity tends to use a relatively large portion of the “long wires” on the chip, so even if it is a relatively modest part of the die area, it risks presenting a disproportionate challenge in the physical domain.

SLD: When people talk about the future they talk about the “Internet of Things” or “smart everything,” and SoCs are becoming more complex.  There are two things we can do: knowing the nature of the IP we have to connect, we could develop standard protocols that use smaller busses, or we could use a network on chip.  Which do you think is more promising?

Wingard: I think you cannot assume that you know what type of IP you are going to connect.  There will be a wide variety of applications and a number of wireless protocols used.  We must start with an open model that is independent of the IP in the architecture.  I am strongly in favor of the decoupled network approach.

SLD: What is the overhead we are prepared to pay for a solution?

Shippy: The solution needs to be a distributed system that uses a narrower interface with fewer wires.

SLD: What type of work do we need to do to be ready to have a common, if not standard verification method for this type of connectivity?

Loh: Again, as Drew stated, we can have different types of connectivity, some optimized for power, others for throughput, for example.  People are creating architectural-level protocols.  When you have a well-defined method to describe things, then you can derive a verification methodology.

Bailey: If we are looking for an equivalent to UVM, you start from the protocol.  A protocol has various levels.  You must consider what type of interconnect is used; then you can verify that the blocks are connected correctly and check control aspects like arbitration.  Then you can move to the functional level.  To do that you must be able to generate traffic, and the only ways to do this are to either mimic I/O through files or use a high-level model to generate the traffic, so that we can replicate what will happen under stress and make sure the network can handle the traffic.  It all starts with basic verification IP.  The protocol used will determine the properties of its various levels of abstraction, and the IP can provide ways to move across these levels to create the required verification method.
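As a sketch of the traffic-generation step Bailey describes, the Python fragment below (the names and the trivial interconnect model are hypothetical, standing in for real verification IP) generates random transactions from several initiators to several targets and uses a scoreboard to check that every transaction completes.

import random

# Toy stress-traffic generator and scoreboard for interconnect
# verification.  The "interconnect" is a stand-in callable.
def generate_traffic(num_txns, initiators, targets, seed=0):
    rng = random.Random(seed)
    return [{"id": i,
             "initiator": rng.choice(initiators),
             "target": rng.choice(targets),
             "is_write": rng.random() < 0.5}
            for i in range(num_txns)]

def run(txns, interconnect):
    outstanding = {t["id"] for t in txns}      # scoreboard of pending transactions
    for txn in txns:
        completed_id = interconnect(txn)       # model returns the id it completed
        outstanding.discard(completed_id)
    assert not outstanding, f"lost transactions: {outstanding}"

txns = generate_traffic(1000, ["cpu", "gpu", "dma"], ["ddr", "sram", "regs"])
run(txns, interconnect=lambda txn: txn["id"])  # trivial pass-through model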

Wingard: Our customers expect that we will provide the verification system that proves to them we can deliver the type of communication they want to implement.  The state of the art today is that there will be surprises when the communication subsystem is connected to the various IP blocks.  Protocol verification IP helps, but the problems we see today tend to be system interaction challenges, and we do not yet have a complete way of describing how to gather the appropriate information to verify the entire system.

Bailey: Yes, there is a difference between verifying the interconnect and doing system verification.  There is still work to be done to capture the stress points of the entire system; it is a big challenge.

SLD: What do you think should be the next step?

Wingard: I think the trend is pretty clear.  More and more application areas are reaching the level of complexity where it makes sense to adopt network-based communication solutions.  As developers start addressing these issues, they will come to appreciate that there may be a need for system-wide solutions for other things, such as power management.  As communication mechanisms become more sophisticated, we also need to address security at the system level.

Loh: To address system-level verification, we must understand that the design needs to be analyzed from the very beginning so that we can determine the choices available.  Standardization can no longer cover just how things are connected; it must grow to cover how things are defined.

Shippy: One of the challenges is how we are going to build heterogeneous systems that interact correctly while also dealing with all of the physical characteristics of the circuit.  With many processors and lots of DSPs and accelerators, we need to figure out a topology to interconnect all of those blocks and, at the same time, deal with the data coming from, or being output to, the outside environment.  The problem is how to verify and optimize the system, not just the communication flow.

Bailey: As the design side of future products evolves, so the verification methods will have to evolve.  There has to be a level of coherency that covers the functionality of the system so that designers can understand the level of stress they are simulating for the entire system.  Designers also need to be able to isolate a problem when it is found: does it originate from an IP subsystem, the connectivity network, or a logic error in the aggregate circuitry?

Contributors Biographies:

David Shippy is currently the Director of System Architecture for Altera Corporation, where he manages System Architecture and Performance Modeling. Prior to that he was Chief Architect for low-power x86 CPUs and SoCs at AMD. Before that he was Vice President of Engineering at Intrinsity, where he led the development of the ARM CPU design for the Apple iPhone 4 and iPad. Prior to that he spent most of his career at IBM leading PowerPC microprocessor designs, including the role of Chief Architect and technical leader of the PowerPC CPU microprocessors for the Xbox 360 and PlayStation 3 game machines. His experience designing high-performance microprocessor chips and leading large teams spans more than 30 years. He has over 50 patents in all areas of high-performance, low-power microprocessor technology.

Lawrence Loh, Vice President of Worldwide Applications Engineering, Jasper Design
Lawrence Loh holds overall management responsibility for the company’s applications engineering and methodology development. Loh has been with the company since 2002, and was formerly Jasper’s Director of Application Engineering. He holds four U.S. patents on formal technologies. His prior experience includes verification and emulation engineering for MIPS, and verification manager for Infineon’s successful LAN Business Unit. Loh holds a BSEE from California Polytechnic State University and an MSEE from San Diego State.

Stephen Bailey is the director of emerging technologies in the Design Verification and Test Division of Mentor Graphics. Steve chaired the Accellera and IEEE 1801 working group efforts that resulted in the UPF standard. He has been active in EDA standards for over two decades and has served as technical program chair and conference chair for industry conferences, including DVCon. Steve began his career designing embedded software for avionics systems before moving into the EDA industry in 1990. Since then he has worked in R&D, applications, and technical and product marketing. Steve holds BSCS and MSCS degrees from Chapman University.

Drew Wingard co-founded Sonics in September 1996 and is its chief technical officer, secretary, and board member.  Before co-founding Sonics, Drew led the development of advanced circuits and CAD methodology for MicroUnity Systems Engineering.  He co-founded and worked at Pomegranate Technology, where he designed an advanced SIMD multimedia processor.  He received his BS from the University of Texas at Austin and his MS and PhD from Stanford University, all in electrical engineering.