Chip Design Magazine

Posts Tagged ‘analog’

Blog Review – Monday, June 27, 2016

Monday, June 27th, 2016

To paraphrase Jane Austen, “Who could ever tire of digital?” Larry Hardesty, MIT, reports on how Martin Rinard turned to analog, creating an analog compiler that could help enable simulation of whole organs and even organisms.

An interesting interview with Ray Alderman, Chairman of the Board of VITA, by Chris A. Ciufo, Embedded Systems Engineering, probes thriving companies, explores what is in the ascendancy, and shows how kitty litter can be a commercial lesson to us all.

The next phase for Bluetooth has been announced. Prithi Ramakrishnan, ARM, looks at what the Bluetooth 5 standard will bring, and includes a link to a video demo.

Keeping watch over open source libraries, Robert Vamosi, Synopsys, looks at the battle for protection and what needs to be in an engineer’s arsenal.

Revealing the industry’s worst-kept secret, Paul McLellan, Cadence, reports on the thoughts of An Steegen, who is in charge of process technology at imec, on what is next for shrinking nodes.

Caroline Hayes, Senior Editor

IoT Cookbook: Analog and Digital Fusion Bus Recipe

Tuesday, December 2nd, 2014

Experts from ARM, MathWorks, Cadence, Synopsys, Analog Devices, Atrenta, Hillcrest Labs and STMicroelectronics cook up ways to integrate analog with IoT buses.

By John Blyler, Editorial Director

Many embedded engineers approach the development of Internet-of-Things (IoT) devices like a cookbook. By following previous embedded recipes, they hope to create new and deliciously innovative applications. While the recipes may be similar, today’s IoT uses a strong concentration of analog, sensor and wireless ingredients. How will these parts combine with the available high-end bus structures like ARM’s AMBA? To find out, “IoT Embedded Systems” talked with the head technical cooks, including Paul Williamson, Senior Marketing Manager, ARM; Rob O’Reilly, Senior Member Technical Staff at Analog Devices; Mladen Nizic, Engineering Director, Mixed Signal Solution, Cadence; Ron Lowman, Strategic Marketing Manager for IoT, Synopsys; Corey Mathis, Industry Marketing Manager – Communications, Electronics and Semiconductors, MathWorks; Daniel Chaitow, Marketing Manager, Hillcrest Labs; Bernard Murphy, CTO, Atrenta; and Sean Newton, Field Applications Engineering Manager, STMicroelectronics. What follows is a portion of their responses. — JB

Key points:

  • System-level design is needed so that the bus interface can control the analog peripheral through a variety of modes and power-efficient scenarios.
  • One industry challenge is sorting the various sensor data streams by sequence and type, including the ability to do sample or rate conversion.
  • To ensure the correct sampling of analog sensor signals and the proper timing of all control and data signals, cycle accurate simulations must be performed.
  • Control system and sensor subsystems are needed to help reduce digital bus cycles by tightly integrating the necessary components.
  • Hardware design and software design have inherently different workflows, and as a result, use different design tools and methodologies.
  • For low-power IoT sensors, the analog-digital converter (ADC) power supply must be designed to minimize noise. Attention must also be paid to the routing of analog signals between the sensors and the ADC.
  • Beyond basic sensor interfacing, designers should consider digitally assisted analog (DAA) – digital logic embedded in analog circuitry that functions as a digital signal processor.

Blyler: What challenges do designers face when integrating analog sensor and wireless IP with digital buses like ARM’s AMBA and others?

Williamson (ARM): Designers need to consider system-level performance when designing the interface between the processor core and the analog peripherals. For example a sensor peripheral might be running continuously, providing data to the CPU only when event thresholds are reached. Alternatively the analog sensor may be passing bursts of sampled data to the CPU for processing.  These different scenarios may require that the designer develop a digital interface that offers simple register control, or more advanced memory access. The design of the interface needs to enable control of the peripheral through a broad range of modes and in a manner that optimizes power efficiency at a system and application level.
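Williamson’s register-versus-memory-access tradeoff can be sketched in a few lines. The following Python model of a hypothetical threshold-driven sensor peripheral is purely illustrative; the register names, offsets and bit assignments are invented for the sketch and do not correspond to any real ARM IP:

```python
# Behavioral model of a hypothetical sensor peripheral with simple
# register control. Offsets and bit fields are illustrative only.
CTRL, THRESH, STATUS, DATA = 0x00, 0x04, 0x08, 0x0C
ENABLE, EVENT = 0x1, 0x1

class SensorPeripheral:
    def __init__(self):
        self.regs = {CTRL: 0, THRESH: 0, STATUS: 0, DATA: 0}

    def write(self, offset, value):          # bus write
        self.regs[offset] = value & 0xFFFFFFFF

    def read(self, offset):                  # bus read
        value = self.regs[offset]
        if offset == DATA:                   # reading DATA clears the event
            self.regs[STATUS] &= ~EVENT
        return value

    def sample(self, raw):
        """Analog front end delivers a sample; latch it only past threshold."""
        if self.regs[CTRL] & ENABLE and raw >= self.regs[THRESH]:
            self.regs[DATA] = raw
            self.regs[STATUS] |= EVENT

dev = SensorPeripheral()
dev.write(CTRL, ENABLE)
dev.write(THRESH, 100)
dev.sample(42)                               # below threshold: no event
dev.sample(140)                              # above threshold: event latched
assert dev.read(STATUS) & EVENT
assert dev.read(DATA) == 140
```

With this scheme the CPU polls STATUS (or takes an interrupt) and touches the bus only when an event is pending, which is the power-saving behavior described above; a burst-oriented sensor would instead stream samples into a buffer serviced by DMA.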

O’Reilly (Analog Devices): One challenge is ultra-low-power design to enable management of the overall system power consumption. In IoT systems there is typically one main SoC connected to multiple sensors running at different Output Data Rates (ODRs) using asynchronous clocking. The application processor SoC collects the data from the multiple sensors and completes the processing. To keep power consumption low, the SoC generally isn’t active all of the time; it collects data at certain intervals. To support the needs of sensor fusion, the sensor data must include time information. This highlights the second challenge: the ability to align a variety of different data types in the time sequence required for fusion processing. It raises the question, “How can an entire industry adequately sort the various sensor data streams in sequence and in type, and include the ability to do sample or rate conversion?”
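The time-alignment problem O’Reilly raises can be illustrated with a small sketch. Assuming each sensor delivers (timestamp, value) pairs, one simple (if naive) approach is to linearly interpolate every stream onto the fusion processor’s common timeline; the function and data below are illustrative only:

```python
def resample(stream, times):
    """Linearly interpolate a timestamped stream onto a common timeline.

    stream: (timestamp, value) pairs sorted by timestamp.
    times:  target timestamps at the fusion processor's rate.
    """
    out, i = [], 0
    for t in times:
        # advance to the last sample at or before t
        while i + 1 < len(stream) and stream[i + 1][0] <= t:
            i += 1
        t0, v0 = stream[i]
        if i + 1 < len(stream):
            t1, v1 = stream[i + 1]
            out.append(v0 + (t - t0) / (t1 - t0) * (v1 - v0))
        else:
            out.append(v0)  # hold the last value past the end
    return out

# An accelerometer at 100 Hz (one sample every 10 ms) resampled to 200 Hz.
accel = [(0, 0.0), (10, 1.0), (20, 2.0)]     # (ms, value)
print(resample(accel, [0, 5, 10, 15, 20]))   # [0.0, 0.5, 1.0, 1.5, 2.0]
```

A production fusion engine would also have to handle clock drift between asynchronous sensor domains and out-of-order arrival, which is why O’Reilly frames this as an industry-wide problem rather than a coding exercise.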

Nizic (Cadence): Typically a sensor will generate a small (low voltage/current) analog signal which needs to be properly conditioned and amplified before being converted to a digital signal sent over a bus to a memory register for further processing by a DSP or controller. Sometimes, to save area, multiple sensor signals are multiplexed (sampled) to reduce the number of A2D converters.

From the design methodology aspect, the biggest design challenge is verification. To ensure analog sensor signals are sampled correctly and all control and data signals are timed properly, cycle-accurate simulations must be performed. Since these systems now contain analog, in addition to digital and bus-protocol verification, a mixed-signal simulation must cover both hardware and software. To apply mixed-signal simulation effectively, designers must model and abstract the behavior of sensors, analog multiplexers, A2D converters and other analog components. On the physical implementation side, busses will require increased routing resources, which in turn means more careful floor-planning and routing of bus and analog signals to keep chip area at a minimum and avoid signal interference.
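Nizic’s point about abstracting analog components can be made concrete with a toy behavioral model. The sketch below is written in Python purely for illustration (a real flow would use a Verilog-AMS or real-number model); it abstracts an ideal N-bit ADC fed through an analog multiplexer:

```python
def adc(vin, vref=3.3, bits=12):
    """Ideal N-bit ADC behavioral model: clip to the input range, then quantize."""
    vin = min(max(vin, 0.0), vref)
    return int(vin / vref * ((1 << bits) - 1) + 0.5)

def mux_sample(channels, select):
    """Analog multiplexer abstraction: route one of several sensor voltages."""
    return channels[select]

# Three multiplexed sensor channels sharing a single converter.
sensors = [0.0, 1.65, 3.3]
codes = [adc(mux_sample(sensors, ch)) for ch in range(len(sensors))]
print(codes)  # [0, 2048, 4095]
```

Even a model this coarse lets a cycle-accurate digital simulation check that mux select lines, sample timing and register transfers line up before committing to a full analog co-simulation.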

Lowman (Synopsys): For an IC designer, the digital bus provides a very easy way to snap together an IC by hanging interface controllers such as I2C, SPI, and UARTs to connect to sensors and wireless controllers.  It’s also an easy method to hang USB and Ethernet, as well as analog interfaces, memories and processing engines.  However, things are a bit more complicated at the system level. For example, the sensor in a control system helps some actuator know what to do and when to do it.  The challenge is the delay, in bus cycles, from sensing to calculating a response to actually delivering the response that ultimately optimizes the control and efficiency of the system.  Examples include motor control, vision systems and power conversion applications. Ideally, you’d want a sensor and control subsystem optimized for 9D sensor fusion applications. Such a subsystem significantly reduces cycles spent traveling over a digital bus by essentially removing the bus and tightly integrating the components needed to sense and process the algorithms. This technique will be critical to reducing power and increasing performance of IoT control systems and sensor applications in a deeply embedded world.

Mathis (MathWorks): It is no surprise that mathematical and signal processing algorithms of increasing complexity are driving many of the innovations in embedded IoT. This trend is partly enabled by the increasing capability of SoC hardware being deployed for the IoT. These SoCs provide embedded engineers greater flexibility regarding where the algorithms get implemented. The greater flexibility, however, leads to new questions in early stage design exploration. Where should the (analog and mixed) signal processing of that data occur? Should it occur in a hardware implementation, which is natively faster but more costly in on-chip resources? Or in software, where inherent latency issues may exist? One key challenge we see is that hardware design and software design have inherently different workflows, and as a result, use different design tools and methodologies. This means SoC architects need to be fluent in both C and HDL, and the hardware/software co-design environments needed for both. Another key challenge is that this integration further exacerbates the functional, gate- or circuit-level, and final sign-off verification problems that have dogged designers for decades. Interestingly, designers facing either or both of these key challenges could benefit significantly from top-down design and verification methodologies. (See last month’s discussion, “Is Hardware Really That Much Different From Software?”)

Chaitow (Hillcrest Labs): In most sensor-based applications, data is ultimately processed in the digital realm, so an analog-to-digital conversion has to occur somewhere in the system before the processing occurs. MEMS sensors measure tiny variations in capacitance, and amplification of that signal is necessary to allow sufficient swing in the signal to ensure a reasonable resolution. Typically the analog-to-digital conversion is performed at the sensor to reduce error in the measurement. Errors are generally present because of noise in the system, but the design of the sensing element and amplifiers also contributes to error. For a given sensing system, minimizing the noise is therefore paramount. The power supply of the ADC needs to be carefully designed to minimize noise, and the routing of analog signals between the sensors and the ADC requires careful layout. If the ADC is part of an MCU, then the power regulation of the ADC and the isolation of the analog front end from the digital side of the system are vital to ensure an effective sampling system.

As always with design, there are many tradeoffs. A given analog MEMS supplier may be able to provide a superior measurement system to a MEMS supplier that provides a digital output. By accepting the additional complexity of the mixed-signal system and combining the analog sensor with a capable ADC, an improved measurement system can be built. In addition, if the application requires multiple sensors, using a single external multiple-channel ADC with analog sensors can yield a less expensive system, which will be increasingly important as the IoT revolution continues.

Murphy (Atrenta): Aside from the software needs, there are design and integration considerations. On the design side, there is nothing very odd. The sensor needs to be presented to an AMBA fabric as a slave of some variety (e.g., APB or AHB), which means it needs all the digital logic to act as a well-behaved slave (see Figure). It should recognize that it is not guaranteed to be serviced on demand and therefore should support internal buffering (a streaming buffer if it is an output device for audio, video or another real-time signal). Sensors can be power-hungry, so they should support a power-down mode that can be signaled by the bus (as requested by software).

The implementation side is definitely more interesting. All of that logic is generally bundled with the analog circuitry into one AMS block, and it is usually difficult to pin down a floor-plan outline on such a block until quite close to final layout. This makes full-chip floor planning more challenging because you are connecting to an AMBA switch fabric, which prefers well-constrained interfaces since the switch matrix itself doesn’t constrain layout well on its own. This may lead to a little more iteration of the floor plan than you otherwise might expect.

Beyond basic sensor interfacing, you need to consider digitally assisted analog (DAA). This is when you have digital logic embedded in analog circuitry, functioning as a digital signal processor to perform what is effectively an analog function, but perhaps more flexibly and certainly with more programmability than analog circuitry. Typical applications are beamforming in radio transmission and super-accurate ADCs.

Figure: The AMBA Bus SoC Platform is configurable with several peripherals and system functions, e.g., AHB bus(es), APB bus(es), arbiters and decoders. Popular peripherals include RAM controllers, Ethernet, PCI, USB, 1394a, UARTs, PWMs and PIOs. (Courtesy of ARM Community)

Newton (STMicroelectronics): Integration of devices such as analog sensors and wireless IP (radios) is widespread today via the use of standard digital bus interfaces such as I2C and SPI. Integration of analog IP with a bus – such as ARM’s AMBA – becomes a matter of connecting the relevant buses to the digital registers contained within the IP. This is exactly what happens when you use I2C or SPI to communicate with standalone sensors or a wireless radio, with the low-speed bus interfaces giving external access to the internal registers of the analog IP. The challenges for integration with devices with higher-end busses aren’t so much on the bus interface as in defining and qualifying the resulting SoC: in particular, packaging characteristics, the number of GPIOs available, the size of the package, the type of processing device used (MPU or MCU), internal memory capability such as flash or internal SRAM, and of course the power capabilities of the device in question. Does it need very low standby power? Wake capability?  Most of these questions are driven by market requirements and capabilities and must be weighed against the cost and complexity of the integration effort.


Blyler: Thank you.

This article was sponsored by ARM.

ARM and Cortex are registered trademarks of ARM Limited (or its subsidiaries) in the EU and/or elsewhere. mbed is a trademark of ARM Limited (or its subsidiaries) in the EU and/or elsewhere. All rights reserved.

China’s Bold Strategy for Semiconductors

Thursday, October 20th, 2016

Gabe Moretti, Senior Editor

The East-West Center is a research organization established by the U.S. Congress in 1960. The Center serves as a resource for information and analysis on critical issues of common concern, bringing people together to exchange views, build expertise, and develop policy options. The Center is an independent, public, nonprofit organization with funding from the U.S. government, and additional support provided by private agencies, individuals, foundations, corporations, and governments in the region.

The Center’s 21-acre Honolulu campus, adjacent to the University of Hawai‘i at Mānoa, is located midway between Asia and the U.S. mainland and features research, residential, and international conference facilities.  A few years ago I became acquainted with Dr. Dieter Ernst, a Senior Fellow at the Center.  He has recently published a paper titled “China’s Bold Strategy for Semiconductors – Zero-Sum Game or Catalyst for Cooperation?”

Abstract of the Paper

This paper explores whether China’s bold strategy for semiconductors will give rise to a zero-sum game or whether it will enhance cooperation that will benefit from increased innovation in China.  As the world’s largest producer and exporter of electronic products, China is by far the top market for integrated circuits (ICs), accounting for nearly a third of global demand. Yet its ability to design and produce this critical input remains seriously constrained. Despite decades and many billions of dollars of state-led investment, China’s domestic production of semiconductors covers less than 13% of the country’s demand.

As a result, China’s IC trade deficit has more than doubled since 2005, and has now surpassed crude oil to become China’s biggest import item. To correct this unsustainable imbalance, China seeks to move from catching up to forging ahead in semiconductors through progressive import substitution. The “National Semiconductor Industry Development Guidelines (Guidelines)” and the “Made in China 2025” (MIC 2025) plan were published by China’s State Council in June 2014 and May 2015, respectively. Both plans are backed by huge investments and a range of support policies covering intellectual property, cybersecurity, procurement, standards, rules of competition (through the “Anti-Monopoly Law”), and the negotiation of trade agreements, like the Information Technology Agreement. The objective is to strengthen simultaneously advanced manufacturing, product development and innovation capabilities in China’s semiconductor industry as well as in strategic industries that are heavy consumers of semiconductors.

Until recently, China has focused primarily on logic semiconductors and mixed-signal integrated circuits for mobile communication equipment (including smart phones), and on the assembly, testing and packaging of chips. Since the start of the 13th FYP, China’s semiconductor industry strategy now covers a much broader range of products and value chain stages, while at the same time increasing the depth and sophistication of its industrial upgrading efforts.

Based on a review of policy documents and interviews with China-based industry experts, Dr. Ernst describes the key policy initiatives and stakeholders involved in the current strategy; highlights important recent adjustments in the strategy to broaden China’s semiconductor product mix; and assesses the potential for success of China’s ambitious efforts to diversify into memory semiconductors, analog semiconductors, and new semiconductor materials (compound semiconductors). The chances for success are real, giving rise to widespread worries in the US and across Asia that China’s bold strategy for semiconductors may result in a zero-sum game with disruptive effects on markets and value chains. However, Chinese semiconductor firms still have a long way to go to catch up with global industry leaders. Hence, global cooperation to integrate China into the semiconductor value chain makes more sense than ever, both for the incumbents and for China.

More About the Plan

Dr. Ernst goes into great detail in his paper to describe the latest Chinese effort in semiconductors.  To begin with, the present leadership team includes, contrary to the past, internationally recognized scientists and technical leaders.  The effort is focused on a few areas of the industry and seems well managed.  One focus area is the design and fabrication of power and analog semiconductors, especially with regard to the requirements of robotic applications.  In the paper Dr. Ernst writes: “On the demand side, China’s well funded programs to develop both electric vehicles and smart autonomous buses and cars will create a huge demand for analog semiconductors.”  Other areas that need analog devices are the smart grid, alternative energy technologies, and IoT systems.

On the supply side, Dr. Ernst points out, “analog semiconductors offer substantial advantages – they use mature process technologies, and thus are much more cost effective than digital fabs.”  This and other related advantages over digital IC design and fabrication make the choice an intelligent one, especially with respect to manufacturing costs.

Dr. Ernst states that: “Of particular interest will be China’s push into compound semiconductors.  While still at an early stage, there are serious efforts under way to develop an integrated compound semiconductor value chain, drawing on the demand pull from China’s huge market for lighting/LED and power electronics.”  The paper details the names of companies, not all Chinese by the way, that are part of the effort.

Memory is a new sector of interest to the Chinese government.  In the past this segment of the industry had been neglected, but the new plan is now considering it important with significant investment for both flash memory and DRAM products.

In short, the present Chinese plan is very serious, focused, and, so far, well managed.  In a few years China could become a serious disruptor of present semiconductor commerce.  American companies, as well as Taiwanese, Japanese, and South Korean ones, need to pay particular attention to Chinese efforts in semiconductors.  China could not only cover most of its internal needs but could in fact develop into an international exporter of ICs.


Dr. Ernst’s paper goes into great detail about the Chinese strategy for semiconductors.  What I have done is just provide highlights.  I strongly believe that the paper is a must-read for all those in the EDA, systems, and foundry businesses.  Not just to follow what the Chinese government is doing, but also to extract possible ideas on what US companies might need to do to maintain their commercial and technological lead.  The entire paper can be found at:

The EDA Industry Macro Projections for 2016

Monday, January 25th, 2016

Gabe Moretti, Senior Editor

How the EDA industry will fare in 2016 will be influenced by the worldwide financial climate. Instability in oil prices, the Middle East wars and the unpredictability of the Chinese market will indirectly influence the EDA industry.  EDA has seen significant growth since 1996, but the growth is indirectly influenced by the overall health of the financial community (see Figure 1).

Figure 1. EDA Quarterly Revenue Report from EDA Consortium

China has been a growing market for EDA tools and Chinese consumers have purchased a significant number of semiconductor-based products in the recent past.  Consumer product demand is slowing, and China’s financial health is being questioned.  The result is that demand for EDA tools may be less than in 2015.  I have received so many forecasts for 2016 that I have decided to break the subject into two articles.  The first article will cover the macro aspects, while the second will focus more on specific tools and market segments.

Economy and Technology

EDA itself is changing.  Here is what Bob Smith, executive director of the EDA Consortium, has to say:

“Cooperation and competition will be the watchwords for 2016 in our industry. The ecosystem and all the players are responsible for driving designs into the semiconductor manufacturing ecosystem. Success is highly dependent on traditional EDA, but we are realizing that there are many other critical components, including semiconductor IP, embedded software and advanced packaging such as 3D-IC. In other words, our industry is a “design ecosystem” feeding the manufacturing sector. The various players in our ecosystem are realizing that we can and should work together to increase the collective growth of our industry. Expect to see industry organizations serving as the intermediaries to bring these various constituents together.”

Bob Smith’s words acknowledge that the term “system” has taken a new meaning in EDA.  We are no longer talking about developing a hardware system, or even a hardware/software system.  A system today includes digital and analog hardware, software both at the system and application level, MEMS, third party IP, and connectivity and co-execution with other systems.  EDA vendors are morphing in order to accommodate these new requirements.  Change is difficult because it implies error as well as successes, and 2016 will be a year of changes.

Lucio Lanza, managing director of Lanza techVentures and a recipient of the Phil Kaufman award, describes it this way:

“We’ve gone from computers talking to each other to an era of PCs connecting people using PCs. Today, the connections of people and devices seem irrelevant. As we move to the Internet of Things, things will get connected to other things and won’t go through people. In fact, I call it the World of Things not IoT and the implications are vast for EDA, the semiconductor industry and society. The EDA community has been the enabler for this connected phenomenon. We now have a rare opportunity to be more creative in our thinking about where the technology is going and how we can assist in getting there in a positive and meaningful way.”

Ranjit Adhikary, director of Marketing at Cliosoft acknowledges the growing need for tools integration in his remarks:

“The world is currently undergoing a quiet revolution akin to the dot com boom in the late 1990s. There has been a growing effort to slowly but surely provide connectivity between various physical objects and enable them to share and exchange data and manage the devices using smartphones. The labors of these efforts have started to bear fruit and we can see that in the automotive and consumables industries. What this implies from a semiconductor standpoint is that the number of shipments of analog and RF ICs will grow at a remarkable pace and there will be increased efforts from design companies to have digital, analog and RF components in the same SoC. From an EDA standpoint, different players will also collaborate to share the same databases. An example of this would be Keysight Technologies and Cadence Designs Systems on OpenAccess libraries. Design companies will seek to improve the design methodologies and increase the use of IPs to ensure a faster turnaround time for SoCs. From an infrastructure standpoint a growing number of design companies will invest more in the design data and IP management to ensure better design collaboration between design teams located at geographically dispersed locations as well as to maximize their resources.”

Michiel Ligthart, president and chief operating officer at Verific Design Automation points to the need to integrate tools from various sources to achieve the most effective design flow:

“One of the more interesting trends Verific has observed over the last five years is the differentiation strategy adopted by a variety of large and small CAD departments. Single-vendor tool flows do not meet all requirements. Instead, IDMs outline their needs and devise their own design and verification flow to improve over their competition. That trend will only become more pronounced in 2016.”

New and Expanding Markets

The focus toward IoT applications has opened up new markets as well as expanded existing ones.  For example the automotive market is looking to new functionalities both in car and car-to-car applications.

Raik Brinkmann, president and chief executive officer at OneSpin Solutions wrote:

“OneSpin Solutions has witnessed the push toward automotive safety for more than two years. Demand will further increase as designers learn how to apply the ISO 26262 standard. I’m not sure that security will come to the forefront in 2016 because there are no standards as yet and ad hoc approaches will dominate. However, the pressure for security standards will be high, just as it was for ISO 26262 in automotive.”

Michael Buehler-Garcia, Senior Director of Marketing, Mentor Graphics Calibre Design Solutions, notes that many established process nodes once thought of as obsolete will instead see increased volume due to the technologies required to implement IoT architectures.

“As cutting-edge process nodes entail ever higher non-recurring engineering (NRE) costs, ‘More than Moore’ technologies are moving from the “press release” stage to broader adoption. One consequence of this adoption has been a renewed interest in more established processes. Historical older process node users, such as analog design, RFCMOS, and microelectromechanical systems (MEMS), are now being joined by silicon photonics, standalone radios, and standalone memory controllers as part of a 3D-IC implementation. In addition, the Internet of Things (IoT) functionality we crave is being driven by a “milli-cents for nano-acres of silicon,” which aligns with the increase in designs targeted for established nodes (130 nm and older). New physical verification techniques developed for advanced nodes can simplify life for design companies working at established nodes by reducing the dependency on human intervention. In 2016, we expect to see more adoption of advanced software solutions such as reliability checking, pattern matching, “smart” fill, advanced extraction solutions, “chip out” package assembly verification, and waiver processing to help IC designers implement more complex designs on established nodes. We also foresee this renewed interest in established nodes driving tighter capacity access, which in turn will drive increased use of design optimization techniques, such as DFM scoring, filling analysis, and critical area analysis, to help maximize the robustness of designs in established nodes.”

Warren Kurisu, Director of Product Management, Mentor Graphics Embedded Systems Division points to wearables, another sector within the IoT market, as an opportunity for expansion.

“We are seeing multiple trends. Wearables are increasing in functionality and complexity enabled by the availability of advanced low-power heterogeneous multicore architectures and the availability of power management tools. The IoT continues to gain momentum as we are now seeing a heavier demand for intelligent, customizable IoT gateways. Further, the emergence of IoT 2.0 has placed a new emphasis on end-to-end security from the cloud and gateway right down to the edge device.”

Power management is one of the areas that has seen significant concentration on the part of EDA vendors.  But not much has been said about battery technology.  Shreefal Mehta, president and CEO of Paper Battery Company offered the following observations.

“The year 2016 will be the year we see tremendous advances in energy storage and management.   The gap between the rate of growth of our electronic devices and the battery energy that fuels them will increase to a tipping point.   On average, battery energy density has only grown 12% while electronic capabilities have more than doubled annually.  The need for increased energy and power density will be a major trend in 2016.  More energy-efficient processors and sensors will be deployed into the market, requiring smaller, safer, longer-lasting and higher-performing energy sources. Today’s batteries won’t cut it.

Wireless devices and sensors that need pulses of peak power to transmit, compute and/or perform analog functions will continue to create a tension between the need for peak power pulses and long energy cycles. For example, cell phone transmission and Bluetooth peripherals are, as a whole, low power, but the peak power requirements are several orders of magnitude greater than the average power consumption.  Hence, new hybrid power solutions will begin to emerge, especially where energy-efficient delivery is needed alongside peak power and as the ratio of peak to average power grows significantly.

Traditional batteries will continue to improve in offering higher energy at lower prices, but current lithium ion will reach a limit in the balance between energy and power in a single cell with new materials and nanostructure electrodes being needed to provide high power and energy.  This situation is aggravated by the push towards physically smaller form factors where energy and power densities diverge significantly. Current efforts in various companies and universities are promising but will take a few more years to bring to market.

The supercapacitor market is poised for growth in 2016, with an expected CAGR of 19% through 2020.  Given the need for more efficient form factors, high energy density and peak power performance, a new form of supercapacitor will power the ever-increasing demands of portable electronics. The hybrid supercapacitor is the bridge between high-energy batteries and high-power supercapacitors. Because these devices offer higher energy than traditional supercapacitors and higher power than batteries, they may either be used in conjunction with or completely replace battery systems. Given the way we use our smartphones, supercapacitors will find a good use model there, as well as in applications ranging from transportation to enterprise storage.

Memory architectures in smartphones and tablets containing solid-state drives (SSDs) will increasingly manage the non-volatile cache in a manner that preserves content in the event of power failure. These devices handle large swaths of video, and the media data will be stored in RAM (backed by flash), which allows frequent overwrites without the wear-out degradation that would significantly reduce the life of the flash memory if it were used for all storage. To meet the data-integrity needs of this shadowed memory, supercapacitors will take a prominent role in supplying bridge power in the event of an energy-depleted battery, thereby adding significant value and performance to mobile entertainment and computing devices.
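Sizing such a bridge supercapacitor reduces to an energy balance: the usable energy between the charged voltage and the minimum operating rail must exceed the energy needed to flush the shadowed cache to flash. A minimal sketch, with invented component values:

```python
# Sketch of the energy balance for a supercapacitor bridging a cache flush
# (all component values are illustrative assumptions, not from the article).
def supercap_energy_j(c_farads, v_start, v_min):
    # Usable energy between charged voltage and minimum rail: E = 1/2 C (V1^2 - V2^2)
    return 0.5 * c_farads * (v_start**2 - v_min**2)

def flush_energy_j(flush_power_w, flush_time_s):
    # Energy needed to copy the RAM-shadowed cache into flash on power loss
    return flush_power_w * flush_time_s

cap_j = supercap_energy_j(c_farads=0.5, v_start=5.0, v_min=3.0)   # 4.0 J usable
need_j = flush_energy_j(flush_power_w=1.5, flush_time_s=2.0)      # 3.0 J required
print(cap_j >= need_j)  # True: this capacitor covers the flush with margin
```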

Finally, safety issues with lithium-ion batteries have become front and center and will continue to plague the industry and manufacturing environments. Flaming hoverboards and shipment and air-travel restrictions on lithium batteries render the future of personal battery power questionable. Improved testing and more regulation will come to pass; however, because of the widespread use of battery-powered devices, safety will remain a key factor. What we will see in 2016 is the emergence of the hybrid supercapacitor, which offers a high-capacity alternative to lithium batteries in terms of power efficiency. This alternative can operate over a wide temperature range, has a long cycle life and, most importantly, is safe.”

Greg Schmergel, CEO, Founder and President of memory-maker Nantero, Inc., points out that just as new power-storage devices will open new opportunities, so will new memory devices.

“With the traditional memories, DRAM and flash, nearing the end of the scaling roadmap, new memories will emerge and change memory from a standard commodity to a potentially powerful competitive advantage.  As an example, NRAM products such as multi-GB high-speed DDR4-compatible nonvolatile standalone memories are already being designed, giving new options to designers who can take advantage of the combination of nonvolatility, high speed, high density and low power.  The emergence of next-generation nonvolatile memory which is faster than flash will enable new and creative systems architectures to be created which will provide substantial customer value.”

Jin Zhang, Vice President of Marketing and Customer Relations at Oski Technology, is of the opinion that the formal methods sector is an excellent prospect for growing the EDA market.

“Formal verification adoption is growing rapidly worldwide, and that will continue into 2016. Not surprisingly, the U.S. market leads the way, with China a close second. Usage is especially apparent in China, where heavy investment has been made in the semiconductor industry, particularly in CPU designs. Many companies are starting to build internal formal groups, and Chinese project teams are discovering the benefits of improving design quality using Formal Sign-off Methodology.”

These market forces are fueling the growth of specific design areas that are supported by EDA tools.  In the companion article some of these areas will be discussed.

Mixed Signal Design and Verification for IoT Designs

Tuesday, November 17th, 2015

Mitch Heins, EDS Marketing Director, DSM division of Mentor Graphics

A typical Internet-of-Things (IoT) design consists of several different blocks, including one or more sensors, analog signal processing for those sensors, an analog-to-digital converter and a digital interface such as I2C. System integration and verification are challenging for these types of IoT designs, as they are typically a combination of two to three different ICs. The challenge is exacerbated by the fact that the system covers multiple domains, including analog, digital, RF and mechanical (for packaging), and requires different forms of multi-physics simulation to verify the sensors and actuators of an IoT design. The sensors and actuators are typically created as microelectromechanical systems (MEMS), which have a mechanical aspect, and there is a tight interaction between them and the package in which they are encapsulated.

The verification challenge is to have the right form of model available for each stage of the design and verification process, working with your EDA vendor's tool suite. Many high-volume IoT designs are now looking to integrate the microcontroller and radio on one die and the analog circuitry and sensors on a second die to reduce cost and footprint.

In many cases the latest IoT designs use onboard analog and digital circuitry with multiple sensors to do data fusion at the sensor, making for “smart sensors.” These ICs are made from scratch, meaning that the designers must create their own models for both system-level and device-level verification.

Tanner EDA by Mentor Graphics has partnered with SoftMEMS to offer a complete mixed-signal design and verification tool suite for these types of MEMS-centric IC designs. The Tanner Analog and MEMS tool suites offer a complete design-capture, simulation, implementation and verification flow for MEMS-based IoT designs. The Tanner AMS verification flow supports top-down hierarchical design with the ability to co-simulate multiple levels of design abstraction across analog, digital and mechanical environments. All design abstractions, simulations and resulting waveforms are controlled and viewed from a centrally integrated schematic cockpit, enabling easy design trade-offs and verification. Design abstractions can be used to swap in different models for system-level versus device-level verification tasks as different parts of the design are implemented. The system includes support for popular modeling languages such as Verilog-AMS and Verilog-A.

The logic abstraction of the design is tightly tied to the physical implementation of the design through a correct-by-construction design methodology using schematic-driven-layout with interactive DRC checking.  The Tanner/SoftMEMS solution uses the 2D mask layout to automatically create a correct-by-construction 3D model of the MEMS devices using a process technology description file.

Figure 1: Tanner Analog Mixed Signal Verification Cockpit

The 3D model is combined with similar 3D package models and is then used in Finite Element or Boundary Element Analysis engines to debug the functionality and manufacturability of the MEMS devices including mechanical, thermal, acoustic, electrical, electrostatic, magnetic and fluid analysis.

Figure 2: 3D layout & cross-section created by the Tanner SoftMEMS 3D Modeler

A key feature of the design flow is that the solution allows for the automatic creation of a compact Verilog-A model for the MEMS-Package combination from the FEA/BEA analysis that can be used to close the loop in final system-level verification using the same co-simulation cockpit and test benches that were used to start the design.

An additional level of productivity can be gained by using a parameterized library of MEMS building blocks from which the designer can more quickly build complex MEMS devices.

Figure 3: Tanner S-Edit Schematic Capture Mixed Mode Schematic of the IoT System

Each building block has an associated parameterized compact simulation model.  By structurally building the MEMS device from these building blocks, the designer is automatically creating a structural simulation model for the entire device that can be used within the verification cockpit.

Figure 4: Tanner SoftMEMS BasicPro Suite with MEMS Symbol and Simulation Library
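The building-block idea can be sketched abstractly: each parameterized block carries a compact model, and composing blocks composes the model. The spring/mass toy below only illustrates the principle; a real flow would emit Verilog-A compact models as described above:

```python
# Sketch of the parameterized-building-block idea: each block carries a
# compact model parameter, and composing blocks composes the device model.
# (A real flow would emit Verilog-A; springs/masses here are toy stand-ins.)
import math

class Spring:
    def __init__(self, k):          # stiffness, N/m
        self.k = k

class ProofMass:
    def __init__(self, m):          # mass, kg
        self.m = m

def resonant_hz(springs, mass):
    # Springs in parallel add stiffness; f = (1/2pi) * sqrt(k_total / m)
    k_total = sum(s.k for s in springs)
    return math.sqrt(k_total / mass.m) / (2 * math.pi)

device = ([Spring(2.5), Spring(2.5)], ProofMass(1e-6))  # two suspension beams
print(f"{resonant_hz(*device):.0f} Hz")  # ~356 Hz with these toy values
```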

Cortex-M processor Family at the Heart of IoT Systems

Saturday, October 25th, 2014

Gabe Moretti, Senior Editor

One cannot have a discussion about the semiconductor industry without hearing the word IoT. It is really not a word, as language lawyers will be quick to point out, but an abbreviation that stands for Internet of Things. And, of course, the abbreviation is fundamentally incorrect, since the “things” will be connected in a variety of ways, not just over the Internet. In fact it is already clear that devices, grouped to form an intelligent subsystem of the IoT, will be connected using a number of protocols such as 6LoWPAN, ZigBee, WiFi and Bluetooth. ARM has developed the Cortex®-M processor family, which is particularly well suited to providing processing power to devices that must consume very little power in their duties of physical data acquisition, an instrumental function of the IoT.

Figure 1. The heterogeneous IoT: lots of “things” inter-connected. (Courtesy of ARM)

Figure 1 shows the vision the semiconductor industry holds of the IoT. I believe the figure shows a goal the industry set for itself, and a very ambitious goal it is. At the moment the complete architecture of the IoT is undefined, and rightly so. The IoT re-introduces a paradigm first used when ASIC devices were thought of as the ultimate solution to everyone’s computational requirements. The business of IP started as an enhancement to application-specific hardware, and now general-purpose platforms constitute the core of most systems. IoT lets the application drive the architecture, and companies like ARM provide the core computational block with an off-the-shelf device like a Cortex MCU.

The ARM Cortex-M processor family is a range of scalable, compatible, energy-efficient, easy-to-use processors designed to help developers meet the needs of tomorrow’s smart and connected embedded applications. Those needs include delivering more features at a lower cost, increasing connectivity, better code reuse and improved energy efficiency. The ARM Cortex-M7 processor is the most recent and highest-performance member of the Cortex-M processor family. But while the Cortex-M7 sits at the heart of ARM partner SoCs for IoT systems, other connectivity IP is required to complete the intelligent SoC subsystem.

A collection of some of my favorite IoT-related IP follows.

Figure 2. The Cortex-M7 Architecture (Courtesy of ARM)

Development Ecosystem

To efficiently build a system, no matter how small, that can communicate with other devices, one needs IP.  ARM and Cadence Design Systems have had a long-standing collaboration in the area of both IP and development tools.  In September of this year the companies extended an already existing agreement covering more than 130 IP blocks and software.  The new agreement covers an expanded collaboration for IoT and wearable devices targeting TSMC’s ultra-low power technology platform. The collaboration is expected to enable the rapid development of IoT and wearable devices by optimizing the system integration of ARM IP and Cadence’s integrated flow for mixed-signal design and verification.

The partnership will deliver reference designs and physical design knowledge to integrate ARM Cortex processors, ARM CoreLink system IP, and ARM Artisan physical IP along with RF/analog/mixed-signal IP and embedded flash in the Virtuoso-VDI Mixed-Signal Open Access integrated flow for the TSMC process technology.

“The reduction in leakage of TSMC’s new ULP technology platform combined with the proven power-efficiency of Cortex-M processors will enable a vast range of devices to operate in ultra energy-constrained environments,” said Richard York, vice president of embedded segment marketing, ARM. “Our collaboration with Cadence enables designers to continue developing the most innovative IoT devices in the market.” One of the fundamental changes in design methodology is the aggregation of capabilities from different vendors into one distribution point, like ARM, that serves as the guarantor of a proven development environment.

Communication and Security

System developers need to know that there are a number of sources of IP when deciding on the architecture of a product.  In the case of IoT it is necessary to address both the transmission capabilities and the security of the data.

As a strong ARM partner, Synopsys provides low-power IP that supports a wide range of low-power features such as configurable shutdown and power modes. The DesignWare family of IP offers both digital and analog components that can be integrated with any Cortex-M MCU. Beyond the extensive list of digital logic, analog IP including ADCs and DACs, plus audio CODECs, plays an important role in IoT applications. Designers also have the opportunity to use Synopsys development and verification tools, which have a strong track record handling ARM-based designs.

The Tensilica group at Cadence has published a paper describing how to use Cadence IP to develop a Wi-Fi 802.11ac transceiver for WLAN (wireless local area network) use. This transceiver design is architected on a programmable platform consisting of Tensilica DSPs, using an anchor DSP from the ConnX BBE family of cores in combination with a smaller specialized DSP and dedicated hardware RTL. Thanks to the enhanced instruction set and superscalar pipeline in the Cortex-M7, plus the addition of floating-point DSP, Cadence radio IP works well with the Cortex-M7 MCU: intermediate-band processing, digital down-conversion, post-processing or WLAN provisioning can all be done by the Cortex-M7.

Accent S.A. is an Italian company that is focused on RF products.  Accent’s BASEsoc RF Platform for ARM enables pre-optimized, field-proven single chip wireless systems by serving as near-finished solutions for a number of applications.  This modular platform is easily customizable and supports integration of different wireless standards, such as ZigBee, Bluetooth, RFID and UWB, allowing customers to achieve a shorter time-to-market. The company claims that an ARM processor-based, complex RF-IC could be fully specified, developed and ramped to volume production by Accent in less than nine months.

Sonics offers a network on chip (NoC) solution that is both flexible in integrating various communication protocols and highly secure.   Figure 3 shows how the Sonics NoC provides secure communication in any SoC architecture.

Figure 3.  Security is Paramount in Data Transmission (Courtesy of Sonics)

According to Drew Wingard, Sonics CTO, “Security is one of the most important, if not the most important, considerations when creating IoT-focused SoCs that collect sensitive information or control expensive equipment and/or resources. ARM’s TrustZone does a good job securing the computing part of the system, but what about the communications, media and sensor/motor subsystems? SoC security goes well beyond the CPU and operating system. SoC designers need a way to ensure complete security for their entire design.”

Drew concludes, “The best way to accomplish SoC-wide security is by leveraging on-chip network fabrics like SonicsGN, which has built-in NoCLock features to provide independent, mutually secure domains that enable designers to isolate each subsystem’s shared resources. By minimizing the amount of secure hardware and software in each domain, NoCLock extends ARM TrustZone to provide increased protection and reliability, ensuring that subsystem-level security defects cannot be exploited to compromise the entire system.”

More examples exist, of course; this is not an exhaustive list of devices supporting protocols that can be used in the intelligent-home architecture. The intelligent home, together with wearable medical devices, is the most frequently cited example of an IoT application that could be implemented by 2020. In fact it is a sure bet that by the time the intelligent home is a reality, many more IP blocks to support the application will be available.

How Will Analog and Sensors Impact the IoT?

Thursday, October 23rd, 2014

By John Blyler, JB Systems Media

What challenges await designers and implementers on the monolithic mixed signal sensor side of the IoT equation? Several experts from the IoT ecosystem have differing viewpoints on these questions including Patrick Gill, Principal Research Scientist at Rambus; Ian Chen, Marketing, Systems, Applications, Software & Algorithms manager at Freescale; Pratul Sharma, Technical Marketing Manager for the IoT at ARM; and Diya Soubra, CPU Product Manager at ARM. What follows is a portion of the responses. — JB

Blyler: Many of the end nodes of the IoT will be previously unconnected objects, e.g., sensor systems. What analog IP is needed to enable these kinds of sensors?

Gill: The big three are power regulator ICs, wireless communications, and gating sensor events. Good switching regulators are important for devices where power is at a premium, for instance where power is scavenged from the environment or the battery won’t be recharged often (or ever). Power-efficient wireless communication, especially at low bit rates, is going to be very important too. There’s some interesting work in academia on radios with a very low duty cycle (see, “Ultra Low Power Impulse Radio Based Transceiver for Sensor Networks”). The trick to having power scale down with data rate is to have the sender and receiver wake up at precisely the same time.
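The wake-up synchronization point Gill makes can be illustrated with a toy receive-energy model: the guard window that covers clock drift between sender and receiver is pure overhead, so poor synchronization multiplies receive energy. All numbers are hypothetical:

```python
# Why synchronized wake-up matters for low-duty-cycle radios (illustrative
# numbers; not taken from any specific radio datasheet).
def rx_energy_mj(listen_mw, packet_ms, guard_ms):
    # The receiver must be awake for the packet plus a guard window covering
    # clock drift between sender and receiver; the guard time is pure overhead.
    return listen_mw * (packet_ms + guard_ms) / 1000.0

tight_sync = rx_energy_mj(listen_mw=30.0, packet_ms=2.0, guard_ms=0.5)
loose_sync = rx_energy_mj(listen_mw=30.0, packet_ms=2.0, guard_ms=20.0)
print(f"{loose_sync / tight_sync:.1f}x")  # poor sync multiplies receive energy
```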

Chen: Whereas networking thinks of sensor systems as end nodes from a topology perspective, sensor systems could be seen as source nodes from a data collection perspective. In short, they are responsible for converting the physical world into data people can use. As such, we will need precision analog to digital converters with offsets stable over temperature ranges, wireless and wired connectivity, and intelligent power management for optimal system power consumption. Many of these IPs are integrated into advanced sensor products but continuous improvements are always necessary.

Soubra: In addition to all existing types of analog IP, many new types will be needed to satisfy specific endpoint requirements for every vertical market. After successful field trials with a few thousand nodes – before the millions of nodes are installed – cost will be the next big factor. There will be a cost-reduction exercise in which the [sensor] module and the SoC are stripped of all items not required for that specific vertical market. Mass deployment dictates cost reduction, which dictates specialization. That’s why a general-purpose block aimed at catching multiple markets will burden each of them with added cost.

Blyler: What is your favorite or most challenging example of an IoT end-node application?

Chen: One of my favorites is the tire pressure monitoring sensor. Fleet managers are requiring data about the condition of their trucks to be uploaded to the cloud to help improve business efficiency. A tire pressure monitor includes pressure sensors, up to two accelerometers, a short-range RF transmitter and an MCU for signal processing, all in a 7 x 7 x 2.2 mm package operating on a coin cell battery for a 10-year life.
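A 10-year coin-cell life implies a strikingly small average current budget. Assuming a typical CR2032-class capacity of 220 mAh (an assumption for illustration, not a figure from Chen):

```python
# Rough current budget implied by a 10-year coin-cell life (the 220 mAh
# capacity is a typical CR2032 figure, assumed here for illustration).
capacity_mah = 220.0
years = 10.0
hours = years * 365.0 * 24.0

avg_current_ua = capacity_mah / hours * 1000.0  # mA -> uA
print(f"{avg_current_ua:.1f} uA average")       # ~2.5 uA average budget
```

Everything in the node, including the sensing, the DSP and the RF bursts, has to average out under that few-microamp ceiling, which is why the duty-cycled design style matters so much.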

Soubra: My favorite is the WiFi-connected sprinkler system (see Figure and link). It checks the current weather conditions before turning on the water. This is a lower cost approach and easier to do than putting a moisture sensor in every corner of the garden with a mesh network. I am sure newer models will also measure the amount of water used so we can track consumption.

Figure: Here’s an example of a favorite IoT end-node application – the Wi-Fi/BlueTooth-based wireless water sprinkler. This one is controlled with an ARM®-based GainSpan chipset.

Gill: I like the idea of smart windows, ventilation, heating and air conditioning. An automated home that knows the weather report (and air quality forecast), as well as when its occupants will be home, will be able to maintain a suitable environment using less energy. Nest is a good start, but there’s more to comfortable air than controlling the HVAC. Solar power is sexy, but solar hot-water generation can give an even better ROI.

Blyler: What analog-to-digital interface issues will designers face? Will additional features be needed on the microcontroller (ARM Cortex®-M) side to enable this analog end-node sensor data?

Chen: With MEMS sensors, the A-to-D converters must discern sub-picofarad capacitive changes. Because of the small signals and low power requirements, these converters are normally integrated with the sensor rather than on the Cortex-M processor.

Sharma: Low power will be a critical issue. Analog circuits typically draw DC currents. Designers will need to cut those DC currents from the μA range to the nA range by making the analog circuits more energy efficient and designing them mostly to operate in the sub-threshold region. But decreasing the power supply affects the voltage headroom and increases the design difficulty of the analog circuits. An additional challenge is that threshold voltages increase at cold temperatures, which degrades analog circuits and makes the voltage headroom even tighter. One solution is power gating of the analog circuit, but that increases the complexity of the validation process.

Soubra: Analog designers will be faced with having to become digital design experts. There are no (new) technological challenges; we just need to get that analog block on the Advanced Microcontroller Bus Architecture (AMBA). [Editor’s Note: AMBA is an ARM-supported, open-standard, on-chip interconnect specification for connecting functional blocks in system-on-a-chip (SoC) designs.] This approach may seem easy once the sequence is understood. In reality it is a bit harder on analog designers, since they need to step out of the analog design context and into a mixed analog-digital setting.

This means the use of new tools, new design flow, and more validation. (see, “Best Practices for Mixed Signal, RF and Microcontroller IoT” ) Luckily, the tools are 10X better than a few years ago. The Cortex-M processor already has what is required to connect to any analog core.

Gill: Picking up the earlier thread of a low-power sentinel, it could be useful for some chips to have configurable analog functions that detect changes in the input without needing to wake up an ADC. These would make sense from a commercial perspective if they allowed the microcontroller to monitor sensor data using only a few microwatts of power. Also, if security is an issue (and it will be for all sorts of things), low-power crypto cores could be useful to help relay data to a cloud base station.
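Gill's sentinel idea can be sketched behaviorally: a configurable comparator gates ADC wake-ups so the system burns ADC power only on events. A hypothetical model, not any specific vendor's analog block:

```python
# Behavioral sketch of the low-power "sentinel" idea: a configurable analog
# comparator gates ADC wake-ups so the MCU only burns ADC power on events.
# (Hypothetical model for illustration, not a specific vendor's block.)
def sentinel_wakeups(samples, threshold):
    awake = False
    wakeups = 0
    for x in samples:
        if not awake and x > threshold:
            wakeups += 1        # comparator trips -> power up the ADC
            awake = True
        elif awake and x <= threshold:
            awake = False       # signal back below threshold -> ADC sleeps
    return wakeups

signal = [0.1, 0.2, 0.9, 1.1, 0.8, 0.2, 0.1, 1.3, 0.4]
print(sentinel_wakeups(signal, threshold=1.0))  # ADC wakes only twice
```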

Blyler: Thank you.

Read the complete story at: IoT Embedded Systems

Analog Designers Face Low Power Challenges

Monday, June 16th, 2014

By John Blyler, Chief Content Officer

Can mixed signal designs achieve the low power needed by today’s tightly integrated SoCs and embedded IoT communication systems?

System-level power budgets are affected by SoC integration. Setting aside the digital scaling benefits of smaller geometric nodes, leading-edge SoCs achieve higher performance and tighter integration at decreased voltage levels, but at a cost. If power is assumed to be constant, then that cost is the increased current flow (P = VI) delivered to an ever larger number of processor cores. That’s why SoC power delivery and distribution remain a major challenge for chip architects, designers and verification engineers.
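The P = VI trade-off is worth making concrete: at constant power, every reduction in supply voltage raises the current the power delivery network must carry. Illustrative numbers:

```python
# P = V * I: at constant power, lowering the supply voltage raises the
# delivered current proportionally (illustrative SoC-scale numbers).
def current_a(power_w, voltage_v):
    return power_w / voltage_v

print(current_a(10.0, 1.2))  # 10 W at 1.2 V -> ~8.3 A
print(current_a(10.0, 0.8))  # same 10 W at 0.8 V -> 12.5 A
```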

As with digital engineers, analog and mixed-signal power designers must consider ways to lower power consumption early in the design phase. Beyond that, there are several common ways to reduce the analog mixed-signal portion of a power budget: low-power transmitter architectures; analog signal processing in low-voltage domains; and sleep-mode power reduction. (ARM’s Diya Soubra talks about mixed-signal sleep modes in “Digital Designers Grapple with Analog Mixed Signal Designs.”)

To efficiently explore the design space and make basic system-level trade-offs, SoC architects must adapt their modeling style to accommodate mixed-signal design and verification techniques. Such early efforts also help prevent the overdesign that occurs when globally distributed, cross-discipline (digital and analog) design teams don’t share information. For example, if designers are creating company-specific intellectual property (IP) cores, they may not be aware of how various IP subsystems are being used at the full-chip level.

Similarly, SoC package-level designers must understand how IP is used at the higher board level. Without this information, designers tend to overcompensate in their portion of the design, i.e., overdesign to ensure that their portion stays within the allocated power budget. But that leads to increased cost and power consumption.

Power Modes

From an architectural viewpoint, power systems really have two categories: active and idle/standby modes. With all of the physical-level (PHY) link integration occurring on SoCs, active power considerations must apply not only to digital but also to analog and input-output power designs.

Within the modes of active and idle/standby power are the many power states needed to balance the current load amongst various voltage islands. With increased performance and power demands on both digital and analog designs, there is growing interest in Time- and Frequency-Domain Power Distribution (or Delivery) Network (PDN) analysis. Vic Kulkarni, VP and GM at Apache Design, believes that a careful power budgeting at a high level enables the efficient design of the power delivery network in the downstream design flow. (See, “System Level Power Budgeting.”)

SoC power must be modeled throughout all aspects of the design implementation.  Although one single modeling approach won’t work, a number of vertical markets like automotive have found success using virtual prototypes.  “System virtual prototypes are increasingly a mix – not just of hardware and software, but also of digital, control, and analog components integrated with many different sensors and actuators,” observed Arun Mulpur of The Mathworks. (See, “Chip, Boards and Beyond: Modeling Hardware and Software.”)

Communications Modes

Next to driving a device screen or display, the communication subsystem tends to consume the most power on an SoC. That’s why several low-power initiatives have recently arisen, like Bluetooth Low Energy. Today there are three mainstream Bluetooth variants in use: Bluetooth 2.0 (often referred to as Bluetooth Classic); Bluetooth 4.0, which offers both a standard high-speed mode and a low-energy mode with limited data rate, referred to as Bluetooth LE; and a single-mode Bluetooth LE standard that keeps power consumption to a minimum. (See, “Wearable Technologies Meet Bluetooth Low Energy.”)

Power profiling of software is an important part of designing for cellular embedded systems. But cellular is only one connectivity option when designing the SoC or board-level device. Other communication types include short-range subsystems: Bluetooth, Zigbee, 6LoWPAN and mesh networks. Beyond Wi-Fi connectivity, there will be choices for fixed LAN-connected things using Ethernet or proprietary cabling systems. Further, there will be interplay among all these different ways to connect that must be simulated at an overall system level. (See, “Cellular vs. WiFi Embedded Design.”)

It has only been in the last decade or so that mixed-signal systems have played a dominant role in system-level power budgets. Today’s trend toward a highly connected Internet-of-Things (IoT) world means that low-power, mixed-signal communication design must begin early in the design phase to be considered part of the overall system-level power management process.

Digital Designers Grapple with Analog Mixed Signal Designs

Tuesday, June 10th, 2014

By John Blyler, Chief Content Officer

Today’s growth of analog and mixed-signal circuits in Internet of Things (IoT) applications raises questions about compiling C code, running simulations, low-power design, latency and IP integration.

Often, the most valuable portion of a technical seminar is found in the question-and-answer (Q&A) session that follows the actual presentation. For me, that was true during a recent talk on the creation of mixed signal devices for smart analog and the Internet of Things (IoT) applications. The speakers included Diya Soubra, CPU Product Marketing Manager and Joel Rosenberg, Platform Marketing Director at ARM; and Mladen Nizic, Engineering Director at Cadence. What follows is my paraphrasing of the Q&A session with reference to the presentation where appropriate. – JB

Question: Is it possible to run C and assembly code on an ARM® Cortex®-M0 processor in Cadence’s Virtuoso for custom IC design? Is there a C-compiler within the tool?

Nizic: The C compiler comes from Keil®, ARM’s software development kit. The ARM DS-5 Development Studio is an Eclipse-based tool suite for the company’s processors and SoCs. Once the code is compiled, it is run together with the RTL in our (Cadence) Incisive Mixed Signal simulator. The result is a simulation of the processor driven by an instruction set, with all digital peripherals simulated in RTL or at the gate level. The analog portions of the design are simulated at the appropriate behavioral level, i.e., Spice transistor level, electrical behavioral Verilog-A, or a real number model. [See the mixed signal trends section of “Moore’s Cycle, Fifth Horseman, Mixed Signals, and IP Stress.”]

You can use electrical behavioral models in Verilog-A, VHDL-A and -AMS to simulate the analog portions of the design. But real number models have become increasingly popular for this task. With real number models, you can model analog signals with variable amplitudes but discrete time steps, just as required by digital simulation. Simulations with a real number model representation for analog run at almost the same speed as digital simulation, with very little penalty in accuracy. For example, here (see Figure 1) are the results of a system simulation where we verify how quickly the Cortex-M0 would use a regulation signal to bring pressure to a specified value. It takes some 28 clock cycles. Other test bench scenarios might be explored, e.g., sending the Cortex-M0 into sleep mode if no changes in pressure are detected, or waking up the processor in a few clock cycles to stabilize the system. The point is that you can swap these real number models for electrical models in Verilog-A, or for transistor models, to redo your simulation and verify that the transistor-level implementation performs as expected.

Figure 1: The results of a Cadence simulation to verify the accuracy of a Cortex-M0 to regulate a pressure monitoring system. (Courtesy of Cadence)
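The real-number-model idea, analog amplitudes updated only on clock edges, can be caricatured in a few lines. The plant gain, setpoint and tolerance below are invented; the point is only the discrete-time style of the simulation, not Cadence's actual model:

```python
# Toy discrete-time sketch of the real-number-model idea described above:
# the "analog" pressure has a continuous amplitude but is updated only on
# clock edges, like a real number model in a digital simulator.
# (Plant gain, setpoint and tolerance are invented for illustration.)
def cycles_to_settle(setpoint, gain=0.2, tolerance=0.02, max_cycles=1000):
    pressure = 0.0
    for cycle in range(1, max_cycles + 1):
        error = setpoint - pressure
        pressure += gain * error   # controller nudges the plant each clock edge
        if abs(error) < tolerance * setpoint:
            return cycle
    return max_cycles

print(cycles_to_settle(setpoint=1.0))  # 19 cycles with these toy constants
```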

Question: Can you give some examples of applications where products are incorporating analog capabilities and how they are using them?

Soubra: Everything related to motor control, power conversion and power control are good examples of where adding a little bit of (processor) smarts placed next to the mixed signal input can make a big difference. This is a clear case of how the industry is shifting toward this analog integration.

Question: What capabilities does ARM offer to support the low power requirement for mixed signal SoC design?

Rosenberg: The answer to this question has both a memory and a logic component. In terms of memory, we offer extended-range register file compilers, which today can go up to 256k bits. Even though the performance requirement for a given application may be relatively low, the user will want to boot from flash into the SRAM or the register file instance. Then they will shut down the flash and execute out of RAM, as RAM offers significantly lower active and stand-by power than executing out of flash.

On the logic side, we offer a selection of 7-, 9- and 12-track libraries. Within those, there are three Vt options for high, nominal and lower speeds. Beyond that, we also offer power management kits that provide things like level shifters and power gating so the user can shut down inactive parts of the SoC circuit.

Question: What are the latency numbers for waking up different domains that have been put to sleep?

Soubra: The numbers that I shared during the presentation do not include any peripherals, since I have no way of knowing what peripherals will be added. In terms of who is consuming what power, the normal progression tends to be the processor, peripherals, bus and then the flash block. The wake-up latency depends upon the implementation itself. You can go from tens of cycles to multiples of tens, depending upon how the clocks and phase-locked loops (PLLs) are implemented. If we shut everything down, then a few cycles are required before everything goes off and before we can restart the processor. But we are talking about tens, not hundreds, of cycles.

Question: And for the wake-up clock latency?

Soubra: Wake-up is the same thing, because when the wake-up controller says "let's go," it has to restart all the clocks before it starts the processor. So it is exactly the same amount.

ARM Cortex-M low power technologies.

Question: What analog intellectual property (IP) components are offered by ARM and Cadence? How can designers integrate their own IP in the flow?

Nizic: At Cadence, through the acquisition of Cosmic Circuits, we have a portfolio of applicable analog and mixed-signal IP, e.g., converters, sensors and the like. We support all design views that are necessary for this kind of methodology, including model abstractions ranging from real number models to behavioral models. Like ARM's physical IP, all of ours are qualified for the various foundry nodes, so the process of integrating IP and silicon is fairly smooth.

Soubra: From ARM's point of view, we are totally focused on the digital part of the SoC, including the processors, bus infrastructure components, peripherals, and memory controllers, along with the physical IP (standard cell libraries, I/O cells, SRAM, etc.). Designers integrate the digital parts (processors, bus components, peripherals and memory controller) in the RTL design stages. They can also add the functional simulation models of memories and I/O cells to their simulations, together with models of analog components from Cadence. The actual physical IP is integrated during the various implementation stages (synthesis, placement and routing, etc.).

Question: How can designers integrate their own IP into the SoC?

Nizic: Some of the capabilities and flows that we described are actually used to create customer IP for later reuse in SoC integration. The same flow can be used whether the customer's IP is pure analog or contains a small amount of standard cell digital. For example, the behavioral modeling capabilities help package this IP for functional simulation in full-chip verification. But getting the IP ready is only one aspect of the flow.

From a physical abstract, it is possible to characterize the IP for use in a timing-driven mode. This approach lets you physically verify the IP within the SoC during full-chip verification.

EDA Industry Predictions for 2014 – Part 1

Tuesday, January 7th, 2014

Gabe Moretti, Contributing Editor

I always ask for predictions for the coming year, and generally get a good response.  But this year the volume of responses was so high that I could not possibly cover all of the material in one article.  So I will use two articles, one week apart, to record the opinions submitted.  This first section details the contributions of Andrew Yang of ANSYS – Apache Design, Mick Tegethoff of Berkeley Design Automation, Michael Munsey of Dassault Systèmes, Oz Levia of Jasper Design Automation, Joe Sawicki of Mentor Graphics, Grant Pierce and Jim Hogan of Sonics, and Bob Smith of Uniquify.

Andrew Yang – ANSYS Apache Design

For 2014 and beyond, we'll see increased connectivity of the electronic devices that are pervasive in our world today. This trend will continue to drive the existing mobile market growth as well as make an impact on upcoming automotive electronics. The mobile market will be dominated by a handful of chip manufacturers and those companies that support the mobile ecosystem. The automotive market is a big consumer of electronic components that are part of complex systems that help improve safety and reliability, as well as provide users with real-time interaction with their surroundings.

For semiconductor companies to remain competitive in these markets, they will need to take a "system" view of their design and verification. The traditional silo-based methodology, where each component of the system is designed and analyzed independently, can result in products with higher cost, poor quality, and schedule delays. Adopting system-level simulation will allow engineers to carry out early system prototyping, analyze the interaction of the components, and achieve optimal design tradeoffs.

Mick Tegethoff – Berkeley Design Automation

FinFET Technology will dominate the landscape in semiconductor design and verification as more companies adopt the technology. FinFET is a revolutionary change to device fabrication and modeling, requiring a more complex SPICE model and challenging the existing circuit behavior “rules of thumb” on which experienced designers have relied for years with planar devices.

Designers of complex analog/RF circuits, including PLLs, ADCs, SerDes, and transceivers, will need to relearn the device behavior in these applications and to explore alternative architectures. As a result, design teams will have to rely more than ever on accurate circuit verification tools that are foundry-certified for FinFET technology and have the performance and capacity to handle complex circuits including physical effects such as device noise, complex parasitics, and process variability.

In memory applications, FinFET technology will continue to drive change and challenge the status quo of “relaxed accuracy” simulation for IP characterization. Design teams are realizing that it is no longer acceptable to tolerate 2–5% inaccuracy in memory IP characterization. They are looking for verification tools that can deliver SPICE-like accuracy in a time frame on a par with their current solutions.

However, accurate circuit verification alone will not be sufficient. The impact of FinFET devices and new circuit architectures in analog, RF, mixed-signal, and memory applications demand full confidence from design teams that their circuits will meet specifications across all operational, environmental, and process conditions. As a result, designers will need to perform an increased amount of intelligent, efficient, and effective circuit characterization at the block level and at the project level to ensure that their designs meet rigorous requirements prior to silicon.

Michael Munsey – Dassault Systèmes

We at Dassault Systèmes see a few key trends coming to the semiconductor industry in 2014.

1) Extreme Design Collaboration: Complexity and cost in IC design and manufacturing now demand that semiconductor vendors engage an ever broader, more diverse pool of specialist designers and engineers.

At the same time, total costs for designing a cutting-edge integrated circuit can top $100 million for just one project. Respins can drive these costs even higher, adding huge profitability risks to new projects.

Technology-enabled extreme collaboration, over and above that in traditional PLM, will be required to assure manufacturable, profitable designs. Why? Because defects arise at the interchange between designers. And with more designers and more complex projects, the risk of misperceptions and miscommunications increases.

Pressure for design teams to interlock using highly specialized collaboration technology will increase in parallel with the financial risk of new semiconductor design projects.

2) Enterprise IP management: The move toward more platform-based designs to meet shrinking time-to-market windows, application-driven designs, and the increasing cost of producing new semiconductor devices will explode the market for IP and create a new market for enterprise IP management.

The deeper insight is how that IP will be acquired, used, configured, validated and otherwise managed. The challenges will be (1) building an intelligent process that enables project managers to evaluate the lowest cost IP blocks quickly and effectively; (2) managing the licensed IP so that configuration, integration and validation know-how is captured and easily reused; and (3) ensuring that the licensing and export compliance attributes of each licensed block of IP are visible to design decision makers.

3) Flexible Design to Manufacturing: In 2011, the Japanese earthquake forced a leading semiconductor company to cease manufacturing operations because their foundry was located close to Fukushima. That earthquake and the floods in Thailand have awakened semiconductor vendors to the stark reality that global supply chains can be dramatically and unexpectedly disrupted without any prior notice.

At the same time, with increased fragmentation and specialization occurring within the design and supply chain for integrated circuits, cross-chain information automation will be mission-critical.

Examples of issues that will require IT advances are (1) the increasing variations in how IP is transferred down the supply chain. It could be a file, a wafer, a die or a packaged IC – yet vendors will need to handle all options with equal efficiency to maximize profitability; and (2) the flexible packaging of an IC design for capture into ERP systems will become mandatory, in order to enable the necessary downstream supply chain flexibility.

Oz Levia – Jasper Design Automation

There are a few points that we at Jasper consider important for 2014.

1) Low power design and verification will continue to be a main challenge for SoC designers.

2) Heterogeneous multi-processor designs will continue to grow. Issues such as fabric and NoC design and verification will dominate.

3) The segments that will drive the semiconductor markets will likely continue to be in the mobile space(s) – phones, tablets, etc. But the server segment will also continue to increase in importance.

4) Processes will continue to evolve, but there is a lot of headroom in current processes before we run out of steam.

5) Consolidation will continue in the semiconductor market. More important, the strong will get stronger and the weak will get weaker. Increasingly, this is a winner-takes-all market, and we will see a big divide between the innovators and leaders and the laggards.

6) EDA will continue to see consolidation. Large EDA vendors will continue increasing investments in SIP and verification technologies. We will not see a radically different new technology or methodology. The total amount of investment in the EDA industry will continue to be low.

7) EDA will grow at a slow pace, but verification, emulation and SIP will grow faster than other segments.

Joseph Sawicki – Mentor Graphics

FinFETs will move from early technology development to early adopter designs. Over the last year, the major foundry ecosystems moved from alpha to production status for 16/14 nm, with its dual challenges of double patterning and FinFETs.  Fabless customers are just beginning to implement their first test chip tape-outs for 16/14 nm, and 2014 will see most of the 20 nm early-adopter customers also preparing their first 16/14 nm test chips.

FinFETs are driving a need for more accurate extraction tools, and EDA vendors are turning to 3D field solver technology to provide it. The trick is to also provide high performance that can deliver quick turnaround time even as the number of required extraction corners jumps from 5 to 15 and the number of gates doubles or triples.
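To see why turnaround time becomes the pressing issue, assume for a rough estimate (a simplification; real extraction tools exploit corner sharing and parallelism) that runtime scales linearly with corner count times gate count:

```python
def runtime_multiplier(corners_before, corners_after, gate_growth):
    """Relative extraction workload, assuming linear scaling in corners x gates."""
    return (corners_after / corners_before) * gate_growth

print(runtime_multiplier(5, 15, 2))  # corners triple, gates double -> 6x
print(runtime_multiplier(5, 15, 3))  # corners triple, gates triple -> 9x
```

Under that assumption, the jump from 5 to 15 corners combined with a doubling or tripling of gate count multiplies the extraction workload six- to nine-fold, which is what pushes vendors toward faster 3D field solvers.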

Test data and diagnosis of test fail data will play an increasingly important role in the ramp of new FinFET technologies. The industry will face new challenges as traditional approaches to failure analysis and defect isolation struggle to keep pace with changes in transistor structures. The opportunity is for software-based diagnosis techniques that leverage ATPG test fail data to pick up the slack and provide more accurate resolution for failure and yield analysis engineers.

16/14nm will also require more advanced litho hotspot checking and more complex and accurate fill structures to help ensure planarity and to also help deal with issues in etch, lithography, stress and rapid thermal annealing (RTA) processes.

In parallel with the production ramp at 20 nm, and 16 nm/14 nm test chips, 2014 will see the expansion of work across the ecosystem for 10 nm. Early process development and EDA tool development for 10 nm began in 2012, ramped up in intensity in 2013, and will be full speed ahead in 2014.

Hardware emulation has transitioned from the engineering lab to the datacenter where today’s virtual lab enables peripheral devices such as PCIe, USB, and Ethernet to exist in virtual space without specialized hardware or a maze of I/O cables. A virtual environment permits instant reconfiguration of the emulator for any design or project team and access by more users, and access from anywhere in the world, resulting in higher utilization and lower overall costs.

The virtual lab is also enabling increased verification coverage of SoC software and hardware, supporting end-to-end validation of SW drivers, for example. Hardware emulation is now employed throughout the entire mobile device supply chain, including embedded processor and graphics IP suppliers, mobile chip developers, and mobile phone and tablet teams. Embedded SW validation and debug will be the real growth engine driving the emulation business.

The Internet of Things (IoT) will add an entirely new level of information sources, allowing us to interact with and pull data from the things around us. The ability to control the state of virtually anything will change how we manage and interact with the world. The home, the factory, transportation, energy, food and many other aspects of life will be impacted and could lead to a new era of productivity increases and wealth creation.

Accordingly, we’ll see continued growth in the MEMS market driven by sensors for mobile phones, automobiles, and medical monitoring, and we’ll see silicon photonics solutions being implemented in data and communications centers to provide higher bandwidth backplane connectivity in addition to their current use in fiber termination.

Semiconductor systems enabling the IoT trend will need to respond to difficult cost, size and energy constraints to drive real ubiquity. For example, we'll need 3D packaging implementations that are an order of magnitude cheaper than current offerings. We'll need better ways to model complex system effects, putting a premium on tools that enable design and verification at the system level, and on engineers who can use them. Cost constraints will also drive innovation in test to ensure that multi-die package test doesn't explode part cost. Moreover, once we move from collecting data to actually interacting with the real world, the role of analog/mixed-signal, MEMS and other sensors in the semiconductor solution will become much greater.

Grant Pierce and Jim Hogan – Sonics

For a hint at what’s to come in the technology sector as a whole and the EDA and IP industries specifically, let’s first look at the global macro-economic situation. The single greatest macro-economic factor impacting the technology sector is energy. Electronic products need energy to work. Electronic designers and manufacturers need energy to do their jobs. In the recent past, energy has been expensive to produce, particularly in the US market due to our reliance on foreign oil imports. Today in the US, the cost of producing energy is falling while consumption is slowing. The US is on a path to energy self-sufficiency according to the Energy Department’s annual outlook. By 2015, domestic oil output is on track to surpass its peak set in 1970.

What does cheaper energy imply for the technology industry? More investment. Less money spent on purchasing energy abroad means more capital available to fund new ventures at home and around the world. The recovery of US financial markets is also restoring investors’ confidence in earning higher ROI through public offerings. As investors begin to take more risk and inject sorely needed capital into the technology sector, we expect to see a surge in new startups. EDA and IP industries will participate in this “re-birth” because they are critical to the success of technology sector as enabling technologies.

For an understanding of where the semiconductor IP business is going, let’s look at consumer technology. Who are the leaders in the consumer technology business today? Apple, Google, Samsung, Amazon, and perhaps a few others. Why? Because they possess semiconductor knowledge coupled with software expertise. In the case of Apple, for example, they also own content and its distribution, which makes them extremely profitable with higher recurring revenues and better margins. Content is king and the world is becoming application-centric. Software apps are content. Semiconductor IP is content. Those who own content, its publication and distribution, will thrive.

In the near term, the semiconductor IP business will continue to consolidate as major players compete to build and acquire broader content portfolios. For example, witness the recent Avago/LSI and Intel/Mindspeed deals. App-happy consumers have an insatiable appetite for the latest and greatest content and devices. Consumer technology product lifecycles place immense pressure on chip and system designers when developing and verifying the flexible hardware platforms that run these apps. Among their many important considerations are functionality, performance, power, security, and cost. System architectures and software definition and control are becoming the dominant source of product differentiation rather than hardware. The need for semiconductor IP that addresses these trends and accelerates time-to-volume production is growing. The need for EDA tools that help designers successfully use and efficiently reuse IP is also growing.

So what are the market opportunities for new IP and tool companies in the coming years? These days, talk about the Internet of Things (IoT) is plentiful, and there will be many different types of IP in this sensor-oriented market space. Perhaps the most interesting and promising of these IoT IP technologies will address our growing concerns about health and quality of life. The rise of wearable technologies that help monitor our vital signs and treat chronic health conditions promises to extend human lifespans beyond 100 years. As these technologies progress, surely the "Bionic Man" will become commonplace in the not-too-distant future. Personally, as members of the aging "Baby Boomer" generation, we hope that it happens sooner rather than later!

Bob Smith – Uniquify

I spent a good deal of 2013 traveling around the globe doing a series of seminars on double data rate (DDR) synchronous dynamic random-access memory (SDRAM), the ubiquitous class of memory chips. The seminars were meant to promote the fastest, smallest and lowest power state-of-the-art adaptive DDR IP technology. They highlighted how it can be used to enhance design speed and configured to minimize the design footprint and hit increasingly smaller low-power targets.

While marketing and promotion were on the agenda, the seminars were a great way to check in with designers to better understand their current DDR challenges and to identify a few trends that will emerge in 2014. What we learned may surprise more than a few semiconductor industry watchers and offers some tantalizing predictions for next year.

The biggest surprise was hearing designers confirm plans to go directly to LPDDR4 (that is, low-power DDR4, the latest JEDEC standard) and skip LPDDR3. The reasons are varied, but most noted that they're getting greater gains in performance and low power by jumping to LPDDR4, which is especially important for mobile applications. According to JEDEC, the LPDDR4 architecture was designed to be power neutral and to offer 2X the bandwidth of previous generations, with low pin count and low cost. It's also backward compatible.

Even though many of the designers we heard from agreed that DDR3 is now mainstream, even more are starting projects based on DDR4. Some are motivated to move to DDR4 even without the need for extra performance, for a practical, cost-driven reason: if they have a product with a lifetime of five years or more, they are concerned that DDR3 memory will cost more than DDR4 at some point. They have a choice: either build in DDR4 now in anticipation, or look for combination IP that handles both DDR3 and DDR4 in one block. Many have chosen the former.

One final prediction I offer for 2014 is that 28nm is the technology node that will be around for a long time to come. Larger semiconductor companies, however, are starting new projects at 14/16 nm, taking advantage of the emerging FinFET technology.

According to my worldwide sources, memories and FinFET will dominate the discussion in 2014, which means it will be a lively year.
