Posts Tagged ‘analog’


Cortex-M processor Family at the Heart of IoT Systems

Saturday, October 25th, 2014

Gabe Moretti, Senior Editor

One cannot have a discussion about the semiconductor industry without hearing the word IoT.  It is really not a word, as language lawyers will be ready to point out, but an abbreviation that stands for Internet of Things.  And, of course, the abbreviation is fundamentally imprecise, since the “things” will be connected in a variety of ways, not just through the Internet.  In fact it is already clear that devices, grouped to form an intelligent subsystem of the IoT, will be connected using a number of protocols, including 6LoWPAN, ZigBee, Wi-Fi, and Bluetooth.  ARM has developed the Cortex®-M processor family, which is particularly well suited to providing processing power to devices that must consume very little power while acquiring physical data, an instrumental function of the IoT.

Figure 1. The heterogeneous IoT: lots of “things” inter-connected. (Courtesy of ARM)

Figure 1 shows the vision the semiconductor industry holds of the IoT.  I believe the figure shows a goal the industry has set for itself, and a very ambitious goal it is.  At the moment the complete architecture of the IoT is undefined, and rightly so.  The IoT re-introduces a paradigm first used when ASIC devices were thought of as the ultimate solution to everyone’s computational requirements.  The IP business started as an enhancement to application-specific hardware, and now general-purpose platforms constitute the core of most systems.  The IoT lets the application drive the architecture, and companies like ARM provide the core computational block with an off-the-shelf device like a Cortex MCU.

The ARM Cortex-M processor family is a range of scalable, compatible, energy-efficient, easy-to-use processors designed to help developers meet the needs of tomorrow’s smart and connected embedded applications. Those needs include delivering more features at lower cost, increasing connectivity, better code reuse, and improved energy efficiency. The ARM Cortex-M7 processor is the most recent and highest-performance member of the Cortex-M processor family. But while the Cortex-M7 sits at the heart of ARM partner SoCs for IoT systems, other connectivity IP is required to complete the intelligent SoC subsystem.

A collection of some of my favorite IoT-related IP follows.

Figure 2. The Cortex-M7 Architecture (Courtesy of ARM)

Development Ecosystem

To efficiently build a system, no matter how small, that can communicate with other devices, one needs IP.  ARM and Cadence Design Systems have had a long-standing collaboration in the areas of both IP and development tools.  In September of this year the companies extended an existing agreement covering more than 130 IP blocks and software.  The new agreement covers an expanded collaboration for IoT and wearable devices targeting TSMC’s ultra-low power technology platform. The collaboration is expected to enable the rapid development of IoT and wearable devices by optimizing the system integration of ARM IP and Cadence’s integrated flow for mixed-signal design and verification.

The partnership will deliver reference designs and physical design knowledge to integrate ARM Cortex processors, ARM CoreLink system IP, and ARM Artisan physical IP along with RF/analog/mixed-signal IP and embedded flash in the Virtuoso-VDI Mixed-Signal Open Access integrated flow for the TSMC process technology.

“The reduction in leakage of TSMC’s new ULP technology platform combined with the proven power-efficiency of Cortex-M processors will enable a vast range of devices to operate in ultra energy-constrained environments,” said Richard York, vice president of embedded segment marketing, ARM. “Our collaboration with Cadence enables designers to continue developing the most innovative IoT devices in the market.”  One of the fundamental changes in design methodology is the aggregation of capabilities from different vendors into one distribution point, such as ARM, that serves as the guarantor of a proven development environment.

Communication and Security

System developers need to know that there are a number of sources of IP when deciding on the architecture of a product.  In the case of IoT it is necessary to address both the transmission capabilities and the security of the data.

As a strong partner of ARM, Synopsys provides low-power IP that supports a wide range of low-power features such as configurable shutdown and power modes. The DesignWare family of IP offers both digital and analog components that can be integrated with any Cortex-M MCU.  Beyond the extensive list of digital logic, analog IP, including ADCs, DACs, and audio CODECs, plays an important role in IoT applications. Designers also have the opportunity to use Synopsys development and verification tools, which have a strong track record handling ARM-based designs.

The Tensilica group at Cadence has published a paper describing how to use Cadence IP to develop a Wi-Fi 802.11ac transceiver for WLAN (wireless local area network) applications. This transceiver design is architected on a programmable platform consisting of Tensilica DSPs, using an anchor DSP from the ConnX BBE family of cores in combination with a smaller specialized DSP and dedicated hardware RTL. Thanks to the Cortex-M7’s enhanced instruction set, superscalar pipeline, and floating-point DSP support, Cadence’s radio IP works well with a Cortex-M7 MCU: intermediate-band processing, digital down-conversion, post-processing, or WLAN provisioning can all be handled by the Cortex-M7.

Accent S.A. is an Italian company that is focused on RF products.  Accent’s BASEsoc RF Platform for ARM enables pre-optimized, field-proven single chip wireless systems by serving as near-finished solutions for a number of applications.  This modular platform is easily customizable and supports integration of different wireless standards, such as ZigBee, Bluetooth, RFID and UWB, allowing customers to achieve a shorter time-to-market. The company claims that an ARM processor-based, complex RF-IC could be fully specified, developed and ramped to volume production by Accent in less than nine months.

Sonics offers a network on chip (NoC) solution that is both flexible in integrating various communication protocols and highly secure.   Figure 3 shows how the Sonics NoC provides secure communication in any SoC architecture.

Figure 3.  Security is Paramount in Data Transmission (Courtesy of Sonics)

According to Drew Wingard, Sonics CTO, “Security is one of the most important, if not the most important, considerations when creating IoT-focused SoCs that collect sensitive information or control expensive equipment and/or resources. ARM’s TrustZone does a good job securing the computing part of the system, but what about the communications, media and sensor/motor subsystems? SoC security goes well beyond the CPU and operating system. SoC designers need a way to ensure complete security for their entire design.”

Wingard concludes, “The best way to accomplish SoC-wide security is by leveraging on-chip network fabrics like SonicsGN, which has built-in NoCLock features to provide independent, mutually secure domains that enable designers to isolate each subsystem’s shared resources. By minimizing the amount of secure hardware and software in each domain, NoCLock extends ARM TrustZone to provide increased protection and reliability, ensuring that subsystem-level security defects cannot be exploited to compromise the entire system.”

More examples exist, of course; this is not an exhaustive list of devices supporting protocols that can be used in the intelligent home architecture.  The intelligent home, together with wearable medical devices, is the most frequently cited example of an IoT application that could be implemented by 2020.  In fact it is a safe bet that by the time the intelligent home is a reality, many more IP blocks to support the application will be available.

How Will Analog and Sensors Impact the IoT?

Thursday, October 23rd, 2014

By John Blyler, JB Systems Media

What challenges await designers and implementers on the monolithic mixed-signal sensor side of the IoT equation? Several experts from the IoT ecosystem offer differing viewpoints on these questions, including Patrick Gill, Principal Research Scientist at Rambus; Ian Chen, Marketing, Systems, Applications, Software & Algorithms manager at Freescale; Pratul Sharma, Technical Marketing Manager for the IoT at ARM; and Diya Soubra, CPU Product Manager at ARM. What follows is a portion of the responses. — JB

Blyler: Many of the end nodes of the IoT will be previously unconnected objects, e.g., sensor systems. What analog IP is needed to enable these kinds of sensors?

Gill: The big three are power regulator ICs, wireless communications, and gating sensor events. Good switching regulators are important for devices where power is at a premium, for instance where power is scavenged from the environment or the battery won’t be recharged often (or ever). Power-efficient wireless communication, especially at low bit rates, is going to be very important too. There’s some interesting work in academia on radios with a very low duty cycle (see, “Ultra Low Power Impulse Radio Based Transceiver for Sensor Networks”). The trick to having power scale down with data rate is to have the sender and receiver wake up at precisely the same time.
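
Gill’s duty-cycle point can be made concrete with a back-of-the-envelope calculation. The Python sketch below is purely illustrative; the supply voltage and the active and sleep currents are assumed placeholder values, not figures from any particular radio or from the discussion above.

    # Average power of a duty-cycled radio. All figures are assumed for illustration.
    ACTIVE_CURRENT_A = 10e-3   # assumed current while the radio is transmitting/receiving
    SLEEP_CURRENT_A = 1e-6     # assumed sleep current between scheduled wake-ups
    SUPPLY_V = 3.0             # assumed supply voltage

    def average_power(active_s, period_s):
        """Average power when the radio is active for active_s out of every period_s."""
        duty = active_s / period_s
        avg_current = duty * ACTIVE_CURRENT_A + (1 - duty) * SLEEP_CURRENT_A
        return SUPPLY_V * avg_current

    # Waking for 5 ms once per second vs. once per minute:
    for period_s in (1.0, 60.0):
        power_uw = average_power(5e-3, period_s) * 1e6
        print(f"wake every {period_s:4.0f} s -> average power {power_uw:7.1f} uW")

Cutting the wake-up rate from once per second to once per minute drops the average power by more than an order of magnitude, until the sleep-current floor dominates, which is exactly why precisely synchronized, infrequent wake-ups matter so much.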

Chen: Whereas networking thinks of sensor systems as end nodes from a topology perspective, sensor systems could be seen as source nodes from a data collection perspective. In short, they are responsible for converting the physical world into data people can use. As such, we will need precision analog to digital converters with offsets stable over temperature ranges, wireless and wired connectivity, and intelligent power management for optimal system power consumption. Many of these IPs are integrated into advanced sensor products but continuous improvements are always necessary.

Soubra: In addition to all existing types of analog IP, many new types will be needed to satisfy specific endpoint requirements for every vertical market. After successful field trials with a few thousand nodes – before the millions of nodes are installed – cost will be the next big factor. There will be a cost reduction exercise where the [sensor] module and the SoC are stripped of all items that are not required for that specific vertical market. Mass deployment dictates cost reduction, which dictates specialization. That is why a general-purpose block intended to catch multiple markets will burden each of them with added cost.

Blyler: What is your favorite or most challenging example of an IoT end-node application?

Chen: One of my favorites is the tire pressure monitoring sensor. Fleet managers are requiring data about the condition of their trucks to be uploaded to the cloud to help improve business efficiency. A tire pressure monitor includes pressure sensors, up to two accelerometers, a short-range RF transmitter, and an MCU for signal processing, all in a 7 x 7 x 2.2 mm package operating from a coin cell battery for a 10-year life.
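
To see how tight Chen’s power budget is, a rough coin-cell calculation helps. The numbers below are assumptions for illustration (a CR2032-class cell holds roughly 220 mAh with some derating); they are not specifications of any actual tire-pressure monitor.

    # Rough average-current budget for a 10-year coin-cell life.
    # Cell capacity and derating are assumptions for illustration only.
    CELL_CAPACITY_MAH = 220.0   # assumed CR2032-class coin cell
    USABLE_FRACTION = 0.7       # assumed derating for temperature, self-discharge, end-of-life voltage
    LIFETIME_YEARS = 10.0

    usable_mah = CELL_CAPACITY_MAH * USABLE_FRACTION
    lifetime_hours = LIFETIME_YEARS * 365.25 * 24
    average_budget_ua = usable_mah / lifetime_hours * 1000

    print(f"usable charge   : {usable_mah:.0f} mAh")
    print(f"lifetime        : {lifetime_hours:.0f} hours")
    print(f"average current : {average_budget_ua:.2f} uA")

An average drain of only a couple of microamps over ten years is what forces the heavy reliance on sleep modes and event-driven wake-ups discussed throughout this article.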

Soubra: My favorite is the WiFi-connected sprinkler system (see Figure and link). It checks the current weather conditions before turning on the water. This is a lower cost approach and easier to do than putting a moisture sensor in every corner of the garden with a mesh network. I am sure newer models will also measure the amount of water used so we can track consumption.

Figure: Here’s an example of a favorite IoT end-node application – the Wi-Fi/Bluetooth-based wireless water sprinkler. This one is controlled with an ARM®-based GainSpan chipset.

Gill: I like the idea of smart windows, ventilation, heating, and air conditioning. An automated home that knows the weather report (and air quality forecast), as well as when its occupants will be home, will be able to maintain a suitable environment for its occupants using less energy. Nest is a good start, but there’s more to comfortable air than controlling the HVAC. Solar power is sexy, but solar hot water generation can give an even better ROI.

Blyler: What analog-to-digital interfaces issues will be faced by designers? Will additional features be needed on the microcontroller (ARM Cortex®-M) side to enable this analog end-node sensor data?

Chen: With MEMS sensors, A-to-D converters must discern sub-picofarad capacitance changes. Because of the small signal and low power requirements, these converters are normally integrated with the sensor rather than on the Cortex-M processor.

Sharma: Low power will be a critical issue. Analog circuits typically draw DC currents. The designer will need to cut those DC currents from the μA range to the nA range by making the analog circuits more energy efficient and designing them to operate mostly in the sub-threshold region. But decreasing the power supply reduces the voltage headroom and increases the design difficulty of the analog circuits. An additional challenge is that threshold voltages increase at cold temperatures, which degrades analog performance and makes the voltage headroom even tighter. One solution is power gating of the analog circuit, but that will increase the complexity of the validation process.

Soubra: Analog designers will be faced with having to become digital design experts. There are no (new) technological challenges; we just need to get that analog block onto the Advanced Microcontroller Bus Architecture (AMBA). [Editor’s Note: AMBA is an ARM-supported, open-standard, on-chip interconnect specification for connecting functional blocks in system-on-chip (SoC) designs.] This approach may seem easy once we understand the sequence. In reality it is a bit harder on analog designers, since they need to step out of the analog design context and into a mixed digital-analog setting.

This means the use of new tools, new design flow, and more validation. (see, “Best Practices for Mixed Signal, RF and Microcontroller IoT” ) Luckily, the tools are 10X better than a few years ago. The Cortex-M processor already has what is required to connect to any analog core.

Gill: Picking up the earlier thread of a low-power sentinel, it could be useful for some chips to have configurable analog functions that detect changes in the input without needing to wake up an ADC. These would make sense from a commercial perspective if they allowed the microcontroller to monitor sensor data using only a few microwatts of power. Also, if security is an issue (and it will be for all sorts of things), low-power crypto cores could be useful to help relay data to a cloud base station.

Blyler: Thank you.

Read the complete story at: IoT Embedded Systems

Analog Designers Face Low Power Challenges

Monday, June 16th, 2014

By John Blyler, Chief Content Officer

Can mixed signal designs achieve the low power needed by today’s tightly integrated SoCs and embedded IoT communication systems?

System-level power budgets are affected by SoC integration. Setting aside the digital scaling benefits of smaller geometry nodes, leading-edge SoCs achieve higher performance and tighter integration at decreased voltage levels, but at a cost. If power is assumed to be constant, that cost is the increased current flow (P = VI) that must be delivered to an ever larger number of processor cores. That’s why SoC power delivery and distribution remain a major challenge for chip architects, designers, and verification engineers.
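
The P = VI argument is easy to quantify. The power and voltage values in this short sketch are arbitrary placeholders chosen only to show the scaling, not figures from the article.

    # If total power is held constant, lowering the supply voltage raises the delivered current.
    # Numbers are placeholders chosen only to illustrate P = V * I.
    POWER_W = 2.0   # assumed constant SoC power budget

    for supply_v in (1.2, 1.0, 0.8):
        current_a = POWER_W / supply_v   # I = P / V
        print(f"V = {supply_v:.1f} V -> I = {current_a:.2f} A")

For the same power budget, a 0.8 V supply must deliver 1.5 times the current of a 1.2 V supply, which is why IR drop and power-grid design loom so large at advanced nodes.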

As with digital engineers, analog and mixed-signal power designers must consider ways to lower power consumption early in the design phase. Beyond that consideration, there are several common ways to reduce the analog mixed-signal portion of a power budget. These include low-power transmitter architectures; analog signal processing in low-voltage domains; and sleep mode power reduction. (ARM’s Diya Soubra talks about mixed-signal sleep modes in “Digital Designers Grapple with Analog Mixed Signal Designs.”)

To efficiently explore the design space and make basic system-level trade-offs, SoC architects must adapt their modeling style to accommodate mixed-signal design and verification techniques. Such early efforts will also help prevent the overdesign that occurs when globally distributed, cross-discipline (digital and analog) design teams don’t share information. For example, if designers are creating company-specific intellectual property (IP) cores, they may not be aware of how various IP subsystems are being used at the full-chip level.

Similarly, SoC package-level designers must also understand how IP is used at the higher board level. Without this information, designers tend to overcompensate in their portion of the design, i.e., overdesign to ensure their portion stays within the allocated power budget. But that leads to increased cost and power consumption.

Power Modes

From an architectural viewpoint, power systems really have two categories: active and idle/standby modes. With all of the physical-layer (PHY) link integration occurring on SoCs, active power considerations must apply not only to digital but also to analog and input-output power designs.

Within the modes of active and idle/standby power are the many power states needed to balance the current load amongst various voltage islands. With increased performance and power demands on both digital and analog designs, there is growing interest in Time- and Frequency-Domain Power Distribution (or Delivery) Network (PDN) analysis. Vic Kulkarni, VP and GM at Apache Design, believes that careful power budgeting at a high level enables efficient design of the power delivery network in the downstream design flow. (See, “System Level Power Budgeting.”)

SoC power must be modeled throughout all aspects of the design implementation.  Although one single modeling approach won’t work, a number of vertical markets like automotive have found success using virtual prototypes.  “System virtual prototypes are increasingly a mix – not just of hardware and software, but also of digital, control, and analog components integrated with many different sensors and actuators,” observed Arun Mulpur of The Mathworks. (See, “Chip, Boards and Beyond: Modeling Hardware and Software.”)

Communications Modes

Next to driving a device screen or display, the communication subsystem tends to consume the most power on an SoC. That’s why several low-power initiatives have recently arisen, such as Bluetooth Low Energy. Today there are three mainstream Bluetooth variants in use: Bluetooth 2.0 (often referred to as Bluetooth Classic); Bluetooth 4.0, which offers both a standard high-speed mode and a low-energy mode with limited data rate, referred to as Bluetooth LE; and single-mode Bluetooth LE, which keeps power consumption to a minimum.  (See, “Wearable Technologies Meet Bluetooth Low Energy.”)

Power profiling of software is an important part of designing for cellular embedded systems. But cellular is only one connectivity option when designing the SoC or board-level device. Other communication types include short-range subsystems such as Bluetooth, ZigBee, 6LoWPAN, and mesh networks. Where Wi-Fi connectivity is not needed, there will be choices for fixed, LAN-connected things using Ethernet or proprietary cabling systems. Further, there will be interplay among all these different ways to connect that must be simulated at an overall system level.  (See, “Cellular vs. WiFi Embedded Design.”)

It has only been in the last decade or so that mixed-signal systems have played a dominant role in system-level power budgets. Today’s trend toward a highly connected Internet of Things (IoT) world means that low-power, mixed-signal communication design must begin early in the design phase and be treated as part of the overall system-level power management process.

Digital Designers Grapple with Analog Mixed Signal Designs

Tuesday, June 10th, 2014

By John Blyler, Chief Content Officer

Today’s growth of analog and mixed-signal circuits in Internet of Things (IoT) applications raises questions about compiling C code, running simulations, low-power designs, latency, and IP integration.

Often, the most valuable portion of a technical seminar is found in the question-and-answer (Q&A) session that follows the actual presentation. For me, that was true during a recent talk on the creation of mixed-signal devices for smart analog and Internet of Things (IoT) applications. The speakers included Diya Soubra, CPU Product Marketing Manager, and Joel Rosenberg, Platform Marketing Director, at ARM; and Mladen Nizic, Engineering Director at Cadence. What follows is my paraphrasing of the Q&A session, with reference to the presentation where appropriate. – JB

Question: Is it possible to run C and assembly code on an ARM® Cortex®-M0 processor in Cadence’s Virtuoso for custom IC design? Is there a C-compiler within the tool?

Nizic: The C compiler comes from Keil®, ARM’s software development kit. The ARM DS-5 Development Studio is an Eclipse-based tool suite for the company’s processors and SoCs. Once the code is compiled, it is run together with the RTL in our (Cadence) Incisive mixed-signal simulator. The result is a simulation of the processor driven by an instruction set, with all digital peripherals simulated in RTL or at the gate level. The analog portions of the design are simulated at the appropriate level of abstraction, i.e., SPICE transistor level, electrical behavioral Verilog-A, or a real number model. [See the mixed signal trends section of “Moore’s Cycle, Fifth Horseman, Mixed Signals, and IP Stress.”]

You can use electrical behavioral models such as Verilog-A or VHDL-AMS to simulate the analog portions of the design. But real number models have become increasingly popular for this task. With real number models, you can model analog signals with variable amplitudes but discrete time steps, just as required by digital simulation. Simulations with a real number model representation for analog run at almost the same speed as the digital simulation and with very little penalty in accuracy. For example, here (see Figure 1) are the results of a system simulation where we verify how quickly the Cortex-M0 would use a regulation signal to bring pressure to a specified value. It takes some 28 clock cycles. Other test bench scenarios might be explored, e.g., sending the Cortex-M0 into sleep mode if no changes in pressure are detected, or waking up the processor in a few clock cycles to stabilize the system. The point is that you can swap these real number models for electrical models in Verilog-A, or for transistor models, and redo your simulation to verify that the transistor-level implementation performs as expected.

Figure 1: The results of a Cadence simulation to verify the accuracy of a Cortex-M0 to regulate a pressure monitoring system. (Courtesy of Cadence)
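
As a conceptual companion to the real number model discussion above, the sketch below mimics the idea in plain Python: the analog pressure is a real value updated once per digital clock cycle, and a trivial on/off regulator stands in for the Cortex-M0 firmware. The plant constants, the controller, and the resulting cycle count are all invented for illustration; this is not the Cadence test bench shown in Figure 1.

    # Conceptual real-number-model sketch: the analog quantity is a real value
    # advanced in discrete time steps while a simple digital decision acts on it.
    SETPOINT = 1.0          # target pressure (arbitrary units)
    LEAK_PER_CYCLE = 0.02   # assumed pressure lost per clock cycle
    PUMP_STEP = 0.08        # assumed pressure added per cycle while the pump is on

    def cycles_to_setpoint(max_cycles=200):
        pressure = 0.0
        for cycle in range(1, max_cycles + 1):
            pump_on = pressure < SETPOINT                           # digital decision, once per clock
            pressure += (PUMP_STEP if pump_on else 0.0) - LEAK_PER_CYCLE   # real-valued analog update
            if pressure >= SETPOINT:
                return cycle
        return max_cycles

    print(f"setpoint reached after {cycles_to_setpoint()} clock cycles")

Swapping this abstract plant for an electrical or transistor-level model is the verification step Nizic describes, with the real number version giving near-digital simulation speed along the way.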

Question: Can you give some examples of applications where products are incorporating analog capabilities and how they are using them?

Soubra: Everything related to motor control, power conversion, and power control is a good example of where adding a little bit of (processor) smarts next to the mixed-signal input can make a big difference. This is a clear case of how the industry is shifting toward analog integration.

Question: What capabilities does ARM offer to support the low power requirement for mixed signal SoC design?

Rosenberg: The answer to this question has both a memory and logic component. In terms of memory, we offer the extended range register file compilers which today can go up to 256k bits. Even though the performance requirement for a given application may be relatively low, the user will want to boot from the flash into the SRAM or the register file instance. Then they will shut down the flash and execute out of the RAM as the RAM offers significantly lower active as well as stand-by power compared to executing out of flash.

On the logic side, we offer a selection of 7-, 9-, and 12-track libraries. Within each, there are three Vt options for high, nominal, and lower speeds. Beyond that we also offer power management kits that provide things like level shifters and power gating, so the user can shut down inactive parts of the SoC.

Question: What are the latency numbers for waking up different domains that have been put to sleep?

Soubra: The numbers that I shared during the presentation do not include any peripherals, since I have no way of knowing what peripherals will be added. In terms of who is consuming what power, the normal progression tends to be the processor, peripherals, bus, and then the flash block. The wake-up latency depends upon the implementation itself. You can go from tens of cycles to multiples of tens depending upon how the clocks and phase-locked loops (PLLs) are implemented. If we shut everything down, then a few cycles are required before everything goes off and before we can restart the processor. But we are talking about tens, not hundreds, of cycles.

Question: And for the wake-up clock latency?

Soubra: Wake-up is the same thing, because when the wake-up controller says “lets go,” it has to restart all the clocks before it starts the processor. So it is exactly the same amount.

Figure: ARM Cortex-M low power technologies.

Question: What analog intellectual property (IP) components are offered by ARM and Cadence? How can designers integrate their own IP in the flow?

Nizic: At Cadence, through the acquisition of Cosmic, we have a portfolio of applicable analog and mixed signal IP, e.g., converters, sensors and the like. We support all design views that are necessary for this kind of methodology including model abstracts from real number to behavioral models. Like ARM’s physical IP, all of ours are qualified for the various foundry nodes so the process of integrating IP and silicon is fairly smooth.

Soubra: From ARM’s point-of-view, we are totally focused on the digital part of the SoC, including the processors, bus infrastructure components, peripherals, and memory controllers, as well as the physical IP (standard cell libraries, I/O cells, SRAM, etc.). Designers integrate the digital parts (processors, bus components, peripherals and memory controller) in RTL design stages. Also, they can add the functional simulation models of memories and I/O cells to simulations, together with models of analog components from Cadence. The actual physical IP is integrated during the various implementation stages (synthesis, placement and routing, etc.).

Question: How can designers integrate their own IP into the SoC?

Nizic: Some of the capabilities and flows that we described are actually used to create customer IP for later reuse in SoC integration. There is an IP-centric flow that can be used, whether the customer’s IP is pure analog or contains a small amount of standard-cell digital. For example, the behavioral modeling capabilities help package this IP for functional simulation in full-chip verification. But getting the IP ready is only one aspect of the flow.

From a physical abstract it’s possible to characterize the IP for use in timing driven mode. This approach would allow you to physically verify the IP on the SoC for full chip verification.

EDA Industry Predictions for 2014 – Part 1

Tuesday, January 7th, 2014

Gabe Moretti, Contributing Editor

I always ask for predictions for the coming year, and generally get a good response.  But this year the volume of responses was so high that I could not possibly cover all of the material in one article.  So I will use two articles, published one week apart, to record the opinions submitted.   This first section details the contributions of Andrew Yang of ANSYS – Apache Design, Mick Tegethoff of Berkeley Design Automation, Michael Munsey of Dassault Systèmes, Oz Levia of Jasper Design Automation, Joe Sawicki of Mentor Graphics, Grant Pierce and Jim Hogan of Sonics, and Bob Smith of Uniquify.

Andrew Yang – ANSYS Apache Design

For 2014 and beyond, we’ll see increased connectivity of the electronic devices that are pervasive in our world today. This trend will continue to drive the existing mobile market growth as well as make an impact on upcoming automotive electronics. The mobile market will be dominated by a handful of chip manufacturers and those companies that support the mobile ecosystem. The automotive market is a big consumer of electronics components that are part of a complex system that help improve safety and reliability, as well as provide users with real-time interaction with their surroundings.

For semiconductor companies to remain competitive in these markets, they will need to take a “system” view for their design and verification. The traditional silo-based methodology, where each component of the system is designed and analyzed independently, can result in products with higher cost, poor quality, and schedule delays. Adopting system-level simulation will allow engineers to carry out early system prototyping, analyze the interaction of each of the components, and achieve optimal design tradeoffs.

Mick Tegethoff – Berkeley Design Automation

FinFET Technology will dominate the landscape in semiconductor design and verification as more companies adopt the technology. FinFET is a revolutionary change to device fabrication and modeling, requiring a more complex SPICE model and challenging the existing circuit behavior “rules of thumb” on which experienced designers have relied for years with planar devices.

Designers of complex analog/RF circuits, including PLLs, ADCs, SerDes, and transceivers, will need to relearn the device behavior in these applications and to explore alternative architectures. As a result, design teams will have to rely more than ever on accurate circuit verification tools that are foundry-certified for FinFET technology and have the performance and capacity to handle complex circuits including physical effects such as device noise, complex parasitics, and process variability.

In memory applications, FinFET technology will continue to drive change and challenge the status quo of “relaxed accuracy” simulation for IP characterization. Design teams are realizing that it is no longer acceptable to tolerate 2–5% inaccuracy in memory IP characterization. They are looking for verification tools that can deliver SPICE-like accuracy in a time frame on a par with their current solutions.

However, accurate circuit verification alone will not be sufficient. The impact of FinFET devices and new circuit architectures in analog, RF, mixed-signal, and memory applications demand full confidence from design teams that their circuits will meet specifications across all operational, environmental, and process conditions. As a result, designers will need to perform an increased amount of intelligent, efficient, and effective circuit characterization at the block level and at the project level to ensure that their designs meet rigorous requirements prior to silicon.

Michael Munsey – Dassault Systèmes

We at Dassault Systèmes see a few key trends coming to the semiconductor industry in 2014.

1) Extreme Design Collaboration: Complexity and cost in IC design and manufacturing now demand that semiconductor vendors engage an ever broader, more diverse pool of specialist designers and engineers.

At the same time, total costs for designing a cutting-edge integrated circuit can top $100 million for just one project. Respins can drive these costs even higher, adding huge profitability risks to new projects.

Technology-enabled extreme collaboration, over and above that in traditional PLM, will be required to assure manufacturable, profitable designs. Why? Because defects arise at the interchange between designers. And with more designers and more complex projects, the risk of misperceptions and miscommunications increases.

Pressure for design teams to interlock using highly specialized collaboration technology will increase in parallel with the financial risk of new semiconductor design projects.

2) Enterprise IP management: The move toward more platform-based designs to meet shrinking time-to-market windows, application-driven designs, and the increasing cost of producing new semiconductor devices will explode the market for IP and create a new market for enterprise IP management.

The deeper insight is how that IP will be acquired, used, configured, validated and otherwise managed. The challenges will be (1) building an intelligent process that enables project managers to evaluate the lowest-cost IP blocks quickly and effectively; (2) managing the licensed IP so that configuration, integration, and validation know-how is captured and easily reused; and (3) ensuring that the licensing and export compliance attributes of each licensed block of IP are visible to design decision makers.

3) Flexible Design to Manufacturing: In 2011, the Japanese earthquake forced a leading semiconductor company to cease manufacturing operations because their foundry was located close to Fukushima. That earthquake and the floods in Thailand have awakened semiconductor vendors to the stark reality that global supply chains can be dramatically and unexpectedly disrupted without any prior notice.

At the same time, with increased fragmentation and specialization occurring within the design and supply chain for integrated circuits, cross-chain information automation will be mission-critical.

Examples of issues that will require IT advances are (1) the increasing variations in how IP is transferred down the supply chain. It could be a file, a wafer, a die or a packaged IC – yet vendors will need to handle all options with equal efficiency to maximize profitability; and (2) the flexible packaging of an IC design for capture into ERP systems will become mandatory, in order to enable the necessary downstream supply chain flexibility.

Oz Levia – Jasper Design Automation

There are a few points that we at Jasper consider important for 2014.

1) Low power design and verification will continue to be a main challenge for SoC designers.

2) Heterogeneous multi-processor designs will continue to grow. Issues such as fabric and NoC design and verification will dominate.

3) The segments that will drive the semiconductor markets will likely continue to be in the mobile space(s) – phones, tablets, etc. But the server segment will also continue to increase in importance.

4) Process technology will continue to evolve, but there is a lot of headroom in current processes before we run out of steam.

5) Consolidation will continue in the semiconductor market. More important, the strong will get stronger and the weak will get weaker. Increasingly this is a winner takes all market and we will see a big divide between innovators and leaders and laggards.

6) EDA will continue to see consolidation.  Large EDA vendors will continue increasing investments in SIP and verification technologies. We will not see a new, radically different technology or methodology. The total amount of investment in the EDA industry will continue to be low.

7) EDA will grow at a slow pace, but verification, emulation, and SIP will grow faster than other segments.

Joseph Sawicki – Mentor Graphics

FinFETs will move from early technology development to early-adopter designs. Over the last year, the major foundry ecosystems moved from alpha to production status for 16/14 nm, with its dual challenges of double patterning and FinFETs.  Fabless customers are just beginning to implement their first test chip tape-outs for 16/14 nm, and 2014 will see most of the 20 nm early-adopter customers also preparing their first 16/14 nm test chips.

FinFETs are driving a need for more accurate extraction tools, and EDA vendors are turning to 3D field solver technology to provide it. The trick is to also provide high performance that can deliver quick turnaround time even as the number of required extraction corners jumps from 5 to 15 and the number of gates doubles or triples.
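
A rough scaling estimate shows why extraction performance matters so much at 16/14 nm. The assumption that runtime grows linearly with corner count and design size is a simplification made here for illustration only.

    # Rough extraction-workload scaling, assuming runtime grows linearly with the
    # number of corners and with design size (a simplification for illustration).
    baseline_corners = 5
    finfet_corners = 15
    gate_growth = 3   # "doubles or triples" -- take the upper end

    workload_ratio = (finfet_corners / baseline_corners) * gate_growth
    print(f"~{workload_ratio:.0f}x more extraction work per signoff run")

Even this crude estimate points to nearly an order of magnitude more extraction work per signoff run, which is why fast 3D field solver technology becomes essential.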

Test data and diagnosis of test fail data will play an increasingly important role in the ramp of new FinFET technologies. The industry will face new challenges as traditional approaches to failure analysis and defect isolation struggle to keep pace with changes in transistor structures. The opportunity is for software-based diagnosis techniques that leverage ATPG test fail data to pick up the slack and provide more accurate resolution for failure and yield analysis engineers.

16/14nm will also require more advanced litho hotspot checking and more complex and accurate fill structures to help ensure planarity and to also help deal with issues in etch, lithography, stress and rapid thermal annealing (RTA) processes.

In parallel with the production ramp at 20 nm, and 16 nm/14 nm test chips, 2014 will see the expansion of work across the ecosystem for 10 nm. Early process development and EDA tool development for 10 nm began in 2012, ramped up in intensity in 2013, and will be full speed ahead in 2014.

Hardware emulation has transitioned from the engineering lab to the datacenter where today’s virtual lab enables peripheral devices such as PCIe, USB, and Ethernet to exist in virtual space without specialized hardware or a maze of I/O cables. A virtual environment permits instant reconfiguration of the emulator for any design or project team and access by more users, and access from anywhere in the world, resulting in higher utilization and lower overall costs.

The virtual lab is also enabling increased verification coverage of SoC software and hardware, supporting end-to-end validation of SW drivers, for example. Hardware emulation is now employed throughout the entire mobile device supply chain, including embedded processor and graphics IP suppliers, mobile chip developers, and mobile phone and tablet teams. Embedded SW validation and debug will be the real growth engine driving the emulation business.

The Internet of Things (IoT) will add an entirely new level of information sources, allowing us to interact with and pull data from the things around us. The ability to control the state of virtually anything will change how we manage and interact with the world. The home, the factory, transportation, energy, food and many other aspects of life will be impacted and could lead to a new era of productivity increases and wealth creation.

Accordingly, we’ll see continued growth in the MEMS market driven by sensors for mobile phones, automobiles, and medical monitoring, and we’ll see silicon photonics solutions being implemented in data and communications centers to provide higher bandwidth backplane connectivity in addition to their current use in fiber termination.

Semiconductor systems enabling the IoT trend will need to respond to difficult cost, size, and energy constraints to achieve real ubiquity. For example, we’ll need 3D packaging implementations that are an order of magnitude cheaper than current offerings. We’ll need better ways to model complex system effects, putting a premium on tools that enable design and verification at the system level, and on engineers who can use them. Cost constraints will also drive innovation in test to ensure that multi-die package test doesn’t explode part cost. Moreover, once we move from collecting data to actually interacting with the real world, the role of analog/mixed-signal, MEMS, and other sensors in the semiconductor solution will become much greater.

Grant Pierce and Jim Hogan – Sonics

For a hint at what’s to come in the technology sector as a whole and the EDA and IP industries specifically, let’s first look at the global macro-economic situation. The single greatest macro-economic factor impacting the technology sector is energy. Electronic products need energy to work. Electronic designers and manufacturers need energy to do their jobs. In the recent past, energy has been expensive to produce, particularly in the US market due to our reliance on foreign oil imports. Today in the US, the cost of producing energy is falling while consumption is slowing. The US is on a path to energy self-sufficiency according to the Energy Department’s annual outlook. By 2015, domestic oil output is on track to surpass its peak set in 1970.

What does cheaper energy imply for the technology industry? More investment. Less money spent on purchasing energy abroad means more capital available to fund new ventures at home and around the world. The recovery of US financial markets is also restoring investors’ confidence in earning higher ROI through public offerings. As investors begin to take more risk and inject sorely needed capital into the technology sector, we expect to see a surge in new startups. The EDA and IP industries will participate in this “re-birth” because they are critical to the success of the technology sector as enabling technologies.

For an understanding of where the semiconductor IP business is going, let’s look at consumer technology. Who are the leaders in the consumer technology business today? Apple, Google, Samsung, Amazon, and perhaps a few others. Why? Because they possess semiconductor knowledge coupled with software expertise. In the case of Apple, for example, they also own content and its distribution, which makes them extremely profitable with higher recurring revenues and better margins. Content is king and the world is becoming application-centric. Software apps are content. Semiconductor IP is content. Those who own content, its publication and distribution, will thrive.

In the near term, the semiconductor IP business will continue to consolidate as major players compete to build and acquire broader content portfolios. For example, witness the recent Avago/LSI and Intel/Mindspeed deals. App-happy consumers have an insatiable appetite for the latest and greatest content and devices. Consumer technology product lifecycles place immense pressure on chip and system designers when developing and verifying the flexible hardware platforms that run these apps. Among their many important considerations are functionality, performance, power, security, and cost. System architectures and software definition and control are becoming the dominant source of product differentiation rather than hardware. The need for semiconductor IP that addresses these trends and accelerates time-to-volume production is growing. The need for EDA tools that help designers successfully use and efficiently reuse IP is also growing.

So what are the market opportunities for new IP and tool companies in the coming years? These days, talk about the Internet of Things (IoT) is plentiful and there will be many different types of IP in this sensor-oriented market space. Perhaps, the most interesting and promising of these IoT IP technologies will address our growing concerns about health and quality of life. The rise of wearable technologies that help monitor our vital signs and treat chronic health conditions promises to extend our human survival rate beyond 100 years. As these technologies progress, surely the “Bionic Man” will become common place in the not-too-distant future. Personally, and being members of the aging “Baby Boomer” generation, we hope that it happens sooner rather than later!

Bob Smith – Uniquify

I spent a good deal of 2013 traveling around the globe doing a series of seminars on double data rate (DDR) synchronous dynamic random-access memory (SDRAM), the ubiquitous class of memory chips. The seminars were meant to promote the fastest, smallest and lowest power state-of-the-art adaptive DDR IP technology. They highlighted how it can be used to enhance design speed and configured to minimize the design footprint and hit increasingly smaller low-power targets.

While marketing and promotion was on the agenda, the seminars were a great way to check in with designers to better understand their current DDR challenges and identify a few trends that will emerge in 2014. What we learned may be a surprise to more than a few semiconductor industry watchers and offers some tantalizing predictions for next year.

The biggest surprise was hearing designers confirm plans to go directly to LPDDR4 (that is, low-power DDR4, the latest JEDEC standard) and skip LPDDR3. The reasons are varied, but most noted that they’re getting greater gains in performance and low power by jumping to LPDDR4, which is especially important for mobile applications. According to JEDEC, the LPDDR4 architecture was designed to be power neutral and to offer 2X the bandwidth of previous generations, with low pin count and low cost. It’s also backward compatible.

Even though many of the designers we heard from agreed that DDR3 is now mainstream, even more are starting projects based on DDR4. Some are motivated to move to DDR4 even without the need for extra performance for a practical and cost-effective reason. If they have a product with a long lifetime of five years or more, they are concerned that the DDR3 memory will cost more than DDR4 at some point. They have a choice: either build in the DDR4 now in anticipation or look for combination IP that handles both DDR3/4 in one IP. Many have chosen to do the former.

One final prediction I offer for 2014 is that 28nm is the technology node that will be around for a long time to come. Larger semiconductor companies, however, are starting new projects at 14/16 nm, taking advantage of the emerging FinFET technology.

According to my worldwide sources, memories and FinFET will dominate the discussion in 2014, which means it will be a lively year.

Solutions For Mixed-Signal SoC Verification

Thursday, March 28th, 2013

Performing full-chip verification of large mixed-signal systems on chip (SoCs) is an increasingly daunting task. As complexity grows and process nodes shrink, it’s no longer adequate to bolt together analog or digital “black boxes” that are presumed to be pre-verified. Complex analog/digital interactions can create functional errors, which delay tapeouts and lead to costly silicon re-spins. Cadence helps customers overcome these challenges with a fully integrated mixed-signal verification solution that spans basic mixed-signal simulation to comprehensive, metric-driven mixed-signal verification.

To view this white paper, click here.

Taming The Challenges Of 20nm Custom/Analog Design

Thursday, November 29th, 2012

Custom and analog designers will lay the foundation for 20nm IC design. However, they face many challenges that arise from manufacturing complexity. The solution lies not just in improving individual tools, but in a new design methodology that allows rapid layout prototyping, in-design signoff, and close collaboration between schematic and layout designers.

To view this white paper, click here.

Solutions For Mixed-Signal IP, IC, And SoC Implementation

Thursday, September 27th, 2012

Traditional mixed-signal design environments, in which analog and digital parts are implemented separately, are no longer sufficient and lead to excess iteration and prolonged design cycle time. Realizing modern mixed-signal designs requires new flows that maximize productivity and facilitate close collaboration among analog and digital designers. This paper outlines mixed-signal implementation challenges and focuses on three advanced, highly integrated flows to meet those challenges: analog-centric schematic-driven, digital-centric netlist-driven, and concurrent mixed-signal. Each flow leverages a common OpenAccess database for both analog and digital data and constraints, ensuring tool interoperability without data translation. Each flow also offers benefits in the area of chip planning and area reduction; full transparency between analog and digital data for fewer iterations and faster design closure; and easier, more automated ECOs, even at late stages of design.

To view this white paper, click here.

Analog and RF Added To IC Simulation Discussion

Thursday, July 26th, 2012

By John Blyler
System-Level Design sat down with Nicolas Williams, Tanner EDA’s director of product management, to talk about trends in analog and RF chip design.

SLD: What are the big trends in analog and RF simulation?
Williams: The increased need to bring more layout-dependent information into the front-end design early on. Layout-dependent effects influence performance, so it is no longer possible to separate the “design” and “layout” phases as we did traditionally. With nanoscale technologies, a multitude of physical device pattern-separation dimensions must now be entered into the pre-layout simulation models to accurately predict post-layout circuit performance. This is more than just adding some stray capacitance to some nodes. It now includes accurate distances from gate to gate, gate to trench (SA, SB, etc.), distances in both X and Y dimensions between device active areas, distance from the gate contact to the channel edge (XGW), number of gate contacts (NGCON), distance to a single well edge (WPE), etc. Getting the pre-layout parameters accurately entered into the simulation will minimize re-design and re-layout resulting from performance deficiencies found during post-layout parameter extraction and design-verification simulations.

Another issue is larger variability at nanoscale. This is not so much due to manufacturing tolerance, but really because of layout-dependent effects. These effects include the ones listed above plus several that are not even modeled, such as nearby and overlying metal stress modifying Vt and gm, and poor lithography. The lithography challenges are so severe in deep nanoscale that device patterns on final silicon look like they were drawn by Salvador Dali. Poor pattern shapes, increasing misalignment, and shape dependence on nearby patterns result in more gate length and width variation. More variability requires more complex simulations to have better confidence in your design. This requires faster simulators to simulate more corners or more Monte Carlo runs.
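
One way to picture “bringing layout information into the front end” is a pre-layout netlist whose device instances already carry estimated layout parameters. The Python helper below simply formats such an instance line; the parameter names (NF, SA, SB, NGCON) follow the BSIM4-style conventions Williams mentions, but every foundry PDK defines its own required set, so treat this as an illustration rather than a netlist fragment for any real process.

    # Illustrative only: build a pre-layout MOSFET instance line that carries
    # estimated layout-dependent parameters so pre-layout simulation better
    # predicts post-layout performance. Parameter names are assumptions.
    def mos_instance(name, drain, gate, source, bulk, model, w, l, **layout):
        params = [f"W={w}", f"L={l}"]
        params += [f"{key.upper()}={value}" for key, value in layout.items()]
        return f"{name} {drain} {gate} {source} {bulk} {model} " + " ".join(params)

    line = mos_instance("M1", "out", "in", "0", "0", "nch_lvt",
                        w="2u", l="30n",
                        nf=4,                   # number of fingers
                        sa="150n", sb="150n",   # estimated gate-to-diffusion-edge spacing
                        ngcon=2)                # number of gate contacts
    print(line)

The point of the exercise is that these estimates exist before any layout does, and are refined once real extracted values are available.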

SLD: Statistical analysis, design-of-experiments, and corner models—digital designers already hear many of these terms from the yield experts in the foundries. Should they now expect to hear them from the analog and RF simulator communities?
Williams: Statistical analysis and corner models have always been part of analog and RF design, but in the past it didn’t take much to try all combinations. There was no need to take a sample of the population when you could check the entire population. In nanoscale technologies, the number of effects that can affect circuit performance has grown exponentially, to the point where you have to take a statistical approach when checking corners. The older, alternative approach of running the worst-case combinations of all design corners from all effects would produce an overly pessimistic result. Also, when the number of Monte Carlo simulations required to statistically represent your circuit has grown too large, that is where design-of-experiments comes into play, using methods such as Latin hypercube sampling. [A small sampling sketch follows this answer. – Ed.]

Simulation accuracy is limited by model accuracy. Statistical variation of devices and parameters is specified much more richly than in the traditional SPICE approach to Monte Carlo (where you had “lot” and “device” parameters). Now you have spatially correlated variations, and you have the much richer .variation blocks in SPICE. Foundry models are now “expected” to provide usable models at this level, which raises all kinds of foundry-proprietary concerns.
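
The sampling sketch referenced above compares the size of an exhaustive corner sweep with a small Latin hypercube sample over the same variables. The variable list and sample count are invented, and the snippet is a generic illustration of the technique rather than any vendor’s methodology.

    import random

    # Invented corner variables, three values each, for illustration only.
    corner_values = {
        "vdd":  [0.9, 1.0, 1.1],
        "temp": [-40, 25, 125],
        "nmos": ["slow", "typ", "fast"],
        "pmos": ["slow", "typ", "fast"],
        "res":  ["lo", "typ", "hi"],
    }

    # Exhaustive sweep: every combination of every value.
    full_factorial = 1
    for values in corner_values.values():
        full_factorial *= len(values)

    def latin_hypercube(n_samples, n_dims, seed=0):
        """n_samples points in [0,1)^n_dims, one sample per stratum in each dimension."""
        rng = random.Random(seed)
        columns = []
        for _ in range(n_dims):
            strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(strata)
            columns.append(strata)
        return list(zip(*columns))

    samples = latin_hypercube(20, len(corner_values))
    print(f"exhaustive corner sweep : {full_factorial} simulations")
    print(f"latin hypercube sample  : {len(samples)} simulations")

Five variables with three values each already require 243 exhaustive runs; a stratified sample of a few dozen points covers each variable’s range while keeping simulation counts manageable.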

SLD: How will this increase in statistical distribution analysis affect traditional analog electronic circuit simulators like Spice?
Williams: Statistical analysis requires a huge number of simulations, which can either take a long time to execute or be parallelized across CPU farms or cloud services, together with smarter ways to sample which “corners” to run so that you have reasonable confidence you will be successful in silicon. Traditionally, aggregating such results would have been a manual process, or at best some custom design-flow development undertaken by the end user. Look for an upcoming sea change in how simulators are designed, sold, and deployed by the EDA vendor community to better address these needs.

All these simulations are great if your design meets all of its specifications. But what happens if it doesn’t? I feel the next step will be to use these simulations to figure out what variables your design is most sensitive to. Then you can try to mitigate the variability by improving the circuit or physical design (layout).

Trends In Analog And RF IC Simulation

Thursday, May 24th, 2012

By John Blyler
System-Level Design (SLD) sat down to discuss trends in analog and RF integrated circuit design with Ravi Subramanian, president and CEO of Berkeley Design Automation (at the recent GlobalPress eSummit), and later with Trent McConaghy, Solido’s CTO. What follows are excerpts of those talks.

SLD: What are the important trends in analog and RF simulation?
Subramanian: I see two big trends. One is related to physics, namely, the need to bring in physical effects early in the design process. The second trend relates to the increased importance of statistics in doing design work. Expertise in statistics is becoming a must. One of the strongest demands made on our company is to help teach engineers how to do statistical analysis. What is required is an appreciation of the Design-of-Experiments (DOE) approach that is common in the manufacturing world. Design engineers need to understand what simulations are needed for analog versus digital designs. For example, in a typical pre-layout simulation, you may want to characterize a block with very high confidence. Further, you may also want to characterize that block, extracted, in post-layout with very high confidence. But what does ‘high confidence’ mean? How do you know when you have enough confidence? If you have a normally distributed Gaussian variable, you may have to run 500 simulations to get 95% confidence in that result. Every simulation waveform and data point has a confidence band associated with it. [A short worked confidence calculation follows this exchange. – Ed.]
McConaghy: As always, there is a pull from customers for simulators that are faster and better. In general, simulators have been delivering on this. Simulators are getting faster, both in simulation time for larger circuits and through easier-to-use multi-core and multi-machine implementations. Simulators are also getting better: they converge on a broader range of circuits, handle larger circuits, and more cleanly support mixed-signal circuits.
There’s another trend: meta-simulation. This term describes tools that feel like using simulators from the perspective of the designer. Just like simulators, meta-simulators input netlists, and output scalar or vector measures. However, meta-simulators actually call circuit simulators in the loop. Meta-simulators are used for fast PVT analysis, fast high-sigma statistical analysis, intelligent Monte Carlo analysis and sensitivity analysis. They bring the value of simulation to a “meta” (higher) level. I believe we’ll see a lot more meta-simulation, as the simulators themselves get faster and the need for higher-level analysis grows.
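
Subramanian’s figure of roughly 500 simulations for 95% confidence can be illustrated with a standard binomial confidence interval on a pass/fail Monte Carlo result. The pass count below is invented for illustration.

    import math

    def yield_confidence(passes, runs, z=1.96):
        """Normal-approximation 95% confidence interval on yield from pass/fail Monte Carlo runs."""
        p = passes / runs
        half_width = z * math.sqrt(p * (1 - p) / runs)
        return p, p - half_width, p + half_width

    # Example: 480 of 500 runs meet spec (invented numbers).
    p, lo, hi = yield_confidence(480, 500)
    print(f"estimated yield {p:.1%}, 95% confidence interval [{lo:.1%}, {hi:.1%}]")

With 500 runs the yield estimate still carries an interval of a few percentage points, which is exactly the “confidence band” attached to every simulated data point.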

SLD: This sounds a lot like the Six Sigma methodology, a manufacturing technique used to find and remove defects from high-volume production, like CMOS wafers. Will design engineers really be able to incorporate this statistical approach into their design simulations?
Subramanian: Tools can help engineers incorporate statistical methods into their work. But let’s talk about the need for high sigma values. To achieve high sigma, you need a good experiment and a very accurate simulator. If you have a good experiment but you want to run it quickly and give up accuracy, you may have a Six-Sigma setup but a simulator that has been relaxed so much that the Six-Sigma data is meaningless. This shows the difference between accuracy and precision. You can have a very precise answer that isn’t accurate.
To summarize: today’s low-node processes have associated physical effects that can only be handled by statistical methods. These two trends mean that new types of simulation must be run. Engineers need to give more thought as to which corners should be covered in their design simulations. Semiconductor foundries provide corners that are slow, fast, and typical, based upon the rise and fall times of flip-flops. How relevant is that for a voltage-controlled oscillator (VCO)? In fact, are there more analog-specific corners? Yes, there are.

SLD: Statistical analysis, design-of-experiments, and corner models—designers already hear many of these terms from the yield experts in the foundries. Should they now expect to hear them from the analog and RF simulator communities?
Subramanian: Designers must understand, or have tools that help them deal with, statistical processes. For example, how do you know if a VCO will yield well? It must have frequency and voltage characteristics that are reliable over a range of conditions. But if you only test it over common digital corners, you may miss some important analog corners where the VCO performs poorly. A corner is simply a performance metric, such as output frequency. You want to measure it within a particular confidence level, which is where statistics are needed. It may turn out that, in addition to the digital corners, you’ll need to include a few analog ones.
McConaghy: These terms imply the need to address variation, and designers do need to make sure that variation doesn’t kill their design. Variation causes engineers either to overdesign, wasting circuit performance, power, and area, or to underdesign, hitting yield failures. To take full advantage of a process node, designers need tools that allow them to achieve optimal performance and yield. Since variation is a big issue, it won’t be surprising if simulator companies start using these terms with designers. The best EDA tools handle variation while allowing the engineer to focus efficiently on designing with familiar flows like corner-based design and familiar analyses like PVT and Monte Carlo. But now the corners must be truly accurate, i.e., PVT corners must cause the actual worst-case behavior, and Monte Carlo corners must bound circuit (not device) performances like “gain” at the three-sigma or even six-sigma level. These PVT and Monte Carlo analyses must be extremely fast, handling thousands of PVT corners, or billions of Monte Carlo samples.
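
McConaghy’s mention of billions of Monte Carlo samples follows directly from the tail probabilities involved. The short calculation below uses the one-sided normal tail and is only meant to show orders of magnitude; real high-sigma flows use the intelligent sampling he describes precisely to avoid this brute force.

    from statistics import NormalDist

    # Brute-force Monte Carlo runs needed, on average, to observe ten failures
    # at a given sigma level (one-sided normal tail).
    for sigma in (3, 4, 5, 6):
        p_fail = NormalDist().cdf(-sigma)
        runs_needed = 10 / p_fail
        print(f"{sigma}-sigma: P(fail) ~ {p_fail:.1e}, ~{runs_needed:.1e} runs to see ten failures")

At three sigma a few thousand runs suffice, while at six sigma the brute-force count climbs past ten billion, which is why meta-simulation and smart sampling matter.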

SLD: Would a typical digital corner be a transistor’s switching speed?
Subramanian: Yes. Foundries parameterized transistors to be slow, typical and fast in terms of performance. The actual transistor model parameters will vary around those three cases, e.g., a very fast transistor will have a fast rise and switching time. So far, the whole notion of corners has been driven by the digital guys. That is natural. But now, analog shows up at the party at the same time as digital, especially at 28nm geometries.
The minimal requirement today is that all designs must pass the digital corners. But for the analog circuits to yield, they must pass both the digital and specific analog corners, i.e., they must also pass the conditions and variations relevant to the performance of that analog device. How do you find out what those other corners are? Most designers don’t have time to run a billion simulations. That is why people need to start doing distribution analysis for analog measures like frequency, gain, signal-to-noise ratio, jitter, power supply rejection ratio, etc. For each of these analog circuit measurements, a distribution curve is created from which Six-Sigma data can be obtained. Will it always be a Gaussian curve? Perhaps not.

SLD: How will this increase in statistical distribution analysis affect traditional analog electronic circuit simulators like Spice?
Subramanian: SPICE needs to start generating these statistically based distribution curves. I think we are in the early days of that frontier, where you can literally see yourself having a design cockpit that makes statistics simple to use. You have to make it simple to use, otherwise it won’t happen. I think that is the responsibility of the EDA industry.
McConaghy: The traditional simulators will be used more than ever, as the meta-simulators call upon them to do fast and efficient PVT and statistical variation analysis up to 6-sigma design. The meta-simulators incorporate intelligent sampling algorithms to cut down the number of simulations required compared to brute-force analysis. Today, many customers use hundreds of traditional SPICE simulator licenses to do these variation analysis tasks. However, they would like to be able to get the accuracy of billions of Monte Carlo samples in only thousands of actual simulations. These analyses are being done on traditional analog/RF and mixed-signal designs as well as memory, standard cell library, and other custom digital designs.

SLD: I know that several of the major EDA tool vendors have recently released tools to make the statistical nature of low-process-node yields more accessible and usable by digital chip designers. Are there similar tools for the world of analog mixed-signal design?
Subramanian: Analog and RF designs are now going through this same process, to move from an art to a science. That’s why I say that the nanometer mixed-signal era is here (see figure). Simulation tools are needed, but so are analysis capabilities. This is why our simulation tools have become platforms for analysis. We support the major EDA simulators but add an analysis cockpit for designers.

SLD: Why now? What is unique about the leading-edge 28nm process geometries? I’d have expected similar problems at a larger node, e.g., 65nm. Is it a yield issue?
Subramanian: Exactly. At 65nm, designers were still able to margin their designs sufficiently. But now the cost of the margin becomes more significant because you either pay for it with area or with power, which is really current. At 28nm, with SerDes (high frequency and high performance) and tighter power budgets, the cost of the margin becomes too high. If you don’t do power collapsing, then you won’t meet the power targets.

SLD: Is memory management becoming a bigger market for simulation?
Subramanian: Memory has traditionally had some analog pieces like charge pumps, sense chains, etc. Now, in order to achieve higher and higher memory density, vendors are going to multi-level cells, which store two or more bits per cell by packing four, eight, or more voltage levels into the cell. But to achieve this density you need better voltage resolution between the different levels, which means you need more accurate simulation to measure the impact of noise. Noise can appear as a bit error when you have tighter voltage margins. You might wonder if this is really a significant problem. Consider Apple’s purchase of Anobit, a company that corrected those types of errors. If you can design better memory, then you can mitigate the need for error correction hardware and software. But to do that, you need more accurate analog simulation of memory. You cannot use a digital fast-SPICE tool, which uses a transistor table look-up model. Instead, you must use a transistor BSIM (Berkeley Short-channel IGFET Model) model.
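
The voltage-resolution point can be quantified with a simple division: each extra bit per cell doubles the number of levels packed into the same voltage window. The 2 V window used below is an assumed figure for illustration only.

    # Voltage spacing between levels in a multi-level memory cell.
    # The usable voltage window is an assumed figure for illustration.
    WINDOW_V = 2.0   # assumed usable threshold-voltage window

    for bits_per_cell in (1, 2, 3):
        levels = 2 ** bits_per_cell
        spacing_mv = WINDOW_V / (levels - 1) * 1000
        print(f"{bits_per_cell} bit(s)/cell -> {levels} levels, ~{spacing_mv:.0f} mV between adjacent levels")

Shrinking the spacing from volts to a few hundred millivolts is what turns simulator noise accuracy into a first-order design concern.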
