Chip Design Magazine



Posts Tagged ‘RF’


Deeper Dive – A step closer to GaN on silicon for RF

Thursday, April 10th, 2014

M/A-COM Technology (MACOM) and IQE have announced a licensing and supply agreement that brings high-volume GaN on silicon (GaN on Si) closer to being realized for RF projects. By Caroline Hayes, senior editor.

The analog RF, microwave, and millimeter wave company and the nanotechnology wafer supplier announced a license and epitaxial wafer supply agreement to manufacture GaN on Si epitaxial wafers in four-, six- and eight-inch diameters. The larger diameters should bring manufacturing costs down, making GaN technology commercially viable for RF applications.

Suja Ramnath, MACOM’s Senior Vice President and General Manager of Market-Facing Businesses, introduced the company’s first SiGe chipset for ISM (Industrial, Scientific and Medical) applications. Of this introduction to RF, she said: “GaN on Si is ideal for applications that require high power, broadband performance in volume applications. It enables excellent power density (power per sq mm).”

There is a strong commercial premise for making efficient, high-performance GaN technology affordable and accessible. The number of base stations, which depend on power amplifiers, is set to increase, making a reliable supply of the right materials and appropriate RF devices essential. Power amplifier transistor revenue from base stations alone is expected to exceed $1 billion this year, according to data from Strategy Analytics.

The collaboration is expected to deliver GaN RF products with “breakthrough” bandwidth and efficiency. “Efficiency is roughly twice that of other current technologies (approximately 80% versus approximately 40%) and bandwidth is increased by up to 5x over what we can achieve with silicon or GaAs technology today.”

In February this year, MACOM bought Nitronex, which designs and manufactures GaN RF products and introduced the first GaN-on-Si RF discrete devices and MMICs (monolithic microwave integrated circuits).

IQE has its head office in the UK and sales offices in the US, including an RF sales office in New Jersey. It supplies semiconductor wafers for wireless and optoelectronic components, photovoltaics and silicon-based epitaxy. It produces 50% of the world’s RF epitaxial wafers and has also signed an agreement with MACOM to introduce an IP licensing program, which will make GaN-on-Si technology available to selected companies for RF use.

“We are beginning to see very significant traction for GaN occurring in the compound semiconductor industry, across a wide range of applications” said Drew Nelson, President and CEO, IQE. “Our agreement with MACOM allows us to further penetrate this new market by bringing decades of high volume production experience to create the necessary supply chain needed to accelerate GaN adoption. We look forward to a powerful ongoing relationship.”

IQE provides the starting material (the epi) used to produce the wafers, which are already in production, Ramnath confirms.

Research Review – April 08 2014

Tuesday, April 8th, 2014

Bravo! AWR helps students; Looks familiar? An algorithm adjusts the view; New material lets the light in; imec and Rohm collaborate on low power radio components.
By Caroline Hayes, Senior Editor

Electronic engineering students at Italy’s Sapienza University of Rome are led by associate professor Dr. Stefano Pisa, who uses AWR’s Microwave Office circuit design software in teaching layout and design and in graduate research. The software was also used in work presenting a circuit model to analyze and design radars that remotely monitor breathing.

Computers are smart, but until now they have not been able to re-orient themselves using buildings as reference points when a route is disrupted. An algorithm that could make this task easier has been developed by MIT researchers Julian Straub, a graduate student in electrical engineering and computer science; John Fisher, a senior research scientist in the Computer Science and Artificial Intelligence Laboratory; John Leonard, a professor of mechanical and ocean engineering; and Oren Freifeld and Guy Rosman, both postdoctorates in Fisher’s Sensing, Learning, and Inference Group.
At the IEEE Conference on Computer Vision and Pattern Recognition, the researchers will present the algorithm, which identifies 3D scenes and simplifies scene understanding for robots navigating new territory.

Described as a potential building block for the next generation of inexpensive electronic devices, molybdenum disulfide (MoS2), a transition metal dichalcogenide material, is detailed in research led by Nestor Perea-Lopez (lead author) of The Pennsylvania State University. The thin film, as little as three atoms thick, converts photons into electrons; in the experiments, laser light at two wavelengths was used to drive a photosensor built from the material.
Perea-Lopez speculates on the material’s potential for integration “with metals like graphene, with insulators such as boron nitride and semiconductors like MoS2 to create the next generation of devices”.

Researchers from Rohm Semiconductor will collaborate with the nanoelectronics research center imec to develop critical ultra-low power radio components. The intent is to combine architectures, low power design IP and efficient low power circuits to develop low power RF components that comply with wireless standards such as Bluetooth Low Energy and ZigBee, integrating a co-developed PLL (phase-locked loop) for use in wireless sensor networks.

Mixed Signal Design and Verification for IoT Designs

Tuesday, November 17th, 2015

Mitch Heins, EDS Marketing Director, DSM division of Mentor Graphics

A typical Internet-of-Things (IoT) design consists of several different blocks including one or more sensors, analog signal processing for the sensors, an analog-to-digital converter and a digital interface such as I2C.  System integration and verification is challenging for these types of IoT designs as they typically are a combination of two to three different ICs.  The challenge is exacerbated by the fact that the system covers multiple domains including analog, digital, RF, and mechanical for packaging and different forms of multi-physics type simulations needed to verify the sensors and actuators of an IoT design.  The sensors and actuators are typically created as microelectromechanical systems (MEMS) which have a mechanical aspect and there is a tight interaction between them and the package in which they are encapsulated.
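The signal chain described above can be sketched in a few lines. The following Python model is purely illustrative (the reference voltage, resolution, and byte packing are assumptions, not taken from any specific product): a sensor voltage passes through an analog gain stage, is quantized by an N-bit ADC, and is packed into bytes as it might appear on an I2C-style bus.

```python
# Hypothetical sketch of an IoT signal chain:
# sensor voltage -> analog gain -> N-bit ADC -> bytes for an I2C-style bus.
# All values (3.3 V reference, 12-bit ADC) are illustrative assumptions.

def adc_convert(voltage, vref=3.3, bits=12):
    """Quantize an analog voltage to an unsigned ADC code."""
    code = int(voltage / vref * (2 ** bits - 1))
    return max(0, min(code, 2 ** bits - 1))  # clamp to the ADC's range

def to_i2c_bytes(code):
    """Pack a 12-bit ADC code into two bytes, MSB first."""
    return bytes([(code >> 8) & 0xFF, code & 0xFF])

sensor_v = 1.65   # midscale sensor output, volts
gain = 1.0        # analog front-end gain
code = adc_convert(sensor_v * gain)
payload = to_i2c_bytes(code)   # two bytes ready for the digital interface
```

Even a toy model like this makes the verification point concrete: the analog front end, the converter, and the digital interface each need their own model abstraction before the system can be simulated end to end.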

The verification challenge is to have the right form of models available for each stage of the design and verification process, working with your EDA vendor’s tool suite.  Many high-volume IoT designs now integrate the microcontroller and radio on one die and the analog circuitry and sensors on a second die to reduce cost and footprint.

In many cases the latest IoT designs use onboard analog and digital circuitry with multiple sensors to do data fusion at the sensor, making for “smart sensors”.  These ICs are made from scratch, meaning that the designers must create their own models for both system-level and device-level verification.

Tanner EDA by Mentor Graphics has partnered with SoftMEMS to offer a complete mixed signal design and verification tool suite for these types of MEMS-centric IC designs. The Tanner Analog and MEMS tool suites offer a complete design-capture, simulation, implementation and verification flow for MEMS-based IoT designs.  The Tanner AMS verification flow supports top-down hierarchical design with the ability to co-simulate multiple levels of design abstraction across analog, digital and mechanical environments.  All design abstractions, simulations and resulting waveforms are controlled and viewed from a centrally integrated schematic cockpit, enabling easy design trade-offs and verification.  Design abstractions can be used to swap in different models for system-level vs. device-level verification tasks as different parts of the design are implemented.  The system includes support for popular modeling languages such as Verilog-AMS and Verilog-A.

The logic abstraction of the design is tightly tied to the physical implementation of the design through a correct-by-construction design methodology using schematic-driven-layout with interactive DRC checking.  The Tanner/SoftMEMS solution uses the 2D mask layout to automatically create a correct-by-construction 3D model of the MEMS devices using a process technology description file.

Figure 1: Tanner Analog Mixed Signal Verification Cockpit

The 3D model is combined with similar 3D package models and is then used in Finite Element or Boundary Element Analysis engines to debug the functionality and manufacturability of the MEMS devices including mechanical, thermal, acoustic, electrical, electrostatic, magnetic and fluid analysis.

Figure 2: 3D-layout & cross section created by Tanner SOFTMEMS 3D Modeler

A key feature of the design flow is that the solution allows for the automatic creation of a compact Verilog-A model for the MEMS-Package combination from the FEA/BEA analysis that can be used to close the loop in final system-level verification using the same co-simulation cockpit and test benches that were used to start the design.

An additional level of productivity can be gained by using a parameterized library of MEMS building blocks from which the designer can more quickly build complex MEMS devices.

Figure 3: Tanner S-Edit Schematic Capture Mixed Mode Schematic of the IoT System

Each building block has an associated parameterized compact simulation model.  By structurally building the MEMS device from these building blocks, the designer is automatically creating a structural simulation model for the entire device that can be used within the verification cockpit.

Figure 4: Tanner SoftMEMS BasicPro Suite with MEMS Symbol and Simulation Library

Cortex-M processor Family at the Heart of IoT Systems

Saturday, October 25th, 2014

Gabe Moretti, Senior Editor

One cannot have a discussion about the semiconductor industry without hearing the word IoT.  It is really not a word, as language lawyers will be ready to point out, but an abbreviation that stands for Internet of Things.  And, of course, the abbreviation is fundamentally imprecise, since the “things” will be connected in a variety of ways, not just through the Internet.  In fact it is already clear that devices, grouped to form an intelligent subsystem of the IoT, will be connected using a number of protocols such as 6LoWPAN, ZigBee, WiFi, and Bluetooth.  ARM has developed the Cortex®-M processor family, which is particularly well suited to providing processing power to devices that must consume very little power in their duties of physical data acquisition. This is an instrumental function of the IoT.

Figure 1. The heterogeneous IoT: lots of “things” inter-connected. (Courtesy of ARM)

Figure 1 shows the vision the semiconductor industry holds of the IoT.  I believe the figure shows a goal the industry set for itself, and a very ambitious goal it is.  At the moment the complete architecture of the IoT is undefined, and rightly so.  The IoT re-introduces a paradigm first used when ASIC devices were thought of as the ultimate solution to everyone’s computational requirements.  The business of IP started as an enhancement to application-specific hardware, and now general purpose platforms constitute the core of most systems.  IoT lets the application drive the architecture, and companies like ARM provide the core computational block with an off-the-shelf device like a Cortex MCU.

The ARM Cortex-M processor family is a range of scalable and compatible, energy efficient, easy to use processors designed to help developers meet the needs of tomorrow’s smart and connected embedded applications. Those demands include delivering more features at a lower cost, increasing connectivity, better code reuse and improved energy efficiency. The ARM Cortex-M7 processor is the most recent and highest performance member of the Cortex-M processor family. But where the Cortex-M7 is at the heart of ARM partner SoCs for IoT systems, other connectivity IP is required to complete the intelligent SoC subsystem.

A collection of some of my favorite IoT-related IP follows.

Figure 2. The Cortex-M7 Architecture (Courtesy of ARM)

Development Ecosystem

To efficiently build a system, no matter how small, that can communicate with other devices, one needs IP.  ARM and Cadence Design Systems have had a long-standing collaboration in the area of both IP and development tools.  In September of this year the companies extended an already existing agreement covering more than 130 IP blocks and software.  The new agreement covers an expanded collaboration for IoT and wearable devices targeting TSMC’s ultra-low power technology platform. The collaboration is expected to enable the rapid development of IoT and wearable devices by optimizing the system integration of ARM IP and Cadence’s integrated flow for mixed-signal design and verification.

The partnership will deliver reference designs and physical design knowledge to integrate ARM Cortex processors, ARM CoreLink system IP, and ARM Artisan physical IP along with RF/analog/mixed-signal IP and embedded flash in the Virtuoso-VDI Mixed-Signal Open Access integrated flow for the TSMC process technology.

“The reduction in leakage of TSMC’s new ULP technology platform combined with the proven power-efficiency of Cortex-M processors will enable a vast range of devices to operate in ultra energy-constrained environments,” said Richard York, vice president of embedded segment marketing, ARM. “Our collaboration with Cadence enables designers to continue developing the most innovative IoT devices in the market.”  One of the fundamental changes in design methodology is the aggregation of capabilities from different vendors into one distribution point, like ARM, that serves as the guarantor of a proven development environment.

Communication and Security

System developers need to know that there are a number of sources of IP when deciding on the architecture of a product.  In the case of IoT it is necessary to address both the transmission capabilities and the security of the data.

As a strong partner of ARM, Synopsys provides low power IP that supports a wide range of low power features such as configurable shutdown and power modes. The DesignWare family of IP offers both digital and analog components that can be integrated with any Cortex-M MCU.  Beyond the extensive list of digital logic, analog IP including ADCs and DACs, plus audio CODECs, plays an important role in IoT applications. Designers also have the opportunity to use Synopsys development and verification tools, which have a strong track record handling ARM-based designs.

The Tensilica group at Cadence has published a paper describing how to use Cadence IP to develop a Wi-Fi 802.11ac transceiver for WLAN (wireless local area network) use. This transceiver design is architected on a programmable platform consisting of Tensilica DSPs, using an anchor DSP from the ConnX BBE family of cores in combination with a smaller specialized DSP and dedicated hardware RTL. Because of the Cortex-M7’s enhanced instruction set, superscalar pipeline and added floating point DSP capability, Cadence radio IP works well with the Cortex-M7 MCU: intermediate band processing, digital down conversion, post-processing or WLAN provisioning can all be done by the Cortex-M7.

Accent S.A. is an Italian company that is focused on RF products.  Accent’s BASEsoc RF Platform for ARM enables pre-optimized, field-proven single chip wireless systems by serving as near-finished solutions for a number of applications.  This modular platform is easily customizable and supports integration of different wireless standards, such as ZigBee, Bluetooth, RFID and UWB, allowing customers to achieve a shorter time-to-market. The company claims that an ARM processor-based, complex RF-IC could be fully specified, developed and ramped to volume production by Accent in less than nine months.

Sonics offers a network on chip (NoC) solution that is both flexible in integrating various communication protocols and highly secure.   Figure 3 shows how the Sonics NoC provides secure communication in any SoC architecture.

Figure 3.  Security is Paramount in Data Transmission (Courtesy of Sonics)

According to Drew Wingard, Sonics CTO: “Security is one of the most important, if not the most important, considerations when creating IoT-focused SoCs that collect sensitive information or control expensive equipment and/or resources. ARM’s TrustZone does a good job securing the computing part of the system, but what about the communications, media and sensor/motor subsystems? SoC security goes well beyond the CPU and operating system. SoC designers need a way to ensure complete security for their entire design.”

Drew concludes “The best way to accomplish SoC-wide security is by leveraging on-chip network fabrics like SonicsGN, which has built-in NoCLock features to provide independent, mutually secure domains that enable designers to isolate each subsystem’s shared resources. By minimizing the amount of secure hardware and software in each domain, NoCLock extends ARM TrustZone to provide increased protection and reliability, ensuring that subsystem-level security defects cannot be exploited to compromise the entire system.”

More examples exist of course and this is not an exhaustive list of devices supporting protocols that can be used in the intelligent home architecture.  The intelligent home, together with wearable medical devices, is the most frequent example of IoT that could be implemented by 2020.  In fact it is a sure bet to say that by the time the intelligent home is a reality many more IP blocks to support the application will be available.

IoT vs. Traditional Embedded for Analog, Low Power and Security

Tuesday, July 15th, 2014

By John Blyler, Chief Content Officer

In Part III, technology leaders from STMicro, Atmel, Mouser, Synopsys, Movea, and ARM, define the big challenges in IoT – mixed signal, low power and security.

Will the Internet-of-Things (IoT) bring new analog integration, low power and security challenges to traditional SoC and embedded designs? To answer these questions, System Design Engineering sat down with Bernard Kasser, Director of Security R&D, Advanced System Technology, STMicroelectronics; Bob Martin, Senior Manager, Microcontroller Group Central Applications, Atmel; Kevin Parmenter, Director of Technical Resources, Mouser; Steve Smith, Senior Director of Marketing, AMS Group, Synopsys; Cyrille Soubeyrat, VP of Engineering at Movea; and Diya Soubra, CPU Product Marketing Manager, ARM. What follows are excerpts from that discussion. – JB

Blyler: Analog mixed signal integration with MCUs is a prerequisite for the IoT. How difficult is it for designers – especially digital designers – to incorporate analog and mixed signals into their SoCs? What tools are available? What are the best practices?

Kasser: The vast majority of IoT devices today are powered by MCUs running at modest frequencies, unlike the application-processor powerhouses at the heart of mobile devices. But IoT MCUs present formidable integration challenges as they come packed with all sorts of I/O and signal-acquisition peripherals. Deploying traditional digital logic and datapaths on the same silicon can easily overwhelm the typical mixed-signal MCU SoC backend design process, which is complicated by issues such as placement of analog macros and synthesized logic, timing-closure accounting, poorly characterized analog IP, and the signal integrity of digital signals and cross-talk introducing noise into analog blocks (think of ADCs, DACs, bandgaps, etc.). So designers must carefully simulate their mixed signal designs to ensure that control logic and analog circuits behave correctly together. These simulations can take days or weeks and may still require substantial handcrafted black magic from practitioners of the (analog) art.

Martin: Mixed signal design has been done for years, and IoT-focused products for the most part do not require cutting-edge analog integration.  The types of things being monitored in a household, for example, are relatively slow and easily conditioned to interface to the microcontrollers used to implement the ‘motes’ or nodes. The IoT factor is the extension of taking an existing microcontroller and analog front end (AFE), making their data available to the cloud and subsequently allowing the cloud to control these nodes.  The real challenge for chip designers is how best to balance feature sets against power consumption, as many of these endpoint applications become very application focused.  It would seem that the shift from the MHz race to the milliwatt race over the past few years has left the EDA tool manufacturers playing catch-up to some extent.  Additionally, accurate characterization of the current chip process technology in the static (non-clocked) state is now much more critical, since most IoT nodes spend most of their time asleep.

Parmenter: Essentially open source hardware and reference designs from the semiconductor companies have lowered the barriers to entry substantially. We are seeing some interesting open source hardware, specifically the BeagleBone Black from TI and others. Reference designs are becoming more common. In many cases much of the work has been done by the semiconductor companies.  It’s important to understand system level interfacing because you still have to read analog signals and keep noise out of the system. Your power supply has to be properly designed and stable. More designs have wired and wireless communications interfaces so the make versus buy decision is always a moving target. So it comes down to clever software design and selection of an OS and other software after the decision to make versus buy hardware decisions are made.

Smith: The IoT generally assumes very compact, low-power, connected devices that can be applied to everyday living applications. Thus, the assumption is that they include one or more processors – these can be discrete, using traditional or embedded microcontrollers (MCUs). Additionally, these devices require connectivity – this could be done locally via near-field communication (NFC), Bluetooth, or WiFi. The wireless components will likely be discrete due to cost, time-to-market, and multi-sourcing requirements. Likewise, any required sensors, such as those for motion detection, geolocation, temperature, and so on, will be highly integrated in themselves but still discrete. In this case, the integration is done with appropriate packaging technologies such as package on package (PoP), multi-die packages, etc.

Thus, this is mostly a system-level challenge – how to create models for each of the components and how to develop the software to drive and manage the devices. However, where there is a need to integrate at the chip level, it’s important that the entire SoC can be designed in an efficient manner. The co-design of digital and analog parts must be available and understood by engineers who are responsible for pulling together the entire chip.

Soubra: Today, mixed signal design is totally different from that of ten years ago, thanks to EDA tools that have made a radical change to the design process. These tools give you views into both the digital and analogue sides. For example, you can trace through the assembly code to see the digital signals and the corresponding analogue signals all at the same time. (See “Digital Designers Grapple with Analog Mixed Signal Designs”.) Further, the industry now provides analogue IP blocks for tighter integration with the digital design. The ultimate example is when you take a radio hard macro and place it in a digital design to give your IoT SoC connectivity.

Blyler: As mixed signal and RF elements are integrated into the SoC, low power will be a key constraint. What tools and techniques are available to help in the architectural and low-level design of low power devices? What new challenges do IoT designs present that are unique from other spaces?

Kasser: All of the diverse IoT applications and use-cases are inevitably challenged by meager energy budgets. As a result, low-power design is a prerequisite and proper energy management must be addressed at the system level. An IoT wireless sensor-node SoC, for example, must be capable of working with a range of operating modes that switch internal resources on and off as necessary to reduce energy consumption to sometimes even less than 1uA in idle modes. To get there, the chip must often be fitted with multiple power islands, various forms of body bias, retention modes, and other tricks. The tradeoffs these techniques require (chip size, complexity, active vs. leakage power, etc.) must be evaluated against various technology options that might be incompatible with integrating RF blocks efficiently.

New IoT chips must incorporate more processing and sophisticated RF communication capability (beaconing, wakeup modes, higher throughputs, etc.). As a result, they present unique challenges because they are asked to perform substantially more data crunching, at roughly the same small energy budget, than the simple signal conditioning performed by traditional sensor nodes.
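The energy budgets Kasser describes are easy to reason about with a simple duty-cycle model. The sketch below is a back-of-the-envelope illustration, not from any interviewee; the active current, sleep current (in the sub-microamp range he mentions), burst length, and battery capacity are all assumed values chosen for the example.

```python
# Back-of-the-envelope duty-cycle model for a wireless sensor node.
# All currents, timings and the battery capacity are illustrative assumptions.

def average_current_ua(active_ua, idle_ua, active_ms, period_ms):
    """Average current draw given one active burst per wakeup period."""
    duty = active_ms / period_ms
    return active_ua * duty + idle_ua * (1 - duty)

# Example: 10 mA while measuring and transmitting for 5 ms every 10 s,
# 0.9 uA while asleep (the "less than 1 uA idle" regime mentioned above).
avg = average_current_ua(active_ua=10_000, idle_ua=0.9,
                         active_ms=5, period_ms=10_000)

def battery_life_years(capacity_mah, avg_ua):
    """Naive battery life estimate, ignoring self-discharge and aging."""
    return capacity_mah * 1000 / avg_ua / (24 * 365)

life = battery_life_years(220, avg)  # 220 mAh, a CR2032-class coin cell
```

The model makes the design pressure obvious: because the node sleeps 99.95% of the time, the sub-microamp idle current dominates battery life almost as much as the 10 mA active bursts do.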

Martin: Almost all nodes/motes are battery powered, or perhaps even powered through over-the-air (OTA) energy harvesting.  This puts additional pressure on chip designers to make existing analog blocks even more power efficient. Digital blocks tend to be easier, since the bulk of the power is consumed during active clock cycles.  One of the new challenges introduced in the IoT space is the low power RF signal strength problem associated with personal health monitoring devices, which, unlike cell phones and their relatively high power RF sections, need to receive and transmit data at very low power levels while being strapped directly to a 150-pound bag of water.

Parmenter: These techniques have been honed in portable, satellite and even automotive applications for years. We will see them used in other areas, like IoT, now that they are needed. The ability to monitor system performance and decide what to power up or down to conserve power has been done before. Digital control of power supplies will help with this as well. In applications such as energy harvesting and the monitoring of infrastructure (bridges and roads, for example) we will need highly efficient energy harvesting devices. Super capacitors could store energy in a near-lossless fashion. Low power RF could send the data to “the cloud.”

Figure: Good ULP for energy harvesting applications have an MCU with a low amount of active time. (Courtesy of Mouser)

Smith: Low power is an assumed requirement for almost all SoCs today. For mixed-signal, these techniques are mostly handled in the digital domain, but the digital and analog parts must still be designed and verified together to ensure functionality. Among the challenges perhaps unique to this space is that the devices may need to be placed in harsh conditions because of their need to be in close proximity to the host for sensing or location applications. As a result, reliability across wide ranges of temperature, power sources, electromagnetic interference levels and other environmental variables will need to be analyzed and verified to appropriate tolerance levels. EDA tools are essential to ensuring these variations in environmental conditions do not cause the designs to fail in the field.

Soubeyrat: Diffusion of communicating objects on a large scale for IoT applications will require low cost sensors and processors combined with ultra-low power designs. One example is high grade motion tracking from low grade (hence low cost) sensors. The sensor data compensates for the long term drift of low cost gyroscopes by fusing their data with data from low grade accelerometer and magnetometer sensors. On the low-power side, power for the sensors could be managed by switching on the strict minimum of sensors and the power of the processing unit (MCU) by implementing an optimal duty cycle for the data fusion.
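The gyro/accelerometer fusion Soubeyrat describes is often implemented as a complementary filter: the gyro tracks fast motion while the accelerometer-derived angle corrects the gyro's long-term drift. The sketch below is one minimal way to do it; the filter coefficient, sample rate, and drift value are illustrative assumptions, not from the interview.

```python
# Minimal complementary filter: fuse integrated gyro rate (good short-term)
# with an accelerometer-derived angle (good long-term, drift-free).
# alpha, dt and the sensor values are illustrative assumptions.

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: mostly trust the gyro, gently pull toward accel."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
dt = 0.01  # 100 Hz sample rate
# A stationary sensor: the low-cost gyro drifts at +0.5 deg/s,
# while the accelerometer correctly reads 0 degrees.
for _ in range(1000):  # 10 seconds of samples
    angle = complementary_filter(angle, gyro_rate=0.5,
                                 accel_angle=0.0, dt=dt)
```

After 10 seconds, pure gyro integration would have drifted to 5 degrees; the fused estimate settles near 0.25 degrees, showing how low-grade sensors can be combined into a usable measurement.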

Soubra: Today’s existing SoC tool flows support all sorts of low power techniques at the gate and block level, i.e., power domains, state retention, and others. Gating of the clocks to peripherals via the creation of sophisticated clock trees is also very well understood and implemented by the tools. The part that is missing is enabling low power at the software level, to allow the system designer to incorporate them early into the design of the application. The software application must be designed to power down the peripheral when not in use, e.g., by calling on the clock gating function. The application must put the processor or analogue block to sleep when all events have been handled. A tight coupling between low power features in silicon and low power aware design in software yields the ultimate low power design of the system.
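The software pattern Soubra describes, handle all pending events, gate the clocks of unused peripherals, then sleep, can be sketched as a simple event loop. The Python below is only an illustration of the control flow; on a real Cortex-M system the logging calls would be vendor-specific register writes and a WFI instruction, and all names here are hypothetical.

```python
# Illustrative low-power event-loop pattern: drain events, gate clocks
# of idle peripherals, then sleep until the next interrupt.
# The peripheral and sleep calls are stand-ins for vendor-specific APIs.

import collections

events = collections.deque(["adc_done", "radio_rx"])  # pending interrupts
peripherals_enabled = {"adc": True, "radio": True}
log = []  # records what a real system would actually do

def handle(event):
    log.append("handled:" + event)

def gate_unused_clocks():
    # power down peripherals with no pending work (clock gating)
    for name in peripherals_enabled:
        peripherals_enabled[name] = False
    log.append("clocks gated")

def sleep_until_interrupt():
    log.append("sleep")  # would be WFI/WFE on a Cortex-M core

while events:                # 1. handle every pending event...
    handle(events.popleft())
gate_unused_clocks()         # 2. ...then gate idle peripherals...
sleep_until_interrupt()      # 3. ...and put the core to sleep
```

The point of the pattern is exactly the coupling Soubra calls for: the silicon provides the gating and sleep hooks, but only the application knows when it is safe to invoke them.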

Blyler: The third element of an IoT strategy is security. How is the design of secure systems different in an IoT environment– from a MCU and system software-hardware context? What are the best tools and practices when designing long-term secure IoT systems?

Kasser: There is no fundamental difference to security in an IoT system than might be found in another kind of system. The approach requires identifying the assets, threats, and risks for the particular application/service/use case and then applying the right mix of state-of-the-art hardware and software security techniques/technologies to reach the target protection level and performance. One priority (and challenge) is to restrict the interfaces (attack surface) to the minimum possible, and apply state-of-the-art robust hardware and software design and fire-walling techniques to eliminate (best case) or at least minimize the impact of implementation-related vulnerabilities. This is particularly important for complex hubs and gateways. For nodes, conflicting security and low-power requirements require careful analysis and experience to achieve optimal system performance.

Martin: Security in the IoT space can be split into two areas, which in the general discussion of security will always overlap. Firstly, the IoT device space implies a mind-numbingly large number of nodes distributed around the house, car, personal space and everywhere else.  This mandates that software updates be performed over the RF channel, much as they are for cell phones today.  However, an extra layer of fault tolerance needs to be added to ensure that a software update is indeed legitimate and that there is a fallback mechanism to recover from a bad image, whether it is tainted or corrupted by a hardware fault while programming the new image.  Microcontroller manufacturers are addressing these concerns by adding hardware encryption blocks, dual-bank flash and external devices that allow very specific checks against contaminated code.  Best practices in this space are carried over from the fault-tolerant industrial space and include SHA-256 digest calculation and comparison on new software downloads.
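The digest check Martin mentions can be sketched in a few lines. This is an illustrative outline only (function names and the image bytes are hypothetical), and a production bootloader would additionally verify a cryptographic signature over the digest rather than trust a bare hash.

```python
# Sketch of a firmware-update digest check: compute SHA-256 over the
# downloaded image and compare it to the expected digest before
# committing the image to flash (keeping the old bank as fallback).
# All names are illustrative; real bootloaders also verify a signature.

import hashlib

def image_digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def accept_update(new_image: bytes, expected_digest: str) -> bool:
    """Only swap flash banks if the computed digest matches."""
    return image_digest(new_image) == expected_digest

firmware = b"\x00" * 1024           # stand-in for a downloaded image
good = image_digest(firmware)       # digest published with the release
ok = accept_update(firmware, good)              # legitimate image
bad = accept_update(firmware + b"\x01", good)   # tampered image, rejected
```

The dual-bank flash Martin refers to is what makes the `bad` case safe: a rejected or interrupted update leaves the previous, verified image bootable.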

Secondly, the content of the node data itself may or may not contain personal information, and in many cases the actual payload sizes are kept small to keep overall power consumption low.  It is, however, important that the RF channel be secured, specifically when commands are sent to the nodes.  It is perhaps not a large issue if your neighbor knows that your living room lights are on, but it is a far different problem if your neighbor can control your living room lights.  Best practices in this space will include specific and pseudo- (or completely) random challenge-response sequences to nodes before the actual commands are sent, to ensure that the target node is indeed the correct recipient and that the IoT gateway is indeed the authorized control node.
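One common way to realize the challenge-response sequence Martin describes is an HMAC over a random nonce. The sketch below is a simplified illustration, not a protocol from the interview: the shared key, message sizes, and function names are assumptions, and real deployments need key provisioning and replay protection beyond what is shown.

```python
# Minimal challenge-response sketch: the gateway sends a random nonce,
# the node proves possession of the shared key by returning an HMAC of
# the nonce, and only then is a command accepted. Key handling here is
# deliberately simplified for illustration.

import hashlib
import hmac
import os

SHARED_KEY = b"demo-key-not-for-production"

def make_challenge() -> bytes:
    return os.urandom(16)  # fresh random nonce, never reused

def respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = respond(challenge, key)
    return hmac.compare_digest(expected, response)  # constant-time compare

nonce = make_challenge()
resp = respond(nonce, SHARED_KEY)
authorized = verify(nonce, resp, SHARED_KEY)    # genuine node
impostor = verify(nonce, resp, b"wrong-key")    # rejected
```

Because each challenge is a fresh random value, a recorded response cannot be replayed later, which is exactly the property needed before letting a gateway switch someone's lights.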

Parmenter: I can’t think of a larger target for hackers and terrorists than accessing infrastructure to control, corrupt, steal or destroy data. The value of the data and the system will far outweigh the hardware. Thus, rather than trying to save money by writing something on your own, it may be better to buy something already proven in other high-security applications. Further, robustness and security requirements that exist in the military and aerospace markets will apply to these systems. I believe the software and I/O are going to be under close scrutiny so that they cannot be compromised, yet still allow simple access by authorized users.

Soubra: The main difference in security design for the IoT is the “I” – the Internet. Before the IoT, a system was either standalone or on a closed network, and within that context security design is much simpler. Within the open network of the Internet, designers are now faced with unlimited security threats that increase every day in variety and style. Engineers must design the endpoint to protect itself against today’s attacks and to support updates for tomorrow’s attacks. An even better approach would be for it to participate with the other nodes and interactively report suspicious activity on the network. Of course, all of this must be done within a very constrained code space and power budget!

Blyler: Thank you!

EDA Industry Predictions for 2014 – Part 1

Tuesday, January 7th, 2014

Gabe Moretti, Contributing Editor

I always ask for predictions for the coming year, and generally get a good response. But this year the volume of responses was so high that I could not possibly cover all of the material in one article, so I will use two articles, one week apart, to record the opinions submitted. This first section details the contributions of Andrew Yang of ANSYS – Apache Design, Mick Tegethoff of Berkeley Design Automation, Michael Munsey of Dassault Systèmes, Oz Levia of Jasper Design Automation, Joe Sawicki of Mentor Graphics, Grant Pierce and Jim Hogan of Sonics, and Bob Smith of Uniquify.

Andrew Yang – ANSYS Apache Design

For 2014 and beyond, we’ll see increased connectivity of the electronic devices that are pervasive in our world today. This trend will continue to drive the existing mobile market growth as well as make an impact on upcoming automotive electronics. The mobile market will be dominated by a handful of chip manufacturers and those companies that support the mobile ecosystem. The automotive market is a big consumer of electronics components that are part of a complex system that help improve safety and reliability, as well as provide users with real-time interaction with their surroundings.

For semiconductor companies to remain competitive in these markets, they will need to take a “system” view of their design and verification. The traditional silo-based methodology, where each component of the system is designed and analyzed independently, can result in products with higher cost, poor quality, and schedule delays. Adopting system-level simulation will allow engineers to carry out early system prototyping, analyze the interaction of the components, and achieve optimal design tradeoffs.

Mick Tegethoff – Berkeley Design Automation

FinFET Technology will dominate the landscape in semiconductor design and verification as more companies adopt the technology. FinFET is a revolutionary change to device fabrication and modeling, requiring a more complex SPICE model and challenging the existing circuit behavior “rules of thumb” on which experienced designers have relied for years with planar devices.

Designers of complex analog/RF circuits, including PLLs, ADCs, SerDes, and transceivers, will need to relearn the device behavior in these applications and to explore alternative architectures. As a result, design teams will have to rely more than ever on accurate circuit verification tools that are foundry-certified for FinFET technology and have the performance and capacity to handle complex circuits including physical effects such as device noise, complex parasitics, and process variability.

In memory applications, FinFET technology will continue to drive change and challenge the status quo of “relaxed accuracy” simulation for IP characterization. Design teams are realizing that it is no longer acceptable to tolerate 2–5% inaccuracy in memory IP characterization. They are looking for verification tools that can deliver SPICE-like accuracy in a time frame on a par with their current solutions.

However, accurate circuit verification alone will not be sufficient. The impact of FinFET devices and new circuit architectures in analog, RF, mixed-signal, and memory applications demands full confidence from design teams that their circuits will meet specifications across all operational, environmental, and process conditions. As a result, designers will need to perform an increased amount of intelligent, efficient, and effective circuit characterization at the block level and at the project level to ensure that their designs meet rigorous requirements prior to silicon.
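The kind of corner characterization described above can be illustrated with a toy sweep; the delay model, corner values and spec below are invented for illustration and are not a real FinFET model:

```python
import itertools

def gate_delay_ps(process: float, vdd: float, temp_c: float) -> float:
    # Toy model: delay grows with the slow-process factor and temperature,
    # and shrinks as the supply voltage rises.
    return 20.0 * process * (1.0 + 0.002 * temp_c) / vdd

def characterize(spec_ps: float) -> list:
    # Sweep process (fast/typ/slow), supply (-10%/nom/+10%) and temperature,
    # collecting every corner that misses the delay spec.
    corners = itertools.product([0.9, 1.0, 1.1],
                                [0.72, 0.8, 0.88],
                                [-40.0, 25.0, 125.0])
    return [(p, v, t) for p, v, t in corners
            if gate_delay_ps(p, v, t) > spec_ps]
```

Real flows drive a foundry-certified simulator at each corner instead of a closed-form model, but the pass/fail bookkeeping has the same shape.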

Michael Munsey – Dassault Systèmes

We at Dassault Systèmes see a few key trends coming to the semiconductor industry in 2014.

1) Extreme Design Collaboration: Complexity and cost in IC design and manufacturing now demand that semiconductor vendors engage an ever broader, more diverse pool of specialist designers and engineers.

At the same time, total costs for designing a cutting-edge integrated circuit can top $100 million for just one project. Respins can drive these costs even higher, adding huge profitability risks to new projects.

Technology-enabled extreme collaboration, over and above that in traditional PLM, will be required to assure manufacturable, profitable designs. Why? Because defects arise at the interchange between designers. And with more designers and more complex projects, the risk of misperceptions and miscommunications increases.

Pressure for design teams to interlock using highly specialized collaboration technology will increase in parallel with the financial risk of new semiconductor design projects.

2) Enterprise IP management: The move towards more platform-based designs in order to meet shortening time to market windows, application driven designs, and the increasing cost of producing new semiconductor devices, will explode the market for IP and create a new market for enterprise IP management.

The deeper insight is how that IP will be acquired, used, configured, validated and otherwise managed. The challenges will be (1) building an intelligent process that enables project managers to evaluate the lowest cost IP blocks quickly and effectively; (2) managing the licensed IP so that configuration, integration and validation know-how is captured and easily reused; and (3) ensuring that the licensing and export compliance attributes of each licensed block of IP are visible to design decision makers.

3) Flexible Design to Manufacturing: In 2011, the Japanese earthquake forced a leading semiconductor company to cease manufacturing operations because their foundry was located close to Fukushima. That earthquake and the floods in Thailand have awakened semiconductor vendors to the stark reality that global supply chains can be dramatically and unexpectedly disrupted without any prior notice.

At the same time, with increased fragmentation and specialization occurring within the design and supply chain for integrated circuits, cross-chain information automation will be mission-critical.

Examples of issues that will require IT advances are (1) the increasing variations in how IP is transferred down the supply chain. It could be a file, a wafer, a die or a packaged IC – yet vendors will need to handle all options with equal efficiency to maximize profitability; and (2) the flexible packaging of an IC design for capture into ERP systems will become mandatory, in order to enable the necessary downstream supply chain flexibility.

Oz Levia – Jasper Design Automation

There are a few points that we at Jasper consider important for 2014.

1) Low power design and verification will continue to be a main challenge for SoC designers.

2) Heterogeneous multi-processor designs will continue to grow. Issues such as fabric and NoC design and verification will dominate.

3) The segments that will drive the semiconductor markets will likely continue to be in the mobile space(s) – phones, tablets, etc. But the server segment will also continue to increase in importance.

4) Processes will continue to evolve, but there is a lot of headroom in current processes before we run out of steam.

5) Consolidation will continue in the semiconductor market. More important, the strong will get stronger and the weak will get weaker. Increasingly, this is a winner-takes-all market, and we will see a big divide between the innovators and leaders and the laggards.

6) EDA will continue to see consolidation. Large EDA vendors will continue increasing investments in SIP and verification technologies. We will not see a radically different new technology or methodology. The total amount of investment in the EDA industry will continue to be low.

7) EDA will grow at a slow pace, but verification, emulation and SIP will grow faster than other segments.

Joseph Sawicki – Mentor Graphics

FinFETs will move from early technology development to early adopter designs. Over the last year, the major foundry ecosystems moved from alpha to production status for 16/14 nm, with its dual challenges of double patterning and FinFET. Fabless customers are just beginning to implement their first test chip tape-outs for 16/14 nm, and 2014 will see most of the 20 nm early-adopter customers also preparing their first 16/14 nm test chips.

FinFETs are driving a need for more accurate extraction tools, and EDA vendors are turning to 3D field solver technology to provide it. The trick is to also provide high performance that can deliver quick turnaround time even as the number of required extraction corners jumps from 5 to 15 and the number of gates doubles or triples.

Test data and diagnosis of test fail data will play an increasingly important role in the ramp of new FinFET technologies. The industry will face new challenges as traditional approaches to failure analysis and defect isolation struggle to keep pace with changes in transistor structures. The opportunity is for software-based diagnosis techniques that leverage ATPG test fail data to pick up the slack and provide more accurate resolution for failure and yield analysis engineers.

16/14nm will also require more advanced litho hotspot checking and more complex and accurate fill structures to help ensure planarity and to also help deal with issues in etch, lithography, stress and rapid thermal annealing (RTA) processes.

In parallel with the production ramp at 20 nm, and 16 nm/14 nm test chips, 2014 will see the expansion of work across the ecosystem for 10 nm. Early process development and EDA tool development for 10 nm began in 2012, ramped up in intensity in 2013, and will be full speed ahead in 2014.

Hardware emulation has transitioned from the engineering lab to the datacenter where today’s virtual lab enables peripheral devices such as PCIe, USB, and Ethernet to exist in virtual space without specialized hardware or a maze of I/O cables. A virtual environment permits instant reconfiguration of the emulator for any design or project team and access by more users, and access from anywhere in the world, resulting in higher utilization and lower overall costs.

The virtual lab is also enabling increased verification coverage of SoC software and hardware, supporting end-to-end validation of SW drivers, for example. Hardware emulation is now employed throughout the entire mobile device supply chain, including embedded processor and graphics IP suppliers, mobile chip developers, and mobile phone and tablet teams. Embedded SW validation and debug will be the real growth engine driving the emulation business.

The Internet of Things (IoT) will add an entirely new level of information sources, allowing us to interact with and pull data from the things around us. The ability to control the state of virtually anything will change how we manage and interact with the world. The home, the factory, transportation, energy, food and many other aspects of life will be impacted and could lead to a new era of productivity increases and wealth creation.

Accordingly, we’ll see continued growth in the MEMS market driven by sensors for mobile phones, automobiles, and medical monitoring, and we’ll see silicon photonics solutions being implemented in data and communications centers to provide higher bandwidth backplane connectivity in addition to their current use in fiber termination.

Semiconductor systems enabling the IoT trend will need to respond to difficult cost, size and energy constraints to drive real ubiquity. For example, we’ll need 3D packaging implementations that are an order of magnitude cheaper than current offerings. We’ll need better ways to model complex system effects, putting a premium on tools that enable design and verification at the system level, and on engineers who can use them. Cost constraints will also drive innovation in test to ensure that multi-die package test doesn’t explode part cost. Moreover, once we move from collecting data to actually interacting with the real world, the role of analog/mixed-signal, MEMS and other sensors in the semiconductor solution will become much greater.

Grant Pierce and Jim Hogan – Sonics

For a hint at what’s to come in the technology sector as a whole and the EDA and IP industries specifically, let’s first look at the global macro-economic situation. The single greatest macro-economic factor impacting the technology sector is energy. Electronic products need energy to work. Electronic designers and manufacturers need energy to do their jobs. In the recent past, energy has been expensive to produce, particularly in the US market due to our reliance on foreign oil imports. Today in the US, the cost of producing energy is falling while consumption is slowing. The US is on a path to energy self-sufficiency according to the Energy Department’s annual outlook. By 2015, domestic oil output is on track to surpass its peak set in 1970.

What does cheaper energy imply for the technology industry? More investment. Less money spent on purchasing energy abroad means more capital available to fund new ventures at home and around the world. The recovery of US financial markets is also restoring investors’ confidence in earning higher ROI through public offerings. As investors begin to take more risk and inject sorely needed capital into the technology sector, we expect to see a surge in new startups. The EDA and IP industries will participate in this “re-birth” because they are enabling technologies critical to the success of the technology sector.

For an understanding of where the semiconductor IP business is going, let’s look at consumer technology. Who are the leaders in the consumer technology business today? Apple, Google, Samsung, Amazon, and perhaps a few others. Why? Because they possess semiconductor knowledge coupled with software expertise. In the case of Apple, for example, they also own content and its distribution, which makes them extremely profitable with higher recurring revenues and better margins. Content is king and the world is becoming application-centric. Software apps are content. Semiconductor IP is content. Those who own content, its publication and distribution, will thrive.

In the near term, the semiconductor IP business will continue to consolidate as major players compete to build and acquire broader content portfolios. For example, witness the recent Avago/LSI and Intel/Mindspeed deals. App-happy consumers have an insatiable appetite for the latest and greatest content and devices. Consumer technology product lifecycles place immense pressure on chip and system designers when developing and verifying the flexible hardware platforms that run these apps. Among their many important considerations are functionality, performance, power, security, and cost. System architectures and software definition and control are becoming the dominant source of product differentiation rather than hardware. The need for semiconductor IP that addresses these trends and accelerates time-to-volume production is growing. The need for EDA tools that help designers successfully use and efficiently reuse IP is also growing.

So what are the market opportunities for new IP and tool companies in the coming years? These days, talk about the Internet of Things (IoT) is plentiful and there will be many different types of IP in this sensor-oriented market space. Perhaps, the most interesting and promising of these IoT IP technologies will address our growing concerns about health and quality of life. The rise of wearable technologies that help monitor our vital signs and treat chronic health conditions promises to extend our human survival rate beyond 100 years. As these technologies progress, surely the “Bionic Man” will become common place in the not-too-distant future. Personally, and being members of the aging “Baby Boomer” generation, we hope that it happens sooner rather than later!

Bob Smith – Uniquify

I spent a good deal of 2013 traveling around the globe doing a series of seminars on double data rate (DDR) synchronous dynamic random-access memory (SDRAM), the ubiquitous class of memory chips. The seminars were meant to promote the fastest, smallest and lowest power state-of-the-art adaptive DDR IP technology. They highlighted how it can be used to enhance design speed and configured to minimize the design footprint and hit increasingly smaller low-power targets.

While marketing and promotion were on the agenda, the seminars were a great way to check in with designers to better understand their current DDR challenges and identify a few trends that will emerge in 2014. What we learned may surprise more than a few semiconductor industry watchers and offers some tantalizing predictions for next year.

The biggest surprise was hearing designers confirm plans to go directly to LPDDR4 (that is low-power DDR4, the latest JEDEC standard) and skip LPDDR3. The reasons are varied, but most noted that they’re getting greater gains in performance and low power by jumping to LPDDR4, especially important for mobile applications. According to JEDEC, the LPDDR4 architecture was designed to be power neutral, offer 2X bandwidth performance over previous generations, with low pin-count and low cost. It’s also backward compatible.

Even though many of the designers we heard from agreed that DDR3 is now mainstream, even more are starting projects based on DDR4. Some are motivated to move to DDR4 even without the need for extra performance for a practical and cost-effective reason. If they have a product with a long lifetime of five years or more, they are concerned that the DDR3 memory will cost more than DDR4 at some point. They have a choice: either build in the DDR4 now in anticipation or look for combination IP that handles both DDR3/4 in one IP. Many have chosen to do the former.

One final prediction I offer for 2014 is that 28nm is the technology node that will be around for a long time to come. Larger semiconductor companies, however, are starting new projects at 14/16 nm, taking advantage of the emerging FinFET technology.

According to my worldwide sources, memories and FinFET will dominate the discussion in 2014, which means it will be a lively year.

What Powers the IoT?

Wednesday, October 16th, 2013

By Stephan Ohr, Gartner

Powering the Internet of Things (IoT) is a special challenge, says Gartner analyst Stephan Ohr, especially for the wireless sensor nodes (WSNs) that must collect and report data on their environmental states (temperature, pressure, humidity, vibration and the like). While the majority of WSNs will harness nearby power sources and batteries, as many as 10% of sensor nodes must be entirely self-powering. Often located in places where it is difficult or impossible to replace batteries, these remote sensor nodes must continue to function for 20 years or more.

Two research and development efforts focus on self-powering remote sensor nodes. One looks at energy harvesting devices, which gather power from ambient sources. The major types of energy harvesting devices include specialized solar cells, vibration and motion energy harvesters, and devices that take advantage of thermal gradients between warm and cool surfaces. Research and development has concentrated on reducing the size and cost of these devices and making their energy gathering more efficient. But even in their current state of development, these devices could add up to a half-billion dollars in revenue per year within the next five years.

The other R&D effort concentrates on low-power analog semiconductors that elevate the millivolt outputs of energy harvesting devices to the levels necessary for powering sensors, microcontrollers, and wireless transceivers. These devices include DC-DC boost converters, sensor signal-conditioning amplifiers, and, in some cases, data converter ICs that transform the analog sensor signals into digital patterns the microcontroller can utilize. Broadline analog suppliers like Linear Technology Corp. and Analog Devices have added low-power ICs to their product portfolios. In addition to boosting low-level signals, these parts use very little power themselves; LTC’s low-power parts, for example, have a quiescent current rating of 1.3 microamps. Other companies, like Advanced Linear Devices (ALD), have been working on low-threshold electronics for years, and Texas Instruments has a lineup of specialized power management devices for WSN applications.
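The voltage elevation Ohr mentions follows a simple relation for an ideal boost converter, Vout = Vin / (1 - D); this sketch assumes a lossless converter in continuous conduction, so real duty cycles run somewhat higher:

```python
def boost_duty_cycle(v_in: float, v_out: float) -> float:
    # Ideal continuous-conduction boost: Vout = Vin / (1 - D),
    # so the required duty cycle is D = 1 - Vin / Vout.
    if not 0 < v_in < v_out:
        raise ValueError("boost conversion requires 0 < Vin < Vout")
    return 1.0 - v_in / v_out
```

Lifting a 100 mV thermoelectric output to a 3.3 V rail, for instance, implies a duty cycle near 0.97, which is why these converters are specialized parts rather than ordinary regulators.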

Ohr’s projections on energy harvesting will be part of his talk, “Powering the Internet of Things,” at the Sainte Claire Hotel, San Jose, CA on October 24, 2013. (Admission is free, but advance registration is required: The Internet of Things – A Disruption and an Evolution.)

Source: Gartner Research (Oct 2013)

Stephan (“Steve”) Ohr is the Research Director for Analog ICs, Sensors and Power Management devices at Gartner, Inc., and focuses on markets that promise semiconductor revenue growth. His recent reports have explored custom power management ICs for smart phones and tablets, the impact of Apple’s choices on the MEMS sensor industry, and a competitive landscape for MEMS sensor suppliers.

Ohr’s engineering degree, a BS in Industrial Engineering, comes from the New Jersey Institute of Technology (the Newark College of Engineering) and his graduate degree, an MA in sociology, comes from Rutgers.

Mixed Signal and Microcontrollers Enable IoT

Wednesday, October 16th, 2013

By John Blyler

The Internet of Things (IoT) has become such a hot topic that many business and technical experts see it as a key enabler for the fourth industrial revolution – following the steam engine, the conveyor belt and the first phase of IT automation technology (McKinsey Report). Still, for all the hype, the IoT concept seems hard to define.

From a technical standpoint, the IoT refers to the information system that uses smart sensors and embedded systems that connect wired or wirelessly via Internet protocols. ARM defines IoT as, “a collection of smart, sensor-enabled physical objects, and the networks, servers and services that interact with them. It is a trend and not a single sector or market.” How do these interpretations relate to the real world?

“There are two ways in which the “things” in the IoT interact with the physical world around us,” explains Diya Soubra, CPU Product Manager for ARM’s Processor Division. “First they convert physical (analog) data into information and second they act in the physical world based on information. An example of the first way is a temperature sensor that reports temperature while an example of the second way is a door lock opens upon receiving a text message.”

For many in the chip design and embedded space, the IoT seems like the latest iteration of the computer-communication convergence heralded over the last decade. But this time a new element has been added to the mix: sensor systems. This addition means that the role of analog and mixed-signal systems must now extend beyond RF and wireless devices to include smart sensors. This combination of analog mixed-signal, RF-wireless and digital microcontrollers has increased the complexity and confusion among chip, board, package and end-product semiconductor developers.

“Microcontrollers (MCUs) targeting IoT applications are becoming analog-intensive due to sensors, AD converters, RF, Power Management and other analog interfaces and modules that they integrate in addition to digital processor and memory,” says Mladen Nizic, Engineering Director for Mixed Signal Solutions at Cadence Design Systems. “Therefore, challenges and methodology are determined not by the processor, but by what is being integrated around it. This makes it difficult for digital designers to integrate such large amounts of analog. Often, analog or mixed-signal skills need to be in charge of SoC integration, or the digital and analog designer must work very closely to realize the system in silicon.”

The connected devices that make up the IoT must be able to communicate via the Internet. This means the addition of wired or wireless analog functionality to the sensors and devices. But a microcontroller is needed to convert the analog signal to digital and to run the Internet Protocol software stacks. This is why IoT requires a mix of digital (Internet) and analog (physical world) integration.
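The conversion step itself is simple to picture; below is a sketch of the ideal ratiometric mapping a microcontroller's ADC performs, with an assumed 12-bit resolution and 3.3 V reference:

```python
def adc_code_to_volts(code: int, bits: int = 12, v_ref: float = 3.3) -> float:
    # Ideal ratiometric conversion: the full-scale code maps to v_ref.
    if not 0 <= code < (1 << bits):
        raise ValueError("code out of range for this resolution")
    return code * v_ref / ((1 << bits) - 1)
```

The resulting number is what the Internet Protocol stack actually transmits, which is why every sensor node pairs an analog front end with a small digital core.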

Team Players?

Just how difficult is it for designers – especially digital designers – to incorporate analog and mixed-signal functionality into their SoCs? Soubra puts it this way (see Figure 1): “In the market, these are two distinct disciplines. Analogue is much harder to design and has its set of custom tools. Digital is easier since it is simpler to design, and it has its own tools. In the past (prior to the emergence of IoT devices), Team A designed the digital part of the system while Team B designed the analog part separately. Then these two distinct subsystems were combined and tested to see which one failed. Upon failure, both teams adjusted their designs, and the process was repeated until the system worked as a whole. These different groups using different tools resulted in a lengthy, time-consuming process.”

Contrast that approach with the current design cycle, in which all of the mixed-signal designers (Teams A and B) work together from the start as one team on one project using one tool. All tool vendors have offerings to do this today. New tools allow viewing the digital and analog parts at various levels and allow mixed simulations. Every year, the tools become more sophisticated to handle ever more complex designs.

Figure 1: Concurrent, OA-based mixed-signal implementations. (Courtesy of Cadence)

Simulation and IP

Today, all of the major chip- and board-level EDA and IP tool vendors have modeling and simulation tools that support mixed signal designs directly (see Figure 2).

Figure 2: Block diagram of pressures-temperature control and simulation system. (Courtesy Cadence)

Verification of the growing analog mixed-signal portion of SoCs is leading to better behavioral models, which abstract the analog upward to the register transfer level (RTL). This improvement provides a more consistent handoff between the analog and digital boundaries. Another improvement is the use of real number models (RNMs), which enable the discrete time transformations needed for pure digital solver simulation of analog mixed-signal verification. This approach enables faster simulation speeds for event-driven real-time models – a benefit over behavioral models like Verilog-A.
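A real number model can be pictured as a discrete-time update that an event-driven digital solver evaluates directly; the first-order low-pass below is a minimal illustration of the idea, not a production RNM:

```python
import math

def rc_lowpass_step(v_out: float, v_in: float, dt: float, tau: float) -> float:
    # Exact discrete-time update of a first-order RC low-pass over a step dt:
    # the output decays toward v_in with time constant tau.
    alpha = 1.0 - math.exp(-dt / tau)
    return v_out + alpha * (v_in - v_out)

def settle(v_in: float, steps: int, dt: float, tau: float) -> float:
    # Drive the model from rest with a constant input, one event per step.
    v = 0.0
    for _ in range(steps):
        v = rc_lowpass_step(v, v_in, dt, tau)
    return v
```

Because each update is a single real-valued assignment, the digital solver never has to hand off to an analog engine, which is where the simulation speedup comes from.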

AMS simulations are also using assertion techniques to improve verification – especially in interface testing. Another important trend is the use of statistical analysis to handle both the analog nature of mixed signals and the increasing number of operational modes. (See, “Moore’s Cycle, Fifth Horseman, Mixed Signals, and IP Stress”).

Figure: Cadence’s Mladen Nizic (background right) talks about mixed-signal technology with John Blyler. (Photo courtesy of Lani Wong)

For digital designers, there is a lot to learn in the integration of analog systems. However, the availability of ready-to-use analog IP makes it much easier than in the past. That’s one reason why the analog IP market has grown considerably in the last several years and will continue that trend. As reported earlier this year, the wireless chip market will be the leading growth segment for the semiconductor industry in 2013, predicts IHS iSuppli Semiconductor (“Semiconductor Growth Turns Wireless”).

The report states that original-equipment-manufacturer (OEM) spending on semiconductors for wireless applications will rise by 13.5% this year to reach a value of $69.6 billion – up from $62.3 billion in 2012.

The design and development of wireless and cellular chips – part of the IoT connectivity equation – reflects a continuing need for related semiconductor IP. All wireless devices and cell phones rely on RF and analog mixed-signal (AMS) integrated circuits to convert radio signals into digital data, which can be passed to a baseband processor for data processing. That’s why a “wireless” search on the website reveals list after list of IP companies providing MIPI controllers, ADCs, DACs, PHY and MAC cores, LNAs, PAs, mixers, PLLs, VCOs, audio/video codecs, Viterbi encoders/decoders, and more.

Real-World Examples

“Many traditional analog parts are adding more intelligence to the design, and some of them use microcontrollers to do so,” observes Joseph Yiu, Embedded Technology Specialist at ARM. “One example is an SoC from Analog Devices (the ADuCM360) that contains a 24-bit data acquisition system with multichannel analog-to-digital converters (ADCs), a 32-bit ARM Cortex-M3 processor, and Flash/EE memory. Direct interfacing is provided to external sensors in both wired and battery-powered applications.”

But, as Soubra mentioned earlier, the second way in which the IoT interacts with the physical world is to act on information – in other words, through the use of digital-to-analog converters (DACs). An example of a chip that converts digital signals back to the physical analog world is the SmartBond DA14580. This system-on-chip (SoC) is used to connect keyboards, mice and remote controls wirelessly to tablets, laptops and smart TVs. It consists of a Bluetooth subsystem, a 32-bit ARM Cortex-M0 microcontroller, an antenna connection and GPIO interfaces.

Challenges Ahead

In addition to tools that simulate analog, mixed-signal and digital designs together, perhaps the next most critical challenge in IoT hardware and software development is the lack of standards.

“The industry needs to converge on the standard(s) for IoT communications to enable information flow among different types of devices,” stressed Wang. “Software will be the key to the flourishing of IoT applications, as demonstrated by ARM’s recent acquisition of Sensinode.” A Finnish software company, Sensinode builds a variation of the Internet Protocols (IP) designed for connecting IoT devices. Specifically, the company develops to the 6LoWPAN standard, a compression format for IPv6 designed for low-power, low-bandwidth wireless links.

If IoT devices are to receive widespread adoption by consumers, then security of the data collected and acted upon by these devices must be robust. (Security will be covered in future articles).

Analog and digital integration, interface and communication standards, and system-level security have always been challenges faced by leading edge designers. The only thing that changes is the increasing complexity of the designs. With the dawning of the IoT, that complexity will spread from every physical world sensor node to every cloud-based server receiving data from or transmitting to that node. Perhaps this complexity spreading will ultimately be the biggest challenge faced by today’s intrepid designers.

Analog and RF Added To IC Simulation Discussion

Thursday, July 26th, 2012

By John Blyler
System-Level Design sat down with Nicolas Williams, Tanner EDA’s director of product management, to talk about trends in analog and RF chip design.

SLD: What are the big trends in analog and RF simulation?
Williams: The increased need to bring more layout-dependent information into the front-end design early on. Layout-dependent effects influence performance, so it is no longer possible to separate “design” from “layout” phases, as we did traditionally. With nanoscale technologies, a multitude of physical device pattern separation dimensions must now be entered into the pre-layout simulation models to accurately predict post-layout circuit performance. This is more than just adding some stray capacitance to some nodes. It now includes accurate distances from gate to gate, gate to trench (SA, SB, etc.), distance in both X and Y dimensions between device active areas, distance from the gate contact to the channel edge (XGW), number of gate contacts (NGCON), distance to a single well edge (WPE), etc. Getting the pre-layout parameters accurately entered into the simulation will minimize re-design and re-layout resulting from performance deficiencies found during post-layout parameter extraction and design-verification simulations.
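As an illustration of what entering these layout parameters into a pre-layout simulation can look like, the hypothetical Python helper below emits a SPICE MOSFET instance line carrying layout-dependent instance parameters such as SA, SB and NGCON. The helper and its values are illustrative only, not tied to any particular simulator or PDK:

```python
def mosfet_instance(name, drain, gate, source, bulk, model, w_um, l_um,
                    layout):
    """Build a SPICE instance line that carries layout-dependent
    parameters (SA/SB: distances to STI edges, NGCON: number of gate
    contacts, ...) so pre-layout simulation sees the geometry the
    layout is expected to have."""
    params = " ".join(f"{k}={v}" for k, v in layout.items())
    return (f"{name} {drain} {gate} {source} {bulk} {model} "
            f"W={w_um}u L={l_um}u {params}")

# Hypothetical device with estimated post-layout geometry annotated
line = mosfet_instance("M1", "out", "in", "0", "0", "nch", 1.0, 0.03,
                       {"SA": "0.2u", "SB": "0.2u", "NGCON": 2})
print(line)
```

A netlist generator like this lets estimated geometry be refined as the floorplan firms up, rather than waiting for extraction.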

Another issue is larger variability at nanoscale. This is not so much due to manufacturing tolerance, but really because of layout-dependent effects. These effects include the ones listed above plus several that are not even modeled, such as stress from nearby and overlying metal modifying Vt and gm, and poor lithography. The lithography challenges are so severe in deep nanoscale that device patterns on final silicon look like they were drawn by Salvador Dali. Poor pattern shapes, increasing misalignment and shape-dependence on nearby patterns result in more gate length and width variation. More variability requires more complex simulations to have better confidence in your design. This requires faster simulators to simulate more corners or more Monte Carlo runs.
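To make the Monte Carlo point concrete, here is a minimal Python sketch that samples gate length and width around nominal and measures the resulting spread of a toy delay model. The model and its numbers are invented for illustration; a real flow would call a SPICE simulator once per sample:

```python
import random
import statistics

def ring_delay(l_nm, w_nm):
    """Toy delay model (hypothetical): delay grows with gate length and
    shrinks with width. Stands in for a real SPICE run."""
    return 10.0 * l_nm / w_nm  # picoseconds, illustrative only

random.seed(1)
samples = []
for _ in range(2000):
    # Draw gate length/width around nominal to mimic lithography variation
    l = random.gauss(28.0, 1.5)    # nm
    w = random.gauss(100.0, 4.0)   # nm
    samples.append(ring_delay(l, w))

mu = statistics.mean(samples)
sigma = statistics.stdev(samples)
print(f"mean delay {mu:.2f} ps, sigma {sigma:.2f} ps")
```

More variability sources simply mean more sampled parameters per run, which is exactly why simulator throughput becomes the bottleneck.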

SLD: Statistical analysis, design-of-experiments, and corner models—digital designers already hear many of these terms from the yield experts in the foundries. Should they now expect to hear them from the analog and RF simulator communities?
Williams: Statistical analysis and corner models have always been part of analog and RF design, but in the past it didn’t take much to try all combinations. There was no need to take a sample of the population when you could check the entire population. In nanoscale technologies, the number of effects that can affect circuit performance has grown exponentially to the point where you have to take a statistical approach when checking corners. The older alternative approach of running the worst-case combinations of all design corners from all effects would result in an overly pessimistic result. Also, when the number of Monte Carlo simulations required to statistically represent your circuit has grown too large, that is where ‘design-of-experiments’ comes into play, using methods such as Latin Hypercube sampling.
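A minimal sketch of the stratified sampling idea mentioned above, in pure Python. This is the standard Latin hypercube construction, not any vendor's implementation:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random):
    """Latin hypercube sampling: split each dimension into n_samples
    equal bins, draw exactly one point per bin, and shuffle the bin
    order independently per dimension so bins pair up randomly."""
    columns = []
    for _ in range(n_dims):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    # Zip the shuffled columns into n_samples points in [0, 1)^n_dims
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

random.seed(42)
pts = latin_hypercube(10, 3)
print(pts[0])
```

Compared with plain Monte Carlo, every one-dimensional projection is guaranteed to cover its full range, so fewer samples give a representative sweep of each process parameter.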

Simulation accuracy is limited by model accuracy. Statistical variation of devices and parameters is more richly specified than in the traditional SPICE approach for Monte Carlo (where you had “lot” and “device” parameters). Now you have spatially correlated variations, and you have the much richer .variation blocks in SPICE. Foundries are now “expected” to provide usable models at this level, which raises all kinds of foundry-proprietary concerns.

SLD: How will this increase in statistical distribution analysis affect traditional analog electronic circuit simulators like Spice?
Williams: Statistical analysis requires a huge number of simulations, which can either take a long time to execute or be parallelized across CPU farms or cloud services, along with smarter ways to sample which “corners” to run to get reasonable confidence that you will be successful in silicon. Traditionally, aggregation of such results would have been a manual process, or at best some custom design flow development undertaken by the end user. Look for an upcoming sea change in how simulators are designed, sold and deployed by the EDA vendor community, to better address these needs.
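The parallelization Williams describes can be sketched in a few lines: farm independent corner simulations out to a worker pool and aggregate the results. The "simulator" here is a stand-in gain model with invented coefficients, not a real SPICE call:

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

def simulate_corner(corner):
    """Stand-in for one circuit simulation (hypothetical gain model)."""
    process, vdd, temp = corner
    speed = {"slow": 0.9, "typ": 1.0, "fast": 1.1}[process]
    # Toy dependence: gain scales with process speed and supply,
    # and degrades slightly with temperature.
    return 20.0 * speed * (vdd / 1.0) * (1.0 - 0.001 * (temp - 25))

# Full-factorial PVT corner list: 3 x 3 x 3 = 27 independent runs
corners = list(itertools.product(
    ["slow", "typ", "fast"],   # process
    [0.9, 1.0, 1.1],           # supply voltage (V)
    [-40, 25, 125],            # temperature (C)
))

with ThreadPoolExecutor(max_workers=8) as pool:
    gains = list(pool.map(simulate_corner, corners))

worst = min(gains)
print(f"{len(corners)} corners simulated, worst-case gain {worst:.2f} dB")
```

Because each corner is independent, the same pattern scales from a workstation thread pool to a CPU farm or cloud batch service.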

All these simulations are great if your design meets all of its specifications. But what happens if it doesn’t? I feel the next step will be to use these simulations to figure out what variables your design is most sensitive to. Then you can try to mitigate the variability by improving the circuit or physical design (layout).

Trends In Analog And RF IC Simulation

Thursday, May 24th, 2012

By John Blyler
System-Level Design (SLD) sat down to discuss trends in analog and RF integrated circuit design with Ravi Subramanian, president and CEO of Berkeley Design Automation (at the recent GlobalPress eSummit), and later with Trent McConaghy, Solido’s CTO. What follows are excerpts of those talks.

SLD: What are the important trends in analog and RF simulation?
Subramanian: I see two big trends. One is related to physics, namely, the need to bring in physical effects early in the design process. The second trend relates to the increased importance of statistics in doing design work. Expertise in statistics is becoming a must. One of the strongest demands made on our company is to help teach engineers how to do statistical analysis. What is required is an appreciation of the Design-of-Experiments (DOE) approach—common in the manufacturing world. Design engineers need to understand which simulations are needed for analog versus digital designs. For example, in a typical pre-layout simulation, you may want to characterize a block with very high confidence. Further, you may also want to do that block extracted in post layout with very high confidence. But what does ‘high confidence’ mean? How do you know when you have enough confidence? If you have a normally distributed Gaussian variable, you may have to run 500 simulations to get 95% confidence in that result. Every simulation waveform and data point has a confidence band associated with it.
McConaghy: As always, there is a pull from customers for simulators that are faster and better. In general, simulators have been delivering on this. Simulators are getting faster, both in simulation time for larger circuits, and by easier-to-use multi-core and multi-machine implementations. Simulators are also getting better. They converge on a broader range of circuits, handle larger circuits, and more cleanly support mixed-signal circuits.
There’s another trend: meta-simulation. This term describes tools that feel like using simulators from the perspective of the designer. Just like simulators, meta-simulators input netlists, and output scalar or vector measures. However, meta-simulators actually call circuit simulators in the loop. Meta-simulators are used for fast PVT analysis, fast high-sigma statistical analysis, intelligent Monte Carlo analysis and sensitivity analysis. They bring the value of simulation to a “meta” (higher) level. I believe we’ll see a lot more meta-simulation, as the simulators themselves get faster and the need for higher-level analysis grows.
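Subramanian's point about confidence bands can be illustrated with a small sketch: estimate a 95% confidence interval on a measured mean from a batch of Monte Carlo runs. The VCO-frequency numbers are invented; the normal-approximation interval itself is standard:

```python
import math
import random
import statistics

def mean_ci95(samples):
    """95% confidence interval on the mean (normal approximation)."""
    mu = statistics.mean(samples)
    se = statistics.stdev(samples) / math.sqrt(len(samples))
    return mu - 1.96 * se, mu + 1.96 * se

random.seed(7)
# Pretend each draw is one Monte Carlo simulation of a VCO frequency (GHz)
runs = [random.gauss(1.8, 0.05) for _ in range(500)]
lo, hi = mean_ci95(runs)
print(f"after 500 runs: mean frequency in [{lo:.4f}, {hi:.4f}] GHz")
```

Since the interval width shrinks only as the square root of the run count, halving the uncertainty costs four times as many simulations, which is why simulator speed and smarter sampling both matter.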

SLD: This sounds a lot like the Six Sigma methodology, a manufacturing technique used to find and remove defects from high-volume production—like CMOS wafers. Will design engineers really be able to incorporate this statistical approach into their design simulations?
Subramanian: Tools can help engineers incorporate statistical methods into their work. But let’s talk about the need for high sigma values. To achieve high sigma, you need a good experiment and a very accurate simulator. If you have a good experiment but you want to run it quickly and give up accuracy, you may have a Six-Sigma setup, but a simulator that has been relaxed so that the Six-Sigma data is meaningless. This shows the difference between accuracy and precision. You can have a very precise answer that isn’t accurate.
To summarize: Today’s low-node processes have associated physical effects that can only be handled by statistical methods. These two trends mean that new types of simulation must be run. Engineers need to give more thought as to which corners should be covered in their design simulations. Semiconductor chip foundries provide corners that are slow, fast and typical, based upon the rise- and fall-times of flip-flops. How relevant is that for a voltage-controlled oscillator (VCO)? In fact, are there more analog-specific corners? Yes, there are.

SLD: Statistical analysis, design-of-experiments, and corner models—designers already hear many of these terms from the yield experts in the foundries. Should they now expect to hear them from the analog and RF simulator communities?
Subramanian: Designers must understand or have tools that help them deal with statistical processes. For example, how do you know if a VCO will yield well? It must have frequency and voltage characteristics that are reliable over a range of conditions. But if you only test it over common digital corners, you may miss some important analog corners where the VCO performs poorly. A corner is simply a performance metric, such as output frequency. You want to measure it within a particular confidence level, which is where statistics are needed. It may turn out that, in addition to the digital corners, you’ll need to include a few analog ones.
McConaghy: These terms imply the need to address variation, and designers do need to make sure that variation doesn’t kill their design. Variation causes engineers to overdesign, wasting circuit performance, power and area, or to underdesign, hitting yield failures. To take full advantage of a process node, designers need tools that allow them to achieve optimal performance and yield. Since variation is a big issue, it won’t be surprising if simulator companies start using these terms with designers. The best EDA tools handle variation, while allowing the engineer to efficiently focus on designing with familiar flows like corner-based design and familiar analyses like PVT and Monte Carlo. But now the corners must be truly accurate, i.e., PVT corners must cause the actual worst-case behavior, and Monte Carlo corners must bound circuit (not device) performances like “gain” at the three-sigma level or even six-sigma level. These PVT and Monte Carlo analyses must be extremely fast, handling thousands of PVT corners, or billions of Monte Carlo samples.
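One common way to turn Monte Carlo data into a reusable "statistical corner", sketched here under the simplifying assumption that the performance distribution is Gaussian: pick the actual sample whose performance sits closest to the three-sigma bound. The sample data below is synthetic; a real flow would get each gain value from a simulator run:

```python
import random
import statistics

random.seed(3)
# Synthetic Monte Carlo results: (sample_id, gain_dB). In a real flow
# each entry would come from one simulation with statistical models.
mc = [(i, random.gauss(40.0, 1.2)) for i in range(5000)]

gains = [g for _, g in mc]
mu = statistics.mean(gains)
sigma = statistics.stdev(gains)
target = mu - 3.0 * sigma  # low-gain bound at the three-sigma level

# The "Monte Carlo corner": the real sample closest to the 3-sigma
# target; its parameter set can be re-simulated later as one bounding
# corner instead of rerunning the full Monte Carlo sweep.
corner_id, corner_gain = min(mc, key=lambda s: abs(s[1] - target))
print(f"3-sigma gain bound {target:.2f} dB (sample {corner_id})")
```

Re-simulating that single sample during design iterations is vastly cheaper than repeating thousands of Monte Carlo runs after every schematic change.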

SLD: Would a typical digital corner be a transistor’s switching speed?
Subramanian: Yes. Foundries parameterized transistors to be slow, typical and fast in terms of performance. The actual transistor model parameters will vary around those three cases, e.g., a very fast transistor will have a fast rise and switching time. So far, the whole notion of corners has been driven by the digital guys. That is natural. But now, analog shows up at the party at the same time as digital, especially at 28nm geometries.
The minimal requirement today is that all designs must pass the digital corners. But for the analog circuits to yield, they must pass the digital and specific analog corners, i.e., they must also pass the conditions and variations relevant to the performance of that analog device. How do you find out what those other corners are? Most designers don’t have time to run a billion simulations. That is why people need to start doing distribution analysis for analog corners like frequency, gain, signal-to-noise ratio, jitter, power supply rejection ratio, etc. For each of these analog circuit measurements, a distribution curve is created from which Six-Sigma data can be obtained. Will it always be a Gaussian curve? Perhaps not.
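"Will it always be a Gaussian curve? Perhaps not" is easy to demonstrate: a nonlinear circuit relationship turns a Gaussian parameter into a skewed performance distribution. The toy VCO below uses the ideal LC formula with invented component values, and the skew shows up as the mean and median of the output frequency pulling apart:

```python
import math
import random
import statistics

random.seed(11)
L = 1e-9  # tank inductance (H), held fixed; values are invented
freqs = []
for _ in range(20000):
    # Gaussian process variation on the tank capacitor
    C = random.gauss(1e-12, 0.08e-12)  # F, 8% sigma
    freqs.append(1.0 / (2 * math.pi * math.sqrt(L * C)))  # ideal LC VCO

mu = statistics.mean(freqs)
med = statistics.median(freqs)
# f ~ 1/sqrt(C) is nonlinear, so Gaussian C yields a skewed f:
# the mean is pulled above the median.
print(f"mean {mu / 1e9:.4f} GHz vs median {med / 1e9:.4f} GHz")
```

For a skewed distribution like this, quoting "mean plus three sigma" misstates the true tail, which is exactly why distribution analysis on the measured metric matters.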

SLD: How will this increase in statistical distribution analysis affect traditional analog electronic circuit simulators like Spice?
Subramanian: Spice needs to start generating these statistically-based distribution curves. I think we are at the early days of that frontier where you can literally see yourself having a design cockpit where you can make statistics simple to use. You have to make it simple to use otherwise it won’t happen. I think that is the responsibility of the EDA industry.
McConaghy: The traditional simulators will be used more than ever, as the meta-simulators call upon them to do fast and efficient PVT and statistical variation analysis up to 6-sigma design. The meta-simulators incorporate intelligent sampling algorithms to cut down the number of simulations required compared to brute force analysis. Today, many customers use hundreds of traditional SPICE simulator licenses to do these variation analysis tasks. However, they would like to be able to get the accuracy of billions of Monte Carlo samples in only thousands of actual simulations. These analyses are being done on traditional analog/RF, mixed-signal designs as well as memory, standard cell library and other custom digital design.
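Importance sampling is one of the intelligent-sampling ideas that lets thousands of simulations stand in for billions of brute-force Monte Carlo samples. This is a generic textbook sketch that estimates a 4-sigma tail probability, not Solido's algorithm:

```python
import math
import random

def tail_prob_importance(n, spec=4.0, shift=4.0, rng=random):
    """Estimate P(x > spec) for standard-normal x by sampling from a
    normal centered in the failure region (mean = shift) and weighting
    each failing sample by the likelihood ratio p(x) / q(x)."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)  # biased draw: failures are common here
        if x > spec:
            # N(0,1) density over N(shift,1) density at x
            total += math.exp(-x * x / 2.0) / math.exp(-(x - shift) ** 2 / 2.0)
    return total / n

random.seed(5)
est = tail_prob_importance(20000)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # true 4-sigma tail
print(f"importance-sampled {est:.3e} vs exact {exact:.3e}")
```

Brute-force Monte Carlo would need on the order of thirty million samples just to see one 4-sigma failure; here 20,000 biased draws recover the probability to within a few percent.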

SLD: I know that several of the major EDA tool vendors have recently released tools to make the statistical nature of low process node yields more accessible and usable by digital chip designers. Are there similar tools for the world of analog mixed-signal design?
Subramanian: Analog and RF designs are now going through this same process, to move from an art to a science. That’s why I say that the nanometer mixed-signal era is here (see figure). Simulation tools are needed, but so are analysis capabilities. This is why our simulation tools have become platforms for analysis. We support the major EDA simulators but add an analysis cockpit for designers.

SLD: Why now? What is unique about the leading-edge 28nm process geometries? I’d have expected a similar problem at a higher node, e.g., 65nm. Is it a yield issue?
Subramanian: Exactly. At 65nm, designers were still able to margin their designs sufficiently. But now the cost of the margin becomes more significant because you either pay for it with area or with power, which is really current. At 28nm, with SerDes (high frequency and high performance) and tighter power budgets, the cost of the margin becomes too high. If you don’t do power-collapsing, then you won’t meet the power targets.

SLD: Is memory management becoming a bigger market for simulation?
Subramanian: Traditionally, memory has had some analog pieces like charge pumps, sense amplifiers, etc. Now, in order to achieve higher and higher memory density, vendors are going to multi-level cells. This allows storage of 2, 4 or 8 bits on a single cell. But to achieve this density you need better voltage resolution between the different bit levels, which means you need more accurate simulation to measure the impact of noise. Noise can appear as a bit error when you have tighter voltage margins. You might wonder if this is really a significant problem. Consider Apple’s purchase of Anobit, a company that corrected those types of errors. If you can design better memory, then you can mitigate the need for error-correction hardware and software. But to do that, you need more accurate analog simulation of memory. You cannot use a digital fast-Spice tool, which uses a transistor table look-up model. Instead, you must use a transistor BSIM (Berkeley Short-channel IGFET Model) model.
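The link between level spacing and bit errors can be made concrete with a one-line Gaussian noise model. The margin and noise numbers below are invented, and real MLC error mechanisms are considerably more complex:

```python
import math

def bit_error_rate(level_spacing_mv, noise_sigma_mv):
    """Probability that zero-mean Gaussian read noise pushes a stored
    level past the midpoint to a neighboring level (one-sided model)."""
    half_margin = level_spacing_mv / 2.0
    return 0.5 * math.erfc(half_margin / (noise_sigma_mv * math.sqrt(2.0)))

# Packing more levels into the same voltage window halves the spacing
# each time and sharply raises the error rate for fixed read noise.
for spacing in (400.0, 200.0, 100.0):
    print(f"{spacing:.0f} mV spacing: BER {bit_error_rate(spacing, 25.0):.2e}")
```

The error rate is exponentially sensitive to the margin-to-noise ratio, which is why accurate noise simulation becomes critical as the levels per cell increase.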

