
Posts Tagged ‘analog’

Analog Designers Face Low Power Challenges

Monday, June 16th, 2014

By John Blyler, Chief Content Officer

Can mixed signal designs achieve the low power needed by today’s tightly integrated SoCs and embedded IoT communication systems?

System-level power budgets are affected by SoC integration. Setting aside the digital scaling benefits of smaller geometric nodes, leading-edge SoCs achieve higher performance and tighter integration by decreasing voltage levels, but at a cost. If power is assumed to be constant, that cost is the increased current flow (P = VI) that must be delivered to an ever-larger number of processor cores. That's why SoC power delivery and distribution remain a major challenge for chip architects, designers and verification engineers.
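To make that trade-off concrete, here is a minimal sketch of the arithmetic, assuming a 2 W core power budget (an illustrative number, not one from the article):

# Minimal sketch: current required to deliver a fixed power budget (P = V * I)
# as the core supply voltage scales down. Numbers are illustrative only.

power_budget_w = 2.0  # assumed total core power budget in watts

for vdd in (1.2, 1.0, 0.9, 0.8, 0.7):  # representative supply voltages
    current_a = power_budget_w / vdd   # I = P / V
    print(f"Vdd = {vdd:.1f} V -> {current_a:.2f} A delivered through the PDN")

Halving the supply voltage doubles the current the power delivery network must carry for the same power, which is the crux of the delivery challenge.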

As with digital engineers, analog and mixed-signal designers must consider ways to lower power consumption early in the design phase. Beyond that consideration, there are several common ways to reduce the analog mixed signal portion of a power budget. These include low-power transmitter architectures; analog signal processing in low-voltage domains; and sleep mode power reduction. (ARM's Diya Soubra talks about mixed signal sleep modes in "Digital Designers Grapple with Analog Mixed Signal Designs.")

To efficiently explore the design space and make basic system-level trade-offs, SoC architects must adapt their modeling style to accommodate mixed-signal design and verification techniques. Such early efforts also help prevent the overdesign that occurs when globally distributed, cross-discipline (digital and analog) design teams don't share information. For example, if designers are creating company-specific intellectual property (IP) cores, they may not be aware of how various IP subsystems are being used at the full-chip level.

Similarly, SoC package-level designers must understand how IP is used at the higher board level. Without this information, designers tend to overcompensate, i.e., overdesign their portion of the design to ensure it stays within the allocated power budget. But that leads to increased cost and power consumption.

Power Modes

From an architectural viewpoint, power systems fall into two categories: active and idle/standby modes. With all of the physical layer (PHY) link integration occurring on SoCs, active power considerations must apply not only to digital but also to analog and input/output (I/O) designs.

Within the active and idle/standby modes are the many power states needed to balance the current load amongst the various voltage islands. With increased performance and power demands on both digital and analog designs, there is growing interest in Time- and Frequency-Domain Power Distribution (or Delivery) Network (PDN) analysis. Vic Kulkarni, VP and GM at Apache Design, believes that careful power budgeting at a high level enables the efficient design of the power delivery network in the downstream design flow. (See "System Level Power Budgeting.")

SoC power must be modeled throughout all aspects of the design implementation. Although no single modeling approach will work for everything, a number of vertical markets like automotive have found success using virtual prototypes. "System virtual prototypes are increasingly a mix – not just of hardware and software, but also of digital, control, and analog components integrated with many different sensors and actuators," observed Arun Mulpur of The MathWorks. (See "Chip, Boards and Beyond: Modeling Hardware and Software.")

Communications Modes

Next to driving a device screen or display, the communication subsystem tends to consume the most power on a SoC. That's why several low power initiatives, such as Bluetooth Low Energy, have arisen recently. Today there are three mainstream Bluetooth standards in use: Bluetooth 2.0 (often referred to as Bluetooth Classic); Bluetooth 4.0, which offers both a standard high-speed mode and a low-energy mode with a limited data rate, referred to as Bluetooth LE; and a single-mode Bluetooth LE standard that keeps power consumption to a minimum. (See "Wearable Technologies Meet Bluetooth Low Energy.")

Power profiling of software is an important part of designing for cellular embedded systems. But cellular is only one connectivity option when designing the SoC or board-level device. Other options include short-range subsystems such as Bluetooth, Zigbee, 6LoWPAN and mesh networks. If Wi-Fi connectivity is needed, then there will also be choices to make for fixed, LAN-connected things using Ethernet or proprietary cabling systems. Further, there will be interplay among all these different ways to connect that must be simulated at an overall system level. (See "Cellular vs. WiFi Embedded Design.")

It has only been in the last decade or so that mixed signal systems have played a more dominant role in system-level power budgets. Today's trend toward a highly connected Internet of Things (IoT) world means that low power, mixed signal communication design must begin early in the design phase to be considered part of the overall system-level power management process.

Digital Designers Grapple with Analog Mixed Signal Designs

Tuesday, June 10th, 2014

Today's growth of analog and mixed signal circuits in Internet of Things (IoT) applications raises questions about compiling C code, running simulations, low power design, latency and IP integration.

Often, the most valuable portion of a technical seminar is found in the question-and-answer (Q&A) session that follows the actual presentation. For me, that was true during a recent talk on the creation of mixed signal devices for smart analog and Internet of Things (IoT) applications. The speakers included Diya Soubra, CPU Product Marketing Manager, and Joel Rosenberg, Platform Marketing Director, at ARM; and Mladen Nizic, Engineering Director at Cadence. What follows is my paraphrasing of the Q&A session with reference to the presentation where appropriate. – JB

Question: Is it possible to run C and assembly code on an ARM® Cortex®-M0 processor in Cadence’s Virtuoso for custom IC design? Is there a C-compiler within the tool?

Nizic: The C compiler comes from Keil®, ARM's software development kit. The ARM DS-5 Development Studio is an Eclipse-based tool suite for the company's processors and SoCs. Once the code is compiled, it is run together with the RTL in our (Cadence) Incisive mixed-signal simulator. The result is a simulation of the processor driven by an instruction set, with all digital peripherals simulated in RTL or at the gate level. The analog portions of the design are simulated at the appropriate abstraction level, i.e., SPICE transistor level, electrical behavioral Verilog-A, or a real number model. (See the mixed signal trends section of "Moore's Cycle, Fifth Horseman, Mixed Signals, and IP Stress.")

You can use electrical behavioral models such as Verilog-A, VHDL-A and the -AMS variants to simulate the analog portions of the design. But real number models have become increasingly popular for this task. With real number models, you can model analog signals with variable amplitudes but discrete time steps, just as required by digital simulation. Simulations with a real number model representation for analog run at almost the same speed as the digital simulation and with very little penalty in accuracy. For example, here (see Figure 1) are the results of a system simulation where we verify how quickly the Cortex-M0 would use a regulation signal to bring pressure to a specified value. It takes some 28 clock cycles. Other test bench scenarios might be explored, e.g., sending the Cortex-M0 into sleep mode if no changes in pressure are detected, or waking up the processor in a few clock cycles to stabilize the system. The point is that you can swap these real number models for electrical models in Verilog-A, or for transistor models, and redo your simulation to verify that the transistor model performs as expected.

Figure 1: The results of a Cadence simulation to verify the accuracy of a Cortex-M0 to regulate a pressure monitoring system. (Courtesy of Cadence)
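As a rough analogy for what a real number model does, updating a real-valued analog quantity once per digital time step instead of solving it continuously, here is a minimal Python sketch of a pressure-regulation loop. The target, loop gain and settling threshold are invented, and an actual flow would express this as a wreal or Verilog-AMS model rather than Python:

# Conceptual analogy of a real number model: the analog quantity (pressure) is a
# real value updated once per digital clock tick rather than solved by SPICE.
# All constants below are invented for illustration.
target = 1.0      # desired pressure (normalized)
pressure = 0.0    # real-valued "analog" signal
gain = 0.2        # loop gain applied by the controller each clock cycle

for cycle in range(1, 101):
    error = target - pressure
    pressure += gain * error           # discrete-time update of the analog value
    if abs(target - pressure) < 0.01:  # regulation considered settled
        print(f"settled after {cycle} clock cycles")
        break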

Question: Can you give some examples of applications where products are incorporating analog capabilities and how they are using them?

Soubra: Motor control, power conversion and power control are all good examples of where adding a little bit of (processor) smarts next to the mixed signal input can make a big difference. This is a clear case of how the industry is shifting toward this kind of analog integration.

Question: What capabilities does ARM offer to support the low power requirement for mixed signal SoC design?

Rosenberg: The answer to this question has both a memory and logic component. In terms of memory, we offer the extended range register file compilers which today can go up to 256k bits. Even though the performance requirement for a given application may be relatively low, the user will want to boot from the flash into the SRAM or the register file instance. Then they will shut down the flash and execute out of the RAM as the RAM offers significantly lower active as well as stand-by power compared to executing out of flash.

On the logic side, we offer a selection of 7-, 9- and 12-track libraries. Within those, there are three Vt options – for high, nominal and low speeds. Beyond that, we also offer power management kits that provide things like level shifters and power gating so the user can shut down inactive parts of the SoC circuit.

Question: What are the latency numbers for waking up different domains that have been put to sleep?

Soubra: The numbers that I shared during the presentation do not include any peripherals, since I have no way of knowing what peripherals will be added. In terms of who is consuming what power, the normal progression tends to be the processor, peripherals, bus and then the flash block. The "wake-up" state latency depends upon the implementation itself. You can go from tens of cycles to multiples of tens of cycles, depending upon how the clocks and phase locked loops (PLLs) are implemented. If we shut everything down, then a few cycles are required before everything goes off and before we can restart the processor. But we are talking about tens, not hundreds, of cycles.

Question: And for the wake-up clock latency?

Soubra: Wake-up is the same thing, because when the wake-up controller says “lets go,” it has to restart all the clocks before it starts the processor. So it is exactly the same amount.

ARM Cortex-M low power technologies.

Question: What analog intellectual property (IP) components are offered by ARM and Cadence? How can designers integrate their own IP in the flow?

Nizic: At Cadence, through the acquisition of Cosmic Circuits, we have a portfolio of applicable analog and mixed signal IP, e.g., converters, sensors and the like. We support all design views that are necessary for this kind of methodology, including model abstractions ranging from real number models to behavioral models. Like ARM's physical IP, all of our IP is qualified for the various foundry nodes, so the process of integrating IP and silicon is fairly smooth.

Soubra: From ARM's point of view, we are totally focused on the digital part of the SoC, including the processors, bus infrastructure components, peripherals, and memory controllers, as well as the physical IP (standard cell libraries, I/O cells, SRAM, etc.). Designers integrate the digital parts (processors, bus components, peripherals and memory controller) in the RTL design stages. They can also add the functional simulation models of memories and I/O cells in simulations, together with models of analog components from Cadence. The actual physical IP is integrated during the various implementation stages (synthesis, placement and routing, etc.).

Question: How can designers integrate their own IP into the SoC?

Nizic: Some of the capabilities and flows that we described are actually used to create customer IP for later reuse in SoC integration. There is a flow that can be used whether the customer's IP is pure analog or contains a small amount of standard cell digital. For example, the behavioral modeling capabilities help package this IP for functional simulation in full-chip verification. But getting the IP ready is only one aspect of the flow.

From a physical abstract, it's possible to characterize the IP for use in a timing-driven mode. This approach allows you to physically verify the IP on the SoC for full-chip verification.

EDA Industry Predictions for 2014 – Part 1

Tuesday, January 7th, 2014

Gabe Moretti, Contributing Editor

I always ask for predictions for the coming year, and generally get a good response. But this year the volume of responses was so high that I could not possibly cover all of the material in one article. So I will use two articles, one week apart, to record the opinions submitted. This first section details the contributions of Andrew Yang of ANSYS – Apache Design, Mick Tegethoff of Berkeley Design Automation, Michael Munsey of Dassault Systèmes, Oz Levia from Jasper Design Automation, Joe Sawicki from Mentor Graphics, Grant Pierce and Jim Hogan from Sonics, and Bob Smith of Uniquify.

Andrew Yang – ANSYS Apache Design

For 2014 and beyond, we’ll see increased connectivity of the electronic devices that are pervasive in our world today. This trend will continue to drive the existing mobile market growth as well as make an impact on upcoming automotive electronics. The mobile market will be dominated by a handful of chip manufacturers and those companies that support the mobile ecosystem. The automotive market is a big consumer of electronics components that are part of a complex system that help improve safety and reliability, as well as provide users with real-time interaction with their surroundings.

For semiconductor companies to remain competitive in these markets, they will need to take a "system" view for their design and verification. The traditional silo-based methodology, where each component of the system is designed and analyzed independently, can result in products with higher cost, poor quality, and schedule delays. Adoption of system-level simulation will allow engineers to carry out early system prototyping, analyze the interaction of the components, and achieve optimal design trade-offs.

Mick Tegethoff – Berkeley Design Automation

FinFET Technology will dominate the landscape in semiconductor design and verification as more companies adopt the technology. FinFET is a revolutionary change to device fabrication and modeling, requiring a more complex SPICE model and challenging the existing circuit behavior “rules of thumb” on which experienced designers have relied for years with planar devices.

Designers of complex analog/RF circuits, including PLLs, ADCs, SerDes, and transceivers, will need to relearn the device behavior in these applications and to explore alternative architectures. As a result, design teams will have to rely more than ever on accurate circuit verification tools that are foundry-certified for FinFET technology and have the performance and capacity to handle complex circuits including physical effects such as device noise, complex parasitics, and process variability.

In memory applications, FinFET technology will continue to drive change and challenge the status quo of “relaxed accuracy” simulation for IP characterization. Design teams are realizing that it is no longer acceptable to tolerate 2–5% inaccuracy in memory IP characterization. They are looking for verification tools that can deliver SPICE-like accuracy in a time frame on a par with their current solutions.

However, accurate circuit verification alone will not be sufficient. The impact of FinFET devices and new circuit architectures in analog, RF, mixed-signal, and memory applications demands full confidence from design teams that their circuits will meet specifications across all operational, environmental, and process conditions. As a result, designers will need to perform an increased amount of intelligent, efficient, and effective circuit characterization at the block level and at the project level to ensure that their designs meet rigorous requirements prior to silicon.

Michael Munsey – Dassault Systèmes

We at Dassault Systèmes see a few key trends coming to the semiconductor industry in 2014.

1) Extreme Design Collaboration: Complexity and cost in IC design and manufacturing now demand that semiconductor vendors engage an ever broader, more diverse pool of specialist designers and engineers.

At the same time, total costs for designing a cutting-edge integrated circuit can top $100 million for just one project. Respins can drive these costs even higher, adding huge profitability risks to new projects.

Technology-enabled extreme collaboration, over and above that in traditional PLM, will be required to assure manufacturable, profitable designs. Why? Because defects arise at the interchange between designers. And with more designers and more complex projects, the risk of misperceptions and miscommunications increases.

Pressure for design teams to interlock using highly specialized collaboration technology will increase in parallel with the financial risk of new semiconductor design projects.

2) Enterprise IP management: The move toward more platform-based designs, driven by shrinking time-to-market windows, application-driven designs, and the increasing cost of producing new semiconductor devices, will explode the market for IP and create a new market for enterprise IP management.

The deeper insight is how that IP will be acquired, used, configured, validated and otherwise managed. The challenges will be (1) building an intelligent process that enables project managers to evaluate the lowest-cost IP blocks quickly and effectively; (2) managing the licensed IP so that configuration, integration and validation know-how is captured and easily reused; and (3) ensuring that the licensing and export compliance attributes of each licensed block of IP are visible to design decision makers.

3) Flexible Design to Manufacturing: In 2011, the Japanese earthquake forced a leading semiconductor company to cease manufacturing operations because their foundry was located close to Fukushima. That earthquake and the floods in Thailand have awakened semiconductor vendors to the stark reality that global supply chains can be dramatically and unexpectedly disrupted without any prior notice.

At the same time, with increased fragmentation and specialization occurring within the design and supply chain for integrated circuits, cross-chain information automation will be mission-critical.

Examples of issues that will require IT advances are (1) the increasing variation in how IP is transferred down the supply chain – it could be a file, a wafer, a die or a packaged IC, yet vendors will need to handle all options with equal efficiency to maximize profitability; and (2) the flexible packaging of an IC design for capture into ERP systems, which will become mandatory in order to enable the necessary downstream supply chain flexibility.

Oz Levia – Jasper Design Automation

There are a few points that we at Jasper consider important for 2014.

1) Low power design and verification will continue to be a main challenge for SoC designers.

2) Heterogeneous multi-processor designs will continue to grow. Issues such as fabric and NoC design and verification will dominate.

3) The segments that will drive the semiconductor markets will likely continue to be in the mobile space(s) – phones, tablets, etc. But the server segment will also continue to increase in importance.

4) Processes will continue to evolve, but there is a lot of headroom in current processes before we run out of steam.

5) Consolidation will continue in the semiconductor market. More important, the strong will get stronger and the weak will get weaker. Increasingly, this is a winner-takes-all market, and we will see a big divide between the innovators and leaders and the laggards.

6) EDA will continue to see consolidation. Large EDA vendors will continue increasing investments in SIP and verification technologies. We will not see a radically different new technology or methodology. The total amount of investment in the EDA industry will continue to be low.

7) EDA will grow at a slow pace, but verification, emulation and SIP will grow faster than other segments.

Joseph Sawicki – Mentor Graphics

FinFETs will move from early technology development to early adopter designs. Over the last year, the major foundry ecosystems moved from alpha to production status for 16/14 nm, with its dual challenges of double patterning and FinFETs. Fabless customers are just beginning to implement their first test chip tape-outs for 16/14 nm, and 2014 will see most of the 20 nm early-adopter customers also preparing their first 16 nm/14 nm test chips.

FinFETs are driving a need for more accurate extraction tools, and EDA vendors are turning to 3D field solver technology to provide it. The trick is to also provide high performance that can deliver quick turnaround time even as the number of required extraction corners jumps from 5 to 15 and the number of gates doubles or triples.

Test data and diagnosis of test fail data will play an increasingly important role in the ramp of new FinFET technologies. The industry will face new challenges as traditional approaches to failure analysis and defect isolation struggle to keep pace with changes in transistor structures. The opportunity is for software-based diagnosis techniques that leverage ATPG test fail data to pick up the slack and provide more accurate resolution for failure and yield analysis engineers.

16/14nm will also require more advanced litho hotspot checking and more complex and accurate fill structures to help ensure planarity and to also help deal with issues in etch, lithography, stress and rapid thermal annealing (RTA) processes.

In parallel with the production ramp at 20 nm, and 16 nm/14 nm test chips, 2014 will see the expansion of work across the ecosystem for 10 nm. Early process development and EDA tool development for 10 nm began in 2012, ramped up in intensity in 2013, and will be full speed ahead in 2014.

Hardware emulation has transitioned from the engineering lab to the datacenter, where today's virtual lab enables peripheral devices such as PCIe, USB, and Ethernet to exist in virtual space without specialized hardware or a maze of I/O cables. A virtual environment permits instant reconfiguration of the emulator for any design or project team, and access by more users from anywhere in the world, resulting in higher utilization and lower overall costs.

The virtual lab is also enabling increased verification coverage of SoC software and hardware, supporting end-to-end validation of SW drivers, for example. Hardware emulation is now employed throughout the entire mobile device supply chain, including embedded processor and graphics IP suppliers, mobile chip developers, and mobile phone and tablet teams. Embedded SW validation and debug will be the real growth engine driving the emulation business.

The Internet of Things (IoT) will add an entirely new level of information sources, allowing us to interact with and pull data from the things around us. The ability to control the state of virtually anything will change how we manage and interact with the world. The home, the factory, transportation, energy, food and many other aspects of life will be impacted and could lead to a new era of productivity increases and wealth creation.

Accordingly, we’ll see continued growth in the MEMS market driven by sensors for mobile phones, automobiles, and medical monitoring, and we’ll see silicon photonics solutions being implemented in data and communications centers to provide higher bandwidth backplane connectivity in addition to their current use in fiber termination.

Semiconductor systems enabling the IoT trend will need to respond to difficult cost, size and energy constraints to drive real ubiquity. For example, we'll need 3D packaging implementations that are an order of magnitude cheaper than current offerings. We'll need better ways to model complex system effects, putting a premium on tools that enable design and verification at the system level, and on engineers who can use them. Cost constraints will also drive innovation in test to ensure that multi-die package test doesn't explode part cost. Moreover, once we move from collecting data to actually interacting with the real world, the role of analog/mixed signal, MEMS and other sensors in the semiconductor solution will become much greater.

Grant Pierce and Jim Hogan – Sonics

For a hint at what’s to come in the technology sector as a whole and the EDA and IP industries specifically, let’s first look at the global macro-economic situation. The single greatest macro-economic factor impacting the technology sector is energy. Electronic products need energy to work. Electronic designers and manufacturers need energy to do their jobs. In the recent past, energy has been expensive to produce, particularly in the US market due to our reliance on foreign oil imports. Today in the US, the cost of producing energy is falling while consumption is slowing. The US is on a path to energy self-sufficiency according to the Energy Department’s annual outlook. By 2015, domestic oil output is on track to surpass its peak set in 1970.

What does cheaper energy imply for the technology industry? More investment. Less money spent on purchasing energy abroad means more capital available to fund new ventures at home and around the world. The recovery of US financial markets is also restoring investors' confidence in earning higher ROI through public offerings. As investors begin to take more risk and inject sorely needed capital into the technology sector, we expect to see a surge in new startups. The EDA and IP industries will participate in this "re-birth" because, as enabling technologies, they are critical to the success of the technology sector.

For an understanding of where the semiconductor IP business is going, let’s look at consumer technology. Who are the leaders in the consumer technology business today? Apple, Google, Samsung, Amazon, and perhaps a few others. Why? Because they possess semiconductor knowledge coupled with software expertise. In the case of Apple, for example, they also own content and its distribution, which makes them extremely profitable with higher recurring revenues and better margins. Content is king and the world is becoming application-centric. Software apps are content. Semiconductor IP is content. Those who own content, its publication and distribution, will thrive.

In the near term, the semiconductor IP business will continue to consolidate as major players compete to build and acquire broader content portfolios. For example, witness the recent Avago/LSI and Intel/Mindspeed deals. App-happy consumers have an insatiable appetite for the latest and greatest content and devices. Consumer technology product lifecycles place immense pressure on chip and system designers when developing and verifying the flexible hardware platforms that run these apps. Among their many important considerations are functionality, performance, power, security, and cost. System architectures and software definition and control are becoming the dominant source of product differentiation rather than hardware. The need for semiconductor IP that addresses these trends and accelerates time-to-volume production is growing. The need for EDA tools that help designers successfully use and efficiently reuse IP is also growing.

So what are the market opportunities for new IP and tool companies in the coming years? These days, talk about the Internet of Things (IoT) is plentiful, and there will be many different types of IP in this sensor-oriented market space. Perhaps the most interesting and promising of these IoT IP technologies will address our growing concerns about health and quality of life. The rise of wearable technologies that help monitor our vital signs and treat chronic health conditions promises to extend human life spans beyond 100 years. As these technologies progress, surely the "Bionic Man" will become commonplace in the not-too-distant future. Personally, and being members of the aging "Baby Boomer" generation, we hope that it happens sooner rather than later!

Bob Smith – Uniquify

I spent a good deal of 2013 traveling around the globe doing a series of seminars on double data rate (DDR) synchronous dynamic random-access memory (SDRAM), the ubiquitous class of memory chips. The seminars were meant to promote the fastest, smallest and lowest power state-of-the-art adaptive DDR IP technology. They highlighted how it can be used to enhance design speed and configured to minimize the design footprint and hit increasingly smaller low-power targets.

While marketing and promotion was on the agenda, the seminars were a great way to check in with designers to better understand their current DDR challenges and identify a few trends that will emerge in 2014. What we learned may be a surprise to more than a few semiconductor industry watchers and offers some tantalizing predictions for next year.

The biggest surprise was hearing designers confirm plans to go directly to LPDDR4 (that is, low-power DDR4, the latest JEDEC standard) and skip LPDDR3. The reasons are varied, but most noted that they're getting greater gains in performance and low power by jumping to LPDDR4, which is especially important for mobile applications. According to JEDEC, the LPDDR4 architecture was designed to be power neutral and to offer 2X the bandwidth performance of previous generations, with low pin count and low cost. It's also backward compatible.

Even though many of the designers we heard from agreed that DDR3 is now mainstream, even more are starting projects based on DDR4. Some are motivated to move to DDR4 even without the need for extra performance for a practical and cost-effective reason. If they have a product with a long lifetime of five years or more, they are concerned that the DDR3 memory will cost more than DDR4 at some point. They have a choice: either build in the DDR4 now in anticipation or look for combination IP that handles both DDR3/4 in one IP. Many have chosen to do the former.

One final prediction I offer for 2014 is that 28nm is the technology node that will be around for a long time to come. Larger semiconductor companies, however, are starting new projects at 14/16 nm, taking advantage of the emerging FinFET technology.

According to my worldwide sources, memories and FinFET will dominate the discussion in 2014, which means it will be a lively year.

Solutions For Mixed-Signal SoC Verification

Thursday, March 28th, 2013

Performing full-chip verification of large mixed-signal systems on chip (SoCs) is an increasingly daunting task. As complexity grows and process nodes shrink, it's no longer adequate to bolt together analog or digital "black boxes" that are presumed to be pre-verified. Complex analog/digital interactions can create functional errors, which delay tapeouts and lead to costly silicon re-spins. Cadence helps customers overcome these challenges with a fully integrated mixed-signal verification solution that spans basic mixed-signal simulation to comprehensive, metric-driven mixed-signal verification.

To view this white paper, click here.

Taming The Challenges Of 20nm Custom/Analog Design

Thursday, November 29th, 2012

Custom and analog designers will lay the foundation for 20nm IC design. However, they face many challenges that arise from manufacturing complexity. The solution lies not just in improving individual tools, but in a new design methodology that allows rapid layout prototyping, in-design signoff, and close collaboration between schematic and layout designers.

To view this white paper, click here.

Solutions For Mixed-Signal IP, IC, And SoC Implementation

Thursday, September 27th, 2012

Traditional mixed-signal design environments, in which analog and digital parts are implemented separately, are no longer sufficient and lead to excess iteration and prolonged design cycle time. Realizing modern mixed-signal designs requires new flows that maximize productivity and facilitate close collaboration among analog and digital designers. This paper outlines mixed-signal implementation challenges and focuses on three advanced, highly integrated flows to meet those challenges: analog-centric schematic-driven, digital-centric netlist-driven, and concurrent mixed-signal. Each flow leverages a common OpenAccess database for both analog and digital data and constraints, ensuring tool interoperability without data translation. Each flow also offers benefits in the area of chip planning and area reduction; full transparency between analog and digital data for fewer iterations and faster design closure; and easier, more automated ECOs, even at late stages of design.

To view this white paper, click here.

Analog and RF Added To IC Simulation Discussion

Thursday, July 26th, 2012

By John Blyler
System-Level Design sat down with Nicolas Williams, Tanner EDA’s director of product management, to talk about trends in analog and RF chip design.

SLD: What are the big trends in analog and RF simulation?
Williams: The increased need to bring more layout-dependent information into the front-end design early on. Layout-dependent effects influence performance, so it is no longer possible to separate the "design" and "layout" phases, as we did traditionally. With nanoscale technologies, a multitude of physical device pattern separation dimensions must now be entered into the pre-layout simulation models to accurately predict post-layout circuit performance. This is more than just adding some stray capacitance to some nodes. It now includes accurate distances from gate to gate, gate to trench (SA, SB, etc.), distances in both the X and Y dimensions between device active areas, the distance from the gate contact to the channel edge (XGW), the number of gate contacts (NGCON), the distance to a single well edge (WPE), etc. Getting the pre-layout parameters accurately entered into the simulation will minimize the re-design and re-layout resulting from performance deficiencies found during post-layout parameter extraction and design-verification simulations.

Another issue is larger variability at nanoscale. This is not so much due to manufacturing tolerance, but really because of layout-dependent effects. These effects include the ones listed above plus several that are not even modeled, such as stress from nearby and overlying metal modifying Vt and gm, and poor lithography. The lithography challenges are so severe in deep nanoscale that device patterns on final silicon look like they were drawn by Salvador Dali. Poor pattern shapes, increasing misalignment and shape dependence on nearby patterns result in more gate length and width variation. More variability requires more complex simulations to have better confidence in your design. This requires faster simulators to simulate more corners or more Monte Carlo runs.

SLD: Statistical analysis, design-of-experiments, and corner models—digital designers already hear many of these terms from the yield experts in the foundries. Should they now expect to hear them from the analog and RF simulator communities?
Williams: Statistical analysis and corner models have always been part of analog and RF design, but in the past it didn't take much to try all combinations. There was no need to take a sample of the population when you could check the entire population. In nanoscale technologies, the number of effects that can affect circuit performance has grown exponentially to the point where you have to take a statistical approach when checking corners. The older alternative approach of running the worst-case combinations of all design corners from all effects would produce an overly pessimistic result. Also, when the number of Monte Carlo simulations required to statistically represent your circuit has grown too large, that is where design-of-experiments comes into play, using methods such as Latin Hypercube sampling.

Simulation accuracy is limited by model accuracy. Statistical variation of devices and parameters is more richly specified than in the traditional SPICE approach to Monte Carlo (where you had "lot" and "device" parameters). Now you have spatially correlated variations, and you have the much richer .variation blocks in SPICE. Foundry models are now "expected" to be usable at this level, which raises all kinds of foundry-proprietary concerns.
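As a back-of-the-envelope illustration of the design-of-experiments point above, the sketch below compares an exhaustive corner enumeration against a Latin-hypercube-style sampling plan. The number of effects and the sample size are assumptions chosen for illustration, not figures from Tanner EDA:

import random

# Exhaustive corners: every combination of worst/typical/best for each effect.
num_effects = 12                      # assumed number of layout/process effects
full_factorial = 3 ** num_effects     # slow/typical/fast per effect
print(f"full factorial corner count: {full_factorial}")   # 531441 runs

# Latin-hypercube-style sampling: stratify each variable into N bins and take
# one sample per bin, shuffling the bins independently for each variable.
def latin_hypercube(num_samples, num_vars, rng=random.Random(0)):
    columns = []
    for _ in range(num_vars):
        bins = list(range(num_samples))
        rng.shuffle(bins)
        columns.append([(b + rng.random()) / num_samples for b in bins])
    return [[columns[v][i] for v in range(num_vars)] for i in range(num_samples)]

plan = latin_hypercube(200, num_effects)  # 200 simulations instead of 531441
print(f"stratified sample plan: {len(plan)} runs covering {num_effects} variables")

The stratified plan still exercises the full range of every variable, but with a few hundred runs rather than hundreds of thousands.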

SLD: How will this increase in statistical distribution analysis affect traditional analog electronic circuit simulators like Spice?
Williams: Statistical analysis requires a huge number of simulations, which can either take a long time to execute or be parallelized across CPU farms or cloud services, along with smarter ways to sample which "corners" to run to get reasonable confidence that you will be successful in silicon. Traditionally, aggregation of such results would have been a manual process, or at best some custom design-flow development undertaken by the end user. Look for an upcoming sea change in how simulators are designed, sold and deployed by the EDA vendor community to better address these needs.

All these simulations are great if your design meets all of its specifications. But what happens if it doesn’t? I feel the next step will be to use these simulations to figure out what variables your design is most sensitive to. Then you can try to mitigate the variability by improving the circuit or physical design (layout).

Trends In Analog And RF IC Simulation

Thursday, May 24th, 2012

By John Blyler
System-Level Design (SLD) sat down to discuss trends in analog and RF integrated circuit design with Ravi Subramanian, president and CEO of Berkeley Design Automation (at the recent GlobalPress eSummit), and later with Trent McConaghy, Solido's CTO. What follows are excerpts of those talks.

SLD: What are the important trends in analog and RF simulation?
Subramanian: I see two big trends. One is related to physics, namely, the need to bring in physical effects early in the design process. The second trend relates to the increased importance of statistics in doing design work. Expertise in statistics is becoming a must. One of the strongest demands made on our company is to help teach engineers how to do statistical analysis. What is required is an appreciation of the Design-of-Experiments (DOE) approach—common in the manufacturing world. Design engineers need to understand which simulations are needed for analog versus digital designs. For example, in a typical pre-layout simulation, you may want to characterize a block with very high confidence. Further, you may also want to characterize that block, extracted, in post-layout with very high confidence. But what does 'high confidence' mean? How do you know when you have enough confidence? If you have a normally distributed Gaussian variable, you may have to run 500 simulations to get 95% confidence in the result. Every simulation waveform and data point has a confidence band associated with it.
McConaghy: As always, there is a pull from customers for simulators that are faster and better. In general, simulators have been delivering on this. Simulators are getting faster, both in simulation time for larger circuits and through easier-to-use multi-core and multi-machine implementations. Simulators are also getting better. They converge on a broader range of circuits, handle larger circuits, and more cleanly support mixed-signal circuits.
There’s another trend: meta-simulation. This term describes tools that feel like using simulators from the perspective of the designer. Just like simulators, meta-simulators input netlists, and output scalar or vector measures. However, meta-simulators actually call circuit simulators in the loop. Meta-simulators are used for fast PVT analysis, fast high-sigma statistical analysis, intelligent Monte Carlo analysis and sensitivity analysis. They bring the value of simulation to a “meta” (higher) level. I believe we’ll see a lot more meta-simulation, as the simulators themselves get faster and the need for higher-level analysis grows.
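A minimal sketch of the confidence-band idea Subramanian raises above: treat each Monte Carlo run as one sample of a performance metric and attach a 95% interval to the estimated mean. The metric, its distribution and the 500-run count are assumptions used only to make the arithmetic concrete:

import math
import random

# Toy Monte Carlo: each "simulation" returns one measurement of a performance
# metric (e.g., a gain in dB). The distribution parameters are invented.
rng = random.Random(1)
samples = [rng.gauss(20.0, 0.5) for _ in range(500)]   # 500 simulator runs

n = len(samples)
mean = sum(samples) / n
std = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))

# 95% confidence interval on the mean (normal approximation, z = 1.96)
half_width = 1.96 * std / math.sqrt(n)
print(f"estimated gain: {mean:.3f} dB +/- {half_width:.3f} dB (95% confidence)")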

SLD: This sounds a lot like the Six Sigma methodology, a manufacturing technique used to find and remove defects from high-volume production—like CMOS wafers. Will design engineers really be able to incorporate this statistical approach into their design simulations?
Subramanian: Tools can help engineers incorporate statistical methods into their work. But let's talk about the need for high sigma values. To achieve high sigma, you need a good experiment and a very accurate simulator. If you have a good experiment but you want to run it quickly and give up accuracy, you may have a Six-Sigma setup but a simulator that has been relaxed so much that the Six-Sigma data is meaningless. This shows the difference between accuracy and precision. You can have a very precise answer that isn't accurate.
To summarize: Today's low-node processes have associated physical effects that can only be handled by statistical methods. These two trends mean that new types of simulation must be run. Engineers need to give more thought as to which corners should be covered in their design simulations. Semiconductor chip foundries provide corners that are slow, fast and typical, based upon the rise and fall times of flip-flops. How relevant is that for a voltage-controlled oscillator (VCO)? In fact, are there more analog-specific corners? Yes, there are.

SLD: Statistical analysis, design-of-experiments, and corner models—designers already hear many of these terms from the yield experts in the foundries. Should they now expect to hear them from the analog and RF simulator communities?
Subramanian: Designers must understand, or have tools that help them deal with, statistical processes. For example, how do you know if a VCO will yield well? It must have frequency and voltage characteristics that are reliable over a range of conditions. But if you only test it over common digital corners, you may miss some important analog corners where the VCO performs poorly. A corner is simply a performance metric, such as output frequency. You want to measure it within a particular confidence level, which is where statistics are needed. It may turn out that, in addition to the digital corners, you'll need to include a few analog ones.
McConaghy: These terms imply the need to address variation, and designers do need to make sure that variation doesn't kill their design. Variation causes engineers to overdesign, wasting circuit performance, power and area, or to underdesign, hitting yield failures. To take full advantage of a process node, designers need tools that allow them to achieve optimal performance and yield. Since variation is a big issue, it won't be surprising if simulator companies start using these terms with designers. The best EDA tools handle variation while allowing the engineer to focus efficiently on designing with familiar flows like corner-based design and familiar analyses like PVT and Monte Carlo. But now the corners must be truly accurate, i.e., PVT corners must cause the actual worst-case behavior, and Monte Carlo corners must bound circuit (not device) performances like "gain" at the three-sigma or even six-sigma level. These PVT and Monte Carlo analyses must be extremely fast, handling thousands of PVT corners or billions of Monte Carlo samples.

SLD: Would a typical digital corner be a transistor’s switching speed?
Subramanian: Yes. Foundries parameterized transistors to be slow, typical and fast in terms of performance. The actual transistor model parameters will vary around those three cases, e.g., a very fast transistor will have a fast rise and switching time. So far, the whole notion of corners has been driven by the digital guys. That is natural. But now, analog shows up at the party at the same time as digital, especially at 28nm geometries.
The minimal requirement today is that all designs must pass the digital corners. But for the analog circuits to yield, they must pass the digital and the specific analog corners, i.e., they must also pass the conditions and variations relevant to the performance of that analog device. How do you find out what those other corners are? Most designers don't have time to run a billion simulations. That is why people need to start doing distribution analysis for analog metrics like frequency, gain, signal-to-noise ratio, jitter, power supply rejection ratio, etc. For each of these analog circuit measurements, a distribution curve is created from which Six-Sigma data can be obtained. Will it always be a Gaussian curve? Perhaps not.
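As a sketch of how such a distribution turns into analog corners, the snippet below takes invented Monte Carlo samples of a VCO output frequency, estimates the mean and sigma, reports the three-sigma bounds as candidate corners, and counts yield against an assumed spec window. Real flows would also check whether the Gaussian assumption holds, as noted above:

import math
import random

# Invented Monte Carlo results for a VCO output frequency in GHz.
rng = random.Random(7)
freqs = [rng.gauss(2.40, 0.015) for _ in range(10000)]

n = len(freqs)
mu = sum(freqs) / n
sigma = math.sqrt(sum((f - mu) ** 2 for f in freqs) / (n - 1))

lo_corner = mu - 3 * sigma     # candidate "slow" analog corner
hi_corner = mu + 3 * sigma     # candidate "fast" analog corner
print(f"mean = {mu:.4f} GHz, sigma = {sigma:.4f} GHz")
print(f"3-sigma analog corners: {lo_corner:.4f} GHz .. {hi_corner:.4f} GHz")

# Empirical yield against an assumed spec window of 2.36 - 2.44 GHz.
in_spec = sum(2.36 <= f <= 2.44 for f in freqs)
print(f"estimated yield: {100.0 * in_spec / n:.2f}%")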

SLD: How will this increase in statistical distribution analysis affect traditional analog electronic circuit simulators like Spice?
Subramanian: Spice needs to start generating these statistically-based distribution curves. I think we are at the early days of that frontier where you can literally see yourself having a design cockpit where you can make statistics simple to use. You have to make it simple to use otherwise it won’t happen. I think that is the responsibility of the EDA industry.
McConaghy: The traditional simulators will be used more than ever, as the meta-simulators call upon them to do fast and efficient PVT and statistical variation analysis up to 6-sigma design. The meta-simulators incorporate intelligent sampling algorithms to cut down the number of simulations required compared to brute-force analysis. Today, many customers use hundreds of traditional SPICE simulator licenses to do these variation analysis tasks. However, they would like to get the accuracy of billions of Monte Carlo samples in only thousands of actual simulations. These analyses are being done on traditional analog/RF and mixed-signal designs as well as on memory, standard cell library and other custom digital designs.

SLD: I know that several of the major EDA tool vendors have recently released tools to make the statistical nature of low process node yields more accessible and usable by digital chip designers. Are there similar tools for the world of analog mixed signal design?
Subramanian: Analog and RF designs are now going through this same process, to move from an art to a science. That’s why I say that the nanometer mixed-signal era is here (see figure). Simulation tools are needed, but so are analysis capabilities. This is why our simulation tools have become platforms for analysis. We support the major EDA simulators but add an analysis cockpit for designers.

SLD: Why now? What is unique about the leading-edge 28nm process geometries? I'd have expected a similar problem at a larger node, e.g., 65nm. Is it a yield issue?
Subramanian: Exactly. At 65nm, designers were still able to margin their designs sufficiently. But now the cost of the margin becomes more significant because you either pay for it with area or with power, which is really current. At 28nm, with SerDes (high frequency and high performance) and tighter power budgets, the cost of the margin becomes too high. If you don't do power-collapsing, then you won't meet the power targets.

SLD: Is memory management becoming a bigger market for simulation?
Subramanian: Traditionally, memory has had some traditional analog pieces like charge pumps, sensitivity chains, etc. Now, in order to achieve higher and higher memory density, vendors are going to multi-level cells. This allows storage of 2, 4 or 8 bits on a single cell. But to achieve this density you need better voltage resolution between the different bit levels, which means you need more accurate simulation to measure the impact of noise. Noise can appear as a bit error when you have tighter voltage margins. You might wonder if this is really a significant problem. Consider Apple’s purchase of Anobit, a company that corrected those types of errors. If you can design better memory, then you can mitigate the need for error correction hardware and software. But to do that, you need more accurate analog simulation of memory. You cannot use a digital fast Spice tool, which uses a transistor table look-up model. Instead, you must use a transistor BSIM (Berkeley Short-channel IGFET Model) model.
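A quick arithmetic sketch of why more bits per cell tightens the voltage margins: each added bit doubles the number of levels that must fit in the same read window. The 1.0 V window and the bit counts below are assumed, illustrative numbers, not figures from the interview:

# Each extra bit per cell doubles the number of distinguishable levels, so the
# spacing between adjacent levels shrinks accordingly. The 1.0 V read window
# is an assumed, illustrative figure.
read_window_v = 1.0

for bits_per_cell in (1, 2, 3, 4):
    levels = 2 ** bits_per_cell
    spacing_mv = 1000.0 * read_window_v / (levels - 1)
    print(f"{bits_per_cell} bit(s)/cell -> {levels} levels, "
          f"~{spacing_mv:.0f} mV between adjacent levels")

Tighter level spacing leaves less room for noise, which is why more accurate, noise-aware analog simulation of the memory path becomes necessary.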

Remote RF Telescope Brings Sci-Fi To Reality

Thursday, April 22nd, 2010

By John E. Blyler
The huge RF radio observatory at Arecibo, Puerto Rico has all of the key ingredients for a high-tech adventure movie. First, its location is remote, buried deep within the rainforest of a Caribbean island. Second, the sheer size of the radio telescope renders it sublime. It measures 305 m (1001 ft.) in diameter and more than 500 m from the jungle floor to the top of the moveable radio feed platform (see Figure 1). Unlike other astronomical R&D facilities in the United States, the observatory at Arecibo is more than just a radio telescope; it is a complete R&D facility. Its mission – in part – is to search for the stuff of science fiction stories, ranging from extraterrestrials and gravity waves to asteroids that could devastate the Earth.

We will return to the cool sci-fi aspects of Arecibo later. For now, let’s explore the technology that makes all of this possible—starting with an overview of the RF telescope and the critical electronics. Radio astronomy studies celestial objects using radio transmissions. Often traveling great distances, these radio waves are reflected from the objects of study. The returning signal is analyzed and developed into amazing images. Although this may seem like a straightforward task, the returning signal is typically so weak as to be almost indiscernible from the cosmic noise.

Thus, the successful detection of the returning signal requires the very best that modern electronics has to offer. Indeed, the noise generated by even the most modern low-noise amplifier (LNA) and other sources is orders of magnitude greater than the signals being examined. Dana Whitlow, research technician at Arecibo, estimates that the return signals may be over 40 dB below the overall system noise level—a factor of 10,000 lower!
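For reference, a quick check of that figure, treating the 40 dB deficit as a power ratio:

# Converting the quoted 40 dB deficit into a linear power ratio.
deficit_db = 40.0
power_ratio = 10 ** (deficit_db / 10)   # 10^(40/10) = 10,000
print(f"{deficit_db:.0f} dB below the noise means the signal is {power_ratio:,.0f}x weaker")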

Critical Sensitivity To Noise
Simply put, everything that can be done is done to maximize the sensitivity of the receivers. The front-end electronics are cryogenically cooled in 99.99% pure helium to between 10 and 15 Kelvin. These temperatures can only be achieved in a vacuum. As a result, all of the specially designed electronic systems must be evacuated before the cooling can begin.

The front-end electronic systems consist of amplifiers, filters, and mixers. The amplifiers are specifically designed to minimize noise. Toward that end, Ganesan Rajagopalan, a senior receiver engineer and head of the Electronics Department at the observatory, has been improving the sensitivity of the receivers by slowly replacing the existing gallium-arsenide (GaAs) monolithic microwave integrated circuits (MMICs) with indium-phosphide (InP) devices. MMICs are devices that operate at microwave frequencies between 300 MHz and 300 GHz.

InP-based amplifiers have lower noise and higher gain than their GaAs counterparts. Yet these circuits also must be customized for the lowest noise possible. The Cornell University-based team at Arecibo collaborated with the experts at CalTech’s JPL team to make these customized application-specific integrated circuits (ASICs) tailored to a cryogenic environment. The CalTech design also has been implemented at the Allen Telescope Array (ATA) in California. ATA is a “large number of small dishes” (LNSD) array that’s designed to be highly effective for simultaneous surveys of conventional radio-astronomy projects and Search for Extraterrestrial Intelligence (SETI) observations at centimeter wavelengths.

With such innovative LNA devices, it's no wonder that the Arecibo Observatory is considered state of the art in receiver technology. In terms of the available bandwidth per receiver, however, the facility is playing catch-up. The receivers used at Arecibo are 2 GHz wide, with one ranging from 2 to 4 GHz and another from 4 to 8 GHz. The goal is to widen the current 2-GHz bandwidth by using Ultra Wideband (UWB) technology. Here too, the R&D team is working with other scientists and engineers around the globe to develop a UWB feed that will operate from 1 to 10 GHz. Such a feed would reduce the number of existing receivers from eight down to one, which would further reduce the collective number of noise generators in the system.

A Noisy Planet
Reducing the noise sensitivity of the receiving electronics is critical to analyzing the radio signals returning from deep space. But another challenge exists closer to home—namely, the effective "noise" created by wireless devices ranging from cell phones to data devices. The RF telescope operates up to 10 GHz and includes receivers in the S-, C-, and X-bands. Wi-Fi technology occupies a relatively small bandwidth centered around 2.4 GHz—right in the middle of the lower S-band space. Another source of radio interference comes from a much more powerful source—namely, the various airports on the island. These sources are mission critical and cannot be turned off at select times during the day.

To help reduce the opportunities for radio noise interference, the Arecibo team actively works with the Puerto Rico Spectrum users’ group. In cases involving mission-critical systems like airport radar, the team has coordinated the on-off time of the radar. The airport radar goes blank for a short period of time when it points in the direction of the Arecibo observatory. Unfortunately, this well-intentioned gesture has proven to be of limited value. The radar signal has more power located in the back lobes of the radar signature than in the front lobes.

Sci-Fi Becomes Reality
As fascinating as the engineering work at Arecibo is, does it really have any practical value? Can it turn science fiction into science fact? Some would suggest that the jungle-hidden facility will play an important role in saving humanity from near-earth objects (NEOs) like asteroids, which may be on a collision course with earth. The RF Observatory has the capability to pinpoint the orbit of NEOs as far away as Jupiter or Saturn and then calculate whether that object poses a threat to humanity. Such knowledge could be used to evacuate populations and move important property to a safe location. This is just one reason why the U.S. Congress is interested in keeping the Arecibo radar telescope working.

“We are also doing a lot of work on pulsars,” explains Rajagopalan. “Pulsar timing is very important in the detection of gravitational wave radiation.” Described as a fluctuation in the curvature of spacetime, which propagates as a wave, gravitational waves were predicted by Albert Einstein’s theory of general relativity. Sources of gravitational waves include binary star systems (e.g., white dwarfs, neutron stars, or black holes).

Pulsar astronomers believe that they can detect gravitational waves. Telescopes at Arecibo, PR, and in the mainland US, Europe, and Australia are all part of an array that's being used to carefully time pulsars. All of these facilities make very long, simultaneous observations of the same deep-space source using long-baseline interferometry (LBI). Precise synchronization timing among the global facilities is achieved using a hydrogen maser atomic clock. Thus, the research being done here is not just astronomy. It's planetary radar science and ionospheric science as well.

Signal Processing
What happens to the signal returning from the reflection off of nearby planets or from signals originating from a deep-space pulsar? The signal comes into the feed in a concentrated form after reflection from the big reflector (see Figure 3). An ortho-mode transducer (OMT)—some are more than 3 ft. long—splits the signal into two separate channels. Noise-injection couplers are connected to one channel. These couplers inject a weak but carefully calibrated noise source into the main signal.

The injected noise signal is switched on and off at a rapid rate in what's called a "winking" calibration, says Dana Whitlow, a senior receiver engineer. "By a measurement of the levels later in the system with the cal on and the cal off, we can determine the system noise temperature. Also, this calibration allows us to track time-dependent changes in the gain of the amplifiers."
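The measurement Whitlow describes reduces to a simple ratio: with the cal on, the detected power is proportional to Tsys + Tcal; with it off, it is proportional to Tsys alone, so Tsys = Tcal / (Pon/Poff - 1). Here is a minimal sketch with invented values for the calibration temperature and the detected powers:

# Sketch of the cal-on/cal-off system-temperature estimate. Detected powers are
# proportional to noise temperature, so taking their ratio cancels the unknown
# receiver gain. All numeric values are invented for illustration.

t_cal_k = 2.0          # equivalent temperature of the injected calibration noise (K)
p_cal_on = 1.085       # detected power with calibration noise switched on (arb. units)
p_cal_off = 1.000      # detected power with calibration noise switched off

y = p_cal_on / p_cal_off
t_sys_k = t_cal_k / (y - 1.0)
print(f"estimated system noise temperature: {t_sys_k:.1f} K")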

The signal then travels through isolators, which flatten out the frequency response. Effectively, they remove reflections from the amplifiers back into the earlier part of the signal path. Finally, the signal is amplified in the LNAs mentioned earlier.

All of these electronics are contained within a dewar, which is used to cool the amplifiers down to 15 Kelvin. Cables connect the dewar to the next signal-conditioning module, which contains a pulse amplifier module to provide additional amplification. Computer-selectable filters are used to exclude unwanted frequency bands, limiting the bandwidth from radio-interference sources like Wi-Fi and airport radar.

What happens if the ionospheric, planetary, or deep-space phenomena that a researcher is trying to study occur at the same frequency as the radio-interference sources—perhaps centered at 2.4 GHz (same as Wi-Fi)? To study these signals, researchers would have to go to one of the other RF telescope facilities on the mainland United States. For example, the Robert C. Byrd Green Bank Telescope in West Virginia operates in a radio quiet zone.

Aside from rejecting unwanted interference signals, filters also help to prevent the interference from compressing the gain of the subsequent signal chain. If it’s strong enough, an interfering signal could drive an amplifier into saturation. This forces the gain to go down, says Whitlow. “If there’s anything that radio astronomers hate, it’s unexpected gain changes in their signal path. It’s difficult, if not impossible, to deal with from a perspective of obtaining calibrated data of their signal or source they are looking at.” After more filtering and amplification, just to increase the signal strength, the signal is then downconverted to a lower, intermediate frequency.

One might wonder if all of these filters don’t attenuate the signal even further—especially because they are passive filters, which contain no power source to help boost the signal strength. While it’s true that passive filters attenuate the signal slightly, these attenuations can be corrected by the numerous amplifiers. Active filters would have their own problems, such as the introduction of extra noise and distortion.

Finally, the conditioned signal is sent down from the receiver platform to the control-room area some 500 m below using analog optical fiber cable. Fiber-optic cable is used because it has a much broader frequency response. Plus, it doesn’t pick up electrical noise due to the imperfect shielding of coaxial cable. Fiber cables are typically much less lossy than coaxial—especially at the higher frequency ends.

Perhaps the most compelling reason for fiber over coaxial cable is that the former doesn’t conduct lightning down to the control room, explains Whitlow. “I haven’t been down here to see this firsthand, but I’ve been told by many people that in the early days of the observatory, when lightning struck the platform, there would be sparks jumping around things inside the control room.”

Coming in Part II: We’ll delve into the technology used in the control room and laboratory, where the data is digitized and analysis is performed. Of particular interest to chip and embedded designers will be the evolution taking place from ASIC- to FPGA-based systems.

What Are They Designing?

Thursday, December 17th, 2009

By John Blyler
A just-completed EDA tools and technology survey of 140 engineers, conducted over the past several weeks, shows a strong push into full-custom devices and FPGAs. In fact, 32% of the engineers using EDA tools were building full-custom devices, and another 24% were building FPGAs. Only 9% were working on ASICs, although those ASICs tend to be large and extremely complex chips.

About 14% were designing analog arrays and another 11% were using gate arrays. Another 10% were building ASSPs.