
Posts Tagged ‘Silvaco’

Power Analysis and Management

Thursday, August 25th, 2016

Gabe Moretti, Senior Editor

As transistors shrink and change, power management becomes more critical.  As I polled various EDA vendors, it became clear that most offer solutions for analyzing power requirements and software-based methods for managing power use, while at least one offers a hardware-based solution.  I struggled to find a way to present their responses to my questions coherently, but decided that extracting selected pieces of their written responses would not be fair.  So I organized a type of virtual round table, and I present their complete answers in this article.

The companies submitting responses are: Cadence, Flex Logix, Mentor, Silvaco, and Sonics.  Some of the companies presented their own understanding of the problem.  I am including that portion of their contribution as well, to give better context for the description of the solution.

Cadence

Krishna Balachandran, product management director for low power solutions at Cadence, provided the following contribution.

Not too long ago, low power design and verification involved coding a power intent file and driving a digital design from RTL to final place-and-route, with each tool in the flow understanding and correctly and consistently interpreting the directives specified in the power intent file. Low power techniques such as power shutdown, retention, standby and Dynamic Voltage and Frequency Scaling (DVFS) had to be supported in the power formats and EDA tools. Today, the semiconductor industry has coalesced around CPF and the IEEE 1801 standard, which evolved from UPF and includes the CPF contributions as well. However, this has not equated to problem solved and case closed. Far from it! Challenges abound. Power reduction and low power design, once the bailiwick of mobile designers, have moved front and center into almost every semiconductor design imaginable, be it a mixed-signal device targeting the IoT market or large chips targeting the datacenter and storage markets. With competition mounting, differentiation comes in the form of better (lower) power-consuming end-products and systems.

There is an increasing realization that power needs to be tackled at the earliest stages in the design cycle. Waiting to measure power after physical implementation is usually a recipe for multiple, non-converging iterations because power is fundamentally a trade-off vs. area or timing or both. The traditional methodology of optimizing for timing and area first and then dealing with power optimization is causing power specifications to be non-convergent and product schedules to slip. However, having a good handle on power at the architecture or RTL stage of design is not a guarantee that the numbers will meet the target after implementation. In other words, it is becoming imperative to start early and stay focused on managing power at every step.

It goes without saying that what can be measured accurately can be well-optimized. Therefore, the first and necessary step to managing power is to get an accurate and consistent picture of power consumption from RTL to gate level. Most EDA flows in use today use a combination of different power estimation/analysis tools at different stages of the design. Many of the available power estimation tools at the RTL stage of design suffer from inaccuracies because physical effects like timing, clock networks, library information and place-and-route optimizations are not factored in, leading to overly optimistic or pessimistic estimates. Popular implementation tools (synthesis and place-and-route) perform optimizations based on measures of power using built-in power analysis engines. There is poor correlation between these disparate engines leading to unnecessary or incorrect optimizations. In addition, mixed EDA-vendor flows are plagued by different algorithms to compute power, making the designer’s task of understanding where the problem is and managing it much more complicated. Further complications arise from implementation algorithms that are not concurrently optimized for power along with area and timing. Finally, name-mapping issues prevent application of RTL activity to gate-level netlists, increasing the burden on signoff engineers to re-create gate-level activity to avoid poor annotation and incorrect power results.

To get a good handle on the power problem, the industry needs a highly accurate but fast power estimation engine at the RTL stage that helps evaluate and guide the design’s micro-architecture. That requires the tool to be cognizant of physical effects – timing, libraries, clock networks, even place-and-route optimizations at the RTL stage. To avoid correlation problems, the same engine should also measure power after synthesis and place-and-route. An additional requirement to simplify and shorten the design flow is for such a tool to be able to bridge the system-design world with signoff and to help apply RTL activity to a gate-level netlist without any compromise. Implementation tools, such as synthesis and place-and-route, need to have a “concurrent power” approach – that is, consider power as a fundamental cost-factor in each optimization step side-by-side with area and timing. With access to such tools, semiconductor companies can put together flows that meet the challenges of power at each stage and eliminate iterations, leading to a faster time-to-market.

Flex Logix

Geoff Tate, Co-founder and CEO of Flex Logix, is the author of the following contribution.  Flex Logix is a relatively new entry in the embedded FPGA market and uses TSMC as a foundry.  Microcontrollers and IoT devices being designed in TSMC's new ultra-low-power 40nm process (TSMC 40ULP) need:

• The flexibility to reconfigure critical RTL, such as I/O

• The ability to achieve performance at the lowest power

Flex Logix has designed a family of embedded FPGAs to meet this need. The validation chip to prove out the IP is in wafer fab now.

Many products fabricated with this process are battery operated: there are brief periods of performance-sensitive activity interspersed with long periods of very low power mode while waiting for an interrupt.

Flex Logix’s embedded FPGA core provides options to enable customers to optimize power and performance based on their application requirements.

To address this requirement, the following architectural enhancements were included in the embedded FPGA core:

• Power management with five different power states (a behavioral sketch follows this list):

  • Off state, where the EFLX core is completely powered off.
  • Deep Sleep state, where the VDDH supply to the EFLX core can be lowered from its nominal 0.9V/1.1V to 0.5V while retaining state.
  • Sleep state, which gates the supply (VDDL) that powers the performance logic, such as the LUTs, DSP and interconnect switches of the embedded FPGA, while retaining state. The latency to exit Sleep is shorter than that to exit Deep Sleep.
  • Idle state, which idles the clocks to cut power but can move into the Dynamic state more quickly than the Sleep state.
  • Dynamic state, where power is the highest of the power-management states but latency is the shortest; it is used during periods of performance-sensitive activity.
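
To make the ordering of these modes concrete, here is a minimal behavioral sketch of a five-state power controller in SystemVerilog. It is purely illustrative: the state names mirror the list above, but the encoding, wake-up sequencing and signal names (wake_event, sleep_req) are hypothetical and not Flex Logix's actual implementation.

  // Hypothetical power-mode controller sketch (not the EFLX implementation).
  // It only captures the ordering idea: deeper states save more power but
  // take longer to return to DYNAMIC operation.
  module power_mode_ctrl (
    input  logic clk,
    input  logic rst_n,
    input  logic wake_event,   // e.g., interrupt requesting DYNAMIC operation
    input  logic sleep_req,    // power-manager request to go one state deeper
    output logic core_active
  );

    typedef enum logic [2:0] {OFF, DEEP_SLEEP, SLEEP, IDLE, DYNAMIC} pwr_state_e;
    pwr_state_e state;

    always_ff @(posedge clk or negedge rst_n) begin
      if (!rst_n)
        state <= OFF;
      else begin
        unique case (state)
          DYNAMIC:    if (sleep_req)       state <= IDLE;        // stop clocks first
          IDLE:       if (wake_event)      state <= DYNAMIC;     // fastest wake-up
                      else if (sleep_req)  state <= SLEEP;       // gate VDDL next
          SLEEP:      if (wake_event)      state <= IDLE;        // slower wake than IDLE
                      else if (sleep_req)  state <= DEEP_SLEEP;  // lower VDDH, retain state
          DEEP_SLEEP: if (wake_event)      state <= SLEEP;       // slowest retained wake path
          OFF:        if (wake_event)      state <= DEEP_SLEEP;  // full power-up sequence
          default:                         state <= OFF;
        endcase
      end
    end

    assign core_active = (state == DYNAMIC);
  endmodule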

The other architectural features available in the EFLX-100 embedded FPGA to optimize power-performance are:

• State retention for all flip-flops and configuration bits at voltages well below the operating range.

• The ability to directly control body bias voltage levels (Vbp, Vbn); controlling the body bias further controls leakage power.

• Five combinations of threshold-voltage (VT) devices to optimize power and performance for the static/performance logic of the embedded FPGA. The higher the threshold voltage (eHVT, HVT), the lower the leakage power and the performance; the lower the threshold voltage (SVT), the higher the leakage and the performance:

  • eHVT/eHVT
  • HVT/HVT
  • HVT/SVT
  • eHVT/SVT
  • SVT/SVT

In addition to these architectural features, various EDA flows and tools are used to optimize the power, performance and area (PPA) of the Flex Logix embedded FPGA:

• The embedded FPGA was implemented using a combination of standard floor-planning and P&R tools to place and route the configuration cells, DSP and LUT macros, and network fabric switches. This resulted in higher density, reducing IR drop and the need for larger drive strengths, thereby optimizing power.

• Longer (non-minimum) channel-length devices were designed and used, which further reduce leakage power with minimal to no impact on performance.

• The EFLX-100 core was designed with an optimized power grid to use metal resources effectively for power and signal routing. An optimal power grid reduces DC/AC supply drops, which further increases performance.

Mentor

Arvind Narayanan, Architect, Product Marketing, Mentor Graphics contributed the following viewpoint.

One of the biggest challenges in IC design at advanced nodes is the complexity inherent in effective power management. Whether the goal is to reduce on-chip power dissipation or to provide longer battery life, power is taking its place alongside timing and area as a critical design dimension.

While low-power design starts at the architectural level, low-power design techniques continue through RTL synthesis and place and route. Digital implementation tools must interpret the power intent and implement the design correctly, from power-aware RTL synthesis through placement of special cells to routing and optimization across power domains, in the presence of multiple corners, modes, and power states.

With the introduction of every new technology node, existing power constraints are tightened further to optimize power consumption and maximize performance. The 3D transistors (FinFETs) introduced at smaller technology nodes have higher input pin capacitance than their planar counterparts, causing the dynamic power component to be higher relative to leakage.
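
The reason pin capacitance feeds directly into dynamic power is the standard switching-power relationship, stated here as a reminder rather than as anything specific to a particular vendor's flow:

  P_{dyn} \approx \alpha \cdot C_{sw} \cdot V_{DD}^{2} \cdot f, \qquad P_{total} = P_{dyn} + P_{leak}

where alpha is the switching activity factor, C_sw the switched capacitance (which includes the input pin capacitance of the driven gates), V_DD the supply voltage, and f the clock frequency. Higher FinFET input capacitance raises C_sw and, with it, the dynamic term, which is why activity and capacitance management matter more at these nodes.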

Power Reduction Strategies

A good strategy for reducing power consumption is to perform power optimization at multiple levels during the design flow, including software optimization, architecture selection, RTL-to-GDS implementation and process technology choices. The biggest power savings are usually obtained early in the development cycle, at the ESL and RTL stages (Fig. 1). During the physical implementation stage there is comparatively less opportunity for power optimization, so choices made earlier in the design flow are critical. Technology selection, such as the device structure (FinFET, planar), the choice of device material (HiK, SOI) and the technology node, all play a key role.

Figure 1. Power reduction opportunities at different stages of the design flow

Architecture selection

Studies have shown that only optimizations applied early in the design cycle, when a design's architecture is not yet fixed, have the potential for radical power reduction.  To make intelligent decisions in power optimization, the tools have to consider all factors affecting power simultaneously, and they have to be applied early in the design cycle. Finding the best architecture makes it possible to properly balance functionality, performance and power.

RTL-to-GDS Power Reduction

There are a wide variety of low-power optimization techniques that can be utilized during RTL to GDS implementation for both dynamic and leakage power reduction. Some of these techniques are listed below.

RTL Design Space Exploration

During the early stages of the design, the RTL can be modified to employ architectural optimizations, such as replacing a single instantiation of a high-powered logic function with multiple instantiations of low-powered equivalents. A power-aware design environment should facilitate "what-if" exploration of different scenarios to evaluate the area/power/performance tradeoffs.

Multi-VDD Flow

Multi-voltage design, a popular technique to reduce total power, is a complex task because many blocks are operating at different voltages, or intermittently shut off. Level shifter and isolation cells need to be used on nets that cross domain boundaries if the supply voltages are different or if one of the blocks is being shut down. DVFS is another technique where the supply voltage and frequency can vary dynamically to save power. Power gating using multi-threshold CMOS (MTCMOS) switches involves switching off certain portions of an IC when that functionality is not required, then restoring power when that functionality is needed.

Figure 2. Multi-voltage layout shown in a screen shot from the Nitro-SoC™ place and route system.
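
As a minimal illustration of the isolation requirement mentioned above, the sketch below shows the behavior a tool typically implements at a domain boundary. The module and signal names are hypothetical, and in a real flow the library isolation and level-shifter cells are inserted by the implementation tools as directed by the UPF/CPF power intent rather than hand-written as RTL like this.

  // Hypothetical behavioral model of an output isolation clamp.
  // When the driving power domain is shut off, iso_en is asserted and the
  // receiving domain sees a known clamp value instead of a floating signal.
  module iso_clamp #(parameter logic CLAMP_VAL = 1'b0) (
    input  logic data_in,   // signal from the switchable (shut-off) domain
    input  logic iso_en,    // isolation enable from the power controller
    output logic data_out   // signal seen by the always-on domain
  );
    assign data_out = iso_en ? CLAMP_VAL : data_in;
  endmodule

A retention register or level shifter would be modeled in a similar spirit; the important point is that the clamp is controlled by the power-management logic, which is why these cells must be placed and verified across all corners, modes and power states.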

MCMM Based Power Optimization

Because each voltage supply and operational mode implies different timing and power constraints on the design, multi-voltage methodologies cause the number of design corners to increase exponentially with the addition of each domain or voltage island. The best solution is to analyze and optimize the design for all corners and modes concurrently. In other words, low-power design inherently requires true multi-corner/multi-mode (MCMM) optimization for both power and timing. The end result is that the design should meet timing and power requirements for all the mode/corner scenarios.

FinFET aware Power Optimization

FinFET aware power optimization flow requires technologies such as activity driven placement, multi-bit flop support, clock data optimization, interleaved power optimization and activity driven routing to ensure that the dynamic power reduction is optimal. The tools should be able to use transforms with objective costing to make trade-offs between dynamic power, leakage power, timing, and area for best QoR.

A strategy that optimizes power at all stages of the design flow, and especially at the architecture stage, is critical for optimal power reduction.  Architecture selection, along with the complete set of technologies for RTL-to-GDS implementation, greatly impacts the ability to manage power effectively.

Silvaco

Seena Shankar, Technical Marketing Manager, is the author of this contribution.

Problem:

Analysis of IR drop, electromigration (EM) and thermal effects has traditionally been a significant bottleneck in the physical verification of transistor-level designs such as analog circuits, high-speed I/Os, custom digital blocks, memories and standard cells. From the 28nm node down, all designers are concerned about power, EM/IR and thermal issues. Even at the 180nm node, if you are doing high-current designs in LDMOS, then EM effects, rules and thermal issues need to be analyzed. FinFET architecture has increased concerns regarding EM, IR and thermal effects because of complex DFM rules and increased current and power density; there is a higher probability of failure, so EM/IR effects need to be analyzed and managed even more carefully. This kind of analysis and testing usually occurs at the end of the design flow, and discovering these issues at that critical time makes it difficult to stick to the schedule and causes expensive rework. How can we resolve this problem?

Solution:

Power integrity issues must be addressed as early in the design cycle as possible to avoid expensive design and silicon iterations. Silvaco's InVar Prime is an early-design-stage power integrity analysis solution for layout engineers. Designers can estimate EM, IR and thermal conditions before the sign-off stage. It performs checks such as early IR-drop analysis, verification of the resistive parameters of supply networks, point-to-point resistance checks, and estimation of current densities. It also helps in finding and fixing issues that are not detectable with a regular LVS check, such as missing vias, isolated metal shapes, inconsistent labeling, and detour routing.

InVar Prime can be used for a broad range of designs including processors, wired and wireless network ICs, power ICs, sensors and displays. Its hierarchical methodology accurately models IR drop, electromigration and thermal effects for designs ranging from a single block to a full chip. Its patented concurrent electro-thermal analysis simulates multiple physical processes together. This is critical for today's designs in order to capture the important interactions between power and thermal 2D/3D profiles. The result is physical-measurement-like accuracy at high speed, even on extremely large designs, and applicability to all process nodes including FinFET technologies.

InVar Prime requires the following inputs:

● Layout: GDSII

● Technology: ITF or iRCX

● Supplementary data: a layer mapping file for the GDSII, supply net names, the locations and nominal values of voltage sources, and area-based current consumption for P/G nets

Figure 3. Reliability Analysis provided by InVar Prime

InVar Prime enables three types of analysis on a layout database: EM, IR and thermal. A layout engineer could start using InVar to help in the routing and planning of the power nets, VDD and VSS. IR analysis with InVar provides early feedback on how good the power routing is at that point. This type of early analysis flags potential issues that might otherwise appear only after fabrication and result in silicon re-spins.

The InVar EM/IR engine provides comprehensive analysis and retains full visibility of supply networks, from top-level connectors down to each transistor. It offers a unique approach to hierarchical block modeling that reduces runtime and memory while keeping the accuracy of a true flat run. Programmable EM rules enable easy adaptation to new technologies.

The InVar Thermal engine scales from single-cell designs to full chips and provides lab-verified accuracy of thermal analysis. Feedback from the thermal engine to the EM/IR engines provides unprecedented overall accuracy. This helps designers understand and analyze effects across the design, such as how thermal 2D/3D profiles affect IR drop and temperature-dependent EM constraints.

The main benefits of InVar Prime are:

● Accuracy verified in the lab and at foundries

● Full-chip sign-off with accurate, high-performance analysis

● Analysis available early in the back-end design, when more design choices are available

● No pre-characterization required for analysis

● A user-friendly environment designed for quick turnaround times

● Effective prevention of power integrity issues

● A broad range of supported technology nodes

● Reduced back-end verification cycle time

● Improved probability of first-silicon success

Sonics

Scott Seiden contributed his company's viewpoint.  Sonics has developed a dynamic power management solution that is hardware-based.

Sonics has developed the industry's first Energy Processing Unit (EPU), based on the ICE-Grain power architecture; ICE stands for Instant Control of Energy.

Sonics’ ICE-G1 product is a complete EPU enabling rapid design of system-on-chip (SoC) power architecture and implementation and verification of the resulting power management subsystem.

No amount of wasted energy is affordable in today’s electronic products. Designers know that their circuits are idle a significant fraction of time, but have no proven technology that exploits idle moments to save power. An EPU is a hardware subsystem that enables designers to better manage and control circuit idle time. Where the host processor (CPU) optimizes the active moments of the SoC components, the EPU optimizes the idle moments of the SoC components. By construction, an EPU delivers lower power consumption than software-controlled power management. EPUs possess the following characteristics:

  • Fine-grained power partitioning maximizes SoC energy savings opportunities
  • Autonomous hardware-based control provides orders of magnitude faster power up and power down than software-based control through a conventional processor
  • Aggregation of architectural power savings techniques ensures minimum energy consumption
  • Reprogrammable architecture supports optimization under varying operating conditions and enables observation-driven adaptation to the end system.

About ICE-G1

The Sonics’ ICE-G1 EPU accelerates the development of power-sensitive SoC designs using configurable IP and an automated methodology, which produces EPUs and operating results that improve upon the custom approach employed by expert power design teams. As the industry’s first licensable EPU, ICE-G1 makes sophisticated power savings techniques accessible to all SoC designers in a complete subsystem solution. Using ICE-G1, experienced and first-time SoC designers alike can achieve significant power savings in their designs.

Markets for ICE-G1 include:

- Application and Baseband Processors
- Tablets, Notebooks
- IoT
- Datacenters
- EnergyStar compliant systems
- Form factor constrained systems—handheld, battery operated, sealed case/no fan, wearable.

ICE-G1 key product features are:

- Intelligent event and switching controllers (power grain controllers, event matrix, interrupt controller, software register interface): configurable and programmable hardware that dynamically manages both active and leakage power.

- SonicsStudio SoC development environment: a graphical user interface (GUI), power grain identification (import IEEE 1801 UPF, import RTL, or describe directly), power architecture definition, power grain controller configuration (power modes and transition events), RTL and UPF code generation, and automated verification testbench generation tools. A single environment that streamlines the EPU development process from architectural specification to physical implementation.

- Automated SoC power design methodology integrated with standard EDA functional and physical tool flows (top down and bottom up)—abstracts the complete set of power management techniques and automatically generates EPUs to enable architectural exploration and continuous iteration as the SoC design evolves.

- Technical support and consulting services—including training, energy savings assessments, architectural recommendations, and implementation guidance.

Conclusion

As can be seen from the contributions, the analysis and management of power is multi-faceted.  Dynamic control of power, especially in battery-powered IoT devices, is critical, since some of these devices will be in locations that are not readily reachable by an operator.

Verification Choices: Formal, Simulation, Emulation

Thursday, July 21st, 2016

Gabe Moretti, Senior Editor

Lately there have been articles and panels about the best type of tools to use to verify a design.  Most of the discussion has centered on the choice between simulation and emulation but, of course, formal techniques should also be considered.  I did not include FPGA-based verification in this article because I consider it a choice equivalent to emulation, though at a different price point.

I invited a few representatives of EDA companies to answer questions about the topic.  The respondents are:

Steve Bailey, Director of Emerging Technologies at Mentor Graphics

Dave Kelf, Vice President of Marketing at OneSpin Solutions

Frank Schirrmeister, Senior Product Management Director at Cadence

Seena Shankar, Technical Marketing Manager at Silvaco

Vigyan Singhal, President and CEO at Oski Technology

Lauro Rizzatti, Verification Consultant

A Search for the Best Technique

I first wanted an opinion of what each technology does better.  Of course the question is ambiguous because the choice of tool, as Lauro Rizzatti points out, depends on the characteristics of the design to be verified.  “As much as I favor emulation, when design complexity does not stand in the way, simulation and formal are superior choices for design verification. Design debugging in simulation is unmatched by emulation. Not only interactive, flexible and versatile, simulation also supports four-state and timing analysis.
However, design complexity growth is here to stay, and the curve will only get more challenging in the future. And we not only have to deal with complexity measured in more transistors or gates in hardware, but also in more code in embedded software. Tasked to address this trend, both simulation and formal would hit the wall. This is where emulation comes in to rule the day.  Performance is not the only criterion for measuring the viability of a verification engine."

Vigyan Singhal wrote: “Both formal and emulation are becoming increasingly popular. Why use a chain saw (emulation) when you can use a scalpel (formal)? Every bug that is truly a block-level bug (and most bugs are) is most cost effective to discover with formal. True system-level bugs, like bandwidth or performance for representative traffic patterns, are best left for emulation.  Too often, we make the mistake of not using formal early enough in the design flow.”
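
As an example of the kind of block-level property that formal handles well, here is a short SystemVerilog assertion module for a FIFO that must never be pushed when full or popped when empty. The signal and module names are hypothetical, but a property of this size is typical of what a formal tool can prove exhaustively rather than merely exercise with traffic.

  // Hypothetical block-level assertions for a FIFO interface.
  module fifo_props (
    input logic clk,
    input logic rst_n,
    input logic push,
    input logic pop,
    input logic full,
    input logic empty
  );
    // Never push into a full FIFO.
    property no_push_when_full;
      @(posedge clk) disable iff (!rst_n) full |-> !push;
    endproperty
    assert property (no_push_when_full);

    // Never pop from an empty FIFO.
    property no_pop_when_empty;
      @(posedge clk) disable iff (!rst_n) empty |-> !pop;
    endproperty
    assert property (no_pop_when_empty);
  endmodule

  // Typically attached to the design without modifying it, e.g.:
  // bind fifo fifo_props u_fifo_props (.*);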

Seena Shankar provided a different point of view. "Simulation gives full visibility into the RTL and testbench. Earlier in the development cycle, it is easier to fix bugs and rerun a simulation. But we are definitely gated by the number of cycles that can be run. A basic test exercising a couple of functional operations could take up to 12 hours for a design with 100 million gates.

Emulation takes longer to set up because all RTL components need to be in place before a test run can begin. The upside is that millions of operations can be run in minutes. However, debug is difficult and time-consuming compared to simulation.  Formal verification needs a different kind of expertise. It is only effective for smaller blocks, but it can really find corner-case bugs through the assumptions and constraints provided to the tool."

Steve Bailey concluded that: "It may seem that simulation is being used less today. But it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-value (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift left and be used earlier in the verification and validation flow, causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines)."

If I had my choice I would like to use formal tools to develop an executable specification as early as possible in the design, making sure that all functional characteristics of the intended product will be implemented and that the execution parameters will be respected.  I agree that the choice between simulation and emulation depends on the size of the block being verified, and I also think that hardware/software co-simulation will most often require the use of an emulation/acceleration device.

Limitations to Cooperation Among the Techniques

Since all three techniques have value in some circumstance, can designers easily move from one to another?

Frank Schirrmeister provided a very exhaustive response to the question, including a good figure.

“The following figure shows some of the connections that exist today. The limitations of cooperation between the engines are often of a less technical nature. Instead, they tend to result from the gaps between different disciplines in terms of cross knowledge between them.

Figure 1: Techniques Relationships (Courtesy of Cadence)

Some example integrations include:

-          Simulation acceleration, combining RTL simulation and emulation. The technical challenges have mostly been overcome by using transactors to connect testbenches, which often run at the transaction level on simulation hosts, to the hardware holding the design under test (DUT) and executing at higher speed. This allows users to combine the expressiveness of simulated testbenches, which increases verification efficiency, with the speed of synthesizable DUTs in emulation (a minimal transactor sketch follows this list).

-          At this point, we have even enabled hot swap between simulation and emulation. For example, we can run gate-level netlists without timing in emulation at faster speeds. This allows users to reach a point of interest late in the execution that would take hours or days to reach in simulation. Once the point of interest is reached, users can switch (hot swap) back into simulation, adding back the timing, and continue the gate-level timing simulation.

-          Emulation and FPGA-based prototyping can share a common front-end, such as in the Cadence System Development Suite, to allow faster bring-up using multi-fabric compilation.

-          Formal and simulation also combine nicely for assertions, X-propagation, etc., and, when assertions are synthesizable and can be mapped into emulation, formal techniques are linked even with hardware-based execution.
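
The following is a minimal SystemVerilog sketch of the transactor idea referenced in the first bullet: the testbench calls a task at the transaction level, and only the pin-level body of that task needs to execute on the emulator side. Everything here (the interface, signal and task names) is hypothetical and far simpler than a production SCE-MI or TLM infrastructure.

  // Hypothetical write transactor: one task call = one complete bus write.
  interface simple_bus_if (input logic clk);
    logic        valid;
    logic        ready;
    logic [31:0] addr;
    logic [31:0] wdata;

    // Transaction-level API used by the testbench; the clock-by-clock
    // signalling inside the task is the part that would live on the emulator.
    task automatic write (input logic [31:0] a, input logic [31:0] d);
      @(posedge clk);
      valid <= 1'b1;
      addr  <= a;
      wdata <= d;
      do @(posedge clk); while (!ready);
      valid <= 1'b0;
    endtask
  endinterface

  // Testbench side issues transactions without knowing the pin protocol, e.g.:
  // bus_if.write(32'h0000_0010, 32'hDEAD_BEEF);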

Vigyan Singhal noted that: "Interchangeability of databases and poorly architected testbenches are limitations. There is still no unified coverage database standard enabling integration of results between formal, simulation and emulation. Often, formal or simulation testbenches are not architected for reuse, even though they almost always can be. All constraints in formal testbenches should be simulatable and emulatable; if checkers and bus functional models (BFMs) are separated in simulation, the checkers can sometimes be reused in formal and in emulation."
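
A simple way to keep checkers reusable across engines, in the spirit of the comment above, is to put them in a small module that contains only assertions and is bound onto the DUT, while the BFM that drives stimulus stays in the testbench. The module, signal and hierarchy names below are hypothetical.

  // Hypothetical protocol checker: no stimulus generation, only assertions,
  // so the same module can be used in simulation, formal and emulation.
  module handshake_checker (
    input logic clk,
    input logic rst_n,
    input logic req,
    input logic ack
  );
    // Once asserted, req must hold until ack is seen.
    assert property (@(posedge clk) disable iff (!rst_n)
                     req && !ack |=> req);

    // Every request is acknowledged within a bounded window
    // (bounded so the check is also usable in emulation).
    assert property (@(posedge clk) disable iff (!rst_n)
                     req |-> ##[1:64] ack);
  endmodule

  // Bound onto the design so no DUT edits are needed, e.g.:
  // bind dut_top handshake_checker u_hs_chk (.clk(clk), .rst_n(rst_n),
  //                                          .req(req), .ack(ack));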

Dave Kelf concluded that: "The real question here is: How do we describe requirements and design specs in machine-readable form, use this information to produce a verification plan, translate them into test structures for different tools, and extract coverage information that can be checked against the verification plan? It is this top-down, closed-loop environment that is generally accepted as ideal, but we have yet to see it realized in the industry. We are limited fundamentally by the ability to create a machine-readable specification."

Portable Stimulus

Accellera has formed a study group to explore the possibility of developing a portable stimulus methodology.  The group is very active, and progress is being made in that direction.  Since the group has yet to publish a first proposal, it was difficult to ask specific questions, although I thought that a judgement on the desirability of such an effort was important.

Frank Schirrmeister wrote: “At the highest level, the portable stimulus project allows designers to create tests to verify SoC integration, including items like low-power scenarios and cache coherency. By keeping the tests as software routines executing on processors that are available in the design anyway, the stimulus becomes portable between the different dynamic engines, specifically simulation, emulation, and FPGA prototyping. The difference in usage with the same stimulus then really lies in execution speed – regressions can run on faster engines with less debug – and on debug insight once a bug is encountered.”

Dave Kelf also has a positive opinion of the effort. "Portable Stimulus is an excellent effort to abstract the key part of the UVM test structures such that they may be applied to both simulation and emulation. This is a worthy effort in the right direction, but it is just scraping the surface. The industry needs to bring assertions into this process, and consider how this stimulus may be better derived from high-level specifications."

SystemVerilog

The SystemVerilog language is considered by some to be the best language to use for SoC development.  Yet the language has limitations, according to some of the respondents.

Seena Shankar answered the question "Is SystemVerilog the best we can do for system verification?" as follows: "Sort of. SystemVerilog encapsulates the best features of the software and hardware paradigms for verification. It is a standard that is very easy to follow but may not be the best in performance. If the performance hit is managed with a combination of SystemC/C++, Verilog or other verification languages, the solution might be limited in terms of portability across projects or simulators."

Dave Kelf wrote: “One of the most misnamed languages is SystemVerilog. Possibly the only thing this language was not designed to do was any kind of system specification. The name was produced in a misguided attempt to compete or compare with SystemC, and that was clearly a mistake. Now it is possible to use SystemVerilog at the system level, but it is clear that a C derived language is far more effective.
What is required is a format that allows untimed algorithmic design with enough information for it to be synthesized, virtual platforms that provide a hardware/software test capability at an acceptable level of performance, and general system structures to be analyzed and specified. C++ is the only language close to this requirement.”

And Frank Schirrmeister observed: “SystemVerilog and technologies like universal verification methodology (UVM) work well at the IP and sub-system level, but seem to run out of steam when extended to full system-on-chip (SoC) verification. That’s where the portable stimulus project comes in, extending what is available in UVM to the SoC level and allowing vertical re-use from IP to the SoC. This approach overcomes the issues for which UVM falls short at the SoC level.”

Conclusion

Both design engineers and verification engineers are still waiting for help from EDA companies.  They have to deal with differing methodologies and imperfect languages while tackling ever more complex designs.  It is not surprising, then, that verification is the most expensive portion of a development project.  Designers must be careful to ensure that what they write is verifiable, while verification engineers need not only to understand the requirements and architecture of the design, but also to be familiar with the characteristics of the language used by developers to describe both the architecture and the functionality of the intended product.  I believe that one way to improve the situation is for both EDA companies and system companies to approach a new design not just as a piece of silicon but as a product that integrates hardware, software, mechanical, and physical characteristics.  Then both development and verification plans can choose the most appropriate tools that can co-exist and provide coherent results.

Results from the RF and Analog/Mixed-Signal (AMS) IC Survey

Wednesday, October 2nd, 2013

This summary details the results of a survey for developers of products in RF and analog/mixed-signal (AMS) ICs. A total of 129 designers responded to this survey. Survey questions focused on job area, company information, end-user application markets, product development types, programming languages, tool vendors, foundries, processes and other areas.

Key Findings

  • More respondents are using Cadence's EDA tools for RFIC designs. In order, respondents also listed Agilent EESof, Mentor, Ansys/Ansoft, Rohde & Schwarz and Synopsys.
  • More respondents are using Cadence's EDA tools for AMS IC design. Agilent EESof, Mentor, Anritsu, Synopsys and Ansys/Ansoft were behind Cadence.
  • Respondents had the most expertise with C/C++. Regarding expertise with programming languages, C/C++ had the highest rating, followed in order by Verilog, Matlab-RF, Matlab-Simulink, Verilog-AMS, VHDL, SystemVerilog, VHDL-AMS and SystemC.
  • For RF design-simulation-verification tools, more respondents in order listed that they use Spice, Verilog, Verilog-AMS, VHDL and Matlab/RF-Simulink. For planned projects, more respondents in order listed SystemC, VHDL-AMS, SystemVerilog, C/C++ and Matlab/RF-Simulink.
  • Regarding the foundries used for RF and/or MMICs, most respondents in order listed TSMC, IBM, TowerJazz, GlobalFoundries, RFMD and UMC.
  • Silicon-based technology is predominately used for current RF/AMS designs. GaAs and SiGe are also widely used. But for future designs, GaAs will lose ground; GaN will see wider adoption.
  • RF and analog/mixed-signal ICs still use fewer transistors than their digital counterparts. Some 30% of respondents are developing designs of less than 1,000 transistors. Only 11% are doing designs of more than 1 million transistors.
  • Digital pre-distortion is still the favorite technique to improve the efficiency of a discrete power amp. Envelope tracking has received a lot of attention in the media. But surprisingly, envelope tracking ranks low in terms of priorities for power amp development.

Implications

  • Cadence continues to dominate the RFIC/AMS EDA environment. Virtuoso remains a favorite among designers. RF/AMS designers will continue to have other EDA tool choices as well.
  • The large foundries, namely TSMC and IBM, will continue to have a solid position in RF/AMS. But the specialty foundries will continue to make inroads. Altis, Dongbu, Magnachip, TowerJazz, Vanguard and others are expanding in various RF/AMS fronts.
  • There is room for new foundry players in RF/AMS. GlobalFoundries and Altis are finding new customers in RF, RF SOI and RF CMOS.
  • The traditional GaAs foundries—TriQuint, RFMD, Win Semi and others—are under pressure in certain segments. The power amp will remain a GaAs-based device, but other RF components are moving to RF SOI, SiGe and other processes.

Detailed Summary

  • Job Function Area-Part 1: A large percentage of respondents are involved in the development of RF and/or AMS ICs. More respondents are currently involved in the development of RF and/or AMS ICs (55%). A smaller percentage said they were involved in the last two years (13%). A significant portion are not involved in the development of RF or AMS ICs (32%).
  • Job Function Area-Part 2: Respondents listed one or a combination of functions. More respondents listed analog/digital designer (30%), followed in order by engineering management (22%), corporate management (12%) and system architect (10%). The remaining respondents listed analog/digital verification, FPGA designer/verification, software, test, student, RF engineer, among others.
  • Company Information: Respondents listed one or a combination of industries. More respondents listed a university (23%), followed in order by systems integrator (18%), design services (14%), fabless semiconductor (13%) and semiconductor manufacturer (10%). The category “other” represented a significant group (13%). The remaining respondents work for companies involved in ASICs, ASSPs, FPGAs, software and IP.
  • Company Revenue (Annual): More respondents listed less than $25 million (27%), followed in order by $100 million to $999 million (24%) and $1 billion and above (22%). Others listed $25 million to $99 million (8%). Some 19% of respondents did not know.
  • Location: More respondents listed North America (60%), followed in order by Europe (21%) and Asia-Pacific (10%). Other respondents listed Africa, China, Japan, Middle East and South America.
  • Primary End-User Application for Respondent’s ASIC/ASSP/SoC design: More respondents listed communications (67%), followed in order by industrial (28%), consumer/multimedia (24%), computer (21%), medical (15%) and automotive (12%).
  • Primary End Market for Respondent’s Design. For wired communications, more respondents listed networking (80%), followed by backhaul (20%). For wireless communications, more respondents listed handsets (32%) and basestations (32%), followed in order by networking, backhaul, metro area networks and telephony/VoIP.
  • Primary End Market If Design Is Targeted for Consumer Segment. More respondents listed smartphones (34%), followed in order by tablets (24%), displays (18%), video (13%) and audio (11%).

Programming Languages Used With RF/AMS Design Tools:

  • Respondents had the most expertise with C and C++. Regarding expertise with programming languages, C/C++ had an overall rating of 2.47 in the survey, followed in order by Verilog (2.32), Matlab-RF (2.27), Matlab-Simulink (2.17), Verilog-AMS (2.03), VHDL (1.99), SystemVerilog (1.84), VHDL-AMS (1.70) and SystemC (1.68).
  • Respondents said they had “professional expertise” (19%) with C/C++. Respondents were “competent” (27%) or were “somewhat experienced” (37%) with C/C++. Some 17% said they had “no experience” with C/C++.
  • Respondents said they had "professional expertise" (13%) with Verilog-AMS. Respondents were "competent" (15%) or "somewhat experienced" (35%) with Verilog-AMS. Some 38% said they had "no experience" with Verilog-AMS.
  • Respondents said they had “professional expertise” with Verilog (12%), or were “competent” (30%) or were “somewhat experienced” (36%). Some 22% said they had “no experience” with Verilog.
  • Respondents said they had “professional expertise” with Matlab-RF (10%), or were “competent” (27%) or “somewhat experienced” (42%). Some 21% said they had “no experience” with the technology.
  • Respondents also had “professional experience” with VHDL (10%), SystemVerilog (9%), SystemC (7%), Matlab-Simulink (6%) and VHDL-AMS (3%).
  • Respondents had ‘’no experience” with SystemC (55%), VHDL-AMS (51%), SystemVerilog (49%), Verilog-AMS (38%), VHDL (36%), Matlab-Simulink (26%), Verilog (22%), Matlab-RF (21%) and C/C++ (17%).

Types of Programming Languages and RF Design-Simulation-Verification Tools Used

  • For current projects, more respondents listed Spice (85%), Verilog (85%), Verilog-AMS (79%), VHDL (76%), Matlab/RF-Simulink (71%), C/C++ (64%), SystemVerilog (56%), VHDL-AMS (44%) and SystemC (21%).
  • For planned projects, more respondents listed SystemC (79%), VHDL-AMS (56%), SystemVerilog (44%), C/C++ (36%), Matlab/RF-Simulink (29%), VHDL (24%), Verilog-AMS (21%), Verilog (15%) and Spice (15%).

Which Tool Vendors Are Used in RFIC Development

  • More respondents listed Cadence (60), followed in order by Agilent EESof (43), Mentor (38), Ansys/Ansoft (29), Rohde & Schwarz (26) and Synopsys (25). Others listed were Anritsu, AWR, Berkeley Design, CST, Dolphin, EMSS, Helic, Hittite, Remcom, Silvaco, Sonnet and Tanner.
  • The respondents for Cadence primarily use the company’s tools for RF design (68%), simulation (73%), layout (67%) and verification (43%). The company’s tools were also used for EM analysis (27%) and test (22%).
  • The respondents for Agilent EESof primarily use the company’s tools for RF design (54%) and simulation (65%). The company’s tools were also used for EM analysis, layout, verification and test.
  • The respondents for Mentor Graphics primarily use the company's tools for verification (55%), layout (37%) and design (34%). Meanwhile, the respondents for Rohde & Schwarz primarily use the company's tools for test (69%). The respondents for Synopsys primarily use the company's tools for design (40%), simulation (60%) and verification (48%).

Which Tool Vendors Are Used in AMS IC Development

  • More respondents listed Cadence (48), followed in order by Agilent EESof (26), Mentor (22), Anritsu (19), Synopsys (18) and Ansys/Ansoft (15). Others listed were AWR, Berkeley Design, CST, Dolphin, EMSS, Helic, Hittite, Remcom, Rohde & Schwarz, Silvaco, Sonnet and Tanner.
  • The respondents for Cadence primarily use the company’s tools for AMS design (79%), simulation (71%), layout (71%) and verification (48%). The company’s tools were also used for EM analysis and test.
  • The respondents for Agilent EESof primarily use the company’s tools for design (42%), simulation (69%) and EM analysis (54%).
  • The respondents for Mentor Graphics primarily use the company's tools for design (50%), simulation (46%) and verification (55%). The respondents for Anritsu primarily use the company's tools for test (47%). The respondents for Synopsys primarily use the company's tools for design (61%) and simulation (67%).

Areas of Improvement for Verification and Methodologies

  • Respondents had a mix of comments.

Foundry and Processes

  • Foundry Used for RFICs and/or MMICs: More respondents listed TSMC (32), followed in order by IBM (27), TowerJazz (19), GlobalFoundries (17), RFMD (13) and UMC (13). The next group was Win Semi (12), ST (11), TriQuint (11) and GCS (10). Other respondents listed Altis, Cree, IHP, LFoundry, OMMIC, SMIC, UMS and XFab.
  • Of the respondents for TSMC, 87% use TSMC for RF foundry work and 55% for MMICs. Of the respondents for IBM, 81% use IBM for RF foundry work and 41% for MMICs. Of the respondents for TowerJazz, 84% use TowerJazz for RF foundry work and 42% for MMICs. Of the respondents for GlobalFoundries, 76% use GF for RF foundry work and 41% for MMICs.
  • Complexity of Respondent’s Designs (Transistor Count): More respondents listed less than 1,000 transistors (30%), followed in order by 10,000-99,000 transistors (14%) and 100,000-999,000 transistors (14%). Respondents also listed 1,000-4,900 transistors (11%), greater than 1 million transistors (11%) and 5,000-9,900 transistors (10%).
  • Process Technology Types: For current designs, more respondents listed silicon (66%), followed in order by GaAs (32%), SiGe (27%), GaN (23%) and InP (10%). For future designs, more respondents listed silicon (66%), followed in order by SiGe (31%), GaN (28%), GaAs (16%) and InP (13%).

Technology Selections:

  • Which Baseband Processor Does Design Interface With: More respondents listed TI (35%), ADI (22%) and Tensilica/Cadence (18%). Respondents also listed "other" (26%).
  • Technique Used To Improve Discrete Power Amplifier Efficiency: In terms of priorities, more respondents listed digital pre-distortion (38%), followed in order by linearization (27%), envelope tracking (14%) and crest factor reduction (10%). The techniques ranked lowest in priority were envelope tracking (37%), crest factor reduction (21%) and linearization (14%).

Test and Measurement

  • Importance of Test and Measurement: More respondents listed very important (34%), followed in order by important (24%), extremely important (20%), somewhat important (19%) and unimportant (3%).

Mark LaPedus has covered the semiconductor industry since 1986, including five years in Asia when he was based in Taiwan. He has held senior editorial positions at Electronic News, EBN and Silicon Strategies. In Asia, he was a contributing writer for Byte Magazine. Most recently, he worked as the semiconductor editor at EE Times.