Posts Tagged ‘SystemC’

Accellera Systems Initiative Continues to Grow

Thursday, October 17th, 2013

By Gabe Moretti

The convergence of system, software and semiconductor design activities to meet the increasing challenges of creating complex system-on-chips (SoCs) has brought to the forefront the need for a single organization to create new EDA and IP standards.

As one of the founders of Accellera, and the one responsible for its name, it gives me great pleasure to see how the consortium has grown and widened its interests.  Through the mergers and acquisitions of the Open SystemC Initiative (OSCI), the Virtual Socket Interface Alliance (VSIA), The SPIRIT Consortium, and now assets of OCP-IP, Accellera is the leading standards organization that develops language-based standards used by system, semiconductor, IP and EDA companies.

As its original name implies (it derives from the Italian “accelera,” meaning “accelerate”), its activities target EDA tools and methods with the aim of fostering efficiency and portability.

Created to develop standards for design and verification languages and methods, Accellera has grown by merging with or acquiring other consortia, expanding its role to Electronic System Level standards and IP standards.  It now has forty-one member companies from industries such as EDA, IP, semiconductors, and electronic systems.  As a result of its wider activities, even its name has grown: it is now “Accellera Systems Initiative.”

In addition to its corporate members, Accellera has formed three Users Communities to educate engineers and increase the use of standards.  The Communities are OCP, SystemC, and UVM.  The first deals with IP standards and issues, the second supports the SystemC modeling and verification language, while the third supports the Universal Verification Methodology.

Accellera has 17 active Technical Committees.  Their work to date has resulted in 7 IEEE standards.  Accellera sponsors a yearly conference, DVCon, generally held in February, and also collaborates with engineering conferences in Europe and Japan.  With the growth of electronics activities in nations like India and China, Accellera is considering a more active presence in those countries as well.

Accellera Systems Initiative has taken over OCP-IP

Tuesday, October 15th, 2013

By Gabe Moretti

Accellera has been taking over multiple standards organizations in the industry for several years, and this is only the latest.  The acquisition includes the current OCP 3.0 standard and supporting infrastructure for reuse of IP blocks used in semiconductor design. OCP-IP and Accellera have worked closely together for many years, but OCP-IP lost corporate and member financial support steadily over the past five years and membership virtually flatlined. Combining the organizations may be the best way to continue to address interoperability of IP design reuse and jumpstart adoption.

“Our acquisition of OCP assets benefits the worldwide electronic design community by leveraging our technical strengths in developing and delivering standards,” said Shishpal Rawat, Accellera Chair. “With its broad and diverse member base, OCP-IP will complement Accellera’s current portfolio and uniquely position us to further develop standards for the system-level design needs of the electronics industry.”

OCP-IP was originally started by Sonics, Inc. in December 2001 as a means to proliferate its network-on-chip approach.  Sonics CTO Drew Wingard has been a primary driver of the organization.  It has long been perceived as the primary marketing tool of the company, and it will be interesting to see how the company (which has been on and off the IPO trail several times since its founding) fares without being the “big dog” in the discussion.

A comprehensive list of FAQs about the asset acquisition is available.

Rapid Prototyping is an Enduring Methodology

Thursday, September 24th, 2015

Gabe Moretti, Senior Editor

When I started working in the electronics industry, hardware development had prototyping on printed circuit boards (PCBs) as its only verification tool.   The method was not “rapid,” since it involved building and maintaining one or more PCBs.  With the development of the EDA industry, simulators became an alternative method, although they really achieved popularity only in the ‘80s with the introduction of hardware description languages like Verilog and VHDL.

Today the majority of designs are developed using software tools, but rapid prototyping is still used in a significant portion of designs.  In fact, hardware-based prototyping is a growing methodology, mostly due to the increased power and size of FPGA devices.  It can now truly be called “rapid prototyping.”

Rapid Prototyping Defined

Lauro Rizzatti, a noted expert on the subject of hardware-based development of electronics, reinforces the idea in this way: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future.”

Saba Sharifi, VP Business Development, Logic Business Unit, System LSI Group at Toshiba America Electronic Components, describes the state of rapid prototyping as follows: “While traditional virtual prototyping involves using CAD and CAE tools to validate a design before building a physical prototype, rapid prototyping is growing in popularity as a method for prototyping SoC and ASIC designs on an FPGA for hardware verification and early software development. In a rapid prototyping environment, a user may start development using an FPGA-based system, and then choose either to keep the design in the FPGA, or to transfer it into a hard-coded solution such as an ASIC. There are a number of different ways to achieve this end.”

To support the hardware-based prototyping methodology, Toshiba has introduced a new type of device: Toshiba’s Fast Fit Structured Array (FFSA).   The FFSA technology utilizes metal-configurable standard cell (MCSC) SoC platform technology for designing ASICs and ASSPs and for replacing FPGAs. Designed with FPGA capabilities in mind, FFSA provides pre-developed wrappers that can realize some of the key FPGA functionality, as well as pre-defined master sizes, to facilitate the conversion process.

According to Saba “In a sense, it’s an extension to traditional FPGA-to-ASIC rapid prototyping – the second portion of the process can be achieved significantly faster using the FFSA approach.  The goal with FFSA technology is to enable developers to reduce their time to market and non-recurring engineering (NRE) costs by minimizing customizable layers (to four metal layers) while delivering the performance, power and lower unit costs associated with standard-cell ASICs.   An FFSA device speeds front-end development due to faster timing closure compared to a traditional ASIC, while back-end prototyping is improved via the pre-defined master sizes. In some cases, customers pursue concurrent development – they get the development process started in the FPGA and begin software development, and then take the FPGA into the FFSA platform. The platform takes into consideration the conversion requirements to bring the FPGA design into the hard-coded FFSA so that the process can be achieved more quickly and easily.  FFSA supports a wide array of interfaces and high speed Serdes (up to 28G), making it well suited for wired and wireless networking, SSD (storage) controllers, and a number of industrial consumer applications. With its power, speed, NRE and TTM benefits, FFSA can be a good solution anywhere that developers have traditionally pursued an FPGA-based approach, or required customized silicon for purpose-built applications with moderate volumes.”

According to Troy Scott, Product Marketing Manager, Synopsys: “FPGA-based prototypes are a class of rapid prototyping methods that are very popular for the promise of high-performance with a relatively low cost to get started. Prototyping with FPGAs is now considered a mainstream verification method which is implemented in a myriad of ways from relatively simple low-cost boards consisting of a single FPGA and memory and interface peripherals, to very complex chassis systems that integrate dozens of FPGAs in order to host billion-gate ASIC and SoC designs. The sophistication of the development tools, debug environment, and connectivity options vary as much as the equipment architectures do. This article examines the trends in rapid prototyping with FPGAs, the advantages they provide, how they compare to other simulation tools like virtual prototypes, and EDA standards that influence the methodology.

According to prototyping specialists surveyed by Synopsys, the three most common goals they have are to speed up RTL simulation and verification, validate hardware/software systems, and enable software development. These goals influence the design of a prototype in order to address the unique needs of each use case.”

Figure 1: Top 3 Goals for a Prototype (Courtesy of Synopsys, Inc.)

Advantages of Rapid Prototyping

Frank Schirrmeister, Group Director for Product Marketing, System Development Suite, Cadence, provides a business description of the advantages: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy and at a reasonable replication cost.”

Stephen Bailey, Director of Emerging Technologies, DVT at Mentor Graphics, puts it as follows: “Performance, more verification cycles in a given period of time, especially for software development, drives the use of rapid prototypes.  Rapid prototyping typically provides a 10x (~10 MHz) performance advantage over emulation, which is 1,000x (~1 MHz) faster than RTL software simulation (~1 kHz).
Once a design has been implemented in a rapid prototype, that implementation can be easily replicated across as many prototype hardware platforms as the user has available.  No recompilation is required.  Replicating or cloning prototypes provides more platforms for software engineers, who appreciate having their own platform but definitely want a platform available whenever they need to debug.”
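
To put the throughput numbers Stephen quotes into perspective, the sketch below converts them into wall-clock time for a fixed workload. It is a minimal illustration only: the ~10 MHz, ~1 MHz and ~1 kHz effective rates come from his estimate above, while the one-billion-cycle workload is a purely hypothetical figure chosen for the example.

    // Minimal sketch: wall-clock time for an assumed one-billion-cycle workload
    // on the three engines, at the effective clock rates quoted above.
    #include <cstdio>

    int main() {
        const double cycles = 1e9;  // hypothetical workload size
        const struct { const char* engine; double hz; } engines[] = {
            {"RTL simulation",       1e3},   // ~1 kHz
            {"Emulation",            1e6},   // ~1 MHz
            {"FPGA-based prototype", 1e7},   // ~10 MHz
        };
        for (const auto& e : engines) {
            const double seconds = cycles / e.hz;
            std::printf("%-22s %12.0f s (%.1f hours)\n", e.engine, seconds, seconds / 3600.0);
        }
        return 0;
    }

At these rates the same workload takes under two minutes on a prototype, roughly a quarter of an hour on an emulator, and more than a week in RTL simulation, which is exactly the gap the contributors describe.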

Lauro points out that: “The main advantage of rapid prototyping is very fast execution speed that comes at the expense of a very long setup time that may take months on very large designs. Partitioning and clock mapping require an uncommon expertise. Of course, the assumption is that the design fits in the maximum configuration of the prototyping board; otherwise the approach would not be applicable. The speed of FPGA prototyping makes it viable for embedded software development. They are best used for validating application software and for final system validation.”

Troy sees the advantages of the methodology as: “To address verification tasks the prototype helps to characterize timing and pipeline latencies that are not possible with more high-level representations of the design and perhaps more dramatically the prototype is able to reach execution speeds in the hundreds of megahertz. Typical prototype architectures for verification tasks rely on a CPU-based approach where traffic generation for the DUT is written as a software program. The CPU might be an external workstation or integrated with the prototype itself. Large memory ICs adjacent to the FPGAs store input traffic and results that is preloaded and read back for analysis. Prototypes that can provide an easy to implement test infrastructure that includes memory ICs and controllers, a high-bandwidth connection to the workstation, and a software API will accelerate data transfer and monitoring tasks by the prototyping team.

Software development and system validation tasks will influence the prototype design as well. The software developer is seeking an executable representation to support porting of legacy code and develop new device drivers for the latest interface protocol implementation. In some cases the prototype serves as a way for a company to deploy an architecture design and software driver examples to partners and customers. Both schemes demand high execution speed and often real world interface PHY connections. For example, consumer product developers will seek USB, HDMI, and MIPI interfaces while an industrial product will often require ADC/DAC or Ethernet interfaces. The prototype then must provide an easy way to connect to accessory interfaces and ideally a catalog of controllers and reference designs. And because protocol validation may require many cycles to validate, a means to store many full milliseconds of hardware trace data helps compliance check-out and troubleshooting.”

Rapid Prototyping versus Virtual Prototyping

According to Steve: “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped on an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, either based on C/C++/SystemC or SystemVerilog languages, that is not synthesizable.  Its rather limited support for hardware debugging hinders its ability to verify drivers and operating systems, where hardware emulation excels.”

Lauro kept his answer short and to the point.  “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped on an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, either based on C/C++/SystemC or SystemVerilog languages, that is not synthesizable.”

Cadence’s position is represented by Frank in his usual thorough style.  “Once RTL has become sufficiently stable, it can be mapped into an array of FPGAs for execution. This essentially requires a remapping from the design’s target technology into the FPGA fabric and often needs memories remodeled, different clock domains managed, and smart partitioning before the mapping into the individual FPGAs happens using standard software provided by the FPGA vendors. The main driver for the use of FPGA-based prototyping is software development, which has changed the dynamics of electronics development quite fundamentally over the last decade. Its key advantage is its ability to provide a hardware platform for software development and system validation that is fast enough to satisfy software developers. The software can reach a range of tens of MHz up to 100MHz and allows connections to external interfaces like PCIe, USB, Ethernet, etc. in real time, which leads to the ability to run system validation within the target environments.

When time to availability is a concern, virtual prototyping based on transaction-level models (TLM) can be the clear winner because virtual prototypes can be provided independently of the RTL that the engines on the continuum require. Everything depends on model availability, too. A lot of processor models today, like ARM Fast Models, are available off-the-shelf. Creating models for new IP often delays the application of virtual prototypes due to their sometimes extensive development effort, which can eliminate the time-to-availability advantage during a project. While virtual prototyping can run in the speed range of hundreds of MIPS, not unlike FPGA-based prototypes, the key differences between them are the model fidelity, replication cost, and the ability to debug the hardware.

Model fidelity often determines which prototype to use. There is often no hardware representation available earlier than virtual prototypes, so they can be the only choice for early software bring-up and even initial driver development. They are, however, limited by model fidelity – TLMs are really an abstraction of the real thing as expressed in RTL. When full hardware accuracy is required, FPGA-based prototypes are a great choice for software development and system validation at high speed. We have seen customers deliver dozens if not hundreds of FPGA-based prototypes to software developers, often three months or more prior to silicon being available.

Two more execution engines are worth mentioning. RTL simulation is the more accurate, slower version of virtual prototyping. Its low speed in the Hz or KHz range is really prohibitive for efficient software development. In contrast, due to the high speed of both virtual and FPGA-based prototypes, software development is quite efficient on both of them. Emulation is the slower equivalent of FPGA-based prototyping that can be available much earlier because its bring-up is much easier and more automated, even from not-yet-mature RTL. It also offers almost simulation-like debug and, since it also provides speed in the MHz range, emulation is often the first appropriate engine for software and OS bring-up used for Android, Linux and Windows, as well as for executing benchmarks like AnTuTu. Of course, on a per project basis, it is considered more expensive than FPGA-based prototyping, even though it can be more cost efficient from a verification perspective when considering multiple projects and a large number of regression workloads.”

Figure 2: Characteristics of the two methods (Courtesy of Synopsys Inc.)

Growth Opportunities

For Lauro the situation boils down to this: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future. The dramatic improvement in FPGA technology that made possible the manufacturing of devices of monstrous capacity will enable rapid prototyping for ever larger designs. Rapid prototyping is an essential tool in the modern verification/validation toolbox of chip designs.”

Troy thinks that there are growth opportunities and explained: “A prototype’s high performance and relatively low fabrication investment have led them to proliferate among IP and ASIC development teams. Market size estimates show that 70% of all ASIC designs are prototyped to some degree using an FPGA-based system today. Given this demand, several commercial offerings have emerged that address the limitations exhibited by custom-built boards. Almost immediate availability, better quality, modularity for better reuse, and the ability to out-source support and maintenance are big benefits. Well-documented interfaces and usage guidelines make end-users largely self-sufficient. A significant trend for commercial systems now is development and debugging features of the EDA software being integrated or co-designed along with the hardware system. Commercial systems can demonstrate superior performance, debug visibility, and bring-up efficiency as a result of the development tools using hardware characterization data, being able to target unique hardware functions of the system, and employing communication infrastructure to the DUT. Commercial high-capacity prototypes are often made available as part of the IT infrastructure so various groups can share or be budgeted prototype resources as project demands vary. Network accessibility, independent management of individual FPGA chains in a “rack” installation, and job queue management are common value-added features of such systems.

Another general trend in rapid prototyping is to mix transaction-level model (TLM) and RTL model abstractions in order to blend the best of both and accelerate the validation task. How do virtual and physical prototypes differ? The biggest contrast is often the model’s availability during the project.  In practice the latest-generation CPU architectures are not available as synthesizable RTL. License and deployment restrictions can limit access, or the design is so new that the RTL is simply not yet available from the vendor. For these reasons virtual prototypes of key CPU subsystems are a practical alternative.  For best performance, and thus for software development tasks, hybrid prototypes typically join an FPGA-based prototype, a cycle-accurate implementation in hardware, with a TLM prototype using a loosely-timed (LT) coding style. TLM abstracts away the individual events and phases of the behavior of the system and instead focuses on the communication transactions. This may be a perfectly acceptable model for the commercial IP block of a CPU but may not be for a new custom interface controller-to-PHY design that is being tailored for a particular application. The team integrating the blocks of the design will assess whether the abstraction is appropriate to satisfy the verification or validation scenarios.”
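
To make the loosely-timed (LT) coding style Troy mentions concrete, here is a minimal, hypothetical SystemC/TLM-2.0 sketch: an initiator issues a blocking transaction to a memory-like target, and timing is carried only as an annotated delay rather than as cycle-by-cycle events. The module names (lt_initiator, lt_memory) and the 10 ns access time are illustrative and not taken from any vendor flow.

    // Minimal sketch of a loosely-timed (LT) TLM-2.0 model: one initiator, one
    // memory-like target, blocking transport, and timing carried only as an
    // annotated delay. Module names and the 10 ns access time are illustrative.
    #include <cstring>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    struct lt_memory : sc_core::sc_module {
        tlm_utils::simple_target_socket<lt_memory> socket;
        unsigned char mem[256] = {};

        SC_CTOR(lt_memory) : socket("socket") {
            socket.register_b_transport(this, &lt_memory::b_transport);
        }

        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            unsigned char* ptr = trans.get_data_ptr();
            const sc_dt::uint64 addr = trans.get_address();
            const unsigned len = trans.get_data_length();
            if (trans.is_read())
                std::memcpy(ptr, &mem[addr], len);
            else
                std::memcpy(&mem[addr], ptr, len);
            delay += sc_core::sc_time(10, sc_core::SC_NS);  // annotated access time
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    struct lt_initiator : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<lt_initiator> socket;

        SC_CTOR(lt_initiator) : socket("socket") { SC_THREAD(run); }

        void run() {
            int data = 42;
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(0x10);
            trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
            trans.set_data_length(sizeof(data));
            trans.set_streaming_width(sizeof(data));
            socket->b_transport(trans, delay);  // whole write in one function call
            wait(delay);                        // synchronize to the annotated time
        }
    };

    int sc_main(int, char*[]) {
        lt_initiator initiator("initiator");
        lt_memory    memory("memory");
        initiator.socket.bind(memory.socket);
        sc_core::sc_start();
        return 0;
    }

Because each access is a single function call with an annotated delay instead of a clocked, pin-level protocol, a model written this way runs orders of magnitude faster than RTL, which is what makes it attractive for the CPU side of a hybrid prototype.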

Steve described his opinion as follows: “Historically, rapid prototyping has been utilized for designs sized in the tens of millions of gates, with some advanced users pushing capacity into the low 100M-gate range.  This has limited the use of rapid prototyping to full chips on the smaller range of size and to IP blocks or subsystems of larger chips.  For IP blocks/subsystems, it is relatively common to combine virtual prototypes of the processor subsystem with a rapid prototype of the IP block or subsystem.  This is referred to as “hybrid prototyping.”
With the next generation of FPGAs such as Xilinx’s UltraScale and Altera’s Stratix-10 and the continued evolution of prototyping solutions, creating larger rapid prototypes will become practical.  This should result in expanded use of rapid prototyping to cover more full chip pre-silicon validation uses.
In the past, limited silicon visibility made debugging difficult and analysis of various aspects of the design virtually impossible with rapid prototypes.  Improvements in silicon visibility and control will improve debug productivity when issues in the hardware design escape to the prototype.  Visibility improvements will also provide insight into chip and system performance and quality that were previously not possible.”

Frank concluded that: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy at a reasonable replication cost. Second, the rollout of Cadence’s multi-fabric compiler that maps RTL both into the Palladium emulation platform and into the Protium FPGA-based prototyping platform significantly eases the trade-offs with respect to speed and hardware debug between emulation and FPGA-based prototyping. This gives developers even more options than they ever had before and widens the applicability of FPGA-based prototyping. The third driver for growth in prototyping is the advent of hybrid usage of, for example, virtual prototyping with emulation, combining fast execution for some portions of the design (like the processors) with accuracy for other aspects of the design (like graphics processing units).

Overall, rapid or FPGA-based prototyping has its rightful place in the continuum of development engines, offering users high-speed execution of an accurate representation. This advantage makes rapid or FPGA-based prototyping a great platform for software development that requires hardware accuracy, as well as for system validation.”

Conclusion

All four of the contributors painted a positive picture of rapid prototyping.  The growth of FPGA devices, both in size and speed, has been critical in keeping this type of development and verification method applicable to today’s designs.  It is often the case that a development team will need to use a variety of tools as it progresses through its task, and rapid prototyping has proven to be useful and reliable.

System Level Power Budgeting

Wednesday, March 12th, 2014

Gabe Moretti, Contributing Editor

I would like to start by thanking Vic Kulkarni, VP and GM at Apache Design, a wholly owned subsidiary of ANSYS; Bernard Murphy, Chief Technology Officer at Atrenta; and Steve Brown, Product Marketing Director at Cadence, for contributing to this article.

Steve began by noting that defining a system-level power budget for an SoC starts from chip package selection and the power supply or battery life parameters. This sets the power/heat constraint for the design, and it is selected while balancing functionality of the device, performance of the design, and area of the logic and on-chip memories.

Unfortunately, as Vic points out, semiconductor design engineers must meet power specification thresholds, or power budgets, that are dictated by the electronic system vendors to whom they sell their products.   Bernard wrote that accurate pre-implementation IP power estimation is almost always required. Since almost all design today is IP-based, accurate estimation for IPs is half the battle. Today you can get power estimates for RTL with accuracy within 15% of silicon, as long as you are modeling representative loads.

With the insatiable demand for handling multiple scenarios (i.e., large FSDB files) like GPS, searches, music, extreme gaming, streaming video, download data rates and more using mobile devices, the dynamic power consumed by SoCs continues to rise in spite of strides made in reducing static power consumption at advanced technology nodes. As shown in Figure 1, end-user demand for higher-performance mobile devices that have longer battery life or a higher thermal limit is expanding the “power gap” between power budgets and estimated power consumption levels.

A typical “chip power budget” for a mobile application could be as follows (Ref: mobile companies): active power budget = 700 mW @ 100 Mbps for download with MIMO and 100 mW in IDLE mode; leakage power < 5 mW with all power domains off; etc.
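
As a rough illustration of what such a budget implies for battery life, the sketch below converts the quoted figures into hours of operation. Only the 700 mW active and 100 mW idle numbers come from the budget above; the 10 Wh battery capacity and the 50/50 active/idle usage split are purely hypothetical assumptions made for the example.

    // Rough illustration: battery life implied by the power budget quoted above.
    // The battery capacity and the usage split are hypothetical assumptions.
    #include <cstdio>

    int main() {
        const double battery_wh   = 10.0;   // assumed battery capacity (watt-hours)
        const double active_w     = 0.700;  // active budget from the article
        const double idle_w       = 0.100;  // idle budget from the article
        const double active_share = 0.5;    // assumed fraction of time spent active

        const double avg_w = active_share * active_w + (1.0 - active_share) * idle_w;
        std::printf("Average power: %.0f mW -> ~%.1f hours on a %.0f Wh battery\n",
                    avg_w * 1000.0, battery_wh / avg_w, battery_wh);
        return 0;
    }

Even a back-of-the-envelope number like this shows why a few hundred milliwatts of estimation error matters: it translates directly into hours of battery life, which is the “power gap” described above.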

Accurate power analysis and optimization tools must be employed during all design phases, from system level through RTL-to-gate-level sign-off, to model and analyze power consumption levels and provide methodologies to meet power budgets.

Figure 1: Skyrocketing performance vs. limited battery and thermal limits (Ref: Samsung, Apache Tech Forum)

The challenge is to find ways to abstract with reasonable accuracy for different types of IP and different loads. Reasonable methods to parameterize power have been found for single and multiple processor systems, but not for more general heterogeneous systems. Absent better models, most methods used today are based on quite simple lookup tables, representing average consumption. Si2 is doing work in defining standards in this area.

Vic is convinced that careful power budgeting at a high level also enables design of the power delivery network in the downstream design flow. Delivering reliable and consistent power to all components of ICs and electronic systems while meeting power budgets is known as power delivery integrity.  Power delivery integrity is analogous to the way in which an electric power grid operator ensures that electricity is delivered to end users reliably, consistently and in adequate amounts while minimizing loss in the transmission network.  ICs and electronic systems designed with inadequate power delivery integrity may experience large fluctuations in supply voltage and operating power that can cause system failure. For example, these fluctuations particularly impact ICs used in mobile handsets and high-performance computers, which are more sensitive to variations in supply voltage and power.  Ensuring power delivery integrity requires accurate modeling of multiple individual components, which are designed by different engineering teams, as well as comprehensive analysis of the interactions between these components.

Methods To Model System Behavior With Power

At present engineers have a few approaches at their disposal.  Vic points out that the designer must translate the power requirements into block-level power budgets to come up with specific metrics: dynamic power estimation per operating power mode; leakage power and sleep power estimation at RTL; power distribution at a glance; identification of high-power-consuming areas; power domains; frequency-scaling feasibility for each IP; retention-flop design trade-offs; power-delivery network planning; required current consumption per voltage source; and so on.

Bernard thinks that Spreadsheet Modeling is probably the most common approach. The spreadsheet captures typical application use-cases, broken down into IP activities, determined from application simulations/emulations. It also represents, for each IP in the system, a power lookup table or set of curves. Power estimation simply sums across IP values in a selected use-case. An advantage is no limitation in complexity – you can model a full smart phone including battery, RF and so on. Disadvantages are the need to understand an accurate set of use-cases ahead of deployment, and the abstraction problem mentioned above.  But Steve points out that these spreadsheets are difficult to create and maintain, and fall short for identifying outlier conditions that are critical for the end users experience.
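
As a minimal sketch of the spreadsheet approach Bernard describes, the code below keeps a per-IP lookup table of average power by operating mode and sums the entries selected by a use-case. All block names, modes and milliwatt values are invented for illustration; a real spreadsheet would be populated from simulation or emulation activity data.

    // Minimal sketch of spreadsheet-style power estimation: per-IP lookup tables
    // of average power by operating mode, summed over the activities of a use-case.
    // All names and numbers are illustrative, not real characterization data.
    #include <cstdio>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // Power lookup table: IP block -> operating mode -> average power in mW.
        std::map<std::string, std::map<std::string, double>> power_mw = {
            {"cpu_cluster", {{"active", 350.0}, {"idle", 40.0}, {"off", 0.5}}},
            {"gpu",         {{"active", 250.0}, {"idle", 25.0}, {"off", 0.3}}},
            {"modem",       {{"active", 180.0}, {"idle", 15.0}, {"off", 0.2}}},
        };

        // One use-case (say, video playback): the mode each IP block is in.
        const std::vector<std::pair<std::string, std::string>> use_case = {
            {"cpu_cluster", "active"}, {"gpu", "active"}, {"modem", "idle"},
        };

        double total_mw = 0.0;
        for (const auto& activity : use_case)
            total_mw += power_mw[activity.first][activity.second];
        std::printf("Estimated use-case power: %.1f mW\n", total_mw);  // 615.0 mW
        return 0;
    }

The appeal is that nothing limits what can go into the table, battery and RF included; the weakness, as Bernard and Steve note, is that the estimate is only as good as the assumed use-cases and the averaging behind the table entries.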

Steve also points out that some companies are adapting virtual platforms to measure dynamic power, and improve hardware / software partitioning decisions. The main barrier to this solution remains creation of the virtual platform models, and then also adding the notion of power to the models. Reuse of IP enables reuse of existing models, but they still require effort to maintain and adapt power calculations for new process nodes.

Bernard has seen engineers run the full RTL against realistic software loads, dump activity for all (or a large number of) nodes, and compute power based on the dump. An advantage is that they can skip the modeling step and still get an estimate as good as for RTL modeling. Disadvantages include needing the full design (making it less useful for planning) and a significant slowdown in emulation when dumping all nodes, making it less feasible to run extensive application experiments.  Steve concurs: dynamic power analysis is a particularly useful technique, available in emulation and simulation. The emulator provides MHz-range performance, enabling analysis of many cycles, often with test driver software to focus on the most interesting use cases.
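
At its simplest, the computation behind this activity-dump approach is the familiar dynamic power relation P ≈ α·C·V²·f summed over nodes, with the toggle rate α of each node taken from the dump. The sketch below shows that summation; the capacitances, supply voltage, clock frequency and toggle rates are made-up illustrative values, and a real dump would cover millions of nodes.

    // Minimal sketch of activity-based dynamic power estimation: sum alpha*C*V^2*f
    // over dumped nodes, where alpha is each node's toggle rate from the activity
    // dump. All electrical values here are illustrative placeholders.
    #include <cstdio>
    #include <vector>

    struct NodeActivity {
        double toggle_rate;    // alpha: average toggles per clock cycle
        double capacitance_f;  // switched capacitance in farads
    };

    int main() {
        const double vdd_v   = 0.9;    // assumed supply voltage
        const double freq_hz = 1.0e9;  // assumed clock frequency

        // In a real flow these entries would come from the activity dump.
        const std::vector<NodeActivity> nodes = {
            {0.15, 2.0e-15}, {0.05, 5.0e-15}, {0.30, 1.0e-15}, {0.01, 8.0e-15},
        };

        double dynamic_w = 0.0;
        for (const auto& n : nodes)
            dynamic_w += n.toggle_rate * n.capacitance_f * vdd_v * vdd_v * freq_hz;

        std::printf("Estimated dynamic power: %.6f mW over %zu nodes\n",
                    dynamic_w * 1e3, nodes.size());
        return 0;
    }

Summing over every node is exactly what makes full-design dumps slow in emulation, which is the trade-off Bernard flags.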

Bernard is of the opinion that while C/C++/SystemC modeling seems an obvious target, it also suffers from the abstraction problem. Steve thinks that a likely architecture in this scenario has the virtual platform containing the processing subsystem and memory subsystem and executing at 100s of MHz, while the emulator contains the rest of the SoC and a replica of the memory subsystem, executes at higher speeds, and provides cycle-accurate power analysis and functional debugging.

Again, Bernard wants to underscore, progress has been made for specialized designs, such as single and multiple processors, but these approaches have little relevance for more common heterogeneous systems. Perhaps Si2’s work in this area will help.

EDA Industry Predictions for 2014 – Part 2

Thursday, January 9th, 2014

This article provides observations from some of the “small” EDA vendors about important issues in EDA.  These predictions serve to measure the degree of optimism in the industry.  They are not data to be used for an end-of-year scorecard, to see who was right and who was not.  It looks like there is much to be done in the next twelve months, unless, of course, consumers change their “mood”.

Bernard Murphy – Atrenta

“Smart” will be the dominant watchword for semiconductors in 2014.  We’ll see the maturing of biometric identification technologies, driven by security needs for smart payment on phones, and an increase in smart-home applications.   An example of cool applications?   Well, we’ll toss our clunky 20th-century remote controls, and manage our smart TV with an app on our phone or tablet, which will, among a host of other functions, allow point / text input to your center of living entertainment system – your smart TV. We’ll see indoor GPS products, enabling the mobile user to navigate shopping malls – an application with significant market potential.  We’ll see new opportunities for Bluetooth or WiFi positioning, 3D image recognition and other technologies.

In 2014 smart phones will be the dominant driver for semiconductor growth. The IoT industry will grow but will be constrained by adoption costs and immaturity. But I foresee that one of the biggest emerging technologies will be smart cards.  Although common for many years in Europe, this technology has been delayed in the US for lack of infrastructure and security concerns.  Now check out businesses near you with new card readers. Chances are they have a slot at the bottom as well as one at the side. That bottom slot is for smart cards. Slated for widespread introduction in 2015, smart card technologies will explode due to high demand.

The EDA industry in 2014 will continue to see implementation tools challenged by the conflicting requirements of technology advances and a shrinking customer base that can afford the costs at these nodes. Only a fundamental breakthrough enabling affordability will effect significant change in these tools.  Front-end design will continue to enjoy robust growth, especially around tools to manage, analyze and debug SoCs based on multi-sourced IPs – the dominant design platform today. Software-based analysis and verification of SoCs will be an upcoming trend, which will largely skip over traditional testbench-based verification. This will likely spur innovation around static hookup checking for SoC assembly, methods to connect software use-cases to implementation characteristics such as power, and enhanced debug tools to bridge the gap between observed software behavior and underlying implementation problems.

Thomas L. Anderson – Breker

Electronic design automation (EDA) and embedded systems have long been sibling markets and technologies, but they are increasingly drawing closer and starting to merge. 2014 will see this trend continue and even accelerate. The catalyst is that almost all significant semiconductor designs use a system-on-chip (SoC) architecture, in which one or more embedded processors lie at the heart of the functionality. Embedded processors need embedded programs, and so the link between the two worlds is growing tighter every year.

One significant driver for this evolution is the gap between simulation testbenches and hardware/software co-verification using simulation, emulation or prototypes. The popular Universal Verification Methodology (UVM) standard provides no links between testbenches and code running in the embedded processors. The UVM has other limitations at the full-SoC level, but verification teams generally run at least some minimal testbench-based simulations to verify that the IP blocks are interconnected properly.

The next step is often running production code on the SoC processors, another link between EDA and embedded. It is usually impractical to boot an operating system in simulation, so usually the verification team moves on to simulation acceleration or emulation. The embedded team is more involved during emulation, and usually in the driver’s seat by the time that the production code is running on FPGA prototypes. The line between the verification team (part of traditional EDA) and the embedded engineers becomes fuzzy.

When the actual silicon arrives from the foundry, most SoC suppliers have a dedicated validation team. This team has the explicit goal of booting the operating system and running production software, including end-user applications, in the lab. However, this rarely works when the chip is first powered up. The complexity and limited debug features of production code lead the validation team to hand-write diagnostics that incrementally validate and bring up sections of the chip. The goal is to find any lurking hardware bugs before trying to run production software.

Closer alignment between EDA and embedded will lead to two important improvements in 2014. First, the simulation gap will be filled by automatically generated multi-threaded, multi-processor C test cases that leverage portions of the UVM testbench. These test cases stress the design far more effectively than UVM testbenches, hand-written tests, or even production software (which is not designed to find bugs). Tools exist today to generate such test cases from graph-based scenario models capturing the design and verification intent for the SoC.

Second, the validation team will be able to use these same scenario models to automatically generate multi-threaded, multi-processor C test cases to run on silicon, replacing their hand-written diagnostics. This establishes a continuum between the domains of EDA, embedded systems, and silicon validation. Scenario models can generate test cases for simulation, simulation acceleration, emulation, FPGA prototyping, and actual silicon in the lab. These test cases will be the first embedded code to run at every one of these stages in 2014 SoC projects.
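
As a rough illustration of the graph-based scenario idea (not Breker’s actual tool or format), the sketch below walks a tiny directed graph of SoC actions and prints a C-style test case. The action names and graph structure are invented; real scenario models also carry constraints, shared resources and multi-processor scheduling.

    // Rough illustration of a graph-based scenario model: nodes are SoC actions,
    // edges are allowed orderings, and a seeded random walk from "boot" to "done"
    // emits a C-style test case. Names and structure are invented for this sketch.
    #include <cstdio>
    #include <cstdlib>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        // Scenario graph: action -> actions that may legally follow it.
        std::map<std::string, std::vector<std::string>> graph = {
            {"boot",           {"init_ddr"}},
            {"init_ddr",       {"dma_copy", "camera_capture"}},
            {"camera_capture", {"dma_copy"}},
            {"dma_copy",       {"crypto_encrypt", "done"}},
            {"crypto_encrypt", {"done"}},
        };

        std::srand(1);  // fixed seed -> reproducible generated test case
        std::string node = "boot";
        std::printf("/* generated test case */\nvoid test_main(void) {\n");
        while (node != "done") {
            std::printf("    do_%s();\n", node.c_str());
            const std::vector<std::string>& next = graph[node];
            node = next[std::rand() % next.size()];
        }
        std::printf("}\n");
        return 0;
    }

Different walks of the same graph yield different legal stimulus sequences, which is how one scenario model can target simulation, emulation, FPGA prototypes and silicon alike.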

Shawn McCloud – Calypto

While verification now leverages high-level verification languages and techniques (i.e., UVM/OVM and SystemVerilog) to boost productivity, design creation continues to rely on RTL methodologies originally deployed almost 20 years ago. The design flow needs to be geared toward creating bug-free RTL designs. This can be realized today by automating the generation of RTL from exhaustively verified C-based models. The C++/SystemC source code is essentially an executable spec. Because the C++/SystemC source code is more concise, it executes 1,000x–10,000x faster than RTL code, providing better coverage.

C and SystemC verification today is rudimentary, relying primarily on directed tests. These approaches lack the sophistication that hardware engineers employ at the RTL, including assertions, code coverage, functional coverage, and property-based verification. For a dependable HLS flow, you need to have a very robust verification methodology, and you need metrics and visibility. Fortunately, there is no need to re-invent the wheel when we can borrow concepts from the best practices of RTL verification.
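
A minimal sketch of what borrowing those RTL verification concepts can look like at the C level follows: a randomized stimulus loop around an untimed model, with assertion-style checks and a crude pair of functional-coverage bins. The “DUT” here, an 8-bit saturating adder, and the coverage bins are hypothetical stand-ins for a real C/SystemC model.

    // Minimal sketch: randomized C-level testbench with assertion-style checks
    // and two functional-coverage bins. The "DUT" (an 8-bit saturating adder)
    // and the bins are hypothetical stand-ins for a real untimed C/SystemC model.
    #include <cassert>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Untimed model of the behavior under test: 8-bit saturating add.
    static uint8_t sat_add8(uint8_t a, uint8_t b) {
        const unsigned s = unsigned(a) + unsigned(b);
        return s > 0xFF ? 0xFF : uint8_t(s);
    }

    int main() {
        std::srand(2);
        unsigned cov_saturated = 0, cov_normal = 0;  // functional-coverage bins

        for (int i = 0; i < 100000; ++i) {
            const uint8_t a = std::rand() & 0xFF;
            const uint8_t b = std::rand() & 0xFF;
            const uint8_t r = sat_add8(a, b);

            // Assertion-style checks, playing the role SVA properties play at RTL.
            assert(r >= a || r == 0xFF);                       // never wraps around
            assert((unsigned(a) + b <= 0xFF) == (r == a + b)); // exact when in range

            if (unsigned(a) + b > 0xFF) ++cov_saturated; else ++cov_normal;
        }
        std::printf("coverage: saturated=%u normal=%u\n", cov_saturated, cov_normal);
        return 0;
    }

The same loop could be driven by constrained-random generators and the bins reported as metrics, which is the kind of visibility the author argues a dependable HLS flow needs.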

Power analysis and optimization have evolved over the last two years, with more changes ahead. Even with conventional design flows there is still a lot more to be optimized in RTL designs. The reality is, when it comes to RTL power optimization, the scope of manual optimizations is relatively limited when factoring in time-to-market pressure and one’s ability to predict the outcome of an RTL change for power. Designers have already started to embrace automated power optimization tools that analyze the sequential behavior of RTL designs to automatically shut down unused portions of a design through a technique called sequential clock gating. There’s a lot more we can do by being smarter and by widening the scope of power analysis. Realizing this, companies will start to move away from the limitations of predefined power budget targets toward a strategy that enables reducing power until the bell rings and it’s time for tape-out.

Bill Neifert – Carbon

Any prediction of future advances in EDA has to include a discussion on meeting the needs of the software developer. This is hardly a new thing, of course. Software has been consuming a steadily increasing part of the design resources for a long time. EDA companies acknowledge this and discuss technologies as being “software-driven” or “enabling software development,” but it seems that EDA companies have had a difficult time in delivering tools that enable software developers.

At the heart of this is the fundamental cost structure of how EDA tools have traditionally been sold and supported. An army of direct sales people and support staff can be easily supported when the average sales price of a tool is in the many tens or hundreds of thousands of dollars. This is the tried-and-true EDA model of selling to hardware engineers.

Software developers, however, are accustomed to much lower cost, or even free, tools. Furthermore, they expect these tools to work without multiple calls and hand-holding from their local AE.

In order to meet the needs of the software developers, EDA needs to change how it engages with them. It’s not just a matter of price. Even the lowest-priced software won’t be used if it doesn’t meet the designer’s needs or if it requires too much direct support. After all, unlike the hardware designers who need EDA tools to complete their job, a software programmer typically has multiple options to choose from. The platform of choice is generally the one that causes the least pain and that platform may be from an EDA provider. Or, it could just as likely be homegrown or even an older generation product.

If EDA is going to start bringing on more software users in 2014, it needs to come out with products that meet the needs of software developers at a price they can afford. In order to accomplish this, EDA products for programmers must be delivered in a much more “ready-to-consume” form. Platforms should be as prebuilt as possible while allowing for easy customization. Since support calls are barriers to productivity for the software engineer and costly to support for the EDA vendor, platforms for software engineers should be web-accessible. In some cases, they may reside fully in the cloud. This completely automates user access and simplifies support, if necessary.

Will 2014 be the year that EDA companies begin to meet the needs of the software engineer or will they keep trying to sell them a wolf in sheep’s clothing? I think it will be the former because the opportunity’s too great. Developing tools to support software engineers is an obvious and welcome growth path for the EDA market.

Brett Cline – Forte

In the 19th century, prevailing opinion held that American settlers were destined to expand across North America. It was called Manifest Destiny.

In December 2014, we may look back on the previous 11 months and claim SystemC-Based Design Destiny. The semiconductor industry is already starting to see more widespread adoption of SystemC-based design sweeping across the United States. In fact, it’s the fastest-growing worldwide region right now. Along with it comes SystemC-based high-level synthesis, gaining traction with more designers because it allows them to perform power tradeoffs that are difficult if not impossible in RTL due to time constraints. Of course, low power continues to be a major driver for design and will be throughout 2014.

Another trend that will be even more apparent in 2014 is the use of abstracted IP. RTL-based IP is losing traction for system design and validation due to simulation speed and because it is difficult to update, retarget and maintain. As a result, more small IP companies are emerging with SystemC as the basis of their designs instead of the long-used Verilog hardware description language.

SystemC-Based Design Destiny is for real in the U.S. and elsewhere as design teams struggle to contain the multitude of challenges in the time allotted.

Dr. Raik Brinkmann – OneSpin Solutions

Over the last few years, given the increase in silicon cost and slowdown in process advancement, we have witnessed the move toward standardized SoC platforms, leveraging IP from many sources, together with powerful, multicore processors.

This has driven a number of verification trends. Verification is diversifying, with the testing of IP blocks evolving separately from SoC integration analysis, a different methodology from virtual platform software validation. In 2014, we will see this diversification extend with more advanced IP verification, a formalization of integration testing, and mainstream use of virtual platforms.

With IP being transferred from varied sources, ensuring thorough verification is absolutely essential. Ensuring IP block functionality has always been critical. Recently, this requirement has taken on an additional dimension where the IP must be signed off before usage elsewhere and designers must rely on it without running their own verification. This is true for IP from separate groups within a company or alternative organizations. This sign-off process requires a predictable metric, which may only be produced through verification coverage technology.

We predict that 2014 will be the year of coverage-driven verification. Effective coverage measurement is becoming more essential and, conversely, more difficult. Verification complexity is increasing along three dimensions: design architecture, tool combination, and somewhat unwieldy standards, such as UVM. These all affect the ability to collect, collate, and display coverage detail.

Help is on the way. During 2014, we expect new coverage technology that will enable the production of meaningful metrics. Furthermore, we will see verification management technology and the use of coverage standards to pull together information that will mitigate verification risk and move the state of the art in verification toward a coverage-driven process.

As with many recent verification developments, coverage solutions can be improved through leveraging formal verification technology. Formal is at the heart of many prepackaged solutions as well as providing a powerful verification solution in its own right.

Much like 2009 for emulation, 2014 will be the year we remember when Formal Verification usage dramatically grew to occupy a major share of the overall verification process.

Formal is becoming pervasive in block and SoC verification, and can go further. Revenue for 2013 tells the story. OneSpin Solutions, for example, tripled its new bookings. Other vendors in the same market are also reporting an increase in revenue well above overall verification market growth.

Vigyan Singhal – Oski Technologies

The worldwide semiconductor industry may be putting on formal wear in 2014 as verification engineers more fully embrace a formal verification methodology. In particular, we’re seeing rapid adoption in Asia, from Korea and Japan to Taiwan and China. Companies there are experiencing the same challenges their counterparts in other areas of the globe have found: designs are getting more and more complex, and current verification methodologies can’t keep pace. SoC simulation and emulation, for example, are failing, causing project delays and missed bugs.

Formal verification, many project teams have determined, is the only way to improve block-level verification and reduce the stress on SoC verification. The reasons are varied. Because formal is exhaustive, it will catch all corner-case bugs that are hard to find in simulation. If more blocks are verified and signed off with formal, it means much better design quality.

At the subsystem and SoC level, verification only needs to be concerned with integration issues rather than design quality issues. An added benefit, all the work spent on building a block-level formal test environment can be reused for future design revisions.

We recently heard from a group of formal verification experts in the U.S. who have successfully implemented formal into their methodology and sign-off flow. Some are long-time formal users. Others are still learning what applications work best for their needs. All are outspoken advocates eager to see more widespread adoption. They’re doing model checking, equivalence checking and clock domain checking, among other applications.

They are not alone in their assessment about formal verification. Given its proven effectiveness, semiconductor companies are starting to build engineering teams with formal verification expertise to take advantage of its powerful capabilities and benefits. Building a formal team is not easy –– it takes time and dedication. The best way to learn is by applying formal in live projects where invested effort and results matter.

Several large companies in Asia have set up rigorous programs to build internal formal expertise. Our experience has shown that it takes three years of full-time formal usage to become what we call a “formal leader” (level 6). That is, an engineer who can define an overall verification strategy, lead formal verification projects and develop internal formal expertise. While 2014 will be the watershed year for the Asian market, we will see more formal users and experts in the years following, and more formal successes.

That’s not to say that adoption of formal doesn’t need some nudging. Education and training are important, as are champions willing to publicly promote the power of the formal technology. My company has a goal to do both. We sponsor the yearly Deep Bounds Award to recognize outstanding technological research achievement for solving the most useful End-to-End formal verification problems. The award is presented at the annual Hardware Model Checking Competition (HWMCC) affiliated with FMCAD (Formal Methods in Computer Aided Design).

While we may not see anyone dressed in top hat and tails at DAC in June 2014, some happy verification engineers may feel like kicking up their heels as Fred Astaire or Ginger Rogers would. That’s because they’re celebrating the completion of a chip project that taped out on time and within budget. And no bugs.

To paraphrase a familiar Fred Astaire quote, “Do it big, do it right and do it with formal.”

Bruce McGaughy – ProPlus Design Solutions

To allow the continuation of Moore’s Law, foundries have been forced to go with 3D FinFET transistors, and along the way a funny thing has happened. Pushing planar devices into vertical structures has helped overcome fundamental device physics limitations, but physics has responded by imposing different physical constraints, such as parasitics and greater gate-drain and gate-source coupling capacitances.

More complex transistor structures mean more complex SPICE models. The inability to effectively body-bias and the requirement to use quantized widths in these FinFET devices mean that circuit designers face new challenges, resulting in more complex designs. This, coupled with increased parasitic effects at advanced technology nodes, leads to post-layout netlist sizes that keep getting larger.

All this gets the focus back on the transistor physics and the verification of transistor-level designs using SPICE simulation.

Above and beyond the complexity of the device and interconnect is the reality of process variation. While some forms of variation, such as random dopant fluctuation (RDF), may be reduced at FinFET nodes, variation caused by fin profile/geometry variability comes into play.

It is expected that threshold voltage mismatch and its standard deviation will increase. Additional complexity from layout-dependent effects requires extreme care during layout. With all these variation effects in the mix, there is one direct trend –– more need for post-layout simulation, the time for which gets longer as netlist sizes get larger.

Pre-layout simulation just does not cut it.

Let’s step back and review where the 3D FinFET transistor has taken us. We have more complex device models, more complex circuits, larger netlists and a greater need for post-layout simulation.

Pretty scary in and of itself. The EDA industry, though, has used a trick whenever confronted with capacity or complexity challenges: trading off accuracy to buy a little more capacity or performance. In the SPICE world, this trick is called FastSPICE.

Now, with 3D FinFETs, we are facing the end of the road for FastSPICE as an accurate simulation and verification tool, and it will be relegated to a more confined role as a functional verification tool. When the process technology starts dropping Vdd and devices have greater capacitive coupling, the result is greater noise sensitivity of the design. The ability to achieve accurate SPICE simulations under these conditions requires extreme care in controlling convergence of currents and charges. Alas, this breaks the back of FastSPICE.

In 2014, as FinFET designs get into production mode, expect the SPICE accuracy requirements and limitations of FastSPICE to cry out for attention. Accordingly, a giga-scale Parallel SPICE simulator called NanoSpice by ProPlus Design Solutions promises to address the problem. It provides a pure SPICE engine that can scale to the capacity and approach the speed of FastSPICE simulators with no loss of accuracy.

Experienced IC design teams will recognize both the potential and challenges of 3D FinFET technology and have the foresight to adopt advanced design methodologies and tools. As a result, the semiconductor industry will be ready to usher in the FinFET production ramp in 2014.

Dave Noble – Pulsic

Custom layout tools will occupy an increased percentage of design tool budgets as process nodes get smaller and more complex. Although legacy (digital) tools are being “updated” to address FinFET, they were designed for 65nm/90nm, so they are running out of steam. Reference flows have too many repetitive, time-consuming, and linear steps. We anticipate that new approaches will be introduced to enable highly optimized layout by new neural tools that can “think” for themselves and anticipate the required behavior (DRC-correct APR) given a set of inputs (such as DRC and process rules). New tools will be required that can generate layout, undertake all placement permutations, complete routing for each permutation AND ensure that it is DRC-correct – all in one iteration. Custom design will catch up with digital, custom layout tools will occupy an increased percentage of design tool budgets, and analog tools will have new high-value specialized functions.

Results from the RF and Analog/Mixed-Signal (AMS) IC Survey

Wednesday, October 2nd, 2013

A summary of the results of a survey for developers of products in RF and analog/mixed-signal (AMS) ICs.

This summary details the results of a survey for developers of products in RF and analog/mixed-signal (AMS) ICs. A total of 129 designers responded to this survey. Survey questions focused on job area, company information, end-user application markets, product development types, programming languages, tool vendors, foundries, processes and other areas.

Key Findings

  • More respondents are using Cadence’s EDA tools for RFIC design than any other vendor’s. Respondents also listed, in order, Agilent EEsof, Mentor, Ansys/Ansoft, Rohde & Schwarz and Synopsys.
  • More respondents are using Cadence’s EDA tools for AMS IC design than any other vendor’s. Agilent EEsof, Mentor, Anritsu, Synopsys and Ansys/Ansoft were behind Cadence.
  • Respondents had the most expertise with C/C++. Regarding expertise with programming languages, C/C++ had the highest rating, followed in order by Verilog, Matlab-RF, Matlab-Simulink, Verilog-AMS, VHDL, SystemVerilog, VHDL-AMS and SystemC.
  • For RF design-simulation-verification tools, more respondents in order listed that they use Spice, Verilog, Verilog-AMS, VHDL and Matlab/RF-Simulink. For planned projects, more respondents in order listed SystemC, VHDL-AMS, SystemVerilog, C/C++ and Matlab/RF-Simulink.
  • Regarding the foundries used for RF and/or MMICs, most respondents in order listed TSMC, IBM, TowerJazz, GlobalFoundries, RFMD and UMC.
  • Silicon-based technology is predominantly used for current RF/AMS designs. GaAs and SiGe are also widely used. But for future designs, GaAs will lose ground; GaN will see wider adoption.
  • RF and analog/mixed-signal ICs still use fewer transistors than their digital counterparts. Some 30% of respondents are developing designs of less than 1,000 transistors. Only 11% are doing designs of more than 1 million transistors.
  • Digital pre-distortion is still the favorite technique to improve the efficiency of a discrete power amp. Envelope tracking has received a lot of attention in the media. But surprisingly, envelope tracking ranks low in terms of priorities for power amp development.

Implications

  • Cadence continues to dominate the RFIC/AMS EDA environment. Virtuoso remains a favorite among designers. RF/AMS designers will continue to have other EDA tool choices as well.
  • The large foundries, namely TSMC and IBM, will continue to have a solid position in RF/AMS. But the specialty foundries will continue to make inroads. Altis, Dongbu, Magnachip, TowerJazz, Vanguard and others are expanding in various RF/AMS fronts.
  • There is room for new foundry players in RF/AMS. GlobalFoundries and Altis are finding new customers in RF, RF SOI and RF CMOS.
  • The traditional GaAs foundries—TriQuint, RFMD, Win Semi and others—are under pressure in certain segments. The power amp will remain a GaAs-based device, but other RF components are moving to RF SOI, SiGe and other processes.

Detailed Summary

  • Job Function Area-Part 1: A large percentage of respondents are involved in the development of RF and/or AMS ICs. More respondents are currently involved in the development of RF and/or AMS ICs (55%). A smaller percentage said they were involved in the last two years (13%). A significant portion are not involved in the development of RF or AMS ICs (32%).
  • Job Function Area-Part 2: Respondents listed one or a combination of functions. More respondents listed analog/digital designer (30%), followed in order by engineering management (22%), corporate management (12%) and system architect (10%). The remaining respondents listed analog/digital verification, FPGA designer/verification, software, test, student, RF engineer, among others.
  • Company Information: Respondents listed one or a combination of industries. More respondents listed a university (23%), followed in order by systems integrator (18%), design services (14%), fabless semiconductor (13%) and semiconductor manufacturer (10%). The category “other” represented a significant group (13%). The remaining respondents work for companies involved in ASICs, ASSPs, FPGAs, software and IP.
  • Company Revenue (Annual): More respondents listed less than $25 million (27%), followed in order by $100 million to $999 million (24%) and $1 billion and above (22%). Others listed $25 million to $99 million (8%). Some 19% of respondents did not know.
  • Location: More respondents listed North America (60%), followed in order by Europe (21%) and Asia-Pacific (10%). Other respondents listed Africa, China, Japan, Middle East and South America.
  • Primary End-User Application for Respondent’s ASIC/ASSP/SoC design: More respondents listed communications (67%), followed in order by industrial (28%), consumer/multimedia (24%), computer (21%), medical (15%) and automotive (12%).
  • Primary End Market for Respondent’s Design. For wired communications, more respondents listed networking (80%), followed by backhaul (20%). For wireless communications, more respondents listed handsets (32%) and basestations (32%), followed in order by networking, backhaul, metro area networks and telephony/VoIP.
  • Primary End Market If Design Is Targeted for Consumer Segment. More respondents listed smartphones (34%), followed in order by tablets (24%), displays (18%), video (13%) and audio (11%).

Programming Languages Used With RF/AMS Design Tools:

  • Respondents had the most expertise with C and C++. Regarding expertise with programming languages, C/C++ had an overall rating of 2.47 in the survey, followed in order by Verilog (2.32), Matlab-RF (2.27), Matlab-Simulink (2.17), Verilog-AMS (2.03), VHDL (1.99), SystemVerilog (1.84), VHDL-AMS (1.70) and SystemC (1.68).
  • Respondents said they had “professional expertise” (19%) with C/C++. Respondents were “competent” (27%) or were “somewhat experienced” (37%) with C/C++. Some 17% said they had “no experience” with C/C++.
  • Respondents said they had “professional expertise” with Verilog-AMS (13%). Respondents were “competent” (15%) or “somewhat experienced” (35%) with Verilog-AMS. Some 38% said they had “no experience” with Verilog-AMS.
  • Respondents said they had “professional expertise” with Verilog (12%), or were “competent” (30%) or were “somewhat experienced” (36%). Some 22% said they had “no experience” with Verilog.
  • Respondents said they had “professional expertise” with Matlab-RF (10%), or were “competent” (27%) or “somewhat experienced” (42%). Some 21% said they had “no experience” with the technology.
  • Respondents also had “professional expertise” with VHDL (10%), SystemVerilog (9%), SystemC (7%), Matlab-Simulink (6%) and VHDL-AMS (3%).
  • Respondents had “no experience” with SystemC (55%), VHDL-AMS (51%), SystemVerilog (49%), Verilog-AMS (38%), VHDL (36%), Matlab-Simulink (26%), Verilog (22%), Matlab-RF (21%) and C/C++ (17%).

Types of Programming Languages and RF Design-Simulation-Verification Tools Used

  • For current projects, more respondents listed Spice (85%), Verilog (85%), Verilog-AMS (79%), VHDL (76%), Matlab/RF-Simulink (71%), C/C++ (64%), SystemVerilog (56%), VHDL-AMS (44%) and SystemC (21%).
  • For planned projects, more respondents listed SystemC (79%), VHDL-AMS (56%), SystemVerilog (44%), C/C++ (36%), Matlab/RF-Simulink (29%), VHDL (24%), Verilog-AMS (21%), Verilog (15%) and Spice (15%).

Which Tool Vendors Are Used in RFIC Development

  • More respondents listed Cadence (60), followed in order by Agilent EEsof (43), Mentor (38), Ansys/Ansoft (29), Rohde & Schwarz (26) and Synopsys (25). Others listed were Anritsu, AWR, Berkeley Design, CST, Dolphin, EMSS, Helic, Hittite, Remcom, Silvaco, Sonnet and Tanner.
  • The respondents for Cadence primarily use the company’s tools for RF design (68%), simulation (73%), layout (67%) and verification (43%). The company’s tools were also used for EM analysis (27%) and test (22%).
  • The respondents for Agilent EEsof primarily use the company’s tools for RF design (54%) and simulation (65%). The company’s tools were also used for EM analysis, layout, verification and test.
  • The respondents for Mentor Graphics primarily use the company’s tools for verification (55%), layout (37%) and design (34%). Meanwhile, the respondents for Rohde & Schwarz primarily use the company’s tools for test (69%). The respondents for Synopsys primarily use the company’s tools for design (40%), simulation (60%) and verification (48%).

Which Tool Vendors Are Used in AMS IC Development

  • More respondents listed Cadence (48), followed in order by Agilent EEsof (26), Mentor (22), Anritsu (19), Synopsys (18) and Ansys/Ansoft (15). Others listed were AWR, Berkeley Design, CST, Dolphin, EMSS, Helic, Hittite, Remcom, Rohde & Schwarz, Silvaco, Sonnet and Tanner.
  • The respondents for Cadence primarily use the company’s tools for AMS design (79%), simulation (71%), layout (71%) and verification (48%). The company’s tools were also used for EM analysis and test.
  • The respondents for Agilent EEsof primarily use the company’s tools for design (42%), simulation (69%) and EM analysis (54%).
  • The respondents for Mentor Graphics primarily use the company’s tools for design (50%), simulation (46%) and verification (55%). The respondents for Anritsu primarily use the company’s tools for test (47%). The respondents for Synopsys primarily use the company’s tools for design (61%) and simulation (67%).

Areas of Improvement for Verification and Methodologies

  • Respondents had a mix of comments.

Foundry and Processes

  • Foundry Used for RFICs and/or MMICs: More respondents listed TSMC (32), followed in order by IBM (27), TowerJazz (19), GlobalFoundries (17), RFMD (13) and UMC (13). The next group was Win Semi (12), ST (11), TriQuint (11) and GCS (10). Other respondents listed Altis, Cree, IHP, LFoundry, OMMIC, SMIC, UMS and XFab.
  • Of the respondents for TSMC, 87% use TSMC for RF foundry work and 55% for MMICs. Of the respondents for IBM, 81% use IBM for RF foundry work and 41% for MMICs. Of the respondents for TowerJazz, 84% use TowerJazz for RF foundry work and 42% for MMICs. Of the respondents for GlobalFoundries, 76% use GF for RF foundry work and 41% for MMICs.
  • Complexity of Respondent’s Designs (Transistor Count): More respondents listed less than 1,000 transistors (30%), followed in order by 10,000-99,000 transistors (14%) and 100,000-999,000 transistors (14%). Respondents also listed 1,000-4,900 transistors (11%), greater than 1 million transistors (11%) and 5,000-9,900 transistors (10%).
  • Process Technology Types: For current designs, more respondents listed silicon (66%), followed in order by GaAs (32%), SiGe (27%), GaN (23%) and InP (10%). For future designs, more respondents listed silicon (66%), followed in order by SiGe (31%), GaN (28%), GaAs (16%) and InP (13%).

Technology Selections:

  • Which Baseband Processor Does Design Interface With: More respondents listed TI (35%), ADI (22%) and Tensilica/Cadence (18%). Respondents also listed other (26%).
  • Technique Used To Improve Discrete Power Amplifier Efficiency: In terms of priorities, more respondents listed digital pre-distortion (38%), followed in order by linearization (27%), envelope tracking (14%) and crest factor reduction (10%). The techniques given the lowest priority ranking were envelope tracking (37%), crest factor reduction (21%) and linearization (14%).

Test and Measurement

  • Importance of Test and Measurement: More respondents listed very important (34%), followed in order by important (24%), extremely important (20%), somewhat important (19%) and unimportant (3%).

Mark LaPedus has covered the semiconductor industry since 1986, including five years in Asia when he was based in Taiwan. He has held senior editorial positions at Electronic News, EBN and Silicon Strategies. In Asia, he was a contributing writer for Byte Magazine. Most recently, he worked as the semiconductor editor at EE Times.

Interface Additions To The e Language For Effective Communication With SystemC TLM 2.0 Models

Thursday, August 23rd, 2012

The last several years have seen strong adoption of transaction-level models using SystemC TLM 2.0. Those models are used for software validation and virtual prototyping. For functional verification, TLMs have a number of advantages: they are available earlier, they allow users to divide their focus between verifying functionality and protocol/timing details, they enable higher-level reuse, and they can be used as reference models in advanced verification environments. Leveraging these benefits requires a convenient and seamless interface to TLM 2.0. New additions to e in the Cadence Specman technology portfolio enable verification engineers to communicate efficiently with SystemC models that have TLM 2.0 interfaces.
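
To make this concrete, below is a minimal sketch of the SystemC side of such a setup: a TLM 2.0 target exposing a blocking-transport socket of the kind an e verification environment would drive through these interface additions. The module name SimpleMemory, the 256-byte array and the fixed 10 ns access latency are illustrative assumptions, not part of the Specman additions themselves.

// Minimal SystemC TLM 2.0 target model (illustrative sketch, not the Specman e code)
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>

struct SimpleMemory : sc_core::sc_module {
  tlm_utils::simple_target_socket<SimpleMemory> socket;  // TLM 2.0 target socket

  SC_CTOR(SimpleMemory) : socket("socket") {
    // Register the blocking-transport callback that initiators
    // (or a proxy driven from e) call with generic-payload transactions.
    socket.register_b_transport(this, &SimpleMemory::b_transport);
  }

  // Blocking transport: service generic-payload reads and writes
  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    sc_dt::uint64  addr = trans.get_address();
    unsigned char* data = trans.get_data_ptr();
    unsigned int   len  = trans.get_data_length();

    if (addr + len > sizeof(mem)) {  // reject out-of-range accesses
      trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
      return;
    }
    if (trans.is_write())
      std::memcpy(&mem[addr], data, len);
    else if (trans.is_read())
      std::memcpy(data, &mem[addr], len);

    delay += sc_core::sc_time(10, sc_core::SC_NS);  // assumed fixed access latency
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }

 private:
  unsigned char mem[256];  // backing storage for this toy target
};

An e verification component would then generate transactions toward, and reference-check results against, a model of this kind through the TLM 2.0 interface additions described in the full article.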

To read more, click here.