Chip Design Magazine



Posts Tagged ‘eSilicon’

eSilicon Launches Integrated ASIC Design And Manufacturing Platform

Tuesday, May 26th, 2015

Gabe Moretti, Senior Editor

Since 2001 eSilicon has helped system companies with some of the most time-consuming tasks needed to successfully manage a chip development project.  Today the tools provided by eSilicon allow customers to browse and buy IP, optimize a design, get and compare foundry quotes, and track a project.  Just in time for DAC, the company announced the availability of its second-generation online ASIC design and manufacturing platform, which groups all of these tools under a unified and coherent environment.

Figure 1: The STAR Logical Architecture

Named eSilicon STAR (self-service, transparent, accurate, real-time), the platform supports eSilicon’s existing IP browsing, instant quoting and work-in-process tracking capabilities along with a new chip optimization offering that leverages design virtualization technology. The platform also delivers an enhanced user interface with simplified account setup and access. Tool names have also been unified under the STAR platform as follows:

  • Navigator: Search, select and try eSilicon IP online
  • Optimizer: Versatile self-service IC design optimization for power, performance and area
  • Explorer: Evaluate options and get fast, accurate quotes for MPW and GDSII handoffs
  • Tracker: Real-time design progress and IC delivery tracking, including order history, forecasts and yield data

Figure 2: Details of STAR’s Components

The newly introduced STAR Optimizer provides ASIC designers with an easy way to access eSilicon’s block- and chip-level optimization services. Users can download free software that will analyze their design’s register transfer level (RTL) description to check for robustness. If the design passes these tests, users can then request a design optimization service engagement online. eSilicon’s design optimization service uses unique design virtualization technology to find the optimal design implementation from a power, performance or area perspective. The service is built on a “pay for results” philosophy – the customer pays for the service only if a pre-determined optimization result is achieved.

At first such a revenue scheme may appear naïve when judged against the traditional EDA practice of quarter-by-quarter revenue measurement, but I think the approach has great value from a strategic point of view.  It builds not customers but partners who feel they are being treated fairly, and this is the secret to eSilicon’s success: find partners, not just sources of income.

“We have been using our design virtualization technology to optimize the PPA of customer designs for years,” said Prasad Subramaniam, Ph.D., vice president of design technology at eSilicon. “We’ve achieved some significant results in literally minutes with this technology. We are now making this powerful capability available to all design teams worldwide through the STAR Optimizer interface.”

Optimizer is based on design virtualization technology, which rapidly explores all possible ASIC implementation scenarios to identify the best fit for a particular set of PPA and cost requirements. Design choices such as cell libraries, memory architectures, process options, operating conditions and Vt mix are enumerated instantly, without the need to perform time-consuming what-if implementation trials. Design virtualization uses big data analytics and machine learning to rapidly deliver the business and technical insights needed to build an optimized design.
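The enumerate-and-score idea behind Optimizer can be sketched in a few lines. Everything below is invented for illustration: the option axes, the toy PPA formulas and the numbers are stand-ins, where the real platform draws on pre-characterized silicon data rather than a formula.

```python
from itertools import product

# Hypothetical option axes -- the real tool enumerates many more choices.
libraries = ["7.5-track", "9-track"]
vt_mixes = ["SVT", "SVT+LVT", "SVT+LVT+HVT"]
corners = ["0.9V/125C", "1.0V/85C"]

def predict_ppa(lib, vt, corner):
    """Stand-in for the analytics back end: return (power_mW, freq_MHz, area_mm2).
    A toy linear model; the real system looks up pre-characterized data."""
    power = 100 + 20 * vt_mixes.index(vt) - 10 * libraries.index(lib)
    freq = 500 + 50 * vt_mixes.index(vt) + 30 * corners.index(corner)
    area = 10 - 1.5 * libraries.index(lib)
    return power, freq, area

def best_recipe(min_freq):
    """Enumerate every recipe and keep the lowest-power one meeting min_freq."""
    best = None
    for lib, vt, corner in product(libraries, vt_mixes, corners):
        power, freq, area = predict_ppa(lib, vt, corner)
        if freq >= min_freq and (best is None or power < best[0]):
            best = (power, freq, area, (lib, vt, corner))
    return best
```

Because the predictions are lookups rather than implementation runs, the full cross-product can be scored in well under a second even for much larger option spaces.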

“Our market research told us that the semiconductor community was ready for online technology and big data analytics,” said Mike Gianfagna, eSilicon’s vice president of marketing. “With over 500 users who have generated over 1,000 custom quotes in 47 countries, we have validation that our research was correct. The new eSilicon STAR platform takes the user experience to the next level, both from an ease-of-use and capability point of view.”

The eSilicon STAR platform is available now. There is no cost or obligation to use any of the STAR tools. See for yourself how easy to use and how powerful it is at the eSilicon booth at DAC.

Blog Review – Mon. June 16 2014

Monday, June 16th, 2014

Naturally, there is a DAC theme to this week’s blogs – the old, the new, together with the soccer/football debate and the overlooked heroes of technology. By Caroline Hayes, Senior Editor.

Among those attending DAC 2014, Jack Harding, eSilicon, rejoiced in seeing some familiar faces but mourned the lack of new ones and the absence of a rock-and-roll generation for EDA.

Football fever has affected Shelly Stalnaker, Mentor Graphics, as she celebrates the World Cup coming to a TV screen near you. The rest of the world may call soccer football, but the universality of IC design and verification is an analogy that will resonate with sports enthusiasts everywhere.

Celebrating Alan Turing, Aurelien, Dassault Systemes, looks at the life and achievements of the man who broke the Enigma code in WWII, conceived the universal computing machine in 1936 and helped define artificial intelligence. The fact he wasn’t mentioned in the 2001 film, Enigma, about the code breakers, reflects how overlooked this incredible man was.

Mixed-signal IC verification was the topic for a DAC panel, and Richard Goering, Cadence, runs down what was covered, from tools and methodologies to the prospects for scaling and a hint at what’s next.

Blog Review – Feb. 18 2014

Tuesday, February 18th, 2014

Grand prizes in Paris design; variability pitfalls; snap happy; volume vs innovation

By Caroline Hayes, senior editor

One of the most visually arresting blogs this week is from Neno Horvat at Dassault Systèmes: a fashion parade of projects set against the backdrop of the Hôtel National des Invalides in Paris. The occasion? The Festival de l’Automobile International (FAI), with its Creativ Experience award and the Grand Prix for research into the intelligent car.

Using a blog as a real community jumping-off point and information service, Shelly Stalnaker directs us to fellow Mentor Graphics author Sudhakar Jilla’s article about the variability pitfalls of advanced-node design and manufacturing.

Happy, snappy days are conjured up in the blog by ARM’s rmijat, in which he recounts his smartphone photography presentation at the Electronic Imaging Conference. In one of the week’s most detailed blogs, he takes us through the history of the camera phone to computational photography and future prospects.

Jack Harding, eSilicon, left Las Vegas a richer man, not from a big win, but by reflecting on the prospect of how few companies can bring to market the ICs needed for all the innovation that CES promised.

Blog Review – Nov. 11

Monday, November 11th, 2013

By Caroline Hayes, Senior Editor

Roaming and presenting; talent spotting; anniversary interview; how to go from geek to chic; corporate social responsibility – revisited

On his own European tour, Colin Walls, Mentor Graphics, is presenting papers at ECS in Stockholm, Sweden, and then moving on to Grenoble, France, for IP-SoC with two presentations, followed by a place on the Multicore Panel Session on Thursday afternoon. Copies of his presentation slides can be requested from Colin.

EDA’s Simon Cowell, Michael Posner, Synopsys, believes that the HAPS-70 system is making Eric Huang, PMM for USB IP at Synopsys, a video star as he demonstrates the new 10G USB 3.1 standard.

To mark Cadence’s quarter of a century anniversary, Brian Fuller talks to long-term customer Rainer Holzhaider, AMS, about what he likes and what he wants from EDA.

How to appeal to the next generation of engineers, muses Sherry Hess, AWR. The Graduate Gift and Professors in Partnership is a start….

Corporate social responsibility is a decade old, but eSilicon believes it has made it fresh with the eMPW tool, which can design an MPW (multi-project wafer) in minutes. The twist is that it allows you to take all the technical data, capability and detail to the foundry directly.

A Two-Tier EDA Industry

Thursday, June 16th, 2016

Gabe Moretti, Senior Editor

Talking to Lucio Lanza, you must be open to ideas that appear strange and wrong at first sight.  I had just such a talk with him during DAC.  I enjoy talking to Lucio because I too have strange ideas, certainly not as powerful as his, but strange enough to keep my brain flexible.

So we were talking about the industry when suddenly Lucio said: “You know the EDA industry needs to divide itself in two: design and manufacturing are different things.”

The statement does not make much sense from a historical perspective; in fact it is contrary to how EDA does business today.  But you must think about it from today’s, and the future’s, point of view.  The industry was born and grew under the idea that a company would want to develop its own product totally in house, growing knowledge and experience not only of its own market, but also of semiconductor capabilities.  The EDA industry provides a service that replaces what companies would otherwise have to do internally when designing and developing an IC or a PCB, supplying all the required tools that would otherwise have been developed in house.  But with the IoT as the prime factor for growth, dealing with the vagaries of optimizing a design for a given process is something most companies are either unprepared to do or find too costly given the sale price of the finished product.  I think that a majority of IoT products will not be sensitive to a specific process’s characteristics.

The Obstacles

So why not change, as Lucio forecasts?  The problem is design methodology.  Unfortunately, given the design flow supported today, a team is expected to take the design through synthesis before it can analyze the design for physical characteristics.  This approach is based on the assumption that the design team is actively engaged in the layout phase of the die.  But product developers should not, in general, be concerned with how the die is laid out.  A designer should have the tools to predict leakage, power consumption, noise, and thermal behavior at the system level.  The tools need to be accurate, but not precise.  It should be possible to predict the physical behavior of the design given the characteristics of the final product and of the chosen process.  Few companies producing a leading-edge product that will sell in large volume need to be fully involved in the post-synthesis work, and the number of these companies continues to shrink in direct proportion to the cost of using the process.

EDA startups should not look at post-synthesis markets.  They should target system-level design and verification.  The EDA industry must start thinking in terms of the products its customers are developing, not the silicon used to implement them.  A profound change in both the technological and business approach to our market is needed if we want to grow.  But change is difficult, almost always uncomfortable, and new problems require not just new tools but new thinking.

Software development and debug must be supported by a true hardware/software co-design and co-development system.  At present there are co-verification tools, but true co-development is still not possible, at least not within the EDA industry.

As I have said many times before, “chips don’t float,” thus tier one of the new EDA must also provide packaging tools, printed circuit board (PCB) design tools, and mechanical design tools to create the product.  In other words, we must develop true system-level design and not be so myopic as to believe that our goal is Electronic System Level support.  The electronic part is a partial solution that does not yield a product, just a piece of a product.

The Pioneers

I know of a company that has already taken a business approach similar to what Lucio is thinking about.  The company had always exhibited at DAC, but since adopting its new business approach it was not there this year.  Most customers of eSilicon do not go to DAC; they go to shows and conferences that deal with their end products’ markets.  The business approach of the company, as described to me by Mike Gianfagna, VP of Marketing at eSilicon, is to partner with a customer to implement a product, not a design.  eSilicon provides the EDA know-how and the relationship with the chosen foundry, while the customer provides the knowledge of the end market.  When the product is ready, both companies share in the revenue following a previously agreed formula.  This apparently small change in the business model takes EDA out of the service business and into the full electronic-industry opportunity.  It also relieves companies of the burden of understanding and working the transformation of a design into silicon.

Figure 2: Idealized eSilicon Flow (Courtesy of eSilicon)

What eSilicon offers is not what Lucio has in mind, but it comes very close in most aspects, especially in its business approach to the development of a product, not just a die.

Existing Structure

Not surprisingly, there are consortia that already provide structure to help the development of a two-tier EDA industry.   The newly renamed ESDA can help define and form the new industry, while its marketing agreement with SEMICO can foster a closer discourse with the IP industry.  Accellera Systems Initiative, or simply Accellera, already specializes in design and verification issues and also focuses on IP standards, thus fitting one of the two tiers perfectly.  The SI2 consortium, on the other hand, focuses mostly on post-synthesis and fabrication issues, providing support for the second tier.  Accellera, therefore, provides standards and methodology for the first tier, SI2 for the second, while ESDA straddles both.

The Future

In the past, using the latest process was a demonstration that a company was not only a leader in its market, but an electronics technology leader.  This is no longer the case.  A company can develop and sell a leading product using a 90 or 65nm process, for example, and still be considered a leader in its own market.  Most IoT products will be price sensitive, so minimizing both development and production costs will be imperative.

Having a partner that will provide the know-how to transform the description of the electronic circuit into a layout ready to manufacture will diminish development costs since the company no longer has to employ designers that are solely dedicated to post synthesis analysis, layout and TCAD.

EDA companies that target these markets will see their market size shrink significantly, but their customers’ knowledge of the requirements and technological characteristics of the tools will significantly improve.

The most significant impact will be that EDA’s available revenue will increase, since EDA companies will be able to collect revenue from every unit sold of a specific product.

Design Virtualization and Its Impact on SoC Design

Wednesday, July 15th, 2015

Mike Gianfagna, VP Marketing, eSilicon

Executive Summary

At advanced technology nodes (40nm and below), the number of options that a system-on-chip (SoC) designer faces is exploding. Choosing the correct combination of these options can have a dramatic impact on the quality, performance, cost and schedule of the final SoC. Using conventional design methodologies, it is very difficult to know if the correct options have been chosen. There is simply no way to run the required number of trial implementations to ensure the best possible option choices.

This document outlines the strategic customer benefits of applying a new technology called design virtualization to optimize SoC designs.

Design Challenges

Coupled with exploding option choices, the cost to design an SoC is skyrocketing as well. According to Semico Research Corporation, total SoC design costs increased 48 percent from the 28nm node to the 20nm node and are expected to increase 31 percent again at the 14nm node and 35 percent at the 10nm node.
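Those node-to-node increases compound, which is what makes them so painful. A quick back-of-the-envelope check of the quoted Semico figures:

```python
# Compound the node-to-node cost increases quoted by Semico Research:
# 28nm -> 20nm: +48%, 20nm -> 14nm: +31%, 14nm -> 10nm: +35%.
multiplier = 1.0
for increase in (0.48, 0.31, 0.35):
    multiplier *= 1 + increase
print(f"10nm SoC design cost is about {multiplier:.2f}x the 28nm cost")
```

By these estimates, a 10nm SoC costs roughly 2.6 times as much to design as its 28nm equivalent.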

Rising costs and extreme time-to-market pressures exist for most product development projects employing SoC technology. Getting the optimal SoC implemented in the shortest amount of time, with the lowest possible cost is often the margin of victory for commercial success. Since it is difficult to find the optimal choice of technology options, IP, foundation libraries, memory and operating conditions to achieve an optimal SoC, designers struggle to get the results they need with the choices they have made. These choices are made without sufficient information and so they are typically not optimal.

Figure 1. Option Choices Faced by the SoC Designer

In many cases, there is time for only one major design iteration for the SoC. Taking longer will result in a missed market window and dramatically lower market share and revenue. For many companies, there is only funding for one major design iteration as well. If you don’t get it right, the enterprise could fail. This situation demands getting the best result on the first try. All SoC design teams know this, and there is substantial effort expended to achieve the all-important first-time-right SoC project.

This backdrop creates a rich set of opportunities for technology that can reduce risk and improve results. Commercial electronic design automation (EDA) tools are intended to build the best SoC possible given a fixed set of choices. What is needed to address this problem is the ability to optimize these choices before design begins and throughout the design process as well. This will allow EDA technology and SoC design teams to improve the chances of delivering the best result possible.

Design Virtualization Defined

Design virtualization addresses SoC design challenges in a unique and novel way. The technology focuses on optimizing the recipe for an SoC using cloud-based big data analytics and deep machine learning. In this context, recipe refers to the combined choices of process technology options, operating conditions, IP, foundation libraries and memory architectures. Design virtualization allows the designer to find the optimal starting point for chip implementation.

The technology frees the SoC designer from the negative effects of early, sub-optimal decisions regarding the development of a chip implementation recipe. As we’ve discussed, decisions regarding the chip implementation recipe have substantial and far-reaching implications for the schedule, cost and ultimate quality of the SoC. A correctly defined chip implementation recipe will maximize the return-on-investment (ROI) from the costly and risky SoC design process.

Figure 2. Computer Virtualization (Source: VMware)

In traditional design methodologies, the chip implementation recipe is typically defined at the beginning of the process, often with insufficient information. As the design progresses, the ability to explore the implications of changing the implementation recipe is more difficult, resulting in longer schedules, higher design costs and sub-optimal performance for the intended application.

Design virtualization changes all that. Through an abstraction layer, the ability to explore the implications of various chip implementation recipes now becomes possible. For the first time, SoC designers have “peripheral vision” regarding their decisions. They are able to explore a very broad array of implementation recipes before design begins and throughout the design process. This creates valuable insights into the consequences of their decisions and facilitates, for the first time, discovery of the optimal implementation recipe in a deterministic way.

Figure 3. Design Virtualization: Traditional vs. Virtualized Design Flows (Source: eSilicon)

In many ways, the process is similar to the virtualization concepts made popular in the computer and enterprise software industries. Computer/network/storage virtualization facilitates the delivery of multiple and varied services with the same hardware through an abstraction layer. This abstraction layer maps the physical computing environment into a logical implementation, allowing flexible deployment of the resources. The result is a more efficient use of the underlying hardware and the delivery of multiple optimized user experiences.

Regarding SoC design, design virtualization creates an abstraction layer that maps actual physical results into predicted, logical results to assist in finding the best possible implementation recipe. The result is a more efficient use of the underlying process and design resources and delivery of an optimized SoC.

Design Virtualization – How it Works

At its core, design virtualization utilizes big data strategies to capture engineering knowledge from suppliers worldwide regarding how process options, IP, foundation libraries, memory architectures and operating conditions interact with each other to impact the power, performance and area (PPA) of an SoC design. Machine learning is then applied to this data to allow exploration of design options. The information is accessed through the cloud and real-time, predictive analysis is provided to guide the optimal choice for all these variables.

Figure 4. Architecture of Design Virtualization

As discussed, all the choices contributing to the implementation recipe for an SoC are exploding below 40nm. Understanding how these choices interact to impact the final PPA of the SoC requires extensive trial implementations, consuming large amounts of time and resources, in terms of both staff and EDA tools.

Design virtualization solves this problem with a massive parametric database of options for semiconductor value chain suppliers, worldwide. A cloud-based front-end query system is provided that facilitates real-time, predictive analysis from this database. Because all data is pre-generated, exploration of various options can be done instantly, without the need for expensive EDA tools or time-consuming trial implementations. This approach creates a new-to-the-industry capability.
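The split between an expensive offline characterization phase and an instant online query is the crux. The sketch below is a minimal illustration of that split only; the process names are invented and toy formulas stand in for real characterization data.

```python
# Offline phase (done once, by the provider): pre-characterize every recipe.
# The formulas here are invented stand-ins; in the real system each entry
# would be derived from actual implementation data.
RECIPES = {}
for process in ("28HPM", "28LP"):        # hypothetical process options
    for tracks in (7, 9, 12):            # hypothetical library track heights
        RECIPES[(process, tracks)] = {
            "power_mW": 80 + 5 * tracks - (10 if process == "28LP" else 0),
            "area_mm2": 12 - 0.5 * tracks,
        }

def query(process, tracks):
    """Online phase: an instant lookup against pre-generated data --
    no EDA licenses, no trial implementation, no waiting."""
    return RECIPES[(process, tracks)]
```

Because every answer is pre-generated, the online side reduces to a database lookup, which is why exploration feels instantaneous to the user.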

Design Implementation

There is a significant gap between what EDA vendors and IP vendors deliver when compared with the new issues facing every SoC designer today. The gap can be characterized by two key observations:

  • EDA focuses on creating an optimal solution for one, or a limited number, of design implementation recipes
  • The approximately $4B EDA industry is focused on logic optimization and not memory optimization

Figure 5. Quickly Identifying Optimal Implementations

Regarding chip implementation recipes, the ability to explore the broader solution space for each design is now within reach of all design teams. Thanks to the big data, cloud-based machine learning employed by design virtualization, designers may now perform “what if” exercises for their design recipe options in real time, creating a palette of solutions that has been previously unavailable.

Using this technology, the design team can start with the desired PPA target and quickly identify the implementation recipe required to hit that target. This essentially reverses the typical time-consuming design exploration process. The result is an optimized implementation recipe that balances the PPA requirements of the SoC with the commercial options offered by the worldwide semiconductor supply chain.

With regard to memory optimization, on-chip memory can account for 50 percent or more of the total area of today’s SoCs. The detailed configuration of these memories can have a substantial impact on the final PPA of the chip, but most design teams choose a series of compiled memories early in the design process and never revisit those choices during design implementation. The result is often lost performance, wasted chip area and sub-optimal ROI for a bet-your-company SoC design project.

Design virtualization provides a way to explore all possible memory configurations for a given implementation recipe in real time. Memory customization opportunities can also be identified. Using generic memory models that can be provided by eSilicon, further refinement and optimization of the memory architecture of the SoC is possible, right up to tapeout. This design implementation flexibility is commonplace for the logic portion of the chip, but is new for the memory portion.
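As an illustration of the trade-off involved, consider a handful of hypothetical compiled-memory variants of the same capacity; the configurations and numbers below are invented, but the shape of the choice is typical: banking and column-mux options trade area against access time and leakage.

```python
# Hypothetical compiled-SRAM variants for one 64 KB instance: same capacity,
# different bank/column-mux configurations, different PPA trade-offs.
variants = [
    {"banks": 1, "mux": 4,  "area_mm2": 0.30, "access_ns": 1.8, "leak_mW": 2.1},
    {"banks": 2, "mux": 8,  "area_mm2": 0.26, "access_ns": 1.4, "leak_mW": 2.4},
    {"banks": 4, "mux": 16, "area_mm2": 0.24, "access_ns": 1.1, "leak_mW": 3.0},
]

def pick(variants, max_access_ns):
    """Return the smallest-area variant that still meets the access-time
    target, or None if no configuration can meet it."""
    ok = [v for v in variants if v["access_ns"] <= max_access_ns]
    return min(ok, key=lambda v: v["area_mm2"]) if ok else None
```

Repeating this selection across every memory instance on the chip, and keeping it open right up to tapeout, is exactly the flexibility the text describes for the memory portion of the design.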

The EDA design flow is as important as ever, but now the starting point for design implementation can be an optimized implementation recipe, resulting in an SoC with superior PPA and optimized cost and schedule. An example of the impact of exploring implementation recipes for a 28nm design is shown below.


We have discussed a new approach to improve the results of SoC design called design virtualization. We believe the approaches outlined in this document provide new-to-the-industry capabilities with the opportunity for significant strategic differentiation.

Design virtualization can substantially improve the PPA and schedule of an SoC, resulting in an improved ROI for the massive cost and high risk associated with these projects.

The techniques described here are used daily inside eSilicon for all customer designs. We have achieved significant power reduction and broad design implementation improvements by analyzing customer designs and employing design virtualization techniques.

eSilicon is developing a robust product roadmap to make selected design virtualization capabilities available to all design teams worldwide, regardless of size.

For more information contact or visit

The Various Faces of IP Modeling

Friday, January 23rd, 2015

Gabe Moretti, Senior Editor

Given their complexity, the vast majority of today’s SoC designs contain a large number of third-party IP components.  These can be developed outside the company or by another division of the same company.  In general, they present the same type of obstacle to easy integration and require one or more types of models in order to minimize the integration cost in the final design.

Generally one thinks of models when talking about verification, but in fact, as Frank Schirrmeister, Product Marketing Group Director at Cadence, reminded me, there are three major purposes for modeling IP cores.  Each purpose requires different models.  Indeed, Bernard Murphy, Chief Technology Officer at Atrenta, identified even more uses of models during our interview.

Frank Schirrmeister listed performance analysis, functional verification, and software development support as the three major uses of IP models.

Performance Analysis

Frank points out that one of the activities performed during this type of analysis is examining the interconnect between the IP and the rest of the system.  This activity does not require a complete model of the IP.  Cadence’s Interconnect Workbench creates the model of the component interconnect by running different scenarios against the RT-level model of the IP.  Clearly a tool like Palladium is used, given the size of the required simulation of an RTL model.  So to analyze, for example, an ARM AMBA 8 interconnect, engineers will use simulations representing what the traffic of a peripheral may be and what the typical processor load may be, and apply the resulting behavior models to the details of the interconnect to analyze the performance of the system.
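To make the idea of scenario-driven performance analysis concrete, here is a deliberately tiny traffic model (not Interconnect Workbench, just an invented illustration): two masters contend for a shared bus with fixed priority, and we measure the queueing delay the lower-priority master sees under a given traffic scenario.

```python
import random

def simulate_bus(n_cycles, prob_a, prob_b, seed=0):
    """Toy shared-bus model. Each cycle, master A occupies the bus with
    probability prob_a and master B issues a request with probability prob_b;
    A has fixed priority. Returns B's average queueing delay in cycles --
    the kind of number interconnect performance analysis is after."""
    rng = random.Random(seed)
    pending_b, waits = [], []
    for cycle in range(n_cycles):
        if rng.random() < prob_b:                 # B issues a request
            pending_b.append(cycle)
        if rng.random() >= prob_a and pending_b:  # A idle -> B wins the bus
            waits.append(cycle - pending_b.pop(0))
    return sum(waits) / len(waits) if waits else 0.0
```

Sweeping `prob_a` and `prob_b` across the traffic scenarios of interest is the toy analogue of replaying peripheral-traffic and processor-load scenarios against the interconnect model.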

Drew Wingard, CTO at Sonics, remarked: “From the perspective of modeling on-chip network IP, Sonics separates functional verification from performance verification. The model of on-chip network IP is much more useful in a performance verification environment because in functional verification the network is typically abstracted to its address map. Sonics’ verification engineers develop cycle-accurate SystemC models for all of our IP to enable rapid performance analysis and validation.

“For purposes of SoC performance verification, the on-chip network IP model cannot be a true black box because it is highly configurable. In the performance verification loop, it is very useful to have access to some of the network’s internal observation points. Sonics IP models include published observation points to enable customers to look at, for example, arbitration behaviors and queuing behaviors so they can effectively debug their SoC design.  Sonics also supports the capability to ‘freeze’ the on-chip network IP model, which turns it into a configured black box as part of a larger simulation model. This is useful in the case where a semiconductor company wants to distribute a performance model of its chip to a system company for evaluation.”

Bernard Murphy, Chief Technology Officer at Atrenta, noted: “Hierarchical timing modeling is widely used on large designs, but cannot comprehensively cover timing exceptions, which may extend beyond the IP. So you have to go back to the implementation model.”  Standards, of course, make engineers’ jobs easier.  He continued: “SDC for constraints and ILM for timing abstraction are probably largely fine as-is (apart from continuing refinements to deal with shrinking geometries).”

Functional Verification

Tom De Schutter, Senior Product Marketing Manager, Virtualizer – VDK, Synopsys, said: “The creation of a transaction-level model (TLM) representing commercial IP has become a well-accepted practice. In many cases these transaction-level models are being used as the golden reference for the IP, along with a verification test suite based on the model. The test suite and the model are then used to verify the correct functionality of the IP.  SystemC TLM-2.0 has become the standard way of creating such models. Most commonly a SystemC TLM-2.0 LT (loosely timed) model is created as a reference model for the IP, to help pull in software development and to speed up verification in the context of a system.”

Frank Schirrmeister noted that verification requires the definition of the IP at an IP-XACT level to drive the different verification scenarios.  Cadence’s Interconnect Workbench generates the appropriate RTL models from a description of the architecture of the interconnects.

IEEE 1685, “Standard for IP-XACT, Standard Structure for Packaging, Integrating and Re-Using IP Within Tool-Flows,” describes an XML Schema for meta-data documenting Intellectual Property (IP) used in the development, implementation and verification of electronic systems and an Application Programming Interface (API) to provide tool access to the meta-data. This schema provides a standard method to document IP that is compatible with automated integration techniques. The API provides a standard method for linking tools into a System Development framework, enabling a more flexible, optimized development environment. Tools compliant with this standard will be able to interpret, configure, integrate and manipulate IP blocks that comply with the proposed IP meta-data description.
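A heavily simplified, illustrative IP-XACT-style fragment shows the flavor of this meta-data. The component and its vendor/library/name/version (VLNV) identifier below are invented, and a schema-valid document carries far more (bus interfaces, ports, file sets); the namespace shown is the one used by the 1685-2014 revision of the standard.

```python
import xml.etree.ElementTree as ET

# Invented, heavily simplified IP-XACT-style component description.
xml = """
<ipxact:component xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>interconnect</ipxact:library>
  <ipxact:name>axi_crossbar</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
</ipxact:component>
"""

NS = {"ipxact": "http://www.accellera.org/XMLSchema/IPXACT/1685-2014"}
root = ET.fromstring(xml)
# The VLNV tuple is how a tool flow locates and integrates the block.
vlnv = tuple(root.find(f"ipxact:{tag}", NS).text
             for tag in ("vendor", "library", "name", "version"))
print(vlnv)
```

Because the meta-data is plain XML against a published schema, any compliant tool can parse it the same way, which is exactly the interoperability the standard is after.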

David Kelf, Vice President of Marketing at OneSpin Solutions said: “A key trend for both design and verification IP is the increased configurability required by designers. Many IP vendors have responded to this need through the application of abstraction in their IP models and synthesis to generate the required end code. This, in turn, has increased the use of languages such as SystemC and High Level Synthesis – AdaptIP is an example of a company doing this – that enables a broad range of configuration options as well as tailoring for specific end-devices. As this level of configuration increases, together with synthesis, the verification requirements of these models also change. It is vital that the final model to be used matches the original pre-configured source that will have been thoroughly verified by the IP vendor. This in turn drives the use of a range of verification methods, and Equivalency Checking (EC) is a critical technology in this regard. A new breed of EC tools is necessary for this purpose that can process multiple languages at higher levels of abstractions, and deal with various synthesis optimizations applied to the block.  As such, advanced IP configuration requirements have an effect across many tools and design flows.”

Bernard Murphy pointed out that “Assertions are in a very real sense an abstracted model of an IP. These are quite important in formal analyses and also in quality/coverage analysis at the full-chip level. There is the SVA standard for assertions; but beyond that there is a wide range of expressions, from very complex assertions to quite simple ones, with no real bounds on complexity, scope, etc. It may be too early to suggest any additional standards.”

Software Development

Tom De Schutter pointed out that “As SystemC TLM-2.0 LT has been accepted by IP providers as the standard, it has become a lot easier to assemble systems using models from different sources. The resulting model is called a virtual prototype and enables early software development alongside the hardware design task. Virtual prototypes have also become a way to speed up verification, either of a specific custom IP under test or of an entire system setup. In both scenarios the virtual prototype is used to speed up software execution as part of a so-called software-driven verification effort.

A model is typically provided as a configurable executable, thus avoiding the risk of creating an illegal copy of the IP functionality. The IP vendor can decide the internal visibility and typically limits it to whatever is required to enable software development, which usually means insight into certain registers and memories is provided.”
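The register-level visibility De Schutter describes can be sketched as follows. Real TLM-2.0 LT models are SystemC/C++, so the Python below is only an analogy: a loosely-timed peripheral model that exposes a small memory-mapped register interface to driver software while keeping its internal state hidden, the way vendors typically ship such models as opaque executables. All register names and offsets are invented.

```python
class UartModel:
    """A register-accurate peripheral model in the spirit of a TLM-2.0 LT
    component (Python sketch; real virtual-prototype models are SystemC).
    Only the registers software needs are reachable through the bus;
    internal bookkeeping stays private."""
    STATUS, TXDATA = 0x00, 0x04  # illustrative register offsets

    def __init__(self):
        self._regs = {self.STATUS: 0x1, self.TXDATA: 0}  # STATUS bit0 = tx ready
        self._tx_log = []  # internal state, not visible through the bus

    def bus_read(self, offset):
        return self._regs[offset]

    def bus_write(self, offset, value):
        if offset == self.TXDATA:
            self._tx_log.append(value & 0xFF)  # side effect of the write
        self._regs[offset] = value

# Driver-style software run against the model: poll status, then transmit.
uart = UartModel()
if uart.bus_read(UartModel.STATUS) & 0x1:
    uart.bus_write(UartModel.TXDATA, ord("A"))
print(uart.bus_read(UartModel.TXDATA))  # -> 65
```

The same driver code can later run unchanged against RTL or silicon, which is the essential promise of developing software against a virtual prototype.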

Frank Schirrmeister pointed out that these models are hard to create, or, if they do exist, may be hard to obtain. Pure virtual models such as ARM Fast Models connected to TLM models can be used to obtain a fast simulation of a system boot. Hybrid use models serve developers of lower-level software, such as drivers: to build a software development environment, engineers will, for example, take an ARM Fast Model and plug in the actual RTL connected through a transactor to enable driver development. ARM Fast Models connected to, say, a graphics subsystem running in emulation on a Palladium system is an example of such an environment.

ARM Fast Models are virtual platform models used mostly by software developers, who can work without the need for expensive development boards. They also comply with the TLM-2.0 interface specification for integration with other components in a system simulation.

Other Modeling Requirements

Although there are three main modeling requirements, complex IP components require further analysis in order to be used in designs implemented in advanced processes. A discussion with Steve Brown, Product Marketing Director, IP Group at Cadence, covered power analysis requirements. Steve’s observations can be summed up thus: “For power analysis, designers need power consumption information during the IP selection process: how does the IP match the design criteria, and how does it differentiate itself from other IP with respect to power use? Here engineers even need SPICE models to understand how I/O signals work. Signal integrity is crucial in integrating the IP into the whole system.”

Bernard Murphy added: “Power intent (UPF) is one component, but what about power estimation? Right now we can only run slow emulations for full-chip implementation, then roll up into a power calculation. Although we have UPF as a standard, estimation is in its early stages. IEEE 1801 (UPF) is working on extensions. Also there are two emerging activities – P2415 and P2416 – working respectively on energy-proportionality modeling at the system level and modeling at the chip/IP level.”
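The "roll-up" Murphy mentions amounts to summing per-block energy models over an activity window: static power integrated over time, plus per-cycle dynamic energy scaled by activity. The sketch below shows the arithmetic only; all coefficients are invented, and standardizing exactly this kind of per-IP energy data is what P2416-style efforts aim at.

```python
def estimate_energy_pj(blocks, activity, cycles, period_ns=1.0):
    """Roll chip energy up from per-block models over `cycles` clock
    cycles: static power (mW) integrated over time plus per-active-cycle
    dynamic energy (pJ) scaled by each block's activity factor.
    Coefficients are illustrative, not from any standard."""
    total = 0.0
    for name, m in blocks.items():
        time_ns = cycles * period_ns
        static_pj = m["static_mw"] * time_ns          # mW * ns == pJ
        dynamic_pj = m["dyn_pj_per_cycle"] * cycles * activity.get(name, 0.0)
        total += static_pj + dynamic_pj
    return total

# Hypothetical block models: a CPU and an SRAM over 1000 cycles at 1 ns,
# with the CPU active half the cycles and the SRAM a quarter of them.
blocks = {
    "cpu":  {"static_mw": 2.0, "dyn_pj_per_cycle": 15.0},
    "sram": {"static_mw": 0.5, "dyn_pj_per_cycle": 4.0},
}
print(estimate_energy_pj(blocks, {"cpu": 0.5, "sram": 0.25}, 1000))
# -> 11000.0
```

The hard part in practice is not this summation but obtaining trustworthy per-block coefficients and realistic activity factors, which is precisely the gap the emerging standards target.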

IP Marketplace, a recently introduced web portal from eSilicon, makes power estimation of a particular IP over a range of processes very easy and quick.  “The IP MarketPlace environment helps users avoid complicated paperwork; find which memories will best help meet their chip’s power, performance or area (PPA) targets; and easily isolate key data without navigating convoluted data sheets” said Lisa Minwell, eSilicon’s senior director of IP product marketing.

Brad Griffin, Product Marketing Director, Sigrity Technology at Cadence, talked about the physical problems that can arise during integration, especially where memories are concerned. “PHY and controllers can come either from the same vendor or from different ones. The problem is to get the signal integrity and power integrity required by a particular PHY. A cell phone using an LPDDR4 interface on a 64-bit bus, for example, means a lot of simultaneous switching. So IP vendors, including Cadence of course, provide IBIS models. But Cadence goes beyond that. We have created virtual reference designs, and using the Sigrity technology we can simulate and show that we match the actual reference design. The designer can then also evaluate types of chip package and choose the correct one. It is important to be able to simulate the chip, the package, and the board together, and Cadence can do that.”

Another problem facing SoC designers is Clock Domain Crossing (CDC). Bernard Murphy noted that: “Full-chip flat CDC has been the standard approach but is very painful on large designs. There is a trend toward hierarchical analysis (just as happened in STA), which requires hierarchical models. There are no standards for CDC. Individual companies have individual approaches; e.g., Atrenta has its own abstraction models. Some SDC standardization around CDC-specific constraints would be welcome, but this area is still evolving rapidly.”
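At its core, structural CDC analysis walks the netlist looking for flop-to-flop paths that cross clock domains without passing through a synchronizer. The toy check below illustrates just that structural idea over an invented netlist representation; production CDC tools additionally analyze reconvergence, glitch potential, and handshake protocols.

```python
def find_unsafe_crossings(flops, nets):
    """Flag flop-to-flop nets that cross clock domains without landing on
    a synchronizer cell. `flops` maps flop name -> (clock, is_sync_cell);
    `nets` is a list of (driver, load) pairs. A toy structural check,
    illustrative only."""
    unsafe = []
    for driver, load in nets:
        d_clk, _ = flops[driver]
        l_clk, l_is_sync = flops[load]
        if d_clk != l_clk and not l_is_sync:
            unsafe.append((driver, load))
    return unsafe

flops = {
    "tx_data":  ("clk_a", False),
    "rx_sync1": ("clk_b", True),   # first stage of a two-flop synchronizer
    "rx_data":  ("clk_b", False),
}
nets = [("tx_data", "rx_sync1"),   # safe: lands on a synchronizer cell
        ("tx_data", "rx_data")]    # unsafe: raw crossing into clk_b
print(find_unsafe_crossings(flops, nets))  # -> [('tx_data', 'rx_data')]
```

The hierarchical approach Murphy describes replaces the flat flop table with per-block abstraction models at each boundary, which is exactly where the absence of a standard model format is felt.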


Although on the surface the problem of providing models for an IP component may appear straightforward and well defined, in practice it is neither well defined nor standardized. Each IP vendor has its own set of deliverable models and often its own formats. The task of companies like Cadence and Synopsys that sell their own IP and also provide EDA tools to support other IP vendors is quite complex. Clearly, although some standard development work is ongoing, accommodating present offerings and future requirements under one standard is challenging and will certainly require compromises.

Memory Challenges In The Extreme

Wednesday, November 16th, 2011

By John Blyler and Staff
Next to computation, memory is the most important function in any electronic design. Both processor and memory devices must share the limited resources of power and performance. The relative weighting of these tightly coupled constraints varies depending upon the application.

At one extreme of the power-performance spectrum are applications that sacrifice performance to maintain the lowest possible power, e.g., a simple 8-bit microcontroller. For example, STMicroelectronics has recently introduced a 16-kbit EEPROM kit that can harvest enough energy from ambient radio-wave energy to run small, simple and battery-free electronic applications like RFID tags. The growth of wireless power technology is an emerging field that includes other major players such as Intel and Texas Instruments. (see “Tesla’s Lost Lab Recalls Promise Of Wireless Power”)

Another example of an extremely low power-low performance memory application is in the emerging market of flexible, plastic electronics (see Figure 1). A team from the Korea Advanced Institute of Science and Technology (KAIST) recently reported such a device, i.e., a fully functional, flexible non-volatile resistive random access memory (RRAM).

The challenge with flexible, organic-based memory materials is that the devices have significant cell-to-cell interference due to limitations of the memory structures within the plastic material. One solution to this problem involves the integration of transistor switches into the memory elements. Unfortunately, transistors built on plastic substrates (organic/oxide transistors) have such poor performance that they were unusable. But the team at KAIST solved the cell-to-cell interference issue by, “integrating a memristor with a high-performance single-crystal silicon transistor on flexible substrates.” Similar breakthroughs have been reported at IMEC, (see, “Organic Processors Offer Microwatt Applications.”)

In addition to low power, memristor technology promises to provide significantly higher memory densities with a smaller footprint than today’s devices. A memristor is a two-terminal non-volatile memory technology that is seen by some as a potential replacement for flash and DRAM devices. Hewlett-Packard, the developer of memristor memory, recently announced a partnership with Hynix to fabricate memristor products by the end of 2013.

One anticipated growth market for memristor technology is in solid-state drives (SSDs), which are replacing traditional hard disk drives (HDDs) in mobile notebook applications. SSDs require less power and space than HDDs, which makes SSDs well suited for the rise of ultra-light and ultra-thin notebook computers. These ultra-“books” aim for at least 8 hours on a single battery charge. Among others, Intel recently heralded its entrance into the ultra-book market during the last Intel Developer Forum (see Figure 2). The company is shifting its focus away from traditional notebooks toward ultra-books to deal with competition from Apple’s MacBook Air and ARM processor-based tablet computers.

One consequence of the rise of Ultrabook laptops is the further erosion of the DRAM growth market (see Figure 3). Mike Howard, principal analyst for DRAM and memory at IHS, noted that “the single biggest reason for DRAM’s reduced growth outlook in notebooks during the next four years is the Ultrabook.” Howard believes that the Ultrabook’s emphasis on a form factor of minimal size and weight will lead to fewer DRAMs on average than in traditional notebooks.

Let’s look at the other extreme of the performance-power spectrum, i.e. high(er) power and high performance. Today, server-grade multicore processors are needed to support both ever-increasing network data bandwidths and increasing data-crunching analytics for context-aware applications. In sync with the need for more processors is the complementary need for more memory. For example, networking applications require the constant movement of massive amounts of data into and out of each processor in a multicore system.

Such high-performance processor applications may soon grind to a halt in what Linley Gwennap describes as, “the looming memory wall.” Others have echoed Gwennap’s concerns that the throughput needs of high performance multicore processors will not be met by today’s memory technology.

What can be done? Several solutions are possible, notes Gwennap:
> Increase L3 cache to help reduce traffic to external memory.
> Add more memory channels to traditional slow-speed DRAM devices.
> Follow Intel’s lead on its Xeon processors by adding buffer-on-board (BoB) chips to convert traditional processor serial interfaces into standard parallel DRAM connections.
> Follow MoSys’s lead by implementing a standard high-speed serial interface directly to DRAM.
> Add Micron’s prototype Hybrid Memory Cube to re-engineer the memory subsystem.
(see, “Samsung, Micron Unveil 3D Stacked Memory And Logic.” )
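The arithmetic behind the second option on Gwennap's list is simple to sketch: divide the cores' aggregate bandwidth demand by what one DRAM channel can deliver. The numbers below are illustrative, not from the article, though the 12.8 GB/s figure matches the peak of a standard 64-bit DDR3-1600 channel.

```python
import math

def channels_needed(cores, bytes_per_core_per_cycle, core_ghz, channel_gbps):
    """Back-of-the-envelope sizing for the 'memory wall': aggregate core
    demand in GB/s divided by per-channel peak bandwidth, rounded up.
    Inputs are illustrative assumptions, not measured data."""
    demand_gbs = cores * bytes_per_core_per_cycle * core_ghz
    return math.ceil(demand_gbs / channel_gbps)

# 16 cores each demanding 4 bytes/cycle at 2 GHz = 128 GB/s of traffic;
# one DDR3-1600 channel peaks near 12.8 GB/s.
print(channels_needed(16, 4, 2.0, 12.8))  # -> 10
```

Ten parallel DRAM channels is far more pins and board area than a commodity socket provides, which is why the serial-interface and 3D-stacked options further down the list attract so much attention.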

Not everyone agrees with that approach, however. Sam Stewart, chief architect at eSilicon, says that off-chip memory could greatly improve performance over L3 cache and do it much more efficiently. “When you have L3 cache, you have 2 megabytes per CPU that’s shared,” said Stewart. “With a Hybrid Memory Cube you may have 17 die with 8 gigabytes versus a total of 12 megabytes. Plus it’s lower power because it’s closer and there’s high-bit bandwidth.”

Add to that custom memories that are right-sized to their specific function, plus specialty memories that can run at higher frequency, and the performance numbers go up even further. Put them in a stacked-die package and they can go up still further. While stacked die exacerbates some issues, such as heat dissipation and electromigration, it eliminates another problem—the need for termination on signal paths. The close proximity of the chips means there is insignificant reflection of the electromagnetic waves as they travel through wires at the speed of light. That alone improves performance, said Stewart.

There are other technologies in the works, as well, including phase-change memory, STTRAM (spin-transfer torque RAM), and resistive RAM, according to Philip Wong, professor of electrical engineering at Stanford. He said the goal is to improve energy efficiency in all these types of memory while improving performance.

But with an estimated 50% of processing now tied up with memory and memory controllers, there is plenty of research underway to improve every aspect of memory. Not all of them will roll out in time for the next couple of designs, however, which means engineers will have to push existing boundaries a little bit further until they’re ready.

Is EDA Still EDA?

Thursday, February 25th, 2010

By John Blyler and Staff
Is the Electronic Design Automation (EDA) tools market shrinking or growing? That depends greatly upon how you define EDA.

A recent report by Global Industry Analysts, based on information from the EDA Consortium (EDAC), predicts that the global EDA tool market eventually will re-emerge to drive growth to $9.8 billion by 2015. The report suggests that this growth will be fueled in part by the traditional efforts to improve efficiency and performance throughout the chip development process.

EDA Chip-Level Tools
Aart de Geus, chairman and CEO of Synopsys, expanded upon this finding in a recent interview. “As a percentage of our business, classic EDA is shrinking, but this is not a case of ‘classic EDA doesn’t grow.’” For example, in the past, EDA companies added front-end RTL synthesis and design tools with timing and power closure to improve the productivity of chip designers. Next, efficiencies were found in the back-end of the process by adding physical design with extraction and Design for Manufacturing (DFM) and Yield (DFY) tools. Today, EDA vendors are improving the value of system-level design with architectural tools.

Synopsys is indeed attempting to improve its architecture tool flow with the recent acquisitions of two electronic system-level (ESL) design companies – VaST and CoWare. The emphasis on architectural integrated circuit (IC) design productivity has pushed traditional EDA chip companies to expand into the next level of product development, namely package and board design and – on the software side – even application development.

But this time around, productivity and efficiency within the chip development process will not be enough to save EDA. In addition to continuing improvements in both front and back-end tool design, chip-level EDA companies must be successful in reaching outward to embrace new customers and industries.

Perhaps no one understands this shift in thinking better than Mentor Graphics, who has products in the chip, package, board and even embedded real-time operating system (RTOS) markets. “We (EDA chip tools) as an industry are stubbornly targeting a limited number of customers,” said Serge Leef, vice president of new ventures and General Manager of the System-Level Engineering Division at Mentor Graphics. “We really need to figure out where to go beyond that.”

There are four choices, according to Leef. One is to sell products to existing customers, which EDA companies will continue to do. The second is to sell new products to existing customers, which they are attempting in areas such as submicron design, DFM and yield enhancement. A third option is to sell existing products to new customers in places like China and India, but most of those companies are either part of multinational companies that already buy EDA tools or they’re underfunded startups that cannot afford tools. A fourth option is to sell new products to new customers.

On paper, the last option seems the most promising. The problem is getting the new customers to look at what EDA has to offer, which means that EDA companies must understand the needs of the new customers – i.e., different industries.

IP Drives Profit
A universal need shared by most new customers in today’s economically challenged markets is that of cost reduction. This has two effects. One is to increase the use of intellectual property (IP) blocks in chip-level design while the other is to move from ASIC to FPGA-based designs.

Increasing the use of IP was a primary theme in Virage Logic’s keynote address at the recent DesignCon show. That was expected, but the arguments that were used to support the growth of IP are worth noting. Brani Buric, executive vice president for marketing and sales at Virage, explained it this way: “As we move into consumer markets with low profit margins we must think beyond the technical challenges to the business issues. The question is not just how to do the design more efficiently in terms of cost, but whether to do the design at all.”

The business focus of this approach is reflected in its name, i.e., Design for Profitability (DFP). Companies focusing on profit might just write the spec for a new chip, then hand off the rest of the design and implementation to design companies such as eSilicon or Open-Silicon. Owning the spec would typically be a lot less expensive than owning any part of the implementation process. This approach relies heavily on IP blocks to build the chip to spec.

Interestingly, the growth of IP is one of the key drivers cited in the Global Industry Analysts report for overall EDA growth. The reason that EDA tool revenues are expected to climb through 2015 is because EDA owns IP. By including Broadcom, Qualcomm and ARM as some of the largest IP licensing companies, EDA will indeed be one of the fastest growing sectors—at least on paper. The reasoning for this inclusion, according to EDAC, is that EDA tools are an integral part of licensing the IP, so IP licensing revenues should be counted in the EDA business calculations.

EDA in the Board-Level Market
The growing reliance on FPGA-based electronics is the second trend driven by profit-focused designs. But this is another area where companies like Actel, Xilinx and Altium are trying to engage a broader customer base, e.g., medical, industrial and automotive markets.

Actel’s purchase of Pigeon Point moves that company squarely into the Advanced TCA and MicroTCA world, which has been heavily utilized by communications companies and defense. Xilinx, meanwhile, is positioning its next-generation 28nm FPGA platform to help win business in non-traditional markets. And Altium has been focusing on a single database implementation of FPGA-based, board-level products that include embedded software development.

While all of these expansions reflect broader changes in the overall semiconductor industry, real growth in the EDA sector can only come from expansion beyond traditional markets. But there will always be a nagging question facing EDA companies moving into these new markets: Is this really EDA, or are we venturing into a new sector that reaches well beyond the confines of EDA to include a true system-level approach?
