Posts Tagged ‘Cadence’


What Is Not Testable Is Not Fixable

Wednesday, September 17th, 2014

Gabe Moretti, Senior Editor

In the past I have mused that the three-letter acronyms used in EDA, like DFT, DFM, DFY and so on, are superfluous, since the only one that counts is DFP (Design For Profit).  This may seem obvious, since every IC's reason for existence is to generate income for the seller.  But the observation is also superficial: the IC must be testable, not only manufacturable, and it must reach a yield figure that makes it cost effective.  Breaking profitability down into major design characteristics is an efficient approach, since a specific tool is certainly easier to work with than a generic one.

Bassilios Petrakis, Product Marketing Director at Cadence, told me: “DFT includes a number of requirements including manufacturing test, acceptance test, and power-on test.  In special cases it may be necessary to test the system while it is in operation to isolate faults or enable redundancies within mission critical systems.  For mission critical applications, usage of logic and memory built-in self-test (BIST) is a commonly used approach to perform in-system test. Most recently, a new IEEE standard, P1687, was introduced to standardize the integration and instrument access protocol for IPs. Another proposed IEEE standard, P1838, has been initiated to define DFT IP for testing 3D-IC Through-Silicon-Via (TSV) based die stacks.”

Kiran Vittal, Senior Director of Product Marketing at Atrenta Inc., pointed out that: “The test coverage goals for advanced deep submicron designs are on the order of 99% for stuck-at faults and over 80% for transition faults and at-speed defects. These high quality goals can only be achieved by analyzing for testability and making design changes at RTL. The estimation of test coverage at RTL and the ability to find and fix issues that impact testability at RTL reduces design iterations and improves overall productivity to meet design schedules and time to market requirements.”

An 80% figure may seem an underachievement, but it points out the difficulty of proving full testability in the face of other, more demanding requirements, like area and power to name just two.

Planning Ahead

I also talked with Bassilios  about the need for a DFT approach in design from the point of view of the architecture of an IC.  The first thing he pointed out was that there are innumerable considerations that affect the choice of an optimal Design for Test (DFT) architecture for a design.   Designers and DFT engineers have to grapple with some considerations early in the design process.

Bassilios noted that “The number of chip pins dedicated to test is often a determining factor in selecting the right test architecture. The optimal pin count is determined by the target chip package, test time, number of pins supported by the automated test equipment (ATE), whether wafer multi-site test will be targeted, and, ultimately, the end-market segment application.”

He continued by noting that: “For instance, as mixed signal designs are mostly dominated by Analog IOs, digital test pins are at a premium and hence require an ultra low pin count test solution. These types of designs might not offer an IEEE1149.1 JTAG interface for board level test or a standardized test access mechanism. In contrast, large digital SoC designs have fewer restrictions and more flexibility in test pin allocation.  Once the pin budget has been established, determining the best test compression architecture is crucial for keeping test costs down by reducing test time and test data volume. Lower test times can be achieved by utilizing higher test compression ratios – typically 30-150X – while test data volume can be reduced by deploying sequential-based scan compression architectures. Test compression architectures are also available for low pin count designs by using the scan deserializer/serializer interface into the compression logic. Inserting test points that target random resistant faults in a design can often help reduce test pattern count (data volume).”
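
To put rough numbers on that trade-off, here is a minimal back-of-the-envelope sketch in Python of shift-dominated scan test time and tester data volume, with and without compression. Every figure in it (design size, pattern count, pin count, shift clock, compression ratio) is an invented assumption for illustration, not Cadence data.

    # Shift-dominated scan test time and tester data volume, with and without
    # test compression. All numbers are illustrative assumptions.

    def scan_test_cost(scan_cells, patterns, external_pins, compression, shift_mhz):
        """Return (test time in ms, tester data volume in Mbit).
        compression = internal chains per external scan pin (1 = no compression)."""
        internal_chains = external_pins * compression
        chain_length = scan_cells / internal_chains      # flops per internal chain
        shift_cycles = chain_length * patterns           # cycles to load/unload every pattern
        time_ms = shift_cycles / (shift_mhz * 1e6) * 1e3
        volume_mbit = shift_cycles * external_pins / 1e6 # bits the ATE must store and stream
        return time_ms, volume_mbit

    # Hypothetical SoC: 2M scan cells, 10k patterns, 16 scan pins, 50 MHz shift clock
    plain = scan_test_cost(2_000_000, 10_000, 16, 1, 50)
    comp  = scan_test_cost(2_000_000, 10_000, 16, 100, 50)   # 100X compression

    print("no compression : %.0f ms, %.0f Mbit" % plain)     # ~25000 ms, ~20000 Mbit
    print("100X compressed: %.0f ms, %.0f Mbit" % comp)      # ~250 ms, ~200 Mbit

Under these simplified assumptions, a 100X compression ratio cuts both the shift time and the data stored on the tester by roughly the compression ratio, which is the effect Petrakis describes.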

The early exploration of DFT architectures to meet design requirements – like area, timing, power, and testability – is facilitated by modern logic synthesis tools. Most DFT IP like JTAG boundary scan, memory BIST collars, logic BIST and compression macros are readily integrated into the design netlist and validated during the logic synthesis process per user’s recipe. Such an approach can provide tremendous improvements to designer productivity. DFT design rule checks are run early and often to intercept and correct undesirable logic that can affect testability.

Test power is another factor that needs to be considered by DFT engineers early on. Excessive scan switching activity can inadvertently lead to test pattern failures on an ATE. Testing one or more cores or sub-blocks in a design in isolation, together with power-aware Automatic Test Pattern Generation (ATPG) techniques, can help mitigate power-related issues. Inserting core-wrapping (or isolation) logic using IEEE1500 is a good way to enable core-based test, hierarchical test, and general analog/mixed-signal interactions.

For designs adopting advanced multi-voltage island techniques, DFT insertion has to be power-domain-aware and construct scan chains appropriately, leveraging industry standard power specifications like the Common Power Format (CPF) and IEEE1801. A seamless integration between logic synthesis and downstream ATPG tools helps prime the test pattern validation and delivery.

ATPG

Kiran delved in greater detail into the subject of testability, giving particular attention to issues with Automatic Test Pattern Generation (ATPG).  “For both stuck-at and transition faults, the presence of hard to detect faults has a substantial impact on overall ATPG performance in terms of pattern count, runtime, and test coverage, which in turn has a direct impact on the cost of manufacturing test. The ability to measure the density of hard to detect faults in a design early, at the RTL stage, is extremely valuable. It gives RTL designers the opportunity to make design changes to address the issue while enabling them to quickly measure the impact of the changes.”

The performance of the ATPG engine is often measured by the following criteria:

- How close it comes to finding tests for all testable faults, i.e. how close the ATPG fault coverage comes to the upper bound.  This aspect of ATPG performance is referred to as its efficiency. If the ATPG engine finds tests for all testable faults, its efficiency is 100%.

- How long it has to run to generate the tests.  Full ATPG runs need to be completed within a certain allocated time, so the quest for finding a test is sometimes abandoned for some hard to test faults after the ATPG algorithm exceeds a pre-determined time limit.  The larger the number of hard to test faults, the lower the ATPG efficiency.

- The total number of tests (patterns) needed to test all testable faults. Note that a single test pattern can detect many testable faults. (These metrics are illustrated in the short sketch after this list.)
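
Expressed as arithmetic, the criteria above reduce to a few ratios. The following Python fragment is only an illustration with invented fault counts; the definitions follow the list above, with efficiency measured against the testable-fault upper bound.

    # Fault-accounting arithmetic for an ATPG run; fault counts are invented.

    total_faults      = 1_000_000
    untestable_faults =    20_000                      # provably untestable (redundant logic, etc.)
    testable_faults   = total_faults - untestable_faults
    detected_faults   =   975_000                      # faults for which a test was found
    aborted_faults    = testable_faults - detected_faults   # hard-to-test faults given up on

    fault_coverage  = detected_faults / total_faults    # measured against every fault
    test_coverage   = detected_faults / testable_faults # measured against the upper bound
    atpg_efficiency = test_coverage                     # 100% only if nothing was aborted

    print(f"fault coverage : {fault_coverage:.2%}")     # 97.50%
    print(f"test coverage  : {test_coverage:.2%}")      # 99.49%
    print(f"aborted faults : {aborted_faults:,}")       # 5,000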

To give a better idea of how test issues can be addressed, Kiran provided me with an example.

Figure 1 (Courtesy of Atrenta)

Consider Figure 1, which shows wide logic cones of flip-flops and black boxes (memories or analog circuits) feeding a downstream flip-flop. ATPG finds it extremely difficult to generate ‘exhaustive’ patterns for such a structure, and either the test generation time is long or the fault coverage is compromised. These kinds of designs can be analyzed early at RTL to find areas with poor controllability and observability, so that the designer can make design changes that improve the efficiency (test data and time) of downstream ATPG tools. The resulting patterns not only improve the quality of test but also lower the cost of manufacturing test.
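
The controllability idea itself can be sketched with the classic SCOAP-style measures from the testability literature. The toy netlist and code below are a generic textbook illustration, not Atrenta's actual analysis.

    # Minimal SCOAP-style combinational controllability (CC0, CC1) for a tiny
    # gate-level netlist. Textbook illustration only, not SpyGlass's algorithm.

    netlist = {                     # net: (gate type, list of input nets)
        "n1": ("AND", ["a", "b"]),
        "n2": ("OR",  ["n1", "c"]),
        "y":  ("AND", ["n2", "d"]),
    }
    cc = {pi: (1, 1) for pi in ["a", "b", "c", "d"]}   # primary inputs: CC0 = CC1 = 1

    def controllability(net):
        """Return (CC0, CC1): the effort to drive 'net' to 0 or 1; larger = harder."""
        if net not in cc:
            gate, ins = netlist[net]
            c = [controllability(i) for i in ins]
            if gate == "AND":    # output 0 needs any input at 0; output 1 needs all at 1
                cc[net] = (min(c0 for c0, _ in c) + 1, sum(c1 for _, c1 in c) + 1)
            elif gate == "OR":   # output 0 needs all inputs at 0; output 1 needs any at 1
                cc[net] = (sum(c0 for c0, _ in c) + 1, min(c1 for _, c1 in c) + 1)
        return cc[net]

    for net in netlist:
        print(net, controllability(net))
    # Nets whose numbers blow up (wide cones, reconvergence, black-box boundaries)
    # are the poorly controllable spots a designer would fix with test points or recoding.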

Figure 2 (Courtesy of Atrenta)

Figure 2 shows the early RTL analysis using Atrenta’s SpyGlass DFT tool suite. This figure highlights the problem through the schematic representation of the design and shows a thermal map on the low control/observe areas, which the designer can fix easily by recoding the RTL.

The analysis of the impact of hard to test faults at RTL can save significant design time in fixing low fault coverage and improving ATPG effectiveness for runtime and pattern count early in the design cycle, resulting in over 50x more efficiency in the design flow to meet the required test quality goals.

Conclusion

Bassilios concluded that “further improvements to testability can be achieved by performing a “what if” analysis with test point insertion and committing the test points once the desired coverage goals are met. Both top-down and bottom-up hierarchical test synthesis approaches can be supported. Early physical placement floorplan information can be imported into the synthesis cockpit to perform physically aware synthesis as well as scan ordering and congestion-free compression logic placement.”

One thing is certain: engineers will not rest.  DFT continues to evolve to address the increased complexity of SoC and 3D-IC design, its realization, and the emergence of new fault models required for sub-20nm process nodes.  With every advance, whether in the form of a new algorithm or new IP modules, the EDA tools will need to be updated and, probably, the approach to the IC architecture will need to change.  As the cost of new processes continues to climb at an increasing rate, designers will have to be more creative in developing better testing techniques that improve the utilization of already established processes.

Blog Review – Monday, Sept 01, 2014

Monday, September 1st, 2014

The generation gap for connectivity; seeking medical help; IoT messaging protocol; Cadence discusses IoT design challenges.

While marvelling at the Internet of Things (IoT), Seow Yin Lim, Cadence, writes effectively about its design challenges now that traditional processor architectures and approaches do not apply.

ARM hosts a guest blog by Jonah McLeod, Kilopass, who discusses MQTT, the messaging protocol standard for the IoT. His detailed post provides context and a simplified breakdown of the protocol.

Bringing engineers together is the motivation for Thierry Marchal, Ansys, who writes an impassioned blog following the biomedical track of the company’s CADFEM Users Meeting in Nuremberg, Germany. “We are all in this together” is the theme, so perhaps we should exchange ideas. It just might catch on.

Admitting to stating the obvious, John Day, Mentor Graphics, states that younger people are more interested in automotive connectivity than older people are. He goes on to share some interesting stats from a recent survey on automotive connectivity.

Caroline Hayes, Senior Editor

Complexity of Mixed-signal Designs

Thursday, August 28th, 2014

Gabe Moretti, Senior Editor

The major constituent of system complexity today is the integration of computing with mechanical and human interfaces.  Both of these are analog in nature, so designing mixed-signal systems is a necessity.  The entire semiconductor chain is impacted by this requirement.  EDA tools must support mixed-signal development and the semiconductor foundries must adapt to using different processes to build one IC.

Impact on Semiconductor Foundries

Jonah McLeod, Director of Corporate Marketing Communications at Kilopass Technology, was well informed about the foundry situation when ARM processors became widely used in mixed-signal designs.  He told me: “Starting in 2012 with the appearance of smart meters, chip vendors such as NXP began integrating the 32-bit ARM Cortex processor with an analog/mixed-signal metrology engine for Smart Metering with two current inputs and a voltage input.

This integration had significant impact on both foundries and analog chip suppliers. The latter had been fabricating mixed-signal chips on process nodes of 180nm and larger, many with their own dedicated fabs. With this integration, they had to incorporate digital logic with their analog designs.

Large semiconductor IDMs like NXP, TI and ST had an advantage over dedicated analog chip companies like Linear and Analog Devices. The former had both logic and analog design expertise they could bring to bear building these SoCs, and they had the fabrication expertise to build digital mixed-signal processes in smaller process geometries.

Long exempt from the pressure to chase smaller process geometries aggressively, the dedicated analog chip companies had a stark choice. They could invest the large capital expenditure required to build smaller geometry fabs, or they could go fab-lite and outsource smaller process geometry designs to the major fabs. This was a major boost for the foundries that needed designs to fill fabs abandoned by digital designs chasing first 40nm and now 28nm. As a result, foundries now have 55nm processes tailored for power management ICs (PMICs) and display driver ICs, among others. Analog expertise still counts in this new world order, but the competitive advantage goes to analog/mixed-signal design teams able to leverage smaller process geometries to achieve cost savings over competitors.”

As form factors in handheld and IoT devices become increasingly smaller, integrating all the components of a system on one IC becomes a necessity.  Thus fabricating mixed-signal chips with smaller-geometry processes grows significantly in importance.

Mladen Nizic, Product Marketing Director at Cadence, noted that requirements on foundries are directly connected to new requirements for EDA tools.  He observed: “Advanced process nodes typically introduce more parametric variation, Layout Dependent Effects (LDE), increased impact of parasitics, aging and reliability issues, layout restrictions and other manufacturing effects affecting device performance, making it much harder for designers to predict circuit performance in silicon. To cope with these challenges, designers need an automated flow to understand the impact of manufacturing effects early, sensitivity analysis to identify the most critical devices, rapid analog prototyping to explore layout configurations quickly, a constraint-driven methodology to guide layout creation, and in-design extraction and analysis to enable correct-by-construction design. Moreover, digitally-assisted analog has become a common approach to achieving analog performance, leading to an increased need for an integrated mixed-signal design flow.”

Marco Casale-Rossi, Senior Staff Product Marketing Manager, Design Group, Synopsys, points out that there is still much life remaining in the 180nm process.  “I’ll give you the 180 nanometer example: when 180 nanometers was introduced as an emerging technology node back in 1999, it offered single-poly, 6 aluminum metals, digital CMOS transistors and embedded SRAM memory only. Today, 180 nanometers is used for state-of-the-art BCD (Bipolar-CMOS-DMOS) processes, e.g. for smart power, automotive, security, and MCU applications; it features, as I said, bipolar, CMOS and DMOS devices, double-poly, triple-well, four aluminum layers, integrating a broad catalogue of memories such as DRAM, EEPROM, FLASH, OTP, and more.  Power supplies span from 1V to several tens or even hundreds of volts; analog & mixed-signal manufacturing processes at established technology nodes are as complex as the latest and greatest digital manufacturing processes at emerging technology nodes, only the metrics of complexity are different.”

EDA Tools and Design Flow

Mixed-signal designs require a more complex flow than strictly digital designs.  They often incorporate multiple analog, RF, mixed-signal, memory and logic blocks operating at high performance and in different power domains.  For these reasons engineers designing a mixed-signal IC need different tools throughout the development process.  Architecture definition, development, testing, and place and route are all impacted.

Mladen observed that “Mixed-signal chip architects must explore different configurations with everyone concerned in a design team to avoid costly, late iterations. Designers consider many factors, like block placement relative to each other, IO locations, power grid and sensitive analog routes, and noise avoidance, to arrive at an optimal chip architecture.”

Standards organizations, particularly Accellera and the IEEE, have developed versions of widely used hardware description languages like Verilog and VHDL that provide support for mixed-signal descriptions.  VHDL-AMS and Verilog-AMS continue to be actively supported, and working groups are making sure that the needs of designers are met.

Mladen points out that “recent extensions in standardization efforts for Real Number Modeling (RNM) enable effective abstraction of analog for simulation and functional verification at almost digital speeds.  Cadence provides tools for automating analog behavioral and RNM model generation and validation. In the last couple of years, adoption of RNM has been on the rise, driven by the verification challenges of complex mixed-signal designs.”

Design verification is practically always the costliest part of development.  This is due partly to the lack of effective correct-by-construction tools and, obviously, to the increasing complexity of designs, which are often the product of several company design teams as well as the use of third party IP.

Steve Smith, Sr. Marketing Director, Analog/Mixed-signal Verification at Synopsys, pointed out that: “The need for exhaustive verification of mixed-signal SoCs means that verification teams need to perform mixed-signal simulation as part of their automated design regression test processes. To achieve this, mixed-signal verification is increasingly adopting techniques that have been well proven in the purely digital design arena. These techniques include automated stimulus generation, coverage- and assertion-driven verification combined with low-power verification extended to support analog and mixed-signal functionality.  As analog/mixed-signal circuits are developed, design teams will selectively replace high-level models with the SPICE netlist and utilize the high performance and capacity of a FastSPICE simulator coupled to a high-performance digital simulator. This combination provides acceleration for mixed-signal simulation with SPICE-like accuracy to adequately verify the full design. Another benefit of using FastSPICE in this context is post-layout simulation for design signoff within the same verification testbench environment.”

He continued by saying that: “An adjacent aspect of the tighter integration between analog and digital relates to power management – as mixed-signal designs require multiple power domains, low-power verification is becoming more critical. As a result of these growing challenges, design teams are extending proven digital verification methodologies to mixed-signal design.  Accurate and efficient low-power and multiple power domain verification require both knowledge of the overall system’s power intent and careful tracking of signals crossing these power domains. Mixed-signal verification tools are available to perform a comprehensive set of static (rule-based) and dynamic (simulation run-time) circuit checks to quickly identify electric rule violations and power management design errors. With this technology, mixed-signal designers can identify violations such as missing level shifters, leakage paths or power-up checks at the SoC level and avoid significant design errors before tape-out. During detailed mixed-signal verification, these run-time checks can be orchestrated alongside other functional and parametric checks to ensure thorough verification coverage.”
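
As a rough illustration of what one such static check does, the sketch below flags nets that cross between power domains at different voltages without a level shifter. The domains, voltages and nets are invented, and real tools derive this information from UPF/CPF power intent and the netlist rather than from hand-written tables; this is not a depiction of Synopsys' tools.

    # Toy static check for one low-power rule: a signal that crosses between power
    # domains at different voltages must pass through a level shifter.
    # Domains, voltages and nets are invented; real tools read UPF/CPF intent.

    domains = {"AON": 1.2, "ANALOG": 1.8, "CPU": 0.9}   # hypothetical supply voltages (V)

    # (net, source domain, sink domain, has level shifter on the path?)
    crossings = [
        ("adc_done", "ANALOG", "CPU",    False),
        ("dac_ctrl", "CPU",    "ANALOG", True),
        ("irq_wake", "AON",    "CPU",    False),
    ]

    for net, src, dst, shifted in crossings:
        if domains[src] != domains[dst] and not shifted:
            print(f"VIOLATION: {net}: {src} ({domains[src]} V) -> "
                  f"{dst} ({domains[dst]} V) has no level shifter")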

Another significant challenge for designers is placing and routing the design.  Mladen described the way Cadence supports this task: “Analog designers use digital logic to calibrate and tune their high performance circuits. This is called the digitally-assisted-analog approach. There are often tens of thousands of gates integrated with analog in a mixed-signal block, both at the periphery, making the block ready for integration into the SoC, and embedded inside the hierarchy of the block. Challenges in realizing these kinds of designs are:

- time and effort needed for iteration among analog and digital designers,

- black-boxing among analog and digital domains with no transparency during the iterations,

- sharing data and constraints among analog and digital designers,

- performing ECO particularly late in the design cycle,

- applying static timing analysis across timing paths spanning gates embedded in hierarchy of mixed-signal block(s).

Cadence has addressed these challenges by integrating the Virtuoso custom and Encounter digital platforms on a common OpenAccess database, enabling data and constraint sharing without translation and full transparency between the analog and digital parts of the layout in both platforms.”

Casale-Rossi described how Synopsys addresses the problem: “Analog and mixed-signal has always had special placement (e.g. symmetry, rotation) and routing (e.g. shielding, length/resistance matching) requirements. With Synopsys’ Custom Designer – IC Compiler round-trip, and with our Galaxy Custom Router, we are providing our partners and customers with an integrated environment for digital and analog & mixed-signal design implementation that helps address the challenges.”

Conclusion

The bottom line is that EDA tool providers, standards-developing organizations and semiconductor foundries have significant further work to do.  IC complexity will increase, and with it mixed-signal designs.  Mixed-signal third party IP is by nature tied to a specific foundry and process, since it must be developed and verified at the transistor level.  Thus the complexity of integrating IP development and fabrication will limit the number of IP providers to those companies big enough to obtain the attention of the foundry.

Newer Processes Raise ESL Issues

Wednesday, August 13th, 2014

Gabe Moretti, Senior Editor

In June I wrote about how EDA changed its traditional flow in order to support advanced semiconductor manufacturing.  I do not think that the changes, although significant and meaningful, are enough to sustain the increase in productivity required by financial demands.  What is necessary, in my opinion, is better support for system level developers.

Leaving the solution to design and integration problems to a later stage of the development process creates more complexity since the network impacted is much larger.  Each node in the architecture is now a collection of components and primitive electronic elements that dilute and thus hide the intended functional architecture.

Front End Design Issues

Changes in the way front end design is done are being implemented.  Anand Iyer, Calypto’s Director of Product Marketing, focused on the need to plan power at the system level.  He observed: “Addressing DFP issues needs to be done in the front end tools, as the RTL logic structure and architecture choices determine 80% of the power. Designers need to minimize the activity/clock frequency across their designs, since this is the only metric to control dynamic power. They can achieve this in many ways: (1) reducing activity permanently from their design, (2) reducing activity temporarily during the active mode of the design.”  Anand went on to cover the two points: “The first point requires a sequential analysis of the entire design to identify opportunities where we can save power. These opportunities need to be evaluated against possible timing and area impact. We need automation when it comes to large and complex designs. PowerPro can help designers optimize their designs for activity.”

As for the other point he said: “The second issue requires understanding the interaction of hardware and software. Techniques like power gating and DVFS fall under this category.”
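
Both points trace back to the standard CMOS switching-power relation, P_dyn = α·C·V²·f. The short sketch below, with made-up numbers, shows why halving activity (point one) and scaling voltage and frequency together (DVFS, point two) have such leverage.

    # Dynamic switching power P = alpha * C * V^2 * f, with invented numbers, showing
    # the leverage of cutting activity (alpha) versus scaling V and f together (DVFS).

    def p_dyn(alpha, c_farads, v_volts, f_hz):
        return alpha * c_farads * v_volts ** 2 * f_hz

    C = 2e-9                                           # hypothetical switched capacitance: 2 nF
    baseline  = p_dyn(0.20, C, 1.0, 500e6)             # 20% activity, 1.0 V, 500 MHz
    less_tog  = p_dyn(0.10, C, 1.0, 500e6)             # sequential clock gating halves activity
    dvfs_mode = p_dyn(0.20, C, 0.8, 300e6)             # DVFS operating point: 0.8 V, 300 MHz

    print(f"baseline          : {baseline * 1e3:.0f} mW")   # 200 mW
    print(f"half the activity : {less_tog * 1e3:.0f} mW")   # 100 mW
    print(f"DVFS 0.8 V/300 MHz: {dvfs_mode * 1e3:.0f} mW")  # ~77 mW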

Anand also recognized that high level synthesis can be used to achieve low power designs.  Starting from C++ or SystemC, architects can produce alternative microarchitectures and see the power impact of their choices (with physically aware RTL power analysis).  This is hugely powerful to enable exploration because if this is done only at RTL it is time consuming and unrealistic to actually try multiple implementations of a complex design.  Plus, the RTL low power techniques are automatically considered and automatically implemented once you have selected the best architecture that meets your power, performance, and cost constraints.

Steve Carlson, Director of Marketing at Cadence, pointed out that about a decade ago design teams had their choice of about four active process nodes when planning their designs.  He noted that: “In 2014 there are ten or more active choices for design teams to consider.  This means that the solution space for product design has become a lot richer.  It also means that design teams need a more fine-grained approach to planning and vendor/node selection.  It follows that the assumptions made during the planning process need to be tested early and often, and with as much accuracy as possible at each stage. The power/performance and area trade-offs create end product differentiation.  One area that can certainly be improved is the connection to trade-offs between hardware architecture and software.  Getting more accurate insight into power profiles can enable trade-offs at the architectural and microarchitectural levels.

Perhaps less obvious is the need for process accurate early physical planning (i.e., understands design rules for coloring, etc.).”

As shown in the following figure, designers have to be aware that parts of the design are coming from different suppliers, and thus Steve states that: “It is essential for the front-end physical planning/prototyping stages of design to be process-aware to prevent costly surprises down the implementation road.”

Simulation and Verification

One of the major recent changes in IC design is the growing number of mixed-signal designs.  They present new design and verification challenges, particularly when new advanced processes are targeted for manufacturing.  On the standards development side, Accellera has responded by releasing a new version of its Verilog-AMS.  It is a mature standard, originally released in 2000, built on top of the Verilog subset of the IEEE 1800-2012 SystemVerilog standard.  The standard defines how analog behavior interacts with event-based functionality, providing a bridge between the analog and digital worlds. To model continuous-time behavior, Verilog-AMS is defined to be applicable to both electrical and non-electrical system descriptions.  It supports conservative and signal-flow descriptions and can also be used to describe discrete (digital) systems and the resulting mixed-signal interactions.

The revised standard, Verilog-AMS 2.4, includes extensions to benefit verification, behavioral modeling and compact modeling. There are also several clarifications and over 20 errata fixes that improve the overall quality of the standard. Resources on how best to use the standard and a sample library with power and domain application examples are available from Accellera.

Scott Little, chair of the Verilog AMS WG stated: “This revision adds several features that users have been requesting for some time, such as supply sensitive connect modules, an analog event type to enable efficient electrical-to-real conversion and current checker modules.”

The standard continues to be refined and extended to meet the expanding needs of various user communities. The Verilog-AMS WG is currently exploring options to align Verilog-AMS with SystemVerilog in the form of a dot standard to IEEE 1800. In addition, work is underway to focus on new features and enhancements requested by the community to improve mixed-signal design and verification.

Clearly another aspect of verification that has grown significantly in the past few years is the availability of Verification IP modules.  Together with the new version of the UVM 1.2 (Universal Verification Methodology) standard just released by Accellera, they represent a significant increment in the verification power available to designers.

Jonah McLeod, Director of Corporate Marketing Communications at Kilopass, is also concerned about analog issues.  He said: “Accelerating SPICE has to be the major tool development of this generation of tools. The biggest problem designers face in complex SoCs is getting corner cases to converge. This can be time consuming and imprecise with current generation tools.  Start-ups claiming Monte Carlo SPICE acceleration, like Solido Design Automation and CLK Design Automation, are attempting to solve the problem. Both promise to achieve SPICE-level accuracy on complex circuits within a couple of percentage points in a fraction of the time.”

One area of verification that is not often covered is its relationship with manufacturing test.  Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, told me that: “The enormous complexity of a deep submicron (32, 28, 20, 14 nm) SoC has a profound impact on manufacturing test. Today, many test engineers treat the SoC as a black box, applying stimulus and checking results only at the chip I/O pins. Some write a few simple C tests to download into the SoC’s embedded processors and run as part of the manufacturing test process. Such simple tests do not validate the chip well, and many companies are seeing returns with defects missed by the tester. Test time limitations typically prohibit the download and run of an operating system and user applications, but clearly a better test is needed. The answer is available today: automatically generated C test cases that run on “bare metal” (no operating system) while stressing every aspect of the SoC. These run realistic user scenarios in multi-threaded, multi-processor mode within the SoC while coordinating with the I/O pins. These test cases validate far more functionality and performance before the SoC ever leaves the factory, greatly reducing return rates while improving the customer experience.”

Blog Review – Mon. August 11 2014

Monday, August 11th, 2014

VW e-Golf; Cadence’s power signoff launch; summer viewing; power generation research.
By Caroline Hayes, Senior Editor

VW’s plans for all-electric Golf models have captured the interest of John Day, Mentor Graphics. He attended the Management Briefing Seminar and reports on the carbon offsetting and a solar panel co-operation with SunPower. I think I know what Day will be travelling in to get to work.

Cadence has announced plans to tackle power signoff this week and Richard Goering elaborates on the Voltus-Fi Custom Power Integrity launch and provides a detailed and informative blog on the subject.

Grab some popcorn (or not) and take in this summer’s blockbusters, as lined up by Scott Knowlton, Synopsys. Perhaps not the next Harry Potter series, but certainly a must-see for anyone who missed the company’s demos at PCI-SIG DevCon. This humorous blog continues the cinema analogy for “Industry First: PCI Express 4.0 Controller IP”, “DesignWare PHY IP for PCI Express at 16Gb/s”, “PCI PHY and Controller IP for PCI Express 3.0” and “Synopsys M-PCIe Protocol Analysis with Teledyne LeCroy”.

Fusion energy could be the answer for energy demands and Steve Leibson, Xilinx, shares Dave Wilson’s (National Instruments) report of a fascinating project by National Instruments to monitor and control a compact spherical tokamak (used as neutron sources) with the UK company, Tokamak Solutions.

An EDA View of Semiconductor Manufacturing

Wednesday, June 25th, 2014

Gabe Moretti, Contributing Editor

The concern that there is a significant break between tools used by designers targeting leading edge processes, those at 32 nm and smaller to be precise, and those used to target older processes was dispelled during the recent Design Automation Conference (DAC).  In his address as a DAC keynote speaker in June at the Moscone Center in San Francisco Dr. Antun Domic, Executive Vice President and General Manager, Synopsys Design Group, pointed out that advances in EDA tools in response to the challenges posed by the newer semiconductor process technologies also benefit designs targeting older processes.

Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic’s remarks and stated: “There seems to be a misconception that all advanced designs need to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double for advanced geometries in many cases, more functionality can also be added. In this age of disposable mobile products, where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.

However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn’t necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes.  There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those with noise cancellation, touchscreen, and MEMS (Micro-Electro-Mechanical Systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move to smaller process geometries. Other chips at established nodes tend to also have some analog capability, which doesn’t make them any less complex.”

This is very important since the number of companies that can afford to use leading edge processes is diminishing due to the very high ($100 million and more) non-recurring investment required.  And of course the cost of each die is also greater than with previous processes.  If the tools could only be used by those customers doing leading edge designs, revenues would necessarily fall.

Design Complexity

Steve Carlson, Director of Marketing at Cadence, states that “when you think about design complexity there are a few axes that might be used to measure it.  Certainly raw gate count or transistor count is one popular measure.  From a recent article in Chip Design, a look at complexity on a log scale shows the billion mark has been eclipsed.”  Figure 1, courtesy of Cadence, shows the increase of transistors per die over the last 22 years.

Figure 1.

Steve continued: “Another way to look at complexity is looking at the number of functional IP units being integrated together.  The graph in figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following.  This is another indication of the complexity of the design, rather than of the complexity of designing for a particular node.  At the heart of the process complexity question are metrics such as number of parasitic elements needed to adequately model a like structure in one process versus another.”  It is important to notice that the percentage of IP blocks provided by third parties is getting close to 50%.

Figure 2.

Steve concludes with: “Yet another way to look at complexity is through the lens of the design rules and the design rule decks.  The graphs below show the upward trajectory for these measures in a very significant way.” Figure 3, also courtesy of Cadence, shows the increased complexity of the Design Rules provided by each foundry.  This trend makes second sourcing a design impossible, since having a second source foundry would be similar to having a different design.

Figure 3.

Another problem designers have to deal with is the increasing complexity due to decreasing feature sizes.  Anand Iyer, Calypto Director of Product Marketing, observed that: “Complexity of design is increasing across many categories such as variability, Design for Manufacturability (DFM) and Design for Power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst-casing the variation, which can lead to reduced design performance. DFM complexity is causing design performance to be evaluated across many more corners than designers were used to. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor in adding design complexity because power, especially dynamic power, is a major issue at these process nodes. Voltage cannot scale due to noise margin and process variation considerations, and capacitance is relatively unchanged or increasing.”

Impact on Back End Tools.

I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry will eventually mean that a foundry-specific place and route tool would be better than adapting a generic tool to a design rule file that is becoming very complex.  In my mind complexity means a greater probability of errors due to ambiguity among a large set of rules.  Building rule-specific place and route tools would thus directly lower the number of design rule checks required.

Mary Ann White of Synopsys answered: “We do not believe so.  Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects required to handle the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn’t mean that the tool has to be different.  The use of multi patterning, coloring and decomposition is the same process even if the design rules between foundries may differ.”

Steve Carlson of Cadence shares that opinion: “There have been subtle differences between requirements at new process nodes for many generations.  Customers do not want to have different tool strategies for a second source foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration).  In more recent generations of process nodes there has been a growing divergence of the requirements to support like-named nodes. This has led to added cost for EDA providers.  It is doubtful that different tools will be spawned for different foundries.  How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision.  The use model users want is singular across all foundry options.  How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy.  Time will tell.”

This is clear for now.  But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively.  Changing foundry will almost always be a business decision based on financial considerations.

New processes also change the requirements for TCAD tools.  At the just finished DAC conference I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.

He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory at advanced technology nodes.  Modeling and simulation play an increasingly important role in the DTCO process, with the benefits of speeding up and reducing the cost of technology, circuit and system development and hence reducing time-to-market.  He said: “It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirements for design and optimization of particular circuits, systems and corresponding products.  One of the main challenges is to accurately factor the device variability into the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel needed to secure the required electrostatic integrity. However, the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to the process-induced variability related predominantly to silicon channel thickness or shape variation.”  He continued: “However, until now TCAD simulation, compact model extraction and circuit simulation are typically handled by different groups of experts and often by separate departments in the semiconductor industry, and this leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction and circuit simulation tools are typically developed and licensed by different EDA vendors does not help the DTCO practices.”

Ansys pointed out that in advanced FinFET process nodes, the operating voltage for the devices has been drastically reduced. This reduction in operating voltage has also led to a decrease in operating margins for the devices. With several transient modes of operation in low-power ICs, having an accurate representation of the package model is mandatory for accurate noise coupling simulations. Distributed package models with bump-level resolution are required for performing chip-package-system simulations for accurate noise coupling analysis.

Further Exploration

The topic of semiconductor manufacturing has generated a large number of responses.  As a result, the next monthly article will continue to cover it, with particular focus on the impact of leading edge processes on EDA tools and practices.

Blog Review – Mon. June 16 2014

Monday, June 16th, 2014

Naturally, there is a DAC theme to this week’s blogs – the old, the new, together with the soccer/football debate and the overlooked heroes of technology. By Caroline Hayes, Senior Editor.

Among those attending DAC 2014, Jack Harding, eSilicon rejoiced in seeing some familiar faces but mourns the lack of new faces and the absence of a rock and roll generation for EDA.

Football fever has affected Shelly Stalnaker, Mentor Graphics, as she celebrates the World Cup coming to a TV screen near you. The rest of the world may call soccer football but the universality of IC design and verification is an analogy that will resonate with sport enthusiasts everywhere.

Celebrating Alan Turing, Aurelien, Dassault Systemes, looks at the life and achievements of the man who broke the Enigma code in WWII, invented the first computer in 1936 and defined artificial intelligence. The fact that he wasn’t mentioned in the 2001 film Enigma, about the code breakers, reflects how overlooked this incredible man was.

Mixed signal IC verification was the topic for a DAC panel, and Richard Goering, Cadence runs down what was covered, from tools and methodologies, the prospects for scaling and a hint at what’s next.

Digital Designers Grapple with Analog Mixed Signal Designs

Tuesday, June 10th, 2014

Today’s growth of analog and mixed signal circuits in the Internet of Things (IoT) applications raises questions about compiling C-code, running simulations, low power designs, latency and IP integration.

Often, the most valuable portion of a technical seminar is found in the question-and-answer (Q&A) session that follows the actual presentation. For me, that was true during a recent talk on the creation of mixed signal devices for smart analog and the Internet of Things (IoT) applications. The speakers included Diya Soubra, CPU Product Marketing Manager and Joel Rosenberg, Platform Marketing Director at ARM; and Mladen Nizic, Engineering Director at Cadence. What follows is my paraphrasing of the Q&A session with reference to the presentation where appropriate. – JB

Question: Is it possible to run C and assembly code on an ARM® Cortex®-M0 processor in Cadence’s Virtuoso for custom IC design? Is there a C-compiler within the tool?

Nizic: The C compiler comes from Keil®, ARM’s software development kit. The ARM DS-5 Development Studio is an Eclipse based tool suite for the company’s processors and SoCs. Once the code is compiled, it is run together with RTL software in our (Cadence) Incisive Mixed Signal simulator. The result is a simulation of the processor driven by an instruction set with all digital peripherals simulated in RTL or at the gate level. The analog portions of the design are simulated at the appropriate behavioral level, i.e., SPICE transistor level, electrical behavioral Verilog-A or a real number model. [See the mixed signal trends section of “Moore’s Cycle, Fifth Horseman, Mixed Signals, and IP Stress.”]

You can use electrical behavioral models like Verilog-A and VHDL-A/-AMS to simulate the analog portions of the design. But real number models have become increasingly popular for this task. With real number models, you can model analog signals with variable amplitudes but discrete time steps, just as required by digital simulation. Simulations with a real number model representation for analog run at almost the same speed as the digital simulation and with very little penalty in accuracy. For example, here (see Figure 1) are the results of a system simulation where we verify how quickly the Cortex-M0 would use a regulation signal to bring pressure to a specified value. It takes some 28 clock cycles. Other test bench scenarios might be explored, e.g., sending the Cortex-M0 into sleep mode if no changes in pressure are detected, or waking up the processor in a few clock cycles to stabilize the system. The point is that you can swap these real number models for electrical models in Verilog-A, or for transistor models, to redo your simulation and verify that the transistor model performs as expected.

Figure 1: The results of a Cadence simulation to verify the accuracy of a Cortex-M0 to regulate a pressure monitoring system. (Courtesy of Cadence)
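
The essence of that abstraction can be shown with a toy discrete-time loop in Python: the "analog" pressure is just a real number updated once per clock, and a digital decision nudges it toward a target. This is only a sketch of the real-number-modeling idea, not the Cadence/ARM testbench or the Cortex-M0 firmware, and every constant is invented.

    # Toy real-number-model style loop: the "analog" pressure is a real value updated
    # once per digital clock; a simple digital decision drives it toward a target.
    # Illustrative only -- not the Cadence/ARM testbench; all constants are invented.

    target, tol = 100.0, 1.0      # arbitrary pressure units
    pressure, drive = 60.0, 0.0   # real-valued "analog" state and actuator command

    for cycle in range(1, 200):
        # digital controller decision (firmware on the processor in the real flow)
        drive = 1.0 if pressure < target else 0.0
        # discrete-time "analog" plant: one real-number update per clock
        pressure += 5.0 * drive - 0.05 * pressure
        if abs(pressure - target) < tol:
            print(f"within tolerance after {cycle} clock cycles: pressure = {pressure:.1f}")
            break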

Question: Can you give some examples of applications where products are incorporating analog capabilities and how they are using them?

Soubra: Everything related to motor control, power conversion and power control are good examples of where adding a little bit of (processor) smarts placed next to the mixed signal input can make a big difference. This is a clear case of how the industry is shifting toward this analog integration.

Question: What capabilities does ARM offer to support the low power requirement for mixed signal SoC design?

Rosenberg: The answer to this question has both a memory and logic component. In terms of memory, we offer the extended range register file compilers which today can go up to 256k bits. Even though the performance requirement for a given application may be relatively low, the user will want to boot from the flash into the SRAM or the register file instance. Then they will shut down the flash and execute out of the RAM as the RAM offers significantly lower active as well as stand-by power compared to executing out of flash.

On the logic side, we offer a selection of 7-, 9- and 12-track libraries. Within that, there are three Vt options – for high, nominal and lower speeds. Beyond that we also offer power management kits that provide things like level shifters and power gating so the user can shut down inactive parts of the SoC circuit.

Question: What are the latency numbers for waking up different domains that have been put to sleep?

Soubra: The numbers that I shared during the presentation do not include any peripherals, since I have no way of knowing what peripherals will be added. In terms of who is consuming what power, the normal progression tends to be the processor, peripherals, bus and then the flash block. The “wake-up” latency depends upon the implementation itself. You can go from tens of cycles to multiples of tens depending upon how the clocks and phase locked loops (PLLs) are implemented. If we shut everything down, then a few cycles will be required before everything goes off and before we can restart the processor. But we are talking about tens, not hundreds, of cycles.

Question: And for the wake-up clock latency?

Soubra: Wake-up is the same thing, because when the wake-up controller says “lets go,” it has to restart all the clocks before it starts the processor. So it is exactly the same amount.

ARM Cortex-M low power technologies.

Question: What analog intellectual property (IP) components are offered by ARM and Cadence? How can designers integrate their own IP in the flow?

Nizic: At Cadence, through the acquisition of Cosmic, we have a portfolio of applicable analog and mixed signal IP, e.g., converters, sensors and the like. We support all design views that are necessary for this kind of methodology including model abstracts from real number to behavioral models. Like ARM’s physical IP, all of ours are qualified for the various foundry nodes so the process of integrating IP and silicon is fairly smooth.

Soubra: From ARM’s point-of-view, we are totally focused on the digital part of the SoC, including the processors, bus infrastructure components, peripherals, and memory controllers that are part of the physical IP (standard cell libraries, I/O cells, SRAM, etc). Designers integrate the digital parts (processors, bus components, peripherals and memory controller) in RTL design stages. Also, they can add the functional simulation models of memories and I/O cells in simulations, together with models of analog components from Cadence. The actual physical IP are integrated during various implementation stages (synthesis, placement and routing, etc).

Question: How can designers integrate their own IP into the SoC?

Nizic: Some of the capabilities and flows that we described are actually used to create customer IP for later reuse in SoC integration. There is a centric flow that can be used, whether the customer’s IP is pure analog or contains a small amount of standard cell digital. For example, the behavioral modeling capabilities help package this IP for the functional simulation in full chip verification. But getting the IP ready is only one aspect of the flow.

From a physical abstract it’s possible to characterize the IP for use in timing driven mode. This approach would allow you to physically verify the IP on the SoC for full chip verification.

Blog Review – Mon. June 02 2014

Monday, June 2nd, 2014

In case you didn’t know, DAC is upon us, and ARM has some sightseeing tips – within the confines of the show-space. Electric vehicles are being taken seriously in Europe and North America, Dassault Systemes has some manufacturing-design tips and Sonics looks back over 20 years of IP integration. By Caroline Hayes, Senior Editor.

Electric vehicles – it’s an easy sell for John Day, Mentor Graphics, but his blog has some interesting examples from Sweden of electric transport and infrastructure ideas.

Thanks are due to Leah Schuth, ARM, who can save you some shoe leather if you are in San Francisco this week. She has been very considerate and lumped together all the best bits to see at this week’s DAC. OK, the list may be a bit ARM-centric, but if you want IoT, wearable electronics and energy management, you know where to go.

We all want innovation but can the industry afford it? Hoping to instill best practice, Eric, Dassault Systemes, writes an interesting, detailed piece on design-manufacturing collaboration for a harmonious development cycle.

A tutorial on your PC is a great way to learn – in this case, the low power advantage of LPDDR4 over earlier LPDDR memory. Corrie Callenbach brings this whiteboard tutorial by Kishote Kasamsetty to our attention in Whiteboard Wednesdays—Trends in the Mobile Memory World.

A review of IP integration is presented by Drew Wingard, Sonics, who asks what has been learned over the last two decades, what matters and why.

Deeper Dive – Is IP reuse good or bad?

Friday, May 30th, 2014

To buy or to reuse, that is the question. Caroline Hayes, Senior Editor, asked four industry experts – Carsten Elgert (EC), Product Marketing Director, IPG (IP Group), Cadence; Tom Feist (TF), Senior Marketing Director, Design Methodology, Xilinx; Dave Tokic (DT), Senior Director, Partner Ecosystems and Alliances, Xilinx; and Warren Savage (WS), President and CEO, IPextreme – about the pros and cons of IP reuse versus third party IP.

What are the advantages and disadvantages when integrating or re-using existing IP?

WS: This is sort of analogous to asking what are the advantages/disadvantages of living a healthy lifestyle? The disadvantages are few, the advantages myriad. But in essence it’s all about practicality. If you can re-use a piece of technology, it means you don’t have to spend money developing something different, which includes a huge cost of verification. Today’s chips are simply too large to functionally verify every gate. Part of every chip verification strategy assumes that pre-existing IP has already had its verification during its development, and if it is silicon-proven, this further decreases the risk of any latent defects being discovered. The only reason not to reuse an IP is that the IP itself is lacking in some way that makes the case for creating a new IP instead.

TF: Improved productivity. Reuse can have disadvantages when using older IP on new technologies; it is not always possible to target the newer features of a device with old IP.

DT: IP reuse is all about improving productivity and can result in significantly shrinking the design time, especially with configurable IP. Challenges arise when the IP itself needs to be modified from the original, which then requires additional design and verification time. Verification in general can be more challenging, as most IP is verified in isolation and not in the context of the system. For example, if the IP is being used in a way the provider didn’t “think of” in their verification process, it may have bugs that are discovered during the integration verification phase; the problem then has to be traced back to determine which IP has the issue and to correct and re-verify that IP.

EC: The benefits of using your own IP in-house are that you know what the IP is doing and can use it again – it is also not available as third party IP, which helps differentiation. The disadvantage is that it is rarely documented well enough to be used by different departments. The same engineers know what they are getting when they reuse their own IP, but properly documenting it so that a neighboring department can build a product with it can be time-consuming. It is also the case that unwanted behavior is not verified. However, it is cheaper and it works.

What are the advantages and disadvantages of using third party IP?

WS: The advantage of using third party IP is usually related to that company being a domain expert in a certain field. By using IP from that company, you are in fact licensing expertise from that company and putting it to use in an effective way. Think of it like going to dinner at a 4-star Michelin rated restaurant. The ingredients may be ordinary but how they are assembled is far more exceptional than anything the ordinary person can achieve on their own.

TF: Xilinx cannot cover all areas. Having a third party ecosystem allows us to increase the reach to customers. We have qualification processes in place to ensure the quality of the IP is up to the required standard.

DT: Xilinx and its ecosystem provide more than 600 cores across all markets, with over 130 IP providers in our Alliance Program. These partners not only provide more “fundamental” standards-based IP, but also provide a very rich set of domain- and application-specific IP that would be difficult for Xilinx to develop and support. This allows Xilinx technology to be more easily and quickly adopted in hundreds of applications. Some of the challenges come in terms of consistency of deliverables, quality, and business models. Xilinx has a mature process of partner qualification and also works with the partner to expose IP quality metrics when we promote a partner IP product, which helps customers make smarter decisions when choosing a provider or IP core.

EC: Third party IP means there is no rewriting. Without it, it could take hundreds of man-years to develop a function that is not a major selling point for a design. Third party IP is compatible and bug-free, or nearly so. Developing such a function yourself would be like spending hundreds of hours redesigning the steering wheel of a car – it is not a differentiating selling point of the vehicle.

Is third party IP a justifiable risk in terms of cost and/or compatibility?

DT: The third party IP ecosystem for ASIC and programmable technologies has been around for decades. There are many respected and high quality providers out there, from smaller specialized IP core partners such as Xylon, OmniTek, Northwest Logic, and PLDA, to name a few, up to industry giants like ARM and Synopsys generating hundreds of millions of dollars in annual revenue. But ultimately the customer is responsible for determining the system, cost, and schedule requirements, evaluating IP options, and making the “build vs. buy” decision.

EC: I would reformulate that question: Can [the industry] live without IP? Then, the risk is justifiable.

How important are industry standards in IP integration? What else would you like to see?

TF: IP standards are very important. For example, Xilinx is on the IEEE P 1735 working group for IP security. This is very important to protect customer, 3rd party and Xilinx IP throughout flows that may contain 3rd party EDA tools and still allow everyone to interoperate on the IP. We hope to see the 2.0 revision of this standard ratified this year so all tool vendors and Xilinx can adopt it and make IP truly portable, yet protected.

DT: Another example is AMBA AXI4, where Xilinx worked closely with ARM to define this high speed interconnect standard to be optimized for integrating IP on both ASIC and programmable logic platforms.

WS: Today, not so much. There has been considerable discussion on this topic for the last 15 years, and various industry initiatives have come and gone over the years. The most successful one to date has been IP-XACT. There is a massive level of IP reuse today, and the lack of standards has not slowed it. I am seeing that the way the industry is handling this problem is through the deployment of pre-integrated subsystems that include both a collection of IP blocks and the embedded software that drives them. I think that within another five years, the idea of “Lego-like” construction tools will die, as they do nothing to solve the verification problem associated with such constructions.

Have you any statistics you can share on the value or TAM (Total Available Market) for EDA IP?

WS: I assume you mean IP, which I like to point out is distinct from EDA? True, many EDA players are adopting an IP strategy, but that is primarily because the growth in IP and the stagnation of the EDA markets are forcing those players to find new growth areas. Sustained double-digit growth is hard to ignore.

To the TAM (total available market) question, a lot of market research says the market is around $2 billion today. I have long postulated that the real IP market is at least twice the size of the stated market, sort of like the “dark matter” theories in astrophysics. But even this ignores considerable amounts of patent licensing, embedded software licensing and such, which dwarf the $2 billion number.

TF: According to EDAC, semiconductor IP revenue totaled $486 million, a 4.2% increase compared to Q4 2012, and the four-quarter moving average increased 9.6%.

EC: For interface IP – interface IP alone, with no processor, no ARM – I can see the market growing 20% to $400 million [analyst IP Nest].
