Gabe Moretti, Contributing Editor
The concern that there is a significant break between the tools used by designers targeting leading-edge processes, specifically those at 32 nm and below, and the tools used to target older processes was dispelled at the recent Design Automation Conference (DAC). In his keynote address in June at the Moscone Center in San Francisco, Dr. Antun Domic, Executive Vice President and General Manager of the Synopsys Design Group, pointed out that advances in EDA tools made in response to the challenges posed by newer semiconductor process technologies also benefit designs targeting older processes.
Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic's remarks: "There seems to be a misconception that all advanced designs need to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double in many cases for advanced geometries, more functionality can also be added. In this age of disposable mobile products, where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.
However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn't necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes. There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those providing noise cancellation, touchscreen, and MEMS (Micro-Electro-Mechanical Systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move them to smaller process geometries. Other chips at established nodes also tend to include some analog capability, which doesn't make them any less complex."
This is very important, since the number of companies that can afford to use leading-edge processes is diminishing due to the very high non-recurring investment required ($100 million and more). And of course the cost of each die is also greater than with previous processes. If the tools could only be used by customers doing leading-edge designs, revenues would necessarily fall.
Steve Carlson, Director of Marketing at Cadence, states that "when you think about design complexity there are a few axes that might be used to measure it. Certainly raw gate count or transistor count is one popular measure. From a recent article in Chip Design, a look at complexity on a log scale shows the billion mark has been eclipsed." Figure 1, courtesy of Cadence, shows the increase in transistors per die over the last 22 years.
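To see why a log scale is the only readable way to plot this trend, consider a small, purely illustrative Python sketch. The baseline count and the two-year doubling period are my own round numbers, not figures taken from Cadence's chart:

```python
# Illustrative only: transistor count under a fixed Moore's-law-style
# doubling period. Baseline figures are hypothetical round numbers.

def transistors_per_die(year, base_year=1992, base_count=3_000_000,
                        doubling_years=2.0):
    """Estimate transistors per die assuming a constant doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in range(1992, 2015, 4):
    print(f"{year}: ~{transistors_per_die(year):,.0f} transistors")
# Under these assumptions the estimate crosses the one-billion mark
# around 2010; on a linear axis the early decades would be invisible.
```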
Steve continued: "Another way to look at complexity is the number of functional IP units being integrated together. Figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following. This is another indication of the complexity of the design, rather than of the complexity of designing for a particular node. At the heart of the process complexity question are metrics such as the number of parasitic elements needed to adequately model a like structure in one process versus another." It is worth noting that the percentage of IP blocks provided by third parties is approaching 50%.
Steve concluded: "Yet another way to look at complexity is through the lens of the design rules and the design rule decks. The graphs below show the upward trajectory of these measures in a very significant way." Figure 3, also courtesy of Cadence, shows the increasing complexity of the design rules provided by each foundry. This trend makes second sourcing a design practically impossible, since targeting a second foundry would amount to producing a different design.
Another problem designers have to deal with is the increasing complexity caused by decreasing feature sizes. Anand Iyer, Calypto Director of Product Marketing, observed: "Complexity of design is increasing across many categories such as variability, Design for Manufacturability (DFM), and Design for Power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst-casing the variation, which can lead to reduced design performance. DFM complexity is forcing design performance to be evaluated across many more corners than designers were used to. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor in design complexity because power, especially dynamic power, is a major issue at these process nodes. Voltage cannot scale due to noise margin and process variation considerations, and capacitance is relatively unchanged or increasing."
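Iyer's last point rests on the standard CMOS dynamic-power relation, P = α·C·V²·f. A minimal sketch with hypothetical numbers of my own choosing shows why losing the ability to scale voltage hurts so much:

```python
# Standard CMOS dynamic-power model: P = alpha * C * V^2 * f.
# All values below are hypothetical, chosen only to show the V^2 lever.

def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Switching (dynamic) power: activity * capacitance * V^2 * frequency."""
    return alpha * c_farads * v_volts ** 2 * f_hertz

base = dynamic_power(alpha=0.15, c_farads=2e-9, v_volts=0.9, f_hertz=2e9)
fast = dynamic_power(alpha=0.15, c_farads=2e-9, v_volts=0.9, f_hertz=3e9)
print(f"2 GHz @ 0.9 V: {base:.3f} W")
print(f"3 GHz @ 0.9 V: {fast:.3f} W")   # frequency up 1.5x, power up 1.5x
# Dropping to 0.7 V would cut power by ~40% through the V^2 term alone;
# that is precisely the lever that noise margins and variation now deny.
```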
Impact on Back-End Tools
I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry would eventually mean that a foundry-specific Place and Route tool would be better than adapting a generic tool to a design rules file that is becoming very complex. In my mind, complexity means a greater probability of errors due to ambiguity among a large set of rules. Building rule-specific Place and Route tools would thus directly lower the number of design rule checks required.
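To make the worry concrete, here is a toy sketch of the kind of geometric check a design rule deck encodes. The rule value and shapes are hypothetical; a real deck at an advanced node contains thousands of such rules with context-dependent exceptions, which is exactly where the ambiguity I am worried about creeps in:

```python
# Toy same-layer minimum-spacing check. Rectangles are (x0, y0, x1, y1)
# in nm; the 48 nm rule value is hypothetical.
import math

def gap(a, b):
    """Minimum edge-to-edge distance between two axis-aligned rectangles;
    0 if they touch or overlap (overlap would be a different rule)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return math.hypot(dx, dy)

def spacing_violations(shapes, min_spacing_nm=48):
    """Naive all-pairs scan; production DRC uses scan-line/spatial indexes."""
    return [(a, b) for i, a in enumerate(shapes) for b in shapes[i + 1:]
            if 0 < gap(a, b) < min_spacing_nm]

# Two wires 30 nm apart violate the hypothetical 48 nm rule:
print(spacing_violations([(0, 0, 100, 20), (130, 0, 230, 20)]))
```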
Mary Ann White of Synopsys answered: "We do not believe so. Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects involved in handling the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn't mean that the tool has to be different. The use of multi-patterning, coloring, and decomposition is the same process even if the design rules differ between foundries."
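For readers unfamiliar with decomposition, the "coloring" step White mentions is, at its core, a two-coloring of a conflict graph: features closer than the minimum same-mask spacing must land on different masks. The sketch below is a simplification (real decomposers also handle stitching and odd-cycle repair), and the graph in it is hypothetical:

```python
# Minimal double-patterning decomposition as BFS 2-coloring of a
# conflict graph. Nodes are layout features; an edge means the two
# features are too close to share a mask. Graph below is hypothetical.
from collections import deque

def two_color(conflicts):
    """Return {node: 0 or 1} (mask assignment), or None if an odd
    cycle makes a legal two-mask split impossible."""
    color = {}
    for start in conflicts:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in conflicts[node]:
                if neighbor not in color:
                    color[neighbor] = color[node] ^ 1
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return None  # odd cycle: a double-patterning violation
    return color

# Three features in a row conflict pairwise with their neighbors:
print(two_color({"A": ["B"], "B": ["A", "C"], "C": ["B"]}))
```

The algorithm is foundry-neutral; only the spacing thresholds that define the edges change from one design rule deck to another, which is White's point.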
Steve Carlson of Cadence shares that opinion: "There have been subtle differences between requirements at new process nodes for many generations. Customers do not want to have different tool strategies for a second-source foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration). In more recent generations of process nodes there has been a growing divergence in the requirements to support like-named nodes. This has led to added cost for EDA providers. It is doubtful that different tools will be spawned for different foundries. How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision. The use model users want is singular across all foundry options. How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy. Time will tell."
This is clear for now. But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively. Changing foundry will almost always be a business decision based on financial considerations.
New processes also change the requirements for TCAD tools. At the just-concluded DAC I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.
He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory at advanced technology nodes. Modeling and simulation play an increasingly important role in the DTCO process, speeding up and reducing the cost of technology, circuit, and system development and hence shortening time-to-market. He said: "It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirements for the design and optimization of particular circuits, systems, and the corresponding products. One of the main challenges is to factor the device variability accurately into the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel, needed to secure the required electrostatic integrity. However, the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to process-induced variability related predominantly to variation in silicon channel thickness or shape." He continued: "Until now, however, TCAD simulation, compact model extraction, and circuit simulation have typically been handled by different groups of experts, and often by separate departments in the semiconductor industry, which leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction, and circuit simulation tools are typically developed and licensed by different EDA vendors does not help DTCO practices."
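The statistical variability Dr. Asenov describes is typically explored with Monte Carlo sampling. The following minimal sketch uses hypothetical threshold-voltage numbers of my own; a real DTCO flow would calibrate the distribution from TCAD, as he notes:

```python
# Monte Carlo sketch of per-transistor threshold-voltage variability.
# Nominal value and sigma are hypothetical, not calibrated data.
import random
import statistics

NOMINAL_VTH = 0.35   # volts, hypothetical nominal threshold
SIGMA_VTH = 0.030    # volts, hypothetical variability sigma

samples = [random.gauss(NOMINAL_VTH, SIGMA_VTH) for _ in range(10_000)]
mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
print(f"mean Vth = {mean:.3f} V, sigma = {sigma:.3f} V")
# Designers care about the tails; a 3-sigma slow device sits near:
print(f"3-sigma corner: {mean + 3 * sigma:.3f} V")
# Worst-casing every device at such a corner is the pessimism that
# statistics-aware DTCO aims to replace.
```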
Ansys pointed out that in advanced FinFET process nodes the operating voltage of the devices has been drastically reduced. This reduction in operating voltage has also led to a decrease in the operating margins of the devices. With the several transient modes of operation in low-power ICs, an accurate representation of the package model is mandatory for accurate noise coupling simulations. Distributed package models with bump-level resolution are required for Chip-Package-System simulations that deliver accurate noise coupling analysis.
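A back-of-the-envelope calculation shows why this matters more as supply voltages drop. Using the common lumped supply-noise model V_noise = I·R + L·di/dt, with parasitic values that are hypothetical round numbers of my own choosing:

```python
# Lumped power-delivery noise estimate: V_noise = I*R + L*di/dt.
# All parasitic values are hypothetical; a real analysis would use a
# distributed, bump-resolution package model as Ansys describes.

def supply_noise(i_amps, r_ohms, l_henries, di_dt):
    """Resistive IR drop plus inductive L*di/dt transient."""
    return i_amps * r_ohms + l_henries * di_dt

noise = supply_noise(i_amps=2.0, r_ohms=0.005,
                     l_henries=50e-12, di_dt=2.0 / 1e-9)
for vdd in (1.2, 0.8):
    print(f"Vdd = {vdd} V: {noise * 1e3:.0f} mV noise "
          f"= {noise / vdd:.1%} of the supply")
# The same ~110 mV transient is ~9% of a 1.2 V rail but ~14% of a
# 0.8 V rail: identical parasitics, much thinner margin.
```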
The topic of semiconductor manufacturing has generated a large number of responses. As a result, next month's article will continue to cover the topic, with particular focus on the impact of leading-edge processes on EDA tools and practices.