Published in June 2008 issue of Chip Design Magazine
Minimizing the Effects of Manufacturing Variation During Physical Layout
For ICs targeted at 45 nm and below, variability in manufacturing is emerging as a leading cause for chip failures and delayed schedules. Achieving manufacturing closure has become very difficult, if not impossible, using aging incumbent place and route solutions that fail to consider the effects of manufacturing variations during layout.
For example, design rule checks often assume “as drawn” features, rather than modeling the actual “as manufactured” shapes and geometries for devices and interconnects. “As manufactured” shapes differ from the intended design because of manufacturing limitations, including lithographic distortions, thickness variations resulting from chemical-mechanical polishing (CMP), and unevenness in film deposition.
Because existing place and route tools do not incorporate manufacturing variability into analysis and optimization, designers are coming out of final routing with an unmanageable number of violations that impact chip yield and reliability. Designers may not find all the violations until physical verification, when changing the layout requires multiple ECO iterations, which are costly, time consuming, and can lead design teams into endless non-convergent cycles.
To get a highly manufacturable design, and therefore ensure acceptable yield, you must account for the complex variability in manufacturing steps during physical implementation. This calls for a place and route system with fully integrated Design-for-Manufacturing (DFM) and lithography analysis that can optimize the design for all these manufacturing variation effects concurrently during physical design, before the layout is committed.
Automating manufacturing closure and managing the complexity of ever more sophisticated design rules deeply affects the physical implementation tool infrastructure. It is nearly impossible to retro-fit aging place and route platforms with the required performance, capacity, and accuracy needed to control the effects of manufacturing variability. Particularly for designs at 45 nm and below, IC design teams must consider putting a solution in place before they face catastrophic yield failures or missed market windows.
Sources of Manufacturing Variability
Manufacturing variability is rooted in the inherent limitations of 193 nm lithography, which cannot pattern 45 nm features without significant distortion, or variation, in the manufactured shapes of devices and interconnects relative to their ideal physical dimensions. Figure 1 illustrates the trend of shrinking feature sizes versus a constant optical diameter.
Figure 1. Process geometries get smaller; optical diameter does not.
You can think of manufacturing variations as global or local, although both impact yield, performance, and reliability.
Global manufacturing variations include geometric and material parameter variations in device and interconnect. Variations in effective “channel length” and “film thickness” can result in systematic electrical variations, such as threshold voltage (Vt) and leakage current variations. Similarly, global variation in metal and dielectric thickness leads to resistance and capacitance variations.
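To make the parametric effect concrete, the following Python sketch (not from the article; the resistivity and wire dimensions are hypothetical round numbers for a 45 nm-class metal layer) shows how global width and thickness variations propagate into interconnect resistance through R = ρL/(W·T):

```python
# Illustrative sketch: propagation of global width/thickness variation
# into wire resistance. All process numbers below are hypothetical.

def wire_resistance(rho, length, width, thickness):
    """Resistance of a rectangular wire: R = rho * L / (W * T)."""
    return rho * length / (width * thickness)

# Nominal copper wire: rho in ohm*um, dimensions in um (hypothetical).
RHO_CU = 0.0175               # ~1.75e-8 ohm*m expressed in ohm*um
L, W, T = 100.0, 0.07, 0.12   # assumed 45 nm-class lower-metal wire

r_nominal = wire_resistance(RHO_CU, L, W, T)

# A 10% thinning from CMP dishing and a 5% width loss from litho/etch
# combine multiplicatively in the denominator:
r_worst = wire_resistance(RHO_CU, L, W * 0.95, T * 0.90)

increase_pct = 100.0 * (r_worst / r_nominal - 1.0)
print(f"nominal R = {r_nominal:.1f} ohm, worst-case R = {r_worst:.1f} ohm "
      f"(+{increase_pct:.1f}%)")
```

Two individually modest geometric deviations compound into a roughly 17% resistance shift in this toy case, which is why thickness and width variation feed directly into timing variation.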
Figure 2. Manufacturing variability
Local manufacturing variations include device and interconnect geometry variations, and failures due to random material deposition, as illustrated in Figure 2. The leftmost image set shows shorts caused by random particle deposition. The middle set shows a local effect of systematic lithographic errors. The rightmost set shows modeled “as drawn” versus “as manufactured” global parametric lithography variability.
Deviations in interconnect line width and spacing arise primarily from lithographic and etch dependencies, and directly impact interconnect parasitics, which can degrade performance and signal integrity. For copper interconnect layers, there can be significant local metal density variation, resulting in dishing and erosion. Similarly, CMP can introduce strong dielectric thickness variations across the die.
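The same arithmetic applies to the dielectric thickness variation mentioned above. Here is a minimal parallel-plate sketch (the low-k permittivity, plate area, and 15% erosion figure are all assumed for illustration, not taken from the article):

```python
# Illustrative sketch: CMP-induced dielectric thickness variation
# propagating into parallel-plate capacitance. Numbers are hypothetical.

EPS0 = 8.854e-18        # vacuum permittivity in F/um
K_LOWK = 2.7            # assumed low-k relative permittivity

def plate_cap(area_um2, dielectric_um):
    """Parallel-plate capacitance: C = k * eps0 * A / d."""
    return K_LOWK * EPS0 * area_um2 / dielectric_um

c_nom  = plate_cap(area_um2=50.0, dielectric_um=0.15)
c_thin = plate_cap(area_um2=50.0, dielectric_um=0.15 * 0.85)  # 15% erosion

print(f"capacitance shift: {100 * (c_thin / c_nom - 1):.1f}%")
```

Because capacitance scales inversely with dielectric thickness, a 15% erosion produces a roughly 18% capacitance increase, degrading both delay and crosstalk margins.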
Our ability to correct these variations after tapeout, using approaches like optical proximity correction (OPC) and reticle enhancement technology (RET), is reaching a limit. In an attempt to control for manufacturing variations, many designers apply excessive guardbanding, such as adding margin to timing and power constraints and eliminating certain physical features, even though it means giving up some of the advantages of using advanced process nodes.
The combination of these trends is creating “the perfect storm” for advanced IC designers. Tools are breaking, tapeouts are taking longer and yield ramps are becoming slower (Figure 3).
Figure 3. Trends in IC yield
These new challenges started to appear at 90 nm and have become progressively worse with each successive node. Depending on the specific design and process, these issues can be critical at 65
nm, and the majority of 45 nm designs need advanced methods and tools to achieve on-time tapeout with high confidence in performance, reliability, manufacturability and parametric yield.
Methods for Correcting Manufacturing Variability Effects
OPC and RET were introduced in physical verification to improve image fidelity by adjusting the as-drawn image to produce the desired as-printed image. However, below 65 nm, the inherent variations introduced by the offset between the diameter of the light beam and the size of design patterns are great enough that traditional modifications to a photomask can no longer ensure image fidelity. Under some process conditions, certain layout topologies will fail, no matter how much OPC and RET is applied after tapeout. Moreover, the addition of OPC features to a design not only creates more points of potential production defects, but also exponentially increases the size of the mask data set.
Achieving acceptable yield at 45 nm requires optimizing for lithography, critical area analysis (CAA), OPC, and other DFM metrics in conjunction with each other. For example, optimizing a design for CAA without also considering lithography effects will probably result in a sub-optimal solution. One practical demonstration of this requirement is via doubling. If your software considers only the CAA constraints when inserting redundant vias, the results could be worse from a lithographic perspective than if there were no via doubling at all. To manage this “inter-optimization” of multiple effects, you need a comprehensive place and route solution that provides a framework for integrated analysis and optimization of the different effects.
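The via-doubling example can be sketched as a combined cost minimization. The candidate positions and the cost numbers below are invented purely to illustrate the point; real weights would come from foundry-calibrated CAA and litho models:

```python
# Hypothetical sketch of "inter-optimization": picking a redundant-via
# position by a combined CAA + lithography cost, rather than CAA alone.

def combined_cost(caa_cost, litho_cost, w_caa=1.0, w_litho=1.0):
    # Lower is better; weights would come from calibrated DFM models.
    return w_caa * caa_cost + w_litho * litho_cost

# Each candidate: (name, CAA cost if chosen, litho cost if chosen).
# "none" models leaving the via single: worst CAA, but no litho penalty.
candidates = [
    ("none",  1.00, 0.00),
    ("north", 0.40, 0.90),   # best for CAA, but creates a litho hotspot
    ("east",  0.45, 0.15),   # nearly as good for CAA, litho-friendly
]

caa_only = min(candidates, key=lambda c: c[1])
best = min(candidates, key=lambda c: combined_cost(c[1], c[2]))

print("CAA-only choice:", caa_only[0])
print("combined choice:", best[0])
```

A CAA-only optimizer picks the litho-hostile “north” position; scoring both effects together picks “east”, mirroring the article's point that single-metric optimization can be worse than doing nothing.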
Clearly, the best approach is to generate a ‘correct-by-construction’ design during place and route. The next section describes how manufacturing variability effects can be addressed during the physical IC layout.
Addressing Manufacturing Variability Effects in Place and Route
The current state-of-the-art is to automate straightforward layout enhancements, such as wire spreading, metal fill, via doubling and via enclosure extensions, wherever DFM analysis identifies potential litho, CMP or random defect hot spots.
However, leading-edge process nodes require DFM-based layout optimization to ensure accurate timing, power and signal integrity closure while also introducing improvements to ensure manufacturability. Some established foundries now require that manufacturability optimization be handled with electronic design automation (EDA) technology. It is almost always more cost-effective, in both time and resources, to design out errors during place and route, rather than correct for them after layout is complete.
Lithography-friendly layout should start at the standard-cell placement stage. Traditionally, any cell could be placed next to any other cell without degrading yield or electrical performance. However, the size of basic standard cells is approaching the optical diameter of lithography systems, which makes cell behavior increasingly context-dependent and therefore less predictable. Because existing place and route systems assume that standard-cell behavior is predictable, timing closure now needs to be context-aware during all steps of the design flow.
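One simple form of context-aware placement can be sketched as a legalization pass that separates litho-incompatible neighbors. The cell names and the forbidden-pair table below are hypothetical; a production tool would derive them from litho simulation of abutted cell pairs:

```python
# Sketch of context-aware placement legalization (illustrative only):
# certain abutting cell pairs are assumed litho-incompatible and get a
# filler cell inserted between them. The pair list is hypothetical.

FORBIDDEN_PAIRS = {("NAND2X1", "INVX8"), ("INVX8", "DFFX1")}

def legalize_row(row):
    """Insert a FILL1 cell between any litho-incompatible neighbors."""
    out = [row[0]]
    for cell in row[1:]:
        if (out[-1], cell) in FORBIDDEN_PAIRS:
            out.append("FILL1")
        out.append(cell)
    return out

row = ["NAND2X1", "INVX8", "DFFX1", "NAND2X1"]
print(legalize_row(row))
```

The point of the sketch is that legality now depends on each cell's neighbors, not on the cell alone, which is exactly the context-dependency that older placers never had to model.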
Signal routing is probably the most complex stage in the backend design process, as it relates to DRCs, manufacturability and yield. The router must lay out all the wires needed to connect millions of placed components, while obeying all the process design rules. Routing tools typically assume simplified DRC models during global routing, track assignment and detail routing. Post-processing or search-and-repair loops are then used to remove the remaining DRC violations and to make DFM-related improvements.
Prior to 65/45 nm, it only took a few search-and-repair loops to fix all the violations that remained after routing. At 45 nm, the large number of DRCs and the increased complexity of the DRCs result in hundreds of thousands of violations remaining after routing. Not only does it take far too long to make all the repairs, but in many cases, the router may be unable to converge on an optimum layout
at all. This is what we see in practice—at advanced nodes, the traditional routing algorithms are becoming the tapeout bottleneck. The tools are choking and designers are left to attempt hand-optimization for advanced DRCs and DFM improvements, or simply to take the risk of tapeout
and hope the design will be manufacturable. To move past this bottleneck, EDA tools for nanometer ICs have to incorporate advanced model-based DRC and DFM checks as early as possible into the place and route process.
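The non-convergence described above can be illustrated with a toy model of the search-and-repair loop. The fix and regeneration rates are invented numbers; the model simply assumes each pass fixes a fraction of the violations while each repair risks creating new ones nearby:

```python
# Toy model (invented rates) of why post-route search-and-repair can
# fail to converge: each pass fixes a fraction of the violations, but
# repairs disturb neighboring shapes and create new violations.

def search_and_repair(violations, fix_rate, regen_rate, max_loops=50):
    loops = 0
    while violations > 0 and loops < max_loops:
        fixed = max(1, int(violations * fix_rate))   # repair a batch
        violations = violations - fixed + int(fixed * regen_rate)
        loops += 1
    return violations, loops

# A mature node: few violations, repairs rarely create new ones.
mature = search_and_repair(500, fix_rate=0.8, regen_rate=0.1)

# An advanced node: huge violation count, each fix disturbs neighbors.
advanced = search_and_repair(300_000, fix_rate=0.8, regen_rate=0.9)

print("mature node (violations left, loops):", mature)
print("advanced node (violations left, loops):", advanced)
```

With a low regeneration rate the loop converges in a handful of passes; with a high violation count and a regeneration rate near the fix rate, the loop burns its entire iteration budget and still leaves thousands of violations, which is the bottleneck designers see in practice.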
In addition to the complex DRCs, place and route tools must also consider all available DFM models during place and route. This requires very fast, “real-time” DFM analysis running concurrently with the routing process and feeding information into the routing optimization algorithms. For example, a fast OPC engine can guide the router, preventing it from creating “litho-unfriendly” patterns in the first place. Similarly, CAA and CMP analysis available during place and route directs the router in intelligent, global wire spreading, wire widening, and metal fill as the layout is being constructed.
By making place and route decisions in light of DRC and DFM constraints, the vast majority of potential violations are eliminated at the outset. In this manner, the place and route tool will be able to create an optimized design that results in a minimum of violations at the post-processing stage.
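In routing terms, this amounts to folding a litho penalty into the path cost the router already minimizes. The sketch below is hypothetical: the “litho model” is a stand-in that simply penalizes jogs, a classic source of pinching hotspots, rather than a real OPC engine:

```python
# Hypothetical sketch of litho-aware routing: each candidate path is
# scored by wirelength plus a penalty from a stand-in fast litho model,
# so "litho-unfriendly" patterns lose even when they are shorter.

def litho_penalty(path):
    # Stand-in for a fast OPC/litho engine: penalize every direction
    # change, since dense jogs tend to create lithographic hotspots.
    jogs = sum(1 for a, b in zip(path, path[1:]) if a != b)
    return 5.0 * jogs

def path_cost(path):
    wirelength = len(path)            # one unit of cost per segment
    return wirelength + litho_penalty(path)

# Two candidate routes encoded as segment directions.
short_but_joggy  = ["E", "N", "E", "N", "E", "N"]        # 6 segments, 5 jogs
longer_but_clean = ["E", "E", "E", "N", "N", "N", "E"]   # 7 segments, 2 jogs

best = min([short_but_joggy, longer_but_clean], key=path_cost)
print(path_cost(short_but_joggy), path_cost(longer_but_clean))
```

A wirelength-only router would take the shorter, joggy path; with the litho term in the cost, the slightly longer but cleaner path wins, so the hotspot is never created in the first place.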
The prescription for IC place and route software sounds straightforward, but the implications for the tool architecture are profound. For one thing, it means that the tools need a built-in DRC engine that is extremely fast, comprehensive, accurate, and incremental. Figure 4 illustrates the DRC capabilities of an advanced place and route system. It contains advanced DRC analysis and DFM modeling.
Figure 4. Advanced DRC analysis and DFM modeling in a place and route system
Along with being DFM-driven, an ideal place and route system must also handle multiple operational modes (e.g., power-down modes, scalable voltage and clocking) and environmental factors such as external voltage and temperature swings. The timing engine needs to weigh timing cost, litho cost, and other metrics together while maintaining sign-off-quality layout.
The combination of all these factors means that the place and route system must be capable of concurrently optimizing across dozens of design and process corners, unlike the
sequential processing that is typical today. This implies that a next-generation analysis infrastructure is needed that can simultaneously represent all the variation scenarios. In a nutshell, it needs to be a “timing, power, signal integrity, DFM, yield-driven place-and-route system.”
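The core of such concurrent analysis is evaluating every path against every corner at once and optimizing against the worst slack, rather than closing corners one at a time. The corner names, path delays, and clock period below are invented for illustration:

```python
# Sketch (invented numbers) of multi-corner timing analysis: evaluate
# every corner per path and let the worst slack govern optimization,
# instead of closing one corner at a time.

# Path delays in ns, per hypothetical process/voltage/temperature corner.
path_delays = {
    "clk->reg42": {"ss_0.9V_125C": 1.95, "ff_1.1V_-40C": 1.10, "tt_1.0V_25C": 1.50},
    "clk->reg77": {"ss_0.9V_125C": 1.80, "ff_1.1V_-40C": 1.25, "tt_1.0V_25C": 1.55},
}
CLOCK_PERIOD = 2.0  # ns

def worst_slack(path):
    # Slack must hold in every corner; the minimum across corners governs.
    return min(CLOCK_PERIOD - d for d in path_delays[path].values())

for p in path_delays:
    print(p, round(worst_slack(p), 2))
```

Scaling this idea from three corners to the dozens of mode/corner/variation scenarios at 45 nm is what drives the need for a new analysis infrastructure rather than sequential per-corner runs.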
The system needs to have a complete suite of capabilities, driven by a proven physical sign-off technology, so designers can implement a correct-by-construction layout that drastically reduces the number of violations at sign-off.
Moving to ever smaller geometries and new process materials causes a fundamental shift in the physical defect spectrum in silicon manufacturing. At 45 nm, the issues of manufacturing process variation and interdependency of design metrics become impossible to manage with existing design tools.
The incumbent solutions are at least 10-15 years old, and band-aid retrofitting is insufficient to address the variations in design, process, and manufacturability. Using old technologies, design teams will increasingly limp along, missing schedules and wasting engineering resources. An effective solution requires a new implementation platform that can concurrently analyze and optimize for variability. A highly integrated physical design system with litho- and DFM-driven routing is an essential part of a successful 45 nm design strategy. The system should include the following capabilities:
- A new timing analysis architecture that can concurrently address multiple modes and corners for all variability scenarios
- A variability-aware routing approach that optimizes for OPC, CAA and other DFM metrics during place and route
- Integration with industry-standard design sign-off technologies for litho-friendly designs
- Fast, multi-corner extraction “on the fly” during routing and optimization
- Flexible tool architecture and power reduction technologies for the full spectrum of low-power designs
- A scalable data model that can represent 100+ million gate designs in hierarchical / flat design methodologies
A new place and route system will incorporate the technologies required for advanced process nodes so that DFM becomes virtually transparent to the place and route engineer. It will enable design teams to focus on their core mission: getting their high-performance ICs to market faster, with better yield and lower costs than the competition.
Sudhakar Jilla is the marketing director for Place & Route products at Mentor Graphics. Over the
past 15 years, he has held various application engineering, marketing, and management roles
in the EDA industry. He holds a bachelor's degree in Electronics and Communications from the University of Mysore, a master's degree in Electrical Engineering from the University of Hawaii, and an MBA from the Leavey School of Business, Santa Clara University.