Design-Centric Process Characterization

A design-centric process characterization methodology provides a platform for designers and manufacturers to collaborate towards ramping yields.

Driven by consumer demand, chip designers continue to create increasingly complex chips that demand higher integration, smaller geometries, improved performance and lower power consumption. They realize these complex designs through a combination of standard and custom design flows and the adoption of new process technologies. However, keeping pace with Moore's law and migrating to smaller geometries is becoming increasingly difficult as process technologies shrink below 100nm, because sub-100nm processes suffer from many causes of yield fallout, and achieving targeted yields may be difficult or even impossible.



In order for manufacturers to recoup the significant investment made in sub-100nm fabs and for designers to build products on smaller geometries, yield fallout must be both understood and contained. This article identifies sub-100nm sources of yield loss, issues with existing process models, and a new design-centric approach to process characterization that enables containment of parametric yield fallout.



Sources of Yield Loss
At 130nm and above, yields were essentially independent of the design and primarily limited by random particulate defects. These defects manifested themselves as functional failures in the design (i.e. shorts or opens). At 90nm, several systematic yield limiters became the dominant causes of yield fallout. Examples of these are shorts and opens caused by layout configurations not adequately captured by the optical proximity correction (OPC) model. The impact of functional failures caused by systematic issues was so widespread that it masked another rapidly emerging cause of yield fallout: parametric variability.



Figure 1: Sources of Yield Loss



Figure 1 shows the various sources of yield loss and illustrates an accumulation of yield loss with each technology generation. Figure 2 shows the yield impact of these sources, highlighting not only their cumulative effect but also the increasing severity of each loss mechanism. Note that as processes move below 90nm, the design itself becomes a primary source of yield loss.



Figure 2: Impact of Sources of Yield Loss



Parametric variability is caused by random fluctuations inherent in any manufacturing process; in semiconductor manufacturing, examples include line edge roughness and dopant fluctuations. These fluctuations have always existed, but their impact on 130nm and larger technologies was minimal. A 3nm line edge roughness on a critical dimension of 130nm causes only a small change in the electrical characteristics of a device. However, the same amount of line edge roughness at 65nm changes the electrical characteristics beyond the performance margins that were budgeted for. Particulate and systematic yield limiters exhibit failures on a die-to-die or wafer-to-wafer basis, but parametric variability can also cause performance changes within a die (i.e., within-die variation).



What this means is that two devices with exactly the same physical characteristics, located in different places on the same die, can exhibit different electrical behavior. Analog design has historically grappled with this problem, but the severity of within-die mismatch has grown to threaten most digital designs as well. Migration to ever smaller geometries brings the benefit of reduced die size and better transistor performance, but also the risk of greater yield fallout. Designers who want to take advantage of these new processes need a way to understand how inherent process variability will affect the performance and yield of their design.
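As an illustration of why within-die mismatch worsens at smaller geometries, the sketch below applies the widely used Pelgrom mismatch model, in which the standard deviation of the threshold-voltage difference between two nominally identical transistors scales as A_Vt / sqrt(W*L). The matching coefficients, device sizes and mismatch budget are illustrative assumptions, not data from any particular process.

```python
# Illustrative Monte Carlo of within-die transistor mismatch using the
# Pelgrom model: sigma(delta_Vth) = A_vt / sqrt(W * L).
# The A_vt coefficients and device dimensions are assumed values chosen
# only to show the trend, not data from a real process.
import numpy as np

rng = np.random.default_rng(0)

# (node, W [um], L [um], assumed A_vt [mV*um])
nodes = [
    ("130nm", 0.40, 0.130, 6.0),
    ("90nm",  0.30, 0.090, 4.5),
    ("65nm",  0.20, 0.065, 3.5),
]

n_pairs = 100_000
MARGIN_MV = 50.0   # hypothetical mismatch budget for a matched device pair

for name, w, l, a_vt in nodes:
    sigma_dvth = a_vt / np.sqrt(w * l)              # mV, Pelgrom scaling
    d_vth = rng.normal(0.0, sigma_dvth, n_pairs)
    beyond = np.mean(np.abs(d_vth) > MARGIN_MV)
    print(f"{name}: sigma(dVth) = {sigma_dvth:4.1f} mV, "
          f"pairs beyond {MARGIN_MV:.0f} mV budget = {beyond:.1%}")
```

Even though the mismatch coefficient improves at each node, the shrinking device area dominates, so the spread (and the fraction of pairs that blow a fixed budget) grows from node to node.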



But can't a designer just use design rules & SPICE models and expect high yields?
The traditional "process model" consists of design rules and SPICE models. At sub-100nm process nodes, it is generally accepted that this traditional process model does not fully capture the capabilities and limitations of the manufacturing process. The reason is a "process intelligence" gap in the models provided to the designer.



This gap exists because the fab is primarily "process aware" and not necessarily "design aware." The fab uses a complex methodology involving TCAD simulations, test structures, and in-line metrology to derive design rules and SPICE models. It tries to build models that cover the large spectrum of devices and topologies designers need, so that designers can maximize performance and yield. However, the resources available to comprehensively characterize this large spectrum of devices and topologies are limited. Additionally, it may be impossible for the fab to envision the many topologies designers will craft to push the performance of their part higher or to reduce die size. Finally, technologies that allow statistical characterization of within-die variability are only now emerging, so within-die variability is not captured by traditional models. Thus the fab may not be in a position to provide models that:



  • Include consideration of the many design topologies used in the design.

  • Provide a comprehensive statistical picture of key electrical parameters.

  • Include within-die variability models.


The lack of adequate design-related information in the process model leads to the process intelligence gap. In order to achieve high parametric yields, the process intelligence gap must be closed.



Closing the Process Intelligence Gap Requires Design-Centric Process Characterization
Closing the process intelligence gap requires, first and foremost, that designers work closely with their manufacturing partners. This has already started to happen. Manufacturers used to bring a process to pilot production before involving their design partners, but at 45nm they are engaging design partners from the outset, because they recognize the importance and yield impact of the process-design interaction.



Next, designers must adopt a "design-centric" approach to process characterization. This means designers need models that include process information specific to their design style. These design-centric models complement the existing models provided by the fab and primarily capture contextual dependencies of devices, inter-layer interactions not covered by the process models, and within-die performance characteristics. The following sub-sections look at each of these topics in more detail.



Contextual Dependencies: The fab provides a model for how a device performs. This model accounts for changes in the key physical dimensions of the device. For example, the model for a transistor includes how the threshold voltage, drive current and leakage current change as a function of channel length and width. More recently, it may also include the impact on these electrical parameters when the transistor is isolated or placed in a dense environment. However, it does not include additional contexts that may be important from a design perspective. Figure 3 shows one such example of a design context. In this illustration, the performance of the target transistor can be affected by the proximity of another poly line orthogonal to the gate of the target transistor. A short distance between the poly line and the gate can induce line-end shortening of the gate (depending on the OPC model), changing the electrical characteristics of the target transistor.



Figure 3: Examples of Contextual Dependencies
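As a rough illustration of how such a context could be screened, the sketch below applies an assumed exponential relationship between poly-to-gate spacing and line-end pull-back, and flags spacings where the pull-back would exceed a hypothetical design margin; contexts flagged this way would be candidates for dedicated test structures. None of the numbers come from a real OPC model.

```python
# Illustrative screening of a layout context: how close an orthogonal poly
# line can sit to a gate before the assumed line-end pull-back exceeds a
# design margin. The exponential model and every number are hypothetical;
# a real flow would calibrate this against silicon data.
import math

def assumed_pullback_nm(spacing_nm, max_pullback_nm=8.0, decay_nm=60.0):
    """Assumed: pull-back decays exponentially as the poly line moves away."""
    return max_pullback_nm * math.exp(-spacing_nm / decay_nm)

MARGIN_NM = 2.0   # hypothetical tolerance on gate line-end pull-back

for spacing in (70, 100, 140, 200, 300):
    pullback = assumed_pullback_nm(spacing)
    status = "characterize with a test structure" if pullback > MARGIN_NM else "within margin"
    print(f"spacing {spacing:3d} nm -> pull-back {pullback:4.2f} nm ({status})")
```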



Inter-layer interactions: At sub-100nm, functional failures frequently arise from defects associated with vias, so adding redundant vias has become common practice in the design flow. Designers use layout processing tools to add redundant vias where design rules permit, and several place and route tools also add redundant vias where constraints are not violated. However, this is simply a "best practice" and is not based on any process model. The downside of adding redundant vias in this manner is that it can in fact hurt yields. When a redundant via is created, it can affect the shape of the metal routing it connects to. In a dual-damascene process, the via can cause a widening of the upper-level metal if the via size is increased by OPC or by etch recipe changes intended to improve via integrity. Figure 4 illustrates this effect. Bulging of the metal line can increase crosstalk or, depending on the density and spacing of neighboring vias along the line, cause a short.



Figure 4: Example of inter-layer interactions
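The sketch below shows, in very simplified form, the kind of pre-check a design-centric model would enable: before a redundant via is inserted, the expected bulge of the upper-level metal (an assumed per-via value here) is compared against the spacing to the neighboring line. The bulge amount and spacing rule are placeholders, not numbers from any process.

```python
# Simplified pre-check for redundant via insertion in a dual-damascene
# process: does the assumed metal bulge around the via leave enough space
# to the neighboring line? Bulge and minimum-spacing values are assumptions.
ASSUMED_BULGE_PER_SIDE_NM = 8.0    # hypothetical widening of the upper metal
MIN_SPACING_NM = 70.0              # hypothetical metal-to-metal spacing rule

def redundant_via_is_safe(spacing_to_neighbor_nm, neighbor_has_via_bulge=False):
    """Return True if adding the redundant via keeps the spacing legal."""
    bulge = ASSUMED_BULGE_PER_SIDE_NM * (2 if neighbor_has_via_bulge else 1)
    return (spacing_to_neighbor_nm - bulge) >= MIN_SPACING_NM

candidates = [
    ("via_17", 100.0, False),
    ("via_18",  75.0, False),
    ("via_19",  90.0, True),   # neighboring line bulges toward us as well
]
for name, spacing, neighbor_bulge in candidates:
    ok = redundant_via_is_safe(spacing, neighbor_bulge)
    print(f"{name}: spacing {spacing:.0f} nm -> "
          f"{'insert redundant via' if ok else 'skip (bulge risks a short)'}")
```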



Within-die device performance characterization: Historically, parts were binned based on pass/fail criteria driven by test patterns (functional or structured) generated by designers. At sub-100nm, however, performance-based binning has become the norm. Even this binning is based on analysis of die-to-die and wafer-to-wafer performance variations and is limited by the efficacy of the at-speed patterns used during product testing. As mentioned earlier, parametric variability causes two identical devices on the same die to have different performance. Figure 5 shows the impact of line-edge roughness (including dopant fluctuation) on leakage current variability. As geometries shrink from 130nm to 65nm, not only does leakage current increase due to random fluctuations, but leakage variability also increases. This holds true for within-die, die-to-die and wafer-to-wafer variations.



Figure 5: Impact of line-edge-roughness on leakage current and variability
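The trend in Figure 5 can be reproduced qualitatively with a small Monte Carlo: subthreshold leakage depends exponentially on threshold voltage, so as the threshold-voltage spread grows with scaling, the leakage distribution becomes both higher and wider. The sigma values and the subthreshold factor used below are assumed, round numbers.

```python
# Qualitative Monte Carlo of leakage variability: I_off ~ I0 * exp(-Vth / (n*kT/q)).
# Because the dependence is exponential, a modest increase in Vth spread
# (e.g. from line-edge roughness and dopant fluctuation) produces a large
# increase in leakage spread. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_KT_Q_MV = 1.5 * 26.0   # assumed subthreshold factor n*kT/q, in mV

# (node, nominal Vth [mV], assumed sigma(Vth) [mV] from LER + dopant fluctuation)
nodes = [("130nm", 350.0, 15.0), ("90nm", 320.0, 25.0), ("65nm", 300.0, 35.0)]

for name, vth_nom, vth_sigma in nodes:
    vth = rng.normal(vth_nom, vth_sigma, 200_000)
    i_off = np.exp(-vth / N_KT_Q_MV)        # leakage in arbitrary units of I0
    print(f"{name}: mean leakage {i_off.mean():.2e} (a.u.), "
          f"sigma/mean = {i_off.std() / i_off.mean():.2f}")
```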



Design-Centric Process Characterization
A design-centric approach to process characterization gives a chip designer access to process information that supplements what the manufacturer already provides. This supplemental information can include:



  • A richer set of data for statistical characterization of frequently used devices.

  • Device data encompassing special design requirements (e.g. configurations of multi-gate transistors not part of the supported design kit).

  • Performance data for custom cells designed by the customer to improve design characteristics.

  • Failure data for design features of interest.


This approach requires crafting test structures and designing experiments (DOE) that target the specific objectives (e.g. within-die device performance characterization) described earlier. A simple example of a test structure is a transistor whose performance is to be characterized; a simple example of the DOE is varying its channel length and width. Statistical data can be generated by replicating the exact same device a number of times. The silicon area available on the test wafer (or in the scribe line) constrains the number and depth of objectives that can be characterized. The design house will then have to trade off the number of test structures, the size of the process window (i.e. the DOE) to be characterized, and the amount of statistics to be gathered against the silicon real estate available. Of course, the more data that can be gathered, the more accurate the characterization will be.
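A small arithmetic sketch illustrates this trade-off: given an assumed area budget and an assumed area per test structure, the number of DOE points (channel length/width combinations) directly limits how many replicates of each point are available for statistics. Every number here is a placeholder.

```python
# Illustrative trade-off between DOE size and statistical depth under a
# fixed silicon budget. Area numbers and DOE ranges are assumptions.
from itertools import product

AREA_BUDGET_UM2 = 200_000.0       # assumed scribe-line area available
AREA_PER_STRUCTURE_UM2 = 50.0     # assumed area per padded-out transistor

lengths_nm = [60, 65, 70, 80, 100]    # assumed channel-length DOE
widths_nm  = [120, 200, 400, 800]     # assumed channel-width DOE

doe_points = list(product(lengths_nm, widths_nm))
max_structures = int(AREA_BUDGET_UM2 / AREA_PER_STRUCTURE_UM2)
replicates_per_point = max_structures // len(doe_points)

print(f"DOE points: {len(doe_points)}")
print(f"Structures that fit in budget: {max_structures}")
print(f"Replicates per DOE point (for statistics): {replicates_per_point}")
```

Widening the process window (more DOE points) or shrinking the silicon budget reduces the replicates per point, and with them the statistical confidence of the characterization.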



The design company has several conventional options for deploying these test structures, such as:




  • Discrete test structures that are individually tested.


    • Pro: high resolution parametric test.

    • Con: high area overhead.

  • SRAM or ROM type of memory arrays.


    • Pro: high density of test structures.

    • Con: pass/fail resolution (i.e. binary and not parametric).



Also available is a new approach, called Parametric ActiveMatrix (PAM), which melds high-resolution parametric test capabilities with a very dense array of test structures. The PAM approach provides more than 10x the number of high-resolution test structures in the same silicon area than can conventionally be attained using discrete test structures. Using PAM, a much more comprehensive set of high-resolution parametric measurements can be captured without impacting mask or silicon costs.



There are several options available to the design company to manufacture these test structures, such as:



  • Test wafer from the fab used as part of the fab's yield ramp cycle.

  • Shuttle or multi project wafer (MPW).

  • Scribe line in the product wafers.

  • Incorporation in the product design itself.


These test structures can be tested using standard parametric test hardware (or memory test hardware in the case of SRAM and ROM arrays), and the resultant data can be analyzed using commonly available data analysis tools. Figure 6 shows the overall design-centric process characterization methodology.
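Once the structures are tested, the measurements can be split into wafer-to-wafer, die-to-die and within-die components with standard analysis tools. The sketch below does a simple nested variance decomposition on synthetic data; the data generation is only there to make the example self-contained, and the group-mean approach is a simplification of a full nested ANOVA.

```python
# Simple nested decomposition of parametric test data into wafer-to-wafer,
# die-to-die (within wafer) and within-die components. The synthetic data
# and variance numbers are placeholders; a production flow would use a
# proper nested ANOVA or mixed-effects model.
import numpy as np

rng = np.random.default_rng(2)
n_wafers, n_dies, n_sites = 5, 20, 50   # sites = replicated structures per die

# Synthetic saturation-current measurements (arbitrary units)
wafer_offset = rng.normal(0.0, 2.0, (n_wafers, 1, 1))
die_offset   = rng.normal(0.0, 3.0, (n_wafers, n_dies, 1))
within_die   = rng.normal(0.0, 4.0, (n_wafers, n_dies, n_sites))
idsat = 100.0 + wafer_offset + die_offset + within_die

die_means   = idsat.mean(axis=2)        # one value per die
wafer_means = die_means.mean(axis=1)    # one value per wafer

var_within_die     = idsat.var(axis=2).mean()        # average within-die variance
var_die_to_die     = die_means.var(axis=1).mean()    # die-to-die variance within wafer
var_wafer_to_wafer = wafer_means.var()

print(f"within-die sigma:     {np.sqrt(var_within_die):.2f}")
print(f"die-to-die sigma:     {np.sqrt(var_die_to_die):.2f}")
print(f"wafer-to-wafer sigma: {np.sqrt(var_wafer_to_wafer):.2f}")
```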



Figure 6: Design-centric Process Characterization Methodology



The resultant data and analysis can be used to better calibrate existing process models or to supplement those models. This supplementary data set is specific to the design or design style of the design house.



Design-Centric Process Characterization Fosters Collaboration
A design-centric process characterization methodology allows the process-aware fab to ensure process performance over a broad range of design styles and devices and allows the design-aware design house to get access to data specific to its design style and topologies of interest. It provides a platform for designers and manufacturers to collaborate towards ramping yields.

Prashant Maniar is co-founder and chief strategy officer of Stratosphere Solutions, of Sunnyvale, Calif., a startup providing yield improvement solutions to the semiconductor industry with the goal of making yield a signoff item that links design to manufacturing. He has spent the past 10 years within EDA focusing on design for test (DFT), design for manufacturability (DFM) and design for yield (DFY). Prior to founding Stratosphere Solutions, Maniar was director of marketing of the TestChip Division of HPL Technologies. He was also a founding member of the Design for Yield Division of HPL. Prior to HPL Technologies, Maniar spent seven years at Synopsys (Nasdaq: SNPS) in a variety of applications, engineering and marketing positions. Maniar received his M.B.A. from Santa Clara University, his M.E. from the University of South Carolina and his B.E. from the University of Bombay, India. He is also an active member of the TiE (The Indus Entrepreneurs) Semiconductor special interest group.



Dr. Jim Bordelon is co-founder and chief technology officer of Stratosphere Solutions. Previously, he was managing director of the TestChip Division of HPL Technologies, where he started as director of analog/radio frequency (RF) design. Dr. Bordelon served as senior technical manager of TestChip Technologies. He began his career at Intel Corporation (Nasdaq: INTC) as a senior computer aided design (CAD) engineer. Dr. Bordelon has a Ph.D. from the University of Texas at Austin with a focus on hot carrier effects and high-field transport in silicon devices. He holds a Master of Science degree in electrical engineering from the University of Texas at Austin and a Bachelor of Science degree in electrical engineering from The Georgia Institute of Technology.

