Published on June 27th, 2006
Determine Foundry-Model Problems Without Touching a Wafer
Frequently, IC designers take foundry-model sets and accept them blindly, without questioning their content or accuracy. That course is unwise, because the model set becomes the foundation upon which every chip is built. Simulation output is only as good as the model being simulated. Unfortunately, questions about models tend to arise only after the chip has been designed, fabricated, and found non-functional. Months of design work and an expensive foundry run get wasted due to deficient models. The average fabless IC company doesn't have the resources to develop models itself, but there are ways to determine model quality and help avoid design problems. Those methods are listed below:
Jerry Twomey can be reached at: www.effectiveelectrons.com.
Analyze the database from which the models are derived. Request and review the support data behind the models to understand the measurements from which they were created. Statistical data with depth and breadth, analysis of multiple elements across process corners, and similar attention to detail indicate that the development team made a comprehensive effort to build proper models.
Examine models for default and missing variables. All SPICE models remain functional when variables are left out; if a parameter isn't defined, the simulator assumes an ideal default. That so-called ideal element doesn't exist on silicon. Many foundry models have unspecified parameters, so it's essential to know when a default ideal is being used. This is a common problem with transistor models. Keep in mind that there's no such thing as a diode without series resistance.
Search the model set for questionable values. Variables set to zero should be questioned for validity. A frequent example is the improper distribution of a transistor's capacitance terms.
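The two inspections above are easy to automate. A minimal sketch, assuming plain-text SPICE .model cards; the expected-parameter list below is illustrative (a few diode parameters), not a complete requirement set, and real model files need a full netlist parser:

```python
import re

# Parameters we expect the foundry to specify explicitly; any that are
# missing fall back to simulator defaults (often ideal or zero).
# This list is illustrative -- tailor it to the device type and simulator.
EXPECTED = ["rs", "cjo", "tt", "bv"]  # e.g. for a diode model

def parse_model_card(card: str) -> dict:
    """Extract name=value pairs from a SPICE .model card (plain numbers only)."""
    params = {}
    for name, value in re.findall(r"(\w+)\s*=\s*([-+0-9.eE]+)", card):
        params[name.lower()] = float(value)
    return params

def audit(card: str):
    """Return (parameters left to simulator defaults, explicit zero values)."""
    params = parse_model_card(card)
    missing = [p for p in EXPECTED if p not in params]
    zeros = [p for p, v in params.items() if v == 0.0]
    return missing, zeros

card = ".model DX D (is=1e-14 cjo=0 tt=5e-9)"
missing, zeros = audit(card)
print("defaulted:", missing)    # parameters the simulator will default
print("zero-valued:", zeros)    # suspicious explicit zeros
```

Here the missing series resistance rs would be silently defaulted to the ideal zero ohms, and the explicit cjo=0 would be flagged as a suspicious value.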
Check the edges of geometry bins. Transistor models often cover limited size ranges. At the boundary between two bins, a transistor should behave identically whichever bin's model is used: DC bias curves should match, and transient simulations should show the same RC time constants.
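The bin-edge check can be illustrated numerically. The sketch below uses a simplified square-law drain-current expression rather than a production model (real binned models such as BSIM have many more parameters), and the bin parameters are hypothetical; the idea is simply to evaluate both bins' models at the shared boundary and compare:

```python
# Evaluate adjacent geometry bins at their shared width boundary.
def id_sat(w_um, l_um, kp, vth, vgs=1.2):
    """Square-law saturation current -- a deliberate simplification."""
    return 0.5 * kp * (w_um / l_um) * (vgs - vth) ** 2

# Hypothetical bin parameters: bin A covers W = 1..10 um, bin B covers 10..50 um.
bin_a = {"kp": 110e-6, "vth": 0.45}
bin_b = {"kp": 108e-6, "vth": 0.47}

w_edge, l = 10.0, 0.5  # the width where the two bins meet
ia = id_sat(w_edge, l, **bin_a)
ib = id_sat(w_edge, l, **bin_b)
mismatch = abs(ia - ib) / ia
print(f"bin-edge current mismatch: {mismatch:.1%}")
```

A well-constructed bin set shows near-zero mismatch at every boundary; a visible step like the one above indicates the bins were fitted independently without continuity constraints.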
Parasitics in passive models. Do capacitors have a bottom plate and top fringe capacitance? Do resistors have perimeter and bottom parasitics? Do inductors have parasitics and eddy-current losses represented?
Thermal variance in the model. Do all devices change with temperature?
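A quick way to answer that question without running sweeps is to scan each model card for temperature-related parameters. The parameter names below are common SPICE temperature terms, but the list is illustrative and not exhaustive:

```python
import re

# Temperature-related parameter names commonly found in SPICE models
# (tc1/tc2 for resistors, tnom/xti/eg for junctions, kt1/kt2/ute for MOSFETs).
TEMP_PARAMS = {"tc1", "tc2", "tnom", "xti", "eg", "kt1", "kt2", "ute"}

def lacks_temp_data(card: str) -> bool:
    """True if a model card specifies no temperature behavior at all."""
    names = {m.lower() for m in re.findall(r"(\w+)\s*=", card)}
    return not (names & TEMP_PARAMS)

cards = {
    "rpoly": ".model rpoly r (rsh=250 tc1=1.2e-3 tc2=4e-7)",
    "rideal": ".model rideal r (rsh=250)",
}
for name, card in cards.items():
    if lacks_temp_data(card):
        print(f"warning: {name} has no temperature variation modeled")
```

A device that passes this scan still deserves a DC sweep at the temperature extremes; the scan only catches models where temperature behavior was omitted entirely.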
Breakdown and bias-point violation flags. When exceeding foundry-specified voltage and current limits, do simulations clearly indicate violations?
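If the models themselves don't flag violations, a post-processing check on saved operating points is a workable substitute. The node names, limit values, and bias figures below are hypothetical; in practice the voltages would come from the simulator's operating-point output:

```python
# Flag simulated bias points that exceed foundry-specified limits.
FOUNDRY_LIMITS = {"vds_max": 1.32, "vgs_max": 1.32}  # volts, illustrative

def check_bias(op_point: dict) -> list:
    """Return a list of violation messages for one device's operating point."""
    violations = []
    if op_point["vds"] > FOUNDRY_LIMITS["vds_max"]:
        violations.append(f"vds={op_point['vds']:.2f} V exceeds "
                          f"{FOUNDRY_LIMITS['vds_max']} V limit")
    if op_point["vgs"] > FOUNDRY_LIMITS["vgs_max"]:
        violations.append(f"vgs={op_point['vgs']:.2f} V exceeds "
                          f"{FOUNDRY_LIMITS['vgs_max']} V limit")
    return violations

for msg in check_bias({"vds": 1.45, "vgs": 1.10}):
    print("violation:", msg)
```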
Valid geometry scaling. If a device's size can vary, does the data show appropriate bias and response curves across that range? Model fitting needs to be done at multiple geometry points. It's better to have size restrictions that limit models to valid regions than to allow designers to run simulations with invalid models.
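The size-restriction idea amounts to a simple guard: refuse geometries outside the range the foundry actually characterized rather than silently extrapolating. A sketch with hypothetical limits and a hypothetical model name:

```python
# Reject device sizes outside the characterized range instead of extrapolating.
VALID_RANGE = {"w": (0.5, 50.0), "l": (0.18, 20.0)}  # microns, illustrative

def select_model(w: float, l: float) -> str:
    """Return a model name if (w, l) is inside the characterized range."""
    (w_min, w_max), (l_min, l_max) = VALID_RANGE["w"], VALID_RANGE["l"]
    if not (w_min <= w <= w_max and l_min <= l <= l_max):
        raise ValueError(f"W={w}, L={l} outside characterized range; "
                         "model is not valid at this geometry")
    return "nch_standard"  # hypothetical model name

print(select_model(2.0, 0.5))
```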
Matching data. Statistical data should exist on current or threshold matching for transistors, as well as on mismatch between resistors and between capacitors. Mismatch should be characterized as a function of physical size, with enough statistical data to show validity. In addition, both close-proximity matching and across-the-die matching data should be available.
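The size dependence the foundry should report typically follows Pelgrom's rule: mismatch standard deviation shrinks with the square root of device area. A quick sketch; the coefficient below is a placeholder, to be replaced with the foundry's measured value from its matching report:

```python
import math

# Pelgrom's rule: sigma(dVT) = A_VT / sqrt(W * L).
A_VT_MV_UM = 4.0  # mV*um -- hypothetical matching coefficient

def sigma_dvt_mv(w_um: float, l_um: float) -> float:
    """1-sigma threshold-voltage mismatch (mV) for a pair of W x L devices."""
    return A_VT_MV_UM / math.sqrt(w_um * l_um)

for w, l in [(1.0, 1.0), (4.0, 4.0)]:
    print(f"W/L = {w}/{l} um: sigma(dVT) = {sigma_dvt_mv(w, l):.2f} mV")
```

Quadrupling both dimensions cuts the mismatch by a factor of four here, which is why matching data must be reported as a function of size: a single number at one geometry tells the designer very little.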
Process corners. Ask how the corner models were created. Frequently they are produced by numeric manipulation and are overly pessimistic. Over-designing to meet poorly specified corner models often incurs a size or power penalty.
A lot can be determined about model quality without ever touching a wafer or going into the lab to make measurements. Everything outlined here is a process of inspecting models or running simple simulation tests. As a result, designers gain useful information about how models were developed, what got left out, or what was poorly specified.