Published on February 15th, 2010

Thanks for the Memories, But…


The memory compiler. Surely one of the most under-appreciated technologies, yet pivotal in enabling the advanced System-on-Chip (SoC) designs that drive the incredible portable computing devices we all love (or crave).

Of course, integrated synthesis, place and route (SP&R) is essential to efficiently create the millions of logic gates that comprise these mega-chips. Yet often over 50% of the silicon real estate is used up by on-chip memories. An SoC, by definition, is the integration of many sub-systems, each with its own local storage requirements. If each memory instance had to be designed from scratch, the productivity gains from SP&R would be washed out and we might all still be lugging around Palm Pilots, brick-sized cell phones and 20lb laptops.

Fortunately, memory compilers, which came along over a decade ago, have made it simple to generate the transistor layout for just about any memory size and any given word length. Each new layout takes only minutes to generate and is guaranteed design rule clean. This gives SoC designers incredible flexibility and productivity. In addition, the memory compiler will generate memory models and library views for timing, power and signal integrity so that the memory instance can be treated as a simple black box or mega-cell by the SP&R flow.
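As a rough mental model, a memory compiler is a parameterized generator: one set of input dimensions fans out into a layout plus every library view the SP&R flow consumes. The sketch below is purely illustrative Python; the function name, instance naming scheme and file extensions are assumptions for this article, not any vendor's actual API.

```python
# Hypothetical sketch of what a memory compiler automates: one set of
# parameters (depth, width, ports) yields layout plus the library views
# the SP&R flow needs. All names here are illustrative, not a real API.

def compile_memory(words, bits_per_word, ports=1):
    """Return the instance name and deliverables for one memory instance."""
    if words < 1 or bits_per_word < 1:
        raise ValueError("memory dimensions must be positive")
    instance = f"ram_{ports}p_{words}x{bits_per_word}"
    views = {
        "layout":  f"{instance}.gds",  # DRC-clean transistor layout
        "timing":  f"{instance}.lib",  # Liberty timing/power model
        "lef":     f"{instance}.lef",  # abstract for place and route
        "verilog": f"{instance}.v",    # simulation model
    }
    return instance, views

inst, views = compile_memory(words=2048, bits_per_word=32)
print(inst)  # ram_1p_2048x32
```

The point of the sketch is the fan-out: the same handful of parameters must stay consistent across every generated view, which is exactly why the electrical accuracy of those views matters so much downstream.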

While the compiler’s generation of library views is very convenient, it has always been a trade-off against model accuracy. As designers adopt advanced process technologies (65nm and below) and strive to manage both performance and power, I believe this trade-off is no longer acceptable and that the “instant” memory model needs to be replaced with an “instance-specific” model, i.e. one that is characterized for its targeted application.

Designing with advanced processes means that the memory models need to account for process variation, signal integrity and transistor stress. Efficient power consumption means that the memory instances need to work at a range of different voltage levels, typically the lower the better, to help reduce overall system power. Also, the desire to trade off timing against power means that the models for both must be equally accurate; otherwise the SP&R flow will thrash around trying to find a solution to unrealistic targets.

With so many possible outcomes from the compiler, it is only possible for the memory IP provider to accurately characterize a handful of typical instances for any given process, voltage and temperature. For any other combination of word length, memory size, voltage or temperature, the compiler models are generated by fitting the characterized data to proprietary equations. In reality, however, there is no equation set that covers all cases well, especially for complex low-power memory architectures that use multiple rails and power gating. Consequently, additional margins are added by the IP provider just to play it safe. The net effect is over-design, which hurts design productivity while increasing chip area and power consumption.
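To see why equation fitting forces pessimistic margins, consider a toy example (the numbers and the delay formula are invented for illustration, not real silicon data): access time grows nonlinearly as the supply approaches threshold, but a model fitted only to the characterized corners misses that curvature and turns out optimistic at the uncharacterized low-voltage point.

```python
# Toy illustration of equation-fitted model error. The "true" access
# time blows up near threshold; a linear fit through two characterized
# supply points underestimates delay at a lower, uncharacterized supply.
# All numbers are invented for illustration.

def true_access_time_ns(vdd):
    """Pretend silicon behavior: delay rises sharply near threshold."""
    return 0.5 + 0.3 / (vdd - 0.4)

# The two supply corners the IP provider actually characterized.
v1, v2 = 1.0, 1.2
t1, t2 = true_access_time_ns(v1), true_access_time_ns(v2)

def fitted_access_time_ns(vdd):
    """Compiler-style model: linear fit through the characterized points."""
    slope = (t2 - t1) / (v2 - v1)
    return t1 + slope * (vdd - v1)

v_low = 0.8  # a low-power operating point nobody characterized
print(f"true   {true_access_time_ns(v_low):.3f} ns")   # 1.250 ns
print(f"fitted {fitted_access_time_ns(v_low):.3f} ns")  # 1.125 ns (optimistic)
```

Here the fitted model is optimistic by 10% at 0.8 V, so a provider who cannot know the error sign in advance has to pad every generated model with margin; that margin is the over-design the paragraph above describes.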

The only realistic solution to this dilemma is to characterize each memory instance based on its intended use. As each instance has its own electrical characteristics determined by its voltage level, clock frequency and physical environment, it needs to be modeled accurately in the context of the SoC being built. This enables the SP&R tools to efficiently optimize the surrounding digital logic to achieve the best possible performance at the lowest possible power consumption. While this approach gives up some of the ease-of-use of a pre-canned model, the potential gains in design productivity and reduced system power easily offset that.

New tools now exist that automate memory model creation; they are simple to use and can create accurate models for any unique instance in less than a few hours, regardless of memory size. These tools can also create noise models for signal integrity analysis and accurately predict the impact of process variation on the performance of the memory. Additionally, by characterizing each memory instance, design teams can be assured the memory models are consistent with the version of the silicon process models being used. They can also reassure themselves that the memory instance will function as expected for their specific use. For example, it may be possible for the memory to work at a voltage below the specification with performance that is acceptable for the given application.
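The below-spec-voltage example can be made concrete with a small hedged sketch: rather than trusting a one-size-fits-all datasheet minimum supply, sign off each instance against the clock period its own application actually needs. The characterization function, margin value and voltages below are all invented stand-ins for a real per-instance characterization run.

```python
# Hypothetical instance-specific sign-off check. The delay model is a
# stand-in for a real per-instance characterization run (e.g. SPICE);
# the datasheet minimum and margin are invented for illustration.

def characterized_access_time_ns(vdd):
    """Stand-in for this instance's characterized access time."""
    return 0.5 + 0.3 / (vdd - 0.4)

DATASHEET_VMIN = 0.9  # the IP provider's guaranteed minimum supply

def usable_below_spec(vdd, required_period_ns, setup_margin_ns=0.1):
    """True if this instance still meets timing at a below-spec supply."""
    return characterized_access_time_ns(vdd) + setup_margin_ns <= required_period_ns

# A slow peripheral block clocked at 500 MHz (2 ns period) may still be
# fine at 0.8 V even though that is below the datasheet minimum, while
# a faster block at the same supply is not.
print(usable_below_spec(0.8, required_period_ns=2.0))  # True
print(usable_below_spec(0.8, required_period_ns=1.2))  # False
```

The design choice this illustrates is the article's thesis: the pass/fail decision moves from a generic datasheet limit to a check against the instance's own characterized behavior in its own operating context.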

In conclusion, while memory compilers are wonderful tools for efficiently generating an almost unlimited number of correct-by-construction layout combinations, when it comes to generating accurate electrical models for advanced process nodes: buyer beware. For challenging low-power and/or high-performance designs, instance-based characterization is a requirement to get the most out of the underlying silicon process and to truly manage timing and power. In short, tell your IP supplier thanks for the memories, but no thanks for the models, and take charge of creating the memory models you need yourself.

Prior to Altos, Jim was the Timing and Signal Integrity Marketing Group Director at Cadence. He was the VP of Marketing and Business Development at CadMOS when it was acquired by Cadence in 2001. Before CadMOS, Jim was Executive VP at Ultima Interconnect Technology (which, as Celestry, was acquired by Cadence in 2003), Major Account Technical Program Manager at EPIC Design Technology (which IPO'ed in 1994) and a Member of Group Technical Staff at Texas Instruments. Jim holds a BS in Math/Computer Science from Manchester, UK, and has over 25 years' experience in EDA.



©2017 Extension Media. All Rights Reserved.