
Archive for August, 2014

It Is Not About Moore’s Law

Friday, August 29th, 2014

Kevin Morris wrote an interesting piece in Electronic Engineering Journal (http://www.eejournal.com/archives/articles/20140812-nextmoore/) that, in my opinion, he wrongly titled “The Next Moore’s Law”.  I say this because Moore’s Law stems from an observation of an existing technology.  It predicts both technical and economic activity, but by itself it does not contribute directly to the predicted outcome.  Kevin himself states in the article that the breakthrough idea was to “print” components on a silicon substrate.

What Mr. Morris advocates in his article, instead, is a technical breakthrough that would be similar to printing components.

He offers a few examples: travel near the speed of light, wide availability of 3D printing, and machine-generated software.  Kevin maintains that any such fundamental paradigm shift would force drastic changes worldwide.  In particular, he lingers on the fact that value is controlled by the law of supply and demand.  What if everyone could create goods with a 3D printer?  There would be no reason to buy those goods, and thus no reason for the law to continue to guide the economic system.

True enough on the surface.  But the law of supply and demand does not have to be invalidated.  The materials required to print a particular object could have a cost that makes the buy-versus-produce decision interesting.  If, for example, it cost $175 to print a flat-screen terminal and $195.99 to purchase one, would one really choose the print path?  And not everyone would have a 3D printer or the knowledge to operate one successfully.  Kevin is a bit biased by his Silicon Valley location, where most people have access and knowledge well above the rest of the world.
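For illustration only, the buy-versus-print decision can be made concrete with a few lines of arithmetic.  The $175 and $195.99 figures come from the paragraph above; the “hidden” costs of owning and operating a printer are invented placeholders, not data from the article.

```python
# A toy buy-versus-print comparison. Only the $175 print cost and $195.99
# retail price come from the text; the hidden-cost values are made up.

def cheaper_to_print(print_cost, retail_price, printer_amortization=0.0, operator_time=0.0):
    """Return True only if printing still wins once hidden costs are counted."""
    return (print_cost + printer_amortization + operator_time) < retail_price

print(cheaper_to_print(175.00, 195.99))                      # True: materials alone
print(cheaper_to_print(175.00, 195.99,
                       printer_amortization=20.00,
                       operator_time=15.00))                 # False: hidden costs flip the decision
```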

On the other hand I can see the birth of a new category of services: 3D print shops.  And these certainly would operate under the law of supply and demand.

Yes, it is true that every fundamental breakthrough, like the Internet, has changed the social and economic behavior of a significant portion of the world’s population, but it has not invalidated capitalism, and it has complicated, not diminished, security and legal entanglements.

Any fundamental breakthrough such as the ones described by Kevin would not occur overnight, so society would have time to absorb and adapt.  A more interesting comparison would be semiconductor technology and Bitcoin.  The idea of replacing the monetary system with a virtual, computer-based one seemed brilliant on the surface.  But in practice it turns out the idea has not been fully thought through, and the result is that more legal and financial problems have been created than solved.

Intel’s FinFETs and Professor Asenov’s Independent Work

Thursday, August 21st, 2014

Gabe Moretti, Senior Editor

At this year’s DAC I met Professor Asen Asenov, the founder and CEO of Gold Standard Simulation (GSS), a TCAD company. Professor Asenov teaches in the Department of Electronics and Electrical Engineering at the University of Glasgow, where he also heads the Device Modeling Group. I spent about an hour getting to know him and learning a lot about TCAD.
Over two years ago Professor Asenov published two blogs on the GSS webpage indicating that the trapezoidal shape of Intel’s 22nm FinFETs is suboptimal, resulting in stronger short-channel effects and in up to a 15% reduction in current compared to an ‘ideal’ rectangular FinFET. It would appear that Intel is a fast learner: according to a recent article on the website of the German publication Golem, the company has achieved the perfect rectangular shape in its 14nm FinFET offering.
The article includes a figure illustrating the improvements.

Dr. Asenov has written about the topic again, and I am reporting part of his piece below.

“At first glance, the reduced fin pitch and increased fin height suggest more than a 1.7x improvement of the drive current. However, the drive current will be strongly affected by the contact resistance and the extrinsic access resistance, both of which are expected to increase with the scaling of the pitch.

GSS has recently simulated very similar rectangular-shape FinFETs, and the results are published in [1]. The table below compares the geometry of the 14nm Intel FinFETs with the devices simulated in the above paper.

Based on the GSS predictive Ensemble Monte Carlo simulations in [1], illustrated below, the ‘intrinsic’ pFinFET drive current of the transistors simulated in the paper can be more than 1.6 mA/µm at Vdd = 0.75 V. This is based on the assumption that 1.5 GPa of compressive strain can be introduced in the channel of the simulated transistors by suitable source/drain engineering. The velocity overshoot associated with the non-equilibrium velocity in the channel, and the related high degree of ballisticity, plays a significant role in achieving such performance.

(a) Strained Si pFinFET performance obtained from EMC; (b) strained Si pFinFET carrier velocity. In the two graphs, results of the EMC simulations are compared with results of ‘standard’ TCAD drift-diffusion simulations before and after calibration.

However, the Ensemble Monte Carlo simulations do not include the contact and additional access resistances. If an access resistance of 1 kΩ is included in the calibrated drift-diffusion simulations, the drive current drops from 1.6 mA/µm to 1.2 mA/µm. An access resistance of 2 kΩ can reduce the drive current to 1.0 mA/µm.”
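As an aside, the “more than 1.7x” figure in the quoted text can be sanity-checked with simple fin geometry: the effective channel width per fin is roughly twice the fin height plus the fin top width, and dividing by the fin pitch gives the channel width packed into a micron of layout. The sketch below uses the commonly reported Intel fin dimensions (about 60 nm pitch and 34 nm height at 22nm, 42 nm pitch and 42 nm height at 14nm, with an ~8 nm fin top width); those numbers are my assumption, not taken from the article or from Dr. Asenov’s piece.

```python
# Rough per-footprint drive-width comparison between 22nm and 14nm fins.
# Dimensions are the commonly reported Intel values, NOT from the article.

def width_per_um(fin_height_nm, fin_top_width_nm, fin_pitch_nm):
    """Effective channel width (um) packed into 1 um of layout width."""
    w_eff_per_fin = 2 * fin_height_nm + fin_top_width_nm   # nm of channel width per fin
    fins_per_um = 1000.0 / fin_pitch_nm                    # fins per um of layout
    return w_eff_per_fin * fins_per_um / 1000.0            # nm -> um

w22 = width_per_um(fin_height_nm=34, fin_top_width_nm=8, fin_pitch_nm=60)
w14 = width_per_um(fin_height_nm=42, fin_top_width_nm=8, fin_pitch_nm=42)

print(f"22nm: {w22:.2f} um of channel width per um of layout")
print(f"14nm: {w14:.2f} um of channel width per um of layout")
print(f"Geometric improvement: {w14 / w22:.2f}x")   # about 1.7x, before resistance effects
```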

In another instance this week, I was reminded of how small our industry really is and how well information travels among professionals in specific areas of the industry, not always through “regular” channels.

Notes:
[1] L. Shifren, R. Aitken, A. Brown, V. Chandra, B. Cheng, C. Riddet, A. Alexander, B. Cline, C. Millar, S. Sinha, G. Yeric, and A. Asenov, “Predictive Simulation and Benchmarking of Si and Ge PMOS FinFETs for Future CMOS,” IEEE Transactions on Electron Devices, vol. 61, no. 7, pp. 2271-2277, 2014.

The Intricate Puzzle Known as Chip Design

Friday, August 15th, 2014

Bob Smith
Senior Vice President of Marketing and Business Development
Uniquify

These days, chip design may seem like an intricate jigsaw puzzle with small, oddly shaped interlocking pieces. Instead of the static parts of a puzzle (typically 300, 500, 750 or 1,000 pieces) spread across a coffee table, a chip under design has loads of dynamic parts located in a variety of directories and sub-directories on various computers. The focal point is the processor, not the center of a well-known and often-photographed painting or skyline, as is the case with most puzzles.

Ah, but memories are playing almost as big a role as processors, especially in chips slated for mobile multimedia devices with high bandwidth and performance demands and tight cost and power budgets. That means an engineer’s attention is being drawn away from the processor to an increasingly devilish piece of the design: the DDR memory subsystem, which includes the DDR controller, PHY and I/O. The DDR memory subsystem, after all, manages the data traffic flowing between the processor and external DDR memory. More than a few engineers have struggled to bring up the DDR interface in a new chip design, followed by weeks spent calibrating the DDR interface timing. If something’s amiss with the DDR memory subsystem, chances are there will be a product failure.

Fortunately, things aren’t that dire any longer for engineers worried about a chip’s system yield and reliability. One clever engineering group was motivated to figure this out. It set a goal to implement a DDR memory subsystem that would deliver the highest performance and quality within a small footprint and minimal power consumption. It identified a way to improve the DDR memory controller subsystem and solved the problem, though it wasn’t easy. For example, DDR memory chips must be small and fast to keep costs down, an important consideration for the group, and not a simple endeavor.

The elegant solution is an efficient piece of logic embedded in the DDR PHY that measures the system timing in situ for improved device and system yield and reduced system bring-up time. It is able to precisely measure the timing window and automatically adjust it for each system. It offers a flexible and customizable architecture and includes patented SCL (self-calibrating logic) and DSCL (dynamic self-calibrating logic) for real-time calibration to accommodate static and dynamic variations in the system operating environment. The self-calibrating logic precisely measures latency and the phase difference between the DDR clock and the system clock, and aligns the capture of DDR data within the center of the timing window.
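The details of the patented SCL and DSCL are, of course, proprietary, but the general idea of in-situ timing calibration can be sketched in a few lines: sweep the capture-delay setting, record which settings return correct data, and park the capture point in the middle of the widest passing window. The snippet below is a simplified software model of that generic delay-sweep approach, not Uniquify’s logic; the set_capture_delay and read_back_ok helpers are hypothetical hardware hooks, not a real driver API.

```python
# Simplified model of delay-sweep calibration for a DDR read interface.
# set_capture_delay and read_back_ok are hypothetical hardware hooks.

def calibrate_capture_delay(num_taps, set_capture_delay, read_back_ok):
    """Sweep every delay tap, then centre the capture point in the widest passing window."""
    passing = []
    for tap in range(num_taps):
        set_capture_delay(tap)      # program the candidate delay tap
        if read_back_ok():          # read back and compare against a known pattern
            passing.append(tap)

    if not passing:
        return None                 # no working setting found

    # Group consecutive passing taps into windows, then pick the widest one.
    windows, current = [], [passing[0]]
    for tap in passing[1:]:
        if tap == current[-1] + 1:
            current.append(tap)
        else:
            windows.append(current)
            current = [tap]
    windows.append(current)

    widest = max(windows, key=len)
    centre = widest[len(widest) // 2]   # middle of the passing window
    set_capture_delay(centre)
    return centre

# Example call with fictitious hooks:
#   calibrate_capture_delay(64, my_phy.set_delay, my_phy.training_read_ok)
```

A dynamic scheme such as DSCL would, in effect, keep repeating a measurement like this during operation so the capture point tracks temperature and voltage drift; the sketch shows only the one-time, static case.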

What the engineers achieved is remarkable. The DDR memory subsystem enables a system to run at maximum performance and boosts device and system yield and reliability, reducing variation effects. Also, it maintains DDR memory system performance as temperature and supply voltages fluctuate during system operation. Not surprisingly, the need for SCL and DSCL is greater when speed and clocking are higher and margins smaller.
The proof can be found with one high-end, highly reliable HDTV series currently on the market. The savvy project group picked this DDR memory subsystem to improve the TV’s quality and reliability.

With this innovation, the chip design’s system yield and reliability puzzle has been solved, leaving engineers to focus on crafting a chip custom-made for any number or variety of electronic devices.

About Bob Smith


Bob Smith is senior vice president of Marketing and Business Development at Uniquify. He began his career in high tech as an analog design engineer working at Hewlett Packard. Since then, he has spent more than 30 years in various roles in marketing, business development and executive management primarily working with startup and early stage companies. These companies include IKOS Systems, Synopsys, LogicVision, and Magma Design Automation. He was a member of the IPO teams that took Synopsys public in 1992 and Magma public in 2001. Smith received a Bachelor of Science degree in Electrical Engineering from the University of California at Davis and a Master of Science degree in Electrical Engineering from Stanford University.

The Entropy of EDA Tools

Thursday, August 7th, 2014

Gabe Moretti, Senior Editor

In my free time I read books on physics and philosophy, so during my vacation I started reading “Hidden in Plain Sight 3: The Secret of Time” by Andrew Thomas. One of the chapters in the book discusses the subject of entropy. The notion of entropy was introduced by the German physicist Rudolf Clausius, and the second law of thermodynamics states that dS/dt ≥ 0: the entropy of an isolated entity, be it material or biological, never decreases with time. Maximum entropy corresponds to maximum disorder of the component atoms in the entity.

The Austrian physicist Ludwig Boltzmann realized that entropy could be expressed as S = k log W, where W counts the number of microscopic configurations (the amount of disorder) of the entity’s current state. That means that a system that changes randomly will tend to become more disorderly with time. Thus things (the entities) are created orderly and become disorderly: this is called aging.
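As a quick worked example of Boltzmann’s formula (my own illustration, not from Thomas’s book), a state with more accessible microstates carries more entropy:

```python
import math

K_B = 1.380649e-23   # Boltzmann's constant, joules per kelvin

def boltzmann_entropy(microstates):
    """S = k * ln(W), with W the number of microstates of the current macrostate."""
    return K_B * math.log(microstates)

# A toy "ordered" state with few microstates versus a "disordered" one with many.
s_ordered = boltzmann_entropy(10)
s_disordered = boltzmann_entropy(10**6)

print(f"Ordered state:    S = {s_ordered:.3e} J/K")
print(f"Disordered state: S = {s_disordered:.3e} J/K")   # larger, as expected
```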

The concept seemed quite obvious to me until I started thinking about software programs like EDA tools. A program starts out, in the intent of its creators, as an orderly entity. During its existence, users randomly find defects (bugs), and the bugs are fixed, thus making the program more orderly. As this continues during the useful life of the program, it becomes steadily more orderly, and thus its entropy decreases. But do not despair or rejoice, depending on your point of view: we have not just broken the second law of thermodynamics.
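To make the metaphor concrete, one can (very loosely) treat the count of open defects as the program’s W and watch the “entropy” fall release by release; the releases and bug counts below are invented purely for illustration.

```python
import math

# Treat the open-bug count as a crude stand-in for W in S = k*log(W).
# The release history and bug counts are invented for illustration only.
open_bugs_by_release = {"v1.0": 400, "v1.1": 250, "v2.0": 120, "v2.1": 40}

for release, bugs in open_bugs_by_release.items():
    pseudo_entropy = math.log(bugs)     # drop k; only the trend matters here
    print(f"{release}: open bugs = {bugs:4d}, 'entropy' = {pseudo_entropy:.2f}")

# The monotonic decrease mirrors the author's point: maintenance drives the
# program toward a more ordered (lower-"entropy") state over its useful life.
```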

But, just maybe, we have found that entities which are not physical, like computer programs or scientific laws, have the property that their entropy behaves as it would in the “backward” direction of time: it decreases as they age. That, in itself, has given me some degree of satisfaction.

It turns out that the laws of physics are symmetrical with respect to time: they apply equally in both directions of the arrow of time. Software programs are a good example of this state of affairs. And yet, EDA tools are a constant target of complaints by their users, whose lives are made harder by the bugs (disorder) in the programs. This means that increasing entropy toward the past is not a good thing for users. It is also not a good thing for developers: they must work to decrease entropy by fixing bugs. And just when a program reaches a stable state with the lowest possible value of its entropy, it becomes obsolete and a new development must start.