
Architectural Changes Ahead

By John Blyler and Staff
For the past couple of process nodes, chipmakers have been developing power-saving features that have been largely ignored by OEMs. That’s beginning to change.

The need to do more processing, and do it faster, within the same or smaller power budget is forcing significant architectural changes, more efficient software, and new materials into the equation. All three are showing up in some of the latest announcements and presentations from companies across the semiconductor industry.

Architectural leaps
David “Dadi” Perlmutter, in his keynote address at the Intel Developer Forum this week, hinted at some architectural changes that will help pave the way for new voice and gesture-recognition interfaces. One involves near-threshold voltage scaling, something he referred to as “versatile performance.” As he put it, “if the platform is not warm enough, you scale down.”
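
The arithmetic behind scaling down is worth making concrete: dynamic CMOS power grows with the square of the supply voltage and linearly with clock frequency, so lowering both together pays off roughly cubically. The Python sketch below illustrates the trend; the capacitance, voltage, and frequency values are hypothetical, not figures for any Intel part.

```python
# Illustrative sketch (not any vendor's implementation): dynamic CMOS
# power follows P = C * V^2 * f, so scaling supply voltage and clock
# frequency together yields roughly cubic power savings.

def dynamic_power(c_eff, vdd, freq):
    """Dynamic switching power in watts.

    c_eff: effective switched capacitance (farads)
    vdd:   supply voltage (volts)
    freq:  clock frequency (hertz)
    """
    return c_eff * vdd ** 2 * freq

# Hypothetical numbers chosen only to show the trend.
C_EFF = 1e-9                                        # 1 nF switched capacitance
nominal = dynamic_power(C_EFF, vdd=1.0, freq=2e9)   # full voltage, full clock
scaled  = dynamic_power(C_EFF, vdd=0.5, freq=1e9)   # half voltage, half clock

print(f"nominal: {nominal:.2f} W, scaled: {scaled:.2f} W "
      f"({nominal / scaled:.0f}x reduction)")       # ~8x
```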

To get to the next steps, Intel will need to make a number of architectural changes. The first will be rolled out next year with a 22nm processor, code-named Haswell, that includes its TriGate or finFET technology. That will be followed by a 14nm chip, which Intel reportedly is already testing.

Intel has been working with a variety of materials, including fully depleted SOI, and it has been experimenting with various gate structures and stacking approaches. But which ones ultimately get used depends on when it becomes economically necessary to change its processes and manufacturing. The company may buy some time just by using bulk CMOS combined with EUV lithography and 450mm wafer technology, in which it has invested heavily over the past few months. Bigger wafers and commercially viable EUV could well pave the way for advances at the next couple of process nodes.

In a speech prior to IDF, Intel Labs’ Gregory Ruhl talked about the energy benefits of Near Threshold Voltage (NTV) computing using Intel’s IA-32, 32nm CMOS processor technology. The so-called “Claremont” prototype chip relies on an ultra-low voltage circuit to greatly reduce energy consumption. This class of processor operates close to the transistor’s turn-on, or threshold, voltage, hence the NTV name. Threshold voltages vary with transistor type, but are typically low enough that the chip can be powered by a postage-stamp-sized solar cell.
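
Why operating near threshold saves energy, and why there is a floor, can be sketched with a simple model: switching energy per operation falls as the square of the voltage, but circuits slow dramatically near threshold, so leakage energy per operation grows again, producing a minimum-energy point slightly above Vt. All constants below are assumed, chosen only to show the shape of the curve, not to model Claremont.

```python
# A minimal sketch of the NTV sweet spot (all constants hypothetical):
# dynamic energy per op falls as V^2, but delay explodes near threshold,
# so leakage energy per op rises again at the lowest voltages.

VT, ALPHA = 0.3, 1.5                 # assumed threshold (V), alpha-power exponent
C, I_LEAK, K = 1e-12, 1e-4, 1e-10    # assumed cap (F), leakage (A), delay scale (s)

def energy_per_op(v):
    delay = K * v / (v - VT) ** ALPHA   # alpha-power-law delay model
    e_dyn = C * v ** 2                  # switching energy per operation
    e_leak = I_LEAK * v * delay         # leakage power integrated over one op
    return e_dyn + e_leak

for v in (0.32, 0.35, 0.45, 0.6, 0.8, 1.0):
    print(f"{v:.2f} V -> {energy_per_op(v) * 1e15:6.0f} fJ/op")
# The printed minimum sits just above VT, which is the NTV operating point.
```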

The other goal for the Claremont prototype was to extend the processor’s dynamic performance—from NTV to higher, more common computing voltages—while maintaining energy efficiency. Ruhl’s results showed that the technology works for ultra-low power applications that require only modest performance, from SoCs and graphics to sensor hubs and many-core CPUs. Reliable NTV operation was achieved using unique, IA-based circuit design techniques for logic and memories.

Further developments are needed to create standard NTV circuit libraries for common, low-voltage CAD methodologies. Such NTV designs apparently require a re-characterized, constrained standard-cell library to support such low voltage corners.
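
The need for re-characterization follows from how cells behave near threshold: delay grows nonlinearly as the supply approaches Vt, so naively derating a nominal-voltage library badly underestimates low-voltage timing. A minimal sketch using the alpha-power-law delay model, with an assumed threshold voltage and exponent:

```python
# Sketch of why NTV corners need fresh characterization (hypothetical
# constants): cell delay grows nonlinearly near threshold, so a linear
# scaling of nominal-voltage data underestimates it badly.

VT, ALPHA = 0.3, 1.5    # assumed threshold voltage (V) and alpha-power exponent

def alpha_power_delay(v, d_nominal, v_nominal=1.0):
    """Cell delay via the alpha-power law, normalized to a nominal point."""
    scale = (v / (v - VT) ** ALPHA) / (v_nominal / (v_nominal - VT) ** ALPHA)
    return d_nominal * scale

def linear_guess(v, d_nominal, v_nominal=1.0):
    """Naive estimate: delay simply inversely proportional to voltage."""
    return d_nominal * v_nominal / v

for v in (0.9, 0.6, 0.4, 0.35):
    model = alpha_power_delay(v, d_nominal=100e-12)   # cell runs 100 ps at 1.0 V
    naive = linear_guess(v, d_nominal=100e-12)
    print(f"{v:.2f} V: model {model * 1e12:7.1f} ps vs naive {naive * 1e12:6.1f} ps")
# At 0.35 V the modeled delay is ~6x the naive estimate, which is why the
# library must be re-characterized rather than extrapolated.
```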

Rethinking standard approaches
Michael Parker, senior technical marketing manager at Altera, began a session at the recent Hot Chips conference by highlighting advances in the floating-point accuracy of FPGA devices. FPGAs are inherently better at fixed-point calculations, in part due to their routing architecture. Accurate floating-point calculation, by contrast, depends on multiplier density, since it makes extensive use of adders, multipliers, and trigonometric functions. Often these functions are pulled from libraries, resulting in an inefficient multiplier implementation.

According to Parker, Altera took a different approach by using a new floating-point fused-datapath implementation instead of the existing IEEE-based method. The datapath approach removes the normalization and de-normalization steps that the multiplier-based IEEE representation requires after every operation. However, the datapath approach only achieves this high floating-point accuracy on smaller matrix functions (like FFTs), where high GFLOPS-per-watt efficiency and low latency (enabled by sufficient on-chip memory) are the primary requirements.
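
The fused-datapath idea can be sketched in software terms: rounding and normalizing after every add and multiply accumulates error, while carrying extra internal precision and rounding once at the end does not. The example below uses numpy’s float32 and float64 as stand-ins for the narrow and extended formats; it illustrates the principle, not Altera’s actual hardware.

```python
# Sketch of the fused-datapath principle (not Altera's implementation):
# compare a dot product rounded to float32 after every operation with
# one accumulated in wide precision and rounded once at the end.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1024).astype(np.float32)
b = rng.standard_normal(1024).astype(np.float32)

# "IEEE" style: normalize/round to float32 after every multiply and add.
acc = np.float32(0.0)
for x, y in zip(a, b):
    acc = np.float32(acc + np.float32(x * y))

# "Fused" style: accumulate in wide precision, round once at the end.
wide = np.dot(a.astype(np.float64), b.astype(np.float64))
fused = np.float32(wide)

print("per-op rounding error:", abs(float(acc) - wide))
print("fused rounding error: ", abs(float(fused) - wide))
```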

New materials
Robert Rogenmoser, senior vice president of product development and engineering at SuVolta, a semiconductor company focused on reducing CMOS power consumption, discussed ways to reduce transistor variability for low-power, high-performance chips.

Transistor variability at today’s smaller process geometries comes from two typical sources: wafer-level process variation and local transistor-to-transistor mismatch. Such variability has forced the semiconductor industry to look at new transistor technologies, especially for lower-power chips.
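
Local mismatch is commonly modeled with Pelgrom’s rule, under which the standard deviation of the threshold-voltage difference between two neighboring transistors scales inversely with the square root of gate area, so shrinking devices makes matching worse. The sketch below uses an assumed mismatch coefficient, not data for any particular process.

```python
# Pelgrom's rule for local Vt mismatch:
#   sigma(delta_Vt) = A_vt / sqrt(W * L)
# A_VT below is an assumed, illustrative coefficient (2.5 mV*um).
import math

A_VT = 2.5e-9   # mismatch coefficient in V*m (assumed)

def sigma_vt(width_nm, length_nm):
    """Standard deviation of Vt mismatch for a W x L device (in volts)."""
    area = (width_nm * 1e-9) * (length_nm * 1e-9)
    return A_VT / math.sqrt(area)

for w, l in ((200, 100), (100, 50), (50, 25)):
    print(f"W={w} nm, L={l} nm -> sigma(Vt) ~ {sigma_vt(w, l) * 1e3:.1f} mV")
# Halving both dimensions doubles the mismatch, which is why variability
# worsens at each smaller geometry.
```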

What is the solution? Rogenmoser, in his Hot Chips presentation, discussed the pros and cons of three transistor alternatives: finFET or TriGate; fully depleted silicon-on-insulator (FD-SOI); and deeply depleted channel (DDC) transistors. FinFET or TriGate technology promises high drive current, but faces manufacturing, cost, and intellectual property challenges. The latter point refers to the IP changes required to support the new 3D transistor gate structures.

According to Rogenmoser, FD-SOI transistor technology enjoys the benefits of undoped channels, but lacks multi-voltage capability and suffers from a limited supply chain, a point that FD-SOI supporters say has already changed. Still, SuVolta favors deeply depleted channel transistors. This process offers straightforward insertion into bulk planar CMOS, especially from 90nm to 20nm and below. Equally important is the ease of migrating existing IP to the DDC process, he explained.

Rogenmoser concluded by explaining how DDC technology can bring common low-power techniques back to lower nodes, e.g., dynamic voltage and frequency scaling (DVFS), body biasing, and low-voltage operation.
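
The reason body biasing is worth recovering: a body bias shifts the effective threshold voltage, and subthreshold leakage depends exponentially on that threshold, so reverse bias can cut standby leakage while forward bias buys speed. The sketch below uses assumed values for the threshold, body-effect coefficient, and subthreshold swing, purely to show the direction and magnitude of the effect.

```python
# Sketch of body biasing (all constants hypothetical): bias shifts Vt,
# and subthreshold leakage falls one decade per ~90 mV of Vt increase.
VT0   = 0.30    # nominal threshold voltage (V), assumed
GAMMA = 0.10    # body-effect coefficient (V of Vt shift per V of bias), assumed
S     = 0.090   # subthreshold swing, 90 mV/decade, assumed
I0    = 1e-7    # extrapolated leakage at Vt = 0 (A), assumed

def leakage(v_body):
    vt = VT0 - GAMMA * v_body            # forward bias (>0) lowers Vt
    return I0 * 10 ** (-vt / S)          # exponential subthreshold leakage

for vb in (+0.3, 0.0, -0.3):             # forward, none, reverse
    print(f"body bias {vb:+.1f} V -> leakage ~ {leakage(vb):.2e} A")
# Reverse bias trades speed for lower leakage; forward bias does the
# opposite, which is the DVFS-style knob DDC aims to restore.
```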

Stacking die
Going vertical, or even horizontal through an interposer, is one of the most significant and physically observable architectural changes in the history of semiconductors. By shortening the wires and increasing the size of the data pipes, power can be reduced and performance can be increased significantly.
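
The power argument reduces largely to wire capacitance: energy per bit is roughly C times V squared, and C scales with wire length, so replacing a long board-level trace with a short interposer route or through-silicon via saves energy per bit by orders of magnitude. The per-millimeter capacitance, link lengths, and voltage in the sketch below are assumptions for illustration only.

```python
# Sketch of interconnect energy vs. wire length (all numbers assumed):
# energy per bit ~ C_wire * V^2, and C_wire grows with length.

C_PER_MM = 0.2e-12   # ~0.2 pF/mm of wire capacitance, assumed
VDD = 1.0            # signaling voltage (V), assumed

def energy_per_bit(length_mm):
    """Approximate switching energy to drive one bit down the wire (joules)."""
    return C_PER_MM * length_mm * VDD ** 2

links = {
    "off-chip PCB trace (50 mm)": 50.0,
    "2.5D interposer route (5 mm)": 5.0,
    "3D TSV (0.1 mm)": 0.1,
}
for name, length in links.items():
    print(f"{name}: ~{energy_per_bit(length) * 1e12:.3f} pJ/bit")
# The ~3 orders of magnitude between the PCB trace and the TSV is the
# core of the power case for going vertical.
```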

But how real is stacking? According to Sunil Patel, principal member of the technical staff for package technology at GlobalFoundries, it’s very real. “For 2.5D, 2014 will be a very interesting year,” said Patel. “By the end of 2013 the capability will be in place. Designs already are being considered and tried out. 3D mainly depends on memory standards and memory adoption. We’ll see a package-on-package and memory-on-logic configuration first. 3D memory has its own route, which is ahead of that. 3D memory on logic could be late 2014.”

He’s not alone in this belief. Steve Pateras, product marketing director for test at Mentor Graphics, said that from a tapeout point of view—the only window EDA companies have into architectural changes—2.5D already is happening. “We have customers taping out 2.5D. For 3D, we’re seeing design activity for memory on logic. Next year we’ll see some tapeouts.”

And Thorsten Matthias, business development director at EVGroup, said equipment is being sold to foundries right now to make this happen. “By the end of next year we believe all the major players will have production capacity for both 2.5D and 3D,” he said. “That’s probably not 20,000 to 50,000 wafers per month, but there will be production capacity at every player that wants to take a leading role. By the end of next year there will be a supply chain for 2.5D and 3D, although probably at a lower volume and for high-end products.”
