Chip Design Magazine
Memory Challenges In The Extreme

By John Blyler and Staff
Next to computation, memory is the most important function in any electronic design. Both processor and memory devices must share the limited resources of power and performance. The relative weighting of these tightly coupled constraints varies depending upon the application.

At one extreme of the power-performance spectrum are applications that sacrifice performance to maintain the lowest possible power, e.g., a simple 8-bit microcontroller. For example, STMicroelectronics recently introduced a 16-kbit EEPROM kit that can harvest enough energy from ambient radio waves to run small, simple, battery-free electronic applications such as RFID tags. Wireless power is an emerging field whose major players also include Intel and Texas Instruments (see “Tesla’s Lost Lab Recalls Promise Of Wireless Power”).

Another example of an extremely low power-low performance memory application is in the emerging market of flexible, plastic electronics (see Figure 1). A team from the Korea Advanced Institute of Science and Technology (KAIST) recently reported such a device, i.e., a fully functional, flexible non-volatile resistive random access memory (RRAM).

The challenge with flexible, organic-based memory materials is significant cell-to-cell interference, caused by limitations of the memory structures within the plastic material. One solution is to integrate transistor switches into the memory elements. Unfortunately, transistors built on plastic substrates (organic/oxide transistors) perform so poorly that they are unusable for this purpose. The team at KAIST solved the cell-to-cell interference issue by “integrating a memristor with a high-performance single-crystal silicon transistor on flexible substrates.” Similar breakthroughs have been reported at IMEC (see “Organic Processors Offer Microwatt Applications”).

In addition to low power, memristor technology promises significantly higher memory densities in a smaller footprint than today’s devices. A memristor is a two-terminal non-volatile memory technology that some see as a potential replacement for flash and DRAM. Hewlett-Packard, the developer of memristor memory, recently announced a partnership with Hynix to fabricate memristor products by the end of 2013.

One anticipated growth market for memristor technology is solid-state drives (SSDs), which are replacing traditional hard disk drives (HDDs) in mobile notebook applications. SSDs require less power and space than HDDs, which makes them well suited to the rise of ultra-light, ultra-thin notebook computers. These “ultrabooks” aim for at least 8 hours on a single battery charge. Among others, Intel recently heralded its entrance into the ultrabook market during the last Intel Developer Forum (see Figure 2). The company is shifting its focus away from traditional notebooks toward ultrabooks to counter competition from Apple’s MacBook Air and ARM processor-based tablet computers.

One consequence of the rise of Ultrabook laptops is further erosion of the DRAM growth market (see Figure 3). Mike Howard, principal analyst for DRAM and memory at IHS, noted that “the single biggest reason for DRAM’s reduced growth outlook in notebooks during the next four years is the Ultrabook.” Howard believes the Ultrabook’s emphasis on minimal size and weight will lead to fewer DRAM chips on average than in traditional notebooks.

Let’s look at the other extreme of the performance-power spectrum, i.e. high(er) power and high performance. Today, server-grade multicore processors are needed to support both ever-increasing network data bandwidths and increasing data-crunching analytics for context-aware applications. In sync with the need for more processors is the complementary need for more memory. For example, networking applications require the constant movement of massive amounts of data into and out of each processor in a multicore system.
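To see how quickly that data movement adds up, consider a back-of-envelope estimate. All figures below (line rate, core count, memory touches per byte) are illustrative assumptions for the sketch, not vendor specifications:

```python
# Illustrative back-of-envelope estimate of aggregate memory bandwidth
# demand in a multicore networking processor. All figures are assumptions.

line_rate_gbps = 100        # assumed aggregate network line rate, Gbit/s
touches_per_byte = 3        # assumed memory touches per payload byte
cores = 16                  # assumed core count

# Every payload byte is typically written to and read from memory several
# times (receive, process, transmit), so memory demand is a multiple of
# the network line rate.
demand_gbytes = line_rate_gbps / 8 * touches_per_byte   # aggregate GB/s
per_core = demand_gbytes / cores

print(f"aggregate memory demand: {demand_gbytes:.1f} GB/s")
print(f"per-core share:          {per_core:.2f} GB/s")
```

Even with these modest assumptions, the aggregate demand lands in the tens of gigabytes per second, which is why the memory subsystem, not the cores, often sets the ceiling.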

Such high-performance processor applications may soon grind to a halt in what Linley Gwennap describes as, “the looming memory wall.” Others have echoed Gwennap’s concerns that the throughput needs of high performance multicore processors will not be met by today’s memory technology.

What can be done? Several solutions are possible, notes Gwennap:
> Increase L3 cache to help reduce traffic to external memory.
> Add more memory channels to traditional slow-speed DRAM devices.
> Follow Intel’s lead on its Xeon processors by adding buffer-on-board (BoB) chips to convert traditional processor serial interfaces into standard parallel DRAM connections.
> Follow MoSys’s lead by implementing a standard high-speed serial interface directly to DRAM.
> Add Micron’s prototype Hybrid Memory Cube to re-engineer the memory subsystem.
(See “Samsung, Micron Unveil 3D Stacked Memory And Logic.”)
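A rough comparison of the parallel-channel and serial-interface options shows why the serial approach is attractive despite lower raw bandwidth per link: it delivers far more bandwidth per pin. The channel counts, lane rates, and pin budgets below are illustrative assumptions, not figures from Intel, MoSys, or JEDEC:

```python
# Illustrative comparison: parallel DDR channels vs. a high-speed serial
# memory interface. All figures are assumptions for the sketch.

# Parallel: four DDR3-1600 channels, each a 64-bit data bus at 1600 MT/s,
# assuming roughly 110 signal pins per channel.
ddr_channel_gbytes = 64 / 8 * 1600 / 1000   # 12.8 GB/s per channel
parallel_gbytes = 4 * ddr_channel_gbytes
parallel_pins = 4 * 110

# Serial: 16 lanes at 10 Gbit/s each, differential in both directions
# (4 pins per lane), assuming 80% efficiency after encoding overhead.
serial_gbytes = 16 * 10 * 0.8 / 8
serial_pins = 16 * 4

print(f"parallel: {parallel_gbytes:.1f} GB/s over {parallel_pins} pins "
      f"({parallel_gbytes / parallel_pins * 1000:.0f} MB/s per pin)")
print(f"serial:   {serial_gbytes:.1f} GB/s over {serial_pins} pins "
      f"({serial_gbytes / serial_pins * 1000:.0f} MB/s per pin)")
```

Under these assumptions the serial link moves roughly twice the data per pin, which matters because package pins, not die area, are often the binding constraint on processor memory bandwidth.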

Not everyone agrees with that approach, however. Sam Stewart, chief architect at eSilicon, says that off-chip memory could greatly improve performance over L3 cache and do it much more efficiently. “When you have L3 cache, you have 2 megabytes per CPU that’s shared,” said Stewart. “With a Hybrid Memory Cube you may have 17 die with 8 gigabytes versus a total of 12 megabytes. Plus it’s lower power because it’s closer and there’s high-bit bandwidth.”

Add to that custom memory right-sized to the specific function, plus specialty memories that can run at higher frequencies, and the performance numbers climb even further. Put them in a stacked-die package and they climb further still. While stacked die exacerbates some issues, such as heat dissipation and electromigration, it eliminates another problem: the need for termination on signal paths. Because the chips sit so close together, the wires are short enough that signal reflections become insignificant. That alone improves performance, said Stewart.
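The termination point can be checked against a standard signal-integrity rule of thumb: a wire needs termination only when its flight time is comparable to the signal's rise time. The propagation speed, rise time, and wire length below are illustrative assumptions, not measurements of any particular stack:

```python
# Illustrative check of why stacked-die wiring can forgo termination.
# Rule of thumb: a wire behaves as a transmission line (and needs
# termination) when its one-way delay exceeds roughly 1/4 of the signal
# rise time. All figures are assumptions.

v = 1.5e8                 # assumed signal speed: ~half the speed of light, m/s
rise_time = 100e-12       # assumed 100 ps edge rate

critical_length = v * rise_time / 4          # termination threshold, meters
stacked_die_wire = 50e-6                     # assumed ~50 um vertical connection

print(f"termination threshold: {critical_length * 1e3:.2f} mm")
print(f"stacked-die wire:      {stacked_die_wire * 1e3:.3f} mm")
print("termination needed:", stacked_die_wire > critical_length)
```

With these numbers the threshold is a few millimeters, while a die-to-die connection in a stack is tens of microns, roughly two orders of magnitude below it, so reflections die out within the signal edge.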

There are other technologies in the works, as well, including phase-change memory, STTRAM (spin-transfer torque RAM), and resistive RAM, according to Philip Wong, professor of electrical engineering at Stanford. He said the goal is to improve energy efficiency in all these types of memory while improving performance.

But with an estimated 50% of processing now tied up with memory and memory controllers, there is plenty of research underway to improve every aspect of memory. Not all of them will roll out in time for the next couple of designs, however, which means engineers will have to push existing boundaries a little bit further until they’re ready.
