By Ann Steffora Mutschler
Engineering teams across the globe continue to pound the process-geometry treadmill, staying on Moore's Law curve to achieve better speed, lower power, or smaller die, all of which adds up to increased complexity in design and packaging. However, with advanced forms of die stacking such as package-on-package, system-in-package, 2.5D silicon interposer technology and other techniques, engineering teams now have more degrees of freedom in how chips are constructed.
A significant consideration in moving from one process generation to the next is that there are many IP functions that must migrate. “Sometimes it’s too expensive to port it from one generation to the other and you may not need it as far as the speed or as far as the power,” noted Shafy Eltoukhy, vice president of manufacturing operations for Open-Silicon.
This is where advanced die stacking comes into play. The engineering team may consider moving to 28nm for one particular function, for example to get better speed in an ARM processor, while many of the other interfaces on a particular die may not need that advanced process node. A USB 2.0 or 3.0 interface does not have to be in 28nm to meet its requirements; it could be in 90nm or 40nm, he said.
“The whole notion of re-using IP is common, though something not as commonly discussed is the reusability of die. What we’ve been seeing a fair amount of is companies saying, ‘I’m going to use advanced packaging techniques that are available today and I’m going to take this older generation die that I’ve got sitting on the shelf. And I’m going to make a much smaller new chip to complete it or extend it or interface to it. And I’m going to put that all into a multi-chip module, or advanced packaging structure, and circle back and use a lot of the IP that is in actual hardware form and make that available.’ It’s not mainstream, but reusing IP 15 years ago wasn’t mainstream either,” said Jack Harding, president and CEO of eSilicon.
Engineering teams tend to have a certain function they really want to squeeze into the next generation, but many other functions in the design don't have to be in the latest generation, Eltoukhy observed. In advanced SoCs, customers are paying first and foremost for the IP development. “You are paying more dollar-wise per silicon area for a function that does not have to be in 28nm.”
What process node makes sense
Naturally this leads to a discussion about not bringing every single function into the next generation, especially because some analog and RF functions do not scale well. So why not keep those functions in the previous generation and partition the design to leverage older technology where it is available, rather than re-invent it?
“What I have to do instead is some kind of interface between this technology and the new technology. I put only the function that I want in the technology that can handle it and leave the other somewhere else,” he noted.
The question then becomes how to connect these together. “You certainly can connect them on the package level, which people used to call MCM (multi-chip module). You can actually get multiple die and bolt them in the substrate of the package and connect them. But the package technology has been way, way behind compared to the silicon technology, and you may end up with much higher power and slow interfaces and so on,” Eltoukhy explained. This has led to the development of silicon interposer technology in order to replace the substrate interconnect or the package interconnect, which is commonly known as 2.5D stacking.
Essentially, silicon interposer technology connects one die to another instead of connecting to a package, thereby reducing power and improving speed. Xilinx already has made its version of 2.5D-stacked technology available with certain product families.
Another use of 2.5D would be in a processor design that needs to talk to a DRAM, he continued. “Most people have a DDR interface and you go through the board to interface with the memory. But this approach is slow and large. Instead of buying a DRAM package from a DRAM vendor, we ask the vendor to sell us a known good die, which can be attached with processors on an interposer so you don’t have to go outside the chip. The DRAM can talk to the processor right away and the form factor will be much, much smaller. So there are multiple applications for that interposer—mixing the process nodes so that you can reduce the cost and so on, and improving the yield or bringing up some known good die from the DRAM to your die.”
“The application processors, which are really only delivered with package-on-package memory, end up with a very easy knob in that system—they can pile on different amounts of DRAM. To them it’s almost the same design and it is the same software. A couple of bits different in the software and suddenly they’ve got a new derivative part,” said Drew Wingard, CTO of Sonics.
“In many cases the die itself has more package attachment or wire bonding sites than the package may have pins, so you may take the same die and put it into a different package with different amounts of I/O resources available, and then sell those chips—even though they are the same fundamental chip design—at different price points. That’s been going on for a long, long time but with some of the more advanced packaging technologies, there are new degrees of freedom there,” he added.
While these options sound tantalizing, all of them are still under development. Complicating widespread deployment are two industry factions at odds over the right path forward. On one side are the semiconductor foundries, which would like to enable customers to use an interposer because, at the end of the day, they want to sell more die to put on it, Eltoukhy explained. “They say, ‘We can give you the interposer but you buy the dies from us and we can glue it together for you.’”
In the other camp are packaging providers such as Amkor and ASE, which fear losing business to the foundries and would like to offer the interposer to their own customers so those customers won't turn to a foundry for it. “These two camps are fighting now because it requires some investment from a capex point of view,” he added.
Managing complexity, saving dollars
In addition to dealing with complexity, advanced die stacking techniques can save big dollars, eSilicon’s Harding asserted. “You could measure it just in terms of NRE dollars, you could measure it in engineer years of work, you could measure it in terms of time to revenue. By any metric, going down the advanced-package, multi-die solution is better by two orders of magnitude than just actually making a new chip, and I would argue it’s probably better by one order of magnitude by just doing RTL modification, which still has high NRE and a lot of technical risk, albeit you have a product that is closer to being the final product. These decisions are classic risk-reward.”