
Posts Tagged ‘Open-Silicon’

New Kinds Of Hybrid Chips

Thursday, June 28th, 2012

By John Blyler and Staff
Crack open any SoC today and it will contain a variety of third-party memory, processor cores, internally and externally developed software and analog. In fact, the main challenge of most chip designs today is integration and software development rather than developing the chip from scratch.

By that definition, almost any chip is a hybrid. But the definition is about to expand significantly over the next few years, as Moore’s Law becomes increasingly difficult to follow and more of the chip is developed in discrete pieces that may go together horizontally, vertically, and sometimes even virtually.

Die stacking, notably in 2.5D configurations, is merely the first step in this process. Going vertical with 2.5D and full 3D packaging will likely create a market for silicon-hardened subsystems. This makes good sense from a business standpoint, because not every part of the chip needs to be manufactured using the latest process node. In fact, analog developed and verified at older process nodes will likely work fine alongside a processor core developed at 20nm.

“In general, the trends are toward it being harder and harder from a process technology standpoint for foundries to create a process that is good,” said Hans Bouwmeester, director of IP at Open-Silicon. “That’s true for digital CMOS, for analog, for embedded DRAM and for embedded flash. We’re going to see a lot more heterogeneous die in a package, each in its own process technology. So you’ll have the CPU die in digital using a low-power/high-performance process, a high-speed I/O die with high-speed SerDes, and then you’ll have specialized RF, DRAM, and flash.”
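Bouwmeester's description can be made concrete with a toy model. The sketch below is purely illustrative, and the die names and process labels are hypothetical rather than taken from the article, but it captures the kind of per-die process selection he describes.

```python
from dataclasses import dataclass

@dataclass
class Die:
    """One die in a multi-die package, built in its own process technology."""
    name: str
    process: str   # hypothetical process label, for illustration only
    function: str

# A hypothetical heterogeneous package along the lines Bouwmeester describes:
# each die uses the process best suited to its function, not a single node.
package = [
    Die("cpu",    "20nm low-power CMOS",  "digital compute"),
    Die("serdes", "28nm",                 "high-speed I/O"),
    Die("rf",     "65nm RF CMOS",         "radio front end"),
    Die("dram",   "DRAM process",         "memory"),
    Die("flash",  "embedded flash node",  "non-volatile storage"),
]

for die in package:
    print(f"{die.name}: {die.function} in {die.process}")
```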

FPGAs could well become part of the stack, as well. In fact, both Xilinx and Altera have created 2.5D planar chips and have commented publicly that they can be used in stacked configurations with other die.

“What you’ll see is that one side will become more specialized,” said Bouwmeester. “The other side will be everything in a package, which opens up enormous possibilities.”

Software
At least part of this is being made possible by software. Getting to tape-out is still a big problem, but it's certainly not the only one, and maybe not even the biggest. Software development has become a huge challenge. Recent IBS data (see Figure 3) agrees with other evidence that software has become the big driver of cost and schedule. What is unique to the IBS data is confirmation that this trend accelerates at each smaller silicon geometry. Like chip hardware, software, including firmware, operating systems, middleware and even applications, becomes more complex with each generation of Moore's Law.

Software tends to be the main product differentiator, in large part because hardware has become a commodity, a trend that will likely continue as subsystems and processor platforms become too expensive for most companies to develop. In a “fast market” such as mobile handsets, manufacturers that miss the market window by as little as 9 to 12 months may lose $50M to $100M in potential revenue. This revenue loss, combined with the extra development time required by software, is one reason why hardware/software co-design approaches are so important. It also explains the rising popularity of virtual and FPGA-based prototyping systems and emulation platforms.
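The revenue figure implies a simple run-rate calculation. The sketch below is a back-of-the-envelope check, assuming a hypothetical monthly revenue for one handset product line; the run-rate number is illustrative and not taken from the article.

```python
# Back-of-the-envelope check of the "miss the market by 9-12 months" claim.
# monthly_revenue is a hypothetical run rate, not a figure from the article.
def lost_revenue(months_late: float, monthly_revenue: float) -> float:
    """Revenue forgone while a competitor owns the market window."""
    return months_late * monthly_revenue

monthly_revenue = 8e6  # hypothetical: $8M/month for one product line
for months in (9, 12):
    lost = lost_revenue(months, monthly_revenue)
    print(f"{months} months late -> ~${lost / 1e6:.0f}M in lost revenue")

# With an $8M/month run rate, being 9 to 12 months late lands in the
# $72M-$96M range, consistent with the $50M-$100M figure cited above.
```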

Chips also can be built with the assumption that they’re part of a broader communication and storage scheme. Apple’s iCloud is one example of this, where at least some of the processing is done externally, allowing devices to behave almost like thin clients at times, and as fully functional processors at others. This virtualization allows a whole new set of tradeoffs in design, putting as much or more emphasis on the I/O as on the processor and memory.

Mixing and matching
All of these considerations are the result of a big speed bump in IC design, which has forced the semiconductor industry to look elsewhere for gains in performance and efficiency. Double patterning at 20nm has greatly increased the cost of manufacturing, and it will increase further still at 14nm if EUV isn't commercially viable. The key sticking point there is how many wafers can be processed per hour using EUV; it currently is far too slow to be a viable replacement for 193nm immersion lithography.
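The throughput concern reduces to simple amortization arithmetic: a lithography tool's capital cost is spread over the wafers it can expose during its life. The sketch below uses hypothetical tool costs, throughputs and utilization (none of these numbers come from the article) to show why wafers per hour dominates the comparison with 193nm immersion.

```python
# Rough amortization model: litho tool cost per wafer exposure as a
# function of throughput. All numbers are hypothetical, chosen only to
# illustrate the sensitivity to wafers per hour.
def litho_cost_per_wafer(tool_cost: float, wafers_per_hour: float,
                         lifetime_hours: float = 5 * 365 * 24 * 0.7) -> float:
    """Amortized tool cost per wafer pass (5-year life, ~70% utilization)."""
    return tool_cost / (wafers_per_hour * lifetime_hours)

immersion_pass = litho_cost_per_wafer(tool_cost=50e6,  wafers_per_hour=150)
euv_slow_pass  = litho_cost_per_wafer(tool_cost=100e6, wafers_per_hour=10)

print(f"193i single pass     : ~${immersion_pass:,.2f} per wafer")
print(f"EUV at low throughput: ~${euv_slow_pass:,.2f} per wafer")

# Even though double patterning needs two immersion passes plus extra masks
# and etch steps, an EUV tool exposing only a handful of wafers per hour is
# far more expensive per pass, which is the sticking point described above.
```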

But there also is a possibility of double patterning only part of a chip, and developing the rest on the same planar die in an older node. Luigi Capodieci, R&D fellow at GlobalFoundries, noted this is a very real possibility for reducing development costs in the future. But so are new techniques such as directed self-assembly, which can supplement multi-patterning and potentially help keep the cost down.

Still, cost isn't the only issue that has to be considered. Heat is difficult to remove from chips that are packaged together. While some of that heat can be managed in software, by running a processor at maximum speed for a short period and then shutting it down, some of it also has to be engineered out with new structures such as FinFETs and new materials such as silicon-on-insulator (SOI), which can reduce the current leakage that causes heat in the first place. What's new here is that chips may be a combination of all of these things, with companies investing more money in certain portions of a chip, or a die within a package, and reducing costs in other areas. So areas that don't generate much heat, or functions that aren't used as often, won't require as much engineering or the latest process technology and presumably can be done using single patterning.
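The "run fast, then shut down" approach comes down to an average-power calculation, since the cooling solution of a stacked package is sized for sustained rather than peak dissipation. The sketch below uses hypothetical power numbers (not figures from the article) to show how duty cycling, or lower leakage, eases the heat-removal problem.

```python
# Average power under duty cycling: run at full speed for a fraction of the
# time, then power-gate. Numbers are hypothetical, for illustration only.
def average_power(duty_cycle: float, p_active: float, p_idle: float) -> float:
    """Mean power for a block that is active duty_cycle of the time."""
    return duty_cycle * p_active + (1.0 - duty_cycle) * p_idle

p_active, p_idle = 4.0, 0.2  # watts, hypothetical for one die in a stack
for duty in (1.0, 0.5, 0.1):
    avg = average_power(duty, p_active, p_idle)
    print(f"duty cycle {duty:>4.0%}: {avg:.2f} W average")

# The thermal budget is set by what can be removed on average, so lowering
# the duty cycle, or lowering p_idle with structures and materials that leak
# less (FinFETs, SOI), directly reduces the heat that must be extracted.
```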

Conclusion
IC design and manufacturing have been largely evolutionary. After decades of cutting costs at every new process node, it's difficult to give up on a model that has worked so well. The move to 450mm wafers will help boost efficiency even further, provided that yields are reasonable.

However, there is also a growing recognition that not all parts of a chip will continue down the Moore's Law path at 20nm and beyond. Some portions of an SoC will remain on that path; others will not. But they may all be part of the same aggregate solution, packaged together in unique ways that can actually improve performance, lower power consumption, and get to market on time with minimal risk of failure.

Is EDA Still EDA?

Thursday, February 25th, 2010

By John Blyler and Staff
Is the Electronic Design Automation (EDA) tools market shrinking or growing? That depends greatly upon how you define EDA.

A recent report from Global Industry Analysts, based on information from the EDA Consortium (EDAC), predicts that the global EDA tool market will eventually rebound, growing to $9.8 billion by 2015. The report suggests that this growth will be fueled in part by the traditional efforts to improve efficiency and performance throughout the chip development process.

EDA Chip-Level Tools
Aart de Geus, chairman and CEO of Synopsys, expanded upon this finding in a recent interview. “As a percentage of our business, classic EDA is shrinking, but this is not a case of ‘classic EDA doesn’t grow.’” For example, in the past, EDA companies added front-end RTL synthesis and design tools with timing and power closure to improve the productivity of chip designers. Next, efficiencies were found in the back-end of the process by adding physical design with extraction and Design for Manufacturing (DFM) and Yield (DFY) tools. Today, EDA vendors are improving the value of system-level design with architectural tools.

Synopsys is indeed attempting to improve its architectural tool flow with its recent acquisitions of two electronic system-level (ESL) design companies, VaST and CoWare. The emphasis on architectural integrated circuit (IC) design productivity has pushed traditional EDA chip companies to expand into the next level of product development, namely package and board design and, on the software side, even application development.

But this time around, productivity and efficiency within the chip development process will not be enough to save EDA. In addition to continuing improvements in both front- and back-end tool design, chip-level EDA companies must reach outward to embrace new customers and industries.

Perhaps no one understands this shift in thinking better than Mentor Graphics, which has products in the chip, package, board and even embedded real-time operating system (RTOS) markets. “We (EDA chip tools) as an industry are stubbornly targeting a limited number of customers,” said Serge Leef, vice president of new ventures and general manager of the System-Level Engineering Division at Mentor Graphics. “We really need to figure out where to go beyond that.”

There are four choices, according to Leef. One is to sell existing products to existing customers, which EDA companies will continue to do. The second is to sell new products to existing customers, which they are attempting in areas such as submicron design, DFM and yield enhancement. A third option is to sell existing products to new customers in places like China and India, but most of those companies are either part of multinationals that already buy EDA tools or underfunded startups that cannot afford them. A fourth option is to sell new products to new customers.

On paper, the last option seems the most promising. The problem is getting the new customers to look at what EDA has to offer, which means that EDA companies must understand the needs of the new customers – i.e., different industries.

IP Drives Profit
A universal need shared by most new customers in today’s economically challenged markets is that of cost reduction. This has two effects. One is to increase the use of intellectual property (IP) blocks in chip-level design while the other is to move from ASIC to FPGA-based designs.

Increasing the use of IP was a primary theme in Virage Logic’s keynote address at the recent DesignCon show. That was expected, but the arguments that were used to support the growth of IP are worth noting. Brani Buric, executive vice president for marketing and sales at Virage, explained it this way: “As we move into consumer markets with low profit margins we must think beyond the technical challenges to the business issues. The question is not just how to do the design more efficiently in terms of cost, but whether to do the design at all.”

The business focus of this approach is reflected in its name, i.e., Design for Profitability (DFP). Companies focusing on profit might just write the spec for a new chip, then hand off the rest of the design and implementation to design companies such as eSilicon or Open-Silicon. Owning the spec would typically be a lot less expensive than owning any part of the implementation process. This approach relies heavily on IP blocks to build the chip to spec.

Interestingly, the growth of IP is one of the key drivers cited in the Global Industry Analysts report for overall EDA growth. The reason EDA tool revenues are expected to climb through 2015 is that EDA now owns IP. By counting Broadcom, Qualcomm and ARM as some of the largest IP licensing companies, EDA will indeed be one of the fastest growing sectors, at least on paper. The reasoning for this inclusion, according to EDAC, is that EDA tools are an integral part of licensing the IP, so IP licensing revenues should be counted in the EDA business calculations.

EDA in the Board-Level Market
The growing reliance on FPGA-based electronics is the second trend driven by profit-focused designs. But this is another area where companies like Actel, Xilinx and Altium are trying to engage a broader customer base, e.g., medical, industrial and automotive markets.

Actel's purchase of Pigeon Point moves that company squarely into the AdvancedTCA and MicroTCA world, which has been heavily utilized by communications and defense companies. Xilinx, meanwhile, is positioning its next-generation 28nm FPGA platform to help win business in non-traditional markets. And Altium has been focusing on a single-database implementation of FPGA-based, board-level products that include embedded software development.

While all of these expansions reflect broader changes in the overall semiconductor industry, real growth in the EDA sector can only come from expansion beyond traditional markets. But there will always be a nagging question facing EDA companies moving into these new markets: Is this really EDA, or are we venturing into a new sector that reaches well beyond the confines of EDA to include a true system-level approach?
