
Posts Tagged ‘TSMC’


Blog Review – Monday, September 28 2015

Monday, September 28th, 2015

ARM Smart Design competition winners; Nordic Semiconductor Global Tour details; Emulation alternative; Bloodhound and bridge-building drones; Imagination Summit in Taiwan; Monolithic 3D ‘game changer’; Cadence and collaboration; What size is wearable technology?

Winners of this year’s ARM Smart Product Design competition had no prior experience of using ARM tools, yet managed, in just three months, to produce a sleep apnea observer app (by first prize winner Clemente di Caprio), an amateur radio satellite finder, a water meter, an educational platform for IoT applications and a ‘CamBot’ camera-equipped robot, marvels Brian Fuller, ARM.

This year’s Nordic Semiconductor Global Tech Tour will start next month, and John Leonard, ARM, has details of how to register, plus more about this year’s focus – the nRF52 Series Bluetooth Smart SoC.

Offering an alternative to the ‘big box’ emulation model, Doug Amos, Aldec, explains FPGA-based emulation.

Justin Nescott, Ansys, has dug out some great stories from the world of technology, from the UK’s Bloodhound project and the sleek vehicle’s speed record attempt, to a story published by Gizmag about how drones created a bridge – with video proof that it is walkable.

A review of the 2015 Imagination Summit in Taiwan earlier this month is provided by Vicky Hewlett. The report includes some photos from the event, of attendees and speakers at Hsinchu and Taipei.

It is with undeniable glee that Zvi Or-Bach, MonolithIC 3D, announces that the company has been invited to a panel session titled “Monolithic 3D: Will it Happen and if so…” at the IEEE 3D-Test Workshop on October 9, 2015. It is not all about the company, but a discussion of the technology challenge – with the teaser of the unveiling of a ‘game changer’ technology.

A review of this month’s TSMC Open Innovation Platform (OIP) Ecosystem Forum is presented in the blog by Christine Young, Cadence, with observations from Rick Cassidy, TSMC North America, on automotive, IoT and foundry collaboration.

How big is wearable technology? ponders Ricardo Anguiano, Mentor Graphics. Unwrapping a development kit, he provides a link to Nucleus RTOS and wearable devices to help explain what’s wearable and what’s not.

Graham Bell, Real Intent, discusses a brief history of Calypto Design Systems, recently acquired by Mentor Graphics, and what the change of ownership means for existing partners.

Michael Posner, Synopsys, begins a mini-series of blogs about the HAPS-80 with ProtoCompiler, starting with a focus on the design flow and time constraints. He provides many helpful illustrations. (The run-on piece about a visit to the tech museum in Shanghai shows how he spends his free time: seeking out robots!)

Caroline Hayes, Senior Editor

Deeper Dive – Wed. April 30 2014

Wednesday, April 30th, 2014

The gang of three, or the Grand Alliance, refers to the co-operation of foundry, IP and EDA companies to make 14nm FinFET a reality. Caroline Hayes, Senior Editor, asked Steve Carlson, director, Office of the Chief Strategy Officer, Cadence Design Systems, what was required to bring about FinFET harmony.

What foundry support is needed for any chip maker looking to develop 14/16nm FinFET?
SC: The foundry needs to supply a complete enablement kit. This includes traditional PDKs (process design kits), along with the libraries and technology/rule files for synthesis, design-for-test, extraction, place and route, EM, IR, self-heat, ESD, power and timing sign-off, DFM and physical rule checking.

Put another way, enablement content spans from transistor-level support up through complex SoC design. To get to the production phase of the enablement roll-out, there have been several tape-outs and test chips of complex SoCs specifically architected to mimic the needs of the early adopters.

What IP technology is needed?
SC: There are many IPs that would be useful in accelerating the development of a new 14/16nm SoC. First and foremost, getting the cell libraries (at least for use as a starting point) is critical. Along with that, many complex high-speed interface IPs, such as SerDes, are very useful.
If called for architecturally, processor IP and standard interface IP make a lot of sense to buy, versus make.

What is needed to develop an efficient ecosystem for 14/16nm FinFET?
SC: TSMC’s chairman [Morris Chang] has talked about the “grand alliance” – the inclusion of the foundry, IP and EDA partners in a process of early collaborative co-optimization. This co-optimization process gets the new process to production readiness sooner, with known characteristics for key industry-favored IP, and ensures that the tool flows will deliver on the performance, power, and area promise of the new node.

EDA (Cadence) has made some critical contributions in the roll-out of enablement for FinFET:
We have solved technology challenges such as the sign-off accuracy demanded by 14/16nm – to within 2 to 3% of SPICE on all sign-off tools (Tempus, Voltus, QRC, etc.). We have also brought about the low Vdd operation that 14/16nm allows, with its challenges in terms of optimization and sign-off.

Other challenges met and solved include improving routability for the small standard cell size (7.5 tracks).

There are multiple challenges we are meeting today. One is hold time. This is critical, especially with low Vdd, and it is supported at different stages in the design and sign-off flow.
There is also signal EM optimization, plus technology challenges to meet 14/16nm requirements in terms of placement rules and routing rules.

Assuming that 14/16nm FinFET will be used to exploit its dielectric isolation, where do you envisage it will be used?
SC: SOI will continue to fill niche applications and is very unlikely to unseat bulk CMOS. FinFET on SOI may have some advantage over FinFET on bulk for both leakage power and radiation hardness, so military and possibly certain safety-sensitive applications (maybe automotive) may choose FinFET on SOI.

Deeper Dive – Dec. 05

Thursday, December 5th, 2013

By Caroline Hayes, Senior Editor

The twists and turns of FinFET

In an earlier Deeper Dive (Nov. 21) we looked at how TSMC’s 16nm FinFET reference design was encouraging harmony among teams, as they work together to verify designs and accommodate the three dimensional transistor structure. In this edition, members of the design community are asked about new challenges 16nm FinFET raises, such as double patterning and IP validation.

There are three key challenges for EDA tools posed by 16nm FinFET, says Cadence’s Steve Carlson, director of marketing. On wire delays he is despondent: “Wire delays have been dominated by increased net resistance, and at 16nm, it’s only getting exponentially worse,” he begins. There are also new challenges, he continues, identifying pin access as a new critical design closure metric. There is a conundrum in the solution. “Double patterning techniques – critical to ultra-deep submicron fabrication – are leveraged to get the maximum possible density of tracks in lower metal layers,” he reasons, “but this makes it harder to undertake graceful via spacing and via cuts.”

On the extraction front, the challenge is to extend FinFET RC parasitic models to be closer to those extracted using a field solver, he continues, but the list does not end there. He also points out that the analog waveform effects of extremely low VDD can cause problems for designers trying to achieve accurate timing and design closure. “There are new challenges for physical design and verification due to double patterning,” he adds, warming to the theme. He warns that the number of design rules is “exploding exponentially, as well as the number of parasitics”. For layout designers and manufacturing engineers this causes problems regarding DFM (design for manufacturing). He explains: “Double patterning and FinFET devices affect several areas of signoff, including extraction, DFM and timing. It requires additional DFM lithography checks and planarization checks,” he concludes.

The list of challenges may be formidable, but it is not all downbeat. Carlson went on to explain how designers are adapting techniques, being inventive and benefitting from a choice of options to meet them.

First, the wire delays. In upper metal layers the wire delays can be up to 10 times less than those on the lower layers, which means there is a significant timing gain from routing long or critical nets on the upper layers. A menu of wire thicknesses allows designers to pick the optimum thickness depending on where they are in the metal stack. “However,” warns Carlson, “there are limited routing resources on these upper metal layers, due to the presence of power or signal nets”. This can cause congestion and routing problems if not dealt with.

Turning to pin access, the problem is that to via down to one pin can create a halo effect that locks pin access to the neighboring pin, he explains, making it extremely difficult to get the design to route. “The congestion can be low, but if your local pin density gets out of hand, the design won’t close, it won’t route. As a result, careful control of pin densities during cell placement, and global pin-access planning during detail routing, can have a big impact on achievable design area.”

Other problem-solving examples are to extend 2.5D models to be almost 2.9D, “to be as accurate as a field solver,” says Carlson, “and to keep runtime as low as possible compared to 28nm designs.” Bringing the RC of the FinFET into the designer’s platform as early as possible is also a sound measure, as RCs have twice the impact on delay at 16nm as they do at 20nm.
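As a first-order illustration of why net resistance dominates – a textbook distributed-RC approximation, not the sign-off engines’ actual calculation – the Elmore delay of a uniform wire grows with the square of its length and linearly with its per-unit-length resistance, which is why moving a long, critical net onto thick upper-layer metal with lower resistance buys back so much delay:

$$ t_{\text{wire}} \approx \tfrac{1}{2}\, r\, c\, L^{2} $$

Here r and c are the resistance and capacitance per unit length and L is the net length.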

Double patterning is also touched on by Mentor’s Arvind Narayanan, product marketing manager, Place & Route division. It is, he says, a part of any design using FinFETs at 20nm or below, and there are extensive place and route layout restrictions at every stage of the flow, including placement, routing and optimization. “There are extensive place and route layout rules that tools must honor,” he says, naming fin grid alignment for standard cells and macros during placement, Vt min-area spacing rules, continuous OD rules and source-drain abutment rules. “These spacing rules are imposed by the process requirements and directly impact the placement and optimization engines in the router.” Naturally, the place and route extraction engine must be able to accurately model the 3D parasitics of FinFETs but, warns Narayanan, implementation tools should be able to adhere to these additional constraints – without sacrificing power, performance or area.

The more complex FinFET structures need updated tools that are able to support more complex design rules than might be required for planar transistors. They require, for example, higher accuracy for reliable simulation and verification. He also notes that designers are finding the need to perform extraction at more corners, which prompts the demand for faster extraction processing.

“The interplay between local IPs and double patterning, pin access and noise, all become more complex at 16nm,” says Carlson. In addition to the expected increased complexities at each new node, IP validation is more sensitive to context. “Formerly, each EDA tool ran through a qualification suite and that was the end of the story.” The qualification tests have become more elaborate but EDA tools cannot be validated one at a time, he insists. “They must be validated in the context of a flow, but they must also be validated on a flow that is applied to a complex design that is representative of the first designs that will be going into production on the new process.”

Narayanan notes that a new set of design rules has been written for 16nm FinFET, “and some of these rules required new types of measurements and analysis by the physical design and verification tools,” he says.

He recalls that once TSMC defined the initial set of design rules, it created a regression test suite to validate that physical verification decks produce the expected results, both at initial release and over time. “This test suite was developed by iteratively creating test chips, running the design rules against test chips, observing results, identifying and analyzing errors and updating the design rules until [TSMC] determined that the flow was acceptable for initial production”. It was down to EDA companies to extend tool capabilities to meet new measurement and analysis requirements.

Initially, the foundry focused on accuracy, he reveals, and Mentor collaborated, bringing experience of optimizing rules and techniques to achieve accurate results in the fastest possible time, with the smallest memory requirements. He says that the collaboration on optimization for the 16nm node is “progressing even faster than 20nm, in spite of the added complexity that 16nm presents”.

The industry is working hard to create tools to develop 16nm designs – but to what end? Carlson makes an interesting point: looking at enablement of 16nm FinFET also means looking at the expense the overall ecosystem must bear to bring a new process node into production (see illustration: source IBS May 2011). He refers to costs beyond building a facility, citing the process R&D cost for the node and ‘enablement collateral’. Aside from the bricks-and-mortar costs, he says, “Fabless companies creating new SoC platforms face tremendous design costs (including masks).” The implication of rising design costs, he says, is that advanced nodes will be used for high-margin, or very high-volume, products.

Deeper Dive – FinFET Validation Tools

Thursday, November 21st, 2013

By Caroline Hayes, Senior Editor

The industry prepares to embrace the all-encompassing FinFET validation model – a view from the supply chain.

TSMC’s 16nm FinFET reference flow has made headlines recently, and EDA and IP companies are responding with supporting products. It is not a simple support role, however; it demands a rigorous, all-encompassing model.

In response, Apache Design has announced that its RedHawk and Totem have completed methodology innovations for the three-dimensional transistor architecture, and TSMC has certified Mentor’s Olympus-SoC place and route system and its Calibre physical verification platform.

The first reaction has to be one of surprise at the intense interest in FinFET. Apache Design’s vice president of product engineering & customer support, Aveek Sarkar, provides the answer: “[FinFET] can manage voltage closely and lower the supply voltage considerably,” he told System Level Design. “Power is a quadratic formula, so to lower voltage from 1V to 0.7V reduces the dynamic power by 50%,” he adds, explaining the appeal of FinFET.
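Sarkar’s arithmetic follows from the standard dynamic-power relation: with switching activity, capacitance and frequency held fixed, dynamic power scales with the square of the supply voltage, so the 1V-to-0.7V drop he describes cuts dynamic power to roughly half:

$$ P_{\text{dyn}} = \alpha\, C\, V_{DD}^{2}\, f, \qquad \left(\frac{0.7\,\text{V}}{1.0\,\text{V}}\right)^{2} = 0.49 $$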

System Level Design asked whether lower supply voltages can outweigh the obstacles FinFET poses to EDA. It has a more complex structure, with more restrictive design rules than planar structures, and poses challenges in extraction. It seems these have not proved to be deterrents, judging by the industry’s activity.

For example, TSMC has given certification to the Mentor Olympus-SoC place and route system, and its Calibre physical verification platform. Arvind Narayanan, product marketing manager, Place & Route division, Mentor Graphics, explains that Olympus-SoC for 16nm FinFET enables efficient double patterning (DP) and timing closure. “It also has comprehensive support for new design rule checks and multi-patterning rules, fin grid alignment for standard cells and macros during placement, and Vt min-area rule and implant layer support during placement,” he adds.

Explaining the Calibre product, Michael Buehler-Garcia, senior director, marketing, Calibre Design Solutions, Mentor Graphics, tells System Level Design that it supports 16nm FinFET advanced design rule definition and litho hotspot pre-filtering. The Calibre SmartFill facility has been enhanced to support the TSMC-specified filling requirements for FinFET transistors, including support for density constraints and multilayer structures needed for FinFET layers.
Significantly, SmartFill also provides double patterning support for back end layers and, says Buehler-Garcia, “goes beyond simple polygons to automatically insert fill cells into a layout based on analysis of the design”.

He continues to point out the new challenges of 16nm FinFET designs. “[They] require careful checking for layout features that cannot be implemented with current lithography systems—so-called litho hotspots. They also require much more complex and accurate fill structures to help ensure planarity and to also help deal with issues in etch, lithography, stress and rapid thermal annealing (RTA) processes”. The value of place and route layout tools will be in implementing fin grid alignment for standard cells and macros during placement, he notes, as well as in Vt min-area rules and implant layer support during placement.

Apache has enhanced its PathFinder, which verifies ESD (electrostatic discharge) at the SoC level, for the technology. Since FinFET lacks snapback protection, diodes have to be used to protect against ESD. However, using diodes brings the drawback of degraded performance due to a higher current density. FinFET means that instead of one supply domain there are now hundreds of voltage islands across the chip, says Sarkar, explaining Apache’s approach. These islands have to be protected individually, and the designer needs to be able to predict what problems will happen on each of the islands, which means that layout-based SoC sign-off is critical, he concludes. “It is no longer a visual check, but electrical analysis,” he says.

TSMC and Mentor Graphics introduced a fill ECO (Engineering Change Order) flow as part of the N16 reference flow. This enables incremental fill changes, which reduce run time and file size while supporting last minute engineering changes. “By preserving the vast majority of the fill, the ECO flow limits the timing impact of fill to the area around the ECO changes,” says Buehler-Garcia.

Sarkar agrees that FinFET requires more attention to fill and its impact on capacity, and the time needed for design and verification. The company works with the foundry for certification to ensure that the tool is ready in terms of capacity, performance and turnaround time. However, he warns that accuracy for the full chip is only possible by simulating the whole chip in the domain analysis. This means examining how much change is needed, and where the voltage is coming from. “Every piece has to be simulated accurately,” he says, predicting more co-design where different aspects will need to be brought into the design flow. Expanding on the theme, he says that one environment may focus on the package and the chip simultaneously, while another environment may include the package, the chip and the system. “There will be less individual silo-based analysis and more simulations that look across multiple domains.”

For Buehler-Garcia, the main difference for 16nm FinFET was that new structures brought a new set of requirements that had to be developed and carefully verified throughout the certification process. He describes the collaboration between the foundry and the company as “an evolutionary step, not revolutionary”.

In the next Deeper Dive (December 5) System Level Design will look at the role of double patterning in FinFET processes and how different EDA tools address its IP validation.

Systems News – October 22

Tuesday, October 22nd, 2013

Everyone is getting excited about ARM TechCon, held at the Santa Clara Convention Center from October 29 to October 31. Both Cadence and Videantis announced they will be there, at booth 600 and booth 617, respectively.
Videantis will be showing video/vision demos and video coding and computer vision capabilities. Visitors can see its full-HD multi-standard video encode/decode and computer vision demonstrations and discuss how to future-proof SoCs for the latest standards in video coding and computer vision algorithms.

Cadence will be active at the conference, with chief technology advisor, Jim Ready, taking part in the panel discussion, “The Future of Collaborative Embedded SW Development, from the Viewpoint of One Technology Chain Gang.” (3:30pm – 4:15pm Oct. 30, at the Expo Theater.) Other colleagues will be presenting 15 papers with ARM, customers and partners.

The company will also give the first public demonstrations of its IO-SSO Analysis Suite at the EPEPS (Electrical Performance of Electronic Packaging and Systems) conference, October 27 to October 30 in San Jose.
It provides system-level simultaneous switching noise analysis, addressing coupled signal, power and ground networks across chips, packages and PCBs, and complements the company’s implementation tools for multi-fabric extraction, system-level connectivity and high-speed DDR interface simulation.

Synopsys has extended DesignWare with complete 40G Ethernet IP, including DesignWare Enterprise 40G Ethernet Controller IP, Enterprise 10G PHY and Verification IP. It supports 1-, 2.5-, 10- and 40G network speeds, allowing designs to migrate to faster data rates, and supports the IEEE 802.3 specifications for Ethernet-based LANs. To conserve power in data centers it also offers Energy-Efficient Ethernet and Wake-on-LAN features.

The first heterogeneous 3D ICs in production were announced by Xilinx, with the production release of the Virtex-7 HT family. The 28nm devices were developed on TSMC’s Chip-on-Wafer-on-Substrate (CoWoS) 3D IC process, which means that the company’s full line-up of 28nm chips is now in full production.

Blog Review October 10 2013

Thursday, October 10th, 2013

By Caroline Hayes

At the TSMC Open Innovation Platform (TSMC OIP) Ecosystem Forum, Richard Goering hears that 16nm FinFET design and 3D ICs are moving closer to volume production. Dr Cliff Hou, vice president, R&D, TSMC, warned that although EDA tools and flows have been qualified, foundation IP has been validated and interface IP is under development, one tool does not guarantee success, calling for a “more rigorous validation methodology”.

Steve Favre was also at TSMC OIP, discussing 450mm wafers. He wondered why EUV (extreme ultraviolet) patterning has become a gating item for the move to 450mm, and how the two are related. Money, as usual, is the answer. It would cost billions of dollars to build a 450mm wafer fab and billions to move to EUV – why pay twice?

Lakshmi Mandyam from ARM’s smart connected community reflects on her journey from the power-hungry, boot-up slow laptop to a touch-sensor, multi-screen tablet. She ends by marking the anniversary of her laptop-free life. Maybe she should start an LA (Laptop Anonymous) support group?

Chip Design’s John Blyler cringes with embarrassment while following up a nanotechnology lead at IEF in Dublin, Ireland. The lapse of government funding is proclaimed on the National Institute of Standards and Technology website, accounting for its closure and that of its affiliated websites. He turns to the French for further research, over a croissant – naturellement.

Pity Brian Fuller, caught off-guard by the usually genial analyst Gary Smith in an interview for Unhinged. Smith urged EDA vendors to be bolder, pooh-poohed the idea of industry consolidation, held forth on the power of the press and then complimented John Cooley. What is the world coming to?

Michael Posner sounds the alarm that “My RTL is an alien”, neatly timed to coincide with a Synopsys white paper detailing ways to accelerate FPGA (field programmable gate array)-based prototyping. With over 70% of today’s ASICs and systems-on-chip (SoCs) being prototyped in an FPGA, designers are looking for ways to ease the creation of FPGA-based prototypes directly from the ASIC design source files.

Gabe Moretti is feeling nostalgic in preparation for the Back to the Future Dinner organized by the EDA Consortium at the Computer History Museum, Mountain View, California, this month.

In this blog he remembers the early days of EDA, when it was called CAD (computer aided design) and rubylith was cut by hand. Those were the days!

WEEK IN REVIEW: October 3 2013

Friday, October 4th, 2013

Caroline Hayes

Fujifilm and imec have developed photoresist technology for organic semiconductors that enables submicron patterning on large substrates without damage to the organic materials. It could prove to be a cost-effective alternative to current methods, i.e. shadow masking and inkjet printing, which have not proved suitable for high-resolution patterns on large substrates. Photolithography is successfully used in patterning silicon semiconductors, but conventional photoresist dissolves the organic semiconductor material during processing. OPDs (organic photo detectors) were produced at sizes down to 200µm x 200µm without degradation. OLEDs (organic light emitting diodes) were also produced, at a pitch of 20µm, and were found to emit uniform light.

Synopsys released a new TLM (transaction level model) subsystem flow and Eclipse IDE (integrated development environment) integration to speed its Virtualizer Development Kit. Virtualizer 13.06 enables and disables components of the design to allow users to optimize simulation performance during software debug.

Celebration for Cadence Design Systems as it accepted not one but three Partner of the Year awards from TSMC at this month’s Open Innovation Platform forum. They were for the Analog/Mixed-Signal IP, the 16nm FinFET Design Infrastructure, and Joint Delivery of 3D-IC Design Solution categories.

NAND flash devices are looking beyond conventional semiconductor manufacturing techniques, reports IHS. Nearly two thirds (65.2%) of all NAND memory chips shipped worldwide by 2017 will be produced using 3D processes, according to a Flash Dynamics brief; at present, the figure is less than 1%. Time is running out for planar semiconductor technology capacity, leaving 3D manufacturing as the answer to building higher-density NAND products.

EDA-IP UPDATE: 2D materials store energy; Applied Materials-Tokyo Electron, the low-down on semi spend; (Ni/Cu) plating boosts solar cell efficiency

Monday, September 30th, 2013

Researchers find “massive” amounts of energy between layers of 2D materials

Layered MXene (with added intercalated ions illustrated between layers).
Photo credit: M. Lukatskaya, Y. Dall’Agnese, E. Ren, Y. Gogotsi

Materials that are as thin as a single atom have the potential to store energy, researchers at Drexel University have discovered.

Three years ago, Dr. Michel W. Barsoum and Dr. Yury Gogotsi, both professors in Drexel’s College of Engineering, found that atomically thin, two-dimensional materials – similar to graphene – have good electrical conductivity and a hydrophilic surface, one that can hold liquids. They have since investigated these “MXenes” and report findings that they believe can push materials’ storage capacities to new levels, while also allowing for their use in flexible devices.

Applied Materials and Tokyo Electron merger, what’s in it for IP

Although described as a merger, Applied Materials will own 68% of the new company, reports Reuters.

Together the two could create the world’s largest semiconductor equipment company in terms of sales, worth an estimated $29bn. As a result, the deal is to be investigated by anti-trust regulators.

Chipestimate reports that one effect of the merger might be to drive automation of chip design verification, which will increase the already low costs of EDA tools. The other side of the coin could be a swifter approval process for soft IP standards, as consolidation will mean fewer companies to determine which IP design standards can be used.

The equipment companies’ customer base is shrinking. US semiconductor companies have either sold capacity or chosen to outsource manufacturing to foundries like TSMC in Asia. For many observers, the answer could be held by Moore’s Law and what the next process node determines.

Applied Materials CEO Gary Dickerson will be chief executive of the new company and Tokyo Electron chief executive, Tetsuro Higashi, will become chairman.

Japan semiconductor spend loses ground, but is still ahead of Europe

A reversal of fortunes sees Japan’s semiconductor companies lose ground

IC Insights has reviewed the semiconductor industry over the last 30 years and found that Japan’s share of capital spending is a lowly 7% in the first half of this year. In 1985, Japanese companies accounted for 51% of capital spend; since then, companies such as NEC, Hitachi and Matsushita have disappeared off the semiconductor map.

The analyst company also identifies former giants such as Sanyo, which was acquired by ON Semiconductor; Sony, which cut semiconductor capital spending and announced its move to an asset-lite strategy for ICs; and Fujitsu, which sold its wireless group to Intel, sold its MCU and analog IC business to Spansion, and is consolidating its system LSI business with Panasonic’s – not forgetting Mitsubishi.

The report also shows that from 2000 to the first half of this year, companies in China, South Korea, Taiwan, and Singapore invested in wafer fabs and advanced process technology.

And the rest of the world? North America accounted for 37% of capital spending in 1H 2013, mostly spending by Intel, GlobalFoundries and Micron; the region’s share has remained around 29%-33% since 1990.

The three large European semiconductor suppliers each operate a fab-lite or asset-lite strategy. As a result, says IC Insights, Europe’s share of semiconductor capital spending is 3% of total capex in 1H13. The report forecasts that capex spending by STMicroelectronics, Infineon, and NXP – and all other European semiconductor suppliers combined – will amount to less than $1.5 billion in 2013. In comparison, nine semiconductor companies – headed by Samsung, Intel, and TSMC – are forecast to spend more money than Europe will spend collectively this year.

Imec and Meco present 20% efficiency in silicon solar cells

Shown at the European Photovoltaic Solar Energy Conference and Exhibition (EUPVSEC), large i-PERC-type silicon solar cells achieve 20.5% average efficiency.

At this week’s EUPVSEC in Paris, France, imec and Meco, a supplier of plating equipment, will present large-area (156x156mm²) i-PERC-type silicon solar cells. They use nickel/copper (Ni/Cu) plating for the front contacts and achieve 20.5% average efficiency with the plating on p-type Czochralski silicon (Cz-Si) material.

The companies achieved a maximum efficiency of 20.7% (confirmed by ISE CalLab). The improvement in efficiency is also exciting because the plated contacts are less expensive than those of screen-printed PERC cells.

The cells were processed on imec’s solar cell pilot line, using Meco’s inline plating tool to deposit the Ni/Cu front contacts. The metallization process for the Ni/Cu stack included ultraviolet laser ablation, sequential in-line plating of the metal layers and contact annealing.

WEEK IN REVIEW: September 27 2013

Friday, September 27th, 2013

By Caroline Hayes

FinFET focus for TSMC and partners; CMOS scaling research program is extended; carbon nanotubes computing breakthrough

FinFET continues to be a focus for TSMC, which has released three silicon-validated reference flows with the Open Innovation Platform (OIP) to enable 16FinFET SoC designs and 3D chip stacking packages. The first is the 16FinFET Digital Reference Flow, providing technology support for post-planar design challenges, including extraction, quantized pitch placement, low Vdd operation, electromigration and power management. The second is the 16FinFET Custom Design Reference Flow, with custom transistor-level design and verification. Finally, there is the 3D IC Reference Flow. The foundry has announced a 3D-IC reference flow with Cadence Design Systems, and a reference flow jointly developed with Synopsys, built on tool certification currently in the foundry’s V0.5 Design Rule Manual and SPICE models. Collaboration will continue with device modeling and parasitic extraction, place and route, custom design, static timing analysis, circuit simulation, rail analysis, and physical and transistor verification technologies in the Galaxy Implementation Platform.


Still with collaboration, imec and Micron Technology have extended their strategic research collaboration on advanced CMOS scaling for a further three years.

Carbon nanotubes have been used by a team of engineers at Stanford University to build a basic computer. This is, says Professor Subhasish Mitra, one of the research leaders, one of the first demonstrations of a complete digital system using this technology, which could succeed the silicon transistor in the complex devices driving digital electronic systems, as silicon chips reach physical limits hampering size, speed and cost.

The Stanford researchers created a powerful algorithm that maps out a circuit layout guaranteed to work no matter whether or where the carbon nanotubes deviate from the desired straight lines, and used it to assemble a basic computer with 178 transistors. (The limit is due to the university’s chip-making facilities rather than an industrial fabrication process.)

Power Analysis and Management

Thursday, August 25th, 2016

Gabe Moretti, Senior Editor

As transistors shrink and change shape, power management becomes more critical. As I polled various DA vendors, it became clear that most offer solutions for the analysis of power requirements and software-based methods to manage power use; at least one offers a hardware-based solution. I struggled to find a way to coherently present their responses to my questions, but decided that extracting significant pieces of their written responses would not be fair. So, I organized a type of virtual round table, and I will present their complete answers in this article.

The companies submitting responses are: Cadence, Flex Logix, Mentor, Silvaco, and Sonics. Some of the companies presented their own understanding of the problem. I am including that portion of their contribution as well, to give better context to the description of the solution.

Cadence

Krishna Balachandran, product management director for low power solutions at Cadence, provided the following contribution.

Not too long ago, low power design and verification involved coding a power intent file and driving a digital design from RTL to final place-and-route and having each tool in the flow understand and correctly and consistently interpret the directives specified in the power intent file. Low power techniques such as power shutdown, retention, standby and Dynamic Voltage and Frequency Scaling (DVFS) had to be supported in the power formats and EDA tools. Today, the semiconductor industry has coalesced around CPF and the IEEE 1801 standard that evolved from UPF and includes the CPF contributions as well. However, this has not equated to problem solved and case closed. Far from it! Challenges abound. Power reduction and low power design which was the bailiwick of the mobile designers has moved front-and-center into almost every semiconductor design imaginable – be it a mixed-signal device targeting the IoT market or large chips targeting the datacenter and storage markets. With competition mounting, differentiation comes in the form of better (lower) power-consuming end-products and systems.

There is an increasing realization that power needs to be tackled at the earliest stages in the design cycle. Waiting to measure power after physical implementation is usually a recipe for multiple, non-converging iterations because power is fundamentally a trade-off vs. area or timing or both. The traditional methodology of optimizing for timing and area first and then dealing with power optimization is causing power specifications to be non-convergent and product schedules to slip. However, having a good handle on power at the architecture or RTL stage of design is not a guarantee that the numbers will meet the target after implementation. In other words, it is becoming imperative to start early and stay focused on managing power at every step.

It goes without saying that what can be measured accurately can be well-optimized. Therefore, the first and necessary step to managing power is to get an accurate and consistent picture of power consumption from RTL to gate level. Most EDA flows in use today use a combination of different power estimation/analysis tools at different stages of the design. Many of the available power estimation tools at the RTL stage of design suffer from inaccuracies because physical effects like timing, clock networks, library information and place-and-route optimizations are not factored in, leading to overly optimistic or pessimistic estimates. Popular implementation tools (synthesis and place-and-route) perform optimizations based on measures of power using built-in power analysis engines. There is poor correlation between these disparate engines leading to unnecessary or incorrect optimizations. In addition, mixed EDA-vendor flows are plagued by different algorithms to compute power, making the designer’s task of understanding where the problem is and managing it much more complicated. Further complications arise from implementation algorithms that are not concurrently optimized for power along with area and timing. Finally, name-mapping issues prevent application of RTL activity to gate-level netlists, increasing the burden on signoff engineers to re-create gate-level activity to avoid poor annotation and incorrect power results.

To get a good handle on the power problem, the industry needs a highly accurate but fast power estimation engine at the RTL stage that helps evaluate and guide the design’s micro-architecture. That requires the tool to be cognizant of physical effects – timing, libraries, clock networks, even place-and-route optimizations at the RTL stage. To avoid correlation problems, the same engine should also measure power after synthesis and place-and-route. An additional requirement to simplify and shorten the design flow is for such a tool to be able to bridge the system-design world with signoff and to help apply RTL activity to a gate-level netlist without any compromise. Implementation tools, such as synthesis and place-and-route, need to have a “concurrent power” approach – that is, consider power as a fundamental cost-factor in each optimization step side-by-side with area and timing. With access to such tools, semiconductor companies can put together flows that meet the challenges of power at each stage and eliminate iterations, leading to a faster time-to-market.
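To make that bookkeeping concrete, here is a deliberately minimal sketch of the per-net arithmetic behind an RTL-stage dynamic power estimate; the net names, toggle rates and capacitances are invented, and, as Balachandran stresses, a production engine must also fold in clock networks, library data and place-and-route effects that this toy ignores.

```python
# Toy RTL-stage dynamic power estimate: sum alpha * C * V^2 * f over nets.
# Illustrative only -- real estimators fold in clock trees, libraries and
# physical effects, which is exactly the accuracy gap discussed above.

VDD = 0.8      # supply voltage in volts (assumed)
F_CLK = 1.0e9  # clock frequency in Hz (assumed)

# Per net: (toggle rate alpha per cycle, load capacitance in farads) -- invented
nets = {
    "alu_result":    (0.30, 12e-15),
    "clk_gated_bus": (0.05, 45e-15),
    "fsm_state":     (0.10, 8e-15),
}

def dynamic_power(nets, vdd, f_clk):
    """Sum alpha * C * V^2 * f over all nets; returns watts."""
    return sum(alpha * cap * vdd**2 * f_clk for alpha, cap in nets.values())

print(f"Estimated dynamic power: {dynamic_power(nets, VDD, F_CLK) * 1e6:.2f} uW")
```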

Flex Logix

Geoff Tate, co-founder and CEO of Flex Logix, is the author of the following contribution. The company is a relatively new entry in the embedded FPGA market and uses TSMC as a foundry. Microcontrollers and IoT devices being designed in TSMC’s new ultra-low power 40nm process (TSMC 40ULP) need:

• The flexibility to reconfigure critical RTL, such as I/O

• The ability to achieve performance at the lowest power

Flex Logix has designed a family of embedded FPGAs to meet this need. The validation chip to prove out the IP is in wafer fab now.

Many products fabricated with this process are battery operated: there are brief periods of performance-sensitive activity interspersed with long periods of very low power mode while waiting for an interrupt.

Flex Logix’s embedded FPGA core provides options to enable customers to optimize power and performance based on their application requirements.

To address this requirement, the following architectural enhancements were included in the embedded FPGA core:

• Power management, containing five different power states (a hypothetical policy for choosing among them is sketched after this list):

  • Off state, where the EFLX core is completely powered off.
  • Deep Sleep state, where the VDDH supply to the EFLX core can be lowered from the nominal 0.9V/1.1V to 0.5V while retaining state.
  • Sleep state, which gates the supply (VDDL) that controls all the performance logic, such as the LUTs, DSP and interconnect switches of the embedded FPGA, while retaining state. The latency to exit Sleep is shorter than that to exit Deep Sleep.
  • Idle state, which idles the clocks to cut power but is ready to move into the Dynamic state more quickly than from Sleep.
  • Dynamic state, where power is the highest of the power management states but latency is the shortest; used during periods of performance-sensitive activity.
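As flagged above, here is a hypothetical selection policy – not Flex Logix’s hardware – for trading wake-up latency against power when choosing among such states; the state names follow the list, while the latency and power numbers are invented for illustration.

```python
# Hypothetical policy for picking a power state from the expected idle time.
# State names follow the list above; latency/power numbers are invented.

POWER_STATES = [
    # (name, wake_latency_us, relative_power): deeper states are cheaper but slower to exit
    ("Dynamic",    0.0,    1.00),
    ("Idle",       1.0,    0.30),
    ("Sleep",      50.0,   0.05),
    ("Deep Sleep", 500.0,  0.01),
    ("Off",        5000.0, 0.00),
]

def pick_state(expected_idle_us, wake_deadline_us):
    """Choose the lowest-power state whose exit latency fits the wake deadline
    and whose entry is worthwhile for the expected idle period."""
    best = POWER_STATES[0]
    for name, latency, power in POWER_STATES:
        worthwhile = latency * 2 <= expected_idle_us  # crude break-even heuristic
        if latency <= wake_deadline_us and worthwhile and power <= best[2]:
            best = (name, latency, power)
    return best[0]

print(pick_state(expected_idle_us=200.0, wake_deadline_us=100.0))      # Sleep
print(pick_state(expected_idle_us=10_000.0, wake_deadline_us=1000.0))  # Deep Sleep
```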

The other architectural features available in the EFLX-100 embedded FPGA to optimize power-performance are:

• State retention for all flip-flops and configuration bits at voltages well below the operating range.

• The ability to directly control body bias voltage levels (Vbp, Vbn). Controlling the body bias further controls leakage power.

• Five combinations of threshold voltage (VT) devices to optimize power and performance for the static/performance logic of the embedded FPGA. The higher the threshold voltage (eHVT, HVT), the lower the leakage power and the lower the performance; the lower the threshold voltage (SVT), the higher the leakage and the higher the performance:

  • eHVT/eHVT
  • HVT/HVT
  • HVT/SVT
  • eHVT/SVT
  • SVT/SVT

In addition to the architectural features, various EDA flows and tools are used to optimize the power, performance and area (PPA) of the Flex Logix embedded FPGA:

• The embedded FPGA was implemented using a combination of standard floor-planning and P&R tools to place and route the configuration cells, DSP and LUT macros and network fabric switches. This resulted in higher density, reducing IR drops and the need for larger drive strengths, thereby optimizing power.

• Longer (non-minimum) channel length devices were designed and used, which further helps reduce leakage power with minimal to no impact on performance.

• The EFLX-100 core was designed with an optimized power grid to make effective use of metal resources for power and signal routing. Optimal power grids reduce DC/AC supply drops, which further increases performance.

Mentor

Arvind Narayanan, Architect, Product Marketing, Mentor Graphics, contributed the following viewpoint.

One of the biggest challenges in IC design at advanced nodes is the complexity inherent in effective power management. Whether the goal is to reduce on-chip power dissipation or to provide longer battery life, power is taking its place alongside timing and area as a critical design dimension.

While low-power design starts at the architectural level, the low-power design techniques continue through RTL synthesis and place and route. Digital implementation tools must interpret the power intent and implement the design correctly, from power aware RTL synthesis, placement of special cells, routing and optimization across power domains in the presence of multiple corners, modes, and power states.

With the introduction of every new technology node, existing power constraints are tightened to optimize power consumption and maximize performance. The 3D transistors (FinFETs) introduced at smaller technology nodes have higher input pin capacitance than their planar counterparts, resulting in the dynamic power component being higher compared to leakage.

Power Reduction Strategies

A good strategy to reduce power consumption is to perform power optimization at multiple levels during the design flow, including software optimization, architecture selection, RTL-to-GDS implementation and process technology choices. The biggest power savings are usually obtained early in the development cycle, at the ESL and RTL stages (Fig 1). During the physical implementation stage there is comparatively less opportunity for power optimization, and hence choices made earlier in the design flow are critical. Technology selection, such as the device structure (FinFET, planar), the choice of device material (HiK, SOI) and technology node selection, all play a key role.

Figure 1. Power reduction opportunities at different stages of the design flow

Architecture selection

Studies have shown that only optimizations applied early in the design cycle, when a design’s architecture is not yet fixed, have the potential for radical power reduction. To make intelligent decisions in power optimization, the tools have to simultaneously consider all factors affecting power, and be applied early in the design cycle. Finding the best architecture enables designers to properly balance functionality, performance and power metrics.

RTL-to-GDS Power Reduction

There are a wide variety of low-power optimization techniques that can be utilized during RTL to GDS implementation for both dynamic and leakage power reduction. Some of these techniques are listed below.

RTL Design Space Exploration

During the early stages of the design, the RTL can be modified to employ architectural optimizations, such as replacing a single instantiation of a high-powered logic function with multiple instantiations of low-powered equivalents. A power-aware design environment should facilitate “what-if” exploration of different scenarios to evaluate the area/power/performance tradeoffs.

Multi-VDD Flow

Multi-voltage design, a popular technique to reduce total power, is a complex task because many blocks operate at different voltages, or are intermittently shut off. Level shifter and isolation cells need to be used on nets that cross domain boundaries if the supply voltages are different or if one of the blocks is being shut down. DVFS is another technique, in which the supply voltage and frequency can vary dynamically to save power. Power gating using multi-threshold CMOS (MTCMOS) switches involves switching off certain portions of an IC when that functionality is not required, then restoring power when that functionality is needed.

Figure 2. Multi-voltage layout shown in a screen shot from the Nitro-SoC™ place and route system.
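As a toy illustration of the DVFS trade-off – illustrative operating points, not any process’s actual tables – the controller’s job reduces to picking the lowest-energy voltage/frequency pair that still meets the workload’s deadline, since energy per cycle scales roughly with the square of the supply voltage:

```python
# Toy DVFS operating-point selection: pick the lowest-energy (V, f) pair that
# still finishes the workload before its deadline. All numbers are invented.

OPERATING_POINTS = [
    # (vdd_volts, freq_hz)
    (0.6, 0.4e9),
    (0.8, 0.8e9),
    (1.0, 1.2e9),
]

C_EFF = 1e-9  # effective switched capacitance per cycle, farads (assumed)

def pick_operating_point(cycles, deadline_s):
    """Return the (V, f) pair meeting the deadline with the least energy.
    Energy ~ cycles * C_EFF * V^2; runtime = cycles / f."""
    feasible = [(v, f) for v, f in OPERATING_POINTS if cycles / f <= deadline_s]
    if not feasible:
        raise ValueError("no operating point meets the deadline")
    return min(feasible, key=lambda vf: C_EFF * cycles * vf[0] ** 2)

v, f = pick_operating_point(cycles=2e8, deadline_s=0.5)
print(f"Run at {v} V / {f / 1e9:.1f} GHz")  # lowest V that still makes the deadline
```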

MCMM Based Power Optimization

Because each voltage supply and operational mode implies different timing and power constraints on the design, multi-voltage methodologies cause the number of design corners to increase exponentially with the addition of each domain or voltage island. The best solution is to analyze and optimize the design for all corners and modes concurrently. In other words, low-power design inherently requires true multi-corner/multi-mode (MCMM) optimization for both power and timing. The end result is that the design should meet timing and power requirements for all the mode/corner scenarios.
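The corner explosion is easy to see with a back-of-the-envelope enumeration; the mode, island and corner names below are invented, but the multiplication is the point:

```python
# Why multi-voltage design multiplies sign-off scenarios: every added mode or
# voltage island Cartesian-multiplies with the process/temperature corners.
from itertools import product

modes = ["functional", "sleep", "scan_test"]
voltage_islands = ["0.6V", "0.8V", "1.0V"]
process_corners = ["ss", "tt", "ff"]
temperatures = ["-40C", "25C", "125C"]

scenarios = list(product(modes, voltage_islands, process_corners, temperatures))
print(len(scenarios))  # 3 * 3 * 3 * 3 = 81 scenarios to close concurrently
```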

FinFET aware Power Optimization

A FinFET-aware power optimization flow requires technologies such as activity-driven placement, multi-bit flop support, clock data optimization, interleaved power optimization and activity-driven routing to ensure that the dynamic power reduction is optimal. The tools should be able to use transforms with objective costing to make trade-offs between dynamic power, leakage power, timing, and area for the best QoR.

Using a strategy that optimizes power at all stages of the design flow, especially the architecture stage, is critical for optimal power reduction. Architecture selection, along with the complete set of technologies for RTL-to-GDS implementation, greatly impacts the ability to effectively manage power.

Silvaco

Seena Shankar, Technical Marketing Manager, is the author of this contribution.

Problem:

Analysis of IR-drop, electromigration and thermal effects has traditionally been a significant bottleneck in the physical verification of transistor-level designs like analog circuits, high-speed IOs, custom digital blocks, memories and standard cells. From the 28nm node down, all designers are concerned about power, EM/IR and thermal issues. Even at the 180nm node, if you are doing high-current designs in LDMOS, then EM effects, rules and thermal issues need to be analyzed. The FinFET architecture has increased concerns regarding EM, IR and thermal effects, because of complex DFM rules and increased current and power density, and there is a higher probability of failure. EM/IR effects therefore need to be analyzed and managed even more carefully. This kind of analysis and testing usually occurs at the end of the design flow, and discovering these issues at that critical time makes it difficult to stick to the schedule and causes expensive rework. How can we resolve this problem?

Solution:

Power integrity issues must be addressed as early in the design cycle as possible, to avoid expensive design and silicon iterations. Silvaco’s InVar Prime is an early-design-stage power integrity analysis solution for layout engineers. Designers can estimate EM, IR and thermal conditions before the sign-off stage. It performs checks like early IR-drop analysis, checks of the resistive parameters of supply networks and point-to-point resistance checks, and also estimates current densities. It also helps in finding and fixing issues that are not detectable with a regular LVS check, like missing vias, isolated metal shapes, inconsistent labeling, and detour routing.

InVar Prime can be used for a broad range of designs including processors, wired and wireless network ICs, power ICs, sensors and displays. Its hierarchical methodology accurately models IR-drop, electromigration and thermal effects for designs ranging from a single block to full-chip. Its patented concurrent electro-thermal analysis performs simulation of multiple physical processes together. This is critical for today’s designs in order to capture important interactions between power and thermal 2D/3D profiles. The result is physical measurement-like accuracy with high speed, even on extremely large designs, and applicability to all process nodes including FinFET technologies.

InVar Prime requires the following inputs:

● Layout: GDSII

● Technology: ITF or iRCX

● Supplementary data: layer mapping file for GDSII, supply net names, locations and nominal values of voltage sources, and area-based current consumption for P/G nets

Figure 3. Reliability Analysis provided by InVar Prime

InVar Prime enables three types of analysis on a layout database: EM, IR and thermal. A layout engineer could start using InVar to help in the routing and planning of the power nets, VDD and VSS. IR analysis with InVar provides early feedback on how good the power routing is at that point. This type of early analysis flags potential issues that might otherwise appear only after fabrication and result in silicon re-spins.
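For a flavor of the early IR-drop arithmetic – a deliberately crude single-strap sketch, nothing like InVar Prime’s hierarchical electro-thermal solver – note that each segment of a power strap carries the summed current of everything downstream of it, so the drop accumulates toward the far end:

```python
# Crude IR-drop estimate along one power strap: cells tap current at intervals,
# and each upstream segment carries the sum of all downstream tap currents.

R_SEG = 0.05                                 # ohms per strap segment (assumed)
TAP_CURRENTS = [0.002, 0.004, 0.003, 0.005]  # amps drawn at each tap (assumed)

def ir_drop_per_tap(r_seg, tap_currents):
    """Cumulative voltage drop at each tap, walking away from the supply."""
    drops, v, remaining = [], 0.0, sum(tap_currents)
    for i in tap_currents:
        v += r_seg * remaining  # this segment carries all downstream current
        drops.append(v)
        remaining -= i
    return drops

for k, d in enumerate(ir_drop_per_tap(R_SEG, TAP_CURRENTS)):
    print(f"tap {k}: {d * 1000:.2f} mV below the supply")
```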

InVar EM/IR engine provides comprehensive analysis and retains full visibility of supply networks from top-level connectors down to each transistor. It provides a unique approach to hierarchical block modeling to reduce runtime and memory while keeping accuracy of a true flat run. Programmable EM rules enable easy adaptation to new technologies.

InVar Thermal engine scales from single cell design to full chip and provides lab-verified accuracy of thermal analysis. Feedback from thermal engine to EM/IR engines provides unprecedented overall accuracy. This helps designers understand and analyze various effects across design caused by how thermal 2D/3D profiles affect IR drop and temperature dependent EM constraints.

The main benefits of InVar Prime are:

● Accuracy verified in lab and foundries
● Full-chip sign-off with accurate and high-performance analysis
● Analysis available early in the back-end design, when more design choices are available
● Pre-characterization not required for analysis
● User-friendly environment designed to assist quick turnaround times
● Effective prevention of power integrity issues
● Broad range of technology nodes supported
● Reduced back-end verification cycle time
● Improved probability of first-silicon success

Sonics

Scott Seiden contributed his company’s viewpoint. Sonics has developed a dynamic power management solution that is hardware-based.

Sonics has developed the industry’s first energy processing unit (EPU), based on the ICE-Grain power architecture; ICE stands for Instant Control of Energy.

Sonics’ ICE-G1 product is a complete EPU, enabling rapid design of a system-on-chip (SoC) power architecture plus implementation and verification of the resulting power management subsystem.

No amount of wasted energy is affordable in today’s electronic products. Designers know that their circuits are idle a significant fraction of the time, but have no proven technology that exploits idle moments to save power. An EPU is a hardware subsystem that enables designers to better manage and control circuit idle time. Where the host processor (CPU) optimizes the active moments of the SoC components, the EPU optimizes the idle moments. By construction, an EPU delivers lower power consumption than software-controlled power management. EPUs possess the following characteristics:

  • Fine-grained power partitioning maximizes SoC energy savings opportunities
  • Autonomous hardware-based control provides orders of magnitude faster power up and power down than software-based control through a conventional processor
  • Aggregation of architectural power savings techniques ensures minimum energy consumption
  • Reprogrammable architecture supports optimization under varying operating conditions and enables observation-driven adaptation to the end system (a toy model of this event-driven control is sketched after this list).
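The toy model below suggests what such autonomous, event-driven control centralizes; the grain names, events and transitions are invented, and the point is simply that a hardware event matrix retargets power states without waking a host CPU.

```python
# Toy event matrix: hardware events trigger power-grain state transitions
# directly, with no CPU in the loop. Grain and event names are invented.

event_matrix = {
    # (grain, event) -> next power state
    ("gpu_grain", "frame_done"):    "retention",
    ("gpu_grain", "frame_request"): "on",
    ("radio_grain", "packet_irq"):  "on",
    ("radio_grain", "tx_complete"): "off",
}

grain_state = {"gpu_grain": "on", "radio_grain": "off"}

def on_event(grain, event):
    """Apply a transition if the event matrix defines one for this grain."""
    nxt = event_matrix.get((grain, event))
    if nxt is not None:
        grain_state[grain] = nxt
    return grain_state[grain]

print(on_event("gpu_grain", "frame_done"))    # -> retention
print(on_event("radio_grain", "packet_irq"))  # -> on
```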

About ICE-G1

Sonics’ ICE-G1 EPU accelerates the development of power-sensitive SoC designs using configurable IP and an automated methodology, which produces EPUs and operating results that improve upon the custom approach employed by expert power design teams. As the industry’s first licensable EPU, ICE-G1 makes sophisticated power savings techniques accessible to all SoC designers in a complete subsystem solution. Using ICE-G1, experienced and first-time SoC designers alike can achieve significant power savings in their designs.

Markets for ICE-G1 include:

- Application and Baseband Processors
- Tablets, Notebooks
- IoT
- Datacenters
- EnergyStar compliant systems
- Form factor constrained systems—handheld, battery operated, sealed case/no fan, wearable.

ICE-G1 key product features are:

- Intelligent event and switching controllers – power grain controllers, event matrix, interrupt controller, software register interface – configurable and programmable hardware that dynamically manages both active and leakage power.

- SonicsStudio SoC development environment—graphical user interface (GUI), power grain identification (import IEEE-1801 UPF, import RTL, described directly), power architecture definition, power grain controller configuration (power modes and transition events), RTL and UPF code generation, and automated verification test bench generation tools. A single environment that streamlines the EPU development process from architectural specification to physical implementation.

- Automated SoC power design methodology integrated with standard EDA functional and physical tool flows (top down and bottom up)—abstracts the complete set of power management techniques and automatically generates EPUs to enable architectural exploration and continuous iteration as the SoC design evolves.

- Technical support and consulting services—including training, energy savings assessments, architectural recommendations, and implementation guidance.

Conclusion

As can be seen from the contributions, analysis and management of power is multi-faceted. Dynamic control of power, especially in battery-powered IoT devices, is critical, since some of these devices will be in locations that are not readily reachable by an operator.
