Posts Tagged ‘finFET’


Blog Review – Monday, March 07, 2016

Monday, March 7th, 2016

IP fingerprinting; Beware – 5G!; And the award goes to – encryption; Fear of FinFET; Smart kids; Virtual vs real hardware

Keeping an eye on the kids blends with wearable technology, as demonstrated by the Omate Whercom K3, which debuted at Mobile World Congress 2016. It relies on a 3G Dual-core 1GHz ARM Cortex-A7 and an ARM Mali-400 GPU, relates Freddi Jeffries, who interviews Laurent Le Pen, CEO of Omate.

The role of MicroEJ has evolved since its inception. Brian Fuller, ARM, looks at the latest incarnation, bringing mobile OS to microcontroller platforms such as the ARM Cortex-M.

Rather overshadowed by the Oscars, the winner of this year’s Turing Award could have more impact on everyday lives. It was won, says Paul McLellan, Cadence Design Systems, by Whitfield Diffie and Martin Hellman for the invention of public key cryptography. His blog explains what the judges liked and why we will like their work too.

The inclusion of a Despicable Me photo/video is not immediately obvious, but Valerie Scott, Mentor Graphics, makes a sound argument for the use of a virtual platform and includes a (relevant) image of the blog’s example hardware, the NXP i.MX6 with Vista.

Everyone is getting excited about 5G, and Matthew Rosenquist, Intel, sounds a note of caution and encourages readers to prepare for cyber risks as well as the opportunities that the technology will bring.

Fed up with FinFET issues? Graham Etchells, Synopsys, offers advice on electro-migration, why it happens and why the complexity of FinFETs does not have to mean it is an inevitable trait.

Efficiency without liabilities is the end-goal for Warren Savage, IP Extreme. He advocates IP fingerprinting and presents a compelling argument for why and how.

Caroline Hayes, Senior Editor

Deeper Dive – Wed. April 30 2014

Wednesday, April 30th, 2014

The gang of three, or the Grand Alliance, refers to the co-operation of foundry, IP and EDA companies to make 14nm FinFET a reality. Caroline Hayes, Senior Editor, asked Steve Carlson, Director in the office of the Chief Strategy Officer, Cadence Design Systems, what was required to bring about FinFET harmony.

What foundry support is needed for any chip maker looking to develop 14/16nm finFET?
SC: The foundry needs to supply a complete enablement kit. This includes traditional PDKs (process design kits), along with the libraries and technology/rule files for synthesis, design-for-test, extraction, place and route, EM, IR, self-heat, ESD, power and timing sign-off, DFM and physical rule checking.

Put another way, enablement content spans from the transistor level support, up through complex SoC design. To get to the production phase of enablement roll-out there have been several tape-outs and test chips of complex SoCs specifically architected to mimic the needs of the early adopters.

What IP technology is needed?
SC: There are many IPs that would be useful in accelerating the development of a new 14/16nm SoC. First and foremost, getting the cell libraries (at least for use as a starting point) is critical. Along with that, many complex high-speed interface IPs, such as SERDES, are very useful.
If called for architecturally, processor IP and standard interface IP make a lot of sense to buy rather than make.

What is needed to develop an efficient ecosystem for 14/16nm finFET?
SC: TSMC’s chairman [Morris Chang] has talked about the “grand alliance”, which brings the foundry, IP and EDA partners into a process of early collaborative co-optimization. This co-optimization process gets the new process to production readiness sooner, with known characteristics for key industry-favored IP, and ensures that the tool flows will deliver on the performance, power, and area promise of the new node.

EDA (Cadence) has made some critical contributions in the roll-out of enablement for FinFET:
We have solved technology challenges such as the sign-off accuracy demanded by 14/16nm, to within 2 to 3% of SPICE on all sign-off tools (Tempus, Voltus, QRC, etc.). We have also addressed the low Vdd operation that 14/16nm allows, with its challenges in terms of optimization and sign-off.

Other challenges met and solved include improving routability for the small standard cell size (7.5 tracks).

There are multiple challenges we are meeting today. One is hold timing. This is critical, especially with low Vdd, and it is addressed at different stages in the design and sign-off flow.
There are also signal EM optimization and technology challenges to meet 14/16nm requirements in terms of placement rules as well as routing rules.

Assuming that 14/16nm FinFET on SOI will be used to exploit its dielectric isolation, where do you envisage it being used?
SC: SOI will continue to fill niche applications and is very unlikely to unseat bulk CMOS. FinFET on SOI may have some advantage over FinFET on bulk for both leakage power and radiation hardness. So military and possibly certain safety-critical applications (maybe automotive) may choose FinFET on SOI.

Deeper Dive – FinFET Validation Tools

Thursday, November 21st, 2013

By Caroline Hayes, Senior Editor

The industry prepares to embrace the all-encompassing FinFET validation model – a view from the supply chain.

TSMC’s 16nm FinFET reference flow has made headlines recently, and EDA and IP companies are responding with supporting products. It is not a simple support role, however; it demands a rigorous, all-encompassing model.

In response, Apache Design has announced that its RedHawk and Totem tools have completed methodology innovations for the three-dimensional transistor architecture, and TSMC has certified Mentor’s Olympus-SoC place and route system and its Calibre physical verification platform.

The first reaction has to be one of surprise at the intense interest in FinFET. Apache Design’s vice president of product engineering & customer support, Aveek Sarkar, provides the answer: “[FinFET] can manage voltage closely and lower the supply voltage considerably,” he told System Level Design. “Power is a quadratic formula, so to lower voltage from 1V to 0.7V reduces the dynamic power by 50%,” he adds, explaining the appeal of FinFET.
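
As a quick back-of-the-envelope check of Sarkar’s figure (a minimal sketch; only the familiar relation that dynamic power scales with the square of the supply voltage is assumed, and the numbers are illustrative):

```python
# Dynamic power scales roughly as P_dyn ~ alpha * C * V^2 * f, so at fixed
# activity and frequency the ratio of powers is just the ratio of V^2.
v_nominal = 1.0   # volts (illustrative)
v_reduced = 0.7   # volts (illustrative)

ratio = (v_reduced / v_nominal) ** 2
print(f"Dynamic power at {v_reduced} V is {ratio:.2f}x the {v_nominal} V value, "
      f"a {(1 - ratio) * 100:.0f}% reduction")
# -> about 0.49x, i.e. roughly the 50% reduction quoted above
```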

System Level Design asked whether lower supply voltages can outweigh the obstacles FinFET poses for EDA. The technology has a more complex structure, with more restrictive design rules than planar structures, and poses challenges in extraction. It seems these have not proved to be deterrents, judging by the industry’s activity.

For example, TSMC has given certification to the Mentor Olympus-SoC place and route system, and its Calibre physical verification platform. Arvind Narayanan, product marketing manager, Place & Route division, Mentor Graphics, explains that the Olympus-SoC for 16nm FinFET enables efficient double patterning (DP) and timing closure. “It also has comprehensive support for new design rule checks and multi-patterning rules, fin grid alignment for standard cells and macros during placement, and Vt min-area rule and implant layer support during placement,” he adds.

Explaining the Calibre product, Michael Buehler-Garcia, senior director, marketing, Calibre Design Solutions, Mentor Graphics, tells System Level Design that it supports 16nm FinFET advanced design rule definition and litho hotspot pre-filtering. The Calibre SmartFill facility has been enhanced to support the TSMC-specified filling requirements for FinFET transistors, including support for density constraints and multilayer structures needed for FinFET layers.
Significantly, SmartFill also provides double patterning support for back end layers and, says Buehler-Garcia, “goes beyond simple polygons to automatically insert fill cells into a layout based on analysis of the design”.

He continues to point out the new challenges of 16nm FinFET designs. “[They] require careful checking for layout features that cannot be implemented with current lithography systems—so-called litho hotspots. They also require much more complex and accurate fill structures to help ensure planarity and to also help deal with issues in etch, lithography, stress and rapid thermal annealing (RTA) processes”. The value of place and route layout tools will be in implementing fin grid alignment for standard cells and macros during placement, he notes, as well as in Vt min-area rules and implant layer support during placement.

Apache has enhanced its PathFinder, which verifies ESD (electrostatic discharge) at the SoC level for the technology. Since FinFET lacks snapback protection, diodes have to be used, to protect against ESD. However, using diodes brings the drawback of degraded performance due to a higher current density. FinFET means that instead of one supply domain, there are now hundreds of voltage islands across the chip, says Sarkar, explaining Apache’s approach. These islands have to be protected individually, and the designer needs to be able to predict what problems will happen on each of the islands, which means that layout-based SoC sign-off is critical, he concludes. “It is no longer a visual check, but electrical analysis,” he says.

TSMC and Mentor Graphics introduced a fill ECO (Engineering Change Order) flow as part of the N16 reference flow. This enables incremental fill changes, which reduce run time and file size while supporting last minute engineering changes. “By preserving the vast majority of the fill, the ECO flow limits the timing impact of fill to the area around the ECO changes,” says Buehler-Garcia.

Sarkar agrees that FinFET requires more attention to fill and its impact on capacity, and the time needed for design and verification. The company works with the foundry for certification to ensure that the tool is ready in terms of capacity, performance and turnaround time. However, he warns that accuracy for the full chip is only possible by simulating the whole chip in the domain analysis. This means examining how much change is needed, and where the voltage is coming from. “Every piece has to be simulated accurately,” he says, predicting that more co-design across different aspects will need to be brought into the design flow. Expanding on the theme, he says that one environment may focus on the package and the chip simultaneously, while another environment may include the package, the chip and the system. “There will be less individual silo-based analysis and more simulations that look across multiple domains.”

For Buehler-Garcia, the main difference for 16nm FinFET was that new structures brought a new set of requirements that had to be developed and carefully verified throughout the certification process. He describes the collaboration between the foundry and the company as “an evolutionary step, not revolutionary”.

In the next Deeper Dive (December 5) System Level Design will look at the role of double patterning in FinFET processes and how different EDA tools address its IP validation.

WEEK IN REVIEW: September 27 2013

Friday, September 27th, 2013

By Caroline Hayes

FinFET focus for TSMC and partners; CMOS scaling research program is extended; carbon nanotube computing breakthrough

FinFET continues to be a focus for TSMC, which has released three silicon-validated reference flows with the Open Innovation Platform (OIP) to enable 16FinFET SoC designs and 3D chip stacking packages. The first is the 16FinFET Digital Reference Flow, providing technology support for post-planar design challenges, including extraction, quantized pitch placement, low Vdd operation, electromigration and power management. Secondly, there is the 16FinFET Custom Design Reference Flow, with custom transistor-level design and verification. Finally, there is the 3D IC Reference Flow. The foundry has announced a 3D-IC reference flow with Cadence Design Systems and a reference flow, jointly developed with Synopsys, built on tool certification currently in the foundry’s V0.5 Design Rule Manual and SPICE. Collaboration will continue with device modeling and parasitic extraction, place and route, custom design, static timing analysis, circuit simulation, rail analysis, and physical and transistor verification technologies in the Galaxy Implementation Platform.


Still with collaboration, imec and Micron Technology have extended their strategic research collaboration on advanced CMOS scaling for a further three years.

Carbon nanotubes have been used by a team of engineers at Stanford University to build a basic computer. This is, says Professor Subhasish Mitra, one of the research leaders, one of the first demonstrations of a complete digital system using this technology, which could succeed the silicon transistor in the complex devices driving digital electronic systems, as silicon chips reach physical limits on size, speed and cost.

The Stanford researchers created a powerful algorithm that maps out a circuit layout guaranteed to work regardless of whether, or where, the carbon nanotubes deviate from the desired straight lines, and used it to assemble a basic computer with 178 transistors. (The limit is due to the University’s chip-making facilities rather than an industrial fabrication process.)

Power Analysis and Management

Thursday, August 25th, 2016

Gabe Moretti, Senior Editor

As the size of a transistor shrinks and its structure changes, power management becomes more critical.  As I was polling various EDA vendors, it became clear that most were offering solutions for the analysis of power requirements and software-based methods to manage power use, while at least one was offering a hardware-based solution to power use.  I struggled to find a way to coherently present their responses to my questions, but decided that extracting significant pieces of their written responses would not be fair.  So, I organized a type of virtual round table, and I will present their complete answers in this article.

The companies submitting responses are: Cadence, Flex Logix, Mentor, Silvaco, and Sonics.  Some of the companies presented their own understanding of the problem.  I am including that portion of their contribution as well to give the description of the solution better context.

Cadence

Krishna Balachandran, product management director for low power solutions at Cadence, provided the following contribution.

Not too long ago, low power design and verification involved coding a power intent file, driving a digital design from RTL to final place-and-route, and having each tool in the flow understand and correctly and consistently interpret the directives specified in the power intent file. Low power techniques such as power shutdown, retention, standby and Dynamic Voltage and Frequency Scaling (DVFS) had to be supported in the power formats and EDA tools. Today, the semiconductor industry has coalesced around CPF and the IEEE 1801 standard that evolved from UPF and includes the CPF contributions as well. However, this has not equated to problem solved and case closed. Far from it! Challenges abound. Power reduction and low power design, which were the bailiwick of mobile designers, have moved front-and-center into almost every semiconductor design imaginable – be it a mixed-signal device targeting the IoT market or large chips targeting the datacenter and storage markets. With competition mounting, differentiation comes in the form of better (lower) power-consuming end-products and systems.

There is an increasing realization that power needs to be tackled at the earliest stages in the design cycle. Waiting to measure power after physical implementation is usually a recipe for multiple, non-converging iterations because power is fundamentally a trade-off vs. area or timing or both. The traditional methodology of optimizing for timing and area first and then dealing with power optimization is causing power specifications to be non-convergent and product schedules to slip. However, having a good handle on power at the architecture or RTL stage of design is not a guarantee that the numbers will meet the target after implementation. In other words, it is becoming imperative to start early and stay focused on managing power at every step.

It goes without saying that what can be measured accurately can be well-optimized. Therefore, the first and necessary step to managing power is to get an accurate and consistent picture of power consumption from RTL to gate level. Most EDA flows in use today use a combination of different power estimation/analysis tools at different stages of the design. Many of the available power estimation tools at the RTL stage of design suffer from inaccuracies because physical effects like timing, clock networks, library information and place-and-route optimizations are not factored in, leading to overly optimistic or pessimistic estimates. Popular implementation tools (synthesis and place-and-route) perform optimizations based on measures of power using built-in power analysis engines. There is poor correlation between these disparate engines leading to unnecessary or incorrect optimizations. In addition, mixed EDA-vendor flows are plagued by different algorithms to compute power, making the designer’s task of understanding where the problem is and managing it much more complicated. Further complications arise from implementation algorithms that are not concurrently optimized for power along with area and timing. Finally, name-mapping issues prevent application of RTL activity to gate-level netlists, increasing the burden on signoff engineers to re-create gate-level activity to avoid poor annotation and incorrect power results.

To get a good handle on the power problem, the industry needs a highly accurate but fast power estimation engine at the RTL stage that helps evaluate and guide the design’s micro-architecture. That requires the tool to be cognizant of physical effects – timing, libraries, clock networks, even place-and-route optimizations at the RTL stage. To avoid correlation problems, the same engine should also measure power after synthesis and place-and-route. An additional requirement to simplify and shorten the design flow is for such a tool to be able to bridge the system-design world with signoff and to help apply RTL activity to a gate-level netlist without any compromise. Implementation tools, such as synthesis and place-and-route, need to have a “concurrent power” approach – that is, consider power as a fundamental cost-factor in each optimization step side-by-side with area and timing. With access to such tools, semiconductor companies can put together flows that meet the challenges of power at each stage and eliminate iterations, leading to a faster time-to-market.

Flex Logix

Geoff Tate, Co-founder and CEO of Flex Logix, is the author of the following contribution.  Our company is a relatively new entry in the embedded FPGA market.  It uses TSMC as a foundry.  Microcontrollers and IoT devices being designed in TSMC’s new ultra-low power 40nm process (TSMC 40ULP) need:

• The flexibility to reconfigure critical RTL, such as I/O

• The ability to achieve performance at lowest power

Flex Logix has designed a family of embedded FPGAs to meet this need. The validation chip to prove out the IP is in wafer fab now.

Many products fabricated with this process are battery operated: there are brief periods of performance-sensitive activity interspersed with long periods of very low power mode while waiting for an interrupt.

Flex Logix’s embedded FPGA core provides options to enable customers to optimize power and performance based on their application requirements.

To address this requirement, the following architectural enhancements were included in the embedded FPGA core:

• Power Management containing 5 different power states (a simple illustrative model follows this list):

  • Off state, where the EFLX core is completely powered off.
  • Deep Sleep state, where the VDDH supply to the EFLX core can be lowered from a nominal 0.9V/1.1V to 0.5V while retaining state.
  • Sleep state, which gates the supply (VDDL) that feeds all the performance logic, such as the LUTs, DSP and interconnect switches of the embedded FPGA, while retaining state. The latency to exit Sleep is shorter than that to exit Deep Sleep.
  • Idle state, which idles the clocks to cut power but is ready to move into dynamic mode more quickly than the Sleep state.
  • Dynamic state, where power is the highest of the five states but latency is the shortest; used during periods of performance-sensitive activity.
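
A minimal sketch of how firmware might reason about these states; the state names follow the list above, but the relative power and wake-up numbers are purely illustrative assumptions, not Flex Logix data:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PowerState:
    name: str
    retains_state: bool
    relative_power: float    # normalized to Dynamic = 1.0 (assumed values)
    wakeup_latency: float    # arbitrary units, Dynamic = 0 (assumed values)

# Deeper states save more power but take longer to wake up.
EFLX_STATES = [
    PowerState("Off",        False, 0.00, 100.0),
    PowerState("Deep Sleep", True,  0.02, 10.0),
    PowerState("Sleep",      True,  0.05, 3.0),
    PowerState("Idle",       True,  0.30, 1.0),
    PowerState("Dynamic",    True,  1.00, 0.0),
]

def pick_state(idle_budget: float, need_state: bool = True) -> PowerState:
    """Pick the lowest-power state whose wake-up latency fits the idle budget."""
    ok = [s for s in EFLX_STATES
          if s.wakeup_latency <= idle_budget and (s.retains_state or not need_state)]
    return min(ok, key=lambda s: s.relative_power)

print(pick_state(idle_budget=5.0).name)   # -> "Sleep" with these assumed numbers
```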

The other architectural features available in the EFLX-100 embedded FPGA to optimize power-performance are:

• State retention for all flip-flops and configuration bits at voltages well below the operating range.

• The ability to directly control body bias voltage levels (Vbp, Vbn). Controlling the body bias further controls leakage power.

• 5 combinations of threshold voltage (VT) devices to optimize power and performance for the static/performance logic of the embedded FPGA. The higher the threshold voltage (eHVT, HVT), the lower the leakage power and the performance; the lower the threshold voltage (SVT), the higher the leakage and the performance.

• eHVT/eHVT

• HVT/HVT

• HVT/SVT

• eHVT/SVT

• SVT/SVT

In addition to the architectural features, various EDA flows and tools are used to optimize the power, performance and area (PPA) of the Flex Logix embedded FPGA:

• The embedded FPGA was implemented using a combination of standard floor-planning and P&R tools to place and route the configuration cells, DSP and LUT macros, and network fabric switches. This resulted in higher density, reducing IR drop and the need for larger drive strengths, thereby optimizing power.

• Longer (non-minimum) channel length devices were designed and used, which further helps reduce leakage power with minimal to no impact on performance.

• The EFLX-100 core was designed with an optimized power grid to effectively use metal resources for power and signal routing. An optimal power grid reduces DC/AC supply drops, which further increases performance.

Mentor

Arvind Narayanan, Architect, Product Marketing, Mentor Graphics, contributed the following viewpoint.

One of the biggest challenges in IC design at advanced nodes is the complexity inherent in effective power management. Whether the goal is to reduce on-chip power dissipation or to provide longer battery life, power is taking its place alongside timing and area as a critical design dimension.

While low-power design starts at the architectural level, low-power design techniques continue through RTL synthesis and place and route. Digital implementation tools must interpret the power intent and implement the design correctly, from power-aware RTL synthesis to placement of special cells and routing and optimization across power domains, in the presence of multiple corners, modes, and power states.

With the introduction of every new technology node, existing power constraints are also tightened to optimize power consumption and maximize performance. 3D transistors (FinFETs), introduced at smaller technology nodes, have higher input pin capacitance than their planar counterparts, making the dynamic power component larger relative to leakage.

Power Reduction Strategies

A good strategy to reduce power consumption is to perform power optimization at multiple levels during the design flow, including software optimization, architecture selection, RTL-to-GDS implementation and process technology choices. The biggest power savings are usually obtained early in the development cycle, at the ESL and RTL stages (Fig 1). During the physical implementation stage there is comparatively less opportunity for power optimization, hence choices made earlier in the design flow are critical. Technology selection such as the device structure (FinFET, planar), choice of device material (HiK, SOI) and technology node selection all play a key role.

Figure 1. Power reduction opportunities at different stages of the design flow

Architecture selection

Studies have shown that only optimizations applied early in the design cycle, when a design’s architecture is not yet fixed, have the potential for radical power reduction.  To make intelligent decisions in power optimization, the tools have to simultaneously consider all factors affecting power and be applied early in the design cycle. Finding the best architecture makes it possible to properly balance functionality, performance and power metrics.

RTL-to-GDS Power Reduction

There are a wide variety of low-power optimization techniques that can be utilized during RTL to GDS implementation for both dynamic and leakage power reduction. Some of these techniques are listed below.

RTL Design Space Exploration

During the early stages of the design, the RTL can be modified to employ architectural optimizations, such as replacing a single instantiation of a high-powered logic function with multiple instantiations of low-powered equivalents. A power-aware design environment should facilitate “what-if” exploration of different scenarios to evaluate the area/power/performance tradeoffs.
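
As a rough illustration of that kind of what-if trade-off, consider splitting one block into two parallel half-rate copies, which can allow a lower supply voltage at the same throughput. The voltage and capacitance numbers below are illustrative assumptions, not tool output:

```python
# Dynamic power per block ~ C * V^2 * f (constants dropped); values are assumptions.
def dyn_power(c_rel: float, vdd: float, f_rel: float) -> float:
    return c_rel * vdd ** 2 * f_rel

single = dyn_power(c_rel=1.0, vdd=1.0, f_rel=1.0)           # one block at full rate
# Two copies at half the clock; assume the relaxed timing lets Vdd drop to 0.8 V.
parallel = 2 * dyn_power(c_rel=1.0, vdd=0.8, f_rel=0.5)

print(f"parallel / single dynamic power = {parallel / single:.2f}")  # ~0.64
# The saving is paid for in roughly double the area, which is exactly the
# trade-off the "what-if" exploration is meant to expose.
```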

Multi-VDD Flow

Multi-voltage design, a popular technique to reduce total power, is a complex task because many blocks operate at different voltages or are intermittently shut off. Level shifter and isolation cells need to be used on nets that cross domain boundaries if the supply voltages are different or if one of the blocks is being shut down. DVFS is another technique, where the supply voltage and frequency vary dynamically to save power. Power gating using multi-threshold CMOS (MTCMOS) switches involves switching off certain portions of an IC when that functionality is not required, then restoring power when that functionality is needed.
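
A toy decision helper for the domain-crossing rules just described; the function and the example domains are hypothetical, purely to make the rules concrete:

```python
def crossing_cells(driver_vdd: float, receiver_vdd: float,
                   driver_can_shut_off: bool) -> list:
    """Special cells needed on a net crossing two power domains: a level shifter
    when the supply voltages differ, an isolation cell when the driver can be
    powered down."""
    cells = []
    if abs(driver_vdd - receiver_vdd) > 1e-9:
        cells.append("level_shifter")
    if driver_can_shut_off:
        cells.append("isolation_cell")
    return cells

# Hypothetical example: a switchable 0.8 V block driving an always-on 1.0 V block.
print(crossing_cells(0.8, 1.0, driver_can_shut_off=True))
# -> ['level_shifter', 'isolation_cell']
```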

Figure 2. Multi-voltage layout shown in a screen shot from the Nitro-SoC™ place and route system.

MCMM Based Power Optimization

Because each voltage supply and operational mode implies different timing and power constraints on the design, multi-voltage methodologies cause the number of design corners to increase exponentially with the addition of each domain or voltage island. The best solution is to analyze and optimize the design for all corners and modes concurrently. In other words, low-power design inherently requires true multi-corner/multi-mode (MCMM) optimization for both power and timing. The end result is that the design should meet timing and power requirements for all the mode/corner scenarios.
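
A quick count shows why the scenario space blows up; the domain, corner and mode counts below are hypothetical:

```python
from itertools import product

domains = ["cpu", "gpu", "dsp"]              # three switchable domains (assumed)
corners = ["ss_125C", "tt_25C", "ff_-40C"]   # PVT corners (assumed)
modes = ["functional", "test"]               # operating modes (assumed)

power_states = list(product(["on", "off"], repeat=len(domains)))   # 2^3 = 8
scenarios = len(power_states) * len(corners) * len(modes)
print(f"{len(power_states)} power states x {len(corners)} corners x "
      f"{len(modes)} modes = {scenarios} MCMM scenarios")           # -> 48
```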

FinFET aware Power Optimization

A FinFET-aware power optimization flow requires technologies such as activity-driven placement, multi-bit flop support, clock data optimization, interleaved power optimization and activity-driven routing to ensure that the dynamic power reduction is optimal. The tools should be able to use transforms with objective costing to make trade-offs between dynamic power, leakage power, timing, and area for the best QoR.

Optimizing power at all stages of the design flow, especially at the architecture stage, is critical for optimal power reduction.  Architecture selection, along with the complete set of technologies for RTL-to-GDS implementation, greatly impacts the ability to effectively manage power.

Silvaco

Seena Shankar, Technical Marketing Manager, is the author of this contribution.

Problem:

Analysis of IR drop, electromigration and thermal effects has traditionally been a significant bottleneck in the physical verification of transistor-level designs like analog circuits, high-speed IOs, custom digital blocks, memories and standard cells. From the 28nm node and below, all designers are concerned about power, EM/IR and thermal issues. Even at the 180nm node, if you are doing high-current designs in LDMOS, then EM effects, rules and thermal issues need to be analyzed. FinFET architecture has increased concerns regarding EM, IR and thermal effects because of complex DFM rules and increased current and power density. There is a higher probability of failure, so EM/IR effects need to be analyzed and managed even more carefully. This kind of analysis and testing usually occurs at the end of the design flow, and discovering these issues at that critical time makes it difficult to stick to the schedule and causes expensive rework. How can we resolve this problem?

Solution:

Power integrity issues must be addressed as early in the design cycle as possible, to avoid expensive design and silicon iterations. Silvaco’s InVar Prime is an early design stage power integrity analysis solution for layout engineers. Designers can estimate EM, IR and thermal conditions before the sign-off stage. It performs checks such as early IR-drop analysis, checks of the resistive parameters of supply networks and point-to-point resistance, and it also estimates current densities. It also helps in finding and fixing issues that are not detectable with a regular LVS check, such as missing vias, isolated metal shapes, inconsistent labeling, and detour routing.

InVar Prime can be used for a broad range of designs including processors, wired and wireless network ICs, power ICs, sensors and displays. Its hierarchical methodology accurately models IR drop, electromigration and thermal effects for designs ranging from a single block to full chip. Its patented concurrent electro-thermal analysis performs simulation of multiple physical processes together. This is critical for today’s designs in order to capture important interactions between power and thermal 2D/3D profiles. The result is physical measurement-like accuracy at high speed, even on extremely large designs, with applicability to all process nodes including FinFET technologies.

InVar Prime requires the following inputs:

● Layout: GDSII

● Technology: ITF or iRCX

● Supplementary data: layer mapping file for GDSII, supply net names, locations and nominal values of voltage sources, area-based current consumption for P/G nets

Figure 3. Reliability Analysis provided by InVar Prime

InVar Prime enables three types of analysis on a layout database: EM, IR and thermal. A layout engineer could start using InVar to help in the routing and planning of the power nets, VDD and VSS. IR analysis with InVar provides early analysis of how good the power routing is at that point. This type of early analysis flags potential issues that might otherwise appear only after fabrication and result in silicon re-spins.

The InVar EM/IR engine provides comprehensive analysis and retains full visibility of supply networks from the top-level connectors down to each transistor. It provides a unique approach to hierarchical block modeling to reduce runtime and memory while keeping the accuracy of a true flat run. Programmable EM rules enable easy adaptation to new technologies.

The InVar Thermal engine scales from single-cell designs to full chip and provides lab-verified accuracy of thermal analysis. Feedback from the thermal engine to the EM/IR engines provides unprecedented overall accuracy. This helps designers understand and analyze effects across the design caused by how thermal 2D/3D profiles affect IR drop and temperature-dependent EM constraints.

The main benefits of InVar Prime are:

● Accuracy verified in lab and foundries

● Full-chip sign-off with accurate and high-performance analysis

● Analysis available early in the back-end design, when more design choices are available

● Pre-characterization not required for analysis

● User-friendly environment designed to assist quick turnaround times

● Effective prevention of power integrity issues

● Broad range of technology nodes supported

● Reduces back-end verification cycle time

● Improves probability of first-silicon success

Sonics

Scott Seiden contributed his company’s viewpoint.  Sonics has developed a dynamic power management solution that is hardware based.

Sonics has developed the industry’s first Energy Processing Unit (EPU), based on the ICE-Grain Power Architecture.  In the EPU name, ICE stands for Instant Control of Energy.

Sonics’ ICE-G1 product is a complete EPU enabling rapid design of system-on-chip (SoC) power architecture and implementation and verification of the resulting power management subsystem.

No amount of wasted energy is affordable in today’s electronic products. Designers know that their circuits are idle a significant fraction of time, but have no proven technology that exploits idle moments to save power. An EPU is a hardware subsystem that enables designers to better manage and control circuit idle time. Where the host processor (CPU) optimizes the active moments of the SoC components, the EPU optimizes the idle moments of the SoC components. By construction, an EPU delivers lower power consumption than software-controlled power management. EPUs possess the following characteristics:

  • Fine-grained power partitioning maximizes SoC energy savings opportunities
  • Autonomous hardware-based control provides orders of magnitude faster power up and power down than software-based control through a conventional processor
  • Aggregation of architectural power savings techniques ensures minimum energy consumption
  • Reprogrammable architecture supports optimization under varying operating conditions and enables observation-driven adaptation to the end system.

About ICE-G1

The Sonics’ ICE-G1 EPU accelerates the development of power-sensitive SoC designs using configurable IP and an automated methodology, which produces EPUs and operating results that improve upon the custom approach employed by expert power design teams. As the industry’s first licensable EPU, ICE-G1 makes sophisticated power savings techniques accessible to all SoC designers in a complete subsystem solution. Using ICE-G1, experienced and first-time SoC designers alike can achieve significant power savings in their designs.

Markets for ICE-G1 include:

- Application and Baseband Processors
- Tablets, Notebooks
- IoT
- Datacenters
- EnergyStar compliant systems
- Form factor constrained systems—handheld, battery operated, sealed case/no fan, wearable.

ICE-G1 key product features are:

- Intelligent event and switching controllers–power grain controllers, event matrix, interrupt controller, software register interface—configurable and programmable hardware that dynamically manages both active and leakage power.

- SonicsStudio SoC development environment—graphical user interface (GUI), power grain identification (import IEEE-1801 UPF, import RTL, described directly), power architecture definition, power grain controller configuration (power modes and transition events), RTL and UPF code generation, and automated verification test bench generation tools. A single environment that streamlines the EPU development process from architectural specification to physical implementation.

- Automated SoC power design methodology integrated with standard EDA functional and physical tool flows (top down and bottom up)—abstracts the complete set of power management techniques and automatically generates EPUs to enable architectural exploration and continuous iteration as the SoC design evolves.

- Technical support and consulting services—including training, energy savings assessments, architectural recommendations, and implementation guidance.

Conclusion

As can be seen from the contributions, analysis and management of power is multi-faceted.  Dynamic control of power, especially in battery-powered IoT devices, is critical, since some of these devices will be in locations that are not readily reachable by an operator.

Advanced-Node Designs in 2016 and Beyond

Tuesday, January 19th, 2016

Vassilios Gerousis, Distinguished Engineer, Cadence

This year, many of the expectations in the semiconductor industry are around the technologies that enable advanced-node design, along with the applications that are driving the migration to smaller processes.

If 2015 was any indication, we will continue to see an emphasis on designs for the Internet of Things (IoT), wearable, and mobile spaces. This means a continued focus on lowering power, lowering costs, and shrinking area—the characteristics that advanced process nodes are suited to deliver.

At advanced nodes, the main concerns are around higher speed and lower power, which FinFET 14nm and 16nm both provide. We have already seen some industry announcements about designs (CPUs) done at ultra-low voltage (sub-threshold region) using mature nodes like 40nm as an example. Since the speed at these voltages will be very slow, the main targeted application will be IoT designs, where ultra-low power is needed. This year, we will likely see both CMOS and FD-SOI technologies help overcome some of the challenges of ultra-low power designs.

While few expect 10nm production, we will definitely see 10nm test chip products this year. Some will even hit production timelines and become actual product designs. At the same time, we will see more products go into production at the 14nm and 16nm process nodes. Designers are definitely migrating from 28nm, and even skipping over 20nm.

10nm Design Challenges

10nm design brings more complex design rules along with a multi-coloring approach, resulting in as many as three masks per layer, as well as multiple colors for vias. The handoff to the foundry, therefore, will have to be colored, making it essential for the entire digital implementation flow to be color-aware.

Since the designs are shrinking, we must anticipate unexpected electrical performance behaviors. The interconnect will continue to be the major bottleneck at 10nm, in terms of electromigration issues, an increase in resistance, and an increase in coupling capacitance (relative to total capacitance). The interconnect is much thinner in these designs, so electromigration is a major design challenge to address in addition to timing and signal integrity. Designers will need newer capabilities in EDA tools to provide good solutions for their 10nm designs.

Unlike previous technologies, 1D routing direction (no wrong-way routing) is becoming the normal design behavior rather than the exception at 10nm and 7nm. Improved routing features to address 1D very effectively will be essential to providing better power, performance, and area (PPA) design targets.

Extending Moore’s Law

3D-IC technology further extends Moore’s Law, generating higher bandwidth with lower power consumption at a small form factor—all without requiring traditional process scaling. At the moment, the majority of products for 3D-IC are appearing mainly in memory and FPGAs. The through-silicon vias (TSVs) in 3D-ICs consume a lot of area compared to the rest of the wiring, limiting how much functionality you can integrate onto the device and also impacting cost. The new monolithic 3D-IC technology, which uses normal sequential processing and regular vias instead of TSVs, is starting to appear. Again, memory products are leading the way in using monolithic 3D-IC processing.

New packaging technologies have already emerged to support cost-, power-, and form factor-sensitive applications like those in the IoT and mobile spaces. For example, we now have access to packaging technology that provides a thin package supporting one or two die.

We could see FinFET designs move to nanowire technology at even smaller processes, such as 5nm or 3nm. Nanowire FinFETs, architected with all gates surrounding the silicon, are ideal for their superior electrostatic control. Indeed, these smaller nodes are on the horizon. In October 2015, IMEC and Cadence announced that they completed the first tapeout of a 5nm test chip using extreme ultraviolet (EUV) and 193 immersion lithography.

Summary

In 2016, we will see an emphasis on product design starts at 10nm, and the year will also introduce a few 7nm test chips. Lines plus cuts, as illustrated in Figure 1, will be one of the main technologies to use at 7nm.

Figure 1: Lines plus multi-color cuts used in 5nm tapeout.

Cadence is working with leading foundries and research labs to prepare our tools to address the challenges that these advanced nodes bring. Innovation continues to be the key with each technology node in order to utilize these nodes more effectively.

Mentor Graphics Comments on Signal Integrity

Monday, December 7th, 2015

Gabe Moretti, Senior Editor

In planning to cover the topic of signal integrity I contacted Mentor Graphics to get their point of view.  Karen Chow, a technical marketing engineer in the Design-to-Silicon division of Mentor Graphics, was kind enough to answer my questions.  Karen holds both a B.S. degree in electrical engineering and an MBA.  I asked her to answer my questions in writing to make sure I captured the answers correctly.  She was kind enough to even include a figure with her answers.

Gabe: What tools does Mentor offer to deal with signal integrity analysis?

Karen: Signal integrity is an important integrated circuit design issue, and includes analysis such as measuring electromigration due to high current, IR drop issues, and crosstalk noise. It is important to accurately characterize signal integrity issues, to see if there will be any problems when the chip is manufactured. If there are problems, then the layout can be modified. Calibre xACT is a parasitic extraction tool that is fully qualified by all the major foundries at advanced nodes, and is useful for creating netlists that have accurate parasitic capacitance, resistance, and inductance values. Calibre xACT is tightly integrated with the Analog FastSPICE simulator to accurately characterize the chip. It is used to see how the parasitic coupling capacitance values affect crosstalk. And the highly accurate resistance engine with reported width values can be used to measure electromigration. Lastly, the high-capacity resistance engine can be used to see how much IR drop there is, and how it affects the different devices in all of the different circuits.

Gabe: What can be done to minimize signal integrity problems?

Karen: Crosstalk can be minimized by spacing the conductors farther apart if they are violating crosstalk limits. After running Calibre xACT to create the netlist, Calibre RVE can be used to see where the cross coupling capacitance is largest. These areas can then be moved farther apart to reduce crosstalk.

Figure 1. After using Calibre xACT to extract parasitic resistance and capacitance, and finding cross talk issues, Calibre RVE was used to figure out nets with the largest coupling capacitance through its sorting capability. The coupling capacitance was highlighted. After moving the wires farther apart, it was verified that the coupling capacitance was reduced in the modified layout.
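
A first-order way to see why spacing helps (a simple charge-sharing estimate with illustrative capacitance values, not Calibre output): the glitch induced on a quiet victim scales roughly with the ratio of coupling capacitance to total victim capacitance, and the coupling capacitance falls as the spacing grows.

```python
# First-order crosstalk estimate: V_glitch ~ V_aggressor * Cc / (Cc + Cg).
# Capacitance values are illustrative assumptions, not extracted data.
def glitch(v_aggressor: float, c_coupling_fF: float, c_ground_fF: float) -> float:
    return v_aggressor * c_coupling_fF / (c_coupling_fF + c_ground_fF)

before = glitch(v_aggressor=0.9, c_coupling_fF=10.0, c_ground_fF=20.0)
# Assume doubling the spacing roughly halves the coupling capacitance.
after = glitch(v_aggressor=0.9, c_coupling_fF=5.0, c_ground_fF=20.0)
print(f"glitch before: {before:.2f} V, after re-spacing: {after:.2f} V")
# -> ~0.30 V before vs ~0.18 V after, with these assumed values
```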

For both electromigration and IR problems, the metal lines in the problem areas need to be widened, which reduces the parasitic resistance and the current density.
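
The effect of widening can be seen from the basic sheet-resistance and current-density relations; the numbers below are illustrative, not foundry data:

```python
# Widening a wire lowers its resistance (R = Rs * L / W) and its current
# density (J = I / (W * t)), which helps IR drop and electromigration respectively.
def wire_resistance(rs_ohm_per_sq: float, length_um: float, width_um: float) -> float:
    return rs_ohm_per_sq * length_um / width_um

def current_density(current_mA: float, width_um: float, thickness_um: float) -> float:
    return current_mA / (width_um * thickness_um)    # mA per square micron

for width in (0.05, 0.10):   # wire widths in microns (illustrative)
    r = wire_resistance(rs_ohm_per_sq=0.5, length_um=100.0, width_um=width)
    j = current_density(current_mA=1.0, width_um=width, thickness_um=0.1)
    print(f"W = {width:.2f} um: R = {r:.0f} ohm, J = {j:.0f} mA/um^2")
# Doubling the width halves both R and J in this simple model.
```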

Gabe:  Does the issue become more important as the process dimensions shrink?

Karen: Signal integrity becomes a more important issue as process dimensions shrink. With IR drop, the main issue is that all of the connection layers below metal1 are highly resistive (both the local interconnect and the contacts), and it is very important to model those accurately.

Gabe: Do you foresee a point after which noise issues will become unmanageable?

Karen:  Noise still needs to be managed at these smaller nodes. Thermal noise is around the same for FinFET technologies. Pink, or 1/f, noise is generally lower, so for new technologies noise is still manageable.

Gabe: Do noise problems occur from sub-optimal architecture or are they mostly due to poor place and route implementations?

Karen: Signal integrity needs to be addressed as part of the entire design flow, and not just at the end of the design cycle during place and route.

Cadence Introduces Innovus Implementation System

Friday, March 13th, 2015

Gabe Moretti, Senior Editor

Cadence Design Systems  has introduced its Innovus Implementation System, a next-generation physical implementation solution that aims to enable system-on-chip (SoC) developers to deliver designs with best-in-class power, performance and area (PPA) while accelerating time to market.  The Innovus Implementation System was designed to help physical design engineers achieve best-in-class performance while designing for a set power/area budget or realize maximum power/area savings while optimizing for a set target frequency.

The company claims that the Innovus Implementation System provides typically 10 to 20 percent better power/performance/area (PPA) and up to 10X full-flow speedup and capacity gain at advanced 16/14/10nm FinFET processes as well as at established process nodes.

Rod Metcalfe, Product Management Group Director, pointed out the key Innovus capabilities:

- New GigaPlace solver-based placement technology that is slack-driven and topology-/pin access-/color-aware, enabling optimal pipeline placement, wirelength, utilization and PPA, and providing the best starting point for optimization
- Advanced timing- and power-driven optimization that is multi-threaded and layer aware, reducing dynamic and leakage power with optimal performance
- Unique concurrent clock and datapath optimization that includes automated hybrid H-tree generation, enhancing cross-corner variability and driving maximum performance with reduced power
- Next-generation slack-driven routing with track-aware timing optimization that tackles signal integrity early on and improves post-route correlation
- Full-flow multi-objective technology that enables concurrent electrical and physical optimization to avoid local optima, resulting in the most globally optimal PPA

    The Innovus Implementation System also offers multiple capabilities that boost turnaround time for each place-and-route iteration. Its core algorithms have been enhanced with multi-threading throughout the full flow, providing significant speedup on industry-standard hardware with 8 to 16 CPUs. Additionally, it features what Cadence believes to be the industry’s first massively distributed parallel solution that enables the implementation of design blocks with 10 million instances or larger. Multi-scenario acceleration throughout the flow improves turnaround time even with an increasing number of multi-mode, multi-corner scenarios.

    Rahul Deokar, Product Management Director, added that the product offers a common user interface (UI) across synthesis, implementation and signoff tools, and data-model and API integration with the Tempus Timing Signoff solution and Quantus QRC Extraction solution.

    The Innovus common GUI

    Together these solutions enable fast, accurate, 10nm-ready signoff closure that facilitates ease of adoption and an end-to-end customizable flow. Customers can also benefit from robust visualization and reporting that enables enhanced debugging, root-cause analysis and metrics-driven design flow management.

    “At ARM, we push the limits of silicon and EDA tool technology to deliver products on tight schedules required for consumer markets,” said Noel Hurley, general manager, CPU group, ARM. “We partnered closely with Cadence to utilize the Innovus Implementation System during the development of our ARM Cortex-A72 processor. This demonstrated a 5X runtime improvement over previous projects and will deliver more than 2.6GHz performance within our area target. Based on our results, we are confident that the new physical implementation solution can help our mutual customers deliver complex, advanced-node SoCs on time.”

    “Customers have already started to employ the Innovus Implementation System to help achieve higher performance, lower power and minimized area to deliver designs to the market before the competition can,” said Dr. Anirudh Devgan, senior vice president of the Digital and Signoff Group at Cadence. “The early customers who have deployed the solution on production designs are reporting significantly better PPA and a substantial turnaround time reduction versus competing solutions.”

    New Markets for EDA

    Wednesday, December 3rd, 2014

    Brian Derrick, Vice President Corporate Marketing, Mentor Graphics

    EDA grows by solving new problems as discontinuities occur and design cannot proceed as usual.  Often these are incremental, but occasionally problems or transitions occur that create new markets for our industry.

    Discontinuities in Traditional EDA

    One of the most pressing challenges today is the escalating complexity of hardware verification and the need to verify software earlier in the design cycle. Emulation is rapidly becoming a mainstream methodology. As part of an integrated enterprise verification solution, it allows designers to perform pre-silicon testing and debug at accelerated hardware speeds, using real-world stimulus within a unified verification environment that seamlessly moves data and transactors between simulation and emulation.   Enterprise verification utilizing emulation delivers performance and productivity improvements ranging from 400X to 10,000X.

    Performance alone did not enable emulation to become mainstream.  There has been a transformation from project-bound engineering lab instrument to datacenter-hosted global resource.  This transformation begins by eliminating the In-Circuit Emulation (ICE) tangle of cables, speed adaptors and physical devices, replacing them with virtual devices.  The latest generation of emulators can be installed in most standard data centers, making emulators similar to any other server installation.

    What’s equally exciting is the number of software engineers who have moved their embedded software development and debug to emulators. With the accelerated throughput, developers feel as though they are debugging embedded software on the actual silicon product.  All of this explains why the emulation market has doubled in the past five years, with a three year compounded annual growth rate of 23%.

    Another discontinuity in traditional EDA is physical testing at 20nm and below.  As FinFET technology becomes pervasive at these nodes, there is strong potential for increased defects within the standard cells.  Transistor-Level automatic test pattern generation targets undetected internal faults based on the actual cell layout.  It improves the quality of wafer level test, reducing the need for system-level, functional test.  This “cell-aware” capability has been well qualified on FinFET parts and will become pervasive in leading-edge physical design verification, keeping the Design for Test market on its accelerated growth rate that is 4X the overall EDA market growth.

    EDA Growth Opportunities in New Markets

    Other important growth opportunities for our industry can be found in markets that are in transition, with emerging requirements for automated design.  There is no doubt that automotive electronics is one of the most promising segments.  As the automotive industry transitions from mechanical to electronic differentiation of their products, the need for electronic design automation is accelerating.

    Automobiles are complex electronic systems: leading-edge vehicles have up to 150 electronic networks, 200 microprocessors, nearly 100 electric motors and hundreds of LEDs, all connected by nearly 3 miles of wiring.  And the embedded software responsible for managing all of this can reach upwards of 65 million lines of code (Figure 1).  Automotive ICs are already a $20 billion market and are the fastest growing segment according to IC Insights.  Electronics now account for 35-40% of a car’s cost, and that number is expected to increase to 50% in the future.

    Figure 1:  Complexity is driving the automation of the electronic design of automotive electronics

    Automotive suppliers are adopting EDA solutions to address the unique electronic systems and software challenges in this rapidly developing segment.  Simple wiring tools are being replaced with complete enterprise solutions spanning concept through design, manufacturing, costing and after-sales service.  These tools and flows are enabling the industry to handle the requirements of a highly regulated environment while increasing quality, minimizing costs, reducing weight, and managing power across literally thousands of options for an OEM platform.

    The rapid expansion of electronic control units and networks in nearly all new automotive platforms has accelerated the demand for AUTOSAR development tools and solutions.   AUTOSAR is an open, standardized automotive software architecture, jointly developed by automobile manufacturers, suppliers and tool developers.  Now add to that the safety-critical embedded software requirements and standards such as ISO 26262, regulations for fuel efficiency, and environmental emissions, and the opportunity for design automation is just beginning.

    Driver experience is now a crucial differentiator, with in-vehicle infotainment (IVI), advanced driver assistance (ADAS), and driver information becoming the major selling points for new automobiles.  Active noise cancellation, high-speed hi-def video, smart mirrors, head-up displays, proximity information, over-the-air updates, and animated graphics are just a few of the capabilities being deployed and developed for automobiles today.

    There is a strong demand for EDA solutions that combine system design and software development for heterogeneous systems. Embedded Linux, AUTOSAR and real time operating systems are deployed across diverse multi-core SOCs in a growing number of in-vehicle networks.

    Many of the EDA solutions developed for the automotive industry are being adopted by other markets with similar challenges.  Electronic systems interconnect tools are enabling the optimization of the cable/harness systems in aerospace, defense, heavy equipment, off-road, agriculture,  and other transportation-related markets.   As the automotive industry develops and deploys driver convenience and information systems, it will make them affordable for many of these adjacent markets.

    New markets for EDA are emerging as the complexity of SoCs increase and the world we interact with becomes more connected.  Solving these new problems and applying EDA solutions to markets in transition, like automotive, aerospace, the broader transportation industry, and the Internet of Things, will fuel the growth of the design automation industry into the future.

    An EDA View of Semiconductor Manufacturing

    Wednesday, June 25th, 2014

    Gabe Moretti, Contributing Editor

    The concern that there is a significant break between tools used by designers targeting leading-edge processes, those at 32nm and smaller to be precise, and those used to target older processes was dispelled during the recent Design Automation Conference (DAC).  In his keynote address at DAC in June at the Moscone Center in San Francisco, Dr. Antun Domic, Executive Vice President and General Manager, Synopsys Design Group, pointed out that advances in EDA tools in response to the challenges posed by the newer semiconductor process technologies also benefit designs targeting older processes.

    Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic’s remarks and stated: “There seems to be a misconception that all advanced designs need to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double for advanced geometries in many cases, more functionality can also be added. In this age of disposable mobile products where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.

    However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn’t necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes.  There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those with noise cancellation, touchscreen, and MEMS (micro-electromechanical systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move to smaller process geometries. Other chips at established nodes tend to also have some analog capability, which doesn’t make them any less complex.”

    This is very important, since the number of companies that can afford to use leading-edge processes is diminishing because of the very high ($100 million and more) non-recurring investment required. And, of course, the cost of each die is also greater than with previous processes. If the tools could be used only by customers doing leading-edge designs, revenues would necessarily fall.

    Design Complexity

    Steve Carlson, Director of Marketing at Cadence, states that "when you think about design complexity there are a few axes that might be used to measure it. Certainly raw gate count or transistor count is one popular measure. A recent article in Chip Design, looking at complexity on a log scale, shows that the billion-transistor mark has been eclipsed." Figure 1, courtesy of Cadence, shows the increase in transistors per die over the last 22 years.

    Figure 1.
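    To make the scale of that growth concrete, here is a minimal back-of-the-envelope sketch in Python. The starting count and two-year doubling period are my own illustrative assumptions, not figures taken from the Cadence data:

```python
# Hypothetical Moore's-law-style projection of transistors per die.
# The starting count and doubling period are illustrative assumptions,
# not values taken from Figure 1.
START_YEAR = 1992
START_TRANSISTORS = 1_000_000       # ~1M-transistor die, early 1990s
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(year: int) -> float:
    """Transistors per die assuming a fixed doubling period."""
    return START_TRANSISTORS * 2 ** ((year - START_YEAR) / DOUBLING_PERIOD_YEARS)

for year in range(START_YEAR, 2015, 4):
    print(f"{year}: {projected_transistors(year):,.0f}")
# The projection crosses the billion-transistor mark around 2012,
# consistent with the log-scale trend described above.
```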

    Steve continued: "Another way to look at complexity is the number of functional IP units being integrated together. The graph in Figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following. This is an indication of the complexity of the design, rather than of the complexity of designing for a particular node. At the heart of the process-complexity question are metrics such as the number of parasitic elements needed to adequately model a like structure in one process versus another." It is important to note that the percentage of IP blocks provided by third parties is approaching 50%.

    Figure 2.

    Steve concluded: "Yet another way to look at complexity is through the lens of the design rules and the design rule decks. The graphs below show the upward trajectory of these measures in a very significant way." Figure 3, also courtesy of Cadence, shows the increasing complexity of the design rules provided by each foundry. This trend makes second-sourcing a design practically impossible, since using a second-source foundry would amount to doing a different design.

    Figure 3.

    Another problem designers have to deal with is the increasing complexity due to decreasing feature sizes. Anand Iyer, Director of Product Marketing at Calypto, observed: "Complexity of design is increasing across many categories such as Variability, Design for Manufacturability (DFM) and Design for Power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst-casing the variation, which can lead to reduced design performance. DFM complexity means design performance must be evaluated across many more corners than designers were used to. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor in design complexity because power, especially dynamic power, is a major issue at these process nodes: voltage cannot scale due to noise-margin and process-variation considerations, and capacitance is relatively unchanged or increasing."
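    Iyer's point about dynamic power is easy to see with the standard switching-power relation P_dyn ≈ α·C·V²·f. The following minimal sketch uses invented numbers, not foundry data, to show why a barely scaling supply voltage and roughly flat capacitance keep dynamic power rising as clock frequency increases:

```python
# Minimal sketch of the standard CMOS dynamic power relation
#   P_dyn ~ alpha * C * Vdd^2 * f
# The numbers below are illustrative assumptions, not foundry data.

def dynamic_power(alpha: float, c_farads: float, vdd: float, freq_hz: float) -> float:
    """Switching (dynamic) power in watts."""
    return alpha * c_farads * vdd ** 2 * freq_hz

# Node A: older planar node
p_a = dynamic_power(alpha=0.15, c_farads=2e-9, vdd=1.0, freq_hz=1.0e9)
# Node B: advanced node where frequency rises but Vdd barely scales
# (noise margin and variation prevent further voltage reduction)
p_b = dynamic_power(alpha=0.15, c_farads=2e-9, vdd=0.9, freq_hz=2.0e9)

print(f"Node A: {p_a:.3f} W, Node B: {p_b:.3f} W")
# Despite the modest Vdd reduction, the higher clock and roughly flat
# capacitance leave dynamic power higher -- the DFP pressure Iyer describes.
```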

    Impact on Back-End Tools

    I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry would eventually mean that a foundry-specific Place and Route tool would be better than adapting a generic tool to a Design Rules file that is becoming very complex. In my mind, complexity means a greater probability of errors due to ambiguity among a large set of rules. Building rule-specific Place and Route tools would therefore directly lower the number of design rule checks required.

    Mary Ann White of Synopsys answered: "We do not believe so. Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects of handling the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn't mean that the tool has to be different. The use of multi-patterning, coloring and decomposition is the same process even if the design rules between foundries may differ."
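    Her point can be illustrated with a toy example: one generic checking engine that simply consumes whichever foundry's rule deck it is given. The layers, spacing values and geometry below are hypothetical, invented purely for illustration:

```python
# Minimal sketch: one generic spacing-check engine, two hypothetical
# foundry rule decks. All deck values and shapes are invented.
from dataclasses import dataclass

@dataclass
class Shape:
    layer: str
    x: float      # left edge, in nm
    width: float  # in nm

# Hypothetical rule decks: minimum same-layer spacing per layer, in nm.
FOUNDRY_A_DECK = {"M1": 32.0, "M2": 40.0}
FOUNDRY_B_DECK = {"M1": 36.0, "M2": 40.0}

def check_spacing(shapes: list[Shape], deck: dict[str, float]) -> list[str]:
    """Generic engine: flag same-layer neighbors closer than the deck allows."""
    violations = []
    by_layer: dict[str, list[Shape]] = {}
    for s in shapes:
        by_layer.setdefault(s.layer, []).append(s)
    for layer, items in by_layer.items():
        items.sort(key=lambda s: s.x)
        for left, right in zip(items, items[1:]):
            gap = right.x - (left.x + left.width)
            if gap < deck[layer]:
                violations.append(f"{layer}: gap {gap:.1f}nm < min {deck[layer]:.1f}nm")
    return violations

layout = [Shape("M1", 0.0, 20.0), Shape("M1", 54.0, 20.0)]
print("Foundry A:", check_spacing(layout, FOUNDRY_A_DECK))  # 34nm gap passes
print("Foundry B:", check_spacing(layout, FOUNDRY_B_DECK))  # 34nm gap fails
```

    The same engine produces different results only because the deck differs, which is the sense in which the tool does not have to change per foundry.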

    Steve Carlson of Cadence shares that opinion, while noting the cost it imposes on EDA vendors: "There have been subtle differences between requirements at new process nodes for many generations. Customers do not want to have different tool strategies for a second-source foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration). In more recent generations of process nodes there has been a growing divergence of the requirements to support like-named nodes. This has led to added cost for EDA providers. It is doubtful that different tools will be spawned for different foundries. How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision. The use model users want is singular across all foundry options. How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy. Time will tell."

    That much is clear for now. But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively. Changing foundry will almost always be a business decision based on financial considerations.

    New processes also change the requirements for TCAD tools. At the recently concluded DAC I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.

    He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory at advanced technology nodes. Modeling and simulation play an increasingly important role in the DTCO process, speeding up and reducing the cost of technology, circuit and system development and hence shortening time to market. He said: "It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirements for design and optimization of particular circuits, systems and corresponding products. One of the main challenges is to accurately factor the device variability into the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel needed to secure the required electrostatic integrity. However, the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to process-induced variability related predominantly to silicon channel thickness or shape variation." He continued: "Until now, TCAD simulation, compact model extraction and circuit simulation have typically been handled by different groups of experts, and often by separate departments in the semiconductor industry, and this leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction and circuit simulation tools are typically developed and licensed by different EDA vendors does not help DTCO practices."
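    As a rough illustration of the statistical side of DTCO, the sketch below samples a threshold-voltage distribution and propagates it through a simple alpha-power-law delay model. Every parameter is an assumption chosen for illustration, not extracted from any real process or from GSS tools:

```python
# Minimal sketch: Monte Carlo threshold-voltage variability fed into a
# simple alpha-power-law gate delay model. All parameters (sigma_Vt,
# nominal values, alpha) are illustrative assumptions.
import random
import statistics

VDD = 0.8          # volts
VT_NOMINAL = 0.35  # volts
SIGMA_VT = 0.03    # volts; assumed statistical variability of the device
ALPHA = 1.3        # velocity-saturation exponent
K_DELAY = 1.0      # arbitrary delay constant (normalized)

def gate_delay(vt: float) -> float:
    """Alpha-power-law delay ~ Vdd / (Vdd - Vt)^alpha."""
    return K_DELAY * VDD / (VDD - vt) ** ALPHA

random.seed(0)
samples = [gate_delay(random.gauss(VT_NOMINAL, SIGMA_VT)) for _ in range(10_000)]
mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
print(f"mean delay {mean:.3f}, sigma {sigma:.3f}, "
      f"3-sigma slow corner {mean + 3 * sigma:.3f}")
# The spread between the mean and the 3-sigma delay is the kind of
# statistical margin a DTCO loop tries to quantify rather than worst-case.
```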

    Ansys pointed out that in advanced finFET process nodes the operating voltage of the devices has been drastically reduced, and with it the operating margins. With the several transient modes of operation found in low-power ICs, an accurate representation of the package model is mandatory for accurate noise-coupling simulation. Distributed package models with bump-level resolution are required to perform Chip-Package-System simulations that capture noise coupling accurately.
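    A minimal sketch of why bump-level resolution matters is shown below; all resistances, currents and the assumed noise margin are invented for illustration and are not Ansys data:

```python
# Minimal sketch: supply droop vs. noise margin at a reduced finFET-era Vdd.
# All resistances, currents, and margins are illustrative assumptions.

VDD = 0.75            # volts; assumed advanced-node operating voltage
NOISE_MARGIN = 0.05   # volts; assumed allowable supply droop (~7% of Vdd)

# Lumped package model: one effective resistance carrying the whole transient.
R_LUMPED = 0.007      # ohms
I_TRANSIENT = 6.0     # amps of switching current

droop_lumped = R_LUMPED * I_TRANSIENT

# Distributed model: the same 6 A split across bump groups with different
# local resistances -- the worst bump group sets the real droop.
bump_groups = [(0.030, 1.5), (0.018, 2.0), (0.040, 2.5)]  # (ohms, amps)
droop_distributed = max(r * i for r, i in bump_groups)

for name, droop in [("lumped", droop_lumped), ("distributed", droop_distributed)]:
    verdict = "OK" if droop <= NOISE_MARGIN else "VIOLATES margin"
    print(f"{name}: droop {droop * 1000:.0f} mV -> {verdict}")
# A lumped model can look safe while the bump-resolved model exposes a local
# droop that consumes the shrunken margin -- the point Ansys makes above.
```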

    Further Exploration

    The topic of semiconductor manufacturing has generated a large number of responses. As a result, next month's article will continue the coverage, with particular focus on the impact of leading-edge processes on EDA tools and practices.
