Posts Tagged ‘Mentor Graphics’

Low Power Is The Norm, Not The Exception

Friday, September 26th, 2014

Gabe Moretti, Senior Editor

The issue of power consumption took center stage with the introduction of portable electronic devices.  It became necessary for the semiconductor industry, and thus the EDA industry, to develop new methods and new tools to confront the challenges and provide solutions.  Low power thus became a separate segment of the industry.  EDA vendors developed tools specifically addressing the problem of minimizing power consumption at the architecture, synthesis, and pre-fabrication stages of IC development.  Companies instituted new design methodologies that focused specifically on power distribution and consumption.

Today the majority of devices are designed and fabricated with low power as a major requirement.  As we progress toward a world that uses more wearable devices and more remote computational capabilities, low power consumption is a must.  I am not sure that dedicating a separate segment to low power is still relevant: it would make more sense to have a sector of the industry devoted to unrestricted power use instead.

The contributions I received in preparing this article are explicit in supporting this point of view.

General Considerations

Mary Ann White, Director of Product Marketing, Galaxy Design Platform, at Synopsys concurs with my position.  She says: “Power conservation occurs everywhere, whether in mobile applications, servers or even plug-in-the-wall items.  With green initiatives and the ever-increasing cost of power, the ability to save power for any application has become very important.  In real-world applications for home consumer items (e.g. stereo equipment, set-top boxes, TVs, etc.), it used to be okay to have items go into standby mode. But that is no longer enough now that smart-plug strips, which use sensors to automatically cut off power after a period of non-usage, are populating many homes and Smart Grids are being deployed by utility companies. This trend follows what commercial companies have done for many years now, namely using motion sensors for efficient energy management throughout the day.”

Vic Kulkarni, Senior VP and GM, RTL Power Business Unit, at Apache Design, Inc., a wholly-owned subsidiary of ANSYS, Inc., approaches the problem from a different point of view but also points out wasted power.

“Dynamic power consumed by SoCs continues to rise in spite of strides made in reducing the static power consumption in advanced technology nodes.

There are many reasons for wasted dynamic power – redundant data signal activity when clocks are shut off, excessive margin in the library characterization data leading to inefficient implementation, large active logic cones feeding deselected mux inputs, lack of sleep or standby mode for analog circuits, and even insufficient software-driven controls to shut down portions of the design. Another aspect is the memory sub-system organization. Once the amount of memory required is known, how should it be partitioned? What types of memories should be used? How often do they need to be accessed? All of these issues greatly affect power consumption. Therefore, designers must perform power-performance-area tradeoffs for various alternative architectures to make an informed decision.”
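
A rough way to see why such choices matter is to put numbers on the classic switching-power relation P ≈ α·C·V²·f for a few alternative memory organizations. The Python sketch below is purely illustrative and is not an ANSYS/Apache flow; the capacitance, activity, and partitioning figures are assumptions chosen only to show the kind of power-performance-area comparison Kulkarni describes.

```python
# Illustrative sketch: compare dynamic power of alternative memory partitions.
# All figures are hypothetical; the point is the trade-off, not the values.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Classic switching-power estimate: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Three hypothetical ways to organize the same 512 KB of on-chip SRAM.
partitions = {
    "single 512 KB bank": {"alpha": 0.20, "cap": 40e-12},
    "4 x 128 KB banks":   {"alpha": 0.20, "cap": 12e-12},
    "8 x 64 KB banks":    {"alpha": 0.20, "cap": 7e-12},
}

V, F = 0.9, 500e6  # supply voltage (V) and access clock (Hz), both assumed

for name, p in partitions.items():
    # Only the accessed bank toggles; idle banks contribute leakage only,
    # which is ignored here to keep the sketch short.
    active_w = dynamic_power(p["alpha"], p["cap"], V, F)
    print(f"{name:20s} active-bank dynamic power ~ {active_w * 1e3:.2f} mW")
```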

The ubiquity of low power designs was also pointed out by Guillaume Boillet, Technical Marketing Manager, at Atrenta Inc.  He told me that: “Motivations for reducing the power consumed by chips are multiple. They range from purely technical considerations (i.e. ensuring integrity and longevity of the product), to differentiation factors (i.e. extend battery life or reduce cost of cooling) to simply being more socially responsible. As a result, power management techniques, which were once only deployed for wireless applications, have now become ubiquitous. The vast majority of IC designers are now making a conscious effort to configure their RTL for efficient power partitioning and to reduce power consumption, in particular the dynamic component, which is increasingly becoming more dominant at advanced technology nodes.”  Of course, engineers have found through experience that minimizing power is not easy.  Guillaume continued: “The task is vast and far from being straightforward. First, there is a multitude of techniques available to designers: power gating, use of static and variable voltage domains, Dynamic Voltage and Frequency Scaling (DVFS), biasing, architectural tradeoffs, coarse and fine-grain clock gating, micro-architectural optimizations, memory management, and light sleep are only some examples. When you try combining all of these, you soon realize the permutations are endless. Second, those techniques cannot be applied blindly and can have serious implications during floor planning, timing convergence activities, supply distribution, Clock Tree Synthesis (CTS), Clock Domain Crossing management, Design For Test (DFT) or even software development.”
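
To make one of those techniques concrete, the toy calculation below contrasts a "race-to-idle" run at nominal voltage with a DVFS operating point at lower voltage and frequency for the same workload. It is a first-order sketch under assumed operating points, not any vendor's methodology, and it ignores leakage and regulator overhead.

```python
# First-order DVFS illustration: energy for a fixed workload of N cycles.
# Dynamic energy per cycle scales with C * V^2, so running slower at a lower
# voltage can cost less energy even though it takes longer.  All numbers
# below are assumptions for illustration only.

CYCLES = 10_000_000    # work to be done (hypothetical)
C_EFF  = 1.0e-9        # effective switched capacitance per cycle, in farads

operating_points = {
    "race-to-idle (1.0 V, 1.0 GHz)": {"v": 1.0, "f": 1.0e9},
    "DVFS point   (0.8 V, 0.6 GHz)": {"v": 0.8, "f": 0.6e9},
}

for name, op in operating_points.items():
    energy_j  = C_EFF * op["v"] ** 2 * CYCLES   # dynamic energy for the job
    runtime_s = CYCLES / op["f"]
    print(f"{name}: {energy_j * 1e3:.1f} mJ in {runtime_s * 1e3:.1f} ms")
```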

Low power considerations have also been at the forefront of IP design.  Dr. Roddy Urquhart, Vice President of Marketing at Cortus, a licensor of controllers, noted that: “A major trend in the electronics industry now is the emergence of connected intelligent devices implemented as systems-on-chip (SoC) – the ‘third wave’ of computational devices.  This wave consists of the use of locally connected smart sensors in vehicles, the emergence of “smart homes” and “smart buildings” and the growing Internet of Things.  The majority of these types of devices will be manufactured in large volumes, and will face stringent power constraints. While users may accept charging their smartphones on a daily basis, many sensor-based devices for industrial applications, environmental monitoring or smart metering rely on a battery that must last months or even years. Achieving this requires a focus on radically reducing power and a completely different approach to SoC design.”

Architectural Considerations

Successful power management starts at the architectural level.  Designers cannot decide on a tactic to conserve power once the system has already been designed, since power consumption is the result of architectural decisions aimed at meeting functional requirements.  These tradeoffs are made very early in the development of an IC.

Jon McDonald, Senior Technical Marketing Engineer, at Mentor Graphics noted that: “Power analysis needs to begin at the system level in order to fix a disconnect between the measurement of power and the decisions that affect power consumption. The current status quo forces architectural decisions and software development to typically occur many months before implementation-based power measurement feedback is available. We’ve been shooting in the dark too long.  The lack of visibility into the impact of decisions while they are being made incurs significant hidden costs for most hardware and software engineers. System engineers have no practical way of measuring the impact of their design decisions on the system power consumption. Accurate power information is usually not available until RTL implementation, and the bulk of power feedback is not available until the initial system prototypes are available.”

Patrick Sheridan, Senior Staff Product Marketing Manager, Solutions Group, at Synopsys went into more details.

“Typical questions that the architect can answer are:

1) How to partition the SoC application into fixed hardware accelerators and software executing on processors, determining the optimal number and type of each CPU, GPU, DSP and accelerator.

2) How to partition SoC components into a set of power domains to adjust voltage and frequency at runtime in order to save power when components are not needed.

3) How to confirm the expected performance/power curve for the optimal architecture.

To help expand industry adoption, the IEEE 1801 Working Group’s charter has been updated recently to include extending the current UPF low power specification for use in system level power modeling. A dedicated system level power sub-committee of the 1801 (UPF) Working Group has been formed, led by Synopsys, which includes good representation from system and power architects from the major platform providers. The intent is to extend the UPF language where necessary to support IP power modeling for use in energy aware system level design.”  But he pointed out that more is needed from the software developers.

“In addition, power efficiency continues to be a major product differentiator – and quality concern – for the software manager. Power management functions are distributed across firmware, operating system, and application software in a multi-layered framework, serving a wide variety of system components – from multicore CPUs to hard-disks, sensors, modems, and lights – each consuming power when activated. Bringing up and testing power management software is becoming a major bottleneck in the software development process.

Virtual prototypes for software development enable the early bring-up and test of power management software and enable power-aware software development, including the ability to:

- Quickly reveal fundamental problems such as faulty regulation of clocks and voltages

- Gain visibility for software developers, to make them aware of problems that will cause major changes in power consumption

- Simulate real world scenarios and systematically test corner cases for problems that would otherwise only be revealed in field operation

This enables software developers to understand the consequences of their software changes on power sooner, improving the user-experience and accelerating software development schedules.”
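
A minimal sketch of what power-aware software bring-up against a virtual prototype can look like is shown below. The VirtualRegulator class and its behavior are invented for illustration; a real virtual prototype exposes far richer models, but the idea of catching faulty clock and voltage sequencing in a test long before silicon is the same.

```python
# Hypothetical sketch: unit-testing power-management firmware logic against a
# tiny "virtual prototype" stand-in.  The class, registers and thresholds are
# invented; a commercial virtual prototype would model far more detail.

class VirtualRegulator:
    """Toy model of one voltage/clock domain exposed by a virtual prototype."""
    def __init__(self):
        self.voltage_mv = 0
        self.clock_enabled = False

    def set_voltage(self, mv):
        self.voltage_mv = mv

    def enable_clock(self):
        # Real hardware would misbehave if clocked before the supply is stable.
        if self.voltage_mv < 800:
            raise RuntimeError("clock enabled before supply reached 0.8 V")
        self.clock_enabled = True

def power_up_domain(reg):
    """Firmware-style bring-up sequence under test."""
    reg.set_voltage(900)   # ramp the supply first ...
    reg.enable_clock()     # ... then release the clock

def test_power_up_ordering():
    reg = VirtualRegulator()
    power_up_domain(reg)
    assert reg.clock_enabled and reg.voltage_mv >= 800

if __name__ == "__main__":
    test_power_up_ordering()
    print("power-up sequencing test passed")
```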

Drew Wingard, CTO, at Sonics also answered my question about the importance of architectural analysis of power consumption.

“All the research shows that the most effective place to do power optimization is at the architectural level where you can examine, at the time of design partitioning, what are the collections of components which need to be turned on or can afford to be turned off. Designers need to make power partitioning choices from a good understanding of both the architecture and the use cases they are trying to support on that architecture. They need tooling that combines the analysis models together in a way that allows them to make effective tradeoffs about partitioning versus design/verification cost versus power/energy use.”

Dr. Urquhart underscored the importance of architectural planning in the development of licensable IP.  “Most ‘third wave’ computational devices will involve a combination of sensors, wireless connectivity and digital control and data processing. Managing power will start at the system level identifying what parts of the device need to be always on or always listening and which parts can be switched off when not needed. Then individual subsystems need to be designed in a way that is power efficient.

A minimalist 32-bit core saves silicon area and in smaller geometries also helps reduce static power. In systems with more complex firmware the power consumed by memory is greater than the power in the processor core. Thus a processor core needs to have an efficient instruction set so that the size of the instruction memory is minimized. However, an overly complex instruction set would result in good code density but a large processor core. Thus overall system power efficiency depends on balancing power in the processor core and memory.”
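
Urquhart's balance between core complexity and instruction-memory power can be put in back-of-the-envelope form: total energy per task is roughly core energy per instruction times instruction count, plus instruction-fetch energy scaled by code density. The sketch below uses invented per-access energies purely to show how better code density can cut memory energy faster than a richer instruction set raises core energy.

```python
# Back-of-the-envelope core-vs-memory energy balance.  All figures invented.

profiles = {
    # energy per executed instruction (pJ) and relative code size (density)
    "minimalist core, plain ISA": {"core_pj": 12, "density": 1.00},
    "small core, denser ISA":     {"core_pj": 15, "density": 0.75},
    "large core, very dense ISA": {"core_pj": 28, "density": 0.65},
}

FETCH_PJ_PER_BYTE = 6          # instruction-memory fetch energy (assumed)
INSTRUCTIONS = 1_000_000       # dynamic instruction count for the task
BYTES_PER_INSTR = 4            # baseline instruction size in bytes

for name, p in profiles.items():
    core_pj  = p["core_pj"] * INSTRUCTIONS
    fetch_pj = FETCH_PJ_PER_BYTE * BYTES_PER_INSTR * p["density"] * INSTRUCTIONS
    total_uj = (core_pj + fetch_pj) / 1e6
    print(f"{name:28s} total ~ {total_uj:.1f} uJ "
          f"(core {core_pj / 1e6:.1f}, memory {fetch_pj / 1e6:.1f})")
```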

Implementation Considerations

Although there is still a need for new and more powerful architectural tools for power planning, implementation tools that help designers deal with issues of power distribution and use are reaching maturity and can be counted on as reliable by engineers.

Guillaume Boillet observed that: “Fine-grain sequential clock gating and removal of redundant memory accesses are techniques that are now mature enough for EDA tools to decide what modifications are best suited based on specific usage scenarios (simulation data). For these techniques, it is possible to generate optimized RTL automatically, while guaranteeing its equivalence vs. the original RTL, thanks to formal techniques. EDA tools can even prevent modifications that generate new unsynchronized crossings and ensure proper coding style provided that they have a reliable CDC and lint engine.”

Vic Kulkarni provided me with an answer grounded in sound and detailed technical reasoning that led to the following: “There are over 20 techniques to reduce power consumption, and they must be employed during all the design phases, from system level (Figure 1) through RTL to gate-level sign-off, to model and analyze power consumption, provide methodologies to meet power budgets, and at the same time balance the trade-offs associated with each technique used throughout the design flow. Unfortunately, there is no single silver bullet to reduce power!

Fig. 1. A holistic approach for low-power IP and IP-based SoC design from system to final sign-off with associated trade-offs [Source: ANSYS-Apache Design]

To successfully reduce power, increase signal bandwidth, and manage cost, it is essential to simultaneously optimize across the system, chip, package, and the board. As chips migrate to sub-20 nanometer (nm) process nodes and use stacked-die technologies, the ability to model and accurately predict the power/ground noise and its impact on ICs is critical for the success of advanced low-power designs and associated systems.

Design engineers must meet power budgets for a wide variety of operating conditions.  For example, a chip for a smart phone must be tested to ensure that it meets power budget requirements in standby, dormant, charging, and shutdown modes.  A comprehensive power budgeting solution is required to accurately analyze power values in numerous operating modes (or scenarios) while running all potential applications of the system.”
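
The multi-mode budgeting Kulkarni describes reduces, at its simplest, to accumulating the estimated consumption of the blocks active in each scenario and comparing it against that mode's budget. The sketch below is a generic illustration with made-up block powers and budgets, not the output of any ANSYS/Apache tool.

```python
# Generic multi-mode power-budget check.  Block powers and budgets are made up.

block_power_mw = {   # estimated average power per active block, per mode
    "standby":  {"always-on island": 0.8, "retention SRAM": 0.4},
    "dormant":  {"always-on island": 0.8},
    "charging": {"always-on island": 0.8, "charger ctrl": 3.0, "fuel gauge": 2.5},
    "shutdown": {"real-time clock": 0.05},
}

budget_mw = {"standby": 2.0, "dormant": 1.0, "charging": 6.0, "shutdown": 0.1}

for mode, blocks in block_power_mw.items():
    total = sum(blocks.values())
    status = "OK" if total <= budget_mw[mode] else "OVER BUDGET"
    print(f"{mode:9s} {total:5.2f} mW of {budget_mw[mode]:4.2f} mW budget -> {status}")
```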

Jon McDonald described Mentor’s approach.  He highlighted the need for a feedback loop between architectural analysis and implementation. “Implementation optimizations focus on the most efficient power implementation of a specific architecture. This level of optimizations can find a localized minimum power usage, but are limited by their inability to make system-wide architectural trade-offs and run real world scenarios.

Software optimizations involve efforts by software designers to use the system hardware in the most power-efficient manner. However, as the hardware is fixed, there are significant limitations on the kinds of changes that can be made. Also, since the prototype is already available, completing the software becomes the limiting factor in completing the system. Moreover, software has often been developed before a prototype is available, or is being reused from prior generations of a design. Going back and rewriting this software to optimize for power is generally not possible due to time constraints on completing the system integration.

Both of these areas of power optimization focus can be vastly improved by investing more in power analysis at the system level – before architectural decisions have been locked into an implementation. Modeling power as part of a transaction-level model provides quantitative feedback to design architects on the effect their decisions have on system power consumption. It also provides feedback to software developers regarding how efficiently they use the hardware platform. Finally, the data from the software execution on the platform can be used to refine the architectural choices made in the context of the actual software workloads.

Being able to optimize the system-level architecture with quantitative feedback tightly coupled to the workload (Figure 2) allows the impact of hardware and software decisions to be measured when those decisions are made. Thus, system-level power analysis exposes the effect of decisions on system wide power consumption, making them obvious and quantifiable to the hardware and software engineers.”

Figure 2. System Level Power Optimization (Courtesy of Mentor Graphics)
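
A stripped-down version of the transaction-level power modeling McDonald describes is sketched below: each component advertises an idle power plus an energy cost per transaction, and the model accumulates energy as a hypothetical workload drives it. This is an illustrative toy rather than Mentor's tooling, and the per-transaction energies are assumptions.

```python
# Toy transaction-level power model: each component charges energy per
# transaction on top of a background idle power.  Workload and energies are
# hypothetical, chosen only to show how the accounting works.

class Component:
    def __init__(self, name, idle_mw, energy_per_txn_nj):
        self.name = name
        self.idle_mw = idle_mw
        self.energy_per_txn_nj = energy_per_txn_nj
        self.txn_count = 0

    def transact(self, count=1):
        self.txn_count += count

    def energy_mj(self, duration_s):
        idle_mj = self.idle_mw * duration_s                    # mW * s = mJ
        txn_mj  = self.energy_per_txn_nj * self.txn_count * 1e-6
        return idle_mj + txn_mj

# Hypothetical platform and workload: one second at 60 frames per second.
ddr = Component("DDR controller", idle_mw=25.0, energy_per_txn_nj=4.0)
gpu = Component("GPU",            idle_mw=10.0, energy_per_txn_nj=9.0)

for frame in range(60):
    ddr.transact(20_000)   # memory transactions per frame (assumed)
    gpu.transact(5_000)    # render jobs per frame (assumed)

for comp in (ddr, gpu):
    print(f"{comp.name:15s} ~ {comp.energy_mj(duration_s=1.0):6.1f} mJ over 1 s")
```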

Drew Wingard of Sonics underscored the advantage of having in-depth knowledge of the dynamics of Network-on-Chip (NoC) use.

“Required levels of power savings, especially in battery-powered SOC devices, can be simplified by exploiting knowledge the on-chip network fabric inherently contains about the transactional state of the system and applying it to effective power management (Figure 3). Advanced on-chip networks provide the capability for hardware-controlled, safe shutdown of power domains without reliance on driver software probing the system. A hardware-controlled power management approach leveraging the on-chip network intelligence is superior to a software approach that potentially introduces race conditions and delays in power shut down.”

Figure 3. On-Chip Network Power Management (Courtesy of Sonics)

“The on-chip network has the address decoders for the system, and therefore is the first component in the system to know the target when a transaction happens. The on-chip network provides early indication to the SOC Power Manager that a transaction needs to use a resource, for example, in a domain that’s currently not being clocked or completely powered off. The Power Manager reacts very quickly and recovers domains rapidly enough that designers can afford to set up components in a normally off state (Dark Silicon) where they are powered down until a transaction tries to access them.

Today’s SOC integration is already at levels where designers cannot afford to have power to all the transistors available at the same time because of leakage. SOC designers should view the concept of Dark Silicon as a practical opportunity to achieve the highest possible power savings. Employing the intelligence of on-chip networks for active power management, SOC designers can set up whole chip regions with the power normally off and then, transparently wake up these chip domains from the hardware.”
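
The hardware-controlled scheme Wingard outlines can be sketched as follows: the network's address decoder maps each incoming transaction to a target power domain, and a power manager wakes that domain before the request proceeds. The model below is an invented, highly simplified illustration of that control flow, not Sonics IP; the address map and domain names are hypothetical.

```python
# Simplified model of network-assisted power management ("Dark Silicon"):
# domains start powered off and are woken when a transaction targets them.
# The address map, domains and behavior are invented for illustration.

ADDRESS_MAP = [                  # (base, size, power domain)
    (0x0000_0000, 0x1000_0000, "dram"),
    (0x4000_0000, 0x0010_0000, "video_codec"),
    (0x5000_0000, 0x0001_0000, "crypto"),
]

domain_on = {"dram": True, "video_codec": False, "crypto": False}

def decode(addr):
    """On-chip network address decode: which domain does this access hit?"""
    for base, size, domain in ADDRESS_MAP:
        if base <= addr < base + size:
            return domain
    raise ValueError(f"unmapped address {addr:#x}")

def issue_transaction(addr):
    domain = decode(addr)
    if not domain_on[domain]:
        # The power manager wakes the domain transparently, then releases the
        # stalled transaction -- software never has to probe the hardware.
        print(f"waking power domain '{domain}' for access to {addr:#x}")
        domain_on[domain] = True
    print(f"transaction to {addr:#x} completed in domain '{domain}'")

issue_transaction(0x4000_2000)   # first access wakes the video codec
issue_transaction(0x4000_2040)   # later access finds the domain already on
```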

Conclusion

The Green movement should be proud of its success in underlining the importance of energy conservation.  Low power design, I am sure, was not one of its main objectives, yet the vast majority of electronic circuits today are designed with the goal of minimizing power consumption.  All is possible, or nearly so, when consumers demand it and, importantly, are willing to pay for it.

Blog Review – Monday Sept. 22 2014

Monday, September 22nd, 2014

Intel Developer Forum urges us to snap to it; Software sustains its price and value; STEM – from girl to womanhood; ARM shares tools of the PSoC trade.
By Caroline Hayes, Senior Editor

Happy snapper, Agnes Kwan, Intel, reports on the RealSense snapshot intelligent camera system that can add dimensional information, such as “Just how high was that cliff I climbed?” Even better, it can blur features, or figures – Agnes suggests this is for photo-bombers, the curse of our self-indulgent digital age.

A constitution for the embedded designer is proposed by Rich Rejmaniak, Mentor Graphics. He points out that software development is one of life’s cold, hard truths: like death and taxes, there are no shortcuts. He lays out a comprehensive set of rules to bear in mind to make life easier.

A very personal blog from Chris Wolfe, Ansys, as she looks at women in engineering, pinpoints an often overlooked reason why girls are not taking part in STEM subjects – confidence. Not just academic but physical – this blog could double as a social commentary. Although it raises many questions, there is not enough space in the blogosphere for all the answers.

In a generous gesture, Mark Saunders, ARM, shares a link to a Cypress PSoC 4 and PSoC 5LP ARM Cortex Code Optimization application note. He even points out why it is a good resource, covering as it does performance optimization and code size and its relevance to PSoC 5LP engineers.

Blog Review – Monday Sept. 15, 2014

Monday, September 15th, 2014

Video has a CAST of one; RTL clean-up as a simple chore; Detroit spins out roadmap; it’s all about ‘I’
By Caroline Hayes, Senior Editor.

The affable Warren Savage, IP eXtreme, opens a new season of five-minute chats and interviews CAST COO Nickos Zervas, who explains the role of Greece and a duty to customers.

Using the analogy of the latest household gadget, Graham Bell, Vice President, Marketing, Real Intent, explains how the company’s autoformal tool cleans up the RTL code in a design.

Invigorated by the Intelligent Transportation Society event in Detroit, John Day, Mentor Graphics, steers his way through the road ahead for the automotive industry.

It sounds like the way to annoy Kris Flautner is to ask “What does the I in IoT stand for?” Apparently he is asked this a lot, but he patiently and clearly explains both the Internet’s role and also the challenges for connectivity and security, ahead of ARM TechCon 2014.

Blog Review – Monday, Sept 01, 2014

Monday, September 1st, 2014

The generation gap for connectivity; seeking medical help; IoT messaging protocol; Cadence discusses IoT design challenges.

While marvelling at the Internet of Things (IoT), Seow Yin Lim, Cadence, writes effectively about its design challenges now that traditional processor architectures and approaches do not apply.

ARM hosts a guest blog by Jonah McLeod, Kilopass, who discusses MQTT, the messaging protocol standard for the IoT. His detailed post provides context and a simplified breakdown of the protocol.

Bringing engineers together is the motivation for Thierry Marchal, Ansys, who writes an impassioned blog following the biomedical track of the company’s CADFEM Users Meeting in Nuremberg, Germany. We are all in this together, is the theme, so perhaps we should exchange ideas. It just might catch on.

Admitting to stating the obvious, John Day, Mentor Graphics, states that younger people are more interested in automotive connectivity than older people are. He goes on to share some interesting stats from a recent survey on automotive connectivity.

Caroline Hayes, Senior Editor

Blog Review – Mon. August 25, 2014

Monday, August 25th, 2014

Real Intent shoots and scores for debug; DVCon takes on the world; Customized multi-core SoCs; Tech-history lesson.
By Caroline Hayes, Senior Editor

A football – or soccer – pun worthy of a yellow card sets Ramesh Dewangan, Real Intent, on a path of comparing power EDA tools to a line of defense that will change the scoreline so you are not left feeling “as sick as a parrot” (as they say in the post-match interview).

The Accellera Design & Verification Conference is moving outside of its Silicon Valley comfort zone, reports Dennis Brophy, Mentor Graphics. His blog has details of DVCon India (Sept. 25-26) and DVCon Europe (Oct. 14-15).

An informative blog by Mayank Sharma, ARM, shows the cost – in dollar terms – of increasingly complex SoC design and has some tips to tackle multi-core SoCs.

Nostalgic contemplation from Brian Fuller, Cadence, who recognizes the 103-year history of IBM, its contribution to storage and the needs of today’s big data society. Some interesting perspectives on then and now.

Blog Review – Mon. August 18 2014

Monday, August 18th, 2014

MonolithIC 3D speculates on what Intel’s 14nm process will bring; evergreen appeal of hybrid prototyping; smart cities boom; Cadence advocates human nature in interface design; in favor of the auto router.
By Caroline Hayes.

Digging into the Intel 14nm technology, Zvi Or-Bach, MonolithIC 3D, finds some areas of contention and even some shortfall in the processor company’s cost reduction and dimensional scaling path. His argument is clear and concise, and he uses some of Intel’s own slides to make a point.

Returning to a popular theme, Michael Posner, Synopsys, recaps the role of the UMRBus and the HAPS-60 for configuration and monitoring, providing some detailed, but clear illustrations and examples for connecting the latter to a host for prototyping.

The world’s population is set to grow, the number of city dwellers is expected to double and the number of smart cities is expected to quadruple over the next 12 years. Luckily, Rambus executive Jerome Nadel agrees with the latest IHS assessment of the ‘ubiquitous connectivity and increasing bandwidth’ of our future cityscapes.

Talking to technology is one example of how the next technological innovation could allow for the ‘natural human experience’, suggests Seow Yin Lim, Cadence. To provide this kind of interface will require a new approach to system design, new processors, design IP, verification IP, tools, flows and partner ecosystems, but it will be worth the effort.

Standing up for the rights of auto routers, Vern Wnek, Mentor Graphics, writes an impassioned blog about the ‘myth of pretty design’ and provides a link to a webinar in support of the ‘Routing Automation: A Breakthrough in Productivity’ protest group.

Blog Review – Monday August 04, 2014

Monday, August 4th, 2014

Industry flies high; FinFET sign-off rules; Arduino Maker Faire; Crowdfunding is not new; Components with a conscience.
By Caroline Hayes, Senior Editor

Over $201 billion of orders and commitments were placed at the recent Farnborough International Airshow (FIA) 2014, causing J Van Domelen, Mentor Graphics, some excitement at the buoyancy of the aerospace market internationally and what this means for the industry generally.

An interesting question is posed by Muhammad Zakir, Apache Design, who asks ‘How is FinFET technology changing the meaning of chip sign-off?‘ He includes some graphics to illustrate revised sign-off rules.

Clearly an enthusiast of Arduino, Lori Kate, ARM, attended the Arduino Maker Faire and listened to Massimo Banzi talk about the Arduino Zero, sharing the stage with Atmel’s Reza Kazerounian and ARM’s CEO, Simon Segars.

Crowdfunding is not a new idea, reveals Hamilton Carter, who has been digging around magazine archives to find a wonderful example of retro crowdfunding proposed in a 1958 satellite project. This blog coincides with contemporary attempts to launch a satellite using crowdfunding. It just shows there is nothing new under the sun.

Challenging the role each of us plays in participating in the conflict minerals industry and defining humanitarian technology, Mayura Garg writes an interesting blog on just what to look for, on the basis that awareness can lead to action.

Blog Review Mon. July 21, 2014

Monday, July 21st, 2014

Plane talking at the Farnborough Air Show from Dassault Systemes and Synopsys goes vertical; Forte Q&A; New views at ARM; Interface integration at Mentor Graphics. By Caroline Hayes, Senior Editor.

Inspired by the vertical takeoff of the iconic Harrier jump jet, Michael Posner looks at the vertical HAPS daughter board launched by Synopsys. He lists the real estate and connector benefits of thinking “laterally”.

When the acquisition of Forte by Cadence was announced, there were many questions asked, and Richard Goering, Cadence, asks them of former Forte CEO Sean Dart. The current senior group director for R&D in the Cadence System-Level Design Group has some interesting insights into high-level synthesis and the advantages of being acquired by Cadence.

Dassault Systemes is flying high, with its own chalet at this year’s Farnborough International Airshow in the UK. Amongst the aircraft, the company had dedicated meeting spaces and a 3DEXPERIENCE Playground, reports Aurelien. There are some great images on this blog, amongst the news items.

New boy on the ARM block, Varun Sarwal, dives straight into the blogosphere with news of the Juno 64-bit development platform. He explains the architecture well, covering hardware and software overviews.

Welcoming a guest blogger at Mentor Graphics, Phil Brumby examines user interfaces and relates how the company has finished integrating its own Nucleus RTOS and development tools with the Embedded Wizard UI tool from TARA Systems for MCU- or MPU-based hardware systems, such as the glucose meter illustrated on the blog.

Deeper Dive – A 3D-IC round table – part II

Thursday, July 17th, 2014

What’s needed for 3D-ICs to flourish? asks Caroline Hayes, senior editor. Experts from Mentor Graphics, Altera and Synopsys have some ideas for future progress.

The round table is made up of John Ferguson (JF), Director of Marketing, Calibre DRC Applications, Design to Silicon Division, Mentor Graphics; John Park (JP), Business Development Manager and Methodology Architect, System Design Division, Mentor Graphics; Steve Pateras (SP), Product Marketing Director, Silicon Test Products, Mentor Graphics; Michael White (MW), Director of Product Marketing, Calibre Physical Verification, Mentor Graphics; Arif Rahman (AR), Program Director and Architect, Altera Corporation; and Marco Casale-Rossi (MCR), Senior Staff Product Marketing Manager, Design Group, Synopsys.

What is the scale of 3D-IC adoption/use, for example at 20 or 14nm? What can accelerate adoption?
[JF:] I am just starting to see some 20nm 3D designs. Most 3D designs I’ve seen are focused on more mature nodes: 45, 65, etc.
[JP:] 3D-IC is still at the R&D stage in both academia and the semiconductor industry, the exception being the memory providers. At the 20/14nm nodes, scaling Analog/RF blocks may not be practical, so being able to partition the device into multiple chips across a silicon interposer starts to make a lot of sense. However, issues such as known good die, assembly, and supply-chain challenges, including IP, need to be overcome.
[AR:] So far, we see very limited deployment of 3D IC in 20nm and 14nm technologies. Memory vendors appear to be taking the lead to deploy 3D IC technology in 20-ish nm process technology to create high-bandwidth and high-capacity memory. JEDEC HBM and Micron’s HMC are two of the early examples.
[MCR:] Memory and FPGA makers are the early adopters. It enables design-level integration for digital + analogue & mixed-signal, and is a good option for ‘More than Moore’ integration, which includes integration of electronics + micro-electro-mechanics + photonics.

What has to come first – availability of the 3D IC process, so that EDA tools can be developed, or tool flows so that foundries can produce 3D ICs economically?
[JF:] It is a chicken and egg situation. EDA doesn’t want to invest in the tools until they know there is a real design market, which implies a foundry process. Foundries don’t want to invest in a process unless there is a design market and EDA tools. Designers don’t want to take the risk without a fully supported flow from a foundry, implying EDA tools in place. The initial work has been done by those market niches which can immediately benefit. They take on the risk and have been working to push those EDA and foundry providers that are willing to share the risk. Issues identified get fixed. Successes get published and begin to establish legitimacy.
[JP:] This will require collaboration between both sides with tools and flows being developed simultaneously.
[SP:] 3D DFT related tools are essentially independent of the 3D IC process and can therefore be developed independently. Mentor already offers a commercial 3D memory BIST solution and has also developed a comprehensive 3D logic test solution that is already part of TSMC’s 3D reference flow.
[AR:] EDA tool readiness is not a real showstopper to 3D technology adoption. We believe existing tools and design methodologies can be extended easily to enable 3D IC design.
[MCR:] Synopsys collaborated with the foundries to enable development and adoption of 3D-IC processes. It’s not a question of one waiting for the other, but of working together to make 3D-IC a reality. Synopsys has made several announcements with TSMC and other partners regarding our 3D-IC collaboration.

What standards would you like to see in place, to regulate interconnect, test for example, for 3D IC?
[JF:] This is a tricky space. Some come at it from an IC perspective, while others come at it from a packaging or board-level perspective. Each has its own set of internal standards, but there is really nothing that easily crosses them all. If we focus each tool/standard in the right domain (i.e. design the ICs with IC-based tools, design boards with board tools, etc.) then we just need to make sure that all those components come together correctly. What format will be used for that will largely depend on the process being implemented to combine it all together.
[SP:] The most critical aspect of testing devices within a 3D stack is gaining access to the devices in the middle of the stack. This requires a test access architecture that is shared among all devices in the stack. Since devices will likely originate from different vendors, it becomes necessary to have in place an industry-standard test access architecture that can be adopted by all device developers. The IEEE 1838 working group is currently developing this needed standard.
[MW:] Yes, the movement to full 3D with >2 stacked dies does bring along the need to deal with thermal, stress, etc. to a far greater degree than is required with 2.5D. For some customers and applications, 2.5D will be an easier and more attractive alternative to vertical stacking for some time. One particular thing that would make 2.5D or other side-by-side approaches like wafer fan-out more interesting is reducing the cost of the medium being used as the interposer. Multiple foundry ecosystems and packaging houses are exploring lower cost solutions to make 2.5D cost effective for more applications.
[MCR:] Synopsys is playing a critical role in defining and driving relevant standards. A Principal Engineer from Synopsys is the vice-chair of the IEEE P1838 working group. The charter of this group is to define standardized test access for 3D-IC.
[AR:] Test/KGD methodology and an interface standard for chip-to-chip communications.
Different types of devices in a 3D stack need to communicate with each other; these devices are expected to be procured from different companies and will be fabricated in different process nodes. There is a need for a common low-power and scalable (able to scale to higher bandwidth) interface standard to enable heterogeneous integration of 3D systems. This interface standard should define physical, electrical, logical, and test requirements.

What opportunities do you see 3D IC will bring for designs in the next five to 10 years (e.g applied to lower geometries; opportunities for new applications, e.g. mobile healthcare use)?
[JF:] Silicon photonics is gaining interest right now for its promise of very low power for use in large-scale compute farms (i.e. cloud computing). Unlike IC design, photonics does not benefit from a shrink. The geometric widths are dictated by the wavelengths of light being used, which are typically much greater than the smallest features available in today’s nanometer processes. So, there is a lot of speculation that designers who wish to combine photonics with advanced IC electronics will do so in a 2.5D or 3D infrastructure, for example adding photonic circuitry into interposers at larger process nodes, while keeping the electronics on advanced processes stacked on the interposer itself.
[MCR:] The opportunities are amazing: if we are able to assemble parts from different sources, manufactured using disparate technology nodes, adding MEMS and silicon photonics into the picture, it would lead to a revolution in automotive, computing, healthcare, mobile, and networking. For example, a component small enough could be injected into the human body to test something (MEMS sensors), compute (CPU/MCU), transmit (RF) and/or store (FLASH) the results onboard, and dispense (MEMS actuators) the appropriate amount of a medication…the imagination is the only limit.
[AR:] Extreme miniaturization of a complete system, which includes sensing, processing, and communication functions, into a very small form factor may require the use of 3D IC technologies. Such systems can facilitate new applications in healthcare (wearable electronics/personalized monitoring, implanted drug-delivery devices), environmental monitoring, and imaging.
2.5D/3D integration of optical and electrical components can enable broader deployment of integrated optical interconnect technologies for short-reach communication.

Blog Review – Mon. July 14 2014

Monday, July 14th, 2014

Accellera prepares UVM; Shades of OpenGL ES; Healthy heart in 3D; Webinar for SoC-IoT; Smart watch tear-down. By Caroline Hayes, Senior Editor.

An informative update on Universal Verification Methodology (UVM) 1.2 is set out by Dennis Brophy, Mentor Graphics, following the announcement by Accellera of the update. Ahead of the final review process, which will end October 31, the author sets out what the standard may mean for current and future projects.

The addition of Compute Shaders is one of the most notable changes to the OpenGL ES mobile API, says Tim Hartley, ARM. He explains what these APIs do and where to use them for maximum effectiveness.

Dassault Systemes was part of The Living Heart Project, producing a video with the BBC to advertise the world’s first realistic 3D simulation model of a human heart, developed with Simulia software. The blog, by Alyssa, adds some background and context to how it can be used.

A webinar on Tuesday July 22, covering the SoC verification challenges in the IoT, will be hosted by ARM and Cadence. Brian Fuller flags up why presenters in ‘SoC Verification Challenges in the IoT Age’ will help those migrating from 8- and 16-bit systems, with a focus on using an ARM Cortex-M0 processor.

Inside the Galaxy Gear, the truly wearable smart watch, is an ARM Cortex-M4-powered STMicroelectronics device. Chris Ciufo cannot pretend to be taken off-guard by the ABI Research teardown.
