
Chip Design Magazine



Posts Tagged ‘Mentor’


Blog Review – Monday, February 16, 2015

Monday, February 16th, 2015

Repeating last year’s project, Gagan Luthra, ARM, explains this year’s 100 projects in 100 days for the new Bluetooth Low Energy (BLE) Pioneer Kit, built around Cypress’s PSoC 4 BLE, an ARM Cortex-M0 CPU with an integrated BLE radio.

Steve Schulz visited Cadence and entranced Brian Fuller with his ideas for IoT standards, the evolution of the SoC into a System on Stack, and the design gaps that lie en route.

Fighting the Nucleus corner, Chris Ciufo, EE Catalog, rejoices in the news that Mentor Graphics is repositioning the RTOS for Industrie 4.0 factory automation and standing up to the tough guys in the EDA playground.

Following the news that Intel is to buy Lantiq, Martin Bijman, Chipworks, looks into what the acquisition will bring to the Intel stable, presenting some interesting statistics.

Maybe a little one-sided, but the video presented by Bernie DeLay, Synopsys, is an informative look at how VIP architecture accelerates memory debug through simultaneous visualization.

The normally relaxed and affable Warren Savage, IPextreme, is getting hot under the collar at the thought of others ‘borrowing’, or plain old plagiarism, as he puts it in his post. The origins of the material will be argued (the offending article has been removed from the publication’s site), but Savage uses the incident to draw a distinction between articles with a back story and ‘traditional’ tech journalism.

To end on a light note, Raj Suri, Intel, presents a list compiled by colleagues of employees that look like celebrities. Nicolas Cage, Dame Helen Mirren and Michael J Fox doppelgangers are exposed.

Caroline Hayes, Senior Editor

Blog Review – Monday, Nov. 17 2014

Monday, November 17th, 2014

Harking back to analog; What to wear in wearables week; Multicore catch-up; Trusting biometrics
By Caroline Hayes, Senior Editor.

Adding a touch of nostalgia, Richard Goering, Cadence, reviews the keynote Boris Murmann delivered at the Mixed-Signal Summit at Cadence HQ. His ideas for reinvigorating the role of analog make interesting reading.

As if there wasn’t enough stress about what to wear, ARM adds to it with its Wearables Week, although David Blaza finds that Shane Walker, IHS, is pretty relaxed, offering a positive view of the wearables and medical market.

Practice makes perfect, believes Colin Walls, Mentor, who uses his blog to highlight common misconceptions about C++, multicore and MCAPI for communication and synchronisation between cores.

Biometrics are popular and ubiquitous, but Thomas Suwald, NXP, looks at what needs to be done for secure integration and the future of authentication.

Blog Review – Mon. August 11 2014

Monday, August 11th, 2014

VW e-Golf; Cadence’s power signoff launch; summer viewing; power generation research.
By Caroline Hayes, Senior Editor

VW’s plans for all-electric Golf models have captured the interest of John Day, Mentor Graphics. He attended the Management Briefing Seminar and reports on the carbon offsetting and a solar panel co-operation with SunPower. I think I know what Day will be travelling in to get to work.

Cadence has announced plans to tackle power signoff this week and Richard Goering elaborates on the Voltus-Fi Custom Power Integrity launch and provides a detailed and informative blog on the subject.

Grab some popcorn (or not) for this summer’s blockbusters, as lined up by Scott Knowlton, Synopsys. Perhaps not the next Harry Potter series, but certainly a must-see for anyone who missed the company’s demos at PCI-SIG DevCon. This humorous blog continues the cinema analogy for “Industry First: PCI Express 4.0 Controller IP”, “DesignWare PHY IP for PCI Express at 16Gb/s”, “PCI PHY and Controller IP for PCI Express 3.0” and “Synopsys M-PCIe Protocol Analysis with Teledyne LeCroy”.

Fusion energy could be the answer to energy demands, and Steve Leibson, Xilinx, shares Dave Wilson’s (National Instruments) report of a fascinating National Instruments project to monitor and control a compact spherical tokamak (used as a neutron source) with the UK company Tokamak Solutions.

Tech Travelogue Feb 2017 – FIT, Reliability and Siemens-Mentor Acquisition

Friday, January 20th, 2017

Security Issues in IoT

Tuesday, September 27th, 2016

Gabe Moretti, Senior Editor

Security is one of the essential ingredients of the success of IoT architectures.  There are two major sides to security: data security and functional security.  Data security means preventing the data contained in a system from being appropriated illegally.  Functional security means preventing an outside actor from making a system function in a manner it was not intended to.  Architects must prepare for both eventualities when designing a system, devising ways to intercept such actors or to isolate the system from them.

Data Protection

The major threat to data contained in a system is the “always connected” model.  Connection to the internet is the default state for the vast majority of computing systems, whether or not what is being executed requires such a connection.  For example, the system on which I am typing this article is connected to the internet while I am using Microsoft Word: it does not need to be, since everything I need is already installed or saved on the local system.  It is clear that we need to architect modems that connect only when needed and that use “not connected” as the default state.  This would significantly diminish the opportunity for an attacking agent to upload malware or appropriate data stored on the system.  Firewalls are fine, but clearly they can be defeated, as demonstrated in many highly publicized cases.
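A “disconnected by default” policy can be expressed directly in software. The sketch below is a minimal illustration of the idea, with `NetworkLink` standing in for a real modem driver (the class and its methods are hypothetical):

```python
# Sketch of a "disconnected by default" network policy: the link is
# brought up only while a transfer is in flight, then torn down again.
# NetworkLink is a hypothetical stand-in for a real modem driver.
from contextlib import contextmanager

class NetworkLink:
    def __init__(self):
        self.connected = False   # default state: not connected

    def up(self):
        self.connected = True

    def down(self):
        self.connected = False

@contextmanager
def connected(link):
    """Bring the link up only for the duration of one transaction."""
    link.up()
    try:
        yield link
    finally:
        link.down()              # always return to the disconnected default

link = NetworkLink()
with connected(link) as l:
    assert l.connected           # online only inside the block
after = link.connected           # disconnected again once the transfer is done
```

The context manager guarantees the link returns to its “off” default even if the transfer raises an exception.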

Encryption is another useful tool for data protection.  Data stored in a system should always be encrypted, and using it locally or transmitting it to another system should require dynamic decryption and re-encryption.  The resulting execution overhead is a very small price to pay given the execution speed of modern systems.
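The encrypt-at-rest, decrypt-on-use pattern can be sketched as follows. This toy example derives a keystream from SHA-256 purely for illustration; a real system should use a vetted authenticated cipher such as AES-GCM:

```python
# Toy illustration of "encrypted at rest, decrypted only while in use".
# The keystream is derived from SHA-256 in counter mode purely for
# illustration -- use a vetted AEAD cipher (e.g. AES-GCM) in production.
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic keystream of n bytes derived from the key."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)
at_rest = xor_crypt(key, b"thermostat setpoint: 21C")  # stored (encrypted) form
in_use = xor_crypt(key, at_rest)                       # decrypted on demand
```

The point is architectural: the plaintext exists only transiently, while the stored form is unreadable without the key.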

I asked Lee Sun, Field Application Engineer at Kilopass Technology, what his company is doing to ensure security.  Here is what he said:

“Designers of chips for secure networks have begun to conclude that the antifuse one-time programmable (OTP) technology is one of the most secure embedded non-volatile memory (eNVM) technologies available today.  Antifuse OTP is a configurable on-chip memory implemented in standard CMOS logic with no additional masks or processing steps. Most importantly, antifuse OTP offers exceptional data protection because information stored in an antifuse bitcell provides virtually no evidence of the content inside. The bitcell does not store a charge, which means there’s no electrical state of the memory bit cell. The programming of the bitcell is beneath the gate oxide so the breakdown is not visible with SEM imaging. This protection at the physical layer prevents the antifuse eNVM from being hacked by invasive or semi-invasive attacks. Additional logic is available with the Kilopass IP to prevent passive attacks such as voltage tampering, glitching or differential power analysis. To date, there have been no reports of any successful attempts to extract the contents of an antifuse OTP using any of these techniques.”

Of course such a level of protection comes at a price, but a large proportion of IoT systems do not need to store data locally.  For example, a home management system that controls temperature, lighting, intrusion prevention, food management and more uses a number of computing devices that can be controlled by a central system.  Only the central node needs to have data protection, and commands to the peripheral nodes can be encrypted.
Robert Murphy, System Engineer at Cypress Semiconductor, addresses the problem by using a secure processor or MCU.  He explained:

“These are general purpose or fixed function devices, used in applications such as home automation or mobile payment authentication. They provide Digital Security functions and are occasionally encapsulated in a package that offers Physical Security. Because various applications require different levels of security, the FIPS 140-2 standard was created to put security standards and requirements in place for hardware and software products. FIPS 140-2 provides third-party assurance of security claims, ranging from level 1 through level 4. Certification for systems can be obtained through test facilities that have been certified by the National Institute of Standards and Technology.

Securing data in the system is mostly accomplished through Digital Security, which includes a combination of cryptography and memory protection. Cryptography secures a system through confidentiality, integrity, non-repudiation and authentication. Confidentiality refers to keeping a message secret from all but authorized parties through the use of cryptographic ciphers that employ the latest symmetric (secret-key) and asymmetric (public-key) standards.  Integrity ensures that a message has not been modified during transfer through the use of a hash function such as SHA. Non-repudiation is the process by which the recipient of a message is assured that the message came from the stated sender, through the use of asymmetric encryption. Authentication provides confirmation that the message came from the expected sender. Authentication can be addressed using either a Message Authentication Code (MAC), which relies on symmetric encryption and provides message integrity; or a digital signature, which relies on asymmetric encryption and provides both message integrity and non-repudiation. Attackers will attempt to circumvent cryptography through brute-force attacks or through side-channel attacks. Considering that cryptography is centered around symmetric or asymmetric keys, protecting those keys from being altered or extracted is critical. This is where Secure MCUs utilize Physical Security and the detection methods described earlier. For added security, devices can incorporate a Physical Unclonable Function (PUF). These are circuits that rely on the uniqueness of the physical microarchitecture of a device that is inherent to the manufacturing process.  This uniqueness can then be applied to cryptographic functions, such as key generation.

Memory protection is comprised of several aspects, which work in layers. The first layer is JTAG and SWD access to the device. This is the mechanism used to program the device initially and, if left exposed, can be used to reveal memory contents and other critical information. Therefore, it is important to disable or lock out this interface when deploying a system to production. A Secure MCU can permanently remove the interface through the use of Fuse bits, which are one-time programmable (OTP) memory where an unblown bit is represented by a logical value of zero and a blown bit by a logical value of one.  The next layer is the Secure Boot process. As discussed earlier, Secure Boot consists of a Root-of-Trust that verifies the integrity of the device firmware and can prevent uncertified firmware from ever executing. Since the root of trust cannot be modified by firmware, it is immune to malicious firmware.  Next are Memory Protection Units (MPUs), which are hardware blocks in a device that are used to control access rights to areas of device memory. For example, using MPUs, a Secure MCU can limit access to crypto key storage. In the event an attack can circumvent the secure boot procedure or initiate a software attack through a communication interface, MPUs can limit the resources that the firmware has access to.

When employing any security solution, identify what needs to be secured and focus on that. Otherwise, you run the risk of creating a backdoor. By implementing these Digital Security functions, and layering it under a solid Physical Security solution, one can have a reasonable level of confidence that the data in the system is secured.”
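The MAC-based authentication Murphy describes can be illustrated with Python’s standard `hmac` module: a shared secret key lets the receiver check both the integrity and the origin of a message. The key and messages below are made up for the example:

```python
# Sketch of MAC-based authentication: sender and receiver share a secret
# key; the HMAC tag proves both integrity and origin of a message.
import hashlib
import hmac

secret = b"shared-device-key"            # hypothetical provisioned key

def tag(message: bytes) -> bytes:
    """Compute the HMAC-SHA256 tag for a message."""
    return hmac.new(secret, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # compare_digest runs in constant time, resisting timing attacks
    return hmac.compare_digest(tag(message), received_tag)

msg = b"unlock front door"
t = tag(msg)
ok = verify(msg, t)                      # untampered message verifies
bad = verify(b"unlock all doors", t)     # altered message is rejected
```

A digital signature would replace the shared key with an asymmetric key pair, adding the non-repudiation property described above.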

Robert Gates, Chief Safety Officer, ESD, at Mentor Graphics, described the data security requirements in automotive systems.

“Critical data in the automotive context will be handled in a similar way. A Trusted Execution Environment will be established to define and enforce policies for data that must be secured, which will apply to reading sensitive data, creating new sensitive data, and overwriting this data. This data will take on several forms; customer information such as financial and location information, as well as information that is managed by the manufacturer such as calibration settings and other forms of data.”

Inhibiting alien functionality

The collective security of a system requires that it use the correct data and perform the required functions.  The first requirement is the one most often covered and understood by the public at large, but the second is equally important, with consequences just as negative.

Angela Raucher, Product Line Manager, ARC EM Processors at Synopsys summarized the problem and required solution this way: “With security there is no magic bullet, however taking care to include as many security features as is practical will help extend the amount of time and effort required by potential hackers to the point where it isn’t feasible for them to perform their attacks.”  Ms. Raucher continued: “One method to protect highly sensitive operations such as cryptography and generation of secret keys is to add a separate secure core, such as an ARC SEM security processor, with its own memory resources, that can perform operations in a tamper-resistant isolated environment (see figure 1).  The ARC SEM processor uses multiple privilege levels of access control, a bus state signal denoting whether the processor is in a secure mode, and a memory protection unit that can allocate and protect memory regions based on the privilege level to separate the trusted and non-trusted worlds.  For the case of ARC SecureShield technology, there is also a unique feature enabling each memory region to be scrambled or encrypted independently, offering an additional layer of protection.”

Figure 1. Trusted Execution Environment using ARC SecureShield Technology

Robert Gates from Mentor takes the same direction and points out that “The root of trust starts with the microprocessor, which will generally have some kind of secure storage capabilities such as ARM® TrustZone or Intel Secure Guard Extensions (SGX). This secure storage, which is part of a trusted hardware component (an important topic in itself beyond the scope of the current discussion) contains the signature of a boot-loader (placed in secure storage by the device manufacturer) and the crypto key to be used to unlock and enable its operation; assuming these are as expected by the microprocessor, the second stage loader is allowed to execute, establishing it as a trusted component. A similar exchange occurs between the loader and the operating system on its boot-up (whether this is an RTOS or a more fully featured OS like Linux or Android), establishing the kernel as a trusted component (Figure 2).”

Figure 2. Establishing the kernel as a trusted component
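The boot chain Gates describes reduces to a simple check at each stage: an immutable digest, stored by the manufacturer, gates execution of the next component. A minimal sketch, using a hypothetical firmware image:

```python
# Minimal sketch of a secure-boot check: the root of trust holds an
# immutable digest of the approved firmware and refuses to hand off
# execution when the image does not match.
import hashlib

# Hypothetical firmware image and its digest, burned into OTP/fuses
APPROVED_FIRMWARE = b"\x01\x02firmware-v1.3\xff" * 4
TRUSTED_DIGEST = hashlib.sha256(APPROVED_FIRMWARE).digest()

def secure_boot(image: bytes) -> bool:
    """Return True only if the image hashes to the trusted digest."""
    return hashlib.sha256(image).digest() == TRUSTED_DIGEST

boots = secure_boot(APPROVED_FIRMWARE)               # certified firmware runs
tampered = secure_boot(APPROVED_FIRMWARE + b"\x00")  # one changed byte fails
```

Real secure boot verifies a cryptographic signature rather than a bare hash, so the manufacturer can update firmware without reprogramming the fuses, but the gating logic is the same.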

Robert Murphy, System Engineer at Cypress, points out that there are ways to steal information without executing code.  “Non-invasive attacks are the simplest and most inexpensive to perform; however, they often can be difficult to detect as they leave no tamper evidence. Side-channel attacks, the most common type, consist of Simple Power Analysis (SPA), Differential Power Analysis (DPA) and Electromagnetic Analysis (EMA). SPA and DPA attacks are effective at determining information about general functionality or cryptographic functions, as the power consumed by a processor or MCU varies based on the operation being performed. For example, the squaring and multiplication operations of RSA encryption exhibit different power profiles and can be distinguished using an oscilloscope. Similarly, with EMA an attacker can reach the same outcome by studying the electromagnetic radiation from a device. Due to the passive nature of these attacks, countermeasures are fairly open-loop.  EMA attacks can be prevented through proper shielding. DPA and SPA attacks can be prevented by increasing the amount of power supply noise, varying algorithm timing, and randomly inserting instructions, such as NOPs, that have no effect on the algorithm but impact power consumption.”
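Side channels are easiest to see in miniature. The naive comparison below leaks, through its running time, how many leading bytes of a guess matched the secret; this data-dependent behavior is exactly what the countermeasures above try to mask. Python’s `hmac.compare_digest` examines every byte regardless of where a mismatch occurs:

```python
# A timing side channel in miniature: a naive byte-by-byte comparison
# exits early, so its running time reveals how long the matching prefix
# was. hmac.compare_digest avoids this by doing constant-time work.
import hmac

def naive_compare(a: bytes, b: bytes):
    """Return (equal, bytes_examined) -- the second value is the leak."""
    if len(a) != len(b):
        return False, 0
    for i in range(len(a)):
        if a[i] != b[i]:
            return False, i + 1   # early exit: work done reveals the prefix
    return True, len(a)

secret = b"CAFEBABE"
_, work_first = naive_compare(b"XAFEBABE", secret)  # wrong first byte: 1 step
_, work_last = naive_compare(b"CAFEBABX", secret)   # wrong last byte: 8 steps
safe = hmac.compare_digest(b"XAFEBABE", secret)     # constant-time: no leak
```

An attacker timing `naive_compare` can recover a secret one byte at a time, which is the software analogue of the power-profile leakage Murphy describes.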

Developing a secure system

Both Imperas and OneSpin Solutions have pointed out to me that it is important to check the level of security of a system while the system is being developed.

David Kelf, Vice President of Marketing at OneSpin Solutions observed that “Many common hardware vulnerabilities take the form of enabling an unexpected path for data transfer or operational control. This can happen through the regular operational conduits of a design, or ancillary components such as the scan path, debug interfaces or an unexpected memory transfer. Even the power rails in a device may be manipulated to track private data access. The only sensible way to verify that such a path does not exist is to check every state that various blocks can get into and make sure that none of these states allows either a confidentiality breach, or an operational integrity problem.

An ideal technology for this purpose is Formal Verification, which allows questions to be asked of a design, such as “can this private key ever get to an output except the one intended” and have this question matched against all possible design states. Indeed, Formal is now being used for this purpose in designs that require a high degree of security.”
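Kelf’s question, “can this private key ever get to an output except the one intended,” can be mimicked on a toy model by exhaustively exploring every reachable state, which is what a formal tool does symbolically on real RTL. The finite-state machine below is a hypothetical stand-in for a design block:

```python
# Toy version of a formal security check: exhaustively explore every
# reachable state of a small design model and confirm the private key
# value never appears on the output port. The FSM is hypothetical.
KEY = 0xA5   # the private value that must never leave the block

def step(state, inp):
    """Transition function of a tiny design: returns (next_state, output)."""
    if state == "idle" and inp == "load":
        return "loaded", 0                    # key loaded internally
    if state == "loaded" and inp == "encrypt":
        return "idle", (KEY ^ 0xFF) & 0x0F    # only a masked nibble leaves
    return state, 0

def reachable_outputs():
    """Breadth-first exploration of all reachable states and outputs."""
    seen, frontier, outputs = {"idle"}, ["idle"], set()
    while frontier:
        s = frontier.pop()
        for inp in ("load", "encrypt", "noop"):
            nxt, out = step(s, inp)
            outputs.add(out)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return outputs

leaks = KEY in reachable_outputs()   # False: no reachable state drives the key out
```

A real formal tool proves the same property over the full RTL state space without enumerating it explicitly, using the design’s transition relation directly.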

Larry Lapides, VP Sales at Imperas, remarks that the use of virtual platforms can significantly contribute to finding and fixing potential vulnerabilities in a system.  “The virtual platform based software development tools allow the running of the exact executables that would run on the hardware, but with additional controllability, observability and determinism not available when using the hardware platforms.

Among the approaches being used for security, hypervisors show excellent initial results.  Hypervisors are a layer of software that sits between the processor and the operating system and applications.  A hypervisor allows guest virtual machines (VMs) to be configured on the hardware, with each VM isolated from the others.  This isolation enables one or more guest VMs to run secure operating systems or secure applications.

Hypervisors have become increasingly common in different sectors in the industry: mil-aero, factory automation, automotive, artificial intelligence and the IoT. Embedded hypervisors face evolving demands for isolation, robustness and security in the face of more diverse and complex hardware, while at the same time needing to have minimal power and performance overhead.”

Imperas is a founding member of the prpl Foundation Security Working Group, which has brought together companies and individuals with expertise in various aspects of embedded systems and security to document current best practices and derive recommended new security practices for embedded systems.


Due to the scope of the subject this article is an overview only.  Follow up articles will present more details on the matter covered.

Power Analysis and Management

Thursday, August 25th, 2016

Gabe Moretti, Senior Editor

As transistor sizes shrink and device structures change, power management becomes more critical.  As I polled various EDA vendors, it became clear that most offer solutions for the analysis of power requirements together with software-based methods to manage power use, while at least one offers a hardware-based approach.  I struggled to find a way to coherently present their responses to my questions, but decided that extracting significant pieces of their written responses would not be fair.  So I organized a type of virtual round table, and I will present their complete answers in this article.

The companies submitting responses are: Cadence, Flex Logix, Mentor, Silvaco, and Sonics.  Some of the companies presented their own understanding of the problem.  I am including that portion of their contribution as well, to give better context for the description of the solution.


Krishna Balachandran, product management director for low power solutions at Cadence, provided the following contribution.

Not too long ago, low power design and verification involved coding a power intent file and driving a digital design from RTL to final place-and-route and having each tool in the flow understand and correctly and consistently interpret the directives specified in the power intent file. Low power techniques such as power shutdown, retention, standby and Dynamic Voltage and Frequency Scaling (DVFS) had to be supported in the power formats and EDA tools. Today, the semiconductor industry has coalesced around CPF and the IEEE 1801 standard that evolved from UPF and includes the CPF contributions as well. However, this has not equated to problem solved and case closed. Far from it! Challenges abound. Power reduction and low power design which was the bailiwick of the mobile designers has moved front-and-center into almost every semiconductor design imaginable – be it a mixed-signal device targeting the IoT market or large chips targeting the datacenter and storage markets. With competition mounting, differentiation comes in the form of better (lower) power-consuming end-products and systems.

There is an increasing realization that power needs to be tackled at the earliest stages in the design cycle. Waiting to measure power after physical implementation is usually a recipe for multiple, non-converging iterations because power is fundamentally a trade-off vs. area or timing or both. The traditional methodology of optimizing for timing and area first and then dealing with power optimization is causing power specifications to be non-convergent and product schedules to slip. However, having a good handle on power at the architecture or RTL stage of design is not a guarantee that the numbers will meet the target after implementation. In other words, it is becoming imperative to start early and stay focused on managing power at every step.

It goes without saying that what can be measured accurately can be well-optimized. Therefore, the first and necessary step to managing power is to get an accurate and consistent picture of power consumption from RTL to gate level. Most EDA flows in use today use a combination of different power estimation/analysis tools at different stages of the design. Many of the available power estimation tools at the RTL stage of design suffer from inaccuracies because physical effects like timing, clock networks, library information and place-and-route optimizations are not factored in, leading to overly optimistic or pessimistic estimates. Popular implementation tools (synthesis and place-and-route) perform optimizations based on measures of power using built-in power analysis engines. There is poor correlation between these disparate engines leading to unnecessary or incorrect optimizations. In addition, mixed EDA-vendor flows are plagued by different algorithms to compute power, making the designer’s task of understanding where the problem is and managing it much more complicated. Further complications arise from implementation algorithms that are not concurrently optimized for power along with area and timing. Finally, name-mapping issues prevent application of RTL activity to gate-level netlists, increasing the burden on signoff engineers to re-create gate-level activity to avoid poor annotation and incorrect power results.

To get a good handle on the power problem, the industry needs a highly accurate but fast power estimation engine at the RTL stage that helps evaluate and guide the design’s micro-architecture. That requires the tool to be cognizant of physical effects – timing, libraries, clock networks, even place-and-route optimizations at the RTL stage. To avoid correlation problems, the same engine should also measure power after synthesis and place-and-route. An additional requirement to simplify and shorten the design flow is for such a tool to be able to bridge the system-design world with signoff and to help apply RTL activity to a gate-level netlist without any compromise. Implementation tools, such as synthesis and place-and-route, need to have a “concurrent power” approach – that is, consider power as a fundamental cost-factor in each optimization step side-by-side with area and timing. With access to such tools, semiconductor companies can put together flows that meet the challenges of power at each stage and eliminate iterations, leading to a faster time-to-market.
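The trade-offs described above all rest on the standard dynamic-power relation P_dyn ≈ α·C·V²·f (switching activity, switched capacitance, supply voltage, clock frequency). A small sketch with illustrative, not process-specific, numbers shows why supply voltage dominates:

```python
# The dynamic-power relation underlying RTL power estimation:
# P_dyn ≈ alpha * C * V^2 * f. All values below are illustrative only.
def dynamic_power(alpha, cap_farads, vdd_volts, freq_hz):
    """Dynamic power in watts from activity, capacitance, voltage, frequency."""
    return alpha * cap_farads * vdd_volts ** 2 * freq_hz

base = dynamic_power(0.15, 1e-9, 1.0, 1e9)   # example block at 1.0 V, 1 GHz
lowv = dynamic_power(0.15, 1e-9, 0.9, 1e9)   # same block at 0.9 V

# Quadratic dependence on VDD: a 10% supply drop saves ~19% dynamic power
saving = 1 - lowv / base
```

This quadratic dependence is why voltage-centric techniques (multi-VDD, DVFS, power gating) recur throughout the contributions below.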

Flex Logix

Geoff Tate, Co-founder and CEO of Flex Logix, is the author of the following contribution.  Our company is a relatively new entry in the embedded FPGA market.  It uses TSMC as a foundry.  Microcontrollers and IoT devices being designed in TSMC’s new ultra-low-power 40nm process (TSMC 40ULP) need:

• The flexibility to reconfigure critical RTL, such as I/O

• The ability to achieve performance at lowest power

Flex Logix has designed a family of embedded FPGAs to meet this need. The validation chip to prove out the IP is in wafer fab now.

Many products fabricated with this process are battery operated: there are brief periods of performance-sensitive activity interspersed with long periods of very low power mode while waiting for an interrupt.

Flex Logix’s embedded FPGA core provides options to enable customers to optimize power and performance based on their application requirements.

To address this requirement, the following architectural enhancements were included in the embedded FPGA core:

• Power Management comprising five different power states:

  • Off state, where the EFLX core is completely powered off.
  • Deep Sleep state, where the VDDH supply to the EFLX core can be lowered from a nominal 0.9V/1.1V to 0.5V while retaining state.
  • Sleep state, which gates the supply (VDDL) that feeds all the performance logic, such as the LUTs, DSP and interconnect switches of the embedded FPGA, while retaining state. The latency to exit Sleep is shorter than that to exit Deep Sleep.
  • Idle state, which idles the clocks to cut power but is ready to move into the Dynamic state more quickly than the Sleep state.
  • Dynamic state, where power is the highest of the five power states but latency is the shortest; used during periods of performance-sensitive activity.
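The five states trade standby power against wake-up latency. The sketch below models that trade-off as a simple table and selection rule; the power and latency numbers are hypothetical placeholders, not published Flex Logix figures:

```python
# Sketch of a five-state power management scheme like the one described
# above. The relative-power and wake-latency numbers are hypothetical
# placeholders, not real EFLX figures.
POWER_STATES = {
    # state:      (relative_power, wake_latency_us, retains_state)
    "off":        (0.00, 10000, False),
    "deep_sleep": (0.01,   500, True),   # supply lowered, state retained
    "sleep":      (0.05,    50, True),   # performance logic gated
    "idle":       (0.30,     5, True),   # clocks idled
    "dynamic":    (1.00,     0, True),   # full performance
}

def cheapest_state(max_wake_us):
    """Pick the lowest-power state whose wake latency meets the deadline,
    requiring state retention so work can resume where it left off."""
    candidates = [(power, name)
                  for name, (power, latency, retains) in POWER_STATES.items()
                  if retains and latency <= max_wake_us]
    return min(candidates)[1]

state = cheapest_state(100)   # must wake within 100 us of an interrupt
```

Firmware would call such a policy on each idle period, descending as deep as the next deadline allows.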

The other architectural features available in the EFLX-100 embedded FPGA to optimize power-performance are:

• State retention for all flip-flops and configuration bits at voltages well below the operating range.

• The ability to directly control body bias voltage levels (Vbp, Vbn). Controlling the body bias further controls leakage power.

• Five combinations of threshold-voltage (VT) devices to optimize power and performance for the static/performance logic of the embedded FPGA. The higher the threshold voltage (eHVT, HVT), the lower the leakage power and the performance; the lower the threshold voltage (SVT), the higher the leakage and the performance:

• eHVT/eHVT

• HVT/HVT

• HVT/SVT

• eHVT/SVT

• SVT/SVT

In addition to the architectural features, various EDA flows and tools are used to optimize the power, performance and area (PPA) of the Flex Logix embedded FPGA:

• The embedded FPGA was implemented using a combination of standard floor-planning and P&R tools to place and route the configuration cells, DSP and LUT macros and network fabric switches. This resulted in higher density, reducing IR drop and the need for larger drive strengths, thereby optimizing power.

• Longer (non-minimum) channel-length devices were designed in and used, which further reduce leakage power with minimal to no impact on performance.

• The EFLX-100 core was designed with an optimized power grid to effectively use metal resources for power and signal routing. An optimal power grid reduces DC/AC supply drops, which further increases performance.


Arvind Narayanan, Architect, Product Marketing, Mentor Graphics contributed the following viewpoint.

One of the biggest challenges in IC design at advanced nodes is the complexity inherent in effective power management. Whether the goal is to reduce on-chip power dissipation or to provide longer battery life, power is taking its place alongside timing and area as a critical design dimension.

While low-power design starts at the architectural level, the low-power design techniques continue through RTL synthesis and place and route. Digital implementation tools must interpret the power intent and implement the design correctly, from power aware RTL synthesis, placement of special cells, routing and optimization across power domains in the presence of multiple corners, modes, and power states.

With the introduction of every new technology node, existing power constraints are tightened to optimize power consumption and maximize performance. The 3D transistors (FinFETs) introduced at smaller technology nodes have higher input pin capacitance than their planar counterparts, so the dynamic power component is now higher relative to leakage.

Power Reduction Strategies

A good strategy to reduce power consumption is to perform power optimization at multiple levels during the design flow, including software optimization, architecture selection, RTL-to-GDS implementation and process technology choices. The biggest power savings are usually obtained early in the development cycle, at the ESL and RTL stages (Fig 1). During the physical implementation stage there is comparatively less opportunity for power optimization, so choices made earlier in the design flow are critical. Technology selection, such as the device structure (FinFET, planar), choice of device material (HiK, SOI) and technology node, all play a key role.

Figure 1. Power reduction opportunities at different stages of the design flow

Architecture selection

Studies have shown that only optimizations applied early in the design cycle, when a design’s architecture is not yet fixed, have the potential for radical power reduction.  To make intelligent power-optimization decisions, tools have to consider all factors affecting power simultaneously, and apply them early in the design cycle.  Finding the best architecture makes it possible to properly balance functionality, performance and power.

RTL-to-GDS Power Reduction

There are a wide variety of low-power optimization techniques that can be utilized during RTL to GDS implementation for both dynamic and leakage power reduction. Some of these techniques are listed below.

RTL Design Space Exploration

During the early stages of the design, the RTL can be modified to employ architectural optimizations, such as replacing a single instantiation of a high-powered logic function with multiple instantiations of low-powered equivalents. A power-aware design environment should facilitate “what-if” exploration of different scenarios to evaluate the area/power/performance tradeoffs.

Multi-VDD Flow

Multi-voltage design, a popular technique to reduce total power, is a complex task because many blocks are operating at different voltages, or intermittently shut off. Level shifter and isolation cells need to be used on nets that cross domain boundaries if the supply voltages are different or if one of the blocks is being shut down. DVFS is another technique where the supply voltage and frequency can vary dynamically to save power. Power gating using multi-threshold CMOS (MTCMOS) switches involves switching off certain portions of an IC when that functionality is not required, then restoring power when that functionality is needed.
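The DVFS trade-off mentioned above can be made concrete: for a fixed number of cycles, the dynamic energy of a task scales with V², so running slower at a lower voltage costs wall-clock time but saves energy. The numbers here are illustrative only, not from any real process node:

```python
# Sketch of the DVFS trade-off: dynamic energy of a fixed task is
# power * runtime = alpha * C * V^2 * cycles, independent of frequency.
# All parameter values are illustrative placeholders.
def task_energy(alpha, cap_farads, vdd_volts, freq_hz, cycles):
    power = alpha * cap_farads * vdd_volts ** 2 * freq_hz  # dynamic power, W
    runtime = cycles / freq_hz                             # seconds for the task
    return power * runtime                                 # joules

fast = task_energy(0.2, 1e-9, 1.1, 1e9, 1e6)   # 1 GHz at 1.1 V
slow = task_energy(0.2, 1e-9, 0.8, 5e8, 1e6)   # 500 MHz at 0.8 V

# Dynamic energy scales with V^2 only: slow/fast == (0.8/1.1)^2 ≈ 0.53
```

In practice leakage energy grows with the longer runtime, which is why power gating is combined with DVFS rather than simply running as slowly as possible.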

Figure 2. Multi-voltage layout shown in a screen shot from the Nitro-SoC™ place and route system.

MCMM Based Power Optimization

Because each voltage supply and operational mode implies different timing and power constraints on the design, multi-voltage methodologies cause the number of design corners to increase exponentially with each added domain or voltage island. The best solution is to analyze and optimize the design for all corners and modes concurrently. In other words, low-power design inherently requires true multi-corner/multi-mode (MCMM) optimization for both power and timing. The end result is a design that meets timing and power requirements across all mode/corner scenarios.
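The corner explosion is easy to see in a sketch (the domain corners and mode names below are invented for illustration): scenarios multiply rather than add.

```python
# Every operating mode must be analyzed at every PVT corner, so the
# scenario count is the product, not the sum. Names are illustrative.
from itertools import product

corners = ["ss_0.81V_125C", "tt_0.90V_25C", "ff_0.99V_-40C"]
modes = ["functional", "sleep", "scan_shift", "scan_capture"]

scenarios = [f"{mode}@{corner}" for mode, corner in product(modes, corners)]
print(len(scenarios), "mode/corner scenarios to close concurrently")
```

Add one more voltage island with its own corners and the list multiplies again, which is why concurrent MCMM optimization beats corner-by-corner iteration.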

FinFET aware Power Optimization

A FinFET-aware power optimization flow requires technologies such as activity-driven placement, multi-bit flop support, clock data optimization, interleaved power optimization, and activity-driven routing to ensure that dynamic power reduction is optimal. The tools should be able to use transforms with objective costing to make trade-offs between dynamic power, leakage power, timing, and area for the best quality of results (QoR).

Optimizing power at all stages of the design flow, especially at the architecture stage, is critical for optimal power reduction. Architecture selection, along with the complete set of technologies for RTL-to-GDS implementation, greatly impacts the ability to manage power effectively.


Seena Shankar, Technical Marketing Manager, is the author of this contribution.


Analysis of IR-drop, electro-migration (EM) and thermal effects has traditionally been a significant bottleneck in the physical verification of transistor-level designs such as analog circuits, high-speed IOs, custom digital blocks, memories, and standard cells. At the 28 nm node and below, all designers are concerned about power, EM/IR, and thermal issues. Even at the 180 nm node, high-current LDMOS designs require analysis of EM effects, rules, and thermal issues. FinFET architectures have increased these concerns because of complex DFM rules and higher current and power densities, which raise the probability of failure, so EM/IR effects need to be analyzed and managed all the more carefully. This kind of analysis and testing usually occurs at the end of the design flow, and discovering issues at that late stage makes it difficult to hold the schedule and causes expensive rework. How can we resolve this problem?


Power integrity issues must be addressed as early in the design cycle as possible to avoid expensive design and silicon iterations. Silvaco’s InVar Prime is an early-design-stage power integrity analysis solution for layout engineers. Designers can estimate EM, IR, and thermal conditions before the sign-off stage. It performs checks such as early IR-drop analysis, verification of the resistive parameters of supply networks, and point-to-point resistance checks, and it also estimates current densities. It also helps find and fix issues that are not detectable with a regular LVS check, such as missing vias, isolated metal shapes, inconsistent labeling, and detour routing.

InVar Prime can be used for a broad range of designs including processors, wired and wireless network ICs, power ICs, sensors, and displays. Its hierarchical methodology accurately models IR-drop, electro-migration, and thermal effects for designs ranging from a single block to full chip. Its patented concurrent electro-thermal analysis simulates multiple physical processes together, which is critical in today’s designs for capturing the important interactions between power and thermal 2D/3D profiles. The result is physical measurement-like accuracy at high speed, even on extremely large designs, with applicability to all process nodes including FinFET technologies.

InVar Prime requires the following inputs:

●      Layout: GDSII

●      Technology: ITF or iRCX

●      Supplementary data: layer mapping file for GDSII, supply net names, locations and nominal values of voltage sources, and area-based current consumption for P/G nets

Figure 3. Reliability Analysis provided by InVar Prime

InVar Prime enables three types of analysis on a layout database: EM, IR, and thermal. A layout engineer can start using InVar to help plan and route the power nets, VDD and VSS. IR analysis with InVar then provides early feedback on how good the power routing is at that point. This type of early analysis flags potential issues that might otherwise appear only after fabrication and result in silicon re-spins.
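As a toy illustration of what early IR-drop analysis measures (a real tool such as InVar Prime solves the full extracted power-grid network; the single series rail and all numbers here are assumptions): current drawn by downstream taps accumulates through each rail segment's resistance, so the far end of a weak rail sags most.

```python
# Voltage seen at successive taps down a single series VDD rail.
# Segment resistances and tap currents are illustrative assumptions.

def rail_voltages(vdd, segment_res, tap_currents):
    """IR-drop along a rail: segment i carries the sum of downstream currents."""
    voltages, v = [], vdd
    for i, r in enumerate(segment_res):
        v -= r * sum(tap_currents[i:])  # drop across segment i
        voltages.append(v)
    return voltages

taps = rail_voltages(vdd=0.90,
                     segment_res=[0.05] * 4,   # ohms per segment
                     tap_currents=[0.02] * 4)  # amps drawn at each tap
print(["%.4f" % v for v in taps])  # ['0.8960', '0.8930', '0.8910', '0.8900']
```

Even this toy model shows why widening or shortening the rail (or adding straps) before sign-off is cheaper than discovering the sag in silicon.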

The InVar EM/IR engine provides comprehensive analysis and retains full visibility of supply networks from top-level connectors down to each transistor. It takes a unique approach to hierarchical block modeling that reduces runtime and memory while keeping the accuracy of a true flat run. Programmable EM rules enable easy adaptation to new technologies.

The InVar thermal engine scales from single-cell designs to full chip and provides lab-verified accuracy of thermal analysis. Feedback from the thermal engine to the EM/IR engines provides unprecedented overall accuracy, helping designers understand and analyze effects across the design caused by the way thermal 2D/3D profiles affect IR drop and temperature-dependent EM constraints.

The main benefits of InVar Prime are:

●      Accuracy verified in lab and foundries

●      Full chip sign-off with accurate and high performance analysis

●      Analysis available early in the back end design, when more design choices are available

●      Pre-characterization not required for analysis

●      User-friendly environment designed to assist quick turnaround times

●      Effective prevention of power integrity issues

●      Broad range of technology nodes supported

●      Reduces backend verification cycle time

●      Improves probability of first silicon success


Scott Seiden contributed his company’s viewpoint. Sonics has developed a hardware-based dynamic power management solution.

Sonics has developed the industry’s first Energy Processing Unit (EPU), based on the ICE-Grain power architecture (ICE stands for Instant Control of Energy).

Sonics’ ICE-G1 product is a complete EPU enabling rapid design of system-on-chip (SoC) power architecture and implementation and verification of the resulting power management subsystem.

No amount of wasted energy is affordable in today’s electronic products. Designers know that their circuits are idle a significant fraction of time, but have no proven technology that exploits idle moments to save power. An EPU is a hardware subsystem that enables designers to better manage and control circuit idle time. Where the host processor (CPU) optimizes the active moments of the SoC components, the EPU optimizes the idle moments of the SoC components. By construction, an EPU delivers lower power consumption than software-controlled power management. EPUs possess the following characteristics:

  • Fine-grained power partitioning maximizes SoC energy savings opportunities
  • Autonomous hardware-based control provides orders of magnitude faster power up and power down than software-based control through a conventional processor
  • Aggregation of architectural power savings techniques ensures minimum energy consumption
  • Reprogrammable architecture supports optimization under varying operating conditions and enables observation-driven adaptation to the end system.
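A minimal sketch of the kind of per-grain controller an EPU aggregates (the states, events, and sequencing below are invented for illustration; Sonics' actual controller design is not described at this level in the text): a small hardware state machine that walks a power grain through a safe off/on sequence, without waking a host CPU to run software.

```python
# Toy power-grain controller state machine. States/events are assumptions.
TRANSITIONS = {
    ("ON", "idle_detected"): "ISOLATED",    # clamp outputs first
    ("ISOLATED", "state_saved"): "OFF",     # then cut the rail
    ("OFF", "wake_event"): "RESTORING",     # power up on demand
    ("RESTORING", "state_restored"): "ON",  # release isolation last
}

def step(state, event):
    """Advance the grain; illegal events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "ON"
for event in ["idle_detected", "state_saved", "wake_event", "state_restored"]:
    state = step(state, event)
print(state)  # a full power-down/power-up cycle returns the grain to "ON"
```

Because the sequencing is fixed in hardware, a transition takes a handful of clock cycles rather than the interrupt-and-driver round trip a software-managed sequence requires, which is the "orders of magnitude" claim above.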

About ICE-G1

Sonics’ ICE-G1 EPU accelerates the development of power-sensitive SoC designs using configurable IP and an automated methodology, producing EPUs and operating results that improve upon the custom approaches employed by expert power design teams. As the industry’s first licensable EPU, ICE-G1 makes sophisticated power-saving techniques accessible to all SoC designers in a complete subsystem solution. Using ICE-G1, experienced and first-time SoC designers alike can achieve significant power savings in their designs.

Markets for ICE-G1 include:

- Application and Baseband Processors
- Tablets, Notebooks
- IoT
- Datacenters
- EnergyStar compliant systems
- Form factor constrained systems—handheld, battery operated, sealed case/no fan, wearable.

ICE-G1 key product features are:

- Intelligent event and switching controllers (power grain controllers, event matrix, interrupt controller, software register interface): configurable and programmable hardware that dynamically manages both active and leakage power.

- SonicsStudio SoC development environment—graphical user interface (GUI), power grain identification (import IEEE-1801 UPF, import RTL, described directly), power architecture definition, power grain controller configuration (power modes and transition events), RTL and UPF code generation, and automated verification test bench generation tools. A single environment that streamlines the EPU development process from architectural specification to physical implementation.

- Automated SoC power design methodology integrated with standard EDA functional and physical tool flows (top down and bottom up)—abstracts the complete set of power management techniques and automatically generates EPUs to enable architectural exploration and continuous iteration as the SoC design evolves.

- Technical support and consulting services—including training, energy savings assessments, architectural recommendations, and implementation guidance.


As can be seen from these contributions, analysis and management of power is multi-faceted. Dynamic control of power is critical, especially in battery-powered IoT devices, since some of these devices will be in locations that are not readily reachable by an operator.

Mixed Signal Design and Verification for IoT Designs

Tuesday, November 17th, 2015

Mitch Heins, EDS Marketing Director, DSM division of Mentor Graphics

A typical Internet-of-Things (IoT) design consists of several different blocks including one or more sensors, analog signal processing for the sensors, an analog-to-digital converter and a digital interface such as I2C.  System integration and verification is challenging for these types of IoT designs as they typically are a combination of two to three different ICs.  The challenge is exacerbated by the fact that the system covers multiple domains including analog, digital, RF, and mechanical for packaging and different forms of multi-physics type simulations needed to verify the sensors and actuators of an IoT design.  The sensors and actuators are typically created as microelectromechanical systems (MEMS) which have a mechanical aspect and there is a tight interaction between them and the package in which they are encapsulated.

The verification challenge is to have the right form of models available for each stage of the design and verification process, models that work with your EDA vendor’s tool suite. Many high-volume IoT designs now integrate the microcontroller and radio on one die and the analog circuitry and sensors on a second die to reduce cost and footprint.

In many cases the latest IoT designs use onboard analog and digital circuitry with multiple sensors to do data fusion at the sensor, making for “smart sensors”. These ICs are designed from scratch, meaning that the designers must create their own models for both system-level and device-level verification.

Tanner EDA by Mentor Graphics has partnered with SoftMEMS to offer a complete mixed-signal design and verification tool suite for these types of MEMS-centric IC designs. The Tanner Analog and MEMS tool suites offer a complete design-capture, simulation, implementation, and verification flow for MEMS-based IoT designs. The Tanner AMS verification flow supports top-down hierarchical design with the ability to co-simulate multiple levels of design abstraction across analog, digital, and mechanical environments. All design abstractions, simulations, and resulting waveforms are controlled and viewed from a centrally integrated schematic cockpit, enabling easy design trade-offs and verification. Design abstractions can be used to swap in different models for system-level versus device-level verification tasks as different parts of the design are implemented. The system includes support for popular modeling languages such as Verilog-AMS and Verilog-A.

The logic abstraction of the design is tightly tied to the physical implementation of the design through a correct-by-construction design methodology using schematic-driven-layout with interactive DRC checking.  The Tanner/SoftMEMS solution uses the 2D mask layout to automatically create a correct-by-construction 3D model of the MEMS devices using a process technology description file.

Figure 1: Tanner Analog Mixed Signal Verification Cockpit

The 3D model is combined with similar 3D package models and is then used in Finite Element or Boundary Element Analysis engines to debug the functionality and manufacturability of the MEMS devices including mechanical, thermal, acoustic, electrical, electrostatic, magnetic and fluid analysis.

Figure 2: 3D-layout & cross section created by Tanner SOFTMEMS 3D Modeler

A key feature of the design flow is that the solution allows for the automatic creation of a compact Verilog-A model for the MEMS-Package combination from the FEA/BEA analysis that can be used to close the loop in final system-level verification using the same co-simulation cockpit and test benches that were used to start the design.

An additional level of productivity can be gained by using a parameterized library of MEMS building blocks from which the designer can more quickly build complex MEMS devices.

Figure 3: Tanner S-Edit Schematic Capture Mixed Mode Schematic of the IoT System

Each building block has an associated parameterized compact simulation model.  By structurally building the MEMS device from these building blocks, the designer is automatically creating a structural simulation model for the entire device that can be used within the verification cockpit.

Figure 4:Tanner SoftMEMS BasicPro Suite with MEMS Symbol and Simulation Library

DVCon Highlights: Software, Complexity, and Moore’s Law

Thursday, March 12th, 2015

Gabe Moretti, Senior Editor

The first DVCon United States was a success. It was the 27th conference of the series and the first under this name, to distinguish it from DVCon Europe and DVCon India. The latter two held their first events last year and, following their success, will be held this year as well.

Overall attendance, including exhibit-only and technical conference attendees, was 932.

If we count exhibitors’ personnel, as DAC does, the total number of attendees is 1,213. The conference attracted 36 exhibitors, including 10 exhibiting for the first time and 6 headquartered outside the US. The technical presentations were very well attended, almost always with standing room only, averaging around 175 attendees per session; one cannot fit more into the conference rooms the DoubleTree has. The other thing I observed was that there was almost no attendee traffic during the presentations. People took a seat and stayed for the entire presentation; almost no one came in, listened for a few minutes, and then left. In my experience this is not typical, and it shows that DVCon met its goal of presenting topics of contemporary importance.

Process Technology and Software Growth

The keynote address this year was delivered by Aart de Geus, chairman and co-CEO of Synopsys. His speeches are always both unique and quite interesting. This year he chose as his topic “Smart Design from Silicon to Software”. As one could have expected, Aart’s major points had to do with process technology, something he is extremely knowledgeable about. He thinks that Moore’s law, as an instrument for predicting semiconductor process advances, has about ten years of usable life. After that the industry will have to find another tool, assuming one will be required, I would add. Since, as Aart correctly points out, we are still using a 193 nm crayon to implement 10 nm features, progress is clearly impaired. Personally, I do not understand the reason for continuing to use ultraviolet light in lithography, aside from the huge cost of moving to x-ray lithography. The industry has resisted the move for so long that I think even x-ray now has too short a life span to justify the investment. So, before the ten years are up, we might see some very unusual and creative approaches to building features on some new material. After all, whatever we use will have to understand atoms and their structure.

For now, says Aart, most system companies are “camping” at 28 nm while evaluating “the big leap” to more advanced lithography processes. I think it will be a long time, if ever, before 10 nm processes become popular. Obviously the 28 nm process supports the area and power requirements of the vast majority of advanced consumer products. Aart did not say it, but it is a fact that a very large number of wafers are still produced using a 90 nm process. Dr. de Geus pointed out that the major factor in determining investments in product development is now economics, not available EDA technology. Of course one can observe that economics is only a second-order decision-making tool, since economics is determined in part by complexity. But Aart stopped at economics, a point he has made in previous presentations over the last twelve months. His point is well taken, since ROI is greatly dependent on hitting the market window.

A very interesting point made during the presentation is that the length of development schedules has not changed in the last ten years, content has.  Development of proprietary hardware has gotten shorter, thanks to improved EDA tools, but IP integration and software integration and co-verification has used up all the time savings in the schedule.

What Dr. de Geus’s slides show is that software is growing, and will grow, at about ten times the rate of hardware. Thus investment in software tools by EDA companies makes sense now. Approximately ten years ago, during a DATE conference in Paris, I had asked Aart about the opportunity for EDA companies, Synopsys in particular, to invest in software tools. At that time Aart was emphatic that EDA companies did not belong in the software space. Compilers are either cheap or free, he told me, and debuggers do not offer the right economic value to be of interest. Well, without much fanfare about the topic of “investment in software”, Synopsys is now in the software business in a big way. Virtual prototyping and software co-verification are market segments Synopsys is very active in, and making a nice profit in, I may add. So, whether it is a matter of definition or of new market availability, EDA companies are in the software business.

When Aart talks I always get reasons to think.  Here are my conclusions.  On the manufacturing side, we are tinkering with what we have had for years, afraid to make the leap to a more suitable technology.  From the software side, we are just as conservative.

That software would grow at a much faster pace than hardware is not news to me. In all the years that I worked as a software developer or as a manager of software development, I always found that software grows to consume the available hardware environment and is the major driver of hardware development, whether it is memory size and management or speed of execution. My conclusion is that nothing is new: the software industry has never put efficiency at the top of its goals; the aim is always to make the programmer’s life easier. Higher-level languages are more powerful because programmers can implement functions with minimal effort, not because the underlying hardware is used optimally. And the result is that, when it comes to software quality and security, users are playing too large a part as the verification team.

Art or Science

The Wednesday proceedings were opened early in the morning by a panel with the provocative title of Art or Science.  The panelists were Janick Bergeron from Synopsys, Harry Foster from Mentor, JL Gray from Cadence, Ken Knowlson from Intel, and Bernard Murphy from Atrenta.  The purpose of the panel was to figure out whether a developer is better served by using his or her own creativity in developing either hardware or software, or follow a defined and “proven” methodology without deviation.

After some introductory remarks, which seemed to show mild support for the Science approach, I pointed out that the title of the panel was wrong. It should have been titled Art and Science, since both must play a part in any good development process. That changed the nature of the panel. To begin with, there had to be a definition of what art and science meant. Here is my definition: art is a problem-specific solution achieved through creativity; science is the use of a repeatable recipe, encompassing both tools and methods, that ensures validated quality of results.

Harry Foster pointed out that it is difficult to teach creativity. This is true, but not impossible, I maintain, especially if we change our approach to education. We must move away from teaching the ability to repeat memorized answers that are easy to grade on tests, and switch to problem solving, a system better for the student but more difficult to grade. Our present educational system is focused on teachers, not students.

The panel spent a significant amount of time discussing the issue of hardware/software co-verification.  We really do not have a complete scientific approach, but we are also limited by the schedule in using creative solutions that themselves require verification.

I really liked what Ken Knowlson said at one point: there is a significant difference between a complicated and a complex problem. A complicated problem is understood but difficult to solve, while a complex problem is something we do not understand a priori. This insight may be difficult to grasp without an example, so here is mine: relativity is complicated; dark matter is complex.


Discussing all of the technical sessions would take too long and would interest only portions of the readership, so I leave such matters to those who have access to the conference proceedings. But I think that both the keynote speech and the panel provided enough understanding, as well as food for thought, to amply justify attending the conference. Too often I have heard that DVCon is a verification conference: it is not just for verification, as both the keynote and the panel prove. It is for all those who care about development and verification; in short, for those who know that a well-developed product is easier to verify, manufacture, and maintain than otherwise. So whether in India, Europe, or the US, see you at the next DVCon.

A Prototyping with FPGA Approach

Thursday, February 12th, 2015

Frank Schirrmeister, Group Director for Product Marketing of the System Development Suite, Cadence.

In general, the industry is experiencing the need for what has come to be called the “shift left” in the design flow, as shown in Figure 1. Complex hardware stacks, starting from IP assembled into sub-systems, assembled into Systems on Chips (SoCs), and eventually integrated into systems, are combined with complex software stacks, integrating bare-metal software and drivers with operating systems, middleware, and eventually the end applications that determine the user experience.

From a chip perspective, about 60% of the way into a project three main issues have to be resolved. First, the error rate in the hardware has to be low enough that design teams have the confidence to commit to a tape-out. Second, the chip has to be validated well enough within its environment to be sure that it works within the system. Third, and perhaps most challenging, significant portions of the software have to be brought up to gain confidence that software/hardware interactions work correctly. In short, hardware verification, system validation, and software development all require a “shift left”, so that these tasks can happen as early as possible.

Figure 1: A Hardware/Software Development Flow.

Prototyping today happens at two abstraction levels – using transaction-level models (TLM) and register transfer models (RTL) – using five basic engines.

  • Virtual prototyping based on TLM models can start from specifications earliest in the design flow and works well for software development, but it falls short when more detailed hardware models are required, and it is hampered by limited model availability and the cost and effort of model creation.
  • RTL simulation (which today is usually integrated with SystemC-based capabilities for TLM execution) allows detailed hardware execution but is limited in speed to the low kHz or even Hz range, and as such is not suitable for software execution that may require billions of cycles just to boot an operating system. Hardware-assisted techniques come to the rescue.
  • Emulation is used for both hardware verification and lower-level software development, as speeds can reach the MHz domain. Emulation is separated into processor-based and FPGA-based emulation, the former allowing excellent at-speed debug and fast bring-up times because long FPGA routing times are avoided, the latter excelling at execution speed once the design has been brought up.
  • FPGA-based prototyping is typically limited in capacity and can take months to bring up, due to the modifications required to the design itself and the verification those modifications then require. The benefit, once brought up, is a speed in the tens-of-MHz range, sufficient for software development.
  • The actual prototype silicon is the fifth engine used for bring-up. Post-silicon debug and test techniques are finding their way into pre-silicon given the ongoing shift left. Using software for verification holds the promise of better re-use of verification across the five engines, all the way into post-silicon.
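A back-of-the-envelope view of why the engine speeds above matter so much (the billion-cycle boot and the representative speeds are assumed round numbers, not vendor specifications):

```python
# Wall-clock time to run ~1 billion cycles (roughly an OS boot) on each
# engine, using representative speeds from the ranges cited above.
BOOT_CYCLES = 1.0e9  # assumed round number

engine_speed_hz = {
    "RTL simulation": 1.0e3,  # low-kHz range
    "emulation": 1.0e6,       # MHz domain
    "FPGA prototype": 3.0e7,  # tens of MHz
}

for engine, hz in engine_speed_hz.items():
    seconds = BOOT_CYCLES / hz
    print(f"{engine:15s}: {seconds / 3600:8.2f} hours")
```

At kHz speeds the boot alone takes weeks of wall-clock time, which is why software bring-up gravitates to emulation and FPGA prototypes.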

Advantages Of Using FPGAs For ASIC Prototyping

FPGA providers have been pursuing aggressive roadmaps. Single FPGA devices now nominally hold up to 20 million ASIC gates; at utilization rates of 60%, eight-FPGA systems promise to hold almost 100 million gates, which makes them large enough for a fair share of today’s design starts. The key advantage of FPGA-based systems is the speed that can be achieved, and the main volume of FPGA-based prototypes today is shipped to enable software development and sub-system validation. They are also relatively portable, so we have seen customers use FPGA-based prototypes successfully to deliver pre-silicon representations of the design to their own customers for demonstration and software development purposes.
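The capacity claim is simple arithmetic, sketched below using the figures quoted in the text:

```python
# Eight FPGAs x 20 M nominal ASIC gates x 60% achievable utilization.
per_fpga_nominal_gates = 20e6
utilization = 0.60
fpga_count = 8

usable_gates = fpga_count * per_fpga_nominal_gates * utilization
print(f"{usable_gates / 1e6:.0f} million usable ASIC gates")  # 96: "almost 100 MG"
```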

Factors That May Limit The Growth Of The Technique

There certainly is a fair amount of growth for FPGA-based prototyping, but the challenge of long bring-up times often defeats the purpose of early availability. For complex designs requiring careful partitioning and timing optimization, we have seen cases in which the FPGA-based prototype did not become available until silicon was back. Another limitation is that debug insight into the hardware is very limited compared to simulation and processor-based emulation. While hardware probes can be inserted, they reduce the speed of execution because of data logging. Consequently, FPGA-based prototypes find most adoption in the later stages of projects, when the RTL has become stable and the focus can shift to software development.

The Future For Such Techniques

All prototyping techniques are increasingly used in combination. Emulation and RTL simulation are combined to achieve “simulation acceleration”. Emulation and transaction-level models with Fast Models from ARM are combined to accelerate operating system bring-up and software-driven testing. Emulation and FPGA-based prototyping are combined to pair the fast bring-up of new portions of the design in emulation with the fast execution of stable portions in FPGA-based prototyping. As in the recent introduction of the Cadence Protium FPGA-based prototyping platform, processor-based emulation and FPGA-based prototyping can share the same front end to significantly accelerate FPGA-based prototyping bring-up. At this point all major EDA vendors have announced a suite of connected engines (Cadence in May 2011, Mentor in March 2014, and Synopsys in September 2014). It will be interesting to see how the continuum of engines grows further together to enable the most efficient prototyping at different stages of a development project.

New Markets for EDA

Wednesday, December 3rd, 2014

Brian Derrick, Vice President Corporate Marketing, Mentor Graphics

EDA grows by solving new problems as discontinuities occur and design cannot proceed as usual.  Often these are incremental, but occasionally problems or transitions occur that create new markets for our industry.

Discontinuities in Traditional EDA

One of the most pressing challenges today is the escalating complexity of hardware verification and the need to verify software earlier in the design cycle. Emulation is rapidly becoming a mainstream methodology. As part of an integrated enterprise verification solution, it allows designers to perform pre-silicon testing and debug at accelerated hardware speeds, using real-world stimulus within a unified verification environment that seamlessly moves data and transactors between simulation and emulation.   Enterprise verification utilizing emulation delivers performance and productivity improvements ranging from 400X to 10,000X.

Performance alone did not enable emulation to become mainstream.  There has been a transformation from project-bound engineering lab instrument to datacenter-hosted global resource.  This transformation begins by eliminating the In-Circuit Emulation (ICE) tangle of cables, speed adaptors and physical devices, replacing them with virtual devices.  The latest generation of emulators can be installed in most standard data centers, making emulators similar to any other server installation.

What’s equally exciting is the number of software engineers who have moved their embedded software development and debug to emulators. With the accelerated throughput, developers feel as though they are debugging embedded software on the actual silicon product.  All of this explains why the emulation market has doubled in the past five years, with a three year compounded annual growth rate of 23%.

Another discontinuity in traditional EDA is physical testing at 20 nm and below. As FinFET technology becomes pervasive at these nodes, there is strong potential for increased defects within the standard cells. Transistor-level automatic test pattern generation targets otherwise-undetected internal faults based on the actual cell layout. It improves the quality of wafer-level test, reducing the need for system-level functional test. This “cell-aware” capability has been well qualified on FinFET parts and will become pervasive in leading-edge physical design verification, keeping the Design for Test market on an accelerated growth rate that is 4X the overall EDA market growth.

EDA Growth Opportunities in New Markets

Other important growth opportunities for our industry can be found in markets that are in transition, with emerging requirements for automated design.  There is no doubt that automotive electronics is one of the most promising segments.  As the automotive industry transitions from mechanical to electronic differentiation of their products, the need for electronic design automation is accelerating.

Automobiles are complex electronic systems: leading-edge vehicles have up to 150 electronic networks, 200 microprocessors, nearly 100 electric motors, and hundreds of LEDs, all connected by nearly 3 miles of wiring. The embedded software responsible for managing all of this can reach upwards of 65 million lines of code (Figure 1). Automotive ICs are already a $20 billion market and are the fastest growing segment according to IC Insights. Electronics now account for 35-40% of a car’s cost, a number expected to increase to 50% in the future.

Figure 1: Complexity is driving the automation of automotive electronic design

Automotive suppliers are adopting EDA solutions to address the unique electronic systems and software challenges in this rapidly developing segment. Simple wiring tools are being replaced with complete enterprise solutions spanning concept, design, manufacturing, costing, and after-sales service. These tools and flows are enabling the industry to handle the requirements of a highly regulated environment while increasing quality, minimizing costs, reducing weight, and managing power across literally thousands of options for an OEM platform.

The rapid expansion of electronic control units and networks in nearly all new automotive platforms has accelerated the demand for AUTOSAR development tools and solutions. AUTOSAR is an open, standardized automotive software architecture, jointly developed by automobile manufacturers, suppliers, and tool developers. Add to that the requirements of safety-critical embedded software standards such as ISO 26262, plus regulations for fuel efficiency and environmental emissions, and the opportunity for design automation is just beginning.

Driver experience is now a crucial differentiator, with in-vehicle infotainment (IVI), advanced driver assistance systems (ADAS), and driver information becoming the major selling points for new automobiles. Active noise cancellation, high-speed HD video, smart mirrors, head-up displays, proximity information, over-the-air updates, and animated graphics are just a few of the capabilities being deployed and developed for automobiles today.

There is strong demand for EDA solutions that combine system design and software development for heterogeneous systems. Embedded Linux, AUTOSAR, and real-time operating systems are deployed across diverse multi-core SoCs in a growing number of in-vehicle networks.

Many of the EDA solutions developed for the automotive industry are being adopted by other markets with similar challenges. Electronic systems interconnect tools are enabling the optimization of cable/harness systems in aerospace, defense, heavy equipment, off-road, agriculture, and other transportation-related markets. As the automotive industry develops and deploys driver convenience and information systems, the resulting economies of scale will make those systems affordable for many of these adjacent markets.

New markets for EDA are emerging as the complexity of SoCs increases and the world we interact with becomes more connected. Solving these new problems and applying EDA solutions to markets in transition, such as automotive, aerospace, the broader transportation industry, and the Internet of Things, will fuel the growth of the design automation industry into the future.
