
The EDA Industry Macro Projections for 2016

Monday, January 25th, 2016

Gabe Moretti, Senior Editor

How the EDA industry will fare in 2016 will be influenced by the worldwide financial climate. Instability in oil prices, the Middle East wars and the unpredictability of the Chinese market will indirectly influence the EDA industry.  EDA has seen significant growth since 1996, but the growth is indirectly influenced by the overall health of the financial community (see Figure 1).

Figure 1. EDA Quarterly Revenue Report from EDA Consortium

China has been a growing market for EDA tools, and Chinese consumers have purchased a significant number of semiconductor-based products in the recent past.  Demand for consumer products is slowing, however, and China’s financial health is being questioned.  The result is that demand for EDA tools may be lower than in 2015.  I have received so many forecasts for 2016 that I have decided to break the subject into two articles.  The first article covers the macro aspects, while the second will focus more on specific tools and market segments.

Economy and Technology

EDA itself is changing.  Here is what Bob Smith, executive director of the EDA Consortium, has to say:

“Cooperation and competition will be the watchwords for 2016 in our industry. The ecosystem and all the players are responsible for driving designs into the semiconductor manufacturing ecosystem. Success is highly dependent on traditional EDA, but we are realizing that there are many other critical components, including semiconductor IP, embedded software and advanced packaging such as 3D-IC. In other words, our industry is a “design ecosystem” feeding the manufacturing sector. The various players in our ecosystem are realizing that we can and should work together to increase the collective growth of our industry. Expect to see industry organizations serving as the intermediaries to bring these various constituents together.”

Bob Smith’s words acknowledge that the term “system” has taken on a new meaning in EDA.  We are no longer talking about developing a hardware system, or even a hardware/software system.  A system today includes digital and analog hardware, software at both the system and application level, MEMS, third-party IP, and connectivity and co-execution with other systems.  EDA vendors are morphing in order to accommodate these new requirements.  Change is difficult because it brings errors as well as successes, and 2016 will be a year of changes.

Lucio Lanza, managing director of Lanza techVentures and a recipient of the Phil Kaufman award, describes it this way:

“We’ve gone from computers talking to each other to an era of PCs connecting people using PCs. Today, the connections of people and devices seem irrelevant. As we move to the Internet of Things, things will get connected to other things and won’t go through people. In fact, I call it the World of Things not IoT and the implications are vast for EDA, the semiconductor industry and society. The EDA community has been the enabler for this connected phenomenon. We now have a rare opportunity to be more creative in our thinking about where the technology is going and how we can assist in getting there in a positive and meaningful way.”

Ranjit Adhikary, director of Marketing at Cliosoft, acknowledges the growing need for tool integration in his remarks:

“The world is currently undergoing a quiet revolution akin to the dot com boom in the late 1990s. There has been a growing effort to slowly but surely provide connectivity between various physical objects and enable them to share and exchange data and manage the devices using smartphones. The labors of these efforts have started to bear fruit and we can see that in the automotive and consumables industries. What this implies from a semiconductor standpoint is that the number of shipments of analog and RF ICs will grow at a remarkable pace and there will be increased efforts from design companies to have digital, analog and RF components in the same SoC. From an EDA standpoint, different players will also collaborate to share the same databases. An example of this would be Keysight Technologies and Cadence Design Systems on OpenAccess libraries. Design companies will seek to improve the design methodologies and increase the use of IPs to ensure a faster turnaround time for SoCs. From an infrastructure standpoint, a growing number of design companies will invest more in design data and IP management to ensure better design collaboration between design teams located at geographically dispersed locations as well as to maximize their resources.”

Michiel Ligthart, president and chief operating officer at Verific Design Automation points to the need to integrate tools from various sources to achieve the most effective design flow:

“One of the more interesting trends Verific has observed over the last five years is the differentiation strategy adopted by a variety of large and small CAD departments. Single-vendor tool flows do not meet all requirements. Instead, IDMs outline their needs and devise their own design and verification flow to improve over their competition. That trend will only become more pronounced in 2016.”

New and Expanding Markets

The focus on IoT applications has opened up new markets as well as expanded existing ones.  For example, the automotive market is looking at new functionality in both in-car and car-to-car applications.

Raik Brinkmann, president and chief executive officer at OneSpin Solutions wrote:

“OneSpin Solutions has witnessed the push toward automotive safety for more than two years. Demand will further increase as designers learn how to apply the ISO26262 standard. I’m not sure that security will come to the forefront in 2016 because there are no standards as yet and ad hoc approaches will dominate. However, the pressure for security standards will be high, just as ISO26262 was for automotive.”

Michael Buehler-Garcia, Senior Director of Marketing for Mentor Graphics Calibre Design Solutions, notes that many established process nodes, often thought of as obsolete, will instead see increased volume due to the technologies required to implement IoT architectures.

“As cutting-edge process nodes entail ever higher non-recurring engineering (NRE) costs, ‘More than Moore’ technologies are moving from the “press release” stage to broader adoption. One consequence of this adoption has been a renewed interest in more established processes. Historical older process node users, such as analog design, RFCMOS, and microelectromechanical systems (MEMS), are now being joined by silicon photonics, standalone radios, and standalone memory controllers as part of a 3D-IC implementation. In addition, the Internet of Things (IoT) functionality we crave is being driven by a “milli-cents for nano-acres of silicon,” which aligns with the increase in designs targeted for established nodes (130 nm and older). New physical verification techniques developed for advanced nodes can simplify life for design companies working at established nodes by reducing the dependency on human intervention. In 2016, we expect to see more adoption of advanced software solutions such as reliability checking, pattern matching, “smart” fill, advanced extraction solutions, “chip out” package assembly verification, and waiver processing to help IC designers implement more complex designs on established nodes. We also foresee this renewed interest in established nodes driving tighter capacity access, which in turn will drive increased use of design optimization techniques, such as DFM scoring, filling analysis, and critical area analysis, to help maximize the robustness of designs in established nodes.”

Warren Kurisu, Director of Product Management in the Mentor Graphics Embedded Systems Division, points to wearables, another sector within the IoT market, as an opportunity for expansion.

“We are seeing multiple trends. Wearables are increasing in functionality and complexity enabled by the availability of advanced low-power heterogeneous multicore architectures and the availability of power management tools. The IoT continues to gain momentum as we are now seeing a heavier demand for intelligent, customizable IoT gateways. Further, the emergence of IoT 2.0 has placed a new emphasis on end-to-end security from the cloud and gateway right down to the edge device.”

Power management is one of the areas that has seen significant focus from EDA vendors, but not much has been said about battery technology.  Shreefal Mehta, president and CEO of Paper Battery Company, offered the following observations.

“The year 2016 will be the year we see tremendous advances in energy storage and management.   The gap between the rate of growth of our electronic devices and the battery energy that fuels them will increase to a tipping point.   On average, battery energy density has only grown 12% while electronic capabilities have more than doubled annually.  The need for increased energy and power density will be a major trend in 2016.  More energy-efficient processors and sensors will be deployed into the market, requiring smaller, safer, longer-lasting and higher-performing energy sources. Today’s batteries won’t cut it.

Wireless devices and sensors that need pulses of peak power to transmit, compute and/or perform analog functions will continue to create a tension between the need for peak power pulses and long energy cycles. For example, cell phone transmission and Bluetooth peripherals are, as a whole, low power but the peak power requirements are several orders of magnitude greater than the average power consumption.  Hence, new, hybrid power solutions will begin to emerge, especially where energy-efficient delivery is needed with peak power and as the ratio of average to peak grows significantly.

Traditional batteries will continue to improve in offering higher energy at lower prices, but current lithium ion will reach a limit in the balance between energy and power in a single cell with new materials and nanostructure electrodes being needed to provide high power and energy.  This situation is aggravated by the push towards physically smaller form factors where energy and power densities diverge significantly. Current efforts in various companies and universities are promising but will take a few more years to bring to market.

The Supercapacitor market is poised for growth in 2016 with an expected CAGR of 19% through 2020.  Between the need for more efficient form factors, high energy density and peak power performance, a new form of supercapacitors will power the ever increasing demands of portable electronics. The Hybrid supercapacitor is the bridge between the high energy batteries and high power supercapacitors. Because these devices are higher energy than traditional supercapacitors and higher power than batteries they may either be used in conjunction with or completely replace battery systems. Due to the way we are using our smartphones, supercapacitors will find a good use model there as well as applications ranging from transportation to enterprise storage.

Memory in smartphones and tablets containing solid state drives (SSDs) will become more and more accustomed to architectures which manage non-volatile cache in a manner which preserves content in the event of power failure. These devices will use large swaths of video and the media data will be stored on RAM (backed with FLASH) which can allow frequent overwrites in these mobile devices without the wear-out degradation that would significantly reduce the life of the FLASH memory if used for all storage. To meet the data integrity concerns of this shadowed memory, supercapacitors will take a prominent role in supplying bridge power in the event of an energy-depleted battery, thereby adding significant value and performance to mobile entertainment and computing devices.

Finally, safety issues with lithium-ion batteries have just become front and center and will continue to plague the industry and manufacturing environments.  Flaming hoverboards and shipment and air-travel restrictions on lithium batteries render the future of personal battery power questionable. Improved testing and more regulations will come to pass; however, because of the widespread use of battery-powered devices, safety will become a key factor.  What we will see in 2016 is the emergence of the hybrid supercapacitor, which offers a high-capacity alternative to lithium batteries in terms of power efficiency. This alternative can operate over a wide temperature range, has a long cycle life and – most importantly – is safe.”
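Taken at face value, the growth rates Mehta cites compound into a large gap surprisingly quickly. The short Python sketch below works through that arithmetic; the 12% and annual-doubling figures, and the 19% supercapacitor CAGR, come from the quote above, while the five-year horizon and the code itself are purely illustrative.

```python
# Illustrative compounding of the gap Mehta describes: electronic capability
# roughly doubling each year versus ~12% annual growth in battery energy density.
years = 5  # horizon chosen only for illustration

capability = 1.0
energy_density = 1.0
for _ in range(years):
    capability *= 2.0        # "more than doubled annually"
    energy_density *= 1.12   # "only grown 12%"

gap = capability / energy_density
print(f"After {years} years, capability grows {capability:.0f}x, "
      f"energy density {energy_density:.2f}x (gap of roughly {gap:.0f}x).")

# The quoted supercapacitor forecast: 19% CAGR from 2016 through 2020.
cagr_multiplier = 1.19 ** 4  # four compounding periods, 2016 -> 2020
print(f"A 19% CAGR over 2016-2020 implies roughly a {cagr_multiplier:.2f}x market.")
```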

Greg Schmergel, CEO, Founder and President of memory maker Nantero, Inc., points out that just as new power storage devices will open new opportunities, so will new memory devices.

“With the traditional memories, DRAM and flash, nearing the end of the scaling roadmap, new memories will emerge and change memory from a standard commodity to a potentially powerful competitive advantage.  As an example, NRAM products such as multi-GB high-speed DDR4-compatible nonvolatile standalone memories are already being designed, giving new options to designers who can take advantage of the combination of nonvolatility, high speed, high density and low power.  The emergence of next-generation nonvolatile memory which is faster than flash will enable new and creative systems architectures to be created which will provide substantial customer value.”

Jin Zhang, Vice President of Marketing and Customer Relations at Oski Technology is of the opinion that the formal methods sector is an excellent prospect to increase the EDA market.

“Formal verification adoption is growing rapidly worldwide and that will continue into 2016. Not surprisingly, the U.S. market leads the way, with China following a close second. Usage is especially apparent in China where a heavy investment has been made in the semiconductor industry, particularly in CPU designs. Many companies are starting to build internal formal groups. Chinese project teams are discovering the benefits of improving design qualities using Formal Sign-off Methodology.”

These market forces are fueling the growth of specific design areas that are supported by EDA tools.  In the companion article some of these areas will be discussed.

Internet of Things (IoT) and EDA

Tuesday, April 8th, 2014

Gabe Moretti, Contributing Editor

A number of companies contributed to this article.  In particular: Apache Design Solutions, ARM, Atrenta, Breker Verification Systems, Cadence, Cliosoft, Dassault Systèmes, Mentor Graphics, OneSpin Solutions, Oski Technologies, and Uniquify.

In his keynote speech at the recent CDNLive Silicon Valley 2014 conference, Lip-Bu Tan, Cadence CEO, cited mobility, cloud computing, and Internet of Things as three key growth drivers for the semiconductor industry. He cited industry studies that predict 50 billion devices by 2020.  Of those three, IoT is the latest area attracting much conversation.  Is EDA ready to support its growth?

The consensus is that in many respects EDA is ready to provide the tools required for IoT implementation.  David Flynn, an ARM Fellow, put it best.  “For the most part, we believe EDA is ready for IoT.  Products for IoT are typically not designed on ‘bleeding-edge’ technology nodes, so implementation can benefit from all the years of development of multi-voltage design techniques applied to mature semiconductor processes.”

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systèmes, observed that, conversely, companies that will be designing devices for the IoT may not be ready.  “Traditional EDA is certainly ready for the core design, verification, and implementation of the devices that will connect to the IoT.  Many of the devices that will connect to the IoT will not be the typical designs that are pushing Moore’s Law.  Many of the devices may be smaller, lower performance devices that do not necessarily need the latest and greatest process technology.  To be cost effective at producing these devices, companies will rely heavily on IP in order to assemble devices quickly in order to meet consumer and market demands.  In fact, we may begin to see companies that traditionally have not been silicon developers getting into chip design. We will see an explosive growth in the IP ecosystem of companies producing IP to support these new devices.”

Vic Kulkarni, Senior VP and GM of Apache Design, Inc., put it as follows: “There is nothing ‘new or different’ about the functionality of EDA tools for IoT applications, and EDA tool providers have to think of this market opportunity from the perspective of mainstream users, newer licensing and pricing models for the ‘mass market’, i.e., low-cost and low-touch technical support, data and IP security and the overall ROI.”

But IoT also requires new approaches to design and offers new challenges.  David Kelf, VP of Marketing at OneSpin Solutions, provided a picture of what a generalized IoT component architecture is likely to be.

Figure 1: Generalized IoT component architecture (courtesy of OneSpin Solutions)

He went on to state: “The included graphic shows an idealized projection of the main components in a general purpose IoT platform. At a minimum, this platform will include several analog blocks, a processor able to handle protocol stacks for wireless communication and the Internet Protocol (IP). It will need some sensor-required processing, an extremely effective power control solution, and possibly, another capability such as GPS or RFID and even a Long Term Evolution (LTE) 4G Baseband.”

Jin Zhang, Senior Director of Marketing at Oski Technologies, observed that “If we parse the definition of IoT, we can identify three key characteristics:

  1. IoT can sense and gather data automatically from the environment
  2. IoT can interact and communicate among themselves and the environment
  3. IoT can process all the data and perform the right action with or without human interaction

These imply that sensors of all kinds for temperature, light, movement and human vitals, fast, stable and extensive communication networks, light-speed processing power and massive data storage devices and centers will become the backbone of this infrastructure.

The realization of IoT relies on the semiconductor industry to create even larger and more complex SoC or Network-on-Chip devices to support all the capabilities. This, in turn, will drive the improvement and development of EDA tools to support the creation, verification and manufacturing of these devices, especially verification where too much time is spent on debugging the design.”
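Zhang’s three characteristics map naturally onto a sense–communicate–act loop running on every node. The Python sketch below is a minimal illustration of that loop only; the sensor, threshold, and publish routine are hypothetical placeholders and do not describe any product mentioned in this article.

```python
import random
import time

THRESHOLD_C = 30.0  # hypothetical trigger temperature

def read_temperature():
    """Stand-in for characteristic 1: sense and gather data from the environment."""
    return 20.0 + random.random() * 15.0

def publish(topic, value):
    """Stand-in for characteristic 2: communicate with other things and gateways."""
    print(f"publish {topic}: {value:.1f}")

def act_on(value):
    """Stand-in for characteristic 3: process the data and act without human interaction."""
    if value > THRESHOLD_C:
        print("actuate: turn cooling on")

if __name__ == "__main__":
    for _ in range(3):          # a few iterations instead of an endless loop
        reading = read_temperature()
        publish("node42/temperature", reading)
        act_on(reading)
        time.sleep(0.1)
```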

Power Management

IoT will require advanced power management, and EDA companies are addressing the problem.  Rob Aitken, also an ARM Fellow, said: “We see an opportunity for dedicated flows around near-threshold and low-voltage operation, especially in clock tree synthesis and hold time measurement. There’s also an opportunity for per-chip voltage delivery solutions that determine on a chip-by-chip basis what the ideal operating voltage should be and enable that voltage to be delivered via a regulator, ideally on-chip but possibly off-chip as well. The key is that existing EDA solutions can cope, but better designs can be obtained with improved tools.”

Kamran Shah, Director of Marketing for Embedded Software at Mentor Graphics, noted: “SoC suppliers are investing heavily in introducing power saving features including Dynamic Voltage Frequency Scaling (DVFS), hibernate power saving modes, and peripheral clock gating techniques. Early in the design phase, it’s now possible to use Transaction Level Models (TLM) tools such as Mentor Graphics Vista to iteratively evaluate the impact of hardware and software partitioning, bus implementations, memory control management, and hardware accelerators in order to optimize for power consumption”

Figure 2: IoT Power Analysis (courtesy of Mentor Graphics)
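As a rough illustration of why evaluating DVFS early pays off, dynamic power scales approximately with C·V²·f, so lowering voltage and frequency together compounds the savings. The Python sketch below applies that textbook relation to made-up operating points; it is not output from Vista or any other Mentor Graphics tool.

```python
# Rough DVFS comparison using the textbook dynamic-power relation P ~ C * V^2 * f.
# The capacitance, voltages and frequencies below are illustrative assumptions only.

C_EFF = 100e-12  # effective switched capacitance in farads (assumed)

operating_points = {
    "performance": {"V": 1.10, "f": 1.2e9},
    "balanced":    {"V": 0.90, "f": 0.8e9},
    "low_power":   {"V": 0.75, "f": 0.4e9},
}

baseline = None
for name, op in operating_points.items():
    p_dyn = C_EFF * op["V"] ** 2 * op["f"]   # watts
    baseline = baseline if baseline is not None else p_dyn
    print(f"{name:12s} V={op['V']:.2f} V  f={op['f'] / 1e9:.1f} GHz  "
          f"P_dyn={p_dyn * 1e3:6.1f} mW  ({p_dyn / baseline:.2f}x of the performance point)")
```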

Bernard Murphy, Chief Technology Officer at Atrenta, pointed out that: “Getting to ultra-low power is going to require a lot of dark silicon, and that will require careful scenario modeling to know when functions can be turned off. I think this is going to drive a need for software-based system power modeling, whether in virtual models, TLM (transaction-level modeling), or emulation. Optimization will also create demand for power sensitivity analysis – which signals / registers most affect power and when. Squeezing out picoAmps will become as common as squeezing out microns, which will stimulate further automation to optimize register and memory gating.”
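Murphy’s point about scenario modeling and power sensitivity can be illustrated with a toy calculation: weight each block’s active power by how often a usage scenario keeps it awake, add leakage, and rank which block moves the total most. The block names and numbers in the Python sketch below are invented for illustration and do not represent Atrenta’s methodology.

```python
# Toy scenario-based power model: average power = sum over blocks of
# (duty cycle in this scenario * active power) + always-on leakage.
# All block names and numbers are illustrative assumptions.

blocks = {                 # block: (active power in mW, leakage in mW)
    "cpu":    (120.0, 3.0),
    "radio":  (80.0,  1.0),
    "sensor": (5.0,   0.2),
    "dsp":    (60.0,  2.0),
}

scenario = {               # fraction of time each block is awake ("dark silicon" otherwise)
    "cpu": 0.10, "radio": 0.02, "sensor": 0.50, "dsp": 0.05,
}

def average_power(duty):
    """Duty-cycle-weighted active power plus always-on leakage, in mW."""
    return sum(duty[b] * active + leak for b, (active, leak) in blocks.items())

total = average_power(scenario)
print(f"scenario average power: {total:.2f} mW")

# Crude sensitivity: how much the total drops if a block's activity is fully
# gated off in this scenario (its leakage is assumed to remain).
for b in blocks:
    gated = dict(scenario, **{b: 0.0})
    print(f"gating {b:6s} saves {total - average_power(gated):6.2f} mW")
```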

Verification and IP

Verifying either one component or a subset of connected components will be more challenging.  Components in general will have to be designed so that they can be “fixed” remotely, which means either fixing a real bug or downloading an upgrade.  Intel is already marketing such a solution, which is not restricted to IoT applications.  Also, networks will be heterogeneous by design, further complicating verification.

Ranjit Adhikary, Director of Marketing at Cliosoft, noted that “From a SoC designer’s perspective, “Internet of Things” means an increase in configurable mixed-signal designs. Since devices now must have a larger life span, they will need to have a software component associated with them that could be upgraded as the need arises over their life spans. Designs created will have a blend of analog, digital and RF components and designers will use tools from different EDA companies to develop different components of the design. The design flow will increasingly become more complex and the handshake between the digital and analog designers in the course of creating mixed-signal designs has to become better. The emphasis on mixed-signal verification will only increase to ensure all corner cases are caught early on in the design cycle.”

Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, has a similar perspective, but he is more pessimistic.  He noted that “Many IoT nodes will be located in hard-to-reach places, so replacement or repair will be highly unlikely. Some nodes will support software updates via the wireless network, but this is a risky proposition since there’s not much recourse if something goes wrong. A better approach is a bulletproof SoC whose hardware, software, and combination of the two have been thoroughly verified. This means that the SoC verification team must anticipate, and test for, every possible user scenario that could occur once the node is in operation.”

One solution, according to Mr. Anderson, is “automatic generation of C test cases from graph-based scenario models that capture the design intent and the verification space. These test cases are multi-threaded and multi-processor, running realistic user scenarios based on the functions that will be provided by the IoT nodes containing the SoC. These test cases communicate and synchronize with the UVM verification components (UVCs) in the testbench when data must be sent into the chip or sent out of the chip and compared with expected results.”
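The general idea behind graph-based scenario models – walk a graph capturing design intent and emit concrete test code from each path – can be sketched in a few lines of Python. The graph, action names, and emitted C below are invented placeholders, not Breker’s tool or its format.

```python
# Minimal sketch of turning a scenario graph into C test-case stubs.
# Nodes are abstract actions; edges say which action may follow which.
# Everything here (graph, action names, emitted C) is illustrative only.

scenario_graph = {
    "power_on":      ["config_dma", "config_uart"],
    "config_dma":    ["dma_transfer"],
    "config_uart":   ["uart_loopback"],
    "dma_transfer":  ["check_results"],
    "uart_loopback": ["check_results"],
    "check_results": [],
}

def paths(node, prefix=()):
    """Enumerate all root-to-leaf walks through the scenario graph."""
    prefix = prefix + (node,)
    if not scenario_graph[node]:
        yield prefix
        return
    for nxt in scenario_graph[node]:
        yield from paths(nxt, prefix)

def emit_c_test(index, path):
    """Render one walk as a C test-case stub that calls one helper per step."""
    body = "\n".join(f"    do_{step}();" for step in path)
    return f"void test_case_{index}(void) {{\n{body}\n}}\n"

for i, p in enumerate(paths("power_on")):
    print(emit_c_test(i, p))
```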

Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify, noted that “Connecting the unconnected is no small challenge and requires complex and highly sophisticated SoCs. Yet, at the same time, unit costs must be small so that high volumes can be achieved. Arguably, the most critical IP for these SoCs to operate correctly is the DDR memory subsystem. In fact, it is ubiquitous in SoCs –– where there’s a CPU and the need for more system performance, there’s a memory interface. As a result, it needs to be fast, low power and small to keep costs low.  The SoC’s processors spend the majority of cycles reading and writing to DDR memory. This means that all of the components, including the DDR controller, PHY and I/O, need to work flawlessly, as does the external DRAM memory device(s). If there’s a problem with the DDR memory subsystem, such as jitter, data/clock skew, setup/hold time or complicated physical implementation issues, the IoT product may work intermittently or not at all. Consequently, system yield and reliability are of utmost concern.”

He went on to say: “The topic may be the Internet of Things and EDA, but the big winners in the race for IoT market share will be providers of all kinds of IP. The IP content of SoC designs often reaches 70% or more, and SoCs are driving IoT, connecting the unconnected. The big three EDA vendors know this, which is why they have gobbled up some of the largest and best known IP providers over the last few years.”
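The setup/hold issues Smith lists ultimately come down to margin arithmetic at the memory interface: data must arrive early enough, and stay stable long enough, once skew and jitter are budgeted. The Python sketch below shows that budgeting with invented numbers; real DDR signoff is far more involved than this.

```python
# Back-of-the-envelope setup/hold margin check for a DDR-style interface.
# All timing numbers (in nanoseconds) are invented for illustration.

bit_period = 1.25   # one unit interval of a hypothetical interface
t_co       = 0.40   # clock-to-out of the driving flop / PHY
t_flight   = 0.30   # board / package propagation delay
t_setup    = 0.25   # receiver setup requirement
t_hold     = 0.15   # receiver hold requirement
skew       = 0.10   # data-vs-strobe skew budget
jitter     = 0.08   # cycle-to-cycle jitter budget

setup_margin = bit_period - (t_co + t_flight + t_setup + skew + jitter)
hold_margin  = (t_co + t_flight) - (t_hold + skew + jitter)

print(f"setup margin: {setup_margin:+.2f} ns")
print(f"hold margin:  {hold_margin:+.2f} ns")
if setup_margin < 0 or hold_margin < 0:
    print("timing violated: the interface may work intermittently or not at all")
```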

Conclusion

Things that seem simple often turn out not to be.  Implementing IoT will not be simple because as the implementation goes forward, new and more complex opportunities will present themselves.

Vic Kulkarni said: “I believe that EDA solution providers have to go beyond their “comfort zone” of being hardware design tool providers and participate in the hierarchy of IoT above the “Devices” level, especially in the “Gateway” arena. There will be opportunities for providing big data analytics, security stack, efficient protocol standard between “Gateway” and “Network”, embedded software and so on. We also have to go beyond our traditional customer base to end-market OEMs.”

Frank Schirrmeister, product marketing group director at Cadence, noted that “The value chain for the Internet of Things consists not only of the devices that create data. The IoT also includes the hubs that collect data and upload data to the cloud. Finally, the value chain includes the cloud and the big data analytics it stores.  Wired/wireless communications glues all of these elements together.”

EDA Industry Predictions for 2014 – Part 2

Thursday, January 9th, 2014

This article provides observations from some of the “small” EDA vendors about important issues in EDA.  These predictions serve to measure the degree of optimism in the industry; they are not data for an end-of-year scorecard to see who was right and who was not.  It looks like there is much to be done in the next twelve months, unless, of course, consumers change their “mood”.

Bernard Murphy – Atrenta

“Smart” will be the dominant watchword for semiconductors in 2014.  We’ll see the maturing of biometric identification technologies, driven by security needs for smart payment on phones, and an increase in smart-home applications.  An example of cool applications?  Well, we’ll toss our clunky 20th-century remote controls and manage our smart TV with an app on a phone or tablet, which will, among a host of other functions, allow point and text input to the center of our living-room entertainment – the smart TV. We’ll see indoor GPS products, enabling the mobile user to navigate shopping malls – an application with significant market potential.  We’ll see new opportunities for Bluetooth or WiFi positioning, 3D image recognition and other technologies.

In 2014 smart phones will be the dominant driver for semiconductor growth. The IoT industry will grow but will be constrained by adoption costs and immaturity. But I foresee that one of the biggest emerging technologies will be smart cards.  Although common for many years in Europe, this technology has been delayed in the US for lack of infrastructure and security concerns.  Now check out businesses near you with new card readers. Chances are they have a slot at the bottom as well as one at the side. That bottom slot is for smart cards. Slated for widespread introduction in 2015, smart card technologies will explode due to high demand.

The EDA industry in 2014 will continue to see implementation tools challenged by conflicting requirements of technology advances against the shrinking customer base that can afford the costs at these nodes. Only a fundamental breakthrough enabling affordability will effect significant change in these tools.  Front-end design will continue to enjoy robust growth, especially around tools to manage, analyze and debug SoCs based on multi-sourced IPs – the dominant design platform today. Software-based analysis and verification of SoCs will be an upcoming trend, which will largely skip over traditional testbench-based verification. This will likely spur innovation around static hookup checking for the SoC assembly, methods to connect software use cases to implementation characteristics such as power, and enhanced debug tools to bridge the gap between observed software behavior and underlying implementation problems.
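The “static hookup checking for the SoC assembly” mentioned above amounts to comparing the connections in the assembled design against a connectivity intent spec. The Python sketch below is a minimal, hypothetical illustration of that idea; the connection format and block names are invented and not tied to any particular tool.

```python
# Minimal static hookup check: compare actual SoC-assembly connections
# against the intended connectivity spec. Names and format are invented.

intended = {
    ("cpu.axi_m0", "noc.s0"),
    ("noc.m0",     "ddr_ctrl.axi_s"),
    ("cpu.irq_in", "intc.irq_out"),
    ("uart.irq",   "intc.irq_src0"),
}

actual = {
    ("cpu.axi_m0", "noc.s0"),
    ("noc.m0",     "ddr_ctrl.axi_s"),
    ("cpu.irq_in", "intc.irq_out"),
    ("uart.irq",   "intc.irq_src1"),   # mis-hooked interrupt line
}

missing    = intended - actual
unexpected = actual - intended

for src, dst in sorted(missing):
    print(f"MISSING   : {src} -> {dst}")
for src, dst in sorted(unexpected):
    print(f"UNEXPECTED: {src} -> {dst}")
print("hookup check:", "PASS" if not (missing or unexpected) else "FAIL")
```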

Thomas L. Anderson – Breker

Electronic design automation (EDA) and embedded systems have long been sibling markets and technologies, but they are increasingly drawing closer and starting to merge. 2014 will see this trend continue and even accelerate. The catalyst is that almost all significant semiconductor designs use a system-on-chip (SoC) architecture, in which one or more embedded processors lie at the heart of the functionality. Embedded processors need embedded programs, and so the link between the two worlds is growing tighter every year.

One significant driver for this evolution is the gap between simulation testbenches and hardware/software co-verification using simulation, emulation or prototypes. The popular Universal Verification Methodology (UVM) standard provides no links between testbenches and code running in the embedded processors. The UVM has other limitations at the full-SoC level, but verification teams generally run at least some minimal testbench-based simulations to verify that the IP blocks are interconnected properly.

The next step is often running production code on the SoC processors, another link between EDA and embedded. It is usually impractical to boot an operating system in simulation, so usually the verification team moves on to simulation acceleration or emulation. The embedded team is more involved during emulation, and usually in the driver’s seat by the time that the production code is running on FPGA prototypes. The line between the verification team (part of traditional EDA) and the embedded engineers becomes fuzzy.

When the actual silicon arrives from the foundry, most SoC suppliers have a dedicated validation team. This team has the explicit goal of booting the operating system and running production software, including end-user applications, in the lab. However, this rarely works when the chip is first powered up. The complexity and limited debug features of production code lead the validation team to hand-write diagnostics that incrementally validate and bring up sections of the chip. The goal is to find any lurking hardware bugs before trying to run production software.

Closer alignment between EDA and embedded will lead to two important improvements in 2014. First, the simulation gap will be filled by automatically generated multi-threaded, multi-processor C test cases that leverage portions of the UVM testbench. These test cases stress the design far more effectively than UVM testbenches, hand-written tests, or even production software (which is not designed to find bugs). Tools exist today to generate such test cases from graph-based scenario models capturing the design and verification intent for the SoC.

Second, the validation team will be able to use these same scenario models to automatically generate multi-threaded, multi-processor C test cases to run on silicon and replace their hand-written diagnostics. This establishes a continuum between the domains of EDA, embedded systems, and silicon validation. Scenario models can generate test cases for simulation, simulation acceleration, emulation, FPGA prototyping, and actual silicon in the lab. These test cases will be the first embedded code to run at every one of these stages in 2014 SoC projects.

Shawn McCloud – Calypto

While verification now leverages high-level verification languages and techniques (i.e., UVM/OVM and SystemVerilog) to boost productivity, design creation continues to rely on RTL methodologies originally deployed almost 20 years ago. The design flow needs to be geared toward creating bug-free RTL designs. This can be realized today by automating the generation of RTL from exhaustively verified C-based models. The C++/SystemC source code is essentially an executable spec. Because the C++/SystemC source code is more concise, it executes 1,000x–10,000x faster than RTL code, providing better coverage.

C and SystemC verification today is rudimentary, relying primarily on directed tests. These approaches lack the sophistication that hardware engineers employ at the RTL, including assertions, code coverage, functional coverage, and property-based verification. For a dependable HLS flow, you need to have a very robust verification methodology, and you need metrics and visibility. Fortunately, there is no need to re-invent the wheel when we can borrow concepts from the best practices of RTL verification.

Power analysis and optimization have evolved over the last two years, with more changes ahead. Even with conventional design flows there is still a lot more to be optimized on RTL designs. The reality is, when it comes to RTL power optimization, the scope of manual optimizations is relatively limited when factoring in time-to-market pressure and one’s ability to predict the outcome of an RTL change for power. Designers have already started to embrace automated power optimization tools that analyze the sequential behavior of RTL designs to automatically shut down unused portions of a design through a technique called sequential clock gating. There’s a lot more we can do by being smarter and by widening the scope of power analysis. Realizing this, companies will start to move away from the limitations of predefined power budget targets toward a strategy that enables reducing power until the bell rings and it’s time for tapeout.

Bill Neifert – Carbon

Any prediction of future advances in EDA has to include a discussion on meeting the needs of the software developer. This is hardly a new thing, of course. Software has been consuming a steadily increasing part of the design resources for a long time. EDA companies acknowledge this and discuss technologies as being “software-driven” or “enabling software development,” but it seems that EDA companies have had a difficult time in delivering tools that enable software developers.

At the heart of this is the fundamental cost structure of how EDA tools have traditionally been sold and supported. An army of direct sales people and support staff can be easily supported when the average sales price of a tool is in the many tens or hundreds of thousands of dollars. This is the tried-and-true EDA model of selling to hardware engineers.

Software developers, however, are accustomed to much lower-cost, or even free, tools. Furthermore, they expect these tools to work without multiple calls and hand-holding from their local AE.

In order to meet the needs of the software developers, EDA needs to change how it engages with them. It’s not just a matter of price. Even the lowest-priced software won’t be used if it doesn’t meet the designer’s needs or if it requires too much direct support. After all, unlike the hardware designers who need EDA tools to complete their job, a software programmer typically has multiple options to choose from. The platform of choice is generally the one that causes the least pain and that platform may be from an EDA provider. Or, it could just as likely be homegrown or even an older generation product.

If EDA is going to start bringing on more software users in 2014, it needs to come out with products that meet the needs of software developers at a price they can afford. In order to accomplish this, EDA products for programmers must be delivered in a much more “ready-to-consume” form. Platforms should be as prebuilt as possible while allowing for easy customization. Since support calls are barriers to productivity for the software engineer and costly to support for the EDA vendor, platforms for software engineers should be web-accessible. In some cases, they may reside fully in the cloud. This completely automates user access and simplifies support, if necessary.

Will 2014 be the year that EDA companies begin to meet the needs of the software engineer or will they keep trying to sell them a wolf in sheep’s clothing? I think it will be the former because the opportunity’s too great. Developing tools to support software engineers is an obvious and welcome growth path for the EDA market.

Brett Cline – Forte

In the 19th century, prevailing opinion held that American settlers were destined to expand across North America. It was called Manifest Destiny.

In December 2014, we may look back on the previous 11 months and claim SystemC-Based Design Destiny. The semiconductor industry is already starting to see more widespread adoption of SystemC-based design sweeping across the United States; in fact, the U.S. is the fastest-growing region worldwide right now. Along with it comes SystemC-based high-level synthesis, which is gaining traction with more designers because it allows them to perform power tradeoffs that are difficult, if not impossible, in RTL due to time constraints. Of course, low power continues to be a major driver for design and will be throughout 2014.

Another trend that will be even more apparent in 2014 is the use of abstracted IP. RTL-based IP is losing traction for system design and validation due to simulation speed and because it’s difficult to update, retarget and maintain. As a result, more small IP companies will emerge with SystemC as the basis of their designs instead of the long-used Verilog hardware description language.

SystemC-Based Design Destiny is for real in the U.S. and elsewhere as design teams struggle to contain the multitude of challenges in the time allotted.

Dr. Raik Brinkmann – OneSpin Solutions

Over the last few years, given the increase in silicon cost and slowdown in process advancement, we have witnessed the move toward standardized SoC platforms, leveraging IP from many sources, together with powerful, multicore processors.

This has driven a number of verification trends. Verification is diversifying: the testing of IP blocks is evolving separately from SoC integration analysis, which is itself a different methodology from virtual-platform software validation. In 2014, we will see this diversification extend with more advanced IP verification, a formalization of integration testing, and mainstream use of virtual platforms.

With IP being transferred from varied sources, ensuring thorough verification is absolutely essential. Ensuring IP block functionality has always been critical. Recently, this requirement has taken on an additional dimension where the IP must be signed off before usage elsewhere and designers must rely on it without running their own verification. This is true for IP from separate groups within a company or alternative organizations. This sign-off process requires a predictable metric, which may only be produced through verification coverage technology.

We predict that 2014 will be the year of coverage-driven verification. Effective coverage measurement is becoming more essential and, conversely, more difficult. Verification complexity is increasing along three dimensions: design architecture, tool combination, and somewhat unwieldy standards, such as UVM. These all affect the ability to collect, collate, and display coverage detail.

Help is on the way. During 2014, we expect new coverage technology that will enable the production of meaningful metrics. Furthermore, we will see verification management technology and the use of coverage standards to pull together information that will mitigate verification risk and move the state of the art in verification toward a coverage-driven process.

As with many recent verification developments, coverage solutions can be improved through leveraging formal verification technology. Formal is at the heart of many prepackaged solutions as well as providing a powerful verification solution in its own right.

Much like 2009 for emulation, 2014 will be the year we remember when Formal Verification usage dramatically grew to occupy a major share of the overall verification process.

Formal is becoming pervasive in block and SoC verification, and can go further. Revenue for 2013 tells the story. OneSpin Solutions, for example, tripled its new bookings. Other vendors in the same market are also reporting an increase in revenue well above overall verification market growth.
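The coverage collection and collation Brinkmann describes is, at its core, set bookkeeping: merge what every engine hit and report what remains. The Python sketch below shows that bookkeeping with invented coverage items; it does not represent OneSpin’s coverage technology.

```python
# Toy coverage merge: each verification run reports which coverage items it hit;
# merging them gives an overall metric and the list of remaining holes.
# Item names and runs are invented for illustration.

all_items = {f"branch_{i}" for i in range(8)} | {"fsm_reset", "fsm_error"}

runs = {
    "simulation": {"branch_0", "branch_1", "branch_2", "fsm_reset"},
    "formal":     {"branch_2", "branch_3", "branch_4", "branch_5", "fsm_error"},
    "emulation":  {"branch_0", "branch_6"},
}

covered = set().union(*runs.values())
holes = sorted(all_items - covered)

print(f"overall coverage: {100.0 * len(covered) / len(all_items):.1f}% "
      f"({len(covered)}/{len(all_items)} items)")
print("uncovered items:", ", ".join(holes) if holes else "none")
```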

Vigyan Singhal – Oski Technologies

The worldwide semiconductor industry may be putting on formal wear in 2014 as verification engineers more fully embrace a formal verification methodology. In particular, we’re seeing rapid adoption in Asia, from Korea and Japan to Taiwan and China. Companies there are experiencing the same challenges their counterparts in other areas of the globe have found: designs are getting more and more complex, and current verification methodologies can’t keep pace. SoC simulation and emulation, for example, are failing, causing project delays and missed bugs.

Formal verification, many project teams have determined, is the only way to improve block-level verification and reduce the stress on SoC verification. The reasons are varied. Because formal is exhaustive, it will catch all corner-case bugs that are hard to find in simulation. If more blocks are verified and signed off with formal, it means much better design quality.

At the subsystem and SoC level, verification only needs to be concerned with integration issues rather than design quality issues. As an added benefit, all the work spent on building a block-level formal test environment can be reused for future design revisions.

We recently heard from a group of formal verification experts in the U.S. who have successfully implemented formal into their methodology and sign-off flow. Some are long-time formal users. Others are still learning what applications work best for their needs. All are outspoken advocates eager to see more widespread adoption. They’re doing model checking, equivalence checking and clock domain checking, among other applications.

They are not alone in their assessment about formal verification. Given its proven effectiveness, semiconductor companies are starting to build engineering teams with formal verification expertise to take advantage of its powerful capabilities and benefits. Building a formal team is not easy –– it takes time and dedication. The best way to learn is by applying formal in live projects where invested effort and results matter.

Several large companies in Asia have set up rigorous programs to build internal formal expertise. Our experience has shown that it takes three years of full-time formal usage to become what we call a “formal leader” (level 6). That is, an engineer who can define an overall verification strategy, lead formal verification projects and develop internal formal expertise. While 2014 will be the watershed year for the Asian market, we will see more formal users and experts in the years following, and more formal successes.

That’s not to say that adoption of formal doesn’t need some nudging. Education and training are important, as are champions willing to publicly promote the power of the formal technology. My company has a goal to do both. We sponsor the yearly Deep Bounds Award to recognize outstanding technological research achievement for solving the most useful End-to-End formal verification problems. The award is presented at the annual Hardware Model Checking Competition (HWMCC) affiliated with FMCAD (Formal Methods in Computer Aided Design).

While we may not see anyone dressed in top hat and tails at DAC in June 2014, some happy verification engineers may feel like kicking up their heels as Fred Astaire or Ginger Rogers would. That’s because they’re celebrating the completion of a chip project that taped out on time and within budget. And no bugs.

To paraphrase a familiar Fred Astaire quote, “Do it big, do it right and do it with formal.”

Bruce McGaughy – ProPlus Design Solutions

To allow the continuation of Moore’s Law, foundries have been forced to go with 3D FinFET transistors, and along the way a funny thing has happened. Pushing planar devices into vertical structures has helped overcome fundamental device physics limitations, but physics has responded by imposing different physical constraints, such as parasitics and greater gate-drain and gate-source coupling capacitances.

More complex transistor structures mean more complex SPICE models. The inability to effectively body-bias and the requirement to use quantized widths in these FinFET devices means that circuit designers have new challenges resulting in more complex designs. This, coupled with increased parasitic effects at advanced technology nodes, leads to post-layout netlist sizes that are getting larger.

All this gets the focus back on the transistor physics and the verification of transistor-level designs using SPICE simulation.

Above and beyond the complexity of the device and interconnect is the reality of process variation. While some forms of variation, such as random dopant fluctuation (RDF), may be reduced at FinFET nodes, variation caused by fin profile/geometry variability comes into play.

It is expected that threshold voltage mismatch and its standard deviation will increase. Additional complexity from layout-dependent effects requires extreme care during layout. With all these variation effects in the mix, there is one direct trend –– more need for post-layout simulation, the time for which gets longer as netlist sizes get larger.

Pre-layout simulation just does not cut it.

Let’s step back and review where the 3D FinFET transistor has taken us. We have more complex device models, more complex circuits, larger netlists and a greater need for post-layout simulation.

Pretty scary in and of itself. The EDA industry, though, has used a trick whenever confronted with capacity or complexity challenges: trading off accuracy to buy a little more capacity or performance. In the SPICE world, this trick is called FastSPICE.

Now, with 3D FinFETs, we are facing the end of the road for FastSPICE as an accurate simulation and verification tool, and it will be relegated to a more confined role as a functional verification tool. When the process technology starts dropping Vdd and devices have greater capacitive coupling, the result is greater noise sensitivity of the design. The ability to achieve accurate SPICE simulations under these conditions requires extreme care in controlling convergence of currents and charges. Alas, this breaks the back of FastSPICE.

In 2014, as FinFET designs get into production mode, expect the SPICE accuracy requirements and limitations of FastSPICE to cry out for attention. Accordingly, a giga-scale Parallel SPICE simulator called NanoSpice by ProPlus Design Solutions promises to address the problem. It provides a pure SPICE engine that can scale to the capacity and approach the speed of FastSPICE simulators with no loss of accuracy.

Experienced IC design teams will recognize both the potential and challenges of 3D FinFET technology and have the foresight to adopt advanced design methodologies and tools. As a result, the semiconductor industry will be ready to usher in the FinFET production ramp in 2014.

Dave Noble – Pulsic

Custom layout tools will occupy an increased percentage of design tool budgets as process nodes get smaller and more complex. Although legacy (digital) tools are being “updated” to address FinFET, they were designed for 65nm/90nm, so they are running out of steam. Reference flows have too many repetitive, time-consuming, and linear steps. We anticipate that new approaches will be introduced to enable highly optimized layout by new neural tools that can “think” for themselves and anticipate the required behavior (DRC-correct APR) given a set of inputs (such as DRC and process rules). New tools will be required that can generate layout, undertake all placement permutations, complete routing for each permutation and ensure that it is DRC-correct – all in one iteration. Custom design will catch up with digital, custom layout tools will occupy an increased percentage of design tool budgets, and analog tools will have the new high-value specialized functions.