Chip Design Magazine

EDA Industry Predictions for 2014 – Part 2

Thursday, January 9th, 2014

This article presents observations from some of the “small” EDA vendors about important issues in EDA.  These predictions serve to measure the degree of optimism in the industry; they are not meant to be data for an end-of-year scorecard to see who was right and who was not.  It looks like there is much to be done in the next twelve months, unless, of course, consumers change their “mood”.

Bernard Murphy – Atrenta

“Smart” will be the dominant watchword for semiconductors in 2014.  We’ll see the maturing of biometric identification technologies, driven by security needs for smart payment on phones, and an increase in smart-home applications.  An example of a cool application?  We’ll toss our clunky 20th-century remote controls and manage our smart TVs with an app on a phone or tablet, which will, among a host of other functions, allow pointing and text input to the center of the living-room entertainment system – the smart TV itself. We’ll see indoor GPS products that let mobile users navigate shopping malls – an application with significant market potential.  We’ll see new opportunities for Bluetooth and WiFi positioning, 3D image recognition and other technologies.

In 2014 smart phones will be the dominant driver for semiconductor growth. The IoT industry will grow but will be constrained by adoption costs and immaturity. But I foresee that one of the biggest emerging technologies will be smart cards.  Although common for many years in Europe, this technology has been delayed in the US by a lack of infrastructure and by security concerns.  Now check out businesses near you with new card readers. Chances are they have a slot at the bottom as well as one at the side. That bottom slot is for smart cards. Slated for widespread introduction in 2015, smart card technologies will explode due to high demand.

The EDA industry in 2014 will continue to see implementation tools challenged by the conflicting pressures of technology advances and a shrinking customer base that can afford the costs at these nodes. Only a fundamental breakthrough in affordability will effect significant change in these tools.  Front-end design will continue to enjoy robust growth, especially around tools to manage, analyze and debug SoCs based on multi-sourced IP – the dominant design platform today. Software-based analysis and verification of SoCs will be an emerging trend, largely bypassing traditional testbench-based verification. This will likely spur innovation around static hookup checking for the SoC assembly, methods to connect software use-cases to implementation characteristics such as power, and enhanced debug tools to bridge the gap between observed software behavior and underlying implementation problems.
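
To make “static hookup checking” a bit more concrete, here is a minimal, hypothetical C++ sketch of the kind of rule such a check applies to an SoC assembly description; the data structures, block names and rules are invented for illustration and are not taken from any particular tool or format.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical, simplified view of an SoC assembly: each record ties an
// IP block's port to a top-level net (an empty net name means unconnected).
struct PortConnection {
    std::string block;
    std::string port;
    std::string net;
    bool is_output;  // true if the port drives the net
};

int main() {
    // Toy assembly description; a real flow would parse IP-XACT or RTL.
    std::vector<PortConnection> assembly = {
        {"cpu0",  "axi_m",   "axi_bus", true},
        {"dma0",  "axi_m",   "axi_bus", true},   // second driver: error
        {"sram0", "axi_s",   "axi_bus", false},
        {"uart0", "irq_out", "",        true},   // unconnected: error
    };

    int errors = 0;
    std::map<std::string, int> drivers;  // net name -> driver count

    for (const auto& c : assembly) {
        if (c.net.empty()) {
            std::cout << "ERROR: " << c.block << "." << c.port
                      << " is unconnected\n";
            ++errors;
        } else if (c.is_output) {
            ++drivers[c.net];
        }
    }
    for (const auto& [net, count] : drivers) {
        if (count > 1) {
            std::cout << "ERROR: net " << net << " has " << count
                      << " drivers\n";
            ++errors;
        }
    }
    std::cout << errors << " hookup error(s) found\n";
    return errors ? 1 : 0;
}
```

A real tool would read IP-XACT or RTL and apply hundreds of rules, but the shape of the check is the same: walk the assembly, apply structural rules, and report violations without any simulation.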

Thomas L. Anderson – Breker

Electronic design automation (EDA) and embedded systems have long been sibling markets and technologies, but they are increasingly drawing closer and starting to merge. 2014 will see this trend continue and even accelerate. The catalyst is that almost all significant semiconductor designs use a system-on-chip (SoC) architecture, in which one or more embedded processors lie at the heart of the functionality. Embedded processors need embedded programs, and so the link between the two worlds is growing tighter every year.

One significant driver for this evolution is the gap between simulation testbenches and hardware/software co-verification using simulation, emulation or prototypes. The popular Universal Verification Methodology (UVM) standard provides no links between testbenches and code running in the embedded processors. The UVM has other limitations at the full-SoC level, but verification teams generally run at least some minimal testbench-based simulations to verify that the IP blocks are interconnected properly.

The next step is often running production code on the SoC processors, another link between EDA and embedded. It is usually impractical to boot an operating system in simulation, so usually the verification team moves on to simulation acceleration or emulation. The embedded team is more involved during emulation, and usually in the driver’s seat by the time that the production code is running on FPGA prototypes. The line between the verification team (part of traditional EDA) and the embedded engineers becomes fuzzy.

When the actual silicon arrives from the foundry, most SoC suppliers have a dedicated validation team. This team has the explicit goal of booting the operating system and running production software, including end-user applications, in the lab. However, this rarely works when the chip is first powered up. The complexity and limited debug features of production code lead the validation team to hand-write diagnostics that incrementally validate and bring up sections of the chip. The goal is to find any lurking hardware bugs before trying to run production software.

Closer alignment between EDA and embedded will lead to two important improvements in 2014. First, the simulation gap will be filled by automatically generated multi-threaded, multi-processor C test cases that leverage portions of the UVM testbench. These test cases stress the design far more effectively than UVM testbenches, hand-written tests, or even production software (which is not designed to find bugs). Tools exist today to generate such test cases from graph-based scenario models capturing the design and verification intent for the SoC.

Second, the validation team will be able to use these same scenario models to automatically generate multi-threaded, multi-processor C test cases to run on silicon and replace their hand-written diagnostics. This establishes a continuum between the domains of EDA, embedded systems, and silicon validation. Scenario models can generate test cases for simulation, simulation acceleration, emulation, FPGA prototyping, and actual silicon in the lab. These test cases will be the first embedded code to run at every one of these stages in 2014 SoC projects.
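
For a feel of what such a generated test case looks like, here is a minimal, hypothetical sketch. Real generated tests are typically bare-metal C targeting the SoC’s own processors and registers; this sketch uses host-side C++ threads and invented block names purely to illustrate the structure: concurrent scenarios exercise different blocks, then an end-to-end check verifies the expected outcome.

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Stand-ins for memory-mapped blocks; a generated test would program real
// registers on the SoC instead (everything here is invented for the sketch).
std::atomic<int> dma_done{0};
std::atomic<int> crypto_done{0};

void dma_scenario() {
    // Scenario-graph node: configure DMA, start a transfer, wait, check.
    // (Real code: write descriptor registers, poll a status register.)
    dma_done = 1;
}

void crypto_scenario() {
    // Scenario-graph node: load a key, encrypt a buffer, verify the output.
    crypto_done = 1;
}

int main() {
    // Two scenarios scheduled onto two threads (two cores on silicon),
    // deliberately overlapping to stress shared interconnect and memory.
    std::vector<std::thread> workers;
    workers.emplace_back(dma_scenario);
    workers.emplace_back(crypto_scenario);
    for (auto& w : workers) w.join();

    // End-to-end check derived from the scenario model's expected outcome.
    assert(dma_done == 1 && crypto_done == 1);
    return 0;
}
```

The value of generation is that a tool can produce thousands of such schedules, varying which scenarios overlap on which processors, something that is impractical to hand-write.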

Shawn McCloud – Calypto

While verification now leverages high-level verification languages and techniques (e.g., UVM/OVM and SystemVerilog) to boost productivity, design creation continues to rely on RTL methodologies originally deployed almost 20 years ago. The design flow needs to be geared toward creating bug-free RTL designs. This can be realized today by automating the generation of RTL from exhaustively verified C-based models. The C++/SystemC source code is essentially an executable spec. Because the C++/SystemC source code is more concise, it executes 1,000x–10,000x faster than RTL code, providing better coverage.

C and SystemC verification today is rudimentary, relying primarily on directed tests. These approaches lack the sophistication that hardware engineers employ at the RTL, including assertions, code coverage, functional coverage, and property-based verification. For a dependable HLS flow, you need to have a very robust verification methodology, and you need metrics and visibility. Fortunately, there is no need to re-invent the wheel when we can borrow concepts from the best practices of RTL verification.
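
As a loose illustration of the “executable spec” idea and of borrowing RTL practices, the hypothetical C++ sketch below pairs a small untimed specification function with a test that uses assertions and a crude, hand-rolled functional-coverage count; the function and coverage bins are invented for this sketch and are not tied to any particular HLS flow.

```cpp
#include <cassert>
#include <cstdint>
#include <iostream>

// Untimed executable spec: a saturating 8-bit adder that an HLS tool
// could later turn into RTL (function and widths invented for this sketch).
uint8_t sat_add8(uint8_t a, uint8_t b) {
    unsigned sum = static_cast<unsigned>(a) + b;
    return sum > 0xFF ? 0xFF : static_cast<uint8_t>(sum);
}

int main() {
    // Crude functional-coverage bins: did we hit both the saturating and
    // non-saturating cases? RTL flows track this automatically; at the
    // C level today it is often counted by hand, as here.
    unsigned hit_saturate = 0, hit_normal = 0;

    // Exhaustive checking is cheap at this abstraction level (64K cases),
    // which is one reason C-level verification runs so much faster.
    for (unsigned a = 0; a <= 0xFF; ++a) {
        for (unsigned b = 0; b <= 0xFF; ++b) {
            uint8_t r = sat_add8(a, b);
            if (a + b > 0xFF) { assert(r == 0xFF); ++hit_saturate; }
            else              { assert(r == a + b); ++hit_normal; }
        }
    }
    assert(hit_saturate > 0 && hit_normal > 0);  // both bins covered
    std::cout << "coverage: saturate=" << hit_saturate
              << " normal=" << hit_normal << "\n";
    return 0;
}
```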

Power analysis and optimization have evolved over the last two years, with more changes ahead. Even with conventional design flows there is still a lot more to be optimized on RTL designs. The reality is that, when it comes to RTL power optimization, the scope of manual optimization is relatively limited once time-to-market pressure and the difficulty of predicting the power impact of an RTL change are factored in. Designers have already started to embrace automated power optimization tools that analyze the sequential behavior of RTL designs to automatically shut down unused portions of a design through a technique called sequential clock gating. There’s a lot more we can do by being smarter and by widening the scope of power analysis. Realizing this, companies will start to move away from the limitations of predefined power budget targets toward a strategy of reducing power until the bell rings and it’s time for tape-out.

Bill Neifert – Carbon

Any prediction of future advances in EDA has to include a discussion on meeting the needs of the software developer. This is hardly a new thing, of course. Software has been consuming a steadily increasing part of the design resources for a long time. EDA companies acknowledge this and discuss technologies as being “software-driven” or “enabling software development,” but it seems that EDA companies have had a difficult time in delivering tools that enable software developers.

At the heart of this is the fundamental cost structure of how EDA tools have traditionally been sold and supported. An army of direct sales people and support staff can be easily supported when the average sales price of a tool is in the many tens or hundreds of thousands of dollars. This is the tried-and-true EDA model of selling to hardware engineers.

Software developers, however, are accustomed to much lower-cost, or even free, tools. Furthermore, they expect these tools to work without multiple calls and hand-holding from their local AE.

In order to meet the needs of the software developers, EDA needs to change how it engages with them. It’s not just a matter of price. Even the lowest-priced software won’t be used if it doesn’t meet the designer’s needs or if it requires too much direct support. After all, unlike the hardware designers who need EDA tools to complete their job, a software programmer typically has multiple options to choose from. The platform of choice is generally the one that causes the least pain and that platform may be from an EDA provider. Or, it could just as likely be homegrown or even an older generation product.

If EDA is going to start bringing on more software users in 2014, it needs to come out with products that meet the needs of software developers at a price they can afford. In order to accomplish this, EDA products for programmers must be delivered in a much more “ready-to-consume” form. Platforms should be as prebuilt as possible while allowing for easy customization. Since support calls are barriers to productivity for the software engineer and costly for the EDA vendor, platforms for software engineers should be web-accessible. In some cases, they may reside fully in the cloud. This completely automates user access and simplifies whatever support is still needed.

Will 2014 be the year that EDA companies begin to meet the needs of the software engineer or will they keep trying to sell them a wolf in sheep’s clothing? I think it will be the former because the opportunity’s too great. Developing tools to support software engineers is an obvious and welcome growth path for the EDA market.

Brett Cline – Forte

In the 19th century, prevailing opinion held that American settlers were destined to expand across North America. It was called Manifest Destiny.

In December 2014, we may look back on the previous 11 months and claim SystemC-Based Design Destiny. The semiconductor industry is already starting to see more widespread adoption of SystemC-based design sweeping across the United States; in fact, the U.S. is the fastest-growing region worldwide right now. Along with it comes SystemC-based high-level synthesis, gaining traction with more designers because it allows them to perform power tradeoffs that are difficult if not impossible in RTL due to time constraints. Of course, low power continues to be a major driver for design and will be throughout 2014.

Another trend that will be even more apparent in 2014 is the use of abstracted IP. RTL-based IP is losing traction for system design and validation due to simulation speed and because it’s difficult to update, retarget and maintain. As a result, more small IP companies are emerging with SystemC as the basis of their designs instead of the long-used Verilog hardware description language.

SystemC-Based Design Destiny is for real in the U.S. and elsewhere as design teams struggle to contain the multitude of challenges in the time allotted.

Dr. Raik Brinkmann – OneSpin Solutions

Over the last few years, given the increase in silicon cost and slowdown in process advancement, we have witnessed the move toward standardized SoC platforms, leveraging IP from many sources, together with powerful, multicore processors.

This has driven a number of verification trends. Verification is diversifying: the testing of IP blocks is evolving separately from SoC integration analysis, which in turn is a different methodology from virtual-platform software validation. In 2014, we will see this diversification extend with more advanced IP verification, a formalization of integration testing, and mainstream use of virtual platforms.

With IP being transferred from varied sources, ensuring thorough verification is absolutely essential. Ensuring IP block functionality has always been critical. Recently, this requirement has taken on an additional dimension: the IP must be signed off before usage elsewhere, and designers must rely on it without running their own verification. This is true for IP from separate groups within a company as well as from other organizations. This sign-off process requires a predictable metric, which can only be produced through verification coverage technology.

We predict that 2014 will be the year of coverage-driven verification. Effective coverage measurement is becoming more essential and, at the same time, more difficult. Verification complexity is increasing along three dimensions: design architecture, tool combination, and somewhat unwieldy standards, such as UVM. These all affect the ability to collect, collate, and display coverage detail.

Help is on the way. During 2014, we expect new coverage technology that will enable the production of meaningful metrics. Furthermore, we will see verification management technology and the use of coverage standards to pull together information that will mitigate verification risk and move the state of the art in verification toward a coverage-driven process.

As with many recent verification developments, coverage solutions can be improved through leveraging formal verification technology. Formal is at the heart of many prepackaged solutions as well as providing a powerful verification solution in its own right.

Much like 2009 for emulation, 2014 will be remembered as the year formal verification usage grew dramatically to occupy a major share of the overall verification process.

Formal is becoming pervasive in block and SoC verification, and can go further. Revenue for 2013 tells the story. OneSpin Solutions, for example, tripled its new bookings. Other vendors in the same market are also reporting an increase in revenue well above overall verification market growth.

Vigyan Singhal – Oski Technologies

The worldwide semiconductor industry may be putting on formal wear in 2014 as verification engineers more fully embrace a formal verification methodology. In particular, we’re seeing rapid adoption in Asia, from Korea and Japan to Taiwan and China. Companies there are experiencing the same challenges their counterparts in other areas of the globe have found: designs are getting more and more complex, and current verification methodologies can’t keep pace. SoC simulation and emulation, for example, are failing, causing project delays and missed bugs.

Formal verification, many project teams have determined, is the only way to improve block-level verification and relieve the stress on SoC verification. The reasons are varied. Because formal is exhaustive, it will catch the corner-case bugs that are hard to find in simulation. If more blocks are verified and signed off with formal, design quality improves substantially.

At the subsystem and SoC level, verification then only needs to be concerned with integration issues rather than design quality issues. As an added benefit, all the work spent on building a block-level formal test environment can be reused for future design revisions.
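
Formal tools prove properties symbolically rather than by enumeration, but for intuition about what “exhaustive” buys you, here is a toy, hypothetical C++ sketch that checks every state of a small invented block (a 4-bit binary-to-Gray converter), including the wrap-around corner case that constrained-random simulation might only hit by luck.

```cpp
#include <bitset>
#include <cassert>
#include <cstdint>

// Tiny invented block: 4-bit binary-to-Gray-code converter.
uint8_t to_gray(uint8_t b) { return b ^ (b >> 1); }

int main() {
    // Exhaustively check the one property that matters: consecutive codes,
    // including the wrap from 15 back to 0, differ in exactly one bit.
    // A formal tool proves this symbolically; here the state space is
    // small enough to brute-force for intuition.
    for (uint8_t i = 0; i < 16; ++i) {
        uint8_t g0 = to_gray(i & 0xF);
        uint8_t g1 = to_gray((i + 1) & 0xF);
        assert(std::bitset<4>(g0 ^ g1).count() == 1);
    }
    return 0;
}
```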

We recently heard from a group of formal verification experts in the U.S. who have successfully implemented formal into their methodology and sign-off flow. Some are long-time formal users. Others are still learning what applications work best for their needs. All are outspoken advocates eager to see more widespread adoption. They’re doing model checking, equivalence checking and clock domain checking, among other applications.

They are not alone in their assessment about formal verification. Given its proven effectiveness, semiconductor companies are starting to build engineering teams with formal verification expertise to take advantage of its powerful capabilities and benefits. Building a formal team is not easy –– it takes time and dedication. The best way to learn is by applying formal in live projects where invested effort and results matter.

Several large companies in Asia have set up rigorous programs to build internal formal expertise. Our experience has shown that it takes three years of full-time formal usage to become what we call a “formal leader” (level 6). That is, an engineer who can define an overall verification strategy, lead formal verification projects and develop internal formal expertise. While 2014 will be the watershed year for the Asian market, we will see more formal users and experts in the years following, and more formal successes.

That’s not to say that adoption of formal doesn’t need some nudging. Education and training are important, as are champions willing to publicly promote the power of the formal technology. My company has a goal to do both. We sponsor the yearly Deep Bounds Award to recognize outstanding technological research achievement for solving the most useful End-to-End formal verification problems. The award is presented at the annual Hardware Model Checking Competition (HWMCC) affiliated with FMCAD (Formal Methods in Computer Aided Design).

While we may not see anyone dressed in top hat and tails at DAC in June 2014, some happy verification engineers may feel like kicking up their heels as Fred Astaire or Ginger Rogers would. That’s because they’re celebrating the completion of a chip project that taped out on time and within budget. And no bugs.

To paraphrase a familiar Fred Astaire quote, “Do it big, do it right and do it with formal.”

Bruce McGaughy – ProPlus Design Solutions

To allow the continuation of Moore’s Law, foundries have been forced to go with 3D FinFET transistors, and along the way a funny thing has happened. Pushing planar devices into vertical structures has helped overcome fundamental device physics limitations, but physics has responded by imposing different physical constraints, such as parasitics and greater gate-drain and gate-source coupling capacitances.

More complex transistor structures mean more complex SPICE models. The inability to effectively body-bias and the requirement to use quantized widths in these FinFET devices mean that circuit designers face new challenges, resulting in more complex designs. This, coupled with increased parasitic effects at advanced technology nodes, leads to ever-larger post-layout netlists.

All this gets the focus back on the transistor physics and the verification of transistor-level designs using SPICE simulation.

Above and beyond the complexity of the device and interconnect is the reality of process variation. While some forms of variation, such as random dopant fluctuation (RDF), may be reduced at FinFET nodes, variation caused by fin profile/geometry variability comes into play.

It is expected that threshold voltage mismatch and its standard deviation will increase. Additional complexity from layout-dependent effects requires extreme care during layout. With all these variation effects in the mix, there is one clear trend: more need for post-layout simulation, whose runtime grows as netlists get larger.
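
For a feel of why mismatch analysis drives so much additional simulation, here is a rough, hypothetical Monte Carlo sketch in C++ based on the classic Pelgrom relationship, sigma(dVth) = A_VT / sqrt(W*L); the coefficient, device dimensions and margin below are made-up illustrative numbers, not foundry data.

```cpp
#include <cmath>
#include <iostream>
#include <random>

int main() {
    // Pelgrom model: sigma(delta-Vth) = A_VT / sqrt(W * L).
    // All numbers below are invented for illustration only.
    const double A_VT_mVum = 1.5;            // mismatch coefficient, mV*um
    const double W_um = 0.1, L_um = 0.03;    // shrinking W*L pushes sigma up
    const double sigma_mV = A_VT_mVum / std::sqrt(W_um * L_um);

    // Monte Carlo: sample pairwise Vth mismatch and count how often it
    // exceeds a made-up 30 mV design margin.
    std::mt19937 rng(12345);
    std::normal_distribution<double> dvth(0.0, sigma_mV);
    const int trials = 100000;
    int violations = 0;
    for (int i = 0; i < trials; ++i) {
        if (std::abs(dvth(rng)) > 30.0) ++violations;
    }
    std::cout << "sigma(dVth) = " << sigma_mV << " mV, "
              << violations << "/" << trials
              << " samples exceed the 30 mV margin\n";
    return 0;
}
```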

Pre-layout simulation just does not cut it.

Let’s step back and review where the 3D FinFET transistor has taken us. We have more complex device models, more complex circuits, larger netlists and a greater need for post-layout simulation.

Pretty scary in and of itself. The EDA industry, though, has always had a trick for capacity or complexity challenges: trading off accuracy to buy a little more capacity or performance. In the SPICE world, this trick is called FastSPICE.

Now, with 3D FinFETs, we are facing the end of the road for FastSPICE as an accurate simulation and verification tool, and it will be relegated to a more confined role as a functional verification tool. As process technology starts dropping Vdd and devices exhibit greater capacitive coupling, designs become more sensitive to noise. Achieving accurate SPICE simulations under these conditions requires extreme care in controlling the convergence of currents and charges. Alas, this breaks the back of FastSPICE.

In 2014, as FinFET designs get into production mode, expect the SPICE accuracy requirements and limitations of FastSPICE to cry out for attention. Accordingly, a giga-scale Parallel SPICE simulator called NanoSpice by ProPlus Design Solutions promises to address the problem. It provides a pure SPICE engine that can scale to the capacity and approach the speed of FastSPICE simulators with no loss of accuracy.

Experienced IC design teams will recognize both the potential and challenges of 3D FinFET technology and have the foresight to adopt advanced design methodologies and tools. As a result, the semiconductor industry will be ready to usher in the FinFET production ramp in 2014.

Dave Noble – Pulsic

Custom layout tools will occupy an increased percentage of design tool budgets as process nodes get smaller and more complex. Although legacy (digital) tools are being “updated” to address FinFET, they were designed for 65nm/90nm, so they are running out of steam. Reference flows have too many repetitive, time-consuming, and linear steps. We anticipate that new approaches will be introduced to enable highly optimized layout by new neural tools that can “think” for themselves and anticipate the required behavior (DRC-correct APR) given a set of inputs (such as DRC and process rules). New tools will be required that can generate layout, undertake all placement permutations, complete routing for each permutation AND ensure that it is DRC-correct – all in one iteration. Custom design will catch up with digital, custom layout tools will occupy an increased percentage of design tool budgets, and analog tools will have the new high-value specialized functions.

Changes In The Supply Chain

Thursday, January 31st, 2013

By John Blyler and Staff

Runaway complexity in design, implementation, verification and manufacturing is being mirrored across an increasingly complex supply chain. Now the question is what to do about it.

Complexity is being driven by the continued shrinking of feature sizes and the clamor for more functionality to leverage the real estate that becomes available with each new process node. But the increased density also requires a slew of new technologies, such as finFETs, new processes such as double patterning, and potentially even new materials. On top of that, market windows are actually shrinking rather than remaining constant, putting pressure on teams to ramp up their IP reuse when possible, or to buy commercially available IP when it isn’t.

This all sounds straightforward enough, except that the number of partners critical to an SoC’s success is growing, as well. And the more partners in the supply chain for any given chip, the more chances that something will go wrong. Moreover, given the huge investment required for chips and their derivatives these days, that’s causing companies to scramble in an effort to contain the risk.

“Who you decide to work with is almost becoming a bet-your-company strategy,” said Mike Gianfagna, vice president of corporate marketing at Atrenta. “If you’re buying IP from 10 or 15 sources, you also have to worry about the interoperability of tools. Over the next few years, as we move into stacked die, you’re going to have custom, standard and FPGAs, all of which will need to be put into a package with an interposer. Then you need to get the package to yield. Who takes on the risk to manage the inventory, assemble it and fix it if it doesn’t yield?”

He’s not the only one asking that question.

“A lot of companies have tried a supermarket approach to IP,” said Kurt Shuler, vice president of marketing at Arteris. “That’s fine for standard IP. It either works or it doesn’t. But when you’re dealing with all the other stuff—the processors, the memory controller, the interconnect—there are huge differences in performance, area and ease of integration.”

But there are subtle differences as well. An IP block, or even a subsystem, may be fully characterized for one process node and in one configuration and still not work well in another SoC. Noise, heat, and even different user profiles can change how well a piece of IP functions from one design to the next.

Hardening IP
One way to ensure that IP actually works as planned is to harden it—actually turn out a test chip so that physical measurements can be taken to completely characterize it. The problem with soft IP is that it’s never completely characterized. The problem with hard IP is that once it’s in silicon, it’s impossible to change. The goal—and this is a new approach—is to harden it, fully characterize the IP, and then take it back to the lab for more tweaking and more test chips. It’s also a much more expensive way to make sure everything goes as planned, but one that appears to be increasingly necessary at advanced nodes.

“Customers want one throat to choke, and the strategy to allow that is to have bigger and more integrated pieces of IP,” said John Koeter, vice president of marketing in the solutions group at Synopsys. “Otherwise you end up with lots of finger pointing. That also means working with fellow IP partners more closely. So what we’ve been doing is to harden ARM cores. We’ve been doing the same with Imagination. We’re essentially doing test chips for IP. We’ve done this with a multi-way collaboration at 14nm with Samsung and ARM. And with most IP we’re also doing split lots, so we can track process characteristics over time to improve reliability.”

Collaborative test chips are a rare phenomenon in the history of the semiconductor industry, but doing them to better understand how to integrate IP is brand new. It’s also evidence of just how complicated integration has become, and how concerned the big IP vendors are about getting the formulas right.

Massive ecosystem investments
But integration of IP is only part of the change. There has been plenty of talk about creating virtual IDM partnerships. Making that approach actually work is quite expensive, and it’s likely to have significant repercussions on both who’s successful and who’s left standing after the next wave of consolidation.

“Between the segmentation there has to be tremendous collaboration and coordination,” said Chi-Ping Hsu, senior vice president of R&D in Cadence’s Silicon Realization Group. “If the PDK file from the foundry doesn’t work, who do you look for? Is it the tool vendor, the foundry? And the tool vendor has so many different versions of software coming out and the foundries definitely don’t use all of them. They can’t afford them. So how this whole coordination and validation gets done is one of the big challenges.”

He noted that the lines between IC, foundry and EDA have blurred, requiring vertical collaboration. But that collaboration is very expensive in terms of R&D. “And this is not about tools. It’s purely about collaboration.”

That collaboration takes the form of R&D, joint marketing, and an ongoing series of technical papers. In addition, it has to be repeated by each EDA vendor with every major foundry they want to work closely with, each IP vendor, and each major customer.

Other changes ahead
Another important shift that has occurred across the supply chain involves the assignment of risk. At 65nm, foundries typically sold known good die to customers. At 28nm, they are selling wafers, passing bad yields onto their customers rather than absorbing the cost. The result has been a boom in EDA and advanced tooling, and in particular a surge in both verification and design for manufacturing tools.

Coupled with that is a blurring of the lines between IDMs and fabless companies. “We are seeing IDMs leasing parts of their fabs to external companies,” said Michael Munsey, director of product management and strategy at Dassault Systemes. “In addition, all manufacturing stages need total flexibility so that cost, yield and risk can be modeled to allow the company to meet its high-level and operational KPIs. In the future, due to the capital investment required for semiconductor manufacturing, the industry has no choice but to go to a completely flexible model that allows for dual/multi sourcing, and to manage cost, yield and risk.”

He noted that with that shift also comes a different bill of materials. “We are managing processes such as diffusion, test/sorting, assembly/marking, etc. A single die could have different tests applied to it, or may be characterized differently, meaning dies from the same wafer may end up in different packages, and sales may be at the wafer level, die level or package level. There could be many configurations that need to be precisely managed and controlled. In addition, some devices could be stacked dies or multi-chip modules or package-on-package, so any system also has to support these multi-level constructs. And when you overlay the sourcing network (different manufacturing sites may be qualified for different processes), the configuration management problem scales to another dimension.”

Rich get richer, but not everyone goes away
This potentially bodes well for larger IP companies. Massive investments in tighter partnerships mean companies are more in sync about both technology and market opportunities. For smaller companies, it’s a question of where to place much more limited resources.

“Smaller IP companies need to pick and choose their customers and partners much more carefully,” said Arteris’ Shuler. “For the overall industry this is probably a good thing. But you do have to worry about the very small semiconductor vendors. The pain level of complexity is already high at 28nm. The leading edge companies know what they want. Still, the sales cycle is getting shorter, too, and people are just starting to realize how much pain they have.”

John Heinlein, vice president of marketing for ARM’s Physical IP Division, noted a similar division from a different vantage point. “There is a lot of inequality in the ecosystem,” he said. “What that means for ARM is that we have to work with a wide range of companies. We have to support customer flows and all the EDA flows and the major interfaces and standards.”

So far, ecosystems are still enormous. ARM’s, for example, includes more than 1,000 partners. But pressures are rising everywhere and in all directions, as witnessed by ongoing consolidation in all segments, the shifts in alignment of partnerships, and the increased acceptance of commercial IP. While these changes are complex and hard to generalize across a disaggregated global infrastructure, virtually everyone agrees that changes will accelerate as the industry pushes to the next process nodes and into stacked die, where packaging and test will be required at different stages of the manufacturing flow.