
Chip Design Magazine




Behold the Intrinsic Value of IP

Monday, March 13th, 2017

By Grant Pierce, CEO

Sonics, Inc.

Editor’s Note [this article was written in response to questions about IP licensing practices. A follow-up article, titled “Determining a Fair Royalty Value for IP,” will be published in the next 24 hours].


The intrinsic value of Intellectual Property is like beauty: it is in the eye of the beholder. The beholder of IP value is ultimately the user/consumer of that IP – the buyer. Buyers tend to value IP based upon their ability to utilize that IP to create competitive advantage, and therefore higher value, in their end product. The IP Value figure above was created to capture this concept.

To be clear, this view is NOT about the relative bargaining power, built on the basis of patents, between the buyer and the supplier of IP – the seller. Mounds of court cases and textbooks exist that explore the question of patent strength. What I am positing is that viewing IP value as a matter of a buyer’s perception is a useful way to think about the intrinsic value of IP.

Position A on the value chart is a classification of IP that allows little differentiation by the buyer, but addresses a more elastic market opportunity. This would likely be a Standard IP type that implements an open standard. IP in this category would likely have multiple sources and therefore competitive pricing. Although compliance with the standard would be valued by the buyer, the price of the IP itself would likely be lower, reflecting its commodity nature. Here, the value might be equated to the cost of internally creating equivalent IP. Since few, if any, buyers in this category would see advantage in making this IP themselves, and because there are likely many sellers, the intrinsic value of this IP is determined on a “buy vs buy” basis. Buyers are going to buy this IP regardless, so they’ll look for the seller with the proposition most favorable to them – which often comes down to price.

Position B on the value chart is a classification of IP that allows for differentiation by the buyer, but addresses a more elastic market. IP in this category might be less constrained by standards requirements. It is likely that buyers would implement unique instantiations of this IP type and as a result command some competitive advantage in their end products. Buyers in this category could make this IP themselves, but because there are commercial alternatives, the intrinsic value is determined by applying a “make vs buy” analysis. Because the value propositions of sellers of this type of IP often include important but soft elements (e.g., ease of re-use, time-to-market, esoteric features), the make-vs-buy determination is highly variable and often buyer-specific. This in part explains the variability of pricing for this type of IP.

Position C on the value chart is a classification of IP that serves a less elastic market and empowers buyers to differentiate through their unique implementations of that IP. This classification of IP supports license fees and larger, more consistent, royalty rates. IP in this category becomes the competitive differentiation that sways large market share to the winning products incorporating that IP. This category supports some of the larger IP companies in the marketplace today. Buyers in this category are not going to make the IP themselves because the cost of developing the product and its ecosystem is prohibitively high and risky. The intrinsic value really comes down to what the seller charges.

This is a “buy vs not make” decision – meaning one either buys the IP or does not make the product at all. A unique hallmark of IP in this position is that, so long as the seller applies pricing consistently, all buyers know at the very least that they are not disadvantaged relative to the competition and will continue to buy. Sellers will often give some technology away to encourage long-term lock-in. For these reasons, pricing of IP in this space tends to be quite stable. That pricing level must stay below the point at which customers begin to perform unnatural acts and explore unusual alternatives. So long as it does, the price charged probably represents the intrinsic value accurately.

Position D on the value chart is a classification of IP that requires adherence to a standard. Like category A, adherence to the standard does not necessarily allow differentiation to the buyer. The buyer of this category of IP might be required to use this IP in order to gain access to the market itself. Though the lack of end-product differentiation available to the buyer might suggest a lower license fee and/or a low-to-zero royalty rate, we see a significantly less elastic market for this IP type.

This IP category tends to comprise products adhering to closed and/or proprietary standards. IP products built on such closed and/or proprietary standards have given rise to several significant IP business franchises in the marketplace today. The IP in position D is characterized in part by the seller’s need to spend significant time and money to develop, market and maintain (defend) its position, in addition to spending on IP development. For this reason, teasing out the intrinsic value of this IP is not as straightforward as “make vs buy.” Pricing is really viewed more as a tax, so the intrinsic value determination is made on a “Fair Tax” basis. If buyers think the tax is no longer “fair,” for any reason, they will make the move to a different technology.


Position A:  USB, PCI, memory interfaces (Synopsys)

Position B:  Configurable Processors, Analog IP cores (Synopsys, Cadence)

Position C:  General Purpose Processors, Graphics, DSP, NoC, EPU (ARM, Imagination, CEVA, Sonics)

Position D: CDMA, Noise Reduction, DDR (Qualcomm, Dolby, Rambus)

Why Customer Success is Paramount

Sonics is an IP supplier whose products tend to reside in the Type C category. Sonics sets its semiconductor IP pricing as a function of the value of the SoC design/chip that uses the IP. There is a spectrum of value functions for the Sonics IP depending upon the type of chip, complexity of design, target power/performance, expected volume, and other factors. Defining the upper and lower bounds of the value spectrum depends upon an approximation of these factors for each particular chip design and customer.

Royalties are one component of the price of IP and are a way of risk sharing that allows customers to bring their products to market without having to pay the full value of the incorporated IP up front. The benefit is that the creator and supplier of the IP is essentially investing in the overall success of the user’s product by accepting the deferred royalty payment. Sonics views the royalty component of its IP pricing as “customer success fees.”

With its recently introduced EPU technology, Sonics has adopted an IP business model based upon an annual technology access fee and a per-power-grain usage fee due at chip tapeout. Under this model, customers have unlimited use of the technology to explore power control for as many designs as they want, but only pay for their actual IP usage in a completed design. The tapeout fee is calculated on a sliding scale based on the number of power grains used in the design: the more power grains customers use, the more energy saved, and the lower the cost per grain. Using more power grains drives lower energy consumption by the chip, so buyers increase the market value of their chips using Sonics’ EPU technology. The bottom line is that Sonics’ IP business model depends on customers successfully completing their designs using Sonics IP.
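The sliding-scale arithmetic described above can be sketched as follows. The tier boundaries and per-grain rates here are invented purely for illustration; Sonics’ actual price book is not public in this article.

```python
# Illustrative sliding-scale tapeout fee: the more power grains a design
# uses, the lower the average cost per grain. Tier sizes and rates are
# hypothetical assumptions, not Sonics' actual pricing.
TIERS = [
    (100, 50.0),   # first 100 grains at $50 each
    (400, 35.0),   # next 400 grains at $35 each
    (None, 20.0),  # every grain beyond 500 at $20 each
]

def tapeout_fee(grains: int) -> float:
    """Total fee for a design using `grains` power grains."""
    fee, remaining = 0.0, grains
    for size, rate in TIERS:
        if remaining <= 0:
            break
        n = remaining if size is None else min(remaining, size)
        fee += n * rate
        remaining -= n
    return fee

# Average cost per grain falls as usage grows:
for g in (50, 500, 2000):
    print(g, tapeout_fee(g), tapeout_fee(g) / g)
```

With these made-up numbers, a 50-grain design averages $50/grain while a 2000-grain design averages under $25/grain, which is the incentive structure the article describes.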

Next Year in EDA: What Will Shape 2015

Thursday, December 4th, 2014

Gabe Moretti, Senior Editor

The Big Picture

Having talked to Cadence, Mentor and Synopsys, I think it is very important to hear what the rest of the EDA industry has to say about the coming year. After all, in spite of the financial importance of the “big three,” a significant amount of innovation comes from smaller companies focused on one or just a few sectors of the market.

Piyush Sancheti, VP of Marketing at Atrenta, pointed out that users drive the market and that users worry about time to market. In the companion article, Chi-Ping Hsu of Cadence stated the same. To meet the market window, users need predictability in product development, and therefore must manage design size and complexity, handle IP quality and integration risks, and avoid surprises during development. He observed that “The EDA industry as a whole is still growing in the single digits. However, certain front-end domains like RTL design and verification are growing much faster. The key drivers for growth are emulation, static and assertion-based verification, and power intent verification.

As the industry matures, consolidation around the big 3 vendors will continue to be a theme. Innovation is still fueled by startups, but EDA startup activity is not quite as robust.  In 2014 Synopsys, Cadence and Mentor continued to drive growth with their investment and acquisitions in the semiconductor IP space, which is a good trend.”

Harnhua Ng, CEO of Plunify, said: “There is much truth in the saying, ‘Those who don’t learn from history are doomed to repeat it,’ especially in the data-driven world that we live in today. It seems like every retailer, social network and financial institution is analyzing and finding patterns in the data that we generate. To businesses, being able to pick out trends from consumer behavior and quickly adapt products and services to better address customer requirements will result in significant cost savings and quality improvements.

Intuitively, chip design is an ideal area to apply these data analysis techniques because of the abundance of data generated in the process and the sheer cost and expertise required in realizing a design from the drawing board all the way to silicon. If only engineers could learn useful lessons – for instance, what worked well and what didn’t work as well – from all the chips that have ever been designed in history, what insights we would have today. Many companies already have processes in place for reviewing past projects and extracting information.”

While talking about design complexity, Bill Neifert, CTO at Carbon Design Systems, noted that: “Initially targeted more at the server market, ARM’s 64-bit V8 architecture has been thrust into mobile since Apple announced that it was using it for the iPhone 5s. Since then, we’ve seen a mad dash as semiconductor companies start developing mobile SoC designs containing multiple clusters of multicore processors.

Mobile processors have a large amount of complexity in both hardware and software. Coping with this move to 64 bits has placed a huge amount of stress on the hardware, software and systems teams.

With the first generation of 64-bit designs, many companies are handling this migration by changing as few variables as possible. They’ll take reference implementations and heavily leverage third-party IP in order to get something out the door. For this next generation of designs though, teams are starting to add more of their own differentiating IP and software. This raises a host of new verification and validation issues especially when addressing the complications being introduced with hardware cache coherency.”

Internet of Things (IoT)

IoT is expected to drive much of the growth in the electronics industry and therefore in EDA.  One can begin to see a few examples of products designed to work in the IoT architecture, even if the architecture is not yet completely finalized.  There are wearable products that at the moment work only locally but have the potential to be connected via a cell phone to a central data processing system that generates information.  Intelligent cars are at the moment self-contained IoT architectures that collect data, generate information, and in some cases, act on the information in real time.

David Kelf, VP of Marketing at OneSpin Solutions talked about the IoT  in the automotive area. “2015 is yet again destined to be an exciting year. We see some specific IoT applications taking off, particularly in the automotive space and with other designs of a safety critical nature. It is clear that automotive electronics is accelerating. In particular is the concept of various automotive “apps” running on the central computer that interfaces with sensors around the car. This leads to a complex level of interaction, which must be fully verified from a safety critical and security point of view, and this will drive the leading edge of verification technology in 2015. Safety Critical standards will continue to be key verification drivers, not just in this industry sector but for others as well.”

Drew Wingard, CTO of Sonics said that: “The IoT market opportunity is top-of-mind for every company in the electronics industry supply chain including EDA tool vendors. For low-cost IoT devices, systems companies cannot afford to staff 1000-person SoC design teams. Furthermore, why do system design companies need two verification engineers for every designer? From an EDA tools and methodology perspective, today’s approach doesn’t work well for IoT designs.

SoC designers need to view their IoT designs in a more modular way and accept that components are “known good” across levels of abstraction. EDA tools and the verification environments that they support must eliminate the need to re-verify components whenever they are integrated into the next level up. It boils down to verification reuse. Agile design methodologies have a focus on automated component testing that SoC designers should consider carefully. IoT will drive the EDA industry trend toward a more agile methodology that delivers faster time-to-market. EDA’s role in IoT is to help lower the cost of design and verification to meet the requirements of this new market.”


Verification

Verification continues to be a hot topic.  The emphasis has shifted from logic verification to system verification, where a system is understood to contain both hardware and software components.  As the level of abstraction of the design under test (DUT) has increased, the task of verification has become more demanding.

Michael Sanie, Senior Director of Verification Marketing at Synopsys talked about the drivers that will influence verification progress in 2015.

“SoCs are growing in unprecedented complexity, employing a variety of advanced low power techniques and an increasing amount of embedded software.  While both SoC verification and software development/validation traditionally have been the long-poles of project timelines, they are now inseparable and together have a significant impact on time-to-market.  Advanced SoC verification teams are now driven by not only reducing functional bugs, but also by how early they can achieve software bring-up for the SoCs.  The now-combined process of finding/debugging functional bugs and software bring-up is often comprised of complex flows with several major steps including virtual platforms, static and formal verification, simulation, emulation and FPGA-based prototyping, with tedious and lengthy transitions between each step taking as long as weeks. Further complicating matters, each step requires a different technology/methodology for debug, coverage, verification IP, etc.

In 2015, the industry will continue its journey into new levels of verification productivity and early software bring-up by looking at how these steps can be approached universally with the introduction of larger platforms built from the industry’s fastest engines for each of these steps, and further integration and unification of compile, setup, debug, verification IP and coverage. Such an approach creates a continuum of technologies leveraging virtual platforms, static and formal verification, simulation, emulation and FPGA-based prototyping, enabling a much shorter transition time between each step.  It further creates a unified debug solution across all domains and abstraction levels.  The emergence of such platforms will then enable dramatic increases in SoC verification productivity and earlier software bring-up/development.”

Bill Neifert of Carbon says that: “In order to enable system differentiation, design teams need to take a more system-oriented approach.  Verification techniques that work well at the block level start falling apart when applied to complex SoC systems. There needs to be a greater reliance upon system-level validation methodologies to keep up with the next generation of differentiated 64-bit designs. Accurate virtual prototypes can play a huge role in this validation task and we’ve seen an enormous upswing in the adoption of our Carbon Performance Analysis Kits (CPAKs) to perform exactly this task. A CPAK from System Exchange, for example, can be customized quickly to accurately model the behavior of the SoC design and then exercised using system-level benchmarks or verification software. This approach enables teams to spend far less time developing their validation solution and a lot more time extracting value from it.”

We hear a lot about design reuse, especially in terms of IP use.  Drew Wingard of Sonics points to a lack of reuse in verification.  “One of the biggest barriers to design reuse is the lack of verification reuse. Verification remains the largest and most time-consuming task in SoC design, in large part due to the popularity of constrained-random simulation techniques and the lack of true, component-based verification reuse. Today, designers verify a component at a very small unit level, then re-verify it at the IP core level, then re-verify the IP core at the IP subsystem level, then re-verify the IP subsystem at the SoC level and then, re-verify the SoC in the context of the system.

They don’t always use the same techniques at every one of those levels, but there is significant effort spent and test code developed at every level to check the design. Designers run and re-write the tests at every level of abstraction because when they capture the test the first time, they don’t abstract the tests so that they could be reused.”

Piyush Sancheti of Atrenta acknowledges that front-end design and verification tools are growing, driven by more complex designs and shorter time-to-market. But design verification difficulty continues to increase while the time available for completion continues to shrink. Companies are turning more and more to static verification, formal techniques and emulation. The goal is RTL debug and signoff, paving the way for more automatic, knowledge-based place and route functions.

Regarding formal tools, David Kelf of OneSpin noted that: “Formal techniques, in general, continue to proliferate through many verification flows. We see an increase in the use of the technology by designers to perform initial design investigation, and greater linkage into the simulation flow. Combining the advances in FPGA, the safety-critical driver for high-reliability verification and increases in formal technology, we believe that this year will bring fundamental shifts.”

Jin Zhang, Senior Director of Marketing at Oski Technology, had an interesting input on the subject of formal verification, based on the feedback she received recently from the Decoding Formal Club. Here is what she said: “In October, Oski Technology hosted the quarterly Decoding Formal Club where more than 40 formal enthusiasts gathered to talk about formal sign-off and processor verification using formal technology. The sheer energy and enthusiasm of Silicon Valley engineers speaks to the growing adoption of formal verification.

Several experts on formal technology who attended the event view the future of formal verification similarly. They echoed the trends we have been seeing –– formal adoption is in full bloom and could soon replace simulation in verification sign-off.

What’s encouraging is not just the adoption of formal technology in simple use models, such as formal lint or formal apps, but in an expert use model as well. For years, expert-level use has been regarded as academic and not applicable to solving real-world challenges without the aid of a doctoral degree. Today, End-to-End formal verification, as the most advanced formal methodology, leads to complete verification of design blocks with no bugs left behind. With ever-increasing complexity and daunting verification tasks, the promise and realization of signing off a design block using formal alone is the core driver of the formal adoption trend.

The trend is global. Semiconductor companies worldwide are recognizing the value of End-to-End formal verification and working to apply it to critical designs, as well as staffing in-house formal teams. Formal verification has never been so well regarded.

While 2015 may not be the year when every semiconductor company has adopted formal verification, it won’t be long before formal becomes as normal as simulation, and shoulders as much responsibility in verification sign off.”

Advanced Processes Challenges

Although the number of system companies that can afford to use advanced processes is diminishing, their challenges are an important indicator of future requirements for a larger set of users.

Mary Ann White, Director of Product Marketing, Galaxy Design Platform at Synopsys points out how timing and power requirement analysis are critical elements of design flows.

“The endurance of Moore’s law drives design and EDA trends where consumer appetites for all things new and shiny continue to be insatiable. More functional consolidation into a single SoC pushes ultra-large designs more into the norm, propelling the need for more hierarchically oriented implementation and signoff methodologies.  While 16- and 14-nm FinFET technologies become a reality by moving into production for high-performance applications, the popularity of the 28-nm node will persevere, especially for mobile and IoT (Internet of Things) devices.

Approximately 25% of all designs today are ≥50 million gates in size according to Synopsys’ latest Global User Survey. Nearly half of those devices are easily approaching one billion transistors. The sheer size of these designs compels adoption of the latest hierarchical implementation techniques with either black boxes or block abstract models that contain timing information with interfaces that can be further optimized. The Galaxy Design Platform has achieved several successful tapeouts of designs with hundreds of millions of instances, and newer technologies such as IC Compiler II have been architected to handle even more.  In addition, utilization of a sign-off based hierarchical approach, such as PrimeTime HyperScale technology, saves time to closure, allowing STA completion of 100+ million instances in hours vs. days while also providing rapid ECO turnaround time.

The density of FinFET processes is quite attractive, especially for high-performance designs, which tend to be very large multi-core devices. FinFET transistors have brought dynamic, rather than leakage (static), power to the forefront as the main concern. Thanks to 20-nm, handling of double patterning is now well established for FinFET. However, next-generation process nodes are now introduced at a much faster pace than ever before, and test chips for the next 10-nm node are already appearing. Handling the varying multi-patterning requirements for these next-generation technologies will be a huge focus over the next year, with early access and ecosystem partnerships between EDA vendors, foundries and customers.

Meanwhile, as mobile devices continue their popularity and IoT devices (such as wearables) become more prevalent, extending battery life and power conservation remain primary requirements. Galaxy has a plethora of different optimization techniques to help mitigate power consumption. Along with the need for more silicon efficiency to lower costs, the 28-nm process node is ideal for these types of applications. Already accounting for more than a third of revenue for foundries, 28-nm (planar and FD-SOI) is poised to last a while even as FinFET processes come online.”

Dr. Bruce McGaughy, CTO and VP of Engineering at ProPlus was kind enough to provide his point of view on the subject.

“The challenges of moving to sub-20nm process technologies are forcing designers to look far more closely at their carefully constructed design flows. The trend in 2015 could be a widespread retooling effort to stave off these challenges as the most leading-edge designs start using FinFET technology, introducing a complication at every step in the design flow.

Observers point to the obvious: Challenges facing circuit designers are mounting as the tried-and-true methodologies and design tools fall farther and farther behind. It’s especially apparent with conventional SPICE and FastSPICE simulators, the must-have tools for circuit design.

FastSPICE is faltering and the necessity of using Giga-scale SPICE is emerging. At the sub-28nm process node, for example, designers need to consider layout dependent effects (LDE) and process variations. FastSPICE tricks and techniques, such as isomorphism and table models, do not work.

As we move further into the realm of FinFET at the sub-20nm nodes, FastSPICE’s limitations become even more pronounced. FinFET design requires a new SPICE model called BSIM-CMG, more complicated than the BSIM3 and BSIM4 models, the industry-standard models for CMOS technology used for 20 years. New FinFET physical effects, such as strong Miller capacitance, break FastSPICE’s partitioning and event-driven schemes. Typically, FinFET models have over 1,000 parameters per transistor, and more than 20,000 lines of C code, posing a tremendous computational challenge to SPICE simulators.

Furthermore, the latest advanced processes pose new and previously undetected challenges. With reduced supply voltage and increased process variations, circuits now are more sensitive to small currents and charges, such as SRAM read cycles and leakage currents. FastSPICE focuses on event-driven piecewise linear (PWL) voltage approximations rather than continuous waveforms of currents, charges and voltages. More delay and noise issues are appearing in the interconnect, requiring post layout simulation with high accuracy, and multiple power domains and ramp-up/ramp-down cycles are more common.

All are challenging for FastSPICE, but can be managed by Giga-scale SPICE simulators. FastSPICE simulators lack accuracy for sensitive currents, voltage regulators and leakage, which is where Giga-scale SPICE simulators can really shine.

Time to market and high mask costs demand tools that always give accurate and reliable results, and catch problems before tapeout.  Accordingly, designers and verification engineers are demanding a tool that does not require the “tweaking” of options to suit their situations, such as different sets of simulation options for different circuit types, or accuracy assigned only where the simulator thinks it’s needed. Rather, they should get accuracy everywhere without option tweaks.

Often, verification engineers are not as familiar with the circuits as the designers, and may inadvertently choose a set of options that causes the FastSPICE simulator to ignore important effects, such as shorting out voltage regulator power nets. Weak points could be lurking in those overlooked areas of the chip. With Giga-scale SPICE, such approximations are neither used nor necessary.

Here’s where Giga-scale SPICE simulators take over, being perfectly suited for the new process technologies including 16/14nm FinFET. They offer pure SPICE accuracy and deliver capacity and performance comparable to FastSPICE simulators.

For the first time, Giga-scale SPICE makes it possible for designers to use one simulation engine to design both small and large circuit blocks, and simultaneously use the same simulator for full-chip verification with SPICE accuracy, eliminating glitches or inconsistencies. We are at the point where retooling the simulation flow, including making investments in parallel processing hardware, is the right investment to make to improve time to market and reduce the risk of respins. At the same time, tighter margins can be achieved in design, resulting in better performance and yield.”

Memory and FPGA

With the increased use of embedded software in SoC designs, memories and memory controllers are gaining in importance.  Bob Smith, Senior VP of Marketing and Business Development at Uniquify, presents a compelling argument.

“The term ‘EDA’ encompasses tools that both help in automating the design process (think synthesis or place and route) as well as automating the analysis process (such as timing analysis or signal integrity analysis). A new area of opportunity for EDA analysis is emerging to support the understanding and characterization of high-speed DDR memory subsystems.

The high-performance demands placed on DDR memory systems require that design teams thoroughly analyze and understand the system margins and variations encountered during system operation. At rated DDR4 speeds, there is only about 300ps of allowable timing margin across the entire system (ASIC or SoC, package, PCB or other interconnect media, and the DDR SDRAM itself). Both static (process-related) and dynamic variations (due to environmental variables such as temperature) must also be factored into this tight margin. The goal is straightforward: optimize the complete DDR system to achieve the highest possible performance while minimizing any negative impacts on memory performance due to anticipated variations, and leave enough margin that unanticipated variations don’t cause the memory subsystem to fail.

However, understanding the available margin and how it will be impacted by static and dynamic variation is challenging. Historically, there has been no visibility into the DDR subsystem itself, and the JEDEC specifications address only the behavior of the DDR-SDRAM devices, not the components outside of the device. Device characterization helps, but only accounts for part-to-part variation. Ideally, DDR system designers would like to be able to measure timing margins in-situ, with variations present, to fully and accurately understand system behavior and gain visibility into possible issues.

A new tool for DDR System analysis provides this visibility. A special interface in the DDR PHY allows it to run numerous different analyses to check the robustness of the entire DDR system including the board and DDR-SDRAM device(s). The tool can be used to determine DDR system margins, identify board or DDR component peculiarities and be used to help tune various parameters to compensate for issues discovered and maximize DDR performance in a given system. Since it is essentially a window into the DDR subsystem, it can also be used to characterize and compare the performance of different boards and board layouts and even compare the performance and margins of different DDR-SDRAM components.

Figure 1: A new tool for DDR System analysis can check bit-level margins on a DDR read.
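To put the roughly 300ps figure quoted above in perspective, here is a back-of-the-envelope unit-interval and margin budget. The transfer rate and the individual component contributions below are assumed round numbers for illustration, not measured values from any characterized system.

```python
# Back-of-envelope DDR timing budget. DDR transfers data on both clock
# edges, so the unit interval (UI) -- the width of one data eye -- is
# 1 / transfer_rate. All budget entries are illustrative assumptions.
def unit_interval_ps(transfer_rate_mtps: float) -> float:
    """Width of one data eye in picoseconds for a rate in MT/s."""
    return 1e6 / transfer_rate_mtps  # 1e12 ps/s / (rate * 1e6 T/s)

ui = unit_interval_ps(3200)          # DDR4-3200 -> 312.5 ps per bit
budget = {                           # hypothetical consumers of the UI
    "SoC PHY skew/jitter": 90.0,
    "package":             30.0,
    "PCB routing":         60.0,
    "DRAM setup/hold":     80.0,
}
margin = ui - sum(budget.values())
print(f"UI = {ui} ps, remaining margin = {margin} ps")
```

Even with these generous assumptions, only a few tens of picoseconds remain for process and environmental variation, which is why in-situ margin measurement of the kind described above is attractive.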

Charlie Cheng, CEO of Kilopass, talks about the need for new memory technology.  “For the last few years, the driver for chip and EDA tool development has been the multicore SoC that is central to all smartphones and tablets. Memory dominates these devices with 85% of the chip area and an even larger percentage of the leakage power consumption. Overlay on that the strong friction that is slowing down the transition from 28nm –– the most effective node –– to 14/16nm. It becomes quickly obvious that a new high-density, power-thrifty memory technology is needed at the 28nm node. Memory design has been sorely lacking in innovation for the last 20 years, with all the resources getting invested on the process side. 2015 will be the year of major changes in this area as the industry begins to take a better look at support for low-power, security and mobile applications.”

The use of FPGA devices in SoCs has also increased in 2014.  David Kelf thinks that: “The significant advancement in FPGA technology will lead to a new wave of FPGA designs in 2015. New device geometries from the leading FPGA vendors are changing the ASIC-to-FPGA cost/volume curve, and this will have an effect on the size and complexity of these devices. In addition, we will see more specialized synthesis tools from all the vendors, which provide for greater, device-targeted optimizations. This in turn drives further advancement of new verification flows for larger FPGAs, and it is our prediction that most of the larger devices will make use of a formal-based verification flow to increase overall QoR (Quality of Results).”

The need for better tools to support the use of FPGAs is also acknowledged by Harnhua Ng at Plunify.  “FPGA software tools contain switches and parameters that influence synthesis optimizations and place-and-route quality of results. Combined with user-specified timing and location constraints, timing, area and power results can vary by as much as 70% without even modifying the design’s source code. Experienced FPGA designers intuitively know good switches and parameters through years of experience, but have to manually refine this intuition as design techniques, chip architectures and software tools rapidly evolve. Continuing refinement and improvement are better managed using data analysis and machine learning algorithms.”
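The kind of data-driven exploration Ng describes can be sketched as a simple random search over tool switches. The parameter names and the scoring function below are made-up stand-ins for a real synthesis and place-and-route run, which would be far more expensive to evaluate.

```python
import random

# Hypothetical FPGA tool switches and a stand-in for the expensive
# compile-and-measure step (a real flow would run synthesis and P&R
# and read back the achieved timing slack, area and power).
PARAMS = {
    "synth_effort": ["low", "medium", "high"],
    "seed":         list(range(8)),
    "retiming":     [False, True],
}

def run_flow(cfg):
    """Stand-in cost function: lower is better (think negative slack)."""
    score = {"low": 3.0, "medium": 2.0, "high": 1.0}[cfg["synth_effort"]]
    score -= 0.5 * cfg["retiming"]      # retiming helps in this toy model
    score += (cfg["seed"] % 3) * 0.1    # placement-seed noise
    return score

def random_search(trials=20, rng=random.Random(0)):
    """Try random configurations and keep the best one seen."""
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in PARAMS.items()}
        s = run_flow(cfg)
        if s < best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

cfg, score = random_search()
```

A production system would replace random search with the learning algorithms Ng mentions, using results from past compilations to predict which switch combinations are worth trying next.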


It should not be surprising that EDA vendors see many financial and technical opportunities available in 2015.  Consumers’ appetite for electronic gadgets, together with the growth of cloud computing and new IoT implementations, provides markets for new EDA tools.  How many vendors will hit the proper market windows remains to be seen, and time to market will be the fundamental characteristic of the 2015 EDA industry.