
EDA in the year 2017 – Part 2

January 26th, 2017

Gabe Moretti, Senior Editor

The first part of this article, published last week, covered design methods and standards in EDA, together with industry predictions that affect the entire industry.  This part covers automotive, design verification and FPGAs.  I found it interesting that David Kelf, VP of Marketing at OneSpin Solutions, thought that machine learning will begin to penetrate the EDA industry as well.  He stated: “Machine learning hit a renaissance and is finding its way into a number of market segments. Why should design automation be any different?  2017 will mark the start of machine learning creating a new breed of design automation tools, equipped with this technology and able to configure themselves for specific designs and operations to perform them more efficiently. By adapting algorithms to suit the input code, many interesting things will be possible.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence, touched on an issue that is being talked about more and more: security.  He noted: “In 2016, IoT bot-net attacks brought down large swaths of the Internet – the first time the security impact of IoT was felt by many. Private and nation-state attacks compromised personal, corporate and government email throughout the year.”

In 2017, we have the potential for security concerns to start a retreat from always-on social media and a growing value on private time and information. I don’t see a silver bullet for security on our horizon. Instead, I anticipate an increasing focus for products to include security managers (like their safety counterparts) on the design team and to consider safety from the initial concept through the design/production cycle.

Automotive

The automotive industry has increased its use of electronics year over year for a long time.  At this point an automobile is a true intelligent system, at least as far as what the driver and passengers can see and hear: the infotainment system.  Late-model cars also offer collision avoidance and stay-in-lane functions, but more is coming.

Here is what Wally Rhines thinks: “Automotive and aerospace designers have traditionally been driven by mechanical design.  Now the differentiation and capability of cars and planes is increasingly being driven by electronics.  Ask your children what features they want to see in a new car.  The answer will be in-vehicle infotainment.  If you are concerned about safety, the designers of automobiles are even more concerned.  They have to deal with new regulations like ISO 26262, as well as other capabilities, in addition to environmental requirements and the basic task of “sensor fusion” as we attach more and more visual, radar, laser and other sensors to the car.  There is no way to reliably design vehicles and aircraft without virtual simulation of electrical behavior.

In addition, total system simulation has become a requirement.  How do you know that the wire bundle will fit through the hole in the door frame?  EDA tools can tell you the answer, but only after seeking out the data from the mechanical design.  Wiring in a car or plane is a three dimensional problem.  EDA tools traditionally worry about two dimension routing problems.  The world is changing.  We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.”

David Kelf, VP of Marketing at OneSpin Solutions, observed: “OneSpin called it last year and I’ll do it again –– Automotive will be the “killer app” of 2017. With new players entering the market all the time, we will see impressive designs featured in advanced cars, which themselves will move toward a driverless future.  All automotive designs currently being developed for safety will also need to be built to be as secure as possible. The ISO 26262 committee is working on security as well as safety, and I predict security will feature in the standard in 2017. Tools to help predict vulnerabilities will become more important. Formal, of course, is the perfect platform for this capability. Watch for advanced security features in formal tools.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence, noted: “In 2016, autonomous vehicle technology reached an inflection point. We started seeing more examples of private companies operating SAE Level 3 vehicles in America and abroad (Singapore, Pittsburgh, San Francisco).  We also saw active participation by the US and world governments to help guide tech companies in the proliferation and safety of the technology (e.g., US DOT V2V/V2I standard guidelines, and federal ADAS guidelines created to prevent state-level differences). Perhaps the most striking example was the first drone delivery by a major retailer, something which was hinted at three years prior and seemed just a flight of fancy then.

Looking ahead to 2017, both the breadth and depth of autonomous driving are expected to expand, including the first operation of SAE Level 4/5 vehicles in limited use on public streets outside the US and on private roads inside the US. Outside of ride sharing and city driving, I expect to see the increasing spread of ADAS technology to long-distance trucking and non-urban transportation. To enable this, additional investments from traditional vehicle OEMs partnering with both software and silicon companies will be needed to enable high levels of autonomous function. To help bring these to reality, I also expect the release of new standards to guide both the functional safety and reliability of automotive semiconductors. Even though the pace of government standards can lag, for ADAS technology to reach its true potential, it will require both standards and innovation.”

FPGA

The IoT market is expected to provide a significant opportunity for the electronics industry to grow revenue and open new markets.  I think the use of FPGAs in IoT devices will increase the adoption of these devices in system design.

I asked Geoff Tate, CEO of FlexLogix, his opinions on the subject.  He offered four points that he expects to become reality in 2017:

1. the first customer chip will be fabricated using embedded FPGA from an IP supplier

2. the first customer announcements will be made of customers adopting embedded FPGA from an IP supplier

3. embedded FPGAs will be proven in silicon running at 1GHz+

4. the number of customers doing chip design using embedded FPGA will go from a handful to dozens.

Zibi Zalewski, Hardware Division General Manager at Aldec also addressed the FPGA subject.

“I believe FPGA devices are an important technology player to mention when talking about what to expect in 2017. With the growth of embedded electronics driven by the Automotive, Embedded Vision and/or IoT markets, FPGA technology becomes a core element, particularly in products that require low power and re-programmability.

Features of FPGAs such as pipelining and the ability to execute and easily scale parallel instances of the implemented function allow for the use of FPGAs beyond the traditionally understood embedded markets. FPGA computing power usage is exploding in High Performance Computing (HPC), where FPGA devices are used to accelerate different scientific algorithms and big data processing, and to complement CPU-based data centers and clouds. We can’t talk about FPGAs these days without mentioning SoC FPGAs, which merge a microprocessor (quite often an ARM core) with reprogrammable fabric. Thanks to such configurations, it is possible to combine the software and hardware worlds in one device with the benefits of both.

All those activities have led to solid growth in FPGA engineering, which is pushing further growth of FPGA development and verification tools. This includes not only typical solutions in simulation and implementation. We should also see solid growth in tools and services that simplify the use of FPGAs for those who don’t even know the technology, such as high-level synthesis or engineering services to port C/C++ sources into FPGA-implementable code. The demand for development environments like compilers supporting both software and hardware platforms will only grow, with the main goal focused on ease of use by a wide group of engineers who were not even considering the FPGA platform for their target application.

At the other end of the FPGA rainbow are the fast-growing, largest FPGAs offered by both Xilinx and Intel/Altera. ASIC design emulation and prototyping on these devices will push harder and harder on the so-called big-box emulators, offering higher performance and a significantly lower price per gate, and so becoming more affordable even for smaller SoC projects. This is especially true when partnered with high-quality design mapping software that handles multi-FPGA partitioning, interconnections, clocks and memories.”

Design Verification

There are many methods to verify a design, and companies will quite often use more than one on the same design.  Each method (simulation, formal analysis, and emulation) has its strong points.

For many years, logic simulation was the primary tool available, although hardware acceleration of logic simulation was also an option.

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence, submitted a thorough analysis of verification issues.  He wrote: “From a verification perspective, we will see further market specialization in 2017 – mobile, server, automotive (especially ADAS) and aero/defense markets will further create specific requirements for tools and flows, including ISO 26262 TCL1 documentation and support for other standards. The Internet of Things (IoT), with its specific security and low power requirements, really runs across application domains.  Continuing the trend in 2016, verification flows will continue to become more application-specific in 2017, often centered on specific processor architectures. For instance, verification solutions optimized for mobile applications have different requirements than those for servers and automotive applications, or even aerospace and defense designs. As application-specific requirements grow stronger and stronger, this trend is likely to continue going forward, but cross-impact will also happen (like mobile and multimedia on infotainment in automotive).

Traditionally, ecosystems have been centered on processor architectures. Mobile and server are key examples, with their respective leading architectures holding the lion’s share of their respective markets. The IoT is mixing this up a little as more processor architectures can play and offer unique advantages, with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even open-source hardware architectures look like they will be very relevant, judging from the recent momentum, which eerily reminds me of the early Linux days. It’s one of the most entertaining spaces to watch in 2017 and for years to come.

Verification will become a whole lot smarter. The core engines themselves continue to compete on performance and capacity. Differentiation increasingly moves into how smart the applications running on top of the core engines are, and how smartly the engines are used in conjunction.

For the dynamic engines in software-based simulation, the race towards increased speed and parallel execution will accelerate together with flows and methodologies for automotive safety and digital mixed-signal applications.

In the hardware emulation world, differentiation for the two basic ways of emulating – processor-based and FPGA-based – will be more and more determined by how the engines are used. Specifically, the various use models for core emulation, like verification acceleration, low-power verification, dynamic power analysis and post-silicon validation (often driven by the ever-growing software content), will extend further, with more virtualization joining real-world connections. Yes, there will also be competition on performance, which clearly varies between processor-based and FPGA-based architectures (depending on design size and how much debug is enabled), as well as on the versatility of use models, which determines the ROI of emulation.

FPGA-based prototypes address the designer’s performance needs for software development, using the same core FPGA fabrics. Therefore, differentiation moves into the software stacks on top, and the congruency between emulation and FPGA-based prototyping, enabled by multi-fabric compilation, allows the same design to be mapped into both emulation and FPGA-based prototyping.

All this is complemented by smart connections into formal techniques and cross-engine verification planning, debug and software-driven verification (i.e. software becoming the test bench at the SoC level). Based on standardization driven by the Portable Stimulus working group in Accellera, verification reuse between engines and cross-engine optimization will gain further importance.

Besides horizontal integration between engines (virtual prototyping, simulation, formal, emulation and FPGA-based prototyping), the vertical integration between abstraction levels will become more critical in 2017 as well. For low power specifically, activity data created from RTL execution in emulation can be connected to power information extracted from .lib technology files, using gate-level representations or power estimation from RTL. This allows designers to estimate hardware-based power consumption in the context of software, using the deep cycles over longer timeframes that are emulated.”
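As a toy illustration of the activity-to-power link Schirrmeister describes, per-net toggle rates captured during an emulation run can be weighted by .lib-style per-toggle energies and added to leakage. The Python sketch below is a simplification; all names and numbers are invented for illustration and do not come from any particular flow.

    # Toggle rates observed during an emulation run (toggles per ns), invented values
    toggles_per_ns = {"alu_out": 0.30, "fifo_wr": 0.05, "bus_req": 0.12}
    # Per-toggle switching energy in pJ, of the kind characterized in .lib files (invented)
    energy_pj = {"alu_out": 0.8, "fifo_wr": 1.5, "bus_req": 0.6}
    leakage_uw = 42.0  # assumed static power of the block, in microwatts

    # 1 toggle/ns * 1 pJ/toggle = 1 mW, hence the factor of 1000 to report microwatts
    dynamic_uw = 1000 * sum(toggles_per_ns[n] * energy_pj[n] for n in toggles_per_ns)
    print(f"dynamic ~{dynamic_uw:.0f} uW, total ~{dynamic_uw + leakage_uw:.0f} uW")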

Anyone who knows Frank will not be surprised by the length of the answer.

Wally Rhines, Chairman and CEO of Mentor Graphics, was less verbose. He reiterated that total system simulation has become a requirement and that the basic EDA technology for designing integrated circuits will be applied to the design of systems (see the Automotive section above), then added:

“This will create a new market for EDA.  It will be larger than the traditional IC design market for EDA.  But it will be based upon the basic simulation, verification and analysis tools of IC design EDA.  Sometime in the near future, designers of complex systems will be able to make tradeoffs early in the design cycle by using virtual simulation.  That know-how will come from integrated circuit design.  It’s no longer feasible to build prototypes of systems and test them for design problems.  That approach is going away.  In its place will be virtual prototyping.  This will be made possible by basic EDA technology.  Next year will be a year of rapid progress in that direction.  I’m excited by the possibilities as we move into the next generation of electronic design automation.”

The increasing size of chips has made emulation a more popular tool than in the past.  Lauro Rizzatti, Principal at Lauro Rizzatti LLC, is a pioneer in emulation and continues to be regarded as a leading expert in the method.  He noted: “Expect new use models for hardware emulation in 2017 that will support traditional market segments such as processor, graphics, networking and storage, as well as emerging markets currently underserved by emulation: safety and security, along with automotive and IoT.

Chips will continue to be bigger and more complex, and include an ever-increasing amount of embedded software. Project groups will increasingly turn to hardware emulation because it’s the only verification tool that can debug the interaction between the embedded software and the underlying hardware. It is also the only tool capable of estimating power consumption in a realistic environment, when the chip design is booting an OS and processing software apps. More to the point, hardware emulation can thoroughly test the integrity of a design after the insertion of DFT logic, since it can verify gate-level netlists of any size, a virtually impossible task for logic simulators.

Finally, its move to the data center solidifies its position as a foundational verification tool that offers a reasonable cost of ownership.”

Formal verification tools, sometimes referred to as “static analysis tools,” have seen their use increase year over year once vendors found human interface methods that did not require a highly trained user.  Roger Sabbagh, VP of Application Engineering at Oski Technology, pointed out: “The world is changing at an ever-increasing pace and formal verification is one area of EDA that is leading the way. As we stand on the brink of 2017, I can only imagine what great new technologies we will experience in the coming year. Perhaps it’s having a package delivered to our house by a flying drone, or riding in a driverless car, or eating food created by a 3-D printer. But one thing I do know is that in the coming year, more people will have the critical features of their architectural design proven by formal verification. That’s right. System-level requirements, such as coherency, absence of deadlock, security and safety, will increasingly be formally verified at the architectural design level. Traditionally, we relied on RTL verification to test these requirements, but the coverage and confidence gained at that level is insufficient. Moreover, bugs may be found very late in the design cycle, where they risk generating a lot of churn. The complexity of today’s systems of systems on a chip dictates that a new approach be taken. Oski is now deploying architectural formal verification with design architects very early in the design process, before any RTL code is developed, and it’s exciting to see the benefits it brings. I’m sure we will be hearing a lot more about this in the coming year and beyond!”

Finally, David Kelf, VP of Marketing at OneSpin Solutions, observed: “We will see tight integrations between simulation and formal that will drive adoption among simulation engineers in greater numbers than before. The integration will include the tightening of coverage models, joint debug, and functionality where the formal method can pick up from simulation and even emulation with key scenarios for bug hunting.”


Conclusion

The two combined articles are indeed quite long.  But the EDA industry is serving a multi-faceted set of customers with varying and complex requirements.  To do it justice, length is unavoidable.

EDA in the year 2017 – Part 1

January 12th, 2017

Gabe Moretti, Senior Editor

The EDA industry’s performance depends on two other major economies: one technological and one financial.  EDA provides the tools and methods that leverage the growth of the semiconductor industry, and it generally begins to receive its financial rewards a couple of years after the introduction of a new product to the market.  It takes that long for the product to prove itself and achieve general distribution.

David Fried from Coventor addressed the most important topics that may impact the foundry business in 2017.  He made two points.

“Someone is going to commit to Extreme Ultra-Violet (EUV) for specific layers at 7nm, and prove it.  I expect EUV will be used to combine 3-4 masks currently using 193i in a multi-patterning scheme (“cut” levels or Via levels) for simplicity (reduced processing), but won’t actually leverage a pattern-fidelity advantage for improved chip area density.

The real density benefit won’t come until 5nm, when the entire set of 2D design rules can be adjusted for pervasive deployment of EUV.  This initial deployment of EUV will be a “surgical substitution” for cost improvement at very specific levels, but will be crucial for the future of EUV to prove out additional high-volume manufacturing challenges before broader deployment.  I am expecting this year to be the year that the wishy-washy predictions of who will use EUV at which technology for which levels will finally crystallize with proof.

7nm foundry technology is probably going to look mostly evolutionary relative to 10nm and 14nm. But 5nm is where the novel concepts are going to emerge (nanowires, alternate channel materials, tunnel FETs, stacked devices, etc.), and in order for that to happen, someone is going to prove out a product-like scaling of these devices in a real silicon demonstration (not just single-device research).  The year 2017 is when we’ll need to see something like an SRAM array, with real electrical results, to believe that one of these novel device concepts can be developed in time for a 5nm production schedule.”

Rob Knoth, Product Marketing Director, Digital and Signoff Group at Cadence, offered the following observation:  “This past year, major IDM and pure-play foundries began to slow the rate at which new process nodes are planned to be released. That cadence of new nodes was one of the main drivers for the relentless semiconductor-based advances we’ve seen over the past 50 years.

Going forward, fabs and equipment makers will continue to push the boundaries of process technology, and the major semiconductor companies will continue to fill those fabs. While it may be slowing, Moore’s Law is not “dead.” However, there will be increased selectivity about who jumps to the “next node,” and greater emphasis will be placed on the ability of design engineers and their tools/flows/methods to innovate and deliver value to the product. The importance of an integrated design flow in making a difference in product power/performance/area (PPA) and schedule/cost will increase.

The role that engineering innovation and semiconductors play in making the world a better place doesn’t get a holiday or have an expiration date.”

The semiconductor market, in turn, depends on the general state of the worldwide economy.  This is determined mostly by consumer sentiment: when consumers buy, all industries benefit, from industrial to financial.  It does not take much of a negative inflection in consumer demand to diminish the requirement for electronics-based products, and thus for semiconductor parts.  That in turn has a negative effect on the EDA industry.

While companies that sell multi-year licenses can smooth the impact, new licenses, both multi-year and yearly, are more difficult to sell and result in lower revenue.

The electronics industry will evolve to deal with the increased complexity of designs.  Complex chips are the only vehicle that can make advanced fabrication nodes profitable.  It makes no sense to decrease feature dimensions and power requirements at the cost of increased noise and leakage just for technology’s sake.  As unit costs increase, only additional functionality can justify new projects.  Such designs will require new methodologies, new versions of existing tools, and new industry organization to improve the use of the development/fabrication chain.

Michael Wishart, CEO of Efabless, believes that in 2017 we will begin to see full-fledged community design, driven by the need for customized silicon to serve emerging smart hardware products. ICs will be created by a community of unaffiliated designers on affordable, re-purposed 180nm nodes and will incorporate low-cost (including open-source) processors and on-demand analog IP. An online marketplace to connect demand with the community will be a must.

Design Methods

I asked Lucio Lanza of Lanza techVentures what factors would become important in 2017 regarding EDA.  As usual his answer was short and to the point.  “Cloud, machine learning, security and IoT will become the prevailing opportunities for design automation in 2017. Design technology must progress quickly to meet the needs of these emerging markets, requiring as much as possible from the design automation industry. Design automation needs to willingly and quickly take up the challenge at maximum speed for success. It’s our responsibility, as it’s always been.”

Bob Smith, Executive Director of the ESD Alliance, thinks that in 2017 the semiconductor design ecosystem will continue evolving from a chip-centric (integration of transistors) focus to a system-centric (integration of functional blocks) worldview. While SoCs and other complex semiconductor devices remain critical building blocks and Moore’s Law a key driver, the emphasis is shifting to system design via the extensive use of IP. New opportunities for automation will open up with the need to rapidly configure and validate system-level designs based on extensive use of IP.  Industry organizations like the Electronic System Design Alliance have a mission to work across the entire design ecosystem as the electronic design market makes the transition to system-level design.

Wally Rhines, Chairman and CEO of Mentor Graphics, addressed the required changes in design as follows: “EDA is changing.  Most of the EDA industry’s effort in the last two decades has focused on the automation of integrated circuit design. Virtually all aspects of IC design are now automated with the use of computers.  But system design is in the infancy of an evolution toward virtual design automation. While EDA has now given us the ability to do first-pass functional integrated circuit designs, we are far from providing the same capability to system designers.

What’s needed is the design of “systems of systems”.  That capability is coming.  And it is sooner than you might think.  Designers of planes, trains and automobiles hunger for virtual simulation of their designs long before they build the physical prototypes for each sub-system.  In the past, this has been impossible.  Models were inadequate.  Simulation was limited to mechanical or thermal analysis.  The world has changed.  During 2017, we will see the adoption of EDA by companies that have never before considered EDA as part of their methodology.”

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence, offered the following observation:  “IoT that spans across application domains will further grow, especially in the industrial domain. Dubbed in Germany as “Industrie 4.0,” industrial applications are probably the strongest IoT driver. Value shifts will accelerate from pure semiconductor value to systemic value in IoT applications. The edge node sensor itself may not contribute to profits greatly, but the systemic value of combining the edge node with a hub accumulating data and sending it through networks to cloud servers, in which machine learning and big data analysis happen, allows for cross monetization. The value definitely is in the system. Interesting shifts lie ahead in this area from a connectivity perspective. 5G is supposed to hit broadly in 2020, with early deployments in 2017. There are already discussions going on regarding how the connectivity within the “trifecta” of IoT/hub/server is going to change, with more IoT devices bypassing aggregation at the hub and directly accessing the network. Look for further growth in the area that Cadence calls System Design Enablement, together with some customer names you would previously not have expected to create chips themselves.

Traditionally, ecosystems have been centered on processor architectures. Mobile and server are key examples, with their respective leading architectures holding the lion’s share of their respective markets. The IoT is mixing this up a little as more processor architectures can play and offer unique advantages, with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even open-source hardware architectures look like they will be very relevant, judging from the recent momentum, which eerily reminds me of the early Linux days. It’s definitely one of the most entertaining spaces to watch in 2017 and for years to come.”

Standards

Standards have played a key role in EDA.  Without them, designers would be locked into one vendor for all of the required tools, and given the number of necessary tools, very few EDA companies would be able to offer everything required to complete, verify, and transfer a design to manufacturing.  Michiel Ligthart, President and COO at Verific, sees two standards in particular playing a key role in 2017.  “Watch for quite a bit of activity on the EDA standards front in 2017. First in line is the UVM standard (IEEE 1800.2), approved by the Working Group in December 2016. The IEEE may ratify it as early as February. Another one to watch is the next installment of SystemVerilog, mainly a “clarifications and corrections” release, that will be voted on in early 2017 with an IEEE release just before the end of the year. In the meantime, we are all looking at Accellera’s Portable Stimulus group to see what it will come up with in 2017.”

In regard to the Portable Stimulus activity, Adnan Hamid, CEO of Breker Verification Systems, goes into more detail.  “While it’s been a long time coming, Portable Stimulus is now an important component of many design verification flows, and that will increase significantly in 2017. The ability to specify verification intent and behaviors reusable across target platforms, coupled with the flexibility in choosing vendor solutions, is an appealing prospect to a wide range of engineering groups, and the appeal is growing. While much of the momentum is rooted in Accellera’s Portable Stimulus Working Group, verification engineers deserve credit for recognizing its value to their productivity and effectiveness. Count on 2017 to be a big year for both its technological evolution and its standardization as it joins the ranks of SystemVerilog, UVM and others.”

Conclusion

Given the number of contributions received, it would be overwhelming to present all of them in one article.  The remaining topics will therefore be covered in a follow-on article next week.

EDA has not been successful at keeping its leaders

January 4th, 2017

Gabe Moretti, Senior Editor

I have often wondered why, when a larger EDA company acquires a smaller one, the acquired CEO ends up leaving in a relatively short time, joining either a new start-up or a venture capital firm.  It seemed to me that, when accepting the acquisition, that CEO thought enough of the buyer to predict that his (or her) employees and product(s) would prosper in the new environment.  So, why leave?  It cannot just be a matter of strongly contrasting personalities.  I think I found the answer over the Christmas break.

I read the book “Skunk Works” by Ben R. Rich.  The book is a factual history of development projects that were carried out while Mr. Rich was first an employee there and eventually its leader.  During his years at the Skunk Works, Mr. Rich was part of the exceptional successes of the U-2 and SR-71 spy planes and of the F-117A stealth fighter.  All those projects were run independently of corporate overseers, used comparatively small dedicated teams, and modified the project when necessary to achieve the established goal.

Two major points made in the book apply both to the EDA industry and to industry in general.  First “Leaders are natural born: managers must be trained” and second “There is no substitute for astute managerial skill on any project”.

Many start-up CEOs are born leaders and do not fit well within an organization where projects are managed in a bureaucratic manner using a rigid reporting structure.  An ex-CEO will soon find such a work environment counter-productive.  Successful projects need to react quickly to changing realities and parameters.  Often in the life of a project, the team discovers new opportunities or new obstacles that come to light because of the work being done.  The time spent explaining and justifying the new alternative will impact the success of the project, especially if the value of the presented alternative is not fully understood by top executives, or if the new managers do not understand the new corporate politics.

I think that the best use of an acquired CEO is to allow him or her to continue to be an entrepreneur within the acquiring company.  This does not mean using his talent to continue to lead the just-acquired team. He can look for new opportunities within his area of expertise and possibly build a new team that will produce a new product.  In this way the acquiring company increases its ROI from the acquisition, even at the cost of increased compensation to both the CEO and his new team at the successful completion of their work.

In general Synopsys has managed to retain acquired CEOs, while Cadence has not.

The behavior in the EDA industry, with very few well-known exceptions, has been to seek a quick reward through an acquisition that satisfies financially both the venture capitalists and the original start-up team.  Once the acquisition price is monetized, many people leave the industry, seeking to capitalize on their financial gains in other ways.  Thus the EDA industry must grow through the entrance of new people with new ideas but little, if any, experience in the industry.  The result is many brilliant academic ideas that end up as failed start-ups.  Individuals with brilliant ideas are not usually good leaders or managers, and good managers do not generally possess the creativity to conceive a breakthrough product.

In its history the EDA industry has paid the price of creating both leaders and excellent managers, but it has yet to find a way to retain them.  Of course there are a few exceptions, nothing is ever black and white, but the exceptions are few.  It will be interesting to see, after a couple of years, how Siemens will have handled the Mentor Graphics acquisition.  Will Mentor’s creativity improve?  Will the successful team remain?  Will they use the additional resources in an entrepreneurial manner, or either leave or adjust to a more relaxed big-company life?

DVCon is a Worldwide Conference

December 21st, 2016

Gabe Moretti, Senior Editor

The DVCon conference now has solid traditions not only in the USA but also in Europe and India, and next year it will start flowering in China.

DVCon U.S.

The conference will be held February 27 – March 2, 2017 at the DoubleTree in San Jose, California. Early registration is open and the program is available online at https://dvcon.org. “DVCon U.S. 2017 planning is taking shape,” commented Dennis Brophy, DVCon U.S. General Chair. “We look forward to a compelling and in-depth technical program full of engaging content that practicing design and verification engineers, managers and EDA tool suppliers have come to depend on from DVCon.” The four-day program offers attendees an Expo, two exciting standards-focused panels and numerous informative papers, tutorials and posters to choose from. Accellera Day starts the conference on Monday and will devote the entire morning to a tutorial on Accellera’s emerging Portable Stimulus standard titled “Creating Portable Stimulus Models with the Upcoming Accellera Standard,” followed by two afternoon tutorials: “SystemC Design and Verification – Solidifying the Abstraction above RTL” and “Introducing IEEE P1800.2 – The Next Step for UVM.”

DVCon India

This important technical event in India was held in Bangalore in September with almost 440 attendees over the two-day event. There were local start-ups participating and exhibiting for the first time, further demonstrating the local focus and interest in each conference. “DVCon India rightly promotes the four C’s: connect, contribute, collaborate, and celebrate,” stated Gaurav Jalan, DVCon India General Chair. The two-day event was inaugurated with a traditional lamp-lighting ceremony and welcome remarks by Jalan. Dr. Walden Rhines, Chairman and CEO of Mentor Graphics, and Professor Kamakoti Veezhinathan, Indian Institute of Technology Madras, delivered the keynotes.

DVCon Europe

As in previous years, the conference was held in Munich, Germany. Held in October, it enjoyed a 20% increase in attendance over the previous year. Attendees of the two-day conference included representatives from 93 companies and organizations from 25 countries. Insightful keynotes were delivered by Hobson Bullman, General Manager of ARM’s Technology Services Group, and Jugen Weyer, Vice President of Automotive Sales for EMEA at NXP Semiconductors. Bob Smith, Executive Director of the ESD Alliance, gave the keynote at the gala dinner. “It’s fantastic to see this event continuing to do so well, meeting a clear need for a European forum that provides practical, detailed information on state-of-the-art development methodologies,” noted Oliver Bell, DVCon Europe General Chair. “This year’s conference was particularly exciting with three dynamic keynote speeches, overwhelming tutorial and paper submissions, and a vibrant exhibition. Now that DVCon Europe is established as the must-attend event in Europe for engineers to upgrade their skills, we are looking forward to an even larger event in 2017.”

DVCon China

During 2017, DVCon will premiere as a one-day event in Shanghai on April 17, 2017. The steering committee is in the process of analyzing a number of excellent paper abstract submissions for its inaugural program. “Ideas, networking, technical discussions, learning opportunities and exciting exhibits of new products and services. This is what DVCon China will offer to attendees,” stated Andy Liu Hongliang, DVCon China General Chair. “Many hot areas of ASIC design and verification such as UVM, Low Power, IP Reuse, Formal, Mixed-Signal, System Design and Debug Strategies will be distributed throughout the whole conference with lectures, discussions, presentations and demos.”

Siemens Acquisition of Mentor Graphics is Good for EDA

November 15th, 2016

Gabe Moretti, Senior Editor

Although Mentor has been the subject of take-over attempts in the past, the business specifics of the transactions have never been favorable to Mentor.  The acquisition by Siemens, instead, is a favorable occurrence for the third largest EDA company.  This time both companies obtain positive results from the affair.

Siemens acquires Mentor following the direction set forth in 2014, when its Vision 2020 was first discussed in public.  The six-year plan describes the steps the company should take to better position itself for the kind of world it envisions in 2020.

The Vision 2020 document calls for operational consolidation and optimization during the years 2016 and 2017.  It also selects three of its business divisions as critical to corporate growth, calling them the E-A-D system: Electrification, Automation, and Digitalization.

Although it is possible that Mentor technology and products may be strategic in Electrification, they are of significant importance in the other two areas: Digitalization and Automation.  Digitalization, for example, includes vehicle automation, including smart cars and vehicle-to-vehicle communication.  Mentor already has an important presence in the automotive industry and can help Siemens in the transition from today’s state-of-the-art electronic car management systems to the new systems required by the self-driving automobile, and in the complete integration of those components into an intelligent system that includes vehicle-to-vehicle communication.

Mentor also has experience in industrial robots, and, what is in my mind more remarkable, its PCB and cabling businesses, often minimized in an EDA industry dominated by the obsession with building ICs, are the parts that implement and integrate the systems in the products designed and built by third parties.

With its presence in the PCB and cabling markets, Mentor can bring additional customers to Siemens, as well as insight into future market requirements and limitations that will serve extremely well in designing the next generation of industrial robots.

Of course, Mentor will also find an increased internal market, as other divisions of Siemens that are not part of the E-A-D triumvirate utilize its products and services.

Siemens describes itself as an employee-oriented company, so present Mentor employees should not have to fear aggressive cost cutting and consolidation.  Will Mentor change? Of course; it will adapt gradually to the new requirements and opportunities the Siemens environment will create and demand, but the key word is “gradually”.  Contrary to the acquisition of ARM by SoftBank, where the acquiring company had no previous activity in ARM’s business, Siemens has been active in EDA internally, both in its research labs and through strong connections with university programs that have originated a number of European EDA startups.  Siemens executives have an understanding of what EDA is all about and what it takes to be successful in EDA.  The result, I expect, is little interference and second-guessing, which translates into continued success for Mentor for as long as it is capable of it.

Will other EDA companies benefit from this acquisition? I think they will.  First of all, it attracts more attention to our industry from the financial community, but it is also likely to increase competition among the “big 3,” forcing Cadence and Synopsys to focus more on key markets while diversifying into related markets like optical, security, and software development, for example.  In addition, I do not see a reason for an EDA company to enter into a business partnership with some of its customers to explore new revenue-generating business models.

ARM TechCon is the Model for Future Successful Conferences

October 26th, 2016

Gabe Moretti, Senior Editor

It has become abundantly clear that corporate- and consortia-sponsored conferences are gaining in both popularity and usefulness over generic conferences like DAC and DesignCon.  The reason, in my opinion, is how development has changed.  The industry has moved from the ASIC era to the integrated-system era.

Instead of designing an entire system, engineers now integrate subsystems.  This has been made possible by the introduction of IP licensing and the growth of the IP industry.  From a fledgling and challenging design opportunity in the early 1990s, the use of IP is now a routine function that embraces both hardware and software modules.

Now both IP vendors and EDA tool providers can offer their customers a complete ecosystem, both in capability and in range of functions.  The result is that conferences like ARM TechCon provide greater utility to working engineers than the exhibit areas of DAC or DesignCon.

Only specialized conferences like DVCon, held by Accellera on three continents, continue to grow, because attendees benefit from the focused topics offered.  An engineer concerned with issues covering the integration of design and verification functions finds interesting content at DVCon, while at a generic conference the same engineer would have to work from advance conference documentation to create his or her own program, at times dealing with conflicting schedules.

ARM holds its own specific program within DAC.  So conference attendees can take advantage of focused curricula.  But the problem is that other companies that enhance the specific environment by collaborating with ARM, for example, cannot provide focused support, since their attention must be directed toward all possibilities available within the conference.

A design engineer attending DAC finds a plethora of activities that are of no interest, or of marginal interest, and has a harder time moving within the conference just to follow what he or she wishes to see and hear.

Professionals dealing with layout and fabrication issues, for example, would find a conference organized by a fab company, dealing with its own fabrication environment, challenges, and guidelines, more interesting than a series of academic papers presented at DAC.  I believe that DAC sponsor organizations need to take into consideration the changed reality of IC and system design, not just in the material presented, but in the format it is presented in.

The significant increase in the size and popularity of ARC Day from Synopsys, for example, is another indication that such workshops are more valuable than generic conferences.  The same can be said for Accellera’s DVCon conferences, now held in the US, Europe, India and China.  Although design and verification issues are global, they have different flavors in certain important parts of the planet.

ARM IP users find at ARM TechCon everything they need to successfully complete a design.  Design, verification, and software integration issues are all covered with a depth and breadth that is not available anyplace else.

Kilopass Unveiled Vertical Layered Thyristor (VLT) Technology for DRAMs

October 19th, 2016

Gabe Moretti, Senior Editor

Kilopass Technology, Inc., is a leader in embedded non-volatile memory (NVM) intellectual property (IP).  Its patented technologies of one-time programmable (OTP) NVM solutions scale to advanced CMOS process geometries. They are portable to every major foundry and integrated device manufacturer (IDM), and meet market demands for increased integration, higher densities, lower cost, low-power management, better reliability and improved security.  The company has just announced a new device that potentially allows it to diversify into new markets.

According to Charlie Cheng, Kilopass’ CEO, VLT eliminates the need for DRAM refresh, is compatible with existing process technologies, and offers significant other benefits including lower power and better area efficiency.  When asked the reason for this additional corporate direction, Cheng replied: “Kilopass built its reputation as the leader in one-time programmable memories. As the next step on our roadmap, we examined many possible devices that would not need new materials or complex process flows and found this vertical thyristor to be very compelling.  We look forward to commercializing VLT DRAM in early 2018.”

VLT Overview

Kilopass’ VLT is based on thyristor technology, a structure that is electrically equivalent to a cross-coupled pair of bipolar transistors that form a latch. The latch lends itself to memory applications since it stores values and, as opposed to current capacitor-based DRAM technology, does not require refresh. The thyristor was first invented in the 1950s and several attempts have been made to use it for the SRAM market without success.  Kilopass’ VLT is the realization of DRAM requirements based on implementing the thyristor structure vertically.

Since VLT does not require complex performance- and power-consuming refresh cycles, a VLT-based DDR4 DRAM lowers standby power by 10X when compared to conventional DRAM at the same process node. Furthermore, VLT requires fewer processing steps and is designed to be built using existing processing equipment, materials and flows.

The VLT bitcell operations and silicon measurement were completed in 2015 and shown to have excellent correlation to Kilopass’ proprietary ultra-fast TCAD simulator that is one hundred thousand times faster than a traditional TCAD simulator. The TCAD simulator enables Kilopass to predict the manufacturing windows for key process parameters, and optimize the design for any given manufacturing process.  A full macro level test chip was taped-out in May and initial silicon testing is underway.
Industry Perspective

The $50B DRAM market is being driven by strong demand in the server/cloud computing market as mobile phone and tablet market growth are slowing down and computing is moving increasingly to the cloud. The outlook for DRAM growth remains strong. In a report published in 2015, IC Insights forecasts DRAM CAGR of 9% over the period from 2014 – 2019. This growth rate shows DRAM growing faster than the total IC market.

Servers and server farms consume a tremendous amount of energy with memory being a major contributor. In an ideal world, the current generation of 20 nanometer (nm) DRAM would migrate to sub-20nm processes to deliver even lower power.

Current DRAM technology is based on the one-transistor, one-capacitor (1T1C) bitcell, which is difficult to scale since smaller transistors exhibit more leakage and a smaller capacitor structure has less capacitance, resulting in the need to reduce the time between refresh intervals. Up to 20% of a 16Gb DDR DRAM’s raw bandwidth will be lost due to the increased frequency of refresh cycles, a negative for multi-core/multi-thread server CPUs that must squeeze out every bit of performance to remain competitive. The DRAM industry is in a quandary, trying to increase memory performance while reducing power consumption, a tough challenge given the physics at play with the current 1T1C technology. In order to address the need for lower power consumption, a new DRAM technology and architecture is needed.
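As a rough, back-of-the-envelope illustration of where that bandwidth loss comes from, the fraction of time a DRAM rank is unavailable is roughly the refresh command time divided by the average refresh interval. The numbers below are generic, assumed values for illustration only, not figures from Kilopass or any specific datasheet:

    # Refresh overhead ~= tRFC / tREFI (time spent refreshing per refresh window)
    tREFI = 7.8e-6   # average refresh interval commanded by the controller, in seconds (assumed)
    tRFC  = 350e-9   # time one refresh command blocks the rank, in seconds (assumed, large die)
    print(f"baseline overhead: {tRFC / tREFI:.1%}")                  # ~4.5% with these numbers

    # Denser dies refresh more often (shorter effective tREFI) and take longer per
    # refresh (longer tRFC); doubling the refresh rate alone already doubles the loss.
    print(f"overhead at 2x refresh rate: {tRFC / (tREFI / 2):.1%}")  # ~9.0%

With the still shorter intervals and longer refresh times projected for 16Gb-class parts, this simple ratio heads toward the 20% figure cited above.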

Kilopass stated that its initial target markets include “PCs” and servers. I am of the old school and associate the term “PC” with personal computers.  But Kilopass uses the term to mean portable computing devices, so it is talking about a different market.  Kilopass expects to have test silicon by early 2017 that will confirm the performance and manufacturability of the new VLT DRAM technology.  Kilopass has two primary reasons to announce the new technology over one year in advance of product delivery.  First, the company is in the IP business, so it is giving itself time to look for licensees.  Second, it thinks that the DRAM market has been stuck at 20nm. Adoption of new technology takes time, though VLT has been shown to be manufacturable. This is the right time to alert the market that there are alternative solutions, allowing time for investigation of this new technology.  Market penetration of a new technology is not always assured.  Wide acceptance almost always requires a second source, especially with something as new as the VLT device.  Memories play a critical role in cloud computing but a far smaller one in PCs, since power consumption in PCs is not a widespread issue.

Interview with Pim Tuyls, President and CEO of Intrinsic-ID

October 4th, 2016

Gabe Moretti, Senior Editor

After the article on security published last week, I continued the conversation with more companies.  The Apple vs. FBI case showed that the stakes are high and the debate is heated.  Privacy is important, not only for guarding sensitive information but also for ensuring functionality in our digital world.

I asked Pim Tuyls his impressions on security in electronics systems.

Pim:

“Often, privacy is equated with security. However, ‘integrity’ is often the more important issue. This is especially true with the Internet of Things (IoT) and autonomous systems, which rely on the inputs they receive to operate effectively.  If these inputs are not secure, how can they be trusted?  Researchers have already tricked sensors of semi-autonomous cars with imaginary objects on the road, triggering emergency braking actions.  Counterfeit sensors are already on the market.

Engineers have built in redundancy and ‘common-sense’ rules to help ensure input integrity. However, such mechanisms were built primarily for reliability, not for security. So something else is needed. Looking at the data itself is not enough. Integrity needs to be built into sensors and, more generally, all end-points.”

Chip Design: Are there ways you think could be effective in increasing security?

Pim:

“One way to do this is to append a Message Authentication Code (MAC) to each piece of data. This is essentially a short piece of information that authenticates a message or confirms that the message came from the claimed sender (its authenticity) and has not been changed in transit (its integrity). To protect against replay attacks, the message is augmented with a timestamp or counter before the MAC is calculated.  A common way to implement a MAC is with hash functions (HMAC, or hash-based message authentication code). Hash functions such as the SHA-2 family are well-known and widely supported cryptographic primitives with efficient and compact implementations.”
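To make the mechanism Pim describes concrete, here is a minimal sketch in Python of a sensor frame protected by an HMAC-SHA-256 tag plus a monotonic counter against replay. The key, frame layout and counter scheme are illustrative assumptions only, not a description of any Intrinsic-ID product:

    import hmac, hashlib, struct

    SHARED_KEY = b"per-device-secret-key"   # pre-shared between sensor and host (assumed)

    def protect(reading: bytes, counter: int) -> bytes:
        # Append a monotonically increasing counter so a replayed frame can be detected,
        # then tag reading+counter with HMAC-SHA-256.
        msg = reading + struct.pack(">Q", counter)
        tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
        return msg + tag

    def verify(frame: bytes, last_counter: int) -> bool:
        msg, tag = frame[:-32], frame[-32:]
        counter = struct.unpack(">Q", msg[-8:])[0]
        expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
        # Constant-time tag comparison plus a freshness check against the last counter seen.
        return hmac.compare_digest(tag, expected) and counter > last_counter

    frame = protect(b"\x01\x42", counter=7)
    print(verify(frame, last_counter=6))   # True: authentic and fresh
    print(verify(frame, last_counter=7))   # False: replayed frame is rejected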

Chip Design: These approaches sound easy, so why are they not more widely adopted?

Pim:

“First, even though an algorithm like HMAC is efficient and compact, it may still be too high of a burden on the tiny microcontrollers and sensors that are the nerves of a complex system.  Authenticating every piece of data naturally takes up resources such as processing, memory and power.  In some cases, like in-vitro medical sensors, any reduction in battery life is not acceptable. Tiny sensor modules often do not have any processing capabilities. In automotive, due to the sheer number of sensors and controllers, costs cannot be increased.”

Chip Design: It is true that many IoT devices are very cost sensitive, I said; however, over recent years there has been increasing use of more powerful, 32-bit, often ARM-based microcontrollers. Many of these now come with basic security features like crypto accelerators and memory management. So some of the issues that prevent adoption of security are quickly being eroded.

Pim continued:

“A second obstacle relates to the complex logistics of configuring such a system. HMAC relies on a secret key that is shared between the sensor and the host.  Ensuring that each sensor has a unique key and that the key is kept secret via a centralized approach creates a single point of failure and introduces large liabilities for the party that manages the keys.”

Chip Design: What could be a cost-effective solution?

Pim concluded:

“A new solution to all these issues is based on SRAM Physical Unclonable Functions (PUFs). An SRAM PUF can reliably extract a unique key from a standard SRAM circuit on a standard microcontroller or smart sensor. The key is determined by tiny manufacturing differences unique to each chip. There is no central point of failure and no liability for key loss at the manufacturer.  Furthermore, as nothing is programmed into the chip, the key cannot even be extracted through reverse engineering or other chip-level attacks.

Of course adapting a new security paradigm is not something that should be done overnight. OEMs and their suppliers are rightly taking a cautious approach. After all, the vehicle that is now being designed will still be on the road in 25 years. For industrial and medical systems, the lifecycle of a product may even be longer.

Still, with technologies like SRAM PUF the ingredients are in place to introduce the next level of security and integrity, and pave the road for fully autonomous systems. Using such technologies will not only help to enhance privacy but will also ensure a higher level of information integrity.”
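As a purely conceptual illustration of the SRAM PUF idea (not Intrinsic-ID's implementation), the sketch below turns a chip-unique SRAM power-up pattern into a device key. Production SRAM PUFs additionally use helper data and error correction, since a few bits differ between power-ups; that step is omitted here and the startup bits are simulated:

    import hashlib

    def read_sram_startup_bits() -> bytes:
        # Placeholder: on real silicon this would be the uninitialized SRAM content
        # captured right after power-up, which is unique per chip.
        return bytes([0b10110010, 0b01101100, 0b11110000, 0b00011011] * 8)

    def derive_device_key(startup_bits: bytes) -> bytes:
        # Condition the raw, biased PUF response into a uniform 256-bit key.
        # A real design would first apply error correction using stored helper data.
        return hashlib.sha256(startup_bits).digest()

    key = derive_device_key(read_sram_startup_bits())
    print(key.hex())   # chip-unique key, never stored in non-volatile memory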

This brought me back to the article where a solution using PUF was mentioned.

eSilicon Fully Automates Semiconductor IP Selection and Purchasing

September 21st, 2016

Gabe Moretti, Senior Editor

Approximately half of the area of advanced system-on-chip (SoC) designs is composed of memory IP. Optimizing the memory subsystem of an SoC requires exploration of hundreds to thousands of possible configurations to identify the optimal match for the chip’s power, performance and area (PPA) requirements. This process can take weeks and traditionally designers and architects have not been able to fully explore all the options available to them, resulting in sub-optimal memory architectures and design closure challenges.
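To give a flavor of what that exploration looks like, the brute-force sketch below enumerates candidate memory configurations and picks the one with the best weighted PPA score. Every field, model and weight here is invented for illustration; a real flow would query characterized compiler data rather than these toy formulas:

    from itertools import product

    CAPACITY_BITS = 2048 * 64          # required capacity of the memory instance (assumed)

    # Candidate organizations: aspect ratio (words x bits) and number of banks (hypothetical)
    configs = [{"words": w, "bits": CAPACITY_BITS // w, "banks": k}
               for w, k in product((1024, 2048, 4096, 8192), (1, 2, 4))]

    def estimate_ppa(cfg):
        # Toy stand-in models for area (um^2), power (uW) and access time (ns)
        area  = CAPACITY_BITS * 0.12 * (1 + 0.05 * cfg["banks"])
        power = 0.5 * cfg["bits"] + 0.02 * cfg["words"] / cfg["banks"]
        delay = 0.4 + 0.05 * (cfg["words"] / 1024) / cfg["banks"]
        return area, power, delay

    def score(cfg, w_area=1.0, w_power=1.0, w_delay=2.0):
        # Lower is better: weighted, roughly normalized sum of the three PPA axes
        area, power, delay = estimate_ppa(cfg)
        return w_area * area / 1e5 + w_power * power / 1e2 + w_delay * delay

    best = min(configs, key=score)
    print(best, estimate_ppa(best))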

eSilicon’s STAR Navigator tool addresses this challenge by allowing designers to choose, evaluate and procure eSilicon IP online. Now, designers can access a wide variety of memory and I/O options online to find the best configuration for each design. With the latest enhancements to STAR Navigator, designers can now get quotes for their chosen IP online and procure it by uploading a valid purchase order. Designers are now in control of specifying and purchasing the most appropriate IP for their projects.

STAR Navigator increases the security of transactions by providing a single channel of communication that allows engineering, purchasing and finance to avoid misunderstandings.  By eliminating multiple emails to different recipients, the selection, purchase and delivery of the IP are all documented in one place.

Previously, purchasing memory IP and I/Os could be difficult for the engineer to manage. Accessing specific memory instances with a variety of options was time consuming and complex. STAR Navigator helps designers avoid complicated paperwork; find which memories will best help meet their SoC’s power, performance or area (PPA) targets; and easily isolate key data without navigating convoluted data sheets. Pre-loaded data is available to enable architects and designers to obtain immediate PPA information for their early chip planning.

STAR Navigator empowers chip architects and designers to choose the best and highly differentiated eSilicon-developed IP solutions by performing the following tasks online:

  • Generate dynamic, graphical analyses of PPA data
  • View data graphically, view it in table format, or download it to Microsoft Excel
  • Build and download a complete SoC memory subsystem
  • Generate and download IP front-end views
  • Make changes over time
  • Purchase the IP that best meets the needs of the design

STAR Navigator contains all eSilicon-developed IP across multiple foundries and technologies:

  • Standard and specialty memory compilers from 14nm to 180nm including CAMs, fast cache single-port SRAMs, multi-port register files, ultra-low-voltage SRAMs and pseudo two-port architectures targeted for specific market segments
  • General-purpose and specialty I/O libraries from 14nm to 180nm
  • High-bandwidth memory (HBM) Gen2 PHY in 14/16nm and 28nm
  • Foundries include Dongbu, GLOBALFOUNDRIES, LFoundry, Samsung, SMIC, TSMC and UMC

“STAR Navigator simplifies the comparison of results across multiple technologies, architectures and other characteristics and takes the guesswork out of hitting PPA targets,” said Lisa Minwell, eSilicon’s senior director, IP marketing. “This goes much, much deeper than IP portals that serve as IP catalogs. Using STAR Navigator, designers can download front-end views, run simulations in their own environments and then purchase the back-end views of the IP and I/Os that best fit their design. The choice of optimized IP is now in the hands of the designer.”

An interesting White Paper from S2C

September 6th, 2016

Gabe Moretti, Senior Editor

S2C has published a white paper on Chip Design titled “Choosing the best pin multiplexing method for your Multiple-FPGA partition.”  It is of particular interest to designers who use FPGA-based prototyping in the development of SoC designs.

Using multiple FPGAs to prototype a large design requires solving a classic problem: the number of signals that must pass between devices is greater than the number of I/O pins on an FPGA. The classic solution is to use a TDM (Time Domain Multiplexing) scheme that multiplexes two or more signals over a single wire or pin.

There are two distinct types of TDM implementations: synchronous and asynchronous. In synchronous TDM the multiplexing circuitry is driven by a fast clock that is synchronous with the (user’s) design clock.

In asynchronous mode, the TDM fast clock runs completely independent of the design clocks. Although asynchronous mode is slower, it supports multiple clocks and the timing constraints are easier to meet.
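As a purely behavioral illustration of the idea (the real implementation is RTL plus LVDS I/O inside the FPGAs, not software), the sketch below time-slices four signals over one pin at an assumed 4:1 TDM ratio; the signal names and framing are invented:

    TDM_RATIO = 4  # the TDM clock runs 4x the design clock, so 4 signals share 1 pin

    def transmit(signals):
        # One design-clock cycle: sample all signals, then send one per TDM-clock slot.
        assert len(signals) == TDM_RATIO
        return [signals[slot] for slot in range(TDM_RATIO)]   # serial stream on the wire

    def receive(stream):
        # The receiver counts TDM-clock slots and reassembles the parallel signals.
        return {f"sig{slot}": bit for slot, bit in enumerate(stream)}

    wire = transmit([1, 0, 1, 1])
    print(wire)            # [1, 0, 1, 1] time-sliced over a single pin
    print(receive(wire))   # {'sig0': 1, 'sig1': 0, 'sig2': 1, 'sig3': 1}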

The paper shows that S2C’s Prodigy Play Pro is a tool that provides design partitioning across multiple FPGAs and offers automatic TDM insertion based on asynchronous TDM using LVDS.  Prodigy Play Pro combines the technique of asynchronous LVDS TDM with a single-clock-cycle design, and can partition a design and perform automatic TDM insertion. The result is that the tool is able to:

1)   Optimize buses and match the LVDS resources in each bank considering such factors as trace lengths, matching impedances, and impedance continuity, and

2)   Avoid consuming FPGA design resources for the TDM circuitry by taking advantage of built-in reference clocks (e.g., IODELAY) to drive TDM clocks and resets.

Just click on the title of the white paper to read it in its entirety or go to http://www.s2cinc.com/resource-library/white-papers.
