
ESDA CEO Outlook Panel, not a Cause for Celebration

April 25th, 2017

Gabe Moretti, Senior Editor

About two weeks ago the ESD Alliance held its 2017 CEO Outlook Panel.  The panel used to be a yearly event for EDAC, the precursor of the ESD Alliance, but had not been held for the past few years.

I did not have the opportunity to attend the panel in person and was looking forward to reading the entire transcript on the ESDA site.  But there is no transcript.  There is a picture of the panelists with Ed Sperling as the moderator, and a brief video showing introductory remarks from each of the panelists.

Unfortunately, the introductory remarks could have been delivered by a set of industry cheerleaders, since there was really no depth to them.  You can guess the contents: greater need for silicon, IoT and IIoT (Industrial IoT) are the future.  Lip-Bu Tan did say something interesting by addressing data architecture within the IIoT.  Too many articles have been written about IoT hardware architecture, and I have found almost nothing that talks about the data architecture.  Lip-Bu emphasized the need for local computational and decision-making capability, thus limiting the need to transfer large amounts of data up the hierarchy of the system.  Given the complex connectivity of an IoT system, where everything is potentially connected to everything else, security would be a major concern should the need arise to transfer a large amount of information.  The smaller the amount of data transferred, the higher the security.
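
To make the data-architecture point concrete, here is a minimal sketch, in Python, of the kind of edge-node behavior Lip-Bu alludes to: readings are evaluated locally, and only anomalies or periodic digests travel up the hierarchy.  The class name, thresholds, and message format are hypothetical, not anything Cadence or the panel proposed.

```python
# Minimal sketch of edge-side decision making: evaluate readings locally and
# only escalate anomalies or periodic summaries. Names and thresholds are
# hypothetical.

from statistics import mean

class EdgeNode:
    def __init__(self, low=10.0, high=80.0, window=100):
        self.low, self.high, self.window = low, high, window
        self.buffer = []

    def ingest(self, reading: float):
        """Decide locally; return a message only when escalation is needed."""
        if reading < self.low or reading > self.high:
            return {"type": "anomaly", "value": reading}   # send immediately
        self.buffer.append(reading)
        if len(self.buffer) >= self.window:
            summary = {"type": "summary", "mean": mean(self.buffer)}
            self.buffer.clear()
            return summary                                  # periodic digest
        return None                                         # nothing leaves the node

node = EdgeNode()
uplink = [m for r in [25.0, 95.0, 30.0] if (m := node.ingest(r))]
print(uplink)   # only the out-of-range reading is transmitted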

What Lip-Bu did not say, but is implied, is the need for distributed intelligence in the system, with application-specific hardware playing a greater role.  It is back to the future: ASICs will once again step to the forefront, replacing software-based systems running on general-purpose hardware.

Wally Rhines noted that our industry's economics are back at the levels of six years ago in spite of the significant consolidation of our customer base.  It is called a recovery, and our industry has had less of one than most other industries.

The consolidation issue points out the real problem of the EDA industry.  We sell tools used in the design and development of a product, not its manufacturing.  Manufacturing volume means nothing to the EDA bottom line.  Fewer “makers” means fewer “tools needed”.

Mentor is now part of one of its customers, so does such consolidation matter to Wally?  I hear of course that Mentor will continue to do business in an independent manner, and that will be true unless and until a conflict of interest ensues.  Will Mentor sell its best tools to companies directly competing with Siemens in a specific, competitive, market?

Simon Segars of ARM was also part of the panel.  If he said something important as a result of now working for SoftBank, you could not have guessed it from his introductory remarks either.  He just repeated Aart de Geus' observation about the world needing more silicon.  It has been clear since the acquisition that SoftBank's other captive industries need more silicon and more IP cores; that was the reason for the acquisition!  ARM will expertly create whatever SoftBank needs and will market what it creates to the outside world.  The other way around will not happen, at least not in any important way.

Speaking as the scientist that he is, Aart pointed out that as algorithms' complexity increases, the need for computational capability increases as well, hence the need for more silicon.  It is an assumption that increased silicon production means increased EDA revenue.  This is a fallacy, since EDA revenues are realized at the front end of a project and do not grow with product volume!  The amount of EDA products used in wafer production and testing is small in comparison to the tools used to design and test before production.

There is also a significant difference in the revenue generated by different types of silicon.  A 90 nm device requires less up-front investment in tools than a 10 nm device.  And there will be many more of the former than of the latter.

I think that reviving the CEO Panel is a good thing.  It shows, at least, that accepted leaders of the EDA industry are willing to appear in public and deliver statements without fear of sounding irrelevant.  And the lack of a full transcript, given the high level of professionalism now in the ESDA, must mean that nothing worthy of greater analysis was said during the panel.

Cadence Builds a Winged Horse for Verification

April 17th, 2017

Gabe Moretti, Senior Editor

Cadence reached back into Greek mythology to name its new verification engine for digital designs: Pegasus.  The hope, of course, is that the new product will prove itself more than a popular myth and will soar to new heights of efficiency.

The company describes Pegasus as: “a massively parallel, cloud-ready physical verification signoff solution that enables engineers to deliver advanced-node ICs to market faster. The new solution is part of the full-flow Cadence® digital design and signoff suite and provides up to 10X faster design rule check (DRC) performance on hundreds of CPUs while also reducing turnaround time from days to hours versus the previous-generation Cadence solution.”

The major benefits offered by Pegasus, according to Dr. Anirudh Devgan, executive vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence, are:

  • Massively parallel architecture: The solution incorporates a massively parallel architecture that provides unprecedented speed and capacity, enabling designers to easily run on hundreds of CPUs to speed up tapeout times.
  • Reduced full-chip physical verification runtimes: The solution’s gigascale processing offers near-linear scalability that has been demonstrated on up to 960 CPUs, allowing customers to dramatically reduce DRC signoff runtimes.
  • Low transition cost: Using existing foundry-certified rule decks, customers achieve 100 percent accurate results with a minimal learning curve.
  • Flexible cloud-ready platform: The solution offers native cloud support that provides an elastic and flexible compute environment for customers facing aggressive time-to-market deadlines.
  • Efficient use of CPU resources: The solution’s data flow architecture enables customers to optimize CPU usage, regardless of machine configurations and physical location, providing maximum flexibility to run on a wide range of hardware, achieving the fastest DRC signoff.
  • Native compatibility with Cadence digital and custom design flows: The Pegasus Verification System integrates seamlessly with the Virtuoso custom design platform, delivering instantaneous DRC signoff checks to guide designers to a correct-by-construction flow that improves layout productivity. An integration with the Innovus Implementation System enables customers to run the Pegasus Verification System during multiple stages of the flow for a wide range of checks—signoff DRC and multi-patterning decomposition, color-balancing to improve yield, timing-aware metal fill to reduce timing closure iterations, incremental DRC and metal fill during engineering change orders (ECOs) that improve turnaround time, and full-chip DRC.

Anirudh concluded that: “The Pegasus Verification System’s innovative architecture and native cloud-ready processing provides an elastic and flexible computing environment, which can enable our customers to complete full-chip signoff DRC on advanced-node designs in a matter of hours, speeding time to market.”

Industrial IoT, a Silicon Valley Opportunity

April 11th, 2017

Gabe Moretti, Senior Editor

I read a white paper written by Brian Derrick, VP of Corporate Marketing at Mentor, titled Industrial IoT (IIoT) – Where Is Silicon Valley?  It is an interesting discussion of the IIoT market, pointing out that most of the leading companies in the market are not located in Silicon Valley.  In fact Brian lists only Applied Materials as having a measurable market share in IIoT (1.4% in 2015), plus HP and Avago as sensor providers.  Amazon and Google are listed as Cloud Service Providers; Cisco, ProSoft, and CalAmp as Intelligent Gateway Providers; and Sierra Wireless as a Machine to Machine Communication Hardware supplier.

It does not make sense to list EDA companies in the valley that supply the tools used by many of the IIoT vendors to design their products.  Unfortunately, it is the service nature of EDA that allows analysts to overlook the significant contribution of our industry to the electronics market place.

There is actually a company in Silicon Valley that in my opinion offers a good example of what IIoT is: eSilicon.  The company started as a traditional IP provider, but over the last three years it has developed into a turnkey supplier, supporting a customer from design through manufacturing of an IC, with integrated analysis tools plus order, billing and WIP reporting, all integrated in a system it calls STAR.

A customer can submit a design that uses an eSilicon IP, analyze the physical characteristics of the design, choose a foundry, receive a quote, place an order, evaluate first silicon, and go into production, all within the STAR system.  This combines design, analysis, ordering, billing, and manufacturing operations, significantly increasing reliability through integration.  The development chain, which usually requires dealing with many corporate contributors and often more than one accounting system, has been simplified through integration not just of engineering software tools, but of accounting tools as well.

I think that we will regret the use of the term "Internet" when describing communication capabilities between and among "Things".  The internet is not just hardware, it is a protocol.  A significant amount of communication in the IoT architecture takes place using Bluetooth and WiFi hardware and software, not the internet.  In fact, I venture that soon we might find that the internet protocol is the wrong protocol to use.  We need networks that can be switched from public to private, and in fact an entire hierarchy of connectivity that offers better security, faster communication, and flexibility of protocol utilization.

I find that the distinction between real-time and batch processing is disappearing because people have grown too used to real time.  But real-time connectivity is open to more security breaches than batch processing.  On the factory floor, for example, a machine can perform thousands of operations without being connected to the internet all the time.  Status reports and production statistics, for example, can be collected at specific times, and only at those times does the machine need to be connected to the internet.  For the machine to continuously tell a central control unit that all is normal is redundant.  All we should care about is whether something is not normal.
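
As an illustration of that report-by-exception idea, here is a minimal sketch assuming a hypothetical machine controller: normal statistics are batched and uploaded only at scheduled collection times, and a connection is opened out of schedule only when something is not normal.  The names and the transport call are placeholders, not a real API.

```python
# Report-by-exception sketch: the machine runs disconnected, queues production
# statistics locally, and connects only at scheduled times or on a fault.

import time

class MachineReporter:
    def __init__(self, collection_interval_s=3600):
        self.interval = collection_interval_s
        self.last_upload = time.time()
        self.stats = {"operations": 0, "faults": 0}

    def record_operation(self, ok: bool):
        self.stats["operations"] += 1
        if not ok:
            self.stats["faults"] += 1
            self._connect_and_send({"event": "fault", **self.stats})  # exception path

    def maybe_upload(self):
        """Called periodically; batches normal statistics instead of streaming them."""
        if time.time() - self.last_upload >= self.interval:
            self._connect_and_send({"event": "scheduled_report", **self.stats})
            self.stats = {"operations": 0, "faults": 0}
            self.last_upload = time.time()

    def _connect_and_send(self, payload):
        # Placeholder: a real system would open a session, authenticate,
        # transmit, and close; the machine stays offline otherwise.
        print("uploading", payload)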

The bottom line is that there are many opportunities for Silicon Valley corporations to become participants in the IIoT, and, of course, start-ups, a specialty of the Valley, can find a niche in the market.

Portable Stimulus

March 23rd, 2017

Gabe Moretti, Senior Editor

Portable Stimulus (PS) is not a new sex toy, and it is not an Executable Specification either.  So what is it?  It is a method, or rather it will be once the work is finished, for defining inputs independently of the verification tool used.

As the complexity of a system increases, the cost of its functional verification increases at a more rapid pace.  Verification engineers must consider not only wanted scenarios but also erroneous ones.  Increased complexity increases the number of unwanted scenarios.  To perform all the required tests, engineers use different tools, including logic simulators, accelerators and emulators, and FPGA prototyping tools.  Porting a test from one tool to another is a very time-consuming job that is also prone to errors.  The reason is simple: not only does each class of tools use a different syntax, in some cases it also uses different semantics.

The Accellera Systems Initiative, commonly known simply as Accellera, is working on a solution.  It formed a Working Group to develop a way to define tests that is independent of the tool used to perform the verification.  The group, made up of engineers and not of marketing professionals, chose as its name what it is supposed to deliver, a Portable Stimulus, since verification tests are made up of stimuli to the device under test (DUT) and the stimuli will be portable among verification tools.

Adnan Hamid, CEO of Breker, gave me a demo at DVCon US this year.  Their product is trying to solve the same problem, though the standard being developed will only be similar, that is, based on the same concept.  Both are descriptive languages, Breker's based on SystemC and PS based on SystemVerilog, but the approach is the same: the verification team develops a directed network where each node represents a test.  The Accellera work must, of course, be vendor independent, so its task is more complex.  The figure below may give you an idea of the complexity.

Once the working group is finished, and it expects to be finished no later than the end of 2017, each EDA vendor could then develop a generator that translates a test described in the PS language into the appropriate string of commands and stimuli required to actually perform the test with the tool in question.
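
To illustrate the concept (this is not the actual Portable Stimulus syntax, just a sketch of the idea), the verification intent can be thought of as a directed graph of test actions that a per-tool generator walks to emit whatever commands a given engine understands.  The tool names and command templates below are hypothetical.

```python
# Illustrative only: one intent description, retargeted to different engines.

scenario = {
    "reset":          {"next": ["config_dma"]},
    "config_dma":     {"next": ["start_transfer"]},
    "start_transfer": {"next": ["check_result"]},
    "check_result":   {"next": []},
}

def walk(graph, start="reset"):
    node = start
    while node:
        yield node
        nxt = graph[node]["next"]
        node = nxt[0] if nxt else None

def generate(graph, backend):
    """Walk the same intent graph and emit engine-specific commands."""
    templates = {
        "simulator": "run_task {};",        # e.g. a testbench sequence call
        "emulator":  "xact {} --stream",    # e.g. a transactor command
    }
    return [templates[backend].format(step) for step in walk(graph)]

print(generate(scenario, "simulator"))
print(generate(scenario, "emulator"))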

The approach, of course, is such that the product of the Accellera work can then be easily submitted to the IEEE for standardization, since it will obey the IEEE requirements for standardization.

My question is: What about Formal Verification?  I believe that it would be possible to derive assertions from the PS language.  If this can be done it would be a wonderful result for the industry.  An IP vendor, for example, will then be able to provide only one definition of the test used to verify the IP, and the customer will be able to readily use it no matter which tool is appropriate at the time of acceptance and integration of the IP.

A Brief History of Verification

March 2nd, 2017

Gabe Moretti, Senior Editor

This year’s DVCon is turning out to be very interesting.  It may be something that Jim Hogan said.  Jim insisted, in more than one panel, that we are at the cusp of a major change, an opportunity for startups that has a greater probability of revolutionary success than in the last few years in the area of verification and, I add, circuit design.

His observations got me thinking, reviewing my experiences in the various aspects of verification I have been involved in.  What follows is my existential history of verification.

Logic is What Counts

When I wrote my first logic simulator in 1968, I only had to deal with two-state Boolean logic.  A node was either on or off.  Everything else about the circuit was not important.  It was not very long after that that things got more complex.  Boolean logic soon had three states, as I and other verification engineers met the floating gate.  Shortly thereafter memories became a real physical presence in designs, and four-state logic became the norm: we had to know whether a bit was initialized in a deterministic manner or not.  From then on the verification engineer's major problem was the size of the circuit to be simulated and how efficient we could make the simulator so that run times were as short as possible.  And by the way, this is still going on today, although for more complex reasons.
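
For readers who have not met it, here is a minimal sketch of the four-state logic mentioned above, following the usual Verilog-style semantics of 0, 1, X (unknown, such as an uninitialized memory bit) and Z (floating): a controlling 0 still forces an AND gate low, while an unknown or floating input propagates as unknown.

```python
# Four-state AND gate, Verilog-style semantics. A floating input (Z) is
# treated as unknown (X) when driving a gate.

def norm(v):
    return 'X' if v == 'Z' else v   # a floating input reads as unknown

def and4(a, b):
    a, b = norm(a), norm(b)
    if a == '0' or b == '0':
        return '0'                  # a controlling 0 dominates even an X
    if a == '1' and b == '1':
        return '1'
    return 'X'                      # anything else is unknown

for x, y in [('1', '1'), ('0', 'X'), ('1', 'X'), ('Z', '1')]:
    print(f"{x} AND {y} = {and4(x, y)}")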

Enter Physics

As the size of transistors got smaller and smaller, verification engineers found that physics grew in importance.  We could no longer assume that a circuit was well behaved in the physical domain, but had to worry about power, electromagnetic effects, and thermal characteristics.  Verification engineers had to understand physics or, better yet, physics professionals had to join the ranks of verification engineers.

Today, thanks in part to logic synthesis, Boolean logic issues are well understood, and development work is focused on the physical characteristics of the circuit and, just to make things interesting, its immediate environment.  Today's complex problems deal with the management of clock networks and their interaction with the power distribution and consumption that determine the functional state of the circuit.  How power transmission affects nearby circuitry, how the current of electrons warms the device, and how temperature impacts the probability that a transistor can maintain the state it is supposed to have are today's issues.

Behavioral Scientists Find a New Job

There has been much talk at this year's DVCon about safety and security, issues that are really behavioral and not about logic or physics.  During a lunch panel on Tuesday, the importance of safety and security in complex systems, often employing diverse and distributed circuitry, was indicated as the next important aspect of a system to verify.  Defense, automotive, and mobile applications were used as examples.  More than one panelist, with Jim Hogan again assuming a leading role, spoke about the desirability of self-correcting systems.  If a safety failure or a security breach occurred, the most desirable thing would be to have the circuitry prevent the occurrence or fix itself, thus limiting the damage and the cost of repairs.  Verification engineers will have to deal with behavioral science in order to understand the danger, build a knowledge base of past incidents, and develop circuitry that will identify and prevent the faulty behavior in the future.  This, of course, requires that we can define what good behavior is and what would signify a deviation from it.

Can We Do Better?

Now that I no longer develop verification tools, I think that both hardware and software developers have gotten lazy.  Transistors are now assumed to exist to the point that no one ever asks himself or herself “will this fit?” since the answer is “of course, and with transistors to spare”.  We design and develop complex circuits not just because they have to be but because they can be.

The amount of data produced by the various verification tools is so large that it is becoming impossible for a human to correlate all of it into a meaningful indication of the wellness and effectiveness of the design.  Tools to manage big data are required, especially when decisions must be made in the field during actual deployment.

And as transistors grow smaller and smaller in order to provide more of them every couple of years, we are unconsciously marching toward the boundary of Einstein's space-time environment, getting closer and closer to quantum physics.  What will happen when one day we are able to determine the state of a transistor or the trajectory of its electrons, but not both at the same time?

I believe that it is time to go back in time and ask ourselves "Is there another implementation that is simpler and uses fewer transistors?" the same way I used to ask "How can I make my program fit in 8K bytes of memory?"  Verification engineers must develop an efficiency measurement tool that warns design engineers when they are not efficient.  Not for the sake of elegance, but for the sake of simplicity.  Simplicity in circuit design means fewer physical problems and fewer behavioral problems.

The best business opportunity of this approach, of course, is in the area of IP.  Design, develop, and market the most efficient implementation of a function and, assuming you have some understanding of business management, you will be a successful entrepreneur.

Grant Pierce Named BoD Chair of the ESD Alliance

February 21st, 2017

Gabe Moretti, Senior Editor

The ESD Alliance (ESDA) elected Grant Pierce (CEO of Sonics) as its Chairman of the Board a few weeks ago.  Grant is only the second Chair who is not a high-level executive of one of the big three EDA companies, and the first since the organization, formerly EDAC, renamed itself.  During the EDAC days it was customary for the CEOs of Cadence, Mentor and Synopsys to pass the title among themselves in an orderly manner.  The organization then reflected the mission of the EDA industry: to support the development of hardware-intensive silicon chips following Moore's Law.

Things have changed since then, and the consortium responded by first appointing a new executive director, Bob Smith, then changing its name and its mission.  I talked with Grant to understand his view from the top.

Grant Pierce, Sonics CEO

Grant pointed out that: "We are trying to better reflect what has happened in the market place, both in terms of how our customers have developed further in the world of system on chip and what we have seen in the development of the EDA world, where today the IP offerings in the market, both those from independent companies and those from EDA companies, are critical and integral to the whole ecosystem for building today's modern chips."

Grant noted that ESDA has expanded its focus and has embraced not only hardware design and development but also software.  That does not mean, he added, that the EDA companies are losing importance; instead they are gaining a seat at the table with the software and system design community in order to expand the scope of their businesses.

From my point of view, I interjected, I see the desired change implemented very slowly, still reacting to and not anticipating new demands.  So what do you think can happen in the next twelve months?

"From an ESDA point of view you are going to see us broadening the membership," answered Grant.  "We are looking to see how we can expand the focus of the organization through its working groups to zero in on new topics that are broader than the ones that are currently there, like expanding beyond what is a common operating system to support, for example.  I think you will see at a minimum two fronts, one opening on the software side while at the same time continuing work on the PPA (Power, Performance, Area) issues of chip design.  This involves a level of participation from parties that have not interacted with this organization before."

Grant believes that there should be more emphasis on the needs of small companies, those where innovation is taking place.  ESDA needs to seek the best opportunity to invigorate those companies.  “At the same time we must try to get system companies involved in an appropriate fashion, at least to the degree that they represent the software that is embedded in a system” concluded Grant.

We briefly speculated on what the RISC-V movement might mean to ESDA.  Grant does not see much value in ESDA focusing on a specific instruction set, although he conceded that there might be value should RISC-V join ESDA.  I agree with the first part of his judgment, but I do not see any benefit to either party, or the industry for that matter, from RISC-V joining ESDA.

From my point of view ESDA has a big hurdle to overcome.  For a few years, before Bob Smith was named executive director, EDAC was somewhat stagnant, and now it must catch up with market reality and fully address the complete system issue.  Not just hardware/software, but analog/digital, and the increased use of FPGA and MEMS.

For sure, representing an IP company gives Grant an opportunity to stress a different point of view within ESDA than the traditional EDA view.  The IP industry would not even exist without a system approach to design and it has changed the way architects think when first sketching a product on the back of an envelope.

DVCon Is a Must Attend Conference for Verification and Design Engineers

February 13th, 2017

Gabe Moretti, Senior Editor

Dennis Brophy, Chair of this year's DVCon, said that "The 2017 Design and Verification Conference and Exhibition U.S. will offer attendees a comprehensive selection of 39 papers, 9 tutorials, 19 posters, 2 panels, a special session on Functional Verification Industry Trends, and a keynote address by Anirudh Devgan, Senior Vice President and General Manager of the Digital & Signoff Group and System & Verification Group at Cadence."

Sponsored by Accellera Systems Initiative, DVCon U.S. will be held February 27-March 2, 2017 at the DoubleTree Hotel in San Jose, California.

DVCon was initially sponsored by VHDL International and Open Verilog International, but its growth really started after the two organizations merged to form Accellera, which was renamed Accellera Systems Initiative during its merger with the Open SystemC Initiative.  It seems that every EDA organization nowadays needs the word "systems" in its name, as if afraid of being seen as of lesser importance without making very sure that everyone knows it is aware of the existence of systems.

DVCon is now a truly international conference holding events not only in the US but also in Europe, India and, for the first time this year, in China.

The aim of Accellera is to build and serve a community of professionals who focus on the development and verification of hardware systems, with the awareness that software's role in system design is growing rapidly.  Dennis Brophy observed that "Coming together as a community is fostered by the DVCon Expo. The bigger and better exposition will run from Monday evening to Wednesday evening. See the program for specific opening and closing times. The Expo is a great place to catch up with commercial vendors and learn the latest in product developments. It is also great to connect with colleagues and exchange and share information and ideas. Join us for the DVCon U.S. 2017 "Booth Crawl" where after visiting select exhibitors you will be automatically entered for a lucky draw."

Although the titles of the papers presented in the technical conference focus on EDA technologies, their impact covers application areas as diverse as automotive and communications, the use of MEMS and FPGAs in system design, and cloud computing and RF.

“DVCon has long been the technical and social highlight of the year for design and verification engineers,” stated Tom Fitzpatrick, DVCon U.S. 2017 Technical Program Chair.  “Through the hard work of a large team of dedicated reviewers, we have chosen the best of over one hundred submitted abstracts from deeply knowledgeable colleagues within the industry to help attendees learn how to improve their verification efforts. We also have two days of extended tutorials where attendees can get an in-depth look at the cutting edge of verification, not to mention the Exhibit Floor where over 30 companies will be demonstrating their latest tools and technologies.  In between, there are plenty of opportunities to network, relax and take advantage of a fun and welcoming atmosphere where attendees can reconnect with old friends and former colleagues or make new friends and contacts. The value of DVCon goes well beyond the wealth of information found in the Proceedings. Being there makes all the difference.”

Dennis Brophy added that on Monday, February 27th there will be a presentation covering the Portable Stimulus work being done under Accellera sponsorship.  The working group has made significant progress toward defining what it is and how it works.  The goal is to have the Board of Accellera authorize a ballot to make the result an industry standard and then to take it to the IEEE to complete the standardization work.

As happened last year, the exhibit space quickly filled with vendors who understood the advantage of talking with technologists who specialize in the verification and design of complex systems.

Devgan's keynote, "Tomorrow's Verification Today," will review the latest trends that are redefining verification from IP to system level, with an increasingly application-specific set of demands changing the landscape for hardware and software development.  Over the past decade, verification complexity and the demands on engineering teams have continued to rise rapidly.  However, the supporting automation tools and flows have improved only incrementally, resulting in a verification gap.  It is time to redefine how verification should be approached to accelerate innovation in the next decade.  The keynote will be delivered on Tuesday, February 28.

On the same day, from 1:30-2:30pm in the Oak/Fir Ballroom, the conference will offer a special session with Harry Foster, Chief Scientist for Mentor Graphics' Design Verification Technology Division.  Foster has been asked to present "Trends in Functional Verification: A 2016 Industry Study," based on the Wilson Research Group's 2016 study.  The findings from the 2016 study provide invaluable insight into the state of today's electronics industry.

There will be two full days of in-depth tutorials: Accellera Day with three tutorials on Monday and sponsored tutorials on Thursday.  There are also many technical papers and posters, plus two intriguing panels.

There will be plenty of networking opportunities, especially during the exhibition.  There will be a booth crawl on Monday, February 27 from 5:00-7:00pm and receptions both Tuesday and Wednesday in the exhibit hall.  Exhibits will be open Tuesday from 5:00-7:00pm and Wednesday and Thursday from 2:30-6:00pm.

The awards for Best Paper and Best Poster will be presented at the beginning of the reception on Wednesday.  For the complete DVCon U.S. 2017 schedule, including a list of sessions, tutorials, sponsored luncheons and events, visit www.dvcon.org.

EDA in the year 2017 – Part 2

January 26th, 2017

Gabe Moretti, Senior Editor

The first part of this article, published last week, covered design methods and standards in EDA together with industry predictions that impact all of our industry.  This part will cover automotive, design verification and FPGA.  I found it interesting that David Kelf, VP of Marketing at OneSpin Solutions, thought that machine learning will begin to penetrate the EDA industry as well.  He stated: "Machine learning hit a renaissance and is finding its way into a number of market segments. Why should design automation be any different?  2017 will be the start of machine learning creating a new breed of design automation tool, equipped with this technology and able to configure itself for specific designs and operations to perform them more efficiently. By adapting algorithms to suit the input code, many interesting things will be possible."

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence touched on an issue that is being talked about more recently: security.  He noted that: “In 2016, IoT bot-net attacks brought down large swaths of the Internet – the first time the security impact of IoT was felt by many. Private and nation-state attacks compromised personal/corporate/government email throughout the year. “

In 2017, we have the potential for security concerns to start a retreat from always-on social media and a growing value on private time and information. I don’t see a silver bullet for security on our horizon. Instead, I anticipate an increasing focus for products to include security managers (like their safety counterparts) on the design team and to consider safety from the initial concept through the design/production cycle.

Automotive

The automotive industry has increased its use of electronics year over year for a long time.  At this point an automobile is a true intelligent system, at least as far as what the driver and passengers can see and hear: the "infotainment" system.  Late-model cars also offer collision avoidance and stay-in-lane functions, but more is coming.

Here is what Wally Rhines thinks: “Automotive and aerospace designers have traditionally been driven by mechanical design.  Now the differentiation and capability of cars and planes is increasingly being driven by electronics.  Ask your children what features they want to see in a new car.  The answer will be in-vehicle infotainment.  If you are concerned about safety, the designers of automobiles are even more concerned.  They have to deal with new regulations like ISO 26262, as well as other capabilities, in addition to environmental requirements and the basic task of “sensor fusion” as we attach more and more visual, radar, laser and other sensors to the car.  There is no way to reliably design vehicles and aircraft without virtual simulation of electrical behavior.

In addition, total system simulation has become a requirement.  How do you know that the wire bundle will fit through the hole in the door frame?  EDA tools can tell you the answer, but only after seeking out the data from the mechanical design.  Wiring in a car or plane is a three dimensional problem.  EDA tools traditionally worry about two dimension routing problems.  The world is changing.  We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.”

David Kelf, VP of Marketing at OneSpin Solutions, observed: "OneSpin called it last year and I'll do it again –– Automotive will be the "killer app" of 2017. With new players entering the market all the time, we will see impressive designs featured in advanced cars, which themselves will move toward a driverless future.  All automotive designs currently being designed for safety will need to be built to be as secure as possible. The ISO 26262 committee is working on security as well as safety and I predict security will feature in the standard in 2017. Tools to help predict vulnerabilities will become more important. Formal, of course, is the perfect platform for this capability. Watch for advanced security features in formal."

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence noted: “In 2016, autonomous vehicle technology reached an inflection point. We started seeing more examples of private companies operating SAE 3 in America and abroad (Singapore, Pittsburgh, San Francisco).  We also saw active participation by the US and world governments to help guide tech companies in the proliferation and safety of the technology (ex. US DOT V2V/V2I standard guidelines, and creating federal ADAS guidelines to prevent state-level differences). Probably the most unique example was also the first drone delivery by a major retailer, something which was hinted at 3 years prior and seemingly just a flight of fancy then.

Looking ahead to 2017, both the breadth and depth are expected to expand, including the first operation of SAE level 4/5 in limited use on public streets outside the US, and on private roads inside US. Outside of ride sharing and city driving, I expect to see the increasing spread of ADAS technology to long distance trucking and non-urban transportation. To enable this, additional investments from traditional vehicle OEM’s partnering with both software and silicon companies will be needed to enable high-levels of autonomous functions. To help bring these to reality, I also expect the release of new standards to guide both the functional safety and reliability of automotive semiconductors. Even though the pace of government standards can lag, for ADAS technology to reach its true potential, it will require both standards and innovation.”

FPGA

The IoT market is expected to provide a significant opportunity for the electronics industry to grow revenue and open new markets.  I think the use of FPGAs in IoT devices will increase the adoption of these devices in system design.

I asked Geoff Tate, CEO of FlexLogix, his opinions on the subject.  He offered four points that he expects to become reality in 2017:

1. the first customer chip will be fabricated using embedded FPGA from an IP supplier

2. the first customer announcements will be made of customers adopting embedded FPGA from an IP supplier

3. embedded FPGAs will be proven in silicon running at 1GHz+

4. the number of customers doing chip design using embedded FPGA will go from a handful to dozens.

Zibi Zalewski, Hardware Division General Manager at Aldec also addressed the FPGA subject.

"I believe FPGA devices are an important technology player to mention when talking about what to expect in 2017. With the growth of embedded electronics driven by the Automotive, Embedded Vision and/or IoT markets, FPGA technology becomes a core element, particularly for products that require low power and re-programmability.

Features of FPGA such as pipelining and the ability to execute and easily scale parallel instances of the implemented function allow for the use of FPGA for more than just the traditionally understood embedded markets. FPGA computing power usage is exploding in the High Performance Computing (HPC) where FPGA devices are used to accelerate different scientific algorithms, big data processing and complement CPU based data centers and clouds. We can’t talk about FPGA these days without mentioning SoC FPGAs which merge the microprocessor (quite often ARM) with reprogrammable space. Thanks to such configurations, it is possible to combine software and hardware worlds into one device with the benefits of both.

All those activities have led to solid growth in FPGA engineering, which is pushing further growth of FPGA development and verification tools. This includes not only typical solutions in simulation and implementation. We should also observe solid growth in tools and services simplifying the usage of FPGAs for those who don't even know this technology, such as high-level synthesis or engineering services to port C/C++ sources into FPGA-implementable code. The demand for development environments like compilers supporting both software and hardware platforms will only grow, with the main goal focused on ease of use by a wide group of engineers who were not even considering the FPGA platform for their target application.

At the other end of the FPGA rainbow are the fast-growing, largest FPGAs offered by both Xilinx and Intel/Altera. ASIC design emulation and prototyping will push harder and harder on the so-called big-box emulators, offering higher performance and a significantly lower price per gate and so becoming affordable for even smaller SoC projects. This is especially true when partnered with high-quality design mapping software that handles multi-FPGA partitioning, interconnections, clocks and memories."

Design Verification

There are many methods to verify a design, and companies will quite often use more than one on the same design.  Each method, whether simulation, formal analysis, or emulation, has its strong points.

For many years, logic simulation was the only verification method available, although hardware acceleration of logic simulation existed as well.

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence submitted a thorough analysis of verification issues.  He wrote: “From a verification perspective, we will see further market specialization in 2017 – mobile, server, automotive (especially ADAS) and aero/defense markets will further create specific requirements for tools and flows, including ISO 26262 TCL1 documentation and support for other standards. The Internet of Things (IoT) with its specific security and low power requirements really runs across application domains.  Continuing the trend in 2016, verification flows will continue to become more application-specific in 2017, often centered on specific processor architectures. For instance, verification solutions optimized for mobile applications have different requirements than for servers and automotive applications or even aerospace and defense designs. As application-specific requirements grow stronger and stronger, this trend is likely to continue going forward, but cross-impact will also happen (like mobile and multimedia on infotainment in automotive).

Traditionally ecosystems have been centered on processor architectures. Mobile and Server are key examples, with their respective leading architectures holding the lion's share of their respective markets. The IoT is mixing this up a little as more processor architectures can play and offer unique advantages, with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even OpenSource hardware architectures look like they will be very relevant judging from the recent momentum, which eerily reminds me of the early Linux days. It's one of the most entertaining spaces to watch in 2017 and for years to come.

Verification will become a whole lot smarter. The core engines themselves continue to compete on performance and capacity. Differentiation further moves in how smart applications run on top of the core engines and how smart they are used in conjunction.

For the dynamic engines in software-based simulation, the race towards increased speed and parallel execution will accelerate together with flows and methodologies for automotive safety and digital mixed-signal applications.

In the hardware emulation world, differentiation for the two basic ways of emulating – processor-based and FPGA-based – will be more and more determined by how the engines are used. Specifically, the various use models for core emulation, like verification acceleration, low power verification, dynamic power analysis and post-silicon validation—often driven by the ever growing software content—will extend further, with more virtualization joining real world connections. Yes, there will also be competition on performance, which clearly varies between processor-based and FPGA-based architectures—depending on design size and how much debug is enabled—as well as the versatility of use models, which determines the ROI of emulation.

FPGA-based prototypes address the designer’s performance needs for software development, using the same core FPGA fabrics. Therefore, differentiation moves into the software stacks on top, and the congruency between emulation and FPGA-based prototyping using multi-fabric compilation allows mapping both into emulation and FPGA-based prototyping.

All this is complemented by smart connections into formal techniques and cross-engine verification planning, debug and software-driven verification (i.e. software becoming the test bench at the SoC level). Based on standardization driven by the Portable Stimulus working group in Accellera, verification reuse between engines and cross-engine optimization will gain further importance.

Besides horizontal integration between engines—virtual prototyping, simulation, formal, emulation and FPGA-based prototyping—the vertical integration between abstraction levels will become more critical in 2017 as well. For low power specifically, activity data created from RTL execution in emulation can be connected to power information extracted from .lib technology files using gate-level representations or power estimation from RTL. This allows designers to estimate hardware-based power consumption in the context of software using deep cycles over longer timeframes that are emulated."

Anyone who knows Frank will not be surprised by the length of the answer.

Wally Rhines, Chairman and CEO of Mentor Graphics, was less verbose.  He said: "Total system simulation has become a requirement.  How do you know that the wire bundle will fit through the hole in the door frame?  EDA tools can tell you the answer, but only after seeking out the data from the mechanical design.  Wiring in a car or plane is a three dimensional problem.  EDA tools traditionally worry about two dimension routing problems.  The world is changing.  We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.

This will create a new market for EDA.  It will be larger than the traditional IC design market for EDA.  But it will be based upon the basic simulation, verification and analysis tools of IC design EDA.  Sometime in the near future, designers of complex systems will be able to make tradeoffs early in the design cycle by using virtual simulation.  That know-how will come from integrated circuit design.  It’s no longer feasible to build prototypes of systems and test them for design problems.  That approach is going away.  In its place will be virtual prototyping.  This will be made possible by basic EDA technology.  Next year will be a year of rapid progress in that direction.  I’m excited by the possibilities as we move into the next generation of electronic design automation.”

The increasing size of chips has made emulation a more popular tool than in the past.  Lauro Rizzatti, Principal at Lauro Rizzatti LLC, is a pioneer in emulation and continues to be thought of as a leading expert in the method.  He noted: “Expect new use models for hardware emulation in 2017 that will support traditional market segments such as processor, graphics, networking and storage, and emerging markets currently underserved by emulation –– safety and security, along with automotive and IoT.

Chips will continue to be bigger and more complex, and include an ever-increasing amount of embedded software. Project groups will increasingly turn to hardware emulation because it's the only verification tool to debug the interaction between the embedded software and the underlying hardware. It is also the only tool capable of estimating power consumption in a realistic environment, when the chip design is booting an OS and processing software apps. More to the point, hardware emulation can thoroughly test the integrity of a design after the insertion of DFT logic, since it can verify gate-level netlists of any size, a virtually impossible task with logic simulators.

Finally, its move to the data center solidifies its position as a foundational verification tool that offers a reasonable cost of ownership.”

Formal verification tools, sometimes referred to as “static analysis tools” have seen their use increase year over year once vendors found human interface methods that did not require a highly-trained user.  Roger Sabbagh, VP of Application Engineering at Oski Technology pointed out: “The world is changing at an ever-increasing pace and formal verification is one area of EDA that is leading the way. As we stand on the brink of 2017, I can only imagine what great new technologies we will experience in the coming year. Perhaps it’s having a package delivered to our house by a flying drone or riding in a driverless car or eating food created by a 3-D printer. But one thing I do know is that in the coming year, more people will have the critical features of their architectural design proven by formal verification. That’s right. System-level requirements, such as coherency, absence of deadlock, security and safety will increasingly be formally verified at the architectural design level. Traditionally, we relied on RTL verification to test these requirements, but the coverage and confidence gained at that level is insufficient. Moreover, bugs may be found very late in the design cycle where they risk generating a lot of churn. The complexity of today’s systems of systems on a chip dictates that a new approach be taken. Oski is now deploying architectural formal verification with design architects very early in the design process, before any RTL code is developed, and it’s exciting to see the benefits it brings. I’m sure we will be hearing a lot more about this in the coming year and beyond!”

Finally David Kelf, VP Marketing at OneSpin Solutions observed: “We will see tight integrations between simulation and formal that will drive adoption among simulation engineers in greater numbers than before. The integration will include the tightening of coverage models, joint debug and functionality where the formal method can pick up from simulation and even emulation with key scenarios for bug hunting.”


Conclusion

The two combined articles are indeed quite long.  But the EDA industry is serving a multi-faceted set of customers with varying and complex requirements.  To do it justice, length is unavoidable.

EDA in the year 2017 – Part 1

January 12th, 2017

Gabe Moretti, Senior Editor

The EDA industry's performance is dependent on two other major economies: one technological and one financial.  EDA provides the tools and methods that leverage the growth of the semiconductor industry, and it begins to receive its financial rewards generally a couple of years after the introduction of a new product on the market.  It takes that long for the product to prove itself on the market and achieve general distribution.

David Fried from Coventor addressed the most important topics that may impact the foundry business in 2017.  He made two points.

“Someone is going to commit to Extreme Ultra-Violet (EUV) for specific layers at 7nm, and prove it.  I expect EUV will be used to combine 3-4 masks currently using 193i in a multi-patterning scheme (“cut” levels or Via levels) for simplicity (reduced processing), but won’t actually leverage a pattern-fidelity advantage for improved chip area density.

The real density benefit won’t come until 5nm, when the entire set of 2D design rules can be adjusted for pervasive deployment of EUV.  This initial deployment of EUV will be a “surgical substitution” for cost improvement at very specific levels, but will be crucial for the future of EUV to prove out additional high-volume manufacturing challenges before broader deployment.  I am expecting this year to be the year that the wishy-washy predictions of who will use EUV at which technology for which levels will finally crystallize with proof.

7nm foundry technology is probably going to look mostly evolutionary relative to 10nm and 14nm. But 5nm is where the novel concepts are going to emerge (nanowires, alternate channel materials, tunnel FETs, stacked devices, etc.) and in order for that to happen, someone is going to prove out a product-like scaling of these devices in a real silicon demonstration (not just single-device research).  The year 2017 is when we’ll need to see something like an SRAM array, with real electrical results, to believe that one of these novel device concepts can be developed in time for a 5nm production schedule.”

Rob Knoth, Product Marketing Director, Digital and Signoff Group at Cadence offered the following observation.  “This past year, major IDM and pure-play foundries began to slow the rate at which new process nodes are planned to be released. This was one of the main drivers for the relentless semiconductor-based advances we’ve seen the past 50 years.

Going forward, fabs and equipment makers will continue to push the boundaries of process technology, and the major semiconductor companies will continue to fill those fabs. While it may be slowing, Moore’s Law is not “dead.” However, there will be increased selection about who jumps to the “next node,” and greater emphasis will be placed on the ability of the design engineer and their tools/flows/methods to innovate and deliver value to the product. The importance for an integrated design flow to make a difference in product power/performance/area (PPA) and schedule/cost will increase.

The role that engineering innovation and semiconductors play in making the world a better place doesn’t get a holiday or have an expiration date.

The semiconductor market, in turn, depends on the general state of the world-wide economy.  This is determined mostly by consumer sentiment: when consumers buy, all industries benefit, from industrial to financial.  It does not take much of a negative inflection in consumer demand to diminish the requirement for electronics-based products and thus semiconductor parts.  That in turn has a negative effect on the EDA industry.

While companies that sell multi-year licenses can smooth the impact, new licenses, both multi-year and yearly, are more difficult to sell and result in lower revenue.

The electronics industry will evolve to deal with the increased complexity of designs.  Complex chips are the only vehicle that can make advanced fabrication nodes profitable.  It makes no sense to decrease feature dimensions and power requirements at the cost of increased noise and leakage just for technology's sake.  As unit costs increase, only additional functionality can justify new projects.  Such designs will require new methodologies, new versions of existing tools, and new industry organization to improve the use of the development/fabrication chain.

Michael Wishart, CEO of Efabless believes that in 2017 we will begin to see full-fledged community design, driven by the need for customized silicon to serve emerging smart hardware products. ICs will be created by a community of unaffiliated designers on affordable, re-purposed 180nm nodes and incorporate low cost, including open source, processors and on-demand analog IP. An online marketplace to connect demand with the community will be a must.

Design Methods

I asked Lucio Lanza of Lanza techVentures what factors would become important in 2017 regarding EDA.  As usual his answer was short and to the point.  “Cloud, machine learning, security and IoT will become the prevailing opportunities for design automation in 2017. Design technology must progress quickly to meet the needs of these emerging markets, requiring as much as possible from the design automation industry. Design automation needs to willingly and quickly take up the challenge at maximum speed for success. It’s our responsibility, as it’s always been.”

Bob Smith, Executive Director of the ESD alliance thinks that in 2017, the semiconductor design ecosystem will continue evolving from a chip-centric (integration of transistors) focus to a system-centric (integration of functional blocks) worldview. While SoCs and other complex semiconductor devices remain critical building blocks and Moore’s Law a key driver, the emphasis is shifting to system design via the extensive use of IP. New opportunities for automation will open up with the need to rapidly configure and validate system-level design based on extensive use of IP.  Industry organizations like the Electronic System Design Alliance have a mission to work across the entire design ecosystem as the electronic design market makes the transition to system-level design.

Wally Rhines, Chairman and CEO of Mentor Graphics, addressed the required changes in design as follows: "EDA is changing.  Most of the effort in the EDA industry in the last two decades has focused on the automation of integrated circuit design. Virtually all aspects of IC design are now automated with the use of computers.  But system design is in the infancy of an evolution to virtual design automation. While EDA has now given us the ability to do first-pass functional integrated circuit designs, we are far from providing the same capability to system designers.

What’s needed is the design of “systems of systems”.  That capability is coming.  And it is sooner than you might think.  Designers of planes, trains and automobiles hunger for virtual simulation of their designs long before they build the physical prototypes for each sub-system.  In the past, this has been impossible.  Models were inadequate.  Simulation was limited to mechanical or thermal analysis.  The world has changed.  During 2017, we will see the adoption of EDA by companies that have never before considered EDA as part of their methodology.”

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence offered the following observation.  “IoT that spans across application domains will further grow, especially in the industrial domain. Dubbed in Germany as “Industrie 4.0”, industrial applications are probably the strongest IoT driver. Value shifts will accelerate from pure semiconductor value to systemic value in IoT applications. The edge node sensor itself may not contribute to profits greatly, but the systemic value of combining the edge node with a hub accumulating data and sending it through networks to cloud servers in which machine learning and big data analysis happens allows for cross monetization. The value definitely is in the system. Interesting shifts lie ahead in this area from a connectivity perspective. 5G is supposed to broadly hit in 2020, with early deployments in 2017. There are already discussions going on regarding how the connectivity within the “trifecta” of IoT/Hub/Server is going to change, with more IoT devices bypassing the aggregation at the hub and directly accessing the network. Look for further growth in the area that Cadence calls System Design Enablement, together with some customer names you would have previously not expected to create chips themselves.

Traditionally, ecosystems have been centered on processor architectures. Mobile and server are key examples, with their respective leading architectures holding the lion’s share of their markets. The IoT is mixing this up a little, as more processor architectures can play and offer unique advantages with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even open-source hardware architectures look like they will be very relevant, judging from the recent momentum, which eerily reminds me of the early Linux days. It’s definitely one of the most entertaining spaces to watch in 2017 and for years to come.”

Standards

Standards have played a key role in EDA.  Without them, designers would be locked into one vendor for all of the required tools, and given the number of necessary tools, very few EDA companies would be able to offer everything required to complete, verify, and hand off a design to manufacturing.  Michiel Ligthart, President and COO at Verific, sees two standards in particular playing a key role in 2017.  “Watch for quite a bit of activity on the EDA standards front in 2017. First in line is the UVM standard (IEEE 1800.2), approved by the Working Group in December 2016. The IEEE may ratify it as early as February. Another one to watch is the next installment of SystemVerilog, mainly a “clarifications and corrections” release, which will be voted on in early 2017 with an IEEE release just before the end of the year. In the meantime, we are all looking at Accellera’s Portable Stimulus group to see what it will come up with in 2017.”

Regarding the Portable Stimulus activity, Adnan Hamid, CEO of Breker Verification Systems, goes into more detail.  “While it’s been a long time coming, Portable Stimulus is now an important component of many design verification flows and that will increase significantly in 2017. The ability to specify verification intent and behaviors reusable across target platforms, coupled with the flexibility in choosing vendor solutions, is an appealing prospect to a wide range of engineering groups and the appeal is growing. While much of the momentum is rooted in Accellera’s Portable Stimulus Working Group, verification engineers deserve credit for recognizing its value to their productivity and effectiveness. Count on 2017 to be a big year for both its technological evolution and its standardization as it joins the ranks of SystemVerilog, UVM and others.”

Conclusion

Given the number of contributions received, it would be overwhelming to present all of them in one article.  The remaining topics will therefore be covered in a follow-on article next week.

EDA has not been successful at keeping its leaders

January 4th, 2017

Gabe Moretti, Senior Editor

I have often wondered why, when a larger EDA company acquires a smaller one, the acquired company’s CEO ends up leaving in a relatively short time, either to join a new start-up or a venture capital firm.  It seemed to me that such a CEO thought enough of the buyer to predict that his or her employees and products would prosper in the new environment when agreeing to be acquired.  So, why leave?  It cannot simply be a matter of strongly contrasting personalities.  I think I found the answer over the Christmas break.

I read the book “Skunk Works” by Ben R. Rich.  The book is a factual history of the development projects carried out while Mr. Rich was there, first as an employee and eventually as its leader.  During his years at the Skunk Works, Mr. Rich was part of the exceptional successes of the U-2 and SR-71 spy planes and of the F-117A stealth fighter.

Two major points made in the book apply both to the EDA industry and to industry in general: first, “Leaders are natural born: managers must be trained,” and second, “There is no substitute for astute managerial skill on any project.”

Many start-up CEOs are born leaders and do not fit well within an organization where projects are managed in a bureaucratic manner using a rigid reporting structure.  An ex-CEO will soon find such a work environment counterproductive.  Successful projects need to react quickly to changing realities and parameters.  Often in the life of a project the team discovers new opportunities or new obstacles that come to light because of the work being done.  The time spent explaining and justifying the new alternative will impact the success of the project, especially if its value is not fully understood by top executives, or if the newly acquired managers do not yet understand the corporate politics of their new parent.

I think the best use of an acquired CEO is to allow him or her to continue to be an entrepreneur within the acquiring company.  This does not mean using that talent to keep leading the just-acquired team.  He or she can look for new opportunities within his or her area of expertise and possibly build a new team that will produce a new product.  In this way the acquiring company increases its ROI from the acquisition, even at the cost of increased compensation to both the CEO and the new team at the successful completion of their work.

In general, Synopsys has managed to retain acquired CEOs, while Cadence has not.

The behavior in the EDA industry, with very few well-known exceptions, has been to seek a quick reward through an acquisition that financially satisfies both the venture capitalists and the original start-up team.  Once the acquisition price is monetized, many people leave the industry seeking to capitalize on their financial gains in other ways.  Thus the EDA industry must grow through the entrance of new people with new ideas but little, if any, experience in the industry.  The result is many academically brilliant ideas that end in failed start-ups.  Individuals with brilliant ideas are not usually good leaders or managers, and good managers do not generally possess the creativity to conceive a breakthrough product.

In its history the EDA industry has paid the price of creating both leaders and excellent managers, but has yet to find a way to retain them.  Of course there are a few exceptions, nothing is ever black and white, but the exceptions are few.  It will be interesting to see, after a couple of years, how Siemens will have handled the Mentor Graphics acquisition.  Will Mentor’s creativity improve?  Will the successful team remain?  Will its members use the additional resources in an entrepreneurial manner, or will they either leave or settle into a more relaxed big-company life?
