
Posts Tagged ‘EDA’


Grant Pierce Named BoD Chair of the ESD Alliance

Tuesday, February 21st, 2017

Gabe Moretti, Senior Editor

The ESD Alliance (ESDA) elected Grant Pierce, CEO of Sonics, as its Chairman of the Board a few weeks ago.  Grant is only the second Chair who is not a high-level executive of one of the big three EDA companies, and the first since the organization, formerly EDAC, renamed itself.  During the EDAC days it was customary for the CEOs of Cadence, Mentor and Synopsys to pass the title among themselves in an orderly manner.  The organization then reflected the mission of the EDA industry: to support the development of hardware-intensive silicon chips following Moore’s Law.

Things have changed since then, and the consortium responded by first appointing a new executive director, Bob Smith, then changing its name and its mission.  I talked with Grant to understand his view from the top.

Grant Pierce, Sonics CEO

Grant pointed out: “We are trying to better reflect what has happened in the marketplace, both in terms of how our customers have developed further in the world of system on chip and what we have seen in the development of the EDA world, where today the IP offerings in the market, both those from independent companies and those from EDA companies, are critical and integral to the whole ecosystem for building today’s modern chips.”

Grant noted that ESDA has expanded its focus and has embraced not only hardware design and development but also software.  That does not mean, he added, that the EDA companies are losing importance; instead, they are gaining a seat at the table with the software and system design communities in order to expand the scope of their businesses.

From my point of view, I interjected, I see the desired change implemented very slowly, still reacting to and not anticipating new demands.  So what do you think can happen in the next twelve months?

“From an ESDA point of view you are going to see us broadening the membership,” answered Grant.  “We are looking to see how we can expand the focus of the organization through its working groups to zero in on new topics that are broader than the ones that are currently there, like expanding beyond what is a common operating system to support, for example.  I think you will see at a minimum two fronts, one opening on the software side while at the same time continuing work on the PPA (Power, Performance, Area) issues of chip design.  This involves a level of participation from parties that have not interacted with this organization before.”

Grant believes that there should be more emphasis on the needs of small companies, those where innovation is taking place.  ESDA needs to seek the best opportunity to invigorate those companies.  “At the same time we must try to get system companies involved in an appropriate fashion, at least to the degree that they represent the software that is embedded in a system,” concluded Grant.

We briefly speculated on what the RISC-V movement might mean to ESDA.  Grant does not see much value in ESDA focusing on a specific instruction set, although he conceded that there might be value if the RISC-V organization joined ESDA.  I agree with the first part of his judgment, but I do not see any benefit to either party, or the industry for that matter, in RISC-V joining ESDA.

From my point of view ESDA has a big hurdle to overcome.  For a few years before Bob Smith was named executive director, EDAC was somewhat stagnant, and now it must catch up with market reality and fully address the complete system issue: not just hardware/software, but analog/digital, and the increased use of FPGAs and MEMS.

For sure, representing an IP company gives Grant an opportunity to stress a different point of view within ESDA than the traditional EDA view.  The IP industry would not even exist without a system approach to design and it has changed the way architects think when first sketching a product on the back of an envelope.

EDA in the year 2017 – Part 1

Thursday, January 12th, 2017

Gabe Moretti, Senior Editor

The EDA industry’s performance depends on two other major economies: one technological and one financial.  EDA provides the tools and methods that leverage the growth of the semiconductor industry, and it generally begins to receive its financial rewards a couple of years after the introduction of a new product on the market.  It takes that long for the product to prove itself and achieve general distribution.

David Fried from Coventor addressed the most important topics that may impact the foundry business in 2017.  He made two points.

“Someone is going to commit to Extreme Ultra-Violet (EUV) for specific layers at 7nm, and prove it.  I expect EUV will be used to combine 3-4 masks currently using 193i in a multi-patterning scheme (“cut” levels or Via levels) for simplicity (reduced processing), but won’t actually leverage a pattern-fidelity advantage for improved chip area density.

The real density benefit won’t come until 5nm, when the entire set of 2D design rules can be adjusted for pervasive deployment of EUV.  This initial deployment of EUV will be a “surgical substitution” for cost improvement at very specific levels, but will be crucial for the future of EUV to prove out additional high-volume manufacturing challenges before broader deployment.  I am expecting this year to be the year that the wishy-washy predictions of who will use EUV at which technology for which levels will finally crystallize with proof.

7nm foundry technology is probably going to look mostly evolutionary relative to 10nm and 14nm. But 5nm is where the novel concepts are going to emerge (nanowires, alternate channel materials, tunnel FETs, stacked devices, etc.) and in order for that to happen, someone is going to prove out a product-like scaling of these devices in a real silicon demonstration (not just single-device research).  The year 2017 is when we’ll need to see something like an SRAM array, with real electrical results, to believe that one of these novel device concepts can be developed in time for a 5nm production schedule.”

Rob Knoth, Product Marketing Director, Digital and Signoff Group at Cadence, offered the following observation.  “This past year, major IDM and pure-play foundries began to slow the rate at which new process nodes are planned to be released. That cadence was one of the main drivers of the relentless semiconductor-based advances we’ve seen the past 50 years.

Going forward, fabs and equipment makers will continue to push the boundaries of process technology, and the major semiconductor companies will continue to fill those fabs. While it may be slowing, Moore’s Law is not “dead.” However, there will be increased selection about who jumps to the “next node,” and greater emphasis will be placed on the ability of the design engineer and their tools/flows/methods to innovate and deliver value to the product. The importance for an integrated design flow to make a difference in product power/performance/area (PPA) and schedule/cost will increase.

The role that engineering innovation and semiconductors play in making the world a better place doesn’t get a holiday or have an expiration date.”

The semiconductor market, in turn, depends on the general state of the world-wide economy.  This is determined mostly by consumer sentiment: when consumers buy, all industries benefit, from industrial to financial.  It does not take much of a negative inflection in consumer demand to diminish the demand for electronics-based products, and thus for semiconductor parts.  That in turn has a negative effect on the EDA industry.

While companies that sell multi-year licenses can smooth the impact, new licenses, both multi-year and yearly, are more difficult to sell and result in lower revenue.

The electronics industry will evolve to deal with the increased complexity of designs.  Complex chips are the only vehicle that can make advanced fabrication nodes profitable.  It makes no sense to decrease feature dimensions and power requirements at the cost of increased noise and leakage just for technology’s sake.  As unit costs increase, only additional functionality can justify new projects.  Such designs will require new methodologies, new versions of existing tools, and new industry organization to improve the use of the development/fabrication chain.

Michael Wishart, CEO of Efabless, believes that in 2017 we will begin to see full-fledged community design, driven by the need for customized silicon to serve emerging smart hardware products. ICs will be created by a community of unaffiliated designers on affordable, re-purposed 180nm nodes and will incorporate low-cost, including open source, processors and on-demand analog IP. An online marketplace to connect demand with the community will be a must.

Design Methods

I asked Lucio Lanza of Lanza techVentures what factors would become important in 2017 regarding EDA.  As usual his answer was short and to the point.  “Cloud, machine learning, security and IoT will become the prevailing opportunities for design automation in 2017. Design technology must progress quickly to meet the needs of these emerging markets, requiring as much as possible from the design automation industry. Design automation needs to willingly and quickly take up the challenge at maximum speed for success. It’s our responsibility, as it’s always been.”

Bob Smith, Executive Director of the ESD Alliance, thinks that in 2017 the semiconductor design ecosystem will continue evolving from a chip-centric (integration of transistors) focus to a system-centric (integration of functional blocks) worldview. While SoCs and other complex semiconductor devices remain critical building blocks and Moore’s Law a key driver, the emphasis is shifting to system design via the extensive use of IP. New opportunities for automation will open up with the need to rapidly configure and validate system-level designs based on extensive use of IP.  Industry organizations like the Electronic System Design Alliance have a mission to work across the entire design ecosystem as the electronic design market makes the transition to system-level design.

Wally Rhines, Chairman and CEO of Mentor Graphics, addressed the required changes in design as follows: “EDA is changing.  Most of the industry’s effort in the last two decades has focused on the automation of integrated circuit design. Virtually all aspects of IC design are now automated with the use of computers.  But system design is in the infancy of an evolution to virtual design automation. While EDA has now given us the ability to do first-pass functional integrated circuit designs, we are far from providing the same capability to system designers.

What’s needed is the design of “systems of systems”.  That capability is coming, and sooner than you might think.  Designers of planes, trains and automobiles hunger for virtual simulation of their designs long before they build the physical prototypes for each sub-system.  In the past, this has been impossible.  Models were inadequate.  Simulation was limited to mechanical or thermal analysis.  The world has changed.  During 2017, we will see the adoption of EDA by companies that have never before considered EDA as part of their methodology.”

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence, offered the following observation.  “IoT that spans across application domains will further grow, especially in the industrial domain. Dubbed “Industrie 4.0” in Germany, industrial applications are probably the strongest IoT driver. Value shifts will accelerate from pure semiconductor value to systemic value in IoT applications. The edge node sensor itself may not contribute to profits greatly, but the systemic value of combining the edge node with a hub accumulating data and sending it through networks to cloud servers, in which machine learning and big data analysis happen, allows for cross monetization. The value definitely is in the system. Interesting shifts lie ahead in this area from a connectivity perspective. 5G is supposed to broadly hit in 2020, with early deployments in 2017. There are already discussions going on regarding how the connectivity within the “trifecta” of IoT/Hub/Server is going to change, with more IoT devices bypassing the aggregation at the hub and directly accessing the network. Look for further growth in the area that Cadence calls System Design Enablement, together with some customer names you would previously not have expected to create chips themselves.

Traditionally, ecosystems have been centered on processor architectures. Mobile and server are key examples, with their respective leading architectures holding the lion’s share of their respective markets. The IoT is mixing this up a little, as more processor architectures can play and offer unique advantages, with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even open-source hardware architectures look like they will be very relevant, judging from the recent momentum, which eerily reminds me of the early Linux days. It’s definitely one of the most entertaining spaces to watch in 2017 and for years to come.”

Standards

Standards have played a key role in EDA.  Without them, designers would be locked into one vendor for all of the required tools, and given the number of necessary tools, very few EDA companies would be able to offer everything required to complete, verify, and hand off a design to manufacturing.  Michiel Ligthart, President and COO at Verific, sees two standards in particular playing a key role in 2017.  “Watch for quite a bit of activity on the EDA standards front in 2017. First in line is the UVM standard (IEEE 1800.2), approved by the Working Group in December 2016. The IEEE may ratify it as early as February. Another one to watch is the next installment of SystemVerilog, mainly a “clarifications and corrections” release, that will be voted on in early 2017 with an IEEE release just before the end of the year. In the meantime, we are all looking at Accellera’s Portable Stimulus group to see what it will come up with in 2017.”

With regard to the Portable Stimulus activity, Adnan Hamid, CEO of Breker Verification Systems, goes into more detail.  “While it’s been a long time coming, Portable Stimulus is now an important component of many design verification flows and that will increase significantly in 2017. The ability to specify verification intent and behaviors reusable across target platforms, coupled with the flexibility in choosing vendor solutions, is an appealing prospect to a wide range of engineering groups and the appeal is growing. While much of the momentum is rooted in Accellera’s Portable Stimulus Working Group, verification engineers deserve credit for recognizing its value to their productivity and effectiveness. Count on 2017 to be a big year for both its technological evolution and its standardization as it joins the ranks of SystemVerilog, UVM and others.”

Conclusion

Given the number of contributions received, it would be overwhelming to present all of them in one article.  Therefore the remaining topics will be covered in a follow-on article the following week.

Siemens Acquisition of Mentor Graphics is Good for EDA

Tuesday, November 15th, 2016

Gabe Moretti, Senior Editor

Although Mentor has been the subject of takeover attempts in the past, the business specifics of those transactions were never favorable to Mentor.  The acquisition by Siemens, instead, is a favorable occurrence for the third largest EDA company.  This time both companies obtain positive results from the deal.

Siemens acquires Mentor following the direction set forth in 2014, when its Vision 2020 was first discussed in public.  The six-year plan describes the steps the company should take to better position itself for the kind of world it envisions in 2020.

The Vision 2020 document calls for operational consolidation and optimization during the years 2016 and 2017.  It also selects three of the company’s business divisions as critical to corporate growth, calling them the E-A-D system: Electrification, Automation, and Digitalization.

Although it is possible that Mentor technology and products may be strategic in Electrification, they are of significant importance in the other two areas: Digitalization and Automation.  Digitalization, for example, includes vehicle automation, including smart cars and vehicle-to-vehicle communication.  Mentor already has an important presence in the automotive industry and can help Siemens both in the transition to state-of-the-art electronic vehicle management and in the innovation of the new systems required by the self-driving automobile, including the complete integration of components into an intelligent system with vehicle-to-vehicle communication.

Mentor also has experience in industrial robots, and what is, in my mind, more remarkable is that the PCB and cabling portions of Mentor, often minimized in an EDA industry dominated by the obsession with building ICs, are the parts that implement and integrate the systems in the products designed and built by third parties.

With its presence in the PCB and cabling markets, Mentor can bring additional customers to Siemens, as well as insight into future market requirements and limitations that will serve extremely well in designing the next generation of industrial robots.

Of course, Mentor will also find an increased internal market as other divisions of Siemens not part of the E-A-D triumvirate will utilize its products and services.

Siemens describes itself as an employee-oriented company, so present Mentor employees should not have to fear aggressive cost cutting and consolidation.  Will Mentor change? Of course: it will adapt gradually to the new requirements and opportunities the Siemens environment will create and demand, but the key word is “gradually”.  Unlike the acquisition of ARM by SoftBank, where the acquiring company had no previous activity in ARM’s business, Siemens has been active in EDA internally, both in its research labs and through strong connections with university programs that originated a number of European EDA startups.  Siemens executives have an understanding of what EDA is all about and what it takes to be successful in EDA.  The result, I expect, is little interference and second-guessing, which translates into continued success for Mentor for as long as it is capable of it.

Will other EDA companies benefit from this acquisition? I think they will.  First of all, it attracts more attention to our industry from the financial community, but it is also likely to increase competition among the “big 3,” forcing Cadence and Synopsys to focus more on key markets while diversifying into related markets like optical, security, and software development, for example.  In addition, I do not see a reason for an EDA company to enter into a business partnership with some of its customers to explore new revenue-generating business models.

EDA Consortium Renamed Electronic System Design Alliance

Monday, April 4th, 2016

Gabe Moretti, Senior Editor

It is possible that names matter, but they only do if what they stand for reflects the intent.  A few days ago the EDA Consortium (EDAC) changed its name to the Electronic System Design (ESD) Alliance, an international association of companies providing goods and services throughout the semiconductor design ecosystem.

“EDAC was formed in 1989 during the go-go years,” says Robert Smith, its Executive Director. “EDA still is mission critical for chip design, but other complementary technologies and solutions are required to drive the design ecosystem. We intend to bring them all together under the ESD Alliance umbrella with an expanded scope of interest for a much broader design community.”

One thing that immediately comes to mind is that “ESDA” is not unique.  It turns out that ESDA stands for a number of things not associated with the new Alliance.  There is a group within the electronics industry called the Electrostatic Discharge Association (ESDA), and there is also the Emergency Service Disaster Agency (ESDA).  Exclusivity in a search is therefore not guaranteed.

But the new name has been chosen, and its two parts indicate that:

1) The organization is still focused on electronics

2) It still deals with design issues

3) It is no longer focused on the automation of design functions but more on the creation and building of systems

4) According to the Merriam-Webster Thesaurus, “consortium” is a synonym of “alliance,” so nothing changes as far as the organization’s style is concerned

Let’s see what is behind the change.

“The ESD Alliance reflects the sea change happening in the semiconductor industry as chip design takes a more system-oriented approach,” remarks Lip-Bu Tan, president and CEO of Cadence Design Systems and co-chairman of the ESD Alliance Board of Directors. “With a new identity and expanded mission, the ESD Alliance is well-positioned to embrace and represent the design ecosystem now and in the future.”

By the way, do we also need to change the name of our industry from EDA to ESD?

Electronic design has always needed a system-oriented approach, even when only a portion of the system was implemented on one die.  Engineers still assembled a system from a number of components.  Today so many transistors can be built on a die that it is possible to implement an entire system in one component.

Under its expanded charter, the ESD Alliance will deliver a forum for the interests of the integrated circuit (IC) and system design ecosystem, a critical component of the ongoing success of the $360 billion worldwide semiconductor industry.

New ESD Alliance initiatives are being formed to address the larger design ecosystem that includes semiconductor intellectual property (IP), embedded software, advanced packaging for system scaling and service companies that provide design know-how and resources. Working groups are being formed in all of these areas to focus on unique challenges and opportunities. They complement existing committees dedicated to the ecosystem’s interest in export, interoperability, license management and antipiracy, market statistics and trade shows.

Register Automation: A visit with Semifore

Tuesday, March 29th, 2016

Gabe Moretti, Senior Editor

During the just-concluded DVCon U.S. I met with Richard Weber, CEO of Semifore (www.semifore.com).  When Jill Jacobs asked me to meet with him, I thought she was introducing a new company.  I was wrong! Semifore was founded in 2006 and is an ongoing, healthy company that has now decided to be more open to the press.

Semifore is a small company, with only seven full-time employees, whose mission is to develop technology that significantly reduces the cost of developing and verifying the control registers of complex ASIC, SoC, and FPGA-based designs.  The company offers an advanced compiler for the specification, verification, documentation, and implementation of configuration and status registers and address maps for complex designs.

Semifore is self-funded and profitable, with a healthy list of customers that includes a significant number of tier 1 companies.  It is not presently seeking third-party funding, although I pointed out that further expansion of the business will require significant investment.  I put the company in the “life-style” bucket, a good group whose most recent principal alumnus is Denali.

When Richard told me that Semifore had developed its own language, CSRSpec, all sorts of warning bells went off in my head.  Not another language! I was thinking.  What amount of work would be necessary to get it accepted?  It turned out that my fears were unfounded.  The language not only interfaces with industry-standard buses, but also reads and writes SystemRDL, IP-XACT, and spreadsheets.  It produces RTL, firmware header files, verification data, and documentation in HTML, Word, FrameMaker, and other formats.  In other words, it fits seamlessly into a design flow with third-party tools.

Figure 1: How CSRCompiler fits in the design flow
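
To make concrete the kind of transformation such a compiler automates, here is a minimal sketch that expands a single register description into a firmware-facing C header.  The register map, field layout, and base address below are hypothetical illustrations, not Semifore’s actual format or CSRSpec syntax.

```python
# Toy sketch of control-register automation: one source description of a
# register map is expanded into a firmware-facing C header.  A production
# compiler emits RTL, verification data, and documentation from the same
# source.  All names and layouts here are hypothetical.

# Hypothetical register map: name -> (byte offset, {field: (lsb, width)})
REG_MAP = {
    "CTRL":   (0x00, {"ENABLE": (0, 1), "MODE": (1, 2)}),
    "STATUS": (0x04, {"READY":  (0, 1), "ERROR": (1, 1)}),
}

BASE_ADDR = 0x40000000  # hypothetical peripheral base address

def emit_c_header(base, regs):
    """Generate #define lines for register addresses and field masks."""
    lines = ["/* Auto-generated register header. Do not edit. */"]
    for reg, (offset, fields) in regs.items():
        lines.append(f"#define {reg}_ADDR 0x{base + offset:08X}u")
        for fld, (lsb, width) in fields.items():
            mask = ((1 << width) - 1) << lsb
            lines.append(f"#define {reg}_{fld}_LSB  {lsb}")
            lines.append(f"#define {reg}_{fld}_MASK 0x{mask:08X}u")
    return "\n".join(lines)

print(emit_c_header(BASE_ADDR, REG_MAP))
```

The payoff of a single source description is that the same expansion can be repeated for RTL, verification collateral, and documentation, so the outputs can never drift apart.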

The limitations of IP-XACT are the main reason for the new language.  Richard described it as “kicking IP-XACT up a notch (or 10)”.  Actually the description is not quite fair, since IP-XACT, and its natural derivative IEEE 1685, the “Standard for IP-XACT, Standard Structure for Packaging, Integrating and Re-Using IP Within Tool-Flows,” does not directly address total register automation.  It is more accurate to say that CSRSpec implements functions that supplement the capabilities of IP-XACT.  For sure CSRSpec does a much better job than SystemRDL, a de-facto standard developed by the SPIRIT consortium, whose further development has been ignored by the EDA community.  It is true that within the Accellera Systems Initiative a SystemRDL Working Group was formed in 2012, but the group is still seeking members and, to my knowledge, has done no development work so far.

It is clear that niche companies can still find a way to contribute to EDA and in the process generate a respectable revenue stream.  But it takes both ingenuity and dedication.  Before founding Semifore, Richard Weber and Jamsheed Agahi worked at Cisco, and each has more than twenty years of design and verification experience.  Herb Winsted, VP of Business Development and Customer Care, was a mask designer at AMD and then provided sales support at Cadence, Silicon Valley Research, and Silvar-Lisco.  Rob Callaghan, COO, has more than 25 years of experience in the electronics industry, including a stint at Cadence.

DVCon U.S. 2016 Is Around the Corner

Thursday, February 18th, 2016

Gabe Moretti, Senior Editor

Within the EDA industry, the Design & Verification Conference and Exhibition (DVCon) has created one of the most successful communities of the 21st century.  Started as a conference dealing with two design languages, Verilog and VHDL, DVCon has grown to cover all aspects of design and verification.  Beginning as a conference based in Silicon Valley, the conference is now held on three continents: America, Asia and Europe.  Both DVCon Europe and DVCon India have shown significant growth, and plans are well on their way to offer a DVCon in China as well.  As Yatin Trivedi, General Chair of this year’s DVCon U.S., says, “DVCon continues to be the premier conference for design and verification engineers of all experience levels. Compared to larger and more general conferences, DVCon affords attendees a concentrated menu of technical sessions – tutorials, papers, poster sessions and panels – focused on design and verification hot topics. In addition to participation in high quality technical sessions, DVCon attendees have the opportunity to take part in the many informal, but often intense, technical discussions that pop up around the conference venue among more than 800 design and verification engineers and engineering managers. This networking opportunity among peers is possibly the greatest benefit to DVCon attendees.”

Professionals attend DVCon to learn and to share, not just to show off their research achievements as a community.  The conference is focused on providing its attendees with the opportunity to learn by offering two days of tutorials as well as frequent networking opportunities.  The technical program offers engineers examples of how today’s problems have been solved under demanding development schedules and budgets.  Ambar Sarkar, Program Chair, offers this advice on the DVCon U.S. 2016 web site: “Find what your peers are working on and interact with the thought leaders in our industry. Learn where the trends are and become a thought leader yourself.”

Grown from the need to verify digital designs, verification technology now faces the need to verify heterogeneous systems that include analog, software, MEMS, and communication hardware and protocols.  Adapting to these new requirements is a task the industry has not yet completed.

At the same time, methods and tools for mixed-signal or system-level design still need maturing.  The concept of system-level design is being revolutionized as architectures like those required for IoT applications demand heterogeneous systems.

Attendees to DVCon U.S. will find ample opportunity to consider, debate, and compare both requirements and solutions that impact near term projects.

Tutorials and Papers

As part of its mission to provide a learning venue for designers and verification engineers, DVCon U.S. offers two full days of tutorials.  The presentations of the 12 tutorial sessions are divided between Monday and Thursday, separate from the rest of the technical program so they do not conflict and force attendees to make difficult attendance choices.

Accellera has a unique approach to putting together its technical program.  I am slightly paraphrasing this year’s Program Chair, Ambar Sarkar, by stating that DVCon U.S. lets the industry set the agenda, rather than the conference asking for papers on selected topics.  He told me that the basic question is: “Can a practicing engineer get new ideas and try to use them in his or her upcoming project?” For this reason, the call for papers asks only for abstracts, and those that do not meet the request are eliminated.  After a further selection, the authors of the chosen abstracts are asked to submit a full paper.  Those papers are then grouped according to their common subject areas into sessions.  The sessions that emerge automatically reflect the latest trends in the industry.

The paper presentations during Tuesday and Wednesday take the majority of the conference’s time and form the technical backbone of the event.

Of the 127 papers submitted, 36 were chosen to be presented in full.  There will be 13 sessions covering the following areas: UVM, Design and Modeling, Low Power, SystemVerilog, Fault Analysis, Emulation, Mixed-Signal, Resource Management, and Formal Techniques.  Each session offers from 3 to 4 individual papers.

Posters

Poster presentations are selected in the same manner as papers.  A poster presentation is less formal but has the advantage of giving the author the opportunity to interact with a small audience, so the learning process can be bilateral.  There have been occasions in the past when an abstract submitted as a poster was switched to an oral presentation with the consent of the author.  Such a switch is possible because the submission and selection processes are similar, and thus the poster has already been judged to present an approach that will be useful to attendees.

Keynote

This year’s keynote will be delivered by Wally Rhines, the 2015 recipient of the Phil Kaufman Award.  Wally is well known in the EDA industry for both his insight and his track record as the Chairman and CEO of Mentor Graphics.  The title of his address is Design Verification Challenges: Past, Present, and Future.  Dr. Rhines will review the history of each major phase of verification evolution and then concentrate on the challenges of newly emerging problems. While functional verification still dominates the effort, new requirements for security and safety are becoming more important and will ultimately involve challenges that could be more difficult than those we have faced in the past.

Panels: One Good and One Suspect

There are two panels on the conference schedule.  One panel, “Emulation + Static Verification Will Replace Simulation”, scheduled for Wednesday, March 2nd, at 1:30 in the afternoon, looks at near-future verification methods.  The use of both emulation and static verification has been increasing significantly.  Maybe the verification paradigm of the future is to invest in high-end targeted static verification tools to get the design to a very high quality level, followed by very high-speed emulation or FPGA prototyping for system-level functional verification. Where does that leave RTL simulation? Between a rock and a hard place! Gate-level simulation is already marginalized to doing basic sanity checks. Maybe RTL simulation will follow. Or will it?

The other panel, scheduled for 8:30 in the morning of the same day, concerns me a lot.  The title is “Redefining ESL” and the description of the panel is taken from a blog that Brian Bailey, the panel moderator, published on September 24, 2015.  You can read the blog here: http://semiengineering.com/what-esl-is-really-about/.

In the blog Brian holds the point of view that ESL is not a design flow but a verification flow, and that it will not take off until the industry recognizes that. Only now are we beginning to define what ESL verification means, but is it too little, too late?  There are a few problems with the panels committee accepting this panel.  To begin with, ESL is an outdated concept.  Today’s systems include much more than digital design.  Modern SoCs, even small ones like those found in IoT applications, include analog, firmware, and MEMS blocks.  All of these are outside of the ESL definition and fall within the System Level Design (SLD) market.

Brian’s statement that ESL would not be made viable by the introduction of High Level Synthesis (HLS) tools is simply false.  ESL verification became a valuable tool only when designers began to use HLS products to automatically derive RTL models from ESL descriptions in SystemVerilog or C/C++, even if HLS mostly covered algorithms expressed in languages other than Verilog, VHDL, or SystemC.

The Search for the Executable Specification

Monday, February 8th, 2016

Gabe Moretti, Senior Editor

One of the Holy Grails of EDA is to find a way to specify a design such that the specification itself can be synthesized into an architecture, assuring that the design meets its requirements by construction.  Work in this area has been going on for a long time, VHDL being the most significant language developed, with the support of the US Department of Defense.  Unfortunately, the language was immediately understood and used as a design implementation tool, and it was thus overtaken in popularity by Verilog, which is a true design implementation language.

The EDA community has continued to look for a way to write a specification in a language that is unambiguous, that can be modeled, and that can be synthesized.  Judging by the results, failure has been consistent.  Even great language architects have not found a solution.  The latest attempt, Rosetta, has been almost totally ignored.  The problem, of course, is that no one wants to learn another language in order to write a specification, or build a tool for a language that may not be used, or used so infrequently as to be an economic failure.

But I think the answer is under our noses.  It is being talked about, but the concept is so foreign to the design and marketing communities that it has not been recognized.  It all started with the discussion about using Agile methods for hardware development.  There are many reasons for not applying the Agile methods used in software development literally to hardware development, but surprisingly there is one that not only fits beautifully, but also solves the problem of developing a way to represent a design specification in an executable manner.  And that is the test suite.

Of course I am not suggesting that Marketing learn to write tests, and thus ambiguity could still creep in at the beginning, but it would become obvious much sooner in the development project and thus be more easily resolved, as soon as a first-level architectural implementation was tested.

A test suite properly constructed does in fact represent a specification for functional behavior.  It defines a series of transformations that must be executed by the design in order to produce desired measurable states.  By describing a set of input values and their sequence in time, as well as the expected outputs, one in fact describes a functional specification.
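
As a minimal sketch of the idea, assuming a deliberately trivial device (a saturating counter) and hypothetical names throughout, a timed table of inputs and expected outputs reads as both a test and a functional specification:

```python
# A test suite as an executable specification: each row gives the inputs
# applied in one time step and the outputs the design must then exhibit.
# The device under test is a deliberately trivial stand-in; the names and
# the table format are hypothetical, chosen only for illustration.

SPEC = [
    # (inputs,                expected outputs)
    ({"reset": 1, "inc": 0},  {"count": 0}),
    ({"reset": 0, "inc": 1},  {"count": 1}),
    ({"reset": 0, "inc": 1},  {"count": 2}),
    ({"reset": 0, "inc": 1},  {"count": 3}),
    ({"reset": 0, "inc": 1},  {"count": 3}),  # saturation is itself a spec statement
]

class SaturatingCounter:
    """Stand-in for the design under test, at any abstraction level."""
    def __init__(self, max_count=3):
        self.count = 0
        self.max_count = max_count

    def step(self, reset, inc):
        if reset:
            self.count = 0
        elif inc and self.count < self.max_count:
            self.count += 1
        return {"count": self.count}

def run_spec(dut, spec):
    """Check the design against every timed row of the specification."""
    for step, (inputs, expected) in enumerate(spec):
        outputs = dut.step(**inputs)
        assert outputs == expected, f"step {step}: got {outputs}, expected {expected}"

run_spec(SaturatingCounter(), SPEC)
print("design meets the executable specification")
```

The point is not the trivial device but the shape of SPEC: read top to bottom, it is the specification.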

In addition, such a test sequence follows the Agile intent of continuous refinement: as more is known about the behavior of the desired product, the test suite can be expanded accordingly and the resulting design incrementally modified.  The result is that at all stages of development the design is testable in a consistent manner.  Even better, if the results of a test differ from what was intended, the specification itself can be changed and the test file(s) updated.

A Universal Test Methodology (UTM)

A UTM would offer a methodology to group test suites addressing both the functional and the physical characteristics of the design into a manageable database of stimuli and responses that, taken together, constitute the entire design specification, including timing, power, manufacturability and so on.
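
Such a database might be organized along the lines of the sketch below; the class names, aspect tags, and suite names are all assumptions made for illustration, not part of any existing methodology.

```python
# Hypothetical sketch of a UTM database: test suites tagged by the design
# aspect they specify, so that the collection as a whole is the entire
# specification and can be viewed one aspect at a time.
from dataclasses import dataclass, field

@dataclass
class TestSuite:
    name: str
    aspect: str                  # e.g. "functional", "timing", "power"
    stimuli: list = field(default_factory=list)    # input vectors
    responses: list = field(default_factory=list)  # expected results

@dataclass
class UTMDatabase:
    suites: list = field(default_factory=list)

    def add(self, suite):
        self.suites.append(suite)

    def view(self, aspect):
        """Present the specification from one reader's point of view."""
        return [s for s in self.suites if s.aspect == aspect]

db = UTMDatabase()
db.add(TestSuite("reset_behavior", "functional"))
db.add(TestSuite("clock_domain_crossing", "timing"))
db.add(TestSuite("sleep_mode_current", "power"))
print([s.name for s in db.view("power")])  # -> ['sleep_mode_current']
```

Tagging each suite with the aspect it covers lets the same database present functional, timing, or power views to different readers.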

In an age of big data, tools are available to manage all the relevant information about a given design, and the data can be presented in a number of ways appropriate to the expected reader.

The design community is developing expertise in porting tests among the specific tools required to analyze and verify different aspects of a design, so the UTM does not represent such an obstacle to timely implementation.  And here again the Agile methodology of sequential refinements applies.  Let’s start using what we know and expand and refine it to meet the requirements.

The Marketing Budget is an Investment, Not an Expense

Wednesday, May 27th, 2015

Nanette Collins

I recently learned that a CEO I long respected considered marketing a necessary business expense. This individual thought it was of questionable value and not a long-term investment. Whenever there was extra budget, the CEO opted to spend it on sales, hiring AEs or bringing on a new sales manager.

That explains why the charismatic leader, who should have been featured regularly and prominently in industry and business online publications, was largely absent from any coverage, as was the company. Evidently, Public Relations, a vital component of any marketing plan, was not considered a strategic investment either.

Unfortunately, this is an all-too-common occurrence. In many companies, marketing’s value appears to have diminished to the point where it’s almost irrelevant and sales is the all-important component of a company’s success. The focus is on creating a highly optimized mobile-enabled website for search and leads generation, while the rest of marketing is overlooked.

It’s unlikely anyone would argue that leads generation isn’t important. After all, without sales, a channel and a sales pipeline, the company will go bust. Yet, overlooking the marketing effort could ensure the company goes bust, too. Just not quite as quickly.

Marketing is as important as sales, and sales should be able to rely on marketing’s expertise for leads generation and much more. Nothing’s worse for a sales manager than getting the all-important meeting with a project team that’s never heard of the company. Sales managers have to know what they are selling, who they are selling against and the value to the prospect without spending the time to figure this out themselves. That’s where marketing can help build awareness and provide ongoing industry research and competitive analysis.

When a company invests in a new product, it should invest in the whole product ecosystem to make it successful. Marketing managers, presumably, did the market research and validation to understand that a product with the right features is needed to solve a problem. Because the product needs a pre- and post-sales support team who can understand it, the company should invest in training and support materials. It needs a market ready to accept the product as well, which means investing in brand recognition. That’s marketing, not sales.

Marketing should be considered a critical part of the holistic product ecosystem, making it an investment similar to product R&D, not an operational expense or the cost of sales. It will last as long as the product itself.

The importance of marketing’s role in a product launch cannot be overstated. Good marketing drives brand awareness and perception, generating leads from various programs, such as events and Public Relations. An experienced marketing group will develop a plan that begins by conditioning the market for the new product, rolling it out at the right time and exhibiting it publicly at tradeshows, seminars and webinars. While some of these tactics seem straightforward, make no mistake – not everyone has good communications skills and the ability to take a technical concept and simplify it into a useful and salient message. And, let’s not overlook boothmanship skills. Few of us can step into the company’s tradeshow booth for the first time ready to engage with potential customers. Most of us need training on booth etiquette, which means no chewing gum or eating, being friendly and making eye contact, among other dos and don’ts.

Ah, and then there are the companies that claim to know all the companies and all the engineers who would buy their products. Well, technical companies need to think more like consumer companies and study their customers’ buying patterns. Consumers like to be reminded that they purchased their goods and services from a winner, which is often why consumer companies spend lavishly on advertising and other forms of marketing.

No, I’m not suggesting a tech company’s marketing department needs a lavish budget. But consider the CEO, like so many others, who sacrificed the long-term play of investing in marketing for short-term gains. It’s a mistake many companies are making as executives see marketing as an expense and not an investment. A company needs to be set apart and differentiated from its competitors. Marketing’s job is to come up with the strategic initiatives to do so, based on a continuous study of the marketplace.  I recommend that the marketing department be allocated a reasonable yearly budget to build, maintain or enhance the company’s awareness and visibility.

About the Author:

Nanette Collins is a marketing and Public Relations consultant in the semiconductor, EDA and IP market.  She received the 2013 Marie R. Pistilli Women in EDA Achievement Award.

Design Automation Is More Than EDA

Wednesday, April 8th, 2015

Gabe Moretti, Senior Editor

In a couple of months the EDA industry will hold its yearly flagship conference: the Design Automation Conference (DAC).  And just a few days ago Vic Kulkarni, SVP & GM, RTL Power Business at the ANSYS-Apache Business Unit, told me that the industry should be called the Design Automation Industry, not EDA.  Actually, both the focus of the EDA industry and the contents of DAC are mostly IC design.  That activity is now only a portion of system design, and thus the conference does not live up to its name and the industry does not support system design in its entirety.

For almost its entire life the EDA industry has focused only on hardware, and it did it well.  That was certainly enough when products were designed with the approach that an electronic system’s purpose was to execute application software with the help of an operating system and associated device drivers.

A few years ago the EDA industry realized that the roles of hardware and software had changed.  Software was being used to substitute for hardware in implementing system tasks and to “personalize” systems that were not end-user programmable.  It then became necessary to verify these systems, and new markets, those of virtual prototyping and software/hardware verification, grew and continue to grow.  Yet the name remains Electronic Design Automation, and most of its popular descriptions, like the one in Wikipedia, deal only with hardware design and development.

The Internet of Things (IoT), where intelligent electronic products are interconnected to form a distributed and powerful data acquisition, analysis, and control system, is considered a major growth area for the electronics industry and thus for EDA.  A small number of companies, like Ansys and MathWorks for example, realized some time ago that a system is much more than “just” electronics and software, and these companies now have an advantage in the IoT market segment.

By developing and verifying just the electronic portion of a product, traditional EDA companies run the risk of fooling designers into thinking that the product is as efficient and secure as it can be.  In fact, even virtual prototyping tools cannot make an absolute statement regarding the robustness of the software portion of the design.  All that can be said using these tools is that the interface between hardware and software has been sufficiently verified and that the software appears to work correctly when used as intended.  But a number of organizations and individuals have pointed to critical security issues in existing real-time systems that are the precursors of IoT.  The recent attention to security in automotive applications is an example.

The use of MEMS in IoT applications should push EDA leaders to ask why MCAD is still considered a discipline separate from EDA.  The interface to, and concurrent execution of, the electro-mechanical system is as vital as the EDA portion.  Finally, the EDA industry has another weakness when it comes to providing total support for IoT products.  Nature is analog, yet the analog portion of the EDA industry lags significantly behind the development achieved in the digital portion.  Digital tools only offer approximations that have been, and still are, good enough for simpler systems.  Leading-edge processes are clearly showing that such approximations yield results that are no longer acceptable, and product miniaturization has resulted in a new set of issues that are almost entirely in the analog domain.

The good news is that the industry has always managed to catch up in response to customers’ demand.  The hope offered by IoT growth is not simply revenue growth due to new licenses, but a maturing of the industry to finally provide support for true system design and verification.

From Data to Information to Knowledge

Monday, November 17th, 2014

Gabe Moretti, Senior Editor

During my conversation with Lucio Lanza last month we exchanged observations on how computing power, both at the hardware and the system levels, had progressed since 1968, the year when we both started working full time in the electronics field, and how much was left to do in order to achieve knowledge-based computing.

Observing what computers do, we recognized that computers are now very good at collecting and processing data.  In fact the concept of the IoT is based on the capability to collect and process a variety of data types in a distributed manner.  The plan is, of course, to turn the data into information.

Definitely there are examples of computer systems that process data and generate information.  Financial systems provide profit and loss reports using income and expense data, for example, and logic simulators provide the behavior of a system by relating input and output values.  But information is not knowledge.

Knowledge requires understanding, and computers do not understand information.  The achievement of knowledge requires the correlation and synthesis of information, sometimes from disparate sources, in order to generate understanding of the information and thus abstract knowledge.  Knowledge is also a derivative of previous knowledge and cannot always be generated simply by digesting the information presented.  The human brain associates the information to be processed with knowledge already available, in a learning process that assigns the proper value and quality to the information presented in order to construct the value judgment and associative analysis that generate new knowledge.  What follows are some of my thoughts since that dialogue.

As an aside I note that unfortunately many education systems give students information but not the tools to generate knowledge.  Students are taught to give the correct answer to a question, but not to derive the correct answer from a set of information and their own existing knowledge.

We have not been able to recreate the knowledge generating processes successfully with a computer, but nothing says that it will not be possible in the future.  As computing power increases and new computer architectures are created, I know we will be able to automate the generation of knowledge.  As far as EDA is concerned I will offer just one example.

A knowledge-based EDA tool would develop an SoC from a set of specifications.  Clearly, if the specifications are erroneous the resulting SoC will behave differently than expected, but even this eventuality would help in improving the next design, because it would provide information to those who develop a knowledge-based verification system.

When we achieve this goal humanity will finally be free to dedicate its intellectual and emotional resources to address those problems that derive directly from what humans are, and prevent most of them, instead of having to solve them.

At this moment I still see a lot of indeterminism with respect to the most popular topic in our field: the IoT.  Are we on the right track in the development of the IoT?  Are we generating the correct environment to learn to handle information in a knowledge-generating manner?  To even attempt such a task we need to solve not just technical problems, but financial, behavioral, and political issues as well.  The communication links needed to arrive at a solution are either non-existent or weak.  Just think of the existing debate regarding “free internet”.  Financial requirements demand a redefinition of the internet: from a communication mechanism akin to a utility to a special-purpose service used in selective manners defined by price.  How would a hierarchical internet defined by financial parameters modify the IoT architecture?  Politicians and some business interests do not seem to care, and engineers will have to patch things up later.  In this case we do not have knowledge.
