Posts Tagged ‘Cadence’


Grant Pierce Named BoD Chair of the ESD Alliance

Tuesday, February 21st, 2017

Gabe Moretti, Senior Editor

The ESD Alliance (ESDA) elected Grant Pierce, CEO of Sonics, as its Chairman of the Board a few weeks ago.  Grant is only the second Chair who is not a high-level executive of one of the big three EDA companies, and the first since the organization, formerly EDAC, renamed itself.  During the EDAC days it was customary for the CEOs of Cadence, Mentor, and Synopsys to pass the title among themselves in an orderly manner.  The organization then reflected the mission of the EDA industry: to support the development of hardware-intensive silicon chips following Moore’s Law.

Things have changed since then, and the consortium responded by first appointing a new executive director, Bob Smith, and then changing its name and its mission.  I talked with Grant to understand his view from the top.

Grant Pierce, Sonics CEO

Grant pointed out that: “We are trying to better reflect what has happened in the marketplace, both in terms of how our customers have developed further in the world of system on chip and what we have seen in the development of the EDA world, where today the IP offerings in the market, both those from independent companies and those from EDA companies, are critical and integral to the whole ecosystem for building today’s modern chips.”

Grant added that ESDA has expanded its focus and has embraced not only hardware design and development but also software.  That does not mean, he noted, that the EDA companies are losing importance; rather, they are gaining a seat at the table with the software and system design communities in order to expand the scope of their businesses.

From my point of view, I interjected, the desired change is being implemented very slowly, still reacting to new demands rather than anticipating them.  So what do you think can happen in the next twelve months?

“From an ESDA point of view you are going to see us broadening the membership,” answered Grant.  “We are looking to see how we can expand the focus of the organization through its working groups to zero in on new topics that are broader than the current ones, such as expanding beyond support for a common operating system.  I think you will see at a minimum two fronts, one opening on the software side while at the same time continuing work on the PPA (Power, Performance, Area) issues of chip design.  This involves a level of participation from parties that have not interacted with this organization before.”

Grant believes that there should be more emphasis on the needs of small companies, those where innovation is taking place.  ESDA needs to seek the best opportunity to invigorate those companies.  “At the same time we must try to get system companies involved in an appropriate fashion, at least to the degree that they represent the software that is embedded in a system” concluded Grant.

We briefly speculated on what the RISC-V movement might mean to ESDA.  Grant does not see much value in ESDA focusing on a specific instruction set, although he conceded that there might be value if RISC-V joined ESDA.  I agree with the first part of his judgment, but I do not see any benefit to either party, or the industry for that matter, in RISC-V joining ESDA.

From my point of view ESDA has a big hurdle to overcome.  For a few years, before Bob Smith was named executive director, EDAC was somewhat stagnant, and now it must catch up with market reality and fully address the complete system issue: not just hardware/software, but analog/digital, and the increased use of FPGAs and MEMS.

For sure, representing an IP company gives Grant an opportunity to stress a different point of view within ESDA than the traditional EDA view.  The IP industry would not even exist without a system approach to design and it has changed the way architects think when first sketching a product on the back of an envelope.

EDA in the year 2017 – Part 1

Thursday, January 12th, 2017

Gabe Moretti, Senior Editor

The EDA industry’s performance depends on two other major economies: one technological and one financial.  EDA provides the tools and methods that leverage the growth of the semiconductor industry, and it generally begins to receive its financial rewards a couple of years after a new product is introduced on the market.  It takes that long for the product to prove itself and achieve general distribution.

David Fried from Coventor addressed the most important topics that may impact the foundry business in 2017.  He made two points.

“Someone is going to commit to Extreme Ultra-Violet (EUV) for specific layers at 7nm, and prove it.  I expect EUV will be used to combine 3-4 masks currently using 193i in a multi-patterning scheme (“cut” levels or Via levels) for simplicity (reduced processing), but won’t actually leverage a pattern-fidelity advantage for improved chip area density.

The real density benefit won’t come until 5nm, when the entire set of 2D design rules can be adjusted for pervasive deployment of EUV.  This initial deployment of EUV will be a “surgical substitution” for cost improvement at very specific levels, but will be crucial for the future of EUV to prove out additional high-volume manufacturing challenges before broader deployment.  I am expecting this year to be the year that the wishy-washy predictions of who will use EUV at which technology for which levels will finally crystallize with proof.

7nm foundry technology is probably going to look mostly evolutionary relative to 10nm and 14nm.  But 5nm is where the novel concepts are going to emerge (nanowires, alternate channel materials, tunnel FETs, stacked devices, etc.), and in order for that to happen, someone is going to prove out a product-like scaling of these devices in a real silicon demonstration (not just single-device research).  The year 2017 is when we’ll need to see something like an SRAM array, with real electrical results, to believe that one of these novel device concepts can be developed in time for a 5nm production schedule.”

Rob Knoth, Product Marketing Director, Digital and Signoff Group at Cadence offered the following observation.  “This past year, major IDM and pure-play foundries began to slow the rate at which new process nodes are planned to be released.  The steady release of new nodes has been one of the main drivers of the relentless semiconductor-based advances we’ve seen over the past 50 years.

Going forward, fabs and equipment makers will continue to push the boundaries of process technology, and the major semiconductor companies will continue to fill those fabs. While it may be slowing, Moore’s Law is not “dead.” However, there will be increased selection about who jumps to the “next node,” and greater emphasis will be placed on the ability of the design engineer and their tools/flows/methods to innovate and deliver value to the product. The importance for an integrated design flow to make a difference in product power/performance/area (PPA) and schedule/cost will increase.

The role that engineering innovation and semiconductors play in making the world a better place doesn’t get a holiday or have an expiration date.”

The semiconductor market, in turn, depends on the general state of the worldwide economy.  This is determined mostly by consumer sentiment: when consumers buy, all industries benefit, from industrial to financial.  It does not take much negative inflection in consumer demand to diminish the requirement for electronics-based products and thus semiconductor parts.  That in turn has a negative effect on the EDA industry.

While companies that sell multi-year licenses can smooth the impact, new licenses, both multi-year and yearly, become more difficult to sell and result in lower revenue.

The electronics industry will evolve to deal with the increased complexity of designs.  Complex chips are the only vehicle that can make advanced fabrication nodes profitable.  It makes no sense to decrease feature dimensions and power requirements at the cost of increased noise and leakage just for technology’s sake.  As unit costs increase, only additional functionality can justify new projects.  Such designs will require new methodologies, new versions of existing tools, and new industry organization to improve the use of the development/fabrication chain.

Michael Wishart, CEO of Efabless believes that in 2017 we will begin to see full-fledged community design, driven by the need for customized silicon to serve emerging smart hardware products. ICs will be created by a community of unaffiliated designers on affordable, re-purposed 180nm nodes and incorporate low cost, including open source, processors and on-demand analog IP. An online marketplace to connect demand with the community will be a must.

Design Methods

I asked Lucio Lanza of Lanza techVentures what factors would become important in 2017 regarding EDA.  As usual his answer was short and to the point.  “Cloud, machine learning, security and IoT will become the prevailing opportunities for design automation in 2017. Design technology must progress quickly to meet the needs of these emerging markets, requiring as much as possible from the design automation industry. Design automation needs to willingly and quickly take up the challenge at maximum speed for success. It’s our responsibility, as it’s always been.”

Bob Smith, Executive Director of the ESD Alliance, thinks that in 2017 the semiconductor design ecosystem will continue evolving from a chip-centric (integration of transistors) focus to a system-centric (integration of functional blocks) worldview.  While SoCs and other complex semiconductor devices remain critical building blocks and Moore’s Law a key driver, the emphasis is shifting to system design via the extensive use of IP.  New opportunities for automation will open up with the need to rapidly configure and validate system-level designs based on extensive use of IP.  Industry organizations like the Electronic System Design Alliance have a mission to work across the entire design ecosystem as the electronic design market makes the transition to system-level design.

Wally Rhines, Chairman and CEO of Mentor Graphics, addressed the required changes in design as follows: “EDA is changing.  Most of the industry’s effort in the last two decades has focused on the automation of integrated circuit design.  Virtually all aspects of IC design are now automated with the use of computers.  But system design is in the infancy of an evolution to virtual design automation.  While EDA has now given us the ability to do first-pass functional integrated circuit designs, we are far from providing the same capability to system designers.

What’s needed is the design of “systems of systems”.  That capability is coming.  And it is sooner than you might think.  Designers of planes, trains and automobiles hunger for virtual simulation of their designs long before they build the physical prototypes for each sub-system.  In the past, this has been impossible.  Models were inadequate.  Simulation was limited to mechanical or thermal analysis.  The world has changed.  During 2017, we will see the adoption of EDA by companies that have never before considered EDA as part of their methodology.”

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence offered the following observation.  “IoT that spans across application domains will further grow, especially in the industrial domain.  Dubbed “Industrie 4.0” in Germany, industrial applications are probably the strongest IoT driver.  Value shifts will accelerate from pure semiconductor value to systemic value in IoT applications.  The edge node sensor itself may not contribute to profits greatly, but the systemic value of combining the edge node with a hub accumulating data and sending it through networks to cloud servers, in which machine learning and big data analysis happen, allows for cross-monetization.  The value definitely is in the system.  Interesting shifts lie ahead in this area from a connectivity perspective.  5G is supposed to hit broadly in 2020, with early deployments in 2017.  There are already discussions going on regarding how the connectivity within the “trifecta” of IoT/Hub/Server is going to change, with more IoT devices bypassing the aggregation at the hub and directly accessing the network.  Look for further growth in the area that Cadence calls System Design Enablement, together with some customer names you would previously not have expected to create chips themselves.

Traditionally, ecosystems have been centered on processor architectures.  Mobile and server are key examples, with their respective leading architectures holding the lion’s share of their respective markets.  The IoT is mixing this up a little, as more processor architectures can play and offer unique advantages, including configurable and extensible architectures.  No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures.  Even open-source hardware architectures look like they will be very relevant, judging from the recent momentum, which eerily reminds me of the early Linux days.  It’s definitely one of the most entertaining spaces to watch in 2017 and for years to come.”

Standards

Standards have played a key role in EDA.  Without them designers would be locked into one vendor for all of the required tools, and given the number of necessary tools, very few EDA companies would be able to offer everything required to complete, verify, and hand off a design to manufacturing.  Michiel Ligthart, President and COO at Verific, sees two standards in particular playing a key role in 2017.  “Watch for quite a bit of activity on the EDA standards front in 2017.  First in line is the UVM standard (IEEE 1800.2), approved by the Working Group in December 2016.  The IEEE may ratify it as early as February.  Another one to watch is the next installment of SystemVerilog, mainly a “clarifications and corrections” release, that will be voted on in early 2017 with an IEEE release just before the end of the year.  In the meantime, we are all looking at Accellera’s Portable Stimulus group to see what it will come up with in 2017.”

In regard to the Portable Stimulus activity, Adnan Hamid, CEO of Breker Verification Systems, goes into more detail.  “While it’s been a long time coming, Portable Stimulus is now an important component of many design verification flows, and that will increase significantly in 2017.  The ability to specify verification intent and behaviors reusable across target platforms, coupled with the flexibility of choosing vendor solutions, is an appealing prospect to a wide range of engineering groups, and the appeal is growing.  While much of the momentum is rooted in Accellera’s Portable Stimulus Working Group, verification engineers deserve credit for recognizing its value to their productivity and effectiveness.  Count on 2017 to be a big year for both its technological evolution and its standardization as it joins the ranks of SystemVerilog, UVM and others.”

Conclusion

Given the number of contributions received, it would be overwhelming to present all of them in one article.  The remaining topics will therefore be covered in a follow-on article the following week.

EDA has not been successful at keeping its leaders

Wednesday, January 4th, 2017

Gabe Moretti, Senior Editor

I have often wondered why, when a larger EDA company acquires a smaller one, the acquired CEO ends up leaving in a relatively short time, joining either a new start-up or a venture capital firm.  It seemed to me that that CEO thought enough of the buyer, when accepting the acquisition, to predict that his (or her) employees and product(s) would prosper in the new environment.  So, why leave?  It could not just be a matter of strongly contrasting personalities.  I think I found the answer over the Christmas break.

I read the book “Skunk Works” by Ben R. Rich.  The book is a factual history of development projects carried out while Ben was there, first as an employee and eventually as the organization’s leader.  During his years at the Skunk Works, Mr. Rich was part of the exceptional successes of the U-2 and SR-71 spy planes and of the F-117A stealth fighter.  All those projects were run independently of corporate overseers, used comparatively small dedicated teams, and modified the project when necessary to achieve the established goal.

Two major points made in the book apply both to the EDA industry and to industry in general.  First “Leaders are natural born: managers must be trained” and second “There is no substitute for astute managerial skill on any project”.

Many start-up CEOs are born leaders and do not fit well within an organization where projects are managed in a bureaucratic manner using a rigid reporting structure.  An ex-CEO will soon find such a work environment counterproductive.  Successful projects need to react quickly to changing realities and parameters.  Often in the life of a project the team discovers new opportunities or new obstacles that come to light because of the work being done.  The time spent explaining and justifying the new alternative will impact the success of the project, especially if the value of the presented alternative is not fully understood by top executives or if the new managers do not understand the corporate politics.

I think that the best use of an acquired CEO is to allow him or her to continue to be an entrepreneur within the acquiring company.  This does not mean using that talent to continue to lead the just-acquired team.  He or she can look for new opportunities within his or her area of expertise and possibly build a new team that will produce a new product.  In this way the acquiring company increases its ROI from the acquisition, even at the cost of increased compensation to both the CEO and the new team at the successful completion of their work.

In general Synopsys has managed to retain acquired CEOs, while Cadence has not.

The behavior in the EDA industry, with very few well-known exceptions, has been to seek a quick reward through an acquisition that financially satisfies both the venture capitalists and the original start-up team.  Once the acquisition price is monetized, many people leave the industry, seeking to capitalize on their financial gains in other ways.  Thus the EDA industry must grow through the entrance of new people with new ideas but little if any experience in the industry.  The result is many brilliant academic ideas that end in failed start-ups.  Individuals with brilliant ideas are not usually good leaders or managers, and good managers do not generally possess the creativity to conceive a breakthrough product.

In its history the EDA industry has paid the price of creating both leaders and excellent managers, but it has yet to find a way to retain them.  Of course there are a few exceptions, nothing is ever black and white, but the exceptions are few.  It will be interesting to see, after a couple of years, how Siemens will have handled the Mentor Graphics acquisition.  Will Mentor’s creativity improve?  Will the successful teams remain?  Will they use the additional resources in an entrepreneurial manner, or either leave or adjust to a more relaxed big-company life?

The ARM – SoftBank Deal: Heart Before Mind

Tuesday, July 19th, 2016

Gabe Moretti, Senior Editor

If you happen to hold ARM stock, congratulations, you are likely to make a nice profit on your investment.  SoftBank, a Japanese company with diversified interests, including Internet services, has offered to purchase ARM for $32.4 billion in cash.  SoftBank is a large company whose latest financial results show a profit of $9.82 billion before interest payments and tax obligations.

ARM, on the other hand, reported fiscal year 2015 revenue of $1,488.6 million with a profit of $414.8 million and an operating margin of 42%.  This is a very healthy operating margin, showing remarkable efficiency in all aspects of the company.  So, there is little to improve in the way ARM operates.

What seems logical, then, is that SoftBank expects a significant increase in ARM revenue after the acquisition, or an effect on its overall profit due to ARM’s impact on other parts of the company.  ARM’s profit for 2015 was £414.8 million on revenue of £968.3 million, a ratio of 42.8%.  Let’s assume that SoftBank instead invested all of the $32.4 billion and obtained a 5% return, or $1.62 billion per year.  To obtain the same result from the ARM acquisition, ARM would have to generate a profit 3.9 times what it generated in 2015.  This is a very large increase: if we assume that all other financial ratios stay the same, revenue would have to be a little over $5.5 billion.  Yet, applying the 15% growth realized between 2014 and 2015 to every year between 2015 and 2020, we “only” reach the $2,913.6 million mark.  And keeping the growth ratio constant as revenue increases gets harder and harder, since it means a larger absolute increase every year.
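Since it is easy to get lost in back-of-the-envelope arithmetic, here is a minimal Python sketch of the same calculation, using the figures quoted above; the variable names are mine, and small differences from the article’s rounded numbers come from rounding along the way.

    # Back-of-the-envelope check of the SoftBank/ARM numbers quoted above.
    deal_price = 32.4e9            # SoftBank's cash offer, in USD
    alt_return = 0.05              # assumed alternative 5% annual return
    arm_profit_2015 = 414.8e6      # ARM 2015 profit, as quoted
    arm_revenue_2015 = 1488.6e6    # ARM 2015 revenue (USD), as quoted

    required_profit = deal_price * alt_return      # $1.62 billion per year
    multiple = required_profit / arm_profit_2015
    print(f"required profit multiple: {multiple:.1f}x")   # about 3.9x

    # Revenue implied if all other financial ratios stay the same.
    print(f"implied revenue: ${arm_revenue_2015 * multiple / 1e9:.1f}B")

    # Revenue reached by compounding the 15% growth of 2014-2015 out to 2020.
    print(f"2020 projection at 15% growth: ${arm_revenue_2015 * 1.15**5 / 1e9:.1f}B")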

So the numbers do not make sense to me.  I can believe that ARM could be worth $16 billion, but not twice as much.  And here is another observation.  I have read in many publications that financial analysts expect the IoT market to be $20 billion by 2020.  Assuming that the SoftBank investment, net of interest charges, returns 5% per year in 2020, ARM’s revenue would have to be $5.5 billion, or over 25% of the TAM (Total Available Market).  This I consider impossible to achieve, simply because the IoT market will be price sensitive, thus opening ARM to competition from other companies offering competitive microcontrollers.  SoftBank cannot possibly believe that Intel will go away, or that every person will own three cell phones, or that Google will use only ARM processors in its offerings, or even that IP companies like Cadence and Synopsys will decide to ignore the IoT market.

I am afraid that the acquisition marks the end of ARM as we know it.  It will be squeezed for revenue and profit like it has never been before and the quality of its products will suffer.

DAC Official Results Are In

Thursday, June 23rd, 2016

Gabe Moretti, Senior Editor

I have already covered DAC in a previous blog, but a couple of days ago I received an email from Michelle Clancy, 53rd DAC PR/Marketing Chair, reporting on the conference attendance.  The release prompts some additional observations on the Austin conference.

As far as I am concerned, the structure of the release was poor.  Readers were guided to consider the overall attendance numbers, whose growth was quite small: the increase in overall badges between the 2013 Austin DAC and this year is 125 badges.  That is an increase of 2.1%, significantly less than the increase in the revenue of the EDA industry over the same span of time.  And in addition we have witnessed the growth of related industries with a presence in and around Austin, such as embedded systems and IoT.

What should be underlined is the difference in conference-attendee badges between 2013 and 2016.  There were 719 more conference badges this year, while the free “I LOVE DAC” passes were down 564 over the same comparison.  To me these are the important data.  They mean that there were fewer “tire kickers” who collect souvenirs and more technical program or tutorial attendees than in 2013.  These are the numbers that indicate success, but the press release did not dwell on them.

I also find it telling that the quote in the release from Howard Pakosh, managing partner of TEKSTART, which provides interim sales, marketing and business development capital to high-tech entrepreneurs, observes “The people we’ve been talking to in Austin are actually looking for information and solutions; they’re not just here because it’s an easy commute from Silicon Valley.”  Obviously Mr. Pakosh finds it a waste of time to exhibit in San Francisco.

My experience on the exhibit floor was different.  The fact that Synopsys chose to send fewer PR and marketing people to Austin was a negative point for me.  It was difficult to find the right person to discuss business with.  The company also did not hold its usual press/analysts dinner, which is unfortunate since its new message, “silicon to software,” was not well presented on the floor.  I left the conference without understanding the message, especially since I was told in my meeting with corporate marketing that the effort was to promote products from Coverity and Codenomicon to markets outside the electronics business.  Are those products the “software” they are talking about?  What about embedded software for all sorts of applications, including those that use their ARC processors?

The Cadence and Mentor booths were better staffed; at least I met all the professionals I needed to meet.  It is of course time that Cadence realizes that “The Denali Party” does not take the place of a serious dinner with press and analysts.  The Heart of Technology party is a better choice if one wants music and drinks, and it supports a good cause.  I go to DAC to do business, not to drink cheap drinks and fight for food in a crowded buffet line.

It is of course expected that the technical program offered by DAC covers leading edge issues and opportunities.  This part of DAC was well organized and run.

If the DAC committee sees the need to defend the choice of Austin as the venue for the conference, then why use the venue next year?  Clearly they have determined that Austin is a viable location.  I, for one, enjoyed Austin as a host city and found the convention hall pleasant and well equipped.  Of course the distance from both sessions and exhibits to the press room was not at all convenient, but I understand that the press room location was chosen because it allowed the building of the necessary temporary meeting rooms.

Cadence’s Allegro and OrCAD Updates

Tuesday, May 31st, 2016

Gabe Moretti, Senior Editor

At the beginning of May, in time for CDNLive, Cadence announced major upgrades to its Allegro and OrCAD products.  The Printed Circuit Board (PCB) sector of our industry gets second-cousin treatment in an industry so focused on silicon products.  Yet PCBs play a very important role in system design, one that is almost never recognized by ESL tools.  I cannot name an ESL tool that allows architects to evaluate the topology of a system on a PCB.  PCB designers are always left with the task of accommodating the electronic system within the mechanical confines of the product.  Naturally this brings about thermal and electrical issues that are not at all considered by the IC designers.

The new versions introduced by Cadence do not attempt to address this problem either, although they have improved the interoperability between Allegro and Sigrity to shorten PCB design and verification time.  Other new capabilities in the Allegro product include:

  • Rigid-flex design enhancements
  • Inter-layer checks for both flex and rigid-flex designs
  • A new native 3D engine
  • A programmable interface to the Sigrity tools

Looking at the capabilities offered by similar products from Mentor and Zuken, it turns out that Allegro does not offer any capability that is not already present in Mentor’s Xpedition or Zuken’s CR-8000 products.  All three products address the problem of PCB design and verification in different manners, so a choice among them is a matter of methods more than of capability.

What is interesting within the PCB market is that all three leading vendors have chosen a dual approach: Cadence with Allegro and OrCAD, Mentor with Xpedition and PADS, and Zuken with the CR family and CADSTAR.  There seems to be a real division among PCB designers that supports such a strategy.  OrCAD, PADS, and CADSTAR aim to support the individual designer who works on a less challenging PCB design and whose verification requirements are less demanding.  Allegro, Xpedition, and CR-8000 (or CR-7000 for that matter) support team design and a verification cycle that deals with power distribution, IR drop, noise, and thermal issues, among others.

While both Mentor and Zuken address the PCB market by treating PCB design and verification problems in their own right, Cadence serves this market as a function of what an IC designer needs from the PCB.  The lack of consideration by Cadence for the role that a PCB plays in system design is therefore more intriguing.  It would seem to me that Cadence would be the one concerned with co-design and co-verification of ICs and PCBs, but this is not the case at all.  In all three cases the IC, or ICs, are taken as given; there is no possibility to trade off IC characteristics against PCB characteristics.  True enough, in most cases the IC is what it is, it comes from a third party, and thus the PCB designer must adapt to a set of characteristics that are unchangeable.  But that is not always the case.  Some ICs come as a family with different electrical specifications, and evaluating various flavors of a CPU or MCU should be an easy thing to do.

Unfortunately, PCB designers are mostly ignored by DAC.  Zuken is not even on the exhibitors list this year, so attendees will not get the opportunity to compare products, besides maybe Mentor’s and Cadence’s.  I wrote “maybe” because both booths will certainly emphasize IC design, and there will be a high level of discourse about IoT.  You will need to ask if you want to find someone in the booth who can demo a PCB product.

Cadence Introduces Palladium Z1 Enterprise Emulation Platform

Thursday, November 19th, 2015

Gabe Moretti, Senior Editor

One would think that the number of customers for very large and expensive emulation systems is shrinking and thus the motivation to launch new such systems would be small.  Clearly my opinion is not correct.

Earlier this week Cadence Design Systems, Inc. launched the Palladium Z1 emulation platform, which the company calls the industry’s first datacenter-class emulation system.  According to Cadence the new platform delivers up to 5X greater emulation throughput than the previous generation, with an average 2.5X greater workload efficiency.  The Palladium Z1 platform executes up to 2,304 parallel jobs and scales up to 9.2 billion gates, addressing the growing market requirement for emulation technology that can be efficiently utilized across global design teams to verify increasingly complex system-on-chip (SoC) designs.

The Palladium Z1 enterprise emulation platform features a rack-based blade architecture, a 92 percent smaller footprint and 8X better gate density than the Palladium XP II platform, according to Frank Schirrmeister, Group Director for Product Marketing, System Development Suite at Cadence.  To optimize the utilization of the emulation resource, the Palladium Z1 platform offers a virtual target relocation capability and payload allocation into available resources at run time, avoiding re-compiles.  With its massively parallel processor-based architecture, the Palladium Z1 platform offers 4X better user granularity than its nearest competitor, according to Frank.

Additional key features and benefits of the Palladium Z1 platform include:

  • Less than one-third the power consumption per emulation cycle of the Palladium XP II platform. This is enabled by an up to 44 percent reduction in power density, an average of 2.5X better system utilization and number of parallel users, 5X better job queue turnaround time, up to 140 million gates per hour compile times on a single workstation, and superior debug depth and upload speeds
  • Full virtualization of the external interfaces using a unique virtual target relocation capability. This enables remote access of fully accurate real world devices as well as virtualized peripherals like Virtual JTAG. Pre-integrated Emulation Development Kits are available for USB and PCI-Express interfaces, providing modeling accuracy, high performance and remote access. Combined with virtualization of the databases using Virtual Verification Machine capabilities, it allows for efficient offline access of emulation runs by multiple users
  • The industry’s highest versatility with over a dozen use models, including In-Circuit Emulation running software loads, Simulation Acceleration that allows hot swapping between simulation and emulation, Dynamic Power Analysis using Cadence Joules RTL power estimation, IEEE 1801 and Si2 CPF based Power Verification, Gate-level acceleration and emulation, and OS bring-up for ARM-based SoCs running at 50X the performance of pure standard emulation
  • Seamless integration within the Cadence System Development Suite. This includes Incisive® verification platform for simulation acceleration, Incisive vManager for verification planning and unified metrics tracking, Indago Debug Analyzer and Embedded Software Apps for advanced hardware/software debug, Accelerated and Assertion-Based Verification IP, Protium FPGA-based prototyping platform with common compiler, and Perspec System Verifier for multi-engine system use-case testing.

“We continue to see customer demand for a doubling of available emulation capacity every two years, driven by short project schedules amid growing verification complexity and requirements on quality, hardware-software integration, and power consumption,” said Daryn Lau, vice president and general manager, Hardware and System Verification, Cadence. “With the Palladium Z1 platform as one of the pillars of the System Development Suite, design teams can finally utilize emulation as a compute resource in the datacenter, akin to blade server-based compute farms for simulation, and improve schedules while enabling more verification automation to address the growing impact of verification on end product delivery.”

Synopsys’ Relaunched ARC Is Not The Answer

Wednesday, October 14th, 2015

Gabe Moretti, Senior Editor

During the month of September Synopsys spent considerable marketing resources relaunching its ARC processor family by leveraging the IoT.  First, on September 10, it published a release announcing two additional versions of the ARC EM family of deeply embedded DSP cores.  Then on September 15 the company held a free one-day ARC Processor Summit in Santa Clara, and on September 22 it issued another press release about its involvement in IoT, again mentioning the embARC Open Software Platform and the ARC Access Program.  It is not clear that ARC will fare any better in the market after this effort than it did in the past.

Background

Almost ten years ago a company called ARC International Ltd. designed and developed a RISC processor called the Argonaut RISC Core.  Its architecture has roots in the Super FX chip for the Super Nintendo Entertainment System.  In 2009 Virage Logic purchased ARC International.  Virage specialized in embedded test systems and was acquired by Synopsys in 2010.  This is how Synopsys became the owner of the ARC architecture, although it was mainly interested in the embedded test technology.

Since that acquisition ARC has seen various developments that produced five product families, all within the DesignWare group.  The financial success of the ARC family has been modest, especially when compared with the much more popular product families in the company.  The EM family, one of the five, is where the two new products reside.  During this year’s DVCon, at the beginning of March, I interviewed Joachim Kunkel, Sr. Vice President and General Manager of the Solutions Group at Synopsys, who is responsible, among other things, for the IP products.  We talked about the ARC family and how Synopsys had not yet found a way to use this core efficiently.  We agreed that IoT applications could benefit from such an IP, especially if well integrated with other DesignWare pieces and security software.

The Implementation

I think that the ARC family will never play a significant part in Synopsys’ revenue generation, even after this latest marketing effort.

It seems clear to me that the IoT strategy is built on more viable corporate resources than just the ARC processor.  The two new cores are the EM9D and EM11D, which implement an enhanced version of the ARCv2DSP instruction set architecture, combining RISC and DSP processing with support for an XY memory system to boost digital signal processing performance while minimizing power consumption.  Synopsys claims that the cores are 3 to 5 times more efficient than the two previous similar cores, but the press release specifically avoids comparison with similar devices from other vendors.
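For readers unfamiliar with the term, an XY memory, in the classic DSP sense, is a pair of independently addressed data banks (X and Y) that let the core fetch two operands per cycle in parallel with the instruction stream, which is what sustains single-cycle multiply-accumulate loops; the release does not describe the specifics of Synopsys’ implementation.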

When I read the data sheets of devices from possible competitors, I appreciate the wisdom of avoiding direct comparison.  Although the engineering work behind the two new cores seems quite good, there is only so much that can be done with a ten-year-old architecture.  ARC becomes valuable only if sold as part of a sub-system that integrates other Synopsys IP and the security products owned by the company.

It is also clear that those other resources will generate more revenue for Synopsys when integrated with DSP processors from ARM, Intel, and maybe Apple or even Cadence.  ARC has been neglected for too long to be competitive by itself, especially when considering the IoT market.  ARC is best used at the terminal or data acquisition nodes.  Such nodes are highly specialized, small, and above all very price sensitive.  A variation of a few cents makes the difference between adoption and rejection.  This is not a market Synopsys is comfortable with.  Synopsys prefers markets it can control by offering the best solution at a price it finds acceptable.

Conclusion

The ARC world will remain small.  Synopsys’ mark on the IoT will possibly be substantial, but certainly not because of ARC.

Cadence Introduces Tensilica Vision P5 DSP

Thursday, October 8th, 2015

Gabe Moretti, Senior Editor

DSP devices are indispensable in electronic products that deal with the outside environment.  Whether one needs to see, to touch, or in any way gather information from the environment, DSP devices are critical.  Improvements in their performance characteristics, therefore, have a direct impact not only on the capability of a circuit but, more importantly, on its level of competitiveness.  Cadence Design Systems has just announced the Cadence Tensilica Vision P5 digital signal processor (DSP), which it calls its flagship high-performance vision/imaging DSP core.  Cadence claims that the new imaging and vision DSP core offers up to a 13X performance boost, with an average of 5X less energy usage on vision tasks compared to the previous-generation IVP-EP imaging and video DSP.

Jeff Bier, co-founder and president of Berkeley Design Technology, Inc. (BDTI) noted that: “There is an explosion in vision processing applications that require dedicated, efficient offload processors to handle the large streams of data in real time.  Processor innovations like the Tensilica Vision P5 DSP help provide the backbone required for increasingly complex vision applications.”

The Tensilica Vision P5 DSP core includes a significantly expanded and optimized Instruction Set Architecture (ISA) targeting mobile, automotive advanced driver assistance systems (or ADAS, which includes pedestrian detection, traffic sign recognition, lane tracking, adaptive cruise control, and accident avoidance) and Internet of Things (IoT) vision systems.

“Imaging algorithms are quickly evolving and becoming much more complex – particularly in object detection, tracking and identification,” stated Chris Rowen, CTO of the IP Group at Cadence. “Additionally, we are seeing a lot more integrated systems with multiple sensor types, feeding even more data in for processing in real time. These highly complex systems are driving us to provide more performance in our DSPs than ever before, at even lower power. The Tensilica Vision P5 DSP is a major step forward to meeting tomorrow’s market demands.”

Modern electronic system architecture treats hardware and software with the same amount of attention.  They must balance each other in order to achieve the best possible execution while minimizing development costs.  The Tensilica Vision P5 DSP further improves the ease of software development and porting, with comprehensive support for integer, fixed-point and floating-point data types and an advanced toolchain with a proven, auto-vectorizing C compiler.  The software environment also features complete support of the standard OpenCV and OpenVX libraries, with over 800 functions, for fast, high-level migration of existing imaging/vision applications.

The Tensilica Vision P5 DSP is specifically designed for applications requiring ultra-high memory and operation parallelism to support complex vision processing at high resolution and high frame rates. As such, it allows off-loading vision and imaging functions from the main CPU to increase throughput and reduce power. End-user applications that can benefit from the DSP’s capabilities include image and video enhancement, stereo and 3D imaging, depth map processing, robotic vision, face detection and authentication, augmented reality, object tracking, object avoidance and advanced noise reduction.

The Tensilica Vision P5 DSP is based on the Cadence Tensilica Xtensa architecture and combines flexible hardware choices with a library of DSP functions and numerous vision/imaging applications from Cadence’s established ecosystem partners.  It also shares the comprehensive Tensilica partner ecosystem for other applications software, emulation and probes, silicon, services and much more.  The Tensilica Vision P5 core includes these new features:

  • Wide 1024-bit memory interface with SuperGather technology for maximum performance on the complex data patterns of vision processing
  • Up to 4 vector ALU operations per cycle, each with up to 64-way data parallelism
  • Up to 5 instructions issued per cycle from 128-bit wide instruction delivering increased operation parallelism
  • Enhanced 8-, 16- and 32-bit ISA tuned for vision/imaging applications
  • Optional 16-way IEEE single-precision vector floating-point processing unit delivering a massive 32GFLOPs at 1GHz
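For reference, the 32 GFLOPs figure is consistent with the usual fused multiply-add accounting, which is my assumption rather than something Cadence states: 16 single-precision lanes × 2 floating-point operations per FMA × 1 GHz = 32 GFLOPs.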

Coventor’s MEMS+ 6.0 Enables MEMS/IoT Integration

Tuesday, October 6th, 2015

Gabe Moretti, Senior Editor

In thinking about the architecture and functioning of the IoT, I have come to represent it as a nervous system.  Commands and data flow through the architecture of the IoT while computations are performed at the appropriate locations in the system.  The terminal points of the IoT, just like those of the human nervous system, function as the interface with the outside world.  MEMS devices are indispensable to the proper functioning of this interface, yet, as focused as we are on electronics, we seldom give prominence to MEMS when the IoT is discussed in EDA circles.

Coventor, Inc., a leading supplier of MEMS design automation solutions, introduced MEMS+ 6.0, the latest version of its MEMS design platform.   The tool is available immediately.  MEMS+ 6.0 is a significant advance toward a MEMS design automation flow that complements the well-established CMOS design flow, enabling faster integration of MEMS with electronics and packaging.  MEMS+ 6.0 features new enablement of MEMS process design kits (PDKs) and second-generation model reduction capabilities.

“The fast growing Internet of Things market will increasingly require customization of MEMS sensors and customized package-level integration to achieve lower power, higher performance, smaller form factors, and lower costs,” said Dr. Stephen R. Breit, Vice President of Engineering at Coventor.  “MEMS+6.0 is focused on enabling rapid customization and integration of MEMS while enforcing design rules and technology constraints.”

With MEMS+ 6.0, users can create a technology-defined component library that imposes technology constraints and design rules during design entry, resulting in a “correct-by-construction” methodology. This new approach reduces design errors and enables MEMS foundries to offer MEMS Process Design Kits (PDKs) to fabless MEMS designers. Both parties will benefit, with submitted designs having fewer errors, and ultimately fewer design spins and fab cycles required to bring new and derivative products to market.

“We have collaborated with Coventor in defining the requirements for MEMS PDKs for MEMS+,” said Joerg Doblaski, Director of Design Support at X-FAB Semiconductor Foundries. “We see the new capabilities in MEMS+ 6.0 as a big step toward a robust MEMS design automation flow that will reduce time to market for fabless MEMS developers and their foundry partners.”

MEMS+ 6.0 also includes a second-generation model reduction capability with export to MathWorks Simulink as well as the Verilog-A format.  The resulting reduced-order models (ROMs) simulate nearly as fast as simple hand-crafted models, but are far more accurate.  This enables system and IC designers to include accurate, non-linear MEMS device models in their system- and circuit-level simulations.  For the second generation, Coventor has greatly simplified the inputs for model reduction and automatically includes the key dynamic and electrostatic non-linear effects present in capacitive motion sensors such as accelerometers and gyroscopes.  ROMs can be provided to partners without revealing critical design IP.  Figure 1 shows one such integration architecture; a small sketch of what such a model looks like to the consuming simulator follows below.

Figure 1: Integration of MEMS with digital/analog design
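To make the ROM concept concrete, here is a minimal Python sketch of the kind of model a system simulator consumes: a small non-linear state-space system.  All names and parameter values are illustrative placeholders of mine, not MEMS+ output; the exported Simulink and Verilog-A models encapsulate the same style of equations, with coefficients extracted from the full device model.

    from scipy.integrate import solve_ivp

    # Hypothetical one-degree-of-freedom ROM of a capacitive accelerometer:
    #   m*x'' + b*x' + k*x = m*a_ext + F_es(x, V)
    m, b, k = 1e-9, 2e-5, 1.0            # mass [kg], damping, stiffness
    eps0, A, g0 = 8.854e-12, 1e-7, 2e-6  # permittivity, plate area, gap [m]

    def electrostatic_force(x, V):
        # Parallel-plate approximation: the non-linearity a ROM must retain.
        return 0.5 * eps0 * A * V**2 / (g0 - x)**2

    def rom(t, state, a_ext, V):
        x, v = state
        accel = a_ext + (electrostatic_force(x, V) - b * v - k * x) / m
        return [v, accel]

    # Drive the model with a 1 g step and a 1 V bias, as a system-level
    # testbench would, and read out the proof-mass deflection.
    sol = solve_ivp(rom, (0.0, 1e-3), [0.0, 0.0], args=(9.81, 1.0),
                    max_step=1e-6)
    print(f"deflection after 1 ms: {sol.y[0, -1]:.3e} m")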

Additional advances in MEMS+ 6.0 include:

  • Support for design hierarchy, encouraging time-saving re-use of device sub-structures.
  • Refined support for including packaging effects in the thermal stability analysis of sensors, reducing the impact that ambient temperature can have on sensor outputs such as zero offset in accelerometers and bias drift in gyros.
  • Improved modeling of devices that rely on piezo-electric effects for sensing.  Interest in piezo sensing is growing because the underlying process technology for piezo materials has matured and because of its potential benefits over capacitive sensing, the current market champion.
  • An expanded MATLAB scripting interface that now allows design entry as well as simulation control.