
Posts Tagged ‘Automotive’

A Brief History of Verification

Thursday, March 2nd, 2017

Gabe Moretti, Senior Editor

This year’s DVCon is turning out to be very interesting.  It may be because of something Jim Hogan said.  Jim insisted, in more than one panel, that we are at the cusp of a major change: an opportunity for startups with a greater probability of revolutionary success than we have seen in the last few years in the area of verification and, I would add, circuit design.

His observations got me thinking and reviewing my experiences in the various aspects of verification I have been involved in.  What follows is my existential history of verification.

Logic is What Counts

When I wrote my first logic simulator in 1968, I only had to deal with two-state Boolean logic.  A node was either on or off; everything else about the circuit was not important.  It was not very long before things got more complex.  Soon Boolean logic had three states, and I and other verification engineers met the floating, high-impedance state.  Shortly thereafter memories became a real physical presence in designs, and four-state logic became the norm: we had to know whether a bit had been initialized in a deterministic manner or not.  From then on, verification engineers’ major problem was the size of the circuit to be simulated and how efficient we could make the simulator so that run times were as short as possible.  And, by the way, this is still going on today, although for more complex reasons.
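
For readers who have never looked inside a logic simulator, here is a minimal sketch in C of what four-state logic means in practice: a four-value signal type covering 0, 1, Z (floating) and X (unknown), and the resolution of a simple AND gate.  The type, names, and encoding are my own illustration, not taken from any particular simulator.

#include <stdio.h>

/* Illustrative four-state logic values: 0, 1, Z (floating / high impedance),
   and X (unknown / uninitialized).  The encoding and names are invented
   for this sketch, not taken from any particular simulator. */
typedef enum { V0, V1, VZ, VX } logic4;

/* Resolution of a two-input AND gate under four-state semantics:
   a 0 on either input forces the output to 0; 1 AND 1 gives 1;
   any other combination (a floating or uninitialized input) is unknown. */
static logic4 and4(logic4 a, logic4 b) {
    if (a == V0 || b == V0) return V0;
    if (a == V1 && b == V1) return V1;
    return VX;
}

int main(void) {
    const char *name[] = { "0", "1", "Z", "X" };
    /* Print the full 4x4 AND truth table. */
    for (int a = V0; a <= VX; a++)
        for (int b = V0; b <= VX; b++)
            printf("%s AND %s = %s\n", name[a], name[b],
                   name[and4((logic4)a, (logic4)b)]);
    return 0;
}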

Enter Physics

As transistors got smaller and smaller, verification engineers found that physics grew in importance.  We could no longer assume that a circuit was well behaved in the physical domain, but had to worry about power, electromagnetic effects, and thermal characteristics.  Verification engineers had to understand physics or, better yet, physics professionals had to join the ranks of verification engineers.

Today, thanks in part to logic synthesis, Boolean logic issues are well understood, and development work is focused on the physical characteristics of the circuit and, just to make things interesting, its immediate environment.  Today’s complex problems deal with clock network management and its interaction with the power distribution and consumption that determine the functional state of the circuit.  How power transmission affects nearby circuitry, how the current of electrons warms the device, and how temperature affects the probability that a transistor can maintain the state it is supposed to hold are today’s issues.

Behavioral Scientists Find a New Job

There has been much talk at this year’s DVCon about safety and security, issues that are really behavioral and not about logic or physics.  During a lunch panel on Tuesday, the importance of safety and security in complex systems, often employing diverse and distributed circuitry, was identified as the next important aspect of a system to verify.  Defense, automotive, and mobile applications were used as examples.  More than one panelist, with Jim Hogan again assuming a leading role, spoke about the desirability of self-correcting systems.  If a safety failure or a security breach occurred, the most desirable outcome would be for the circuitry to prevent the occurrence or fix itself, thus limiting the damage and the cost of repairs.  Verification engineers will have to deal with behavioral science in order to understand the dangers, build a knowledge base of past incidents, and develop circuitry that will identify and prevent the faulty behavior in the future.  This, of course, requires that we can define what good behavior is and what would signify a deviation from it.

Can We Do Better?

Now that I no longer develop verification tools, I think that both hardware and software developers have gotten lazy.  Transistors are now assumed to be so abundant that no one ever asks, “Will this fit?” since the answer is “Of course, and with transistors to spare.”  We design and develop complex circuits not because they have to be complex but because they can be.

The amount of data produced by the various verification tools is so large that it is becoming impossible for a human to correlate all of it into a meaningful indication of the wellness and effectiveness of the design.  Tools to manage big data are required, especially when decisions must be made in the field during actual deployment.

And as transistors grow smaller and smaller in order to provide more of them every couple of years, we are unconsciously marching toward the boundary of Einstein’s space-time and closer and closer to quantum physics.  What will happen when, one day, we can determine the state of a transistor or the trajectory of its electrons, but not both at the same time?

I believe that it is time to go back in time and ask ourselves, “Is there another implementation that is simpler and uses fewer transistors?” the same way I used to ask, “How can I make my program fit in 8K bytes of memory?”  Verification engineers must develop efficiency measurement tools that warn design engineers when their designs are not efficient.  Not for the sake of elegance, but for the sake of simplicity.  Simplicity in circuit design means fewer physical problems and fewer behavioral problems.

The best business opportunity of this approach, of course, is in the area of IP.  Design, develop, and market the most efficient implementation of a function and, assuming you have some understanding of business management, you will be a successful entrepreneur.

Siemens Acquisition of Mentor Graphics is Good for EDA

Tuesday, November 15th, 2016

Gabe Moretti, Senior Editor

Although Mentor has been the subject of takeover attempts in the past, the business specifics of those transactions were never favorable to Mentor.  The acquisition by Siemens, instead, is a favorable occurrence for the third-largest EDA company.  This time both companies obtain positive results from the deal.

Siemens is acquiring Mentor following the direction set forth in 2014, when its Vision 2020 plan was first discussed in public.  The six-year plan describes the steps the company should take to better position itself for the kind of world it envisions in 2020.

The Vision 2020 document calls for operational consolidation and optimization during 2016 and 2017.  It also selects three of its business divisions as critical to corporate growth, calling them the E-A-D system: Electrification, Automation, and Digitalization.

Although Mentor technology and products may turn out to be strategic in Electrification, they are certainly of significant importance in the other two areas: Digitalization and Automation.  Digitalization, for example, covers vehicle automation, including smart cars and vehicle-to-vehicle communication.  Mentor already has an important presence in the automotive industry and can help Siemens move from state-of-the-art electronic car management to the new systems required by the self-driving automobile, and to the complete integration of those components into an intelligent system that includes vehicle-to-vehicle communication.

Mentor also has experience in industrial robots.  What is, in my mind, more remarkable is that Mentor’s PCB and cabling businesses, often minimized in an EDA industry dominated by the obsession with building ICs, are the parts that implement and integrate electronic systems in the products designed and built by third parties.

With its presence in the PCB and cabling markets, Mentor can bring additional customers to Siemens, as well as insight into future market requirements and limitations that will serve Siemens extremely well in designing the next generation of industrial robots.

Of course, Mentor will also find an increased internal market as divisions of Siemens outside the E-A-D triumvirate utilize its products and services.

Siemens describes itself as an employee-oriented company, so present Mentor employees should not have to fear aggressive cost cutting and consolidation.  Will Mentor change?  Of course; it will adapt gradually to the new requirements and opportunities the Siemens environment will create and demand, but the key word is “gradually”.  Contrary to the acquisition of ARM by SoftBank, where the acquiring company had no previous activity in ARM’s business, Siemens has been active in EDA internally, both in its research labs and through strong connections with university programs that originated a number of European EDA startups.  Siemens executives have an understanding of what EDA is all about and what it takes to be successful in EDA.  The result, I expect, is little interference and second guessing, which translates into continued success for Mentor for as long as it is capable of it.

Will other EDA companies benefit from this acquisition?  I think they will.  First of all, it attracts more attention to our industry from the financial community, but it is also likely to increase competition among the “big 3”, forcing Cadence and Synopsys to focus more on key markets while diversifying into related markets such as optical, security, and software development.  In addition, I see no reason why an EDA company could not enter into a business partnership with some of its customers to explore new revenue-generating business models.

Cadence Introduced Tensilica Vision P5 DSP

Thursday, October 8th, 2015

Gabe Moretti, Senior Editor

DSP devices are indispensable in electronic products that deal with the outside environment.  Whether one needs to see, to touch, or in any way gather information from the environment, DSP devices are critical.  Improvements in their performance characteristics, therefore, have a direct impact not only on the capability of a circuit but, more importantly, on its level of competitiveness.  Cadence Design Systems has just announced the Cadence Tensilica Vision P5 digital signal processor (DSP), which it calls its flagship high-performance vision/imaging DSP core.  Cadence claims that the new imaging and vision DSP core offers up to a 13X performance boost, with an average of 5X less energy usage on vision tasks, compared to the previous-generation IVP-EP imaging and video DSP.

Jeff Bier, co-founder and president of Berkeley Design Technology, Inc. (BDTI) noted that: “There is an explosion in vision processing applications that require dedicated, efficient offload processors to handle the large streams of data in real time.  Processor innovations like the Tensilica Vision P5 DSP help provide the backbone required for increasingly complex vision applications.”

The Tensilica Vision P5 DSP core includes a significantly expanded and optimized Instruction Set Architecture (ISA) targeting mobile, automotive advanced driver assistance systems (or ADAS, which includes pedestrian detection, traffic sign recognition, lane tracking, adaptive cruise control, and accident avoidance) and Internet of Things (IoT) vision systems.

“Imaging algorithms are quickly evolving and becoming much more complex – particularly in object detection, tracking and identification,” stated Chris Rowen, CTO of the IP Group at Cadence. “Additionally, we are seeing a lot more integrated systems with multiple sensor types, feeding even more data in for processing in real time. These highly complex systems are driving us to provide more performance in our DSPs than ever before, at even lower power. The Tensilica Vision P5 DSP is a major step forward to meeting tomorrow’s market demands.”

Modern electronic system architectures treat hardware and software with the same amount of attention.  The two must balance each other in order to achieve the best possible execution while minimizing development costs.  The Tensilica Vision P5 DSP further improves the ease of software development and porting, with comprehensive support for integer, fixed-point, and floating-point data types and an advanced toolchain with a proven, auto-vectorizing C compiler.  The software environment also features complete support for the standard OpenCV and OpenVX libraries, with over 800 library functions, for fast, high-level migration of existing imaging/vision applications.
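
To make the auto-vectorization point concrete, here is a hypothetical, simplified example of the kind of loop such a toolchain is built to handle: a saturating add of two 8-bit image planes, written in plain C.  The function and buffer names are my own; the relevant property is that every iteration is independent, so an auto-vectorizing compiler is free to process many pixels per instruction on a wide vector datapath.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical vision kernel: saturating add of two 8-bit image planes.
   Every iteration is independent, which is exactly the shape of loop an
   auto-vectorizing compiler can map onto a wide vector datapath. */
static void add_saturate_u8(const uint8_t *a, const uint8_t *b,
                            uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned sum = (unsigned)a[i] + (unsigned)b[i];
        out[i] = (uint8_t)(sum > 255u ? 255u : sum);  /* clamp to 0..255 */
    }
}

int main(void) {
    uint8_t a[4] = { 10, 200, 255, 0 };
    uint8_t b[4] = { 20, 100,   1, 0 };
    uint8_t out[4];
    add_saturate_u8(a, b, out, 4);
    for (int i = 0; i < 4; i++)
        printf("%u ", out[i]);      /* prints: 30 255 255 0 */
    printf("\n");
    return 0;
}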

The Tensilica Vision P5 DSP is specifically designed for applications requiring ultra-high memory and operation parallelism to support complex vision processing at high resolution and high frame rates. As such, it allows off-loading vision and imaging functions from the main CPU to increase throughput and reduce power. End-user applications that can benefit from the DSP’s capabilities include image and video enhancement, stereo and 3D imaging, depth map processing, robotic vision, face detection and authentication, augmented reality, object tracking, object avoidance and advanced noise reduction.

The Tensilica Vision P5 DSP is based on the Cadence Tensilica Xtensa architecture and combines flexible hardware choices with a library of DSP functions and numerous vision/imaging applications from Cadence’s established ecosystem partners.  It also shares the comprehensive Tensilica partner ecosystem for other application software, emulation and probes, silicon, services, and much more.  The Tensilica Vision P5 core includes these new features:

  • Wide 1024-bit memory interface with SuperGather technology for maximum performance on the complex data patterns of vision processing
  • Up to 4 vector ALU operations per cycle, each with up to 64-way data parallelism
  • Up to 5 instructions issued per cycle from a 128-bit-wide instruction, delivering increased operation parallelism
  • Enhanced 8-, 16-, and 32-bit ISA tuned for vision/imaging applications
  • Optional 16-way IEEE single-precision vector floating-point processing unit delivering a massive 32 GFLOPS at 1 GHz (see the arithmetic note below)
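
As a sanity check on that headline number, the 32 GFLOPS figure in the last bullet is consistent with the usual way peak vector floating-point throughput is counted, assuming each of the 16 single-precision lanes completes one multiply-accumulate, counted as two floating-point operations, every cycle:

16 lanes × 2 FLOPs per lane per cycle × 1 GHz = 32 GFLOPS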

Design Automation Is More Than EDA

Wednesday, April 8th, 2015

Gabe Moretti, Senior Editor

In a couple of months the EDA industry will hold its yearly flagship conference: the Design Automation Conference (DAC).  Just a few days ago Vic Kulkarni, SVP & GM of the RTL Power Business at the ANSYS-Apache Business Unit, told me that the industry should be called the Design Automation Industry, not EDA.  In reality, both the focus of the EDA industry and the content of DAC are mostly IC design.  That activity is now only a portion of system design, so the conference does not live up to its name, and the industry does not support system design in its entirety.

For almost its entire life the EDA industry has focused only on hardware, and it has done that well.  That was certainly enough when products were designed with the assumption that an electronic system’s purpose was to execute application software with the help of an operating system and associated device drivers.

A few years ago the EDA industry realized that the roles of hardware and software had changed.  Software was being used in place of hardware to implement system tasks and to “personalize” systems that were not end-user programmable.  It then became necessary to verify these systems, and new markets, those of virtual prototyping and hardware/software verification, grew and continue to grow.  Yet the name remains Electronic Design Automation, and most of its popular descriptions, such as the one in Wikipedia, deal only with hardware design and development.

The Internet of Things (IoT), where intelligent electronic products are interconnected to form a distributed and powerful data acquisition, analysis, and control system, is considered a major growth area for the electronics industry and thus for EDA.  A small number of companies, ANSYS and MathWorks for example, realized some time ago that a system is much more than “just” electronics and software, and these companies now have an advantage in the IoT market segment.

By developing and verifying only the electronic portion of a product, traditional EDA companies run the risk of fooling designers into thinking that the product is as efficient and secure as it can be.  In fact, even virtual prototyping tools cannot make an absolute statement regarding the robustness of the software portion of the design.  All that can be said using these tools is that the interface between hardware and software has been sufficiently verified and that the software appears to work correctly when used as intended.  But a number of organizations and individuals have pointed to critical security issues in existing real-time systems that are the precursors of the IoT.  The recent attention to security in automotive applications is an example.

The use of MEMS in IoT applications should push EDA leaders to ask why MCAD is still considered a discipline separate from EDA.  The interface to, and concurrent execution of, the electro-mechanical system is as vital as the EDA portion.  Finally, the EDA industry has another weakness when it comes to providing total support for IoT products.  Nature is analog, yet the analog portion of the EDA industry lags significantly behind the development achieved in the digital portion.  Digital tools offer only approximations that have been, and still are, good enough for simpler systems.  Leading-edge processes are clearly showing that such approximations yield results that are no longer acceptable, and product miniaturization has created a new set of issues that are almost entirely in the analog domain.

The good news is that the industry has always managed to catch up in response to customers’ demand.  The hope offered by IoT growth is not simply revenue growth due to new licenses, but a maturing of the industry to finally provide support for true system design and verification.

EDA in 2015: Something New and Something Old

Monday, December 1st, 2014

Gabe Moretti, Senior Editor

Every year I like to contact EDA companies to ask what they expect in the coming year; this time, that meant 2015.  When I started working on this project I envisioned one article offering the opinions of EDA industry leaders from many companies, covering their expectations for the coming year.  As work progressed I found, as I should have expected, that the responses to my questions were much broader and deeper than could possibly be covered in one article.  Doing so would have produced an article so long that it would have exceeded the time most engineers have to read a magazine, even a digital one.

So I decided to publish three articles in addition to this introductory blog.  The decision is based mostly on the amount of feedback I received, and in part on how stand-alone each input was.  It turns out that both Cadence and Mentor provided me with material that can be judged a contributed article in its own right, while the rest of the companies submitted contributions that could be grouped into an article, albeit one significantly longer than normal.  The articles will be published during this week, one article per day.

Designers, architects, and verification engineers will find worthwhile material in more than one article.  One subject that is receiving attention lately and that is not covered directly in the articles is Agile IC Methodology.  In truth, Chi-Ping Hsu of Cadence touches on the issue in his article, but not in terms of the conversation going on under the auspices of Sonics, Inc.  I am sure that I will write about Agile IC Methodology in 2015, so the subject will receive its due attention.

Verification and mixed-signal design are the subjects that have received the greatest attention.  But it is important to acknowledge the underlying drivers of that attention: hardware/software co-design and the use of third-party IP.  These are the true technology drivers.  From a market point of view, automotive looms large.  This market has been developing for a few years and has now reached the point at which it can approach its full potential.  Distributed intelligence and “thing to thing” communication and cooperation are within the grasp of product developers.  The automobile is the first working implementation of the Internet of Things (IoT).  The IoT is on everyone’s mind in our industry, and the intelligent automobile, even if such a product does not really use the internet architecture in most instances, is often used as an example.  The IoT will certainly be a significant driver of our industry, but its growth ramp in 2015 will still be linear as we continue to work out what the hierarchical architecture should really look like.  At this point anything that could possibly generate data is seen as a good prospect, but soon the market will discover that much of that data may be interesting but is not necessary, and in fact would just clutter one’s life.  As usual, customers’ demand will inject sense into the market.

In a time when all festivities seem to start two months before they actually occur, let me be one of the first to wish all of you a productive and peaceful 2015.