
Chip Design Magazine



Posts Tagged ‘Cadence’


Blog Review – Monday May 18, 2015

Monday, May 18th, 2015

Zynq detects pedestrians; ARMv8-A explained; Product development demands test; Driving connectivity; Celebrating Constellations; Chip challenges

Michael Thomas, ARM, helpfully advises readers that The Cortex-A Series Programmer’s Guide for ARMv8-A is available and introduces what is in the guide as a taster of the architecture’s features.

The Embedded Vision Summit gives many bloggers material for posts. The first is Steve Leibson, Xilinx, who includes a video MathWorks presented there, with a description of a real-time pedestrian detector running on a Zynq-based platform, built using MathWorks’ Simulink and HDL Coder.

Another attendee was Brian Fuller, Cadence, who took away the secrets to successful product development, which he sums up as: test, test, test. (He does elaborate beyond that in his detailed blog, reviewing the keynote by Dyson’s Mike Aldred.)

Anticipating another event, DAC, Ravi Ravikumar, Ansys, looks at the connected car and the role of design in intelligent vehicles.

Also with an eye on DAC, Rupert Baines, UltraSoC, has a guest blog at IP-Extreme, praising the Constellations initiative with some solid support – and some restrained back-slapping.

Continuing a verification series, Harry Foster, Mentor, looks at the FPGA space and reflects on how the industry makes choices in formal technology.

A guest blog at Chip Design by Dr. Bruce McGaughy, ProPlus Design Solutions, looks at what innovative chip designs mean for chip designers. His admiration for the changing pace of design is balanced with identifying the drivers for low-power design to meet the demands of the portable IoT era.

Why do we need HDCP 2.2 and what do we need to do to ensure cryptography and security? These are addressed, and answered, by VIP Experts, Synopsys, in this informative blog.

By Caroline Hayes, Senior Editor

The Car as an IoT Node: What are the Design Implications?

Monday, May 11th, 2015

By Chris Rowen, CTO of Cadence’s IP Group

It’s a pivotal moment in the history of automotive design. Not only is the percentage of electronics content in each automobile continuing to rise, but wireless technology has come to the car. This confluence of two technology forces changes how we view automobiles and automotive electronics in fundamental ways. And it raises a seemingly simple question: Should the intelligent car be considered a node on the Internet of Things (IoT)?

I embrace what you might call a big-tent definition of IoT. That is, I see a wide variety of applications outside the traditional clusters of gateways and servers, applications that sit between the cloud and a local device. These are applications where content is distributed, and distributed not just in response to human interaction.

Car first, Internet second

So I do believe that the intelligent car is a local device and, as such, constitutes a node on the Internet, but with some caveats. It shares many of the same design considerations as traditional IoT designs, but with a significant difference: the car, as a mode of transportation, has as its first priority safety. Its second priority is safety. Its third priority is safety. We’ll address this design implication shortly.

In the past decade or so, as electronics have enhanced safety priorities, they have also added a new dimension to the driver and passenger experiences. That makes the automobile a unique IoT node. These bifurcated features and applications require engineers to think about them in different ways.

Inside the car, we see many subsystems and elements analogous to popular IoT categories. There are various accelerometers, gyroscopes and magnetometers that are sensing motion or monitoring the performance of your car. Those are either already connected to the web or are a baby step away. But it’s worth noting in this context that those elements are primarily of the car and secondarily of the Internet.

With that in mind, we can view the automotive IoT node as having two major categories of functionality: mission-critical and entertainment/infotainment. And engineers need to design to these two areas with different considerations. If you have a bug and your navigation goes out, that’s one thing. If you have a bug and your brakes stop working, well, that’s a completely different situation.

Levels of security

The mission-critical components and subsystems in this automotive IoT node will require highly documented development processes such as ISO 26262 for functional safety. Where critical information could be corrupted and cause damage or death, engineering teams need not only functional safety qualification but also the ability to keep the system robust in the face of accidental or malicious intervention by rogue software.

Those systems will evolve slowly and come with enormous verification and certification requirements. They’ll have their own design constraints and their own development pace.

On the other side, as part of the human infotainment experience, we will see the car trying to look more like an open platform, like a smartphone. There, you won’t worry quite as much about security. There, you’ll have various wireless options, packaging and power considerations that will likely differ from those of the more mission-critical components. Design constraints will likely be more flexible and the development pace faster.

Another caveat has to do with data: traditional IoT applications suggest a more or less continual flow of information between edge devices and the cloud or the fog. With respect to the automotive IoT node, this dynamic is a little different. Manufacturers want to gather lots of data on the performance of their fleet to help fine-tune their design and manufacturing process. But we won’t see a lot of real-time, on-the-fly updates coming from the cloud. In other words, the notion that your car will be constantly fine-tuned for performance may be wishful thinking right now.

Where automotive design meets IoT demands

When we think about designing for the IoT, we think mostly about small form factors and ultra-tight power budgets in the context of wearables, which are the poster child for today’s IoT applications.

In automotive designs, power is somewhat less of an issue, but form factor and weight can still be important considerations. And because the success of early intelligent vehicles is fueling more customer demand, design cycles are being pressured.

This suggests the need for design flows that leverage much more integration, particularly IP integration, and newer verification considerations as software takes its seat at the automotive design table.

We are seeing many more sensor fusion solutions in automotive designs today, along with the need (because of the interface to the analog world) for robust floating-point computing, and integrated features like audio, voice and speech (AVS) processing for voice recognition and triggering.

And while cars are big-ticket items, that does not mean that cost considerations at the component level are not paramount.

All of these are considerations that engineering teams in the traditional IoT space wrestle with. So the intelligent car is absolutely part of the IoT. The potential for optimizing not only the car and the driving experience but also the world around the car (think about better automated traffic-flow modeling) is profound. But engineers need to approach the design of the car’s subsystems thoughtfully, in ways that reflect the primacy of safety and the potential of cloud-enabled services.

Chris Rowen, who co-founded MIPS and Tensilica and is an expert on computing architectures, is CTO of Cadence’s IP Group.

Cadence Introduces the New Tensilica Fusion DSP

Thursday, April 23rd, 2015

Gabe Moretti, Senior Editor

Cadence Design Systems, Inc. has announced the new Cadence Tensilica Fusion digital signal processor (DSP) based on the Xtensa Customizable Processor. This scalable DSP is meant for applications requiring specialized computation, ultra-low energy and a small footprint.  The new DSP is well positioned for IoT applications as can be seen in Figure 1.

The device can be designed into chips for wearable activity monitoring, indoor navigation, context-aware sensor fusion, secure local wireless connectivity, face trigger, voice trigger and voice recognition. Optional Instruction Set Architecture (ISA) optimizations provide support for multiple wireless protocols, including Bluetooth Low Energy, Thread and ZigBee using IEEE 802.15.4, SmartGrid 802.15.4g, Wi-Fi 802.11n and 802.11ah, 2G, and LTE Category 0 (Release 12 and 13), as well as global navigation satellite systems (GNSS).

Tensilica claims that the Fusion DSP uses 25 percent less energy, based on running a power diagnostic derived from the Sensory Truly Handsfree always-on algorithm, when compared to the current industry-leading Cadence Tensilica HiFiMini low-power DSP IP core.

The Tensilica Fusion DSP combines an enhanced 32-bit Xtensa control processor architecture with DSP capabilities and flexible algorithm-specific acceleration for a fully programmable approach, supporting multiple existing and developing standards and custom algorithms. For many IoT applications that are space- and energy-constrained, the optimal solution is  to deploy a single small processor that can perform sensor processing, wireless communications and control.  IoT device designers can pick just the options they need using the Xtensa Innovation Platform to produce a Fusion processor which can be smaller and more energy-efficient, while having higher performance than typical one-size-fits-all processor cores.

The Tensilica Fusion DSP combines flexible hardware choices with a library of DSP functions and more than 140 audio/voice/fusion applications from over 70 partners. It also shares the Tensilica partner ecosystem for other applications software, emulation and probes, silicon and services, and much more.


Blog Review – Monday April 20, 2015

Monday, April 20th, 2015

Half a century and still quoted as relevant is more than most of us could hope to achieve. The 50th anniversary of Gordon Moore’s pronouncement, which we call Moore’s Law, is celebrated by Gaurav Jalan, as he reviews the observation first pronounced on April 19, 1965, which he credits with the birth of the EDA industry and the fabless ecosystem, among other things.

Another celebrant is Axel Scherer, Cadence, who reflects on not just shrinking silicon size but the speed of the passing of time.

On the same theme of what Moore’s Law means today for FinFETs and nano-wire logic libraries, Navraj Nandra, Synopsys, also commemorates the anniversary, with an example of what the CAD team has been doing with quantum effects at lower nodes.

At NAB (National Association of Broadcasters) 2015, in Las Vegas, Steve Leibson, Xilinx, had an ‘eye-opening’ experience at the CoreEL Technologies booth, where the company’s FPGA evaluation kits were the subject of some large screen demos.

Reminiscing about the introduction of the HSA Foundation, Alexandru Voica, Imagination Technologies, provides an update on why heterogeneous computing is one step closer now.

Dr. Martin Scott, senior VP and GM of Rambus’ Cryptography Research Division, recently participated in a Silicon Summit Internet of Things (IoT) panel hosted by the Global Semiconductor Alliance (GSA). In this blog he discusses IoT security, its opportunities for good, and its vulnerabilities.

An informative blog by Paul Black, ARM, examines the ARM architecture and DS-5 v5.21 DSTREAM support for debug, discussing power in the core domain and how to manage it for effective debug and design.

Caroline Hayes, Senior Editor

Blog Review – Monday April 06, 2015

Monday, April 6th, 2015

It’s always tricky looking at blogs on April 1st. So much technology, so many gags. I didn’t fall for Microsoft UK’s April Fool that Bing can read nerve pulses and brain waves to improve your web search, or HTC’s Rok the Sok, a smart tag which pairs socks in the wash and alerts the wearer when the sock is wearing thin. Many people downloaded Microsoft’s MS-DOS for Windows phones, and loved the joke. The most ‘successful’, or most reported, was CERN’s claim to have found The Force and that it was using it, Star Wars-style, to reheat coffee in a mug and return books to a bookshelf while remaining seated. I won’t be at the GSA Silicon Summit to get a chance to check McKenzie Mortensen’s claim that IPextreme’s Warren Savage has cut his long hair into a Silicon Valley ‘short back and sides’ – could it be another April Fool?

I decided to narrow down my Blog Review search to genuine posts only (I hope!).

Three boards and three ways to write code are discussed by Thomas Aubin, Atmel, interviewed by David Blaza, ARM, ahead of the ARM Embedded Computing Board resource guide.

The pressure to be smart is examined by Matthew Hall, Dassault Systemes. He has latched on to the findings of the Aberdeen Group, that engineering groups must communicate and collaborate to predict system behavior ahead of testing.

Laman Sahoo, Arrow Devices, identifies three sources of confusion for Object Oriented Programming, to take the ‘oops!’ out of OOP.

The reports of the death or slowing down of Moore’s Law are exaggerated, concludes Brian Fuller, in his interview with Suk Lee, Senior Director, Design Infrastructure Marketing division, Cadence, ahead of the TSMC Technology Symposium. In conversation, Fuller pushes Lee on the progress of process development down to 7nm as well as FinFET development.

Ahead of the Embedded Vision Conference, Jeff Bier, Berkeley Design Technology, looks at how academia and industry respond to neural networks.

3D printable heatsinks are examined by Robin Bornoff, Mentor Graphics, using FloTHERM and FloMCAD.

Larry Lapides, VP of sales at Imperas, discusses security on connected devices using MIPS CPUs.

A biblical theme is adopted for an Eastertime post by Ramesh Dewangan, Real Intent. The David and Goliath struggle of large and small EDA companies is reported from the Confluence 2015, where one panel was ‘The paradox of leadership: Incremental approach to Big Ideas’, and ‘How to build the technology organisations of tomorrow’.

An interesting smartphone app by Philips to control lighting via WiFi is explored by Ashish D, Intel, who drives the lights instead from an Intel Edison board.

Caroline Hayes, Senior Editor

Using Physically Aware Synthesis Techniques to Speed Design Closure of Advanced-Node SoCs

Monday, March 23rd, 2015

Gopi Kudva, Cadence


At smaller process nodes, chip designers are struggling to meet their aggressive schedules and power, performance, and area (PPA) demands in the ever-so-competitive system-on-chip (SoC) market. One of the most pressing problems designers are facing these days is not knowing how the netlist they produce in synthesis will work out in the place-and-route (P&R) process.

Not only does this lack of predictability impact the design itself, but it also dampens, unnecessarily, quality of life. After all, isn’t it always better when you know that what you created is good – and will allow you to go home at a reasonable time each evening, without worrying that an unknown problem will surface the next day?

At 28nm and below, SoCs are much more complex, making it more challenging than ever to meet PPA targets. Wires dominate the timing at these advanced nodes, so there’s a greater chance of encountering issues such as routing congestion and timing delays. You must cram more transistors into the die, and have to reduce dynamic and leakage power.

So, why are you still doing traditional synthesis?

Physically aware synthesis – the ability to bring in physical considerations much earlier in the logic synthesis process – is something that can dramatically improve the design process  and significantly shorten the time spent fixing problems. Let’s discuss some key physically aware synthesis techniques that can help you speed up the physical design closure process for your next high-performance, power-sensitive SoC.

Physically Aware Synthesis

Today’s physically aware synthesis technologies bring physical interconnect modeling earlier into the synthesis process to help you create a better netlist structure, one that’s more suitable for today’s P&R tools.

You can start with no floorplan, and allow the synthesis to come up with one. You can give it a very basic floorplan. But the better the floorplan you have, the better you can take advantage of global synthesis optimization with the more detailed physical interconnect. Essentially, you are getting rid of the old logical-physical barrier. You’ll no longer need to, with fingers crossed, wait for your “backend” engineer to say “yay” or “nay.”

There are four physically aware synthesis innovations that we will discuss here. They are:

- Physical layout estimation (PLE)
- Physically aware mapping (PAM)
- Physically aware structuring (PAS)
- Physically aware multi-bit cell inferencing (PA-MBCI)

Of course, before you can come up with a good floorplan, you need to have a good initial netlist. To create that initial netlist, you can still use physical information, via physical layout estimation (PLE). For this, you just need some basic physical information, such as LEF and cap tables/QRC tech files. The floorplan DEF is optional here.

PLE is a physical modeling technique for capturing timing closure P&R tool behavior for RTL synthesis optimization. It allows you to create a good initial netlist for floorplanning. And the result? Better timing-power-area balance. PLE:

- Uses actual design and physical library info
- Dynamically adapts to changing logic structures in the design
- Has the same runtime as synthesizing with wireload models
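
The gap PLE closes can be illustrated with a toy comparison between a fanout-only wireload model and a placement-derived net-length estimate. This is a minimal sketch with invented numbers, table values, and function names; it is not Cadence's actual model:

```python
# Toy contrast between a fanout-based wireload model and a
# placement-based (PLE-style) net-length estimate.

# Wireload model: net capacitance guessed from fanout alone.
WIRELOAD_TABLE = {1: 2.0, 2: 3.5, 3: 5.0, 4: 7.0}  # fanout -> est. cap (fF)

def wireload_cap(fanout):
    """Look up estimated net capacitance from fanout only."""
    return WIRELOAD_TABLE.get(fanout, 7.0 + 2.0 * (fanout - 4))

# PLE-style estimate: derive net length from pin placement using
# half-perimeter wirelength (HPWL), then scale by unit capacitance.
CAP_PER_UM = 0.2  # fF per micron, an illustrative cap-table value

def ple_cap(pin_coords):
    """Estimate net capacitance from the bounding box of its pins."""
    xs = [x for x, _ in pin_coords]
    ys = [y for _, y in pin_coords]
    hpwl = (max(xs) - min(xs)) + (max(ys) - min(ys))
    return hpwl * CAP_PER_UM

# A 3-pin net whose pins are spread far apart: the wireload model sees
# only "fanout 2", while the placement-based estimate sees the long span.
pins = [(0, 0), (400, 10), (20, 300)]
print(wireload_cap(2))   # fanout-only guess: 3.5 fF
print(ple_cap(pins))     # placement-aware: (400 + 300) * 0.2 = 140.0 fF
```

The point of the sketch is only the mechanism: the fanout-based guess is blind to where the pins land, while the placement-derived estimate responds to the actual geometry, which is why it correlates better with what P&R will see.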

Once you have a good initial netlist, you can create a good initial floorplan. Previously, this floorplan was used for P&R stages, and not in synthesis. But now, you can use this floorplan to allow the synthesis engine to “see” long wires before actually building the logic gates for the improved, physically aware netlist.

The steps in the latest physically aware RTL synthesis flow are shown in Figure 1. The three main steps in this synthesis flow are:

  1. Generic gate placement
  2. Global physical RTL optimization
  3. Global physical mapping

Figure 1: Physically aware RTL synthesis flow

Physically aware mapping (PAM)

PAM is all about improving timing with increased correlation.

- Initially places the optimized generic gates and macros
- Optimizes the placed generic gates and macros (RTL level optimization, including datapath optimization)
- Estimates routes and congestion for the placed generic gates and macros, taking into account physical constraints such as placement and routing blockages
- Performs parasitic (Resistance and Capacitance) extraction using a unique extraction method on the estimated routes

Figure 2. Physically aware mapping accounts for long wire delays in RTL synthesis

After generic gate placement, every wire in the design has a physical delay. The synthesis engine can now accurately “see” which paths are critical. Global synthesis now does timing-driven cell mapping based on physical wire delays, translating the generic gates into standard gates based on the provided technology library and creating an optimized netlist.

By considering real wire delays, PAM has demonstrated the ability to deliver up to 15% improved timing. After all, if you know in advance that a certain wire will be long and you know where that extra delay is because of the long wire, you can structure the netlist more accurately to account for these delays. With this knowledge, synthesis is also in a better position to “squeeze” critical paths and “relax” non-critical paths based on wire delays.
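
The "squeeze critical, relax non-critical" idea can be sketched as a toy mapping decision: given a wire delay estimate and the path's slack, pick the smallest drive strength that still meets timing. The cell names, delays, and areas below are invented for illustration and are not from any real library:

```python
# Toy sketch of timing-driven cell mapping: spend area (stronger
# drivers) only where the physical wire delay makes the path critical.

# Hypothetical library: (cell, gate delay in ns, relative area),
# ordered from smallest drive strength to largest.
BUF_CHOICES = [("BUF_X1", 0.30, 1.0), ("BUF_X2", 0.20, 1.8), ("BUF_X4", 0.12, 3.2)]

def map_net(wire_delay_ns, slack_ns):
    """Pick the smallest driver whose gate + wire delay fits the slack."""
    for cell, gate_delay, area in BUF_CHOICES:   # smallest first
        if gate_delay + wire_delay_ns <= slack_ns:
            return cell
    return BUF_CHOICES[-1][0]                    # critical: use strongest

# A short wire with generous slack gets a small cell; a long wire on a
# tight path is upsized.
print(map_net(0.1, 0.5))   # -> BUF_X1
print(map_net(0.9, 1.0))   # -> BUF_X4
```

Without a physical wire delay estimate, the mapper would have to size both nets identically; with it, the relaxed net saves area and power while the critical one gets the drive strength it needs.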

Physically aware structuring (PAS)

What PAS does:

- Provides optimized binary/one-hot multiplexer (mux) selection
- Targets high-congestion structures, such as cross bars, barrel shifters, and memory-connected mux chains
- Decomposes a large mux into a set of smaller muxes, each of which can potentially share the decode logic. Decoding logic, in turn, is intelligently partitioned using physical input pin knowledge.
- Generates congestion-aware decode islands via smarter select line sharing and duplication

The result of PAS: better placement that decreases routing congestion.

Figure 3. Physically aware structuring RTL synthesis flow
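
The decomposition step itself can be sketched as a recursive split of one wide mux into a tree of smaller muxes. This toy omits the physical awareness, which would additionally partition and place the shared decode logic near the input pins; the `decompose_mux` helper and its radix are illustrative assumptions:

```python
# Toy decomposition of one wide N:1 mux into a tree of radix:1 muxes,
# the flavor of restructuring PAS performs on large mux structures.

def decompose_mux(inputs, radix=4):
    """Recursively split an N:1 mux into levels of radix:1 muxes.
    Returns the input list of the final (top-level) small mux; each
    entry is a string describing the smaller mux tree feeding it."""
    if len(inputs) <= radix:
        return inputs
    groups = [inputs[i:i + radix] for i in range(0, len(inputs), radix)]
    # Each group becomes one small mux; its output feeds the next level.
    outputs = [f"mux({','.join(g)})" for g in groups]
    return decompose_mux(outputs, radix)

# A 16:1 mux becomes four 4:1 muxes feeding one final 4:1 mux.
leaves = [f"d{i}" for i in range(16)]
top = decompose_mux(leaves)
print(len(top))  # -> 4 inputs at the top level
```

Each smaller mux is independently placeable, which is what lets the tool spread the structure out and share or duplicate select-line decoding where congestion demands it.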

To illustrate the benefits provided by PAS and PAM, we considered a Flash memory design whose floorplan had a small channel of digital logic surrounded by Flash memory. This design suffered from congestion and timing issues due to a poor logical synthesis wire model. Timing closure was impossible. Once the engineering team utilized Cadence® Encounter® RTL Compiler Advanced Physical Option, which features the physically aware synthesis capabilities we have been discussing, TNS improved from ~12,400ns to ~750ns. The technology helped improve timing correlation by identifying long paths during physical synthesis, and it also helped identify and alleviate congestion. In the end, the engineering team was pleased to see significantly reduced design turnaround time and fewer synthesis-to-place-and-route iterations.

As another example, we have a networking SoC with a one-million-instance block containing a large volume of muxes. Initially, with traditional synthesis, the engineers working on this design faced significant horizontal and vertical congestion, so the design was not routable. Using the physically aware capabilities of Encounter RTL Compiler Advanced Physical Option, the engineering team met their timing and area goals with a routable design with little congestion.

Typically, just to get the design to route, engineers have to “pad” the layout heavily to account for the bad structure of the netlist! The wires are also longer due to the extra spacing the padding creates, which leads to extra buffering and increased power. With physically aware synthesis, you can remove the extra padding and margins, thereby reducing area significantly and shrinking the die, lowering wire length and power.
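
The area cost of padding is easy to see with back-of-the-envelope arithmetic. The utilization targets and cell area below are invented for illustration:

```python
# Toy arithmetic: the same standard cells placed at a lower target
# utilization (i.e., with more padding) need a proportionally larger
# placement region.

cell_area_um2 = 2_000_000   # total standard cell area (illustrative)

def die_area(target_utilization):
    """Placement area needed so cells occupy only `target_utilization`."""
    return cell_area_um2 / target_utilization

padded = die_area(0.60)   # heavy padding to survive congestion
tight = die_area(0.75)    # better-structured netlist routes cleanly
saving = 1 - tight / padded
print(f"{saving:.0%} smaller placement area")  # -> 20% smaller
```

Under these assumed numbers, raising achievable utilization from 60% to 75% shrinks the placement region by a fifth, before counting the secondary wirelength and buffering savings.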

Physically aware multi-bit cell inferencing (PA-MBCI)

Multi-bit cell inferencing (MBCI) merges single-bit flops into multi-bit flops. Using a physically aware MBCI (PA-MBCI) synthesis strategy can help reduce total chip power, with 10% or better dynamic power savings in many cases!

In synthesis, you can merge single-bit flops into multi-bit flops using either a logical or a physical method. With the logical method, synthesis considers only the netlist and converts as many flops as possible into multi-bit flops without considering flop locations and proximity. The disadvantage of this method is that flops at two opposite ends of the floorplan could end up merged, creating a placement problem and unnecessarily long wires, which in turn can create timing and routing problems.

Encounter RTL Compiler Advanced Physical Option features physically aware multi-bit merging, which merges sequential cells while considering their compatibility and physical neighborhood from the natural placement. This is a “correct by construction” process that ensures flops are merged after placement only when there is a benefit in a specific cost factor (typically timing, area, leakage, or dynamic power), while not degrading the other cost factors.

The result: the PA-MBCI process avoids timing degradation, reduces wirelength, minimizes congestion, and reduces power.
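
The contrast between the two merging methods can be sketched with a toy pairing routine. The flop names, coordinates, and distance threshold are invented; a real tool would also check clock, reset, and enable compatibility:

```python
# Toy contrast: logical merging would pair flops in netlist order,
# ignoring placement; physically aware merging pairs only flops that
# sit near each other in the natural placement.

def manhattan(a, b):
    """Manhattan distance between two (x, y) placement points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def merge_physically_aware(flops, max_dist=50):
    """Greedily pair flops whose placement distance is within max_dist."""
    pairs, used = [], set()
    for name, pos in flops.items():
        if name in used:
            continue
        best = min(
            ((other, p) for other, p in flops.items()
             if other != name and other not in used
             and manhattan(pos, p) <= max_dist),
            key=lambda op: manhattan(pos, op[1]),
            default=None,
        )
        if best:
            pairs.append((name, best[0]))
            used.update({name, best[0]})
    return pairs

flops = {"q0": (0, 0), "q1": (900, 900), "q2": (10, 5), "q3": (905, 910)}
# Netlist order would merge (q0, q1): opposite corners, a long wire.
# Placement-aware merging pairs neighbors instead:
print(merge_physically_aware(flops))  # -> [('q0', 'q2'), ('q1', 'q3')]
```

The placement-aware pairing keeps merged bits adjacent, which is precisely what avoids the long cross-floorplan wires the logical method can create.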

Multi-Bit Flops – Advantages and Best Practices

As an example of the benefits of using an MBCI flow, let’s take a look at the impact of this flow on the development of a design based on an advanced-node embedded processor. Compared to using traditional synthesis techniques, applying physically aware synthesis to this processor yielded:

- 15% clock tree area reduction
- 60% TNS improvement (improved hold timing)
- 6.4% dynamic power reduction
- 4% leakage power reduction
- 4.7% routing reduction

Tips and Tricks

To get optimal results from physically aware synthesis, consider these techniques:

- For generating an initial netlist, use PLE
- Use this PLE netlist to create a starting floorplan
- With this floorplan, perform synthesis starting from RTL
- Enable PAM
- If your design has high-congestion structures, such as cross bars, barrel shifters, and memory-connected mux chains, enable PAS
- If you have multi-bit flops in your technology libraries, enable PA-MBCI
- Now you have a physically aware netlist: use Encounter RTL Compiler Advanced Physical Option to perform standard cell placement and optimization

Note that in this article, we only discussed generic gate placement, not standard cell placement.

Physically aware synthesis techniques can help accelerate the physical design closure process for high-performance, power-sensitive SoCs at 28nm and below.


Given the challenges of aggressive schedules, dominant wires, and the need for improved PPA in advanced-node SoCs, there’s a greater chance for routing congestion and delays in tapeout due to PPA issues. By accounting for physical considerations much earlier in the logic synthesis process, physically aware synthesis can help accelerate physical design closure. Physically aware synthesis techniques such as PLE, PAS, PAM, and PA-MBCI—available in Cadence’s Encounter RTL Compiler Advanced Physical Option—are contributing to better PPA and faster design convergence for advanced-node designs.

Blog Review – Monday, March 23, 2015

Monday, March 23rd, 2015

Warren Savage, IPextreme, has some sage, timely advice that applies to crossword solving, meeting scheduling and work flows.

At the recent Open Power Summit, Convey Computer announced the Coherent Accelerator Processor Interface (CAPI) development kit based on its Eagle PCIe coprocessor board. Steve Leibson, Xilinx, has a vested interest in telling more, as the accelerator is based on the Xilinx Virtex-7 980T FPGA.

Gloomy predictions from Zvi Or-Bach, MonolithIC 3D, who puts a line in the sand at the 28nm node as smartphone and tablet growth slows.

Saying you can see unicorns is not advisable in commerce, but Ramesh Dewangan, Real Intent, has spotted some at Confluence 2015. Where, he wonders, are the unicorns of the EDA industry?

ARM’s use of Cadence’s Innovus Implementation System software to design the ARM Cortex-A72 is discussed by Richard Goering, Cadence. As well as the collaboration, the virtues of ARM’s ‘highest performance and most advanced processor’ are highlighted.

ARM has partnered with the BBC, reveals Gary Atkinson, ARM, on the broadcaster’s Make it Digital initiative. One element of the campaign is the Microbit project, in which every child in school year 7 (11-12 years old) will be given a small ARM-based development board that they can program using a choice of software editor. Teachers will be trained, and there will be a suite of training materials and tutorials so that every child can program their first IoT device.

Mentor Graphics is celebrating a win at the first annual LEDs Magazine Sapphire Award in the category of SSL Tools and Test. Nazita Saye, Mentor Graphics, is in Hollywood Report mode and reviews the awards.

Responding to feedback from readers, Satyapriya Acharya, Synopsys, posts a very interesting blog about verifying the AMBA system-level environment. It is well thought out and informative, with the promise of more on the capabilities needed in a system monitor to perform checks.

Cadence Introduces Innovus Implementation System

Friday, March 13th, 2015

Gabe Moretti, Senior Editor

Cadence Design Systems  has introduced its Innovus Implementation System, a next-generation physical implementation solution that aims to enable system-on-chip (SoC) developers to deliver designs with best-in-class power, performance and area (PPA) while accelerating time to market.  The Innovus Implementation System was designed to help physical design engineers achieve best-in-class performance while designing for a set power/area budget or realize maximum power/area savings while optimizing for a set target frequency.

The company claims that the Innovus Implementation System provides typically 10 to 20 percent better power/performance/area (PPA) and up to 10X full-flow speedup and capacity gain at advanced 16/14/10nm FinFET processes as well as at established process nodes.

Rod Metcalfe, Product Management Group Director, pointed out the key Innovus capabilities:

- New GigaPlace solver-based placement technology that is slack-driven and topology-/pin access-/color-aware, enabling optimal pipeline placement, wirelength, utilization and PPA, and providing the best starting point for optimization
- Advanced timing- and power-driven optimization that is multi-threaded and layer aware, reducing dynamic and leakage power with optimal performance
- Unique concurrent clock and datapath optimization that includes automated hybrid H-tree generation, enhancing cross-corner variability and driving maximum performance with reduced power
- Next-generation slack-driven routing with track-aware timing optimization that tackles signal integrity early on and improves post-route correlation
- Full-flow multi-objective technology that enables concurrent electrical and physical optimization to avoid local optima, resulting in the most globally optimal PPA

The Innovus Implementation System also offers multiple capabilities that boost turnaround time for each place-and-route iteration. Its core algorithms have been enhanced with multi-threading throughout the full flow, providing significant speedup on industry-standard hardware with 8 to 16 CPUs. Additionally, it features what Cadence believes to be the industry’s first massively distributed parallel solution that enables the implementation of design blocks with 10 million instances or larger. Multi-scenario acceleration throughout the flow improves turnaround time even with an increasing number of multi-mode, multi-corner scenarios.

Rahul Deokar, Product Management Director, added that the product offers a common user interface (UI) across synthesis, implementation and signoff tools, and data-model and API integration with the Tempus Timing Signoff solution and Quantus QRC Extraction solution.

The Innovus common GUI

Together these solutions enable fast, accurate, 10nm-ready signoff closure that facilitates ease of adoption and an end-to-end customizable flow. Customers can also benefit from robust visualization and reporting that enables enhanced debugging, root-cause analysis and metrics-driven design flow management.

“At ARM, we push the limits of silicon and EDA tool technology to deliver products on tight schedules required for consumer markets,” said Noel Hurley, general manager, CPU group, ARM. “We partnered closely with Cadence to utilize the Innovus Implementation System during the development of our ARM Cortex-A72 processor. This demonstrated a 5X runtime improvement over previous projects and will deliver more than 2.6GHz performance within our area target. Based on our results, we are confident that the new physical implementation solution can help our mutual customers deliver complex, advanced-node SoCs on time.”

“Customers have already started to employ the Innovus Implementation System to help achieve higher performance, lower power and minimized area to deliver designs to the market before the competition can,” said Dr. Anirudh Devgan, senior vice president of the Digital and Signoff Group at Cadence. “The early customers who have deployed the solution on production designs are reporting significantly better PPA and a substantial turnaround time reduction versus competing solutions.”

    DVCon Highlights: Software, Complexity, and Moore’s Law

    Thursday, March 12th, 2015

    Gabe Moretti, Senior Editor

    The first DVCon United States was a success.  It was the 27th conference in the series and the first under this name, which distinguishes it from DVCon Europe and DVCon India.  Both of those held their first events last year and, following their success, will be held again this year.

    Overall attendance, including exhibit-only and technical conference attendees, was 932.

    If we count exhibitor personnel, as DAC does, the total number of attendees was 1,213.  The conference attracted 36 exhibitors, including 10 exhibiting for the first time and 6 headquartered outside the US.   The technical presentations were very well attended, almost always with standing room only, averaging around 175 attendees per session.  One cannot fit more than that into the conference rooms the DoubleTree has.  The other thing I observed was that there was almost no attendee traffic during the presentations.  People took a seat and stayed for the entire presentation.  Almost no one came in, listened for a few minutes and then left.  In my experience this is not typical, and it shows that the goal of DVCon, to present topics of contemporary importance, was met.

    Process Technology and Software Growth

    The keynote address this year was delivered by Aart de Geus, chairman and co-CEO of Synopsys.  His speeches are always both unique and quite interesting.  This year he chose as his topic “Smart Design from Silicon to Software”.   As one could have expected, Aart’s major points had to do with process technology, something he is extremely knowledgeable about.  He thinks that Moore’s law, as an instrument for predicting semiconductor process advances, has about ten years of usable life.  After that the industry will have to find another tool, assuming one will be required, I would add.  Since, as Aart correctly points out, we are still using a 193 nm crayon to implement 10 nm features, progress is clearly and significantly impaired.  Personally I do not understand the reason for continuing to use ultraviolet light in lithography, aside from the huge cost of moving to x-ray lithography.  The industry has resisted the move for so long that I think even x-ray now has a lifespan too short to justify the investment.  So, before the ten years are up, we might see some very unusual and creative approaches to building features on some new material.  After all, whatever we end up using will have to respect atoms and their structure.

    For now, says Aart, most system companies are “camping” at 28 nm while evaluating “the big leap” to more advanced lithography processes.  I think it will be a long time, if ever, before 10 nm processes become popular.  Obviously the 28 nm process supports the area and power requirements of the vast majority of advanced consumer products.  Aart did not say it, but it is a fact that a very large number of wafers are still produced using a 90 nm process.  Dr. de Geus pointed out that the major factor in determining investments in product development is now economics, not available EDA technology.  Of course one can observe that economics is only a second-order decision-making tool, since economics is determined in part by complexity.  But Aart stopped at economics, a point he has made in previous presentations over the last twelve months.  His point is well taken, since ROI is greatly dependent on hitting the market window.

    A very interesting point made during the presentation is that the length of development schedules has not changed in the last ten years; the content has.  Development of proprietary hardware has gotten shorter, thanks to improved EDA tools, but IP integration plus software integration and co-verification have used up all the time savings in the schedule.

    What Dr. de Geus’s slides show is that software is growing, and will continue to grow, at about ten times the rate of hardware.  Thus investment in software tools by EDA companies makes sense now.  Approximately ten years ago, during a DATE conference in Paris, I had asked Aart about the opportunity for EDA companies, Synopsys in particular, to invest in software tools.  At that time Aart was emphatic that EDA companies did not belong in the software space.  Compilers are either cheap or free, he told me, and debuggers do not offer the right economic value to be of interest.  Well, without much fanfare about “investment in software”, Synopsys is now in the software business in a big way.  Virtual prototyping and software co-verification are market segments Synopsys is very active in, and making a nice profit from, I may add.  So, whether it is a matter of definition or of new market availability, EDA companies are in the software business.

    When Aart talks I always get reasons to think.  Here are my conclusions.  On the manufacturing side, we are tinkering with what we have had for years, afraid to make the leap to a more suitable technology.  From the software side, we are just as conservative.

    That software would grow at a much faster pace than hardware is not news to me.  In all the years that I worked as a software developer or manager of software development, I always found that software grows to use up the available hardware environment and is the major driver of hardware development, whether it is memory size and management or speed of execution.  My conclusion is that nothing is new: the software industry has never put efficiency at the top of its goals; the priority has always been making the programmer’s life easier.  Higher-level languages are more powerful because programmers can implement functions with minimal effort, not because the underlying hardware is used optimally.  And the result is that, when it comes to software quality and security, the users are playing too large a part as the verification team.

    Art or Science

    The Wednesday proceedings were opened early in the morning by a panel with the provocative title of Art or Science.  The panelists were Janick Bergeron from Synopsys, Harry Foster from Mentor, JL Gray from Cadence, Ken Knowlson from Intel, and Bernard Murphy from Atrenta.  The purpose of the panel was to figure out whether a developer is better served by using his or her own creativity in developing either hardware or software, or follow a defined and “proven” methodology without deviation.

    After some introductory remarks, which seemed to show mild support for the Science approach, I pointed out that the title of the panel was wrong.  It should have been titled Art and Science, since both must play a part in any good development process.  That changed the nature of the panel.  To begin with, there had to be a definition of what art and science meant.  Here is my definition.  Art is a problem-specific solution achieved through creativity.  Science is the use of a repeatable recipe, encompassing both tools and methods, that ensures validated quality of results.

    Harry Foster pointed out that it is difficult to teach creativity.  This is true, but I maintain it is not impossible, especially if we changed our approach to education.  We must move away from teaching the ability to repeat memorized answers, which are easy to grade on a test, and switch to problem solving, a system better for the student but more difficult to grade.  Our present educational system is focused on teachers, not students.

    The panel spent a significant amount of time discussing the issue of hardware/software co-verification.  We really do not have a complete scientific approach, but we are also limited by the schedule in using creative solutions that themselves require verification.

    I really liked what Ken Knowlson said at one point.  There is a significant difference between a complicated and a complex problem.  A complicated problem is understood but difficult to solve, while a complex problem is something we do not understand a priori.  This insight may be hard to grasp without an example, so here is mine: relativity is complicated, dark matter is complex.


    Discussing all of the technical sessions would take too long and would interest only portions of the readership, so I leave such matters to those who have access to the conference proceedings.  But I think that both the keynote speech and the panel provided enough understanding, as well as food for thought, to amply justify attending the conference.  Too often I have heard that DVCon is a verification conference: it is not just for verification, as both the keynote and the panel prove.  It is for all those who care about development and verification; in short, for those who know that a well-developed product is easier to verify, manufacture and maintain than otherwise.  So whether in India, Europe or the US, see you at the next DVCon.

    Blog Review – Tuesday March 10, 2015

    Tuesday, March 10th, 2015

    An interesting and informative tutorial on connecting Arduino to the Internet when ‘in the wild’ is the topic that caught ARM’s Joe Hanson’s interest.

    Sharing the secrets of SoC companies that accelerate the distributed design process, Kurt Shuler, Arteris, considers the interconnect conundrum.

    Never one to shy away from the big question, Richard Goering, Cadence Design Systems, asks what is the key to IC design efficiency. He has some help, with panel members from the DVCON 2015 conference, organised by the Accellera Systems Initiative.

    Contemplating NXP’s acquisition of Freescale, Ray Angers, Chip Works, with a series of bar charts and dot-graphics, deems the Euro-American couple a good match.

    Experiencing an identity crisis, Jeff Bier, Berkeley Design, is looking forward to attending the Embedded Vision Summit in May, and particularly, it seems, to the keynote by Mike Aldred, the lead robotics developer at Dyson.

    The multi-lingual Colin Walls is brushing up on his Swedish as he packs for ESC (Embedded Conference Scandinavia) this week. He will speak at three sessions – Dynamic Memory Allocation and Fragmentation in C and C++, Power Management in Embedded Systems and Self-Testing in Embedded Systems, which he previews in this blog.

    Delighted at Intel’s call for 3D IC, Zvi Or-Bach, MonolithIC 3D, argues the case for the packaging technology for SoCs, using data and graphics from a variety of sources.

    Blogging from Mobile World Congress, Martijn van der Linden, NXP, looks at what the company is developing for the Internet of Things, including Rinspeed’s connected-car concept based on a Tesla.

    Anyone looking into serial data transfer as a replacement for parallel data transfer can discover more from the blog posted by Saurabh Shrivastava, Synopsys. The acceleration of PCI Express-based systems’ verification and the different power states of the interface have never been more relevant.
