
Posts Tagged ‘Cadence’


Horizontal and Vertical Flow Integration for Design and Verification

Thursday, August 20th, 2015

By Frank Schirrmeister, senior group director for product marketing of the System Development Suite at Cadence

System design and verification are critical components of making products successful in an always-on and always-connected world. For example, I wear a device on my wrist that constantly monitors my activities and buzzes to remind me that I’ve been sitting for too long. The device transmits my activity to my mobile phone, which serves as a data aggregator, only to forward it on to the cloudy sky from where I get friendly reminders about my activity progress. I’m absolutely hoping that my health insurance is not connected to my activity progress because my premium payments could easily fluctuate daily. How do we go about verifying our personal devices and the system interaction across all imaginable scenarios? It sounds like an impossibly complex task.

From personal experience, it is clear to me that flows need to be connected both in horizontal and vertical directions. Bear with me for a minute while I explain.

Rolling back about 25 years, I was involved in my first chip design. To optimize area, I designed a three-transistor dynamic memory cell in what we would today call 800nm (0.8-micron) technology. The layout was designed manually from gate-level schematics that had been entered manually as well. In order to verify throughput for the six-chip system that my chip was part of, I developed a model at the register-transfer level (RTL) using this new thing at the time called VHSIC Hardware Description Language (VHDL) (yep, I am European). What I would call vertical integration today was clunky at best 25 years ago. I was stubbing data out from VHDL into files that would be re-used to verify the gate level. My colleagues and I would write scripts to extract layout characteristics to determine the speed of the memory cell and annotate that to the gate level for verification. No top-down automation was used, i.e., no synthesis of any kind.

About five to seven years after my first chip design (we are now late in the ’90s if you are counting), everything in the flow had moved upward and automation was added. My team designed an MPEG-2 decoder fully in RTL and used logic synthesis for implementation. The golden reference data came from C-models—vertically going upward—and was not directly connected to the RTL. Instead, we used file-based verification of the RTL against the C-model. Technology data from the 130nm process we used at the time was annotated back into logic synthesis for timing simulation and to drive placement. Here, vertical integration really started to work. And verification complexity had risen so much that we needed to extend horizontally, too. We verified the RTL using both simulation and emulation with a System Realizer M250. We took drops of the RTL, froze them, cross-mapped them manually to emulation, and ran longer sequences—specifically around audio/video synchronization, for which we needed seconds of actual real-time video decoding to be executed. We used four levels vertically: layout to gate to RTL (automated, with annotations back to the RTL) and the C-level on top for reference. Horizontally, we used both simulation and emulation.
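
To make that file-based step concrete, here is a minimal sketch of the technique in Python (purely illustrative; the file names and the one-hex-word-per-line trace format are assumptions, not what we actually used): the golden model and the RTL simulation each dump a trace file, and a script flags mismatches.

    # Minimal sketch of file-based golden-model checking (illustrative only).
    # Assumed format: one hex word per line; '#' starts a comment line.
    def load_trace(path):
        """Read one hex value per line, ignoring blanks and comments."""
        with open(path) as f:
            stripped = (line.strip() for line in f)
            return [int(s, 16) for s in stripped if s and not s.startswith("#")]

    def compare_traces(golden_path, rtl_path):
        golden, rtl = load_trace(golden_path), load_trace(rtl_path)
        bad = [(i, g, r) for i, (g, r) in enumerate(zip(golden, rtl)) if g != r]
        for i, g, r in bad[:10]:  # report only the first few mismatches
            print(f"sample {i}: golden={g:#010x} rtl={r:#010x}")
        print(f"{len(bad)} mismatches in {min(len(golden), len(rtl))} samples")
        return not bad

    if __name__ == "__main__":
        compare_traces("c_model_out.hex", "rtl_sim_out.hex")  # hypothetical files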

Now fast-forward another 10 years or so. At that point, I had switched to the EDA side of things. Using early electronic system-level (ESL) reference flows, we annotated .lib technology information all the way up into virtual platforms for power analysis. Based on the software driving the chip, the technology’s impact on power consumption could be assessed. Accuracy was a problem, which is why I think those flows were a bit ahead of their time back in 2010.

So where are we today?

Well, the automation between the four levels has been greatly increased vertically. Users take .lib information all the way up into emulation using tools like the Cadence Palladium® Dynamic Power Analysis (DPA), which enables engineers using emulation to also analyze software in a system-level environment. This tool allows designers to achieve power estimates within 90% of actual chip power consumption, as reported by TI and, most recently, Realtek. High-level synthesis (HLS) has become mainstream for parts of the chip. That means the fourth level above the RTL is getting more and more connected as design entry moves upward, and with it, verification is more and more connected as well.
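
As a rough illustration of what activity-based power analysis does (the general idea only, not how the Palladium tool is implemented; every number and net name below is invented), per-net toggle counts collected over a long emulation run are weighted by capacitance and voltage data drawn from the technology libraries, using the familiar dynamic-power relation P = ½·C·V²·f per net.

    # Toy activity-based dynamic power estimate (illustrative only).
    VDD = 0.9        # assumed supply voltage, volts
    WINDOW_S = 1e-3  # emulation window over which toggles were counted

    # Per-net load capacitance in farads, as annotated from technology data.
    cap = {"clk": 40e-15, "bus[0]": 12e-15, "bus[1]": 12e-15, "done": 5e-15}

    # Toggles observed per net during the emulation window.
    toggles = {"clk": 1_000_000, "bus[0]": 310_000, "bus[1]": 290_000, "done": 12}

    def dynamic_power(cap, toggles, window_s, vdd):
        """Sum 0.5 * C * V^2 * toggle_rate over all nets."""
        return sum(0.5 * c * vdd**2 * (toggles[net] / window_s)
                   for net, c in cap.items())

    print(f"estimated dynamic power: "
          f"{dynamic_power(cap, toggles, WINDOW_S, VDD) * 1e6:.2f} uW")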

And horizontally, we are now using at least four increasingly integrated engines: formal, RTL simulation, emulation, and field-programmable gate array (FPGA)-based prototyping. A couple of examples include:

  • Simulation acceleration – combining simulation and emulation
  • Simulation/emulation hot swap – stopping in simulation and starting in emulation, as well as vice versa
  • Virtual platform/emulation hybrids – combining virtual platforms and emulation
  • Multi-fabric compilation – same flow for emulation and FPGA-based prototyping
  • Unified Power Format (UPF)/Common Power Format (CPF) low-power verification – using the same setup for simulation and emulation
  • Simulation/emulation coverage merge – combining data collected in simulation and emulation (a toy merge is sketched after this list)
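
As a toy illustration of that last item (a sketch only; real coverage databases are far richer, and the bin names and counts here are invented), a merge can be thought of as summing per-bin hit counts across engines, so a bin that simulation never reached still counts as covered if emulation hit it.

    # Toy simulation/emulation coverage merge (illustrative only).
    from collections import Counter

    sim_cov = Counter({"fsm.IDLE": 120, "fsm.RUN": 87, "burst_len==16": 0})
    emu_cov = Counter({"fsm.IDLE": 50_000, "fsm.RUN": 49_000, "burst_len==16": 3})

    merged = sim_cov + emu_cov  # Counter '+' sums hits, keeping positive bins

    total = len(set(sim_cov) | set(emu_cov))
    covered = len(merged)  # only bins with at least one hit survive the merge
    print(f"merged coverage: {covered}/{total} bins ({100 * covered / total:.0f}%)")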

Arguably, with the efforts to shift post-silicon verification even further to the left, the actual chip becomes the fifth engine.

So what’s next? It looks like we have the horizontal pillar engines complete now when we add in the chip. Vertically, integration will become even closer to allow more accurate prediction prior to actual implementation. For example, the recent introduction of the Cadence Genus™ Synthesis Solution delivers improved productivity during RTL design and improved quality of results (QoR) in final implementation. In addition, the introduction of the Cadence Joules™ RTL Power Solution provides a more accurate measure of RTL power consumption, which greatly improves the top-down estimation flow from the RTL downstream. This further increases accuracy for Palladium DPA and for the Cadence Incisive® Enterprise Simulator, which automates testbench creation and performs coverage-driven functional verification, analysis, and debug—from the system level to the gate level—boosting verification productivity and predictability.

Horizontal and vertical flow integration is really the name of the game for today’s and tomorrow’s chip designers.

Blog Review – Monday, July 27, 2015

Monday, July 27th, 2015

IoT for ADAS; ESC 2015 focuses on security; untangling neural networks; what drives new tools; consolidation conundrum; IoT growth forecast; three ages of FPGA

Likening a business collaboration to a road trip may be stretching a metaphor that would make Jack Kerouac blush, but David McKinney, Intel, presses on as he explains Intel and QNX’s ADAS solution, based on Intel IoT for automobiles. He includes some interesting links and a video to inform the reader.

A review of ESC 2015 shows that Chris Ciufo is not only ahead of the curve, advocating embedded security, but also not one to pass by a freebie at a show. He relates some of the highlights from the first day of the Santa Clara event.

Neural network processors hold promise for computer vision, believes Jeff Bier, BDTI. His blog explains what work is needed for the scale of computation the industry expects.

Posing an interesting question, Carey Robertson, Mentor Graphics, asks what prompts the development of new tools. He blends this with helpful information about the newly launched Calibre xACT extraction tool, without too much “hard sell”.

“It works!” is the triumphant message of the blog co-authored by Jacek Duda and Steve Brown, Cadence, reporting from this month’s workshop where USB Type-C was put through its paces.

What to do with wireless IP is asked and answered by Navraj Nandra, Synopsys. He explains what can be done and how it can contribute to the IoT.

The SoC market is consolidating fast, says Rupert Baines, UltraSoC, on an IP Extreme blog. This poses two challenges that he believes licensed IP can simplify.

A common proposition is to move from Intel to ARM, and Rich Nass, ARM, presents a well-rounded blog on how to make the transition, with some input from WinSystems hardware and software experts.

Forget consumer, the future of IoT growth is in the enterprise, reports Brian Fuller, ARM, observing analyst IDC’s webinar on which parts of the IoT will be lucrative and why.

Recalling the talk by Xilinx Fellow Dr. Steve Trimberger, Steve Leibson explains the three ages of the FPGA, with a link to a video on the history of the technology.

Caroline Hayes, Senior Editor

System Design Enablement – Looking Beyond the Chip

Thursday, July 23rd, 2015

By Craig Cochran, VP of Corporate Marketing, Cadence

Rapid changes are occurring in the way electronic products are developed. Driven by increasing integration and complexity, a growing number of systems companies are assuming more control over hardware, software, and mechanical development. Semiconductor makers are dealing not only with the physics of advanced process nodes, but are also expected to provide much of the embedded software for each system on chip (SoC). It’s time for the EDA industry to expand its focus beyond hardware IC design and to embrace System Design Enablement (SDE), an expanded mission that will provide tools, design content, and services for the development of whole systems or end products.

Until very recently, most electronic products were created from the bottom up by isolated groups of developers with minimal interaction. This was true across intellectual property (IP), semiconductor, software, foundry, packaging, and systems companies. The complexity of modern-day systems, the compression of development timelines, and the pressure for product differentiation make this kind of development unfeasible, driving a shift towards the integrated design efforts we’re seeing from system companies.

While semiconductors are at the heart of any electronic system, there is much more to consider. In many electronic systems, software represents the greatest cost and biggest bottleneck. Thermal and power restrictions apply across the chip, package, and board. Form factor and user experience impact mechanical design. Every part of the resulting system is interrelated and must be optimized concurrently to produce a leading product.

For many years, the EDA industry has focused on delivering tools to semiconductor companies to enable chip design. We call this “core” EDA, and it will remain a vital technology. With an eye to the future, successful core EDA companies will move up to system design with SDE. SDE calls for the convergence of the electrical, software, and mechanical domains, and its outcome is not just a chip but an end product.

Vertical Aggregation and Disaggregation Drive SDE

There was a time when chip design was confined to large companies with the capability to fabricate chips. Now we are in an era of fabless semiconductor companies and pure-play foundries, and as a result, hundreds of companies are engaged in IC and/or IP design. This has enabled a tremendous wave of innovation and creativity, but it has also resulted in a disaggregated product design chain.

Today, some systems companies across a variety of vertical markets are choosing to re-aggregate (albeit without chip manufacturing), with the end goal of ensuring a high-value product. For example, some of the world’s largest systems companies have created in-house chip design teams. These vertically integrated systems companies form a natural market for SDE tools and flows.

Meanwhile, semiconductors represent a growing share of the overall value of end products. This is one reason why systems companies are adding semiconductor design capability to their engineering teams. And systems companies expect their semiconductor suppliers, be they in-house design groups or third parties, to provide much of the software stack, including drivers, OS, and middleware.

Tooling and IP for SDE

Embedded software development traditionally begins very late in the overall cycle, thereby becoming the critical path to product shipment. Hence there’s an urgent need to “shift left” and allow embedded software development and hardware/software verification to begin much earlier. SDE tools and flows support this added software responsibility by providing a continuum of pre-silicon development platforms (virtual platforms, simulation, emulation, and FPGA-based prototyping) that support hardware/software co-design and co-verification.

Other tools and capabilities that support SDE include multi-fabric power, thermal, and signal integrity analysis; chip/package/PCB co-design; incremental co-design between EDA and Mechanical CAD (MCAD) tools; design of MEMS devices within custom/analog IC flows; and the development of 2.5D and 3D IC packages. All these capabilities are available today.

System Design Enablement is not just about design tools – it requires design content as well. At the chip level, that content is increasingly provided by reusable semiconductor IP blocks. Today as much as 80% of an SoC may be composed of such blocks, which may include processors, memory, communications protocols, analog functions, and verification IP (VIP).

Conclusion

As system complexity grows, the various components of an electronic system can no longer be designed in isolation. The focus of EDA needs to expand from single chips and boards to entire systems. This new challenge is addressed by System Design Enablement, and it requires tools, IP, software content, and services aimed at making whole systems possible. SDE opens a new chapter in the history of electronic system design, and it will greatly expand the reach of EDA technology to meet the challenges of today’s vertically integrated companies and their highly differentiated designs.

Craig Cochran is the vice president of corporate marketing at Cadence Design Systems, Inc. He has more than 20 years of corporate, strategic and product marketing expertise at EDA and electronics companies including Real Intent, ChipVision Design Systems, Jasper Design Automation and Synopsys. He began his career as an applications engineer at Valid Logic Systems and a digital design engineer at General Electric. Cochran holds a bachelor of science degree cum laude in electrical engineering from the Georgia Institute of Technology.

Blog Review – Monday, June 22 2015

Monday, June 22nd, 2015

Yonsei Uni team up for 5G; Hold that thought; now catch it; ARM and UNICEF; Industry and Education breathe life into EDA; Connected driving clears the road ahead

Researchers at Yonsei University have demonstrated a real-time, full-duplex LTE radio system at IEEE Globecom in Austin, Texas, using a novel antenna approach and working with National Instruments SDR platforms and LabVIEW graphical programming environment, reports Steve Leibson, Xilinx.

“Hold that thought” takes a new turn, as an anonymous blogger at Atmel describes the MYLE TAP, a wearable ‘thought catcher’. The touch-activated and voice-powered device automatically converts thoughts into actions. An interesting prototype or a recipe for disaster if it falls into the wrong hands?

Charity doesn’t always begin at home, sometimes it’s a warehouse in Copenhagen, Denmark. Dominic Vergine, ARM, visited the UNICEF global procurement hub and considers what wearable technology can provide, building on the low-tech, wearable technology of the MUAC band to test for malnutrition.

Building on a presentation at DAC 2015, Richard Goering, Cadence, considers how academia and industry can work together to revitalize EDA.

The road ahead is smooth for the connected car, reports John Day, Mentor Graphics, if you are driving a Jaguar Land Rover (JLR), anyway. He examines the connected car technology that can identify and share data on potholes, broken manholes and other hazards.

Sloth is a deadly sin, especially in IP software development, warns Tom De Schutter, Synopsys, as he examines how laziness in automotive testing can be absolved with virtual prototypes as an alternative to hardware, enabling earlier, broader, more automated software testing.

Caroline Hayes, Senior Editor

Blog Review – Monday, June 08, 2015

Monday, June 8th, 2015

DAC duo announce DDA; Book a date for DAC with ARM, Ansys, Cadence; Synopsys and Xilinx; True FPGA-based verification

Announcing a partnership with Cadence Design Systems at DAC 2015, Dennis Brophy, Mentor Graphics, teases with some details of the Debug Data API (DDA). Full details will be unveiled at a joint presentation at the Verification Academy Booth (2408) on Tuesday at 5pm.

Amongst demonstrations of an IoT sub-system for Cortex-M processors, ARM will show a new IP tooling suite and the ARM Cordio radio core IP. There will be over a dozen partners in the Connected Community Pavilion and the ARM Scavenger Hunt, reports Brenda Westcott, ARM. (DAC June 7 – 11, ARM booth 2428).

As if justifying its place at DAC 2015, Ravi Ravikumar, Ansys, explains how the show has evolved beyond EDA for SoCs. The company will host videos on automotive, IoT and mobile, and presentations from foundry partners. (DAC June 7 – 11, Ansys booth 1232).

If you are interested in the continuum of verification engines, DAC is the place to be this week. Frank Schirrmeister, Cadence, summarizes the company’s offerings to date, with a helpful link to a COVE (Continuum of Verification Engines) article, and provides an overview of some of the key verification sessions at the Moscone Center. (DAC June 7 – 11, Cadence booth 3515).

Back with the FPGA prototyping system, HAPS, Michael Posner, Synopsys, invites visitors to DAC to come see the Xilinx UltraScale VU440-based HAPS. As well as proudly previewing the hardware/software development support, he also touches on the difficulties of mapping ASICs to FPGAs.

More Xilinx-DAC news, as Doug Amos’s guest blog at Aldec announces the era of true FPGA-based verification. He believes the end of big-box emulation is nigh, following the adoption of Xilinx’s Virtex UltraScale devices in Aldec’s HES-7 (Hardware Emulation Solution, seventh generation) technology.

Caroline Hayes, Senior Editor

Cadence Introduces Genus Synthesis Solution

Wednesday, June 3rd, 2015

Gabe Moretti, Senior Editor

Historically, synthesis tools have targeted the transistors, keeping the silicon architecture in focus and optimizing it while paying little attention to the system architecture. It was, of course, a natural thing to do: given a design, EDA tools focused on implementing it in the best possible way.

This is the main reason that system-level tools have been slow to gain traction; only lately have they shown that they can indeed contribute significantly to efficient products. In fact, by analyzing an architecture it is often possible to improve the efficiency of the design and, in turn, deliver a circuit that meets timing, power, and area requirements in less time than by optimizing the gate-level netlist.

Genus does just that. Its goal is to optimize the RTL netlist before logic synthesis by forecasting the physical characteristics of the resulting gate-level netlist.

Cadence’s Genus Synthesis Solution is a next-generation register-transfer level (RTL) synthesis and physical synthesis engine.  The company stated that Genus Synthesis Solution incorporates a multi-level massively parallel architecture that delivers up to 5X faster synthesis turnaround times and scales linearly beyond 10M instances. In addition, the tool’s new physically aware context-generation capability can reduce iterations between unit- and chip-level synthesis by 2X or more. This combination enables up to 10X improvement in RTL design productivity.

Figure 1: Genus integrated optimization architecture

Key Genus Synthesis Solution features and capabilities include:

  • Massively parallel architecture – The tool performs timing-driven distributed synthesis of a design across multiple cores and machines. All key steps in the synthesis flow leverage both multiple machines and multiple CPU cores per machine (a conceptual sketch of this partition-and-parallelize pattern follows this list).
  • Physically aware context generation – The complete timing and physical context for any subset of a design can be extracted and used to drive RTL unit-level synthesis with full consideration of chip-level timing and placement, significantly reducing iterations between chip-level and unit-level synthesis runs.
  • Unified global routing with Innovus Implementation System – Genus Synthesis Solution and Cadence Innovus Implementation System, a next-generation physical implementation solution, share an enhanced 4X faster timing-driven global router that enables tight correlation of both timing and wirelength to within 5 percent from synthesis to place and route.

  • Global analytical architecture-level PPA optimization – The solution incorporates a new datapath optimization engine that concurrently considers many different datapath architectures across the whole design and then leverages an analytical solver to pick the architectures that achieve the globally optimal PPA. This engine delivers up to 20 percent reduction in datapath area without any impact on performance.
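
To illustrate the partition-and-parallelize pattern behind the first bullet at a very high level (a conceptual sketch only, not Genus internals; the partition names, instance counts, and the 1.4x mapping factor are all invented), the design is split into partitions that are processed concurrently by a pool of workers and then stitched back together:

    # Conceptual sketch of distributed synthesis (illustrative only).
    from multiprocessing import Pool

    def synthesize_partition(partition):
        """Stand-in for per-partition synthesis: returns (name, gate count)."""
        name, rtl_instances = partition
        return name, int(rtl_instances * 1.4)  # pretend mapping expands 1.4x

    if __name__ == "__main__":
        design = [("cpu_core", 2_000_000), ("dsp", 800_000),
                  ("noc", 400_000), ("periph", 150_000)]
        with Pool(processes=4) as pool:  # one worker per partition
            mapped = dict(pool.map(synthesize_partition, design))
        print(f"total mapped instances: {sum(mapped.values()):,}")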

Blog Review – Monday May 18, 2015

Monday, May 18th, 2015

Zynq detects pedestrians; ARMv8-A explained; Product development demands test; Driving connectivity; Celebrating Constellations; Chip challenges

The helpful Michael Thomas, ARM, advises readers that the Cortex-A Series Programmer’s Guide for ARMv8-A is available and introduces what is in the guide as a taster of the architecture’s features.

The Embedded Vision Summit gives many bloggers material for posts. The first is Steve Leibson, Xilinx, who includes a video MathWorks presented there, with a description of a real-time pedestrian detector running on a Zynq-based workflow, using MathWorks’ Simulink and HDL Coder.

Another attendee was Brian Fuller, Cadence, who took away the secrets to successful product development, which he sums up as: test, test, test. (He does elaborate beyond that in his detailed blog, reviewing the keynote by Dyson’s Mike Aldred.)

Anticipating another event, DAC, Ravi Ravikumar, Ansys, looks at the connected car and the role of design in intelligent vehicles.

Also with an eye on DAC, Rupert Baines, UltraSoC, has a guest blog at IP-Extreme praising the Constellations initiative, with some solid support – and some restrained back-slapping.

Continuing a verification series, Harry Foster, Mentor, looks at the FPGA space and reflects on how the industry makes choices in formal technology.

A guest blog at Chip Design by Dr. Bruce McGaughy, ProPlus Design Solutions, looks at what innovative chip designs mean for chip designers. His admiration for the changing pace of design is balanced by identifying the drivers for low-power design to meet the IoT’s portable phase.

Why do we need HDCP 2.2 and what do we need to do to ensure cryptography and security? These are addressed, and answered, by VIP Experts, Synopsys, in this informative blog.

By Caroline Hayes, Senior Editor

The Car as an IoT Node: What are the Design Implications?

Monday, May 11th, 2015

By Chris Rowen, CTO of Cadence’s IP Group

It’s a pivotal moment in the history of automotive design. Not only is the percentage of electronics content of each automobile continuing to rise, but wireless technology has come to the car. This confluence of two technology forces changes how we view automobiles and automotive electronics in fundamental ways. And it raises a seemingly simple question: Should the intelligent car be considered a node on the Internet of Things (IoT)?

I embrace what you might call a big-tent definition of IoT. That is, I see a wide variety of applications outside the traditional clusters of gateways and servers, applications that sit between the cloud and a local device. These are applications where content is distributed, and distributed not just in response to a human interface.

Car first, Internet second

So I do believe that the intelligent car is a local device and, as such, constitutes a node on the Internet, but with some caveats. It shares many of the same design considerations as traditional IoT designs, but with a significant difference: The car, as a mode of transportation, has as its first priority safety. Its second priority is safety. Its third priority is safety. We’ll address this design implication shortly.

In the past decade or so, as electronics have enhanced safety priorities, they have also added a new dimension to the driver and passenger experiences. That makes the automobile a unique IoT node. These bifurcated features and applications require engineers to think about them in different ways.

Inside the car, we see many of the same subsystems and elements that are analogous to popular IoT categories. There are various accelerometers, gyroscopes, and magnetometers that are sensing motion or monitoring the performance of your car. Those are either already connected to the web or are a baby step away. But it’s worth noting in this context that those elements are primarily of the car and secondarily of the Internet.

With that in mind, we can view the automotive IoT node as having two major categories of functionality: mission-critical and entertainment/infotainment. And engineers need to design to these two areas with different considerations. If you have a bug and your navigation goes out, that’s one thing. If you have a bug and your brakes stop working, well, that’s a completely different situation.

Levels of security

The mission-critical components and subsystems in this automotive IoT node will require highly documented development processes like ISO 26262 for functional safety. Where there is critical information that could be corrupted and cause damage or death, engineering teams need not only functional safety qualification but also the ability to keep the system robust in the face of accidental or malicious intervention by rogue software.

Those systems will evolve slowly and come with enormous verification and certification requirements. They’ll have their own design constraints and their own development pace.

On the other side, as part of the human infotainment experience, we will see the car trying to look more like an open platform, like a smartphone. There, you won’t worry quite as much about security. There, you’ll have various wireless options, packaging, and power considerations that likely will differ from the more mission-critical components. Design constraints will likely be more flexible and the development pace faster.

Another caveat has to do with data: Traditional IoT applications suggest a more or less continual flow of information from edge devices to and from the cloud or the fog. With respect to the automotive IoT node, this dynamic is a little different. Manufacturers want to gather lots of data on the performance of their fleet to help fine-tune their design and manufacturing process. But we won’t see a lot of real-time, on-the-fly updates coming from the cloud. In other words, the notion that your car will be constantly fine-tuned for performance may be wishful thinking right now.

Where automotive design meets IoT demands

Now when we think about designing for the IoT, we think mostly about small form factors and ultra-tight power budgets in the context of wearables, which are the poster child for today’s IoT applications.

In automotive designs, power is somewhat less of an issue, but form factor and weight can still be important considerations. And because the success of early intelligent vehicles is fueling more customer demand, design cycles are being pressured.

This suggests the need for design flows that leverage much more integration, particularly IP integration, and newer verification considerations as software takes its seat at the automotive design table.

We are seeing many more sensor fusion solutions in automotive designs today; the need, because of the interface to the analog world, for robust floating-point computing; and integrated features like audio, voice and speech (AVS) for voice recognition and triggering.

And while cars are big-ticket items, that does not mean that cost considerations at the component level are not paramount.

All of these are considerations that engineering teams in the traditional IoT space wrestle with. So the intelligent car is absolutely part of the IoT. The potential for optimizing not only the car and the driving experience but also the world around the car (think about better automated traffic-flow modeling) is profound. But engineers need to approach the design of the car’s subsystems thoughtfully, in ways that reflect the primacy of safety and the potential of cloud-enabled services.

Chris Rowen, who co-founded MIPS and Tensilica and is an expert on computing architectures, is CTO of Cadence’s IP Group.

Cadence Introduces the New Tensilica Fusion DSP

Thursday, April 23rd, 2015

Gabe Moretti, Senior Editor

Cadence Design Systems, Inc. has announced the new Cadence Tensilica Fusion digital signal processor (DSP), based on the Xtensa customizable processor. This scalable DSP is meant for applications requiring specialized computation, ultra-low energy, and a small footprint. The new DSP is well positioned for IoT applications, as can be seen in Figure 1.

The device can be designed into chips for wearable activity monitoring, indoor navigation, context-aware sensor fusion, secure local wireless connectivity, face trigger, voice trigger and voice recognition. The optional Instruction Set Architecture (ISA) optimizations provide support for multiple wireless protocols, including Bluetooth Low Energy, Thread and ZigBee using IEEE 802.15.4, SmartGrid 802.15.4g, Wi-Fi 802.11n and 802.11ah, 2G, and LTE Category 0 (releases 12 and 13), as well as global navigation satellite systems (GNSS).

Tensilica claims that the Fusion DSP uses 25 percent less energy, based on running a power diagnostic derived from the Sensory Truly Handsfree always-on algorithm, when compared to the current industry-leading Cadence Tensilica HiFiMini low-power DSP IP core.

The Tensilica Fusion DSP combines an enhanced 32-bit Xtensa control processor architecture with DSP capabilities and flexible algorithm-specific acceleration for a fully programmable approach, supporting multiple existing and developing standards and custom algorithms. For many IoT applications that are space- and energy-constrained, the optimal solution is to deploy a single small processor that can perform sensor processing, wireless communications and control. IoT device designers can pick just the options they need using the Xtensa Innovation Platform to produce a Fusion processor that can be smaller and more energy-efficient, while having higher performance than typical one-size-fits-all processor cores.

The Tensilica Fusion DSP combines flexible hardware choices with a library of DSP functions and more than 140 audio/voice/fusion applications from over 70 partners. It also shares the Tensilica partner ecosystem for other applications software, emulation and probes, silicon and services, and much more.

Figure 1: Tensilica Fusion usage scenarios

Blog Review – Monday April 20, 2015

Monday, April 20th, 2015

Half a century and still quoted as relevant is more than most of us could hope to achieve, so the 50th anniversary of Gordon Moore’s pronouncement, which we call Moore’s Law, is celebrated by Gaurav Jalan as he reviews the observation first made on April 19, 1965, crediting it with the birth of the EDA industry and the fabless ecosystem, amongst other things.

Another celebrant is Axel Scherer, Cadence, who reflects on not just shrinking silicon size but the speed of the passing of time.

On the same theme of what Moore’s Law means today for FinFETs and nano-wire logic libraries, Navraj Nandra, Synopsys, also commemorates the anniversary, with an example of what the CAD team has been doing with quantum effects at lower nodes.

At NAB (National Association of Broadcasters) 2015, in Las Vegas, Steve Leibson, Xilinx, had an ‘eye-opening’ experience at the CoreEL Technologies booth, where the company’s FPGA evaluation kits were the subject of some large-screen demos.

Reminiscing about the introduction of the HSA Foundation, Alexandru Voica, Imagination Technologies, provides an update on why heterogeneous computing is one step closer now.

Dr. Martin Scott, the senior VP and GM of Rambus’ Cryptography Research Division, recently participated in a Silicon Summit Internet of Things (IoT) panel hosted by the Global Semiconductor Alliance (GSA). In this blog he discusses the security of the IoT, its opportunities for good, and its vulnerabilities.

An informative blog by Paul Black, ARM, examines the ARM architecture and DS-5 v5.21 DSTREAM support for debug, discussing power in the core domain and how to manage it for effective debug and design.

Caroline Hayes, Senior Editor
