
Posts Tagged ‘Software’


Cadence Launches New Verification Solutions

Tuesday, March 14th, 2017

Gabe Moretti, Senior Editor

During this year’s DVCon U.S., Cadence introduced two new verification solutions: the Xcelium Parallel Simulator and the Protium S1 FPGA-Based Prototyping Platform, which incorporates innovative implementation algorithms to boost engineering productivity.

Xcelium Parallel Simulator

The new simulation engine is based on innovative multi-core parallel computing technology, enabling systems-on-chip (SoCs) to get to market faster. On average, customers can achieve 2X improved single-core performance and more than 5X improved multi-core performance versus previous generation Cadence simulators. The Xcelium simulator is production proven, having been deployed to early adopters across mobile, graphics, server, consumer, internet of things (IoT) and automotive projects.

The Xcelium simulator offers the following benefits aimed at accelerating system development:

  • Multi-core simulation improves runtime while also reducing project schedules: The third generation Xcelium simulator is built on the technology acquired from Rocketick. It speeds runtime by an average of 3X for register-transfer level (RTL) design simulation, 5X for gate-level simulation and 10X for parallel design for test (DFT) simulation, potentially saving weeks to months on project schedules.
  • Broad applicability: The simulator supports modern design styles and IEEE standards, enabling engineers to realize performance gains without recoding.
  • Easy to use: The simulator’s compilation and elaboration flow assigns the design and verification testbench code to the ideal engines and automatically selects the optimal number of cores for fast execution speed.
  • Incorporates several new patent-pending technologies to improve productivity: New features that speed overall SoC verification time include SystemVerilog testbench coverage for faster verification closure and parallel multi-core build.

“Verification is often the primary cost and schedule challenge associated with getting new, high-quality products to market,” said Dr. Anirudh Devgan, senior vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence. “The Xcelium simulator combined with JasperGold Apps, the Palladium Z1 Enterprise Emulation Platform and the Protium S1 FPGA-Based Prototyping Platform offer customers the strongest verification suite on the market.”

The new Xcelium simulator further extends the innovation within the Cadence Verification Suite and supports the company’s System Design Enablement (SDE) strategy, which enables system and semiconductor companies to create complete, differentiated end products more efficiently. The Verification Suite is comprised of best-in-class core engines, verification fabric technologies and solutions that increase design quality and throughput, fulfilling verification requirements for a wide variety of applications and vertical segments.

Protium S1

The Protium S1 platform provides front-end congruency with the Cadence Palladium Z1 Enterprise Emulation Platform. By using Xilinx Virtex UltraScale FPGA technology, the new Cadence platform features 6X higher design capacity and an average 2X performance improvement over the previous generation platform. The Protium S1 platform has already been deployed by early adopters in the networking, consumer and storage markets.

Protium S1 is fully compatible with the Palladium Z1 emulator

To increase designer productivity, the Protium S1 platform offers the following benefits:

  • Ultra-fast prototype bring-up: The platform’s advanced memory modeling and implementation capabilities allow designers to reduce prototype bring-up from months to days, thus enabling them to start firmware development much earlier.
  • Ease of use and adoption: The platform shares a common compile flow with the Palladium Z1 platform, which enables up to 80 percent re-use of the existing verification environment and provides front-end congruency between the two platforms.
  • Innovative software debug capabilities: The platform offers firmware and software productivity-enhancing features including memory backdoor access, waveforms across partitions, force and release, and runtime clock control.

“The rising need for early software development with reduced overall project schedules has been the key driver for the delivery of more advanced emulation and FPGA-based prototyping platforms,” said Dr. Anirudh Devgan, senior vice president and general manager of the Digital & Signoff Group and the System & Verification Group at Cadence. “The Protium S1 platform offers software development teams the required hardware and software components, a fully integrated implementation flow with fast bring-up and advanced debug capabilities so they can deliver the most compelling end products, months earlier.”

The Protium S1 platform further extends the innovation within the Cadence Verification Suite and supports the company’s System Design Enablement (SDE) strategy, which enables system and semiconductor companies to create complete, differentiated end products more efficiently. The Verification Suite is comprised of best-in-class core engines, verification fabric technologies and solutions that increase design quality and throughput, fulfilling verification requirements for a wide variety of applications and vertical segments.

Blog Review – Monday, October 10, 2016

Monday, October 10th, 2016

This week, bloggers look at the newly released ARM Cortex-R52 and its support, NVIDIA floats the idea of AI in automotive, Dassault Systèmes looks at underwater construction, Intrinsic-ID’s CEO shares his views on security, and there is a glimpse into the loneliness of the long-distance debugger.

There is a peek into the Xilinx Embedded Software Community Conference as Steve Leibson, Xilinx, shares the OKI IDS real-time, object-detection system using a Zynq SoC.

The lure of the ocean, and the glamor of Porsche and Volvo SUVs, meant that NVIDIA appealed to all-comers at its inaugural GPU Technology Conference Europe. It parked a Porsche Macan and a Volvo XC90 on top of the Ocean Diva, docked in Amsterdam. Making waves: the Xavier SoC, the Quadro demonstration and a discussion about AI in the automotive industry.

Worried about IoT security, Robert Vamosi, Synopsys, looks at the source code that targets firmware on IoT devices, and fears where else it may be used.

Following the launch of the ARM Cortex-R52 processor, which raises the bar in terms of functional safety, Jason Andrews looks at the development tools available for the new ARMv8-R architecture, alongside a review of what’s new in the processor offering.

If you are new to portable stimulus, Tom A, Cadence, has put together a comprehensive blog about the standard designed to help developers with verification reuse, test automation and coverage. Of course, he also mentions the role of the company’s Perspec System Verifier, but this is an informative blog, not a marketing pitch.

Undersea hotels sound like the holiday of the future, and Deepak Datye, Dassault Systèmes, shows how structures for such wonderful pieces of architecture can be realized with the company’s 3DExperience Platform.

Capturing the frustration of an engineer mid-debug, Rich Edelman, Mentor Graphics, contributes a long, blow-by-blow account of that elusive, thankless task, which he names UVM Misery, where a customer’s bug is your bug now.

Giving Pim Tuyls, CEO of Intrinsic-ID, a grilling about security, Gabe Moretti, Chip Design magazine, teases out the difference between security and integrity and how to increase security in ways that will be adopted across the industry.

Blog Review – Monday, February 15, 2016

Monday, February 15th, 2016

Research converts contact lenses into computer screens; What to see at Embedded World 2016; Remembering Professor Marvin Minsky; How fast is fast and will the IoT protect us?

The possibilities for wearable technology, where a polymer film coating can turn a contact lens into a computer screen, are covered by Andrew Spence, University of South Australia’s Future Industries Institute. Applications range from a lens used as a sensor to measure blood glucose levels to a pair of glasses acting as a computer screen.

If you are preparing your Embedded World 2016, Nuremberg, schedule, Philippe Bressy, ARM, offers an overview of what will be at his favourite event. He covers the company’s offerings for IoT and connectivity, single board computing, software productivity and automotive, plus what can be seen from ARM’s partners on the ARM booth (Hall 5, stand 338), as well as some of the technical conference’s sessions and classes.

Other temptations can be found at the Xilinx booth at Embedded World (Hall 1, stand 205). Steve Leibson, Xilinx, explains how visitors can win a Digilent ARTY Dev Kit based on an Artix-7 A35T-1LI FPGA, with Xilinx Vivado HLx Design Edition.

Showing more of what can be done with the mbed IoT Device Platform, Liam Dillon, ARM, writes about the reference system for SoC design for IoT endpoints, and its latest proof-of-concept platform, Beetle.

How fast is fast, muses Richard Mitchell, Ansys. He focuses on Ansys 17.0 and its increased speeds for structural analysis simulations and flags up a webinar about Ansys Mechanical using HPC on March 3.

If the IoT is going to be omnipresent, proposes Valerie C, Dassault, can we be sure that it can protect us? She also asks what lies ahead.

A pioneer of artificial intelligence, Professor Marvin Minsky has died at the age of 88. Rambus fellow Dr. David G. Stork remembers the man, his career and his legacy in this field of technology.

I do enjoy Whiteboard Wednesdays, and Corrie Callenback, Cadence, has picked a great topic for this one – Sachin Dhingra’s look at automotive Ethernet.

Another thing I particularly enjoy is a party, and Hélène Thibiéroz, Synopsys, reminds us that it is 35 years since HSPICE was introduced. (Note to other party-goers: fireworks to celebrate are nice, but cake is better!)

Caroline Hayes, European Editor

Is Hardware Really That Much Different From Software?

Tuesday, December 2nd, 2014

When can hardware be considered as software? Are software flows less complex? Why are hardware tools less up-to-date? Experts from ARM, Jama Software and Imec propose the answers.

By John Blyler, Editorial Director

The Internet of Things will bring hardware and software designers into closer collaboration than ever before. Understanding the working differences between the two technical domains in terms of design approaches and terminology is the first step in harmonizing the relationship between these occasionally contentious camps. What are the differences in hardware and software design approaches? To answer that question, I talked with technical experts including Harmke De Groot, Program Director Ultra-Low Power Technologies at Imec; Jonathan Austin, Senior Software Engineer at ARM; and Eric Nguyen, Director of Business Intelligence at Jama Software. What follows is a portion of their responses. — JB

Read the complete article at: JB Circuit

Blog Review – Wednesday, May 14 2014

Monday, May 12th, 2014

Women dominate today’s blogosphere review, with three of the four posts written by women. There is a celebration of female empowerment honoring Intel; a vehicle design university program using Ansys software; a glimpse into the future IoT – if humans allow; and Mentor’s Wally Rhines flags up the pitfalls of verification. By Caroline Hayes, Senior Editor.

Tackling gender equality, the Women’s e-News “21 Leaders for the 21st Century” is the subject of Suzanne Fallender’s blog (Intel). Among those honored was Shelly Esque, Intel’s VP of Corporate Affairs and Chair of the Intel Foundation, recognized for her work at Intel to empower girls and women through education and technology, including programs to encourage girls to enter technology careers. Some philosophical issues are included, making the reader wonder just how far we have actually progressed since women campaigned for the vote….

Gypsy Rose Gabe [Moretti] looks into his crystal ball, predicting the impact of the IoT on the world around us, with real-life examples for the vehicle, the home and the commerce of the future. He yearns for humans willing to adapt to the technology.

Who better to list the biggest verification mistakes made by design teams than Mentor Graphics CEO Wally Rhines, in an interview highlighted by Shelly Stalnaker.

Cars are a passion for Helen Renshaw and she indulges it in a review of the ANSYS academic program, culminating in the design of a lightweight vehicle by the Harbin Institute of Technology, China.

FPGAs for ASIC Prototyping Bridge Global Development

Wednesday, July 20th, 2016

Rapid Prototyping is an Enduring Methodology

Thursday, September 24th, 2015

Gabe Moretti, Senior Editor

When I started working in the electronics industry, hardware development had prototyping using printed circuit boards (PCBs) as its only verification tool. The method was not “rapid” since it involved building and maintaining one or more PCBs. With the development of the EDA industry, simulators became an alternative method, although they really only achieved popularity in the ’80s with the introduction of hardware description languages like Verilog and VHDL.

Today the majority of designs are developed using software tools, but rapid prototyping is still a method used in a significant portion of designs. In fact, hardware-based prototyping is a growing methodology, mostly due to the increased power and size of FPGA devices. It can now really be called “rapid prototyping.”

Rapid Prototyping Defined

Lauro Rizzatti, a noted expert on the subject of hardware-based development of electronics, reinforces the idea in this way: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future.”

Saba Sharifi, VP Business Development, Logic Business Unit, System LSI Group at Toshiba America Electronic Components, describes the state of rapid prototyping as follows: “While traditional virtual prototyping involves using CAD and CAE tools to validate a design before building a physical prototype, rapid prototyping is growing in popularity as a method for prototyping SoC and ASIC designs on an FPGA for hardware verification and early software development. In a rapid prototyping environment, a user may start development using an FPGA-based system, and then choose either to keep the design in the FPGA, or to transfer it into a hard-coded solution such as an ASIC. There are a number of different ways to achieve this end.”

To support the hardware-based prototyping methodology, Toshiba has introduced a new type of device: Toshiba’s Fast Fit Structured Array (FFSA). The FFSA technology utilizes metal-configurable standard cell (MCSC) SoC platform technology for designing ASICs and ASSPs, and for replacing FPGAs. Designed with FPGA capabilities in mind, FFSA provides pre-developed wrappers that can realize some of the key FPGA functionality, as well as pre-defined master sizes, to facilitate the conversion process.

According to Saba, “In a sense, it’s an extension to traditional FPGA-to-ASIC rapid prototyping – the second portion of the process can be achieved significantly faster using the FFSA approach. The goal with FFSA technology is to enable developers to reduce their time to market and non-recurring engineering (NRE) costs by minimizing customizable layers (to four metal layers) while delivering the performance, power and lower unit costs associated with standard-cell ASICs. An FFSA device speeds front-end development due to faster timing closure compared to a traditional ASIC, while back-end prototyping is improved via the pre-defined master sizes. In some cases, customers pursue concurrent development – they get the development process started in the FPGA and begin software development, and then take the FPGA into the FFSA platform. The platform takes into consideration the conversion requirements to bring the FPGA design into the hard-coded FFSA so that the process can be achieved more quickly and easily. FFSA supports a wide array of interfaces and high-speed SerDes (up to 28G), making it well suited for wired and wireless networking, SSD (storage) controllers, and a number of industrial and consumer applications. With its power, speed, NRE and TTM benefits, FFSA can be a good solution anywhere that developers have traditionally pursued an FPGA-based approach, or required customized silicon for purpose-built applications with moderate volumes.”

According to Troy Scott, Product Marketing Manager, Synopsys: “FPGA-based prototypes are a class of rapid prototyping methods that are very popular for the promise of high-performance with a relatively low cost to get started. Prototyping with FPGAs is now considered a mainstream verification method which is implemented in a myriad of ways from relatively simple low-cost boards consisting of a single FPGA and memory and interface peripherals, to very complex chassis systems that integrate dozens of FPGAs in order to host billion-gate ASIC and SoC designs. The sophistication of the development tools, debug environment, and connectivity options vary as much as the equipment architectures do. This article examines the trends in rapid prototyping with FPGAs, the advantages they provide, how they compare to other simulation tools like virtual prototypes, and EDA standards that influence the methodology.

According to prototyping specialists surveyed by Synopsys, the three most common goals they have are to speed up RTL simulation and verification, validate hardware/software systems, and enable software development. These goals influence the design of a prototype in order to address the unique needs of each use case.”

Figure 1. Top 3 Goals for a Prototype  (Courtesy of Synopsys, Inc)

Advantages of Rapid Prototyping

Frank Schirrmeister, Group Director for Product Marketing, System Development Suite, Cadence, provides a business description of the advantages: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy and at a reasonable replication cost.”

Stephen Bailey, Director of Emerging Technologies, DVT at Mentor Graphics, puts it as follows: “Performance, more verification cycles in a given period of time, especially for software development, drives the use of rapid prototypes. Rapid prototyping typically provides a 10x (~10 MHz) performance advantage over emulation, which is 1,000x (~1 MHz) faster than RTL software simulation (~1 kHz).
Once a design has been implemented in a rapid prototype, that implementation can be easily replicated across as many prototype hardware platforms as the user has available. No recompilation is required. Replicating or cloning prototypes provides more platforms for software engineers, who appreciate having their own platform but definitely want a platform available whenever they need to debug.”
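
To put those ratios in perspective, the short sketch below turns Bailey’s approximate throughput figures (~10 MHz for a prototype, ~1 MHz for emulation, ~1 kHz for RTL simulation) into wall-clock time for a one-billion-cycle workload. The figures are the ones quoted above and the workload is arbitrary; this is an illustration, not a benchmark.

```c
/* Back-of-the-envelope comparison of verification engine throughput,
 * using the approximate rates quoted above (illustrative only). */
#include <stdio.h>

int main(void)
{
    const double cycles = 1e9;   /* one billion design clock cycles */
    const struct { const char *name; double hz; } engines[] = {
        { "FPGA prototype", 10e6 },
        { "Emulator",        1e6 },
        { "RTL simulator",   1e3 },
    };

    for (int i = 0; i < 3; i++) {
        double seconds = cycles / engines[i].hz;
        printf("%-15s %12.0f s  (~%.1f hours)\n",
               engines[i].name, seconds, seconds / 3600.0);
    }
    return 0;
}
```

At those rates the same billion cycles take under two minutes on a prototype, roughly a quarter of an hour on an emulator, and well over a week in RTL simulation, which is why software teams gravitate toward the faster engines.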

Lauro points out that: “The main advantage of rapid prototyping is very fast execution speed that comes at the expense of a very long setup time that may take months on very large designs. Partitioning and clock mapping require an uncommon expertise. Of course, the assumption is that the design fits in the maximum configuration of the prototyping board; otherwise the approach would not be applicable. The speed of FPGA prototyping makes it viable for embedded software development. They are best used for validating application software and for final system validation.”

Troy sees the advantages of the methodology as: “To address verification tasks the prototype helps to characterize timing and pipeline latencies that are not possible with more high-level representations of the design and perhaps more dramatically the prototype is able to reach execution speeds in the hundreds of megahertz. Typical prototype architectures for verification tasks rely on a CPU-based approach where traffic generation for the DUT is written as a software program. The CPU might be an external workstation or integrated with the prototype itself. Large memory ICs adjacent to the FPGAs store input traffic and results that is preloaded and read back for analysis. Prototypes that can provide an easy to implement test infrastructure that includes memory ICs and controllers, a high-bandwidth connection to the workstation, and a software API will accelerate data transfer and monitoring tasks by the prototyping team.

Software development and system validation tasks will influence the prototype design as well. The software developer is seeking an executable representation to support porting of legacy code and develop new device drivers for the latest interface protocol implementation. In some cases the prototype serves as a way for a company to deploy an architecture design and software driver examples to partners and customers. Both schemes demand high execution speed and often real world interface PHY connections. For example, consumer product developers will seek USB, HDMI, and MIPI interfaces while an industrial product will often require ADC/DAC or Ethernet interfaces. The prototype then must provide an easy way to connect to accessory interfaces and ideally a catalog of controllers and reference designs. And because protocol validation may require many cycles to validate, a means to store many full milliseconds of hardware trace data helps compliance check-out and troubleshooting.”

Rapid Prototyping versus Virtual Prototyping

According to Steve, a virtual prototype’s “rather limited support for hardware debugging hinders its ability to verify drivers and operating systems, where hardware emulation excels.”


Lauro kept his answer short and to the point: “Rapid prototyping, also called FPGA prototyping, is based on a physical representation of the design under test (DUT) that gets mapped on an array of FPGA devices. Virtual prototyping is based on a virtual representation of the DUT. The source code of an FPGA prototype is RTL code, namely synthesizable design. The source code of a virtual prototype is a design description at a higher level of abstraction, either based on C/C++/SystemC or SystemVerilog languages, that is not synthesizable.”

Cadence’s position is represented by Frank in his usual thorough style.  “Once RTL has become sufficiently stable, it can be mapped into an array of FPGAs for execution. This essentially requires a remapping from the design’s target technology into the FPGA fabric and often needs memories remodeled, different clock domains managed, and smart partitioning before the mapping into the individual FPGAs happens using standard software provided by the FPGA vendors. The main driver for the use of FPGA-based prototyping is software development, which has changed the dynamics of electronics development quite fundamentally over the last decade. Its key advantage is its ability to provide a hardware platform for software development and system validation that is fast enough to satisfy software developers. The software can reach a range of tens of MHz up to 100MHz and allows connections to external interfaces like PCIe, USB, Ethernet, etc. in real time, which leads to the ability to run system validation within the target environments.

When time to availability is a concern, virtual prototyping based on transaction-level models (TLM) can be the clear winner because virtual prototypes can be provided independently of the RTL that the engines on the continuum require. Everything depends on model availability, too. A lot of processor models today, like ARM Fast Models, are available off-the-shelf. Creating models for new IP often delays the application of virtual prototypes due to their sometimes extensive development effort, which can eliminate the time-to-availability advantage during a project. While virtual prototyping can run in the speed range of hundreds of MIPS, not unlike FPGA-based prototypes, the key differences between them are the model fidelity, replication cost, and the ability to debug the hardware.

Model fidelity often determines which prototype to use. There is often no hardware representation available earlier than virtual prototypes, so they can be the only choice for early software bring-up and even initial driver development. They are, however, limited by model fidelity – TLMs are really an abstraction of the real thing as expressed in RTL. When full hardware accuracy is required, FPGA-based prototypes are a great choice for software development and system validation at high speed. We have seen customers deliver dozens if not hundreds of FPGA-based prototypes to software developers, often three months or more prior to silicon being available.

Two more execution engines are worth mentioning. RTL simulation is the more accurate, slower version of virtual prototyping. Its low speed in the Hz or KHz range is really prohibitive for efficient software development. In contrast, due to the high speed of both virtual and FPGA-based prototypes, software development is quite efficient on both of them. Emulation is the slower equivalent of FPGA-based prototyping that can be available much earlier because its bring-up is much easier and more automated, even from not-yet-mature RTL. It also offers almost simulation-like debug and, since it also provides speed in the MHz range, emulation is often the first appropriate engine for software and OS bring-up used for Android, Linux and Windows, as well as for executing benchmarks like AnTuTu. Of course, on a per project basis, it is considered more expensive than FPGA-based prototyping, even though it can be more cost efficient from a verification perspective when considering multiple projects and a large number of regression workloads.”

Figure 2: Characteristics of the two methods (Courtesy of Synopsys Inc.)

Growth Opportunities

For Lauro the situation boils down to this: “Although one of the oldest methods used to verify designs, dating back to the days of breadboarding, FPGA prototyping is still here today, and more and more it will continue to be used in the future. The dramatic improvement in FPGA technology that made possible the manufacturing of devices of monstrous capacity will enable rapid prototyping for ever larger designs. Rapid prototyping is an essential tool in the modern verification/validation toolbox of chip designs.”
Troy thinks that there are growth opportunities and explained: “A prototype’s high-performance and relatively low investment to fabricate has led them to proliferate among IP and ASIC development teams. Market size estimates show that 70% of all ASIC designs are prototyped to some degree using an FPGA-based system today. Given this demand several commercial offerings have emerged that address the limitations exhibited by custom-built boards. The benefits of almost immediate availability, better quality, modularity for better reuse, and the ability to out-source support and maintenance are big benefits. Well documented interfaces and usage guidelines make end-users largely self-sufficient. A significant trend for commercial systems now is development and debugging features of the EDA software being integrated or co-designed along with the hardware system. Commercial systems can demonstrate superior performance, debug visibility, and bring-up efficiency as a result of the development tools using hardware characterization data, being able to target unique hardware functions of the system, and employing communication infrastructure to the DUT. Commercial high-capacity prototypes are often made available as part of the IT infrastructure so various groups can share or be budgeted prototype resources as project demands vary. Network accessibility, independent management of individual FPGA chains in a “rack” installation, and job queue management are common value-added features of such systems.

Another general trend in rapid prototyping is to mix transaction-level model (TLM) and RTL model abstractions in order to blend the best of both and accelerate the validation task. How do virtual and physical prototypes differ? The biggest contrast is often the model’s availability during the project. In practice the latest generation CPU architectures are not available as synthesizable RTL. License and deployment restrictions can limit access, or the design is so new the RTL is simply not yet available from the vendor. For these reasons virtual prototypes of key CPU subsystems are a practical alternative. For best performance, and thus for the role of software development tasks, hybrid prototypes typically join an FPGA-based prototype, a cycle-accurate implementation in hardware, with a TLM prototype using a loosely-timed (LT) coding style. TLM abstracts away the individual events and phases of the behavior of the system and instead focuses on the communication transactions. This may be a perfectly acceptable model for the commercial IP block of a CPU but may not be for a new custom interface controller-to-PHY design that is being tailored for a particular application. The team integrating the blocks of the design will assess whether the abstraction is appropriate to satisfy the verification or validation scenarios.”

Steve described his opinion as follows: “Historically, rapid prototyping has been utilized for designs sized in the tens of millions of gates, with some advanced users pushing capacity into the low 100M-gate range. This has limited the use of rapid prototyping to full chips on the smaller range of size and to IP blocks or subsystems of larger chips. For IP blocks/subsystems, it is relatively common to combine virtual prototypes of the processor subsystem with a rapid prototype of the IP block or subsystem. This is referred to as “hybrid prototyping.”
With the next generation of FPGAs such as Xilinx’s UltraScale and Altera’s Stratix-10 and the continued evolution of prototyping solutions, creating larger rapid prototypes will become practical.  This should result in expanded use of rapid prototyping to cover more full chip pre-silicon validation uses.
In the past, limited silicon visibility made debugging difficult and analysis of various aspects of the design virtually impossible with rapid prototypes.  Improvements in silicon visibility and control will improve debug productivity when issues in the hardware design escape to the prototype.  Visibility improvements will also provide insight into chip and system performance and quality that were previously not possible.”

Frank concluded that: “The growth of FPGA-based prototyping is really primarily driven by software needs because software developers are, in terms of numbers of users, the largest consumer base. FPGA-based prototyping also provides satisfactory speed while delivering the real thing at RTL accuracy at a reasonable replication cost. Second, the rollout of Cadence’s multi-fabric compiler that maps RTL both into the Palladium emulation platform and into the Protium FPGA-based prototyping platform significantly eases the trade-offs with respect to speed and hardware debug between emulation and FPGA-based prototyping. This gives developers even more options than they ever had before and widens the applicability of FPGA-based prototyping. The third driver for growth in prototyping is the advent of hybrid usage of, for example, virtual prototyping with emulation, combining fast execution for some portions of the design (like the processors) with accuracy for other aspects of the design (like graphics processing units).

Overall, rapid or FPGA-based prototyping has its rightful place in the continuum of development engines, offering users high-speed execution of an accurate representation. This advantage makes rapid or FPGA-based prototyping a great platform for software development that requires hardware accuracy, as well as for system validation.”

Conclusion

All four of the contributors painted a positive picture of rapid prototyping. The growth of FPGA devices, both in size and speed, has been critical in keeping this type of development and verification method applicable to today’s designs. It is often the case that a development team will need to use a variety of tools as it progresses through its task, and rapid prototyping has proven to be useful and reliable.

DVCon Highlights: Software, Complexity, and Moore’s Law

Thursday, March 12th, 2015

Gabe Moretti, Senior Editor

The first DVCon United States was a success. It was the 27th conference in the series and the first with this name, to distinguish it from DVCon Europe and DVCon India. The latter two held their first events last year and, following their success, will be held this year as well.

Overall attendance, including exhibit-only and technical conference attendees, was 932.

If we count exhibitors’ personnel, as DAC does, then the total number of attendees is 1,213. The conference attracted 36 exhibitors, including 10 exhibiting for the first time and 6 headquartered outside of the US. The technical presentations were very well attended, almost always with standing room only, thus averaging around 175 attendees per session. One cannot fit more in the conference rooms than the DoubleTree has. The other thing I observed was that there was almost no attendee traffic during the presentations. People took a seat and stayed for the entire presentation. Almost no one came in, listened for a few minutes and then left. In my experience this is not typical, and it points out that the goal of DVCon, to present topics of contemporary importance, was met.

Process Technology and Software Growth

The keynote address this year was delivered by Aart de Geus, chairman and co-CEO of Synopsys. His speeches are always both unique and quite interesting. This year he chose as his topic “Smart Design from Silicon to Software.” As one could have expected, Aart’s major points had to do with process technology, something he is extremely knowledgeable about. He thinks that Moore’s law as an instrument to predict semiconductor process advances has about ten years of usable life. After that the industry will have to find another tool, assuming one will be required, I would add. Since, as Aart correctly points out, we are still using a 193 nm crayon to implement 10 nm features, clearly progress is significantly impaired. Personally, I do not understand the reason for continuing to use ultraviolet light in lithography, aside from the huge cost of moving to x-ray lithography. The industry has resisted the move for so long that I think even x-ray has a short life span, which at this point would not justify the investment. So, before the ten years are up, we might see some very unusual and creative approaches to building features on some new material. After all, whatever we use will have to understand atoms and their structure.

For now, says Aart, most system companies are “camping” at 28 nm while evaluating “the big leap” to more advanced lithography processes. I think it will be a long time, if ever, before 10 nm processes become popular. Obviously the 28 nm process supports the area and power requirements of the vast majority of advanced consumer products. Aart did not say it, but it is a fact that there is still a very large number of wafers produced using a 90 nm process. Dr. de Geus pointed out that the major factor in determining investments in product development is now economics, not available EDA technology. Of course one can observe that economics is only a second-order decision-making tool, since economics is determined in part by complexity. But Aart stopped at economics, a point he has made in previous presentations over the last twelve months. His point is well taken, since ROI is greatly dependent on hitting the market window.

A very interesting point made during the presentation is that the length of development schedules has not changed in the last ten years; content has. Development of proprietary hardware has gotten shorter, thanks to improved EDA tools, but IP integration and software integration and co-verification have used up all the time savings in the schedule.

What Dr. de Geus’s slides show is that software is growing, and will continue to grow, at about ten times the rate of hardware. Thus investment in software tools by EDA companies makes sense now. Approximately ten years ago, during a DATE conference in Paris, I had asked Aart about the opportunity for EDA companies, Synopsys in particular, to invest in software tools. At that time Aart was emphatic that EDA companies did not belong in the software space. Compilers are either cheap or free, he told me, and debuggers do not offer the correct economic value to be of interest. Well, without much fanfare about the topic of “investment in software,” Synopsys is now in the software business in a big way. Virtual prototyping and software co-verification are market segments Synopsys is very active in, and making a nice profit, I may add. So, it is either a matter of definition or of new market availability, but EDA companies are in the software business.

When Aart talks I always get reasons to think.  Here are my conclusions.  On the manufacturing side, we are tinkering with what we have had for years, afraid to make the leap to a more suitable technology.  From the software side, we are just as conservative.

That software would grow at a much faster pace than hardware is not news to me. In all the years that I worked as a software developer or manager of software development, I always found that software grows to utilize all the available hardware environment and is the major reason for hardware development, whether it is memory size and management or speed of execution. My conclusion is that nothing is new: the software industry has never put efficiency as its top goal; it is always about how much easier we can make the life of a programmer. Higher-level languages are more powerful because programmers can implement functions with minimal effort, not because the underlying hardware is used optimally. And the result is that when it comes to software quality and security, the users are playing too large a part as the verification team.

Art or Science

The Wednesday proceedings were opened early in the morning by a panel with the provocative title of Art or Science.  The panelists were Janick Bergeron from Synopsys, Harry Foster from Mentor, JL Gray from Cadence, Ken Knowlson from Intel, and Bernard Murphy from Atrenta.  The purpose of the panel was to figure out whether a developer is better served by using his or her own creativity in developing either hardware or software, or follow a defined and “proven” methodology without deviation.

After some introductory remarks, which seemed to show mild support for the Science approach, I pointed out that the title of the panel was wrong. It should have been titled Art and Science, since both must play a part in any good development process. That changed the nature of the panel. To begin with, there had to be a definition of what art and science meant. Here is my definition. Art is a problem-specific solution achieved through creativity. Science is the use of a repeatable recipe, encompassing both tools and methods, that ensures validated quality of results.

Harry Foster pointed out that it is difficult to teach creativity. This is true, but it is not impossible, I maintain, especially if we changed our approach to education. We must move away from teaching the ability to repeat memorized answers that are easy to grade on a test, and switch to problem solving, a system better for the student but more difficult to grade. Our present educational system is focused on teachers, not students.

The panel spent a significant amount of time discussing the issue of hardware/software co-verification.  We really do not have a complete scientific approach, but we are also limited by the schedule in using creative solutions that themselves require verification.

I really liked what Ken Knowlson said at one point. There is a significant difference between a complicated and a complex problem. A complicated problem is understood but difficult to solve, while a complex problem is something we do not understand a priori. This insight may be difficult to understand without an example, so here is mine: relativity is complicated, dark matter is complex.

Conclusion

Discussing all of the technical sessions would be too long and would interest only portions of the readership, so I am leaving such matters to those who have access to the conference proceedings.  But I think that both the keynote speech and the panel provided enough understanding as well as thought material to amply justify attending the conference.  Too often I have heard that DVCon is a verification conference: it is not just for verification as both the keynote and the panel prove.  It is for all those who care about development and verification, in short for those who know that a well developed product is easier to verify, manufacture and maintain than otherwise.  So whether in India, Europe or in the US, see you at the next DVCon.

Security Levels the IoT Device and Server Landscape

Monday, July 14th, 2014

By John Blyler, Chief Content Officer

Best practices, standards and a diverse ecosystem are essential for embedded developers to mitigate threats such as stack overflows and software backdoors.

What are the best practices when designing for device through server IoT security systems? This question was put to the experts at ARM, including Marc Canel, Vice President, Security Technologies; Jeff Underhill, Director of Server Programs; and Joakim Bech, Technical Lead for Security Working Group at Linaro. What follows is a portion of those interviews. – JB

Blyler: Security for the Internet of Things (IoT) spans everything from end-point sensors to connected devices, aggregated gateways, and middleware – all the way to servers. How can embedded designers deal with all the inherent complexity?

Bech: I think it’s impossible to get a detailed understanding in all areas. It is simply too much to handle. But luckily, you normally don’t have to focus on all IoT devices at the same time. Under normal conditions, the embedded designers work with a limited set of products in a specific area. The tricky part is when these devices develop their own communication, which results in an untested area where you could potentially have both bugs and security flaws to an even greater extent than with standard protocols. Therefore, if possible, it’s almost always better and preferable to adhere to a predefined standard, instead of inventing new protocols.

Canel: There is no one IoT that covers everything. Instead, you have multiple IoT silos. Software developers design applications for specific silos, e.g., health care, wearable devices, entertainment, or industrial manufacturing. Each application works for the specific use case in one of those silos and it’s unlikely to have portability between the silos. Within each of those silos you will use different security models due to the different requirements of each market.

Underhill: The IoT represents an explosion in the number of end point devices and related data that will be exchanged between devices and servers. Many of these devices – like the Nest thermostat system – can reveal when device consumers are home or not, which has raised privacy and security concerns among all involved. Securing the system will be critical as the data spreads from the end devices through the home gateway (where the data is aggregated) to the network and up to the cloud. This data flow will add volume and diversity to the overall security challenge.

Blyler: Isn’t there a base level of security requirements that applies to all siloed markets and across both devices and servers?

Canel: Yes. Typically, there are sets of key security functions that apply across the board, e.g., the secure boot architecture that runs on the operating systems (OSs). A second key function is device identification, or knowing that the device you think you are talking to is really the right one. That is why ARM® has joined the Fast IDentity Online (FIDO) Alliance. We are propagating the FIDO architecture across all of our platforms and IoT markets to support the development of standards-based protocols to authenticate devices, applications and users across all markets.

Underhill: From the perspective of the server, you already have a hierarchy of privilege levels to run appropriate pieces of software. Further, each user, OS kernel, hypervisor and other secure execution space can be part of the TrustZone® technology. [Editor’s Note: For further reading, see the technical report on TEE and ARM’s TrustZone. The TrustZone technology secures a wide array of client and server computing platforms, including handsets, tablets, wearable devices and enterprise systems.]

Servers have a secure container mechanism that is untouchable by the open (untrusted) environment. Multiple secure mission critical functions can reside in that container space, e.g., server administration, controlling access to keys, and the bulk encryption of data and content. A key benefit of the TrustZone is the extension of security from the server down through lower end CPUs. The IoT will require more intelligence moving down the network to handle more preprocessing of data closer to the edge of the network.

Blyler: That’s a good segue into the next question: What are the best tools and practices (heuristics) when designing long-term secure IoT systems? What standards apply?

Bech: This is a huge area to cover. When designing long-term security you must take security into account at the beginning of a design. It's much better to have security already in the early design than to try to squeeze it in later into an existing (non-secure) product. In Linaro – an open source community – we are endeavoring to follow industry standards. For example, in the open source Trusted Execution Environment (TEE), we have followed the standards created by GlobalPlatform (see Figure 1). Another important thing is that the security solutions you design today could be used for a very long time, meaning that you must have some sense of what is coming in the future. It's not good if you have to re-design a solution just because hardware has become more powerful and therefore could potentially weaken the security (e.g., key sizes and such).

Figure 1: Trusted Execution Environment (TEE) is a secure area that resides in the main processor of a smart phone (or any mobile device) and ensures that sensitive data is stored, processed and protected in a trusted environment.

Canel: When it comes to coding techniques and practices, there really isn’t a standard. Each company and each developer has their own way of developing good code. For example, most advanced companies will ensure that there are good coding techniques around the management of stacks to avoid stack overflows and things like that.

That being said, code is developed by human beings who make mistakes. To protect mission critical data against individual mistakes you must have robust encryption technology within the devices and ensure that the data remains protected for a long time. We are doing work on cryptographic (“crypto”) engines – in either hardware or software – to ensure data security.

I don’t think that ARM will become a vendor of crypto technology because there is already an established ecosystem of such vendors. But we will come up with architecture recommendations so that the developers have a consistent level of crypto engine implementations within their products.
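
As a rough illustration of the kind of software crypto engine usage Canel describes, here is a minimal sketch that encrypts a small data blob with AES-256-GCM through OpenSSL’s EVP interface. OpenSSL is not mentioned in the interview; it simply stands in for whatever hardware or software engine a given platform provides, and a real device would provision the key in protected storage and check every return value rather than generate the key on the fly as shown here.

```c
/* Minimal sketch: authenticated encryption of a data blob with AES-256-GCM
 * via OpenSSL's EVP interface (build with -lcrypto). Error handling and key
 * management are stripped down for illustration only. */
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <stdio.h>

int main(void)
{
    unsigned char key[32], iv[12], tag[16];
    unsigned char plaintext[] = "sensor reading: 42";
    unsigned char ciphertext[sizeof(plaintext)];
    int len = 0, ct_len = 0;

    /* Random key and nonce for the example; a real device would keep the
     * key behind TrustZone or in a secure element, not in plain memory. */
    RAND_bytes(key, sizeof(key));
    RAND_bytes(iv, sizeof(iv));

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, sizeof(plaintext));
    ct_len = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + len, &len);
    ct_len += len;
    /* The 16-byte GCM tag is what lets the receiver detect tampering. */
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof(tag), tag);
    EVP_CIPHER_CTX_free(ctx);

    printf("encrypted %d bytes, tag[0]=0x%02x\n", ct_len, tag[0]);
    return 0;
}
```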

Blyler: How should hardware and software developers design security into their applications? Does ARM have a series of IP or tools to help the developer add security to their applications?

Bech: When working towards a long-term secure system, you should make use of all available test equipment. In addition to ensuring that the software APIs are behaving as they should, you must test to reduce your vulnerability to side channel attacks. A lot of security libraries have been shown to be vulnerable to timing attacks and power analysis attacks (Differential Power Analysis). Likewise, incorrect implementation and usage of cryptographic algorithms might also completely compromise a system; therefore, it is also important to have audits where domain experts complete a thorough review of the complete solution.
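
A concrete instance of the timing-attack risk Bech mentions is comparing secrets (passwords, MACs, authentication tags): a comparison that returns at the first mismatch leaks, through its run time, how many leading bytes were correct. The sketch below shows the usual constant-time alternative; it illustrates the principle only, and in practice a vetted routine such as OpenSSL’s CRYPTO_memcmp, plus the expert audits Bech recommends, is the safer path.

```c
#include <stddef.h>
#include <stdint.h>

/* Compare two equal-length secrets without an early exit: the loop always
 * touches every byte, so execution time does not depend on where the first
 * mismatch occurs. Returns 0 if and only if the buffers are identical. */
int constant_time_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;

    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);   /* accumulate any difference */

    return diff;
}
```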

Canel: There are various tools companies that have such products, for example, to prevent common security breach mechanisms like stack overflows (see Figure 2). [Editor’s Note: The stack overflows when a program attempts to use more space than is available on the call stack, e.g., when it attempts to access memory beyond the call stack’s set limit. A closely related problem, illustrated in Figure 2, is the stack buffer overflow, in which a program writes past the end of a buffer allocated on the stack. To learn more, see this reference: "How are Compilers detect stack overflows …"]

Figure 2: The stack after data is copied into the buffer, where "hello" is the first command-line argument. (Courtesy Wikipedia)
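
To make the editor’s note and Figure 2 concrete, here is a hedged sketch of the classic stack buffer overflow alongside a bounded copy. The function names are invented for the example; the point is simply that copying attacker-controlled input into a fixed-size stack buffer without a length check can overwrite adjacent stack data, including the saved return address.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy() copies until it finds a NUL terminator, so an argument
 * longer than 15 characters overruns buf and clobbers adjacent stack data
 * (saved registers, the return address) -- the situation Figure 2 depicts. */
void greet_unsafe(const char *arg)
{
    char buf[16];
    strcpy(buf, arg);                        /* no bounds check */
    printf("hello %s\n", buf);
}

/* Safer: bound the copy to the size of the destination buffer. */
void greet_bounded(const char *arg)
{
    char buf[16];
    snprintf(buf, sizeof(buf), "%s", arg);   /* truncates, never overruns */
    printf("hello %s\n", buf);
}

int main(int argc, char **argv)
{
    const char *arg = (argc > 1) ? argv[1] : "world";
    greet_bounded(arg);   /* greet_unsafe(arg) is the vulnerable variant */
    return 0;
}
```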

Underhill: On the server side, we look at best practices in a standardization approach so that our partners are presented with a standard software view to the OS, the hypervisors, etc. For example, we provide customers with a secure container space. OEM and end customers will have their specific use cases for these secure containers. We provide them with the secure building block to integrate throughout their products/solutions.

Blyler: Describe briefly overall trends in security, privacy and data collection/analysis technologies.

Bech: One trend we see at Linaro is toward open source security applications. In recent years, consumers and developers alike have heard about backdoors and data breaches in both hardware and software, unauthorized data mining, sites losing complete databases containing credentials and such. This security awareness has led people to request openness. People and companies want to have the possibility to look into the source code to make sure that it doesn’t contain backdoors and such. Secondly, we have also noticed the move away from pure software security solutions to also leveraging some hardware. It could be TrustZone in an ARM-powered system or it could be a dedicated co-processor that runs part or all of the security solution. For example, companies today don’t completely trust the default OS as security flaws could make it possible to extract sensitive information. If a flaw makes it possible to gain root access, keys used for decryption could potentially be read out from the OS. Companies therefore instead prefer to manage sensitive assets in a secure environment that is protected with help from hardware.

Canel: From our point-of-view, the trend is toward using the TrustZone ecosystem. This is especially true for connected devices that can upgrade themselves over the air. Another trend is using a secure boot system across all markets to ensure the image of the code that is running on the devices is authenticated securely.

Over the next 2 to 4 years we will see innovative solutions to store keys securely at low cost within devices and make sure those keys are protected against side channel attacks. [Editor’s note: Side-Channel Attacks (SCAs) collect operational characteristics of the design – execution time, power consumption, electromagnetic emanation – to retrieve keys, learn how to insert faults or gain other insights into the design. (Here's an example of an ARM ecosystem vendor that deals with side-channel attacks – INVIA.)]

Underhill: We are seeing the increased use of open source software on server systems. That trend is an important enabler for a new architecture entering the server space. Another trend is found in the changing boundaries of servers. Today’s servers have both a networking and a storage component. So having a partnership with providers of those other component areas is good. On the server side, ARM’s networking and storage partners like Applied Micro, AMD, Cavium, Broadcom, and others have announced server related plans. So the diversity we are seeing with the IoT and the exploding number of devices is being paralleled by the increased diversity in the processing needed to handle that throughput on the network and also back in the data center.

Blyler: Thank you.

Predictions About Technology and Future Engineers

Tuesday, November 19th, 2013

By John Blyler, Content Officer

What follows is a portion of my interview with Dassault Systèmes’ “Compass” magazine about the most critical technologies and issues faced by the technical community to manage increasing complexity within shrinking design cycles. My list includes hardware-software co-design; cyber-physical systems; wireless chips; low power; and motivating students toward high-tech careers. - JB

COMPASS: The past decade has seen many milestones in hardware/software co-design. What do you think will stand out in the next decade?

JB: Thanks to Moore’s Law and the efficiencies of engineering chips and boards, these things have become commodities. Companies have been forced to differentiate themselves with the software. Also, when you design a chip, you have to think about designing the board at the same time, so you get into hardware/hardware co-design, with software tying everything together.

That trend toward tighter integration is only going to accelerate. The time to get your product to market is shrinking, so you need to have software designed while the hardware is being designed. In many instances, the software demands at the user level are dictating what the chip design will be. Before, it was the other way around.

Compass: What are today’s biggest challenges in systems modeling, integration, and designing for the user?

JB: When I tell my engineering friends the movement is toward designing for the end user’s experience, they scratch their heads. It’s easy to see how that applies to software, because with software it’s easy to change on the fly. But for hardware, that’s trickier. How is that going to be implemented? That’s something the engineering community and manufacturing community are still wrestling with.

You see it in cell phones. The end-user input must come early in the design cycle as it will affect both the software and electrical-mechanical subsystems. Further, everything has to be low-power and green. You have a mountain of considerations, aside from just getting the product to work.

Read the full interview at Compass magazine.
