Posts Tagged ‘High-level synthesis’

Technology Implications for 2016

Monday, February 1st, 2016

Gabe Moretti, Senior Editor

Although it is logical to expect that all sectors of the EDA industry will see improvements in 2016, some sectors will be more active than others because they are more directly connected to the market forces that fuel the consumer electronics market.

Verification

Michael Sanie, Senior Director of Verification Marketing at Synopsys, points to changes in the requirements for EDA tools:

“With the rise of the Internet of Things (IoT), Web 2.0 applications and social media comes the demand for devices that are smaller, faster and consume lower power, despite being equipped with increasing amounts of software content. As a result, SoC designs have grown tremendously in complexity. In addition, advanced verification teams are now faced with the challenge of not only reducing functional bugs, but also accelerating both software bring-up and time to market. The process of finding and fixing functional bugs and performing software bring-up involves intricate verification flows including virtual platforms, static and formal verification, simulation, emulation and finally, FPGA-based prototyping.  Up until very recently, each step in the verification flow was isolated, time-consuming and tedious, in addition to requiring several disjoint technologies and methodologies.

In 2016, the industry will continue to strive towards greater levels of verification productivity and early software bring-up.  This will be achieved through the introduction of larger, more unified platform solutions that feature a continuum of technologies enabling faster engines, native integrations and unified compile, debug, coverage and verification IP.  With this continuum of technologies being integrated into a unified platform solution, each step in the verification flow is further streamlined, and little time is spent in transitioning between steps. The rise of such platforms will continue to enable further dramatic increases in SoC verification productivity and earlier software bring-up.”

Semiconductor Processes

The persistent effort by EDA companies to follow the predictions of Gordon Moore, commonly known as Moore’s Law, is continuing in spite of the ever-growing optical challenges confronting present lithography processes.

Vassilios Gerousis, Distinguished Engineer at Cadence, points out: “While few expect 10nm production, we will definitely see 10nm test chip products this year. Some will even hit production timelines and become actual product designs. At the same time, we will see more products go into production at the 14nm and 16nm process nodes. Designers are definitely migrating from 28nm, and even skipping over 20nm.”

Zhihong Liu, Executive Chairman, ProPlus Design Solutions, also thinks that advanced process nodes will be utilized this year. “In 2016, we’ll see more designs at advanced process technologies such as FinFET at 16/14nm, and even trial projects at 10nm and 7nm. It becomes necessary for the EDA community to develop one common tool platform for process development, CAD and circuit design to help designers evaluate, select and implement new processes. However, such tool platforms did not exist before. An ongoing concern within the transistor-level design community is EDA tools such as FastSPICE simulators for verification and signoff that are not as accurate or reliable as they should be. It’s becoming a critical need as project teams move to advanced nodes and larger scale designs that require higher accuracy and incur greater risk and higher costs to fabricate the chip.”

3DIC and Thermal

Michael Buehler-Garcia points to the increased use of 3D-ICs in design: “3D IC packaging has historically been the domain of packaging and OSATs. Newer offerings are driving 3D implementation from the chip design perspective. With this change, chip design techniques are being used to analyze and verify the chip stack to ensure we eliminate integration issues, especially considering that chips in a 3D stack often come from different foundries, and are verified using different processes. In 2016, we project increased “chip out” physical and circuit verification that can be performed independently on each die, and at the interfacing level (die-to-die, die-to-package, etc.). In concert with this change, we are seeing a customer-driven desire for a standardized verification process – used by chip design companies and assembly houses to ensure the manufacturability and performance of IC packages – that will significantly reduce risk of package failure, while also reducing turnaround time for both the component providers and assembly houses. By implementing a repeatable, industry-wide supported process, all participants can improve both their first-time success rate and overall product quality. Initial work with customers and package assembly houses has proven the feasibility of this approach. As we all know, standards take a long time, but 2016 is the year to start the process.”

Dr. John Parry, Electronics Industry Vertical Manager, Mentor Graphics Mechanical Analysis Division, adds that thermal considerations are also increasing in importance in system design: “The trend we see in the chip thermal-mechanical space is a stronger need for qualifying new materials, methods, packaging technologies and manufacturing processes. Our advanced thermal design software, thermal characterization and active power cycling hardware are helping customers to meet this expanding need.”

Emulation

One of the areas showing significant growth is hardware-based emulation. Lauro Rizzatti, a noted verification expert, stated: “In 2016 and beyond, new generations of hardware emulators will be brought to market. They will have added capabilities that are more powerful than those currently available, as well as support for new applications.

Hardware emulation will continue to be the foundation of a verification strategy in 2016. This isn’t so much a prediction as a fact, as standard designs routinely exceed the 100-million gate mark and processor, graphics and networking designs approach one billion gates. Hardware emulation is the only verification tool able to take on and verify these oversized, complex designs. It’s also true as new capabilities enable emulation to be a datacenter resource, giving worldwide project teams access to this remarkable verification tool. In 2016, this will become the new normal as project teams better leverage their investment in hardware emulation and work to avoid risk.”

The importance of emulation was also underscored by Jean-Marie Brunet, Director of Marketing, Mentor Graphics Emulation Division especially in the area of hardware/software co-verification and software debug.  “Emulation is going mainstream. In 2016, its use will continue to grow faster than the overall EDA industry. Customers are starting to look beyond traditional emulation criteria – such as capacity, speed, compile time, and easy hardware and software debugging – to an expanding list of new criteria: transaction-based verification, multi-user / multi-project access, live and off-line embedded software development and validation, and data-center operation as a centrally managed resource rather than a standalone box in the testing lab. Enterprise management applications help automate emulation by maximizing uptime and supporting intelligent job queuing. This approach not only balances the workload, but also shuffles the queued workload to prioritize critical jobs.

Software applications will continue to shape emulation’s future. For example, emulation is driving a methodology shift in the way power is analyzed and measured. Now designers can boot an OS and run real software applications on SoC designs in an emulator. Real-time switching activity information generated during emulation runs is passed to power analysis tools where power issues can be evaluated. In 2016 there will be a steady stream of new emulator applications aligning with customers’ verification needs such as design for test, coverage closure and visualization.”
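The reason switching activity is so valuable is that dynamic power scales directly with how often nets toggle (roughly P ≈ ½·C·V²·f per net). Below is a minimal sketch, with a hypothetical data structure and made-up toggle rates and capacitances, of the kind of estimate a power tool derives from emulation activity data; production flows exchange vendor activity formats and use sign-off power engines rather than anything this simple.

```cpp
// Minimal sketch: estimating dynamic power from per-net switching activity
// of the kind an emulation run can report. The NetActivity structure and the
// numbers are hypothetical; real flows exchange vendor activity databases.
#include <cstdio>
#include <vector>

struct NetActivity {
    double toggle_rate_hz;  // average toggles per second observed in emulation
    double capacitance_f;   // effective switched capacitance of the net (farads)
};

// Classic dynamic power relation, summed over nets: P = 0.5 * C * Vdd^2 * f_toggle
double estimate_dynamic_power(const std::vector<NetActivity>& nets, double vdd) {
    double watts = 0.0;
    for (const NetActivity& n : nets)
        watts += 0.5 * n.capacitance_f * vdd * vdd * n.toggle_rate_hz;
    return watts;
}

int main() {
    std::vector<NetActivity> nets = {
        {2.0e8, 5.0e-15},  // a busy datapath net
        {1.0e6, 2.0e-15},  // a mostly idle control net
    };
    std::printf("Estimated dynamic power: %.3e W\n",
                estimate_dynamic_power(nets, 0.8 /* volts */));
    return 0;
}
```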

Power Management

Last year Sonics introduced a hardware-based method for power management. Grant Pierce, the company’s CEO, believes that this year the technology should see significant acceptance: “SoC designs have hit the power wall and the time for dynamic power management solutions is here. This marks the beginning of a multi-year SoC development trend where system architects and sub-system engineers must consider power as a primary design constraint. Reducing power consumption is now every electrical engineer’s concern, both to enable product success and to help reduce greenhouse gas emissions as a global issue. SoC designers can no longer afford to ignore power management without suffering serious consequences, especially in untethered applications where users expect all-day, all-month or even all-year battery life, even with increasing functionality and seemingly constant connectivity.

The majority of SoC design teams, which don’t have the luxury of employing dedicated power engineering staff, will look to purchase third-party IP solutions that deliver orders of magnitude greater energy savings than traditional software-based approaches to power management and control. While designs for mobile phones, tablets, and the application processors that operate them grow more power sensitive with each successive generation of highly capable devices, Sonics expects dynamic, hardware-based power management solutions to be extremely attractive to a broad set of designers building products for automotive, machine vision, smart TV, data center, and IoT markets.


Figure 1. Representation of an SoC partitioned into many Power Grains with a power controller synthesized to simultaneously control each grain. The zoomed view shows the local control element on each grain – highlighting the modular and distributed construction of the ICE-Grain architecture.

As our CTO Drew Wingard has stated, hardware-based power management solutions offer distinct advantages over software-based approaches in terms of speed and parallelism. SoC designers incorporating this approach can better manage the distributed nature of their heterogeneous chips and decentralize the power management to support finer granularity of power control. They can harvest shorter periods of idle time more effectively, which means increasing portions of their chips can stay turned off longer. The bottom line is that these solutions will achieve substantial energy savings to benefit both product and societal requirements.”
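To make the idle-harvesting idea concrete, here is a hypothetical model of a local grain controller that gates its grain off once an idle counter crosses a threshold. It is only an illustration of fine-grained, decentralized control; it is not Sonics’ ICE-Grain logic, and it ignores wake-up latency, retention and voltage sequencing.

```cpp
// Hypothetical per-grain power controller: gate the grain off after a short
// run of idle cycles, wake it as soon as work arrives. Illustrative only; not
// Sonics' ICE-Grain implementation.
#include <cstdint>
#include <cstdio>

class GrainController {
public:
    explicit GrainController(uint32_t idle_threshold)
        : idle_threshold_(idle_threshold) {}

    // Called once per clock cycle with the grain's activity status.
    // Returns true while the grain's power switch should remain enabled.
    bool tick(bool grain_busy) {
        if (grain_busy) {
            idle_cycles_ = 0;
            powered_ = true;              // wake immediately on demand
        } else if (++idle_cycles_ >= idle_threshold_) {
            powered_ = false;             // harvest the idle period: gate off
        }
        return powered_;
    }

private:
    uint32_t idle_threshold_;
    uint32_t idle_cycles_ = 0;
    bool powered_ = true;
};

int main() {
    GrainController ctrl(4);  // gate off after four consecutive idle cycles
    const bool activity[] = {true, true, false, false, false, false, false, true, false};
    for (bool busy : activity)
        std::printf("busy=%d powered=%d\n", busy, ctrl.tick(busy));
    return 0;
}
```

The shorter the threshold, the more idle time is harvested; a software-driven scheme would typically react orders of magnitude more slowly, which is the speed and parallelism advantage Wingard describes.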

High Level Synthesis

Raik Brinkmann, president and CEO of OneSpin Solutions, noted:

“High-level synthesis will get more traction because the demand is there. As more design teams get comfortable working at the SystemC/C++ level, demand for EDA tools supporting the task will increase, including formal verification. Additionally, algorithmic design will be driven further in 2016 by applications on FPGAs to reduce power and increase performance over GPUs. That suggests FPGA implementation and verification flows will require more automation to improve turnaround time, a viable opportunity for EDA vendors. Finally, verification challenges on the hardware/firmware interface will increase as more complex blocks are generated and need firmware to access and drive their functions.”

As it has done throughout its existence, the EDA industry will continue to be the engine that propels the growth of the electronics industry. The electronics industry has shown a propensity to think ahead and prepare itself to offer new products as soon as demand materializes, so I expect that the worst that can happen is a mild slowdown in the demand for EDA tools in 2016.

High Level Synthesis (HLS) Splits EDA Market

Friday, February 14th, 2014

Recent acquisitions and spin-offs by the major electronic design automation companies reveal key differences in the design of complex chips.

Last week, Cadence Design Systems announced the acquisition of Forte Design. This announcement brought renewed interest to the high-level synthesis (HLS) of semiconductor chips. But the acquisition also raises questions about emerging changes in the electronic design automation (EDA) industry. Before looking at these wide-ranging changes, let’s see how this acquisition may affect the immediate HLS market.

At first glance, it seems that Cadence has acquired a redundant tool. Both Forte’s Cynthesizer and Cadence’s C-to-Silicon are SystemC-based applications that help chip designers create complex system-on-chip (SoC) designs from higher levels of abstraction. “High-level synthesis (HLS) tools synthesize C/C++/SystemC code targeting hardware implementation, after the hardware-software trade-offs and partitioning activities have been performed upstream in the design flow,” explained Gary Dare, General Manager at Space Codesign, a provider of front-end architectural EDA design tools.

Although both Cadence’s and Forte’s HLS tools are based on SystemC, they are not identical in function.

Forte’s strength lies in the optimization of data path design, i.e., the flow of data on a chip. This strength comes from Forte’s previous acquisition of the Arithmetic IP libraries, which focus on mathematical expressions and related data types, e.g., floating-point calculations.

How do the data bits from arithmetic computations move through an SoC? That’s where the C-to-Silicon tool takes over. “Forte’s arithmetic and data focus will complement Cadence’s C-to-Silicon strength in the control logic synthesis domain,” notes Craig Cochran, VP of Corporate Marketing at Cadence. The control plane serves to route and control the flow of information and arithmetic computations from the data plane.

Aside from complementary data and control plane synthesis, the primary difference between the two tools is that C-to-Silicon was built on top of a register-transfer level (RTL) compiler, thus allowing chip designers to synthesize from the high-level SystemC description down to the hardware-specific gate level.
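For readers less familiar with the input side of these flows, the fragment below sketches the style of clocked SystemC module an HLS tool consumes: the arithmetic expression is the data path, and the tool generates the RTL and control around it. It assumes the open-source Accellera SystemC library; the module and port names are illustrative and are not drawn from Cynthesizer or C-to-Silicon examples.

```cpp
// Illustrative SystemC input for an HLS flow: a registered multiply-add.
// Assumes the Accellera SystemC library (systemc.h); names are made up.
#include <systemc.h>

SC_MODULE(madd_unit) {
    sc_in<bool>         clk;
    sc_in<bool>         rst;
    sc_in<sc_int<16> >  a, b;      // data-path operands
    sc_in<sc_int<32> >  c;
    sc_out<sc_int<33> > result;    // registered multiply-add result

    void compute() {
        if (rst.read())
            result.write(0);
        else
            result.write(a.read() * b.read() + c.read());  // data path mapped to multipliers/adders
    }

    SC_CTOR(madd_unit) {
        SC_METHOD(compute);        // evaluated on each rising clock edge
        sensitive << clk.pos();
    }
};
```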

The emphasis on the SystemC support for both tools is important. “Assuming that Cadence keeps the Forte Design team, it will be possible to enhance C-to-Silicon with better SystemC support based on Cynthesizer technology,” observed Nikolaos Kavvadias, CEO, Ajax Compilers. “However, for the following 2 or 3 years both tools will need to be offered.”

From a long-term perspective, Cadence’s acquisition of Forte’s tools should enhance its position in classic high-level synthesis (HLS). “Within 2013, Cadence acquired Tensilica’s and Evatronix’ IP businesses,” notes Kavvadias. “Both moves make sense if Cadence envisions selling the platform and the tools to specialize (e.g. add hardware accelerators), develop and test at a high level.”

These last two process areas, design and verification, are key to Cadence’s recent push into the IP market. Several acquisitions beyond Tensilica and Evatronix over the last few years have strengthened the company’s portfolio of design and verification IP. Further, the acquisition of Forte’s HLS tool should give Cadence greater opportunities to drive the SystemC design and verification standards.

Enablement versus Realization

Does this acquisition of another HLS company support Cadence’s long-term EDA360 vision? When first introduced several years ago, the vision acknowledged the need for EDA tools to do more than automate the chip development process. It shifted focus to the development of a hardware and software system in which the hardware development was driven by the needs of the software application.

“Today, the company is looking beyond the classic definition of EDA – which emphasizes automation – to the enablement of the full system including hardware, software and IP on chips and boards to interconnections and verification of the complete system,” explains Cochran. “And this fits into that system (HLS) context.”

The system design enablement approach was first introduced by Cadence during last month’s earnings report. The company has not yet detailed how the “enablement” approach relates to its previous “realization” vision. But Cochran explains it this way: “Enablement goes beyond automation. Enablement includes our content contribution to our customer’s design in the form of licensable IP and software. The software comes in many forms, from the drivers and applications that run on the Tensilica (IP) processors to other embedded software and codecs.”

This change in semantics may reflect the change in the way EDA tool companies interface with the larger semiconductor supply chain. According to Cochran and others, design teams from larger chip companies are relying more on HLS tools for architectural development and verification of larger and larger chips.  In these ever growing SoC designs, RTL synthesis has become a bottleneck. This means that chip designers must synthesize much larger portions of their chips in a way that reduces human error and subsequent debug and verification activities. That’s the advantage offered by maturing high-level synthesis tools.

Cadence believes that SystemC is the right language for HLS development. But what is the alternative?

HLS Market Fragments

The other major high-level synthesis technology in the EDA market relies on ANSI-C and C++ implementations that involve proprietary libraries and data types, explained Cochran. “These proprietary libraries and data types are needed to define the synthesis approach in terms of mathematical functions, communication between IP blocks and to represent concurrency.” The ANSI-C approach appeals to designers writing software algorithms rather than designing chip hardware.
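As a rough illustration of that style, the kernel below is the sort of untimed C++ such flows consume. A plain typedef stands in for a vendor bit-accurate type (the proprietary libraries Cochran mentions normally supply template classes that pin exact bit widths), and the filter length and coefficients are arbitrary.

```cpp
// Untimed C++ FIR kernel in the style consumed by library-based HLS flows.
// sample_t/accum_t are plain stand-ins for vendor bit-accurate types.
#include <cstdint>

typedef int16_t sample_t;  // stand-in for a bit-accurate fixed-point type
typedef int32_t accum_t;   // wider accumulator to avoid overflow

static const int      TAPS = 4;
static const sample_t COEFF[TAPS] = {3, -1, 4, 1};

// The loop body becomes the multiply-accumulate data path; the HLS tool adds
// the surrounding control logic (counters, handshakes, memory interfaces).
accum_t fir(const sample_t samples[TAPS]) {
    accum_t acc = 0;
    for (int i = 0; i < TAPS; ++i)
        acc += static_cast<accum_t>(COEFF[i]) * samples[i];
    return acc;
}
```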

Kavvadias agrees, but adds this perspective. “Given Synopsys’s recent acquisition of Target Compiler Technologies (TCT), it appears that the big three have different HLS market orientations: Cadence with a SystemC to ASIC/FPGA end-to-end flow, Synopsys moving on to application-specific instruction-set processor (ASIP) synthesis technology, while Mentor has offloaded its HLS business.”

“Further, Synopsys now has two totally distinct ASIP synthesis technologies, LISATek’s Processor Designer and TCT’s IP Designer,” notes Kavvadias. “They are based on different formalisms (LISA and nML) and have different code and model generation approaches. In order to appeal to ASIP synthesis tool users, Cadence will have to focus on the XPRES toolset. But I’m not sure this will happen.”

A few years ago, Mentor Graphics spun out its HLS technology to Calypto, but Mentor still owns a stake in the spin-off company. That’s why long-time EDA analyst Gary Smith believes that the Forte acquisition puts Cadence and Mentor-Calypto’s Catapult C way ahead of Synopsys’s Synfora Synphony C Compiler. “The Synopsys HLS tool pretty much only does algorithmic mapping to RTL, whereas the Forte and Mentor-Calypto tools can do algorithmic mapping, control logic, data paths, registers, memory interfaces, etc. — a whole design.”

What Does the Future Hold?

Forte’s tool focus on data path synthesis and its associated arithmetic IP should mean few integration issues with Cadence’s existing HLS tool, C-to-Silicon. However, Kavvadias notes that the acquisition makes floating-point IP increasingly important. “It is relevant to algorithmists (e.g. using MATLAB or NumPy/SciPy) wishing to push-button algorithms to hardware.” The efficient implementation of floating-point functions is not a trivial task.

Kavvadias predicts that, “if CDNS buys a matrix-processor oriented IP portfolio, then their next step is definitely a MATLAB- or Python-to-hardware HLS tool and maybe the MATLAB/Python platform beyond that.” Matrix processors are popular in digital signal processing (DSP) applications that require massive numbers of multiply-accumulate (MAC) data operations.
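The appeal is easy to see in code: a matrix-vector multiply is nothing but multiply-accumulate in its inner loop, exactly the pattern an HLS tool would unroll or pipeline onto parallel MAC units. The sketch below uses arbitrary dimensions and types.

```cpp
// Small matrix-vector multiply: the inner loop is the MAC kernel that
// matrix processors and DSP-oriented HLS flows are built around.
#include <cstdint>

static const int ROWS = 4;
static const int COLS = 4;

void matvec(const int16_t mat[ROWS][COLS], const int16_t vec[COLS],
            int32_t out[ROWS]) {
    for (int r = 0; r < ROWS; ++r) {
        int32_t acc = 0;
        for (int c = 0; c < COLS; ++c)
            acc += static_cast<int32_t>(mat[r][c]) * vec[c];  // multiply-accumulate
        out[r] = acc;
    }
}
```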

Today’s sensor and mobile designs require the selection of the most energy-efficient platforms available. In turn, this mandates the need for early, high-level power trade-off studies – perfect for High-Level Synthesis (HLS) tools.