Posts Tagged ‘Cadence’

IP Integration: Not a Simple Operation

Tuesday, May 13th, 2014

Gabe Moretti, Contributing Editor

Although the IP industry is about 25 years old, it still presents problems typical of immature industries.  Yet the use of IP in systems design is now so popular that one is hard pressed to find even one system design that does not use IP.  My first reaction to the use of IP is “back to the future”.

For many years of my professional career I dealt with board-level design as well as chip design.  Between 1970 and 1990 IP was sold as discrete components by companies such as Texas Instruments, National, and Fairchild, among many others.  Their databooks described precisely how to integrate the part into a design.  Although a defined standard for the contents did not exist, a de-facto standard was followed by all providers.  Engineers, using the databooks’ information, would choose the correct part for their needs, and the integration was reasonably straightforward.  All signals could be analyzed in the lab since pins and traces were available on the board.

Enter Complexity

As semiconductor fabrication progressed, the board became the chip, and the components are now IP modules.  One would think that integration would have remained reasonably straightforward, but this is not the case.  Concerns about safeguarding intellectual property rights took over, and IP developers became reluctant to provide much information about the functioning of a module, afraid that its functionality would be duplicated and that they would thus lose sales.

As the number of transistors on a chip increases, the complexity of porting a design from one process to the next also increases.

Figure 1. Projected number of transistors on a chip

Developers found that providing a hard macro, that is, a module already placed and routed and ready for fabrication by the chosen foundry, was the best way to protect their intellectual property rights.  But such a strategy is costly because foundries cannot simply validate every macro for free; the IP provider must be in a position to guarantee volume use by the foundry’s customers.  Thus many IP modules must be delivered in synthesizable form, which means they must be verified.

Karthik Srinivasan, Corporate Application Engineer Manager, Analog Mixed Signal at Apache Design Solutions, an ANSYS company, wrote that “SoC designs today integrate a significant number of IPs to accelerate their design times and to reduce the risks to their design closure. But the gap in expectations of where and how sign-off happens between the IP and SoC designers creates design issues that affect the final product’s performance and release. IP designers often validate their IPs in isolation with expectations of near-ideal operating conditions. SoCs are verified and signed off with mostly abstracted or, in many cases, ‘black-box’ views of IPs. But as more and more high-speed and noise-sensitive IPs get placed next to each other or next to the core digital logic, failure conditions that were not considered emerge. This worsens when these IPs share one or more power and ground supply domains. For example, when a bank of high-speed DDR IPs is placed next to a bank of memories, the switching of the DDR can generate sufficient noise on the shared ground network to adversely affect the operation of the memory.

As designs migrate to smaller technology nodes, especially those using FinFET-based technologies, this gap in the design closure process is going to worsen the power, noise and reliability closure process.”

DDR memory blocks are becoming a greater and greater portion of a chip as the portion of functionality implemented in firmware increases.  Bob Smith, Senior VP of Marketing and Business Development at Uniquify, makes the case for a system view of memories.

“DDR IP is used in a wide variety of ASIC and SoC devices found in many different applications and market segments. If the device has an embedded processor, then it is highly likely that the processor requires access to external DDR memory. This access requires a DDR subsystem (DDR controller, PHY and I/O) to manage the data traffic flowing to and from the embedded processor and external DDR memory.

Whether it is procured from an external source or developed by an internal IP group, almost all chip design projects rely on DDR IP to implement the on-chip DDR subsystem. The integration techniques used to implement the DDR IP in the chip design can have far reaching effects on DDR performance, chip area, power consumption and even reliability.

Figure 2. A non-optimized DDR implementation

The above figure illustrates a typical on-chip DDR implementation. Note that while the DDR I/Os span the perimeter of the chip, the DDR PHYs are configured as blocks and are placed in such a way that they are centered with the I/Os. As shown in the diagram, this not only wastes valuable chip area, but also creates other problems.”

“A much more efficient way to implement the DDR subsystem IP is to deliver a DDR PHY that is exactly matched to the DDR I/O layout. By matching the PHY exactly to the I/Os, a tremendous amount of area is saved and power is reduced. Even better, the performance of the DDR can be improved since the PHY-I/O layout minimizes skew.”

Figure 3. Optimized DDR block

As process technology progresses from 32 nm to 22 nm, then to 14 nm and beyond, the role of the foundry in the place and route of an entire chip increases, and in direct proportion the freedom of designers to determine the final topology of a chip decreases.  Thus we are rapidly reaching the point where only hardened modules will be viable.  The number of viable providers in the IP industry is shrinking rapidly, and many significant companies have been acquired in the last three or four years by EDA companies that are becoming major providers of IP products.

Synopsys started selling IP around 1990 and now has a wide variety of IP in its portfolio, mostly developed internally.  Cadence, on the other hand, has built its extensive inventory of IP products mostly through acquisition.

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systèmes, points out that there are a number of issues to deal with regarding IP:

1. IP Sourcing: Companies are going to need a way to source IP.  They will need access to a cataloging system that allows searching of both internally developed (or under development) IP and externally available third-party IP.

2. IP Governance: For internally developed IP, there need to be systems and methodologies for handling the promotion of work in progress to company-certified IP.  For both internal and externally acquired IP, there needs to be a process to validate that IP, and then a system to rate the IP internally based on previous use, available documentation, and other design artifacts.

3. IP Issue and Defect Tracking: Since IP will be in use in multiple projects, a formal system is required to handle issue and defect tracking across multiple projects against all IP.  If one group finds an issue with a piece of IP, all other project groups that are using that IP need to be alerted to the issue and to the plan for resolving it.  Ideally this should be integrated into the design tools that are used to assemble IP as well.  If a product has already gone out the door with the defective IP, these issues need to be managed and corrective actions need to ensue based on any defects found.

4. IP Security: There are different levels of protection needed for different types of IP, and robust methods must be put in place to ensure the security of IP.  First, company-critical IP must be secure, and systems need to be put in place to make sure that the IP does not leave the company premises.  If collaborating with partners, any acquired IP must also be handled so that it is only used in the designs that are being collaboratively designed.  There need to be restrictions on using partner IP in design blocks which in turn can become IP in other designs.  There needs to be a way to track the ‘pedigree’ of IP.

5. Variant-driven platform-based design: Ultimately, for companies to keep up with shortening market windows and application-driven platform design, they will need to adopt a system with base platforms of pre-qualified IP that can be configured on the fly and used as a starting point for new designs.  These systems would automatically populate a design workspace with the required IP from a company-approved catalog as the basis of a new design moving forward.
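To make the first three points above more concrete, the following is a minimal sketch, in Python and purely for illustration, of the kind of data model an IP catalog with governance status and cross-project defect tracking might start from. It is not tied to ENOVIA or any commercial system, and every class, field, and value name is an assumption chosen for this example.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Defect:
        ip_name: str
        summary: str
        resolution_plan: str = "under investigation"

    @dataclass
    class IPBlock:
        name: str
        version: str
        source: str                                       # "internal", "third-party", or "partner"
        status: str = "work-in-progress"                  # promoted to "company-certified" after validation
        rating: int = 0                                   # based on previous use, documentation, artifacts
        used_by: List[str] = field(default_factory=list)  # projects currently using this IP

    class IPCatalog:
        """Toy catalog: search, governance promotion, and defect broadcast across projects."""
        def __init__(self):
            self.blocks: Dict[str, IPBlock] = {}
            self.defects: List[Defect] = []

        def register(self, block: IPBlock):
            self.blocks[block.name] = block

        def search(self, keyword: str) -> List[IPBlock]:
            return [b for b in self.blocks.values() if keyword.lower() in b.name.lower()]

        def certify(self, name: str):
            # governance: promote work-in-progress IP to company-certified status
            self.blocks[name].status = "company-certified"

        def file_defect(self, defect: Defect) -> List[str]:
            # defect tracking: return every project that must be alerted
            self.defects.append(defect)
            return self.blocks[defect.ip_name].used_by

    catalog = IPCatalog()
    catalog.register(IPBlock("ddr4_phy", "2.1", "third-party", used_by=["soc_a", "soc_b"]))
    catalog.certify("ddr4_phy")
    affected = catalog.file_defect(Defect("ddr4_phy", "training fails at -40C"))
    print("notify projects:", affected)   # -> ['soc_a', 'soc_b']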

Integrating the Pieces

Farzad Zarrinfar, Managing Director of the Novelics Business Unit at Mentor Graphics, provided a synthesis of the problems facing designers.

“For IP integration, multiple IP like ‘Hard IP’, ‘Synthesizable Soft Peripheral IP’, and ‘Synthesizable Soft Processor IP’, with different sets of deliverables, use EDA tools for efficient ASIC/SoC designs. Selecting the optimal IP size (such as the smallest embedded memory IP) is a critical design decision. While free IP is readily available, it does not always provide the best solution when compared to fee-based IP that provides much better characteristics for the specific application.

IP integration to achieve smaller die size, lower leakage, lower dynamic power, or faster speed can provide designers with a more optimized solution that can potentially save millions of dollars over the life of the product, and better differentiate their chips in a highly competitive ASIC/SoC marketplace.”

Bill Neifert, Chief Technology Officer at Carbon Design Systems, observes: “Certainly, some designers at the bleeding edge differentiate every aspect of the subsystem and their own IP, but we’re increasingly seeing others adopt whole subsystem designs and then make configuration tweaks. Think black-box design; ARM’s big.LITTLE offerings are prime examples of this trend.

Of course, in order to make these configuration changes, designers need to know the exact impact of the changes that they’re making. We see users doing this a lot on our IP Exchange web portal. They will download a CPAK (Carbon Performance Analysis Kit), a pre-built system or subsystem complete with software at the bare metal or O/S level. This gets them up and running quickly but not with their exact configuration. They’ll then iterate various configuration options in order to meet their exact design goals. It’s not unusual for a design team to compile 20 different configurations for the same IP block on our portal and then compare the impact of each of these different models on system performance.

Naturally, all of this impacts the firmware team quite a bit. The software developers don’t need to know exactly what the underlying hardware is doing but the firmware team needs the exact IP configuration. The sooner these decisions can be made, the sooner they can start being productive. Integrating this level of software on to the hardware typically exposes a new round of IP optimizations that can be made as well. Therefore, it’s not unusual for IP configuration changes to happen in waves as additional pieces of IP and software are added to the system.”

Drew Wingard, CTO at Sonics, points out that standards matter.  “Because there are many sources for IP, the industry had to create and adhere to standards for integration. From a silicon vendor’s perspective, IP sources include third-party commercial components, internally designed blocks and cores, and customer-designed components. To meet the challenge of integrating IP components from many different places, SoC designers needed communication protocol standards. Communication protocol standards efforts began with the Virtual Socket Interface Alliance (VSIA), continued with the Open Core Protocol International Partnership (OCP-IP), and today reside with Accellera. Of course, our customers need to leverage de-facto standards such as ARM’s AMBA as well.  We owe our ability to integrate IP to the fundamental communication protocol standards work that these organizations performed.”

The Challenge of Verification

Sunrise’s Prithi Ramakrishnan is concerned about system verification.  “At a very high level, the main issue with IP is that the simulated environment is different from the final design environment.  Analog and RF IP is dependent on process/node, foundry, layout, extraction, model fidelity, and placement.  So you are either tied to just dropping it in ‘as is’ and treating it like a black box (nobody knows how it works and whether it meets the required specifications) or completely changing it (with the caveat that you can no longer expect the same results).  Digital IP needs to be resynthesized followed by placement and routing, and it takes several iterations to make the IP you got work the way you want it to work. In addition, this process is extremely tool-dependent.

Finally, there are system level issues like interoperability, interface and controls (how does the IP talk to the rest of the SoC). A very important, often overlooked factor is the communication between the IP providers and the SoC implementation houses – there are documents outlining integration guidelines, but without an automated process that takes in all that information, a lot could be lost in translation.”

The issue of how well a third party IP has been verified will always haunt designers unless the industry finds a way to make IP as trustworthy as the TI 7400 and equivalent parts of the early days.  Bernard Murphy, CTO at Atrenta, observed: “One area that doesn’t get a lot of air-time is how an SoC verification team goes about debugging a problem around an IP. You have the old challenge – is this our bug or the IP developer’s bug? If the developer is down the hall, you can probably resolve the problem quickly. If they are now working for your biggest competitor, good luck with that. If this is a commercial IP, you work with an apps guy to circle around possibilities: maybe you are using it wrong, maybe you misunderstood the manual or the protocol, maybe they didn’t test that particular configuration for that particular use-case… Then they bring in their expert and go back through the cycle until you converge on an answer. Problem is, all this burns a lot of time and you’re on a schedule. Is there a way to compress this debug cycle?”

He offers the following suggestion.  “One important class of things to check for is the case mentioned above, where the developer didn’t test that configuration for that use-case.  This is where synthesized assertions come in. These are derived automatically by the IP developer in the course of verifying the IP. They don’t look like traditional assertions (long, complex sequences of dependencies). They tend to be simpler, often non-obvious, and describe relationships not just at the boundary of the IP but also internal to the IP. Most importantly, they encode not just functionality but also the bounds of the use-cases in which the IP was tested. Think of it as a ‘signature’ for the function plus the verification of that function.”
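As a rough, illustrative sketch of the general idea (not Atrenta’s actual technology), the Python fragment below mines two kinds of simple candidate invariants from simulation traces, constant signals and single-valued implications, and also records the value set each signal was exercised with, which approximates the bounds of the tested use-cases. The signal names and trace format are assumptions.

    def mine_candidates(trace):
        """trace: list of dicts, one per cycle, mapping signal name -> observed value."""
        signals = list(trace[0].keys())
        observed = {s: sorted({cycle[s] for cycle in trace}) for s in signals}

        # 1. Constant-signal candidates: the signal held a single value in every tested cycle.
        constants = {s: vals[0] for s, vals in observed.items() if len(vals) == 1}

        # 2. Implication candidates: whenever signal a == va, signal b always had one value.
        implications = []
        for a in signals:
            for va in observed[a]:
                for b in signals:
                    if b == a:
                        continue
                    b_vals = {cycle[b] for cycle in trace if cycle[a] == va}
                    if len(b_vals) == 1:
                        implications.append((a, va, b, b_vals.pop()))

        # 3. Use-case bounds: the value set each signal was actually exercised with.
        return constants, implications, observed

    trace = [
        {"mode": 0, "enable": 0, "burst_len": 4},
        {"mode": 0, "enable": 1, "burst_len": 4},
        {"mode": 1, "enable": 1, "burst_len": 8},
    ]
    consts, impls, bounds = mine_candidates(trace)
    print(impls)    # includes ('mode', 1, 'enable', 1): enable was always 1 when mode == 1
    print(bounds)   # burst_len only ever seen as 4 or 8, a bound on the tested use-cases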

Thomas L. Anderson, VP of Marketing at Breker Verification Systems, pleads: integrate, but verify.  He argues that “The truth is that most SoC teams trust integration too much and verify too little. Many SoC products hit the market only after two or three iterations through the foundry. This costs a lot of money and risks losing market windows to competitors. Most SoC teams follow a five-step verification process:

  1. Ensure that each block, whether locally designed or licensed as IP, is well verified
  2. Use formal methods to verify that each block has been integrated into the SoC properly
  3. Assemble a minimal chip-level simulation testbench and run a few sanity verification tests
  4. Hand-write some simple C tests and run on the embedded processors in simulation or emulation
  5. Run the production software on the processors in simulation, emulation, or prototyping

The problem with this process is that the tests in steps 3 and 4 are too simple since they are hand-written. They typically verify only one block at a time, ignoring interaction between blocks. They also perform only one operation at a time, so they don’t stress cache coherency or any inherent concurrency within the design. Humans aren’t good at thinking and coding in parallel. Thorough SoC pre-silicon verification can occur only with multi-threaded, multi-processor test cases that string blocks together into realistic scenarios representing end-user applications of the chip.”

An Example of Complexity

Charlie Cheng, CEO of Kilopass, gave me an example, based on sparse matrix math, of the complexity of choosing the correct IP for a design.

With semiconductor IP comprising 90 percent of today’s semiconductor devices and memory IP accounting for over 50 percent of these complex SoCs, it’s no wonder that IP is the fastest growing sector of the overall semiconductor industry. As a result, managing third-party IP is a growing responsibility within today’s semiconductor companies. How to make the right choice from a growing quantity of IP is the major challenge facing engineering teams, purchasing departments, and executive management. The process of these groups buying IP can be viewed as a sparse matrix mathematical exercise but without the actual math formulas and data manipulation.

Table (one plane of the matrix): Vendor A’s OTP at 1.8V, with one row per foundry process: TSMC 28HP, UMC 28HP, SMIC 28HP, and GLOBALFOUNDRIES 28HP.

The table shows two dimensions of a multi-dimensional matrix representing the variables confronting the purchasing company’s teams. To this two-dimensional matrix, imagine adding three more tables for 1.8V operation with the four foundries at 28HPL, 28HPM, and 28HP.  Now replicate this in a fourth dimension for the variable of 2.5V. Add a fifth and sixth dimension to the matrix for Vendor B’s OTP.  If this were a mathematical evaluation, a figure of merit would be assigned to each cell in each plane of the multidimensional matrix.

For example, Vendor A’s OTP at 1.8V has JEDEC three-lot qualification at 28HP at TSMC, UMC and GLOBALFOUNDRIES, and similarly for 28HPM and 28HP, but has only working silicon at 28LP.  Vendor B’s OTP may not have received three-lot qualification at any of the foundries on any of the processes, but may have first silicon or one- or two-lot qualification at one or more of them. In the mathematical exercise, a figure of merit would be assigned to each status: fully qualified, one- or two-lot qualified, first silicon, or not taped out.  Using matrix algebra, the formal mathematical exercise would return a result, but an intuitive evaluation of the process would suggest that Vendor A, with more three-lot qualifications at multiple foundries, would have an edge over Vendor B, which does not.

The above exercise would be typical of the evaluation occurring in the engineering team. A similar exercise would be occurring in the purchasing department with terms and conditions presented in the licensing agreement and royalty schedule each vendor submits. Corporate and legal would perform a similar exercise.

If the mathematical exercise were actually performed and all the cells in the matrix had assigned values, a definitive solution would be easily achieved. However, when the cells throughout the matrix are only sparsely populated, the exercise yields a probable outcome rather than a definitive one.
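Purely as an illustration of the exercise Mr. Cheng describes (the weights and qualification data below are invented, not Kilopass figures), one could score each populated cell of the sparse matrix and sum per vendor; empty cells contribute nothing, which is why a sparsely populated matrix yields only a probable, rather than definitive, answer.

    # Figure-of-merit weights per qualification status (invented for illustration).
    STATUS_SCORE = {"three-lot": 1.0, "two-lot": 0.7, "one-lot": 0.5,
                    "first-silicon": 0.3, "not-taped-out": 0.0}

    # Sparse matrix: only the (vendor, voltage, foundry, process) cells that are actually known.
    cells = {
        ("A", "1.8V", "TSMC", "28HP"): "three-lot",
        ("A", "1.8V", "UMC", "28HP"): "three-lot",
        ("A", "1.8V", "GLOBALFOUNDRIES", "28HP"): "three-lot",
        ("A", "1.8V", "TSMC", "28LP"): "first-silicon",
        ("B", "1.8V", "TSMC", "28HP"): "one-lot",
        ("B", "1.8V", "SMIC", "28HP"): "first-silicon",
    }

    def figure_of_merit(vendor):
        """Sum the scores of every populated cell belonging to one vendor."""
        return sum(STATUS_SCORE[status]
                   for (v, _volt, _fab, _proc), status in cells.items() if v == vendor)

    for vendor in ("A", "B"):
        known = sum(1 for key in cells if key[0] == vendor)
        print(vendor, figure_of_merit(vendor), "from", known, "known cells")
    # Vendor A scores higher here, but with most cells unknown the comparison is only probable.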

Blog Review – Mon. April 28 2014

Monday, April 28th, 2014

The first PHYs; compute shaders; Accellera Day online; Security and privacy
By Caroline Hayes, Senior Editor.

Mentor Graphics’ Dennis Brophy initially questioned the need for an online Accellera Day, but soon retracted and is even offering to keep blog visitors informed as more are posted.

If you are interested in compute shaders, Sylvek at ARM explains clearly and concisely what they are and how to add one to an application.

Another informative blog is the first of three from Corrie Callenbach, Cadence, directing us to Kevin Yee’s video: Take command of MIPI PHYs. The presenter takes us through the first of three PHY specifications introduced by MIPI.

Intel’s Mayura Garg points blog visitors to Michael Fey’s presentation at ISS2014. Fey, Executive Vice President, General Manager of Corporate Products and Intel Security CTO, focuses on Security and Privacy in the Information Age – the blog’s own ‘scary video’.

Cadence to Acquire Jasper Design Automation

Wednesday, April 23rd, 2014

By Hamilton Carter, Verification-Low Power Editor

Earlier this week, Cadence announced that it has entered into a definitive agreement to purchase Jasper Design Automation. The two companies are both focused on SoC verification, and they cited their shared customer base, along with the fact that verification now consumes over 70% of the budget on most SoC development projects, as reasons for the transaction. Cadence will pay $170 million for the privately held Jasper, which currently shows $24 million in cash and cash equivalents on its balance sheet. Cadence will fund the purchase using cash on hand and an existing revolving line of credit.

Jasper is a market leader in the growing formal analysis verification sector.  It has been very successful in distributing verification solutions, termed Verification Apps, built on top of its JasperGold formal engine.  Cadence hopes to expand the formal analysis sector more quickly by leveraging the combination of Jasper’s expertise and Cadence’s existing worldwide sales team.

Both companies are excited about the potential for integrating Jasper’s tools with the metric driven verification flow provided by Cadence.  Charlie Huang, senior vice president of the System & Verification Group and Worldwide Field Operations at Cadence, said about the acquisition, “Jasper’s formal analysis solutions are used by customers today alongside Cadence’s metric-driven verification flow to form a broad verification solution. We look forward to welcoming Jasper’s strong formal development expertise and skilled team to Cadence.”

Kathryn Kranen, president and CEO of Jasper,  commented, “The verification technologies, when combined, will benefit customers through a comprehensive metric-driven verification approach that unites formal and dynamic techniques, realizing the strength of each and leveraging the integration between them.”

The acquisition is currently scheduled to close in the second quarter of fiscal 2014, subject to typical closing conditions and regulatory approvals.  There is no word yet as to when customers can expect a customer-ready integration of Jasper and Cadence technologies.

Blog Review – Mon. April 21 2014

Monday, April 21st, 2014

Post silicon preview; Apps to drive for; Motivate to educate; Battery warning; Break it up, bots. By Caroline Hayes, Senior Editor.

Gabe Moretti attended the Freescale Technology Forum and found the ARM Cortex-A57 Carbon Performance Analysis Kit (CPAK), which previews post-silicon performance, pre-silicon.

In a considered blog post, Joel Hoffmann, Intel, looks at the top four car apps and what they mean for system designers. He knows what he is talking about: he is preparing for the Automotive Suppliers: Collaborate or Die panel at Open Automotive 14 in Sweden next month.

How to get the next generation of EDA-focused students to commit is the topic of a short keynote at this year’s DAC by Rob Rutenbar, professor of Computer Science, University of Illinois. Richard Goering, Cadence, reports on progress so far with industry collaboration and looks ahead.

Consider managing power in SoCs above all else, urges Scott Seiden, Sonics, who sounds a little frustrated with his cell phone.

Michael Posner, Synopsys, revels in a good fight – between robots in the FIRST student robot design competition. Engaging and educational.

Internet of Things (IoT) and EDA

Tuesday, April 8th, 2014

Gabe Moretti, Contributing Editor

A number of companies contributed to this article, in particular Apache Design Solutions, ARM, Atrenta, Breker Verification Systems, Cadence, Cliosoft, Dassault Systèmes, Mentor Graphics, OneSpin Solutions, Oski Technologies, and Uniquify.

In his keynote speech at the recent CDNLive Silicon Valley 2014 conference, Lip-Bu Tan, Cadence CEO, cited mobility, cloud computing, and Internet of Things as three key growth drivers for the semiconductor industry. He cited industry studies that predict 50 billion devices by 2020.  Of those three, IoT is the latest area attracting much conversation.  Is EDA ready to support its growth?

The consensus is that in many respects EDA is ready to provide the tools required for IoT implementation.  David Flynn, an ARM Fellow, put it best: “For the most part, we believe EDA is ready for IoT.  Products for IoT are typically not designed on ‘bleeding-edge’ technology nodes, so implementation can benefit from all the years of development of multi-voltage design techniques applied to mature semiconductor processes.”

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systèmes, observed that, conversely, companies that will be designing devices for IoT may not be ready.  “Traditional EDA is certainly ready for the core design, verification, and implementation of the devices that will connect to the IoT.  Many of the devices that will connect to the IoT will not be the typical designs that are pushing Moore’s Law.  Many of the devices may be smaller, lower-performance devices that do not necessarily need the latest and greatest process technology.  To be cost effective at producing these devices, companies will rely heavily on IP in order to assemble devices quickly in order to meet consumer and market demands.  In fact, we may begin to see companies that traditionally have not been silicon developers getting into chip design. We will see an explosive growth in the IP ecosystem of companies producing IP to support these new devices.”

Vic Kulkarni, Senior VP and GM of Apache Design, Inc., put it as follows: “There is nothing ‘new or different’ about the functionality of EDA tools for IoT applications, and EDA tool providers have to think of this market opportunity from the perspective of mainstream users: newer licensing and pricing models for the ‘mass market’, i.e. low-cost and low-touch technical support, data and IP security, and the overall ROI.”

But IoT also requires new approaches to design and offers new challenges.  David Kelf, VP of Marketing at OneSpin Solutions, provided a picture of what a generalized IoT component architecture is likely to be.

Figure 1: Generalized IoT component architecture (courtesy of OneSpin Solutions)

He went on to state: “The included graphic shows an idealized projection of the main components in a general purpose IoT platform. At a minimum, this platform will include several analog blocks, a processor able to handle protocol stacks for wireless communication and the Internet Protocol (IP). It will need some sensor-required processing, an extremely effective power control solution, and possibly, another capability such as GPS or RFID and even a Long Term Evolution (LTE) 4G Baseband.”

Jin Zhang, Senior Director of Marketing at Oski Technologies, observed that “If we parse the definition of IoT, we can identify three key characteristics:

  1. IoT can sense and gather data automatically from the environment
  2. IoT can interact and communicate among themselves and the environment
  3. IoT can process all the data and perform the right action with or without human interaction

These imply that sensors of all kinds for temperature, light, movement and human vitals, fast, stable and extensive communication networks, light-speed processing power and massive data storage devices and centers will become the backbone of this infrastructure.

The realization of IoT relies on the semiconductor industry to create even larger and more complex SoC or Network-on-Chip devices to support all the capabilities. This, in turn, will drive the improvement and development of EDA tools to support the creation, verification and manufacturing of these devices, especially verification where too much time is spent on debugging the design.”

Power Management

IoT will require advanced power management, and EDA companies are addressing the problem.  Rob Aitken, also an ARM Fellow, said: “We see an opportunity for dedicated flows around near-threshold and low-voltage operation, especially in clock tree synthesis and hold time measurement. There’s also an opportunity for per-chip voltage delivery solutions that determine on a chip-by-chip basis what the ideal operation voltage should be and enable that voltage to be delivered via a regulator, ideally on-chip but possibly off-chip as well. The key is that existing EDA solutions can cope, but better designs can be obtained with improved tools.”

Kamran Shah, Director of Marketing for Embedded Software at Mentor Graphics, noted: “SoC suppliers are investing heavily in introducing power saving features including Dynamic Voltage Frequency Scaling (DVFS), hibernate power saving modes, and peripheral clock gating techniques. Early in the design phase, it’s now possible to use transaction-level modeling (TLM) tools such as Mentor Graphics Vista to iteratively evaluate the impact of hardware and software partitioning, bus implementations, memory control management, and hardware accelerators in order to optimize for power consumption.”

Figure 2: IoT Power Analysis (courtesy of Mentor Graphics)

Bernard Murphy, Chief Technology Officer at Atrenta, pointed out that: “Getting to ultra-low power is going to require a lot of dark silicon, and that will require careful scenario modeling to know when functions can be turned off. I think this is going to drive a need for software-based system power modeling, whether in virtual models, TLM (transaction-level modeling), or emulation. Optimization will also create demand for power sensitivity analysis – which signals / registers most affect power and when. Squeezing out picoAmps will become as common as squeezing out microns, which will stimulate further automation to optimize register and memory gating.”

Verification and IP

Verifying either one component or a subset of connected components will be more challenging.  Components in general will have to be designed so that they can be “fixed” remotely, which means either fixing a real bug or downloading an upgrade.  Intel is already marketing such a solution, which is not restricted to IoT applications.  Also, networks will be heterogeneous by design, thus significantly complicating verification.

Ranjit Adhikary, Director of Marketing at Cliosoft, noted that “From a SoC designer’s perspective, “Internet of Things” means an increase in configurable mixed-signal designs. Since devices now must have a larger life span, they will need to have a software component associated with them that could be upgraded as the need arises over their life spans. Designs created will have a blend of analog, digital and RF components and designers will use tools from different EDA companies to develop different components of the design. The design flow will increasingly become more complex and the handshake between the digital and analog designers in the course of creating mixed-signal designs has to become better. The emphasis on mixed-signal verification will only increase to ensure all corner cases are caught early on in the design cycle.”

Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, has a similar perspective, but he is more pessimistic.  He noted that “Many IoT nodes will be located in hard-to-reach places, so replacement or repair will be highly unlikely. Some nodes will support software updates via the wireless network, but this is a risky proposition since there’s not much recourse if something goes wrong. A better approach is a bulletproof SoC whose hardware, software, and combination of the two have been thoroughly verified. This means that the SoC verification team must anticipate, and test for, every possible user scenario that could occur once the node is in operation.”

One solution, according to Mr. Anderson, is “automatic generation of C test cases from graph-based scenario models that capture the design intent and the verification space. These test cases are multi-threaded and multi-processor, running realistic user scenarios based on the functions that will be provided by the IoT nodes containing the SoC. These test cases communicate and synchronize with the UVM verification components (UVCs) in the testbench when data must be sent into the chip or sent out of the chip and compared with expected results.”
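The following toy sketch illustrates the general idea of graph-based scenario generation; it is not Breker’s actual technology, and the block names and graph are invented. A directed graph captures legal sequences of block-level operations, and random walks through it yield concurrent scenarios that can be assigned to different processors and emitted as test threads.

    import random

    # Invented scenario graph: nodes are SoC operations, edges are legal follow-on operations.
    GRAPH = {
        "camera_capture": ["dma_to_mem"],
        "dma_to_mem":     ["cpu_process", "crypto_encrypt"],
        "cpu_process":    ["write_to_ddr"],
        "crypto_encrypt": ["write_to_ddr"],
        "write_to_ddr":   [],          # terminal operation
    }
    ENTRY_POINTS = ["camera_capture"]

    def random_scenario(rng):
        """Walk the graph from an entry point to a terminal node."""
        node, path = rng.choice(ENTRY_POINTS), []
        while True:
            path.append(node)
            successors = GRAPH[node]
            if not successors:
                return path
            node = rng.choice(successors)

    def generate_testcase(num_threads=3, seed=1):
        """One scenario per thread; the threads would run concurrently on different processors."""
        rng = random.Random(seed)
        return {f"cpu{t}": random_scenario(rng) for t in range(num_threads)}

    for cpu, ops in generate_testcase().items():
        print(cpu, "->", " -> ".join(ops))
    # Concurrent threads can hit dma_to_mem and write_to_ddr at the same time,
    # stressing interactions that single-threaded, hand-written tests rarely reach.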

Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify, noted that “Connecting the unconnected is no small challenge and requires complex and highly sophisticated SoCs. Yet, at the same time, unit costs must be small so that high volumes can be achieved. Arguably, the most critical IP for these SoCs to operate correctly is the DDR memory subsystem. In fact, it is ubiquitous in SoCs –– where there’s a CPU and the need for more system performance, there’s a memory interface. As a result, it needs to be fast, low power and small to keep costs low.  The SoC’s processors spend the majority of cycles reading and writing to DDR memory. This means that all of the components, including the DDR controller, PHY and I/O, need to work flawlessly as does the external DRAM memory device(s). If there’s a problem with the DDR memory subsystem, such as jitter, data/clock skew, setup/hold time or complicated physical implementation issues, the IoT product may work intermittently or not at all. Consequently, system yield and reliability are of utmost concern.”

He went on to say: “The topic may be the Internet of Things and EDA, but the big winners in the race for IoT market share will be providers of all kinds of IP. The IP content of SoC designs often reaches 70% or more, and SoCs are driving IoT, connecting the unconnected. The big three EDA vendors know this, which is why they have gobbled up some of the largest and best known IP providers over the last few years.”

Conclusion

Things that seem simple often turn out not to be.  Implementing IoT will not be simple because as the implementation goes forward, new and more complex opportunities will present themselves.

Vic Kulkarni said: “I believe that EDA solution providers have to go beyond their “comfort zone” of being hardware design tool providers and participate in the hierarchy of IoT above the “Devices” level, especially in the “Gateway” arena. There will be opportunities for providing big data analytics, security stack, efficient protocol standard between “Gateway” and “Network”, embedded software and so on. We also have to go beyond our traditional customer base to end-market OEMs.”

Frank Schirrmeister, product marketing group director at Cadence, noted that “The value chain for the Internet of Things consists not only of the devices that create data. The IoT also includes the hubs that collect data and upload data to the cloud. Finally, the value chain includes the cloud and the big data analytics it stores.  Wired/wireless communications glues all of these elements together.”

Blog Review – April 07 2014

Monday, April 7th, 2014

Interesting comments from PADS users are teasingly outlined by John McMillan, Mentor Graphics.

An impressed Dominic Pajak, ARM, relates hopes for the IoT, using ARM mbed to control an industrial tank that can be monitored through Google Glass.

The cultural gap between engineers is examined by Brian Fuller, Cadence, as he reviews DesignCon 2014 and group director Frank Schirrmeister’s call for a “new species” of system designer.

More IoT thoughts, this time a warning from Divya Naidu Kol, Intel, with ideas of how to welcome the IoT without losing control of our information.

The Wheels of Industry Roll On

Wednesday, March 19th, 2014

Apart from pretzels and weissbier, Embedded World in Nuremberg was distinguished by the car, the factory and the IoT. By Caroline Hayes, Senior Editor.

On reflection, it should be no surprise that automotive themes were all around the exhibition halls of Embedded World 2014 in Nuremberg, Germany. The country produces 44% of the cars and light trucks manufactured in western Europe (defined as Europe apart from Turkey and the former communist bloc).

Software
My Embedded World began with a breakfast meeting at Cadence, where Senior Director Frank Schirrmeister outlined how the classic EDA model is evolving into what has been termed System Design Enablement. This shift is largely due to the growing electronics content in vehicles, he explained, with the proliferation of CAN (Controller Area Network), LIN (Local Interconnect Network), MOST (Media Oriented Systems Transport), Ethernet and FlexRay protocols. “For the consumer, it is satellite navigation systems; it is under-the-hood with software and there is an automotive element to the IoT (Internet of Things),” he said.
Traditionally, he explained, Cadence’s domain was the SoC (System on Chip), augmented by the acquisitions last year of Tensilica, Cosmic Circuits and Evatronix. System Design Enablement is based around adding the software stack for applications: “We are broadening EDA to take the subsystem in the chip, and enable interaction between the hardware and software to broaden the system design environment.” Listing the company’s attributes for automotive, he ran down the list: Ethernet IP; virtual platforms, such as automated driver assist systems; FPGA platforms for prototyping; and mixed signal for under-the-hood. The last two are also used in IoT. For embedded control, noted Schirrmeister, virtualization needs to adapt, with a virtual hardware methodology to meet the complexities and variants of cars today. “We need more simulation than ever before – automotive is the second largest growth area in the USA, Europe and Japan.”

Industry
More rugged modes of transport were occupying Daniel Piper, Senior Marketing Manager, EMEA, Kontron. The company was highlighting embedded boards based on the Intel Atom E3800 and COM Express modules based on the Intel Atom E3800 and Intel Celeron N2900/J1900.
There was also the first SMARC (Smart Mobility ARChitecture) CoM (Computer on Module) – the first design from the German company based on an x86 Atom processor. The SMARC-sXBTi CoMs are scalable so that the same look and feel for software development can be shared in the industrial space, said Piper, reaching from the automated shop floor to the connected tablet. For this, the benefits of SMARC, its low power consumption and small footprint, are exploited, says Piper. Power consumption is 5 to 10W and the low-profile, mini-computer form factor measures 82x50mm. “As well as long-term availability [an average of seven years], the software services, low profile and low power consumption suit the mobile applications and interfaces, such as eye camera interfaces, that are adapted from the ARM world but on an x86.” This, he says, extends SMARC into x86 modules and markets, using the same form factor, which brings ease of use for the end customer.

Industrial virtualization
Perhaps the most targeted announcement of Embedded World came from Intel, which announced virtualization platforms for industrial systems, with software and tools to create industrial embedded systems.
Jim Robinson, General Manager, Segments and Broad Market Division, Internet of Things Solutions Group, Intel, echoed Piper’s sentiment, that the industrial sector is looking for new, innovative ways to connect and build: “Bringing together what have typically been different sub-systems into a single computing platform, makes it easier – and more affordable – for OEMs, machine builders and system integrators to deliver consolidated, virtualized platforms,” he said.
The Intel Industrial Solutions System Consolidation Series bundles together an embedded computer with an Intel Core i7 processor and a pre-integrated virtualization software stack, which includes Wind River Hypervisor, pre-configured to support three partitions, running Wind River VxWorks for real-time applications and Wind River Linux 5.0 for non-real-time applications. Robinson explained that the role of virtualization is to partition important workloads using multiple virtual machines. With it, developers can consolidate multiple discrete sub-systems onto a single device, reducing costs, increasing flexibility and reducing factory space, he said.
Intel System Studio software tools were also announced. They are designed for the Industrial Solutions System Consolidation Series to build and analyse industrial, embedded systems.

The IoT network
The software and tool suite are part of the company’s Developer Program for Internet of Things. This was another phrase often heard, and seen, around the halls in Nuremberg.
Dan Demers, Director of Marketing, congatec, was enthusiastic about IoT for its role in bridging mobile connectivity into the industrial space. “The longevity of industrial applications, and the IoT, from mobile devices into the industrial area demands customised solutions. Markets are opening up, because silicon innovations are bringing the power consumption down and the performance levels up. Historically, we could not put an x86 processor on such a board – now we can. Off-the-shelf is the easiest option, but not all off-the-shelf offerings have the I/O requirements, so the next best thing is a CoM.” There are, he says, many advantages to employing a CoM, such as reduced time to market, with around six to nine months of development time.
ARM took a different approach to the IoT. Its IP dominates phones today, with 20 billion ARM-based mobile phone chips shipped to date, but the IoT market could be 100 billion connected devices. Chris Turner, Senior Product Marketing Manager, Processor Division, ARM, explained to me about the software layer of the IoT and security. “The ARM ecosystem demonstrates the underlying code – the hypervisors – the protocol stack and the partnerships within the ecosystems. It is a long product plan to build a hypervisor for an architecture and this is where the ecosystem co-operation comes into play.” The ARM ecosystem includes more than 1,000 partners, developers and engineers developing ARM-based solutions and providing support.
Still with the IoT, Wind River had a creative booth, which demonstrated device connectivity in many forms, supported by its Intelligent Device Platform. As well as IoT protocol support, the scalable, secure development environment supports WiFi, Bluetooth and ZigBee.
It can be customized, which reduces development time. Provisioning and device management are handled via a web-based tool. Security in any connected network is critical, from public transport, where signaling has to be accurate and timely, to utilities, where companies not only want to prevent fraud or theft but also to protect against outages. A customized secure remote management feature provides encrypted communication between a device and cloud-based management. Lua and Java programming environments are supported to allow engineers to build gateway applications, connect, and send and receive data from the cloud.

Although there were several themes at this year’s exhibition and conference, the healthy competition between processors of choice, and the co-operation between software, chip and board companies to integrate and innovate, was encouraging, both in terms of the economy and in terms of the potential for the diverse, engaging design initiatives of tomorrow.

Blog Review – March 17 2014

Monday, March 17th, 2014

Robotic vehicles; CDNLive’s system focus; Rubik challenge; Design data stats; vManager renovation. By Caroline Hayes, Senior Editor

Start with a Tamiya tracked vehicle chassis kit, add a gearbox and a PCB loaded with a Spartan-6 FPGA, present it to a boy-racer, and you have the contents of sleibso’s blog at Cadence. He takes a look at the Logitraxx tracked robotic vehicle on Kickstarter and revs up an explanatory video.

Brian Fuller reports on Cadence’s CDNLive event and Imagination President Krishna Yarlagadda’s keynote vision for a system-driven approach to power, area, cost, software, and security in IoT design.

David Gilday and Mike Dobson broke a Guinness World Record in the UK at the Big Bang Fair in Birmingham, with the CubeStormer 3 robot solving a Rubik’s cube in 3.25s – smashing the existing 5.27s record, set by the CubeStormer’s predecessor. Lorikate anticipates the event in the ARM Community blog, with videos of past escapades by the duo.

Sage advice from Gabe Moretti, on the news that DVCon Europe has been announced. He advocates new ideas and opportunities for growth and suggests the conference spreads its wings further afield.

Interesting stats from Graham Bell, Real Intent, with the survey data about verification data adoption, clock domain bugs, and other design issue questions.

Hamilton Carter commemorates 10 years since the introduction of vManager and looks forward to the latest version, even if it is unimaginatively named. His blog delves into the new tool’s features.

System Level Power Budgeting

Wednesday, March 12th, 2014

Gabe Moretti, Contributing Editor

I would like to start by thanking Vic Kulkarni, VP and GM at Apache Design, a wholly owned subsidiary of ANSYS; Bernard Murphy, Chief Technology Officer at Atrenta; and Steve Brown, Product Marketing Director at Cadence, for contributing to this article.

Steve began by noting that defining a system-level power budget for an SoC starts from chip package selection and the power supply or battery life parameters. This sets the power/heat constraint for the design, and it is selected while balancing the functionality of the device, the performance of the design, and the area of the logic and on-chip memories.

Unfortunately, as Vic points out, semiconductor design engineers must meet power specification thresholds, or power budgets, that are dictated by the electronic system vendors to whom they sell their products.  Bernard wrote that accurate pre-implementation IP power estimation is almost always required. Since almost all design today is IP-based, accurate estimation for IPs is half the battle. Today you can get power estimates for RTL with accuracy within 15% of silicon, as long as you are modeling representative loads.

With the insatiable demand for handling multiple scenarios (i.e., large FSDB files) like GPS, searches, music, extreme gaming, streaming video, high download data rates and more on mobile devices, the dynamic power consumed by SoCs continues to rise in spite of strides made in reducing static power consumption at advanced technology nodes. As shown in Figure 1, end-user demand for higher-performance mobile devices with longer battery life or a higher thermal limit is expanding the “power gap” between power budgets and estimated power consumption levels.

A typical chip power budget for a mobile application could be as follows (ref.: mobile companies): an active power budget of 700mW @ 100Mbps for download with MIMO and 100mW in IDLE mode, and leakage power below 5mW with all power domains off.

Accurate power analysis and optimization tools must be employed during all the design phases from system level, RTL-to-gate level sign-off to model and analyze power consumption levels and provide methodologies to meet power budgets.

Figure 1: Skyrocketing performance vs. limited battery and thermal limit (ref. Samsung, Apache Tech Forum)

The challenge is to find ways to abstract with reasonable accuracy for different types of IP and different loads. Reasonable methods to parameterize power have been found for single and multiple processor systems, but not for more general heterogeneous systems. Absent better models, most methods used today are based on quite simple lookup tables, representing average consumption. Si2 is doing work in defining standards in this area.

Vic is convinced that careful power budgeting at a high level also enables design of the power delivery network in the downstream design flow. Power delivery with reliable and consistent power to all components of ICs and electronic systems while meeting power budgets is known as power delivery integrity.  Power delivery integrity is analogous to the way in which an electric power grid operator ensures that electricity is delivered to end users reliably, consistently and in adequate amounts while minimizing loss in the transmission network.  ICs and electronic systems designed with inadequate power delivery integrity may experience large fluctuations in supply voltage and operating power that can cause system failure. For example, these fluctuations particularly impact ICs used in mobile handsets and high performance computers, which are more sensitive to variations in supply voltage and power.  Ensuring power delivery integrity requires accurate modeling of multiple individual components, which are designed by different engineering teams, as well as comprehensive analysis of the interactions between these components.

Methods To Model System Behavior With Power

At present engineers have a few approaches at their disposal.  Vic points out that the designer must translate the power requirements into block-level power budgets and come up with specific metrics:

dynamic power estimation per operating power mode; leakage power and sleep power estimation at RTL; power distribution at a glance; identification of high-power-consuming areas; power domains; frequency-scaling feasibility for each IP; retention flop design trade-offs; power-delivery network planning; required current consumption per voltage source; and so on.

Bernard thinks that spreadsheet modeling is probably the most common approach. The spreadsheet captures typical application use-cases, broken down into IP activities determined from application simulations/emulations. It also represents, for each IP in the system, a power lookup table or set of curves. Power estimation simply sums across IP values in a selected use-case. An advantage is no limitation in complexity – you can model a full smart phone including battery, RF and so on. Disadvantages are the need to understand an accurate set of use-cases ahead of deployment, and the abstraction problem mentioned above.  But Steve points out that these spreadsheets are difficult to create and maintain, and fall short for identifying the outlier conditions that are critical to the end user’s experience.
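A minimal sketch of the spreadsheet approach Bernard describes might look like the following; the IP names, modes, and milliwatt figures are invented, and a real model would use lookup tables or curves calibrated against implementation data.

    # Power lookup table: average power (mW) per IP per operating mode (invented numbers).
    POWER_MW = {
        "cpu":   {"active": 250.0, "idle": 15.0, "off": 0.5},
        "modem": {"active": 180.0, "idle": 10.0, "off": 0.2},
        "gpu":   {"active": 300.0, "idle": 20.0, "off": 0.5},
    }

    # A use-case is a duty-cycle breakdown per IP: fraction of time spent in each mode.
    VIDEO_PLAYBACK = {
        "cpu":   {"active": 0.30, "idle": 0.70},
        "modem": {"off": 1.00},
        "gpu":   {"active": 0.60, "idle": 0.40},
    }

    def estimate_power_mw(use_case):
        """Sum, across IPs, the duty-cycle-weighted average power for the selected use-case."""
        return sum(POWER_MW[ip][mode] * duty
                   for ip, modes in use_case.items()
                   for mode, duty in modes.items())

    BUDGET_MW = 700.0                      # chip-level active power budget
    estimate = estimate_power_mw(VIDEO_PLAYBACK)
    print(f"estimate {estimate:.0f} mW vs budget {BUDGET_MW:.0f} mW")
    # A 'power gap' appears as soon as the estimate exceeds the budget for any key use-case.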

Steve also points out that some companies are adapting virtual platforms to measure dynamic power, and improve hardware / software partitioning decisions. The main barrier to this solution remains creation of the virtual platform models, and then also adding the notion of power to the models. Reuse of IP enables reuse of existing models, but they still require effort to maintain and adapt power calculations for new process nodes.

Bernard has seen engineers run the full RTL against realistic software loads, dump activity for all (or a large number) of the nodes, and compute power based on the dump. An advantage is that they can skip the modeling step and still get an estimate as good as RTL modeling would give. Disadvantages include needing the full design (making it less useful for planning) and a significant slowdown in emulation when dumping all nodes, making it less feasible to run extensive application experiments.  Steve concurs: dynamic power analysis is a particularly useful technique, available in emulation and simulation. The emulator provides MHz performance, enabling analysis of many cycles, oftentimes with test driver software to focus on the most interesting use cases.
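For the activity-dump approach, the underlying calculation is the usual switching-power relation, roughly the sum over dumped nodes of ½·α·C·V²·f; a minimal sketch with invented node capacitances and toggle counts follows.

    # Dynamic power from a node activity dump; all numbers are invented for illustration.
    VDD = 0.9          # supply voltage, volts
    FREQ = 1.0e9       # clock frequency, Hz
    CYCLES = 100_000   # cycles captured in the activity dump

    # Per-node effective capacitance (farads) and toggle count taken from the dump.
    nodes = {
        "alu_out[0]": (2.0e-15, 42_000),
        "alu_out[1]": (2.0e-15, 35_500),
        "fifo_wr_en": (1.5e-15, 8_200),
    }

    def dynamic_power_watts():
        total = 0.0
        for cap, toggles in nodes.values():
            alpha = toggles / CYCLES                       # per-cycle toggle (activity) rate
            total += 0.5 * alpha * cap * VDD * VDD * FREQ  # 1/2 * alpha * C * V^2 * f
        return total

    print(f"{dynamic_power_watts() * 1e6:.2f} uW over the dumped nodes")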

Bernard is of the opinion that while C/C++/SystemC modeling seems an obvious target, it also suffers from the abstraction problem. Steve thinks that a likely architecture in this scenario has the virtual platform containing the processing subsystem and memory subsystem, executing at 100s of MHz, while the emulator contains the rest of the SoC and a replica of the memory subsystem, executing at higher speed and providing cycle-accurate power analysis and functional debugging.

Again, Bernard wants to underscore, progress has been made for specialized designs, such as single and multiple processors, but these approaches have little relevance for more common heterogeneous systems. Perhaps Si2’s work in this area will help.

Are Best Practices Resulting in a Verification Gap?

Tuesday, March 4th, 2014

By John Blyler, Chief Content Officer

A panel of experts from Cadence, Mentor, NXP, Synopsys and Xilinx debate the reality and causes of the apparently widening verification gap in chip design.

Does a verification gap exist in the design of complex system-on-chips (SoCs)? This is the focus of a panel of experts at DVCon 2014, which will include Janick Bergeron, Fellow at Synopsys; Jim Caravella, VP of Engineering at NXP; Harry Foster, Chief Verification Technologist at Mentor Graphics; John Goodenough, VP at ARM; Bill Grundmann, Fellow at Xilinx; and Mike Stellfox, Fellow at Cadence. JL Gray, a Senior Architect at Cadence, organized the panel. What follows are position statements from the panelists in preparation for this discussion. – JB

Panel Description: “Did We Create the Verification Gap?”

According to industry experts, the “Verification Gap” between what we need to do and what we’re actually able to do to verify large designs is growing worse each year. According to these experts, we must do our best to improve our verification methods and tools before our entire project schedule is taken up by verification tasks.

But what if the Verification Gap is actually occurring as a result of the continued adoption of industry-standard methods? Are we blindly following industry best practices without keeping in mind that the actual point of our efforts is to create a product with as few bugs as possible, as opposed to simply trying to find as many bugs as we can?

Panelists will explore how verification teams interact with broader project teams and examine the characteristics of a typical verification effort, including the wall between design and verification, verification involvement (or lack thereof) in the design and architecture phase, and reliance on constrained random in absence of robust planning and prioritization to determine the reasons behind today’s Verification Gap.

Panelist Responses:

Grundmann: Here are my key points:

  • Methodologies and tools for constructing and implementing hardware have dramatically improved, while verification processes appear not to have kept pace.  As hardware construction is simplified, there is a trend to have fewer resources building hardware but the same or more resources performing verification.  Design teams with a 3X ratio of verification to hardware design resources are not unrealistic, and that ratio is trending higher.

  • As it gets easier to build hardware, hardware verification is approaching software-development levels of resourcing in a project.
  • As of now, it is very easy to quickly construct various hardware “crap”, but it is very hard to prove that any of it is what you want.
  • It is possible that we can never be thoroughly verification “clean” without delivering some version of the product with a reasonable quality level of verification.  This may mean we have to expect to provide a means to make in-field changes to the products through software-like patches.

Stellfox: Most chips are developed today based on highly configurable modular IP cores with many embedded CPUs and a large amount of embedded SW content, and I think a big part of the “verification gap” is due to the fact that most development flows have not been optimized with this in mind.  To address the verification gap, design and verification teams need to focus more on the following:

  • IP teams need to develop and deliver the IP in a way that it is more optimized for SoC HW and SW integration.  While the IP cores need to be high quality, it is not sufficient to only deliver high quality IP since much of the work today is spent in integrating the IP and enabling earlier SW bring-up and validation.

  • There needs to be more focus on integrating and verifying the SW modules with HW blocks early and often, starting at the IP level to Subsystem to SoC.  After all, the SW APIs largely determine how the HW can be used in a given application, so time might be wasted “over-verifying” designs for use cases which may not be applicable in a specific product.
  • Much of the work in developing a chip is about integrating design IPs, VIPs, and SW, but most companies do not have a systematic, automated approach with supporting infrastructure for this type of development work.

Foster: No, the industry as a whole did not create the verification challenge.  To say so reflects a lack of understanding of the problem.  While design grows at a Moore’s Law rate, verification grows at a double-exponential rate. Compounded with the increased complexity due to Moore’s Law are the additional dimensions of hardware-software interaction validation, complex power management schemes, and other physical effects that now directly affect functional correctness.  Emerging solutions, such as constrained-random, formal property checking, and emulation (and so on) didn’t emerge because they were just cool ideas.  They emerged to address specific problems. Many design teams are looking for a single hammer that they can use to address today’s verification challenges. Unfortunately, we are dealing with an NP-hard problem, which means that there will never be a single solution that solves all classes of problems.

Historically, the industry has always addressed complexity through abstraction (e.g., the move from transistors to gates, the move from gates to RTL, etc.). Strategically, the industry will be forced to move up in abstraction to address today’s challenges. However, there is still a lot of work to be done (in terms of research and tool development) to make this shift in design and verification a reality.

Caravella: The verification gap is a broad topic so I’m not exactly sure what you’re looking for, but here’s a good guess.

Verification is only a portion of the total effort, resources and investment required to develop products and release them to market. Balancing resources and budget for a product must be done across much more than just verification. Bringing a chip to market (and hence revenue) requires design, validation, test, DFT, qualification and yield optimization. Given this and the insatiable need for more pre-tape-out verification, what is the best balance? I would say that the chip does not need to be perfect when it comes to verification/bugs; it must be “good enough”. Spending 2x the resources/budget to identify bugs that do not impact the system or the customer is a waste of resources. Those resources could be better spent elsewhere in the product development food chain, or they could be used to do more products and grow the business. The main challenge is how best to quantify the risk to maximize the ROI of any verification effort.

Jasper: [Editor’s Note: Although not part of the panel, Jasper provided an additional perspective on the verification gap.]

  • Customers are realizing that UVM is very “heavy” for IP verification.  Specifically, writing and debugging a UVM testbench for block- and unit-level IP is a very time-consuming task in and of itself, plus it incurs an ongoing overhead in regressions when the UVCs are effectively “turned off” and/or simply used as passive monitors for system-level verification.  Increasingly, we see customers ditching the low-level UVM testbench and exhaustively verifying their IPs with formal-based techniques.  In this way, the users can focus on system integration verification and not have to deal with bugs that should have been caught much sooner.

  • Speaking of system-level verification: we see customers applying formal at this level as well.  In addition to now familiar SoC connectivity and register validation flows, we see formal replacing simulation in architectural design and analysis.  In short, even without any RTL or SystemC, customers can use an architectural spec to feed into formal under-the-hood to exhaustively verify that a given architecture or protocol is correct by construction, won’t deadlock, etc.
  • The need for sharing coverage data between multiple vendors’ tool chains is increasing, yet companies appear to be ignoring the UCIS interoperability API.  This is creating a big gap in customers’ verification closure processes because it’s a challenge to compare verification metrics across multi-vendor flows, and they are none too happy about it.