Chip Design Magazine



Posts Tagged ‘Cadence’


Blog Review – Mon. June 02 2014

Monday, June 2nd, 2014

In case you didn’t know, DAC is upon us, and ARM has some sightseeing tips – within the confines of the show space. Electric vehicles are being taken seriously in Europe and North America, Dassault Systemes has some manufacturing-design tips, and Sonics looks back over 20 years of IP integration. By Caroline Hayes, Senior Editor.

Electric vehicles – it’s an easy sell for John Day, Mentor Graphics, but his blog has some interesting examples from Sweden of electric transport and infrastructure ideas.

Thanks are due to Leah Schuth, ARM, who can save you some shoe leather if you are in San Francisco this week. She has been very considerate and lumped together all the best bits to see at this week’s DAC. OK, the list may be a bit ARM-centric, but if you want IoT, wearable electronics and energy management, you know where to go.

We all want innovation but can the industry afford it? Hoping to instill best practice, Eric, Dassault Systemes, writes an interesting, detailed piece on design-manufacturing collaboration for a harmonious development cycle.

A tutorial on your PC is a great way to learn – in this case, the low power advantage of LPDDR4 over earlier LPDDR memory. Corrie Callenbach brings this whiteboard tutorial by Kishore Kasamsetty to our attention in Whiteboard Wednesdays—Trends in the Mobile Memory World.

A review of IP integration is presented by Drew Wingard, Sonics, who asks what has been learned over the last two decades, what matters and why.

Deeper Dive – Is IP reuse good or bad?

Friday, May 30th, 2014

To buy or to reuse, that is the question. Caroline Hayes, Senior Editor, asked four industry experts – Carsten Elgert (CE), Product Marketing Director, IPG (IP Group), Cadence; Tom Feist (TF), Senior Marketing Director, Design Methodology, Xilinx; Dave Tokic (DT), Senior Director, Partner Ecosystems and Alliances, Xilinx; and Warren Savage (WS), President and CEO, IPextreme – about the pros and cons of IP reuse versus third-party IP.

What are the advantages and disadvantages when integrating or re-using existing IP?

WS: This is sort of analogous to asking what are the advantages and disadvantages of living a healthy lifestyle? The disadvantages are few, the advantages myriad. But in essence it’s all about practicality. If you can reuse a piece of technology, you don’t have to spend money developing something different, which includes a huge cost of verification. Today’s chips are simply too large to functionally verify every gate. Part of every chip verification strategy assumes that pre-existing IP was already verified during its development; if it is silicon-proven, that further decreases the risk of latent defects being discovered. The only reason not to reuse an IP is that the IP itself is lacking in some way that makes the case for creating a new IP instead.

TF: Improved productivity. Reuse can have disadvantages when using older IP on new technologies; it is not always possible to target the newer features of a device with old IP.

DT: IP reuse is all about improving productivity and can significantly shrink design time, especially with configurable IP. Challenges arise when the IP itself needs to be modified from the original, which then requires additional design and verification time. Verification in general can be more challenging, as most IP is verified in isolation and not in the context of the system. For example, if the IP is being used in a way the provider didn’t “think of” in their verification process, it may have bugs that are discovered during the integration verification phase; those then have to be traced back to determine which IP has the issue, and that IP must be corrected and re-verified.

CE: The benefits of using your own in-house IP are that you know what the IP does and can use it again – and because it is not available as third-party IP, it provides differentiation. The disadvantage is that it is rarely documented well enough to be used in different departments. The same engineers know what they are getting when they reuse their own IP, but properly documenting it so that a neighboring department can build a product with it can be time-consuming. It is also the case that unwanted behavior is not verified. However, it is cheaper and it works.

What are the advantages and disadvantages of using third party IP?

WS: The advantages of using third-party IP are usually related to that company being a domain expert in a certain field. By using IP from that company, you are in fact licensing expertise from it and putting that expertise to use in an effective way. Think of it like going to dinner at a 4-star Michelin-rated restaurant: the ingredients may be ordinary, but how they are assembled is far more exceptional than anything the ordinary person can achieve on their own.

TF: Xilinx cannot cover all areas. Having a third party ecosystem allows us to increase the reach to customers. We have qualification processes in place to ensure the quality of the IP is up to the required standard.

DT: Xilinx and its ecosystem provide more than 600 cores across all markets, with over 130 IP providers in our Alliance Program. These partners not only provide more “fundamental” standards-based IP, but also a very rich set of domain- and application-specific IP that would be difficult for Xilinx to develop and support. This allows Xilinx technology to be adopted more easily and quickly in hundreds of applications. Some of the challenges come in terms of consistency of deliverables, quality, and business models. Xilinx has a mature partner qualification process and also works with partners to expose IP quality metrics when we promote a partner IP product, which helps customers make smarter decisions when choosing a provider or IP core.

CE: Third-party IP means there is no rewriting. Without it, it could take hundreds of man-years to develop a function that is not a major selling point for a design. Third-party IP is compatible, and bug-free or nearly so. Developing such a function yourself is like spending hundreds of hours redesigning the steering wheel of a car – it is not a differentiating selling point of the vehicle.

Is third party IP a justifiable risk in terms of cost and/or compatibility?

DT: The third-party IP ecosystem for ASIC and programmable technologies has been around for decades. There are many respected, high-quality providers out there, from smaller specialized IP core partners such as Xylon, OmniTek, Northwest Logic, and PLDA, to name a few, up to industry giants like ARM and Synopsys generating hundreds of millions of dollars in annual revenue. But ultimately the customer is responsible for determining the system, cost, and schedule requirements, evaluating IP options, and making the “build vs. buy” decision.

CE: I would reformulate that question: can [the industry] live without IP? Then, the risk is justifiable.

How important are industry standards in IP integration? What else would you like to see?

TF: IP standards are very important. For example, Xilinx is on the IEEE P1735 working group for IP security. This is very important to protect customer, third-party and Xilinx IP throughout flows that may contain third-party EDA tools, while still allowing everyone to interoperate on the IP. We hope to see the 2.0 revision of this standard ratified this year so all tool vendors and Xilinx can adopt it and make IP truly portable, yet protected.

DT: Another example is AMBA AXI4, where Xilinx worked closely with ARM to define this high-speed interconnect standard, optimized for integrating IP on both ASIC and programmable logic platforms.

WS: Today, not so much. There has been considerable discussion on this topic for the last 15 years, and various industry initiatives have come and gone over the years. The most successful one to date has been IP-XACT. There is a massive level of IP reuse today, and the lack of standards has not slowed it. The way the industry is handling this problem is through the deployment of pre-integrated subsystems that include both a collection of IP blocks and the embedded software that drives them. I think that within another five years, the idea of “Lego-like” construction tools will die, as they do nothing to solve the verification problem associated with such constructions.

Have you any statistics you can share on the value or TAM (Total Available Market) for EDA IP?

WS: I assume you mean IP, which I like to point out is distinct from EDA. True, many EDA players are adopting an IP strategy, but that is primarily because the growth in IP and the stagnation of the EDA markets are forcing those players to find new growth areas. Sustained double-digit growth is hard to ignore.

To the TAM (total available market) question, a lot of market research says the market is around $2 billion today. I have long postulated that the real IP market is at least twice the size of the stated market, sort of like the “dark matter” theories in astrophysics. But even this ignores considerable amounts of patent licensing, embedded software licensing and such, which dwarf the $2 billion number.

TF: According to the EDAC, semiconductor IP revenue totaled $486 million in Q4 2013, a 4.2% increase compared to Q4 2012, and the four-quarter moving average increased 9.6%.

CE: For interface IP alone – excluding processor IP, no ARM – I can see the market growing 20% to $400 million [analyst IP Nest].

IP Integration to accelerate SoCs

Thursday, May 29th, 2014

As the accelerating SoC market inevitably means a faster rate of adoption, system-level designers are also faced with fragmenting markets, newly adopted standards, and multiple types of complex requirements in a system. The integration of IP (Intellectual Property) can be a boon, but there are caveats, as Caroline Hayes, Senior Editor, reports.

There are many types of IP in use today: processor, DSP, high-speed IP, clocking IP, even whole systems, plus a variety of high-speed I/O and infrastructure IP (such as the AXI4 interconnect). It is used to support protocols, interfaces, controllers and processors on the core.

Dave Tokic, Senior Director, Partner Ecosystems and Alliances, Xilinx, believes that IP is invaluable. “It just isn’t possible to create the systems being developed without substantial IP integration,” he says. He goes on to explain the appeal of IP. “More and more, designers are being asked to create increasingly complex designs much faster on programmable devices. As such, we’ve seen a tremendous growth in the number of available IP cores and their use and reuse on programmable devices. Typically, we see the use of a broad range of connectivity and high-speed I/O such as Ethernet, PCI Express, SDI, MIPI, and HDMI; general-purpose and specialized processors, from compact Xilinx MicroBlaze microcontrollers and high-speed DSP to the powerful multi-core ARM A9 application processors we’ve integrated into the Zynq All Programmable SoC; optimized DMA and memory controllers, all the way up to the highest performance Hybrid Memory Cube controllers; application-specific IP, from compression such as HEVC and JPEG-2000 to high-performance image and video processing pipeline IP; and a broad range of clocking and infrastructure IP, such as the AMBA AXI4 interconnect.”

For Carsten Elgert, Product Marketing Director, IPG (IP Group), Cadence, this diversity in system design is why the IP market is growing – according to the EDAC, semiconductor IP revenue totaled $486 million in Q4 2013, a 4.2% increase compared to Q4 2012.

Tom Feist, Senior Marketing Director, Design Methodology, Xilinx, summarizes it thus: “As geometries shrink, typically parts and designs get larger and more complex yet design teams do not. This means higher productivity is required to produce larger designs. A key factor to address this is IP reuse.”

Connections to an outside DDR controller, flash memory controller, display output or other display standards, and communications protocols like USB are all defined, and the designer has to meet the standards of the protocol committees. Elgert says: “When designing an SoC, there is no time, and [the designer] usually does not have the knowledge to design the standard protocol by himself – the end product will be sold because of a unique functionality, not because of an excellent USB protocol.”
It is, he argues, far easier to use IP blocks as components, buying a peripheral I²C bus or UART protocol rather than building 20k gates. The availability of IP helps – there is interface IP, communications IP, microprocessor IP, and IP from third-party vendors such as EDA companies and ARM, says Elgert.
He points out that IP is bought to ensure code compatibility, and ranges from high-end processors, such as ARM, down to low-end 8bit processors in SoCs with fewer than 3k gates. There is also EDA IP, and there are legacy processors. “And do not forget analog IP,” counsels Elgert. “The SoC analog or mixed-signal portion needs PLLs and sensors using ADCs and DACs – this is very common if the SoC is analog.”
With so much to choose from, I asked what the ‘rules’ are, if there are any, for choosing IP. “Some of the key factors for successful IP use are the quality of the IP and ease of integration,” says Tokic, referring to the company’s functional verification and validation. “We work closely with our ecosystem of qualified IP providers to provide visibility to our customers on various IP quality metrics… [We] led the industry in automating IP integration with the IP Integrator technology built into the Vivado Design Suite, which shortens the design cycle up to 10x over more manual processes.”

Elgert breaks it down into technology nodes, posing the question for engineers to ask: “What is available on this process node that I’m forgetting? This can be an extensive shopping list for complex systems,” he says. He also advises narrowing the choice down to a preferred silicon vendor.
For example, he says, the PHY and the layer stack are relevant, being specific to each available technology node. “I would consider as many silicon technologies as possible,” is his advice.
Other items on Elgert’s list are technical performance and PPA (power, performance, area). “What is the power consumption, and how big is the silicon? Silicon still costs money. Compare PPA, which is often confidential information,” he says, “and look for responsiveness and clarity of data – EDA support is important here.”
To implement the IP, Feist observes that customers prefer to use industry standards and avoid proprietary bus/interconnect schemes. “By leveraging ARM’s AMBA AXI4 protocol, [customers] can leverage state-of-the-industry interconnect that enables point-to-point connections versus a bus structure. This accelerates their design development and the designs are more portable.” (The vast majority of internal bus structures use AMBA AXI, confirms Elgert.)

Xilinx relies on an ecosystem for IP integration support. Feist explains the role of the Vivado design suite. “In Vivado IPI, all parts of the design should be packaged as IP. If you are just using third-party or Xilinx IP, this is done for you. For customer IP, they need to package this up, and we provide a packager for this. To get the most out of IPI, the IP should be designed in a way that uses interfaces where possible, like the Xilinx IP. Using AXI allows customers to leverage IP that comes with Vivado – bus monitors, bandwidth checkers, JTAG stimulus, bus functional models, etc. – that should speed up development of the IP.”
By providing an extensible IP catalog, which can contain Xilinx, third-party, and intra-company IP that can be shared across a design team, division, or company, the suite leverages industry standards such as ARM’s AXI interconnect and IP-XACT metadata when packaging IP. With IP Integrator, Vivado facilitates rapid development of smarter systems, which can include embedded, DSP, video, analog, networking, interface, building block, and connectivity IP consolidated into a hierarchical system and IP-centric view.
Users can also package their own RTL, or C/C++/SystemC and MATLAB/Simulink algorithms into the IP catalog using High-Level Synthesis (HLS) or System Generator for DSP with the IP packager.
Xilinx describes IPI as providing automated IP subsystem generation to speed configuration. For Feist, IPI provides the second step, allowing a user to quickly integrate an IP created in HLS into a full design. “The first step happens earlier, in the HLS tool, which adds AXI interfaces automatically to the C code. With both these steps it is possible to connect an IP generated in HLS to an ARM core pretty quickly.”
Synopsys is also helping customers reduce configuration and debug time with a ‘plug and play’ IP accelerator program, to be announced at DAC next month. It will package its DesignWare IP together with hardware IP prototyping kits and software development kits to modify IP with a fast iteration flow and without the time penalties – that was all Dr Johannes Stahl, Director, Product Marketing, Virtual Prototyping, Synopsys, would reveal on a visit to the UK ahead of DAC.

Turning to EDA’s role in IP, Elgert says the same challenges occur as those met when designing an SoC. The GDSII or RTL has to match the RTL of an in-house design. The IP vendor has to support the IP flow, he says, but there is also a delivery burden on EDA companies to support customers and ensure that the various tool flows and verification platforms are enabled.
All agree that as geometries shrink, parts and designs get larger and more complex. This creates demands on small (even shrinking) design teams, and IP use and reuse can accelerate design and verification, and eventually time to market, with designs differentiated by functions specific to the application.

IP Integration: Not a Simple Operation

Tuesday, May 13th, 2014

Gabe Moretti, Contributing Editor

Although the IP industry is about 25 years old, it still presents problems typical of immature industries. Yet the use of IP in systems design is now so popular that one is hard pressed to find even one system design that does not use IP. My first reaction to the use of IP is “back to the future”.

For many years of my professional career I dealt with board-level design as well as chip design. Between 1970 and 1990, IP was sold as discrete components by companies such as Texas Instruments, National, and Fairchild, among many others. Their databooks described precisely how to integrate the part in a design. Although a defined standard for the contents did not exist, a de-facto standard was followed by all providers. Engineers, using the databook information, would choose the correct part for their needs, and the integration was reasonably straightforward. All signals could be analyzed in the lab, since pins and traces were available on the board.

Enter Complexity

As semiconductor fabrication progressed, the board became the chip, and the components are now IP modules. One would think that integration would remain reasonably straightforward. But this is not the case. Concerns about safeguarding intellectual property rights took over, and IP developers became reluctant to provide much information about the functioning of the module, afraid that its functionality would be duplicated and they would lose sales.

As the number of transistors on a chip increases, the complexity of porting a design from one process to the next also increases.

Figure 1. Projected number of transistors on a chip

Developers found that providing a hard macro (a module already placed and routed, ready for fabrication by the chosen foundry) was the best way to protect their intellectual property rights. But such a strategy is costly, because foundries cannot just validate every macro for free. The IP provider must be in a position to guarantee volume use by the foundry’s customers. Thus many IP modules must be synthesized. This means they must be verified.

Karthik Srinivasan, Corporate Application Engineer Manager, Analog Mixed Signal at Apache Design Solutions, an Ansys company, wrote that “SoC designs today integrate a significant number of IPs to accelerate their design times and to reduce the risks to their design closure. But the gap in expectations of where and how sign-off happens between the IP and SoC designers creates design issues that affect the final product’s performance and release. IP designers often validate their IPs in isolation, with expectations of near-ideal operating conditions. SoCs are verified and signed off with mostly abstracted or, in many cases, ‘black-box’ views of IPs. But as more and more high-speed and noise-sensitive IPs get placed next to each other or next to the core digital logic, failure conditions that were not considered emerge. This worsens when these IPs share one or more power and ground supply domains. For example, when a bank of high-speed DDR IPs is placed next to a bank of memories, the switching of the DDR can generate sufficient noise on the shared ground network to adversely affect the operation of the memory.

As designs migrate to smaller technology nodes, especially those using FinFET-based technologies, this gap in the design closure process is going to worsen power noise and reliability closure.”

DDR memory blocks are becoming a greater and greater portion of a chip as the portion of functionality implemented in firmware increases. Bob Smith, Senior VP of Marketing and Business Development at Uniquify, makes the case for a system view of memories.

“DDR IP is used in a wide variety of ASIC and SoC devices found in many different applications and market segments. If the device has an embedded processor, then it is highly likely that the processor requires access to external DDR memory. This access requires a DDR subsystem (DDR controller, PHY and I/O) to manage the data traffic flowing to and from the embedded processor and external DDR memory.

Whether it is procured from an external source or developed by an internal IP group, almost all chip design projects rely on DDR IP to implement the on-chip DDR subsystem. The integration techniques used to implement the DDR IP in the chip design can have far reaching effects on DDR performance, chip area, power consumption and even reliability.

Figure 2. A non-optimized DDR implementation

The above figure illustrates a typical on-chip DDR implementation. Note that while the DDR I/Os span the perimeter of the chip, the DDR PHYs are configured as blocks and are placed in such a way that they are centered with the I/Os. As shown in the diagram, this not only wastes valuable chip area, but also creates other problems.”

“A much more efficient way to implement the DDR subsystem IP is to deliver a DDR PHY that is exactly matched to the DDR I/O layout. By matching the PHY exactly to the I/Os, a tremendous amount of area is saved and power is reduced. Even better, the performance of the DDR can be improved, since the PHY-I/O layout minimizes skew.”

Figure 3. Optimized DDR block

As process technology progresses and moves from 32nm to 22nm and then 14nm and so on, the role of the foundry in the place and route of an entire chip increases. In direct proportion, the freedom of designers to determine the final topology of a chip decreases. Thus we are rapidly reaching the point where only hardened modules will be viable. The number of viable providers in the IP industry is shrinking rapidly, and many significant companies have been acquired in the last three or four years by EDA companies that are becoming major providers of IP products.

Synopsys started selling IP around 1990 and now has a wide variety of IP in its portfolio, mostly developed internally. Cadence, on the other hand, has built its extensive inventory of IP products mostly through acquisition.

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systemes, points out that there are a number of issues to deal with regarding IP.

1. IP Sourcing: Companies are going to need a way to source IP. They will need access to a cataloging system that allows for searching of both internally developed or under-development IP as well as externally available third-party IP.

2. IP Governance: For internally developed IP, there need to be systems and methodologies for handling the promotion of work in progress to company-certified IP. For both internal and externally acquired IP, there needs to be a process to validate that IP, and then a system to rate the IP internally based on previous use, available documentation, and other design artifacts.

3. IP Issue and Defect Tracking: Since IP will be in use in multiple projects, a formal system is required to handle issue and defect tracking across multiple projects against all IP. If one group finds an issue with a piece of IP, all other project groups that are using that IP need to be alerted to the issue and to the plan for resolving it. Ideally this should be integrated into the design tools that are used to assemble IP as well (a minimal sketch of this propagation appears after this list). If a product has already gone out the door with the defective IP, these issues need to be managed and corrective actions need to ensue based on any defects found.

4. IP Security: There are different levels of protection needed for different types of IP, and robust methods must be put in place to ensure the security of IP. First, company-critical IP must be secure, and systems need to be put in place to make sure that the IP does not leave the company premises. If collaborating with partners, any acquired IP must also be handled so that it is only used in the designs that are being collaboratively designed. There need to be restrictions on using partner IP in design blocks which in turn can become IP in other designs. There needs to be a way to track the ‘pedigree’ of IP.

5. Variant-driven, platform-based design: Ultimately, for companies to keep up with shortening market windows and application-driven platform design, they will need to adopt a system of base platforms with pre-qualified IP that can be configured on the fly and used as a starting point for new designs. These systems would automatically populate a design workspace with the required IP from a company-approved catalog as the basis of a new design moving forward.
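
To make point 3 concrete, here is a minimal sketch, in Python, of an issue-tracking registry that propagates a defect filed against one IP block to every project consuming it. It assumes a simple in-memory store; a production system would sit on a database and hook into the design-assembly tools, and all names here are invented for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class IPBlock:
    name: str
    version: str
    open_issues: list = field(default_factory=list)

class IPRegistry:
    """Tracks which projects use which IP, so a defect filed once is
    surfaced to every consuming project rather than just the reporter."""
    def __init__(self):
        self.blocks = {}                   # IP name -> IPBlock
        self.consumers = defaultdict(set)  # IP name -> projects using it

    def register_use(self, project, block):
        self.blocks[block.name] = block
        self.consumers[block.name].add(project)

    def file_issue(self, ip_name, description):
        self.blocks[ip_name].open_issues.append(description)
        # Alert every project that integrates this IP, not just the reporter.
        return {p: f"[{ip_name}] {description}" for p in self.consumers[ip_name]}

# Hypothetical usage: one DDR PHY reused by two SoC projects.
registry = IPRegistry()
ddr_phy = IPBlock("ddr3_phy", "2.1")
registry.register_use("phone_soc", ddr_phy)
registry.register_use("tablet_soc", ddr_phy)

alerts = registry.file_issue("ddr3_phy", "read leveling fails at the -40C corner")
for project, alert in sorted(alerts.items()):
    print(project, "->", alert)
```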

Integrating the Pieces

Farzad Zarrinfar, Managing Director of the Novelics Business Unit at Mentor Graphics, provided a synthesis of the problems facing designers.

“For IP integration, multiple kinds of IP – ‘hard IP’, ‘synthesizable soft peripheral IP’ and ‘synthesizable soft processor IP’ – each with a different set of deliverables, rely on EDA tools for efficient ASIC/SoC design. Selecting the optimal IP size (such as the smallest embedded memory IP) is a critical design decision. While free IP is readily available, it does not always provide the best solution compared with fee-based IP that provides much better characteristics for the specific application.

IP integration to achieve smaller die size, lower leakage, lower dynamic power, or faster speed can provide designers with a more optimized solution that can potentially save millions of dollars over the life of the product, and better differentiate their chips in a highly competitive ASIC/SoC marketplace.”

Bill Neifert, Chief Technology Officer at Carbon Design Systems, observes: “Certainly, some designers at the bleeding edge differentiate every aspect of the subsystem and their own IP, but we’re increasingly seeing others adopt whole subsystem designs and then make configuration tweaks. Think black-box design: ARM’s big.LITTLE offerings are prime examples of this trend.

Of course, in order to make these configuration changes, designers need to know the exact impact of the changes that they’re making. We see users doing this a lot on our IP Exchange web portal. They will download a CPAK (Carbon Performance Analysis Kit), a pre-built system or subsystem complete with software at the bare metal or O/S level. This gets them up and running quickly but not with their exact configuration. They’ll then iterate various configuration options in order to meet their exact design goals. It’s not unusual for a design team to compile 20 different configurations for the same IP block on our portal and then compare the impact of each of these different models on system performance.

Naturally, all of this impacts the firmware team quite a bit. The software developers don’t need to know exactly what the underlying hardware is doing, but the firmware team needs the exact IP configuration. The sooner these decisions can be made, the sooner they can start being productive. Integrating this level of software onto the hardware typically exposes a new round of IP optimizations that can be made as well. Therefore, it’s not unusual for IP configuration changes to happen in waves as additional pieces of IP and software are added to the system.”

Drew Wingard, CTO at Sonics, points out that standards matter. “Because there are many sources for IP, the industry had to create and adhere to standards for integration. From a silicon vendor’s perspective, IP sources include third-party commercial components, internally designed blocks and cores, and customer-designed components. To meet the challenge of integrating IP components from many different places, SoC designers needed communication protocol standards. Communication protocol standards efforts began with the Virtual Socket Interface Alliance (VSIA), continued with the Open Core Protocol International Partnership (OCP-IP), and today reside with Accellera. Of course, our customers need to leverage de-facto standards such as ARM’s AMBA as well. We owe our ability to integrate IP to the fundamental communication protocol standards work that these organizations performed.”

The Challenge of Verification

Sunrise’s Prithi Ramakrishnan is concerned about system verification.  “At a very high level, the main issue with IP is that the simulated environment is different from the final design environment.  Analog and RF IP is dependent on process/node, foundry, layout, extraction, model fidelity, and placement.  So you are either tied to just dropping it in ‘as is’ and treating it like a black box (nobody knows how it works and whether it meets the required specifications) or completely changing it (with the caveat that you can no longer expect the same results).  Digital IP needs to be resynthesized followed by placement and routing, and it takes several iterations to make the IP you got work the way you want it to work. In addition, this process is extremely tool-dependent.

Finally, there are system level issues like interoperability, interface and controls (how does the IP talk to the rest of the SoC). A very important, often overlooked factor is the communication between the IP providers and the SoC implementation houses – there are documents outlining integration guidelines, but without an automated process that takes in all that information, a lot could be lost in translation.”

The issue of how well a third-party IP has been verified will always haunt designers unless the industry finds a way to make IP as trustworthy as the TI 7400 and equivalent parts of the early days. Bernard Murphy, CTO at Atrenta, observed: “One area that doesn’t get a lot of air-time is how an SoC verification team goes about debugging a problem around an IP. You have the old challenge – is this our bug or the IP developer’s bug? If the developer is down the hall, you can probably resolve the problem quickly. If they are now working for your biggest competitor, good luck with that. If this is a commercial IP, you work with an apps guy to circle around possibilities: maybe you are using it wrong, maybe you misunderstood the manual or the protocol, maybe they didn’t test that particular configuration for that particular use-case… Then they bring in their expert and go back through the cycle until you converge on an answer. Problem is, all this burns a lot of time and you’re on a schedule. Is there a way to compress this debug cycle?”

He offers the following suggestion. “One important class of things to check for is the above ‘didn’t test that configuration for that use-case’ situation. This is where synthesized assertions come in. These are derived automatically by the IP developer in the course of verifying the IP. They don’t look like traditional assertions (long, complex sequences of dependencies). They tend to be simpler, often non-obvious, and describe relationships not just at the boundary of the IP but also internal to the IP. Most importantly, they encode not just functionality but also the bounds of the use-cases in which the IP was tested. Think of it as a ‘signature’ for the function plus the verification of that function.”
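
To illustrate the flavor of such synthesized assertions, here is a minimal sketch, in Python rather than a real assertion language, of mining simple candidate invariants (constants, tested value ranges, implications) from recorded simulation traces. The trace format and signal names are hypothetical; this is not Atrenta’s actual technology, just the general idea of encoding the bounds of what verification exercised.

```python
from itertools import combinations

# A trace is a list of cycles; each cycle maps signal name -> sampled value.
# These traces and signal names are invented for illustration.
traces = [
    {"rst_n": 1, "enable": 1, "mode": 0, "burst_len": 4},
    {"rst_n": 1, "enable": 1, "mode": 0, "burst_len": 8},
    {"rst_n": 1, "enable": 0, "mode": 0, "burst_len": 4},
]

def mine_invariants(cycles):
    """Derive simple candidate invariants from what verification actually exercised."""
    signals = sorted(cycles[0])
    invariants = []

    # 1. Constants: signals that never varied during verification.
    for s in signals:
        values = {c[s] for c in cycles}
        if len(values) == 1:
            invariants.append(f"{s} == {values.pop()}  (never varied in test)")

    # 2. Ranges: the tested envelope of each multi-valued signal.
    for s in signals:
        values = sorted({c[s] for c in cycles})
        if len(values) > 1:
            invariants.append(f"{values[0]} <= {s} <= {values[-1]}  (tested range)")

    # 3. Implications between binary signals, e.g. enable == 1 implies rst_n == 1.
    binary = [s for s in signals if {c[s] for c in cycles} <= {0, 1}]
    for a, b in combinations(binary, 2):
        if all(c[b] == 1 for c in cycles if c[a] == 1):
            invariants.append(f"{a} == 1 implies {b} == 1")
    return invariants

for inv in mine_invariants(traces):
    print(inv)
```

A use of the IP that violates one of these mined properties (say, a burst length outside the tested range) is exactly the “didn’t test that configuration” case Murphy describes.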

Thomas L. Anderson, VP of Marketing at Breker Verification Systems, pleads: integrate, but verify. He argues that “the truth is that most SoC teams trust integration too much and verify too little. Many SoC products hit the market only after two or three iterations through the foundry. This costs a lot of money and risks losing market windows to competitors. Most SoC teams follow a five-step verification process:

  1. Ensure that each block, whether locally designed or licensed as IP, is well verified
  2. Use formal methods to verify that each block has been integrated into the SoC properly
  3. Assemble a minimal chip-level simulation testbench and run a few sanity verification tests
  4. Hand-write some simple C tests and run on the embedded processors in simulation or emulation
  5. Run the production software on the processors in simulation, emulation, or prototyping

The problem with this process is that the tests in steps 3 and 4 are too simple since they are hand-written. They typically verify only one block at a time, ignoring interaction between blocks. They also perform only one operation at a time, so they don’t stress cache coherency or any inherent concurrency within the design. Humans aren’t good at thinking and coding in parallel. Thorough SoC pre-silicon verification can occur only with multi-threaded, multi-processor test cases that string blocks together into realistic scenarios representing end-user applications of the chip.”
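
As a toy illustration of what “stringing blocks together into realistic scenarios” can look like, the sketch below randomly walks a scenario graph to emit several parallel test threads, so different blocks are exercised concurrently rather than one at a time. The graph, operation names and output format are invented for illustration and do not represent any vendor’s actual tool.

```python
import random

# Hypothetical scenario graph: each operation lists which operations may
# follow it, stringing SoC blocks together instead of testing them in isolation.
SCENARIO_GRAPH = {
    "start":          ["dma_copy", "camera_capture"],
    "dma_copy":       ["decode_jpeg", "crypto_hash"],
    "camera_capture": ["decode_jpeg"],
    "decode_jpeg":    ["display_out", "crypto_hash"],
    "crypto_hash":    ["net_send"],
    "display_out":    [],
    "net_send":       [],
}

def random_scenario(rng):
    """Walk the graph from 'start' to a sink, yielding one end-user-like scenario."""
    node, path = "start", []
    while SCENARIO_GRAPH[node]:
        node = rng.choice(SCENARIO_GRAPH[node])
        path.append(node)
    return path

def generate_testcase(num_threads=3, seed=42):
    """Emit one multi-threaded test case: each thread runs its own scenario,
    so separate blocks (DMA, codec, crypto, ...) are stressed concurrently."""
    rng = random.Random(seed)
    return {f"thread_{t}": random_scenario(rng) for t in range(num_threads)}

for thread, ops in generate_testcase().items():
    print(thread, "->", " -> ".join(ops))
```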

An Example of Complexity

Charlie Cheng, CEO of Kilopass, gave me an example of the complexity of choosing the correct IP for a design, using sparse matrix math.

With semiconductor IP comprising 90 percent of today’s semiconductor devices and memory IP accounting for over 50 percent of these complex SoCs, it’s no wonder that IP is the fastest growing sector of the overall semiconductor industry. As a result, managing third-party IP is a growing responsibility within today’s semiconductor companies. How to make the right choice from a growing quantity of IP is the major challenge facing engineering teams, purchasing departments, and executive management. The process of these groups buying IP can be viewed as a sparse matrix mathematical exercise but without the actual math formulas and data manipulation.

Table: Vendor A OTP at 1.8V (qualification status by foundry and process node)

The table shows two dimensions of a multi-dimensional matrix representing the variables confronting the purchasing company’s teams. In this two-dimensional matrix, imagine three additional tables for 1.8V operation with the four foundries at 28HPL, 28HPM, and 28HP. Now, replicate this in a fourth dimension for the variable of 2.5V. Add a fifth and sixth dimension to the matrix for Vendor B’s OTP. If this were a mathematical evaluation, a figure of merit would be assigned to each cell in each plane of the multidimensional matrix.

For example, Vendor A’s OTP at 1.8V has JEDEC three-lot qualification at 28HP at TSMC, UMC and GLOBALFOUNDRIES, and similarly for 28HPM and 28HPL, but has only working silicon at 28LP. Vendor B’s OTP may not have received three-lot qualification at any of the foundries on any of the processes, but may have first silicon or one- or two-lot qualification with one or more of them. In the mathematical exercise, a figure of merit would be assigned to being fully qualified, one- or two-lot qualified, first silicon, or not taped out. Using matrix algebra, the formal mathematical exercise would return a result, but an intuitive evaluation suggests vendor A, with more three-lot qualifications at multiple foundries, would have an edge over vendor B, which has none.

The above exercise would be typical of the evaluation occurring in the engineering team. A similar exercise would be occurring in the purchasing department with terms and conditions presented in the licensing agreement and royalty schedule each vendor submits. Corporate and legal would perform a similar exercise.

If the mathematical exercise were actually performed and all the cells in the matrix had assigned values, then a definitive solution would be easily achieved. However, if the cells throughout the matrix are sparsely populated, then the solution ends up being a probable outcome.
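
For readers who want to see the arithmetic, here is a minimal sketch of the exercise with invented numbers: each (foundry, node) cell for a vendor is mapped to a figure of merit by qualification status, unknown cells stay empty, and a vendor’s score combines average merit with how much of the matrix is populated – which is what makes the outcome “probable” rather than definitive when the matrix is sparse. Statuses, weights and data are all hypothetical.

```python
import numpy as np

# Figure of merit per qualification status (invented scale).
MERIT = {"three_lot": 1.0, "two_lot": 0.7, "one_lot": 0.5,
         "first_silicon": 0.3, "working_silicon": 0.4, "not_taped_out": 0.0}

foundries = ["TSMC", "UMC", "GLOBALFOUNDRIES", "SMIC"]
nodes = ["28HPL", "28HPM", "28HP", "28LP"]

# Sparse qualification data: unlisted cells are unknown (NaN), not zero.
status = {
    ("A", "TSMC", "28HP"): "three_lot",
    ("A", "UMC", "28HP"): "three_lot",
    ("A", "GLOBALFOUNDRIES", "28HP"): "three_lot",
    ("A", "TSMC", "28LP"): "working_silicon",
    ("B", "TSMC", "28HP"): "one_lot",
    ("B", "UMC", "28HPM"): "first_silicon",
}

def vendor_score(vendor):
    grid = np.full((len(foundries), len(nodes)), np.nan)
    for (v, f, n), s in status.items():
        if v == vendor:
            grid[foundries.index(f), nodes.index(n)] = MERIT[s]
    populated = ~np.isnan(grid)
    if not populated.any():
        return 0.0
    coverage = populated.mean()   # fraction of cells with any data at all
    quality = np.nanmean(grid)    # average merit where data exists
    return coverage * quality     # a probable, not definitive, outcome

for v in ("A", "B"):
    print(f"Vendor {v}: score = {vendor_score(v):.3f}")
```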

Blog Review – Mon. April 28 2014

Monday, April 28th, 2014

The first PHYs; compute shaders; Accellera Day online; Security and privacy
By Caroline Hayes, Senior Editor.

Mentor Graphics’ Dennis Brophy initially questioned the need for an online Accellera Day, but soon retracted and is even offering to keep blog visitors informed as more are posted.

If you are interested in compute shaders, Sylvek at ARM explains clearly and concisely what they are and how to add one to an application.

Another informative blog is the first of three from Corrie Callenbach, Cadence, directing us to Kevin Yee’s video, Take Command of MIPI PHYs. The presenter takes us through the first of three PHY specifications introduced by MIPI.

Intel’s Mayura Garg points blog visitors to Michael Fey’s presentation at ISS2014. Fey, Executive Vice President, General Manager of Corporate Products and Intel Security CTO, focuses on Security and Privacy in the Information Age – the blog’s own ‘scary video’.

Cadence to Acquire Jasper Design Automation

Wednesday, April 23rd, 2014

By Hamilton Carter, Verification-Low Power Editor

Earlier this week, Cadence announced that it has entered into a definitive agreement to purchase Jasper Design Automation. The two companies are both focused on SoC verification and cited their shared customer base, and the fact that verification now consumes over 70% of the budget on most SoC development projects, as reasons for the transaction. Cadence will pay $170 million for the privately held Jasper, which shows $24 million in cash and cash equivalents on its balance sheet at the moment. Cadence will fund the purchase using cash on hand and an existing revolving line of credit.

Jasper is a market leader in the growing formal analysis verification sector. It has been very successful distributing verification solutions, termed Verification Apps, built on top of its formal engine, JasperGold. Cadence hopes to expand the formal analysis sector more quickly by leveraging the combination of Jasper’s expertise and Cadence’s existing worldwide sales team.

Both companies are excited about the potential for integrating Jasper’s tools with the metric driven verification flow provided by Cadence.  Charlie Huang, senior vice president of the System & Verification Group and Worldwide Field Operations at Cadence, said about the acquisition, “Jasper’s formal analysis solutions are used by customers today alongside Cadence’s metric-driven verification flow to form a broad verification solution. We look forward to welcoming Jasper’s strong formal development expertise and skilled team to Cadence.”

Kathryn Kranen, president and CEO of Jasper, commented, “The verification technologies, when combined, will benefit customers through a comprehensive metric-driven verification approach that unites formal and dynamic techniques, realizing the strength of each and leveraging the integration between them.”

The acquisition is currently scheduled to take place in the second quarter of fiscal 2014, subject to typical closing conditions and regulatory approvals. There is no word yet as to when customers can expect a customer-ready integration of Jasper and Cadence technologies.

Blog Review – Mon. April 21 2014

Monday, April 21st, 2014

Post silicon preview; Apps to drive for; Motivate to educate; Battery warning; Break it up, bots. By Caroline Hayes, Senior Editor.

Gabe Moretti attended the Freescale Technology Forum and found the ARM Cortex-A57 Carbon Performance Analysis Kit (CPAK), which previews post-silicon performance, pre-silicon.

In a considered blog post, Joel Hoffmann, Intel, looks at the top four car apps and what they mean for system designers. He knows what he is talking about: he is preparing for the panel Automotive Suppliers: Collaborate or Die at Open Automotive 14 in Sweden next month.

How to get the next generation of EDA-focused students to commit is the topic of a short keynote at this year’s DAC by Rob Rutenbar, professor of Computer Science, University of Illinois. Richard Goering, Cadence, reports on progress so far with industry collaboration and looks ahead.

Consider managing power in SoCs above all else, urges Scott Seiden, Sonics, who sounds a little frustrated with his cell phone.

Michael Posner, Synopsys, revels in a good fight – between robots in the FIRST student robot design competition. Engaging and educational.

Internet of Things (IoT) and EDA

Tuesday, April 8th, 2014

Gabe Moretti, Contributing Editor

A number of companies contributed to this article, in particular: Apache Design Solutions, ARM, Atrenta, Breker Verification Systems, Cadence, Cliosoft, Dassault Systemes, Mentor Graphics, Onespin Solutions, Oski Technologies, and Uniquify.

In his keynote speech at the recent CDNLive Silicon Valley 2014 conference, Lip-Bu Tan, Cadence CEO, cited mobility, cloud computing, and the Internet of Things as three key growth drivers for the semiconductor industry. He pointed to industry studies that predict 50 billion devices by 2020. Of those three, IoT is the latest area attracting much conversation. Is EDA ready to support its growth?

The consensus is that in many respects EDA is ready to provide the tools required for IoT implementation. David Flynn, an ARM Fellow, put it best: “For the most part, we believe EDA is ready for IoT. Products for IoT are typically not designed on ‘bleeding-edge’ technology nodes, so implementation can benefit from all the years of development of multi-voltage design techniques applied to mature semiconductor processes.”

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systèmes, observed that, conversely, companies that will be designing devices for IoT may not be ready. “Traditional EDA is certainly ready for the core design, verification, and implementation of the devices that will connect to the IoT. Many of the devices that will connect to the IoT will not be the typical designs that are pushing Moore’s Law. Many may be smaller, lower-performance devices that do not necessarily need the latest and greatest process technology. To be cost-effective at producing these devices, companies will rely heavily on IP in order to assemble devices quickly to meet consumer and market demands. In fact, we may begin to see companies that traditionally have not been silicon developers getting into chip design. We will see explosive growth in the ecosystem of companies producing IP to support these new devices.”

Vic Kulkarni, Senior VP and GM, Apache Design, Inc., put it as follows: “There is nothing ‘new or different’ about the functionality of EDA tools for IoT applications, and EDA tool providers have to think of this market opportunity from the perspective of mainstream users: newer licensing and pricing models for the ‘mass market’, i.e. low-cost and low-touch technical support, data and IP security, and the overall ROI.”

But IoT also requires new approaches to design and offers new challenges.  David Kelf, VP of Marketing at Onespin Solutions provided a picture of what a generalized IoT component architecture is likely to be.

Figure 1: generalized IoT component architecture (courtesy Onespin Solutions)

He went on to state: “The included graphic shows an idealized projection of the main components in a general-purpose IoT platform. At a minimum, this platform will include several analog blocks and a processor able to handle protocol stacks for wireless communication and the Internet Protocol (IP). It will need some sensor-required processing, an extremely effective power control solution and, possibly, another capability such as GPS or RFID, and even a Long Term Evolution (LTE) 4G baseband.”

Jin Zhang, Senior Director of Marketing at Oski Technologies observed that “If we parse the definition of IoT, we can identify three key characteristics:

  1. IoT can sense and gather data automatically from the environment
  2. IoT can interact and communicate among themselves and the environment
  3. IoT can process all the data and perform the right action with or without human interaction

These imply that sensors of all kinds (for temperature, light, movement and human vitals), fast, stable and extensive communication networks, light-speed processing power, and massive data storage devices and centers will become the backbone of this infrastructure.

The realization of IoT relies on the semiconductor industry to create even larger and more complex SoC or Network-on-Chip devices to support all the capabilities. This, in turn, will drive the improvement and development of EDA tools to support the creation, verification and manufacturing of these devices, especially verification where too much time is spent on debugging the design.”

Power Management

IoT will require advanced power management, and EDA companies are addressing the problem. Rob Aitken, also an ARM Fellow, said: “We see an opportunity for dedicated flows around near-threshold and low-voltage operation, especially in clock tree synthesis and hold time measurement. There’s also an opportunity for per-chip voltage delivery solutions that determine on a chip-by-chip basis what the ideal operating voltage should be and enable that voltage to be delivered via a regulator, ideally on-chip but possibly off-chip as well. The key is that existing EDA solutions can cope, but better designs can be obtained with improved tools.”

Kamran Shah, Director of Marketing for Embedded Software at Mentor Graphics, noted: “SoC suppliers are investing heavily in introducing power-saving features including Dynamic Voltage Frequency Scaling (DVFS), hibernate power-saving modes, and peripheral clock gating techniques. Early in the design phase, it’s now possible to use Transaction Level Modeling (TLM) tools such as Mentor Graphics Vista to iteratively evaluate the impact of hardware and software partitioning, bus implementations, memory control management, and hardware accelerators in order to optimize for power consumption.”

Figure 2: IoT Power Analysis (courtesy of Mentor Graphics)
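
As a toy illustration of the kind of early trade-off exploration Shah describes (independent of any particular tool), the sketch below replays an invented workload trace against two candidate DVFS-style policies in a transaction-level-like loop and compares energy estimates. Operating points, power numbers and the workload are all made up.

```python
# Hypothetical operating points: (work capacity per 1 ms tick, active mW, idle mW).
OPERATING_POINTS = {"low": (20, 40.0, 4.0), "high": (80, 220.0, 12.0)}

# Workload trace: units of work arriving in each 1 ms tick (invented numbers).
workload = [10, 80, 75, 5, 0, 0, 60, 90, 10, 0]

def energy_uj(trace, policy):
    """Replay the trace tick by tick; the policy picks an operating point
    from the amount of pending work, as a DVFS governor might."""
    total_uj, pending = 0.0, 0.0
    for demand in trace:
        pending += demand
        capacity, p_active, p_idle = OPERATING_POINTS[policy(pending)]
        done = min(pending, capacity)
        busy = done / capacity                            # fraction of tick active
        total_uj += p_active * busy + p_idle * (1 - busy) # mW x 1 ms = microjoules
        pending -= done
    return total_uj

always_high = lambda pending: "high"
demand_driven = lambda pending: "high" if pending > 20 else "low"

print(f"always high  : {energy_uj(workload, always_high):6.1f} uJ")
print(f"demand-driven: {energy_uj(workload, demand_driven):6.1f} uJ")
```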

Bernard Murphy, Chief Technology Officer at Atrenta, pointed out that: “Getting to ultra-low power is going to require a lot of dark silicon, and that will require careful scenario modeling to know when functions can be turned off. I think this is going to drive a need for software-based system power modeling, whether in virtual models, TLM (transaction-level modeling), or emulation. Optimization will also create demand for power sensitivity analysis – which signals / registers most affect power and when. Squeezing out picoAmps will become as common as squeezing out microns, which will stimulate further automation to optimize register and memory gating.”

Verification and IP

Verifying either one component or a subset of connected components will be more challenging. Components in general will have to be designed so that they can be “fixed” remotely, which means either fixing a real bug or downloading an upgrade. Intel is already marketing such a solution, which is not restricted to IoT applications. Also, networks will be heterogeneous by design, significantly complicating verification.

Ranjit Adhikary, Director of Marketing at Cliosoft, noted that “From a SoC designer’s perspective, “Internet of Things” means an increase in configurable mixed-signal designs. Since devices now must have a larger life span, they will need to have a software component associated with them that could be upgraded as the need arises over their life spans. Designs created will have a blend of analog, digital and RF components and designers will use tools from different EDA companies to develop different components of the design. The design flow will increasingly become more complex and the handshake between the digital and analog designers in the course of creating mixed-signal designs has to become better. The emphasis on mixed-signal verification will only increase to ensure all corner cases are caught early on in the design cycle.”

Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, has a similar perspective but is more pessimistic. He noted that “Many IoT nodes will be located in hard-to-reach places, so replacement or repair will be highly unlikely. Some nodes will support software updates via the wireless network, but this is a risky proposition since there’s not much recourse if something goes wrong. A better approach is a bulletproof SoC whose hardware, software, and combination of the two have been thoroughly verified. This means that the SoC verification team must anticipate, and test for, every possible user scenario that could occur once the node is in operation.”

One solution, according to Mr. Anderson, is “automatic generation of C test cases from graph-based scenario models that capture the design intent and the verification space. These test cases are multi-threaded and multi-processor, running realistic user scenarios based on the functions that will be provided by the IoT nodes containing the SoC. These test cases communicate and synchronize with the UVM verification components (UVCs) in the testbench when data must be sent into the chip or sent out of the chip and compared with expected results.”

Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify, noted that “Connecting the unconnected is no small challenge and requires complex and highly sophisticated SoCs. Yet, at the same time, unit costs must be small so that high volumes can be achieved. Arguably, the most critical IP for these SoCs to operate correctly is the DDR memory subsystem. In fact, it is ubiquitous in SoCs – where there’s a CPU and the need for more system performance, there’s a memory interface. As a result, it needs to be fast, low-power and small to keep costs low. The SoC’s processors spend the majority of cycles reading and writing to DDR memory. This means that all of the components, including the DDR controller, PHY and I/O, need to work flawlessly, as does the external DRAM memory device(s). If there’s a problem with the DDR memory subsystem, such as jitter, data/clock skew, setup/hold time or complicated physical implementation issues, the IoT product may work intermittently or not at all. Consequently, system yield and reliability are of utmost concern.”

He went on to say: “The topic may be the Internet of Things and EDA, but the big winners in the race for IoT market share will be providers of all kinds of IP. The IP content of SoC designs often reaches 70% or more, and SoCs are driving IoT, connecting the unconnected. The big three EDA vendors know this, which is why they have gobbled up some of the largest and best known IP providers over the last few years.”


Things that seem simple often turn out not to be. Implementing IoT will not be simple because, as the implementation goes forward, new and more complex opportunities will present themselves.

Vic Kulkarni said: “I believe that EDA solution providers have to go beyond their “comfort zone” of being hardware design tool providers and participate in the hierarchy of IoT above the “Devices” level, especially in the “Gateway” arena. There will be opportunities for providing big data analytics, security stack, efficient protocol standard between “Gateway” and “Network”, embedded software and so on. We also have to go beyond our traditional customer base to end-market OEMs.”

Frank Schirrmeister, product marketing group director at Cadence, noted that “The value chain for the Internet of Things consists not only of the devices that create data. The IoT also includes the hubs that collect data and upload data to the cloud. Finally, the value chain includes the cloud and the big data analytics it stores.  Wired/wireless communications glues all of these elements together.”

Blog Review – April 07 2014

Monday, April 7th, 2014

Interesting comments from PADS users are teasingly outlined by John McMillan, Mentor Graphics.

An impressed Dominic Pajak, ARM, relates hopes for the IoT, using ARM mbed to control an industrial tank that can be monitored with Google Glass.

The cultural gap between engineers is examined by Brian Fuller, Cadence, as he reviews DesignCon 2014 and group director Frank Schirrmeister’s call for a “new species” of system designer.

More IoT thoughts, this time a warning from Divya Naidu Kol, Intel, with ideas of how to welcome the IoT without losing control of our information.

The Wheels of Industry Roll On

Wednesday, March 19th, 2014

Apart from pretzels and weissbier, Embedded World in Nuremberg was distinguished by the car, the factory and the IoT. By Caroline Hayes, Senior Editor.

On reflection, it should be no surprise that automotive themes were all around the exhibition halls of Embedded World 2014 in Nuremberg, Germany. The country produces 44% of the cars and light trucks manufactured in western Europe (defined as Europe apart from Turkey and the former communist bloc).

My Embedded World began with a breakfast meeting at Cadence, where Senior Director Frank Schirrmeister outlined how the classic EDA model is evolving into what has been termed System Design Enablement. This shift is largely due to the growing electronics content in vehicles, he explained, with the proliferation of CAN (Controller Area Network), LIN (Local Interconnect Network), MOST (Media Oriented Systems Transport), Ethernet and FlexRay protocols. “For the consumer, it is satellite navigation systems; it is under-the-hood with software and there is an automotive element to the IoT (Internet of Things),” he said.
Traditionally, he explained, Cadence’s domain was the SoC (System on Chip), augmented by last year’s acquisitions of Tensilica, Cosmic Circuits and Evatronix. System Design Enablement is based around adding the software stack for applications: “We are broadening EDA to take the subsystem in the chip, and enable interaction between the hardware and software to broaden the system design environment.” Listing the company’s attributes for automotive, he ran down the list: Ethernet IP; virtual platforms, such as automated driver assist systems; FPGA platforms for prototyping; mixed-signal for under-the-hood. The last two are also used in IoT. For embedded control, notes Schirrmeister, virtualization needs to adapt, with a virtual hardware methodology to meet the complexities and variants of cars today. “We need more simulation than ever before – automotive is the second largest growth area in the USA, Europe and Japan.”

More rugged modes of transport were occupying Daniel Piper, Senior Marketing Manager, EMEA, Kontron. The company was highlighting embedded boards based on the Intel Atom E3800 and COM Express modules based on the Intel Atom E3800 and Intel Celeron N2900/J1900.
There were also the first SMARC (Smart Mobility ARChitecture) CoMs (Computer on Modules) from the German company based on an x86 Atom processor. The SMARC-sXBTi CoMs are scalable so that the same look and feel for software development can be shared in the industrial space, said Piper, reaching from the automated shop floor to the connected tablet. For this, the benefits of SMARC – low power consumption and small footprint – are exploited, says Piper. Power consumption is 5 to 10W, and the low-profile, mini-computer form factor measures 82x50mm. “As well as long-term availability [an average of seven years], the software services, low profile and low power consumption suit mobile applications and interfaces, such as eye camera interfaces, that are adapted from the ARM world but on an x86.” This, he says, extends SMARC into x86 modules and markets, using the same form factor, which brings ease of use for the end customer.

Industrial virtualization
Perhaps the most targeted announcement of Embedded World came from Intel, which announced virtualization platforms for industrial systems, with software and tools to create industrial embedded systems.
Jim Robinson, General Manager, Segments and Broad Market Division, Internet of Things Solutions Group, Intel, echoed Piper’s sentiment, that the industrial sector is looking for new, innovative ways to connect and build: “Bringing together what have typically been different sub-systems into a single computing platform, makes it easier – and more affordable – for OEMs, machine builders and system integrators to deliver consolidated, virtualized platforms,” he said.
The Intel Industrial Solutions System Consolidation Series bundles an embedded computer with an Intel Core i7 processor and a pre-integrated virtualization software stack, which includes Wind River Hypervisor, pre-configured to support three partitions running Wind River VxWorks for real-time applications and Wind River Linux 5.0 for non-real-time applications. Robinson explained that the role of virtualization is to partition important workloads using multiple virtual machines. With it, developers can consolidate multiple discrete sub-systems onto a single device, reducing costs, increasing flexibility and reducing factory space, he said.
Intel System Studio software tools were also announced. They are designed for the Industrial Solutions System Consolidation Series to build and analyse industrial, embedded systems.

The IoT network
The software and tool suite are part of the company’s Developer Program for Internet of Things. This was another phrase often heard, and seen, around the halls in Nuremberg.
Dan Demers, Director of Marketing, congatec, was enthusiastic about IoT for its role in bridging mobile connectivity into the industrial space. “The longevity of industrial applications, and the IoT, from mobile devices into the industrial area demands customised solutions. Markets are opening up, because silicon innovations are bringing the power consumption down and the performance levels up. Historically, we could not put an x86 processor on such a board – now we can. Off-the-shelf is the easiest option, but not all off-the-shelf offerings have the I/O requirements, so the next best thing is a CoM.” There are, he says, many advantages to employing a CoM, such as reduced time to market, with around six to nine months of development time.
ARM took a different approach to the IoT. Its IP dominates phones today, with 20 billion ARM-based mobile phone chips shipped to date, but the IoT market could be 100 billion connected devices. Chris Turner, Senior Product Marketing Manager, Processor Division, ARM, talked me through the software layer of the IoT and security. “The ARM ecosystem demonstrates the underlying code – the hypervisors – the protocol stack and the partnerships within the ecosystems. It is a long product plan to build a hypervisor for an architecture, and this is where the ecosystem co-operation comes into play.” The ARM ecosystem includes more than 1,000 partners, developers and engineers developing ARM-based solutions and providing support.
Still with the IoT, Wind River had a creative booth, which demonstrated device connectivity in many forms, supported by its Intelligent Device Platform. As well as IoT protocol support, the scalable, secure development environment supports WiFi, Bluetooth and ZigBee.
It can be customized, which reduces development time. Provisioning and device management are via a web-based tool. Security in any connected network is critical, from public transport, where signaling has to be accurate and timely, to utilities, where companies not only want to prevent fraud or theft, but also to protect against outages. A customized secure remote management feature provides encrypted communication between a device and cloud-based management. Lua and Java programming environments are supported to allow engineers to build gateway applications, connect, and send and receive data from the cloud.

Although there were several themes at this year’s exhibition and conference, the healthy competition between processors of choice, and the co-operation among software, chip and board companies to integrate and innovate, were encouraging, both in terms of the economy and in terms of the potential for the diverse, engaging design initiatives of tomorrow.
