
Newer Processes Raise ESL Issues

Wednesday, August 13th, 2014

Gabe Moretti, Senior Editor

In June I wrote about how EDA changed its traditional flow in order to support advanced semiconductor manufacturing.  Although the changes are significant and meaningful, I do not think they are enough to sustain the increase in productivity required by financial demands.  What is necessary, in my opinion, is better support for system-level developers.

Deferring the solution of design and integration problems to a later stage of the development process creates more complexity, since the network affected is much larger.  Each node in the architecture is now a collection of components and primitive electronic elements that dilute, and thus hide, the intended functional architecture.

Front End Design Issues

Changes in the way front end design is done are being implemented.  Anand Iyer, Calypto's Director of Product Marketing, focused on the need to plan power at the system level.  He observed that: "Addressing DFP issues needs to be done in the front end tools, as the RTL logic structure and architecture choices determine 80% of the power. Designers need to minimize the activity/clock frequency across their designs since this is the only metric to control dynamic power. They can achieve this in many ways: (1) reducing activity permanently from their design, (2) reducing activity temporarily during the active mode of the design."  Anand went on to cover the two points: "The first point requires a sequential analysis of the entire design to identify opportunities where we can save power. These opportunities need to be evaluated against possible timing and area impact. We need automation when it comes to large and complex designs. PowerPro can help designers optimize their designs for activity."

As for the other point he said: “The second issue requires understanding the interaction of hardware and software. Techniques like power gating and DVFS fall under this category.”

Anand also recognized that high level synthesis can be used to achieve low power designs.  Starting from C++ or SystemC, architects can produce alternative microarchitectures and see the power impact of their choices (with physically aware RTL power analysis).  This is hugely powerful for exploration, because doing the same thing only at RTL is time consuming, and it is unrealistic to actually try multiple implementations of a complex design that way.  In addition, the RTL low power techniques are automatically considered and implemented once you have selected the best architecture that meets your power, performance, and cost constraints.
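To make that exploration concrete, here is a minimal C++ sketch of the kind of algorithmic source an HLS flow starts from. The FIR kernel and the idea that unroll/pipeline choices are applied in the tool rather than in the source are illustrative assumptions, not a description of any particular product.

```cpp
// Illustrative only: a simple FIR kernel written in synthesizable C++ style.
// An HLS tool can map this same source to different microarchitectures
// (fully unrolled, partially unrolled, or iterative) and report the power,
// area and timing of each candidate, which is the exploration described above.
#include <cstdint>

constexpr int TAPS = 16;

int32_t fir(int16_t sample, const int16_t coeff[TAPS]) {
    static int16_t shift_reg[TAPS] = {0};
    int32_t acc = 0;

    // Shift in the new sample.
    for (int i = TAPS - 1; i > 0; --i) {
        shift_reg[i] = shift_reg[i - 1];
    }
    shift_reg[0] = sample;

    // Multiply-accumulate loop: the unrolling factor and pipelining directives
    // (set in the HLS tool, not in this source) trade throughput for power and area.
    for (int i = 0; i < TAPS; ++i) {
        acc += shift_reg[i] * coeff[i];
    }
    return acc;
}
```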

Steve Carlson, Director of Marketing at Cadence, pointed out that about a decade ago design teams had their choice of about four active process nodes when planning their designs.  He noted that: "In 2014 there are ten or more active choices for design teams to consider.  This means that the solution space for product design has become a lot richer.  It also means that design teams need a more fine-grained approach to planning and vendor/node selection.  It follows that the assumptions made during the planning process need to be tested as early and as often as possible, and with as much accuracy as possible at each stage. The power/performance and area trade-offs create end product differentiation.  One area that can certainly be improved is the connection to trade-offs between hardware architecture and software.  Getting more accurate insight into power profiles can enable trade-offs at the architectural and micro-architectural levels.

Perhaps less obvious is the need for process-accurate early physical planning (i.e., planning that understands design rules for coloring, etc.)."

As shown in the following figure, designers have to be aware that parts of the design come from different suppliers, and thus Steve states that: "It is essential for the front-end physical planning/prototyping stages of design to be process-aware to prevent costly surprises down the implementation road."

Simulation and Verification

One of the major recent changes in IC design is the growing number of mixed-signal designs.  They present new design and verification challenges, particularly when advanced processes are targeted for manufacturing.  On the standards development side, Accellera has responded by releasing a new version of its Verilog-AMS.  It is a mature standard, originally released in 2000 and built on top of the Verilog subset of IEEE 1800-2012 SystemVerilog.  The standard defines how analog behavior interacts with event-based functionality, providing a bridge between the analog and digital worlds. To model continuous-time behavior, Verilog-AMS is defined to be applicable to both electrical and non-electrical system descriptions.  It supports conservative and signal-flow descriptions and can also be used to describe discrete (digital) systems and the resulting mixed-signal interactions.

The revised standard, Verilog-AMS 2.4, includes extensions to benefit verification, behavioral modeling and compact modeling. There are also several clarifications and over 20 errata fixes that improve the overall quality of the standard. Resources on how best to use the standard and a sample library with power and domain application examples are available from Accellera.

Scott Little, chair of the Verilog-AMS Working Group, stated: "This revision adds several features that users have been requesting for some time, such as supply sensitive connect modules, an analog event type to enable efficient electrical-to-real conversion and current checker modules."

The standard continues to be refined and extended to meet the expanding needs of various user communities. The Verilog-AMS WG is currently exploring options to align Verilog-AMS with SystemVerilog in the form of a dot standard to IEEE 1800. In addition, work is underway to focus on new features and enhancements requested by the community to improve mixed-signal design and verification.

Another aspect of verification that has clearly grown in significance in the past few years is the availability of Verification IP modules.  Together with the new UVM 1.2 (Universal Verification Methodology) standard just released by Accellera, they represent a significant increase in the verification power available to designers.

Jonah McLeod, Director of Corporate Marketing Communications at Kilopass, is also concerned about analog issues.  He said: "Accelerating SPICE has to be a major tool development of this generation of tools. The biggest problem designers face in complex SoCs is getting corner cases to converge. This can be time consuming and imprecise with current generation tools.  Start-ups claiming Monte Carlo SPICE acceleration, like Solido Design Automation and CLK Design Automation, are attempting to solve the problem. Both promise to achieve SPICE-level accuracy on complex circuits within a couple of percentage points in a fraction of the time."

One area of verification that is not often covered is its relationship with manufacturing test.  Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, told me that: "The enormous complexity of a deep submicron (32, 28, 20, 14 nm) SoC has a profound impact on manufacturing test. Today, many test engineers treat the SoC as a black box, applying stimulus and checking results only at the chip I/O pins. Some write a few simple C tests to download into the SoC's embedded processors and run as part of the manufacturing test process. Such simple tests do not validate the chip well, and many companies are seeing returns with defects missed by the tester. Test time limitations typically prohibit the download and run of an operating system and user applications, but clearly a better test is needed. The answer is available today: automatically generated C test cases that run on 'bare metal' (no operating system) while stressing every aspect of the SoC. These run realistic user scenarios in multi-threaded, multi-processor mode within the SoC while coordinating with the I/O pins. These test cases validate far more functionality and performance before the SoC ever leaves the factory, greatly reducing return rates while improving the customer experience."
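As an illustration of what such an automatically generated bare-metal test case might look like, here is a minimal C++ sketch. The register addresses, the DMA block and the pass/fail convention are all hypothetical; a real generated suite would be far larger, multi-threaded and coordinated with the tester through the chip I/O pins.

```cpp
// Illustrative bare-metal test fragment of the kind described above.
// All addresses and register layouts are hypothetical.
#include <cstdint>

static volatile uint32_t* const DMA_CTRL   = reinterpret_cast<uint32_t*>(0x40000000u);
static volatile uint32_t* const DMA_STATUS = reinterpret_cast<uint32_t*>(0x40000004u);
static volatile uint32_t* const SRAM_BASE  = reinterpret_cast<uint32_t*>(0x20000000u);

// Returns 0 on pass, or the index (plus one) of the first miscompare.
extern "C" int test_dma_loopback(void) {
    // Fill a source buffer with a walking pattern.
    for (uint32_t i = 0; i < 64; ++i) {
        SRAM_BASE[i] = 0xA5A50000u | i;
    }
    *DMA_CTRL = 1u;                       // kick off a (hypothetical) DMA transfer
    while ((*DMA_STATUS & 1u) == 0u) { }  // busy-wait: no operating system, no interrupts

    // Check the destination region written by the DMA engine.
    for (uint32_t i = 0; i < 64; ++i) {
        if (SRAM_BASE[64 + i] != (0xA5A50000u | i)) {
            return static_cast<int>(i) + 1;
        }
    }
    return 0;
}
```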

Blog Review – Mon. August 11 2014

Monday, August 11th, 2014

VW e-Golf; Cadence’s power signoff launch; summer viewing; power generation research.
By Caroline Hayes, Senior Editor

VW’s plans for all-electric Golf models have captured the interest of John Day, Mentor Graphics. He attended the Management Briefing Seminar and reports on the carbon offsetting and a solar panel co-operation with SunPower. I think I know what Day will be travelling in to get to work.

Cadence has announced plans to tackle power signoff this week and Richard Goering elaborates on the Voltus-Fi Custom Power Integrity launch and provides a detailed and informative blog on the subject.

Grab some popcorn (or not) and enjoy this summer's blockbusters, as lined up by Scott Knowlton, Synopsys. Perhaps not the next Harry Potter series, but certainly a must-see for anyone who missed the company's demos at PCI-SIG DevCon. This humorous blog continues the cinema analogy for "Industry First: PCI Express 4.0 Controller IP", "DesignWare PHY IP for PCI Express at 16Gb/s", "PCI PHY and Controller IP for PCI Express 3.0" and "Synopsys M-PCIe Protocol Analysis with Teledyne LeCroy".

Fusion energy could be the answer to growing energy demands, and Steve Leibson, Xilinx, shares Dave Wilson's (National Instruments) report on a fascinating National Instruments project to monitor and control a compact spherical tokamak (used as a neutron source) with the UK company Tokamak Solutions.

An EDA View of Semiconductor Manufacturing

Wednesday, June 25th, 2014

Gabe Moretti, Contributing Editor

The concern that there is a significant break between the tools used by designers targeting leading edge processes (those at 32 nm and smaller, to be precise) and those used to target older processes was dispelled during the recent Design Automation Conference (DAC).  In his keynote address in June at the Moscone Center in San Francisco, Dr. Antun Domic, Executive Vice President and General Manager, Synopsys Design Group, pointed out that advances in EDA tools made in response to the challenges posed by the newer semiconductor process technologies also benefit designs targeting older processes.

Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic's remarks and stated: "There seems to be a misconception that all advanced designs need to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double for advanced geometries in many cases, more functionality can also be added. In this age of disposable mobile products where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.

However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn't necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes.  There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those with noise cancellation, touchscreen, and MEMS (Micro-Electro-Mechanical Systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move to smaller process geometries. Other chips at established nodes tend to also have some analog capability, which doesn't make them any less complex."

This is very important, since the number of companies that can afford to use leading edge processes is diminishing due to the very high ($100 million and more) non-recurring investment required.  And of course the cost of each die is also greater than with previous processes.  If the tools could only be used by those customers doing leading edge designs, revenues would necessarily fall.

Design Complexity

Steve Carlson, Director of Marketing at Cadence, states that "when you think about design complexity there are a few axes that might be used to measure it.  Certainly raw gate count or transistor count is one popular measure.  From a recent article in Chip Design, a look at complexity on a log scale shows the billion mark has been eclipsed."  Figure 1, courtesy of Cadence, shows the increase of transistors per die over the last 22 years.

Figure 1.

Steve continued: “Another way to look at complexity is looking at the number of functional IP units being integrated together.  The graph in figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following.  This is another indication of the complexity of the design, rather than of the complexity of designing for a particular node.  At the heart of the process complexity question are metrics such as number of parasitic elements needed to adequately model a like structure in one process versus another.”  It is important to notice that the percentage of IP blocks provided by third parties is getting close to 50%.

Figure 2.

Steve concludes with: "Yet another way to look at complexity is through the lens of the design rules and the design rule decks.  The graphs below show the upward trajectory for these measures in a very significant way." Figure 3, also courtesy of Cadence, shows the increased complexity of the Design Rules provided by each foundry.  This trend makes second sourcing a design practically impossible, since using a second source foundry would be almost like doing a different design.

Figure 3.

Another problem designers have to deal with is the increasing complexity due to the decreasing feature sizes.  Anand Iyer, Calypto Director of Product Marketing, observed that: "Complexity of design is increasing across many categories such as Variability, Design for Manufacturability (DFM) and Design for Power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst-casing the variation, which can lead to reduced design performance. DFM complexity is causing design performance to be evaluated across many more corners than designers were used to. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor for adding design complexity because power, especially dynamic power, is a major issue in these process nodes. Voltage cannot scale due to the noise margin and process variation considerations, and the capacitance is relatively unchanged or increasing."

Impact on Back End Tools

I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry would eventually mean that a foundry-specific Place and Route tool would be better than adapting a generic tool to a Design Rules file that is becoming very complex.  In my mind, complexity means a greater probability of errors due to ambiguity among a large set of rules.  Thus, building rules-specific Place and Route tools would directly lower the number of design rule checks required.

Mary Ann White of Synopsys answered: “We do not believe so.  Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects required to handle the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn’t mean that the tool has to be different.  The use of multi patterning, coloring and decomposition is the same process even if the design rules between foundries may differ.”

Steve Carlson of Cadence shares this opinion: "There have been subtle differences between requirements at new process nodes for many generations.  Customers do not want to have different tool strategies for a second source of foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration).   In more recent generations of process nodes there has been a growing divergence of the requirements to support like-named nodes. This has led to added cost for EDA providers.  It is doubtful that different tools will be spawned for different foundries.  How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision.  The use model users want is singular across all foundry options.  How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy.  Time will tell."

This much is clear for now.  But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively.  Changing foundries will almost always be a business decision based on financial considerations.

New processes also change the requirements for TCAD tools.  At the just-concluded DAC I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.

He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory in advanced technology nodes.  Modeling and simulation play an increasingly important role in the DTCO process, with the benefits of speeding up and reducing the cost of technology, circuit and system development and hence reducing time-to-market.  He said: "It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirements for design and optimization of particular circuits, systems and corresponding products.  One of the main challenges is to factor accurately the device variability in the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel needed to secure the required electrostatic integrity. However, the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to the process-induced variability related predominantly to silicon channel thickness or shape variation."  He continued: "However, until now TCAD simulation, compact model extraction and circuit simulation have typically been handled by different groups of experts, and often by separate departments in the semiconductor industry, and this leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction and circuit simulation tools are typically developed and licensed by different EDA vendors does not help the DTCO practices."

Ansys pointed out that in advanced FinFET process nodes, the operating voltage for the devices has been drastically reduced. This reduction in operating voltage has also led to a decrease in operating margins for the devices. With several transient modes of operation in low power ICs, having an accurate representation of the package model is mandatory for accurate noise coupling simulations. Distributed package models with bump-level resolution are required for performing Chip-Package-System simulations for accurate noise coupling analysis.

Further Exploration

The topic of semiconductor manufacturing has generated a large number of responses.  As a result, the next monthly article will continue to cover the topic, with particular focus on the impact of leading edge processes on EDA tools and practices.

Blog Review – Mon. June 16 2014

Monday, June 16th, 2014

Naturally, there is a DAC theme to this week’s blogs – the old, the new, together with the soccer/football debate and the overlooked heroes of technology. By Caroline Hayes, Senior Editor.

Among those attending DAC 2014, Jack Harding, eSilicon, rejoiced in seeing some familiar faces but mourned the lack of new faces and the absence of a rock and roll generation for EDA.

Football fever has affected Shelly Stalnaker, Mentor Graphics, as she celebrates the World Cup coming to a TV screen near you. The rest of the world may call soccer football but the universality of IC design and verification is an analogy that will resonate with sport enthusiasts everywhere.

Celebrating Alan Turing, Aurelien, Dassault Systemes, looks at the life and achievements of the man who broke the Enigma code in WWII, invented the first computer in 1936 and defined artificial intelligence. The fact that he wasn't mentioned in the 2001 film about the code breakers, Enigma, reflects how overlooked this incredible man was.

Mixed signal IC verification was the topic of a DAC panel, and Richard Goering, Cadence, runs down what was covered, from tools and methodologies to the prospects for scaling and a hint at what's next.

Digital Designers Grapple with Analog Mixed Signal Designs

Tuesday, June 10th, 2014

By John Blyler, Chief Content Officer

Today’s growth of analog and mixed signal circuits in the Internet of Things (IoT) applications raises questions about compiling C-code, running simulations, low power designs, latency and IP integration.

Often, the most valuable portion of a technical seminar is found in the question-and-answer (Q&A) session that follows the actual presentation. For me, that was true during a recent talk on the creation of mixed signal devices for smart analog and the Internet of Things (IoT) applications. The speakers included Diya Soubra, CPU Product Marketing Manager and Joel Rosenberg, Platform Marketing Director at ARM; and Mladen Nizic, Engineering Director at Cadence. What follows is my paraphrasing of the Q&A session with reference to the presentation where appropriate. – JB

Question: Is it possible to run C and assembly code on an ARM® Cortex®-M0 processor in Cadence’s Virtuoso for custom IC design? Is there a C-compiler within the tool?

Nizic: The C compiler comes from Keil®, ARM's software development kit. The ARM DS-5 Development Studio is an Eclipse based tool suite for the company's processors and SoCs. Once the code is compiled, it is run together with RTL software in our (Cadence) Incisive Mixed Signal simulator. The result is a simulation of the processor driven by an instruction set with all digital peripherals simulated in RTL or at the gate level. The analog portions of the design are simulated at the appropriate behavioral level, i.e., Spice transistor level, electrical behavioral Verilog-A or a real number model. [See the mixed signal trends section of "Moore's Cycle, Fifth Horseman, Mixed Signals, and IP Stress."]

You can use electrical behavioral models like Verilog-A, VHDL-A and VHDL-AMS to simulate the analog portions of the design. But real number models have become increasingly popular for this task. With real number models, you can model analog signals with variable amplitudes but discrete time steps, just as required by digital simulation. Simulations with a real number model representation for analog are done at almost the same speed as the digital simulation and with very little penalty (in accuracy). For example, here (see Figure 1) are the results of a system simulation where we verify how quickly the Cortex-M0 would use a regulation signal to bring pressure to a specified value. It takes some 28 clock cycles. Other test bench scenarios might be explored, e.g., sending the Cortex-M0 into sleep mode if no changes in pressure are detected or waking up the processor in a few clock cycles to stabilize the system. The point is that you can swap these real number models for electrical models in Verilog-A or for transistor models to redo your simulation and verify that the transistor model performs as expected.

Figure 1: The results of a Cadence simulation to verify the accuracy of a Cortex-M0 to regulate a pressure monitoring system. (Courtesy of Cadence)
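For readers who want to picture the software side of that testbench, the following is a hypothetical C++ sketch of the regulation loop that would be cross-compiled (for example with the Keil toolchain mentioned above) and executed by the simulated Cortex-M0. The register map and the simple step-up/step-down control law are assumptions for illustration only.

```cpp
// Hypothetical firmware loop for the pressure-regulation scenario above.
// The ADC/DAC register addresses and the bang-bang control law are invented;
// in the described flow this code runs on the simulated Cortex-M0 while the
// analog plant is represented by a real number model.
#include <cstdint>

static volatile uint32_t* const ADC_PRESSURE = reinterpret_cast<uint32_t*>(0x40010000u);
static volatile uint32_t* const DAC_VALVE    = reinterpret_cast<uint32_t*>(0x40010004u);

extern "C" void regulate_pressure(uint32_t target, uint32_t tolerance) {
    for (;;) {
        const uint32_t measured = *ADC_PRESSURE;   // sampled from the analog model
        if (measured + tolerance < target) {
            *DAC_VALVE = *DAC_VALVE + 1;           // open the valve one step
        } else if (measured > target + tolerance) {
            *DAC_VALVE = *DAC_VALVE - 1;           // close the valve one step
        }
        // In the testbench described above, convergence took on the order of
        // 28 clock cycles; a sleep/wake scenario would gate this loop instead.
    }
}
```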

Question: Can you give some examples of applications where products are incorporating analog capabilities and how they are using them?

Soubra: Everything related to motor control, power conversion and power control are good examples of where adding a little bit of (processor) smarts placed next to the mixed signal input can make a big difference. This is a clear case of how the industry is shifting toward this analog integration.

Question: What capabilities does ARM offer to support the low power requirement for mixed signal SoC design?

Rosenberg: The answer to this question has both a memory and logic component. In terms of memory, we offer the extended range register file compilers which today can go up to 256k bits. Even though the performance requirement for a given application may be relatively low, the user will want to boot from the flash into the SRAM or the register file instance. Then they will shut down the flash and execute out of the RAM as the RAM offers significantly lower active as well as stand-by power compared to executing out of flash.

On the logic side, we offer a selection from 7, 9 and 12 tracks. Within that, there are three Vt options – one each for high, nominal and lower speeds. Beyond that, we also offer power management kits that provide things like level shifters and power gating so the user can shut down inactive parts of the SoC circuit.
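The boot-from-flash-then-execute-from-RAM pattern Rosenberg describes usually comes down to a small copy loop early in the reset path. Below is a minimal C++ sketch, assuming hypothetical linker-provided symbols; real projects would rely on their toolchain's scatter-loading or linker-script mechanism.

```cpp
// Minimal startup-time relocation sketch. The three symbols are assumed to be
// provided by the linker script: the load address (in flash) and the run
// address range (in SRAM) of the code or data to relocate.
#include <cstdint>

extern uint32_t __ram_code_load;   // source image in flash (hypothetical symbol)
extern uint32_t __ram_code_start;  // destination in SRAM (hypothetical symbol)
extern uint32_t __ram_code_end;    // end of destination region (hypothetical symbol)

extern "C" void copy_code_to_ram(void) {
    const uint32_t* src = &__ram_code_load;
    uint32_t*       dst = &__ram_code_start;
    while (dst < &__ram_code_end) {
        *dst++ = *src++;           // after this, execution can continue from SRAM
    }
    // Once running from RAM, firmware can power down the flash to save both
    // active and standby power, as described above.
}
```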

Question: What are the latency numbers for waking up different domains that have been put to sleep?

Soubra: The numbers that I shared during the presentation do not include any peripherals, since I have no way of knowing what peripherals will be added. In terms of who is consuming what power, the normal progression tends to be the processor, peripherals, bus and then the flash block. The "wake-up" state latency depends upon the implementation itself. You can go from tens of cycles to multiples of tens depending upon how the clocks and phase locked loops (PLLs) are implemented. If we shut everything down, then a few cycles will be required before everything goes off and before we can restart the processor. But we are talking about tens, not hundreds, of cycles.

Question: And for the wake-up clock latency?

Soubra: Wake-up is the same thing, because when the wake-up controller says “lets go,” it has to restart all the clocks before it starts the processor. So it is exactly the same amount.
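To see where those tens of cycles go, consider this hypothetical C++ wake-up sequence. Every register name below is invented for illustration and does not correspond to any real ARM or vendor interface, but it shows why clock and PLL restart dominate the latency.

```cpp
// Hypothetical wake-up sequence: restart the PLL, wait for lock, then ungate
// the clocks before the wake-up controller releases the processor.
#include <cstdint>

static volatile uint32_t* const PLL_CTRL   = reinterpret_cast<uint32_t*>(0x40020000u);
static volatile uint32_t* const PLL_STATUS = reinterpret_cast<uint32_t*>(0x40020004u);
static volatile uint32_t* const CLK_ENABLE = reinterpret_cast<uint32_t*>(0x40020008u);

extern "C" void wake_domains(void) {
    *PLL_CTRL = 1u;                       // re-enable the PLL
    while ((*PLL_STATUS & 1u) == 0u) { }  // wait for lock: this is where most of
                                          // the cycles quoted above are spent
    *CLK_ENABLE = 0xFu;                   // ungate clocks to the woken domains
    // Execution then resumes on the processor, typically within tens of cycles.
}
```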

ARM Cortex-M low power technologies.

Question: What analog intellectual property (IP) components are offered by ARM and Cadence? How can designers integrate their own IP in the flow?

Nizic: At Cadence, through the acquisition of Cosmic, we have a portfolio of applicable analog and mixed signal IP, e.g., converters, sensors and the like. We support all design views that are necessary for this kind of methodology including model abstracts from real number to behavioral models. Like ARM’s physical IP, all of ours are qualified for the various foundry nodes so the process of integrating IP and silicon is fairly smooth.

Soubra: From ARM’s point-of-view, we are totally focused on the digital part of the SoC, including the processors, bus infrastructure components, peripherals, and memory controllers that are part of the physical IP (standard cell libraries, I/O cells, SRAM, etc). Designers integrate the digital parts (processors, bus components, peripherals and memory controller) in RTL design stages. Also, they can add the functional simulation models of memories and I/O cells in simulations, together with models of analog components from Cadence. The actual physical IP are integrated during various implementation stages (synthesis, placement and routing, etc).

Question: How can designers integrate their own IP into the SoC?

Nizic: Some of the capabilities and flows that we described are actually used to create customer IP for later reuse in SoC integration. There is a centric flow that can be used, whether the customer’s IP is pure analog or contains a small amount of standard cell digital. For example, the behavioral modeling capabilities help package this IP for the functional simulation in full chip verification. But getting the IP ready is only one aspect of the flow.

From a physical abstract it’s possible to characterize the IP for use in timing driven mode. This approach would allow you to physically verify the IP on the SoC for full chip verification.

Blog Review – Mon. June 02 2014

Monday, June 2nd, 2014

In case you didn’t know, DAC is upon us, and ARM has some sightseeing tips – within the confines of the show-space. Electric vehicles are being taken seriously in Europe and North America, Dassault Systemes has some manufacturing-design tips and Sonics looks back over 20 years of IP integration. By Caroline Hayes, Senior Editor.

Electric vehicles – it’s an easy sell for John Day, Mentor Graphics, but his blog has some interesting examples from Sweden of electric transport and infrastructure ideas.

Thanks are due to Leah Schuth, ARM, who can save you some shoe leather if you are in San Francisco this week. She has been very considerate and lumped together all the best bits to see at this week’s DAC. OK, the list may be a bit ARM-centric, but if you want IoT, wearable electronics and energy management, you know where to go.

We all want innovation, but can the industry afford it? Hoping to instill best practice, Eric, Dassault Systemes, writes an interesting, detailed piece on design-manufacturing collaboration for a harmonious development cycle.

A tutorial on your PC is a great way to learn – in this case, the low power advantage of LPDDR4 over earlier LPDDR memory. Corrie Callenbach brings this whiteboard tutorial by Kishote Kasamsetty to our attention in Whiteboard Wednesdays—Trends in the Mobile Memory World.

A review of IP integration is presented by Drew Wingard, Sonics, who asks what has been learned over the last two decades, what matters and why.

Deeper Dive – Is IP reuse good or bad?

Friday, May 30th, 2014

To buy or to reuse, that is the question. Caroline Hayes, Senior Editor, asked four industry experts about the pros and cons of IP reuse versus third party IP: Carsten Elgert (EC), Product Marketing Director, IPG (IP Group), Cadence; Tom Feist (TF), Senior Marketing Director, Design Methodology, Xilinx; Dave Tokic (DT), Senior Director, Partner Ecosystems and Alliances, Xilinx; and Warren Savage (WS), President and CEO, IPextreme.

What are the advantages and disadvantages when integrating or re-using existing IP?

WS: This is sort of analogous to asking what are the advantages/disadvantages of living a healthy lifestyle? The disadvantages are few, the advantages myriad. But in essence it's all about practicality. If you can re-use a piece of technology, that means you don't have to spend money developing something different, which includes a huge cost of verification. Today's chips are simply too large to functionally verify every gate. Part of every chip verification strategy assumes that pre-existing IP has already had its verification during its development, and if it is silicon-proven, this only decreases the risk of any latent defects being discovered. The only reason not to reuse an IP is that the IP itself is lacking in some way that makes the case for creating a new IP instead.

TF: Improved productivity. Reuse can have disadvantages when using older IP on new technologies; it is not always possible to target the newer features of a device with old IP.

DT: IP reuse is all about improving productivity and can result in significantly shrinking the design time, especially with configurable IP. Challenges come when the IP itself needs to be modified from the original, which then requires additional design and verification time. Verification in general can be more challenging, as most IP is verified in isolation and not in the context of the system. For example, if the IP is being used in a way the provider didn't "think of" in their verification process, it may have bugs that are discovered during the integration verification phase; the problem then has to be traced back to determine which IP has the issue and to correct and re-verify that IP.

EC: The benefits of using your own in-house IP are that you know what the IP is doing and can use it again – it is also not available as third party IP, which provides differentiation. The disadvantage is that it is rarely documented well enough to be used by different departments. The same engineers know what they are getting when they reuse their own IP, but properly documenting it so that a neighboring department can build a product with it can be time-consuming. It is also the case that unwanted behavior is not verified. However, it is cheaper and it works.

What are the advantages and disadvantages of using third party IP?

WS: The advantage of using third party IP is usually that the providing company is a domain expert in a certain field. By using IP from that company, you are in fact licensing expertise from this company and putting it to use in an effective way. Think of it like going to dinner at a 4-star Michelin rated restaurant. The ingredients may be ordinary, but how they are assembled is far more exceptional than anything the ordinary person can achieve on their own.

TF: Xilinx cannot cover all areas. Having a third party ecosystem allows us to increase the reach to customers. We have qualification processes in place to ensure the quality of the IP is up to the required standard.

DT: Xilinx and its ecosystem provide more than 600 cores across all markets, with over 130 IP providers in our Alliance Program. These partners not only provide more "fundamental" standards-based IP, but also provide a very rich set of domain and application-specific IP that would be difficult for Xilinx to develop and support. This allows Xilinx technology to be more easily and quickly adopted in hundreds of applications. Some of the challenges come in terms of consistency of deliverables, quality, and business models. Xilinx has a mature process of partner qualification and also works with the partner to expose IP quality metrics when we promote a partner IP product, which helps customers make smarter decisions when choosing a provider or IP core.

EC: Third party IP means there is no rewriting. Without it, it could take hundreds of man-years to develop a function that is not a major selling point for a design. Third party IP is compatible as well as bug-free, or at least has very limited bugs. It is like spending hundreds of hours redesigning the steering wheel of a car – it is not a differentiating selling point of the vehicle.

Is third party IP a justifiable risk in terms of cost and/or compatibility?

DT: The third party IP ecosystem for ASIC and programmable technologies has been around for decades. There are many respected and high quality providers out there, from smaller specialized IP core partners such as Xylon, OmniTek, Northwest Logic, and PLDA, to name a few, up to industry giants like ARM and Synopsys generating $100M's in annual revenue. But ultimately the customer is responsible for determining the system, cost, and schedule requirements, evaluating IP options, and making the "build vs. buy" decision.

EC: I would reformulate that question: Can [the industry] live without IP? Then, the risk is justifiable.

How important are industry standards in IP integration? What else would you like to see?

TF: IP standards are very important. For example, Xilinx is on the IEEE P1735 working group for IP security. This is very important to protect customer, third party and Xilinx IP throughout flows that may contain third party EDA tools and still allow everyone to interoperate on the IP. We hope to see the 2.0 revision of this standard ratified this year so all tool vendors and Xilinx can adopt it and make IP truly portable, yet protected.

DT: Another example is AMBA AXI4, where Xilinx worked closely with ARM to define this high speed interconnect standard to be optimized for integrating IP on both ASIC and programmable logic platforms.

WS: Today, not so much. There has been considerable discussion on this topic for the last 15 years, and various industry initiatives have come and gone over the years. The most successful one to date has been IP-XACT. There is a massive level of IP reuse today, and the lack of standards has not slowed that. The way I see the industry handling this problem is through the deployment of pre-integrated subsystems that include both a collection of IP blocks and the embedded software that drives them. I think that within another five years, the idea of "Lego-like" construction tools will die, as they do nothing to solve the verification problem associated with such constructions.

Have you any statistics you can share on the value or TAM (Total Available Market) for EDA IP?
WS: I assume you mean IP? Which, I like to point out, is distinct from EDA. True, many EDA players are adopting an IP strategy, but that is primarily because the growth in IP and the stagnation of the EDA market are forcing those players to find new growth areas. Sustained double-digit growth is hard to ignore.

To the TAM (total available market) question, a lot of market research says the market is around $2 billion today. I have long postulated that the real IP market is at least twice the size of the stated market, sort of like the "dark matter" theories in astrophysics. But even this ignores considerable amounts of patent licensing, embedded software licensing and such, which dwarf the $2 billion number.

TF: According to EDAC, semiconductor IP revenue totaled $486 million, a 4.2% increase compared to Q4 2012, and the four-quarter moving average increased 9.6%.

EC: For interface IP alone – excluding processors, so no ARM – I can see the market growing 20% to $400 million [analyst IP Nest].

IP Integration to accelerate SoCs

Thursday, May 29th, 2014

As the accelerating market for SoCs inevitably means a faster rate of adoption, system level designers are also faced with fragmenting markets, newly adopted standards and multiple types of complex requirements in a system. The integration of IP (Intellectual Property) can be a boon, but there are caveats, as Caroline Hayes, Senior Editor, reports.

There are many types of IP in use today: processor, DSP, high-speed IP, clocking IP, even whole systems. There is also a variety of high-speed I/O and infrastructure IP (such as the AXI4 interconnect). IP is used to support protocols, interfaces, controllers and processors on the core.

Dave Tokic, Senior Director, Partner Ecosystems and Alliances, Xilinx, believes that IP is invaluable. "It just isn't possible to create the systems being developed without substantial IP integration," he says. He goes on to explain the appeal of IP. "More and more, designers are being asked to create increasingly complex designs much faster on programmable devices. As such, we've seen a tremendous growth in the number of available IP cores and their use and reuse on programmable devices. Typically, we see the use of a broad range of connectivity and high speed I/O such as Ethernet, PCI Express, SDI, MIPI, and HDMI; general purpose and specialized processors, from compact Xilinx MicroBlaze microcontrollers and high-speed DSP to the powerful multi-core ARM A9 application processors we've integrated into the Zynq All Programmable SoC; optimized DMA and memory controllers, all the way up to the highest performance Hybrid Memory Cube controllers; application-specific IP for compression such as HEVC and JPEG-2000 to high performance image and video processing pipeline IP; and a broad range of clocking and infrastructure IP, such as the AMBA AXI4 interconnect."
For Carsten Elgert, Product Marketing Director, IPG (IP Group), Cadence, this diversity in system design is why the IP market is growing: according to EDAC, semiconductor IP revenue totaled $486 million in Q4 2013, a 4.2% increase compared to Q4 2012.

Tom Feist, Senior Marketing Director, Design Methodology, Xilinx, summarizes it thus: “As geometries shrink, typically parts and designs get larger and more complex yet design teams do not. This means higher productivity is required to produce larger designs. A key factor to address this is IP reuse.”
The proliferating connections to an outside DDR controller, flash memory controller, display output or other display standards, and communications protocols like USB are all defined, and the designer has to meet the standards of the protocol committees. Elgert says: "When designing an SoC, there is no time, and [the designer] usually does not have the knowledge to design the standard protocol by himself – the end product will be sold because of a unique functionality, not because of an excellent USB protocol."
It is, he argues, far easier to use IP blocks as components, buying a peripheral I²C bus or UART protocol rather than building 20k gates. The availability of IP helps – there is interface IP, communications IP, microprocessor IP and IP from third party vendors, such as EDA companies and ARM, for example, says Elgert.
He points out that IP is bought to ensure code compatibility, and it includes high-end versions, such as ARM, as well as low-end 8-bit processors in SoCs with fewer than 3k gates. There is also EDA IP, and there are legacy processors. "And do not forget analog IP," counsels Elgert. "The SoC analog or mixed signal portion needs PLLs and sensors using ADCs and DACs – this is very common if the SoC is analog."
With so much to choose from, I asked what the 'rules', if there are any, are for choosing IP. "Some of the key factors for successful IP use are the quality of the IP and ease of integration," Tokic believes, referring to the company's functional verification and validation. "We work closely with our ecosystem of qualified IP providers to provide visibility to our customers on various IP quality metrics… [We] led the industry in automating IP integration with the IP Integrator technology built into the Vivado Design Suite, which shortens the design cycle up to 10x over more manual processes."

Elgert breaks it down by technology node, posing the question for engineers to ask: "What is available on the process node, and what am I forgetting? This can be an extensive shopping list for complex systems," he says. He also advises narrowing the choice down to a preferred silicon vendor.
For example, he says, the PHY and the layer stack are relevant, being specific to each available technology node. "I would consider as many silicon technologies as possible," is his advice.
Other items on Elgert’s list are to consider the technical performance, the PPA, or Power Performance Areas. “What is the power consumer, and how big is the silicon? Silicon still costs money. Compare PPA, which is often confidential information,” he says “for responsiveness and clarity of data – EDA support is important here”.
To implement the IP, Feist observes that customers prefer to use industry standards and avoid proprietary bus/interconnect schemes. "By leveraging ARM's AMBA AXI4 protocol, [customers] can leverage state-of-the-industry interconnect that enables point to point connections versus a bus structure. This accelerates their design development and the designs are more portable." (The vast majority of internal bus structures use AMBA AXI, confirms Elgert.)

Xilinx relies on an ecosystem for IP integration support. Feist explains the role of the Vivado design suite. "In Vivado IP Integrator (IPI), all parts of the design should be packaged as IP. If you are just using third party or Xilinx IP, this is done for you. For customer IP, they need to package it up, and we provide a packager for this. To get the most out of IPI, the IP should be designed in a way that uses interfaces where possible, like the Xilinx IP. Using AXI allows customers to leverage IP that comes with Vivado, like bus monitors, bandwidth checkers, JTAG stimulus, bus functional models, etc., that should speed up development of the IP."
By providing an extensible IP catalog, which can contain Xilinx, third-party, and intra-company IP that can be shared across a design team, division, or company, the suite leverages industry standards such as ARM’s AXI interconnect and IP-XACT metadata when packaging IP. With IP Integrator, Vivado facilitates rapid development of smarter systems, which can include embedded, DSP, video, analog, networking, interface, building block, and connectivity IP consolidated into a hierarchical system and IP-centric view.
Users can also package their own RTL, or C/C++/SystemC and MATLAB/Simulink algorithms into the IP catalog using High-Level Synthesis (HLS) or System Generator for DSP with the IP packager.
Xilinx describes IPI as providing automated IP subsystem generation to speed configuration. For Feist, IPI provides the second step, allowing a user to quickly integrate an IP created in HLS into a full design. "The first step happens earlier, in the HLS tool, which adds AXI interfaces automatically to the C code. With both these steps it is possible to connect an IP generated in HLS to an ARM core pretty quickly."
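As a rough illustration of the first step Feist describes, here is a small C++ function annotated for HLS with AXI interfaces. The pragma spellings follow common Vivado HLS usage, but they should be checked against the tool documentation for a given release, and the function itself is a made-up example rather than Xilinx code.

```cpp
// Sketch of a C++ function prepared for HLS with AXI interfaces. The generated
// core exposes AXI4 master ports for the data buffers and an AXI4-Lite port for
// control, so IP Integrator can wire it to an ARM core.
#include <cstdint>

void scale_buffer(const int32_t* in, int32_t* out, int len, int gain) {
#pragma HLS INTERFACE m_axi     port=in  offset=slave  // AXI4 master for reads
#pragma HLS INTERFACE m_axi     port=out offset=slave  // AXI4 master for writes
#pragma HLS INTERFACE s_axilite port=len               // control register on AXI4-Lite
#pragma HLS INTERFACE s_axilite port=gain
#pragma HLS INTERFACE s_axilite port=return            // start/done handshake

    for (int i = 0; i < len; ++i) {
        out[i] = in[i] * gain;   // simple element-wise scaling kernel
    }
}
```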
Synopsys is also helping customers reduce configuration and debug time with a 'plug and play' IP accelerator program, to be announced at DAC next month. It will package its DesignWare IP together with hardware IP prototyping kits and software development kits so that IP can be modified with a fast iteration flow and without the usual time penalties; that was all Dr Johannes Stahl, Director, Product Marketing, Virtual Prototyping, Synopsys, would reveal on a visit to the UK ahead of DAC.

Turning to EDA’s role in IP, Elgert says the same challenges occur as are met when designing an SoC. The GDS2 or RTL has to match the RTL of an in-house design. The IP vendor has to support the IP flow, he says, but there is a delivery burden on EDA companies to support customers to ensure that the various tool flows and verification platforms are enabled.
All agree that as geometries shrink, parts and designs get larger and more complex. This creates demands on small (even shrinking) design teams and IP use and reuse can accelerate the timeframe of design, verification and eventually time to market, with designs that are differentiated with functions specifically for the application.

IP Integration: Not a Simple Operation

Tuesday, May 13th, 2014

Gabe Moretti, Contributing Editor

Although the IP industry is about 25 years old, it still presents problems typical of immature industries.  Yet the use of IP in systems design is now so popular that one is hard pressed to find even one system design that does not use IP.  My first reaction to the use of IP is "back to the future".

For many years of my professional career I dealt with board level design as well as chip design.  Between 1970 and 1990, IP was sold as discrete components by companies such as Texas Instruments, National, and Fairchild, among many others.  Their databooks described precisely how to integrate the part in a design.  Although a defined standard for the contents did not exist, a de-facto standard was followed by all providers.  Engineers, using the databook information, would choose the correct part for their needs, and the integration was reasonably straightforward.  All signals could be analyzed in the lab since pins and traces were available on the board.

Enter Complexity

As semiconductor fabrication progressed, the board became the chip, and the components are now IP modules.  One would think that integration would also remain reasonably straightforward.  But this is not the case.  Concerns about safeguarding intellectual property rights took over, and IP developers became reluctant to provide much information about the functioning of the module, afraid that its functionality would be duplicated and thus they would lose sales.

As the number of transistors on a chip increases, the complexity of porting a design from one process to the next also increases.

Figure 1. Projected number of transistors on a chip

Developers found that providing a hard macro, that is a module already placed and routed and ready for fabrication by the chosen foundry, was the best way to protect their intellectual property rights.  But such a strategy is costly, because foundries cannot just validate every macro for free.  The IP provider must be in a position to guarantee volume use by the foundry's customers.  Thus many IP modules must be synthesized, which means they must be verified.

Karthik Srinivasan, Corporate Application Engineer Manager, Analog Mixed Signal at Apache Design Solutions, an Ansys company, wrote that "SoC designs today integrate a significant number of IPs to accelerate their design times and to reduce the risks to their design closure. But the gap in expectations of where and how the sign-off happens between the IP and SoC designers creates design issues that affect the final product's performance and release. IP designers often validate their IPs in isolation with expectations of near ideal operating conditions. SoCs are verified and signed off with mostly abstracted or in many cases 'black-box' views of IPs. But as more and more high speed and noise sensitive IPs get placed next to each other or next to the core digital logic, failure conditions that were not considered emerge. This worsens when these IPs share one or more power and ground supply domains. For example, when a bank of high speed DDR IPs is placed next to a bank of memories, the switching of the DDR can generate sufficient noise on the shared ground network to adversely affect the operation of the memory.

As designs migrate to smaller technology nodes, especially those using FinFET based technologies, this gap in the design closure process is going to worsen the power noise and reliability closure process."

DDR memory blocks are becoming a greater and greater portion of a chip as the portion of functionality implemented in firmware increases.  Bob Smith, Senior VP of Marketing and Business Development at Uniquify, makes the case for a system view of memories.

"DDR IP is used in a wide variety of ASIC and SoC devices found in many different applications and market segments. If the device has an embedded processor, then it is highly likely that the processor requires access to external DDR memory. This access requires a DDR subsystem (DDR controller, PHY and IO) to manage the data traffic flowing to and from the embedded processor and external DDR memory.

Whether it is procured from an external source or developed by an internal IP group, almost all chip design projects rely on DDR IP to implement the on-chip DDR subsystem. The integration techniques used to implement the DDR IP in the chip design can have far reaching effects on DDR performance, chip area, power consumption and even reliability.

Figure 2. A non-optimized DDR implementation

The figure above illustrates a typical on-chip DDR implementation. Note that while the DDR I/Os span the perimeter of the chip, the DDR PHYs are configured as blocks and are placed in such a way that they are centered with the I/Os. As shown in the diagram, this not only wastes valuable chip area, but also creates other problems."

"A much more efficient way to implement the DDR subsystem IP is to deliver a DDR PHY that is exactly matched to the DDR I/O layout. By matching the PHY exactly to the I/Os, a tremendous amount of area is saved and power is reduced. Even better, the performance of the DDR can be improved since the PHY-I/O layout minimizes skew."

Figure 3. Optimized DDR block

As process technology progresses and moves from 32 nm to 22 nm and then 14 nm and so on, the role of the foundry in the place and route of an entire chip increases.  In direct proportion, the freedom of designers to determine the final topology of a chip decreases.  Thus we are rapidly reaching the point where only hardened (hard) modules will be viable.  The number of viable providers in the IP industry is shrinking rapidly, and many significant companies have been acquired in the last three or four years by EDA companies that are becoming major providers of IP products.

Synopsys started selling IP around 1990 and now has a wide variety of IP in its portfolio, mostly developed internally.  Cadence, on the other hand, has built its extensive inventory of IP products mostly through acquisition.

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systemes, points out that there are a number of issues to deal with regarding IP:

1. IP Sourcing:  Companies are going to need a way to source IP.  They will need access to a cataloging system that allows searching of both internally developed (or under development) IP and externally available third party IP.

2. IP Governance: For internally developed IP, there need to be systems and methodologies for handling the promotion of work in progress to company-certified IP.  For both internal and externally acquired IP, there needs to be a process to validate the IP, and then a system to rate the IP internally based on previous use, available documentation, and other design artifacts.

3. IP Issue and Defect Tracking: Since IP will be in use in multiple projects, a formal system is required to handle issue and defect tracking across multiple projects against all IP.  If one group finds an issue with a piece of IP, all other project groups that are using that IP need to be alerted to the issue and to the plan for resolving it.  Ideally this should be integrated into the design tools that are used to assemble IP as well (see the data-model sketch after this list).  If a product has already gone out the door with the defective IP, these issues need to be managed and corrective actions need to ensue based on any defects found.

4.       IP Security:  There are different levels of protection needed for different types of IP and robust methods must be put in place to ensure the security of IP.   First, company critical IP must be secure, and systems need to be put in place to make sure that the IP does not leave the company premises.  If collaborating with partners, any acquired IP must also be handled so that it is only used in the designs that are being collaboratively designed.  There need to be restrictions on using partner IP in design blocks which in turn can become IP in other designs.  There needs to be a way to track the ‘pedigree’ of IP.

5. Variant-driven, platform-based design:  Ultimately, for companies to keep up with shortening market windows and application-driven platform design, they will need to adopt a system where there are base platforms with pre-qualified IP that can be configured on the fly and used as a starting point for new designs.  These systems would automatically populate a design workspace with the required IP from a company-approved catalog as the basis of a new design moving forward.
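To make the cataloging and defect-tracking ideas above a little more tangible, here is a hypothetical C++ data-model sketch; the field names and certification states are invented for illustration and do not reflect the schema of ENOVIA or any other product.

```cpp
// Hypothetical data model for an internal IP catalog entry, touching on the
// sourcing, governance, defect-tracking and pedigree concerns listed above.
#include <string>
#include <vector>

enum class CertificationLevel { WorkInProgress, InternalUse, CompanyCertified };

struct DefectReport {
    std::string id;            // tracked once, visible to every project using the IP
    std::string description;
    bool        resolved = false;
};

struct IpCatalogEntry {
    std::string               name;
    std::string               version;
    std::string               origin;         // internal team or third-party supplier
    CertificationLevel        certification = CertificationLevel::WorkInProgress;
    std::vector<std::string>  prior_uses;     // tape-outs that have silicon-proven the block
    std::vector<DefectReport> defects;        // shared across all consuming projects
    std::vector<std::string>  license_terms;  // constraints that track the IP's 'pedigree'
};
```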

Integrating the Pieces

Farzad Zarrinfar, Managing Director of the Novelics Business Unit at Mentor Graphics, provided a synthesis of the problems facing designers.

"For IP integration, multiple kinds of IP, like 'Hard IP', 'Synthesizable Soft Peripheral IP', and 'Synthesizable Soft Processor IP', each with a different set of deliverables, use EDA tools for efficient ASIC/SoC designs. Selecting the optimal IP size (such as the smallest embedded memory IP) is a critical design decision. While free IP is readily available, it does not always provide the best solution when compared to fee-based IP that provides much better characteristics for the specific application.

IP integration to achieve smaller die size, lower leakage, lower dynamic power, or faster speed can provide designers with a more optimized solution that can potentially save millions of dollars over the life of the product, and better differentiate their chips in a highly competitive ASIC/SoC marketplace.”

Bill Neifert, Chief Technology Officer at Carbon Design Systems, observes: "Certainly, some designers at the bleeding edge differentiate every aspect of the subsystem and their own IP, but we're increasingly seeing others adopt whole subsystem designs and then make configuration tweaks. Think black box design; ARM's big.LITTLE offerings are prime examples of this trend.

Of course, in order to make these configuration changes, designers need to know the exact impact of the changes that they’re making. We see users doing this a lot on our IP Exchange web portal. They will download a CPAK (Carbon Performance Analysis Kit), a pre-built system or subsystem complete with software at the bare metal or O/S level. This gets them up and running quickly but not with their exact configuration. They’ll then iterate various configuration options in order to meet their exact design goals. It’s not unusual for a design team to compile 20 different configurations for the same IP block on our portal and then compare the impact of each of these different models on system performance.

Naturally, all of this impacts the firmware team quite a bit. The software developers don’t need to know exactly what the underlying hardware is doing, but the firmware team needs the exact IP configuration. The sooner these decisions can be made, the sooner they can start being productive. Integrating this level of software onto the hardware typically exposes a new round of IP optimizations that can be made as well. Therefore, it’s not unusual for IP configuration changes to happen in waves as additional pieces of IP and software are added to the system.”
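The CPAK flow Neifert describes is, in essence, a configuration sweep: build many variants of the same subsystem, benchmark each one, and rank them against the design goals. The Python sketch below shows the shape of such a sweep; the parameter names and the evaluate() stub are invented stand-ins for a real model build and bare-metal benchmark run, not Carbon’s actual portal API.

from itertools import product

# Hypothetical configuration space for a CPU subsystem sweep.
config_space = {
    "l2_cache_kb": [256, 512, 1024],
    "bus_width":   [64, 128],
    "prefetch":    [False, True],
}

def evaluate(cfg: dict) -> dict:
    """Placeholder for compiling a model and running bare-metal firmware on it."""
    perf = cfg["l2_cache_kb"] / 256 + cfg["bus_width"] / 64 + cfg["prefetch"]
    area = cfg["l2_cache_kb"] / 512 + cfg["bus_width"] / 128
    return {"perf": perf, "area_cost": area}

results = []
for values in product(*config_space.values()):
    cfg = dict(zip(config_space.keys(), values))
    results.append((cfg, evaluate(cfg)))

# Rank configurations by performance per unit of area cost, mirroring the
# "compare the impact of each model on system performance" step.
best_cfg, best_metrics = max(results, key=lambda r: r[1]["perf"] / r[1]["area_cost"])
print("Best configuration:", best_cfg, "->", best_metrics)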

Drew Wingard, CTO at Sonics, points out that standards matter.  “Because there are many sources for IP, the industry had to create and adhere to standards for integration. From a silicon vendor’s perspective, IP sources include third-party commercial components, internally designed blocks and cores, and customer-designed components. To meet the challenge of integrating IP components from many different places, SoC designers needed communication protocol standards. Communication protocol standards efforts began with the Virtual Socket Interface Alliance (VSIA), continued with the Open Core Protocol International Partnership (OCP-IP), and today reside with Accellera. Of course, our customers need to leverage de-facto standards such as ARM’s AMBA as well.  We owe our ability to integrate IP to the fundamental communication protocol standards work that these organizations performed.”

The Challenge of Verification

Sunrise’s Prithi Ramakrishnan is concerned about system verification.  “At a very high level, the main issue with IP is that the simulated environment is different from the final design environment.  Analog and RF IP is dependent on process/node, foundry, layout, extraction, model fidelity, and placement.  So you are either tied to just dropping it in ‘as is’ and treating it like a black box (nobody knows how it works or whether it meets the required specifications) or completely changing it (with the caveat that you can no longer expect the same results).  Digital IP needs to be resynthesized, followed by placement and routing, and it takes several iterations to make the IP you got work the way you want it to work. In addition, this process is extremely tool-dependent.

Finally, there are system level issues like interoperability, interface and controls (how does the IP talk to the rest of the SoC). A very important, often overlooked factor is the communication between the IP providers and the SoC implementation houses – there are documents outlining integration guidelines, but without an automated process that takes in all that information, a lot could be lost in translation.”
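One way to avoid the ‘lost in translation’ problem Ramakrishnan raises is to ship integration guidelines as machine-readable data that the SoC team can check automatically at assembly time. The sketch below illustrates the idea; the parameter names and limits are invented for illustration and are not based on any particular IP vendor’s deliverables.

# The IP provider ships constraints as data; the SoC team checks its
# instantiation against them automatically instead of re-reading a PDF.
ip_guidelines = {
    "clock_mhz":    {"min": 100, "max": 800},
    "supply_v":     {"min": 0.81, "max": 0.99},
    "bus_protocol": {"allowed": ["AXI4", "AHB-Lite"]},
}

soc_instantiation = {
    "clock_mhz": 950,        # violates the documented maximum
    "supply_v": 0.9,
    "bus_protocol": "AXI4",
}

def check_integration(guidelines: dict, instance: dict) -> list:
    """Return a list of human-readable guideline violations."""
    violations = []
    for param, rule in guidelines.items():
        value = instance.get(param)
        if "allowed" in rule and value not in rule["allowed"]:
            violations.append(f"{param}={value} not in {rule['allowed']}")
        if "min" in rule and not (rule["min"] <= value <= rule["max"]):
            violations.append(f"{param}={value} outside [{rule['min']}, {rule['max']}]")
    return violations

print(check_integration(ip_guidelines, soc_instantiation))
# -> ['clock_mhz=950 outside [100, 800]']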

The issue of how well a third-party IP has been verified will always haunt designers unless the industry finds a way to make IP as trustworthy as the TI 7400 and equivalent parts of the early days.  Bernard Murphy, CTO at Atrenta, observed: “One area that doesn’t get a lot of air-time is how an SoC verification team goes about debugging a problem around an IP. You have the old challenge – is this our bug or the IP developer’s bug? If the developer is down the hall, you can probably resolve the problem quickly. If they are now working for your biggest competitor, good luck with that. If this is a commercial IP, you work with an apps guy to circle around possibilities: maybe you are using it wrong, maybe you misunderstood the manual or the protocol, maybe they didn’t test that particular configuration for that particular use-case… Then they bring in their expert and go back through the cycle until you converge on an answer. Problem is, all this burns a lot of time and you’re on a schedule. Is there a way to compress this debug cycle?”

He offers the following suggestion.  “One important class of things to check for is the ‘didn’t test that configuration for that use-case’ situation mentioned above.  This is where synthesized assertions come in. These are derived automatically by the IP developer in the course of verifying the IP. They don’t look like traditional assertions (long, complex sequences of dependencies). They tend to be simpler, often non-obvious, and describe relationships not just at the boundary of the IP but also internal to it. Most importantly, they encode not just functionality but also the bounds of the use-cases in which the IP was tested. Think of it as a ‘signature’ for the function plus the verification of that function.”
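Murphy’s synthesized assertions can be approximated, in spirit, by mining the bounds of what the IP vendor actually exercised during verification and then flagging SoC-level activity that falls outside that envelope. The toy Python sketch below captures only per-signal value ranges; real tools mine far richer temporal and cross-signal properties.

from collections import defaultdict

def mine_bounds(traces):
    """traces: iterable of {signal_name: value} samples from IP-level verification."""
    bounds = defaultdict(lambda: [float("inf"), float("-inf")])
    for sample in traces:
        for sig, val in sample.items():
            lo, hi = bounds[sig]
            bounds[sig] = [min(lo, val), max(hi, val)]
    return dict(bounds)

def check_envelope(bounds, soc_sample):
    """Return signals whose SoC-level value was never seen during IP verification."""
    return [sig for sig, val in soc_sample.items()
            if sig in bounds and not (bounds[sig][0] <= val <= bounds[sig][1])]

# Invented example: the IP was only verified with burst lengths of 4 and 8.
ip_traces = [{"burst_len": 4, "qos": 0}, {"burst_len": 8, "qos": 2}]
bounds = mine_bounds(ip_traces)
print(check_envelope(bounds, {"burst_len": 16, "qos": 1}))   # -> ['burst_len']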

Thomas L. Anderson, VP of Marketing at Breker Verification Systems, pleads: integrate, but verify.  He argues that “The truth is that most SoC teams trust integration too much and verify too little. Many SoC products hit the market only after two or three iterations through the foundry. This costs a lot of money and risks losing market windows to competitors. Most SoC teams follow a five-step verification process:

  1. Ensure that each block, whether locally designed or licensed as IP, is well verified
  2. Use formal methods to verify that each block has been integrated into the SoC properly
  3. Assemble a minimal chip-level simulation testbench and run a few sanity verification tests
  4. Hand-write some simple C tests and run on the embedded processors in simulation or emulation
  5. Run the production software on the processors in simulation, emulation, or prototyping

The problem with this process is that the tests in steps 3 and 4 are too simple since they are hand-written. They typically verify only one block at a time, ignoring interaction between blocks. They also perform only one operation at a time, so they don’t stress cache coherency or any inherent concurrency within the design. Humans aren’t good at thinking and coding in parallel. Thorough SoC pre-silicon verification can occur only with multi-threaded, multi-processor test cases that string blocks together into realistic scenarios representing end-user applications of the chip.”
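The contrast Anderson draws, between hand-written tests that exercise one block at a time and multi-threaded scenarios that string blocks together, can be sketched as a simple scenario generator. The block names and operations below are invented for illustration and do not represent Breker’s actual test-synthesis technology.

import random

# Invented catalog of SoC blocks and the operations each can perform.
BLOCKS = {
    "dma":    ["copy_buffer", "scatter_gather"],
    "crypto": ["encrypt_block", "decrypt_block"],
    "video":  ["decode_frame"],
    "usb":    ["bulk_transfer"],
}

def generate_scenario(num_cpus: int = 4, ops_per_cpu: int = 3, seed=None) -> dict:
    """One scenario = a concurrent schedule of operations, one thread per CPU."""
    rng = random.Random(seed)
    schedule = {}
    for cpu in range(num_cpus):
        ops = []
        for _ in range(ops_per_cpu):
            block = rng.choice(list(BLOCKS))
            ops.append((block, rng.choice(BLOCKS[block])))
        schedule[f"cpu{cpu}"] = ops
    return schedule

# Each generated scenario keeps several blocks busy on several processors at
# once, the kind of concurrency a single hand-written directed test misses.
for cpu, ops in generate_scenario(seed=1).items():
    print(cpu, "->", ops)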

An Example of Complexity

Charlie Cheng, CEO of Kilopass, gave me an example of the complexity of choosing the correct IP for a design, using sparse matrix math.

With semiconductor IP comprising 90 percent of today’s semiconductor devices and memory IP accounting for over 50 percent of these complex SoCs, it’s no wonder that IP is the fastest growing sector of the overall semiconductor industry. As a result, managing third-party IP is a growing responsibility within today’s semiconductor companies. How to make the right choice from a growing quantity of IP is the major challenge facing engineering teams, purchasing departments, and executive management. The IP purchasing process these groups follow can be viewed as a sparse-matrix mathematical exercise, just without the actual formulas and data manipulation.

[Table: 1.8V Vendor A OTP qualification status across TSMC, UMC, SMIC, and GLOBALFOUNDRIES at the 28HP node]

The table shows two dimensions of a multi-dimensional matrix representing the variables confronting the purchasing company’s teams. In this two-dimensional matrix, imagine three additional tables for 1.8V operation with the same four foundries at 28HPL, 28HPM, and 28LP.  Now, replicate this in a fourth dimension for the variable of 2.5V. Add a fifth and sixth dimension to the matrix for Vendor B’s OTP.  If this were a mathematical evaluation, a figure of merit would be assigned to each cell in each plane of the multidimensional matrix.

For example, Vendor A’s OTP at 1.8V has JEDEC three-lot qualification at 28HP at TSMC, UMC, and GLOBALFOUNDRIES, and similarly for 28HPM and 28HPL, but has only working silicon at 28LP.  Vendor B’s OTP may not have received three-lot qualification at any of the foundries on any of the processes, but may have first silicon or one- or two-lot qualification at one or more of them. In the mathematical exercise, a figure of merit would be assigned to being fully qualified, one- or two-lot qualified, first silicon, or not taped out.  Using matrix algebra, the formal exercise would return a result, but even an intuitive evaluation suggests that Vendor A, with more three-lot qualifications at multiple foundries, would have an edge over Vendor B, which has none.

The above exercise would be typical of the evaluation occurring in the engineering team. A similar exercise would be occurring in the purchasing department with terms and conditions presented in the licensing agreement and royalty schedule each vendor submits. Corporate and legal would perform a similar exercise.

If the mathematical exercise were actually performed and all the cells in the matrix had assigned values, a definitive solution would be easily achieved. However, when the cells throughout the matrix are only sparsely populated, the exercise yields a probable outcome rather than a certain one.
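The exercise can be made literal with a small amount of code. The sketch below scores each vendor by summing a figure of merit over every (voltage, process, foundry) cell of a sparse matrix; all of the qualification entries and weights are invented placeholders, but the outcome mirrors the intuitive conclusion above: broader three-lot coverage wins.

# Figure-of-merit weights for qualification status; None marks an empty cell.
QUAL_SCORE = {"three_lot": 1.0, "two_lot": 0.7, "one_lot": 0.5,
              "first_silicon": 0.3, "working_silicon": 0.3, None: 0.0}

FOUNDRIES = ["TSMC", "UMC", "SMIC", "GLOBALFOUNDRIES"]
PROCESSES = ["28HP", "28HPM", "28HPL", "28LP"]

# Sparse matrix: only cells with a known status are listed (invented data).
matrix = {
    ("A", "1.8V", "28HP",  "TSMC"):            "three_lot",
    ("A", "1.8V", "28HP",  "UMC"):             "three_lot",
    ("A", "1.8V", "28HP",  "GLOBALFOUNDRIES"): "three_lot",
    ("A", "1.8V", "28LP",  "TSMC"):            "working_silicon",
    ("B", "1.8V", "28HP",  "TSMC"):            "one_lot",
    ("B", "1.8V", "28HPM", "UMC"):             "first_silicon",
}

def vendor_score(vendor: str, voltage: str = "1.8V") -> float:
    """Sum the figure of merit over every process/foundry cell for one vendor."""
    total = 0.0
    for process in PROCESSES:
        for foundry in FOUNDRIES:
            total += QUAL_SCORE[matrix.get((vendor, voltage, process, foundry))]
    return total

for vendor in ("A", "B"):
    print(f"Vendor {vendor}: {vendor_score(vendor):.1f}")
# Vendor A's broader three-lot coverage gives it the higher score.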

Blog Review – Mon. April 28 2014

Monday, April 28th, 2014

The first PHYs; compute shaders; Accellera Day online; Security and privacy
By Caroline Hayes, Senior Editor.

Mentor Graphics’ Dennis Brophy initially questioned the need for an online Accellera Day, but soon retracted that view and is even offering to keep blog visitors informed as more are posted.

If you are interested in compute shaders, Sylvek at ARM explains clearly and concisely what they are and how to add one to an application.

Another informative blog is the first of three from Corrie Callenbach, Cadence, directing us to Kevin Yee’s video: Take command of MIPI PHYs. The presenter takes us through the first of three PHY specifications introduced by MIPI.

Intel’s Mayura Garg points blog visitors to Michael Fey’s presentation at ISS2014. Fey, Executive Vice President, General Manager of Corporate Products and Intel Security CTO, focuses on Security and Privacy in the Information Age – the blog’s own ‘scary video’.
