Part of the Chip Design Magazine Network

Posts Tagged ‘Memory Interfaces’

Behold the Intrinsic Value of IP

Monday, March 13th, 2017

By Grant Pierce, CEO

Sonics, Inc.

Editor’s Note [this article was written in response to questions about IP licensing practices. A follow-up article, “Determining a Fair Royalty Value for IP,” will be published in the next 24 hours].

[Figure: The IP Value chart]

The intrinsic value of Intellectual Property is like beauty: it is in the eye of the beholder. The beholder of IP value is ultimately the user/consumer of that IP – the buyer. Buyers tend to value IP based upon their ability to use that IP to create competitive advantage, and therefore higher value, in their end products. The IP Value figure above was created to capture this concept.

To be clear, this view is NOT about the relative bargaining power between the buyer and the supplier of IP – the seller – that is built on the strength of patents. Mounds of court cases and textbooks explore the question of patent strength. What I am positing is that viewing IP value as a matter of the buyer’s perception is a useful way to think about the intrinsic value of IP.

Position A on the value chart is a classification of IP that allows little differentiation by the buyer, but addresses a more elastic market opportunity. This would likely be a Standard IP type implementing an open standard. IP in this category would likely have multiple sources and therefore competitive pricing. Although compliance with the standard would be valued by the buyer, the price of the IP itself would likely be lower, reflecting its commodity nature. Here, the value might be equated to the cost of internally creating equivalent IP. Since few, if any, buyers in this category would see an advantage in making this IP themselves, and because there are likely many sellers, the intrinsic value of this IP is determined on a “buy vs. buy” basis. Buyers are going to buy this IP regardless, so they will look for the seller with the most favorable proposition – which often comes down to price.

Position B on the value chart is a classification of IP that allows for differentiation by the buyer, but addresses a more elastic market. IP in this category might be less constrained by standards requirements. Buyers would likely implement unique instantiations of this IP type and, as a result, gain some competitive advantage in their end products. Buyers in this category could make this IP themselves, but because there are commercial alternatives, the intrinsic value is determined by applying a “make vs. buy” analysis. Because the value propositions of sellers of this type of IP often include important but soft benefits (e.g., ease of re-use, time-to-market, esoteric features), the make-vs.-buy determination is highly variable and often buyer-specific. This in part explains the variability of pricing for this type of IP.

Position C on the value chart is a classification of IP that serves a less elastic market and empowers buyers to differentiate through their unique implementations of that IP. This classification of IP supports license fees and larger, more consistent, royalty rates. IP in this category becomes the competitive differentiation that sways large market share to the winning products incorporating that IP. This category supports some of the larger IP companies in the marketplace today. Buyers in this category are not going to make the IP themselves because the cost of development of the product and its ecosystem is too prohibitive and risky. The intrinsic value really comes down to what the seller charges.

This is a “buy vs. not make” decision – meaning one either buys the IP or doesn’t make the product at all. A unique hallmark of IP in this position is that, so long as the seller applies pricing consistently, all buyers know at the very least that they are not disadvantaged relative to the competition, and will continue to buy. Sellers will often give some technology away to encourage long-term lock-in. For these reasons, pricing of IP in this space tends to be quite stable. That pricing level must stay below the point at which customers begin to perform unnatural acts and explore unusual alternatives. So long as it does, the price charged probably represents the intrinsic value accurately.

Position D on the value chart is a classification of IP that requires adherence to a standard. Like category A, adherence to the standard does not necessarily allow differentiation to the buyer. The buyer of this category of IP might be required to use this IP in order to gain access to the market itself. Though the lack of end-product differentiation available to the buyer might suggest a lower license fee and/or lower to zero royalty rate, we see a significantly less elastic market for this IP type.

This IP category tends to comprise products adhering to closed and/or proprietary standards, which have given rise to several significant IP business franchises in the marketplace today. Sellers of position D IP are characterized in part by the need to spend significant time and money to develop, market, and maintain (defend) their position, in addition to spending on IP development. For this reason, teasing out the intrinsic value of this IP is not as straightforward as “make vs. buy.” Pricing is really viewed more as a tax, so the intrinsic value determination is made on a “fair tax” basis. If buyers think the tax is no longer “fair,” for any reason, they will move to a different technology.

Examples:

Position A:  USB, PCI, memory interfaces (Synopsys)

Position B:  Configurable Processors, Analog IP cores (Synopsys, Cadence)

Position C:  General Purpose Processors, Graphics, DSP, NoC, EPU (ARM, Imagination, CEVA, Sonics)

Position D: CDMA, Noise Reduction, DDR (Qualcomm, Dolby, Rambus)

Why Customer Success is Paramount

Sonics is an IP supplier whose products tend to reside in the Position C category. Sonics sets its semiconductor IP pricing as a function of the value of the SoC design/chip that uses the IP. There is a spectrum of value functions for the Sonics IP depending upon the type of chip, complexity of design, target power/performance, expected volume, and other factors. Defining the upper and lower bounds of the value spectrum depends upon an approximation of these factors for each particular chip design and customer.

Royalties are one component of the price of IP and are a way of risk sharing to allow customers to bring their products to market without having to pay the full value of the incorporated IP up front. The benefit being that the creator and supplier of the IP is essentially investing in the overall success of the user’s product by accepting the deferred royalty payment. Sonics views the royalty component of its IP pricing as “customer success fees.”

With its recently introduced EPU technology, Sonics has adopted an IP business model based upon an annual technology access fee and a per power grain usage fee due at chip tapeout. Under this model, customers have unlimited use of the technology to explore power control for as many designs as they want, but only pay for their actual IP usage in a completed design. The tape out fee is calculated based on the number of power grains used in the design on a sliding scale. The more power grains customers use, the more energy saved, and the lower the cost per grain. Using more power grains drives lower energy consumption by the chip – buyers increase the market value of their chips using Sonics’ EPU technology. The bottom line is that Sonics’ IP business model depends on customers successfully completing their designs using Sonics IP.
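The sliding scale described above can be illustrated with a small sketch. The tier boundaries and per-grain rates below are invented for illustration only; actual Sonics pricing is not public and is set per engagement.

```python
# Hypothetical sliding-scale tapeout fee: the more power grains a design
# uses, the lower the cost per grain. Tier boundaries and per-grain rates
# are invented for illustration -- not actual Sonics pricing.

TIERS = [                    # (grains up to this count, price per grain)
    (100, 50.0),
    (1000, 30.0),
    (float("inf"), 15.0),
]

def tapeout_fee(grains: int) -> float:
    """Total fee at tapeout: each tier's grains billed at that tier's rate."""
    fee, lower = 0.0, 0
    for upper, rate in TIERS:
        fee += max(0, min(grains, upper) - lower) * rate
        if grains <= upper:
            break
        lower = upper
    return fee
```

With these made-up tiers, a 150-grain design pays 100 grains at the first rate and 50 at the second, so the effective per-grain cost falls as usage grows, mirroring the "more grains, lower cost per grain" model in the text.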

How to Drive a Successful IoT Application Design Project

Monday, December 14th, 2015

Mladen Nizic and Brad Griffin, Cadence Design Systems

Internet of Things (IoT) applications are changing the way we live. They are changing how we manufacture and transport goods, deliver healthcare and other services, manage energy distribution and consumption, and even how we travel and communicate. Edge nodes are essential elements of an IoT application, providing an interface between the digital and analog worlds. Despite the diversity of IoT applications, a typical edge node includes sensors to collect information from the outside world, some amount of processing power and memory, the ability to receive and transmit information, and the ability to control devices in the immediate vicinity. Although modest in device count compared to large systems on chips (SoCs), edge-node devices are very complex systems that integrate analog and digital functions in silicon, package, and board, and are controlled by software that must operate for many years while harvesting energy or running from a coin battery.

Engineers need to design, verify, and implement these edge-node systems rapidly to meet tight market windows. To achieve aggressive timelines, they need a flow that enables system prototyping, hardware/software verification, mixed-signal chip design and manufacturing, and chip/IC package/board integration. In this article, we will focus on two critical steps in the flow that impact the design cycle and the success of an entire project: 1) simulation/verification of the system/chip and 2) signal integrity analysis in chip-package-board integration.

Simulation/Verification

Verification is the biggest design challenge today, particularly when analog functionality is involved, and IoT devices are no exception. High-performance analog, digital and mixed-signal simulation is indispensable but not sufficient and must be complemented by a model-based, metric-driven methodology. Key elements of the methodology are as follows:

Verification planning and management: Engineers develop verification plans and manage plan execution carefully to filter out issues as early as possible. A typical IoT device operates in many different modes (standby, active sensing, recharging, data processing, transmitting/receiving, test, etc.), and the functional verification plan must cover all modes and their transitions in a well-defined sequence. Since operations are controlled by embedded software, the software is ideally verified in conjunction with the hardware. It is important to understand which tests can be performed at a higher level of abstraction and which require transistor-level simulation. For example, a high-level abstraction can verify that the software algorithm/processor applies the correct controls to a multiplexer selecting an analog input. However, transistor-level simulation is required to verify that a built-in A-to-D converter operates correctly over a specified temperature range.
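The mode/transition portion of such a plan can be sketched as a simple legality-and-coverage check. The modes and legal transitions below are illustrative, not taken from any particular device:

```python
# Sketch of a mode-transition verification plan check. The mode names and
# the set of legal transitions are invented for illustration.

LEGAL = {
    "standby":         {"active_sensing", "recharging", "test"},
    "active_sensing":  {"data_processing", "standby"},
    "data_processing": {"transmitting", "standby"},
    "transmitting":    {"standby"},
    "recharging":      {"standby"},
    "test":            {"standby"},
}

def check_sequence(seq):
    """Verify every transition in a test sequence is legal; return the
    set of modes the sequence has covered."""
    for a, b in zip(seq, seq[1:]):
        if b not in LEGAL[a]:
            raise ValueError(f"illegal transition {a} -> {b}")
    return set(seq)

covered = check_sequence(
    ["standby", "active_sensing", "data_processing", "transmitting", "standby"])
uncovered = set(LEGAL) - covered   # modes this test never exercised
```

Runs like this one feed the plan's coverage bookkeeping: the `uncovered` set tells the team which modes (here, recharging and test) still need a directed test.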

Behavioral modeling: Due to the complexity of IoT designs, executing the verification plan using transistor-level simulations is practically impossible and needs to be reserved for verifying specific electrical characteristics that require a high level of accuracy and correlation to silicon. For most functional verification planning, the investment in abstracting analog components using Verilog or VHDL behavioral models pays off by making verification much more efficient in thoroughly covering the entire system. Recent advancements in Real Number Modeling (RNM) using Verilog-AMS/wreal or SystemVerilog IEEE 1800 have made the simulation of analog, digital, and software components of an IoT system practical. Of course, modeling has to be done with a clear purpose as required by the verification plan, and the models must be in alignment with the specifications or transistor-level circuit in the case of a bottom-up design.
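The spirit of real number modeling – representing an analog block as a function on real values rather than as a transistor-level netlist – can be sketched outside any HDL. Here is a hypothetical ideal ADC in Python (resolution and reference values are illustrative):

```python
# Behavioral (real-number-style) model of an ideal N-bit ADC: a real
# analog value in, an integer code out. This is the kind of abstraction
# RNM enables, sketched in Python purely for illustration.

def adc_model(vin: float, vref: float = 1.0, bits: int = 8) -> int:
    """Quantize vin in [0, vref) to an integer code; out-of-range inputs
    clip to the rails, and full scale saturates at the top code."""
    vin = min(max(vin, 0.0), vref)          # clip to input range
    code = int(vin / vref * (2 ** bits))
    return min(code, 2 ** bits - 1)         # full-scale saturates
```

A model at this level simulates orders of magnitude faster than SPICE, which is what makes whole-system functional runs (hardware plus embedded software) practical; the transistor-level circuit is then reserved for accuracy-critical checks such as temperature-range behavior.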

Coverage metrics: To assess the success of the verification of IoT designs, which are, by default, mixed-signal in nature, digital concepts of coverage metrics need to be extended to analog and mixed-signal—at least when it comes to functional verification. Using property specification language (PSL) or SystemVerilog assertions (SVAs) in conjunction with RNM simulations gives designers the ability to collect coverage, set pass/fail criteria, and evaluate the quality and completeness of the testbench, which can be used to drive improvement. This feedback loop is a major methodology improvement in comparison with the traditional direct test method.
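Extending coverage to an analog quantity amounts to binning the values a real-valued signal actually took during simulation, much as functional coverage bins digital values. A minimal sketch (the bin edges are invented for illustration):

```python
# Sketch of analog/mixed-signal coverage: bin the observed values of a
# real-valued signal and report the fraction of bins hit. Bin edges are
# illustrative voltage ranges, not from any real testbench.

BINS = [(0.0, 0.3), (0.3, 0.6), (0.6, 0.9)]

def coverage(samples):
    """Fraction of voltage bins hit by at least one observed sample."""
    hit = [any(lo <= s < hi for s in samples) for lo, hi in BINS]
    return sum(hit) / len(BINS)
```

A coverage score below 1.0 is exactly the feedback loop the text describes: it tells the team which stimulus is missing, so the testbench can be improved rather than relying on a fixed set of directed tests.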

Low-power verification: IoT devices must be extremely power efficient. To minimize power consumption, designers use advanced low-power techniques such as multiple power domains and supply voltages and power shutoffs, which help reduce active and leakage currents or completely turn off parts of the design when not needed. Power specifications captured in standard formats (like CPF or IEEE 1801 UPF) can be used to ensure that power intent is implemented correctly. Designers should pay particular attention to managing the switching of power supplies to different power domains and handling analog/digital signal crossings during power shutoffs. Dynamic CPF/IEEE 1801-aware mixed-signal simulation and static methods are becoming a standard part of verification methodology.

Mixed-signal simulation: High-performance, tightly integrated SPICE/FastSPICE transistor-level and digital engines supporting analog behavioral languages, including RNM, are at the core of the verification flow. For example, Cadence® Virtuoso® AMS Designer can mix different levels of hierarchy and understands low-power specifications, which makes it a simulator of choice for verifying IoT designs.

The outlined methodology is well-supported by the Cadence flow as shown in Figure 1 below.

Fig. 1. Cadence flow for an IoT design

Signal Integrity Analysis

When you first consider designing an IoT device, signal and power integrity may not be the first thing that comes to mind. The focus will likely be on how this unique device will collect input, what it will produce for output, and what kinds of bells and whistles distinguish this device from competitors. However, any modern-day system, including edge-node IoT devices, must be fast, economical, and low power.

Therefore, it is a given that signals will be switching at high rates on a system that is the lowest possible cost and consumes minimal power. Like it or not, signal and power integrity is going to become part of the design challenge at some point.

Design considerations engineers need to keep in mind include:

Power management: Most IoT devices are powered by a battery. Requirements to recharge or replace that battery may make the difference between a product succeeding and failing. The device must be designed to deliver sufficient power to all components (e.g., microcontrollers and memory) in an efficient manner while keeping low-voltage power rails stable during operation.

The power delivery network (PDN) must be designed to account for the current return path of switching signals and to reduce any voltage drop due to power that is choked off by congestion from signal vias, mounting holes, or the various other causes that carve up the PDN. Maintaining stable power is a challenge. Decoupling capacitors (decaps) are used to ensure PDN stability, but space requirements and product cost create a desire to minimize their use.
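At its simplest, the DC side of this analysis is an IR-drop budget check. A back-of-the-envelope sketch (the resistance and tolerance values are invented; real PDN analysis uses extracted models of the actual board and package):

```python
# Back-of-the-envelope DC IR-drop check for a power rail. All values are
# invented for illustration; production PDN analysis uses extracted
# board/package models, not a single lumped resistance.

def rail_ok(v_supply: float, i_load: float, r_pdn: float,
            tolerance: float = 0.05) -> bool:
    """True if the IR drop across the PDN keeps the delivered voltage
    within `tolerance` (fraction) of the nominal supply."""
    v_drop = i_load * r_pdn
    return v_drop / v_supply <= tolerance
```

The same budget explains the decap trade-off: every via field or mounting hole that raises the effective PDN impedance pushes the design closer to failing this check, and decaps are the usual (but costly) way to buy the margin back on the AC side.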

The path to a successful IoT PDN design rests in utilizing analysis tools for both DC and AC analysis.  Having a tightly integrated design and analysis environment, as provided by Cadence Allegro® Sigrity™ products, provides design efficiency that saves time and engineering cost while optimizing the IoT PDN for cost and performance.

Fig. 2. Integrated side-by-side PCB design and power integrity analysis as seen in Allegro Sigrity PI Base

Memory interfaces: While sensors provide much of the input, at the heart of a typical IoT device is a microcontroller and system memory. Storing and recalling data quickly and accurately is essential to IoT functions. Dynamic RAM and some of the faster static RAM components utilize parallel bus interfaces to store and retrieve data. The data bus and the address bus provide design challenges. Simultaneous switching signals with fast-edge rates and small voltage swings create a perfect storm of opportunity for simultaneous switching noise (SSN) to impact signal quality. An IoT device used for medical assessment or a device used for military applications such as threat analysis certainly cannot afford to have unreliable data storage and retrieval.

To ensure these devices have reliable data storage and retrieval, controlled impedance and delay-tuned signal routing must be performed during design, and timing analysis must also be performed to ensure that all setup and hold conditions are met.
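A setup/hold check reduces to comparing the data-valid window against the clock edge. A minimal sketch with invented numbers (a real timing analysis covers every bit of the bus across corners):

```python
# Sketch of a setup/hold timing check for one signal of a parallel memory
# interface. Times are in nanoseconds and purely illustrative: data must
# be valid t_setup before the clock edge and stay valid t_hold after it.

def timing_margins(clk_edge: float, data_valid_start: float,
                   data_valid_end: float, t_setup: float, t_hold: float):
    """Return (setup_margin, hold_margin); both must be >= 0 to pass."""
    setup_margin = (clk_edge - t_setup) - data_valid_start
    hold_margin = data_valid_end - (clk_edge + t_hold)
    return setup_margin, hold_margin
```

Delay-tuned routing exists precisely to keep both margins positive for every signal in the group: matching trace lengths shifts each signal's data-valid window so it brackets the clock edge with room to spare.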

The path to successful memory interface design is through a constraint-driven design environment that sets both physical and electrical constraints at the logic stage of design. As physical implementation begins, dynamic rule checking that validates length and spacing rules can ensure that data signals, clock signals, address bus signals, and various control signals are routed to complicated timing specifications.

However, with the miniaturized size of many IoT devices (e.g., wearable devices), memory interface signals transition from layer to layer through vias that produce impedance discontinuities. Power-aware signal integrity analysis is required to ensure the tiny timing margins are not eroded by signal ringing, overshoot, and rippling ground reference voltages.

When signal quality issues are discovered through the analysis process, a quick path to resolution through the physical implementation tools is the key to keeping predictable IoT product development schedules.

SerDes interfaces: Many IoT devices communicate to the outside world through wireless interfaces. However, some wearable devices have a physical connector that transfers collected data to a host system. Data transfers must be fast and follow a standard interface protocol such as USB. Designing an interface so that it meets electrical compliance testing becomes part of the design requirements. The USB Implementers Forum (USB-IF) offers an integrator’s list of products that meet a set of compliance tests.  While designing these high-speed interfaces (current USB specs allow transfer speeds of up to 10Gbps), simulating compliance tests is a way to make sure designs will pass the first time.

To meet compliance specifications at high data transfer rates, reflections, crosstalk, interconnect loss, and equalization must be assessed and analyzed.

For serial links, substrate and PCB vias often create the largest impedance discontinuity on the link, causing potential reflections along the channel and crosstalk between channels. It can be challenging to maintain signal quality while weaving routes through via fields and transitioning layers through signal vias, and routing density pressures often work against "best practice" signal integrity. Crafting via transitions that appear virtually transparent, and routing signals through dense via fields, requires detailed extraction and simulation as these physical implementation trade-offs are refined.

At gigabit data rates, USB links are likely to utilize advanced equalization techniques, such as feed forward equalization (FFE) or continuous time linear equalization (CTLE). FFE and CTLE are complex signal-processing algorithms that are implemented within semiconductor I/Os. To simulate these functions, the algorithms are mimicked in software models and implemented within simulation tools using the Algorithmic Modeling Interface (AMI) extension to the IBIS (I/O Buffer Information Specification) standard. For USB multi-gigabit SerDes, many component vendors supply IBIS-AMI models. However, for vendors that do not, model creation software is available that uses predefined algorithms that can be customized through parameterization to match the performance of the component with the USB interface.
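At its core, FFE is a short FIR filter over the symbol stream: pre- and post-cursor taps subtract the interference each symbol spreads into its neighbors. A sketch with invented tap weights (real taps are derived from the channel response and negotiated per the interface specification):

```python
# Sketch of 3-tap feed-forward equalization (FFE) as a FIR filter over
# the symbol stream. Tap weights are invented for illustration; real
# values come from channel characterization, not from this example.

def ffe(symbols, taps=(-0.1, 0.8, -0.1)):
    """Apply (pre-cursor, main, post-cursor) taps to each symbol:
    out[n] = pre*x[n+1] + main*x[n] + post*x[n-1]."""
    pre, main, post = taps
    out = []
    for i, s in enumerate(symbols):
        prev = symbols[i - 1] if i > 0 else 0.0
        nxt = symbols[i + 1] if i + 1 < len(symbols) else 0.0
        out.append(pre * nxt + main * s + post * prev)
    return out
```

Feeding an isolated pulse through the filter shows the idea: the main cursor is attenuated slightly while small negative pre- and post-cursors are emitted, which cancel the smearing the lossy channel will add.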

Serial links require compliance to a specific bit error rate (BER). The target BER is typically less than one error for every 10 billion bits received. Since it is not practical to simulate tens of billions of bits of data with traditional circuit simulation, high-capacity channel simulation has become part of any serial link analysis methodology. This approach applies an impulse response to characterize the serial channel and then applies advanced methods to achieve high-capacity throughput.
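A one-line calculation shows why direct simulation is impractical: to observe even a modest number of errors at a target BER, the expected simulation length is enormous.

```python
# Expected number of bits you must run to observe a given number of
# errors at a target bit error rate (the error count of 10 is just a
# common rule of thumb for a statistically meaningful measurement).

def bits_for_errors(target_ber: float, errors_needed: int = 10) -> float:
    """Expected bits simulated/transmitted to see `errors_needed` errors."""
    return errors_needed / target_ber

# At a BER of 1e-10, observing 10 errors takes on the order of 1e11 bits,
# far beyond bit-by-bit circuit simulation -- hence impulse-response-based
# channel simulation.
```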

Having an analysis environment that can perform compliance testing while directly integrating with the implementation tools enables rapid tuning. With the ability to efficiently maximize performance of serial links during the design stage, IoT products can quickly be prototyped, tested at compliance meetings, and completed to meet aggressive time-to-market requirements.

Summary

With IoT devices being designed for a number of industries – consumer, medical, industrial, and military, to name a few – each IoT device design team must consider the signal and power requirements and recognize that signal and power integrity must become part of the design and analysis methodology. The competitive nature of this emerging industry means that time to market and rapid prototyping are essential to a design team’s success. Utilizing an integrated design and signal/power analysis environment gives IoT product creation the highest probability of success.