
Chip Design Magazine





How to Drive a Successful IoT Application Design Project

Mladen Nizic and Brad Griffin, Cadence Design Systems

Internet of Things (IoT) applications are changing the way we live. They are changing how we manufacture and transport goods, deliver healthcare and other services, manage energy distribution and consumption, and even how we travel and communicate. The edge node is an essential element of an IoT application, providing the interface between the digital and analog worlds. Despite the diversity of IoT applications, a typical edge node includes sensors to collect information from the outside world, some amount of processing power and memory, the ability to receive and transmit information, and the ability to control devices in its immediate vicinity. Although modest in device count compared to large systems on chip (SoCs), edge-node devices are complex systems that integrate analog and digital functions across silicon, package, and board, and they are controlled by software that must operate for many years on harvested energy or a coin-cell battery.

Engineers need to design, verify, and implement these edge-node systems rapidly to meet tight market windows. To achieve aggressive timelines, they need a flow that enables system prototyping, hardware/software verification, mixed-signal chip design and manufacturing, and chip/IC package/board integration. In this article, we will focus on two critical steps in the flow that impact the design cycle and the success of an entire project: 1) simulation/verification of the system/chip and 2) signal integrity analysis in chip-package-board integration.


Simulation and Verification

Verification is the biggest design challenge today, particularly when analog functionality is involved, and IoT devices are no exception. High-performance analog, digital, and mixed-signal simulation is indispensable but not sufficient; it must be complemented by a model-based, metric-driven methodology. Key elements of the methodology are as follows:

Verification planning and management: Engineers develop verification plans and manage their execution carefully to catch issues as early as possible. A typical IoT device operates in many different modes (standby, active sensing, recharging, data processing, transmitting/receiving, test, etc.), and the functional verification plan must cover all modes and their transitions in a well-defined sequence. Since operation is controlled by embedded software, the software is ideally verified in conjunction with the hardware. It is important to understand which tests can be performed at a higher level of abstraction and which require transistor-level simulation. For example, a high-level abstraction can verify that the software algorithm and processor apply the correct controls to a multiplexer selecting an analog input. However, transistor-level simulation is required to verify that a built-in A-to-D converter operates correctly over a specified temperature range.
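The mode-and-transition checking a verification plan calls for can be sketched in a few lines. The sketch below uses Python purely for illustration (in practice this would live in the testbench, e.g. as SystemVerilog assertions); the mode names and the legal-transition table are hypothetical, not from any specific device.

```python
# Illustrative legal-transition table for a hypothetical edge-node device.
LEGAL = {
    "standby":         {"active_sensing", "test"},
    "active_sensing":  {"data_processing", "standby"},
    "data_processing": {"transmitting", "standby"},
    "transmitting":    {"standby", "recharging"},
    "recharging":      {"standby"},
    "test":            {"standby"},
}

def check_sequence(modes):
    """Return the index of the first illegal transition, or -1 if all are legal."""
    for i in range(len(modes) - 1):
        if modes[i + 1] not in LEGAL[modes[i]]:
            return i
    return -1

trace = ["standby", "active_sensing", "data_processing", "transmitting", "standby"]
assert check_sequence(trace) == -1        # every transition is legal
bad = ["standby", "transmitting"]         # skips sensing and processing
assert check_sequence(bad) == 0           # illegal transition at index 0
```

A real plan would also track that every mode and every legal transition was actually exercised, which is where the coverage metrics discussed below come in.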

Behavioral modeling: Due to the complexity of IoT designs, executing the verification plan using transistor-level simulations is practically impossible; they need to be reserved for verifying specific electrical characteristics that require a high level of accuracy and correlation to silicon. For most functional verification, the investment in abstracting analog components using Verilog or VHDL behavioral models pays off by making verification much more efficient at thoroughly covering the entire system. Recent advancements in real number modeling (RNM) using Verilog-AMS/wreal or SystemVerilog (IEEE 1800) have made simulating the analog, digital, and software components of an IoT system together practical. Of course, modeling has to be done with a clear purpose as required by the verification plan, and the models must align with the specifications, or with the transistor-level circuit in the case of a bottom-up design.
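To make the abstraction concrete, here is the kind of behavioral model RNM enables, shown as a Python sketch of an ideal N-bit ADC (in practice this would be a wreal or SystemVerilog real model; the transfer function and parameters are illustrative). The model captures the functional behavior, clamping and quantization, without any transistor-level detail.

```python
def adc_model(vin, vref=1.0, bits=8):
    """Ideal ADC transfer function: clamp the input, quantize, return the code."""
    lsb = vref / (1 << bits)                   # one least-significant-bit step
    vin = min(max(vin, 0.0), vref - lsb)       # clamp to the valid input range
    return int(vin / lsb)

assert adc_model(0.0) == 0
assert adc_model(0.5) == 128    # mid-scale for vref = 1.0 V, 8 bits
assert adc_model(2.0) == 255    # over-range input clamps at full scale
```

A model like this simulates orders of magnitude faster than the transistor-level converter, which is reserved for verifying accuracy over temperature as noted above.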

Coverage metrics: To assess the success of verification for IoT designs, which are mixed-signal by nature, the digital concept of coverage metrics needs to be extended to analog and mixed-signal, at least for functional verification. Using Property Specification Language (PSL) or SystemVerilog Assertions (SVAs) in conjunction with RNM simulations gives designers the ability to collect coverage, set pass/fail criteria, and evaluate the quality and completeness of the testbench, which can then be used to drive improvement. This feedback loop is a major methodology improvement over the traditional directed-test method.

Low-power verification: IoT devices must be extremely power efficient. To minimize power consumption, designers use advanced low-power techniques such as multiple power domains, multiple supply voltages, and power shutoff, which reduce active and leakage currents or completely turn off parts of the design when they are not needed. Power specifications captured in standard formats (such as CPF or IEEE 1801 UPF) can be used to ensure that the power intent is implemented correctly. Designers should pay particular attention to managing the switching of power supplies to different power domains and to handling analog/digital signal crossings during power shutoff. Dynamic CPF/UPF-aware mixed-signal simulation and static methods are becoming a standard part of the verification methodology.
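One of the signal-crossing rules mentioned above can be modeled very simply: a signal leaving a powered-down domain must be isolated to a known clamp value, or the receiving domain sees an unknown. A Python sketch of that rule (the function and its arguments are hypothetical; in a real flow this check is performed by the UPF/CPF-aware simulator):

```python
def crossing_value(driver_domain_on, signal, isolation_clamp=0):
    """Value seen by the receiving domain for a signal crossing a domain boundary."""
    if driver_domain_on:
        return signal                    # driver powered: value propagates
    if isolation_clamp is not None:
        return isolation_clamp           # driver off but isolated: known clamp
    return "X"                           # driver off, no isolation cell: unknown

assert crossing_value(True, 1) == 1          # domain on: value propagates
assert crossing_value(False, 1) == 0         # domain off: clamped low
assert crossing_value(False, 1, None) == "X" # missing isolation: unknown value
```

The last case, an "X" leaking out of a shut-off domain, is precisely the class of bug that power-aware mixed-signal simulation is designed to catch.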

Mixed-signal simulation: High-performance, tightly integrated SPICE/FastSPICE transistor-level and digital engines supporting analog behavioral languages, including RNM, are at the core of the verification flow. For example, Cadence® Virtuoso® AMS Designer can mix different levels of hierarchy and understands low-power specifications, making it a simulator of choice for verifying IoT designs.

The outlined methodology is well-supported by the Cadence flow as shown in Figure 1 below.

Fig. 1. Cadence flow for an IoT design

Signal Integrity Analysis

When you first consider designing an IoT device, signal and power integrity may not be the first thing that comes to mind. The focus will likely be on how this unique device will collect input, what it will produce for output, and what kinds of bells and whistles distinguish this device from competitors. However, any modern-day system, including edge-node IoT devices, must be fast, economical, and low power.

Therefore, it is a given that signals will be switching at high rates on a system that is the lowest possible cost and consumes minimal power. Like it or not, signal and power integrity is going to become part of the design challenge at some point.

Design considerations engineers need to keep in mind include:

Power management: Most IoT devices are powered by a battery. How often that battery must be recharged or replaced can make the difference between a product succeeding and failing. The device must be designed to deliver sufficient power to all components (e.g., microcontrollers and memory) in an efficient manner while keeping low-voltage power rails stable during operation.
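The battery-life stakes described above come down to a simple duty-cycle calculation: average current is the per-mode current weighted by the fraction of time spent in each mode. A back-of-the-envelope Python sketch (all currents, duty cycles, and the battery capacity are illustrative assumptions, not measurements of any real device):

```python
# Hypothetical per-mode budget: (current in mA, fraction of time in that mode)
modes = {
    "sleep":    (0.002, 0.989),   # deep sleep dominates the timeline
    "sense":    (1.5,   0.010),   # brief periodic sensor reads
    "transmit": (12.0,  0.001),   # rare, expensive radio bursts
}

avg_ma = sum(current * fraction for current, fraction in modes.values())
battery_mah = 220                 # typical coin-cell (CR2032-class) capacity
life_hours = battery_mah / avg_ma

assert abs(avg_ma - 0.028978) < 1e-6
assert life_hours > 7000          # roughly ten months on these assumptions
```

The sketch also shows why the radio, despite running 0.1% of the time, consumes a large share of the budget, which is why transmit scheduling is a first-order power-management decision.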

The power delivery network (PDN) must be designed to account for the current return paths of switching signals and to minimize voltage drop where copper is choked off by signal vias, mounting holes, and other features that carve up the PDN. Maintaining stable power is a challenge. Decoupling capacitors (decaps) are used to ensure PDN stability, but space constraints and product cost create pressure to minimize their use.
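The first-order math behind decap selection can be sketched directly: a target impedance is derived from the allowed ripple and transient current, and each decap's impedance versus frequency (including its parasitic ESR and ESL) is checked against it. All component values below are illustrative.

```python
import math

def target_impedance(vdd, ripple_fraction, i_transient):
    """PDN target impedance: allowed ripple voltage over worst-case current step."""
    return vdd * ripple_fraction / i_transient

def decap_impedance(c, esr, esl, f):
    """|Z| of a decap modeled as series R-L-C at frequency f."""
    w = 2 * math.pi * f
    return abs(complex(esr, w * esl - 1 / (w * c)))

# 1.2 V rail, 5% allowed ripple, 0.5 A transient => 0.12 ohm target
z_target = target_impedance(1.2, 0.05, 0.5)
assert abs(z_target - 0.12) < 1e-9

# 100 nF decap with 10 mOhm ESR and 0.5 nH ESL, checked at 20 MHz
z = decap_impedance(100e-9, 0.01, 0.5e-9, 20e6)
assert z < z_target    # this decap holds the rail below target at 20 MHz
```

A real PDN analysis sweeps this check across the whole frequency range of interest and across the decap population, which is why dedicated DC/AC analysis tools are used rather than hand calculation.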

The path to a successful IoT PDN design lies in using analysis tools for both DC and AC analysis. A tightly integrated design and analysis environment, as provided by Cadence Allegro® Sigrity™ products, delivers design efficiency that saves time and engineering cost while optimizing the IoT PDN for cost and performance.

Fig. 2. Integrated side-by-side PCB design and power integrity analysis as seen in Allegro Sigrity PI Base

Memory interfaces: While sensors provide much of the input, at the heart of a typical IoT device is a microcontroller and system memory. Storing and recalling data quickly and accurately is essential to IoT functions. Dynamic RAM and some of the faster static RAM components use parallel bus interfaces to store and retrieve data, and both the data bus and the address bus present design challenges. Simultaneously switching signals with fast edge rates and small voltage swings create a perfect storm for simultaneous switching noise (SSN) to degrade signal quality. An IoT device used for medical assessment, or one used for military applications such as threat analysis, certainly cannot afford unreliable data storage and retrieval.

To ensure these devices have reliable data storage and retrieval, controlled impedance and delay-tuned signal routing must be performed during design, and timing analysis must also be performed to ensure that all setup and hold conditions are met.
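The setup/hold verification described above reduces to margin arithmetic over the clock period, driver delay, and routed flight time. A simple Python sketch (all delays and requirements are illustrative numbers, not from any datasheet):

```python
def setup_margin(period_ns, clk_to_out, flight, setup_req):
    """Slack before the capture edge: positive means setup is met."""
    return period_ns - (clk_to_out + flight + setup_req)

def hold_margin(clk_to_out, flight, hold_req):
    """Slack after the capture edge: positive means hold is met."""
    return (clk_to_out + flight) - hold_req

# Example: a 200 MHz parallel bus (5 ns period) with illustrative delays
s = setup_margin(5.0, clk_to_out=1.8, flight=0.9, setup_req=1.5)
h = hold_margin(clk_to_out=1.8, flight=0.9, hold_req=0.5)
assert abs(s - 0.8) < 1e-9 and s > 0   # setup met with 0.8 ns margin
assert abs(h - 2.2) < 1e-9 and h > 0   # hold met with 2.2 ns margin
```

Delay tuning (length matching) works on the `flight` term: the router lengthens or shortens traces until every signal in the group lands inside both margins.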

The path to successful memory interface design is through a constraint-driven design environment that sets both physical and electrical constraints at the logic stage of design. As physical implementation begins, dynamic rule checking that validates length and spacing rules can ensure that data signals, clock signals, address bus signals, and various control signals are routed to complicated timing specifications.

However, with the miniaturized size of many IoT devices (i.e. wearable devices), memory interface signals transition from layer to layer through vias that produce impedance discontinuities. Power-aware signal integrity analysis is required to ensure the tiny timing margins are not impacted by signal ringing, overshoot, and rippling ground reference voltages.

When signal quality issues are discovered through the analysis process, a quick path to resolution through the physical implementation tools is the key to keeping predictable IoT product development schedules.

SerDes interfaces: Many IoT devices communicate with the outside world through wireless interfaces. However, some wearable devices have a physical connector that transfers collected data to a host system. Data transfers must be fast and follow a standard interface protocol such as USB. Designing an interface so that it passes electrical compliance testing becomes part of the design requirements. The USB Implementers Forum (USB-IF) maintains an integrators list of products that have passed a set of compliance tests. When designing these high-speed interfaces (current USB specifications allow transfer speeds of up to 10 Gbps), simulating the compliance tests is a way to make sure designs will pass the first time.

To meet compliance specifications at high data transfer rates, reflections, crosstalk, and interconnect loss must be assessed and analyzed, and equalization applied where needed.

For serial links, substrate and PCB vias often create the largest impedance discontinuities on the link, causing reflections along the channel and crosstalk between channels. Maintaining signal quality while weaving routes through dense via fields and transitioning layers through signal vias takes special care: via transitions must be crafted to appear virtually transparent, and trading routing density against best-practice signal integrity requires detailed extraction and simulation as the physical implementation is refined.
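To first order, the reflection a via discontinuity causes is given by the classic mismatch formula Γ = (Z2 − Z1) / (Z2 + Z1). A tiny Python sketch with illustrative impedance values (a real via model needs full 3D extraction, as noted above; this only shows why the discontinuity matters):

```python
def reflection_coefficient(z1, z2):
    """Fraction of the incident wave reflected at an impedance step z1 -> z2."""
    return (z2 - z1) / (z2 + z1)

# Illustrative: an 85-ohm differential trace hitting a capacitive via
# region that locally looks like 60 ohms
gamma = reflection_coefficient(85.0, 60.0)
assert abs(gamma - (-25.0 / 145.0)) < 1e-12
assert abs(gamma) > 0.15   # over 15% of the wave reflects back up the channel
```

A "virtually transparent" via is one engineered (via stub removal, anti-pad sizing, return vias) so that the local impedance step, and hence Γ, is close to zero.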

At gigabit data rates, USB links are likely to use advanced equalization techniques, such as feed-forward equalization (FFE) or continuous-time linear equalization (CTLE). FFE and CTLE are complex signal-processing functions implemented within semiconductor I/Os. To simulate these functions, the algorithms are mimicked in software models and run within simulation tools using the Algorithmic Modeling Interface (AMI) extension to the IBIS (I/O Buffer Information Specification) standard. For USB multi-gigabit SerDes, many component vendors supply IBIS-AMI models. For those vendors that do not, model creation software is available that provides predefined algorithms that can be customized through parameterization to match the performance of the component's USB interface.
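At its core, an FFE is an FIR filter: each output is a weighted sum of the current and neighboring symbols, with tap weights chosen to cancel inter-symbol interference. A minimal Python sketch (the 3-tap weights are illustrative; production IBIS-AMI models wrap vendor-tuned versions of this idea):

```python
def ffe(samples, taps):
    """Apply an FIR feed-forward equalizer: y[i] = sum_j taps[j] * x[i-j]."""
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for j, w in enumerate(taps):
            if 0 <= i - j < len(samples):
                acc += w * samples[i - j]
        out.append(acc)
    return out

taps = [-0.1, 0.85, -0.1]          # illustrative cursor and de-emphasis taps
eq = ffe([0, 1, 1, 0], taps)
expected = [0.0, -0.1, 0.75, 0.75]
assert all(abs(a - b) < 1e-9 for a, b in zip(eq, expected))
```

The negative taps deliberately pre-distort the transmitted waveform so that, after the lossy channel smears it out, the received symbols land closer to their ideal levels.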

Serial links must comply with a specified bit error rate (BER); the target BER is typically less than one error for every 10 billion bits received. Since it is not practical to simulate tens of billions of bits of data with traditional circuit simulation, high-capacity channel simulation has become part of any serial-link analysis methodology. This approach characterizes the serial channel by its impulse response and then applies superposition and statistical methods to achieve high-capacity throughput.
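The channel-simulation idea can be shown in miniature: characterize the channel once by a discrete impulse response, then build the output for any bit pattern by superposing shifted, scaled copies of that response instead of re-running a circuit simulation per bit. The impulse-response samples below are illustrative (one sample per bit interval, for brevity).

```python
# Illustrative channel impulse response: energy spread over several bit times
h = [0.0, 0.6, 0.25, 0.1]

def channel_output(bits, h):
    """Superpose a shifted, bit-scaled copy of h for each transmitted bit."""
    out = [0.0] * (len(bits) + len(h) - 1)
    for i, b in enumerate(bits):
        for j, hv in enumerate(h):
            out[i + j] += b * hv
    return out

y = channel_output([1, 0, 1], h)
expected = [0.0, 0.6, 0.25, 0.7, 0.25, 0.1]   # note ISI: 0.7 = 0.1 + 0.6
assert all(abs(a - b) < 1e-9 for a, b in zip(y, expected))
```

Because superposition is just additions and multiplications, millions of bits per second of simulated traffic become feasible, which is what makes statistical BER extrapolation down to compliance targets tractable.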

Having an analysis environment that can perform compliance testing while directly integrating with the implementation tools enables rapid tuning. With the ability to efficiently maximize performance of serial links during the design stage, IoT products can quickly be prototyped, tested at compliance meetings, and completed to meet aggressive time-to-market requirements.


With IoT devices being designed for a number of industries—consumer, medical, industrial, and military, to name a few—each IoT design team must consider the signal and power requirements and recognize that signal and power integrity must become part of the design and analysis methodology. The competitive nature of this emerging industry means that time to market and rapid prototyping are essential to the success of a design team. Utilizing an integrated design and signal/power analysis environment gives IoT product creation the highest probability of success.


