Analog Circuits Benefit from Scaling Trends

The same semiconductor technology roadmap driven by digital scaling requirements can be profitably applied to analog circuits.

As CMOS technologies scale to smaller nodes, they bring both benefits and challenges. Speed and power improve thanks to lower capacitive loading and lower supply voltages. Conversely, the reduction in intrinsic device gain and in the available signal swing degrades analog performance.

Unlike transistors used for “digital” functions—meaning two-state operation—analog blocks don’t scale so readily or cleanly. The motivation to scale comes from the fact that transistor density grows exponentially with each process generation. Basically, transistors get smaller, and more of them can be put on the same-size die. Figure 1, which is from Moore’s original paper, is an incredibly bold prediction made over 40 years ago, and one based on very little data. In practice, we’ve seen logic-gate density double every 18 to 24 months. Analog circuits like analog-to-digital converters (ADCs), on the other hand, double their speed-resolution product only every five years or more.

Figure 1: This representation of Moore’s Law predicting the exponential increase in complexity of integrated circuits was originally printed in “Electronics,” Volume 38, Number 8, April 19, 1965.

There are many reasons why analog doesn’t scale as readily. This can be seen graphically in Figure 2, which shows transistor characteristics for 130- and 40-nm CMOS technologies. In the saturation region, the drain current of the 40-nm device varies much more strongly with drain voltage. This change shows that the transistor has much less control over the current. Amplifier design therefore becomes much more difficult, as the realizable gain per stage is significantly reduced. Compensating for that reduction requires more sophisticated circuits, leading to larger area and higher power consumption.

Figure 2: The device characteristics degrade significantly from the 130-nm process (left) to the 40-nm process (right).

Another factor is the reduced supply voltage and dynamic range. As the supply voltage drops with the migration to lower process nodes, the available signal range shrinks. At today’s 1-V supply level, the signal range may be 0.7 V or less. This lower signal range requires proportionately lower noise levels to maintain the same dynamic range. In mixed-signal circuits, such as ADCs using switched capacitors, the noise can be reduced with larger capacitors, because the sampled thermal noise power is kT/C. To compensate for a 2X reduction in signal range, the capacitors must therefore be increased 4X—making it quite difficult to scale down the area. In addition, the larger loading capacitances require larger currents for charging and discharging, invariably leading to higher power consumption. Is there any hope of benefiting from the process-node scaling that we see in digital blocks?
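Before turning to that question, it’s worth making the 4X figure concrete. Here’s a quick kT/C sanity check in Python; the capacitor and swing values are illustrative only, not taken from any particular design.

```python
# kT/C check: halving the signal range while keeping the same dynamic
# range requires 4X larger sampling capacitors, because the sampled
# thermal noise power is kT/C.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # temperature, K

def snr_db(v_rms, cap):
    """SNR of a sampled signal limited only by kT/C noise."""
    noise_power = k * T / cap   # V^2
    return 10 * math.log10(v_rms**2 / noise_power)

c = 1e-12                    # illustrative 1-pF sampling capacitor
v1 = 0.35                    # ~1-V swing -> 0.35 V rms for a sine wave
print(snr_db(v1, c))         # baseline SNR
print(snr_db(v1/2, c))       # 6 dB worse when the swing is halved
print(snr_db(v1/2, 4*c))     # a 4X capacitor restores the original SNR
```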

USB Case Study

Figure 3: This graphic depicts generations of Synopsys’ USB 2.0 PHY showing area scaling from 180-nm down to 28-nm.

Figure 3 shows three generations of Synopsys’ USB 2.0 PHY. Clearly, we’ve managed to scale the design from the original 180-nm to today’s 28-nm version. Getting there wasn’t as simple as re-targeting standard cell libraries and then running automatic place and route. Scaling was achieved for this analog/mixed-signal IP using a number of different design techniques.

First of all, the parameterized transistor cells were optimized for each technology node. In addition, much of the high-speed analog circuitry was pushed into the low-voltage core domain. The smaller technology nodes do have a higher poly sheet resistance (resistance per square), which helps make the resistors smaller as well. The I/O voltage also scales from 3.3 V down to 1.8 V at 28-nm. This voltage scaling enables more efficient capacitor designs—making the lowpass filter in the phase-locked loops (PLLs), for example, much smaller. Of course, due consideration must be given to leakage, linearity, and breakdown voltages.

The main benefit of smaller technology nodes is the ability to target higher speeds—for example, USB 3.0 operating at 5 Gbits/s or SATA at 6 Gbits/s—rather than improving the power consumption of existing designs. Like digital gates, analog IP benefits from technology scaling, but through a very different methodology—not design tools, but different analog architectures.

Data Converters

Here’s an example for ADCs. A run-of-the-mill, dual 10-bit, 80-MHz ADC implemented in 180-nm technology becomes 5X smaller in 65-nm. This is impressive. As in the previous USB PHY example, the size reduction was achieved through architectural changes made possible by designing with core (1.2-V) devices. Originally, the ADCs were designed using I/O (3.3-V) devices because of the higher voltage headroom those devices allow.

Presently, all state-of-the-art ADCs are designed using core (1.2-V or lower) devices. Although designing high-performance converters at the core voltage is challenging, it yields substantial gains in terms of maximum sampling rate, power dissipation, and—obviously—area. Architectures have evolved significantly. Many design tricks are employed to reduce area. For example, by employing digital calibration schemes, it’s possible to relax the performance of the individual analog blocks in the ADC. This makes those analog blocks (operational amplifiers, comparators, etc.) simpler, smaller, and lower in power consumption.

In the case of dual-matched converters, it’s possible to be very area-effective by reusing a very-high-sampling-rate, single-channel ADC to convert two channels at half speed. This is achieved by adding a front-end stage that samples and holds the two channels at the same instant. Area savings of almost 50% can be achieved this way.
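The sketch below models this idea behaviorally. It’s an illustration of the simultaneous-sampling principle, not Synopsys’ actual circuit; the signal frequencies and resolution are arbitrary.

```python
# Two channels are sampled at the same instant by a front-end
# sample-and-hold; the held values are then converted sequentially by
# a single shared ADC running at twice the per-channel rate.
import numpy as np

def quantize(x, bits=10, full_scale=2.0):
    """Ideal ADC: quantize x in [-FS/2, +FS/2] to 'bits' of resolution."""
    lsb = full_scale / 2**bits
    return np.clip(np.round(x / lsb), -(2**(bits - 1)), 2**(bits - 1) - 1) * lsb

fs = 80e6                                    # per-channel sample rate
t = np.arange(64) / fs                       # common sampling instants
ch_a = 0.4 * np.sin(2 * np.pi * 1e6 * t)     # channel A input
ch_b = 0.4 * np.sin(2 * np.pi * 3e6 * t)     # channel B input

# The shared ADC converts the two held values within one sample period.
stream = np.empty(2 * len(t))
stream[0::2] = quantize(ch_a)                # first half of the period
stream[1::2] = quantize(ch_b)                # second half of the period
out_a, out_b = stream[0::2], stream[1::2]    # de-multiplexed outputs
```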

As is true for digital designs, a smaller design creates a “virtuous cycle.” If the converter is smaller, the parasitic capacitances it must drive are also smaller. As a result, the op-amp that drives them can get by with less output drive. In addition, the associated biasing circuits are simpler, allowing even more area (and power) to be saved.

How far can we go with this scaling? Where’s the limit? If we go below 32/28-nm, will we continue to see this size reduction in analog IP? Our conjecture is that area improvements will happen, but not at the dramatic levels seen in the 180-to-65-nm example cited previously. Here are a couple of reasons for this:

  • The advantages of moving from I/O to core devices were already realized at 65-nm. Moving forward, it will become harder and harder to design with sub-1-V supplies, and the designs will become more complicated in order to deliver good performance at those low voltages. Most likely, circuits will stack only two transistors, with many more devices placed laterally.
  • The converters now represent a very small fraction of the complete system-on-a-chip (SoC) area—even if, in some cases, multiple instances of the converter are used (for example, in multiple-input multiple-output [MIMO] transceivers). There may therefore be no market driver for further analog area scaling.

“Digitally Assisted” Analog Circuits

Despite all of the challenges for analog design, a modern SoC offers a great advantage to the analog blocks: the availability of almost limitless computing power. In fact, today’s digital circuits achieve huge densities and extremely low energy per logic operation. For example, 45-nm offers more than a million gates/mm², with two-input NAND gates consuming only about 1 nW/MHz. This is at least an order-of-magnitude improvement over 180-nm.

Frequently, analog circuits make use of “digital assistance,” which allows simplification of the critical analog circuits that don’t scale easily. Examples include high-resolution ADCs and high-performance analog front ends. These digitally assisted analog design techniques enable analog circuits to analyze themselves and auto-correct their own deviations.

A classic technique is calibration, or digital correction. Such techniques allow high levels of accuracy to be achieved with smaller component areas. They are directly applicable to circuits that rely on matching, such as successive-approximation and pipeline ADCs. The challenge is to devise algorithms that make the calibration or correction autonomous, rather than performed at the fabrication stage: the circuit must be able to estimate its own errors and then apply an appropriate compensation without interrupting normal operation.

Typically, the self-calibration routine runs at power-up. In some cases, however, drift with time and temperature cannot be tolerated. The calibration must then continue running in the background or be repeated periodically. To avoid interrupting operation, a redundant stage can be added, allowing components to be taken offline in rotation and calibrated one at a time. Alternatively, a replica stage can be calibrated and the result mirrored to the operating stages.

Calibration can be used in a large variety of situations:

  • Tuning of analog filters: These circuits depend on time constants, which are determined by resistors and capacitors that show large process and temperature deviations. Calibration can be done by comparing the R-C time constant against a precise clock reference.
  • Centering of the voltage-controlled-oscillator (VCO) frequency range: The oscillator’s running frequency can vary widely with process and temperature. Calibration can be done by forcing the control voltage to mid-range and adding loading capacitors to the VCO stages to adjust the running frequency.
  • Offset compensation: Offset is unavoidable in analog circuits due to mismatches. Calibration is made possible by comparing the output voltage to zero. The circuit can then be balanced by adding a small adjustment voltage (e.g., through a small digital-to-analog converter [DAC]); a minimal code sketch of such a loop follows this list.
  • Calibration by correlation with other calibrated parameters: In radio-frequency (RF) amplifiers, for example, gain is determined by the transconductance (Gm) of the active device and by the inductor (L) and capacitor (C) values of the tuned load. Normally, similar L and C values are already calibrated in the VCO, and their calibration word is available. In addition, Gm can be made to track the poly resistance in the biasing generator. There is therefore likely to be a strong correlation between the amplifier gain and the VCO calibration word. This correlation can be obtained by simulation and placed in a lookup table that adjusts the RF amplifier during normal operation. No calibration of the RF amplifier itself takes place; the designer simply relies on correlation with other, already-calibrated components.
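As promised above, here is a minimal sketch of an offset-compensation loop in Python. It assumes a comparator whose inputs can be shorted during calibration and whose offset is trimmed through a small DAC; the offset and DAC ranges are arbitrary illustrative numbers.

```python
# Power-up offset calibration: with the comparator inputs shorted, a
# successive-approximation search finds the trim-DAC code that best
# cancels the (unknown) input-referred offset.
import random

class Comparator:
    def __init__(self):
        self.offset = random.uniform(-0.02, 0.02)   # unknown offset, V
        self.trim = 0.0                             # trim-DAC voltage, V

    def compare(self, vp, vn):
        return (vp - vn + self.offset + self.trim) > 0

def trim_voltage(code, bits=6, vrange=0.05):
    """Map a DAC code to a trim voltage spanning +/- vrange/2."""
    return (code / 2**bits - 0.5) * vrange

def calibrate(comp, bits=6):
    code = 0
    for bit in reversed(range(bits)):        # MSB-first SAR search
        trial = code | (1 << bit)
        comp.trim = trim_voltage(trial, bits)
        if not comp.compare(0.0, 0.0):       # still negative: keep the bit
            code = trial
    comp.trim = trim_voltage(code, bits)
    return code

comp = Comparator()
calibrate(comp)
print(abs(comp.offset + comp.trim))   # residual: under one trim-DAC LSB
```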

The calibration techniques above are confined to the block level. Other digitally assisted techniques take advantage of the fact that the analog block is embedded in a complete system—including all of the digital-demodulation and data-extraction stages. Today, this complete system is often implemented entirely on one chip. It’s then possible to estimate the quality of the received signal and feed information back to the analog circuits to adjust their parameters (see Figure 4). These techniques are very efficient for compensating mismatches, phase deviations, and distortions.

In wireless systems, the transmitted signal often includes pilot tones and special coding, which allow the receiver to estimate deviations on the air interface. These same features also enable corrections for deviations in the analog front end using digital signal processing of the received data.
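The fragment below sketches this idea in Python for the simplest possible impairment model: a single complex gain (amplitude and phase) error in the front end. The pilot format and correction are illustrative, not tied to any particular standard.

```python
# System-level digital assistance: a known pilot sequence is used to
# estimate the front end's complex gain error (amplitude and phase),
# and the estimate is applied as a correction to the received data.
import numpy as np

rng = np.random.default_rng(0)
pilot = np.exp(2j * np.pi * rng.random(64))    # known unit-power pilot

g_true = 0.8 * np.exp(1j * 0.2)                # unknown gain/phase error
noise = 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
rx = g_true * pilot + noise                    # pilot as seen after the ADC

# Least-squares estimate of the complex gain from the received pilot.
g_est = np.vdot(pilot, rx) / np.vdot(pilot, pilot)

data_rx = g_true * np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
data_corrected = data_rx / g_est               # digitally compensated symbols
```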

Figure 4: A radio receiver uses system-level estimates of signal quality to provide feedback to the analog circuits and compensate for mismatches, phase deviations, and distortions.

Overall, these techniques allow for considerable relaxation of the analog front-end performance, which can be used for minimizing both area and power consumption.

Another trend is the migration of traditional analog functions to the digital domain. Filters are one example. Digital filters have many advantages over analog ones: they don’t suffer from mismatches, and they implement mathematically exact transfer functions. In addition, their area and power consumption scale only logarithmically with dynamic range, whereas for an analog filter they scale quadratically. Since the 90-nm node, the power consumption of digital filters has also generally been lower than that of the analog equivalent (see Figure 5). A similar observation applies to area.
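These scaling rules are easy to see numerically. The toy model below is illustrative only, with arbitrary normalization constants: digital cost is assumed proportional to word length (about one bit per 6 dB of dynamic range), while analog power grows with the square of the dynamic range, since halving the noise floor at a fixed signal level costs roughly 4X in power.

```python
# Toy comparison of filter cost vs. dynamic range (arbitrary units):
# digital ~ logarithmic (word length), analog ~ quadratic in amplitude.
def digital_cost(dr_db):
    return dr_db / 6.02              # ~1 bit of word length per 6 dB

def analog_cost(dr_db, ref_db=40.0):
    # Quadratic in the amplitude ratio = 10^(dB/10) as a power ratio,
    # normalized to a cost of 1 at the reference dynamic range.
    return 10 ** ((dr_db - ref_db) / 10)

for dr in (40, 60, 80, 100):
    print(f"{dr} dB: digital ~{digital_cost(dr):5.1f}, "
          f"analog ~{analog_cost(dr):8.1f}")
```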

Figure 5: Experimental data demonstrates that digital filter power consumption increases only logarithmically with dynamic range, while analog implementations tend to follow a quadratic rule.

Moving the filters from the analog to the digital domain requires moving the ADC toward the input (see Figure 6). That means digitizing a wider bandwidth, possibly including out-of-band interferers that the filters would otherwise have removed. However, designing faster ADCs isn’t a big challenge in smaller process nodes: speed comes naturally with technology scaling, because the devices are faster and the parasitic capacitances lower. The wider bandwidth and dynamic range aren’t really showstoppers, as the wanted signal bandwidth stays the same, and the out-of-band spectrum that must be digitized can be treated as noise.

This is the realm of sigma-delta ADCs (see Figure 7). They operate at highly oversampled frequencies and digitize the input signal as a high-speed bitstream, generating high-energy, high-frequency quantization noise in the process. The output word is obtained after a digital decimation filter, which reduces the sample rate to the nominal value and removes the high-frequency noise. This decimation filter can be merged with the digital filter that was moved from the analog domain, leading to a very efficient overall implementation. Moving filters from the analog to the digital domain has many advantages and no significant disadvantages.
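A first-order modulator is simple enough to model in a few lines. The following Python sketch is a behavioral illustration with arbitrary parameters; a real design would use a higher-order loop and a proper sinc decimation filter rather than the plain averaging used here.

```python
# First-order sigma-delta modulator: the oversampled input becomes a
# 1-bit stream whose local average tracks the input; a decimation
# filter then recovers a multi-bit output at the nominal rate.
import numpy as np

osr = 64                                            # oversampling ratio
n = 4096
x = 0.5 * np.sin(2 * np.pi * np.arange(n) / 1024)   # slow input, |x| < 1

s, bits = 0.0, np.empty(n)
for i in range(n):
    y = 1.0 if s >= 0 else -1.0     # 1-bit quantizer
    bits[i] = y
    s += x[i] - y                   # feedback (delta), then integrate (sigma)

# Crude boxcar decimation: average osr bits per output sample.
decimated = bits.reshape(-1, osr).mean(axis=1)
print(decimated[:8])                # roughly follows the input sine
```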


Figure 6: A sigma-delta ADC operates at a high oversampling rate, allowing the analog filters to move to the digital domain.

Figure 7: This graphic shows the typical architecture of a sigma-delta ADC (top) and the resulting output bitstream for a sinusoidal input (bottom).

Stochastic Converters

A very interesting and ambitious approach is stochastic circuits. Instead of trying to use high-accuracy components (either by making them large or via calibration), stochastic circuits rely on the statistics of large numbers. In fact, the components are purposely made inaccurate. The technique is best illustrated with the example of a Flash ADC.

A classical Flash ADC includes one comparator for each transition between output codes. For a 3-bit ADC, there are thus seven comparators (see Figure 8a). Each comparator has a trigger point that corresponds to a respective code transition. For example, the first comparator has a trigger point at 1/2 LSB, the second at 3/2 LSB, and so on. These trigger points are often defined by a resistor tree (also illustrated in Figure 8a). The outputs of the comparators produce a thermometer code corresponding to the input signal level. For VIN = 0, for example, the outputs are all 0s; for VIN = VREF, the outputs are all 1s. A digital decoder generates the binary output code.
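A behavioral model of this classical structure takes only a few lines of Python (3-bit resolution and an ideal resistor ladder assumed):

```python
# Ideal 3-bit Flash ADC: seven comparators with trigger points at
# 1/2 LSB, 3/2 LSB, ..., 13/2 LSB; their thermometer-coded outputs
# are decoded (here, simply counted) into the binary output code.
import numpy as np

BITS, VREF = 3, 1.0
LSB = VREF / 2**BITS
triggers = (np.arange(2**BITS - 1) + 0.5) * LSB   # resistor-tree taps

def flash_adc(vin):
    thermometer = vin > triggers     # one comparator per code transition
    return int(thermometer.sum())    # decoder: count the ones

print(flash_adc(0.0))     # 0: all comparator outputs low
print(flash_adc(0.4))     # mid-scale code (3 here)
print(flash_adc(VREF))    # 7: all comparator outputs high
```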

As the resolution increases, the trigger points grow closer together. In addition, the comparators must be designed for very low offset. This requires a large area or some form of offset calibration for each comparator.

In a stochastic ADC, the comparator trigger points aren’t set by design (see Figure 8b). Rather, they’re allowed to be both random and large. The comparator outputs therefore won’t follow a thermometer code with increasing signal level. Instead, they’ll turn on in no particular order, as the input signal exceeds each comparator’s individual random offset. Still, the sum of the comparator outputs is a monotonic function of the input signal level.

Figure 8: a) A Flash ADC uses high-precision comparators. b) A stochastic ADC has a similar structure, but requires only small, low-accuracy comparators, at the expense of much larger numbers.

The comparators can be very small because they’re allowed to be quite inaccurate and exhibit large offsets. Those offsets are determined by many factors, such as random variations of the devices’ VT parameter and of their length and width (L and W). Because of the large number of comparators—each with independent deviations—the central limit theorem leads to a probability density function (PDF) of the offsets that closely approximates a Gaussian curve. With an input ramp applied to the comparators, the sum of their outputs follows a cumulative distribution function (CDF), as seen in Figure 9. This function can then be linearized and used as the converter output. The number of comparators required to achieve N-bit resolution is on the order of 2 x 4^N, which is much larger than the 2^N - 1 of a Flash ADC. Because the comparators can use the process’ minimum device size, however, this technique becomes quite interesting in advanced process nodes.
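The behavioral model below illustrates the idea; the offset spread, comparator count, and the use of the standard library’s NormalDist for the inverse CDF are all illustrative choices.

```python
# Stochastic ADC: a large bank of comparators with random Gaussian
# offsets. The count of comparators that fire follows the Gaussian CDF
# of the offsets; inverting that CDF linearizes the transfer function.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
N_BITS = 3
N_COMP = 2 * 4**N_BITS        # ~2 x 4^N comparators (vs. 2^N - 1 for Flash)
SIGMA = 0.3                   # offset spread, V
offsets = rng.normal(0.0, SIGMA, N_COMP)   # one random trip point each

def stochastic_adc(vin):
    ones = int(np.sum(vin > offsets))         # comparators that fired
    cdf = (ones + 0.5) / (N_COMP + 1)         # fraction fired ~ Gaussian CDF
    return SIGMA * NormalDist().inv_cdf(cdf)  # linearize via inverse CDF

for v in (-0.4, -0.1, 0.0, 0.1, 0.4):
    print(v, round(stochastic_adc(v), 3))     # monotonic, roughly linear
```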

Figure 9: The cumulative Gaussian function is representative of the characteristic of stochastic converters.

The semiconductor technology roadmap driven by digital scaling requirements can be applied to analog circuits, such as Universal Serial Bus (USB) physical interfaces and data converters, providing advantages such as smaller area to the SoC integrator. Note that the techniques used to scale these analog circuits differ from digital: they rely on design techniques rather than electronic-design-automation (EDA) tools. In addition, the smaller technologies enable innovation in analog design, such as calibration and digital filters.

Carlos Azeredo Leme is a senior engineer for analog IP at Synopsys. He received the Licenciado degree in electrical engineering and an MSEE from Instituto Superior Técnico, Portugal, and his PhD from ETH Zurich, Switzerland. He is also an assistant professor (professor auxiliar) at Instituto Superior Técnico. Previously, he was co-founder and chief technical officer of Chipidea Microelectronics. Azeredo Leme’s scientific interests are in analog, mixed-signal, and RFIC design, with a focus on low power and low voltage. He has published over 80 articles in books, international journals, and conferences.

Navraj Nandra is the senior director of marketing for the DesignWare analog and mixed-signal IP products at Synopsys. He has worked in the semiconductor industry since the mid-’80s as an analog/mixed-signal IC designer. Nandra holds a master’s degree in microelectronics, majoring in analog IC design, from Brunel University, as well as a post-graduate diploma in process technology from Middlesex University. He has presented at numerous technical conferences on mixed-signal design, analog IP, and analog synthesis/EDA.

