Simulation Falls Short with Asynchronous Clocks

Digital simulation relies on abstract behavioral models of circuits to predict how hardware designs behave. As long as designers adhere to a basic set of design rules, digital simulation is an excellent predictor of silicon behavior. The most fundamental rule that simulation depends on is that the design does not violate the setup and hold constraints specified for clocked elements. This is exactly why extensive timing analysis complements digital simulation. Static timing analysis verifies that, given a particular clock frequency, the setup and hold constraints are adhered to, and therefore, the simulation results are valid.

However, with the asynchronous clocks common in today's chips, designers can't help but violate this basic design rule. Any time data is transferred between asynchronous clock domains, the signals carrying this data will, at some point in time, violate the setup and hold constraints specified for the receiving registers. When this happens, the flip-flops in these registers become metastable—they will not settle to either a logical 1 or 0 within the specified delay for normal operation.

To prevent metastable signals from propagating through the design, designers have devised specific circuits, called synchronizers, to connect asynchronous clock domains. While virtually eliminating the possibility that metastable values will contaminate the design, synchronizers do introduce non-deterministic delays. For example, where a simulation of a common 2DFF synchronizer would predict a two-cycle delay (in terms of the receiving clock), the silicon for this 2DFF can produce, due to metastability, either a one-, two-, or three-cycle delay.
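To make this non-determinism concrete, here is a minimal Python sketch (not RTL, and not any vendor's model) of a 2DFF synchronizer. The `resolve` hook is a hypothetical stand-in for metastable resolution: when an input change lands in the first flop's setup/hold window, the flop may settle to either the old or the new value, stretching the observed latency by a cycle.

```python
def sync_2dff(samples, resolve=None):
    """Push sampled input values through a 2-DFF synchronizer, one
    receiving-clock edge per list element.

    `resolve`, if given, is called whenever the input differs from the
    previous sample (i.e. the change may have landed in the setup/hold
    window); it returns True if the metastable first flop settles to the
    new value, False if it settles to the old one, in which case the
    change is only captured on the following edge.
    """
    ff1 = ff2 = samples[0]
    prev = samples[0]
    outputs = []
    for s in samples:
        d = s
        if s != prev and resolve is not None and not resolve():
            d = prev  # metastable flop resolved to the old value
        ff2, ff1 = ff1, d  # both flops update on the same clock edge
        outputs.append(ff2)
        prev = s
    return outputs

# Deterministic simulation: the new value appears at edge 3.
print(sync_2dff([0, 0, 1, 1, 1, 1]).index(1))                          # 3
# Metastable resolution to the old value: one extra cycle of latency.
print(sync_2dff([0, 0, 1, 1, 1, 1], resolve=lambda: False).index(1))  # 4
```

A plain RTL simulation corresponds to the first call only; the second call shows the extra-cycle outcome that silicon can legitimately produce.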

Therefore, with simulation unable to correctly predict the silicon behavior of synchronizers, designers must complement their simulation and static timing based verification flows with additional capabilities to verify that:

  • All clock domain crossing (CDC) signals have proper synchronizers
  • The design correctly transfers data across these synchronizers
  • The design correctly handles the non-deterministic delays through these synchronizers

Extensive static or structural analysis of the RTL is typically used to verify that the proper synchronizers are in place for all CDCs. Moreover, any complete solution here should also automatically identify the various clock domains, map the clock distribution strategy, and, obviously, recognize a wide range of synchronizer structures.

Once it is determined that the correct synchronizers are in place, designers must verify that data is transferred correctly across them. For most synchronizers, the design must adhere to a particular protocol, generally referred to as a CDC protocol. For example, if a value change must be transferred from a faster to a slower domain, the signal must be kept stable long enough (in terms of the slower clock) for it to propagate through the synchronizer. The best approach is to automatically generate these protocols as assertions when running static analysis. Since these CDC assertions typically specify properties for the logic in the originating clock domain, traditional simulation and formal analysis flows are very effective in verifying that the design obeys these CDC protocols.
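The fast-to-slow stability rule can be sketched as a simple checker. This is an illustrative Python model, not a generated assertion from any tool: it takes a list of (time, value) changes on the crossing signal and flags any value that was not held stable for a minimum number of receiving-clock periods (the `min_cycles` threshold of two periods is an assumed, typical requirement).

```python
def check_stability(changes, slow_period, min_cycles=2):
    """Check a fast-to-slow CDC protocol: every value driven onto the
    crossing signal must stay stable for at least `min_cycles` periods
    of the receiving (slow) clock, so the synchronizer is guaranteed to
    sample it.

    `changes` is a time-sorted list of (time, value) pairs. Returns the
    times of the violating changes (an empty list means the protocol
    was obeyed).
    """
    violations = []
    for (t0, _), (t1, _) in zip(changes, changes[1:]):
        if t1 - t0 < min_cycles * slow_period:
            violations.append(t1)  # previous value was not held long enough
    return violations

# Values held for 25 and 75 time units against a 10-unit slow clock: OK.
print(check_stability([(0, 0), (25, 1), (100, 0)], slow_period=10))  # []
# A value held for only 5 units is not guaranteed to be sampled.
print(check_stability([(0, 0), (5, 1)], slow_period=10))             # [5]
```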

Verifying that the non-deterministic delays are handled correctly is the most challenging task because digital simulation falls short in accurately modeling the non-deterministic behavior of synchronizers. Here's the problem: When two pieces of data have a well-defined timing relationship in clock domain A and are moved through separate synchronizers to clock domain B, the timing relationship between them can no longer be relied upon in domain B. For example, if the high and low bytes of a 16-bit word are transferred through two separate synchronizers, they may not arrive in the receiving clock domain at the same cycle. Regular simulation, since it does not model non-deterministic delays associated with synchronizers, will not find the tricky functional bugs related to such scenarios.
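The two-byte example can be demonstrated in a few lines. The sketch below (hypothetical names, Python rather than RTL) computes what the receiving domain observes at each cycle when the high and low bytes of a word cross through separate synchronizers with different latencies: for one cycle, the receiver sees a mixed word that was never sent.

```python
def crossed_word(word_old, word_new, delay_hi, delay_lo):
    """Value observed in the receiving domain at each cycle when the
    high and low bytes of a 16-bit word cross through two separate
    synchronizers whose latencies (in receiving-clock cycles) may
    differ due to metastability.
    """
    observed = []
    for cycle in range(max(delay_hi, delay_lo) + 1):
        hi = (word_new if cycle >= delay_hi else word_old) & 0xFF00
        lo = (word_new if cycle >= delay_lo else word_old) & 0x00FF
        observed.append(hi | lo)
    return observed

# High byte arrives after 2 cycles, low byte after 3.
seen = crossed_word(0x00FF, 0xFF00, delay_hi=2, delay_lo=3)
print([hex(w) for w in seen])  # ['0xff', '0xff', '0xffff', '0xff00']
```

At cycle 2 the receiver sees 0xFFFF, which is neither the old word (0x00FF) nor the new one (0xFF00); downstream logic that assumes a coherent word has a bug that regular simulation will never exercise.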

To find these types of bugs, designers need a simulation model of the design that actually models the non-deterministic behavior of synchronizers. For example, this functionality can be provided by creating behavioral metastability models that are automatically added to a regular RTL simulation. These models monitor the clocks in the originating and receiving domains, as well as the inputs to the synchronizers, to determine whether each crossing is exposed to metastability.

When a model finds that a metastability condition exists, it pseudo-randomly adjusts the delay (plus or minus one cycle) through the synchronizer to accurately reflect silicon behavior. As a result, the behavior of the simulator is adjusted to accurately reflect the possible behavior of the synchronizer. By combining this methodology with extensive coverage metrics, the designer now has the tools to validate that indeed all possible delays through the synchronizers are exercised, and the functionality of the design is unaffected by the occurrence of metastability.
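The injection-plus-coverage idea can be sketched as follows. This is a simplified Python illustration of the methodology described above, not the interface of any actual tool: each trial perturbs a synchronizer's nominal latency by minus one, zero, or plus one cycle, and a counter records which delays were exercised so the designer can confirm full delay coverage.

```python
import random
from collections import Counter

def run_with_injection(nominal_delay=2, trials=1000, seed=0):
    """Pseudo-randomly perturb a synchronizer's nominal latency by
    -1/0/+1 receiving-clock cycles per trial, as a metastability
    injection model would, and tally which delays were exercised.
    Returns a Counter mapping observed delay -> trial count.
    """
    rng = random.Random(seed)  # seeded for reproducible regressions
    coverage = Counter()
    for _ in range(trials):
        delay = nominal_delay + rng.choice((-1, 0, 1))
        coverage[delay] += 1
    return coverage

cov = run_with_injection()
print(sorted(cov))  # [1, 2, 3] -- all three possible delays exercised
```

A coverage report of this kind is what lets the designer claim that every legal delay through every synchronizer was hit at least once during regression.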

With many of today's designs in wireless, multimedia, computing, and communications using asynchronous clocks to optimize power and performance, leading companies have started to integrate CDC verification as an integral component of their verification flow. In many cases, they had to learn the hard way. Today, however, with quality tools already available, you can get ahead of the curve and prevent an expensive respin due to CDC issues that escaped the traditional verification process.

Rindert Schutten is Product Marketing Manager, 0-In Verification Products, at Mentor Graphics.