Published on September 5, 2008

Timing Diagrams Ease Formal Property Development

Augmenting standard assertion languages to allow the direct transcription of timing diagrams into properties enables the development of an intuitive, compact property set that captures the full DUV functionality.

The combined use of testbenches and formal methods as an approach to module-level verification has rapidly gained ground. Historically, however, the two methodologies have not been easy to integrate into a systematic, dynamic/formal verification approach, because they take quite different verification perspectives.

Here, we describe a systematic formal verification approach that uses the testbench perspective – that of the module-level operations – to verify a module, or any part thereof. Starting from familiar and intuitive timing diagrams that describe the intended behavior of module-level operations, the approach bridges the perspective gap between the testbench and formal verification. It thus facilitates the use of formal verification, both standalone and in combination with testbenches. This operation-based formal verification approach eases and speeds property development, produces a compact, specification-compliant set of properties that is easy to maintain and reuse, and supports systematic, high-coverage formal verification. And you don't have to be an expert in formal verification to use it.

The Perspective Disconnect

What are these quite different verification perspectives? Testbench authors view the functionality of a design-under-verification (DUV) from the perspective of the module's transactions and/or multi-cycle operations described in its micro-architecture specification. Examples are the "read" and "write" operations of a memory controller, or the instructions of a processor as described in its instruction set architecture. In contrast, formal verification has historically used assertions or properties to describe implementation details and/or behavioral aspects of the DUV, such as the causal relation of particular events. For example, a low-level assertion may state that there is never a FIFO overflow, whereas a higher-level property may state that "a request is always granted in no more than a certain number of cycles." Such assertions and properties do not bear an obvious one-to-one relation to the micro-architecture's operation view.
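For illustration, the two styles might be written in SVA as follows. This is a minimal sketch: the signal names (fifo_cnt, FIFO_DEPTH, req, gnt) and the four-cycle grant bound are hypothetical, not taken from any particular design.

    // Low-level, implementation-oriented assertion: the FIFO never overflows.
    assert property (@(posedge clk) disable iff (!rst_n)
      fifo_cnt <= FIFO_DEPTH);

    // Higher-level behavioral property: every request is granted
    // within four cycles (the bound is illustrative).
    assert property (@(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt);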

When testbenches and formal methods are used in combination, their two different perspectives make it difficult to relate the results, and to merge coverage information from dynamic and formal verification into a single, consistent coverage measure for the combined verification. The result is often verification redundancy and/or verification gaps – both of which negatively impact verification quality and productivity.

Testbenches and Timing Diagrams

At the module's external interface, directed tests initiate sequences of DUV operations and observe whether the resulting output behavior is as expected.

Such module-level operations typically require multiple clock cycles to complete; thus the specification often expresses them in the form of timing diagrams. System architects, designers and verification engineers frequently use these diagrams to visualize module-level behavior and establish a common understanding of the DUV's intended functionality.


Figure 1: A timing diagram specifies the intended behavior of a multi-cycle, module-level operation.

A timing diagram of the kind shown in Figure 1 – explained in greater detail below – can be used to describe the entire intended behavior of a particular operation. Stimulus or "cause" conditions are marked in blue; expected responses or "effects" are marked in red; and blank spaces indicate "don't care" values, that is, values that can be set arbitrarily and should not affect the execution of the operation. Using such diagrams, the testbench team can devise directed tests to verify that individual operations and particular sequences of operations execute as intended.

Such directed tests cannot ensure that the module does not behave in an unintended manner. Consequently, the team often spends a great deal of time and effort trying to anticipate anything that can go wrong, and devising tests to stimulate and observe these potential malfunction scenarios. Thereafter, the team may use constrained-random techniques to achieve the desired level of confidence that potential incorrect behavior has been covered.

The ultimate verification challenge is to ensure that all operation sequences produce the desired results. However, the manifold complex interactions between operations – many of which occur only rarely – make it virtually impossible to ensure that the testbench exercises all critical operation combinations. This coverage gap often results in the escape of so-called "corner case" bugs.

Formal Methods and Timing Diagrams

At this point, many verification teams turn to formal methods to improve coverage and to uncover any further unintended behavior. Historically, however, there has often been a lack of guidance on how to write "good" properties that enable the verification team either to systematically increase testbench coverage or to use the properties standalone to verify DUV functionality in a systematic manner. This lack of guidance typically results in large sets of – often low-level – assertions and properties that (at best) describe particular aspects of some operations. In most cases, this ad hoc approach fails to exploit the full power of formal methods. It also adds another level of complexity to the verification task at hand, instead of reducing verification effort.

This is where the operation- and timing-diagram-based approach becomes really valuable. Because they are an intuitive way to specify intended operation behavior, timing diagrams are a natural reference source for writing properties for the formal tool. However, expressing timing diagrams in standardized assertion languages such as SystemVerilog Assertions (SVA) or Property Specification Language (PSL) is generally difficult and error-prone, because these languages are simply not designed for easy capture of complex timing diagrams. The common approach is to write a set of properties that describe individual time-slices – effectively a different property for each cause-and-effect relationship in the timing diagram. The result is a set of concurrent properties that share "property fragments," which makes maintenance and reuse difficult and error-prone in the case of, for example, specification changes. In short, the approach loses the benefit of working with module-level operations.
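As a hypothetical illustration of this time-slice style (the signals are invented for the example), a simple two-phase read operation might be split across two properties, each repeating the shared trigger fragment:

    // Slice 1: the request is issued one cycle after the trigger.
    property p_read_request;
      @(posedge clk) (start && sel) |-> ##1 rd_req;
    endproperty

    // Slice 2: data is valid two cycles after the same trigger.
    property p_read_data;
      @(posedge clk) (start && sel) |-> ##2 data_valid;
    endproperty

If the specification changes the trigger condition, the change must be re-applied, consistently, in every such slice.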

Although SVA is not ideal for the purpose of describing timing diagrams, it is a highly expressive industry-standard language with significant market momentum and cross-product functionality. So, how do we devise an SVA-based formal verification approach that uses the testbench's operation view?

To simplify the capture of timing diagrams as SVA properties, we have developed the Timing Diagram Assertion Library for SVA, called "TIDAL." The library reduces or eliminates transcription errors when developing SVA properties from timing diagrams, cutting both initial effort and rework. It contains a handful of constructs, implemented in plain SVA, which enable the property writer to:

  • Separate an operation's cause and effect in the property description.
  • Define and reference time points in the timing diagram to synchronize the cause and effect parts of the property.
  • Freeze data at pre-defined time points and reference it later in any other part of the property – similar to scoreboard mechanisms in testbenches.

Transcribing operation timing diagrams into properties produces a single, succinct property for an entire multi-cycle operation. Moreover, using the library, the verification team can rapidly implement specification changes.
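The library's exact syntax appears in the figures rather than in the text, but plain SVA can approximate the structure these constructs provide. In the following sketch – all signals hypothetical – the implication operator separates cause from effect, a fixed one-cycle offset stands in for a named time point, and an SVA local variable stands in for the set_freeze construct:

    property op_sketch;
      logic [31:0] frozen_addr;          // stands in for set_freeze
      @(posedge clk)
      // "suppose" part: the operation's trigger, freezing addr_in
      (idle && start, frozen_addr = addr_in)
      |=>  // one cycle later: a stand-in for a named time point
      // "prove" part: the expected response
      rd_req && (rd_addr == frozen_addr);
    endproperty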

Operation Property Verification Using Timing Diagrams

We can best explain the value of working from timing diagrams and module-level operations to formally verify a module (or parts thereof) by using an example, in this case a 32-bit DMA controller as the DUV (see Figure 2).


Figure 2: The challenge is to formally verify a 32-bit DMA controller.

The DMA controller drives the data transfer between RAM and peripherals, using a single bus for the data transfer (master) and command interface (slave) communications. The CPU configures data transfers, which the DMA controller then executes, by writing special function registers that hold the source address, the target address, and the length of the data transfer, as well as a flag indicating an optional parity check.
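A hypothetical layout for these special function registers – the names and widths are invented for illustration, not taken from the actual design – might look as follows:

    // Illustrative only; a separate command register holds the
    // run bit (command[0]) referenced later in the article.
    typedef struct packed {
      logic [31:0] src_addr;   // source address of the transfer
      logic [31:0] dst_addr;   // target address of the transfer
      logic [15:0] length;     // number of data packets to transfer
      logic        parity_en;  // flag enabling the optional parity check
    } dma_sfr_t;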

The DMA controller's operations and their sequencing are shown in the conceptual state diagram (CSD) in Figure 3. After reset, the DMA controller enters a conceptual "idle" state, during which the CPU can read and write the special function registers (read_sfr, write_sfr). When the CPU configures a transfer, the DMA controller starts the transfer (start_transfer) and enters a conceptual "busy" state, where it successively transfers single data packets (operation transfer), possibly interrupted by waiting for the bus (operation wait_for_bus). Upon transfer of the last data packet (operation transfer_last), the DMA controller returns to its idle state. When no transfer is configured, the DMA controller idles (NOP).


Figure 3: The operations and conceptual states of the DMA controller.
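As a compact summary of the CSD – with illustrative encodings only, since the actual RTL implements a more detailed FSM – the conceptual states and operations can be written down as SystemVerilog enumerations:

    typedef enum { CSD_IDLE, CSD_BUSY } csd_state_t;  // conceptual states
    typedef enum { NOP, READ_SFR, WRITE_SFR, START_TRANSFER,
                   TRANSFER, WAIT_FOR_BUS, TRANSFER_LAST } csd_op_t;  // operations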

So, the operations of the DMA controller are multi-cycle transitions between its conceptual states. The expected behavior of these module-level operations is defined in the DMA controller's specification using timing diagrams, which the designer then uses to implement the CSD's high-level view as a more detailed finite state machine (FSM) in RTL.

The timing diagram in Figure 1 describes the entire intended behavior of the transfer_last operation, and is thus ideal for the development of a single property that describes the entire operation. As indicated above, the blue lines specify the operation's trigger condition: when the DMA controller is in the busy state, the transfer length is 1, the run bit (command[0]) is set, and the bus is granted, the operation can be executed.

To keep things simple, it is assumed that the device from which the DMA controller reads and the device to which it writes each acknowledge a data access within two clock cycles, as shown by the values of the rdy_i signal. The red lines show the expected values of the output signals when the operation executes. One cycle after the start of the operation (time point t_req), the controller begins the read phase from the source address, which lasts until the rdy_i signal indicates that valid data is available (time point t_read_rdy). The write phase to the target address then starts, terminating when the receiving device acknowledges reception of the data, also by setting rdy_i (time point t_write_rdy). One cycle after the data transfer completes, the DMA controller returns to its idle state and clears the run bit, keeping the registers for the source and target addresses stable.

Using TIDAL, this timing diagram can be directly described in SVA as shown in Figure 4.


Figure 4: A single property describes the DMA's entire transfer_last operation.

The green statements are library constructs, while the dark blue statements are written in plain SVA. As in timing diagrams, the library enables the clear separation of the cause and effect statements of an operation, corresponding to the suppose and prove parts of the property. It also enables the insertion of time points (t_req, t_read_rdy, t_write_rdy) that can be referenced in both the suppose and prove parts of the property to synchronize cause and effect. And it provides a means to freeze data values in the suppose part (the set_freeze construct) for later reference in any part of the property – similar to scoreboarding in testbenches.
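Since Figure 4 itself is not reproduced here, the following plain-SVA sketch approximates what such an operation property expresses, under the article's two-cycle acknowledge assumption. All signal names (busy, length, command, grant, rdy_i, src, dst, rd_o, wr_o, addr_o) are hypothetical, and local variables plus fixed cycle offsets again stand in for the library's set_freeze construct and named time points:

    property transfer_last_sketch;
      logic [31:0] f_src, f_dst;
      @(posedge clk)
      // suppose: busy state, last packet pending, run bit set, bus granted;
      // freeze the source and target addresses for later reference
      (busy && length == 1 && command[0] && grant,
       f_src = src, f_dst = dst)
      |=>  // t_req: one cycle after the start of the operation
      (rd_o && addr_o == f_src) [*2]      // read phase ...
      ##0 rdy_i                           // ... until rdy_i (t_read_rdy)
      ##1 (wr_o && addr_o == f_dst) [*2]  // write phase ...
      ##0 rdy_i                           // ... until rdy_i (t_write_rdy)
      ##1 (!busy && !command[0]           // one cycle later: idle, run bit clear
           && src == f_src && dst == f_dst); // address registers stable
    endproperty

The point of the sketch is its shape: one property for one complete operation, with cause and effect clearly separated.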

Note that the plain SVA statements in the property use a simple yet pragmatic subset of SVA – sequences and expressions – that is sufficient to express the trigger conditions and value sequences of DUV signals. The plain SVA fragments are thus kept simple, avoiding advanced, semantically more involved SVA constructs and making property development less error-prone.

When the transfer_last property shown in Figure 4 is formally proven to hold on the RTL implementation, the transfer_last operation is known to produce the intended results – always. In the event that the transfer of the last data packet is corrupted by, for example, some particular condition or behavior in the environment, formal verification of the transfer_last property will detect this error situation. The formal tool generates a counterexample that identifies this situation, enabling the verification team to debug the design. Such comprehensive verification and debug, of course, are the primary strengths and advantages of formal verification, derived from its exhaustive nature.

The Power of Operation Property Checking

Operation properties actually "play the role" of testbench components. The "suppose" statements replace the driver component, while the "prove" statements replace the monitors, checkers, scoreboards, and potential reference models. The exhaustive nature of formal verification eliminates the need for stimulus generators, coverage collectors and test control logic, and ensures that there is not a single situation that can cause the operation to perform in an unexpected way.

But this analogy does not extend to effort and effectiveness. A thorough simulation to verify the transfer_last operation would have to ensure that the multifarious ways of setting the DMA controller's inputs during execution of the operation cannot affect the operation in any unintended manner. Formal verification covers all these situations and satisfies – or exceeds – any coverage measure that can possibly be achieved using dynamic verification.

Now, extrapolate the verification challenge from a relatively simple DMA controller to a complex processor. Using module-level operations, the verification team needed only 40 operation properties to completely describe the behavior of a high-performance, configurable 32-bit RISC protocol processor with 40 application-specific instructions, a seven-stage pipeline, and fine-grained multi-threading.

Conclusion

Timing diagrams are commonly used to express the expected behavior of a design's operations. They are familiar to system architects, designers and verification engineers, and are thus an ideal starting point to ease and speed property development for formal verification, exploiting the module-level operation view used in testbenches. Augmenting standard assertion languages such as SVA with a few simple library constructs that allow the direct transcription of timing diagrams into properties enables the development of an intuitive, compact property set that captures the full DUV functionality. Moreover, the resulting property set both simplifies detection of verification gaps and delivers high-coverage verification.

Timing diagrams described as operation properties also bridge the gap between the informal specification and the formal specification. The proven operation properties constitute an easily reviewable "golden" reference model for the whole design team throughout design and verification.

This formal approach can be applied standalone, as demonstrated by many successfully completed verification projects. However, because it uses the same entities as testbenches – timing diagrams and the high-level operation view of the DUV – it also supports the combined application of formal verification and testbenches. For example, one application would be to start by writing a low-effort, lightweight directed testbench to see the DUV working, to achieve a decent RTL implementation quality, and to use this testbench as a demonstrator for designers who want to integrate the DUV into their system. After fixing potential errors discovered by the directed testbench, the user can deploy this formal verification approach either to verify particular critical DUV operations, or to ensure very high-coverage verification of the entire DUV.

And in both use cases, timing diagrams greatly simplify the verification process.

Dr. Michael Siegel, director of product marketing at OneSpin Solutions, defines the company's product roadmaps and leads its product marketing efforts. Prior to OneSpin, Michael gained 12 years of experience in functional and formal verification and industrial R&D, in managerial and technical positions at Infineon's Design Automation Department and Siemens Corporate Technology. He was also a senior scientist at the Weizmann Institute in Israel, under the chair of Turing Award winner and formal verification inventor Prof. Amir Pnueli. Michael holds a summa cum laude Ph.D. in computer science from Kiel University, Germany, and has authored 35 articles on property checking and equivalence checking.

Jörg Bormann defines OneSpin's technology vision, sets its technology roadmap, and oversees development of its breakthrough formal verification products. With 18 years of experience in dynamic and formal verification techniques, Jörg invented OneSpin's completeness checker, developed the GapFree Verification Process, and defined the company's flagship 360 MV product. Prior to joining OneSpin, Jörg served in various management and hands-on roles at Siemens and Infineon. He began his career at Siemens AG Corporate Research Labs. Jörg received a master's degree in mathematics applied to electrical engineering from the University of Karlsruhe, Germany.


