Published on May 11th, 2011
It has been repeated countless times by experts and followers alike: Constrained-random verification is better suited for tackling the exploding state space of modern designs than directed testing. With all the promise and after years of deployment, though, teams still find it difficult to realize the benefits of constrained random without a plan for absorbing the risk that comes with it. How do teams mitigate those risks? They can start by avoiding the either/or choice of directed-vs.-constrained-random verification and start using both.
The Old Days Of Directed Testing
Directed testing is a straightforward approach to functional verification: Pick a feature, write a test to verify that feature, and repeat until done. The one-to-one correlation between feature and test is nice because it brings a level of visibility into the design (i.e., the tests that pass show which features have been verified). It also provides confidence and predictability because project managers can use past progress to predict future milestones and delivery.
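To make the directed style concrete, here is a minimal sketch in Python standing in for whatever testbench language a team actually uses. The toy ALU model, the function names, and the features chosen are all illustrative assumptions, not part of any real flow.

```python
# Directed style: one hand-written test per feature.
# The "DUT" here is a toy 8-bit ALU model, purely for illustration.

def alu(op, a, b):
    """Toy stand-in for the design under test."""
    if op == "add":
        return (a + b) & 0xFF
    if op == "sub":
        return (a - b) & 0xFF
    raise ValueError(op)

def test_add_wraps_at_byte_boundary():
    # Pick the feature (8-bit wraparound on add), drive exactly the
    # stimulus that exercises it, check the result.
    assert alu("add", 0xFF, 0x01) == 0x00

def test_sub_basic():
    assert alu("sub", 0x05, 0x03) == 0x02

test_add_wraps_at_byte_boundary()
test_sub_basic()
```

Each passing test maps directly back to a named feature, which is exactly where the visibility of directed testing comes from.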
Even though the approach seems to offer a high degree of visibility and predictability, however, directed testing has fallen out of favor in functional verification. It’s easy to see why. Ever-increasing design size and complexity translates into an exponential increase in the number of directed tests required for verification. It’s no surprise that directed testing is no longer an efficient way to verify the state space of many modern designs.
Constrained Random Is Better…Or At Least It Seems Better
Constrained-random testing is widely seen as an improvement over directed testing. Writing and maintaining large, directed test suites can be incredibly tedious. With a little experience, verification engineers can more efficiently build constrained-random stimulus and a corresponding coverage model that’s equivalent to several directed tests. Furthermore, constrained-random verification takes a shotgun approach. For little or no extra effort, a team verifies more than the state space at which it’s aiming. (It’s worth noting, however, that the additional state space remains effectively undefined without corresponding coverage goals.) In short, constrained random is a great way to get more bang for the verification buck—or so it seems.
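The contrast with directed tests can be sketched in a few lines. The example below is a Python stand-in for what would in practice be SystemVerilog rand variables, constraint blocks, and covergroups; the toy transaction fields and coverage bins are illustrative assumptions.

```python
import random

# Constrained-random style: randomize stimulus within constraints and
# measure what was exercised with a coverage model.

def random_txn(rng):
    # "Constraints": op is legal, operands stay in the 8-bit range.
    return {"op": rng.choice(["add", "sub"]),
            "a": rng.randrange(256),
            "b": rng.randrange(256)}

def run(seed, n_txns):
    rng = random.Random(seed)
    coverage = set()  # coverage model: which (op, flag) bins were exercised
    for _ in range(n_txns):
        txn = random_txn(rng)
        if txn["op"] == "add":
            flag = txn["a"] + txn["b"] > 0xFF   # carry out
        else:
            flag = txn["a"] < txn["b"]          # borrow
        coverage.add((txn["op"], flag))
    return coverage

# One seeded run exercises cases that would each need their own directed
# test, but only the bins defined above count as "verified" state space.
coverage = run(seed=1, n_txns=10_000)
```

Note that the stimulus wanders far beyond the four bins defined here; without corresponding coverage goals, that extra state space is exercised but never accounted for.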
Yet the efficiency and quality improvements of constrained-random testing do come at a price. Constrained-random tests require a more complex self-checking environment, which lengthens environment-development time. Longer development times have long been accepted as part of the package when moving to constrained random. The danger of acceptance, however, is that longer development times also mean a delay in initial results (i.e., early confirmation that the design is performing as expected). Delay allows risk and uncertainty to persist.
The big downside to constrained-random testing is its uncertainty relative to directed testing. That uncertainty is a plague to many functional-verification teams, which scramble weekly to define progress for management. Which tests are passing and what features are working? Answers to those questions can be hard to nail down—especially early in the verification effort. While constrained-random tests are a very efficient way to find bugs, they’re lousy at providing a clear picture for the parts of the design that are bug-free. With longer development times and early results that can be ambiguous at best, constrained-random testing still has its issues.
A Hybrid Approach With The Best Of Both Worlds
To visualize the tradeoff between using a directed or constrained-random approach, consider how efficiency and visibility vary (see Figure 1). Directed testing offers high visibility with its test-to-feature relationship. It also provides modest efficiency, due to the effort required to write tests. In contrast, constrained-random tests offer higher efficiency through the shotgun nature of the stimulus—but at the cost of relatively poor visibility.
Figure 1: Directed testing offers high visibility while constrained-random tests provide high efficiency.
Now, consider a hybrid approach that leverages the benefits of both. The key to a hybrid approach is to use tests that are best suited to where the team stands in the development cycle. In general, directed tests should be used to increase visibility near the beginning of development. Here, early results bring certainty and reduce risk. Once a sane baseline has been established, constrained-random tests can be used to drive coverage closure as efficiently as possible. To best leverage a hybrid directed-constrained-random verification approach, think about verification as a series of three phases:
Phase I – Establishing a Baseline with Directed Tests
The goal of the first phase is to gain early visibility into the quality of the design while establishing a solid and sane baseline to support subsequent development. Directed tests are predominantly used in this phase because of their shorter development times and narrow scope. Directed tests don’t require implementation of a coverage model or a completed design or verification environment. Results also are easy to quantify. Tests are run frequently to qualify additions, thereby ensuring the stability of the code base.
Progress in Phase I is measured in terms of features that correspond to passing tests.
Phase II – The Directed-to-Constrained-Random Transition
Phase II is a transition phase that builds on the confidence gained through early directed testing. In Phase I, sanity threads through the design are verified using directed tests. Phase II further stresses those threads. Constrained-random tests are used, although constraints are relatively tight to ensure that tests stay targeted. Targeted tests cover state space more efficiently than directed tests, while the tight constraints set functional boundaries that keep debug manageable.
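The tightening described here can be sketched with a parameterized generator whose constraints are narrowed to the already-verified thread. Again, this is a Python stand-in; the transaction fields, ranges, and the "add"-only thread are illustrative assumptions.

```python
import random

# Phase II sketch: the same randomization machinery, but constraints kept
# tight so stimulus stays near the thread verified by directed tests.

def random_txn(rng, ops, max_operand):
    # The constraint knobs (ops, max_operand) set the functional boundaries.
    return {"op": rng.choice(ops),
            "a": rng.randrange(max_operand + 1),
            "b": rng.randrange(max_operand + 1)}

rng = random.Random(7)

# Tight constraints: only the "add" thread, small operands near the
# directed-test values. Random variation, but failures stay easy to localize.
txns = [random_txn(rng, ops=["add"], max_operand=15) for _ in range(100)]

assert all(t["op"] == "add" and t["a"] <= 15 and t["b"] <= 15 for t in txns)
```

The payoff is that when a Phase II test fails, the constrained boundaries already tell the debugger roughly where in the design to look.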
Progress in Phase II is measured in terms of features and functional coverage results from passing tests. Emphasis shifts to the latter as the number of constrained-random tests grows.
Phase III – Exhaustive Testing with Random Tests
Phase III starts when the team is confident that the design is high-quality. Many features have been verified at least minimally—and critical features more rigorously. As a result, the design can reasonably be expected to handle the full scope of legal stimulus. Random tests written in this phase are only lightly constrained as the team works toward exhaustive coverage of the state space. Teams might also consider intelligent testbench tools to exercise difficult corner conditions as efficiently as possible. Additionally, a second class of directed tests is written as needed for focusing on error conditions and/or hard-to-reach corners of the state space. Writing directed tests for this purpose is already standard practice for most teams that use a constrained-random approach.
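A Phase III run might look like the following sketch: the generator's constraints relaxed toward the full legal space, run until a toy coverage model closes. Everything here (bin definitions, transaction limit, seed) is an illustrative assumption; any bins still unhit at the end are candidates for the second class of directed tests.

```python
import random

# Phase III sketch: lightly constrained random stimulus, run until the
# coverage model closes or a transaction budget is exhausted.

# Toy coverage model: every op crossed with operand-at-edge true/false.
BINS = {(op, a_edge) for op in ("add", "sub") for a_edge in (False, True)}

def close_coverage(seed, max_txns=50_000):
    rng = random.Random(seed)
    hit = set()
    for n in range(1, max_txns + 1):
        op = rng.choice(("add", "sub"))   # light constraint: any legal op
        a = rng.randrange(256)            # full legal operand space
        hit.add((op, a in (0x00, 0xFF)))  # sample the coverage model
        if hit == BINS:
            return n                      # coverage closed after n txns
    return None  # leftover bins: write directed tests for these corners

txns_needed = close_coverage(seed=3)
```

Note how inefficient random stimulus is at the edges: the boundary bins are hit only a fraction of a percent of the time, which is exactly why intelligent testbench tools and targeted directed tests earn their keep in this phase.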
Phase III progress is described in terms of functional-coverage results. The status of early directed tests also might be included, provided they add value. (It’s likely that early directed tests become redundant, though they remain valuable as part of a continuous regression and for isolated debug.)
How should one visualize progress using a hybrid approach to verification? Consider it in the context of a graphic that has long been used to illustrate the difference between directed and constrained-random testing. Following an initial environment-development effort, progress with directed testing is typically shown as more or less linear. Constrained-random testing is normally shown as having a significantly longer development time followed by a faster rise to coverage closure. The point of the comparison is to show that constrained random reaches 100% feature coverage faster than directed testing (see Figure 2). The constrained-random curve as shown represents the ideal case, however. As discussed, it's often compromised by poor visibility and ambiguous results.
Figure 2: A hybrid approach that combines both directed and constrained-random testing exhibits characteristics of both curves.
A hybrid approach that combines the advantages of both directed and constrained-random testing exhibits characteristics of both curves. Initial directed tests provide early results. Although the random tests added in Phase II don't exhibit the same steep rise as an ideal constrained-random approach, they accelerate the rate at which features are verified. They also carry less risk, due to their targeted nature. Progress during the third phase of a hybrid approach would be very similar to pure constrained random because the effort is effectively the same between the two.
A hybrid approach to functional verification requires a comparable overall effort relative to a pure constrained-random approach. Its advantage over pure constrained random is added visibility and reduced risk made possible through the use of directed tests.
Two Are Better than One
Too many people tend to think of directed testing as archaic and constrained-random testing as its “no-brainer” successor. That black-and-white comparison is a poor way to think about these two approaches. Constrained-random tests are a great tool for addressing the inherent productivity limitations of directed testing. Directed tests—especially early in development—are a great hedge against the uncertainty brought about by constrained-random verification. Both have a place in modern functional verification. So instead of picking one or the other, teams should be leveraging the best of both—combining their strengths to deliver a better end result.
Neil Johnson has been working in ASIC and FPGA development for more than 10 years. He currently holds the position of principal consultant at XtremeEDA Corp., a design-services firm specializing in all aspects of ASIC and FPGA development. Johnson is co-moderator for AgileSoC.com, a site dedicated to the introduction of agile development methods to the world of hardware development. He maintains a blog at http://agilesoc.wordpress.com.