Published in Chip Design Magazine

Automate and Control the Functional-Verification Process

Scalable coverage-driven verification automates the verification process while keeping human interaction to a minimum.
These days, there's a lot of talk about electronic-design-automation (EDA) verification tool platforms, languages, and methodologies. For any engineer or verification manager who has to get the job done in the nanometer silicon era, however, those are just minor pieces of the overall solution to a bigger problem. It is the process that is becoming ever more important to successful design projects. In verification, for example, speeding up the platform infrastructure (e.g., simulation speed) yields at most a 20% productivity improvement. In contrast, automating the verification process (e.g., time to verification closure) opens up multiple areas for >10X improvements. In this context, "verification process" refers to the capability to fully understand the size of the verification problem right from the beginning. The term also encompasses the automation of the tasks needed to reach completion (i.e., the guaranteed functionality of the specified design features).

Along with automation, efficient control mechanisms must be available. The goal of improving the verification process is to keep manual interaction to a minimum. This reduction of the human factor enables full scalability of the verification process. As a result, the complexity constraints of any given verification problem can be met. The presented verification process is based on the principles of scalable coverage-driven verification (SCDV), which has already become a well-accepted verification methodology in the EDA industry.

For the efficient usage of a scalable-coverage-driven-verification methodology, a complete verification plan has to be developed up front. In contrast to traditional verification approaches, this plan clearly separates the verification goals (in this context, the features that need to be checked for a given design) from the verification tools and methodologies (see Figure 1). In a series of real-life project engagements, a verification planning process has been developed. It is based on a standardized quality-management process from QS9000/ISO9001 known as Design Failure Mode and Effects Analysis (DFMEA). The DFMEA process collects inputs from all engineering disciplines that are involved in the design and qualification of the device, mainly concept, design, software, verification, test, and application engineers.

Figure 1
Figure 1: This generic verification plan, which is based on Microsoft Word, uses the vPlanning process.

To get to a complete verification plan, the input from these engineers has to be collected in a tool-independent format. Otherwise, non-tool experts could easily be disconnected and unable to contribute to a complete verification plan. The plan would then depend heavily on the know-how of individual tool experts, who might not be full experts on the design itself. Such an approach would carry a high risk of omitting features that then aren't verified properly. In addition, the priorities for the various verification tasks would only reflect the understanding of individual engineers and might not match the real project requirements.

The starting point of the newly developed verification planning process is a tool-independent verification plan, which can be dumped into a machine-readable format later. Essentially, it collects the inputs from all of the engineering experts who contributed to the analysis of the functionality provided by the design. Requirements for functional robustness also are described. Robustness aspects are usually outside the scope of the underlying design specification, or they aren't described in great detail.

STRUCTURED PLAN
The verification plan is structured into definitions of functional groups (buses, protocols, interfaces, etc.). Each group lists all of the functional features that must be provided by the design in order to match its specification (burst handling, error control, etc.). The feature list itself is extended by a detailed description that makes the verification plan more readable. Intentionally, no scenarios are listed for testing certain features. This keeps the verification plan fully tool- and/or methodology-independent.
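To make this concrete, such a tool-independent plan can be captured in a simple hierarchical data structure. The following minimal Python sketch uses hypothetical group and feature names; the planning process itself doesn't prescribe any particular format:

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """One functional feature to be verified, plus its readable description."""
    name: str
    description: str

@dataclass
class FunctionalGroup:
    """A functional group (bus, protocol, interface, ...) with its feature list."""
    name: str
    features: list[Feature] = field(default_factory=list)

# Hypothetical example content; a real plan collects input from all disciplines.
plan = [
    FunctionalGroup("bus protocol", [
        Feature("burst handling", "All legal burst types complete without data loss."),
        Feature("error control", "Error responses are signaled and recovered correctly."),
    ]),
]
```

Note that the structure deliberately carries no test scenarios, which keeps it independent of any tool or methodology.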

After the list of features in the verification plan has been completed, the verification team analyzes the individual tools and metrics that could functionally prove the various feature items in the most efficient way. Here, the significance of a so-called "total-coverage" perspective on a given verification problem becomes clear. With total coverage, all of the individual quality metrics that are used during the verification process contribute to the overall completion status of each listed functional aspect in the verification plan. The calculation and tagging of the completion status is done automatically. The quality metrics most commonly used during the verification process are:

  • HW code coverage
  • HW functional coverage
  • HW assertion coverage
  • HW formal coverage
  • (SW code coverage)
  • (SW functional coverage)
  • ...

Using this total coverage perspective, the verification team can instrument all of the listed functional features of the initial verification plan with a selection of quality metrics. Figure 2 shows such an instrumentation of the verification plan in an abstract way. For the first feature item, for example, a combination of code, functional, assertion, and formal coverage has been defined. In contrast, feature item #N only depends on code and formal coverage. The level of completion of the individual tool metrics contributes to the completion tag of each individual feature in the verification plan. The accumulation of all of the per-feature completion tags results in the total completion tag. That tag provides a high-level indication of the current status of the verification as a whole.

Figure 2
Figure 2: This illustration highlights the principles of total coverage instrumentation in the verification plan.
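Conceptually, each per-feature completion tag aggregates the completion levels of the metrics selected for that feature, and the total tag accumulates over all features. The following minimal Python sketch assumes a simple unweighted average; the metric values and the weighting scheme are illustrative assumptions, not something mandated by the methodology:

```python
# Per-feature metric completion as fractions between 0.0 and 1.0.
feature_metrics = {
    "feature #1": {"code": 0.92, "functional": 0.75, "assertion": 0.80, "formal": 1.00},
    "feature #N": {"code": 0.88, "formal": 0.60},  # instrumented with two metrics only
}

def feature_completion(metrics: dict[str, float]) -> float:
    """Completion tag of one feature: the mean over its instrumented metrics."""
    return sum(metrics.values()) / len(metrics)

def total_completion(features: dict[str, dict[str, float]]) -> float:
    """Total completion tag: the accumulation of all per-feature tags."""
    return sum(feature_completion(m) for m in features.values()) / len(features)

print(f"total completion: {total_completion(feature_metrics):.1%}")
```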

The automatic back-annotation of total coverage data into the machine-readable verification plan always gives the most up-to-date view of the overall verification status. Examples of such data include functional coverage, code coverage, and passing formal proofs. From a project-management point of view, the mapping between individual tool metrics and functional features in the verification plan also can be used to do more efficient resource planning during the verification process. The back-annotation capabilities between the verification environment and the plan also provide better transparency in an area of the verification process that was formerly difficult to manage. In a simplified representation of the verification process, two main phases can be identified (see Figure 3):

  1. Implementation/setup phase
  2. Simulation/execution phase

Figure 3
Figure 3: A simplified version of the verification process identifies two main phases.

All of the coverage metrics addressed so far focus on the second phase, simulation/execution. In this project-verification phase, they allow proper progress tracking and status analysis. The typical coverage measurements for this phase are shown in Figure 4. It can be seen that projects 1 and 3 are still at an early verification stage (environment bring-up), whereas project 2 is fully instrumented and running in parallel-regression mode (significant progress in coverage closure). The verification perspectives in this context refer to 100% completion status at a given milestone with respect to a predefined set of coverage goals. Those perspectives, which can be defined within the verification plan, are usually connected to design steps like RTL code freeze for module/system or tape-out.

Figure 4
Figure 4: This chart provides quick visualization of the total coverage closure for selected verification projects.
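A perspective, in other words, is the subset of coverage goals that must read 100% complete at a given milestone. The following Python sketch illustrates such a check; the milestones, goal sets, and coverage values are hypothetical:

```python
# Hypothetical perspectives: milestone -> coverage goals that must be complete.
perspectives = {
    "RTL code freeze (module)": ["code", "assertion"],
    "tape-out": ["code", "functional", "assertion", "formal"],
}

# Hypothetical current completion levels, back-annotated from the tools.
current = {"code": 1.0, "functional": 0.93, "assertion": 1.0, "formal": 0.78}

for milestone, goals in perspectives.items():
    reached = all(current[goal] >= 1.0 for goal in goals)
    print(f"{milestone}: {'reached' if reached else 'not reached'}")
```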

In contrast to the latter execution phase, almost no clear metrics existed for the initial implementation/setup phase. Information on progress and actual status was heavily dependent on the feedback of individual engineers and/or required periodic code/implementation reviews. Though useful, such reviews are time consuming and have to be used carefully with respect to the overall project time. In most cases, any delay during this initial bring-up phase directly impacts the overall project-verification time.

A new kind of tracking capability has been introduced for the implementation/setup phase. This capability is based on the presented verification planning process. It allows the automated tracking of the actual implementation status as well as the progress that has been made during the observation period. Yet it doesn't cause any overhead to the verification team.

As part of the binding step between the verification environment and the machine-readable verification plan, the number of functional features that are bound to implemented coverage definitions in the verification environment is reported. The inverse data set also is reported: the number of coverage points that contribute to the completion status of each feature in the verification plan. At the end of the implementation phase, all of the features in the verification plan must be bound to their counterparts in the coverage model of the verification environment.
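Both reported data sets fall out directly from the feature-to-coverage binding. The following Python sketch assumes the binding is available as a simple mapping from feature names to contributing coverage points; the structure and names are hypothetical:

```python
# Hypothetical binding: feature name -> coverage points implemented for it.
binding = {
    "burst handling": ["cov_burst_len", "cov_burst_type", "assert_burst_term"],
    "error control": ["cov_err_resp"],
    "retry handling": [],  # planned feature, not yet bound
}

bound_features = sum(1 for points in binding.values() if points)
coverage_links = sum(len(points) for points in binding.values())
print(f"bound features: {bound_features}/{len(binding)}")
print(f"coverage links: {coverage_links}")

# At the end of the implementation phase, no feature may remain unbound.
unbound = [name for name, points in binding.items() if not points]
if unbound:
    print(f"still unbound: {unbound}")
```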

Figure 5
Figure 5: Implementation coverage data taken during the bring-up of a verification environment enables tracking of real-life projects.

Figure 5 shows implementation coverage data that was taken during the bring-up of a verification environment in a real-life project. The graphs for "Bound Features" and "Coverage Links" indicate good initial progress on the implementation work for the verification environment. Without knowing the details of the environment, it's clear that the verification team made a drastic change in its verification setup in the fourth implementation week (see marker A). Cross-checking with the project team indicated that the drop in the number of coverage points in the "Total Coverage Grid" graph was caused by the adoption of an existing coverage model provided by re-used eVCs. Some of the predefined coverage definitions were potentially of no interest for the given verification tasks. In any case, detecting such a drop requires background information from the verification team in order to understand the current verification status (and the potential risk of oversimplifications in the coverage grid).

After the sixth week, the implementation coverage graph didn't show any progress (see marker B). Cross-checking with the team indicated that it wasn't able to continue the development of the verification environment (here, improving the coverage grid). Instead, the team was tied up in debug work. Based on this coverage information, it's now possible to better plan and manage the verification process itself.
It also is possible to determine the achievable and de-facto achieved functional design quality at any time in the verification process.
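Anomalies like those at markers A and B can be flagged automatically from the periodic implementation-coverage samples. The following Python sketch uses made-up weekly grid sizes; the detection rules are illustrative assumptions:

```python
# Hypothetical weekly size of the total coverage grid (number of coverage points).
grid_sizes = [120, 310, 480, 350, 520, 610, 610, 610]

for week in range(1, len(grid_sizes)):
    delta = grid_sizes[week] - grid_sizes[week - 1]
    if delta < 0:
        # A marker-A-style drop, e.g. adoption of a re-used coverage model.
        print(f"week {week + 1}: grid shrank by {-delta} points -- review with the team")
    elif delta == 0:
        # A marker-B-style stall, e.g. the team tied up in debug work.
        print(f"week {week + 1}: no implementation progress -- check resource allocation")
```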

So far, the creation of a machine-readable verification plan and its usage for status analysis in the bring-up phase of the verification environment have been discussed. One example of an engine for automating the verification execution phase is Cadence's vManager. This tool is an embedded part of the overall verification process (see Figure 6). It reads the verification plan, dispatches a specified number of regression jobs, and back-annotates the relevant coverage results from all of the tools involved in the verification plan. In this context, vManager serves as a verification cockpit that provides a consolidated view of all metrics/tools used in the functional-verification process (e.g., code coverage, formal properties, and compliance tests). It also provides the functionality to handle the entire regression-run management.

Figure 6
Figure 6: Here is an example of an automated verification tool that helps to enforce a good verification process.
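vManager's actual interfaces aren't described here; purely as a conceptual illustration, the following Python sketch shows the kind of dispatch-and-consolidate loop such a verification cockpit automates. The job model and result format are hypothetical stand-ins:

```python
import concurrent.futures
import random

def run_regression_job(seed: int) -> dict:
    """Stand-in for launching one constrained-random simulation run."""
    # A real flow would invoke the simulator here; this fakes a result.
    return {"seed": seed,
            "passed": random.random() > 0.1,
            "new_cov_points": random.randint(0, 5)}

def run_regression(num_jobs: int) -> None:
    """Dispatch jobs in parallel and consolidate the results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_regression_job, range(num_jobs)))
    failures = [r for r in results if not r["passed"]]
    new_points = sum(r["new_cov_points"] for r in results)
    # The consolidated numbers would then be back-annotated into the plan.
    print(f"{num_jobs} runs, {len(failures)} failures, {new_points} new coverage points")

run_regression(num_jobs=100)
```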

The presented instrumentation of the verification process with advanced planning, tracking, and regression-control facilities provided a significant productivity enhancement within several real-life projects. It makes verification a more controllable project task while enabling more efficient usage of all of the resources involved. As a result, the defined quality goals are more easily met.

This article describes a unique opportunity to rate the efficiency of the overall simulation process. In addition, a metric is provided to cover the simulation progress by using a scalable coverage-driven verification methodology combined with a verification-process-automation tool infrastructure. Compared to conventional directed testing, constrained-random testing in a coverage-driven simulation setup achieved significantly better quality. The analysis of the actual coverage status offers feedback on the quality of the test itself and the design under test in general. The introduction of a tool-assisted verification-planning process lowered the risk of dependencies between the achievable design quality and the contributions of individual engineers. Verification itself became a more predictable process. Based on the presented tools and methodology, full verification-resource scalability has been achieved.
Dr. Clemens Müller is Director of CoreComp, Europe within Cadence Design Systems. He is responsible for the technical realization of strategic sales programs in the area of functional verification. Before joining Cadence, Dr. Müller was responsible for the execution of verification projects at Motorola and Verisity. He holds a PhD in electrical engineering from the Technical University of Aachen, Germany.
