Part of the Chip Design Magazine Network

Posts Tagged ‘Verification’

Part II: The Ecstasy and the Agony of UVM Abstraction and Encapsulation Featuring the AMIQ APB VIP

Thursday, April 23rd, 2015

Part II of our tour through UVM reusability via TLM ports and the factory in the AMIQ APB VIP.

by Hamilton Carter – Senior Editor

Tuning the Receiver
Part I didn’t answer how, (or indeed if), the monitor’s messages make their way over to the write_item_from_mon method in the coverage collector.   Remember, the method is defined in the coverage collector, but apparently called from nowhere.  It gets worse.  Not only is the method not called from within the AMBA/APB package, but apparently it’s not called from the cagt package either!  Go ahead and look.  Take your time, grep through the files… I’ll be here when you get back.

You probably found the definition of the method up in the cagt package, but not a call to the method.  There’s a bit of UVM macro chicanery going on here.  You might have noticed a macro call at the top of the cagt version of the coverage file:

       `uvm_analysis_imp_decl(_item_from_mon)

       //coverage class
       class cagt_coverage #(type VIRTUAL_INTF_TYPE = int,
                             type MONITOR_ITEM = uvm_sequence_item) extends uvm_component;
              //pointer to the agent configuration class
              cagt_agent_config #(VIRTUAL_INTF_TYPE) agent_config;

              //port for receiving items collected by the monitor
              uvm_analysis_imp_item_from_mon #(MONITOR_ITEM,
                     cagt_coverage #(VIRTUAL_INTF_TYPE, MONITOR_ITEM)) item_from_mon_port;

Interestingly, the argument to the `uvm_analysis_imp_decl macro contains the last few underscore-delimited terms of the method we’re looking for: ‘_item_from_mon’.  A little further down, notice that the partial phrase turns up again where the ‘port for receiving items…’ is declared.  The macro sets up a subclass of a TLM analysis port that’s specifically named uvm_analysis_imp* where the * is replaced by the macro’s argument.  Within that subclass, unseen by mortal code-browsing eyes, the macro also sets up the call to the coverage collector’s input method ‘write_item_from_mon’, which is defined as write* where the *, you guessed it, is once again replaced by the argument to the macro: ‘_item_from_mon’.
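To make the chicanery concrete, here’s a simplified sketch of the kind of class the macro generates.  This is a paraphrase of the UVM library’s expansion, not a verbatim copy—the real macro also wires up port naming and interface-mask bookkeeping:

```systemverilog
// Simplified sketch of what `uvm_analysis_imp_decl(_item_from_mon) produces.
// The actual UVM macro expansion is more elaborate; see the UVM source.
class uvm_analysis_imp_item_from_mon #(type T = int, type IMP = int)
      extends uvm_port_base #(uvm_tlm_if_base #(T, T));
    local IMP m_imp;  // the component that owns this implementation port

    function new(string name, IMP imp);
        super.new(name, imp, UVM_IMPLEMENTATION, 1, 1);
        m_imp = imp;
    endfunction

    // Incoming 'write' broadcasts are forwarded to the owner's
    // write_item_from_mon method -- the suffix comes straight from
    // the macro's argument.
    function void write(T t);
        m_imp.write_item_from_mon(t);
    endfunction
endclass
```

That forwarding `write` function is the hidden caller we were grepping for: the monitor broadcasts via `write`, and the generated port turns it into a `write_item_from_mon` call on the coverage collector.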

Connecting the Components
OK, now we’ve located the pertinent communications methods, but where are the two ports attached to each other?  Is magical code automatically created that attaches the ports under the sheets?  Nope!  Look in the agent file within the cagt package.  There, you’ll find both the instantiation code for the monitor and the coverage collector, as well as the code that connects the two:

       if(coverage != null) begin
              coverage.agent_config = agent_config;
              monitor.output_port.connect(coverage.item_from_mon_port);
       end

There, everything is well-explained… except for one last little detail.  Remember the cagt package doesn’t know a thing about the APB monitor or the APB coverage collector, nor should it.  It just sits back and happily and dumbly connects an abstracted monitor, (that doesn’t do anything), to an abstracted coverage collector, (that also doesn’t do anything).  Getting the environment to actually work requires the UVM factory to pull a switcheroo of object types using set_inst_override at the last minute in the APB agent file:

       function new(string name = "amiq_apb_agent", uvm_component parent);
              super.new(name, parent);

              cagt_monitor #(.VIRTUAL_INTF_TYPE(amiq_apb_vif_t),
                     .MONITOR_ITEM(amiq_apb_mon_item))::type_id::set_inst_override(
                     amiq_apb_monitor::get_type(), "monitor", this);

              cagt_coverage #(.VIRTUAL_INTF_TYPE(amiq_apb_vif_t),
                     .MONITOR_ITEM(amiq_apb_mon_item))::type_id::set_inst_override(
                     amiq_apb_coverage::get_type(), "coverage", this);
       endfunction

 

So, there you have it.  It took a bit of up-front planning—mostly defined by the UVM architecture—and a bit more work to dig through the code, (the first time anyway). Here’s what we got in return:

  1. Using the cagt package as a base, we never have to wire monitors to coverage collectors again.  We can build subclasses of cagt_monitor and cagt_coverage that add our specific code.  The banal connection code is executed automatically in the cagt_agent.
  2. Again with cagt: we never have to instantiate TLM analysis ports in either our monitors or coverage collectors ever again.
  3. Our monitor has no ties to the rest of the environment.  It needs its cagt base class, but it knows nothing of the specific objects it’s attached to, or for that matter, who’s doing the attaching.
  4. Ditto for the coverage collector.
  5. The base class can be used as the basic structure because it has no knowledge of what it’s specifically putting together.  Specifics are all handled by the factory’s switch at the last possible moment.
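As a quick illustration of point 1, extending the base classes for a brand-new protocol might look something like the sketch below.  Everything prefixed my_spi_ is a made-up name for illustration; only the cagt_* classes come from AMIQ:

```systemverilog
// Hypothetical sketch: reusing the cagt base classes for a new protocol.
// my_spi_vif_t, my_spi_mon_item, and my_spi_monitor are invented names.
class my_spi_monitor extends cagt_monitor #(.VIRTUAL_INTF_TYPE(my_spi_vif_t),
                                            .MONITOR_ITEM(my_spi_mon_item));
    `uvm_component_utils(my_spi_monitor)

    function new(string name = "my_spi_monitor", uvm_component parent);
        super.new(name, parent);
    endfunction

    // Protocol-specific bus watching goes here.  The base class already
    // owns the analysis port, so a finished item simply goes out via
    // output_port.write(item), and the cagt_agent does all the wiring.
endclass
```

Register the subclass with the factory via set_inst_override, exactly as the APB agent does above, and the banal connection code never needs to be written again.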

Consequently, we get a much better chance of write-once-and-walk-away-code—WOAWAC, as the natives say.   Oh!  And we get to look super-smart in the process!

 

The Ecstasy and the Agony of UVM Abstraction and Encapsulation Featuring the AMIQ APB VIP: Part I

Tuesday, April 21st, 2015

An interesting thing happened on the way to arriving at a completed article about the AMIQ APB VIP:  the code-base changed completely, removing a layer of abstraction!  It’s a pretty cool little testament to the age of open source coding, and perhaps also to the agile manifesto which reads, in part, “…we have come to value…working software over comprehensive documentation.”  I’ll save the—maybe obvious—digression into inheritance vs. duplication of code for another post.  For now, I’ll go ahead and post the original article—a bit of a historical anachronism even if only by five days—because it provides perspectives into the design and construction of UVM objects that are hopefully still valuable examples.

by Hamilton Carter – Senior Editor

Why UVM and reuse?
Build it once, use it anywhere?  You can have that, but you’ll have to pay.  The good news is that if you do your job correctly, you only have to pay once.  The really good news is that the authors of UVM have already done most of the heavy lifting.

As you build each gleaming piece of your verification environment, there are a few reuse goals you should keep in mind:

  1. If at all possible, you don’t want to ever touch the code again… not ever.  You’ve got better things to do with your life, not the least of which is to create the next piece of gleaming verification goodness.
  2. You’ll want to be able to pass it around.  It’s good stuff!  Your buddies should totally get the benefit of knowing you—by having easier verification lives—through reuse of your stuff.
  3. Since you’re going to pass the code around, you’ll want it to seamlessly play with other pieces of verification IP.  This will make it easier for your buddies to use it and make you look like a genius!  It will also cut down on the number of support calls you get because—you know—you’re already working on the next great thing!

Amiq’s recent open source release of their AMBA-APB, UVM-compliant verification IP, (acronyms!  Everywhere acronyms!!!), gives us a nice platform to play around with and have discussions about UVM without the usual ‘invented here’ and NDA burdens to carry.  As an example of what UVM can buy for you, and what you have to pay to get it, let’s review an otherwise rather innocuous part of the APB UVC’s architecture: the connection between the monitor and coverage collector.

The above-mentioned reuse goals are accomplished for the APB bus monitor and its associated coverage collector.  Either of the two pieces can be used independently of the other.  It’s all done through the use of TLM ports and the factory.  Let’s walk through the code to get a good firm grounding of what’s there and how it’s done.  As we do, I’ll sprinkle in architectural and procedural details.

If you want to follow along in your code editor, you’ll need to pull the code repository for the AMBA APB project from https://github.com/amiq-consulting/amiq_apb as well as the supporting abstraction layer, (also from AMIQ), at https://github.com/amiq-consulting/cagt.  We’ll be rooting around in the sv subdirectory in each case.

NOTE:  With the recent code changes to the AMIQ APB VIP, to follow along clone the historical version of the APB code from https://github.com/hcarter333/amiq_apb.git.

Broadcast News
First, a few introductions:  The monitor’s job is to watch the APB bus and report, (broadcast actually), everything that’s happening.  The coverage collector’s job is to receive information from the monitor and format it into a cogent picture of the functional coverage generated by a given testcase.

A little bit of scrounging in the coverage collector code quickly reveals that it’s activated when the write_item_from_mon function is called.

       function void write_item_from_mon(amiq_apb_mon_item transfer);
              if(transfer.end_time != 0) begin
                     cover_item.sample(transfer);
              end
       endfunction

This is also where the mysteries begin.  Who calls this function?   Since we’re collecting coverage based on what the monitor sees, the monitor would be the logical culprit.  A quick look at the monitor though reveals nothing.  The function is never mentioned.  The monitor does call a write function, but not write_item_from_mon.  Could the two be related?  Yup, they are.

Here’s what’s going on.  At the end of the day, you’d like to be able to use the monitor minus its associated coverage collector, (in an accelerated environment for example), without the monitor being any the wiser.  To pull this off, the authors used something called a TLM analysis port.


(Figure: Monitor and coverage collector, each with their associated TLM ports)

The TLM port instantiated within the monitor provides a broadcast method named ‘write’.  The monitor calls this method without a clue as to whether anyone else is listening.  He proudly proclaims his latest APB news, and couldn’t care less if anyone hears.

The write method lives inside the monitor’s output_port, and this brings up our next mystery.  Try to find where in the monitor the output_port was instantiated.  I dare you!  It’s just not there.  Careful attention to the figure above reveals the answer.  The output port is declared in an abstract version of the monitor defined in the cagt package mentioned at the start of the article.

Amiq encapsulated the most basic, (and by basic I mean foundational, and by foundational, I mean the literal foundation of the environment), elements of the verification environment in the cagt package.  Almost all verification environments have monitors, right?  Consequently, we’d like to have to write the foundational monitor code only once and then leave it alone.  To accomplish this, the basics for the monitor are set up in the more abstract cagt package instead of the AMBA/APB package.

A look at the cagt monitor reveals the declaration and instantiation of the monitor’s output port.  Now we know how the monitor is sending its messages out—using the output port’s write method—as well as where the output port is declared—in cagt_monitor.sv.
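Paraphrased, the relevant pieces of cagt_monitor look something like this sketch, (check cagt_monitor.sv in the cagt repository for the real thing; the publish helper here is invented for illustration):

```systemverilog
// Paraphrased sketch of the cagt_monitor pieces discussed above.
class cagt_monitor #(type VIRTUAL_INTF_TYPE = int,
                     type MONITOR_ITEM = uvm_sequence_item) extends uvm_component;
    // the broadcast port -- declared once here, never again in a subclass
    uvm_analysis_port #(MONITOR_ITEM) output_port;

    function new(string name, uvm_component parent);
        super.new(name, parent);
        output_port = new("output_port", this);
    endfunction

    // When a transfer completes, the monitor broadcasts it and moves on,
    // with no idea whether anyone is listening on the other end.
    virtual function void publish(MONITOR_ITEM item);
        output_port.write(item);
    endfunction
endclass
```

If nothing is connected to output_port, the write call simply falls on deaf ears—which is exactly what lets the monitor run coverage-free in an accelerated environment.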
Next: Tuning the Receiver

Math for Nothing and Refactoring for Free

Wednesday, August 20th, 2014

With all the requests by scientists and the government for money lately, here, finally, is something you can get from the government… for free.  Yes, that’s right, applicable scientific and engineering tools that are the result of NSF funding that your taxes paid for and that you can actually use… today… really!  If, like many engineers, you find yourself in need of the functionality Mathematica provides, but you’re reluctant to ask your boss for the money to purchase a license, sagemath might be for you.  Sage provides a lot of the same functionality that Mathematica does, and you don’t have to pay for it.  You can even use this Python-based tool without installing anything, via a cloud-based version of Sage.  If you’ve ever played with Python, then you’re already most of the way to using Sage.

 

Functional Verification and Software Development, Brothers from a Different Mother

Verification Process Advice in Disguise as a Software Development Forum

Do you need better insights on improving your verification processes?  Are you running into issues like these?

  • Revision control nightmares
  • Blindingly and debilitatingly fast feature changes from marketing
  • Difficult to follow information trails on code verification history
  • Days and hours spent fixing broken designs after “No one changed anything” or only added “Correct By Design” fixes

As it turns out, the software industry had all these problems and beat the functional verification automation gang to the punch by several years.  While there is an ever-growing cadre of EDA tools available to cure the above woes, there are also simple process steps, identified by the software folks, (as well as my own humble offering along with Shankar Hemady and a host of industry luminaries), that you can take to improve things now.  One of the beautiful things about the software industry is that lots of its denizens love to share.  Check out http://programmers.stackexchange.com, a forum that regularly discusses project management, debug, and code refactoring, (known as ‘garshdarned feature creep’ in our lingo).  As I was writing this, a question appeared on the board:

Develop in trunk and then branch off, or in release branch and then merge back?

Sound familiar?  Don’t let the name of the site fool you.  It’s not about how to program, it’s about how to be a programmer.  In addition to revision control, you’ll find posts about how to best change code without destroying the code around it, and, equally importantly, with the advent of UVM, questions about design patterns, (think factory pattern).  Don’t worry about having to share in kind.  Stackexchange seems to have realized that your design and/or processes may actually be unique and beautiful snowflakes, in which case they’re quite happy with folks ‘just’ reading.

In the same vein, Coverity, a software verification company, regularly publishes a software testing blog.  If you remember to squint your eyes and say to yourself ‘hardware systems’, or ‘embedded systems’, everywhere the blog says ‘software systems’, there are many good tricks and processes just waiting there for the taking.

 

 

 

The Verification Walk and Talk

Wednesday, December 18th, 2013

I may be biased coming from a metric driven verification background, but 2013 seems to have been the year of the reusable metric driven verification environment.  We saw Jasper and Duolog team up to produce not only a re-usable specification, but associated assertions that could travel with it, and the IP from project to project. ARM is shipping these verification environments along with its IP blocks.  Apache touted the benefits of metric driven power verification.  The big three, Cadence, Synopsys, and Mentor, are all headed down variously similar metric driven verification process paths.  Gary Smith called for more reusable system verification that spanned the entire gamut between block level IP and user level apps.  Even physical design is moving to automated, metric driven verification as companies like Sage advance the technique.  Will we soon see packaging and board design tools from companies like Dassault be included in a metric driven flow?

With all the advances in tools, there’s still one personal aspect of the verification project to keep in mind, the true point of origin of all these tracked metrics: communications.  While metrics can and should be codified and tracked by automated tools, they are most effective when they are codified correctly based on a shared understanding of what a design is intended to do and how it is likely to be used.  This knowledge is contained in the minds of various members of the product design and production team.  Here are a few guidelines about what each part of the team might be able to contribute in the metric definition process.  They are by no means complete and additions based on your experience would be greatly appreciated.

Fellow Verification Team Members
Your fellow verification team members may have previously worked with the block you are tasked to verify and have a historical knowledge of its ins and outs.  In addition to that, your blocks may be adjacent, or share common resources.  Hopefully, you’re both in good contact with your design engineers, (see the next section).  You should also be in good contact with each other.  Often, hammering out metric details about inter-block communications identifies bugs without a single testcase being run.

Design Engineers
If you don’t know who your block’s design engineer is, you should find out.  This may not be as simple a task as it seems.  In some boutique semiconductor companies, they may only be a few cubes away.  In large international firms, they might be on the other side of the globe.  In either event, it’s worth the effort to get to know them.  These folks are putting their understanding of the specification into physical realization.  You’ll want to make sure that you’re checking the design vs. what they think it should do as well as what the specification says it should do.  Here as in the section above, good communications can lead to bugs being found without executing any testcases.
Firmware Engineers
These people will use the device utilizing its exposed interfaces.  They can tell you how they’ll use the device.  This usage model defines known sequences of transactions that the device will  be expected to perform.  They can also tell you which portions of the device’s functionality have the highest priority.
Architects
Architects specify how the system is to be stitched together and share resources.  They can help identify stress tests to run the system through its paces.  They can also be invaluable in helping you get a big picture of the device you’re testing from the top down.
Marketing
Despite all the two-drink minimum jokes about marketing, they often have the deepest knowledge of how the device’s customers want to use it.   This knowledge can help in defining must-be-run test sequences and in prioritizing verification tasks.
Production Test
Even though it’s well after the fact, verification sequences run in production test sometimes expose use case relations between blocks you didn’t know existed.  Test engineers are privy to this sort of thing.  They should especially be engaged for verification planning of derivative products, and let’s face it, what’s not a derivative product these days?

Hopefully I’ve made a good case for communicating early and often with your project team.  In case I haven’t, you might want to think about one extra aspect: the fringe benefits.  Talking to these folks makes you a known quantity around the company and with the fluidity of our industry, over a few years can gain you exposure across multiple companies.  As you get ready to move into other areas of project execution, or to move into positions with more responsibility, being known as a communicator that makes a positive impact can only help.