Category Archives: Functional Coverage

OSVVM™ Webinar + World Tour Dates

Webinar Thursday June 26, 2014
OSVVM provides functional coverage and randomization utilities that layer on top of your transaction level modeling (TLM) based VHDL testbench. Using these, you can create either basic Constrained Random tests or more advanced Intelligent Coverage based Random tests. This simplified approach allows you to use advanced randomization techniques when you need them and to easily mix them with directed, algorithmic, and file-based test generation techniques. Best of all, OSVVM is free and works in most VHDL simulators.

Europe Session 3-4 pm CEST 6-7 am PDT 9-10 am EDT Enroll with Aldec
US Session 11 am-12 Noon PDT 2-3 pm EDT 8-9 pm CEST Enroll with Aldec

OSVVM World Tour Dates
VHDL Testbenches and Verification – OSVVM+ Boot Camp
Learn the latest VHDL verification techniques including transaction level modeling (TLM), self-checking, scoreboards, memory modeling, functional coverage, directed, algorithmic, constrained random, and intelligent testbench test generation. Create a VHDL testbench environment that is competitive with other verification languages, such as SystemVerilog or ‘e’. Our techniques work on VHDL simulators without additional licenses and are accessible to RTL engineers.

July 14-18 Munich, Germany Enroll with eVision Systems
July 21-25 Bracknell, UK Enroll with FirstEDA
August 18-22 and September 2-5 online class Enroll with SynthWorks
August 25-29 Portland, OR (Tigard/Tualatin) Enroll with SynthWorks
September 15-19 Gothenburg, Sweden Enroll with FirstEDA
October 20-24 Bracknell, UK Enroll with FirstEDA
October 27-31 and November 10-14 online class Enroll with SynthWorks
November 17-21 Baltimore, MD (BWI Area) Enroll with SynthWorks
December 1-5 and December 17-21 online class Enroll with SynthWorks

Presented by:
Jim Lewis, SynthWorks VHDL Training Expert, IEEE 1076 Working Group Chair, and OSVVM Chief Architect

Functional Coverage Goals and Randomization Weights

This is a continuing series of posts on OSVVM and functional coverage. If you are just getting started, you may wish to start with the OSVVM page.

In a constrained random approach, different items can be selected more frequently by using randomization weights. Items with a higher randomization weight are selected more frequently.

In Intelligent Coverage, the same effect can be achieved by using coverage goals. A coverage goal specifies how many times a value must land in a bin before the bin is considered covered. Each bin within the coverage model can have a different coverage goal. By default, coverage goals are also used as a randomization weight. Bins with a higher goal/weight will be generated more frequently. When a bin reaches its goal, it is no longer selected by Intelligent Coverage randomization.

Coverage goals and randomization weights are an important part of the Intelligent Coverage methodology. They allow us to randomize sequences with a particular distribution. For example, if a test does packet generation, then the following code generates normal operation 70% of the time, error case 1 20% of the time, and error case 2 10% of the time.

Bin1.AddBins( 70, GenBin(0) ) ;  -- Normal Handling, 70%
Bin1.AddBins( 20, GenBin(1) ) ;  -- Error Case 1,    20%
Bin1.AddBins( 10, GenBin(2) ) ;  -- Error Case 2,    10%

StimGen : while not Bin1.IsCovered loop
  iSequence := Bin1.RandCovPoint ; 
  case iSequence is   

    when 0 =>  -- Normal Handling   -- 70%
       DoTransaction(Rec, …, NO_ERROR) ; 
       . . . 

    when 1 =>  -- Error Case 1      -- 20%
       DoTransaction(Rec, …, ERROR_CASE1) ; 
       . . . 

    when 2 =>  -- Error Case 2      -- 10%
       DoTransaction(Rec, …, ERROR_CASE2) ; 
       . . .

    when others => 
       null ; 
  end case ; 
end loop StimGen ;

Each of these sequences will be selected with a weighted uniform distribution until it reaches its coverage goal.

SystemVerilog supports specifying a coverage goal for an entire coverage model, but not for individual coverage bins. Since SystemVerilog tools do not use functional coverage when randomizing, they have little need for individual bin goals. However, this also leaves SystemVerilog without this advanced capability.

This post provided some basic information about coverage goals. Intelligent Coverage contains numerous advanced features that go well beyond the basic capability. For example, coverage goals can be scaled to make tests run longer using a feature called coverage targets. Convergence to a coverage goal can be smoothed out using either thresholding or different randomization weight modes.
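As a rough sketch of how these advanced features might be invoked (the method names SetCovTarget and SetWeightMode are assumptions based on CoveragePkg documentation; check them against your OSVVM release):

```vhdl
-- Hedged sketch, not verified against a specific OSVVM release:
-- scale every bin's goal and change how randomization weights are derived.
Bin1.SetCovTarget( 200.0 ) ;      -- assumed: run each bin to 2X its goal
Bin1.SetWeightMode( REMAIN ) ;    -- assumed: weight bins by remaining coverage
```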

Learn more about the advanced Intelligent Coverage features in our VHDL Testbenches and Verification class.

VHDL Functional Coverage is more capable than SystemVerilog

This is a continuing series of posts on OSVVM and functional coverage. If you are just getting started, you may wish to start with the OSVVM page.

When writing functional coverage, it is important to be able to capture all the details of a model. With item (aka point) coverage, both VHDL and SystemVerilog do a good job. However, with cross coverage, if the model requires more than a simple Cartesian product, SystemVerilog falls short. OSVVM, on the other hand, offers a rich cross coverage capability.

In SystemVerilog functional coverage is modeled as a language based declaration. To model cross coverage in SystemVerilog, one starts by declaring item (aka point) coverage for each dimension in the cross. The cross coverage is then declared in terms of the defined item coverage. If the cross coverage is more complicated than a simple Cartesian product, a set of “and” and “or” masking operations are used to filter out items from the cross product. This results in an awkward, limited, and verbose capture of the functional coverage model.

In VHDL’s OSVVM, functional coverage is implemented as a data structure.  The functional coverage model is captured incrementally using any sequential code (if, loop, …).  As long as the entire model is captured before we start collecting coverage, we can use as many calls to AddBins or AddCross as needed.

Simple Cartesian product type coverage can be captured in a concise, single line call, such as the one used in my blog post Functional Coverage Made Easy with VHDL’s OSVVM. Note that item coverage did not need to be declared first.

ACov.AddCross( GenBin(0,7), GenBin(0,7) );

When the coverage modeling is more complicated, we can solve it piecewise using multiple calls to AddBins or AddCross to define the model. The following example uses three calls to AddBins to create item coverage.

Bin1.AddBins(GenBin( 1, 3 )) ;
Bin1.AddBins(GenBin( 4, 252, 2 )) ;
Bin1.AddBins(GenBin(253, 255 )) ;

The output of GenBin is a single dimensional array value. As a result, concatenation can be used to join bins. Hence we can rewrite the above bins in a single line as shown below. Note I only recommend doing something like this when it increases readability.

Bin1.AddBins( GenBin(1, 3) & GenBin(4, 252, 2) & GenBin(253, 255)) ;

Since functional coverage is modeled using sequential code, writing conditional coverage or using iteration is simply a matter of writing the code. The following uses a boolean generic, gFAST_TEST, to determine whether to cover the entire input space or just a subset of it for a fast test.

if gFAST_TEST then
  ACov.AddCross( GenBin(0,3), GenBin(0,3) ); -- 4x4 Fast Model
else
  ACov.AddCross( GenBin(0,7), GenBin(0,7) ); -- 8x8 Complete Model
end if ;
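Iteration works the same way. A hypothetical sketch, assuming we only want to cover the diagonal pairs where both operands come from the same register:

```vhdl
-- Hypothetical iteration example: build a cross bin by bin,
-- covering only the pairs where Src1 = Src2.
for i in 0 to 7 loop
  ACov.AddCross( GenBin(i), GenBin(i) ) ;
end loop ;
```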

In our VHDL Testbenches and Verification class you will get hands on experience writing functional coverage and using it to shape stimulus generation.

OSVVM’s Intelligent Coverage is 5X or More Faster than SystemVerilog’s Constrained Random

If the measure of test case generation were a large number of well randomized test cases, SystemVerilog and UVM would be on par with VHDL’s OSVVM.  However, the true measure of test case generation is functional coverage closure – all test cases identified in the test plan are done.  Functional coverage closure is a big challenge for constrained random approaches to verification as used in SystemVerilog or ‘e’.  On the other hand, functional coverage closure is the focus of OSVVM’s Intelligent Coverage™.  This article takes a look at why Constrained Random has these challenges and how Intelligent Coverage solves them.

1. Constrained Random Repeats Test Cases

In my post, Functional Coverage Made Easy with VHDL’s OSVVM, we used randomization with a uniform distribution (shown below) to select the register pairs for the ALU. Constrained random at its best produces a uniform distribution. As a result, this example is a best case model of constrained random tests.

Src1 := RV.RandInt(0, 7) ; -- Uniform Randomization
Src2 := RV.RandInt(0, 7) ;

The problem with constrained random testbenches is that they repeat some test cases before generating all test cases. This is the classic coupon collector problem: with uniform selection, generating all N cases takes on the order of N * log N randomizations. The “log N” factor represents repeated test cases and significantly adds to simulation run times. Ideally we would like to run only N test cases.

Running the previous ALU testbench, we get the following coverage matrix when the code completes. Note that some cases were generated 10 times before all were done at least 1 time. It took 315 randomizations to generate all 64 unique pairs, slightly less than 5X the 64 iterations of the ideal case. This correlates well with theory, as 315 is approximately 64 * log(64). Changing the seed value may increase or decrease the exact number of randomizations, but seed shopping would be a silly way to try to reduce the number of iterations a test runs.


2. Intelligent Coverage

“Intelligent Coverage” is a coverage driven randomization approach that randomly selects a hole in the functional coverage and passes it to the stimulus generation process. Using “Intelligent Coverage” allows the stimulus generation to focus on missing coverage and reduces the number of test cases generated to approach the ideal of N randomizations to generate N test cases.

Let’s return to the ALU example. The Intelligent Coverage methodology starts by writing functional coverage, which we did in the previous example too. Next, preliminary stimulus is generated by randomizing using the functional coverage model. In this example, we replace the call to RandInt (uniform randomization) with a call to RandCovPoint (one of the Intelligent Coverage randomization methods). This is shown below. In this case, Src1 and Src2 are used directly in the test, so we are done.

architecture Test3 of tb is
  shared variable ACov : CovPType ;  -- Declare 
begin
  TestProc : process 
    variable Src1, Src2 : integer ;
  begin
    -- create coverage model
    ACov.AddCross( GenBin(0,7), GenBin(0,7) );  -- Model

    while not ACov.IsCovered loop    -- Done?
      (Src1, Src2) := ACov.RandCovPoint ; -- Intelligent Coverage Randomization

      DoAluOp(TRec, Src1, Src2) ;    -- Transaction
      ACov.ICover( (Src1, Src2) ) ;  -- Accumulate
    end loop ;

    ACov.WriteBin ;  -- Report 
    EndStatus(. . . ) ;   
  end process ;
end architecture Test3 ;

When randomizing across a cross coverage model, the output of RandCovPoint is an integer_vector. Instead of using the separate integers Src1 and Src2, it is also possible to use an integer_vector, as shown below.

variable Src : integer_vector(1 to 2) ;
. . . 
Src := ACov.RandCovPoint ;      -- Intelligent Coverage Randomization

The process is not always this easy. Sometimes the value out of RandCovPoint will need to be further shaped by the stimulus generation process.  We do this in our VHDL Testbenches and Verification class.

The Intelligent Coverage methodology works now and works with your current testbench approach. You can adopt this methodology incrementally. Add functional coverage today to make sure you are executing all of your test plan. For the tests that need help, use the Intelligent Coverage.
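As a sketch of incremental adoption, coverage can be collected passively in its own process, leaving existing stimulus generation untouched. The signals Src1, Src2, and TransactionDone below are hypothetical names for illustration:

```vhdl
-- Hedged sketch: passive coverage collection alongside an existing test.
-- Src1, Src2, and TransactionDone are hypothetical testbench signals.
MonitorCov : process
begin
  while not ACov.IsCovered loop
    wait until rising_edge(Clk) and TransactionDone = '1' ;
    ACov.ICover( (Src1, Src2) ) ;  -- observe only; stimulus is unchanged
  end loop ;
  ACov.WriteBin ;
  wait ;
end process ;
```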

3. Intelligent Testbenches in SystemVerilog

With SystemVerilog you can certainly buy a simulator that implements Intelligent Testbenches. However, this is a significant upgrade, so it will cost extra. In addition, using an intelligent testbench tool tends to require vendor-specific coding, so you are locked into a particular vendor.

On the other hand, with VHDL’s OSVVM, the Intelligent Testbench capability is built into the functional coverage modeling.  It is free.  All of the customizations to the randomization are done by writing VHDL code and initiating transactions.

4. References

Here are a few articles on Intelligent Testbench approaches that also remove or reduce the repetition of test cases. However, these solutions are not free like OSVVM.

Wally Rhines. From Volume to Velocity. DVCon Keynote March 2011

Mark Olen. Intelligent Testbench Automation Delivers 10X to 100X Faster Functional Verification. Verification Horizons Blog June 2011

Brian Bailey. Enough of the sideshows – it’s time for some real advancement in functional verification! EDA DesignLine Blog May 2012

Functional Coverage Made Easy with VHDL’s OSVVM

Capturing functional coverage does not require a verification language. It does not need to be declarative. It simply requires a data structure. VHDL’s OSVVM makes capturing high fidelity (really detailed) functional coverage easy and concise.

In my previous post, “Why You Need Functional Coverage”, we looked at what is functional coverage and why you need it. We also noted functional coverage can be written using any code. CoveragePkg and language syntax are solely intended to simplify this effort.

In this post, we will first look at implementing functional coverage manually (without CoveragePkg). Then we will look at using CoveragePkg to capture item and cross coverage. We will see that with CoveragePkg, modeling functional coverage is simple, concise, and powerful.

1. Item (Point) Coverage done Manually

In this subsection we write item coverage using regular VHDL code. While for most problems this is the hard way to capture coverage, it provides a basis for understanding functional coverage and why we can implement it with a data structure.

In a packet based transfer (such as across an ethernet port), the most interesting things happen when the transfer size is at or near either the minimum or maximum sized transfers. It is important that a number of medium sized transfers occur, but we do not need to see as many of them. For this example, let’s assume that we are interested in tracking transfers with the following sizes or ranges: 1, 2, 3, 4 to 127, 128 to 252, 253, 254, or 255. The sizes we look for are specified by our test plan.

We also must decide when to capture (aka sample) the coverage. In the following code, we sample on the rising edge of clock when the flag TransactionDone is '1'.

signal Bin : integer_vector(1 to 8) := (others => 0) ;
 . . .
 process
 begin
   loop
     wait until rising_edge(Clk) and TransactionDone = '1' ;
     case to_integer(unsigned(ActualData)) is
       when   1 =>          Bin(1) <= Bin(1) + 1 ;
       when   2 =>          Bin(2) <= Bin(2) + 1 ;
       when   3 =>          Bin(3) <= Bin(3) + 1 ;
       when   4 to 127 =>   Bin(4) <= Bin(4) + 1 ;
       when 128 to 252 =>   Bin(5) <= Bin(5) + 1 ;
       when 253 =>          Bin(6) <= Bin(6) + 1 ;
       when 254 =>          Bin(7) <= Bin(7) + 1 ;
       when 255 =>          Bin(8) <= Bin(8) + 1 ;
       when others =>       null ;
     end case ;
   end loop ;
 end process ;

Any coverage can be written this way. However, this is too much work and too specific to the problem at hand. We could make a small improvement to this by capturing the code in a procedure. This would help with local reuse, but there are still no built-in operations to determine when testing is done, to print reports, or to save results and the data structure to a file.

2. Basic Item (Point) Coverage with CoveragePkg

In this subsection we use CoveragePkg to write the item coverage for the same packet based transfer sizes created in the previous section manually. Again, we are most interested in the smallest and largest transfers. Hence, for an interface that can transfer between 1 and 255 words we will track transfers of the following size or range: 1, 2, 3, 4 to 127, 128 to 252, 253, 254, and 255.
The basic steps to model functional coverage are: declare the coverage object, create the coverage model, accumulate coverage, interact with the coverage data structure, and report the coverage.
Coverage is modeled using a data structure stored inside of a coverage object. The coverage object is created by declaring a shared variable of type CovPType, such as CovBin1 shown below.

architecture Test1 of tb is
  shared variable CovBin1 : CovPType ;

Internal to the data structure, each bin in an item coverage model is represented by a minimum and maximum value (effectively a range). Bins that have only a single value, such as 1, are represented by the pair 1, 1 (meaning 1 to 1). Internally, the minimum and maximum values are stored in a record along with other bin information.
The coverage model is constructed by using the method AddBins and the function GenBin. The function GenBin transforms a bin descriptor into a set of bins. The method AddBins inserts these bins into the data structure internal to the protected type. Note that when calling a method of a protected type, such as AddBins shown below, the method name is prefixed by the protected type variable name, CovBin1. The version of GenBin shown below has three parameters: min value, max value, and number of bins. The call, GenBin(1,3,3), breaks the range 1 to 3 into the 3 separate bins with ranges 1 to 1, 2 to 2, 3 to 3.

TestProc : process
  --                    min, max, #bins
  CovBin1.AddBins(GenBin(1,   3,   3)); -- bins 1 to 1, 2 to 2, 3 to 3
  . . .

Each additional call to AddBins appends more bins to the data structure. As a result, the call with GenBin(4, 252, 2) appends two bins, with the ranges 4 to 127 and 128 to 252 respectively, to the coverage model.

CovBin1.AddBins(GenBin(  4, 252, 2)) ; -- bins 4 to 127 and 128 to 252

Since creating one bin for each value within a range is common, there is also a version of GenBin with two parameters, min value and max value, that creates one bin per value. As a result, the call GenBin(253, 255) appends three bins with the ranges 253 to 253, 254 to 254, and 255 to 255.

CovBin1.AddBins(GenBin(253, 255)) ; -- bins 253, 254, 255

Coverage is accumulated using the method ICover. Since coverage is collected using sequential code, either clock based sampling (shown below) or transaction based sampling (by calling ICover after a transaction completes – shown in later examples) can be used.

-- Accumulating coverage using clock based sampling
loop
  wait until rising_edge(Clk) and nReset = '1' ;
  CovBin1.ICover(to_integer(unsigned(RxData_slv))) ; 
end loop ;

A test is done when functional coverage reaches 100%. The method IsCovered returns true when all the count bins in the coverage data structure have reached their goal. The following code shows the previous loop modified so that it exits when coverage reaches 100%.

-- capture coverage until coverage is 100%
while not CovBin1.IsCovered loop 
    wait until rising_edge(Clk) and nReset = '1' ;
    CovBin1.ICover(to_integer(unsigned(RxData_slv))) ; 
end loop ;

Finally, when the test is done, the method WriteBin is used to print the coverage results to OUTPUT (the transcript window when running interactively).

-- Print Results
CovBin1.WriteBin ;

Putting the entire example together, we end up with the following.

architecture Test1 of tb is
  shared variable CovBin1 : CovPType ;  -- Coverage Object
begin
  TestProc : process
  begin
    -- Model the coverage
    CovBin1.AddBins(GenBin(  1,   3   )) ;  
    CovBin1.AddBins(GenBin(  4, 252, 2)) ;
    CovBin1.AddBins(GenBin(253, 255   )) ; 

    -- Accumulate coverage using clock based sampling
    while not CovBin1.IsCovered loop 
      wait until rising_edge(Clk) and nReset = '1' ;
      CovBin1.ICover(to_integer(unsigned(RxData_slv))) ; 
    end loop ;

    -- Print Results
    CovBin1.WriteBin ;
    wait ; 
  end process ;
end architecture Test1 ;

Note that when modeling coverage, we primarily work with integer values. All of the inputs to GenBin and ICover are integers, and WriteBin reports results in terms of integers. This is similar to what other verification languages do.
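When the value being covered is not an integer, such as an enumerated type, it can be converted with the 'pos attribute. A minimal sketch, where AluOpType, Op, and OpCov are hypothetical names:

```vhdl
-- Hypothetical example: covering an enumerated value by mapping
-- it to an integer with the 'pos attribute.
type AluOpType is (ALU_ADD, ALU_SUB, ALU_AND, ALU_OR) ;
. . .
-- one bin per operation: 0 to 3
OpCov.AddBins( GenBin(0, AluOpType'pos(AluOpType'high)) ) ;
. . .
OpCov.ICover( AluOpType'pos(Op) ) ;   -- Op is the operation just executed
```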

3. Cross Coverage with CoveragePkg

Cross coverage examines the relationships between different objects, such as making sure that each register source has been used with an ALU. The hardware we are working with is as shown below. Note that the test plan will also be concerned about what values are applied to the adder. We are not intending to address that part of the test here.


Cross coverage for SRC1 crossed with SRC2 can be visualized as a matrix of 8 x 8 bins.


The steps for modeling cross coverage are the same steps used for item coverage: declare, model, accumulate, interact, and report. Collecting cross coverage only differs in the model and accumulate steps.

Cross coverage is modeled using the method AddCross and two or more calls to the function GenBin. AddCross creates the cross product of the sets of bins (created by GenBin) on its inputs. The code below shows the call to create the 8 x 8 cross. Each call to GenBin(0,7) creates the 8 bins: 0, 1, 2, 3, 4, 5, 6, 7. AddCross creates the 64-bin cross product of these bins, which can be visualized as the matrix shown previously.

ACov.AddCross( GenBin(0,7), GenBin(0,7) );

AddCross supports crossing of up to 20 items. Internal to the data structure there is a record that holds the minimum and maximum values for each item in the cross. Hence, for the first bin, the record contains SRC1 minimum 0, SRC1 maximum 0, SRC2 minimum 0, and SRC2 maximum 0. The record also contains other bin information, such as the coverage goal, current count, bin type (count, illegal, or ignore), and weight.

The accumulate step now requires a value for both SRC1 and SRC2. The overloaded ICover method for cross coverage takes an integer_vector input, which allows it to accept a value for each item in the cross. The extra set of parentheses around Src1 and Src2 in the call to ICover below designate that the parameter is an integer_vector.

ACov.ICover( (Src1, Src2) ) ;

The code below shows the entire example. The shared variable, ACov, declares the coverage object. AddCross creates the cross coverage model. IsCovered is used to determine when all items in the coverage model have been covered. Each register is selected using uniform randomization (RandInt). The transaction procedure, DoAluOp, applies the stimulus. ICover accumulates the coverage. WriteBin reports the coverage.

architecture Test2 of tb is
  shared variable ACov : CovPType ;  -- Declare 
begin
  TestProc : process 
    variable RV : RandomPType ;
    variable Src1, Src2 : integer ;
  begin
    -- create coverage model
    ACov.AddCross( GenBin(0,7), GenBin(0,7) );  -- Model

    while not ACov.IsCovered loop    -- Done?
      Src1 := RV.RandInt(0, 7) ;     -- Uniform Randomization 
      Src2 := RV.RandInt(0, 7) ; 

      DoAluOp(TRec, Src1, Src2) ;    -- Transaction
      ACov.ICover( (Src1, Src2) ) ;  -- Accumulate
    end loop ;

    ACov.WriteBin ;  -- Report 
    EndStatus(. . . ) ;   
  end process ;
end architecture Test2 ;

4. Summary

OSVVM’s sequential approach to modeling allows creating functional coverage models as concisely as a language-based syntax does. In fact, for cross coverage it is more concise, since we skip the step of first creating item (point) coverage before creating the cross coverage.

In future blog posts we will explore areas where OSVVM surpasses SystemVerilog’s capability: the Intelligent Coverage approach to randomization and high fidelity functional coverage modeling. We will see that while VHDL’s OSVVM makes easy work of these tasks, the declarative syntax of other verification languages makes them hard or impossible.

An easy way to stay tuned is to get the RSS feed.

OSVVM 2013.04

OSVVM release 2013.04 is now available at either SynthWorks Downloads or OSVVM Downloads.

Open Source VHDL Verification Methodology (OSVVM) is VHDL’s leading edge verification methodology. OSVVM provides randomization, functional coverage, and Intelligent Coverage™ (coverage driven randomization) utilities for VHDL.

You can get more information about OSVVM at either SynthWorks’ OSVVM Blog or at OSVVM Website.

Why You Need Functional Coverage

Functional Coverage Tracks Your Test Plan

Functional coverage is code that observes execution of a test plan.  As such, it is code you write to track whether important values, sets of values, or sequences of values that correspond to design or interface requirements, features, or boundary conditions have been exercised.

Functional coverage is important to any verification approach since it is one of the factors used to determine when testing is done.  Specifically, 100% functional coverage indicates that all items in the test plan have been tested.  Combine this with 100% code coverage and it indicates that testing is done.

Functional coverage that examines the values within a single object is called either point (SystemVerilog) or item (‘e’) coverage. I prefer the term item coverage since point can also be a single value within a particular bin.  One relationship we might look at is different transfer sizes across a packet based bus.  For example, the test plan may require that transfer sizes with the following size or range of sizes be observed: 1, 2, 3, 4 to 127, 128 to 252, 253, 254, or 255.

Functional coverage that examines the relationships between different objects is called cross coverage.  An example of this would be examining whether an ALU has done all of its supported operations with every different input pair of registers.

Many think functional coverage is an exclusive capability of a verification language such as SystemVerilog.  However, functional coverage collection is really just a matter of implementing a data structure.

VHDL’s Open Source VHDL Verification Methodology (OSVVM) provides a package, CoveragePkg, with a protected type that facilitates capturing the data structure and writing functional coverage.

Code Coverage is not enough

VHDL simulation tools can automatically calculate a metric called code coverage (assuming you have licenses for this feature).   Code coverage tracks what lines of code or expressions in the code have been exercised.

Code coverage cannot detect conditions that are not in the code.  For example, in the packet bus item coverage example discussed above, code coverage cannot determine that the required values or ranges have occurred – unless the code contains expressions to test for each of these sizes.  Instead, we need to write functional coverage.

In the ALU cross coverage example above, code coverage cannot determine whether particular register pairs have been used together, unless the code is written this way. Generally each input to the ALU is selected independently of the other.  Again, we need to write functional coverage.

Code coverage on a partially implemented design can reach 100%.  It cannot detect missing features (oops forgot to implement one of the timers) and many boundary conditions (in particular those that span more than one block).  Hence, code coverage cannot be used exclusively to indicate we are done testing.

In addition, code coverage is an optimistic metric.  In combinational logic code in an HDL, a process may be executed many times during a given clock cycle due to delta cycle changes on input signals.  This can result in several different branches of code being executed.  However, only the last branch of code executed before the clock edge truly has been covered.
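A small example of this optimism, with hypothetical signals A, B, and Y:

```vhdl
-- Hypothetical combinational process: during one clock cycle, delta
-- cycle changes on A may execute this process several times, marking
-- both branches as covered even though only the last execution
-- before the clock edge reflects the sampled hardware behavior.
process(A, B)
begin
  if A = '1' then
    Y <= B ;
  else
    Y <= '0' ;
  end if ;
end process ;
```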

Test Done = Test Plan Executed and All Code Executed

To know testing is done, we need to know that both the test plan is executed and all of the code has been executed.   Is 100% functional coverage enough?

Unfortunately, a test can reach 100% functional coverage without reaching 100% code coverage. This indicates the design contains untested code that is not part of the test plan. This can come from an incomplete test plan, extra undocumented features in the design, or case statement “others” branches that do not get exercised in normal hardware operation. Untested features need to either be tested or removed.

As a result, even with 100% functional coverage it is still a good idea to use code coverage as a fail safe for the test plan.

Why You Need Functional Coverage, even with Directed Testing

You might think, “I have written a directed test for each item in the test plan, so I am done, right?”

As design size grows, complexity increases.  A test that completely validates one version of the design may not validate the design after revisions.  For example, if the size of a FIFO increases, the test may no longer provide enough stimulus values to fill it completely and cause a FIFO full condition.  If new features are added, a test may need to change its configuration register values to enable the appropriate mode.

Without functional coverage, you are assuming your directed, algorithmic, file based, or constrained random test actually hits the conditions in your test plan.

Don’t forget the engineer’s creed, “In the divine we trust; all others need to show supporting data.”  Whether you are using directed, algorithmic, file based, or constrained random test methods, functional coverage provides your supporting data.

OSVVM, VHDL’s Leading-Edge Verification Methodology

At its lowest level, Open Source VHDL Verification Methodology (OSVVM) is a set of VHDL packages that simplify implementation of functional coverage and randomization.  OSVVM uses these packages to create an Intelligent Coverage verification methodology that is a step ahead of other verification methodologies, such as SystemVerilog’s UVM.

Continued on the OSVVM static page.