Monday, August 15, 2016

Testing UVM Drivers, Part 2

In the previous post we looked at how we can emulate sequencer/driver communication using a lightweight stub of uvm_sequencer. Let's also look at some more tips and tricks I've picked up while writing unit tests for drivers. To mix things up a bit, let's look at the AXI protocol. We're not going to implement a full-featured driver; instead, we'll focus on the write channels:

interface vgm_axi_interface(input bit ACLK, input bit ARESETn);
  logic [3:0] AWID;
  logic [31:0] AWADDR;
  logic [3:0] AWLEN;
  logic AWVALID;
  logic AWREADY;

  logic [3:0] WID;
  logic [31:0] WDATA;
  logic WLAST;
  logic WVALID;
  logic WREADY;

  logic [3:0] BID;
  logic [1:0] BRESP;
  logic BVALID;
  logic BREADY;
endinterface

Our sequence item will model the properties of a write transaction:

typedef enum bit [3:0] { LENGTH_[1:16] } length_e;


typedef class transfer;  // forward declaration; transfer is defined below

class sequence_item extends uvm_sequence_item;
  rand bit [3:0] id;
  rand bit [31:0] address;
  rand length_e length;
  rand transfer transfers[];
  rand int unsigned delay;

  // ...
endclass


class transfer extends uvm_sequence_item;
  rand bit[31:0] data;
  rand int unsigned delay;

  // ...
endclass

The driver will consist of the familiar get and drive loop:

class master_driver extends uvm_driver #(sequence_item);
  virtual vgm_axi_interface intf;

  virtual protected task get_and_drive();
    forever begin
      seq_item_port.get_next_item(req);
      drive();
      seq_item_port.item_done();
    end
  endtask

  // ...
endclass

We'll implement the driver based on the following unit tests. Have a quick look at them before going further:

    `SVTEST(drive_awvalid__with_delay)
      sequence_item item = new("item");
      `FAIL_UNLESS(item.randomize() with {
        delay == 3;
      })
      sequencer.add_item(item);

      repeat (3)
        @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 0)
      @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 1)
    `SVTEST_END


    `SVTEST(drive_awvalid__held_until_awready)
      sequence_item item = new("item");
      `FAIL_UNLESS(item.randomize() with {
        delay == 0;
      })
      sequencer.add_item(item);
      intf.AWREADY <= 0;

      repeat (4)
        @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 1)

      intf.AWREADY <= 1;
      @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 1)
      @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 0)
    `SVTEST_END


    `SVTEST(drive_write_addr_channel)
      sequence_item item = new("item");
      `FAIL_UNLESS(item.randomize() with {
        id == 5;
        address == 'h1122_3344;
        length == LENGTH_14;
        delay == 0;
      })
      sequencer.add_item(item);

      @(posedge clk);
      `FAIL_UNLESS(intf.AWID == 5)
      `FAIL_UNLESS(intf.AWADDR == 'h1122_3344)
      `FAIL_UNLESS(intf.AWLEN == 'b1101)
    `SVTEST_END


    `SVTEST(drive_wvalid__with_delay)
      sequence_item item = new("item");
      `FAIL_UNLESS(item.randomize() with {
        delay == 0;
        length == LENGTH_4;
        transfers[0].delay == 1;
        transfers[1].delay == 3;
        transfers[2].delay == 2;
        transfers[3].delay == 0;
      })
      sequencer.add_item(item);

      // Skip over address phase
      @(posedge clk);

      @(posedge clk) `FAIL_UNLESS(intf.WVALID === 0)
      @(posedge clk) `FAIL_UNLESS(intf.WVALID === 1)

      repeat (3)
        @(posedge clk) `FAIL_UNLESS(intf.WVALID === 0)
      @(posedge clk) `FAIL_UNLESS(intf.WVALID === 1)

      repeat (2)
        @(posedge clk) `FAIL_UNLESS(intf.WVALID === 0)
      @(posedge clk) `FAIL_UNLESS(intf.WVALID === 1)

      @(posedge clk) `FAIL_UNLESS(intf.WVALID === 1)
    `SVTEST_END


    `SVTEST(drive_write_data_channel)
      sequence_item item = new("item");
      `FAIL_UNLESS(item.randomize() with {
        delay == 0;
        length == LENGTH_4;
        transfers[0].data == 'hffff_ffff;
        transfers[1].data == 'h0000_0000;
        transfers[2].data == 'haaaa_aaaa;
        transfers[3].data == 'h5555_5555;
        transfers[0].delay == 0;
        transfers[1].delay == 0;
        transfers[2].delay == 0;
        transfers[3].delay == 0;
      })
      sequencer.add_item(item);

      // Skip over address phase
      @(posedge clk);

      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'hffff_ffff)
      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h0000_0000)
      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'haaaa_aaaa)
      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h5555_5555)
    `SVTEST_END


    `SVTEST(drive_write_data_channel__data_held_until_wready)
      sequence_item item = new("item");
      `FAIL_UNLESS(item.randomize() with {
        delay == 0;
        length == LENGTH_4;
        transfers[0].data == 'hffff_ffff;
        transfers[1].data == 'h0000_0000;
        transfers[2].data == 'haaaa_aaaa;
        transfers[3].data == 'h5555_5555;
        transfers[0].delay == 0;
        transfers[1].delay == 0;
        transfers[2].delay == 0;
        transfers[3].delay == 0;
      })
      sequencer.add_item(item);

      // Skip over address phase
      @(posedge clk);

      intf.WREADY <= 0;
      repeat (3)
        @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'hffff_ffff)
      intf.WREADY <= 1;
      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'hffff_ffff)

      intf.WREADY <= 0;
      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h0000_0000)
      intf.WREADY <= 1;
      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h0000_0000)

      intf.WREADY <= 0;
      repeat (2)
        @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'haaaa_aaaa)
      intf.WREADY <= 1;
      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'haaaa_aaaa)

      intf.WREADY <= 0;
      repeat (4)
        @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h5555_5555)
      intf.WREADY <= 1;
      @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h5555_5555)
    `SVTEST_END


    `SVTEST(drive_write_data_channel__wlast_driven_for_last_transfer)
      sequence_item item = new("item");
      `FAIL_UNLESS(item.randomize() with {
        delay == 0;
        length == LENGTH_8;
        foreach (transfers[i])
          transfers[i].delay == 0;
      })
      sequencer.add_item(item);

      // Skip over address phase
      @(posedge clk);

      repeat (7)
        @(posedge clk) `FAIL_IF(intf.WLAST)
      @(posedge clk) `FAIL_UNLESS(intf.WLAST)
    `SVTEST_END

The tests above cover the desired functionality that we want to implement in our driver. We won't go through each and every one of them and see what production code we need to write, since the actual implementation of the driver isn't really important for this post. We want to focus more on the tests themselves.

When first confronted with these tests, a new developer won't have an easy time understanding what's going on. The tests are pretty verbose and it's not immediately clear what the focus of each one is. Let's see how we could improve this.

First, we'll notice that the one thing we do in each test is to create an item and queue it for the driver. We use randomization to make sure that the item is "consistent", meaning that the constraints defined in it hold. We could set the item's variables procedurally, but we would need to ensure that the length and the number of transfers match, which would mean even more code. Instead of repeating these steps in each unit test, we could centralize them in one place. Since each test needs its own inline constraints and a constraint block can't be passed as a function argument, we can't extract a function. We're forced to use a macro:

  `define add_item_with(CONSTRAINTS) \
    sequence_item item = new("item"); \
    `FAIL_UNLESS(item.randomize() with CONSTRAINTS) \
    sequencer.add_item(item);

This is going to save us some lines of code and it's going to make the intent of the code more obvious. What will differ between unit tests are the constraints they use when adding an item:

    `SVTEST(drive_awvalid__with_delay)
      `add_item_with({
        delay == 3;
      })

      // ...
    `SVTEST_END


    `SVTEST(drive_awvalid__held_until_awready)
      `add_item_with({
        delay == 0;
      })

      // ...
    `SVTEST_END


    `SVTEST(drive_write_addr_channel)
      `add_item_with({
        id == 5;
        address == 'h1122_3344;
        length == LENGTH_14;
        delay == 0;
      })

      // ...
    `SVTEST_END

We can even go one step further. If we look at the tests again, we'll notice that we use the same constraint over and over again to get an item without any delay. Also, whenever we want to check some write data channel aspects, we want to constrain the delay of each transfer to be zero. We do have tests where we want non-zero delays, but they are the exceptional cases. What we could do is to add some default values to the delay variables via soft constraints. This way, whenever we use the `add_item_with(...) macro we know that we'll get an item without any delay:

  `define add_item_with(CONSTRAINTS) \
    sequence_item item = new("item"); \
    `FAIL_UNLESS(item.randomize() with { \
      soft delay == 0; \
      foreach (transfers[i]) \
        soft transfers[i].delay == 0; \
      \
      if (1) \
        CONSTRAINTS \
    }) \
    sequencer.add_item(item);

This way, we can write even more compact code to add items. Here are the two data tests I mentioned:

    `SVTEST(drive_write_data_channel)
      `add_item_with({
        length == LENGTH_4;
        transfers[0].data == 'hffff_ffff;
        transfers[1].data == 'h0000_0000;
        transfers[2].data == 'haaaa_aaaa;
        transfers[3].data == 'h5555_5555;
      })

      // ...
    `SVTEST_END


    `SVTEST(drive_write_data_channel__data_held_until_wready)
      `add_item_with({
        length == LENGTH_4;
        transfers[0].data == 'hffff_ffff;
        transfers[1].data == 'h0000_0000;
        transfers[2].data == 'haaaa_aaaa;
        transfers[3].data == 'h5555_5555;
      })

      // ...
    `SVTEST_END

If we compare them with the initial versions from the beginning of the post, we'll see that they are much more compact. It's also clearer that the handling of the data variables is what we're testing, not anything related to delays.

Since we want to check the behavior of signals at certain points in time, we need to do a lot of waits. The statement @(posedge clk) comes up a lot in our unit tests. We could shorten this by using a default clocking block:

module master_driver_unit_test;
  // ...

  default clocking @(posedge clk);
  endclocking
endmodule

Now, instead of using the verbose @(posedge clk) statement, we can use the cycle delay operator:

    `SVTEST(drive_wvalid__with_delay)
      // ...

      // Skip over address phase
      ##1;

      ##1 `FAIL_UNLESS(intf.WVALID === 0)
      ##1 `FAIL_UNLESS(intf.WVALID === 1)

      repeat (3)
        ##1 `FAIL_UNLESS(intf.WVALID === 0)
      ##1 `FAIL_UNLESS(intf.WVALID === 1)

      repeat (2)
        ##1 `FAIL_UNLESS(intf.WVALID === 0)
      ##1 `FAIL_UNLESS(intf.WVALID === 1)

      ##1 `FAIL_UNLESS(intf.WVALID === 1)
    `SVTEST_END

Having fewer words to parse makes the test's intent clearer. Using a default clocking block is an option most of the time, but if you have a more exotic protocol that uses both edges of the clock or multiple clocks, it's not going to work.
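As a purely hypothetical illustration (none of this is from the AXI example), with two clock domains only one clocking block can be the default, so checks in the second domain keep their explicit event controls:

module dual_clock_master_unit_test;
  bit wr_clk, rd_clk;

  // Only one clocking block can be the default; the ## shorthand is tied
  // to wr_clk, while the rd_clk domain keeps the verbose form.
  default clocking @(posedge wr_clk);
  endclocking

  task wait_one_cycle_in_each_domain();
    ##1;                // advances on wr_clk
    @(posedge rd_clk);  // rd_clk still needs an explicit event control
  endtask
endmodule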

One thing you may have noticed is that I've marked some wait statements with comments. If you've read Clean Code (and if you haven't you should), you'll call me out on this. Uncle Bob says that comments are a crutch for poorly written code that doesn't express its intent properly. Instead of relying on comments, we could create a named task:

  task wait_addr_phase_ended();
    ##1;
  endtask

Now, when a test calls this task, it'll be immediately apparent what the intention is:

    `SVTEST(drive_write_data_channel)
      `add_item_with({
        // ...
      })

      wait_addr_phase_ended();

      // ...
    `SVTEST_END

There is a mismatch, though, between what the task does and its name. The task just waits for one cycle, so to reflect that it should have been named wait_cycle(). A call like that takes us back to square one in terms of test readability; we may as well just write ##1, since it tells us the same thing. If we want to resolve the mismatch between the task's name and its implementation, we should change the latter. In the context of our tests we knew that the address phase was going to end after one clock cycle, but in general what we want is to wait for AWVALID and AWREADY to be high at the same time:

  task wait_addr_phase_ended();
    @(posedge clk iff intf.AWVALID && intf.AWREADY);
  endtask

Some unit-testing purists might bash this method because it adds complexity to the testing logic. This is something we should try to avoid, since we don't want to have to start testing our tests. The task is lean enough that I'd say this point doesn't apply here.

While we may have streamlined our test preparation code, the code that checks our expectations is still ridiculously long. Procedural code isn't well suited to checking behavior over time. You know what is, though? Assertions. Instead of writing a big mess of procedural code full of repeats and cycle delays, we could write a nice property and check that it holds. When people hear the word property, they normally think of concurrent assertions, but that isn't really what we want here. We want to check that a certain property holds after a certain point in time, not during the whole simulation.

Luckily, SystemVerilog provides the expect construct, which does exactly what we want. Given a property, it will begin evaluating it starting with the first subsequent clock. For example, to check that the driver can drive data transfers with delays, we could write the following:

    `SVTEST(drive_wvalid__with_delay)
      // ...

      expect (
        intf.WVALID === 0
          ##1 intf.WVALID === 1
          ##1 intf.WVALID === 0 [*3]
          ##1 intf.WVALID === 1
          ##1 intf.WVALID === 0 [*2]
          ##1 intf.WVALID === 1
          ##1 intf.WVALID === 1)
      else
        `FAIL_IF(1)
    `SVTEST_END

This is much cleaner than the procedural code we had before. It also allows us to structure our unit tests according to the Arrange, Act, Assert pattern (for drivers, Arrange and Act tend to get a bit mixed, but at least Assert is clearly separated).
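To make that concrete, here's a sketch of the first delay test rearranged along those lines (the property simply mirrors the procedural checks from the beginning of the post):

    `SVTEST(drive_awvalid__with_delay)
      // Arrange + Act: queue the item; the driver picks it up on its own
      `add_item_with({
        delay == 3;
      })

      // Assert: AWVALID stays low for the programmed delay, then goes high
      expect (
        !intf.AWVALID [*3]
          ##1 intf.AWVALID)
      else
        `FAIL_IF(1)
    `SVTEST_END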

Since expecting that properties hold is something we'll want to do over and over again, let's wrap it in a utility macro and make it part of the vgm_svunit_utils package:

`define FAIL_UNLESS_PROP(prop) \
  expect (prop) \
  else \
    `FAIL_IF(1)

Using this macro will give the unit tests a more consistent SVUnit look and feel:

    `SVTEST(drive_wvalid__with_delay)
      // ...

      `FAIL_UNLESS_PROP(
        intf.WVALID === 0
          ##1 intf.WVALID === 1
          ##1 intf.WVALID === 0 [*3]
          // ...
      )
    `SVTEST_END

As we saw in the previous post, it's very important to use the === (4-state equality) operator instead of ==, otherwise we're writing tests that always pass. I intentionally left this bug in the drive_write_addr_channel test to show this (kudos to anyone who noticed). We also need to watch out for hidden comparisons. In the last test we didn't even use the equality operator:

    `SVTEST(drive_write_data_channel__wlast_driven_for_last_transfer)
      // ...

      repeat (7)
        @(posedge clk) `FAIL_IF(intf.WLAST)
      @(posedge clk) `FAIL_UNLESS(intf.WLAST)
    `SVTEST_END

Because of the way the `FAIL_* macros are written, they will both always pass, so the test isn't really doing anything. If we were to re-write them using properties, we would notice if WLAST isn't being driven:

    `SVTEST(drive_write_data_channel__wlast_driven_for_last_transfer)
      // ...

      `FAIL_UNLESS_PROP(
        !intf.WLAST [*7]
          ##1 intf.WLAST)
    `SVTEST_END

If WLAST were X, the negation would also return X, which gets interpreted as 0, so the property would fail. For single-bit signals, using properties is much safer than comparing for equality, with the added bonus that the code is more compact. For vectors, though, we still need to make sure that we're using the === operator.
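For reference, here's how the address-phase checks from the intentionally buggy drive_write_addr_channel test would look once fixed:

    `SVTEST(drive_write_addr_channel)
      `add_item_with({
        id == 5;
        address == 'h1122_3344;
        length == LENGTH_14;
      })

      @(posedge clk);
      `FAIL_UNLESS(intf.AWID === 5)
      `FAIL_UNLESS(intf.AWADDR === 'h1122_3344)
      `FAIL_UNLESS(intf.AWLEN === 'b1101)
    `SVTEST_END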

Another cool thing that properties allow us to do is to focus more on the aspects that are important for a test. For example, in the data_held_until_ready test we want to check that the driver doesn't modify the value of WDATA until it sees a corresponding WREADY. We did this by choosing known values for the data transfers and checking that these values stay on the bus for the appropriate number of clock cycles. Actually, we don't (or shouldn't) care what data is being driven, as long as it stays constant. We could simplify the code to the following:

    `SVTEST(drive_write_data_channel__data_held_until_wready)
      `add_item_with({
        length == LENGTH_4;
      })

      // ...

      `FAIL_UNLESS_PROP(
        stable_for(intf.WDATA, 3)
          ##1 stable_for(intf.WDATA, 1)
          ##1 stable_for(intf.WDATA, 2)
          ##1 stable_for(intf.WDATA, 4))
    `SVTEST_END

The stable_for(...) sequence only exists because my simulator complained about using $past(...) inside an expect statement. Its contents could have been inlined into the property:

  sequence stable_for(signal, int unsigned num_cycles);
    ##1 ($stable(signal) [*num_cycles]);
  endsequence

We don't really need to use a property to check that the signal stays stable. As per the LRM we could use the $stable(...) system function inside procedural code as well, but some tools don't allow this, limiting its use to assertions.
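Where the tool does allow it, a procedural version of one of those checks might look like this (a sketch; the clocking event is passed explicitly, though with the default clocking in place it could also be inferred):

      // Procedural equivalent of stable_for(intf.WDATA, 3), tool permitting
      repeat (3)
        ##1 `FAIL_UNLESS($stable(intf.WDATA, @(posedge clk)))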

While it's usually frowned upon to have randomness in unit tests, I would argue that this isn't necessarily a problem here. Randomness should be avoided because it causes us to write complicated code for our expectations. As long as we don't need to do this (i.e. the checking code stays the same regardless of what we plug into it), everything should be fine. There is one caveat, though. On the off chance that two consecutive transfers contain the same data, it won't be possible to figure out whether the driver moved to the second transfer too early. In the extreme case, all transfers could have the same data value, making it impossible to check that the driver really holds transfers constant for their entire durations. This could be solved by writing a constraint that all data values should be unique.
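Expressed with the add_item_with(...) macro, such a constraint could be sketched as pairwise inequalities over the transfers:

    `add_item_with({
      length == LENGTH_4;
      foreach (transfers[i])
        foreach (transfers[j])
          if (i != j)
            transfers[i].data != transfers[j].data;
    })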

Last, but not least, I hinted in the previous post that a driver not only drives items from the sequencer. It's also supposed to request items from the sequencer at well-defined points in time. For example, a driver should call get_next_item(...) once it's able to process a new item, but not before, to allow the running sequence to randomize items at the latest possible time (so-called late randomization). This is helpful when sequences use the current state of the system to decide what to do next. For simple protocols this is easy: a new item can start exactly after the previous one has finished. For pipelined protocols, though, it's not as easy. The AXI protocol is massively pipelined and can have a lot of transactions in flight at any time. I don't want to try to define what a sensible slot for a new item would be, because the scheme would probably be too complicated. I do, however, want to show how we could verify that a driver calls sequencer methods at defined times.

We'll take a contrived example of how to handle responses, since I couldn't think of anything better. Whether we're checking that put_response(...) or get_next_item(...) was called when expected doesn't really matter, so this example should be enough to prove a point. Let's say that when a response comes on the write response channel, the driver is supposed to let the sequencer know.

We'll have another sequence item for the response:

typedef enum { OKAY, EXOKAY, SLVERR, DECERR } response_kind_e;

class response extends uvm_sequence_item;
  bit [3:0] id;
  response_kind_e kind;

  // ...
endclass

First of all, we'll want to check that the response contains the proper values:

    `SVTEST(put_response_when_ready)
      response rsp;

      intf.BID <= 5;
      intf.BVALID <= 1;
      ##1;

      intf.BREADY <= 1;
      ##1;

      uvm_wait_for_nba_region();
      `FAIL_UNLESS(sequencer.try_get_rsp(rsp))
      `FAIL_UNLESS(rsp.id == 5)
    `SVTEST_END

The driver waits for a response to be seen on the bus and calls put_response(...) when this is the case:

class master_driver extends uvm_driver #(sequence_item, response);
  // ..

  virtual protected task collect_and_put();
    forever begin
      collect();
      seq_item_port.put_response(rsp);
    end
  endtask

  virtual protected task collect();
    @(posedge intf.ACLK iff intf.BVALID);
    rsp = response::type_id::create("rsp");
    rsp.id = intf.BID;
  endtask
endclass

The test is going to pass with this implementation. The problem, though, is that the response is considered done only once it has also been accepted, i.e. once BREADY goes high. Unfortunately, the code above sends responses too early. We'll need to update our unit test to check that put_response(...) was called after BREADY was also high and that it was called only once. This is where the test diagnostic information that sequencer_stub provides is going to be useful:

    `SVTEST(put_response_when_ready)
      // ...

      intf.BREADY <= 1;
      ##1;

      `FAIL_UNLESS_TRIGGERED(sequencer.put_response_called)
      `FAIL_UNLESS(sequencer.num_put_response_calls == 1)

      // ...
    `SVTEST_END

The sequencer_stub class contains a named event put_response_called that, as the name suggests, is triggered when put_response(...) is called by the driver. The `FAIL_UNLESS_TRIGGERED(...) macro is part of the vgm_svunit_utils package. It wraps the code required to check that an event was triggered in the current time step:

`define FAIL_UNLESS_TRIGGERED(ev) \
  fork \
    wait (ev.triggered); \
    \
    begin \
      #1; \
      `FAIL_IF_LOG(1, `"'ev' not triggered`") \
    end \
  join_any \
  disable fork;

The wait statement checks whether the event was already triggered (because the driver code could have executed first) and blocks if it wasn't. If time moves forward and the event still hasn't been triggered, an error is reported. Since the test will now fail, we'll need to update the driver code (not shown).
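Just as a sketch (the post doesn't show the actual fix), collect() would need to wait for the full handshake before building the response:

  virtual protected task collect();
    // Only treat the response as done once it's both valid and accepted
    @(posedge intf.ACLK iff intf.BVALID && intf.BREADY);
    rsp = response::type_id::create("rsp");
    rsp.id = intf.BID;
  endtask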

We've covered a lot of ground in this post on how to write more comprehensive, readable and maintainable unit tests for UVM drivers. You can find the example code here. I hope this helps you be more productive when developing your own drivers.

Sunday, August 7, 2016

Testing UVM Drivers

It's that time again when I've started a new project at work. Since we're going to be using some new proprietary interfaces in this chip, this calls for some new UVCs. I wouldn't even consider developing a new UVC without setting up a unit testing environment for it first. Since this is a greenfield project, a lot of the specifications are volatile, meaning that the interface protocol can change at any moment. Having tests in place can help make sure that I don't miss anything. Even if the specification stays the same, I might decide to restructure the code and I want to be certain that it still works.

I first started with unit testing about two years ago, while developing some other interface UVCs. I've learned a few things throughout this time and I'd like to share some of the techniques I've used. In this post we'll look at how to test UVM drivers.

A driver is supposed to take a transaction (called a sequence item in UVM lingo) and convert it into signal toggles. Testing a driver is conceptually pretty straightforward: we supply it with an item and we check that the toggles it produces are correct. As an example, we'll take the Wishbone protocol, revision B.3. Our sequence item models the properties of an access:

typedef enum { READ, WRITE } direction_e;


class sequence_item extends uvm_sequence_item;
  rand direction_e direction;
  rand bit[31:0] address;
  rand bit[31:0] data;

  rand int unsigned delay;

  // ...
endclass

Aside from the direction, address and data, we can also randomize how many clock cycles the driver should wait before starting the transfer.

All the drivers I've seen up to now consisted primarily of a loop in which an item is fetched and then driven:

class master_driver extends uvm_driver #(sequence_item);
  virtual task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      drive();
      seq_item_port.item_done();
    end
  endtask

  // ...
endclass

Our goal is to write the drive() task.

Let's look at how to supply a driver with an item. A driver is an active component that asks for items at its own pace. Inside an agent, it's connected to a sequencer that feeds it items when they become available. Inside our unit test, we need to emulate the same relationship by having a test double the driver can interrogate. There's nothing stopping us from using uvm_sequencer itself:

module master_driver_unit_test;
  master_driver driver;
  uvm_sequencer #(sequence_item) sequencer;


  function void build();
    svunit_ut = new(name);

    driver = new("driver", null);
    sequencer = new("sequencer", null);
    driver.seq_item_port.connect(sequencer.seq_item_export);
  endfunction

  // ...
endmodule

The first test we want to write is that when the driver gets an item with no delay, it drives CYC_O and STB_O immediately. We first create an item and we start it on the sequencer using execute_item(...). This models a `uvm_send(...) action inside a sequence:

module master_driver_unit_test;

  `SVTEST(cyc_and_stb_driven)
    sequence_item item = new("item");

    fork
      sequencer.execute_item(item);
    join_none

    @(posedge clk);
    `FAIL_UNLESS(intf.CYC_O === 1)
    `FAIL_UNLESS(intf.STB_O === 1)
  `SVTEST_END

  // ...
endmodule

Since execute_item(...) blocks until the driver finishes processing the item, we'll need to fork it out to be able to check what the driver does with it.

After supplying the driver with the item, we need to check that it drives the appropriate signal values. Once the driver gets an item, we expect it to start driving CYC_O and STB_O, with their values valid on the next posedge. We have to check one clock cycle at a time. For example, if we drove an item with a delay of three cycles, the unit test would look like this:

module master_driver_unit_test;

  `SVTEST(cyc_and_stb_driven_with_delay)
    sequence_item item = new("item");
    item.delay = 3;

    fork
      sequencer.execute_item(item);
    join_none

    repeat (3) begin
      @(posedge clk);
      `FAIL_UNLESS(intf.CYC_O === 0)
      `FAIL_UNLESS(intf.STB_O === 0)
    end

    @(posedge clk);
    `FAIL_UNLESS(intf.CYC_O === 1)
    `FAIL_UNLESS(intf.STB_O === 1)
  `SVTEST_END

  // ...
endmodule

It's not enough to just skip the first three clock cycles. We need to ensure that the driver signals idle cycles during that time. Also, notice the use of the === operator (4-state equality). If we were to use the == operator instead, we would get false positives when the driver doesn't drive a signal at all: the comparison with the resulting X (unknown value) evaluates to X, which the `FAIL_* macros treat as a pass.

We could write a few more unit tests for our driver. For example, a test could check that a read transfer is properly driven:

module master_driver_unit_test;

  `SVTEST(read_transfer_driven)
    sequence_item item = new("item");
    item.direction = READ;
    item.address = 'haabb_ccdd;

    fork
      sequencer.execute_item(item);
    join_none

    @(posedge clk);
    `FAIL_UNLESS(intf.WE_O === 0)
    `FAIL_UNLESS(intf.ADR_O === 'haabb_ccdd)
  `SVTEST_END

  // ...
endmodule

Another test would check that a write transfer is properly driven:

module master_driver_unit_test;

  `SVTEST(write_transfer_driven)
    sequence_item item = new("item");
    item.direction = WRITE;
    item.address = 'h1122_3344;

    fork
      sequencer.execute_item(item);
    join_none

    @(posedge clk);
    `FAIL_UNLESS(intf.WE_O === 1)
    `FAIL_UNLESS(intf.ADR_O === 'h1122_3344)
  `SVTEST_END

  // ...
endmodule

Notice that in the last two tests we didn't check CYC_O and STB_O anymore. This is because we already checked that they get driven when sending an item.  When we write the unit tests, we don't write them in isolation from the production code. They evolve together. To save effort and make the tests more readable, we can focus on certain aspects of the class we want to test.

The implementation of the drive() task would look something like this:

class master_driver extends uvm_driver #(sequence_item);
  virtual protected task drive();
    repeat (req.delay)
      @(posedge intf.CLK_I);

    intf.CYC_O <= 1;
    intf.STB_O <= 1;
    intf.WE_O <= req.direction;
    intf.ADR_O <= req.address;

    // ...
  endtask

  // ...
endclass

We can see that we drive WE_O and ADR_O at the same time we drive CYC_O and STB_O. It wouldn't gain us much to, for example, exhaustively check that the address can be driven with different delays.

Up to now we've only tested that the driver properly reacts to requests from the sequencer. Most of the time, the driver also has to react to other events triggered by its partner on the bus. In our case, since we're developing a master driver, it needs to be sensitive to toggles on signals driven by the slave. One such requirement is that a master is supposed to keep the control signals stable until the slave acknowledges the transfer, which it signals by raising ACK_I.

We need to write a test where, aside from executing an item on the sequencer, we also model the behavior of the slave:

module master_driver_unit_test;

  `SVTEST(transfer_held_until_ack)
    sequence_item item = new("item");
    intf.ACK_I <= 0;

    fork
      sequencer.execute_item(item);
    join_none

    repeat (3) begin
      @(posedge clk);
      `FAIL_UNLESS(intf.CYC_O === 1)
      `FAIL_UNLESS(intf.STB_O === 1)
    end

    intf.ACK_I <= 1;
    @(posedge clk);
    `FAIL_UNLESS(intf.CYC_O === 1)
    `FAIL_UNLESS(intf.STB_O === 1)
  `SVTEST_END

  // ...
endmodule

We basically model all collaborators of the driver, where some might communicate with it via method calls (like the sequencer) and some might communicate with it via signal toggles on the interface (like a connected slave).

This last test would suggest that we should update the drive() task with the following code:

class master_driver extends uvm_driver #(sequence_item);
  virtual protected task drive();
    // ...

    @(posedge intf.CLK_I iff intf.ACK_I);
  endtask

  // ...
endclass

While this is what we would need to do, the test still passes without this line. This is because, once the driver starts driving an item, it won't touch the signals anymore until it gets another item. The extra wait statement would only cause the driver to mark the item as finished at a later time. This hints that it isn't enough to just check signal toggles. It's also important that the driver calls item_done() at the right time. This isn't something we can easily check with uvm_sequencer. We'd need to capture information about method calls from the driver to the sequencer.

We could choose to implement such extra testing functionality in a subclass of uvm_sequencer. We could capture the number of times item_done() (or any other method) got called during a test. This wouldn't be a problem to implement, but have you ever taken a look at uvm_sequencer? That class is massive. Its functionality is spread across its base classes, uvm_sequencer_base and uvm_sequencer_param_base. Most of the stuff a sequencer does (prioritization, locking, etc.) we don't even need. Debugging anything would be a nightmare. Interrupting an item in the middle of being driven (because a unit test finishes early) is also going to cause fatal errors to be issued (e.g. "get_next_item() called twice without a call to item_done() in between").

A better alternative would be to start from scratch with a lightweight class that mimics the uvm_sequencer functionality we need and provides us with test diagnostic information. When talking about test doubles, I like to use the same terminology as outlined in this article. I can't really decide whether what we want to create should be called a fake (since we'll implement working functionality, but only a limited part of what uvm_sequencer can do) or a stub (since we want to collect information about method calls). I chose 'stub', for now, based on gut feeling.

Our stub has to be parameterizable, just like uvm_sequencer, to be able to interact with any driver. Just like a sequencer, it's going to have a seq_item_export for the driver to connect to:

class sequencer_stub #(type REQ = uvm_sequence_item, type RSP = REQ)
    extends uvm_component;

  typedef sequencer_stub #(REQ, RSP) this_type;

  uvm_seq_item_pull_imp #(REQ, RSP, this_type) seq_item_export;

  // ...
endclass

Since in every test we always forked out execute_item(...), we'll provide a function that just schedules an item to be picked up by the driver at its convenience:

class sequencer_stub #(type REQ = uvm_sequence_item, type RSP = REQ)
    extends uvm_component;

  extern virtual function void add_item(REQ item);

  // ...
endclass

Requests (items from the sequencer to the driver) and responses (items from the driver to the sequencer) will be stored in FIFOs:

class sequencer_stub #(type REQ = uvm_sequence_item, type RSP = REQ)
    extends uvm_component;

  protected uvm_tlm_fifo #(REQ) reqs;
  protected uvm_tlm_fifo #(RSP) rsps;

  // ...
endclass
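With the request FIFO in place, add_item(...) just needs to push into it. A minimal sketch, assuming the FIFO is constructed as unbounded so that try_put(...) always succeeds:

function void sequencer_stub::add_item(REQ item);
  // Queue the request; the driver picks it up whenever it asks for it
  void'(reqs.try_put(item));
endfunction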

Last, but not least, the sequencer methods (get_next_item(), item_done(...), etc.) operate on these FIFOs to emulate the behavior of a real sequencer. For example, get_next_item() peeks inside the request FIFO:

task sequencer_stub::get_next_item(output REQ t);
  reqs.peek(t);
endtask

The item_done(...) function pops a request from the FIFO, because it's been handled:

function void sequencer_stub::item_done(RSP item = null);
  REQ t;
  void'(reqs.try_get(t));

  // ...
endfunction

At the same time, we can also count the number of times a certain method was called. This information could be useful to check that the driver requests items at defined intervals:

function bit sequencer_stub::has_do_available();
  num_has_do_available_calls++;
  return !reqs.is_empty();
endfunction

Last, but not least, we'll need to ensure that all unit tests start from the same state. This means that there aren't any items queued from previous unit tests and that the diagnostic information has been cleared. A flush() method (similar to uvm_tlm_fifo::flush()) should be called in teardown() to enforce this rule:

function void sequencer_stub::flush();
  reqs.flush();
  rsps.flush();
  num_get_next_item_calls = 0;
  num_try_next_item_calls = 0;
  // ...
endfunction
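In the unit test module, the teardown hook is the natural place for that call; something along these lines, following the usual SVUnit template:

module master_driver_unit_test;
  // ...

  task teardown();
    svunit_ut.teardown();
    sequencer.flush();
  endtask
endmodule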

We can replace the sequencer in our unit test with this sequencer_stub class:

module master_driver_unit_test;
  sequencer_stub #(sequence_item) sequencer;

  // ...
endmodule

Replacing the calls to execute_item(...) with add_item(...) will make the code a bit more concise:

module master_driver_unit_test;
  `SVTEST(cyc_and_stb_driven)
    sequence_item item = new("item");
    sequencer.add_item(item);

    @(posedge clk);
    `FAIL_UNLESS(intf.CYC_O === 1)
    `FAIL_UNLESS(intf.STB_O === 1)
  `SVTEST_END

  // ...
endmodule

We could also do the more funky stuff, like testing that we only get a certain number of calls to item_done(...) in a certain time window. Let's skip this for now to keep the post short.
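(If you're curious, such a check might look roughly like the sketch below; it assumes the stub also counts item_done(...) calls in a hypothetical num_item_done_calls field, mirroring the other counters.)

module master_driver_unit_test;
  `SVTEST(item_done_called_once_per_item)
    sequence_item item = new("item");
    intf.ACK_I <= 1;
    sequencer.add_item(item);

    // Give the driver enough cycles to finish the single transfer
    repeat (5)
      @(posedge clk);

    `FAIL_UNLESS(sequencer.num_item_done_calls == 1)
  `SVTEST_END

  // ...
endmodule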

I've uploaded the code for the sequencer_stub to GitHub under the name vgm_svunit_utils. I'll use this as an incubation area for additions to SVUnit. These could eventually get integrated into the main library if deemed to be worthy and useful to others.

You can also find the example code for this post here. I hope it inspired you to give unit testing a try!