Sunday, May 25, 2014

A Quick Look at SVUnit

Some time back I, like other people before me, had the realization that doing verification is a lot like doing software programming. Modern hardware verification languages (HVLs) like SystemVerilog and e were conceived with this idea in mind, but many people still haven't gotten the memo.

Now, like most other people in my field, I'm not a trained software engineer. I don't have a computer science background. I am at most a hobby programmer.

I started out with SystemVerilog back in 2009, when it was getting more popular. As an electrical engineer, I just knew how to write code that did the job, but was nothing fancy. Since then I've had the chance to do some more software development in Java and C++. This opened me up to a much bigger world and made me decide that I wanted to improve my code writing skills. There were many dimensions in which to improve: flexibility, performance, readability, etc.

I decided that the first one I wanted to improve is the one that is most often overlooked: correctness. I was stuck in a continuous loop of write, see it fail later, fix it, see it fail again later (maybe somewhere else), and so on... This had gotten me pretty frustrated, but since this was the way everybody around me was doing it, I didn't think there was a better way. It made me realize that the development process is as important as the code itself. You could have the cleanest, most extensible code, but it's worth nothing if it doesn't work as it should. Even if you do eventually manage to get it working, you will have wasted a lot of effort doing it.

Fortunately, when I decided I wanted to improve my development skills, Neil Johnson had already started AgileSoc.com. He was challenging the age-old idea that the person who writes the code shouldn't also be the one to test it. This idea is so ingrained in the minds of hardware developers that it's treated as gospel. His scope is broader than just verification code: he envisions a future where designers test their own code before committing it to be thoroughly verified. He was right, though; the code I wrote ended up being tested by me anyway, just at a later point in time (when I had mostly forgotten what it was supposed to do and how it was supposed to work).

Neil is a firm believer in test driven development (TDD). The idea behind TDD is to write the tests before the production code. You should test it when you write it, because that's when you're in the zone. TDD goes hand in hand with unit testing, that is, testing at the smallest possible level. He developed SVUnit to empower unit testing and TDD for SystemVerilog. I was in SystemC mode when I first read about it and afterwards I switched to e, so I didn't get a chance to try it out then.

Late last year I got a break between projects and was assigned to do some work to support my colleagues using SystemVerilog. We were developing some new VIPs basically from scratch. This was my chance to use SVUnit and put my money where my mouth was.

Well, we engineers are a strange bunch, aren't we? We like to tinker with new technologies, but make us do something differently than we're used to and you'll see that, at the same time, we hate change. I was a bit afraid. I was worried that switching to unit testing would hamper my productivity. "In the end, how easy can it be? If it were so easy, then everyone would be doing it, right?", I thought. Because of this I put it off for a couple of weeks.

Eventually, I gathered the courage to download SVUnit and install it. As real men don't look at the example (until maybe later), I fired it up and created a test for a class I was working on that week. A call to a script here, a file generated there, a few lines of code in the new file and bam! a unit test. "Wow, that was way easier than I thought", I said to myself. Neil did a great job of automating a lot of things and letting the user worry about writing tests rather than infrastructure. I won't talk about how to use SVUnit in this post; there are plenty of resources on the AgileSoc.com site on how to get started.
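Just to give a flavor of what such a generated file looks like, here's a rough sketch of an SVUnit unit test skeleton. The class under test, my_class, and its get_count() method are hypothetical, and the exact template may differ between SVUnit versions; see the AgileSoc.com resources for the real workflow:

```systemverilog
`include "svunit_defines.svh"

module my_class_unit_test;
  import svunit_pkg::svunit_testcase;

  string name = "my_class_ut";
  svunit_testcase svunit_ut;

  // the unit under test (hypothetical class)
  my_class uut;

  function void build();
    svunit_ut = new(name);
    uut = new();
  endfunction

  task setup();
    svunit_ut.setup();
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN

    // a single test: check the unit's state after construction
    `SVTEST(initial_count_is_zero)
      `FAIL_UNLESS(uut.get_count() == 0)
    `SVTEST_END

  `SVUNIT_TESTS_END
endmodule
```

The boilerplate (build/setup/teardown and the macros) is generated for you; all you really write is what goes between `SVTEST and `SVTEST_END.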

I started out slowly without much discipline. Sometimes I wrote the code first and then tested it. Other times I gave TDD a try and wrote the tests first, watched them fail and then wrote the code. I did end up having most of my code (and a lot of code written by my colleagues) unit tested. I didn't just rely on unit testing, though. I also did integration testing of the entire VIPs to emulate them working in the field, which did show some problems. Nevertheless, unit testing provided a granularity that I couldn't have easily achieved when debugging in the field because there's just too much other stuff going on in the background and because I had limited controllability.

Now the big question: "Was it all worth it in the end?". I like to think so. We had a much smoother ramp-up of our new VIPs after putting them to use in real projects. New feature requests came, we found performance problems with some pieces of code and also some bugs, but having the unit tests made refactoring much easier and gave us more confidence in the code. It's also not an "either/or" approach as I thought in the beginning; you can unit test some code and skip it elsewhere (where it's too much effort, for example). Also keep in mind that it's not a magic bullet that will solve all of your problems. Just because the parts work properly doesn't mean that the whole machine does, so you still need integration testing.

SVUnit whetted the e guys' appetite as well, and Cadence didn't disappoint, releasing eUnit last year. Once I switch to e mode again this summer, I'll definitely give that a try as well.

If you do SystemVerilog development, I urge you to give unit testing and SVUnit a try. Make your code work first and make it fancy second!

Thanks for reading and see you next time!

Saturday, May 10, 2014

Fun and Games with CRV: Sudoku

This week let's mix it up a bit and do something less work-related. Everybody probably knows what Sudoku is, but just in case you don't here's a link to the Wikipedia page.

Sudoku is a pretty popular game that many people play to pass the time. I've played it as well, but I'm not really good at it so I turned to my computer for help. Now, programming a Sudoku solver is probably really difficult, plus I would have no idea where to start. This is why we'll cheat and have the SystemVerilog random number generator solve it for us.

This is the beauty of constraint programming: we don't have to solve problems, we just have to describe them and let our constraint solver do the heavy lifting. While we want to express our problem in the form of constraints, we also want to express everything in as short and abstract a way as possible. Let's get started!

We'll represent our Sudoku grid as a 9 x 9 array of integers. We'll define all properties (fields and constraints) in a class called sudoku_solver_base. This class will hold all of the rules needed to solve any game of Sudoku, while an actual game will be represented by a sub-class of this class that defines the initial starting point (for example also via constraints or by reading it from a file).

class sudoku_solver_base;
  rand int grid[9][9];
  
  // ...
endclass

The first constraint we have is that all elements inside our array must be numbers between 1 and 9, which is pretty easy to express (also note the syntax of the multi-dimensional foreach, as this is a common gotcha):

constraint all_elements_1_to_9_c {
  foreach (grid[i, j])
    grid[i][j] inside { [1:9] };
}

The elements on each row must also be unique. This is very easily expressed using SystemVerilog 2012's unique construct:

constraint unique_on_row_c {
  foreach (grid[i])
    unique { grid[i] };
}

Using unique has saved us from doing a double foreach and has made the constraint much more readable.

The complement of the last constraint is that all elements in a column must also be unique. This is a bit trickier, as SystemVerilog doesn't have any way of slicing an array into columns. While we could do some double foreach looping, that wouldn't be as clean as using unique. What we first have to do is construct an auxiliary array that is the transpose of our grid. This requires extra memory, but it will help keep our constraints more readable.

local rand int grid_transposed[9][9];

constraint create_transposed_c {
  foreach (grid[i, j])
    grid_transposed[i][j] == grid[j][i];
}

grid_transposed's rows will contain grid's columns. It's now easy to constrain these to have all unique elements:

constraint unique_on_column_c {
  foreach (grid_transposed[i])
    unique { grid_transposed[i] };
}

We may not have gained much, since we used a double foreach anyway just to construct the transposed grid, but this is a pretty nifty trick if you need to apply complex constraints on 2D arrays. The idea isn't mine; I "borrowed" it from one of Team Specman's blog posts.

Now we have to concentrate on the 9 sub-grids that make up our Sudoku grid. Like we did with the transposed grid, we want to construct these and use unique on their elements. We can think of these sub-grids as being the elements of a 3x3 matrix. This is how we'll index them:

+-------+-------+-------+
|       |       |       |
| (0,0) | (0,1) | (0,2) |
|       |       |       |
+-------+-------+-------+
|       |       |       |
| (1,0) | (1,1) | (1,2) |
|       |       |       |
+-------+-------+-------+
|       |       |       |
| (2,0) | (2,1) | (2,2) |
|       |       |       |
+-------+-------+-------+

These we will keep in a 2D array of 3x3 arrays (which is basically a 4D array):

local rand int sub_grids[3][3][3][3];

Keeping with my goal of writing short and sweet constraints, the first thing I tried was directly slicing in two dimensions:

constraint create_sub_grids_c {
  foreach (sub_grids[i, j])
    sub_grids[i][j] == grid[3*i +: 3][3*j +: 3];
}

This unfortunately wasn't supported. "This is probably because of the range operator", I thought. "No big deal, I'll just slice row-wise", I then said to myself. I tried it out first for one row:

constraint create_sub_grids_c {
  foreach (sub_grids[i, j])
    sub_grids[i][j][1] == grid[3*i + 1][3*j +: 3];
}

This was however also not supported as equality constraints cannot be defined on entire arrays, only on individual array elements. This means we have to use brute force and explicitly assign each element (something I would have hoped to avoid):

constraint create_sub_grids_c {
  foreach (sub_grids[i, j, k, l])
    sub_grids[i][j][k][l] == grid[i*3 + k][j*3 + l];
}

This was another double foreach I would have hoped to avoid. Well, having worked so much for our sub-grids, we might as well use unique on them and set our final constraint:

constraint unique_in_sub_grid_c {
  foreach (sub_grids[i, j])
    unique { sub_grids[i][j] };
}

Here is where the harsh reality of simulator bugs hit me: even though uniqueness constraints on entire arrays are allowed by the LRM, the simulator encountered an internal error during randomization. While annoying, I thought I could maybe work around it by first storing each sub-grid in a one-dimensional array:

local rand int sub_grids_lin[3][3][9];

constraint create_sub_grids_lin_c {
  foreach (sub_grids_lin[i, j, k])
    sub_grids_lin[i][j][k] == sub_grids[i][j][k/3][k%3];
}

Magically, after inserting this constraint, the internal error was gone, even though I hadn't yet removed the array uniqueness constraint. I left this constraint inside the sample code just to have it working, but be aware that it shouldn't be necessary.

For testing I used the same puzzle as on Wikipedia:

class sudoku_solver extends sudoku_solver_base;
  constraint puzzle_c {
    grid[0][0] == 5;
    grid[0][1] == 3;
    grid[0][4] == 7;
    
    // ...
    
    grid[8][4] == 8;
    grid[8][7] == 7;
    grid[8][8] == 9;
  }
endclass
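For completeness, here's a sketch of how the solver might be run. The module name sudoku_top is made up; since grid is a public rand field of the base class, calling randomize() and reading it back is all that's needed:

```systemverilog
module sudoku_top;
  initial begin
    sudoku_solver solver = new();

    // let the constraint solver do the actual solving
    if (!solver.randomize())
      $fatal(1, "Solver could not satisfy the Sudoku constraints");

    // print the solved grid row by row
    for (int i = 0; i < 9; i++) begin
      for (int j = 0; j < 9; j++)
        $write("%0d ", solver.grid[i][j]);
      $display();
    end
  end
endmodule
```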

The result I got on the first run was the same as the solution on Wikipedia. That's right, I got first-time-right Sudoku!

As always, you can find the full code on the blog's SourceForge repository. If you want to play with it a little more you could also create a file reader for it to read in the starting point of the puzzle.

I hope you enjoyed this little exercise! In a future post I do intend to implement the same thing in e and see if I end up with less code (or more). If you don't want to miss it, feel free to hit the subscribe button.

Saturday, May 3, 2014

"What are you implying?" - Overlapped Implication in e

A feature where SystemVerilog really shines for hardware verification is its assertion language. It makes it easy to specify complex requirements in a clear and easily understandable way, and, coupled with the fact that assertions can be used for both simulation and formal verification, this makes SVAs the complete package.

A question I got from a colleague a little while back was "How do I do overlapped implication in an e temporal expression? You know, like SystemVerilog's '|->' operator.". He hit the nail on the head with that one; there is no such operator in e, but surely we can figure something out.

Let's start with the basics. Here is what an overlapped implication property looks like in SystemVerilog:

overlapped_implication: assert property (
  @(posedge clk)
    antecedent |-> consequent
);

Whenever the first condition is true (called the antecedent) then the second condition (called the consequent) must also be true.

Digging around in the e LRM I got the idea of using a combination of the detach operator, together with the true match variable repeat operator (which is fancy talk for ~[range]). Here's what I came up with:

<'
expect overlapped_implication is @antecedent => detach({@consequent; ~[1]});
'>

The way it works is that detach sort of spawns a parallel evaluation of the second TE. The second TE will evaluate to true whenever @consequent has happened once. Putting it all together would read: whenever @antecedent happens, make sure that @consequent already happened once. This is all fine and dandy on paper, but when doing a test run I was surprised to see the SVA firing without any error message from the expect. Continuing the simulation also caused the expect to trigger, but one clock cycle later. Strange?

Well, not so strange after all. The gotcha here is that the '=>' operator includes an additional clock cycle between the evaluation of the left hand side and of the right hand side (because it is equivalent to non-overlapped implication). What we actually described in our expect is: whenever @antecedent happens, make sure that at the next clock cycle @consequent already happened once. This explains why we only saw it triggering one clock cycle later. While we could live with this approach and just put in a nice error message that hints to look at the previous clock cycle for the source of the error, this feels like only half of a solution. There has to be a better way!

Breaking out the old Boolean algebra textbook (or, in our case, Wikipedia), we are reminded that logical implication is just another logical operation, which means we can also express it using and, or and not. The equivalent form of "p implies q" is "not p or q". This is easily expressed as a TE:

<'
// @antecedent implies @consequent
expect overlapped_implication is not @antecedent or @consequent;
'>
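For anyone who wants to double-check the equivalence used above, the truth table for \(p \rightarrow q\) versus \(\neg p \lor q\) shows they match on every row:

```latex
\begin{array}{cc|cc}
p & q & p \rightarrow q & \neg p \lor q \\
\hline
\mathrm{F} & \mathrm{F} & \mathrm{T} & \mathrm{T} \\
\mathrm{F} & \mathrm{T} & \mathrm{T} & \mathrm{T} \\
\mathrm{T} & \mathrm{F} & \mathrm{F} & \mathrm{F} \\
\mathrm{T} & \mathrm{T} & \mathrm{T} & \mathrm{T}
\end{array}
```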

The only disadvantage here is that it isn't instantly apparent to a not-so-math-savvy person reading the code that what we have is an implication, but this can easily be fixed with a comment.

You can find both forms of the temporal expression, including some SystemVerilog code to test them, on the blog's SourceForge repository.

See you next time!