A Quick Look at eUnit

Earlier this month I started a new project. I'm using Specman again, so this means you'll probably see more e related posts in the future.

In a previous post I wrote about how I wanted to improve my coding skills. In particular, I want to produce more reliable code. While I was in SystemVerilog mode, I dabbled with unit testing using the open source SVUnit library.

As luck would have it, in the meantime Cadence released eUnit, a unit testing library for e. I gave it a try while developing a new scoreboard. This worked out pretty well: there wasn't any design to code against yet, but I didn't want to sit idle while waiting for it, so I developed the scoreboard in isolation, based solely on the specification. I haven't integrated it yet, as at the time of writing I still hadn't gotten a chance to simulate it together with the design, though I am pretty hopeful.

During this short foray into eUnit I found a mix of things, some good, some bad. Read on if you want to know more.

eUnit is pretty lightweight as libraries go: it contains very few lines of code, though in typical Specman fashion a lot of the internals are built into the tool itself in the form of built-in structs and methods. Getting started is pretty easy, as it provides a code generator to quickly get a testing environment up and running. The eUnit generator also produces mockups and other nice extras, but I still get the feeling that SVUnit was easier to get into, precisely because it generated less code and didn't overwhelm a new user.

The first thing that hit me when running my first test was that there isn't much information displayed on the console. I for one don't like staring at a blank screen; I want my programs to give me feedback and show me that they are actually running. Error messages are, suspiciously, not printed to the screen; the output is only written to a log file. This is easily fixable with a patch.

Another thing I found lacking was the source of error messages. SVUnit has the nice property of showing which `FAIL_* macro failed (and at what location in which file), making it very easy to understand what's going on and what could have caused the error. This can also be fixed, but the solution requires rewriting all of the eu_expect macros (something I'm not particularly thrilled about).

Since we're on the topic of debugging failed tests, a nice feature that eUnit has and SVUnit doesn't is running a single test. This makes analyzing a failure much easier by removing the background noise of the other tests. There are, however, slight mismatches between running just one test and running the entire test suite. When running a single test, all built-in phases are called, so, for example, you can see the effect of any code you may have in a check() method (one of the phases that comes after run()), whereas this code won't get executed when running all tests.

Another thing I don't like is that there isn't any nice summary at the end of a test suite showing how many tests were run, how many passed and how many failed. While I have been able to live without it for now, I'm looking into adding this as a patch as well.

One thing that caused me a bit of trouble was testing clocked behavior (or any kind of temporal behavior, for that matter). There isn't any clear example in the manual on how to do this. It also doesn't show how to test signal-based behavior, though it does mention that there is a unit called eu_ports_bundle that can be used to connect to any ports inside the feature under test (FUT). Because Specman is decoupled from the simulator, things are a little trickier. I'm not sure how many people know how to fully use Specman standalone (I for one don't know it that well), but information on how to do this is available in other parts of the manual. It would have been nice, though, to have a few mentions in the eUnit section as well. Here are some examples of things you have to look out for:

  • any clock events have to be attached to sys.any or, as I prefer, triggered manually
  • there can't be any @sim events present in standalone mode
  • there can't be any ports that are externally bound; they must be bound to mock ports of the opposite direction (this is where the eu_ports_bundle comes in)
  • driving ports is delayed - this means that the effect of a write will only be seen on the next tick (this models delta cycles inside a simulator)
  • events and temporal expressions get triggered at the end of a tick, so some checks have to be delayed until after sys.tick_end or to the next tick (the same thing happens when using SVUnit, due to the SystemVerilog scheduler)
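To make the last points concrete, here is a rough sketch of how I drive a manually triggered clock in standalone mode. The unit and method names (my_scoreboard_test, pump_clock) are made up for illustration and are not part of eUnit:

```e
<'
extend my_scoreboard_test {  -- hypothetical eUnit test unit
    -- A manually triggered clock instead of an @sim event:
    event clk;

    -- Emit one clock edge, then wait for the tick to end, so that events
    -- and temporal expressions sampled on this edge have settled before
    -- any checks run:
    pump_clock() @sys.any is {
        emit clk;
        wait @sys.tick_end;
    };
};
'>
```

In standalone mode sys.any fires on every tick, so each call to this TCM advances time by one tick; checks that depend on the effect of a port write go after the wait.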

A pleasant surprise was how easy it is to create code seams, thanks to the aspect-oriented nature of e. This makes it very easy to test methods in isolation. In fact, adding code seams is a bit too easy, which tempted me to test based on the values of internal state fields instead of through the struct's public API. That makes the tests brittle, because they will need fixing should the internal implementation change.
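As an example of how cheap a seam is in e, a test can simply override a collaborator method with is only. The scoreboard and method names below are hypothetical, just to show the pattern:

```e
<'
-- Stub out the reference-model lookup so that the checking logic can be
-- tested in isolation, against a canned expected value:
extend my_scoreboard {
    expected_value(addr: uint): uint is only {
        return 0xab;  -- replaces the real computation for this test run
    };
};
'>
```

No interfaces or dependency injection are needed; the extension simply gets loaded together with the test code.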

The biggest issue I have with eUnit is that there aren't any setup() or teardown() methods defined for the test suite. Sure, there is a unit_tests_setup() method that is called once at the beginning of the suite, but this isn't enough. I mentioned earlier that we want to test in isolation. Well, isolation also means that tests are completely independent of each other; if they're not, test dependencies may mask bugs, because we aren't executing the exact scenario we think we are. The classic case is testing an e unit. Units almost always have state, and we have to start each test with a blank state. For structs we could just create a new instance each time, but units can only be created once, at the beginning, during the test phase. We would need to execute some cleanup code before the start of each test, which is why we need a setup() hook.

For the moment, the jury's still out on this one, as some authors recommend not using setup() or teardown() at all: they may hurt test readability and lead to tests that are more integration tests than unit tests. Some xUnit frameworks have done away with them altogether (NUnit, for example). This may make the topic of a future post, but for now I'm leaning toward wanting them, one reason being the unit scenario I described above. Other programming languages don't have this problem, because they don't have the concept of a static hierarchy. SystemVerilog does have it if you want to test modules, interfaces or checkers, which is why it's neat that SVUnit provides these two hooks.
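Until a real setup() hook exists, my workaround looks roughly like this: give the unit under test an explicit reset method and call it at the top of every test. All the names here are illustrative, and I'm assuming the test unit holds the FUT instance in a field called fut:

```e
<'
extend my_scoreboard {
    -- Return the unit to a blank state; assumes these two fields exist:
    reset_state() is {
        pending_items.clear();
        error_count = 0;
    };
};

extend my_scoreboard_test {
    test_overflow() is {
        fut.reset_state();  -- poor man's setup(): every test starts clean
        -- ... the actual test body goes here ...
    };
};
'>
```

The obvious downside is that every test has to remember to make the call, which is exactly the kind of repetition a framework-provided setup() hook would eliminate.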

My impression so far is that eUnit is not as mature as SVUnit, and it shows. It makes me feel like I'm fighting against it sometimes to make it do what I want. Some features that are essential (for me, at least) are missing, but with the right additions it can become much more user friendly.

I have a few patches in place and I'm working on a few more. I'll share these in a future post, so consider subscribing if you don't want to miss it.

In the meantime, I'll keep using eUnit to produce code that contains fewer bugs and to have a safety net in place should I need to make any big changes. If you use e, I recommend you give it a try and see for yourself!