- Published: Tuesday, 05 June 2012 13:47
RTL simulations are used to observe the outputs of a design when the inputs are driven. The goal is to characterize the circuit and verify its functionality in all circumstances and conditions.
This article presents a few RTL simulation techniques with their advantages and drawbacks. The first ones are outdated and no longer address the challenges of verifying complex designs, but they are still used for legacy reasons or because the verification engineers lack the knowledge or the time to move on. The last ones, which include random verification and functional coverage, are an introduction to what can be accomplished with state-of-the-art methodologies like OVM / UVM.
As an illustration let's take the following example:
- The DUT (Design Under Test), in orange in the next picture, is a signal processing unit with receive (Rx) and transmit (Tx) data paths.
- It is driven by a CPU that gets its instructions from a ROM (Read-Only Memory).
- Our testbench instantiates the three elements: the DUT, the CPU and the memory models.
The quickest way to see what happens when the DUT is stimulated is to dump a few signals from the simulation tool and observe them in a waveform viewer.
This is easy to do and gives fast feedback. However, visual inspection is prone to human error and is difficult to reproduce. It is also impossible to automate or to apply to large designs.
Another common way of verifying a data path is to monitor the Rx/Tx bus and to produce output files that are compared with golden files, for example with a simple Unix diff command.
This is a self-checking simulation technique, but maintaining the test cases can become painful if the input and output flows get desynchronized. It is also prone to human error when building the golden files.
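The golden-file flow can be sketched in a few lines of Python. This is a minimal illustration, not a real tool: the function simply compares two lists of trace lines, which here stand in for the dumped and golden files.

```python
def compare_with_golden(dump_lines, golden_lines):
    """Compare a simulation dump against a golden reference, line by line.

    Returns a list of (line_number, dumped, expected) mismatches; an empty
    list means the test passes, which makes the run self-checking.
    """
    mismatches = [
        (i, got.strip(), exp.strip())
        for i, (got, exp) in enumerate(zip(dump_lines, golden_lines), start=1)
        if got.strip() != exp.strip()
    ]
    # A desynchronized flow often shows up as a length difference.
    if len(dump_lines) != len(golden_lines):
        mismatches.append((None, f"{len(dump_lines)} lines dumped",
                           f"{len(golden_lines)} lines expected"))
    return mismatches
```

In practice the comparison is usually delegated to `diff` in a shell script; the point is that the pass/fail decision is mechanical, not visual.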
In some cases, when the DUT is an Rx/Tx data path like in our example, it is possible to implement a loop between the Tx and the Rx paths. The data sent by the CPU model can then be compared, in the CPU model itself, with the received data. Consider a UART or SPI data path.
The test cases are self-checking and greatly simplified because data generation and verification are handled on the same side. However, if a bug is present on both sides of the path, it might not be detected.
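A loopback check can be sketched as follows; `channel` is a hypothetical stand-in for the DUT's Tx path looped back into its Rx path:

```python
import random

def loopback_test(channel, n_words=100, seed=42):
    """Drive random words into the Tx path and check that each one comes
    back unchanged on the Rx path. `channel` models the looped DUT.

    Caveat from the text: if the same bug sits in both the Tx encoder and
    the Rx decoder, the loop still closes and the bug goes unnoticed.
    """
    rng = random.Random(seed)
    for _ in range(n_words):
        word = rng.randrange(256)   # an 8-bit payload, e.g. a UART byte
        received = channel(word)
        if received != word:
            raise AssertionError(f"loopback mismatch: sent {word}, got {received}")
    return n_words
```

With a correct loop the test passes silently; a single stuck or inverted bit in the path raises an error on the first affected word.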
Using a reference model
When a reference model is available (for example a C or Matlab model), it might be instantiated in the testbench in parallel with the DUT and stimulated in the same way as the DUT. The outputs of the model and DUT are then compared.
It is a self-checking technique that ensures the DUT is always verified against the reference model, and it is easy to maintain. Building testbenches with responding models is now widespread, and it is one of the building blocks of more advanced verification methodologies like UVM / OVM.
However, a reference model is not always available (commercially or in-house) and may be time-consuming to develop.
Note that the model can also act as a "responder" to the DUT.
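A minimal sketch of the comparison loop, assuming the DUT and the reference model are both wrapped as Python callables that consume one stimulus and return one output (a simplification: real testbenches compare transactions or cycle-accurate outputs):

```python
def verify_against_model(dut_step, model_step, stimuli):
    """Apply every stimulus to the DUT and to the reference model in
    lockstep and compare their outputs; any divergence fails the test."""
    for cycle, stim in enumerate(stimuli):
        dut_out = dut_step(stim)
        ref_out = model_step(stim)
        if dut_out != ref_out:
            raise AssertionError(
                f"cycle {cycle}: DUT={dut_out!r} vs model={ref_out!r} "
                f"for stimulus {stim!r}")
    return len(stimuli)

# Toy illustration: DUT and model both compute an 8-bit saturating increment.
saturating_inc = lambda x: min(x + 1, 255)
```

The same loop works whatever the stimulus source is, which is why this structure carries over directly to randomized testing below.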
All the previously described techniques were developed with pre-defined, fixed scenarios in mind. This is called scenario-driven verification. In some cases, the number of scenarios can be infinite! Consider the instruction set of a CPU and the possible sequences of instructions that can be executed.
The solution is to develop one general test case and to use randomization. Our CPU can then easily be driven by millions of random instruction sequences. Of course, not all sequences are legal or realistic, so the verification engineer defines constraints to make sure the CPU is driven in a correct way. This is called constraint-driven verification.
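The idea can be sketched in Python. Both the mini-ISA and the constraint below are invented for the illustration (in SystemVerilog this would be expressed with `rand` variables and `constraint` blocks):

```python
import random

# Hypothetical mini-ISA, purely for illustration.
OPCODES = ("NOP", "ADD", "SUB", "LOAD", "STORE", "BRANCH")

def random_program(length, rng):
    """Generate a random but legal instruction sequence.

    Example constraint (made up for this sketch): a STORE may not
    immediately follow a BRANCH. The randomizer only picks among the
    opcodes that keep the sequence legal, which is the essence of
    constraint-driven verification.
    """
    program, prev = [], None
    for _ in range(length):
        legal = [op for op in OPCODES
                 if not (prev == "BRANCH" and op == "STORE")]
        op = rng.choice(legal)
        program.append(op)
        prev = op
    return program
```

Each call with a different seed yields a different legal program, so one generic test case replaces an unbounded list of hand-written scenarios.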
Randomization makes it possible to cover a design completely, but how do we know when to stop? This is the purpose of functional coverage. In short, functional coverage is part of the testbench (written, for example, in PSL or SystemVerilog) and defines exhaustively what must be covered. For example, for a UART, we can define the coverage goal that all 6-, 7- and 8-bit modes must be exercised.
When 100% of the coverage goals are reached, verification may stop and the design is declared fully covered from a functional point of view.
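A coverage-closure loop can be sketched as follows. The bins and the draw function are illustrative stand-ins for a real covergroup and a real randomized stimulus generator:

```python
import random

def run_until_covered(goals, draw, max_trials=10_000):
    """Keep drawing random configurations until every coverage bin in
    `goals` has been hit at least once, then stop: coverage closure."""
    covered, trials = set(), 0
    while not goals <= covered and trials < max_trials:
        value = draw()
        if value in goals:
            covered.add(value)
        trials += 1
    return covered, trials

# UART example from the text: the 6-, 7- and 8-bit modes must all be seen.
rng = random.Random(0)
covered, trials = run_until_covered({6, 7, 8}, lambda: rng.choice((6, 7, 8)))
```

The simulator plays the role of `run_until_covered` in a real flow: it accumulates coverage across random runs and reports when the goals defined in the testbench are met.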
The difficulty of the task is:
- to exhaustively list all points of coverage and build the verification plan. This is, however, normally a mandatory task whichever verification technique is used;
- to translate all coverage goals into a language the simulator understands.
Verification techniques are numerous. The most advanced combine reference models, randomized stimuli and functional coverage within a well-defined methodology. This is the baseline of the UVM / OVM approach, which is covered in another article.