ASIC Verification: Introduction to Specman

Monday, March 3, 2008

Introduction to Specman

I am writing this Specman tutorial for those who are taking their first steps towards Specman verification. This section describes the evolution of verification in general, the components of an e verification environment, and some advantages of the language. This document is a draft, and I keep updating it as I learn more about Specman. I will refer to Verification Engineers in this article as VEs.

INTRODUCTION

Today, project teams build huge verification environments, where verification consumes 40-70% of the resources needed in a typical design cycle. Because a verification environment typically contains concurrent mechanisms for controlling traffic streams to device input ports, and for checking outstanding transactions at the output ports, Verilog and VHDL have traditionally been used for building verification environments. Unfortunately, it is widely recognized that for more complex verification environments and problems, these languages do not contain the necessary constructs for modeling the verification environment efficiently.

As a result, many project teams have moved to higher-level languages such as C and C++ to be more efficient in creating the verification environment. Unfortunately, these general-purpose languages have no built-in constructs for modeling hardware concepts such as concurrency, operating in simulation time, or manipulating vectors of various bit widths. Without these constructs, handling device-specific needs such as controlling synchronization between traffic streams, checking timing, and formatting traffic data is extremely difficult and time-consuming. Project teams often use a mix of HDL and C/C++ code to attack this verification problem, spending a good deal of time on the interface between the languages.

In addition, advanced methodologies require test benches and test bench languages to implement advanced concepts like constraints for test generation, assertions and definition of coverage scenarios for functional coverage analysis.

EVOLUTION OF VERIFICATION

This section presents the evolution from an HDL-based verification strategy to automated testbench verification.

In an HDL-based verification strategy, it became common to describe both the Device Under Test (DUT) and the verification environment (the testbench, or TB) in either VHDL or Verilog. In a typical HDL test environment, the TB consists of tasks that write data into the DUT and read data back from the DUT to verify it. This approach does not scale to very large designs, since the HDLs do not provide enough features to build such a complex environment. So, to make the test environment more readable and reusable, VEs started to write the environment in object-oriented languages such as C++.

Object-oriented programming facilitated modeling the input and output data of the DUT at a high level of abstraction. The VE created an abstract data model, and the environment converted that model into a bit-level representation and injected it into the DUT. But the test environment became more complex, because new utilities (such as a simulator interface) were required, so the overall productivity gain was not sufficient. Therefore, VEs started to create random verification environments that select the stimuli for the DUT automatically.

In a random generation strategy, the VE writes a single test and runs it multiple times with different seeds to create multiple tests. The disadvantages of this method are:

  • It can create illegal stimuli, and it tends to check the same functionality many times over.
  • Functional test coverage becomes a requirement, to measure what has actually been exercised.

At this point, there was a strong motivation to reduce the amount of time spent on creating these complex utilities, which were difficult to maintain when the design specification changed during the verification process.

So the VE started thinking about an automated testbench environment with the following characteristics (illustrated in the sketch after this list):
  1. A language that allows the objects or components in a verification environment to be extended for a particular test.
  2. A language that can express constraints, because a constraint-based approach is more powerful.
  3. A coverage engine that allows goals to be defined for complex test case scenarios.
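
As a small taste of how e addresses the first two points, here is a minimal sketch. The packet struct and all its names are hypothetical, invented purely for illustration:

<'
// A hypothetical USB-like packet; all names here are illustrative.
type packet_kind : [TOKEN, DATA, HANDSHAKE];

struct packet {
    %kind : packet_kind;    // '%' marks a physical field, used later by pack()
    %data : list of byte;   // raw payload
};

// A specific test can extend the struct without touching the original file:
extend packet {
    keep kind == DATA;          // this test generates only DATA packets
    keep data.size() <= 64;     // and keeps the payload short
};
'>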

COMPONENTS OF AUTOMATED VERIFICATION SYSTEM

A complete verification environment is built from several components. This section describes what each of them does.

GENERATOR

The generator generates all possible input stimuli, based on the constraints provided by the user, to be injected into the DUT. Suppose, for example, that your DUT handles all types of USB packets; then the generator should be able to generate all possible USB packets: TOKEN, DATA and HANDSHAKE. It should also be able to generate error packets, such as a packet that exceeds the maximum packet size or one that contains a CRC error. A specific test contains additional constraints, whose purpose is to direct the generation of the input stimulus to a specific area. For example, if you constrain the packets to be either TOKEN or DATA, the generator will generate only those two kinds, thereby limiting the area of interest. If you do not give any specific constraints, you will get a lot of different tests, some of which are quite useless, since they fail to find any bugs.
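
In e, such test-specific constraints can also be applied at the point of generation with a keeping block. A minimal sketch, reusing the hypothetical packet struct from above:

<'
extend sys {
    run() is also {
        var p : packet;
        // constrain this particular generation on the fly:
        gen p keeping {
            .kind in [TOKEN, DATA];    // only these two kinds for this test
        };
        out("generated a ", p.kind, " packet of ", p.data.size(), " bytes");
    };
};
'>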

DRIVER


After the valid input stimuli are generated as described in the generator section, it is time to inject them into the DUT. The driver object takes one stimulus item at a time and injects it into the DUT, until all the stimuli have been injected.

The driver gets the high-level data, for example a complete USB packet, and converts it into low-level data, which is usually a list of bits or bytes. In other words, the driver is responsible for implementing the physical-level protocol: send the SETUP token and the DATA packet that contains the information about how to configure the USB device, wait for some clock cycles as per the spec, and receive the acknowledge back from the USB device. There is a construct in e, pack(), that lets you convert the high-level data (data objects) into the low-level data of bits and bytes.
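
Here is a minimal driver sketch. The unit name, the signal paths ('top.clk', 'top.din') and the bit-per-cycle serial protocol are all assumptions made for illustration:

<'
unit usb_driver {
    event clk is rise('top.clk') @sim;    // clock event tied to a DUT signal

    // drive() is a time-consuming method (TCM) synchronized to clk
    drive(p : packet) @clk is {
        // pack() flattens the packet's physical fields into a list of bits
        var bits : list of bit = pack(packing.low, p);
        for each (b) in bits {
            'top.din' = b;                // drive one bit per clock cycle
            wait cycle;
        };
    };
};
'>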

COLLECTOR


The collector sits on the output side of your DUT. Its main function is to collect the low-level data of bits and bytes from the DUT and convert it into high-level data. Again, there is an e construct, unpack(), that lets you convert the low-level data back into high-level data.
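
A matching collector sketch, under the same illustrative assumptions as the driver above:

<'
unit usb_collector {
    event clk is rise('top.clk') @sim;    // 'top.clk' is a hypothetical signal

    collect(bits : list of bit) : packet is {
        var p : packet = new;
        // unpack() is the mirror of pack(): it fills the physical
        // fields of p from the raw bit stream
        unpack(packing.low, bits, p);
        result = p;
    };
};
'>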

CHECKERS

There are two aspects to checking in e: temporal checks and data checks. We will look at each in turn.

Temporal check

These checkers monitor the DUT's input and output interfaces and make sure that they do not violate the specified input/output protocol. For example, if the host sends the SETUP and DATA packets to a USB device, it expects an ACK from the device within a number of clock cycles, as specified in the USB 2.0 spec. If the ACK does not come from the USB device, the temporal check displays an error message to the user. These checkers are concerned with the timing of the interfaces, not the contents of the data. The data of a packet might be all wrong, but as long as it is sent according to the interface spec, the temporal checker is satisfied.
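
In e, such checks are written with temporal expressions. A minimal sketch, extending the hypothetical collector from above; the event names and the 16-cycle limit are invented for illustration and are not from the USB spec:

<'
extend usb_collector {
    event setup_sent;        // emitted when SETUP + DATA go out
    event ack_received;      // emitted when an ACK comes back

    // after setup_sent, an ack_received must follow within 16 clocks
    expect @setup_sent => {[..16]; @ack_received} @clk
        else dut_error("no ACK received after SETUP");
};
'>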

Data check

These checkers verify that the actual data sent by the DUT is correct. For example, they should check that the data read from a specified register is as expected, or that the fields of a packet (destination address, endpoint number, CRC) have all been assigned the right values.

Temporal checkers use temporal expressions to monitor interfaces. Data checkers use deep_compare() and deep_compare_physical() to compare expected data against actual data. Temporal checkers are usually placed in drivers, collectors, BFMs (Bus Functional Models: non-synthesizable models) or agents; in short, in all those elements that inject data, collect data and monitor physical interfaces.
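
A note on how deep_compare() reports its result, shown as a small sketch (the method name and messages are illustrative):

<'
extend sys {
    check_packet(expected : packet, actual : packet) is {
        // deep_compare() returns a list of difference descriptions;
        // an empty list means the two structs match (up to 10 diffs reported)
        var diffs : list of string = deep_compare(expected, actual, 10);
        if not diffs.is_empty() then {
            dut_error("packet mismatch: ", diffs[0]);
        };
    };
};
'>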

SCOREBOARD

The scoreboard gets the DUT's logical input data from a data generator. Once it knows the logical data that went into the DUT, it calculates the expected logical output data. In many cases, for example when the DUT's response depends not only on the applied stimulus but also on internal state, predicting the exact output data is not an easy task. In such cases the scoreboard needs to model the DUT's behavior.

The first thing to consider when writing a scoreboard is where to take the input data from. The input data can usually be taken either directly from the generator or from a monitor located inside the agent.

Choosing between these two options depends on your answers to the following two questions:

  1. Is it guaranteed that your scoreboard will always be able to take data from the generator?
  2. How complex are the data structures that your scoreboard has to check?

As a rule of thumb: if you are doing top-level simulation and your stimulus data is complex, consider taking the data from the generator; if your data is simple to reconstruct, take it from the monitor.
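
A minimal scoreboard sketch under the assumptions above; the unit and method names are invented, and the expected data is fed directly from the generator side:

<'
unit scoreboard_u {
    expected : list of packet;   // queue of packets the DUT should emit

    // called from the generator (or an input monitor) per injected packet
    add_expected(p : packet) is {
        expected.add(p);
    };

    // called from the collector for every packet the DUT actually sent
    check_actual(p : packet) is {
        if expected.is_empty() then {
            dut_error("unexpected packet from DUT");
        } else {
            var exp : packet = expected.pop0();
            if not deep_compare(exp, p, 10).is_empty() then {
                dut_error("scoreboard mismatch");
            };
        };
    };
};
'>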

COVERAGE

Coverage is a method of measuring how much of the design the tests have covered. Functional coverage measures the functionality of the DUT that has been exercised. There are two main types of functional coverage. Input coverage checks whether the generator has generated all the types of data that you expect it to generate; for example, in a USB verification environment, the input coverage checks whether all the USB packet types have been generated. Output coverage tells you whether you have tested all the parts of your design that you wanted to test. In other words, when the output coverage is 100%, you can send your design to the ASIC vendor.
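
In e, coverage is defined with cover groups tied to events. A minimal sketch for the hypothetical packet from above; the event, the size buckets and their names are all invented for illustration:

<'
extend packet {
    event pkt_done;    // emit this when a packet is generated or collected

    cover pkt_done is {
        item kind;                               // did we see every kind?
        item size : uint = data.size() using ranges = {
            range([0..63], "short");
            range([64..1024], "long");
        };
        cross kind, size;                        // every kind at every size
    };
};
'>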

I think this is the right time to introduce the difference between HDL-based verification and automated TB verification.

DIRECTED VS RANDOM TEST

The traditional strategy for finding bugs in an HDL-based verification methodology is to write a Verilog TB that takes "directed tests" and runs them on the DUT to verify the functionality of a particular portion of the design. That will find only the bugs related to those directed tests. In that case, to reach 100% functional coverage, you need to study the functional spec line by line and write a test for every line you read in the spec. That is tedious work for a VE. TBs written in an HVL such as e or SystemVerilog, by contrast, usually run random tests.

When you start building a directed TB, you first have to think hard about the weakest points in your design. Then you have to concentrate on the constraints that will exercise those portions of the design. But to write the constraints, you must be very familiar with the DUT: you should know in advance exactly which inputs might make the DUT go into specific states. This means that normally the designer writes both the design and the directed TB for it. However, if a certain corner-case bug did not occur to the designer while he was writing the code, he will not think of it when he is checking the code either.

Random verification is meant to overcome the problems just presented. Usually it means that you just provide constraints on the inputs; within these constraints, values are selected randomly by the software. Verilog supports random generation of values, but this is not quite enough. There are plenty of times when you would like to limit the values to a certain range, or to create a dependency between the values that you allow for one input and the values that you allow for another. If you are randomly generating a USB packet, you definitely want the values of some fields, and even their length in bits, to depend on the values assigned to other fields. For example, in USB, if the device speed is high speed (HS), the maximum packet size should be 1024 bytes; if a full-speed (FS) device is selected, the maximum packet size is 64 bytes. Trying to do this with the limited support of Verilog is far more complex.
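
In e, this kind of dependency is a one-line implication constraint. A sketch, again extending the hypothetical packet (the usb_speed type and the speed field are invented for illustration):

<'
type usb_speed : [FS, HS];

extend packet {
    speed : usb_speed;    // generation knob, not a physical field

    // the legal payload size depends on the device speed
    keep speed == HS => data.size() <= 1024;
    keep speed == FS => data.size() <= 64;
};
'>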

Once your inputs are constrained properly, you are ready to run against the DUT. You run the random verification for a long time in the hope that it will find interesting bugs. But if your TB is complex and you have a lot of input combinations, your random verification will of course not generate all the interesting combinations of inputs. In that case, coverage is useful for checking whether the TB has taken the design through the problematic states that you have thought of.

HOW DOES SPECMAN-E WORK?

Verilog, which is supported by almost every simulator on the market, is too limited to support all the capabilities that random verification requires. Specman-e, however, is more than a sophisticated C/C++ library that implements random generation and coverage collection. The e language also has two features that C/C++ lack, but that are a basic requirement for every verification language.

The first of these is simulation time. A verification environment must be able to understand simulator time and wait for simulator events, such as changes in the values of nets. For example, it must be able to start a specific process, such as packet generation or packet injection into the DUT, when certain signals get a certain value. Specman-e must therefore be able to translate simulator events (such as changes in signals) into system events that a C/C++ program can understand. Of course, you might build your own C code that does exactly the same (SystemC has a layer that does just that), but this would take a lot of effort.
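
In e, this binding is built into the event syntax. A minimal sketch; the signal path 'top.rst' and the method name are invented for illustration:

<'
extend sys {
    // an e event bound directly to a simulator signal transition
    event rst_done is fall('top.rst') @sim;

    start_traffic() @sys.any is {
        wait @rst_done;    // block until the simulator deasserts reset
        out("reset released - starting packet injection");
    };
};
'>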

The second is "garbage collection". As every beginning C/C++ programmer knows, in C/C++ its in the programmer's responsibility to free every piece of memory that the system allocates to his program. Each time that you create an object you must ask the system for some storage memory for that object and later, when you finish using the object, return the allocated memory to the system. If you don't, you will have "memory leaks" and your memory will slowly run out because of objects that are no longer used by your program, but still take a place in memory. Sooner or later, depending on the amount of objects that you fail to recycle, these memory leaks will make your program crash, because the system will refuse its memory requests. In a random verification environment that creates new objects at a very high rate, such memory leaks might pose a more serious problem than in other cases. A "garbage collection" mechanism prevents such leaks by keeping track of all the memory that is allocated to your program by the system, and automatically releasing that memory when your program no longer uses it. e, like other "safe" languages such as Java, has such a mechanism. This is also the reason why you will never be able to retrieve the actual memory location of any data object in an e program, but you will be able to do so in C/C++.

Specman Elite and the simulator are two separate processes that run concurrently and synchronize with each other during the simulation. They talk to each other through the "stubs" file.

Both Specman and the simulator are invoked at t=0 simulation time. Initially Specman Elite has control; it passes control to the simulator and the simulation starts. The simulator continues to run until it encounters a callback set by the VE. Control then passes back to Specman, which does all the necessary computation and passes control back to the simulator. Once the simulator gets control, it resumes the simulation where it previously left off, so no simulation time has elapsed. Simulation continues back and forth between the simulator and Specman until stop() is encountered on the Specman side.

REFERENCE

"Design Verification with e" by Samir Palnitkar.

http://www.specman-verification.com/
