Note: This section was written by Avidan Efody (avidan_e@yahoo.com). You are more than welcome to send comments, complaints, corrections or tempting work offers to that address.
Introduction
About this document
This document has three target audiences. Those who are not at all familiar with Specman should find in the first chapter a brief, objective account of its main working principles and a discussion of its pros and cons. The second and third chapters are aimed at beginners in the E language and describe some of its most salient characteristics, along with examples. I hope that readers of these chapters will gain some knowledge both of low-level syntax, through the examples and comments, and of the high-level ideas that stand behind them. The last chapter deals with the verification methodology associated with Specman and E, and can benefit both beginners and experienced users.
Though it will soon become clear to the reader, I must emphasize that this document was not written with the knowledge of, or on behalf of, Verisity, and that none of the information, examples and tips you will find herein has been verified or approved in any way by Verisity.
At A Quick Glance
Directed verification vs. Random verification
Specman is a development environment for the E language, somewhat as MFC or Borland C++ are development environments for C++, the main difference being that E code can never execute on its own, without Specman. Since, for the moment at least (until E becomes an industry standard), working with Specman means writing E and vice versa, the term Specman is used in this chapter to refer to both.
As an automated verification tool, the purpose of Specman is to help you find bugs in a Verilog or VHDL design. The simplest way to find bugs in a Verilog or VHDL design is with a Verilog or VHDL testbench. Verilog or VHDL testbenches are usually called directed testbenches, while a Specman testbench is called a random testbench. You must bear in mind that random is not a synonym for Specman or E - there are other tools (VERA, for example) and a lot of other ways, besides Specman, to build a random testbench. I will now briefly explain the difference between directed and random testbenches.
When you build a directed testbench you first have to think a lot about the places in your design where bugs might hide - the weakest points in your design. Once you have a list of these, you assign values to the inputs in order to check your design at these specific points. For example, if you have a counter, you might want to check the behavior of your design when this counter is zero, when it reaches its maximum value or when it wraps around, so you have to come up with an input sequence that will drive your counter into each of these states. Therefore, a directed testbench is normally made up of several separate sequences of input values, or tests, each of which is supposed to drive your design into a specific state that you consider problematic.
The problem here is of course that you have to think of most of the problematic points by yourself. There might be many places where bugs are hiding that you simply haven't thought of. Also, in order to know where the problematic points are, you usually have to be quite familiar with the design; it takes a very good engineer with a lot of experience to find the weak points in a design made by someone else. This means that, normally, designers write both the design and its directed testbench. However, if a certain conceptual bug did not occur to the designer while he was writing the code, he is not likely to think about it when he is checking the code either. Ideally, of course, somebody else would check it.
Random verification is meant, first and foremost, to overcome the problems just presented. Usually it means that you just provide constraints, or certain limits, on the inputs. Within these limits, values are selected randomly by the software. Verilog, and to a somewhat lesser extent VHDL, both support random generation of values - simply put, some sort of equivalent of rand() in C. However, this is not quite enough. There are plenty of times when you would like to limit the values to a certain range, or to create a dependency between the values that you allow for one input and the values that you allow for another. If you are randomly creating (or generating) an Ethernet packet, you definitely want the values of some fields, and even their length in bits, to depend on the values assigned to other fields. Trying to do this with the limited support of Verilog or VHDL is more or less like banging your head against a concrete wall.
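To give a first taste of what this looks like in practice, here is a minimal sketch in E (not taken from any real Ethernet model; the struct and field names are invented for illustration) of a packet whose length and payload are constrained to depend on another randomly chosen field:

    <'
    struct packet {
        kind    : [SHORT, LONG];        -- enumerated field, chosen at random
        len     : uint;
        payload : list of byte;

        -- "keep" constraints limit the random choices and tie fields together
        keep kind == SHORT => len in [1..64];
        keep kind == LONG  => len in [65..1500];
        keep payload.size() == len;
    };
    '>

Every time Specman generates a packet it picks values that satisfy all the constraints at once - exactly the kind of cross-field dependency that is so painful to express in plain Verilog or VHDL.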
Now, once your inputs are free to move at random within certain limits, you let your random verification testbench run for a long time (from several nights to months and even years) in the hope that it will find interesting bugs. Of course, if you have a lot of inputs and your design is very complicated, your random verification might never produce all the possible sequences. This, however, is not a problem - you never wait for all the possibilities to be exhausted before you call it a day. Instead, you stop running your testbench when the intervals between interesting bugs become too long, since this means either that your design is more or less clean (hopefully) or that your random testbench is not doing its work properly. In either case it would be better to dedicate your computing resources, which are usually limited, to another purpose.
It is important to note that random testing does not mean the designer no longer has to think hard about the most problematic points in his design, only that you don't have to count on him as much as before. The problematic points can now be used as test cases for the random testbench: you should check that your random testbench made the design go into all the problematic states that you would otherwise have targeted with a directed testbench. You do this with coverage which, despite some significant developments (and very good public relations), is still essentially like placing a breakpoint on a complicated line in your VHDL or Verilog design and checking that it works properly. Another option is for each designer to write a small directed testbench for his or her block in Verilog or VHDL. This might save you some money on Specman licenses, at the price of depriving you of your simple test cases.
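In E, such coverage is written as a coverage group tied to an event. The following sketch reuses the hypothetical packet struct from above and assumes the testbench emits an event called pkt_done whenever a packet has been sent - both names are made up for illustration:

    <'
    extend packet {
        event pkt_done;   -- assumed to be emitted by the testbench after each packet

        cover pkt_done is {
            item kind;
            item len using ranges = {
                range([1..64],    "short");
                range([65..1500], "long")
            };
            cross kind, len;
        };
    };
    '>

At the end of a run, Specman reports which kind/len combinations were actually produced, so you can see whether the random testbench ever reached the corner cases you would otherwise have targeted with directed tests.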
How does Specman work?
The idea on which Specman is based is quite simple. Both Verilog and VHDL, in somewhat different ways, allow the user to call external functions (also called callbacks). In addition, almost every simulator on the market provides standard C interfaces (the Verilog PLI, for example) that enable external applications to perform all kinds of operations on its data structures. You can, for example, assign values to signals, run the simulation, stop it or find nets in the design, all from an external application. In this way it is possible, say, to call an external callback from a Verilog design in the middle of a ModelSim simulation, and have that callback find the names of all the signals in the design that begin with the letter A and print them to the simulator console. I once did that when I was extremely bored.
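You never see that C interface directly when working with Specman, but it is what makes the following kind of E code possible. This is only a rough sketch; the signal names top.clk and top.reset are invented placeholders for whatever your design actually has:

    <'
    extend sys {
        event clk_rise is rise('top.clk') @sim;   -- event tied to an HDL signal edge

        drive_reset() @clk_rise is {    -- time-consuming method, synchronised to the DUT clock
            'top.reset' = 1;            -- drive an HDL signal from E
            wait [5] * cycle;           -- hold it for five clock cycles
            'top.reset' = 0;
        };

        run() is also {
            start drive_reset();        -- kick the method off when the test starts
        };
    };
    '>

Behind the scenes, every one of those signal accesses goes through the simulator's C interface described above.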
As mentioned above, Verilog and VHDL, though supported by almost every simulator on the market, are too limited to support all the capabilities that random verification requires. So instead of writing a Verilog or VHDL testbench and compiling it inside the simulator, we can write complicated callbacks in C or C++ and have all the flexibility and arithmetic libraries we need. In this way, for example, we can implement much more sophisticated random generation than Verilog or VHDL allow. This is, in fact, more or less what Specman does.
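As a rough illustration of what "more sophisticated random generation" means in practice, here is how a packet of the hypothetical struct sketched earlier could be generated on the fly in E, with extra constraints added at the point of generation (the method name is made up):

    <'
    extend sys {
        make_a_long_packet() is {
            var p : packet;
            gen p keeping {              -- generate p, adding call-site constraints
                .kind == LONG;
                .len in [100..200];      -- on top of the constraints in the struct itself
            };
            print p;
        };
    };
    '>

The constraints given here are combined with the ones declared inside the struct, and Specman solves them all together.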
The only purpose of this complicated explanation is to make you ask the following question: if all of Specman is based on a C/C++ interface, why is it that the designers of Specman chose to invent this new language, called E, and then sell us all an integrated environment that includes a special debugger, optimizers, libraries and so forth? Why didn't they just write some C/C++ libraries that could provide exactly the same abilities as this new language? Had they done that, all we would have to buy from Verisity would be these libraries. All the rest of the development environment (debuggers, optimizers, whatever) could come from Borland, Microsoft or whoever else sells an integrated C/C++ development environment. By the way, it is good to know that there are a lot of companies that have exactly such libraries, for their own private use or for sale.
Of course, Verisity might fill a book with the reasons for the invention of E: elegance, the English-like structure of the statements, or all the new abilities they added that would have been too complicated to provide through libraries. One must admit that there is some truth to that, but personally I think that it was, more than anything else, an ingenious marketing decision. You just can't sell a few C/C++ libraries for the prices they demand for their integrated environment (namely Specman). So they complicated the market a bit in order to have a justification for those prices. As a by-product they got all kinds of other advantages, not the least of which is the big money they pocket for the courses they give in this new language, which is now almost a recognized standard.
Specman pros and cons
The cons of Specman are obvious: it costs a lot of money to train your staff and to buy the licenses. If you are training your electrical engineers, who normally don't possess a lot of programming experience, to do the Specman verification, expect the learning curves to be about half as steep as those shown in Verisity's presentations. Generally speaking, C/C++ programmers will learn a lot faster, but this requires, of course, separate design and verification teams, which is something that not all companies, especially the smaller ones, can afford. There are other cons too: the environment contains a lot of bugs, and it is my impression that Verisity's programmers, just like any programmers in the world, are keener on inventing all kinds of complicated new features than on fixing the bugs in the old ones. In my dealings with Verisity's technical support I have found it to be neither quick nor very helpful, but that is of course my own personal experience, and I believe that people who work for larger companies might tell you a different story.
The most important pro one can find for Specman is its competitors, which are usually a lot worse: other tools are far more complicated or contain a lot of bugs (although VERA is closing fast). And, after all, you can't say that you don't get at least some of your money's worth. Specman is sometimes very annoying, but after a while, having gained some insights that I'll share with you soon, the work can become reasonable and sometimes even rewarding. Also, one has to admit that there are some genuinely cool parts, such as the possibility to extend structs and methods (a small example follows below), and if you haven't worked with random generation before, you will probably be amazed by the number of bugs you can find with quite a simple testbench.
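To give an idea of what extending structs and methods means, here is a small sketch, once more based on the hypothetical packet struct used earlier. A later file can add fields, constraints and extra method behaviour without touching the original code:

    <'
    extend packet {
        crc : uint;
        keep soft crc == 0;     -- soft constraint: a default that other constraints may override

        show() is {
            out("kind=", kind, " len=", len);
        };
    };

    -- and yet another file can append to the same method later on:
    extend packet {
        show() is also {
            out("crc=", crc);
        };
    };
    '>

This layered style is what lets a test file tweak an existing environment simply by loading one more extension on top of it.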
The conclusion is: before you buy Specman, have a good look around. Some company might provide the most reasonable solution - good C/C++ libraries - quite soon. Don't let Specman salesmen seduce you with the other features Specman has, since its most important and effective part is its random generation. The other parts are, in my opinion, mostly nice to have.