What are the best practices for hardware description languages (Verilog, VHDL, etc.)?

What are the best practices to follow when implementing HDL code?

What are the commonalities and differences compared to mainstream software development?

+58
vhdl verilog hdl
Nov 28 '08 at 23:17
6 answers

The best book on this is the Reuse Methodology Manual. It covers both VHDL and Verilog.

And in particular, some issues that have no exact counterpart in software:

  • No latches
  • Be careful with your resets
  • Check your internal and external timing
  • Use only synthesizable code
  • Register the outputs of all modules
  • Be careful with blocking and non-blocking assignments
  • Be careful with sensitivity lists for combinational logic (or use @(*) in Verilog)
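A minimal Verilog sketch of the last two points (module and signal names are hypothetical): @(*) from Verilog-2001 infers the full sensitivity list automatically, and a default assignment at the top of the block prevents an inferred latch on any path that forgets to drive the output. Blocking assignments (=) are used here because the block is combinational.

```verilog
// Hypothetical combinational decoder illustrating @(*) and
// latch avoidance via a default assignment.
module decode (
    input  wire [1:0] sel,
    input  wire       en,
    output reg  [3:0] onehot
);
    always @(*) begin
        onehot = 4'b0000;           // default: no latch inferred
        if (en)
            case (sel)
                2'd0: onehot = 4'b0001;
                2'd1: onehot = 4'b0010;
                2'd2: onehot = 4'b0100;
                2'd3: onehot = 4'b1000;
            endcase
    end
endmodule
```

Without the default assignment, the `en == 0` path would leave `onehot` undriven and most synthesis tools would infer a latch.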

Some are the same as in software:

  • Use revision control
  • Do code reviews
  • Test (simulate) your code
  • Reuse code
  • Keep an up-to-date schedule
  • Have a specification, or use cases, or an agile customer
+37
Dec 03 '08 at 16:22

Sorta an old thread, but I wanted to put in my $0.02. This is not entirely specific to Verilog/VHDL... it's more about hardware design in general, especially synthesizable design for custom ASICs.

These are my opinions, based on many years of industry (as opposed to academic) design experience. They are in no particular order.

My umbrella statement is: design for validation. In hardware design, validation is paramount. Bugs are much more expensive when found in real silicon; you cannot just recompile. Therefore pre-silicon gets far more attention.

  • Know the difference between control paths and datapaths. This lets you write much more elegant and maintainable code. It also lets you save gates and minimize X propagation. For example, datapath flops should never need a reset; control-path flops always should.
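A minimal Verilog sketch of this split (module and port names are hypothetical): the control-path flop gets a reset so the machine wakes up in a known state, while the datapath flop does not, saving reset routing; any X in the datapath is harmless until the valid bit qualifies it.

```verilog
// Assumed style: reset the control path, not the datapath.
module ctrl_vs_data (
    input  wire        clk,
    input  wire        rst_n,
    input  wire        in_valid,
    input  wire [31:0] in_data,
    output reg         out_valid,
    output reg  [31:0] out_data
);
    // Control path: reset required so 'out_valid' is known at startup
    always @(posedge clk or negedge rst_n)
        if (!rst_n) out_valid <= 1'b0;
        else        out_valid <= in_valid;

    // Datapath: no reset; contents are only meaningful when out_valid is set
    always @(posedge clk)
        if (in_valid) out_data <= in_data;
endmodule
```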

  • Prove functionality before testing, either through a formal approach or through waveforms. This has many advantages; I will explain two. First, it will save you wasted time peeling the onion on problems. Unlike many application-level projects (especially during school) and most coursework, the turnaround time for a code change is very long (from ten minutes to days, depending on complexity). Each time you change the code you must go through design, validation, compilation, waveform generation and, finally, the actual simulation. This can take hours. Second, you are much less likely to miss corner cases. Note that this applies to pre-silicon testing; corner cases will surely be hit in post-silicon, where they cost $$$. Believe me, the up-front cost of proving functionality greatly minimizes risk and is well worth the effort. This is sometimes hard to convince recent college grads of.

  • Have chicken bits. A chicken bit is a bit in MMIO that the driver sets to disable a feature in silicon. It is intended for backing out changes in which confidence is low (confidence being directly proportional to validation effort). It is impossible to hit all possible states in pre-silicon; confidence in your design cannot be achieved until it is proven in post-silicon. Even if there is only one state, hit 0.000005% of the time, that triggers a bug, it WILL be hit in post-silicon, but not necessarily in pre-silicon.
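A hedged sketch of the idea: the chicken bit would live in an MMIO register the driver can clear, muxing the block back to the old, well-validated behavior. All names here are hypothetical.

```verilog
// Hypothetical chicken-bit mux: when the driver clears
// 'new_feature_en' (an assumed MMIO register bit), the block
// falls back to the legacy result path.
module result_sel (
    input  wire       new_feature_en,  // chicken bit from MMIO
    input  wire [7:0] old_result,      // legacy, well-validated logic
    input  wire [7:0] new_result,      // new, lower-confidence logic
    output wire [7:0] result
);
    assign result = new_feature_en ? new_result : old_result;
endmodule
```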

  • Avoid control exceptions at all costs. Each new exception doubles your validation effort. This is hard to explain, so an example: suppose there is a DMA block that stores data to memory for another block to consume, and the stored data structure depends on the function being performed. If you design it so that the stored data structure differs between functions, you have just multiplied your validation effort by the number of DMA functions. If this rule is respected instead, the stored data structure becomes a superset of the data for every function, with the field positions hard-coded. Once the DMA store logic is validated for one function, it is validated for all functions.

  • Minimize interfaces (read: control paths). This is related to minimizing exceptions. First, each new interface requires validation: new checkers/trackers, assertions, coverage points, and functional bus models in your testbench. Second, it can increase your validation effort exponentially! Say you have one interface for reading data from cache. Now say (for some odd reason) you decide you want a different interface for reading main memory. You have just quadrupled your validation effort. You now need to check these combinations at any given time:

    • no cache read, no memory read
    • cache read, no memory read
    • no cache read, memory read
    • cache read, memory read
  • Understand and communicate assumptions. Failure to do so is the primary cause of block-to-block communication problems. You may have a perfect, fully validated unit; without understanding all the assumptions, though, your unit will fail when it is integrated.

  • Minimize state. The fewer states (intended or unintended) a design has, the less effort is required to validate it. It is good practice to group similar functions into one top-level function (e.g. sequencers and arbiters). It is very difficult to identify and define this high-level function so that it covers as many smaller functions as possible, but in doing so you vastly reduce state and, in turn, the potential for bugs.

  • Always drive registered (flopped) signals out of your unit. In most cases this is the right solution: you have no idea what the receiving block(s) will do with them, and you may run into timing issues that directly affect your otherwise-ideal implementation.

  • Avoid Mealy-type FSMs unless performance would otherwise suffer. Mealy FSMs are more likely to cause timing problems than Moore FSMs.
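A minimal Moore FSM sketch (names are illustrative): the output is a function of state only, so it changes only on the clock edge; a Mealy machine's outputs would also depend combinationally on the inputs, adding input-to-output paths that are harder to time.

```verilog
// Moore FSM: 'ack' depends only on the registered state.
module moore_handshake (
    input  wire clk,
    input  wire rst_n,
    input  wire req,
    output reg  ack              // Moore output: function of state only
);
    localparam IDLE = 1'b0, BUSY = 1'b1;
    reg state;

    always @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else        state <= req ? BUSY : IDLE;

    always @(*) ack = (state == BUSY);
endmodule
```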

  • ... and finally, the one I dislike the most: "if it ain't broke, don't fix it". Because of the risk and high cost of bugs, hacking in a workaround is many times the more practical solution. Others have touched on this by mentioning the reuse of existing components.

Regarding comparisons with more traditional software design:

  • Discrete-event programming is a completely different paradigm. People see the Verilog syntax and think "oh, it's just like C"... but that couldn't be further from the truth. Although the syntax is similar, you must think differently. For example, a traditional debugger has little meaning in synthesizable RTL (testbench design is another matter); the best tool there is waveforms on paper. That said, FSM design can sometimes mimic procedural programming. People with a software background tend to go crazy with FSMs (I know I did at first).

  • Verilog has many, many (and many) testbench-specific features, and is fully object-oriented on that side. As far as testbench design goes, it is very similar to traditional software design, but it has an extra dimension: time. Race conditions and protocol delays must be accounted for.

  • As for validation, it too is different (and the same). There are three main approaches:

    • Formal Property Verification (FPV): you prove through logic that the design will always work.
    • Directed random testing: delays, input values, and feature enables are set randomly as determined by a seed. "Directed" means the seed puts weight on paths with less confidence. This approach uses coverage points to indicate the health of the test.
    • Focused testing: this is similar to traditional software testing.

... for completeness I should also discuss testbench best practices... but that's for another day.

Sorry for the length .. I was in the "Zone" :)

+51
Mar 13

HDLs like Verilog and VHDL really seem to encourage spaghetti code. Most modules consist of several "always" (Verilog) or "process" (VHDL) blocks that can appear in any order. The overall algorithm or function of the module is often totally obscured. Figuring out how the code works (if you didn't write it) is a painful process.

A few years ago I came across this article, which outlines a more structured VHDL design method. The basic idea is that each module has only two process blocks: one for combinational code and one for synchronous code (the registers). It is great for producing readable, maintainable code.
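The article describes the method in VHDL, but the same two-block discipline carries over to Verilog; a minimal sketch (names are illustrative): one combinational block holds all the "algorithm", one clocked block does nothing but register the result.

```verilog
// Two-block style: all logic in one combinational block,
// all registers in one clocked block.
module counter2p (
    input  wire       clk,
    input  wire       rst_n,
    input  wire       inc,
    output reg  [7:0] count
);
    reg [7:0] count_nxt;

    // Block 1: combinational -- the whole algorithm lives here
    always @(*) begin
        count_nxt = count;
        if (inc) count_nxt = count + 8'd1;
    end

    // Block 2: registers only
    always @(posedge clk or negedge rst_n)
        if (!rst_n) count <= 8'd0;
        else        count <= count_nxt;
endmodule
```

The payoff is that a reader can follow the algorithm top-to-bottom in the combinational block instead of reassembling it from scattered always blocks.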

+22
Apr 29 '09 at 22:26
  • In HDL, parts of the code run concurrently; e.g. two lines of code can "run" at the same time. This is an advantage to be used wisely, and something a programmer used to sequential languages can find hard to grasp:

    • Long, specialized pipelines can be created.
    • You can make parts of your large modules operate concurrently.
    • Instead of performing a repeated action on different data, you can create several units and do the work in parallel.
  • Special attention should be paid to the boot process: once your chip is up and running, you have come a long way.
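A sketch of the "several units in parallel" point (module names are illustrative): instead of one unit processing four items sequentially, a generate loop instantiates four copies that each handle one lane in the same clock cycle.

```verilog
// One small processing unit...
module add_one (
    input  wire [7:0] a,
    output wire [7:0] y
);
    assign y = a + 8'd1;
endmodule

// ...replicated four times to process four lanes at once.
module parallel_lanes (
    input  wire [31:0] in_bus,   // four packed 8-bit lanes
    output wire [31:0] out_bus
);
    genvar i;
    generate
        for (i = 0; i < 4; i = i + 1) begin : lane
            add_one u (.a(in_bus[8*i +: 8]), .y(out_bus[8*i +: 8]));
        end
    endgenerate
endmodule
```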

Debugging in hardware is usually much harder than debugging software, so:

  • Simple code is preferable; there are often other ways to speed up your code after it is already working, e.g. using a faster speed-grade chip, etc.

  • Avoid smart protocols between components.

  • Working HDL code is more valuable than in other kinds of software, since hardware is so hard to debug; so reuse it, and also consider using "libraries" of modules, some of which are free and others sold.

  • Design should account not only for bugs in the HDL code but also for faults in the chip you are programming and in the other hardware devices that interact with the chip, so you really should think of a design that is easy to verify.

Some debugging tips:

  • If a design includes several building blocks, it is probably wise to route lines from the interfaces between those blocks to test points outside the chip.

  • You will want to reserve enough lines in your design to divert interesting data for checking with external devices. You can also use these lines, and your code, to report the current state of execution; for example, if you receive data at some point you write one value to the lines, and at a later stage of execution you write another value, etc.

    If your chip is reconfigurable this becomes even handier, since you can tailor specific tests and reprogram the outputs for each test as you go (it looks really good with LEDs :)).
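A hedged sketch of such status lines (all signal names are hypothetical): a couple of spare pins are driven with an execution-phase code, so external equipment or LEDs show how far the design got.

```verilog
// Drive spare pins with a phase code for external observation.
module debug_phase (
    input  wire       clk,
    input  wire       rst_n,
    input  wire       got_header,   // example milestones (assumed)
    input  wire       got_payload,
    input  wire       done,
    output reg  [1:0] dbg_pins      // route to test points / LEDs
);
    always @(posedge clk or negedge rst_n)
        if (!rst_n)           dbg_pins <= 2'b00;
        else if (done)        dbg_pins <= 2'b11;
        else if (got_payload) dbg_pins <= 2'b10;
        else if (got_header)  dbg_pins <= 2'b01;
endmodule
```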

Edit:

By smart protocols, I meant that if two of your physical units are connected, they should exchange data using the simplest communication protocol possible; i.e. do not use any sophisticated home-made protocols between them.

The reason: fixing bugs "inside" the FPGA/ASIC is fairly simple, since you have simulators. So if you are sure the data arrives the way you want and goes out when your program sends it, you have reached hardware utopia: the ability to work at the software level :) (using the simulator). But if your data does not arrive the way you want and you need to find out why... you have to probe the lines, and that is not so simple.

Finding bugs on the lines is difficult, because you must probe them with special equipment that records their state over time, and you also need to verify that the lines behave according to the protocol.

If you need to connect two of your physical units, make the "protocol" as simple as possible, to the point where it would hardly be called a protocol :). For example, if the units share a clock, add x data lines between them, and make one block write them and the other block read them, thus transferring one "word" of x bits on each clock edge. If you have an FPGA and the original clock rate is too fast for parallel data, you can adjust the rate according to your experiments, e.g. so the data stays on the lines for at least 't' clock cycles, etc. I assume parallel data transfer is simpler, since you can work at lower clock rates and get the same throughput, without having to split your words on one unit and reassemble them on the other (hopefully there is no skew between the clocks). Even this is probably too complicated :)
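A sketch of that simplest-possible transfer, under the stated shared-clock assumption (module and signal names are illustrative): the sender drives eight data lines plus a valid strobe; the receiver registers the word whenever valid is high. No framing, no handshake; barely a protocol at all.

```verilog
// Sender: puts a word and a valid strobe on the shared lines.
module simple_tx (
    input  wire       clk,      // clock shared by both units
    input  wire       send,
    input  wire [7:0] word,
    output reg        valid,
    output reg  [7:0] data
);
    always @(posedge clk) begin
        valid <= send;
        data  <= word;
    end
endmodule

// Receiver: captures the word whenever valid is asserted.
module simple_rx (
    input  wire       clk,
    input  wire       valid,
    input  wire [7:0] data,
    output reg  [7:0] word
);
    always @(posedge clk)
        if (valid) word <= data;
endmodule
```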

Regarding SPI, I2C, etc.: I have not implemented any of them. I can say that I have connected the pins of two FPGAs running from the same clock (I don't remember the exact resistor arrangement in between) at much higher speeds, so I really can't think of good reasons to use them as the main way to transfer data between your FPGAs, unless the FPGAs are very far apart, which is one reason to prefer a serial over a parallel bus.

JTAG is used by some FPGA companies to test/program their products, but I'm not sure it is used as a way to transport data at high speed, and it is a protocol... (albeit one that may have built-in chip support).

If you need to implement any well-known protocol, consider using pre-made HDL code for it, which you can find or buy.

+6
May 18 '09 at 11:41

This calls for JBDAVID's 10 commandments of hardware design:

  • Use revision/version control, just as in software. SVN and Hg are free.
  • Require the code to pass a syntax check before check-in. A LINT tool is even better.
  • Use a full-featured hardware verification language for design verification. SystemVerilog is an almost safe choice.
  • Track bugs. Bugzilla and GNATS are free tools. FogBugz requires a little $.
  • Use assertions to catch misuse.
  • The coverage trifecta makes for a stable design: measure code coverage, functional coverage, and assertion coverage, in both simulation and formal tools.
  • Power is king: use CPF or UPF to capture, propagate, and verify your power intent.
  • Real designs are often mixed-signal. Use a mixed-signal language to verify the analog together with the digital. Verilog-AMS is one such solution. But don't overdo it: real-number modeling can cover most functional aspects of mixed-signal behavior.
  • Use hardware acceleration to test the software that must work with the silicon!
  • Syntax-aware text editors for your HDL/HVL are the minimum IDE requirement for a developer.
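A minimal sketch of the assertions point, in SystemVerilog (property and signal names are hypothetical): a concurrent assertion that flags misuse of the block, here a request raised while it is still busy.

```systemverilog
// Hypothetical SVA checker: 'req' must never be asserted while 'busy'.
module req_checker (
    input logic clk,
    input logic rst_n,
    input logic req,
    input logic busy
);
    property no_req_while_busy;
        @(posedge clk) disable iff (!rst_n)
            busy |-> !req;
    endproperty

    assert property (no_req_while_busy)
        else $error("req asserted while block is busy");
endmodule
```

The same property can be handed to a formal tool or left running in simulation, which is what makes assertions part of the coverage trifecta above.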
+4
Jun 18 '09 at 9:46 a.m.

For FPGAs, Xilinx had an excellent page on this (gone now; new location TBD). Almost all of it applies to other FPGA vendors, or they have equivalent rules. Much of it applies to ASIC design as well.

Altera has Recommended HDL Coding Styles (PDF) and Design Recommendations for Altera Devices and the Quartus II Design Assistant. Altera has another page as well, but it is about performance; I have no further information on how good it is.

+3
Feb 18


