Unit firmware testing

What best practices have you used when unit testing firmware for embedded systems?

+54
unit-testing embedded
Jun 30 '09 at 3:57
10 answers

Firmware may have come a long way in the last 10 years, but we usually did the following:

  • for algorithms that are independent of the target hardware, we simply had unit tests that were built and run on a non-embedded platform.
  • for things that required hardware, unit tests were conditionally compiled into the code to exercise that hardware. In our case, a serial port on the target pushed the results to another, more capable machine, where the test output was checked for correctness.
  • depending on the hardware, you can sometimes fake a “virtual” device on a non-embedded platform. This usually consisted of another thread of execution (or a signal handler) changing the memory used by the program. Useful for memory-mapped I/O, but not for IRQs, etc.
  • usually you could only build a small subset of the complete code into a unit test at a time (due to memory limitations).
  • we did not unit test time-sensitive things. Plain and simple. The hardware we used (8051 and 68302) did not always function correctly if it ran too slowly. That kind of debugging had to be done with a CRO (oscilloscope) at first and, when we had more money, an ICE (in-circuit emulator).

I hope the situation has improved since I last did this. I would not wish that pain on my worst enemy.

+43
Jun 30 '09 at 4:06

Much can be achieved by unit testing in a PC environment (compiling your code with the PC's C compiler and running it in a PC unit testing framework), with a few caveats:

  • This approach does not cover testing your low-level code, including startup code, RAM tests, and hardware drivers. You will have to test those directly on the target.
  • Your embedded system's compiler must be reliable, so you are not hunting for bugs created by the compiler.
  • Your code should have a layered architecture with hardware abstraction. You may need to write hardware driver simulators for your PC unit testing platform.
  • You should always use the stdint.h types such as uint16_t, not plain unsigned int, etc., since integer widths differ between the PC and the embedded target.

We followed these rules and found that after unit testing the application-level code on the PC, we could have good confidence that it worked well.

The benefits of unit testing on a PC platform:

  • You do not run into the problem of insufficient memory on the embedded platform due to adding the unit testing framework.
  • The compile-and-test cycle is usually faster and simpler on the PC platform (and avoids the "download to target" step, which can take several minutes).
  • You have more options for visualizing progress (some embedded applications have limited peripheral I/O), storing input/output data for analysis, and running more labor-intensive tests.
  • You can use readily available PC-based unit test frameworks that are not available/suitable for the embedded platform.
+18
Jun 30 '09 at 12:06

Embedded systems is a broad topic, but in general, think of it as a special-purpose product combining both hardware and software. My embedded background is mobile phones, which is just a small subset of all embedded systems. I will try to address the following points:

  • Remove hardware dependencies whenever possible. That way you can run your unit tests on mocked “hardware”, and also test various rare/exceptional cases that would be harder to reproduce on the target. To avoid the cost of the abstraction, you can use, for example, conditional compilation.

  • Make as little of the code as possible depend on the hardware.

  • Unit tests running in an emulator or a cross-compiled environment still do not guarantee that the code will run on the target hardware. You must also test on the target. Test as early as possible.

+13
Jun 30 '09 at 4:12

You might want to check out Test Driven Development for Embedded C by James W. Grenning. The book is scheduled for publication in August 2010, but a beta book is available now from The Pragmatic Bookshelf.

+11
May 6 '10 at 15:16

The voice of inexperience here, but I’ve been thinking about this lately too. It seems to me that the best approach would be either:

A) Write as much of your device-independent application code as you can in the PC environment before writing it for the target, and write your unit tests at the same time (doing it on the PC first forces you to separate out the device-independent stuff). That way you can use your choice of test runners, and then test the hardware-dependent stuff the old-fashioned way: with RS-232 where possible, and/or oscilloscopes and I/O pins signaling time-dependent data, depending on how fast it has to run.

B) Write everything on the target hardware, but have a make target that conditionally compiles a unit test build, which runs the unit tests and outputs the results (or data that can be analyzed for results) via RS-232 or some other means. If memory is tight, this can be difficult.

Edit 7/3/2009: I just had another thought on how to unit test the hardware-dependent stuff. If your hardware events happen too fast to record via RS-232, but you don’t want to manually sift through tons of oscilloscope data to check whether your I/O pin flags rise and fall as expected, you can use a PC card with integrated DIO (for example, National Instruments’ line of data acquisition cards) to automatically evaluate the timing of those signals. You would then just have to write software on your PC to control the data acquisition card and synchronize it with the currently running unit test.

+6
Jun 30 '09 at 17:23

We manage to test quite a lot of hardware-dependent code using a simulator: we use Keil’s simulator and IDE (not affiliated with them, just using their tools). We write simulator scripts to drive the “hardware” the way we expect it to respond, and we can test our working code fairly reliably. It can take some effort to simulate the hardware for certain tests, but for most things it works very well and lets us do a great deal without any hardware. We managed to get an almost complete system working in the simulator before we had access to the hardware, and had very few problems putting the code onto the real thing afterwards. This can also significantly speed up development, since everything can be done on the PC with the deeper debugger available when simulating the chip, rather than trying to do everything on the hardware.

We have gotten this to work reliably for complex control systems, memory interfaces, custom ICs with SPI interfaces, and even a monochrome display.

+6
Sep 29 '11 at 21:30

There are a lot of good answers here; some things that haven’t been mentioned are adding diagnostic code in order to:

  • log HAL events (interrupts, bus messages, etc.)
  • track your resources (all active semaphores, thread activity)
  • have a capture mechanism that copies heap and memory contents to persistent storage (hard disk or equivalent) to detect and debug deadlocks, livelocks, memory leaks, buffer overflows, etc.
+3
Jul 08 '09 at 0:46

When I ran into this last year, I really wanted to test on the embedded platform itself. I was developing a library, and I was using RTOS calls and other features of the embedded platform. There was nothing specific available, so I adapted the UnitTest++ code to my purposes. I program on the NetBurner family, and since it has a built-in web server, it was pretty straightforward to write a web interface for the test GUI that gives the classic RED/GREEN feedback. It turned out pretty well, and now unit testing is much simpler, and I feel much more confident knowing that the code works on the real hardware.

I even use the unit testing framework to run integration tests. At first I mock/stub out the hardware and implement the interface for testing, but eventually I write some man-in-the-loop tests that exercise the actual hardware. It turns out to be a much simpler way to learn about the hardware, and it gives an easy way to recover from embedded traps. Since all the tests run from AJAX callbacks to the web server, a trap only occurs as the result of manually invoking a test, and the system restarts within a few seconds of the trap.

The NetBurner is fast enough that the write/compile/download/run cycle takes about 30 seconds.

+2
Mar 24 '11 at 23:41

Many embedded processors are available on eval boards, so although you may not have your real I/O devices, you can often execute many of your algorithms and logic in one of these environments, often with hardware debugging available through JTAG. And unit tests are usually more about your logic than your I/O anyway. The problem is usually getting your test artifacts back out of one of these environments.

0
Jun 30 '09 at 4:08

Split your code between device-specific and device-independent parts. The independent code can be unit tested without too much pain. The dependent code will just need to be tested by hand until you have a solid communication interface.

If you are writing a communication interface, sorry.

0
Jun 30 '09 at 4:56