Code coverage analysis for embedded C++ projects

I recently started working on a very large C++ project which, with about 90% of the implementation complete, has determined that it needs to demonstrate 100% branch coverage during testing. The project is hosted on an embedded platform (Green Hills INTEGRITY). I am looking for suggestions and experiences from other Stack Overflow users who have used code coverage products in similar environments. I am interested in both positive and negative comments about these kinds of tools.

+6
embedded testing code-analysis
4 answers

100% branch coverage? That is quite a requirement, especially since some branches (the defaults in switch statements for state machines, for example) should not be reachable at all. I expect there are some exceptions; if not, you may need to understand what coverage testing can and cannot accomplish before you start. Otherwise you will end up pulling your hair out, or worse, reporting incorrect data.

Most coverage testing for embedded systems is actually performed on a PC. The code is ported, some aspects of the microcontroller are emulated in software, and Bullseye or a similar utility is run to measure coverage of the code on the PC. The reason for this is that there are too many microcontrollers and compiler/debugger/test environments to develop code coverage tools for each of them.

Where code coverage tools do exist for a specific embedded platform, they are not as powerful, configurable, easy to use, and bug-free as those designed for the PC. Embedded processors often lack the trace capability (short of expensive hardware emulation) needed for good code coverage without inserting additional debug code into your firmware, and that instrumentation has consequences and side effects that are difficult to control, especially around timing in real-time systems.

Porting the code is not terribly difficult if you can factor out the hardware-specific parts (and since you are using C++ correctly, that should be easy, right? ;-D). The biggest problem you will run into is types, which, although better specified in C++ than in C, still pose some problems. Make sure you use a types.h header or a similar setup to tell the compiler exactly which types you are using and how they should be interpreted.

After that, you can go to town testing the core logic on the PC. You can even test the low-level hardware drivers if you are interested in developing the necessary software emulation, although timing issues can be somewhat troublesome there.

Software testing tools such as MxVDev do much of the microcontroller emulation for you and help with the timing issues as well, but you will still have some work to do even with that help.

If you have to do this on the target system itself, you will need to purchase an emulator for the processor with coverage capability, which is not an inexpensive proposition (many emulators cost more than $30,000 for a complete tool and emulation hardware set), but it is one of the many tools used in high-reliability environments such as automotive and aerospace.

-Adam

Disclaimer: I work for a company that produces MxVDev.

+5

We have used Cantata and VectorCAST in the past for unit testing and code coverage. We also use the Green Hills tools, and both of these products work with the Green Hills development environment. We run most of our tests on the PPC simulator, and run only the tests that depend on hardware on the target hardware through a JTAG connection. Cantata and VectorCAST are very similar; Cantata is somewhat easier to use and has a few more features, and those small additions make a big difference in the user experience.

As a rule, if you want to achieve a high level of branch coverage, you need to design your code for testability. The more testing you do, the more you learn about writing testable code.

We have also tried PC-based testing as well as on-target testing, and ran into problems caused by endianness, but that is only a hardware-level issue.
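The endianness pitfall mentioned above usually shows up when code inspects a multi-byte value through a byte pointer; a small sketch (function names invented) of the non-portable pattern versus the portable one:

```cpp
#include <cstdint>

// Non-portable: the first byte in memory depends on host byte order,
// so a PC test may see a different value than the target would.
std::uint8_t first_byte_in_memory(const std::uint32_t& w) {
    return *reinterpret_cast<const std::uint8_t*>(&w);
}

// Portable: shifts and masks operate on the arithmetic value, giving
// the same answer on any host, big- or little-endian.
std::uint8_t low_byte(std::uint32_t w) {
    return static_cast<std::uint8_t>(w & 0xFFu);
}
```

Writing shared code in the portable style is one way to keep PC test results meaningful for the target.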

In addition, these tools are RTCA/DO-178B certified.

+3

Like Adam, we port our embedded code to a PC harness and do most of our coverage and profiling there. I have used AutomatedQA AQtime and Compuware's DevPartner, both of which are good products.

If you had to measure coverage on the target itself, you would need a coverage profiler that creates an instrumented version of the source. Both commercial and open-source tools are available for this, but IMO it adds a lot of work for little gain.

100% coverage is ambitious, as you will need a lot of fault injection to reach all of the error and exception handlers. IMO, this too would be easier to do in a harness than on the target.

It is also worth pointing out to whoever asked for 100% code coverage that 100% code coverage in no way equals 100% test coverage. Consider, for example, the following function:

int div(int a, int b) { return a / b; }

100% code coverage only requires this function to be called once; 100% test coverage would require many more calls. My own test strategy involves developing automated test cases to give me an acceptable level of test coverage, and then using a code coverage tool purely as an aid to finding untested areas. To some extent it depends on your testing budget; for me, 100% code coverage is far too expensive for what it delivers.
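To illustrate the gap between the two notions, here is a hedged variant of the `div` example above (the checked wrapper is my own addition, not part of the original answer): a single call such as `div_checked(6, 3)` already yields 100% line coverage of the original one-line body, yet says nothing about the untested input class `b == 0`.

```cpp
#include <stdexcept>

// Defensive version of the div() example: the b == 0 input class is the
// kind of case that line coverage of the original function never forces
// a tester to think about.
int div_checked(int a, int b) {
    if (b == 0) {
        throw std::invalid_argument("division by zero");
    }
    return a / b;  // one call through here "covers" the original div()
}
```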

+2

See SD C++ Test Coverage. This is a family of (industrial-strength) test coverage tools for a wide variety of C++ dialects (ANSI, GNU, MS, ...) that work well even on real embedded hardware, thanks to a very small instrumentation footprint and a simple mechanism for exporting the collected coverage data. A GUI coverage display, independent of your actual embedded hardware, shows the results and can also produce a complete coverage summary report.

[I am the director of the company providing these tools.]

0
