The fundamental misconception about software metrics is that they become useful once they're rolled up into a pretty report.
Most people follow this flawed process:
- Collect every metric their tools support.
- Compile a report.
- Compare the numbers against recommended values.
- Start looking for a question that their newfound answers can answer.
This is wrong, backward, and counterproductive on so many levels that it isn't even funny. The right approach to collecting any metric is to first figure out why: what is your reason for measuring? Knowing why, you can work out what to measure, and knowing why and what, you can work out how to get information that actually supports further inquiry.
I've seen a wide range of values for the metrics you list, and frankly, comparing them across projects or environments doesn't make much sense.
You can be fairly sure that the same team will produce code that looks much like what it produced before, but you don't need metrics to know that.
You can use metrics to find hot spots worth investigating, but if you have quality problems, the bugs will already cluster in the problem modules, so metrics add little to finding them.
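As a rough sketch of the hot-spot idea only (the module names, defect counts, complexity numbers, and scoring rule below are all hypothetical, not anything from this answer), one might combine a defect-tracker export with a per-module complexity figure and rank modules that are both buggy and complex:

```python
# Hypothetical sketch: rank modules as "hot spots" by combining defect counts
# with a complexity metric. All names and numbers are made up for illustration.
from collections import Counter

# Defect tracker export: one entry per bug, tagged with the module it touched.
defects = ["billing", "billing", "auth", "billing", "parser", "auth"]

# Static-analysis export: cyclomatic complexity per module (illustrative values).
complexity = {"billing": 42, "auth": 17, "parser": 9, "reports": 4}

defect_counts = Counter(defects)

def hot_spot_score(module: str) -> float:
    """Naive score: modules that are both buggy and complex float to the top."""
    return defect_counts.get(module, 0) * complexity.get(module, 1)

ranked = sorted(complexity, key=hot_spot_score, reverse=True)
for module in ranked:
    print(f"{module:8s} defects={defect_counts.get(module, 0):2d} "
          f"complexity={complexity[module]:3d} score={hot_spot_score(module):5.0f}")
```

In practice, of course, the bug tracker alone usually tells the same story, which is the point being made above.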
Now don't get me wrong. I like metrics. I've written several scripts and tools to extract them, visualize them, and do all kinds of fancy things with them. All of that is fun and maybe even useful, but I'm not convinced it amounts to much more than that.
Torbjörn Gyllebring