Common Values for Code Metrics (C#, Visual Studio) in Production Projects

There are a few questions here about code metrics, especially this one about target values. What I'm looking for is what is "normal" in real production projects. Maybe it's just me, but no project I have ever worked on seems to have had these metrics in mind, so whenever I run ReSharper Code Issues or Visual Studio Code Metrics it feels like I'm the first to do so, and the values always astonish me.

Examples from my current SharePoint project:

| Maintainability | Cyclomatic cmplx. | Inher. depth | Class coupl. | LOC    |
| --------------- | ----------------- | ------------ | ------------ | ------ |
| 67              | 6,712             | 7            | 569          | 21,649 |
| 68              | 3,192             | 7            | 442          | 11,873 |

Update: So the question is, what values do you usually see "in the wild"? Optimal values and best practices aside, which values are commonly found?

+7
2 answers

I assume the values shown are at the assembly level. If so, Cyclomatic Complexity and Lines of Code are most useful at the method level, and Depth of Inheritance should be examined at the class level first of all. Class Coupling gives more useful feedback at the method level first, and then at the class level.

In addition to the guidelines in the Stack Overflow question you linked to, Code Complete, 2nd Edition has this to say about method-level Cyclomatic Complexity (page 458):

  • 0-5: the routine is probably fine.
  • 6-10: start to think about ways to simplify the routine.
  • 10+: break part of the routine into a second routine and call it from the first (a minimal sketch of such an extraction follows this list).
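
To make the last band concrete, here is a minimal, hypothetical C# sketch (not taken from the question's project or from Code Complete): a method whose decision points put its complexity at 6, and the same logic with part of it extracted into a second method so that each piece falls back into the 0-5 band.

```csharp
using System;

// Hypothetical example, not from the question's project.
public static class ShippingCalculator
{
    // Cyclomatic complexity: 1 + 5 decision points = 6, i.e. the
    // "start thinking about ways to simplify" band.
    public static decimal Cost(decimal weightKg, string region, bool express, bool giftWrap)
    {
        if (weightKg <= 0m)
            throw new ArgumentOutOfRangeException("weightKg");

        decimal cost = weightKg * 1.5m;

        if (region == "EU")
            cost += 2m;
        else if (region == "US")
            cost += 3m;

        if (express)
            cost *= 2m;

        if (giftWrap)
            cost += 1m;

        return cost;
    }

    // The same logic with the surcharge rules extracted into a second routine,
    // as the 10+ guideline suggests; each method now has complexity 5 or less.
    public static decimal CostRefactored(decimal weightKg, string region, bool express, bool giftWrap)
    {
        if (weightKg <= 0m)
            throw new ArgumentOutOfRangeException("weightKg");

        return ApplySurcharges(weightKg * 1.5m, region, express, giftWrap);
    }

    private static decimal ApplySurcharges(decimal cost, string region, bool express, bool giftWrap)
    {
        if (region == "EU") cost += 2m;
        else if (region == "US") cost += 3m;
        if (express) cost *= 2m;
        if (giftWrap) cost += 1m;
        return cost;
    }
}
```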

In "real life" projects, what is acceptable is likely to depend on the type of development process you are using. If the team practices TDD (test management) and seeks to write SOLID , then these indicators should be close to optimal values.

With TAD (test-after development), or worse, code without unit tests, expect all the metrics to come out higher than optimal, since larger, more complex methods and classes, and perhaps deeper inheritance hierarchies, become more likely. Either way, the goal should be to limit the occurrence of "bad" values, regardless of how the code was developed.
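
To illustrate the SOLID point above with a minimal, hypothetical sketch (the names and scenario are invented for the example, not taken from the question's project): the first service talks to concrete framework classes directly, which raises its Class Coupling value and makes it hard to unit test; the second depends on one small abstraction, which a TDD workflow tends to push you toward anyway.

```csharp
using System;
using System.Net.Mail;

// Coupled directly to SmtpClient and MailMessage (plus whatever persistence
// types would appear here); all of these count toward Class Coupling.
public class TightlyCoupledOrderService
{
    public void Place(int orderId)
    {
        using (var smtp = new SmtpClient("mail.example.local"))
        {
            smtp.Send(new MailMessage("shop@example.local", "customer@example.local",
                                      "Order placed", "Order " + orderId));
        }
    }
}

// SOLID-style alternative: one outgoing dependency on an interface, so the
// coupling metric stays low and the class is trivial to test with a fake.
public interface IOrderNotifier
{
    void OrderPlaced(int orderId);
}

public class OrderService
{
    private readonly IOrderNotifier _notifier;

    public OrderService(IOrderNotifier notifier)
    {
        if (notifier == null) throw new ArgumentNullException("notifier");
        _notifier = notifier;
    }

    public void Place(int orderId)
    {
        // ... persist the order ...
        _notifier.OrderPlaced(orderId);
    }
}
```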

+10

The fundamental misconception about software metrics is that they become useful once they are collected into a nice report.

Most people follow this flawed process:

  • Collect every metric their tools support.
  • Compile a report.
  • Compare the numbers against recommended values.
  • Start looking for a question that their newfound answers can answer.

This is wrong, backwards and counterproductive on so many levels it isn't even funny. The right approach to any metrics collection is to first figure out why: what is your reason for measuring? With that in mind you can work out what to measure, and knowing why and what, you can work out how to get information that might prompt further questions.

I have seen a wide range of values for the metrics you list, and frankly, comparing them across projects or environments doesn't make much sense.

You can be fairly sure that the same team will produce code that looks much like what it has produced before, but you don't need metrics to know that.

You can use metrics to find hot spots to investigate, but if you have quality problems the bug reports will already point to the problem modules, so using metrics to find them is mostly worthless.

Now don't get me wrong, I like metrics. I have written several scripts and tools to extract them, visualize them and do all kinds of fancy things with them; it is all fun and maybe even useful, but I'm not so sure about the latter.
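
For what it's worth, here is a minimal sketch of the kind of script mentioned above. Everything about its input is an assumption: it expects a hypothetical CSV file named code_metrics.csv with the columns Scope,Member,CyclomaticComplexity (for example, a metrics grid exported and saved in that shape); Visual Studio does not produce this file by itself.

```csharp
using System;
using System.IO;
using System.Linq;

// Hypothetical hot-spot finder: lists the methods whose cyclomatic complexity
// exceeds Code Complete's "break it up" threshold, worst first.
internal static class MetricsHotSpots
{
    private static void Main()
    {
        var hotSpots = File.ReadLines("code_metrics.csv")   // assumed export, see note above
            .Skip(1)                                        // skip the header row
            .Select(line => line.Split(','))
            .Where(fields => fields.Length >= 3 && fields[0] == "Member")
            .Select(fields => new { Name = fields[1], Complexity = int.Parse(fields[2]) })
            .Where(m => m.Complexity > 10)
            .OrderByDescending(m => m.Complexity);

        foreach (var method in hotSpots)
            Console.WriteLine("{0,4}  {1}", method.Complexity, method.Name);
    }
}
```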

+6
