What code metric convinces you that the provided code is "crappy"?

Lines of code per file, methods per class, cyclomatic complexity, and so on. Developers resist them and game most, if not all, of them! There is a good article about this from Joel (no time to find it right now).

What code metric do you recommend to automatically identify "crappy code"?

What metric could convince the majority of developers (you cannot convince all of them with some crappy metric! :O)) that their code is "crap"?

Only metrics that can be measured automatically count!

+22
language-agnostic metrics automation software-quality
Oct 09 '08 at 13:42
27 answers

No coding-style metrics are part of such alerts.

For me, this is really about static code analysis, which can be 'on' all the time:

  • cyclomatic complexity (detected by Checkstyle)
  • dependency cycle detection (e.g., via FindBugs)
  • critical errors detected by FindBugs, for example.

I would put test coverage in a second step, since such tests can take some time.
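
To make the first bullet concrete, here is a minimal sketch of what a cyclomatic-complexity check computes. It is only an illustration (Checkstyle and similar tools parse the AST rather than grepping for keywords), and the file-path argument is hypothetical:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RoughComplexity {
        // One point per decision point, plus one for the straight-line path.
        private static final Pattern DECISION =
                Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\||\\?");

        public static void main(String[] args) throws Exception {
            String source = Files.readString(Path.of(args[0]));
            Matcher m = DECISION.matcher(source);
            int complexity = 1;
            while (m.find()) {
                complexity++;
            }
            System.out.println("rough cyclomatic complexity: " + complexity);
        }
    }

Run it over a single method body, and a result above 10 or so is commonly treated as a warning sign.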




Do not forget that "crappy" code is not detected by one metric alone, but by a combination of metrics and their evolution (as in a "trend"): see What is the fascination with code metrics?.

This means that you don't just need to recommend code metrics to "automatically identify crappy code"; you also need to recommend the right combination and the trend analysis to go along with those metrics.




@OP: I share the disappointment ;), and I do not share the point of view of tloach (in the comments on other answers) when he says "Ask a vague question, get a vague answer"... your question deserves a specific answer.

+28
Oct 09 '08 at 13:50

Not an automated solution, but I find WTFs per minute useful.

WTF per minute http://www.osnews.com/images/comics/wtfm.jpg

+31
Oct 09 '08 at 13:49

The number of warnings the compiler spits out when I do a build.

+12
Oct 09 '08 at 13:43

The number of commented-out lines per line of production code. Usually this indicates a sloppy programmer who does not understand version control.
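
A crude way to automate this check, assuming C-style '//' comments (a heuristic sketch of my own, not an existing tool): treat any comment line that ends in ';', '{' or '}' as code that was commented out, and compare the count against real production lines.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.regex.Pattern;

    public class DeadCodeRatio {
        // Heuristic: a '//' comment ending in ';', '{' or '}' was probably code once.
        private static final Pattern COMMENTED_OUT =
                Pattern.compile("^\\s*//.*[;{}]\\s*$");

        public static void main(String[] args) throws Exception {
            long dead = 0, production = 0;
            for (String line : Files.readAllLines(Path.of(args[0]))) {
                if (COMMENTED_OUT.matcher(line).matches()) {
                    dead++;
                } else if (!line.isBlank() && !line.trim().startsWith("//")) {
                    production++;
                }
            }
            System.out.printf("commented-out code per production line: %.3f%n",
                    production == 0 ? 0.0 : (double) dead / production);
        }
    }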

+12
Oct 09 '08 at 14:18

Developers are always wary of metrics being used against them, and calling code "crappy" is not a good start. This matters because if you are worried about your developers gaming the metrics, then do not use the metrics for anything that is in their interest or to their disadvantage.

What works best is not to let the metric tell you where the code is crappy, but to use the metric to determine where you need to look. You look by doing a code review, and the decision of how to fix the problem is worked out between the developer and the reviewer. I would also err on the side of the developer against the metric: if the code still trips the metric but the reviewers think it is good, leave it alone.

But it is important to keep this gaming effect in mind when your metrics start to improve. Great, now we have 100% coverage, but are the tests any good? The metric tells me we are fine, but I still need to check it and see what got us there.

Bottom line: the human trumps the machine.

+9
Oct 09 '08 at 15:24

Number of global variables.

+8
Oct 09 '08 at 13:44
  • Tests that do not exist (discovered through code coverage). This is not necessarily an indication that the code is bad, but it is a big warning sign.

  • Swearing in the comments.

+8
Oct 09 '08 at 13:44

Metrics alone do not identify crappy code. However, they can identify suspicious code.

There are many metrics for OO software. Some of them can be very useful:

  • The average method size (both in LOC/statements and in complexity). Large methods can be a sign of bad design.
  • The number of methods overridden by a subclass. A large number indicates bad class design.
  • The specialization index (number of overridden methods * nesting level / total number of methods). High numbers indicate possible problems in the class diagram.

There are many more viable metrics, and they can be calculated with tools. They can be a good help in identifying crappy code.
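
As an illustration, the specialization index from the last bullet can be computed mechanically. A reflective sketch of my own (real metric tools work on source or bytecode and also filter out bridge and synthetic methods):

    import java.lang.reflect.Method;
    import java.lang.reflect.Modifier;

    public class SpecializationIndex {
        // Nesting level: depth of the inheritance chain below java.lang.Object.
        static int nestingLevel(Class<?> c) {
            int depth = 0;
            for (Class<?> s = c.getSuperclass(); s != null; s = s.getSuperclass()) {
                depth++;
            }
            return depth;
        }

        // Counts declared instance methods that redeclare a superclass method.
        static int overriddenMethods(Class<?> c) {
            int count = 0;
            for (Method m : c.getDeclaredMethods()) {
                if (Modifier.isStatic(m.getModifiers())) continue;
                for (Class<?> s = c.getSuperclass(); s != null; s = s.getSuperclass()) {
                    try {
                        s.getDeclaredMethod(m.getName(), m.getParameterTypes());
                        count++;
                        break;
                    } catch (NoSuchMethodException ignored) {
                        // not declared at this level; keep climbing
                    }
                }
            }
            return count;
        }

        // SIX = overridden methods * nesting level / total methods
        static double specializationIndex(Class<?> c) {
            int total = c.getDeclaredMethods().length;
            return total == 0 ? 0.0
                    : (double) overriddenMethods(c) * nestingLevel(c) / total;
        }

        public static void main(String[] args) {
            System.out.println(specializationIndex(java.util.ArrayList.class));
        }
    }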

+7
Oct 09 '08 at 13:54
  • global variables
  • magic numbers
  • code-to-comment ratio
  • heavy coupling (e.g., in C++ you can measure this by looking at class relationships or at the number of cpp/header files that cyclically include each other)
  • const_cast or other kinds of casts within the same code base (not with external libs)
  • large chunks of code commented out and left in there
+6
Oct 09 '08 at 14:52

My personal favorite warning flag: comment-free code. It usually means the coder did not stop to think about it; plus it automatically makes the code hard to understand, so it ups the crappy ratio.

+4
Oct 09 '08 at 13:57

My bet: a combination of cyclomatic complexity (CC) and code coverage from automated tests (TC).

  CC | TC
   2 |  0%  - good anyway, cyclomatic complexity too small
  10 | 70%  - good
  10 | 50%  - could be better
  10 | 20%  - bad
  20 | 85%  - good
  20 | 70%  - could be better
  20 | 50%  - bad
  ...

crap4j is a possible tool (for Java) and an explanation of the concept... still looking for a friendly C# tool :(
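
For reference, the concept behind crap4j is the published CRAP formula: CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m), where comp is a method's cyclomatic complexity and cov its test coverage as a fraction. A minimal sketch that reproduces the trend in the table above (the threshold of ~30 for "crappy" is crap4j's default, as far as I remember):

    public class CrapScore {
        // CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m),
        // with coverage given as a fraction between 0 and 1.
        static double crap(int complexity, double coverage) {
            double uncovered = 1.0 - coverage;
            return complexity * complexity * uncovered * uncovered * uncovered
                    + complexity;
        }

        public static void main(String[] args) {
            System.out.printf("CC=10, cov=70%% -> %.1f%n", crap(10, 0.70)); // 12.7: good
            System.out.printf("CC=10, cov=20%% -> %.1f%n", crap(10, 0.20)); // 61.2: bad
            System.out.printf("CC=20, cov=50%% -> %.1f%n", crap(20, 0.50)); // 70.0: bad
        }
    }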

+3
Oct 09 '08 at 13:43

At first glance: crude application of code idioms.

On closer inspection: obvious errors and misconceptions on the programmer's part.

+3
Oct 11 '08 at 14:33

The ratio of useless comments to meaningful comments:

  ' Set i to 1
  Dim i As Integer = 1
+2
Oct 09 '08 at 13:52

I do not believe there is such a metric. With the exception of code that does not actually do what it is supposed to (which is a whole extra level of crappiness), "crappy" code means code that is hard to maintain. That usually means it is hard for the maintainer to understand, which is always, to some extent, a subjective thing, just like bad writing. Of course there are cases where everyone agrees that the writing (or the code) is crappy, but it is very hard to write a metric for it.

Plus, everything is relative. Code that performs a hugely complex function, in minimal memory, optimized for every last cycle of speed, will look very bad compared to a simple function with no constraints. But it is not crappy - it just does what it has to.

+2
Oct 09 '08 at 13:55

Unfortunately, there is no such metric that I know of. Something to keep in mind: no matter what you choose, programmers will game the system to make their code look good. I have seen this happen everywhere any "automatic" metric is implemented.

+2
Oct 09 '08 at 14:13

Lots of conversions to and from strings. Usually this is a sign that the developer is not clear about what is going on and is simply trying random things until something works. For example, I have often seen code like this:

  object num = GetABoxedInt();
  // long myLong = (long) num; // throws an exception
  long myLong = Int64.Parse(num.ToString());

when what they really wanted was:

  long myLong = (long)(int)num; 
+2
Oct 09 '08 at 14:42
  • Track the ratio of design-pattern classes to standard classes. A high ratio would indicate patternitis.
  • Check for magic numbers not defined as constants.
  • Use a pattern-matching utility to detect potentially duplicated code (see the sketch below).
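
For the last point, a naive sketch of my own (real tools such as PMD's CPD or Simian do token-based matching, which is far more robust): flag any window of identical consecutive normalized lines that was already seen earlier in the file.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class DuplicateFinder {
        static final int WINDOW = 6; // flag any 6 identical consecutive lines

        public static void main(String[] args) throws Exception {
            List<String> lines = new ArrayList<>();
            for (String l : Files.readAllLines(Path.of(args[0]))) {
                lines.add(l.trim().replaceAll("\\s+", " ")); // normalize whitespace
            }
            Map<String, Integer> seen = new HashMap<>();
            for (int i = 0; i + WINDOW <= lines.size(); i++) {
                String chunk = String.join("\n", lines.subList(i, i + WINDOW));
                Integer first = seen.putIfAbsent(chunk, i + 1);
                if (first != null) {
                    System.out.printf("lines %d-%d repeat lines %d-%d%n",
                            i + 1, i + WINDOW, first, first + WINDOW - 1);
                }
            }
        }
    }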
+2
Oct 09 '08 at 15:03

I am surprised that no one has mentioned crap4j.

+2
Oct 09 '08 at 15:31

Sometimes you just know it when you see it. For example, this morning I saw:

  void mdLicense::SetWindows(bool Option) {
      _windows = (Option ? true : false);
  }

I just had to ask myself: "Why would anyone ever do this?".

+1
Oct 09 '08 at 15:27

Code coverage has some value, but otherwise I tend to rely more on code profiling to tell whether the code is crappy.

0
Oct 09 '08 at 13:43

The ratio of comments that include profanity to comments that do not.

Higher = better code.

0
Oct 09 '08 at 13:48

Comment Lines / Code Lines

value > 1 -> bad (too many comments)

value < 0.1 -> bad (not enough comments)

Adjust the numerical values according to your own experience ;-)
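
For what it's worth, here is how such a check could be automated for C-style comments (a sketch of my own; block comments spanning lines and doc comments would need real parsing):

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class CommentRatio {
        public static void main(String[] args) throws Exception {
            long comments = 0, code = 0;
            for (String raw : Files.readAllLines(Path.of(args[0]))) {
                String line = raw.trim();
                if (line.isEmpty()) continue;
                // Count a line as a comment if it starts with a comment marker.
                if (line.startsWith("//") || line.startsWith("/*") || line.startsWith("*")) {
                    comments++;
                } else {
                    code++;
                }
            }
            double ratio = code == 0 ? 0.0 : (double) comments / code;
            System.out.printf("comments / code = %.2f%n", ratio); // flag > 1 or < 0.1
        }
    }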

0
Oct 09 '08 at 13:53

I use a multi-level approach; the first level is reasonable readability, offset only by the complexity of the problem being solved. If it cannot pass the readability check, I usually consider the code less than good.

0
Oct 09 '08 at 14:25

TODO: comments in production code. They simply show that the developer does not carry tasks through to completion.

0
Nov 30 '09 at 6:42

Methods with 30 arguments. On a web service. That's all.

0
Jun 23 '10 at 20:52

Well, there are various measures you could use to indicate whether code is good code. Here are some of them:

  • Cohesion: if a block of code, whether a class or a method, turns out to serve several purposes, it is low in cohesion. Code that is low in cohesion tends to be poor in reusability and poor in maintainability.

  • Code complexity: you can use McCabe's cyclomatic complexity (the number of decision points) to measure the complexity of the code. High code complexity indicates code that is less usable (hard to read and understand).

  • Documentation: code with insufficient documentation can also point to poor software quality in terms of the usability of the code.

Check out the following page for a code review checklist.

0
Feb 14 '13 at 20:28

This fun blog post on the CRAP code metric may be helpful.

-1
Nov 30 '09 at 4:30


