Like most metrics, they mean very little without context. So the short answer is: never (except maybe for feeding a line printer, and who prints out programs these days?)
Example:
Imagine you're unit-testing and refactoring a legacy code base. It starts with 50,000 lines of code (50 KLOC) and 1,000 known bugs (failing unit tests). The ratio is 1,000 / 50,000 = 1 bug per 50 lines of code. Obviously this is terrible code!
Now, a few iterations later, you have cut the known bugs in half (and most likely the unknown bugs by even more) and shrunk the code base to a fifth of its size through exemplary refactoring. The ratio is now 500 / 10,000 = 1 bug per 20 lines of code. Apparently the code got even worse!
Depending on the impression you want to make, the same result can be reported as any of the following:
- 50% fewer bugs
- five times less code
- 80% less code
- a 60% worse bug-to-code ratio
All of these are true (assuming I haven't messed up the math), and all of them are terrible at summarizing the huge improvement that this refactoring effort actually achieved.
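For the skeptical, here is a minimal Python sketch (variable names are mine; the numbers come from the example above) that reproduces all four spins from the same two before/after measurements:

```python
# Hypothetical variable names; numbers are from the example above.
bugs_before, loc_before = 1_000, 50_000   # before refactoring
bugs_after,  loc_after  =   500, 10_000   # after refactoring

print(f"1 bug per {loc_before // bugs_before} LOC before")   # 1 bug per 50 LOC
print(f"1 bug per {loc_after // bugs_after} LOC after")      # 1 bug per 20 LOC

# The same two measurements, spun four different ways:
print(f"{1 - bugs_after / bugs_before:.0%} fewer bugs")      # 50% fewer bugs
print(f"{loc_before / loc_after:.0f}x less code")            # 5x less code
print(f"{1 - loc_after / loc_before:.0%} less code")         # 80% less code

# "Worse" here means fewer lines of code per known bug (50 -> 20).
print(f"{1 - (loc_after / bugs_after) / (loc_before / bugs_before):.0%} "
      f"worse bug-to-code ratio")                            # 60% worse
```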
Steven A. Lowe Oct 08 '08 at 19:10