Despite my skepticism that any such estimate is applicable in this case, I found some statistics that may be relevant.
In this article, the author cites figures from a “large body of empirical research” published in “Software Assessments, Benchmarks, and Best Practices” (Capers Jones, 2000). At SEI CMM Level 1, which sounds like the level of this code, you can expect a defect rate of 0.75 per function point. I will leave it to you to determine how function points relate to LOC in your code; you will probably need a metrics tool to perform that analysis.
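To illustrate how that 0.75 defects-per-function-point figure might translate into a defect count, here is a minimal sketch. The LOC-per-function-point ratio is a "backfiring" heuristic that varies widely by language, and both the project size and the ratio below are assumed values for illustration only; a real analysis needs a metrics tool and language-specific conversion tables.

```python
# Hedged sketch: rough defect estimate from the SEI CMM Level 1 rate.
LOC = 10_000            # assumed project size (illustrative)
LOC_PER_FP = 55         # hypothetical backfiring ratio for a C-like language
DEFECTS_PER_FP = 0.75   # SEI CMM Level 1 figure cited above

function_points = LOC / LOC_PER_FP
estimated_defects = function_points * DEFECTS_PER_FP
print(round(estimated_defects))  # order-of-magnitude estimate only
```

Treat the result as an order of magnitude, not a prediction; the backfiring ratio alone can easily be off by a factor of two.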
Steve McConnell, in Code Complete, cites a study of 11 projects developed by the same team: five with no code reviews, six with code reviews. The defect rate for the un-reviewed code was 4.5 per 100 LOC; for the reviewed code it was 0.82. So on that basis your estimate seems fair in the absence of any other information. However, I would have to assume a certain level of professionalism in that team (if only because they felt the need to conduct the study), and that they would at least have acted on the warnings; your defect rate could be much higher.
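The two rates from that study are easy to turn into expected defect counts for a codebase of a given size. The project size below is an assumption for illustration; the rates are the ones cited above.

```python
# Sketch: expected defects for an assumed codebase size, using the
# two rates from the study McConnell cites (defects per 100 LOC).
LOC = 10_000                  # assumed codebase size (illustrative)
NO_REVIEW_RATE = 4.5 / 100    # un-reviewed code: 4.5 defects per 100 LOC
REVIEW_RATE = 0.82 / 100      # reviewed code: 0.82 defects per 100 LOC

print(f"without reviews: ~{LOC * NO_REVIEW_RATE:.0f} defects")
print(f"with reviews:    ~{LOC * REVIEW_RATE:.0f} defects")
```

The gap (roughly a factor of five) is the point: even a crude per-LOC model suggests un-reviewed code carries several times the defect load.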
A point about the warnings: some are benign and some are bugs (i.e. they lead to undesirable behaviour of the software). If you ignore them all on the assumption that they are all benign, you will introduce errors. Moreover, some that are benign now will become real errors when other conditions change; but if you have already accepted a mass of warnings, you have no protection against introducing such errors.
Clifford