Should you deliberately plant bugs in code sent out for review?

After reading this blog post, I think it might be a good idea... sometimes.

The neurotic paranoiac in me began to wonder how closely they actually looked at my code, especially after several occasions when I noticed bugs right before checking in supposedly reviewed code.

So I came up with the concept of "jester" bugs. Jester is a tester for Java unit tests. It works by making syntactically valid changes to the source code under test, compiling it, and then running the unit test suite. If the unit test suite reports that everything is A-okay, then you know the test suite is inadequate, because it missed the monkey wrench that was thrown into the code.
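Jester itself works on Java, but the core loop (make a valid change, run the tests, see whether they still pass) fits in a few lines. Here is a minimal sketch of that idea in Python; the `is_adult` function and its deliberately weak test suite are invented for illustration, not taken from Jester:

```python
import ast

# Hypothetical function under test.
SOURCE = """
def is_adult(age):
    return age >= 18
"""

class FlipComparison(ast.NodeTransformer):
    """Make a syntactically valid but semantically wrong change: >= becomes >."""
    def visit_Compare(self, node):
        node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op for op in node.ops]
        return node

def run_tests(namespace):
    # A deliberately weak suite: it never probes the boundary value 18.
    is_adult = namespace["is_adult"]
    return is_adult(30) is True and is_adult(5) is False

def mutant_survives(source):
    # Apply the mutation, compile the mutant, and run the test suite against it.
    tree = FlipComparison().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<mutant>", "exec"), namespace)
    return run_tests(namespace)

# The suite still passes against the mutant, so it missed the planted change.
print(mutant_survives(SOURCE))  # True
```

A suite with `assert is_adult(18)` would kill this mutant, which is exactly the signal the technique is after.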

Jester bugs are bugs that you intentionally insert into your code to check whether your code reviewer actually catches bugs in your code or simply waves it through.

- Al Sweigart

+6
code review
17 answers

Of all the things you could spend your time on to add value to the software development process, this should be very close to the bottom of the list, and whether it adds any value at all is extremely debatable.

I think the problem here is that you do not understand the fundamental purpose of code review. Code review is not primarily about finding bugs. You should have other processes, such as unit testing, integration testing, continuous integration builds, QA tools/teams, etc., to find bugs.

Code reviews provide value by identifying problems at different levels. Obviously you CAN find specific bugs during a code review, and that is great, but a review can also help identify problems at a higher or lower level.

For example, at a slightly higher level, someone might point out that there is a requirement that all currency amounts be stored as integer cents, not floating-point dollars, and that the code in question does not meet it (while being otherwise completely correct).

At a lower level, it gives more experienced developers a chance to give a junior developer advice on how they could solve problems better or more simply. For example, perhaps the developer has many nested loops that could be replaced with a simple list comprehension or a few higher-order functions.
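As an illustration of that kind of review feedback, here is a hypothetical before/after (the `pairs_*` functions are invented for this example):

```python
def pairs_loops(xs, ys):
    # "Junior" version: explicit nested loops with manual accumulation.
    result = []
    for x in xs:
        for y in ys:
            if x != y:
                result.append((x, y))
    return result

def pairs_comprehension(xs, ys):
    # Reviewer's suggestion: one comprehension states the same thing directly.
    return [(x, y) for x in xs for y in ys if x != y]

# Both produce the same result; the second is easier to read at a glance.
assert pairs_loops([1, 2], [2, 3]) == pairs_comprehension([1, 2], [2, 3])
```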

It can also give a younger, more up-to-date developer the chance to teach a more experienced colleague about a new technique that would simplify their code, and so on.

At the end of the day, it also lets you keep an eye on what is going into the code base, catch any glaring deviations from what you expect, make sure everyone is on the right track and marching in the same direction, and help the developers involved grow.

+13

No. Sending someone a few hundred or a few thousand lines of code and saying "please read this" will never result in a thorough review. If you suspect that people do not really look at the code when you do this, you are right. Deliberately planting bugs in the code just turns this ridiculous practice into a game.

The only way to properly review code is to get people together in a room and go through it as a group.

+4

Well, I know that if I did this, I would forget about it and would therefore have deliberately introduced a bug into the code base. So I am not sure it is a good idea unless you have a very good memory.

+4

I disagree with the way the blogger does this. Code reviews should be conducted in person, in a room, with everyone present. But the idea of planting a bug in the code is still worthwhile, to see whether people (1) read the code before they come to the meeting and (2) pay attention and think during the meeting.

+4

This is a very passive-aggressive way to address a problem that could probably be handled better directly. If you think you need better, more formal code review processes, say so.

+4

Depending on how this is done, I think it can be horrible / stupid or valuable.

Several commentators seem to think that this idea basically plays “gotcha” with your teammates, and I tend to agree.

However, as suggested in the IEEE article, if your overall process is very mature, there may be room for defect seeding. The article refers to a context in which an organization measures and monitors defect numbers quite thoroughly, including arrival rate, closure rate, total (not only found) defects, etc.

The article discusses some of the risks of doing this, as do many commenters here. If your process is mature enough to warrant defect seeding, there are various ways to protect against those risks.


A related concept is to give QA an instrumented version of the code, one that is not a release candidate. The idea is to verify that the QA organization's tests actually exercise the code. Two pieces of information can be gleaned from this practice:

  • Coverage oversight: this is especially useful when there is little or no traceability from requirements to tests.
  • Dead-code detection: see which code is never exercised. If your traceability is strong, you can be confident that all the code needed for production is exercised by the tests, so any uncovered code is dead. In practice, this is a starting point for further investigation.
+3

Something about intentionally adding bugs to your code to catch out your coworkers just doesn't sit right with me. I'd say simply hold formal reviews instead: then you know that your code has been reviewed, and formal reviews have been shown to detect more errors anyway.

+2

This is something I did once when discussing the rules for an online game: we added silly things so that we would know each of us was really reading the proposals. It worked very well and was very helpful.

+1

Not only is introducing errors always a bad idea, it will also make you look bad. If they find the bugs, they will think less of your code. If you then explain that you only planted the bugs to test them, they will think even less of you (because they will see you as paranoid or a liar).

+1

I want to say: "Are you serious?" but I can see your point of view ...

I probably wouldn't do it this way, though; rather, I would actually wave the code in their faces and ask them real questions, making them use their brains... Maybe even plant one bug just to grab their attention right there; maybe they would then feel more motivated to review it properly before check-in...

I wouldn't want to deliberately plant bugs, though, simply for fear of losing track of where I put them ^_^

0

I remember someone once suggesting that you not compile your code before the review, and then try to find the compilation errors during the review itself. The theory was that once the review was complete, you could compile the code and count how many compilation errors you had failed to find; if you missed those, it is safe to say you also missed many logic errors. I don't know if I could ever force myself not to type "make", but it made me think, and it would help with this problem too.

0

I think the best way to deal with this problem would be to call the person out as not attentive enough, or simply to stop doing code reviews with that person. Nobody likes underhandedness; be direct and honest.

0

Just keep in mind that every planted mistake can be used against you: either people will simply think you are incompetent for making silly mistakes, or they will think you are actively sabotaging the project.

Outside of the Olympics, sabotage and deception are heavily frowned upon, and they can genuinely come back to hurt you.

0

There is a problem in my organization where user-acceptance testers (UAT) do not look at applications until the final testing phase, after QA, when requirement changes are very expensive (and, unfortunately, all too frequent).

In an iterative project, to make sure they engage with the product, I put "jokes" in the user interface that are intentionally ugly or poorly worded (for example, Lorem Ipsum text, orange backgrounds, Courier fonts, ...). The hope is that the UAT testers will dig into the application and start commenting on the text and UI right away, instead of glancing at it, calling it "good enough", and waiting until the end.
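One way to keep such jokes from ever shipping (the "I would forget about it" risk raised above) is to gate them behind a build flag. A hypothetical sketch; the `BUILD_TYPE` variable and the strings are invented for illustration:

```python
import os

# Hypothetical build flag: anything other than BUILD_TYPE=release is a dev/UAT build.
RELEASE_BUILD = os.environ.get("BUILD_TYPE", "dev") == "release"

def greeting_text(release_build=RELEASE_BUILD):
    if not release_build:
        # Deliberately awful placeholder: UAT should complain about it *now*,
        # not in the final testing phase.
        return "Lorem ipsum dolor sit amet (FIXME: real greeting)"
    return "Welcome to your account dashboard"

print(greeting_text(release_build=False))  # the bait shown to UAT builds
print(greeting_text(release_build=True))   # what production would ship
```

Because the placeholder only exists on the non-release path, a forgotten joke cannot survive into a release build.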

0

Maybe not for code reviews (that's tricky enough), but if you want the QA group to hit your code hard... here is a link with a section on defect seeding from IEEE Best Practices.

0

The quote suggests intentionally adding bugs to test your automated unit-test coverage, and that coverage is something you can then fix and improve. This seems reasonable, especially if you don't know how reliable your unit tests are.

Your idea of adding them to test the reviewer's coverage (something beyond your control) seems far less valuable.

0

Like most people here have said, in terms of developing high-quality software this technique carries significant risk. And in safety-critical software development (for example, fly-by-wire systems for airplanes or electronic throttle control for cars) you would never want to introduce a deliberate error into your software.

The technique can also be impractical if the code review process is complemented by static analysis: if a bug were planted, the static analysis tool should catch it. And if the static analysis tool's detection is too weak to catch it, that is a different problem entirely.

0
