Sunday, March 10, 2013

The Trap Called Ten Percent Code Review

Years ago, I was in a meeting with a project team to understand their code review techniques.

I asked, “How frequently do you review code? What is your approach?”

The team leader responded with evident pride. He said, “We review 10% of the code to identify issues.”

I was curious. “What have the findings been? You have already completed more than 50% of the coding in this project!”

“We are going to review 10% of the code a month from now, because that is when we will have completed all the significant modules.”

“Well. If that is the plan, how would you address review findings? Won’t it be too late in the game?”

A big pause before the team leader opened up.

“We are working on an aggressive schedule. We need to complete this project on time. Such reviews would delay our progress. According to our organizational policy, we can review 10% of the code base anytime depending on project priorities. There are no hard rules here!”

The room was silent. What could I have done at that point? I suggested that the team lead initiate code reviews right away instead of waiting for a month to pass. He did initiate code reviews, but only to comply, nothing more. The results were ineffective, and the whole issue was put to rest, a temporary rest, because he and his team members had succumbed to schedule pressure.

Months later, when we delivered the project, acceptance bugs arrived at a skyrocketing rate. Bug fixing revealed severe code quality issues. Major refactoring to fix the messy code base resulted in schedule overrun and team burnout. And of course, we had to pay off what we had accumulated. I mean, technical debt.

The trap called ten percent code review is very common, and you must have seen it happen elsewhere too.

Can we avoid this trap? Yes. Of course! How?

The answer is simple. We can avoid this by doing it right.

First, we must understand the meaning of ten percent code review. It means conducting code reviews as soon as 10% of the code is complete. This means that you need to schedule ‘architecturally significant’ or ‘critical’ modules early in the cycle. No one can afford to review these only after 50% of the code is complete. A better approach is to start code reviews no later than 10 days from the start of coding.

It also means doing effective reviews: involving experienced reviewers and ensuring that findings are not repeated in the future.

It does not mean that ten percent code review is sufficient to ensure code quality. It is one technique among several. Besides ten percent code review, one should consider defect prevention, the use of static analysis tools, peer reviews of critical code segments, and so on. The effectiveness of techniques such as defect prevention depends on the rigor of code reviews. Without rigor it is difficult to identify deep code quality issues.

Ten percent code review is not an escape route for spending minimal time on code reviews in order to appease an aggressive project schedule. When you don't do it right, it becomes a trap.

Have you come across similar traps? What has been your approach?


Mayuresh Gokhale said...

Totally agree with the importance of rigorous code reviews. Some practices can be borrowed from Agile - XP/Scrum methodologies to increase efficiency:

1. Code analysis can itself be automated, like the nightly builds and automated UT execution. Code quality compliance issues would then be flagged after every check-in/build. Resolving these should be made mandatory and high priority, instead of waiting for the formal "review" process.
2. Integration of code quality monitors with the IDE in every developer's local setup. A standard set of rules should be used across the team without exceptions.
3. Pair programming for development of core areas in the application.
4. If following Scrum, dedicate one whole Sprint to improving code quality and dealing with technical debt. Typically Sprint #3 or Sprint #4 is an ideal candidate for this type of work.
5. Any bug-fixing task MUST be followed up with a dedicated peer review.
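Point 1 above, automating code analysis so that every check-in is gated, can be sketched in a few lines. This is a minimal, purely illustrative example: it uses Python's built-in `py_compile` as a stand-in "analyzer" (which only catches syntax errors), whereas a real pipeline would invoke a full static analysis tool. The function name `check_files` and the overall shape are my own assumptions, not a prescribed setup.

```python
import py_compile


def check_files(paths):
    """Run a minimal static check on each file; return the files that fail.

    py_compile only detects syntax errors. In a real check-in gate, this
    step would be replaced by a proper static analyzer or linter, and the
    build would be failed whenever the returned list is non-empty.
    """
    failures = []
    for path in paths:
        try:
            py_compile.compile(path, doraise=True)
        except py_compile.PyCompileError:
            failures.append(path)
    return failures
```

A script like this could be wired into a nightly build or a pre-commit hook, with a non-empty result blocking the commit until the issues are resolved, which is exactly the "resolve on high priority instead of waiting for the formal review" behavior described above.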


Raja Bavani said...


Absolutely! Thanks for sharing these practices. Unfortunately, some teams are crazy about static analysis tools and all they do is depend on such tools. My next blog post is on this concern. The link is

Thanks again!