
What 3 years of code reviews taught me

I used to think code reviews were about finding bugs. After hundreds of PRs, here are the 10 lessons that changed how I review and receive feedback.

February 18, 2026 · 10 min read
Code Reviews · Best Practices · Teamwork · Software Engineering
The Review Trap - From attacking the person to improving the code

I used to think code reviews were about finding bugs.

I was wrong.

In 3 years, I've reviewed hundreds of PRs. I've had my code torn apart. I've written comments I regret. I've approved things I shouldn't have. I've blocked things that didn't matter.

Here's what I actually learned.


Lesson 1: the best reviews aren't about code

The code is just the artifact. The real questions are:

  • Does this solve the right problem?
  • Will we regret this in 6 months?
  • Can someone else understand this without a walkthrough?

I once approved a PR that was technically perfect. Clean code, good tests, proper types. Three months later, we rewrote it entirely because it solved the wrong problem.

The author had misunderstood the requirements. I never asked "what problem are we solving?" I just looked at the code.

Now my first comment is always about intent, not implementation.


Lesson 2: small PRs get better reviews

Big PRs get rubber-stamped. I've done it. You've done it.

500 lines? I skim. I miss things. I approve because I'm tired of looking at it.

50 lines? I actually read every line. I think about edge cases. I suggest improvements.

The data backs this up: studies of code review effectiveness show defect-detection rates drop sharply once a change passes roughly 200-400 lines. Your brain just gives up.

I started rejecting PRs that were too big. Not because the code was bad, but because I couldn't review it properly.

"Can you split this into smaller PRs?" became my most frequent comment.


Lesson 3: critique the code, not the person

Early in my career, I wrote comments like:

  • "You didn't handle the error case"
  • "You forgot to add tests"
  • "This is wrong"

Every comment started with "you." Every comment felt like an attack.

Now I write:

  • "What happens if the API returns an error here?"
  • "This function might benefit from a test for the edge case"
  • "I'm not sure I understand the approach here, can you walk me through it?"

Same feedback. Completely different tone.

The person receiving the review shouldn't feel defensive. They should feel like we're solving a problem together.


Lesson 4: explain the why, not just the what

Bad comment:

"Use useMemo here"

Better comment:

"This calculation runs on every render. Since it's expensive and the dependencies rarely change, useMemo could help performance."

The first comment tells someone what to do. The second teaches them something.

I've learned more from reviewers who explained their reasoning than from any tutorial. Code reviews are mentorship disguised as quality control.

When I take the time to explain why, the author doesn't just fix this PR. They write better code in the next one.


Lesson 5: not every comment needs action

I used to treat every review comment as a blocker. My PRs would go through 5 rounds of changes before approval.

Then I discovered prefixes:

  • nit: Minor style preference, feel free to ignore
  • suggestion: Consider this, but not blocking
  • question: I'm curious, not necessarily requesting changes
  • blocker: This needs to change before merge

Example:

nit: I'd name this userCount instead of count, but up to you.

vs.

blocker: This query has no pagination. With 100k users, this will time out.

Not everything is equally important. Label your comments so the author knows what actually matters.
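The pagination blocker above is concrete enough to sketch. One common fix is keyset pagination: fetch a bounded page ordered by id and use the last id seen as the cursor. A minimal parameterized query builder, assuming a hypothetical users table with numeric ids:

```typescript
// Build a bounded, cursor-based query instead of an unbounded SELECT.
// afterId is the last user id from the previous page (undefined = first page).
function usersPageQuery(
  afterId?: number,
  pageSize = 100
): { text: string; values: number[] } {
  const values: number[] = [];
  let where = "";
  if (afterId !== undefined) {
    values.push(afterId);
    where = `WHERE id > $${values.length} `;
  }
  values.push(pageSize);
  const text = `SELECT id, name FROM users ${where}ORDER BY id LIMIT $${values.length}`;
  return { text, values };
}

const page1 = usersPageQuery();   // first page of 100
const page2 = usersPageQuery(42); // everything after id 42
```

Unlike a growing OFFSET, each page here is an indexed range scan, so it stays fast even with 100k users.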


Lesson 6: approve with comments

For years, I thought there were only two options: approve or request changes.

Then I learned about "approve with comments."

"Approved! A few minor suggestions below, but nothing blocking. Feel free to address them in this PR or a follow-up."

This unblocks the author while still sharing feedback. They can merge now and iterate, or address comments if they have time.

I use this for:

  • Nitpicks that don't affect functionality
  • Suggestions for future improvement
  • Questions that don't need answers before merge

It keeps velocity high without sacrificing feedback.


Lesson 7: review your own PR first

Before requesting review, I go through my own diff. Line by line.

Every time, I find things:

  • console.log calls I forgot to remove
  • Comments that don't make sense
  • Variable names I can improve
  • Edge cases I missed

If I can find issues in my own code, why waste a reviewer's time on them?

I also add comments to my own PR explaining non-obvious decisions:

"I chose to duplicate this code instead of abstracting because the two use cases might diverge. Happy to refactor if you disagree."

This gives reviewers context and shows I've thought about the tradeoffs.


Lesson 8: the goal is shipping, not perfection

I've blocked PRs over formatting. Over variable names. Over stylistic preferences.

That was ego, not quality control.

The real question is: is this code good enough to ship?

Good enough means:

  • It works
  • It doesn't introduce bugs
  • It doesn't create tech debt we'll regret
  • Someone else can maintain it

It doesn't mean it's how I would write it. It doesn't mean it's the most elegant solution. It means it's shippable.

I've learned to ask myself: "If this ships as-is, will anything bad happen?" If the answer is no, approve it.


Lesson 9: respond quickly, review thoroughly

A PR sitting for 3 days kills momentum. The author context-switches. The code gets stale. Merge conflicts appear.

I try to respond to PRs within a few hours. Even if it's just:

"I'll review this properly tomorrow, but wanted to acknowledge I saw it."

Speed matters. But speed without quality is worse than slow.

When I do review, I take the time to do it right. I don't skim. I check out the branch. I run the code. I think about edge cases.

Fast acknowledgment + thorough review > slow everything.


Lesson 10: receiving reviews is a skill too

Getting your code critiqued is hard. I've felt defensive. I've argued over comments. I've taken feedback personally.

Now I approach reviews differently:

  • Assume good intent. The reviewer is trying to help, not attack.
  • Ask clarifying questions. If I don't understand a comment, I ask.
  • Say thank you. Someone spent their time improving my code.
  • Disagree respectfully. "I see your point, but here's why I chose this approach..."

The best engineers I've worked with take feedback gracefully. They don't argue, they discuss. They don't defend, they explain.

Code reviews are a conversation, not a judgment.


The cheat sheet

Principle                    In practice
Review intent first          "What problem does this solve?"
Keep PRs small               200-400 lines max
Critique code, not people    "This might..." not "You forgot..."
Explain the why              Teach, don't just correct
Label your comments          nit / suggestion / question / blocker
Approve with comments        Unblock while still giving feedback
Self-review first            Catch the obvious stuff yourself
Ship good enough             Perfect is the enemy of shipped
Respond fast                 Acknowledge quickly, review thoroughly
Receive gracefully           Assume good intent, say thank you

The lesson

Code reviews aren't about proving you're smart. They're about making the code better and the team stronger.

The best reviewers I know are generous with their knowledge and gentle with their feedback. They make you want to write better code, not dread their comments.

That's what I'm still learning to be.


This is part 4 of my "What I learned the hard way" series. Next up: what I wish I knew before launching my first SaaS.

Got questions? Hit me up on LinkedIn or check out more on my blog.