#pdxwp: Code Review Takes Two

We did a second pass at our code review meetup, and last night turned out much better than the first. The high point for me: most of the “presentation” was, in fact, discussion. The discussion proved far more valuable for everyone, since most of the twenty people in the room don’t do code review on a regular basis.

Here’s what we did:

  • Jeremy Ross submitted a section of code he had been working on, along with instructions on what kind of feedback he wanted.
  • On Saturday, I reviewed the code. I committed it as a single commit to a branch in a private GitHub repo. On that changeset, I did a line-by-line read-through, commenting as I went. To wrap the review up, I created a pull request explaining how I did the review, what I looked for, and how to interpret my feedback.
  • Saturday night, I prepared a reveal.js presentation with an introduction to code review and the contents of what I found in my review. reveal.js is a super slick tool for building an HTML/CSS/JS presentation from content written in Markdown.
  • Jeremy read and considered my review, then updated the presentation with his feedback.
  • I facilitated most of the presentation and discussion, and Jeremy talked through how he received my feedback.
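A minimal sketch of that review setup in git, using a throwaway local repo in place of the private GitHub one (the directory, branch, file, and commit names below are all hypothetical; the pull request itself was created through GitHub’s web UI):

```shell
# Throwaway local repo standing in for the private GitHub repo;
# every name here (directory, branch, file) is hypothetical.
mkdir review-demo && cd review-demo
git init -q
git checkout -q -b main
git config user.name "Reviewer"
git config user.email "reviewer@example.com"

# Baseline commit, so the review branch has something to diverge from.
echo "existing plugin code" > plugin.php
git add plugin.php
git commit -q -m "Initial import"

# Land the submitted code on its own branch as a single commit,
# so the whole submission shows up as one reviewable changeset.
git checkout -q -b review/jeremy-submission
echo "submitted code under review" >> plugin.php
git add plugin.php
git commit -q -m "Jeremy's code, submitted for review"

# This diff is what gets the line-by-line comments; on GitHub, a pull
# request from review/jeremy-submission into main collects those
# comments plus a summary of how the review was done.
git diff main..review/jeremy-submission
```

Keeping the submission to one commit on its own branch is what makes the line-by-line pass practical: the entire change is visible in a single diff view.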

reveal.js doesn’t produce great static slide output, and I used its Markdown feature, which requires Grunt to serve, so the bulk of what we covered will forever live on in the outline that follows.
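For anyone who wants to reproduce the tooling, the Grunt-era reveal.js workflow looked roughly like the sketch below. These commands follow reveal.js’s README from around that time and may differ in current versions.

```shell
# Sketch of the Grunt-era reveal.js setup; task names and ports
# may differ in newer releases.
git clone https://github.com/hakimel/reveal.js.git
cd reveal.js
npm install   # installs Grunt and the dev-server dependencies
grunt serve   # serves the deck, by default at http://localhost:8000
```

Slides are pulled in from Markdown via a section like `<section data-markdown="slides.md">` in index.html. Because that Markdown file is fetched over HTTP at runtime, the deck needs a running server, which is exactly why static output from such a deck is awkward.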

Advantages of code review

Advantages of pre-deploy code review over post-deploy audit:

  • Authors have a strong incentive to craft small, well-formed changes that will be readily understood, to explain them adequately, and to provide appropriate test plans, test coverage and context.
  • Reviewers have a real opportunity to make significant suggestions about architecture or approach in review. These suggestions are less attractive to adopt from audit, and may be much more difficult to adopt if significant time has passed between push and audit.
  • Authors have a strong incentive to fix problems and respond to feedback received during review, because unresolved feedback blocks their change from shipping. Authors have a much weaker incentive to address problems raised during audit.
  • Authors can ask reviewers to apply and verify fixes before they are pushed.
  • Authors can easily pursue feedback early, and get course corrections on approach or direction.
  • Reviewers are better prepared to support a given change once it is in production, having already had a chance to become familiar with and reason through the code.
  • Reviewers are able to catch problems which automated tests may have difficulty detecting. For example, human reviewers are able to reason about performance problems that tests can easily miss because they run on small datasets and stub out service calls.
  • Communicating about changes before they happen generally leads to better preparation for their effects.