Two proposed sessions for SRCCON 2015

SRCCON was my favorite conference last year, and in the running for favorite conference of all time. I liked it so much I’ve submitted two proposals for this year. You should too! Submissions are open until April 10th.

Continuous Integration for Content

There are lots of little attributes that define the “quality” of a piece of content, just as there are attributes that define code quality. Developers have continuous integration to run automated checks on their code; journalists have editors, who are prone to human error. It’s easy, and quite common, to forget to add a photo credit or to misspell the SEO title. What are some ways we can automate these errors out of existence? Let’s get together, present some real-world “quality” problems to work on, prototype, wireframe, and define algorithms, and then share our results.
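
To make the idea concrete, here’s a minimal sketch of one such check in Python. The post fields, rules, and thresholds are hypothetical stand-ins for whatever a newsroom’s CMS actually exposes, not part of any particular system.

    # A minimal sketch of a "continuous integration for content" check.
    # The field names and the 60-character limit are illustrative
    # assumptions, not tied to any particular CMS.

    def check_post(post):
        """Return a list of human-readable problems found in a draft post."""
        problems = []

        if not post.get("photo_credit"):
            problems.append("Missing photo credit")

        seo_title = post.get("seo_title", "")
        if not seo_title:
            problems.append("Missing SEO title")
        elif len(seo_title) > 60:
            problems.append("SEO title longer than 60 characters")

        return problems

    if __name__ == "__main__":
        draft = {"seo_title": "", "photo_credit": ""}
        for problem in check_post(draft):
            print(problem)

Wire something like this into whatever runs on save or publish, and a whole class of “oops” disappears before an editor ever sees the draft.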

Code review takes two

Code review is the single best way to level up your development skills. It’s also really hard! Let’s discuss code review methodologies as a group, and then pair up to practice.

#pdxwp: Code Review Takes Two

We did a second pass at our code review meetup, and last night turned out much better than the first. The high point for me: most of the “presentation” was, in fact, discussion. The discussion proved far more valuable for everyone, as most of the twenty people in the room don’t do code review on a regular basis.

Here’s what we did:

  • Jeremy Ross submitted a section of code he had been working on, along with instructions on what he wanted feedback on.
  • On Saturday, I reviewed the code. I committed it as a single commit to a branch in a private GitHub repo, then did a line-by-line read-through of the changeset, commenting as I went. To wrap up the review, I created a pull request explaining how I did the review, what I looked for, and how to interpret my feedback.
  • Saturday night, I prepared a reveal.js presentation with an introduction to code review and the findings from my review. reveal.js is a super slick tool for building an HTML/CSS/JS presentation out of content in Markdown.
  • Jeremy read and considered my review, then updated the presentation with his feedback.
  • I facilitated most of the presentation and discussion, and Jeremy talked through how he received my feedback.

reveal.js doesn’t produce great static slide output, and I used its Markdown feature, which requires Grunt to serve the slides, so the bulk of what we covered will live on in the outline that follows.

Advantages of code review

Advantages of pre-deploy code review over post-deploy audit:

  • Authors have a strong incentive to craft small, well-formed changes that will be readily understood, to explain them adequately, and to provide appropriate test plans, test coverage and context.
  • Reviewers have a real opportunity to make significant suggestions about architecture or approach in review. These suggestions are less attractive to adopt from audit, and may be much more difficult to adopt if significant time has passed between push and audit.
  • Authors have a strong incentive to fix problems and respond to feedback received during review, because it blocks them. Authors have a much weaker incentive to address problems raised during audit.
  • Authors can ask reviewers to apply and verify fixes before they are pushed.
  • Authors can easily pursue feedback early, and get course corrections on approach or direction.
  • Reviewers are better prepared to support a given change once it is in production, having already had a chance to become familiar with and reason through the code.
  • Reviewers are able to catch problems which automated tests may have difficulty detecting. For example, human reviewers are able to reason about performance problems that tests can easily miss because they run on small datasets and stub out service calls (a small illustration follows this list).
  • Communicating about changes before they happen generally leads to better preparation for their effects.
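
To make that performance point concrete, here’s a hypothetical sketch (mine, not from the meetup): an N+1 call pattern that a unit test with stubbed services and a tiny fixture will happily pass, but that a human reviewer can flag on sight.

    # Hypothetical example of a performance problem tests tend to miss:
    # one service call per post plus one per author. Harmless against a
    # two-item fixture with stubbed calls, painful against thousands of
    # posts hitting a real backend.

    def author_names(post_ids, fetch_post, fetch_author):
        names = []
        for post_id in post_ids:
            post = fetch_post(post_id)                 # N calls
            author = fetch_author(post["author_id"])   # N more calls
            names.append(author["name"])
        return names

    if __name__ == "__main__":
        # Stubbed "services", exactly as a unit test would provide them; the
        # test passes and says nothing about behavior at production scale.
        posts = {1: {"author_id": 10}, 2: {"author_id": 11}}
        authors = {10: {"name": "Ada"}, 11: {"name": "Grace"}}
        print(author_names([1, 2], posts.get, authors.get))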