Currently, works of journalism (articles, videos, galleries, graphics, etc.), regardless of subject (news, sports, entertainment, business, features, investigations, etc.), are measured quantitatively in the same way. An investigative piece that may be nowhere near as popular in pageviews with a mass audience (though sometimes it can be) is measured the same way a celebrity death story is. Either story could make a sensational splash, truly connect emotionally with readers, or both. Each has value, but there are different kinds of value across the different subjects journalists cover.
If we value impactful accountability journalism, why are we quantitatively equating it one-to-one with impactful entertainment news? When an investigation saves taxpayer money or even human lives, for example, we should measure its impact in a more multi-dimensional way, rather than with merely simplistic metrics, and measure it differently from journalism that has different goals. We should do this not just because the quantification would be more accurate (though still imperfect), but because it would better model the complex real-world response.
Specifically, editors at separate organizations asked us the same question: Can you share some of that data with us? You know, the topic data and the data on authors?
Begrudgingly, we agreed, and started to send out reports on a monthly basis.
Editors: “Hmm, this is great! Can we get this quicker?”
Parse.ly: “Uh, sure. We can give it to you weekly.”
Editors: “Awesome! Actually, it’d be great if we could get this daily.”
Parse.ly: “OK, what’s up here? Why do you care more about the data than the recommendations?”
Well, as it turns out, nobody had really shown them this data before, and it was simply eye-opening for the editorial team. They were using it to go beyond monitoring individual articles to understanding what was resonating with their audience.
Cue the second Aha! moment in early 2011. We took a step back and did some research on analytics tools for online publishers. What we found was astounding: almost no innovation had happened on the analytics side for online publishers. Most tools were one-size-fits-all systems that treated an e-commerce site the same as a content site, and that's obviously not the way to do it.
Content-based sites are dramatically different from e-commerce properties, from both a data and a business perspective.
It’s no wonder these publishers were clamoring for data that provided fresh insights on their property. Publishers need to know how their content breaks out by topic, what causes a post to go viral, why one author does better with search traffic than another, and a bevy of other key insights that are specific to their needs. We knew this was a big opportunity, and decided to dive head-first into the analytics space.
Sachin Kamdar — Hello Publishers, Meet Dash
A short list of questions publishers want answered that I believe could be answered with the right data:
- Who are my best writers?
- What topics are my audience most engaged in?
- Which types of pieces do best over time?
- What type of stories should I have my writers work on?
- When is the best time to publish?
- What’s the best length for a piece?
- Does including rich media help with engagement?
- Do my writers actually need to include links? How many?
What am I missing?
Obviously, most publishers know most of these by heart; that knowledge is key to running a successful business. What's more interesting is using this type of data as a baseline for experimentation.
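To make the "baseline for experimentation" idea concrete, here is a minimal sketch of computing per-author and per-topic engagement baselines from article records. The records and their field names (`author`, `topic`, `avg_engaged_sec`) are invented for illustration, not any real publisher's schema:

```python
from collections import defaultdict

# Hypothetical article records; field names are assumptions for illustration.
articles = [
    {"author": "Kim", "topic": "sports",   "pageviews": 1200, "avg_engaged_sec": 45},
    {"author": "Kim", "topic": "politics", "pageviews": 800,  "avg_engaged_sec": 90},
    {"author": "Lee", "topic": "politics", "pageviews": 600,  "avg_engaged_sec": 120},
    {"author": "Lee", "topic": "sports",   "pageviews": 2000, "avg_engaged_sec": 30},
]

def baseline(records, key):
    """Average engaged seconds per group: a baseline to compare experiments against."""
    totals = defaultdict(lambda: [0, 0])  # group -> [sum of engaged seconds, article count]
    for r in records:
        acc = totals[r[key]]
        acc[0] += r["avg_engaged_sec"]
        acc[1] += 1
    return {group: total / count for group, (total, count) in totals.items()}

by_topic = baseline(articles, "topic")    # e.g. politics averages 105 engaged seconds
by_author = baseline(articles, "author")  # e.g. Kim averages 67.5 engaged seconds
```

With a baseline like this in hand, an experiment (a new story length, a new publish time) becomes a simple comparison against the group's historical average rather than a guess.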
It’s important to remember the difference between creation and optimization, and how data can be used for each.
Thought: One of the most valuable features of Twitter as a publishing platform is that the writer has a much better sense of who they're communicating with. There's a "Following" list that puts names and reputations behind a readership. Furthermore, the writer can indirectly assess the likelihood of their content being consumed based on followers' account activity. "Blogs" and older publishing platforms don't have this vibrancy; they have pageviews, time on site, and other metrics distant from the purpose of publishing.
Fast and great support from the 37signals team. Metrics tracked:
- Percentage “Smiley” (or positive) ratings
- Average time to first response
- Average time to first resolution
- Percentage of cases taking more than four hours to reply
- Percentage of cases getting a response in the first hour
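The five support metrics above are all straightforward to compute from per-ticket records. A minimal sketch, assuming hypothetical ticket data with a `rating` field and `timedelta` durations for first response and resolution (the field names and sample values are my own, not 37signals'):

```python
from datetime import timedelta

# Hypothetical support-ticket records; field names and values are assumptions.
tickets = [
    {"rating": "smiley", "first_response": timedelta(minutes=30), "resolution": timedelta(hours=2)},
    {"rating": "smiley", "first_response": timedelta(hours=5),    "resolution": timedelta(hours=8)},
    {"rating": "frowny", "first_response": timedelta(minutes=50), "resolution": timedelta(hours=3)},
    {"rating": "smiley", "first_response": timedelta(hours=2),    "resolution": timedelta(hours=6)},
]

n = len(tickets)
# Percentage "smiley" (positive) ratings
pct_smiley = 100 * sum(t["rating"] == "smiley" for t in tickets) / n
# Average time to first response and to resolution
avg_first_response = sum((t["first_response"] for t in tickets), timedelta()) / n
avg_resolution = sum((t["resolution"] for t in tickets), timedelta()) / n
# Percentage of cases taking more than four hours to reply
pct_over_4h = 100 * sum(t["first_response"] > timedelta(hours=4) for t in tickets) / n
# Percentage of cases getting a response in the first hour
pct_within_1h = 100 * sum(t["first_response"] <= timedelta(hours=1) for t in tickets) / n
```

Note that the two percentage-of-response-time metrics are just thresholds on the same `first_response` distribution the average is drawn from, so all five numbers come out of one pass over the tickets.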
Blog posts about the BostonGlobe.com announcement. Andy Boyle is keeping track of all the blog posts about the launch, along with publication date and time, word count, and whether the writer did an interview for the piece. More analytics about information.
Idea: Prioritize frequently asked questions on an external facing documentation site based on how often the questions get asked in support tickets. Show the number of times a given question was asked this month as a way of indicating to the customer that the answer probably already solves their question.
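The prioritization idea above reduces to a frequency count over tickets. A minimal sketch, assuming tickets have already been mapped to canonical FAQ entries (the question strings are invented for illustration):

```python
from collections import Counter

# Hypothetical mapping of this month's support tickets to canonical FAQ entries.
ticket_questions = [
    "reset password", "export data", "reset password",
    "billing cycle", "reset password", "export data",
]

counts = Counter(ticket_questions)
# Order the docs page by how often each question was asked this month,
# keeping the count so it can be shown next to each entry.
faq_order = counts.most_common()
```

Surfacing the count ("asked 3 times this month") is what signals to the customer that the answer probably already covers their case.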
Data makes the world more visible.
At the end of March, I embarked on a personal initiative at the J-School to quantify as many of our processes as possible. My working thesis: if we can generate enough data about a system, and have a framework to understand it, we can be far more effective in what we do. Quite possibly way over 5-6%.
Judy Watson, associate dean at the J-School, asked me last week to pull together relevant usage and performance metrics for work we're doing on the web. They'll be a part of an annual report back to CUNY central. I thought it'd be fun to share them here too.
Measuring and increasing accuracy in journalism. Jonathan Stray outlines one approach. I think we need to throw more computing power at it.