From 6 to 7 pm this evening, I joined Selcen Onsan’s Tech Immersion class as a guest speaker. Tech Immersion is one of the five Entrepreneurial Journalism courses this spring. I want to write down a few thoughts on the session as a way of starting to iterate (and hopefully improve) my teaching methods. The notes we collaboratively generated on a Google Doc are at the bottom.
Ordered a Fitbit way back in August or September and it finally arrived today. Holy Moses. The device is sleek and brain-dead simple, and the website allows you to track everything under the sun in an elegant way. If it works well, this is the coolest thing I’ve come across in a while.
Texting a screengrab of Google Maps because you don’t know the exact address.
What Seed Media Group should be doing, what every media group should be doing, is to become a tech-oriented company (one of the reasons PLoS is successful is that it is essentially a technology-rich publishing company, with an incredible and visionary IT/Web team working with the editorial team in driving innovation).
Quite similar to what Michael Young of The Times said in March. You are not a newspaper; you are a news organization. You are not a media company; you are a technology company.
Google, in collaboration with The New York Times and The Washington Post, dropped a bombshell today in the battle for the future of news: Living Stories. The new project is described as “an experiment in presenting news, one designed specifically for the online environment,” and there are currently pages for eight different topics, including the climate change negotiations, the war in Afghanistan, and the healthcare debate.
There are four reasons why Living Stories are a Very Important Thing:
Topics are introduced with context. Each has an approachable, up-to-date summary at the top of the page that acts as a primer for the issue. The primer includes links, too; if the reader wants to learn more about a specific event presented in the summary, it’s just a click away. Let’s compare: The New York Times topic page for global warming and Google’s Living Story for climate change. In my opinion, Google’s information hierarchy wins.
Time is heavily leveraged for perspective. The clickable timeline with milestone headlines underneath the initial topic summary is a powerful method for understanding how the “living story” has unfolded to date. Stories are also presented in reverse-chronological order, making it easier to dive back into history for deeper understanding.
Filtering by the abstract components that make up an ongoing story is absolutely brilliant. For the Afghanistan page, this means “All coverage” can be filtered down to “The Global Response,” “Casualties,” and “The Afghan Elections,” among others.
“No updates since last visit.” The future of news is personalized. More importantly, personalized in the sense that the news knows what’s news to me.
A critical ethos of contextual journalism is to drive understanding. The goal should be to present a topic in a way that meets the reader where they are, and then gives them the opportunity to learn as much as they have time for. The nut to crack is how you scale this method of presenting information across all of the topics a news organization may cover. That riddle involves what the information architecture looks like, how you incorporate production into the editorial workflow, and how you ensure the pages stay consistent and up-to-date.
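To make the riddle concrete, here’s a minimal sketch of what the information architecture for a page like this might look like: a primer summary up top, milestone headlines on a timeline, coverage filterable by abstract component, and a notion of what’s new since the reader last visited. Everything here (the class names, the labels, the fields) is hypothetical, not how Google actually built Living Stories:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Article:
    headline: str
    published: date
    labels: set[str]          # abstract components, e.g. {"Casualties"}
    milestone: bool = False   # whether it surfaces on the timeline

@dataclass
class LivingStory:
    topic: str
    summary: str              # the up-to-date primer shown at the top
    articles: list[Article] = field(default_factory=list)

    def coverage(self, label: str | None = None) -> list[Article]:
        """All coverage, newest first, optionally filtered by component."""
        items = [a for a in self.articles if label is None or label in a.labels]
        return sorted(items, key=lambda a: a.published, reverse=True)

    def timeline(self) -> list[Article]:
        """Milestone headlines in chronological order."""
        return sorted((a for a in self.articles if a.milestone),
                      key=lambda a: a.published)

    def unseen(self, last_visit: date) -> list[Article]:
        """'What's news to me': everything since the reader's last visit."""
        return [a for a in self.articles if a.published > last_visit]
```

The point of the sketch is that all four features above fall out of one structure: label every article, mark the milestones, and the timeline, the filters, and the "no updates since last visit" notice are just different views over the same list.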
In the Times article about the announcement, Josh Cohen of Google News said “if [Living Stories] worked well, Google would make the software available free to publishers to embed in their sites, much as those publishers can now use Google Maps and YouTube functions on their sites.” From the business perspective, it’s again unfortunate that Google is the one seriously innovating with the intersection of technology and journalism. Derek Willis notes that Living Stories was built “in collaboration with news organizations” using their APIs. Google Search was built in collaboration with content producers and their XML sitemaps.
Content doesn’t matter without the package. The package is how you make the money, and Google looks like it’s doing serious experimentation with one key component of a rebooted system of news: context.
Informational interviews are a key part of finding stories, David says. He consumes a lot of coffee, talks with people about what they’re working on, and then also asks about what else they’re working on. That secondary information can lead to interesting pieces down the road.
Marshall has a detailed workflow for tracking down stories in the tech sector. He’s been working for ReadWriteWeb for the last year and a half, and is responsible for two to three posts a day. Most of the time, stories are “interrupt-driven” or dependent on the news of the day. The whole staff logs into a single Fever account to share RSS reading responsibilities.
One source of feeds is pretty ingenious. A research assistant dug up the people who first linked popular web services such as Twitter, Facebook, etc. on Delicious. He did so for a number of startups over the last couple of years and put all of that information on a spreadsheet. Based on this aggregate information, he was able to identify 15 or so people who regularly link upcoming web services before anyone else. Subscribing to these Delicious accounts yields multiple stories a week about hot new startups.
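The aggregation step is simple enough to sketch. Assuming you already have a spreadsheet mapping each startup to the users who bookmarked it earliest (the data here is made up, and the research assistant worked from Delicious pages, not an API), finding the repeat trend-spotters is just a frequency count:

```python
from collections import Counter

# Hypothetical data: for each startup, the Delicious users who
# bookmarked it first, in order of appearance.
first_bookmarkers = {
    "twitter":  ["alice", "bob", "carol"],
    "facebook": ["alice", "dave", "erin"],
    "tumblr":   ["bob", "alice", "frank"],
}

def trend_spotters(data: dict, min_hits: int = 2) -> list:
    """Users who show up among the first bookmarkers of multiple startups."""
    counts = Counter(user for users in data.values() for user in users)
    return sorted(u for u, n in counts.items() if n >= min_hits)
```

Anyone the count surfaces more than once or twice is worth subscribing to; one-off appearances are probably noise.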
Most of the ReadWriteWeb writers use Tweetdeck for Twitter. Marshall has the 4,000+ people he’s following organized into different categories, including NY Times, analysts, augmented reality, etc. The team has a Skype chat they keep open 24 hours for coordinating on stories. They use hashtags within the conversation to enable people to find information of a specific type (i.e. which stories need editing with #edit).
Libby Tucker notes the differences between David and Marshall’s reporting styles. David flies to Urbana, Illinois to interview a scientist, whereas Marshall notes that if he has to put his pants on, it’s a big day.
My second attempt gets a B-.
I’m a big fan of services that can reliably keep my data in sync across multiple computers. Dropbox is likely my all-time favorite, and allows me to effortlessly sync 50 GB of documents, code, and media between my laptop and my desktop (ahem, .Mac). CoPress has a folder we’ve shared amongst the entire team for making accessible meeting notes, documentation, legal information, etc. Evernote, even with a mediocre user interface, gives me quick access to my notes on any device. My notes are organized GTD-style, and I can easily search or filter by tag to get what I need.
Trust, however, is a very critical component of any relationship with a cloud or syncing service, and transparency is one method for achieving it. Dropbox is a pro in this regard; every account has at least 30 days of version history for anything that’s being synced.
Early yesterday, I made the move to Adium and decided that I finally wanted my Address Book accessible across multiple computers. Spanning Sync was the most obvious choice, as I’ve been using it for several months now to bring my calendar from iCal to Google Calendar and then to another iCal, with no serious complaints. Still, using it means placing blind trust in the service. Spanning Sync has a sync log, but the only way to revert to prior versions is to make your own backups and brute-force it. The same applies to Address Book information, which, in my situation, isn’t a great solution.
I did as clean of a sync process as I could think of to get it right the first time. The Address Book on my MacBook has the gold master of my contacts, so I backed up and did a one-way sync to Google (overwriting all of my contact data) and then a one-way sync down to my desktop.
The result? I have 794 contacts on my MacBook, 743 contacts in Gmail, and 742 contacts on my iMac. It almost worked. Kinda.
Spanning Sync presents two significant issues for me that also affect how much trust I place in cloud syncing services in general. First and foremost, the numbers don’t add up. Not all of my contacts were synced properly, some were deleted entirely along the way, and I have no way of discovering this until I unsuccessfully try to find a person’s contact information. Of the 10 or so numbers marked as “favorites” on my phone, 4 lost their corresponding address cards. For the convenience of having contacts synced across computers, I’m willing to deal with this to some degree. Secondly, version history needs to be more robust than a log file. Every cloud service should keep a changelog of a week or more so the user can go back and revert an object to a prior state.
Intuitively addressing these issues in any web product means a greater amount of trust in the relationship.
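The changelog I’m asking for doesn’t need to be complicated. Here’s a minimal sketch (all names and the retention policy are my own assumptions, not any vendor’s implementation) of per-object version history with a one-week window and a revert operation:

```python
from datetime import datetime, timedelta

class Changelog:
    """Minimal per-object version history with a retention window."""

    def __init__(self, retention_days: int = 7):
        self.retention = timedelta(days=retention_days)
        self.history = {}  # object id -> list of (timestamp, state) pairs

    def record(self, object_id: str, state: dict, when: datetime) -> None:
        """Snapshot an object's state, pruning versions past retention."""
        versions = self.history.setdefault(object_id, [])
        versions.append((when, dict(state)))
        cutoff = when - self.retention
        self.history[object_id] = [(t, s) for t, s in versions if t >= cutoff]

    def revert(self, object_id: str, to: datetime):
        """Return the most recent state recorded at or before `to`."""
        candidates = [(t, s) for t, s in self.history.get(object_id, [])
                      if t <= to]
        return max(candidates)[1] if candidates else None
```

A service that recorded a snapshot like this on every sync could have told me exactly which four address cards vanished, and let me get them back.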
On Tuesday, July 21 around 11 pm Pacific, I stumbled across a serious information security flaw in DuckWeb, the University of Oregon’s student information portal. For some of the work I’ve been doing with Publish2, I’ve been paying close attention to the composition and beauty of URLs. When printing out my degree audit for a trip down to Eugene the next day, I realized that the print version of the degree audit had a unique string of digits at the end of the URL. Curious, I changed the last two, refreshed, and ended up with someone else’s degree audit.
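This is a textbook insecure direct object reference: the digits at the end of the URL name a record, and the server hands it over without checking that the logged-in student actually owns it. A hypothetical sketch of the missing check (the data and function names are mine, not DuckWeb’s):

```python
# Hypothetical mapping of audit record id -> owning student.
AUDITS = {"1001": "alice", "1002": "bob"}

def fetch_audit(audit_id: str, logged_in_as: str) -> str:
    """Serve a degree audit only to the student who owns it."""
    owner = AUDITS.get(audit_id)
    if owner is None or owner != logged_in_as:
        # Deny access instead of trusting whatever id the URL carries.
        raise PermissionError("not your degree audit")
    return f"degree audit {audit_id} for {owner}"
```

The fix isn’t hiding the digits or making them harder to guess; it’s refusing to serve a record unless the session that requests it is authorized for it.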
In the interest of sharing my favorite podcasts of the previous year with my friend Shane, I thought I might open the recommendations to all. For the drive home for turkey day, here are three “world changing” conversations you should consider listening to:
Howard talks with Jon Udell about collective consciousness and self-organizing species, and why the mass collaboration we think is emerging right now isn’t really all that unique. Shane, DJ, and I did discuss the episode on a Fertile Ambition call a month or so ago, but we hit a deadlock over the multi-tasking theory Howard presents.
Brilliant arguments both for and against, and listening to the entire debate lends a better understanding of what the difficulties are in helping to bring basic needs to Africa.
So thought-provoking I’ve listened to it twice. The first time put me in a trance for part of a train ride back down from Seattle. In short, the premise is this: we’re creating untold numbers of automated bots, or narrow artificial intelligence, on the web for specific purposes. When left unchecked, as many are, these bots have the potential to cause very messy situations which could have negative real world implications. One of the author’s proposals is to build a second, secure network of only verifiably human entities.