media_buttons is only one of the many ways the Classic Editor can be customized. Wouldn’t it be great if there was a database covering all of them?
This is where you come in! Take a look through the Gutenberg Migration Guide. For each action, filter, and so on, we’d like to document real-world examples of how they’ve been used. Then, for each of those real-world examples, identify how the feature might be replicated in Gutenberg.
Ideally, the majority of WordPress users should be able to use Gutenberg on the day WordPress 5.0 is released. They'll hit "Update WordPress", navigate back to the editor, and continue publishing in Gutenberg with all of the functionality they expect in the Classic Editor.
But plugins! If any one of their active plugins is incompatible with Gutenberg, the WordPress user is likely to experience pain, misery, and bad fortune. Many WordPress installations have a dozen or more active plugins, so plugins are a significant risk vector for Gutenberg incompatibility.
Enter the Gutenberg Plugin Compatibility Database. The goal for this crowdsourcing tool is to identify whether or not WordPress.org plugins are compatible with Gutenberg. With this data set, we'll be able to:
Know the most likely causes of incompatibility.
Focus developer outreach on the highest impact problems.
Proactively educate WordPress users on whether or not their WordPress installation is ready for Gutenberg.
The only gotcha: we need lots and lots of person-hours for testing. If each plugin takes roughly 1 minute to test, we'll need ~75 person-hours to get through the remaining ~4500 plugins in the database.
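As a quick sanity check on that estimate, here is the back-of-the-envelope math (the one-minute-per-plugin figure and the ~4500 plugin count are the rough numbers above, not measurements):

```python
# Rough testing-effort estimate for the plugin compatibility database.
plugins_remaining = 4500   # approximate number of untested plugins
minutes_per_plugin = 1     # rough per-plugin testing time

total_minutes = plugins_remaining * minutes_per_plugin
person_hours = total_minutes / 60
print(person_hours)  # 75.0
```

In other words, 75 person-hours is a floor; any plugin that needs more than a glance pushes the total up.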
Check out the project README.md for a more complete introduction to what's involved, including a definition of "Gutenberg-compatible", an explanation of why only 5000 plugins are in the database, and other design decisions.
Do you or someone you know have access to lots of person-hours (e.g. WordCamp contributor day, hosting support team, etc.)? I'd love to chat! Feel free to leave a comment, ping me on WordPress.org Slack (I'm 'danielbachhuber'), or get in touch however is most convenient.
Where is crowdsourcing at in 2010? How is crowdsourcing different from open source journalism, and which is appropriate for which types of stories? This is a listing of links that tries to illustrate the differences and similarities between crowdsourcing and open source journalism. How you structure a project with many participants will have a significant impact on the end results.
The Jane’s incident takes Slashdot’s evolution one major step forward. Slashdot readers are now actively shaping media coverage of the topics near and dear to their geeky little hearts. They are helping journalists get the story right, which is a far cry from exerting censorship. Just as open source programmers would critique a beta release of software filled with bugs, the Slashdot readers panned the first release of Jane’s journalistic offering — and the upgrade, apparently, will be quick to follow.
The original article.
Why the open source way trumps the crowdsourcing way
In essence, open source projects have many contributors and many beneficiaries while crowdsourcing projects have many contributors and few beneficiaries. Open source is advantageous because “everyone who contributes also benefits.” When crowdsourcing is a competition, there are limited beneficiaries and the effort of everyone else can be wasted.
What I Learned from Assignment Zero
Jay Rosen debriefs on Assignment Zero, a distributed reporting project in partnership with Wired.com, with the goal of tracking “the spread of peer production and wisdom-of-the-crowd efforts across the social landscape, including the practice of crowdsourcing.” They learned they needed to: understand and articulate the different styles of labor, grok contributors’ motivations, and plan for unexpected levels of participation. Also see Derek Powazek’s review.
I talked a few steps ago about the retention of knowledge and the speed of spreading knowledge but what I’m really talking about here is the creation of knowledge. And I’d like to be able to come up with a better term than crowdsourcing which, as I mentioned, uses less than 1% of the population working on these problems. Since there will be problems in the future that we haven’t even thought of, in the face of that, what we want to do is maximize our problem-solving machinery. What I think we want to do as we democratize education is move from crowdsourcing really to something like ‘societysourcing’ where we’re getting 10%, 50% of the population involved with solving problems. It goes without saying that vast numbers of people on the planet will not take the opportunity to get an Ivy League education but, for the first time in history, it’s widely available. […] We need to get everyone involved with solving problems.
Across every industry and system of human society, this is where there is opportunity.
I’m curious to see if there is a reputation system built into it. As they say, this works based on the participation of experts and non-experts. How do you gauge the expertise of a sweeper? And I don’t mean to imply, as a journalist, that journalists are ‘experts’ by default. For instance, I know a lot about US politics but consider myself a novice when it comes to British politics.
To take a step back, Swift River is a project to “crowdsource the filter” for real-time crisis reporting. Ushahidi provides a platform for aggregating the information around a crisis but, when a crisis situation explodes metaphorically or literally, the information coming in can quickly overwhelm the people trying to make sense of it. Swift River will enable an observer to create a new instance for a given situation, add RSS feeds from various sources including news publications and Twitter, and then additional users will be able to come in as “sweepers” to curate those incoming bits of information and float the most important to the top.
In the comments, Jon mentions that the three “most critical aspects are the trust algorithm (veracity), predictive tagging and filtering out redundancies and inaccuracies.” The first, in my opinion, will be the most challenging, and hopefully most rewarding, piece of the riddle. They’ll be able to scale their ability to float accurate information if they focus on identifying the trustworthy people instead of the trustworthy information.
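One way to picture “trust the people, not the individual reports” is to weight each incoming item by its source’s reputation and sort on that weight. A minimal sketch — the report structure, source names, and reputation scores here are all invented for illustration, not part of Swift River:

```python
# Hypothetical reputation scores per source (0.0 = untrusted, 1.0 = fully trusted).
reputation = {"field_reporter": 0.9, "wire_service": 0.7, "anon_tweet": 0.2}

# Incoming reports during a crisis, tagged only with their source.
reports = [
    {"source": "anon_tweet", "text": "bridge collapsed downtown"},
    {"source": "field_reporter", "text": "bridge intact, road closed"},
    {"source": "wire_service", "text": "officials confirm road closure"},
]

# Float reports from the most trusted sources to the top; the report's
# content is never inspected, only who it came from.
ranked = sorted(reports, key=lambda r: reputation[r["source"]], reverse=True)
for r in ranked:
    print(f'{reputation[r["source"]]:.1f}  {r["text"]}')
```

The appeal of this approach is that reputation scores scale: vetting a source once prices every future report from that source, whereas vetting reports one by one never gets cheaper.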
A couple of weeks ago on Twitter, I observed that the crowd is the least important part of crowdsourcing. More often than not, you couldn’t care less about the opinion of the crowd as a whole. What you really want is an authoritative answer, or field report, from the most knowledgeable person in that crowd.
There’s talk around town about adding a journalism session to BarCamp Portland. This should be a time to brainstorm and collaborate on the future of news in the Portland area, instead of just being a space for journalists and bloggers to come together and try to resolve their issues. Let’s have an idea-generating session on what the journalism needs of Portland are, how we’ll be able to fill those needs from the grassroots if/when The Oregonian implodes because of its terrible CMS, and then, in turn, how we’ll be able to monetize that. This is something where perspectives from both camps, the journalists and the bloggers, would offer value to the conversation.
To provide fodder for this discussion, listen to the most recent installment of Dave Winer and Jay Rosen’s Rebooting the News. One of the ideas that I think will “save journalism” is the digital assignment desk Jay starts talking about near the end. His part of the idea is this: a tool to map out all of the particulars that might need to be reported on in the coverage of any given issue. Once the editorial team has this laid out, they can then decide what resources they want to apply and where.