Writing for the Chronicle of Higher Education in 2012, Timothy Messer-Kruse described his failed efforts to penetrate Wikipedia’s gravitational field. He begins:
For the past 10 years I’ve immersed myself in the details of one of the most famous events in American labor history, the Haymarket riot and trial of 1886. Along the way I’ve written two books and a couple of articles about the episode. In some circles that affords me a presumption of expertise on the subject. Not, however, on Wikipedia.
His tale of woe will be familiar to countless domain experts who thought Wikipedia was the encyclopedia anyone can edit but found otherwise. His research had led to the conclusion that a presumed fact, often repeated in the scholarly literature, was wrong. Saying so triggered a rejection based on Wikipedia’s policy on reliable sources and undue weight. Here was the ensuing exchange:
“Explain to me, then, how a ‘minority’ source with facts on its side would ever appear against a wrong ‘majority’ one?” I asked the Wiki-gatekeeper. He responded, “You’re more than welcome to discuss reliable sources here, that’s what the talk page is for. However, you might want to have a quick look at Wikipedia’s civility policy.”
(You can relive his adventure by visiting this revision of the article’s talk page and clicking the Next edit link a half-dozen times. You have to dig to find backstories like this one. But to Wikipedia’s credit, they are preserved and can be found.)
Timothy Messer-Kruse’s Wikipedia contributions page summarizes his brief career as a Wikipedia editor. He battled the gatekeepers for a short while, then sensibly retreated. As have others. In The Closed, Unfriendly World of Wikipedia, Internet search expert Danny Sullivan blogged his failed effort to offer some of his expertise. MIT Technology Review contributor Tom Simonite, in The Decline of Wikipedia, calls Wikipedia “a crushing bureaucracy with an often abrasive atmosphere that deters newcomers” and concludes:
Today’s Wikipedia, even with its middling quality and poor representation of the world’s diversity, could be the best encyclopedia we will get.
That would be a sad outcome. It may be avoidable, but only if we take seriously the last of Wikipedia’s Five pillars. “Wikipedia has no firm rules,” that foundational page says; it has “policies and guidelines, but they are not carved in stone.” Here is the policy that most desperately needs to change: Content forking:
A point of view (POV) fork is a content fork deliberately created to avoid neutral point of view guidelines, often to avoid or highlight negative or positive viewpoints or facts. All POV forks are undesirable on Wikipedia, as they avoid consensus building and therefore violate one of our most important policies.
That policy places Wikipedia on the wrong side of history. Not too long ago, we debated whether a distributed version control system (DVCS) could possibly work, and regarded forking an open source project as a catastrophe. Now GitHub is the center of an open source universe in which DVCS-supported forking is one of the gears of progress.
Meanwhile, as we near the 20th anniversary of wiki software, its inventor Ward Cunningham is busily reimagining his creation. I’ve written a lot lately about his new federated wiki, an implementation of the wiki idea that values a chorus of voices. In the federated wiki you fork pages of interest and may edit them. If you do, your changes may or may not be noticed. If they are noticed they may or may not be merged. But they belong to the network graph that grows around the page. They are discoverable.
In Federated Education: New Directions in Digital Collaboration, Mike Caulfield offers this key insight about federated wiki:
Wiki is a relentless consensus engine. That’s useful.
But here’s the thing. You want the consensus engine, eventually. But you don’t want it at first.
How can we ease the relentlessness of Wikipedia’s consensus engine? Here’s a telling comment posted to Timothy Messer-Kruse’s User talk page after his Chronicle essay appeared:
Great article. Next time just go ahead and make all of your changes in one edit, without hesitation. If you are reverted, then make a reasonable educated complaint in the talk page of the article (or simply write another article for the Chronicle, or a blog post). Other people with more, eh, “wikiexperience” will be able to look at your edit, review the changes, and make them stand.
To “write another article for the Chronicle, or a blog post” is, of course, a way of forking the Wikipedia article. So why not encourage that? There aren’t an infinite number of people in the world who have deep knowledge of the Haymarket affair and are inclined to share it. The network graph showing who forked that Wikipedia article, and made substantive contributions, needn’t be overwhelming. Timothy Messer-Kruse’s fork might or might not emerge as authoritative in the judgment not only of Wikipedia but of the world. If it did, Wikipedia might or might not choose to merge it. But if the consensus engine is willing to listen for a while to a chorus of voices, it may be able to recruit and retain more of the voices it needs.
46 thoughts on “A federated Wikipedia”
I know that we focus on the fork side of the cycle when we tell the Git/Github story but is part of the story also the fact that Git also made the merge/pull request easy, which eased the fear of the possibility of forking?
I agree with you that the policy as written is problematic. But then the whole notion of neutrality is problematic as well. But I presume the fear that underlies that policy is that same risk that was there for open source software projects. Or is the “social merge” that you’re suggesting here fine?
I ask because we’re in a similar situation, but with much less power disparity, when it comes to community data projects. I wrote about this in http://cameronneylon.net/blog/fork-merge-and-crowd-sourcing-data-curation/ These risks put me off forking datasets to improve them, but I’d be much less worried if I thought there were relatively tractable ways of merging my changes back in. That works great on Github and not so well in GoogleDocs.
Your post about the Wellcome APC (article processing charge) data highlights another aspect of the granularity issue I raised in my response to Mike Caulfield. We still mostly version at the file level. But programs are made of modules and functions, articles are made of paragraphs (and paragraph-like objects), datasets are (typically) made of rows.
I’m sure Greg Wilson will chime in here at some point, because as he and I have been discussing for years, it’s crazy that our versioning tools still don’t operate at the right granularity.
That’s why the paragraph-level editing and versioning in FedWiki (http://blog.jonudell.net/2015/01/12/thoughts-in-motion/) is so interesting to me. Although FedWiki itself still lacks effective diff/merge capability, its intrinsic chunking of content — every paragraph or paragraph-like thing has an ID and a history — begs to be applied to all kinds of data. I think it could radically reduce fork/merge friction and enable better interop between the different kinds of tools and methods you describe in your post.
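To make the chunking idea concrete, here is a minimal sketch (the names and data are hypothetical illustrations, not FedWiki’s actual data model): each paragraph-like item carries a stable id and an append-only history, so a diff can match paragraphs by identity rather than by position in a file.

```python
import copy
import uuid

class Item:
    """A paragraph-like chunk: stable id, append-only version history."""
    def __init__(self, text):
        self.id = uuid.uuid4().hex[:16]   # identity survives edits and forks
        self.history = [text]

    def edit(self, new_text):
        self.history.append(new_text)

    @property
    def text(self):
        return self.history[-1]

def changed_items(fork, original):
    """Item-level diff: match chunks by id, not by position in the page."""
    orig = {item.id: item for item in original}
    return [i.id for i in fork if i.id in orig and orig[i.id].text != i.text]

page = [Item("The Haymarket defendants mounted no defense.")]
fork = copy.deepcopy(page)                # forking preserves item ids
fork[0].edit("The defense called dozens of witnesses.")
```

Because identity travels with the chunk, `changed_items(fork, page)` can report exactly which paragraph diverged, which is what would let a merge target one paragraph instead of a whole article.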
Some of that vision is already in FedWiki, by the way. It has paragraph-like data plugins (http://blog.jonudell.net/2015/01/01/fedwiki-for-collaborative-analysis-of-data/) and a notion of composing pipelines of pages containing them in ways that encourage exploration of alternatives.
I actually wrote about Federated (and P2P) Wiki several years ago: http://p2pfoundation.net/P2P_Wiki
Parts of it are a bit technically worded. And I ramble a bit in retrospect. So I have to update it. But you’ll find it interesting. Here’s some highlights of ideas from it:
* “Playback” of revision history. Heard of Etherpad? It was writing software where you could see the entire revision history of something you wrote. Literally you hit “play” and could see all the way back to the first character you typed to start a “document”.
* I mention how FedWiki (and P2P wiki generally) changes the funding structure of wikis. Since FedWiki isn’t controlled by one organization, that changes the funding model.
* I make an analogy between political panarchy and peer-to-peer wikis. And I get into the political economy associated with different media, e.g. how blogs can be autocratic.
I also document how a lot of people (like you) had ideas along the lines of FedWiki, or that go further. And I link to several projects that in one way or another tried to implement those ideas.
Funny, our brains are in the same space today. On Hapgood I was looking at the same issue from another angle.
I can see a world where Messer-Kruse not only writes Wikipedia forks, but curates the articles (and forks of articles) he believes are worthwhile. Maybe the vast majority of people still end up on a consensus site, but students in classes are told to “use the Kruse articles” where possible. Over time those forks end up in Wikipedia, but even before then the world is still a better place.
Thanks for connecting this story — I had forgotten about it, and it shows another model of how such things could work.
It also occurs to me that forking might be a more granular thing. Messer-Kruse was really focusing on a few small parts of the page. As you and I have been exploring recently in FedWiki, editing and versioning at paragraph granularity is a powerful construct. If Messer-Kruse had the option to fork and improve a paragraph, the fork/merge cycle could exhibit more lightness and agility, and create less fear and angst.
Huh — now *that’s* interesting. If you could get the changes sorted out, you could get the Messer-Kruse “overlay” while still reading Wikipedia. Or something that says, hey, there are 2 other versions of this paragraph in your neighborhood. If you don’t much care about that paragraph, you keep reading. It’s good enough.
But if you do care, you shift-hover, or click, or something, and get a list of alternates.
Interestingly, it could also show at a glance which paragraphs were most contentious. Citing Obama’s birth date from Wikipedia, you’re free and clear. Birthplace? You might want to check to make sure the current version is accurate.
Even in a non-federated system like Wikipedia, that would make a huge difference in helping people to understand dissent, points of contention, and spaces where there may be a politically motivated edit war going on.
Two things strike me here. One is the idea of paragraph-level fork and merge that you’re exploring in FedWiki. I can’t really justify it with data, but it feels like this might be a tractable sweet spot between character-level versioning, which is too much for people, and document-level versioning, which seems often to be the default – or at least is what happens when the world of binary(ish) doc files that I inhabit meets the world of proper version control.
The second is the extent to which this idea of multiple versions of Wikipedia might be handled by annotations rather than forking. Or are annotations and paragraph-level (or arbitrary-granularity) fork functionality actually the same thing in practice? As I understand SFW, you always have a local fork of the whole page? If, e.g., a W3C-standard annotation is a JSON object with references, how different is this from an SFW page JSON with its references back to its provenance?
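For comparison’s sake, here is a rough side-by-side sketch. The first object follows the general shape of the W3C Web Annotation model; the field values, URLs, and the SFW-style page are illustrative assumptions, not actual data. Both models bind content to a source document plus a sub-page selector or id, which is what makes a crosswalk between them seem plausible.

```python
# A W3C Web Annotation-style object: a body attached to a target + selector.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {"type": "TextualBody", "value": "This claim is disputed."},
    "target": {
        "source": "https://en.wikipedia.org/wiki/Haymarket_affair",
        "selector": {"type": "FragmentSelector", "value": "para-7"},
    },
}

# An SFW-style page: story items with ids, and a journal recording provenance.
fedwiki_page = {
    "title": "Haymarket affair",
    "story": [{"type": "paragraph", "id": "para-7", "text": "..."}],
    "journal": [{"type": "fork", "site": "original.example.com"}],
}

# The annotation points at (source, selector); the forked page carries
# (journal provenance, item id). Mapping one pair onto the other is the
# crosswalk being asked about.
```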
To clarify, FedWiki doesn’t do paragraph-level fork/merge, however it is equipped for that because editing happens in paragraph chunks, each having its own id to which versions can be associated.
FedWiki also lacks, btw, notification that your page was forked, and that’s something I think it will need. (Maybe http://indiewebcamp.com/Webmention?)
I’m not current with W3C annotation but what you suggest sounds plausible/doable/interesting.
Regarding data, do you see an opportunity for row-level granularity?
The insufficiency of nested replies! In reply to Jon’s comment below. Yes, sorry, I was conflating two issues there – the idea of forking and the versioning that you were showing. I do agree that one thing I was struggling with in my mental model of FedWiki was what a pull request might look like. Webmention as a protocol seems like a potential solution.
In terms of row level versioning in data I think that makes a lot of sense. I was reading elsewhere this week that the concept of a “paragraph” is actually really difficult to pin down. I wonder whether it is exactly the slipperiness that makes it a good level of granularity to work with? This seems less true of data, where row level seems to make a lot of sense. And where the granularity of cell/row/table seems pretty clear.
I think the broader question I was working towards (which you start to play with below with Hypothesis service) is the question as to how the provenance model for SFW matches or does not match (or could match) with an annotation model. That’s a badly framed question but I guess I’m asking whether one could crosswalk between the two data models to deliver some of the functionality that seems implicit in this discussion.
I think that there is a danger in seeing forking as some sort of solution to what is mainly an issue of power relations in a ‘community’ with a dominant culture. I have loved playing around on fed wiki and think it has lots of possibilities, but to achieve a culture different from Wikipedia will take a lot more than forking. Human nature being what it is, any technology innovation can be used to reinforce existing power relations whatever the good intentions of those involved in innovation and (linked) use. Rules aren’t the same as ethics, as your article and Messer-Kruse’s demonstrate. I don’t think it’s a coincidence that the percentage of women among OSS developers is even lower than the percentage of women among Wikipedia editors.
“any technology innovation can be used to reinforce existing power relations whatever the good intentions of those involved”
Note that although Wikipedia’s male dominance is a current news topic, there’s another important axis of bias. Women aren’t the only ones excluded; so are all kinds of experts like Messer-Kruse. Wikipedia reveres peer-reviewed academic publications but excludes the people who write them. Of course there’s another dominant culture in play: the academy. It does not value, and so discourages, what Messer-Kruse was trying to do.
In the alternate universe I like to imagine, where knowledge production in Wikipedia cooperates with knowledge production in the academy, you’d need a decentralized process controlled by neither.
Once upon a time there was a peer-to-peer collaboration technology called Groove. It was used successfully during the Iraq war to broker communication among cooperating teams representing a number of allied governments. It was politically impossible for any one of them to be a central host. When everyone was just a node in the peer network, though, communication flowed.
I need to learn more about Groove. What drove me to federated wiki at first was my struggle with trying to get some cross-institutional wiki going. Very much the same problem — your UW profs aren’t going to encourage students to work on a WSU wiki, and vice versa. It struck me the genius of student blogging was once you reduce ownership to the level of the student, boundaries become porous. Federated wiki potentially brings that same porousness to collaborative work. So the Groove example is very pertinent.
Incidentally, this is something I think people continue not to get, even in blogging. All this energy is spent trying to build systems that allow institutions to collaborate, instead of dropping control to the individual, where these issues disappear.
Some background here: http://www.openp2p.com/pub/a/p2p/2000/10/24/ozzie_interview.html
What excited me about Thali (we’ll see what it morphs into): it’s what you’d do if you started building Groove in 2014 instead of 1998, on the rich open source substrate now available.
Note that after Ray took Groove to Microsoft which Sharepoint-ized it, and then after Ray got Azure going and left Microsoft, he looked back on Groove as a niche product. Which it was. People working in humanitarian services and disaster relief, who had to rely on ad-hoc infrastructure and peer networking, loved it and have yet to replace it. But that’s not most people. So Talko (http://blog.jonudell.net/2014/10/31/lets-talk/) is very much about shared spaces that are centrally hosted.
I missed these interesting responses (remembering to click the notify this time). I can’t quite express this properly Mike but I feel that it’s about more than the individual and distributed technology in achieving collaboration/cooperation. A project that I am trying to set up with a friend who is a historian is to collect writings about the history of education in the industrial/ post-industrial era to learn from previous interventions by well-meaning groups – their mix of successes and failures as they grow and often become institutionalised. I am thinking of things like faith-based education and ragged schools in 19th century industrial areas, trades union education, etc. Is what these interventions had in common an explicit ethical dimension? An interesting current day take on this is http://www.ragged-online.com/
My vain hope is that history may have something to tell us in making change in learning and knowledge in the present day.
Just looked up Groove. Originally developed by Ray Ozzie?!?!? How come we aren’t seeing more of that from MS, I wonder?
Following this discussion with intense interest.
I’m getting ready to help with a Wikipedia edit-a-thon. This kind of event has well-documented procedures that seem to address many of the concerns articulated in this article and thread, primarily through enlisting the help of folks on the “inside” at Wikipedia so that the work done by the new editors doesn’t vanish or trigger the “alien-repelling antibodies” that seem always on the prowl (sometimes for good reasons, sometimes not) at Wikipedia.
I read Messer-Kruse’s recent Chronicle post reviewing Tom Leitch’s new book on Wikipedia (full disclosure: Tom’s a friend and colleague) and I have to say that I’m not persuaded by M-K’s arguments. I’ll keep reading. If the point is that an expert cannot write his or her expertise directly into Wikipedia, I see that as a feature, not a bug, even in the lamentable case that the expert might actually be right and the published sources might be wrong (or incomplete). Wikipedia is not a scholarly journal, or even a newspaper, though it can foster certain aspects of scholarship and scholarly communication. I know that Britannica used to enlist experts to write (or sign) articles on their specialties, but even this strikes me as very dicey. Better to have a messy, fraught consensus-of-record that gets shifted by other means than to make Wikipedia be something other than what it is designed to be. I don’t think Wikipedia’s problems indicate or are linked to problems in academic culture, at least not fundamentally. Sadly, those problems are numerous and constitute a universe of their own.
I’m not talking about power relations at this point – that’s a very important topic, but distinct from the point I’m trying to make.
Federated wiki has started to excite me, as an idea and as a practice, but for other reasons than the idea that it will allow us to share forked personal Wikipedias (a contradiction in terms). I don’t see FedWiki as solving problems raised by Wikipedia so much as being a Memex or (to use an older metaphor) a Commonplace Book on steroids, with attribution, sharable at various granular levels, though we won’t see the real goodness until we have a way for folks to know their content has been forked.
The FedWiki idea might also be a framework for getting at the problem of comments on blogs that end up living only on a particular post on a particular person’s blog. Writing one’s own blog post instead of leaving a long comment like this one is a best practice, but at this moment I’d like to be able to do both somehow.
I have more to say on Wikipedia, especially the NPOV policy, but this comment is already far too long.
Gardner — Super happy you’ll join us for the March Happening (and the February orientation). You’ll find that it’s INCREDIBLY messy, and maybe even hair-pullingly frustrating at first, but then at a certain point (hopefully) a bit addictive.
I’ve come to see the initial use as a sort of hybrid of collaborative wiki and commonplace books (or Memex, NLS, KMS, etc). I envision Happenings as “barn-raisings” that create new densely networked resources quickly with communities that may or may not persist afterwards (I call this idea “sites as hashtags”). People can listen there, or listen to the lifestream of writers involved, who may be writing about many unrelated things.
That personal stream and personal reference is really important and I don’t want to lose that piece of the story. It may be that the “Networked Memex” piece of it is the biggest potential. It certainly pulls you in.
But these federated sites could also be interesting animals. We’ve tried things before where at a conference everybody shares what they have on a wiki, but people don’t, really, for a number of reasons, not the least of which they know that the wiki will die like most wiki besides wikipedia. But what about a federated conference site, where you are taking your notes on the conference and linking them to others?
This could be exhilarating, and incredibly productive. And I think it would help with the MAJOR problem with Wikipedia, which is that it is the only game in town. There’s a wealth of conversational, rhetorical material on the web on any topic. But it’s hard to find short readable expository pieces anywhere but Wikipedia. Wikipedia is *hospitable* — it is welcoming to foreign readers, not forcing them to read ten years of blog posts to understand an issue, which is why it’s the go-to resource. And weirdly it’s become the one place you can get that — many places on the web are convivial but few are hospitable. (“How-to” sites being a notable exception.)
Having sites like these, where dissent can be seen and easily surfaced, would make a nice counterbalance to Wikipedia.
But again, I think these two things — quasi-wiki-like structures and networked memexing have a synergy.
One final note, have I mentioned that it’s very messy right now? ;) Seriously though, we’re trying to evolve a writing style and a technology in tandem, and I’ll be damned if the writing style isn’t the hardest part. Part of the attraction of making wiki-ish articles is that it’s a starting point stylistically that people can get. Hopefully they move out from that, but it’s a better starting point for the form than blogging.
If a forked personal Wikipedia sounds like a contradiction in terms, then I’ve explained poorly.
A big part of the sea-change wrought by Git, and amplified by GitHub, had to do with connotations associated with forking. In the early days of open source software, that was a bad thing. It meant your community had failed to remain coherent. Forking was expensive and disruptive. That was true socially, but also technically, and the two were related. In centralized source control systems, a fork — or even a branch — meant a lot of gears had to grind, a lot of context had to be transplanted in ways that couldn’t easily be undone or recombined.
One of the reasons Git is conceptually challenging for a lot of people (including programmers who have only used centralized source control systems) is that Git makes forking — and branching — cheap and non-disruptive. And that it does so by means of indirection. Where other systems move data around, Git just rearranges pointers.
Here’s a common workflow in GitHub. You have a repository full of stuff. I fork it, create a branch within it, and make *a single small change*. This was unthinkable in heavyweight centralized systems. In Git that branch is a throwaway construct. You can easily delete it, and ideally you will when its job is done. But meanwhile it becomes the axis of a workflow. I submit a pull request asking you to review the branch. Our conversation about whether you’ll do that, and if so how, and if not what I’ll need to do to retry the request, is bound to the request, accessible to both of us and anyone else who cares to observe, now or later, what happened.
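The “rearranges pointers” point can be made concrete with a toy model (deliberately simplified, not Git’s real object model): commits are immutable nodes in a graph, and a branch is nothing but a named pointer to one of them, so creating a fork or branch costs a single dictionary entry rather than a copy of the data.

```python
class ToyRepo:
    """Toy model of Git's cheap branching: branches are mere pointers."""
    def __init__(self):
        self.commits = {}                  # id -> (parent_id, message)
        self.branches = {"master": None}   # name -> commit id
        self._n = 0

    def commit(self, branch, message):
        cid = f"c{self._n}"
        self._n += 1
        self.commits[cid] = (self.branches[branch], message)
        self.branches[branch] = cid        # just advance the pointer
        return cid

    def branch(self, new_name, from_name):
        # No data is moved or copied: one new pointer, trivially deleted later.
        self.branches[new_name] = self.branches[from_name]

repo = ToyRepo()
repo.commit("master", "article as it stands")
repo.branch("one-small-change", "master")
repo.commit("one-small-change", "fix one disputed fact")
```

A pull request, in these terms, is a conversation attached to the `one-small-change` pointer, asking the keeper of `master` to advance it; deleting the branch afterward removes one dictionary entry, not any history.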
If we play out a Wikipedia edit scenario from this perspective, there is no sense in which Wikipedia is forced to become something other than that for which it was designed. The Wikipedians who control the master repository for that page have exactly as much power as they do now. The difference is in the process of collaboration. I don’t just make a change and hope you’ll accept it. I formulate the change in my branch, and ask you to review it. That request is visible to everyone, as is your response, and our subsequent workflow, and the final outcome whatever it may be. And it’s all transparent in a much more coherent way than in the current Wikipedia workflow.
Experts shouldn’t write encyclopedias, but they should be able to collaborate effectively with those who do. That collaboration is manifestly broken in today’s Wikipedia. I’m not saying M-K should seize control of that article. I am saying a more orderly and effective workflow could enable the needed collaboration.
Software developers have recently gained a lot of experience with such workflow, but that’s a historical accident. It’s needed everywhere, and the patterns — if not the specific tools and environments — won’t remain part of the unevenly distributed future.
It’s worth noting that, in informal ways, Wikipedia has done this when things got really bad. In Gamergate, the back and forth was so nasty that Wales set up this for the gaters:
And that was partially frustration, but it was also because at some point radically incompatible visions have to fork so they can be viewed side by side and compared.
In this case the article that resulted served to show that the GG vision of the article was a rambling list of loosely connected events and complaints with no real center. It could have also shown that the “legitimate” GG wikizens had a workable vision for the page that was fair-minded but being shouted down in edit wars.
This is kind of what I mean by “You want consensus but not at first.” There are times visions have to fork so that each side can educate the other. And there are times they have to fork so that people can realize one side of the debate is pretty vacant. Both are useful.
OK, P.S.: the problem of granularity gets wicked at certain levels. Words are pretty easy to define as units of meaning. Sentences too. But paragraphs are one of the great mysteries of semantic organization, unless one follows so mechanical a process that the paragraphs become wearyingly predictable. Some “granules” are tougher to map as meaningful units of communication than others.
Agreed. Among other limitations, the primitive implementation I’ve got so far doesn’t account for moves, splits, or joins. It could, but even if it did, that kind of visualization will only sometimes be useful. But when it is, wow. Going from this:
was exciting not only to me, but especially to the author of the paragraph in question who had not seen her own process revealed in such a way, and who (like me) imagines usefully revealing that kind of process to students.
Wikipedia doesn’t account for moves, splits, and joins very well either. I’ve seen pages moved that lose history and simultaneously re-assert an obvious confirmation bias in claims for evidence in an official document that doesn’t support the conclusion at all. But any effort to correct that is quickly rejected.
Hello, it’s the author in question here. I think for me the persistent confusion of purpose with fedwiki relates to continuing to reference the idea of an “article” which brings in all the assumptions about determinate versions, and all the problems that wikipedia is wrestling with in such an ugly way.
One thing I held on to during fedwiki was that it wasn’t intended to be wikipedia, and to me that meant it wasn’t intended to produce articles so much as to sustain and connect ideas in formation that might find their way into article-like things on other platforms.
So to me there was a mismatch in intention between some of the assumptions of forkability and some of the assumptions of writing style (especially involving writing in the first person — once I fork that, am I still that I? I might dispute or refashion a review of a work of philosophy, but can I remix a dream that someone has had of inheriting a house with different rooms? Even if I have had that same dream, it makes no sense to me to collapse the distinction between the two of us, or to make that distinction hard for others than the two of us to fathom out.)
Fedwiki taught me something important about process, and as I’ve been discussing with Alyson Indrunas on her blog, I think its real gift is in the moment of practice: in the acts of writerly attention, focus, and cooperation, not so much in the artefacts that were left behind.
I wonder what we should call them if not articles. I agree, article puts too polished a spin on it. But page is bland, and post implies no revision. Proto-articles?
I think Jon’s larger point, however, is not that federated wiki could replace Wikipedia, but that some of the principles of it could be profitably applied to Wikipedia. (And my larger point is somewhat different, that these proto-articles could inform better articles on Wikipedia by letting alternate visions of topics play themselves out. )
I feel you’re right about article, page and post. Could you call them notes? There’s a strange thing with notes in singular and plural terms, but here I’m thinking of a note in the singular sense — that a fedwiki entry is a note. You write a note. You link to a note. Is that helpful?
In a separate space (scholarly communications) we’re finding word choice matters a lot in how people think about the possibilities. We have (journal) articles which traditionally have a binary published/not published state and we’re trying to find ways to articulate a much more continuous range of status (on two dimensions, un-validated to ‘certified’ and private to public) and the processes by which movement occurs or is managed.
Which is probably to say the obvious, but a discussion of what words to use is an effective way of figuring out how different people think about the problem.
On one hand, if the unit of work is a paragraph, then we already have a word for that. :-). Just call it “paragraph”.
On the other hand, I think this is actually a new thing in the world that calls for a new name. I’d like to offer the name Wikit.
Wikit is to (Federated) Wiki as Tweet is to Twitter (and as Post is to Blog and as Article is to Magazine).
I love your spirit of creating a new word, dobbse, but “Wikit” reminds me too much of “sticky wicket” and it sounds too much like “Post-it.” But yay, let’s make up words! I think the complication is that you can write a paragraph in what we are currently calling the article in the fedwiki. A “page” sounds too book-like, and the bibliophile in me kind of likes that word. And to Kate’s point, I also like the term note. She and I have been writing “notes” to one another via my blog, but I took one of the notes into the fedwiki, and it changed the entire feeling for me. Got a bit harder to parse.
Jon, you have such a wonderful point that took me years to see an “author of the paragraph in question who had not seen her own process revealed in such a way, and who (like me) imagines usefully revealing that kind of process to students.”
I never saw my own process until I had a student – a very bright OL student – notice how I changed my thoughts about their assignments in my LMS announcements. She made a connect-the-dots type chart of my thinking, and she was absolutely right. It made me aware of what I did as a writer when I was teaching something totally new to me, and admittedly I didn’t know where I was going until I got there, two steps before the students. So I’m with you on that idea, and I wish you’d write more about it.
I had similar experiences about 10 years ago. After doing a lot of research as a grad student on a marginal historical topic, I was able to correct the scholarly literature on matters of fact with primary source material, but this was resisted as “original research,” which one Wikipedia rule prohibits. (There is also a rule against rigid rule-following.) Some tics are humorous, like the requirement that topics and figures deemed “British” be edited to use British spellings. Years later I also saw pages deleted as insufficiently relevant because they were about defunct online magazines that left little trace behind; or rather, the “deletionists” were not willing to do deep research. In some cases, material that has ample documentation in print, but not online, will also be regarded as dubious or irrelevant.
It’s humorous that Brits like me expect to be able to write articles about British topics in the spelling we are accustomed to, is it? And that we expect not to encounter a change of spelling mid-way through an article?
Consistency makes sense, but forming it on the basis of a subject’s “strong national ties” is problematic and produces humorous results. Barring a broad consensus in favor of using American English, the T. S. Eliot page is currently using British spellings. Why not change spelling at the point that Eliot’s change in citizenship is described? Dual US-Canada citizens are even more trouble. :-D
I was a newcomer to Wikipedia too, and the atmosphere was quite exciting. I edited multiple pages by myself and then used the “Need Changes” mechanism to edit tens of pages. But deep down, somewhere in Wikipedia’s plain, unattractive working style, I found myself discouraged. The points you’ve mentioned applied fully. Compliments do get exchanged, but genuine feeling is missing from them.
Now I only look up to this respectable mammoth of information just to hunt for, well, information. All that feeling of helping the world has vanished into thin air. I really wish this weren’t the case, but as you’ve quoted Tom Simonite, Wikipedia is “a crushing bureaucracy with an often abrasive atmosphere that deters newcomers.”
Wikipedia deletionism is rampant and corrosive. What happens is that some editors see the problem, but a particular editor (who has admin rights) takes a position and the others don’t want to disagree. Admins stick together, and those with valid points of view are ignored. While I appreciate the concept of alternate versions that could be discovered (vs. deleted by heavy-handedness), the organizational culture and attitude are the problem. There should be a blind review system, and there is not. Usernames and user histories are the problem. It seems the federated system would rely on usernames/identities to help people ascertain appropriate versions. But that, I would say, is the root of the current problem.
I was inspired by Mike Caulfield’s statement about consensus: that you need it eventually, but not immediately. Alternate versions shouldn’t hang around forever, they should be absorbed eventually. In GitHub (e.g. http://blog.jonudell.net/2015/01/22/a-federated-wikipedia/#comment-413294) the flow of open and closed pull requests (and issues) is a barometer of the health of a project. If things are moving along smoothly there’s no reason to worry. If they are not, it’s a sign that something is awry.
Reblogged this on Questo blog non esiste and commented:
A very interesting (and difficult) post on the limits of the Neutral Point of View on Wikipedia. I’m not sure I agree, but the idea of differing “interpretations” could be very useful for all articles devoted to the Humanities.
You are just proposing http://meatballwiki.org/wiki/ViewPoint again
Earlier in this thread Cameron Neylon mentioned W3C Annotations, and I also mentioned the IndieWeb thing called Webmention (http://indiewebcamp.com/Webmention). I’ve looked into both. Webmention is nifty, but not for the purpose we’re discussing here; it’s a more modern, secure pingback. But hypothes.is has recently released the first implementation of an annotator based on the W3C standard, and it’s sweet.
Here’s me in the hypothes.is Chrome extension annotating something in the above thread: http://jonudell.net/images/w3c-annotation-example.jpg
Here’s the URL of the annotation: https://hypothes.is/a/eAubKKXaRlKAhMrUY7bNBw
Here’s the URL of my annotation stream in the hypothes.is server (which would be one of many interoperable servers): https://hypothes.is/stream?q=user:judell
Here’s the URL of annotations tagged wikipedia: https://hypothes.is/stream?q=tag:%27wikipedia%27 (Actually that doesn’t seem to work yet, but I’d love to see this become a next-gen delicious with all the taggy goodness.)
Here’s a gist that illustrates output from the hypothes.is API: https://gist.github.com/judell/69828b06ef6b376bbe04. Interestingly, it uses multiple methods to locate the annotation within the cited page: absolute character count, text to search for, and XPath notation.
Update: via https://hypothes.is/a/taYI6gs4STWR5Wtjg4HVCw, a discussion of Fuzzy Anchoring: https://hypothes.is/blog/fuzzy-anchoring/
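The multi-selector strategy described above can be illustrated with a small sketch. The selector type names (`TextPositionSelector`, `TextQuoteSelector`) follow the W3C Web Annotation Data Model, but the sample target and the `locate` helper are invented for illustration; this is not hypothes.is code. The idea is that a client tries each selector in turn, falling back from brittle absolute character positions to the more robust quoted text:

```python
# Hypothetical W3C-style annotation target carrying redundant selectors.
# Selector type names follow the Web Annotation Data Model; the sample
# values are invented for illustration.
annotation_target = {
    "source": "http://blog.jonudell.net/2015/01/22/a-federated-wikipedia/",
    "selector": [
        {   # absolute character count (fast, but breaks if the page changes)
            "type": "TextPositionSelector",
            "start": 1024,
            "end": 1061,
        },
        {   # text to search for (survives edits elsewhere on the page)
            "type": "TextQuoteSelector",
            "exact": "a crushing bureaucracy",
            "prefix": "calls Wikipedia ",
            "suffix": " with an often",
        },
    ],
}

def locate(page_text, target):
    """Try each selector in turn; return a (start, end) span or None.

    A real client would cross-check the selectors against each other;
    this sketch just takes the first one that yields a plausible span.
    """
    for sel in target["selector"]:
        if sel["type"] == "TextPositionSelector":
            start, end = sel["start"], sel["end"]
            if 0 <= start < end <= len(page_text):
                return (start, end)
        elif sel["type"] == "TextQuoteSelector":
            i = page_text.find(sel["exact"])
            if i != -1:
                return (i, i + len(sel["exact"]))
    return None
```

The redundancy is the point: when the page drifts and the character offsets go stale, the quote selector still has a good chance of re-anchoring the annotation.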
One of the nice things about the W3C standard is that the annotation location method is extensible, so in principle it can annotate web text, video, and audio, and stand some chance of working across different versions of the same “text”, e.g. different editions of Alice in Wonderland, the classic example.
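The fuzzy-anchoring idea linked above can be sketched in a few lines: when an exact quote no longer appears in a revised text, fall back to approximate matching. This toy version uses Python's standard `difflib` to score quote-sized windows; it is not the hypothes.is algorithm (their blog post describes the real approach), just an illustration of the principle:

```python
# Toy illustration of fuzzy anchoring: re-find a quoted span in a text
# that has been edited since the quote was captured. Not the hypothes.is
# implementation; just a sketch using the standard library.
import difflib

def fuzzy_anchor(text, quote, threshold=0.7):
    """Return (start, end) of the best approximate match for quote, or None."""
    exact = text.find(quote)
    if exact != -1:                      # exact match still present
        return (exact, exact + len(quote))
    n = len(quote)
    best, best_ratio = None, threshold
    # Slide a quote-sized window across the text and score each window.
    for i in range(max(1, len(text) - n + 1)):
        window = text[i:i + n]
        ratio = difflib.SequenceMatcher(None, quote, window).ratio()
        if ratio > best_ratio:
            best, best_ratio = (i, i + n), ratio
    return best
```

A production anchorer would be smarter (word boundaries, prefix/suffix context, better-than-quadratic search), but even this naive version shows why an annotation can survive edits that would defeat a byte-offset selector.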
Hi, interesting read. I am the founder of Newslines (http://newslines.org), a crowdsourced site that’s a mix between Wikipedia and daily news.
I used to be a Wikipedia editor and was horrified by the ridiculous, energy-sapping process for adding information to the site. In building an alternative site, I have gained some insight into Wikipedia’s problems at the software level. Many of the touted fixes for Wikipedia’s problems will not actually work in practice. From what I have read, I don’t think federated wiki will be any better (although alternative solutions may lead to a better way in the future).
A large part of Wikipedia’s problems stems directly from the wiki software itself. The major problem is the “article” format of most wiki pages. This creates presentation problems, forcing the web page to look like a book page, but more importantly it allows any editor to veto any content on the page. This gives rise to real or imagined experts believing they “own” the page, and from that, groupthink, harassment, and bias naturally follow.
In many cases (especially news and biographies) this fact-checking by the editors is simply unnecessary. In a better-designed system, the person who writes about the Haymarket riots would be able to add their information as a new update to existing information. If they have had a paper or book published, it should be included as a news item, independent of whether it is “encyclopedic” or not. Then the *readers* should be able to sort and filter the information.
There are other solutions that can mitigate problems that arise from using wikis, such as assigning anonymous editors to check content, and paying editors to contribute. I have implemented many of these on Newslines. In fact, over the past months our writers have added over 26,000 posts with no edit warring or harassment.
In short, rather than creating more wiki, the solution to creating a better Wikipedia is to create a system with less wiki.
If you are interested in reading more I have several blog posts on Wikipedia’s issues: http://newslines.org/blog/wikipedias-13-deadly-sins/
A system with less wiki? Then it would not be Wikipedia at all, it would be something completely different.
A system where anyone can add links to their contributions, however bad the quality, and no-one can delete them (why ever not??) sounds like a spam magnet. How do you do quality control?
That reminds me of this famous exchange on the (now frozen) page: http://c2.com/cgi/wiki?WikiPedia:
To answer your question, Wikipedia’s editors need not cede ultimate control. As per http://blog.jonudell.net/2015/01/22/a-federated-wikipedia/#comment-413294, forked pages could be a discoverable annotation layer supported by a transparent process of review and inclusion.
>A system with less wiki? Then it would not be Wikipedia at all, it would be something completely different.
The answer to that is simply: what is Wikipedia? Is it an encyclopedia, a news archive, a medical directory, a fact-checking system? Each of these functions should be handled in the most appropriate way. Wikipedia’s success has got many people believing that wikis are the best way to make content. Wikis are not the only solution, and often not the best solution, even for Wikipedia’s content. My site handles news archiving in a far more efficient and stress-free way than a wiki does, using non-wiki processes. Sure, it doesn’t look the same as “Wikipedia,” but that’s a good thing ;-) More wiki will not solve core wiki problems, especially groupthink, harassment, and bias.
>A system where anyone can add links to their contributions, however bad quality, and no-one can delete them (why ever not??) sounds like a spam magnet. How do you do quality control?
We have built a structured system where editors can approve other writers’ edits according to set rules. For example here’s how to add a post about a movie: http://help.newslines.org/knowledge-base/add-a-movie-post/
I can go to thousands of Wikipedia pages right now and add spam and it will not be noticed. There is no chance of that happening at Newslines because everything that is added must go through computer validation and human approval. Each editor is rated on their performance, which minimizes spam.