How Federated Wiki neighborhoods grow and change

Federated Wiki sites form neighborhoods that change dynamically as you navigate FedWiki space. Sites within your current neighborhood are special in two ways: you can link to them by name alone (rather than by full URL), and you can search them.

Here’s one neighborhood I can join.

A row of flags (icons) in the bottom right corner of the screen (1) indicates that there are five sites in this neighborhood: my own and four others. The number next to the search box in the bottom middle (2) says that 772 pages can be searched. That number is the sum of all the pages in the neighborhood.

From each site in the neighborhood, FedWiki retrieves a summary called the sitemap: a list of all the pages on the site. Each item in the list carries the page’s title, date, and complete first paragraph (which might be very short or very long). FedWiki’s built-in search uses sitemaps, which means it sees only the titles and first paragraphs of the pages in your neighborhood.
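Here’s a sketch of how a client might gather those numbers. I’m assuming FedWiki’s conventional sitemap endpoint, /system/sitemap.json; the function names are mine, not part of FedWiki.

    // Sketch: fetch a site's sitemap and count its pages, assuming the
    // standard /system/sitemap.json endpoint, which returns an array of
    // page summaries: [{slug, title, date, synopsis}, ...]
    async function pageCount(site) {
      const response = await fetch(`http://${site}/system/sitemap.json`);
      const sitemap = await response.json();
      return sitemap.length;
    }

    // The searchable-page count is then the sum across the neighborhood:
    async function neighborhoodPageCount(sites) {
      const counts = await Promise.all(sites.map(pageCount));
      return counts.reduce((a, b) => a + b, 0);  // e.g. 772 for these five sites
    }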

Here are the sites in this neighborhood:

  1. forage.ward.fed.wiki.org
  2. jon.sf.fedwikihappening.net
  3. sites.fed.wiki.org
  4. video.fed.wiki.org
  5. ward.fed.wiki.org

You can find these names by hovering over the row of flags. If you’re technically inclined, you can also inspect them in a JavaScript debugger. In this picture, I used Ctrl+Shift+J in Chrome to launch the developer tools, clicked into the Console tab, and typed the name of the JavaScript variable that represents the neighborhood: wiki.neighborhood.
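Assuming wiki.neighborhood is an object keyed by site name, which is how the client appears to use it, you can list the sites in one line:

    // In the Chrome console, on a FedWiki page:
    Object.keys(wiki.neighborhood)
    // ["jon.sf.fedwikihappening.net", "forage.ward.fed.wiki.org", ...]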

Why are these five sites in my neighborhood? It’s obvious that my own site, jon.sf.fedwikihappening.net, belongs. And since I’ve navigated to a page on forage.ward.fed.wiki.org, it’s not surprising to find that site in my neighborhood too. But what about the other three? Why are they included?

The answer is that Ward’s page includes references to sites.fed.wiki.org, video.fed.wiki.org, and ward.fed.wiki.org. A FedWiki reference looks like a paragraph, but its blue tint signals that it’s special. Unlike a normal paragraph, which you inject into the page using the HTML or Markdown plugin, a reference is injected using the Reference plugin. It’s a dynamic element that displays the flag, the page name, and the synopsis (first paragraph) of the referenced page. It also adds that page’s origin site to the neighborhood.
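In the page’s underlying JSON, a reference is just another story item. Here’s roughly what one looks like; the shape follows FedWiki’s conventions, but the id and slug values here are made up for illustration:

    // A reference item in a page's story (id and slug are hypothetical):
    const reference = {
      type: 'reference',
      id: 'f2a1c3d4e5f6a7b8',          // hypothetical item id
      site: 'video.fed.wiki.org',      // origin site: this is what joins the neighborhood
      slug: 'some-video-page',         // hypothetical slug of the referenced page
      title: 'Some Video Page',
      text: 'Synopsis (first paragraph) of the referenced page.'
    };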

Two of the five sites in this example neighborhood — jon.sf.fedwikihappening.net and forage.ward.fed.wiki.org — got there directly by way of navigation. The other three got there indirectly by way of references.

To add a reference to one of your own pages, you click the + symbol to add a factory, drag the flag (or favicon) of a remote FedWiki page, and drop it onto the factory.

To illustrate, I’ll start with a scratch page that has a factory ready to accept a drop.

In a second browser tab, I’ll navigate to forage.ward.fed.wiki.org’s Ward Cunningham page, the one with the three references we saw above. Then I’ll drag that page’s favicon into the first browser tab and drop it onto the factory. Dragging between browser tabs may be unfamiliar to you. It was to me as well, actually. But it’s a thing.

The setup in this example is:

Tab 1: http://jon.sf.fedwikihappening.net/view/welcome-visitors/view/scratch

Tab 2: http://forage.ward.fed.wiki.org/view/ward-cunningham

Here is the result:

How many sites are in this neighborhood? When I did this experiment, I predicted either 2 or 5. It would be 2 if the neighborhood included only my site and the origin of the referenced page. It would be 5 if FedWiki included, in addition, sites referenced on the referenced page. Things aren’t transitive in that way, it turns out, so the answer is 2.

Except that it isn’t. It’s 3! Look at the row of flags in the bottom right corner. There are three of them: jon.sf.fedwikihappening.net, forage.ward.fed.wiki.org, and mysteriously, fedwikihappening.rodwell.me. That’s Paul Rodwell’s site. How did he get into this neighborhood?

This closeup of the journal will help explain the mystery. The page was forked 5 days ago.

We can view the source of the page to find out more.

And here’s the answer. Early in the life of my scratch page I forked Paul Rodwell’s scratch page from fedwikihappening.rodwell.me.
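FedWiki pages are available as JSON too (more on that API later). Given the JSON, a short sketch like this can surface hidden contributors, assuming fork actions in the journal carry a site field naming their origin:

    // Sketch: list the origin sites a page's journal has forked from.
    function forkedFrom(page) {
      return page.journal
        .filter(action => action.type === 'fork' && action.site)
        .map(action => action.site);   // e.g. ["fedwikihappening.rodwell.me"]
    }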

So we’ve now discovered a third way to grow your neighborhood. First by navigating to remote pages directly. Second by including references to remote pages. And third by forking remote pages.

FedWiki for collaborative analysis of data

FedWiki presents one or more wiki pages side by side. This arrangement is called the lineup. During interactive use of FedWiki the lineup grows rightward as you navigate the federation. But you can also compose a lineup by forming a URL that describes a purposeful arrangement of wiki pages. In Federated Wiki for teaching and learning basic composition I composed two lineups. The first compares two versions of a page on Kate Bowles’ FedWiki site. The second compares two versions of that page from two different sites: mine and Kate’s. With these two lineups I’m exploring the notion that FedWiki could be a writers’ studio in which students watch their own paragraphs evolve, and also overlay suggestions from teachers (or other students).
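Here’s a sketch of that composition rule, inferred from the URL patterns you’ll see below: pages on the origin site are introduced by /view/, and pages from other sites by their site names. The function name is mine.

    // Sketch: compose a lineup URL from an ordered list of [site, slug] pairs.
    function lineupUrl(origin, pages) {
      return 'http://' + origin + pages.map(([site, slug]) =>
        site === origin ? `/view/${slug}` : `/${site}/${slug}`
      ).join('');
    }

    // Reproduces the broccoli pipeline URL shown below:
    // lineupUrl('jon.sf.fedwikihappening.net', [
    //   ['jon.sf.fedwikihappening.net', 'italian-broccoli'],
    //   ['jon.sf.fedwikihappening.net', 'broccoli-fried-with-sesame-and-raspberry'],
    //   ['jon.sf.fedwikihappening.net', 'favorite-broccoli-recipes']])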

In that example the order of wiki pages in the lineup isn’t important. You can compare versions left-to-right or right-to-left. But here’s another example where left-to-right sequence matters:

Link: Favorite Broccoli Recipes

URL: http://jon.sf.fedwikihappening.net/view/italian-broccoli/view/broccoli-fried-with-sesame-and-raspberry/view/favorite-broccoli-recipes

Rendering:

The tables shown in these wiki pages are made by a data plugin that accumulates facts and performs calculations. FedWiki has explored a number of these data plugins. This one implements a little language, which you can see in these views of the text that lives in the embedded plugins:

On the Italian Broccoli page:

    5 (calories) per (garlic clove)
    200 (calories) per (bunch of broccoli)
    SUM Italian Broccoli (calories)

On the Broccoli Fried With Sesame and Raspberry page:

    100 (calories) per (tbsp sesame seed oil)
    34 (calories) per (100 grams broccoli)

And:

    3 (tbsp sesame seed oil)
    SUM (calories)
    1 (100 grams broccoli)
    SUM Broccoli Fried With Sesame Oil (calories)

On the Favorite Broccoli Recipes page:

    Italian Broccoli (calories)

And:

    Broccoli Fried With Sesame Oil (calories)

Other plugins implement variations on this little language, and it’s surprisingly easy to create new ones. What I’m especially drawing attention to here, though, is that the lineup of wiki pages forms a left-to-right pipeline. Facts and calculations flow not only downward within a wiki page, but also rightward through a pipeline of wiki pages.
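To show how such a pipeline can work end to end, here’s a runnable sketch of the little language as I read it from the examples above. It’s a reconstruction, not the plugin’s actual source. I’m assuming that a per line both defines a rate and contributes its amount, that a bare quantity is multiplied by a matching rate, that SUM reports the running total, and that a named SUM exports the total as a fact that pages to the right can pull in by name:

    // State (rates and named facts) is shared across plugins and across
    // pages in the lineup; the running total is local to each plugin.
    function evaluate(lines, state) {
      let total = 0;
      for (const line of lines) {
        let m;
        if ((m = line.match(/^(\d+) \((.+)\) per \((.+)\)$/))) {
          state.rates[m[3]] = Number(m[1]);                    // define a rate
          total += Number(m[1]);                               // and count its amount once
        } else if ((m = line.match(/^SUM(?: (.+?))? \((.+)\)$/))) {
          console.log(line, '=>', total);                      // report the running total
          if (m[1]) state.facts[`${m[1]} (${m[2]})`] = total;  // named SUM exports a fact
        } else if ((m = line.match(/^(\d+) \((.+)\)$/))) {
          total += Number(m[1]) * (state.rates[m[2]] || 1);    // quantity times rate
        } else if (line in state.facts) {
          total += state.facts[line];                          // pull a fact from the left
          console.log(line, '=>', state.facts[line]);
        }
      }
    }

    const state = { rates: {}, facts: {} };
    evaluate(['5 (calories) per (garlic clove)',
              '200 (calories) per (bunch of broccoli)',
              'SUM Italian Broccoli (calories)'], state);              // => 205
    evaluate(['100 (calories) per (tbsp sesame seed oil)',
              '34 (calories) per (100 grams broccoli)'], state);
    evaluate(['3 (tbsp sesame seed oil)',
              'SUM (calories)',                                        // => 300
              '1 (100 grams broccoli)',
              'SUM Broccoli Fried With Sesame Oil (calories)'], state); // => 334
    evaluate(['Italian Broccoli (calories)',                           // => 205
              'Broccoli Fried With Sesame Oil (calories)'], state);    // => 334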

And that pipeline, as we’ve seen, can be composed of pages from one site, or of pages drawn from several sites. I could provide one set of facts, you could provide an alternative set of facts, anyone could build a pipeline that evaluates both. It’s a beautiful way to enable the collaborative production and analysis of data.

Federated Wiki for teaching and learning basic composition

The FedWikiHappening has mainly explored Federated Wiki as an environment for collaborative writing. But the underlying software is rich with unexplored capability. It is, among many other possible uses, a great platform for the teaching and learning of basic writing skills.

Every page in FedWiki is backed by two data structures. The story is a sequence of paragraphs. The journal is a sequence of actions that add, edit, move, or delete paragraphs. Because editing is paragraph-oriented, the progressive rewriting of a paragraph is recorded in the journal.
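In JSON terms the shape looks roughly like this (trimmed, with a hypothetical paragraph id), and a few lines of code can replay one paragraph’s evolution:

    // Approximate page shape (trimmed; "abc123" is a hypothetical id):
    // { "title": "...",
    //   "story":   [ {"type": "paragraph", "id": "abc123", "text": "..."}, ... ],
    //   "journal": [ {"type": "edit", "id": "abc123", "item": {...}, "date": ...}, ... ] }

    // Sketch: replay one paragraph's evolution from the journal, assuming
    // add and edit actions carry the item's new text.
    function paragraphHistory(page, id) {
      return page.journal
        .filter(a => a.id === id && (a.type === 'add' || a.type === 'edit'))
        .map(a => ({ date: new Date(a.date), text: a.item.text }));
    }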

I once taught an introductory writing class to undergraduates. Part of my method was to awaken students to the notion that paragraphs can and should evolve, and that it’s useful to observe and discuss that evolution. In FedWiki the evolution of a paragraph is not directly visible, but it’s available just below the surface. Here’s a beautiful example from a Kate Bowles essay called Sentences that get things done. The essay emerged in response to a collaborative riff that ends with Kate’s title. But here let’s watch one paragraph in Kate’s essay grow and change.

1. The relevance to wiki is that vernacular language is both capable of sustaining narrative and does not depend on citation.


2. The relevance to SFW is that vernacular language is both capable of sustaining narrative and does not depend on citation.


3. The relevance to SFW is that vernacular language is both capable of sustaining and amplifying personal narrative and yet does not depend on authorial celebrity or citation.


4. The relevance to SFW is that vernacular language is both capable of sustaining and amplifying personal narrative and yet does not depend on authorial celebrity or citation. Vernacular language is available to be borrowed, forked, repurposed.


5. The relevance to SFW is that vernacular language is both capable of sustaining and amplifying personal narrative and yet does not depend on authorial celebrity or citation. Vernacular language is available to be borrowed, forked, repurposed, and so becomes a practice of creating sentences that get things done, rather than further intensification of the spectacle of heroic individuals doing things.


6. The relevance to SFW is that vernacular language is both capable of sustaining and amplifying personal narrative and yet does not depend on authorial celebrity or citation. Vernacular language is available to be borrowed, forked, repurposed, and so becomes a practice of collaboratively creating sentences that get things done, rather than further intensification of the spectacle of heroic individuals doing things.


7. The relevance to SFW is that vernacular language is both capable of sustaining and amplifying personal narrative and yet does not depend on authorial celebrity or citation. Vernacular language is available to be borrowed, forked, repurposed, and so becomes a practice of collaboratively creating sentences that get things done, as a counterpoint to the intensification of heroic individuals doing things.


8. The relevance to SFW is that vernacular language is both capable of sustaining and amplifying personal narrative and yet does not depend on authorial celebrity or citation. Vernacular language is available to be borrowed, forked, repurposed, and so becomes a practice of collaboratively creating sentences that get things done.

Version 4 might have been a keeper. But something propelled Kate to work through versions 5, 6, and 7. In the final version we see what she was reaching for: a way to land on the sentence that is both the essay’s title and a reference to the context from which the essay arose.

Any kind of web client software, running in the browser or in the cloud, could access that essay’s journal and surface that paragraph history. The FedWiki API (application programming interface) is simple and universal: just subtract /view from a FedWiki URL and append .json to the name of a page. So, for example:

Kate’s page: http://kate.au.fedwikihappening.net/view/sentences-that-get-things-done

The API for Kate’s page: http://kate.au.fedwikihappening.net/sentences-that-get-things-done.json
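With that URL and the paragraphHistory sketch from above, a few lines in the console will print the eight versions we just walked through (the paragraph id here is hypothetical):

    // Sketch: fetch the page JSON and print one paragraph's history.
    const url = 'http://kate.au.fedwikihappening.net/sentences-that-get-things-done.json';
    fetch(url)
      .then(r => r.json())
      .then(page => {
        for (const v of paragraphHistory(page, 'abc123'))  // hypothetical paragraph id
          console.log(v.date, v.text);
      });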

We can also construct URLs that arrange versions side by side. Here’s a FedWiki lineup that arranges two versions of the paragraph side by side and in context:

http://kate.au.fedwikihappening.net/view/sentences-that-get-things-done_rev26/view/sentences-that-get-things-done_rev36

Now imagine that I’m the teacher, Kate is the student, I’ve forked Kate’s essay, and I’ve written version 8 as an example for Kate. Here’s a URL that arranges her version alongside mine:

http://kate.au.fedwikihappening.net/view/sentences-that-get-things-done_rev26/jon.sf.fedwikihappening.net/sentences-that-get-things-done_rev36

I would love to help build tools that mine FedWiki’s latent ability to support the teaching and learning of prose composition. And I would equally love using those tools to facilitate that teaching and learning.

Individual voices in the Federated Wiki chorus

In recent days I’ve been immersed in the Federated Wiki Happening, a group exploration of Ward Cunningham’s Smallest Federated Wiki (SFW). When I first saw what Ward was up to, nearly a year ago, I resisted the temptation to dive in because I knew it would be a long and deep dive that I couldn’t make time for. But when Mike Caulfield brought together a diverse group of like-minded scholars for the #fedwikihappening, I had the time and took the plunge. It’s been a joyful experience that reminds me of two bygone eras. The first was the dawn of the web, when I built the BYTE website and explored the Internet’s precursors to today’s social software. The second was the dawn of the blogosphere, when I immersed myself in Radio UserLand and RSS.

During both of those eras I participated in online communities engaged in, among other things, the discovery of emergent uses of the networked software that enabled those communities to exist. The interplay of social and technological dynamics was exciting in ways I’d almost forgotten. This week, the FedWikiHappening took me there again.

I want to explain why but, as Mike says today, so much has happened so quickly that it’s hard to know where to begin. For now, I’ll choose a single narrative thread: identity.

SFW inverts the traditional wiki model, which enables many authors to work on a canonical page. In SFW there is no canonical page. We all create our own pages and edit them exclusively. But we can also copy pages from others, and make changes. Others may (or may not) notice those changes, and may (or may not) merge the changes.

In this respect SFW resembles GitHub, and its terminology — you “fork” a page from an “origin site” — invites the comparison. But SFW is looser than GitHub. What GitHub calls a pull request, for example, isn’t (yet) a well-developed feature of SFW. And while attribution is crystal-clear in GitHub — you always know who made a contribution — it is (by design) somewhat vague in SFW. In the Chorus of Voices that Ward envisions, individual voices are not easy to discern.

That notion was hard for some of us in The Happening, myself included, to swallow. In SFW we were represented not as avatars with pictures but as neutral “flags” made of color gradients. Identity was Discoverable But Not Obvious.

Then Alex North cracked the code. He read through the FedWiki sources, found the hook for uploading the favicon that serves as SFW’s flag/avatar, and worked out a procedure for using that hook to upload an image.

The next day I worked out a Windows-friendly variant of Alex’s method and uploaded my own image. Meanwhile a few other Happening participants used Alex’s method to replace their colored gradients with photos.

The next day Mike Caulfield bowed to the will of the people and uploaded a batch of photos on behalf of participants unable to cope with Alex’s admittedly geeky hack. Suddenly the Happening looked more like a normal social network, where everyone’s contributions have identifying photos.

That was a victory, but not an unqualified one.

It was a victory in part because Alex showed the group that SFW is web software, and like all web software is radically open to unintended uses. Also, of course, because we were able to alter the system in response to a perceived need.

And yet, we may have decided too quickly not to explore a mode of collaboration that favors the chorus over the individual voice. Can we work together effectively that way, in a federated system that ultimately gives us full control of our own data? That remains an open question for me, one of many that the Happening has prompted me to ask and explore.

TypeScript Successes and Failures

My last post buried the lead, so I’ll hoist it to the top here: I’ve left Microsoft. While I figure out what my next gig will be, I’ll be doing some freelance writing and consulting. My first writing assignment will be an InfoWorld feature on TypeScript. It’s an important technology that isn’t yet well understood or widely adopted. I made two efforts to adopt it myself. The first, almost a year ago, didn’t stick. The second, a few weeks ago, did.

I’ll reflect on those experiences in the article. But I’m also keen to mine other perspectives on why TypeScript adoption fails or succeeds. And I’m particularly interested to hear about experiences with TypeScript toolchains other than Visual Studio. If you have perspectives and experiences to share, please drop a note here or to jon at jonudell.info.

Skype Translator will (also) be a tool for language learners

When I saw this video of Skype Translator I realized that beyond just(!) translation, it will be a powerful tool for language learning. Last night I got a glimpse of that near future. Our next door neighbor, Yolanda, came here from Mexico 30 years ago and is fluently bilingual. She was sitting outside with her friend, Carmen, who speaks almost no English. I joined them and tried to listen to their Spanish conversation. I learned a bit of Spanish in high school but I’ve never been conversational. Here in Santa Rosa I’m surrounded by speakers of Spanish, it’s an opportunity to learn, and Yolanda — who has worked as a translator in the court system — is willing to help.

I find myself on parallel tracks with respect to my learning of two different languages: music and Spanish. In both cases I’ve historically learned more from books than by ear. Now I want to put myself into situations that force me to set the books aside, listen intently, and then try to speak appropriately. I can use all the help I can get. Luckily we live in an era of unprecedented tool support. On the musical front, I’ve made good use of Adrian Holovaty’s SoundSlice, a remarkable tool for studying and transcribing musical performances that it pulls from YouTube. I haven’t used SoundSlice much for annotation, because I’m trying to develop my ear and my ability to converse musically in real time. But its ability to slow down part of a tune, and then loop it, has been really helpful in my efforts to interact with real performances.

I suspect that’s why Skype Translator will turn out to be great for language learning. Actually I’m sure that will happen, and here’s why. Last night I showed the Skype Translator video to Yolanda and Carmen. Neither is remotely tech-savvy but both instantly understood what was happening. Yolanda marveled to see machine translation coming alive. Carmen, meanwhile, was transfixed by the bilingual exchange. And when she heard the English translation of a Spanish phrase, I could see her mouthing the English words. I found myself doing the same for the Spanish translation of English phrases.

That’s already a powerful thing, and yet we were only observers of a conversation. When we can be participants, motivated to communicate, the service won’t just be a way to speak across a language gap. It’ll be a way to learn one another’s languages.

No disclosure is needed here, by the way, because I’m a free agent for now. My final day with Microsoft was last Friday. In the end I wasn’t able to contribute in the ways I’d hoped I could. But great things are happening there, and Skype Translator is only one of the reasons I’m bullish on the company’s future.

Human/machine partnership for problems otherwise Too Hard

My recent post about redirecting a page of broken links weaves together two different ideas. First, that the titles of the articles on that page of broken links can be used as search terms in alternate links that lead people to those articles’ new locations. Second, that non-programmers can create macros to transform the original links into alternate search-driven links.
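The first idea amounts to a one-line transformation. Here’s a sketch; the choice of search engine, and the function name, are mine:

    // Sketch: turn an article title into a search link that should find
    // the article's new location.
    function searchLink(title) {
      return 'https://www.google.com/search?q=' + encodeURIComponent('"' + title + '"');
    }

    // searchLink('Human/machine partnership')
    // => 'https://www.google.com/search?q=%22Human%2Fmachine%20partnership%22'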

There was lots of useful feedback on the first idea. As Herbert Van de Sompel and Michael Nelson pointed out, it was a really bad idea to discard the original URLs, which retain value as lookup keys into one or more web archives. Alan Levine showed how to do that with the Wayback Machine. That method, however, leads the user to sets of snapshots that don’t consistently mirror the original article, because (I think) Wayback’s captures happened both before and after the breakage.
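To honor that point, the original URL can serve as a lookup key against the Wayback Machine’s availability API. Here’s a minimal sketch; the endpoint is real, and the field access follows its documented response shape:

    // Sketch: find the closest archived snapshot for an original URL.
    async function waybackUrl(original) {
      const response = await fetch(
        'https://archive.org/wayback/available?url=' + encodeURIComponent(original));
      const data = await response.json();
      return data.archived_snapshots.closest
        ? data.archived_snapshots.closest.url   // a web.archive.org/web/... URL
        : undefined;                            // no snapshot captured
    }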

So for now I’ve restored the original page of broken links, alongside the new page of transformed links. I’m grateful for the ensuing discussion about ways to annotate those transformed links so they’re aware of the originals, and so they can tap into evolving services — like Memento — that will make good use of the originals.

The second idea, about tools and automation, drew interesting commentary as well. dyerjohn pointed to NimbleText, though we agreed it’s more suited to tabular than to textual data. Owen Stephens reminded me that the tool I first knew as Freebase Gridworks, then Google Refine, is going strong as OpenRefine. And while it too is more tabular than textual, “the data is line based,” he says, “and sort of tabular if you squint at it hard.” In Using OpenRefine to manipulate HTML he presents a fully worked example of how to use OpenRefine to do the transformation I made by recording and playing back a macro in my text editor.

Meanwhile, on Twitter, Paul Walk and Les Carr and I were rehashing the old permathread about coding for non-coders.

The point about MS Word styles is spot on. That mechanism asks people to think abstractly, in terms of classes and instances. It’s never caught on. The same goes, Les suggests, for my text-transformation puzzle. Even with tools that enable non-coders to solve it, getting people across the cognitive threshold is Too Hard.

While mulling this over, I happened to watch Jeremy Howard’s TED talk on machine learning. He demonstrates a wonderful partnership between human and machine. The task is to categorize a large set of images. The computer suggests groupings, the human corrects and refines those groupings, the process iterates.

We’ve yet to inject that technology into our everyday productivity tools, but we will. And then, maybe, we will finally start to bridge the gap between coders and non-coders. The computer will watch the styles I create as I write, infer classes, offer to instantiate them for me, and we will iterate that process. Similarly, when I’m doing a repetitive transformation, it will notice what’s happening, infer the algorithm, offer to implement it for me, we’ll run it experimentally on a sample, then iterate.

Maybe in the end what people will most need to learn is not how to design stylistic classes and instances, or how to write code that automates repetitive tasks, but rather how to partner effectively with machines that work with us to make those things happen. Things that are Too Hard for most living humans and all current machines to do on their own.