Designing for least knowledge

In a post on the company blog I announced that it’s now possible to use the Hypothesis extension in the Brave browser. That’s great news for Hypothesis. It’s awkward, after all, to be an open-source non-profit software company whose product is “best viewed in” Chrome. A Firefox/Hypothesis extension has been in the works for quite a while, but the devil’s in the details. Although Firefox supports WebExtensions, our extension isn’t plug-and-play; it has to be ported to Firefox. Thanks to Brave’s new ability to install unmodified Chrome extensions directly from the Chrome web store, Hypothesis can now plug into a powerful alternative browser. That’s a huge win for Hypothesis, and also for me personally, since I’m no longer tethered to Chrome in my daily work.

Another powerful alternative is forthcoming. Microsoft’s successor to Edge will (like Brave) be built on Chromium, the open-source engine of Google’s Chrome. The demise of EdgeHTML represents a loss of genetic diversity, and that’s unfortunate. Still, it’s a huge investment to maintain an independent implementation of a browser engine, and I think Microsoft made the right pragmatic call. I’m now hoping that the Chromium ecosystem will support speciation at a higher level in the stack. Ever since my first experiments with Greasemonkey I’ve been wowed by what browser extensions can do, and eager for standardization to unlock that potential. It feels like we may finally be getting there.

Brave’s support for Chrome extensions matters to me because I work on a Chrome extension. But Brave’s approach to privacy matters to me for deeper reasons. In a 2003 InfoWorld article I discussed Peter Wayner’s seminal book Translucent Databases, which explores ways to build information systems that work without requiring the operators to have full access to users’ data. The recipes in the book point to a design principle of least knowledge.

Surveillance capitalism knows way too much about us. Is that a necessary tradeoff for the powerful conveniences it delivers? It’s easy to assume so, but we haven’t really tried serious alternatives yet. That’s why this tweet made my day. “We ourselves do not know user identities or how donations might link via user,” wrote Brendan Eich, Brave’s founder. “We don’t want to know.”

We don’t want to know!

That’s the principle of least knowledge in action. Brave is deploying it in service of a mission to detoxify the relationship between web content and advertising. Will the proposed mechanisms work? We’ll see. If you’re curious, I recommend Brendan Eich’s interview with Eric Knorr. The first half of the interview is a deep dive into JavaScript; the second half unpacks Brave’s business model. However that model turns out, I’m grateful to see a real test of the principle. We need examples of publishing and social media services that succeed not as adversaries who monetize our data but rather as providers who deliver convenience at a reasonable price we’re happy to pay.

My hunch is that we’ll find ways to build profitable least-knowledge services once we start to really try. Successful examples will provide a carrot, but there’s also a stick. Surveillance data is toxic stuff, risky to monetize because it always spills. It’s a liability that regulators — and perhaps also insurers — will increasingly make explicit.

Don’t be evil? How about can’t be evil? That’s a design principle worth exploring.

Renaming Hypothesis tags

Wherever social tagging is supported as an optional feature, its use obeys a power law. Some people use tags consistently, some sporadically, most never. This chart of Hypothesis usage illustrates the familiar long-tail distribution:

Those of us in the small minority of consistent taggers care a lot about the tag namespaces we’re creating. We tag in order to classify resources, and we want to classify them consistently, but we also want to morph our tag namespaces to reflect our changing intuitions about how to classify and to adapt to evolving social conventions.

Consistent tagging requires a way to make, use, and perhaps share a list of controlled tags, and that’s a topic for another post. Morphing tag namespaces to satisfy personal needs, or adapt to social conventions, requires the ability to rename tags, and that’s the focus of this post.

There’s nothing new about the underlying principles. When I first got into social tagging, with del.icio.us back in the day, I made a screencast to show how I was using del.icio.us’ tag renaming feature to reorganize my own classification system, and to evolve it in response to community tag usage. The screencast argues that social taggers form a language community in which metadata vocabularies can evolve in the same way natural languages do.

Over the years I’ve built tag renamers for other systems, and now I’ve made one for Hypothesis as shown in this 90-second demo. If you’re among the minority who want to manage your tags in this way, you’re welcome to use the tool; here’s the link. But please proceed with care. When you reorganize a tag namespace, it’s possible to wind up on the wrong end of the arrow of entropy!

Annotation-powered apps: A “Hello World” example

Workflows that can benefit from annotation exist in many domains of knowledge work. You might be a teacher who focuses a class discussion on a line in a poem. You might be a scientist who marks up research papers in order to reason about connections among them. You might be an investigative journalist fact-checking a claim. A general-purpose annotation tool like Hypothesis, already useful in these and other cases, can be made more useful when tailored to a specific workflow.

I’ve tried a few different approaches to that sort of customization. For the original incarnation of the Digital Polarization Project, I made a Chrome extension that streamlined the work of student fact-checkers. For example, it assigned tags to annotations in a controlled way, so that it was easy to gather all the annotations related to an investigation.

To achieve that effect the Digipo extension repurposed the same core machinery that the Hypothesis client uses to convert a selection in a web page into the text quote and text position selectors that annotations use to identify and anchor to selections in web pages. Because the Digipo extension and the Hypothesis extension shared this common machinery, annotations created by the former were fully interoperable with those created by the latter. The Hypothesis client didn’t see Digipo annotations as foreign in any way. You could create an annotation using the Digipo tool, with workflow-defined tags, and then open that annotation in Hypothesis and add a threaded discussion to it.

As mentioned in Annotations are an easy way to Show Your Work, I’ve been unbundling the ingredients of the Digipo tool to make them available for reuse. Today’s project aims to be the “Hello World” of custom annotation-powered apps. It’s a simple working example of a powerful three-step pattern:

  1. Given a selection in a page, form the selectors needed to post an annotation that targets the selection.
  2. Lead a user through an interaction that influences the content of that annotation.
  3. Post the annotation.

For the example scenario, the user is a researcher tasked with evaluating the emotional content of selected passages in texts. The researcher is required to label the selection as StrongPositive, WeakPositive, Neutral, WeakNegative, or StrongNegative, and to provide freeform comments on the tone of the selection and the rationale for the chosen label.

Here’s the demo in action:

The code behind the demo is here. It leverages two components. The first encapsulates the core machinery used to identify the selection. The second provides a set of functions that make it easy to create annotations and post them to Hypothesis. Some of these provide user interface elements, notably the dropdown list of Hypothesis groups that you can also see in other Hypothesis-oriented tools like facet and zotero. Others wrap the parts of the Hypothesis API used to search for, or create, annotations.
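
To make the pattern concrete, here’s a minimal sketch of the final step: posting an annotation by way of the Hypothesis API. The function name and parameters are illustrative simplifications, not the demo’s actual code, and I’ve kept only a TextQuoteSelector; the real components also capture position and range selectors.

// Minimal sketch: post an annotation with a workflow-assigned tag and a
// freeform comment. Assumes an API token and a quote captured from the
// user's selection; names and parameters here are illustrative.
async function postAnnotation({ uri, exact, prefix, suffix, label, comment, apiToken }) {
  const payload = {
    uri,                              // the page being annotated
    text: comment,                    // freeform comment from the researcher
    tags: [label],                    // e.g. "WeakPositive"
    target: [{
      source: uri,
      selector: [{ type: 'TextQuoteSelector', exact, prefix, suffix }]
    }]
  };
  const resp = await fetch('https://api.hypothes.is/api/annotations', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiToken}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(payload)
  });
  return resp.json();                 // the created annotation, including its id
}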

My goal is to show that you needn’t be a hotshot web developer to create a custom annotation-powered app. If the pattern embodied by this demo app is of interest to you, and if you’re at all familiar with basic web programming, you should be able to cook up your own implementations of the pattern pretty easily.

To work with selections in web pages, annotation-powered apps like these have to get themselves injected into web pages. There are three ways that can happen, and the Hypothesis client exploits all of them. Browser extensions are the most effective method. The Hypothesis extension is currently available only for Chrome, but there’s a Firefox extension waiting in the wings, made possible by an emerging consensus around the WebExtensions model.

A second approach entails sending a web page through a proxy that injects an annotation-powered app into the page. The Hypothesis proxy, via.hypothes.is, does that.

Finally there’s the venerable bookmarklet, a snippet of JavaScript that runs in the context of the page when the user clicks a button on the browser’s bookmarks bar. Recent evolution of the browser’s security model has complicated the use of bookmarklets. The simplest ones still work everywhere, but bookmarklets that load helper scripts — like the HelloWorldAnnotated demo — are thwarted by sites that enforce Content Security Policy.
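
To illustrate the distinction, here are two hypothetical bookmarklets. The first is self-contained and should work anywhere; the second loads a helper script, which is exactly the kind of thing a strict Content Security Policy will block. The helper URL is a placeholder.

// Self-contained bookmarklet: no external scripts, so CSP doesn't interfere.
javascript:(function(){ alert(document.getSelection().toString()); })();

// Script-loading bookmarklet: blocked on sites that enforce a strict CSP.
javascript:(function(){
  var s = document.createElement('script');
  s.src = 'https://example.com/helloWorldAnnotated.js';  // placeholder helper URL
  document.body.appendChild(s);
})();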

It’s a complicated landscape full of vexing tradeoffs. Here’s how I recommend navigating it. The HelloWorldAnnotated demo uses the bookmarklet approach. If you’re an instructional designer targeting Project Gutenberg, or a scientific developer targeting the scientific literature, or a newsroom toolbuilder targeting most sources of public information, you can probably get away with deploying annotation-powered apps using bookmarklets. It’s the easiest way to build and deploy such apps. If the sites you need to target do enforce Content Security Policy, however, then you’ll need to deliver the same apps by way of a browser extension or a proxy. Of those two choices, I’d recommend a browser extension, and if there’s interest, I’ll do a follow-on post showing how to recreate the HelloWorldAnnotated demo in that form.

Annotating on, and with, media


At the 2018 I Annotate conference I gave a flash talk on the topic covered in more depth in Open web annotation of audio and video. These are my notes for the talk, along with the slides.


Here’s something we do easily and naturally on the web: Make a selection in a page.

The web annotation standard defines a few different ways to describe the selection. Here’s how Hypothesis does it:

We use a variety of selectors to help make the annotation target resilient to change. I’d like you to focus here on the TextPositionSelector, which defines the start and the end of the selection. Which is something we just take for granted in the textual domain. Of course a selection has a start and an end. How could it not?
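
For reference, here’s a trimmed example of the selectors Hypothesis records for a text annotation (the values are borrowed from an example that appears later in this collection); note how the TextPositionSelector defines the start and the end:

"selector": [
  {
    "type": "TextPositionSelector",
    "start": 4023,
    "end": 4055
  },
  {
    "type": "TextQuoteSelector",
    "exact": "components, connectors, and data",
    "prefix": "tion of architectural elements--",
    "suffix": "--constrained in their relations"
  }
]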

An annotation like this gives you a kind of URL that points to a selection in a document. There isn’t yet a standard way to write that URL along with the selectors, so Hypothesis points to that combination — that is, the URL plus the selectors — using an ID, like this:

The W3C have thought about a way to bundle selectors with the URL, so you’d have a standard way to cite a selection, like this:

In any case, the point is we’re moving into a world where selections in web resources have their own individual URLs.

Now let’s look again at this quote from Nancy Pelosi:

That’s not something she wrote. It’s something she said, at the Peter G Peterson Fiscal Summit, that was recorded on video.

Is there a transcript that could be annotated? I looked, and didn’t find anything better than this export of YouTube captions:

But of course we lack transcriptions for quite a lot of web audio and video. And lots of it can’t be transcribed, because it’s music, or silent moving pictures.

Once you’ve got a media selection there are some good ways to represent it with an URL. YouTube does it like this:

And with filetypes like mp3 and mp4, you can use media fragment syntax like this:
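
For reference, the two styles look roughly like this. The video id and hostnames are placeholders; 1700 and 1716 seconds correspond to the 28:20 and 28:36 marks mentioned below.

YouTube embed, with start and end in seconds:
https://www.youtube.com/embed/VIDEO_ID?start=1700&end=1716

Media fragment syntax for an mp3 or mp4, again in seconds:
https://example.com/talk.mp4#t=1700,1716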

The harder problem in the media domain turns out to be just making the selection in the first place.

Here I am in that video, in the process of figuring out that the selection I’m looking for starts at 28:20 and ends at 28:36.

It’s not a fun or easy process. You can set the start, as I’ve done here, but then you have to scrub around on the timeline looking for the end, and then write that down, and then tack it onto the media fragment URL that you’ve captured.

It’s weird that something so fundamental should be missing, but there just isn’t an easy and natural way to make a selection in web media.

This is not a new problem. Fifteen years ago, these were the dominant media players.

We’ve made some progress since then. The crazy patchwork of plugins and apps has thankfully converged to a standard player that works the same way everywhere.

Which is great. But you still can’t make a selection!

So I wrote a tool that wraps a selection interface around the standard media player. It works with mp3s, mp4s, and YouTube videos. Unlike the standard player, which has a one-handled slider, this thing has a two-handled slider which is kind of obviously what you need to work with the start and end of a selection.

You can drag the handles to set the start and end, and you can nudge them forward and backward by minutes and seconds, and when you’re ready to review the intro and outro for your clip, you can play just a couple of seconds on both ends to check what you’re capturing.

When you’re done, you get a YouTube URL that will play the selection, start to stop, or an mp3 or mp4 media fragment URL that will do the same.

So how does this relate to annotation? In a couple of ways. You can annotate with media, or you can annotate on media.

Here’s what I mean by annotating with media.

I’ve selected some text that wants to be contextualized by the media quote, and I’ve annotated that text with a media fragment link. Hypothesis turns that link into an embedded player (thanks, by the way, to a code contribution from Steel Wagstaff, who’s here, I think). So the media quote will play, start to stop, in this annotation that’s anchored to a text selection at Politifact.

And here’s what I mean by annotating on media.

If I’m actually on a media URL, I can just annotate it. In this case there’s no selection to be made; the URL encapsulates the selection, so I annotate the complete URL.

This is a handy way to produce media fragment URLs that you can use in these ways. I hope someone will come up with a better one than I have. But the tool is begging to be made obsolete when the selection of media fragments becomes as easy and natural as the selection of text has always been.

Annotations are an easy way to Show Your Work

Journalists are increasingly being asked to show their work. Politifact does that with a sources box, like this:

This is great! The more citation of sources, the better. If I want to check those sources, though, I often wind up spending a lot of time searching within source articles to find passages cited implicitly but not explicitly. If those passages are marked using annotations, the method I’ll describe here makes that material available explicitly, in ways that streamline the reporter’s workflow and improve the reader’s experience.

In A Hypothesis-powered Toolkit for Fact Checkers I described a toolkit that supported the original incarnation of the Digital Polarization Project. More recently I’ve unbundled the key ingredients of that toolkit and made them separately available for reuse. The ingredient I’ll discuss here, HypothesisFootnotes, is illustrated in this short clip from a 10-minute screencast about the original toolkit. Here’s the upshot: Given a web page that contains Hypothesis direct links, you can include a script that pulls the cited material into the page, and connects direct links in the page to citations gathered elsewhere in the page.

Why would such links exist in the first place? Well, if you’re a reporter working on an investigation, you’re likely to be gathering evidence from a variety of sources on the web. And you’re probably copying and pasting bits and pieces of those sources into some kind of notebook. If you annotate passages within those sources, you’ve captured them in a way that streamlines your own personal (or team) management of that information. That’s the primary benefit. You’re creating a virtual notebook that makes that material available not only to you, but to your team. Secondarily, it’s available to readers of your work, and to anyone else who can make good use of the material you’ve gathered.

The example here brings an additional benefit, building on another demonstration in which I showed how to improve Wikipedia citations with direct links. There I showed that a Wikipedia citation need not only point to the top of a source web page. If the citation refers specifically to a passage within the source page, it can use a direct link to refer the reader directly to that passage.

Here’s an example that’s currently live in Wikipedia:

It looks like any other source link in Wikipedia. But here’s where it takes you in the cited Press Democrat story:

Not every source link warrants this treatment. When a citation refers to a specific context in a source, though, it’s really helpful to send the reader directly to that context. It can be time-consuming to follow a set of direct links to see cited passages in context. Why not collapse them into the article from which they are cited? That’s what HypothesisFootnotes does. The scattered contexts defined by a set of Hypothesis direct links are assembled into a package of footnotes within the article. Readers can still visit those contexts, of course, but since time is short and attention is scarce, it’s helpful to collapse them into an included summary.

This technique will work in any content management system. If you have a page that cites sources using direct links, you can use the HypothesisFootnotes script to bring the targets of those direct links into the page.
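
Here’s a rough sketch of the approach, not the actual HypothesisFootnotes code. It finds the Hypothesis direct links in a page, fetches each annotation from the Hypothesis API, and appends the quoted passage to a footnotes container; the element id and markup are invented for illustration.

// Rough sketch: turn Hypothesis direct links (https://hyp.is/<id>/...) into
// footnotes built from the annotations they point to.
async function buildFootnotes() {
  const footnotes = document.getElementById('footnotes');   // assumed container
  const links = document.querySelectorAll('a[href^="https://hyp.is/"]');
  for (const link of links) {
    const id = link.href.split('/')[3];                      // the annotation id
    const resp = await fetch(`https://api.hypothes.is/api/annotations/${id}`);
    const anno = await resp.json();
    const quote = (anno.target[0].selector || [])
      .find(s => s.type === 'TextQuoteSelector');
    if (!quote) continue;
    const item = document.createElement('blockquote');
    item.textContent = `${quote.exact} (${anno.uri})`;
    footnotes.appendChild(item);
  }
}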

Thoughts on Audrey Watters’ “Thoughts on Annotation”

Back in April, Audrey Watters decided to block annotation on her website. I understand why. When we project our identities online, our personal sites become extensions of our homes. To some online writers, annotation overlays can feel like graffiti. How can we respect their wishes while enabling conversations about their writing, particularly conversations that are intimately connected to the writing? At the New Media Consortium conference recently, I was finally able to meet Audrey in person, and we talked about how to balance these interests. Yesterday Audrey posted her thoughts about that conversation, and clarified a key point:

You can still annotate my work. Just not on my websites.

Exactly! To continue that conversation, I have annotated that post here, and transcluded my initial set of annotations below.


judell 6/27/2017 #

using an HTML meta tag to identify annotation preferences

This is just a back-of-the-napkin sketch of an idea, not a formal proposal.

judell 6/27/2017 #

I’m much less committed to having one canonical “place” for annotations than Hypothesis is

Hypothesis isn’t committed to that either. The whole point of the newly-minted web annotation standard is to enable an ecosystem of interoperable annotation clients and servers, analogous to comparable ecosystems of email and web clients and servers.

judell 6/27/2017 #

Hypothesis annotations of a PDF can be centralized, no matter where the article is hosted or whether it’s a local copy

Centralization and decentralization are slippery terms. I would rather say that Hypothesis can unify a set of annotations across a family of representations of the “same” work. Some members of that family might be HTML pages, others might be PDFs hosted on the web or kept locally.

It’s true that when Hypothesis is used to create and view such annotations, they are “centralized” in the Hypothesis service. But if someone else stands up an instance of Hypothesis, that becomes a separate pool of annotations. Likewise, we at Hypothesis have planned for, and expect to see, a world in which non-Hypothesis-based implementations of standard annotation capability will host still other separate pools of annotations.

So you might issue three different API queries — to Hypothesis, to a Hypothesis-based service, and to a non-Hypothesis-based service — for a PDF fingerprint or a DOI. Each of those services might or might not internally unify annotations across a family of “same” resources. If you were to then merge the results of those three queries, you’d be an annotation aggregator — the moral equivalent of what Radio UserLand, Technorati, and other blog aggregators did in the early blogosphere.

Weaving the annotated web

In 1997, at the first Perl Conference, which became OSCON the following year, my friend Andrew Schulman and I both gave talks on how the web was becoming a platform not only for publishing, but also for networked software.

Here’s the slide I remember from Andrew’s talk:

http://wwwapps.ups.com/tracking/tracking.cgi?tracknum=1Z742E220310270799

The only thing on it was a UPS tracking URL. Andrew asked us to stare at it for a while and think about what it really meant. “This is amazing!” he kept saying, over and over. “Every UPS package now has its own home page on the world wide web!”

It wasn’t just that the package had a globally unique identifier. It named a particular instance of a business process. It made the context surrounding the movement of that package through the UPS system available to UPS employees and customers who accessed it in their browsers. And it made that same context available to the Perl programs that some of us were writing to scrape web pages, extract their data, and repurpose it.

As we all soon learned, URLs can point to many kinds of resources: documents, interactive forms, audio or video.

The set of URL-addressable resources has two key properties: it’s infinite, and it’s interconnected. Twenty years later we’re still figuring out all the things you can do on a web of hyperlinked resources that are accessible at well-known global addresses and manipulated by a few simple commands like GET, POST, and DELETE.

When you’re working in an infinitely large universe it can seem ungrateful to complain that it’s too small. But there’s an even larger universe of resources populated by segments of audio and video, regions of images, and most importantly, for many of us, text in web documents: paragraphs, sentences, words, table cells.

So let’s stare in amazement at another interesting URL:

https://hyp.is/LoaMFCSJEee3aAMJuXhO-w/www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm

Here’s what it looks like to a human who follows the link: You land on a web page (in this case Roy Fielding’s dissertation on web architecture), the page scrolls to the place where I’ve highlighted a phrase, and the Hypothesis sidebar displays my annotation, which includes a comment and a tag.

And here’s what that resource looks like to a computer when it fetches a variant of that link:

{
    "body": [
        {
            "type": "TextualBody",
            "value": "components: web resources\n\nconnectors: links\n\ndata: data",
            "format": "text/markdown"
        },
        {
            "type": "TextualBody",
            "purpose": "tagging",
            "value": "IAnnotate2017"
        }
    ],
    "target": [
        {
            "source": "https://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm",
            "selector": [
                {
                    "type": "XPathSelector",
                    "value": "/table[2]/tbody[1]/tr[1]/td[1]",
                    "refinedBy": {
                        "start": 82,
                        "end": 114,
                        "type": "TextPositionSelector"
                    }
                },
                {
                    "type": "TextPositionSelector",
                    "end": 4055,
                    "start": 4023
                },
                {
                    "exact": "components, connectors, and data",
                    "prefix": "tion of architectural elements--",
                    "type": "TextQuoteSelector",
                    "suffix": "--constrained in their relations"
                }
            ]
        }
    ],
    "created": "2017-04-18T22:48:46.756821+00:00",
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "creator": "acct:judell@hypothes.is",
    "type": "Annotation",
    "id": "https://hypothes.is/a/LoaMFCSJEee3aAMJuXhO-w",
    "modified": "2017-04-18T23:03:54.502857+00:00"
}

The URL, which we call a direct link, isn’t itself a standard way to address a selection of text; it’s just a link that points to a web resource. But the resource it points to, which describes the highlighted text and its coordinates within the document, is — since February of this year — a W3C standard. The way I like to think about it is that the highlighted phrase — and every possible highlighted phrase — has its own home page on the web, a place where humans and machines can jointly focus attention.

If we think of the web we’ve known as a kind of fabric woven together with links, the annotated web increases the thread count of that fabric. When we weave with pieces of URL-addressable documents, we can have conversations about those pieces, we can retrieve them, we can tag them, and we can interconnect them.

Working with our panelists and others, it’s been my privilege to build a series of annotation-powered apps that begin to show what’s possible when every piece of the web is addressable in this way.

I’ll show you some examples, then invite my collaborators — Beth Ruedi from AAAS, Mike Caulfield from Washington State University Vancouver, Anita Bandrowski from SciCrunch, and Maryann Martone from UCSD and Hypothesis — to talk about what these apps are doing for them now, and where we hope to take them next.

Science in the Classroom

First up is a AAAS project called Science in the Classroom, a collection of research papers from the Science family of journals that are annotated — by graduate students — so teachers can help younger students understand the methods and outcomes of scientific research.

Here’s one of those annotated papers. A widget called the Learning Lens toggles layers of annotation on and off.

Here I’ve selected the Glossary layer, and I’ve clicked on the word “distal” to reveal the annotation attached to it.

Now let’s look behind the scenes:

Hypothesis was used to annotate the word “distal”. But Learning Lens predated the use of Hypothesis, and the Science in the Classroom team wanted to keep using Learning Lens to display annotations. What they didn’t want was the workflow behind it, which required manual insertion of annotations into HTML pages.

Here’s the solution we came up with. Use Hypothesis to create annotations, then use some JavaScript in Science in the Classroom pages to retrieve Hypothesis annotations and write them into the pages, using the same format that had been applied manually. The preexisting and unmodified Learning Lens JavaScript can then do what it does: pick up the annotations, assign color-coded highlights based on tags, and show the annotations when you click on the highlights.

What made this possible was a JavaScript library that helps with the heavy lifting required to attach an annotation to its intended target in the document.

That library is part of the Hypothesis client, but it’s also available as a standalone module that can be used for other purposes. It’s a nice example of how open source components can enable an ecosystem of interoperable annotation services.

DigiPo / EIC

Next up is a toolkit for student fact-checkers and investigative journalists. You’ve already heard from Mike Caulfield about the Digital Polarization Project, or DigiPo, and from Stefan Candea about the European Investigative Collaborations network. Let’s look at how we’ve woven annotation into their investigative workflows.

These investigations are both written and displayed in a wiki. This is a DigiPo example:

I did the investigation of this claim myself, to test out the process we were developing. It required me to gather a whole lot of supporting evidence before I could begin to analyze the claim. I used a Hypothesis tag to collect annotations related to the investigation, and you can see them in this Hypothesis view:

I can be very disciplined about using tags this way, but it’s a lot to ask of students, or really almost anyone. So we created a tool that knows about the set of investigations underway in the wiki, and offers the names of those pages as selectable tags.

Here I’ve selected a piece of evidence for that investigation. I’m going to annotate it, not by using Hypothesis directly, but instead by using a function in a separate DigiPo extension. That function uses the core anchoring libraries to create annotations in the same way the Hypothesis client does.

But it leads the user through an interstitial page that asks which investigation the annotation belongs to, and assigns a corresponding tag to the annotation it creates.

Back in the wiki, the page embeds the same Hypothesis view we’ve already seen, as a Related Annotations widget pinned to that particular tag:

I had so much raw material for this article that I needed some help organizing it. So I added a Timeline widget that gathers a subset of the source annotations that are tagged with dates.

To put something onto the timeline, you select a date on a page.

Then you create an annotation with a tag corresponding to the date.

Here’s what the annotation looks like in Hypothesis.

Over in the wiki, our JavaScript finds annotations that have these date tags and arranges them on the Timeline.

Publication dates aren’t always evident on web pages, sometimes you have to do some digging to find them. When you do find one, and annotate a page with it, you’ve done more than populate the Timeline in a DigiPo page. That date annotation is now attached to the source page for anyone to discover, using Hypothesis or any other annotation-aware viewer. And that’s true for all the annotations created by DigiPo investigators. They’re woven into DigiPo pages, but they’re also available for separate reuse and aggregation.

The last and most popular annotation-related feature we added to the toolkit is called Footnotes. Once you’ve gathered your raw material into the Related Annotations bucket, and maybe organized some of it onto the Timeline, you’ll want to weave the most pertinent references into the analysis you’re writing.

To do that, you find the annotation you gathered and use Copy to clipboard to capture the direct link.

Then you wrap that link around some text in the article:

When you refresh the page, here’s what you get. The direct link does what a direct link does: it takes you to the page, scrolls you to the annotation in context. But it can take a while to review a bunch of sources that way.

So the page’s JavaScript also creates a link that points down into the Footnotes section. And there, as Ted Nelson would say, and as Nate Angell for some reason hates hearing me say, the footnote is “transcluded” into the page so all the supporting context is right there.

One final point about this toolkit. Students don’t like the writing tools available in wikis, and for good reason: they’re pretty rough around the edges. So we want to enable them to write in Google Docs. We also want them to footnote their articles using direct links because that’s the best way to do it. So here’s a solution we’re trying. From the wiki you’ll launch into Google Docs where you can do your writing in a much more robust editor that makes it really easy to include images and charts. And if you use direct links in that Google Doc, they’ll still show up as Footnotes.

We’re not yet sure this will pan out, but my colleague Maryann Martone, who uses Hypothesis to gather raw material for her scientific papers, and who writes them in Google Docs, would love to be able to flow annotations through her writing tool and into published footnotes.

SciBot

Maryann is the perfect segue to our next example. Along with Anita Bandrowski, she’s working to increase the thread count in the fabric of scientific literature. When neuroscientists write up the methods used in their experiments, the ingredients often include highly specific antibodies. These have colloquial names, and even vendor catalog numbers, but until recently they lacked unique identifiers. So the Neuroscience Information Framework, NIF for short, has defined a namespace called RRID (research resource identifier), built a registry for RRIDs, and convinced a growing number of authors to mention RRIDs in their papers.

Here’s an article with RRIDs in it. They’re written directly into the text because the text is the scientific record; it’s the only artifact that’s guaranteed to be preserved. So if you’re talking about a goat polyclonal antibody, you look it up in the registry, capture its ID, and write it directly into the text. And if it’s not in the registry, please add it, you’ll make Anita very happy if you do!

The first phase of a project we call SciBot was about validating those RRIDs. They’re just freetext, after all, typed in by authors. Were the identifiers spelled correctly? Did they point to actual registry entries? To find out we built a tool that automatically annotates occurrences of RRIDs.

In this example, Anita is about to click on the SciBot tool, which launches from a bookmarklet, and sends the text of the paper to a backend service. It scans the text for RRIDs, looks up each one in the registry, and uses the Hypothesis API to create an annotation — bound to the occurrence in the text — that reports the results of the registry lookup.
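
A simplified sketch of that flow might look like this. The RRID pattern and the resolver URL are approximations, and the real SciBot service does much more (batching, error handling, richer report text).

// Simplified sketch of the SciBot flow, not its production code.
// The RRID pattern and resolver URL below are approximations.
async function annotateRRIDs(pageUri, pageText, apiToken) {
  const pattern = /RRID:\s*([A-Za-z]+_\w+)/g;
  for (const match of pageText.matchAll(pattern)) {
    const rrid = match[1];
    // Look up the identifier; a failed lookup still gets an annotation,
    // flagged for curators to review.
    const lookup = await fetch(`https://scicrunch.org/resolver/RRID:${rrid}.json`);
    const note = lookup.ok ? 'Registry entry found' : 'Not found in registry';
    await fetch('https://api.hypothes.is/api/annotations', {
      method: 'POST',
      headers: { 'Authorization': `Bearer ${apiToken}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        uri: pageUri,
        text: note,
        tags: [`RRID:${rrid}`],
        target: [{ source: pageUri,
                   selector: [{ type: 'TextQuoteSelector', exact: match[0] }] }]
      })
    });
  }
}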

Here the Hypothesis realtime API is showing that SciBot has created three annotations on this page.

And here are those three annotations, anchored to their occurrences in the page, with registry entries displayed in the sidebar.

SciBot curators review these annotations and use tags to mark which are valid. When some aren’t, and need attention, the highlight focuses that attention on a specific occurrence.

This hybrid of automatic entity recognition and interactive human curation is really powerful. Here’s an example where an antibody doesn’t have an RRID but should.

Every automatic workflow needs human exception handling and error correction. Here the curator has marked an RRID that wasn’t written into the literature, but now is present in the annotation layer.

These corrections are now available to train a next-gen entity recognizer. Iterating through that kind of feedback loop will be a powerful way to mine the implicit data that’s woven into the scientific literature and make it explicit.

Here’s the Hypothesis dashboard for one of the SciBot curators. The tag cloud gives you a pretty good sense of how this process has been unfolding so far.

Publishers have begun to link RRIDs to the NIF registry. Here’s an example at PubMed.

If you follow the ZIRC_ZL1 link to the registry, you’ll find a list of other papers whose authors used the same experimental ingredient, which happens to be a particular strain of zebrafish.

This is the main purpose of RRIDs. If that zebrafish is part of my experiment, I want to find who else has used it, and what their experiences have been — not just what they reported in their papers, but ideally also what’s been discussed in the annotation layer.

Of course I can visit those papers, and search within them for ZIRC_ZL1, but with annotations we can do better. In DigiPo we saw how footnoted quotes from source documents can transclude into an article. Publishers could do the same here.

Or we could do this. It’s a little tool that offers to look up an RRID selected in text.

It just links to an instance of the Hypothesis dashboard that’s pinned to the tag for that RRID.

Those search results offer direct links that take you to each occurrence in context.

Claim Chart

Finally, and to bring us full circle, I recently reconnected with Andrew Schulman who works nowadays as a software patent attorney. There’s a tool of his trade called a claim chart. It’s a two-column table. In column one you list claims that a patent is making, which are selections of text from the claims section of the patent. And in column two you assemble bits of evidence, gathered from other sources, that bear on specific claims. Those bits of evidence are selections of text in other documents. It’s tedious to build a claim chart, it involves a lot of copying and pasting, and the evidence you gather is typically trapped in whatever document you create.

Andrew wondered if an annotation-powered app could help build claim charts, and also make the supporting evidence web-addressable for all the reasons we’ve discussed. If I’ve learned anything about annotation, it’s that when somebody asks “Can you do X with annotation?” the answer should always be: “I don’t know, should be possible, let’s find out.”

So, here’s an annotation-powered claim chart.

The daggers at top left in each cell are direct links. The ones in the first column go to patent claims in context.

The ones in the second column go to related statements in other documents.

And here’s how the columns are related. When you annotate a claim, you use a toolkit function called Add Selection as Claim.

Your selection here identifies the target document (that is, the patent), the claim chart you’re building (here, it’s a wiki page called andrew_test), and the claim itself (for example, claim 1).

Once you’ve identified the claims in this way, they’re available as targets of annotations in other documents. From a selection in another document, you use Add Selection as Claim-Related.

Here you see all the claims you’ve marked up, so it’s easy to connect the two statements.

The last time I read Vannevar Bush’s famous essay As We May Think, this was the quote that stuck with me.

When statements in documents become addressable resources on the web, we can weave them together in the way Vannevar Bush imagined.

Dwelling in the zone of evidence

I’ve written plenty about the software layer that adapts the Hypothesis annotator to the needs of someone who gathers, organizes, analyzes, and then writes about evidence found online. Students in courses based on Mike Caulfield’s Digital Polarization template will, I hope, find that this software streamlines the grunt work required to find and cite the evidence that supports evaluation of a claim like this one:

Claim: The North Carolina Republican Party sent out a press release boasting about how its efforts drove down African-American turnout in the 2016 US presidential election.

That’s a lightly-edited version of something I read in the New Yorker and can send you to directly:

As we were fleshing out how a DigiPo course would work, I wrote an analysis of that claim. The investigation took me all the way back to the 1965 Voting Rights Act. Then it led to the 2013 Supreme Court decision — in Shelby vs Holder — to dilute the “strong medicine” Congress had deemed necessary “to address entrenched racial discrimination in voting.” Then to a series of legal contests as North Carolina began adjusting its voting laws. Then to the election-year controversies about voter suppression. And finally to the press release that the North Carolina GOP sent the day before the election, and the reactions to it.

Many claims don’t require this kind of deep dive. As Mike writes today, core strategies — look for fact-checking prior art, go upstream to the source, read laterally — can resolve some claims quickly.

But some claims do require a deep dive. In those cases I want students to immerse themselves in that process of discovery. I want them to suspend judgement about the claim and focus initially on marshalling evidence, evaluating sources, and laying a foundation for analysis. It’s hard work that the DigiPo toolkit can make easier, maybe even fun. That’s crucial because the longer you can comfortably dwell in that zone of evidence-gathering and suspended judgement, the stronger your critical thinking will become.

When I first read Toobin’s claim my internal narrative was: “Boasted about voter suppression? Of course those neanderthals did!” Then I entered the zone and spent many hours there. Voter suppression wasn’t a topic I’d spent much time reading about, so I learned a lot. When I returned to the claim I arrived at an interesting judgement. Yes there was voter suppression, and it was in some ways more draconian than I had thought. But had the North Carolina GOP actually boasted about the lower African-American turnout (Mother Jones: bragged, Salon: celebrated)? I concluded it had not. It had reported reduced early voting, but not explicitly claimed that was a successful outcome of voter suppression.

So we rated the claim as Mixed — that is, partly true, partly false. A next step for this investigation would be to break the claim into more granular parts. (Software developers would call that “refactoring” the claim.) So:

In a press release on November 7, 2016, the North Carolina GOP reported lower African-American early voting.

That’s easy to check. True.

Here’s another:

In its 11/7/2016 press release the North Carolina GOP boasted about the success of its voter suppression efforts.

Also easy to check: False.

What about this?

In the wake of Shelby vs Holder, the North Carolina GOP pushed legislation that discriminates against African-American voters.

You need to gather and organize a lot of source material in order to even begin to evaluate that claim. My fondest hope for DigiPo is that students inclined to judge the claim, one way or the other, will delay that judgement long enough to gather evidence that all can agree is valid. That, I believe, would be a fantastic educational outcome.

How shared vocabularies tie the annotated web together

I’m fired up about the work I want to share at Domains 2017 this summer. The tagline for the conference is Indie Tech and Other Curiosities, and I plan to be one of the curiosities!

I’ve long been a cheerleader for the Domain of One’s Own movement. In Reclaiming Innovation, Jim Groom wrote about the need to “understand technologies as ‘potentiality’ (to graft a concept by Anton Chekov from a literary to a technical context).” He continued:

This is the idea that within the use of every technical tool there is more than just the consciousness of that tool, there is also the possibility to spark something beyond those predefined uses. The only real way to galvanize that potentiality is to provide the conditions of possibility — that is, a toolkit for user innovation.

My recent collaboration with Mike Caulfield on the Digital Polarization Initiative has led to the creation of just such a toolkit. It supports DigiPo in the ways described and shown here. A version of the toolkit, demoed here, will support a team of investigative journalists. Now I need to show how the toolkit enables educators, scientists, investigative reporters, students — anyone who researches and writes articles or reports or papers backed by web-based evidence — to innovate in similar ways.

In tech we tend to abuse the term innovation so let me spell out exactly what I mean: Better ways to gather, organize, reason over, and cite online evidence. Web annotation, standardized this week by the W3C, is a key enabler. The web’s infinite space of addressable URLs is now augmented by a larger infinity of segments of interest within the resources pointed to by URLs. In the textual realm, paragraphs, list items, sentences, or individual words can be reliably linked to conversations — but also applications — that live in connected annotation layers.

A web of addressable segments of interest is a necessary, but not sufficient, condition of possibility. We also need tools that enable us to gather, organize, recombine, and cite those segments. And some of those tools need to be malleable in the hands of users who can shape them for their own purposes.

When I reread Vannevar Bush’s As We May Think, to prepare for a conversation about it with Gardner Campbell and Jeremy Dean (video, Gardner’s reflections), I focused on this passage:

He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together.

Nowadays that first encyclopedia article lives at one URL. The pertinent item in a history is a segment of interest within another URL-addressable resource. How do we tie them together? A crucial connector is a tag that belongs to neither resource but refers to both.

When tools control the sets of tags available for resource interconnection, they enable groups of people to make such connections reliably. That’s what the DigiPo toolkit does when it offers a list of investigation pages, drawn from the namespace of a wiki, as the set of tags that connect annotation-defined evidence to investigations. You see that happening with the DigiPo toolkit shown here, and with a variant of the toolkit shown here. In both cases the tags that bind evidence to wiki pages are controlled by software that acquires a list of wiki pages and presents the names of those pages as selectable tags.

One future direction for the toolkit leads to software that acquires lists of pages from other kinds of content management systems: WordPress, Drupal, you name it. Every CMS defines a namespace that is implicitly a list of tags that can be used to bind sets of resources to the pages served by that CMS. If you’re looking to adapt a DigiPo-like tool to your CMS, I’ll be delighted to show you how.

Such adaptation, though, requires somebody to write some code. While it’s unfashionable in some circles to say so, I don’t think everyone should learn to code. There’s a more fundamental web literacy, nicely captured by Audrey Watters here:

It’s about understanding the components of the Web and knowing how to tag and then manipulate them. By thinking and developing sets of named resources, you are a Web thinker. This isn’t about programming but rather the creation of sets of resources and the identification of components that work with those resources and combine them to create solutions.

Web annotation vastly enlarges the universe of resources that can be named. But it’s on us to name them. Tags are a principal way we do that. If our naming of resources is going to be an effective way to organize and combine them, though, we need to do it reliably and consistently. Software can enforce that consistency, but not everyone can write software. So a user innovation toolkit for the annotated web needs to empower users to enforce consistent naming without writing code.

A couple of weeks ago I built a Chrome extension that enables users to define their own lists of shared tags by recording them in a Google Doc. The demonstration video prompted this query from Jim Groom:

I just got through with a workshop here demoing Hypothes.is for a European group that may be using it to annotate online legislation for data privacy set to go live in 2018. They are teaching a course on it, and this could be one of the spaces/hubs they build the open part around. I came back to this video just now, but got the sense I could already tag from within annotations/pages, so how does the tag helper change this? Just a different way at it? Is it new functionality from previous tags? I love that you can have a Google Doc list of tags, but the video example is not making sense to me for some reason. And I wanna know :)

Here’s my response. That tag helper, now incorporated into the toolkit I’m evolving for DigiPo and other uses, makes it possible for people who don’t write code to define tag namespaces that govern their gathering, organization, recombination, and citation not only of URL-addressable resources but also of annotation-addressable segments of interest within those resources. People can “tie them together” — as Vannevar Bush imagined — in the ways their interests and workflows require.

Does that answer the question? If not, please keep asking until I do so properly. User-defined tag namespaces, though admittedly still a curiosity, are one of the best ways to make collective use of a web of addressable segments.

How annotation layers define “segments of interest” for new kinds of applications

Here are some analogies we use when talking about software:

Construction: Programs are houses built on foundations called platforms.

Ecology: Programs are organisms that depend on ecosystem services provided by platforms.

Community: Programs work together in accordance with rules defined by platforms.

Architecture: Programs are planned, designed, and built according to architectural plans.

Economics: Programs are producers and consumers of services.

Computer hardware: Programs are components that attach to a shared bus.

All are valid and may be useful in one way or another. In this essay I focus on the last because it points to an important way of understanding what web annotation can enable. My claim here is that the web’s emerging annotation layer forms a shared bus for a new wave of content-oriented applications.

A computer’s bus connects devices: disk drive, keyboard, network adapter. If we think of the web in this way, we’d say that devices (your computer, mine) and also people (you, me) attach to the bus. And that the protocol for attachment has something to do with URLs.

You can, for example, follow this link to display and interact with the set of Hypothesis annotations related to this web page. You can also paste the link’s URL into a message or a document to share the view with someone else.

That same URL can behave like an API (application programming interface) that accesses the resource named and located by the URL. A page like this one, part of the DigiPo fact-checking project, uses the link that way. It derives the Hypothesis search URL from its own URL, and injects the resulting Hypothesis view into the page.

Every time we create a new wiki page at digipo.io, we mint a new URL that summons the set of Hypothesis annotations specific to that page. In principle there’s no limit to the number of such pages — and associated sets of annotations — we can add. And that’s just one of an unlimited number of sites. The web of URL-addressable resources is infinitely large.
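
Here’s a minimal sketch of that pattern, assuming the DigiPo convention of using the wiki page’s name as a tag; the container id is invented for illustration.

// Minimal sketch: derive a tag from the current wiki page's name, build a
// Hypothesis search URL for that tag, and embed the resulting view.
const pageName = decodeURIComponent(location.pathname.split('/').pop());
const searchUrl = 'https://hypothes.is/search?q=' +
  encodeURIComponent('tag:' + pageName);

const frame = document.createElement('iframe');
frame.src = searchUrl;
frame.width = '100%';
frame.height = '600';
document.getElementById('matching-annotations').appendChild(frame);  // assumed container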

Even so, URLs address only a small part of a larger infinity of resources: words and phrases in texts, regions within images, segments of audio and video. Web annotation enables us to address that larger infinity. The DigiPo project illustrates some of the ways in which annotation expands the notion of content as a bus shared by people and computers. But first some background on how annotation works.

The proposed standard for web annotation defines an extensible set of selectors:

Many Annotations refer to part of a resource, rather than all of it, as the Target. We call that part of the resource a Segment (of Interest). A Selector is used to describe how to determine the Segment from within the Source resource.

When the segment of interest is a selection in a textual resource, one kind of selector captures the selection and its surrounding text. Another captures the position of the selection (“starts at the 347th character, ends at the 364th”). Still another captures its location in a web page (“contained in the 2nd list item in the first list in the seventh paragraph”). For reasons of both speed and reliability, Hypothesis uses all three selectors when it attaches (“anchors”) an annotation to a selection.

When a segment of interest is a clip within a podcast or a video, a selector would capture the start and stop (“starts at 1 minute, 32 seconds, ends at 3 minutes, 12 seconds”). When it’s a region in a bitmapped image, a selector would capture the coordinates (“starts at x=12,y=53, ends at x=355,y=124”). When it’s a piece of a vector image, a selector would capture the Scalable Vector Graphics (SVG) markup defining that piece of the image.
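
In the W3C model, a media clip like that can be expressed with a FragmentSelector that uses media fragment syntax. Here, for example, is a hypothetical target for the clip just described (1 minute 32 seconds to 3 minutes 12 seconds becomes t=92,192); the source URL is a placeholder.

"target": {
  "source": "https://example.com/lecture.mp4",
  "selector": {
    "type": "FragmentSelector",
    "conformsTo": "http://www.w3.org/TR/media-frags/",
    "value": "t=92,192"
  }
}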

The W3C’s model of web annotation lays a foundation for other kinds of selectors in other domains: locations in maps, nodes in Jupyter notebooks, bars and trend lines and data points in charts. But let’s stick with textual annotation for now, consider how it expands the universe of addressable resources, and explore what we can do in that universe.

Here’s a picture of what’s happening in and around the above-mentioned DigiPo page:

The author has cited a Hypothesis link that refers to a piece of evidence in another web page. The link encapsulates both the URL of that page and a set of selectors that mark the selected passage within it. When you follow the link Hypothesis takes you to the page, scrolls to the passage, and highlights it. That’s a powerful interactive experience!

Now suppose you want to review all the evidence that supports this investigation. You can do it interactively but that will require a lot of context-disrupting clicks. So another program embedded in the wiki page summarizes the cited quotes for you. It uses a variant of the Hypothesis direct link that delivers the interactive experience. The variant is a Hypothesis API call that delivers the annotation in a machine-friendly format. The summarization script collects all the Hypothesis direct links on the page, gathers the annotations, extracts the URLs and quotes, injects them into the Footnotes section of the page, and rewrites the links to point to corresponding footnotes.

To enable this magic, an app that people can use to annotate regions in web pages is necessary but not sufficient. You also need an API-accessible service that enables computers to create and retrieve annotations. Even more fundamentally, you need an open web standard that defines how apps and services work not only with atomic resources named and located by URLs, but also segments of interest within them.

What else is possible on a shared content bus where segments of interest are directly addressable both by people and computers? Here’s one idea being pondered by some folks in the world of open educational resources (OER). Suppose you’re creating an open textbook that attaches quizzes to segments within the text. The quizzes live in a database. How do you connect a quiz to a segment in your book?

Because a quiz is an URL-addressable resource, you can transclude one directly into your book near the segment to which it applies. Doing that normally means encoding the segment’s location in the book’s markup so the software that attaches the quiz can put it in the right place. That works, but it entangles two editorial tasks: writing the book, and curating the quizzes. That entanglement makes it harder to provide tools that support the tasks individually. If you can annotate segments of interest, though, you can disentangle the tasks, tool them separately, build the book more efficiently, and ensure others can more cleanly repurpose your work.

Analogies are necessary but imperfect. The notion of a shared bus, formed by an annotation layer and used by applications oriented to segments of content, may or may not resonate. I’m looking for a better analogy; suggestions welcome. But however you want to think about it, the method I’m describing here works powerfully well, I’ll continue to apply it, and I’d love to discuss ways you can too.

Componentware Revisited

I’m not a scholar, nor do I play one on TV, but when I search Google Scholar I find that I’m cited there a few times, most notably for a 1994 BYTE cover story, Componentware. The details there are at best of historical interest but the topic remains evergreen: How do we package software in ways that maximize its reusability while minimizing the level of skill required to achieve reuse?

By 1996 the web had booted up and I reprised the theme in On-Line Componentware. That’s when it dawned on me that the websites that people “surfed” to were also software components that could be woven together to meet a variety of needs. It was my first glimpse of what we later came to know as SOA (service-oriented architecture), then RESTful APIs, and most recently microservices. Ever since then, wearing one hat or another, I’ve been elaborating the theme of that column: “A powerful capability for ad hoc distributed computing arises naturally from the architecture of the Web.” (link)

That architecture has in some ways remained the same, in other ways evolved dramatically, but its generative power continues to surprise and delight me. And I keep finding new ways to package and reuse web components.

Hypothesis has been a fascinating case study. Our web annotation system has two main components. The web service, written in Python, runs on a web server. The client, written in JavaScript, runs in your browser. Both are available for reuse in many different ways.

One way to reuse the web service is to embed views in web pages, as shown in this example from the Digital Polarization (Digipo) project:

The “Matching Annotations” widget embedded in that page is just this search result wrapped in an iframe. This is one of the most common and powerful ways to reuse web components.
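For the record, here’s roughly what that wrapping amounts to, sketched in JavaScript. The search URL format, the container id, and the tag are illustrative assumptions, not the wiki’s actual markup.

    const tag = 'digipo:analysis:gulf_of_frackwater';
    const iframe = document.createElement('iframe');
    iframe.src = 'https://hypothes.is/search?q=' + encodeURIComponent('tag:' + tag);  // the search result to wrap
    iframe.width = '100%';
    iframe.height = '400';
    document.getElementById('matching-annotations').appendChild(iframe);              // the widget's container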

The Hypothesis API affords another way to reuse its server component. The Timeline widget, embedded on that same page, works that way. It searches Hypothesis for the URLs of annotations tagged with the id of the current wiki page. Then it searches the annotations on each of those URLs for another user-assigned tag that signifies the publication date, and arranges those results chronologically. (The Timeline widget could have been written in PHP to run in the wiki server, but I’m more familiar with JavaScript so instead it’s written in JS and runs in the browser.)
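A rough sketch of that two-step search, assuming the /api/search endpoint with tags and uri parameters, a rows array in its response, and a user-assigned googledate:YYYY-MM-DD tag convention:

    async function buildTimeline(wikiPageId) {
      // Step 1: annotations tagged with the wiki page's id yield the URLs of interest
      const r1 = await fetch('https://hypothes.is/api/search?tags=' + encodeURIComponent(wikiPageId));
      const tagged = (await r1.json()).rows;
      const urls = Array.from(new Set(tagged.map(a => a.uri)));
      // Step 2: on each URL, look for a publication-date tag and collect it
      const dated = [];
      for (const uri of urls) {
        const r2 = await fetch('https://hypothes.is/api/search?uri=' + encodeURIComponent(uri));
        for (const a of (await r2.json()).rows) {
          const dateTag = (a.tags || []).find(t => t.startsWith('googledate:'));
          if (dateTag) dated.push({ uri: uri, date: dateTag.slice('googledate:'.length) });
        }
      }
      // Arrange chronologically
      return dated.sort((x, y) => x.date.localeCompare(y.date));
    }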

The Hypothesis client can also be reused in powerful ways. Most notably, you can add the client to a website by including this simple script tag in the site’s main template:

<script src="https://hypothes.is/embed.js" async></script>

Or you can use the Hypothesis proxy, https://via.hypothes.is/, to inject the client into a web page, for example: https://via.hypothes.is/https://en.wikipedia.org/wiki/Proxy_server.

When you use Hypothesis to annotate a PDF file, it relies on a separate component — Mozilla’s PDF.js — to parse the PDF and render it in the browser so the Hypothesis client can operate on it. PDF.js is available natively in Firefox; the Hypothesis Chrome extension injects it when you annotate a PDF in that browser.

Another Hypothesis component, pdf.js-hypothesis, enables a web server to serve a PDF with PDF.js and Hypothesis both active. That makes PDF annotation available in any browser. We use it in our prototype Canvas app, for example, to serve annotation-enabled PDFs in the Canvas learning management system (LMS).

Still another component enables custom rendering of annotations. You can see it in action at Science in the Classroom, a collection of research papers annotated to serve as teaching materials.

Graduate students use Hypothesis to create the annotations. But Science in the Classroom prefers to display them using its own mechanism, Learning Lens. So when the page loads, it fetches annotations using the Hypothesis API and then paints them on the page using a component that’s part of the Hypothesis client but is also available as the standalone NPM module dom-anchor-text-quote.
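Here’s a sketch of that pattern, with the caveat that I’m assuming dom-anchor-text-quote’s toRange(root, selector) signature and a rows array in the search response; the real Learning Lens code is more careful about highlighting.

    import { toRange } from 'dom-anchor-text-quote';

    async function paintAnnotations(docUrl) {
      const resp = await fetch('https://hypothes.is/api/search?uri=' + encodeURIComponent(docUrl));
      const { rows } = await resp.json();
      for (const anno of rows) {
        const quote = (anno.target[0].selector || []).find(s => s.type === 'TextQuoteSelector');
        if (!quote) continue;                          // skip page notes
        const range = toRange(document.body, quote);   // re-anchor the quote in the live page
        if (!range) continue;
        const mark = document.createElement('mark');
        try {
          range.surroundContents(mark);                // naive highlight
        } catch (e) {
          // the quote spans node boundaries; a real implementation highlights more carefully
        }
      }
    }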

I am deliberately blurring the definition of web component because I think it properly encompasses many different things: a web page embedded in an iframe; an API-accessible web service; a rich client application like Hypothesis (or a simple widget like the Timeline) embedded in a web page; a standalone module like dom-anchor-text-quote; a repackaging of Hypothesis as a WordPress plugin or a Canvas external tool.

This is a rich assortment of ingredients! But there’s one that’s notably absent. We’ve seen lots of ways to use the Hypothesis client as a component that plugs into other environments and makes annotation available there. But what if you want to plug something into the Hypothesis client? There isn’t yet a mechanism for that. The code is open source and can be modified, as Marija Katic and Martin Eve have done with Annotran, a translation tool based on Hypothesis. That’s a great example of code reuse. But it isn’t, at least to my way of thinking, an example of component reuse. Although I recognize many different species of software components, they all share one piece of common DNA: reuse without internal modification.

In an essay on what I learned while building the Canvas app, I noted two critical aspects of the healthy ecosystem that Canvas and other learning management systems inhabit:

1. Standard protocols. In the LMS world, Learning Tools Interoperability (LTI) defines those protocols.

2. Frictionless component reuse. This flows from item 1. An LTI app expects to be launched from an LMS and to run embedded in an iframe there. Again, this is the most common and powerful way to reuse web components.

The question I asked there, and tried to answer: Could an iframe embed web components within a rich web client like Hypothesis? If so, that might open the way for features not yet in the Hypothesis core, like controlled tagging, that would otherwise require deep surgery on the Hypothesis client, and intimate knowledge of its JavaScript framework (Angular) and the nonstandard component model dictated by that framework.

I had already tried a couple of experiments to add controlled tagging to the Hypothesis client. In this one, the tag suggestions offered in the tag editor are bound to Hypothesis groups. In this one, tag suggestions are bound to an external web service. Both experiments entailed nontrivial alteration of the Hypothesis client.

In a third experiment, I modified the Hypothesis client in a way that could enable a family of components to plug into it. This customized client embedded an iframe in the annotation editor, and launched a user-defined web application into that iframe, passing it one parameter: the id of the annotation open in the editor. Because that application was configured with the credentials of a Hypothesis user, it could work as a pluggable component that communicates with the active annotation and also with the full panoply of web resources. You could, perhaps, think of it as an annotation applet. Here’s a demo.
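In rough outline, the applet side looked something like this. The id parameter name and the token handling are illustrative, not the actual contract.

    const HYPOTHESIS_API_TOKEN = '...';   // supplied when the host configures the applet
    const annotationId = new URLSearchParams(location.search).get('id');

    async function showAnnotation() {
      const resp = await fetch('https://hypothes.is/api/annotations/' + annotationId, {
        headers: { Authorization: 'Bearer ' + HYPOTHESIS_API_TOKEN }
      });
      const anno = await resp.json();
      // The applet now knows the annotation's document, quote, comment, and tags,
      // and can call out to any other web resource before writing changes back.
      document.body.textContent = anno.uri + ': ' + (anno.text || '');
    }

    showAnnotation();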

This approach was intriguing and might serve some useful purposes, but an iframe is an ugly and awkward construct to stick into the middle of a richly-designed web client. And this approach again fails my definition of component reuse because it requires internal modification of the client.

So as I began working to integrate Hypothesis into Digipo I was still looking for a way to control Hypothesis tags without modifying the Hypothesis client. As described in A toolkit for fact checkers, we initially used bookmarklets to do that, then began developing a Chrome extension for the Digipo project.

The Chrome extension immediately solved a couple of vexing problems. It enabled us to cleanly package a growing set of Digipo tools, by making them conveniently right-click-accessible. And it got around the security constraints that increasingly make bookmarklets untenable.

Just as importantly it enabled us to blend together a Digipo-specific set of tools, some but not all of which are Hypothesis-powered. For a Digipo fact checker, Hypothesis isn’t a primary part of the experience. It’s a supporting component that’s brought into the process as and where needed. It’s infrastructure.

The Digipo workflow relies on controlled tagging to accumulate evidence into several buckets associated with each investigation. When you’re on a page that you want to put into a bucket, you can use Digipo’s Tag this Page helper to create a Hypothesis page note with the tag for that investigation. It starts here:

That leads to a page that lists the Digipo investigations.

When you choose one, the extension uses the Hypothesis API to create a page note with the investigation’s tag.
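Creating a page note through the API is a small POST. A sketch, assuming token-based authorization and a target without a selector, which is what makes it a page note rather than a selection-anchored annotation:

    async function tagThisPage(pageUrl, investigationTag, apiToken) {
      const body = {
        uri: pageUrl,
        tags: [investigationTag],          // e.g. digipo:analysis:gulf_of_frackwater
        text: '',
        target: [{ source: pageUrl }]      // no selector, so this is a page note
      };
      const resp = await fetch('https://hypothes.is/api/annotations', {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer ' + apiToken,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(body)
      });
      return resp.json();                  // the new annotation, including the id used for direct linking
    }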

Thanks to Hypothesis direct linking, the interaction flows seamlessly from the Digipo extension to Hypothesis. You land in the annotation editor where you can do more with Hypothesis: add comments and new tags, discuss the target document with other Hypothesis users.

But this arrangement only creates Hypothesis page notes: annotations that refer to a target document but not to a selection within that document. More powerful uses of Hypothesis flow from selections within target documents. Could a selection-based annotation begin in the Digipo extension, acquire a tag, and then flow through to Hypothesis?

Happily the answer is yes. You can see that here.

The Digipo Chrome extension presents one set of helpers when you right-click on a page with nothing selected. Some of the helpers rely on Hypothesis, others just automate parts of the Digipo workflow — for example, launching advanced Google searches. When you right-click with a selection active, the Digipo Chrome extension presents another set of helpers which, again, may or may not rely on Hypothesis. One of them, Tag this Selection, works like Tag this Page in that it uses the Hypothesis API to create an annotation that includes a controlled tag. But Tag this Selection does a bit more work. It sends not only the URL of the target document, but also a Text Quote Selector that anchors the annotation within the document. In this case, too, the interaction then flows seamlessly into Hypothesis where you can edit the newly-created annotation and perhaps discuss the selected passage.
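The extra work amounts to building a richer target. A sketch of the payload difference, using the Text Quote Selector fields as I understand them (the exact quote, plus a little surrounding prefix and suffix):

    function selectionTarget(pageUrl, exact, prefix, suffix) {
      return [{
        source: pageUrl,
        selector: [{
          type: 'TextQuoteSelector',
          exact: exact,      // the selected passage
          prefix: prefix,    // a few characters before it, to help anchoring
          suffix: suffix     // a few characters after it
        }]
      }];
    }
    // Everything else proceeds as in the page-note sketch above: POST the annotation,
    // then hand off to Hypothesis for editing and discussion.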

You can see more of the interplay between the Digipo and Hypothesis extensions in this screencast. I’m pretty excited by how this is turning out. The Digipo extension is Chrome-only for now, as is the Hypothesis extension, but WebExtensions should soon enable broader coverage. There’s still a need to plug packaged behavior directly into the Hypothesis client. But much can be accomplished with an extension that cooperates with Hypothesis using its existing set of affordances. The Digipo extension is one example. I can imagine many others, and I’m expanding my definition of componentware to include them.


1 I love how our copy editor insisted on hyphenating On-Line!

A toolkit for fact checkers

Update: See this post (with screencasts!)

Mike Caulfield’s Digital Polarization Initiative (DigiPo) is a template for a course that will lead students through exercises to analyze and fact-check news stories. The pedagogical approach Mike describes here is evolving; in parallel I’ve been evolving a toolkit to help students research and organize the raw materials of the analyses they’ll be asked to produce. Annotation is a key component of the toolkit. I’ve been working to integrate it into the fact-checking workflow in ways that complement the use of other tools.

We’re not done yet but I’m pleased with the results so far. This post is an interim report on what we’ve learned about building an annotation-powered toolkit for fact checkers.

Here’s an example of a DigiPo claim to be investigated:

EPA Plans to Allow Unlimited Dumping of Fracking Wastewater in the Gulf of Mexico (see Occupy)

I start with no a priori knowledge of EPA rules governing release of fracking wastewater, and only a passing acquaintance with the cited source, occupy.com. So the first order of business is to marshal some evidence. Hypothesis is ideal for this purpose. It creates links that encapsulate both the URL of a page containing found evidence, and the evidence itself — that is, a quote selected in the page.

There’s a dedicated page for each DigiPo investigation. It’s a wiki, so you can manually include Hypothesis links as you create them. But fact-checking is tedious work, and students will benefit from any automation that helps them focus on the analysis.

The first step was to include Hypothesis as a widget that displays annotations matching the wiki id of the page. Here’s a standalone Hypothesis view that gathers all the evidence I’ve tagged with digipo:analysis:gulf_of_frackwater. From there it was an easy next step to tweak the wiki template so it embeds that view directly in the page:

That’s really helpful, but it still requires students to acquire and use the correct tag in order to populate the widget. We can do better than that, and I’ll show how later, but here’s the next thing that happened: the timeline.

While working through a different fact-checking exercise, I found myself arranging a subset of the tagged annotations in chronological order. Again that’s a thing you can do manually; again it’s tedious; again we can automate with a bit of tag discipline and some tooling.

If you do much online research, you’ll know that it’s often hard to find the publication date of a web page. It might or might not be encoded in the URL. It might or might not appear somewhere in the text of the page. If it does there’s no predictable location or format. You can, however, ask Google to report the date on which it first indexed a page, and that turns out to be a pretty good proxy for the publication date.

So I made another bookmarklet to encapsulate that query. If you were to activate it on one of my posts it would lead you to this page:

I wrote the post on Oct 30 and Google indexed it on Oct 31; that’s close enough for our purposes.

I made another bookmarklet to capture that date and add it, as a Hypothesis annotation, to the target page.

With these tools in hand, we can expand the widget to include:

  • Timeline. Annotations on the target page with a googledate tag, in chronological order.

  • Related Annotations. Annotations on the target page with a tag matching the id of the wiki page.

You can see a Related Annotations view above, here’s a Timeline:

So far, so good, but as Mike rightly pointed out, this motley assortment of bookmarklets spelled trouble. We wouldn’t want students to have to install them, and in any case bookmarklets are increasingly unlikely to work. So I transplanted them into a Chrome extension. It presents the growing set of tools in our fact-checking toolkit as right-click options on Chrome’s context menu:

It also affords a nice way to stash your Hypothesis credentials, so the tools can save annotations on your behalf:

(The DigiPo extension is Chrome-only for now, as is the Hypothesis extension, but WebExtensions should soon enable broader coverage.)
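For the curious, registering these helpers as right-click items is straightforward extension code. A sketch using the chrome.contextMenus API; the menu ids and the dispatch are illustrative:

    chrome.contextMenus.create({ id: 'tag-this-page', title: 'Tag this Page', contexts: ['page'] });
    chrome.contextMenus.create({ id: 'tag-this-selection', title: 'Tag this Selection', contexts: ['selection'] });
    chrome.contextMenus.create({ id: 'google-this-site', title: 'Google this Site', contexts: ['page'] });

    chrome.contextMenus.onClicked.addListener(function (info, tab) {
      if (info.menuItemId === 'google-this-site') {
        const host = new URL(tab.url).hostname;
        // The advanced query described below: what other sites say about this one
        chrome.tabs.create({
          url: 'https://www.google.com/search?q=' + encodeURIComponent(host + ' -site:' + host)
        });
      }
      // ...the other helpers dispatch similarly
    });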

With the bookmarklets now wrapped in an extension we returned to the problem of simplifying the use of tags corresponding to wiki investigation pages. Hypothesis tags are freeform. Ideally you’d be able to configure the tag editor to present controlled lists of tags in various contexts, but that isn’t yet a feature of Hypothesis.

We can, though, use the Digipo extension to add a controlled-tagging feature to the fact-checking toolkit. The Tag this Page tool does that:

You activate the tool from a page that has evidence related to a DigiPo investigation. It reads the DigiPo page that lists investigations, captures the wiki ids of those pages, and presents them in a picklist. When you choose the investigation to which the current page applies, the current page is annotated with the investigation’s wiki id and will then show up in the Related Annotations bucket on the investigation page.

While I was doing all this I committed an ironic faux pas on Facebook and shared this article. Crazy, right? I’m literally in the middle of building tools to help people evaluate stuff like this, and yet I share without checking. Why did I not take the few seconds required to vet the source, bipartisanreport.com?

When I made myself do that I realized that what should have taken a few seconds took longer. There’s a particular Google advanced query syntax you need in this situation. You are looking for the character string “bipartisanreport.com” but you want to exclude the majority of self-referential pages. You only want to know what other sites say about this one. The query goes like this:

bipartisanreport.com -site:bipartisanreport.com

Just knowing the recipe isn’t enough. Using it needs to be second nature and, even for me, it clearly wasn’t. So now there’s Google this Site:

Which produces this:

It’s ridiculously simple and powerful. I can see at a glance that bipartisanreport.com shows up on a couple of lists of questionable sites. What does the web think about the sites that host those lists? I can repeat Google this Site to zoom in on them.

Another tool in the kit, Save Facebook Share Count, supports the sort of analysis that Mike did in a post entitled Despite Zuckerberg’s Protests, Fake News Does Better on Facebook Than Real News. Here’s Data to Prove It.

How, for example, has this questionable claim propagated on Facebook? There’s a breadcrumb trail in the annotation layer. On Dec 26 I used Save Publication Date to assign the tag googledate:2016-08-31, and on the same day I used Save Facebook Share Count to record the number of shares reported by the Facebook API. On Dec 30 I again used Save Facebook Share Count. Now we can see that the article is past its sell-by date on Facebook and never was highly influential.

Finally there’s Summarize Quotes, which arose from an experiment of Mike’s to fact-check a single article exhaustively. Here’s the article he picked, along with the annotation layer he created:

Some of the annotations contain Hypothesis direct links to related annotations. If you open this annotation in the Politico article, for example, you can follow Hypothesis links to related annotations on pages at USA Today and Science.

These transitive annotations are potent but it gets to be a lot of clicking around. So the most experimental of the tools in the kit, Summarize Quotes, produces a page like this:

This approach doesn’t feel quite right yet, but I suspect there’s something there. Using these tools you can gather a lot of evidence pretty quickly and easily. It then needs to be summarized effectively so students can reason about the evidence and produce quality analysis. The toolkit embodies a few ways to do that summarization; I’m sure more will emerge.

Marshalling the evidence

In Bird-dogging the web I responded to questions raised by Mike Caulfield about how annotation can help us fact-check the web. He’s now written a definition of bird-dogging, the political technique we discussed in those posts. It’s a method of recording candidates’ positions on issues, but it’s recently been mis-characterized as a way to incite violence. I’ve annotated a batch of articles that conflate bird-dogging with violence:

source: https://hypothes.is/api/search?tags=bird-dogging&user=judell

Each annotation links to Mike’s definition. Collectively they form a data set that can be used to trace the provenance of the bird-dogging = violence meme. A digital humanist could write an interesting paper on how the meme flows through a network of sources, and how it morphs along the way. But how will such evidence ever make a difference?

In Annotating the wild west of information flow I sketched an idea that weaves together annotation, a proposed standard for fact-checking called ClaimReview, and Google’s plan to use that standard to add Fact Check labels to news articles. These ingredients are necessary but not sufficient. The key missing ingredient? President Obama nailed it in his remarks at the White House Frontiers Conference: “We’re going to have to rebuild, within this wild west of information flow, some sort of curating function that people agree to.”

It can sometimes seem, in this polarized era, that we can agree on nothing. But we do agree, at least tacitly, on the science behind the technologies that sustain our civilization: energy, agriculture, medicine, construction, communication, transportation. When evidence proves that cigarettes can cause lung cancer, or that buildings in some places need to be earthquake-resistant, most of us accept it. Can we learn to honor evidence about more controversial issues? If that’s possible, annotation’s role will be to help us marshal that evidence.

Bird-dogging the web

In Annotating the wild west of information flow I responded to President Obama’s appeal for “some sort of curating function that people agree to” with a Hypothes.is thought experiment. What if an annotation tool could make claims about the veracity of statements on the web, and record those claims in a standard machine-readable format such as ClaimReview? The example I gave there: a climate scientist can verify or refute an assertion about climate change in a newspaper article.

Today Mike Caulfield writes about another kind of fact-checking. At http://www.mostdamagingwikileaks.com/ he found this claim:

“Bird-dogging is a term coined by high-level Clinton staffers who openly talk about it in the video. They boast about inciting violence at Trump rallies, paying for every protest…”

Mike knows better.

Wait, what? Bird-dogging is about violence?

I was a bird-dogger for some events in 2008 and as a blogger got to know a bunch of bird-doggers in my work as a blogger. Clinton didn’t invent the term and it has nothing to do with violence.

So he annotates the statement. But he’s not just refuting a claim, he’s explaining what bird-dogging really means: you follow candidates around and film their responses to questions about your issues.

Now Mike realizes that he can’t find an authoritative definition of that practice. So, being an expert on the subject, he writes one. Which prompts this question:

Why the heck am I going to write a comment that is only visible from this one page? There are hundreds (maybe thousands) of pages on the internet making use of the fact that there is no clear explanation of this on the web.

Mike’s annotation does two things at once. It refutes a claim about bird-dogging on one specific page. That’s the sweet spot for annotation. His note also provides a reusable definition of bird-dogging that ought to be discoverable in other contexts. Here there’s nothing special about a Hypothes.is note versus a wiki page, a blog post, or any other chunk of URL-addressable content. An authoritative definition of bird-dogging could exist in any of these forms. The challenge, as Mike suggests, is to link that definition to many relevant contexts in a discoverable way.

The mechanism I sketched in Annotating the wild west of information flow lays part of the necessary foundation. Mike could write his authoritative definition, post it to his wiki, and then use Hypothes.is to link it, by way of ClaimReview-enhanced annotations, to many misleading statements about bird-dogging around the web. So far, so good. But how will readers discover those annotations?

Suppose Mike belongs to a team of political bloggers who aggregate claims they collectively make about statements on the web. Each claim links to a Hypothes.is annotation that locates the statement in its original context and to an authoritative definition that lives at some other URL.

Suppose also that Google News regards Mike’s team as a credible source of machine-readable claims for which it will surface the Fact Check label. Now we’re getting somewhere. Annotation alone doesn’t solve Mike’s problem, but it’s a key ingredient of the solution I’m describing.

If we ever get that far, of course, we’ll run into an even more difficult problem. In an era of media fragmentation, who will ever subscribe to sources that present Fact Check labels in conflict with beliefs? But given the current state of affairs, I guess that would be a good problem to have.

Towards accessible annotation: a prototype and some questions

The most basic operation in Hypothes.is — select text on a page, click the Annotate button — is not yet accessible to a visually-impaired person who is using a screenreader. I’ve done a bit of research and come up with an approach that looks like it could work, but also raises many questions. In the spirit of keystroke conservation I want to record here what I think I know, and try to find out what I don’t.

Here’s a screencast of an initial prototype that shows, with the NVDA screen reader active on my system, the following sequence of events:

  • Load the Gettysburg address.
  • Use a key to move a selection from paragraph to paragraph.
  • Hear the selected paragraph.
  • Tab to the Annotate button and hit Enter to annotate the selected paragraph.

It’s a start. Now for some questions:

1. Is this a proper use of the aria-live attribute?

The screenreader can do all sorts of fancy navigation, like skip to the next word, sentence, or paragraph. But its notion of a selection exists within a copy of the document and (so far as I can tell) is not connected to the browser’s copy. So the prototype uses a mechanism called ARIA Live Regions.

When you use the hotkey to advance to a paragraph and select it, a JavaScript method sets the aria-live attribute on that paragraph. That alone isn’t enough to make the screenreader announce the paragraph; it just tells the screenreader to watch the element and read it aloud if it changes. To effect a change, the JS method prepends selected: to the paragraph. Then the screenreader speaks it.
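Here’s the gist in code. It’s a sketch of what I described, not the prototype verbatim; in particular the choice of assertive is just one way to set the politeness level.

    function announceParagraph(paragraph) {
      paragraph.setAttribute('aria-live', 'assertive');   // ask the screenreader to watch this element
      // Watching isn't enough; the element has to change before it is spoken,
      // so prepend a marker that the screenreader will read aloud.
      paragraph.textContent = 'selected: ' + paragraph.textContent;
      // Also select the paragraph in the browser's own copy of the document
      const range = document.createRange();
      range.selectNodeContents(paragraph);
      const sel = window.getSelection();
      sel.removeAllRanges();
      sel.addRange(range);
    }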

2. Can JavaScript in the browser relate the screenreader’s virtual buffer to the browser’s Document Object Model?

I suspect the answer is no, but I’d love to be proven wrong. If JS in the browser can know what the screenreader knows, the accessibility story would be much better.

3. Is this a proper use of role="link"?

The first iteration of this prototype used a document that mixed paragraphs and lists. Both were selected by the hotkey, but only the list items were read aloud by the screen reader. Then I realized that’s because list items are among the set of things — links, buttons, input boxes, checkboxes, menus — that are primary navigational elements from the screenreader’s perspective. So the version shown in the screencast adds role="link" to the visited-and-selected paragraph. That smells wrong, but what’s right?

4. Is there a polyfill for Selection.modify()?

Navigating by element — paragraph, list item, etc. — is a start. But you want to be able to select the next (or previous) word, sentence, paragraph, or table cell. And you want to be able to extend a selection to include the next word, sentence, paragraph, or table cell.

A non-standard technology, Selection.modify(), is headed in that direction, and works today in Firefox and Chrome. But it’s not on a standards track. So is there a library that provides that capability in a cross-browser fashion?
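For reference, the calls look like this; support for the coarser granularities varies by browser.

    const sel = window.getSelection();
    sel.modify('move', 'forward', 'word');         // move the caret to the next word
    sel.modify('extend', 'forward', 'sentence');   // grow the selection by a sentence
    sel.modify('extend', 'forward', 'paragraph');  // ...or by a paragraph, where supported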

It’s a hard problem. A selection within a paragraph that appears to grab a string of characters is, under the covers, quite likely to cross what are called node boundaries. Here, from an answer on StackOverflow, is a picture of what’s going on:

When a selection includes a superscript 3 as shown here, it’s obvious to you what the text of the selection should be: 123456790. But that sequence of characters isn’t readily available to a JavaScript program looking at the page. It has to traverse a sequence of nodes in the browser’s Document Object Model in order to extract a linear stream of text.

It’s doable, and in fact Hypothes.is does just that when you make a selection-based annotation. That gets harder, though, when you want to move or extend that selection by words and paragraphs. So is there a polyfill for Selection.modify()? The closest I’ve found is rangy; are there others?

5. What about key bindings?

The screen reader reserves lots of keystrokes for its own use. If it’s not going to be possible to access its internal representation of the document, how will there be enough keys left over for rich navigation and selection in the browser?

What I Learned While Building an App for the Canvas Learning Management System

Life takes strange turns. I’m connected to the ed-tech world by way of Gardner Campbell, Jim Groom, and Mike Caulfield. They are fierce critics of the academy’s embrace of the Learning Management System (LMS) and are among the leaders of an indie-web movement that arose in opposition to it. So it was odd to find myself working on an app that would enable my company’s product, the Hypothes.is web/PDF annotator, to plug into what’s become the leading LMS, Instructure’s Canvas.

I’m not an educator, and I haven’t been a student since long before the advent of the LMS, so my only knowledge of it was second-hand. Now I can report a first-hand experience, albeit that of a developer building an LMS app, not that of a student or a teacher.

What I learned surprised me in a couple of ways. I’ve found Canvas to be less draconian than I’d been led to expect. More broadly, the LMS ecosystem that’s emerged — based on a standard called Learning Tools Interoperability (LTI), now supported by all the LMS systems — led me to an insight about how the same approach could help unify the emerging ecosystem of annotation systems. Even more broadly, all this has prompted me to reflect on how the modern web platform is both more standardized and more balkanized than ever before.

But first things first. Our Canvas app began with this request from teachers: “How can we enable students to use Hypothes.is to annotate the PDF files we upload to our courses?” There wasn’t any obvious way to integrate our tool into the native Canvas PDF viewer. That left two options. We could perhaps create a plugin, internal to Canvas, based on Hypothes.is and the JavaScript component (Mozilla’s PDF.js) we and others use to convert PDF files into web pages. Or we could create an LTI app that delivers that combo as a service running — like all LTI apps — outside Canvas. We soon found that the first option doesn’t really exist. Canvas is an open source product, but the vast majority of schools use Instructure’s hosted service. Canvas has a plugin mechanism but there seems to be no practical way to use it. I don’t know about other LMSs (yet) but if you want to integrate with Canvas, you’re going to build an app that’s launched from Canvas, runs in a Canvas page, and communicates with Canvas using the standard LTI protocol and (optionally) the Canvas API.

Working out how to do that was a challenge. But with lots of help from ed-tech friends and associates as well as from Instructure, we came up with a nice solution. A teacher who wants to base an assignment on group annotation of a PDF file or a web page adds our LTI app to a course. The app displays a list of the PDFs in the Files area of the course. The teacher selects one of those, or provides the URL of a web page to annotate, then completes the assignment in the usual way by adding a description, setting a date, and defining the grading method if participation will be graded. When the student clicks the assignment link, the PDF or web page shows up in a Canvas page with the Hypothes.is annotator active. The student logs into Hypothes.is, switches to a Hypothes.is private group (if the teacher created one for the course), engages with the document and with other students in the annotation layer, and at some point submits the assignment. What the teacher sees then, in a Canvas tool called Speed Grader, on a per-student basis, is an export of document-linked conversation threads involving that student.

The documents that host those conversations can live anywhere on the web. And the conversations are wide open too. Does the teacher engage with students? Do students engage with one another? Does conversation address predefined questions or happen organically? Do tag conventions govern how annotations cluster within or across documents? Nothing in Hypothes.is dictates any such policies, and nothing in Canvas does either.

Maybe the LMS distorts or impedes learning, I don’t know, I’m not an educator. What I can say is that, from my perspective, Canvas just looks like a content management system that brings groups and documents together in a particular context called a course. That context can be enhanced by external tools, like ours, that enable interaction not only among those groups and documents but also globally. A course might formally enroll a small group of students, but as independent Hypothes.is users they can also interact DS106-style with Hypothes.is users and groups anywhere. The teacher can focus on conversations that involve enrolled students, or zoom out to consider a wider scope. To me, at least, this doesn’t feel like a walled garden. And I credit LTI for that.

The app I’ve written is a thin layer of glue between two components: Canvas and Hypothes.is. LTI defines how they interact, and I’d be lying if I said it was easy to figure out how to get our app to launch inside Canvas and respond back to it. But I didn’t need to be an HTTP, HTML, CSS, JavaScript, or Python wizard to get the job done. And that’s fortunate because I’m not one. I just know enough about these technologies to be able to build basic web apps, much like ones I was able to build 20 years ago when the web first became a software platform. The magic for me was always about what simple web apps can do when connected to the networked flow of information among people and computers. My Canvas experience reminded me that we can still tap into that magic.

Why did I need to be reminded? Because while the web’s foundation is stronger than ever, the layers being built on it — so-called frameworks, with names like Angular and Ember (in the browser), Rails and Pyramid (on the server) — are the province of experts. These frameworks help with common tasks — identifying users, managing interaction with them, storing their data — so developers can focus on what their apps do specially. That’s a good and necessary thing when the software is complex, and when it’s written by people who build complex software for a living.

But lots of useful software isn’t that complex, and isn’t written by people who do that for a living. Before the web came along, plenty got built on Lotus 1-2-3, Excel, dBase, and FoxPro, much of it by information workers who weren’t primarily doing that for a living. The early web had that same feel but with an astonishing twist: global connectivity. With only modest programming skill I could, and did, build software that participated in a networked flow of information among people and computers. That was possible for two reasons. First, with HTML and JavaScript (no CSS yet) I could deliver a basic user interface to anyone, anywhere, on any kind of computer. Second, with HTTP I could connect that user interface to components and databases all around the web. Those components and databases were called web sites by the people who viewed them through the lens of the browser. But for me they were also software services. Through the lens of a network-savvy programming language (it was Perl, at the time) the web looked like a library of software modules, and URLs looked like the API (application programming interface) to that library.

If I had to write a Canvas plugin I’d have needed to learn a fair bit about its framework, called Rails, and about Ruby, the language in which that framework is written. And that hard-won knowledge would not have transferred to another LMS built on a different framework and written in a different language. Happily LTI spared me from that fate. I didn’t need to learn that stuff. When our app moves to another LMS it’ll need to know how to pull PDF files out of that other system. And that other system might not yet support all the LTI machinery required for two-way communication. But assuming it does, the app will do exactly what it does now — launch in response to an “API call” (aka URL), deliver a “component” (an annotation-enabled document) — in exactly the same way.

Importantly I wasn’t just spared a deep dive into Rails, the server framework that powers Canvas. I was also spared a deep dive into Angular, the JavaScript framework that powers the Hypothes.is client. That’s because our browser-based app can work as a pluggable component. It’s easy to embed Hypothesis in web pages and not much harder to do the same for PDFs displayed in the browser. All I had to do was the plumbing. I wish that had been easier than it was. But it was doable with modest and general skills. That makes the job accessible to people without elite and specific skills. How many more such people are there? Ten times? A hundred? The force multiplier, whatever it may be, increases the likelihood that useful combinations of software components will find their way into learning environments.

All this brings me back to Hypothes.is, and to the annotation ecosystem that we envision, promote, and expect to participate in. The W3C Web Annotation Working Group is defining standard ways to represent and exchange annotations, so that different kinds of annotation clients and servers can work together as do different kinds of email clients and email servers, or browsers and web servers. Because Hypothes.is implements early variants of those soon-to-be-formalized annotation standards, I’ve been able to do lots of useful integration work. Much of it entails querying our service for annotation data and then filtering, transforming, or cross-linking it. That requires only basic web data wrangling. Some of the work entails injection of that data into web pages. That requires only basic web app development. But until recently I didn’t see a way to democratize the ability to extend the Hypothes.is client.

Here’s an example of the kind of thing I want to be able to do and, more importantly, that I want others to be able to do. Like other social systems we offer tags as a principal way to organize data sets. In Hypothes.is you can use tags to keep track of documents as well as annotations linked to those documents. The tags are freeform. We remember and prompt with the tags you’ve used recently, but there are no rules; you can make up whatever tags you want. That’s great for casual use. If you need a bit more rigor, it’s possible to agree with your collaborators on a restricted set of tags that define key facets of the data you jointly create. But pretty soon you find yourself wishing for more control. You want to define specific lists of terms available in specific contexts for specific purposes.

Hypothes.is uses the Angular framework, as I’ve said. It also relies on a set of components that work only in that framework. One of those, called ngTagsInput, is the tag editor used in Hypothes.is. The good news is that it handles basic tagging quite well, and our developers didn’t need to build that capability, they just plugged it in. The bad news is that in order to do any meaningful work with ngTagsInput, you’d need to learn a lot about it, about how it works within the Angular framework, and about Angular itself. That hard-won knowledge won’t transfer to another JavaScript framework, nor will what you build using that knowledge transfer to another web client built on another framework. A component built in Angular won’t work in Ember just as a component built for Windows won’t work on the Mac.

With any web-based technology there’s always a way to get your foot in the door. In this case, I found a way to hook into ngTagsInput at the point where it asks for a list of terms to fill its picklist. In the Hypothes.is client, that list is kept locally in your browser and contains the tags you’ve used recently. It only required minor surgery to redirect ngTagsInput to a web-based list. That delivered two benefits. The list was controlled, so there was no way to create an invalid tag. And it was shared, so you could synchronize a group on the same list of controlled tags.
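The redirection itself is trivial, which is part of the point. A sketch, in which the list URL and its JSON shape are made up; the interesting part was finding the seam in ngTagsInput where a function like this could be plugged in.

    async function controlledTags(prefix) {
      // Fetch the shared, controlled list instead of reading recent tags from local storage
      const resp = await fetch('https://example.org/controlled-tags.json');
      const tags = await resp.json();     // e.g. ["approved-tag-1", "approved-tag-2"]
      return tags.filter(t => t.toLowerCase().startsWith(prefix.toLowerCase()));
    }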

A prototype based on that idea has helped some Hypothes.is users manage annotations with shared tag namespaces. But others require deeper customization. Scientific users, in particular, spend increasing time and effort annotating documents, extracting structured information from them, and classifying both the documents and the annotations. For one of them, it wasn’t enough to connect ngTagsInput to a web-based list of terms. People need to see context wrapped around those terms in order to know which ones to pick. That context was available on the server, but there was no way to present it in ngTagsInput. Cracking that component open and working out how to extend it to meet this requirement is a job for an expert. You’d need a different expert to do the same thing for ngTagsInput’s counterpart in a different JavaScript framework. That doesn’t bode well if you want to end up with an annotation ecosystem made of standard parts.

So, channeling Douglas Hofstadter, I wondered: “What’s the LTI of annotation?” The answer I came up with, in another prototype, was a way to embed a simple web application in the body of an annotation. Just as my LTI app is launched in the context of a Canvas course, with knowledge of the students and resources in that course as well as API access to both Canvas and to the global network of people and information, so with this little web app. It’s launched in the context of an annotation, with knowledge of the properties of that annotation (document URL, quote, comment, replies, tags) and with API access to both Hypothes.is and to the same global network of people and information. Just as my LTI app requires only basic web development knowledge and ability, so with this annotation app. You don’t need to be an expert to create something useful in this environment. And the thing you do could transfer to another standards-based annotation environment.

There’s nothing new here. We’ve had all these capabilities for 20 years. Trends in modern web software development pile on layers of abstraction and push us toward specialization and make it harder to see the engine under the hood that runs everything. But if you lift the hood you’ll see that the engine is still there, humming along more smoothly than ever. One popular JavaScript framework, called jQuery, was once widely used mainly to paper over browsers’ incompatible implementations of HTML, JavaScript, CSS, and an underlying technology called the Document Object Model. jQuery is falling into disuse because modern browsers have converged remarkably well on those web standards. Will Angular and Ember and the rest likewise converge on a common system of components? A common framework, even? I hope so; opinions differ; if it does happen it won’t be soon.

Meanwhile Web client apps, in fierce competition with one another and with native mobile apps, will continue to require elite developers who commit to non-portable frameworks. Fair enough. But that doesn’t mean we have to lock out the much larger population of workaday developers who command basic web development skills and can use them to create useful software that works everywhere. We once called Perl the duct tape of the Internet. With a little knowledge of it, you could do a lot. It’s easy to regard that as an era of lost innocence. But a little knowledge of our current flavors of duct tape can still enable many of us to do a lot, if systems are built to allow and encourage that. The LTI ecosystem does. Will the annotation ecosystem follow suit?

Copyright can’t stop annotation of government documents

I’ll admit that the Medium Legal team’s post AB 2880 — Kill (this) Bill had me at hello:

Fellow Californians, please join us in opposing AB 2880, which would allow and encourage California to extend copyright protection to works made by the state government. We think it’s a bad idea that would wind up limiting Californians’ ability to post and read government information on platforms like Medium.

That sure does sound like a bad idea, and hey, I’m a Californian now too. But when I try to read the actual bill I find it hard to relate its text to Medium Legal’s interpretations, or to some others:

I doubt I’m alone in struggling to connect these interpretations to their evolving source text. Medium Legal says, for example:

AB 2880 requires the state’s Department of General Services to track the copyright status of works created by the state government’s 228,000 employees, and requires every state agency to include intellectual property clauses in every single one of their contracts unless they ask the Department in advance for permission not to do so.

What’s the basis for this interpretation? How do Medium Legal think the text of the bill itself supports it? I find four mentions of the Department of General Services in the bill: (1), (2), (3), (4). To which of these do Medium Legal refer? Do they also rely on the Assembly Third Reading? How? I wish Medium Legal had, while preparing their post, annotated those sources.

The Assembly Third Reading, meanwhile, concludes:

Summary of the bill: In summary, this bill does all of the following:

1) clarifies existing law that state agencies may own, license, and register intellectual property to the extent not inconsistent with the rights of the public to obtain, inspect, copy, publish and otherwise communicate under the California Public Records Act, the California Constitution as provided, and under the First Amendment to the United States Constitution;

2) …

7) …

Analysis Prepared by: Eric Dang / JUD. / (NNN) NNN-NNNN

The same questions apply. How does Eric Dang think the source text supports his interpretation? How do his seven points connect to the bill under analysis? Again, an annotation layer would help us anchor the analysis to its sources.

Medium Legal and Eric Dang used digital tools to make notes supporting their essays. Such notes are, by default, not hyperlinked to specific source passages and not available to us as interpretive lenses. Modern web annotation flips that default. Documents remain canonical; notes anchor precisely to words and sentences; the annotation layer is a shareable overlay. There’s no copying, so no basis for the chilling effect that critics of AB 2880 foresee. While the bill might limit Californians’ ability to post and read government information on platforms like Medium, it won’t matter one way or the other to Californians who do such things on platforms like Hypothesis.

Thoughts in motion, annotated

In Knowledge Work as Craft Work (2002), Jim McGee wrote:

The journey from apprentice to master craftsman depends on the visibility of all aspects of craft work.

That was the inspiration for a talk I gave at the 2010 Traction User Group meeting, which focused on the theme of observable work. In the GitHub era we take for granted that we can craft software in the open, subjecting each iteration to highly granular analysis and discussion. Beautiful Code (2007) invited accomplished programmers to explain their thinking. I can imagine an annotated tour of GitHub repositories as the foundation of a future edition of that book.

I can also imagine crafting prose — and then explaining the process — in a similarly open and observable way. The enabling tools don’t exist but I’m writing this post in a way that I hope will suggest what they might be. The toolset I envision has two main ingredients: granular versioning and annotation. When I explored Federated Wiki last year, I got a glimpse of the sort of versioning that could usefully support analysis of prose craft. The atomic unit of versioning in FedWiki is the paragraph. In Thoughts in motion I created a plugin that revealed the history of each paragraph in a document. As writers we continually revise. The FedWiki plugin illustrated that process in a compelling way. The sequence of revisions to a paragraph recorded a sequence of decisions.

For an expert writer such decisions are often tacit. We apply rules that we’ve internalized. How might an expert writer bring those rules to the surface, reflect on them, and explain them to others? Granular version history is necessary but not sufficient. We also need a way to narrate our decisions. I think annotation of version history can help us tell that story. To test that intuition, I am recording a detailed history of this blog post as I write it. The experiment I have in mind: annotate that change history to explain — to myself and others — the choices I’ve made along the way.

Time passes…

OK, I’ve done the experiment here. It certainly explained some things to me about my own process. I doubt it’s generally useful as is, but I think the technique could become so in two ways. As a teacher, I might start with a demo essay, work through a series of revisions, and then annotate them to illustrate aspects of structure, word choice, clarity, and brevity. As a student I might work through my own essay in the same way, guided by progressive feedback (in the annotation layer) from the teacher. It looks promising to me, what do you think?

Annotation is not (only) web comments

Annotation looks like a new way to comment on web pages. “It’s like Medium,” I sometimes explain, “you highlight the passage you’re talking about, you write a comment about it, the comment anchors to the passage and displays to its right.” I need to stop saying that, though, because it’s wrong in two ways.

First, annotation isn’t new. In 1968 Doug Engelbart showed a hypertext system that could link to regions within documents. In 1993, NCSA Mosaic implemented the first in a long lineage of modern annotation tools. We pretend that tech innovation races along at breakneck speed. But sometimes it sputters until conditions are right.

Second, annotation isn’t only a form of online discussion. Yes, we can converse more effectively when we refer to selected passages. Yes, such conversation is easier to discover and join when we can link directly to a context that includes the passage and its anchored conversation. But I want to draw attention to a very different use of annotation.

A web document is a kind of database. Some of its fields may be directly available: the title, the section headings. Other fields are available only indirectly. The author’s name, for example, might link to the author’s home page, or to a Wikipedia page, where facts about the author are recorded. The web we weave using such links is the map that Google reads and then rewrites for us to create the most powerful information system the world has yet seen. But we want something even more powerful: a web where the implicit connections among documents become explicit. Annotation can help us weave that web of linked data.

The semantic web is, of course, another idea that’s been kicking around forever. In that imagined version of the web, documents encode data structures governed by shared schemas. And those islands of data are linked to form archipelagos that can be traversed not only by people but also by machines. That mostly hasn’t happened because we don’t yet know what those schemas need to be, nor how to create writing tools that enable people to easily express schematized information.

Suppose we agree on a set of standard schemas, and we produce schema-aware writing tools that everyone can use to add new documents to a nascent semantic web. How will we retrofit the web we already have? Annotation can help us make the transition. A project called SciBot has given me a glimpse of how that can happen.

Hypothesis’ director of biosciences Maryann Martone and her colleagues at the Neuroscience Information Framework (NIF) project are building an inventory of antibodies, model organisms, and software tools used by neuroscientists. NIF has defined and promoted a way to identify such resources when mentioned in scientific papers. It entails a registry of Research Resource Identifiers (RRIDs) and a protocol for including RRIDs in scientific papers.

Here’s an example of some RRIDs cited in Dopaminergic lesioning impairs adult hippocampal neurogenesis by distinct modification of α-synuclein:

Free-floating sections were stained with the following primary antibodies: rat monoclonal anti-BrdU (1:500; RRID:AB_10015293; AbD Serotec, Oxford, United Kingdom), rabbit polyclonal anti-Ki67 (1:5,000; RRID:AB_442102; Leica Microsystems, Newcastle, United Kingdom), mouse monoclonal antineuronal nuclei (NeuN; 1:500; RRID:AB_10048713; Millipore, Billerica, MA), rabbit polyclonal antityrosine hydroxylase (TH; RRID:AB_1587573; Millipore), goat polyclonal anti-DCX (1:250; RRID:AB_2088494; Santa Cruz Biotechnology, Santa Cruz, CA), and mouse monoclonal anti-a-syn (1:100; syn1; clone 42; RRID:AB_398107; BD Bioscience, Franklin Lakes, NJ).

The term “goat polyclonal anti-DCX” is not necessarily unique. So the author has added the identifier RRID:AB_2088494, which corresponds to a record in NIF’s registry. RRIDs are embedded directly in papers, rather than attached as metadata, because as Dr. Martone says, “papers are the only scientific artifacts that are guaranteed to be preserved.”

But there’s no guarantee an RRID means what it should. It might be misspelled. Or it might point to a flawed record in the registry. Could annotation enable a process of computer-assisted validation? Thus was born the idea of SciBot. It’s a human/machine partnership that works as follows.

A human validator sends the text of an article to a web service. The service scans the article for RRIDs. For each that it finds, it looks up the corresponding record in the registry, then calls the Hypothesis API to post an annotation that anchors to the text of the RRID and includes the lookup result in the body of the annotation. That’s the machine’s work. Now comes the human partner.
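Before moving on to the human half, here’s a simplified sketch of the machine’s part of the bargain. The RRID pattern and the registry endpoint are stand-ins, and the annotation POST follows the Hypothesis API format as I understand it.

    async function scibot(articleUrl, articleText, apiToken) {
      const rridPattern = /RRID:\s?([A-Za-z]+_\w+)/g;
      for (const match of articleText.matchAll(rridPattern)) {
        const rrid = match[1];
        const lookup = await fetch('https://example.org/registry/' + rrid);   // stand-in for the registry lookup
        const record = lookup.ok ? await lookup.json() : { error: 'no record found' };
        await fetch('https://hypothes.is/api/annotations', {
          method: 'POST',
          headers: { 'Authorization': 'Bearer ' + apiToken, 'Content-Type': 'application/json' },
          body: JSON.stringify({
            uri: articleUrl,
            text: JSON.stringify(record),          // the lookup result, for the human validator to inspect
            tags: ['RRID:' + rrid],
            target: [{ source: articleUrl,
                       selector: [{ type: 'TextQuoteSelector', exact: match[0] }] }]
          })
        });
      }
    }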

If the RRID is well-formed, and if the lookup found the right record, a human validator tags it a valid RRID — one that can now be associated mechanically with occurrences of the same resource in other contexts. If the RRID is not well-formed, or if the lookup fails to find the right record, a human validator tags the annotation as an exception and can discuss with others how to handle it. If an RRID is just missing, the validator notes that with another kind of exception tag.

If you’re not a neuroscientist, as I am not, that all sounds rather esoteric. But this idea of humans and machines working together to enhance web documents is, I think, powerful and general. When I read Katherine Zoepf’s article about emerging legal awareness among Saudi women, for example, I was struck by odd juxtapositions along the timeline of events. In 2004, reforms opened the way for women to enter law schools. In 2009, “the Commission for the Promotion of Virtue and the Prevention of Vice created a specially trained unit to conduct witchcraft investigations.” I annotated a set of these date-stamped statements and arranged them on a timeline. The result is a tiny data set extracted from a single article. But as with SciBot, the method could be applied by a team of researchers to a large corpus of documents.

Web documents are databases full of facts and assertions that we are ill-equipped to find and use productively. Those documents have already been published, and they are not going to change. Using annotation we can begin to make better use of the documents that exist today, and more clearly envision tomorrow’s web of linked data.

This hybrid approach is, I think, the viable middle path between two unworkable extremes. People won’t be willing or able to weave the semantic web. Nor will machines, though perfectly willing, be able to do that on their own. The machines will need training wheels and the guidance of human minds and hands. Annotation’s role as a provider of training and guidance for machine learning can powerfully complement its role as the next incarnation of web comments.

Adventures in annotation

I just wrote my first blog post for Hypothesis, the web annotation startup I joined recently. In the post I talk about how a specific feature of the annotator — its ability to sync annotations across local and/or web-based copies of the same file — illustrates a general approach to working with copies of resources that may live in many places and answer to many names.

When I finished drafting the post I pinged Dan Whaley, founder of Hypothesis, to review it. Here’s the IRC chat transcript:

Jon: https://hypothes.is/?p=3705&preview=true

Dan: I'm annotating!

Jon: The preview URL?

Dan: :-)

I was a bit surprised. The preview URL was password-protected but annotations against it would not be, they’d show up in the public annotation stream. But hey, I’m all about transparency when appropriate, so bring it!

Over the next few minutes we traded annotations and I tweaked the post. Here’s a picture of Dan asking to add space around an element.

And then jgmac1106 jumped in.

That’s Greg McVerry, an enthusiastic user of Hypothesis. I’d been in touch with him earlier that day because he’d asked a question about local annotation of PDFs, we’d conversed, and I wrote the post partly to answer the question as broadly as possible. I couldn’t easily grant him access to the preview, but I’d sent him a copy of the post as an attachment. And suddenly there he was, contributing to the collaborative edit that Dan and I were doing. It was a nice surprise!

After I published the post I got another nice surprise. I had realized that the annotations on the preview would remain visible in Hypothesis. But when I cited it in an internal forum, Dan responded with the canonical WordPress URL, https://hypothes.is/blog/synchronizing-annotations-between-local-and-remote-pdfs/, and when I loaded that into a tab where Hypothesis was active, all the preview annotations were intact.

It took me a minute to realize how that was possible. A WordPress preview URL knows the eventual URL at which a post will appear, and encodes it in the HEAD section of the HTML document like so:

<link rel="canonical" href="https://hypothes.is/blog/synchronizing-annotations-between-local-and-remote-pdfs/">

When the Hypothesis service receives an annotation for the preview URL that declares a canonical URL, it remembers both as aliases of one another. That is, of course, exactly the point I was making in the post.
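One way to see the aliasing at work is to query the search API with either URL. Assuming the service expands document equivalences for API search the way it does for the client, both queries return the same annotations.

    const previewUrl = 'https://hypothes.is/?p=3705&preview=true';
    const publishedUrl = 'https://hypothes.is/blog/synchronizing-annotations-between-local-and-remote-pdfs/';

    for (const uri of [previewUrl, publishedUrl]) {
      fetch('https://hypothes.is/api/search?uri=' + encodeURIComponent(uri))
        .then(r => r.json())
        .then(result => console.log(uri, result.total));   // the same count for both aliases
    }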

We hadn’t planned on this but, as a result, you can see the chatter that preceded publication of the post, as well as chatter since, through the lens of Hypothesis, at either the preview URL or the published URL.

Note that you don’t need to install the Hypothesis extension, or use the bookmarklet, to load Hypothesis on our blog, because it’s already embedded there. You only need to activate Hypothesis as shown here (click to play the mini-screencast).

I haven’t thought through all the collaborative possibilities this will enable, but it sure makes my spidey sense tingle.