Annotation-powered apps: A “Hello World” example

Workflows that can benefit from annotation exist in many domains of knowledge work. You might be a teacher who focuses a class discussion on a line in a poem. You might be a scientist who marks up research papers in order to reason about connections among them. You might be an investigative journalist fact-checking a claim. A general-purpose annotation tool like Hypothesis, already useful in these and other cases, can be made more useful when tailored to a specific workflow.

I’ve tried a few different approaches to that sort of customization. For the original incarnation of the Digital Polarization Project, I made a Chrome extension that streamlined the work of student fact-checkers. For example, it assigned tags to annotations in a controlled way, so that it was easy to gather all the annotations related to an investigation.

To achieve that effect, the Digipo extension repurposed the same core machinery that the Hypothesis client uses to convert a selection in a web page into the text quote and text position selectors that annotations use to identify and anchor to selections. Because the Digipo extension and the Hypothesis extension shared this common machinery, annotations created by the former were fully interoperable with those created by the latter. The Hypothesis client didn’t see Digipo annotations as foreign in any way. You could create an annotation using the Digipo tool, with workflow-defined tags, and then open that annotation in Hypothesis and add a threaded discussion to it.

As mentioned in Annotations are an easy way to Show Your Work, I’ve been unbundling the ingredients of the Digipo tool to make them available for reuse. Today’s project aims to be the “Hello World” of custom annotation-powered apps. It’s a simple working example of a powerful three-step pattern:

  1. Given a selection in a page, form the selectors needed to post an annotation that targets the selection.
  2. Lead a user through an interaction that influences the content of that annotation.
  3. Post the annotation.
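Step 1 is where the core machinery comes in. Here’s a deliberately naive sketch of the idea (the Hypothesis client’s real anchoring code is far more careful about cross-element selections and repeated text):

// Derive a TextQuoteSelector from the browser's current selection.
// A naive sketch only; not the Hypothesis client's actual code.
function quoteSelectorFromSelection() {
  const selection = window.getSelection();
  if (!selection || selection.isCollapsed) return null;
  const exact = selection.toString();
  const pageText = document.body.textContent;
  const start = pageText.indexOf(exact);  // naive search; real code anchors robustly
  if (start === -1) return null;
  return {
    type: 'TextQuoteSelector',
    exact: exact,
    prefix: pageText.slice(Math.max(0, start - 32), start),
    suffix: pageText.slice(start + exact.length, start + exact.length + 32)
  };
}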

For the example scenario, the user is a researcher tasked with evaluating the emotional content of selected passages in texts. The researcher is required to label the selection as StrongPositive, WeakPositive, Neutral, WeakNegative, or StrongNegative, and to provide freeform comments on the tone of the selection and the rationale for the chosen label.

Here’s the demo in action:

The code behind the demo is here. It leverages two components. The first encapsulates the core machinery used to identify the selection. The second provides a set of functions that make it easy to create annotations and post them to Hypothesis. Some of these provide user interface elements, notably the dropdown list of Hypothesis groups that you can also see in other Hypothesis-oriented tools like facet and zotero. Others wrap the parts of the Hypothesis API used to search for, or create, annotations.
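To give a flavor of the third step: posting an annotation boils down to one authenticated call to the Hypothesis API. Here’s a minimal sketch (not the demo’s actual code; the token and the selector values are yours to supply):

// Post an annotation to the Hypothesis API. A minimal sketch;
// 'token' is a Hypothesis API token generated in your account settings.
async function postAnnotation(uri, selectors, tag, comment, token) {
  const annotation = {
    uri: uri,                                    // the page being annotated
    group: '__world__',                          // the public group
    tags: [tag],                                 // e.g. 'WeakNegative'
    text: comment,                               // tone and rationale notes
    target: [{ source: uri, selector: selectors }]
  };
  const response = await fetch('https://api.hypothes.is/api/annotations', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + token,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(annotation)
  });
  return response.json();                        // the saved annotation, with its id
}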

My goal is to show that you needn’t be a hotshot web developer to create a custom annotation-powered app. If the pattern embodied by this demo app is of interest to you, and if you’re at all familiar with basic web programming, you should be able to cook up your own implementations of the pattern pretty easily.

To work with selections in web pages, annotation-powered apps like these have to get themselves injected into web pages. There are three ways that can happen, and the Hypothesis client exploits all of them. Browser extensions are the most effective method. The Hypothesis extension is currently available only for Chrome, but there’s a Firefox extension waiting in the wings, made possible by an emerging consensus around the WebExtensions model.

A second approach entails sending a web page through a proxy that injects an annotation-powered app into the page. The Hypothesis proxy, via.hypothes.is, does that.

Finally there’s the venerable bookmarklet, a snippet of JavaScript that runs in the context of the page when the user clicks a button on the browser’s bookmarks bar. Recent evolution of the browser’s security model has complicated the use of bookmarklets. The simplest ones still work everywhere, but bookmarklets that load helper scripts — like the HelloWorldAnnotated demo — are thwarted by sites that enforce Content Security Policy.
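For concreteness, here’s the general shape of such a script-loading bookmarklet (illustrative only; the helper URL is a placeholder, and a real bookmarklet collapses to a single line):

javascript:(function () {
  // Inject a helper script into the current page. Sites that enforce
  // Content Security Policy will refuse to load it.
  const s = document.createElement('script');
  s.src = 'https://example.com/HelloWorldAnnotated.js';  // placeholder URL
  document.head.appendChild(s);
})();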

It’s a complicated landscape full of vexing tradeoffs. Here’s how I recommend navigating it. The HelloWorldAnnotated demo uses the bookmarklet approach. If you’re an instructional designer targeting Project Gutenberg, or a scientific developer targeting the scientific literature, or a newsroom toolbuilder targeting most sources of public information, you can probably get away with deploying annotation-powered apps using bookmarklets. It’s the easiest way to build and deploy such apps. If the sites you need to target do enforce Content Security Policy, however, then you’ll need to deliver the same apps by way of a browser extension or a proxy. Of those two choices, I’d recommend a browser extension, and if there’s interest, I’ll do a follow-on post showing how to recreate the HelloWorldAnnotated demo in that form.

Annotating on, and with, media

At the 2018 I Annotate conference I gave a flash talk on the topic covered in more depth in Open web annotation of audio and video. These are my notes for the talk, along with the slides.

Here’s something we do easily and naturally on the web: Make a selection in a page.

The web annotation standard defines a few different ways to describe the selection. Here’s how Hypothesis does it:
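With illustrative values, it looks like this:

{
  "selector": [
    {
      "type": "TextQuoteSelector",
      "exact": "four historic wildfire corridors",
      "prefix": "Sonoma County has ",
      "suffix": ", including the Hanly Fire area"
    },
    {
      "type": "TextPositionSelector",
      "start": 2217,
      "end": 2249
    }
  ]
}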

We use a variety of selectors to help make the annotation target resilient to change. I’d like you to focus here on the TextPositionSelector, which defines the start and the end of the selection. That’s something we just take for granted in the textual domain. Of course a selection has a start and an end. How could it not?

An annotation like this gives you a kind of URL that points to a selection in a document. There isn’t yet a standard way to write that URL along with the selectors, so Hypothesis points to that combination — that is, the URL plus the selectors — using an ID, like this:

The W3C has thought about a way to bundle selectors with the URL, so you’d have a standard way to cite a selection, like this:

In any case, the point is we’re moving into a world where selections in web resources have their own individual URLs.

Now let’s look again at this quote from Nancy Pelosi:

That’s not something she wrote. It’s something she said at the Peter G. Peterson Fiscal Summit, captured on video.

Is there a transcript that could be annotated? I looked, and didn’t find anything better than this export of YouTube captions:

But of course we lack transcriptions for quite a lot of web audio and video. And lots of it can’t be transcribed, because it’s music, or silent moving pictures.

Once you’ve got a media selection there are some good ways to represent it with a URL. YouTube does it like this:
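https://www.youtube.com/watch?v=Avxm7JYjk8M&start=1700&end=1716 (an illustrative ID, with start and end in seconds)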

And with filetypes like mp3 and mp4, you can use media fragment syntax like this:
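https://example.com/clip.mp3#t=1700,1716 (illustrative; the fragment is #t=start,end in seconds)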

The harder problem in the media domain turns out to be just making the selection in the first place.

Here I am in that video, in the process of figuring out that the selection I’m looking for starts at 28:20 and ends at 28:36.

It’s not a fun or easy process. You can set the start, as I’ve done here, but then you have to scrub around on the timeline looking for the end, and then write that down, and then tack it onto the media fragment URL that you’ve captured.

It’s weird that something so fundamental should be missing, but there just isn’t an easy and natural way to make a selection in web media.

This is not a new problem. Fifteen years ago, these were the dominant media players.

We’ve made some progress since then. The crazy patchwork of plugins and apps has thankfully converged to a standard player that works the same way everywhere.

Which is great. But you still can’t make a selection!

So I wrote a tool that wraps a selection interface around the standard media player. It works with mp3s, mp4s, and YouTube videos. Unlike the standard player, which has a one-handled slider, this thing has a two-handled slider which is kind of obviously what you need to work with the start and end of a selection.

You can drag the handles to set the start and end, and you can nudge them forward and backward by minutes and seconds, and when you’re ready to review the intro and outro for your clip, you can play just a couple of seconds on both ends to check what you’re capturing.

When you’re done, you get a YouTube URL that will play the selection, start to stop, or an mp3 or mp4 media fragment URL that will do the same.

So how does this relate to annotation? In a couple of ways. You can annotate with media, or you can annotate on media.

Here’s what I mean by annotating with media.

I’ve selected some text that wants to be contextualized by the media quote, and I’ve annotated that text with a media fragment link. Hypothesis turns that link into an embedded player (thanks, by the way, to a code contribution from Steel Wagstaff, who’s here, I think). So the media quote will play, start to stop, in this annotation that’s anchored to a text selection at Politifact.

And here’s what I mean by annotating on media.

If I’m actually on a media URL, I can simply annotate it. In this case there’s no selection to be made; the URL encapsulates the selection, so I annotate the complete URL.

This is a handy way to produce media fragment URLs that you can use in these ways. I hope someone will come up with a better one than I have. But the tool is begging to be made obsolete when the selection of media fragments becomes as easy and natural as the selection of text has always been.

Annotations are an easy way to Show Your Work

Journalists are increasingly being asked to show their work. Politifact does that with a sources box, like this:

This is great! The more citation of sources, the better. If I want to check those sources, though, I often wind up spending a lot of time searching within source articles to find passages cited implicitly but not explicitly. If those passages are marked using annotations, the method I’ll describe here makes that material available explicitly, in ways that streamline the reporter’s workflow and improve the reader’s experience.

In A Hypothesis-powered Toolkit for Fact Checkers I described a toolkit that supported the original incarnation of the Digital Polarization Project. More recently I’ve unbundled the key ingredients of that toolkit and made them separately available for reuse. The ingredient I’ll discuss here, HypothesisFootnotes, is illustrated in this short clip from a 10-minute screencast about the original toolkit. Here’s the upshot: Given a web page that contains Hypothesis direct links, you can include a script that pulls the cited material into the page, and connects direct links in the page to citations gathered elsewhere in the page.

Why would such links exist in the first place? Well, if you’re a reporter working on an investigation, you’re likely to be gathering evidence from a variety of sources on the web. And you’re probably copying and pasting bits and pieces of those sources into some kind of notebook. If you annotate passages within those sources, you’ve captured them in a way that streamlines your own personal (or team) management of that information. That’s the primary benefit. You’re creating a virtual notebook that makes that material available not only to you, but to your team. Secondarily, it’s available to readers of your work, and to anyone else who can make good use of the material you’ve gathered.

The example here brings an additional benefit, building on another demonstration in which I showed how to improve Wikipedia citations with direct links. There I showed that a Wikipedia citation need not only point to the top of a source web page. If the citation refers specifically to a passage within the source page, it can use a direct link to refer the reader directly to that passage.

Here’s an example that’s currently live in Wikipedia:

It looks like any other source link in Wikipedia. But here’s where it takes you in the cited Press Democrat story:

Not every source link warrants this treatment. When a citation refers to a specific context in a source, though, it’s really helpful to send the reader directly to that context. It can be time-consuming to follow a set of direct links to see cited passages in context. Why not collapse them into the article from which they are cited? That’s what HypothesisFootnotes does. The scattered contexts defined by a set of Hypothesis direct links are assembled into a package of footnotes within the article. Readers can still visit those contexts, of course, but since time is short and attention is scarce, it’s helpful to collapse them into an included summary.

This technique will work in any content management system. If you have a page that cites sources using direct links, you can use the HypothesisFootnotes script to bring the targets of those direct links into the page.
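The gist of the script, as a minimal sketch (not the actual HypothesisFootnotes code): resolve each direct link to its annotation via the Hypothesis API, then surface the quoted passage in the citing page.

// Resolve a Hypothesis direct link like https://hyp.is/uT06DvGBEeeXDlMfjMFDAg
// to its annotation and recover the quoted passage. A sketch only.
async function quoteForDirectLink(directLink) {
  const id = new URL(directLink).pathname.split('/')[1];
  const resp = await fetch('https://api.hypothes.is/api/annotations/' + id);
  const anno = await resp.json();
  const quote = anno.target[0].selector
    .find(s => s.type === 'TextQuoteSelector');
  return { source: anno.uri, exact: quote ? quote.exact : null };
}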

Investigating ClaimReview

In Annotating the Wild West of Information Flow I discussed a prototype of a ClaimReview-aware annotation client. ClaimReview is a standard way to format “a fact-checking review of claims made (or reported) in some creative work.” Now that both Google and Bing are ingesting ClaimReview markup, and fact-checking sites are feeding it to them, the workflow envisioned in that blog post has become more interesting. A fact checker ought to be able to highlight a reviewed claim in its original context, then capture that context and push it into the app that produces the ClaimReview markup embedded in fact-checking articles and slurped by search engines.

That workflow is one interesting use of annotation in the domain of fact-checking; it’s doable, and I plan to explore it. But here I’ll focus instead on using annotation to project claim reviews onto the statements they review, in original context. Why do that? Search engines may display fact-checking labels on search results, but nothing shows up when you land directly on an article that includes a reviewed claim. If the reviewed claim were annotated with the review, an annotation client could surface it.

To help make that idea concrete, I’ve been looking at ClaimReview markup in the wild. Not unexpectedly, it gets used in slightly different ways. I wrote a bookmarklet to help visualize the differences, and used it to find these five patterns:

https://www.washingtonpost.com/news/fact-checker/wp/2018/03/16/does-president-trumps-border-wall-pay-for-itself

{
  "urlOfFactCheck": "https://www.washingtonpost.com/news/fact-checker/wp/2018/03/16/does-president-trumps-border-wall-pay-for-itself/?utm_term=.31281569ab46",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "-1",
    "alternateName": "Fuzzy math",
    "worstRating": "-1",
    "bestRating": "-1",
    "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-b89013e8-729f-44aa-816e-93a9307b079c"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Donald Trump ",
      "jobTitle": "President",
      "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-b89013e8-729f-44aa-816e-93a9307b079c",
      "sameAs": [
        "www.whitehouse.gov"
      ]
    },
    "datePublished": "2018-03-13",
    "name": "in a tweet"
  },
  "claimReviewed": "\"The $18 billion wall will pay for itself by curbing the importation of crime, drugs and illegal immigrants who tend to go on the federal dole.”"
}

https://www.snopes.com/fact-check/did-captured-islamic-state-obama/

{
  "urlOfFactCheck": "https://www.snopes.com/fact-check/did-captured-islamic-state-obama/",
  "reviewRating": {
    "alternateName": "False",
    "ratingValue": "",
    "bestRating": ""
  },
  "itemReviewed": {
    "datePublished": "2018-03-13T08:25:46+00:00",
    "name": "Snopes",
    "sameAs": "http://dailyworldupdate.com/2018/03/11/breaking-captured-isis-leader-had-obama-on-speed-dial/"
  },
  "claimReviewed": "A captured Islamic State leader's cell phone contained the phone numbers of world leaders, including former President Barack Obama."
}

https://www.factcheck.org/2018/03/trump-misses-mars-attack/

{
  "urlOfFactCheck": "https://www.factcheck.org/2018/03/trump-misses-mars-attack/",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "-1",
    "alternateName": "Clinton Endorsed Mars Flight",
    "worstRating": "-1",
    "bestRating": "-1",
    "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-76945159-07c1-49d6-833e-717c1da2478f"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Donald Trump",
      "jobTitle": "President of the United States",
      "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-76945159-07c1-49d6-833e-717c1da2478f",
      "sameAs": [
        "http://www.whitehouse.gov"
      ]
    },
    "datePublished": "2018-03-13",
    "name": "Speech at military base"
  },
  "claimReviewed": "NASA “wouldn’t have been going to Mars if my opponent won.\""
}

http://www.politifact.com/texas/statements/2018/mar/15/ted-cruz/ted-cruz-says-beto-orourke-wants-open-borders-and-/

{
  "urlOfFactCheck": "http://www.politifact.com/texas/statements/2018/mar/15/ted-cruz/ted-cruz-says-beto-orourke-wants-open-borders-and-/",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "2",
    "alternateName": "False",
    "worstRating": "1",
    "bestRating": "7",
    "image": "https://dhpikd1t89arn.cloudfront.net/rating_images/politifact/tom-false.jpg"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Ted Cruz",
      "jobTitle": "U.S. senator for Texas",
      "image": "https://dhpikd1t89arn.cloudfront.net/rating_images/politifatc/tom-false.jpg",
      "sameAs": [
        "https://www.youtube.com/watch?v=MvKElqdUxZg"
      ]
    },
    "datePublished": "2018-03-06",
    "name": "Houston, Texas"
  },
  "claimReviewed": "Says Beto O’Rourke wants \"open borders and wants to take our guns.\""
}

https://climatefeedback.org/claimreview/breitbart-repeats-bloggers-unsupported-claim-noaa-manipulates-data-exaggerate-warming/

{
  "urlOfFactCheck": "https://climatefeedback.org/claimreview/breitbart-repeats-bloggers-unsupported-claim-noaa-manipulates-data-exaggerate-warming/",
  "reviewRating": {
    "@type": "Rating",
    "alternateName": "Unsupported",
    "bestRating": 5,
    "ratingValue": 2,
    "worstRating": 1
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "James Delingpole"
    },
    "headline": "TBD",
    "image": {
      "@type": "ImageObject",
      "height": "200px",
      "url": "https://climatefeedback.org/wp-content/uploads/Sources/Delingpole_Breitbart_News.png",
      "width": "200px"
    },
    "publisher": {
      "@type": "Organization",
      "logo": "Breitbart",
      "name": "Breitbart"
    },
    "datePublished": "20 February 2018",
    "url": "TBD"
  },
  "claimReviewed": "NOAA has adjusted past temperatures to look colder than they were and recent temperatures to look warmer than they were."
}

To normalize these differences I’ll need to look at more examples from these sites. But I’m also looking for more patterns, so if you know of other sites that routinely embed ClaimReview markup, please drop me links!
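For what it’s worth, the bookmarklet’s essential move is just to scan the page’s JSON-LD script tags for ClaimReview objects. A minimal sketch (not the bookmarklet’s exact code):

// Collect ClaimReview objects from a page's JSON-LD script tags.
function findClaimReviews() {
  const results = [];
  const tags = document.querySelectorAll('script[type="application/ld+json"]');
  for (const tag of tags) {
    try {
      const data = JSON.parse(tag.textContent);
      const items = Array.isArray(data) ? data : [data];
      for (const item of items) {
        if (item['@type'] === 'ClaimReview') {
          results.push(item);
        }
      }
    } catch (e) {
      // ignore unparseable JSON-LD
    }
  }
  return results;
}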


Open web annotation of audio and video

Hypothesis does open web annotation of text. Let’s unpack those concepts one at a time.

Open, a famously overloaded word, here means:

  • The software that reads, writes, stores, indexes, and searches for annotations is available under an open source license.

  • Open standards govern the representation and exchange of annotations.

  • The annotation system is a good citizen of the open web.

Web annotation, in the most general sense, means:

  • Selecting something within a web resource.

  • Attaching stuff to that selection.

Text, as the Hypothesis annotation client understands it, is HTML, or PDF transformed to HTML. In either case, it’s what you read in a browser, and what you select when you make an annotation.

What’s the equivalent for audio and video? It’s complicated because although browsers enable us to select passages of text, the standard media players built into browsers don’t enable us to select segments of audio and video.

It’s trivial to isolate a quote in a written document. Click to set your cursor to the beginning, then sweep to the end. Now annotation can happen. The browser fires a selection event; the annotation client springs into action; the user attaches stuff to the selection; the annotation server saves that stuff; the annotation client later recalls it and anchors it to the selection.

But selection in audio and video isn’t like selection in text. Nor is it like selection in images, which we easily and naturally crop. Selection of audio and video happens in the temporal domain. If you’ve ever edited audio or video you’ll appreciate what that means. Setting a cursor and sweeping a selection isn’t enough. You can’t know that you got the right intro and outro by looking at the selection. You have to play the selection to make sure it captures what you intended. And since it probably isn’t exactly right, you’ll need to make adjustments that you’ll then want to check, ideally without replaying the whole clip.

All audio and video editors support making and checking selections. But what’s built into modern browsers is a player, not an editor. It provides a slider with one handle. You can drag the handle back and forth to play at different times, but you can’t mark a beginning and an end.

YouTube shares that limitation, by the way. It’s great that you can right-click and Copy video URL at current time. But you can’t mark that as the beginning of a selection, and you can’t mark a corresponding end.

We mostly take this limitation for granted, but as more of our public discourse happens in audio or video, rarely supported by written transcripts, we will increasingly need to be able to cite quotes in the temporal domain.

Open web annotation for audio and video will, therefore, require standard players that support selection in the temporal domain. I expect we’ll get there, but meanwhile we need to provide a way to do temporal selection.

In Annotating Web Audio I described a first cut at a clipping tool that wraps a selection interface around the standard web audio and video players. It was a step in the right direction, but was too complex — and too modal — for convenient use. So I went back to the drawing board and came up with a different approach shown here:

This selection tool has nothing intrinsically to do with annotation. Its job is to make your job easier when you’re constructing a link to an audio or video segment.

For example, in Welcome to the Sapiezoic I reflected on a Long Now talk by David Grinspoon. Suppose I want to support that post with an audio pull quote. The three-and-a-half-minute segment I want to use begins: “The Anthropocene has been proposed as a new epoch…” It ends: “…there has never been a geological force aware of its own existence. And to me that’s a very profound change.”

Try opening http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3 and finding the timecodes for the beginning (“The Anthropocene…”) and the end (“…profound change”). The link you want to construct is: http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3#t=730,949. That’s a Media Fragments link. It tells a standard media player when to start and stop.

If you click the bare link, your browser will load the audio file into its standard player and play the clip. If you paste the link into a piece of published text, it may even be converted by the publishing system into an inline player. Here’s an annotation on my blog post that does that:

That’s a potent annotation! But to compose it you have to find where the quote starts (12:10) and ends (15:49), then convert those timecodes to seconds. Did you try? It’s doable, but so fiddly that you won’t do it easily or routinely.
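The conversion itself is one line of arithmetic; a tiny helper (illustrative) takes the fiddliness out of it:

// Convert a mm:ss timecode to the seconds used in a media fragment URL.
function timecodeToSeconds(timecode) {
  const [minutes, seconds] = timecode.split(':').map(Number);
  return minutes * 60 + seconds;
}

// 12:10 -> 730, 15:49 -> 949, yielding #t=730,949
const fragment = '#t=' + timecodeToSeconds('12:10') + ',' + timecodeToSeconds('15:49');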

The tools at http://jonudell.net/av/ aim to lower that activation threshold. They provide a common interface for mp3 audio, mp4 video, and YouTube video. The interface embeds a standard audio or video player, and adds a two-handled slider that marks the start and end of a clip. You can adjust the start and end and hear (or hear and see) the current intro and outro. At every point you’ve got a pair of synced permalinks. One is the player link, a media fragment URL like http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3#t=730,949. The other is the editor link. It records the settings that produce the player link.

As I noted in Annotating Web Audio, these clipping tools are just one way to ease the pain of constructing media fragment URLs. Until standard media players enable selection, we’ll need complementary tools like these to help us do it.

Once we have a way to construct timecoded segments, we can return to the question: “What is open web annotation for audio and video?”

At the moment, I see two complementary flavors. Here’s one:

I’m using a Hypothesis page note to annotate a media fragment URL. There’s no text to which Hypothesis can anchor an annotation, but it can record a note that refers to the whole page. In effect I’m bookmarking the page as I could also do in, for example, Pinboard. The URL encapsulates the selection; annotations that attach to the URL refer to the selection.

This doesn’t yet work as you might expect in Hypothesis. If you visit http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3#t=730,949 with Hypothesis you’ll see my annotation, but you’ll also see it if you visit the same URL at #t=1,10, or any other media fragment. That’s helpful in one way, because if there were multiple annotations on the podcast you’d want to discover all of them. But it’s unhelpful in a more important way. If you want to annotate a particular segment for personal use, or to mark it as a pull quote, or because you’re a teacher leading a class discussion that refers to that segment, then you want annotations to refer to the particular segment.

Why doesn’t Hypothesis distinguish between #t=1,10 and #t=730,949? Because what follows the # in a URL is most often not a media fragment; it’s an ordinary fragment that marks a location in text. You’ve probably seen how that works. A page like example.com/page1 has intrapage links like example.com/page1#section1 and example.com/page1#section2. I can send you to those locations in the page by capturing and relaying those fragment-enhanced links. In that case, if we’re annotating the page, we’d most likely want all our annotations to show up, no matter which fragment is focused in the browser. So Hypothesis doesn’t record the fragment when it records the target URL for an annotation.

But there can be special fragments. If I share a Hypothesis link, for example https://hyp.is/uT06DvGBEeeXDlMfjMFDAg, you’ll land on a URL that ends with #annotations:uT06DvGBEeeXDlMfjMFDAg. The Hypothesis client uses that information to scroll the annotated document to the place where the annotation is anchored.

Media fragments could likewise be special. The Hypothesis server, which normally discards media fragments, could record them so that annotations made on a media fragment URL would target the fragment. And I think that’s worth doing. Meanwhile, note that you can annotate the editor links provided by the tools at http://jonudell.net/av/:

This works because the editor links don’t use fragments, only URL parameters, and because Hypothesis regards each uniquely-parameterized URL as a distinct annotation target. Note also that you needn’t use the tools as hosted on my site. They’re just a small set of files that can be hosted anywhere.
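If Hypothesis did decide to special-case media fragments, the required distinction could be as simple as this sketch (my speculation, not actual Hypothesis code):

// Keep a fragment in the target URL only when it's a media fragment
// like #t=10,20; discard ordinary #section fragments.
function normalizeTargetUri(url) {
  const u = new URL(url);
  const isMediaFragment = /^#t=[\d.:,]+$/.test(u.hash);
  if (!isMediaFragment) u.hash = '';
  return u.toString();
}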

Eventually I hope we’ll get open web annotation of audio and video that’s fully native, meaning not only that the standard players support selection, but also that they directly enable creation and viewing of annotations. Until then, though, this flavor of audio/video annotation — let’s call it annotating on media — will require separate tooling both for selecting quotes, and for creating and viewing annotations directly on those quotes.

We’ve already seen the other flavor: annotating with media. To do that with Hypothesis, construct a media fragment URL and cite it in a Hypothesis annotation. What should the annotation point to? That’s up to you. I attached the David Grinspoon pull quote to one of my own blog posts. When I watched a PBS interview with Virginia Eubanks, I captured one memorable segment and attached it to the page on her blog that features the book discussed in the interview.

(If I were Virginia Eubanks I might want to capture the pull quote myself, and display it on my book page for visitors who aren’t seeing it through the Hypothesis lens.)

Open web annotation of audio and video should encompass both of these flavors. You should be able to select a clip within a standard player, and annotate it in situ. And you should be able to use that clip in an annotation.

Until players enable selection, the first flavor — annotating on a segment — will require a separate tool. I’ve provided one implementation; there can be (perhaps already are) others. However it’s captured, the selection will be represented as a media fragment link. Hypothesis doesn’t yet support annotation of such links in a way that targets media fragments, but it pretty easily could.

The second flavor — annotating with a segment — again requires a way to construct a media fragment link. With that in hand, you can just paste the link into the Hypothesis annotation editor. Links ending with .mp3#t=10,20 and links like youtube.com/watch?v=Avxm7JYjk8M&start=10&end=20 will become embedded players that start and end at the indicated times. Links like .mp4#t=10,20 and youtube.com/embed/Avxm7JYjk8M?start=10&end=20 don’t yet become embedded players, but could.
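The transformation involved is modest. Here’s an illustrative sketch for the mp3 case (not the actual Hypothesis client code):

// Turn a media fragment link into an inline HTML5 player. Browsers
// honor #t=start,end in the src of an audio element.
function embedPlayerFor(link) {
  if (/\.mp3#t=[\d.,]+$/.test(link)) {
    const audio = document.createElement('audio');
    audio.controls = true;
    audio.src = link;    // e.g. clip.mp3#t=10,20 plays 0:10 to 0:20
    return audio;
  }
  return null;           // other link types stay plain links
}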

The ideal implementation of open web annotation for audio and video will have to wait for a next generation of standard media players. But you can use Hypothesis today to annotate on as well as with media.

How to improve Wikipedia citations with Hypothesis direct links

Wikipedia aims to be verifiable. Every statement of fact should be supported by a reliable source that the reader can check. Citations in Wikipedia typically refer to online documents accessible at URLs. But with the advent of standard web annotation we can do better. We can add citations to Wikipedia that refer precisely to statements that support Wikipedia articles.

According to Wikipedia’s policy on citing sources:

Wikipedia’s Verifiability policy requires inline citations for any material challenged or likely to be challenged, and for all quotations, anywhere in article space.

Last night, reading https://en.wikipedia.org/wiki/Tubbs_Fire, I noticed this unsourced quote:

Sonoma County has four “historic wildfire corridors,” including the Hanly Fire area.

I searched for the source of that quotation, found it in a Press Democrat story, annotated the quote, and captured a Hypothesis direct link to the annotation. In this screenshot, I’ve clicked the annotation’s share icon, and then clicked the clipboard icon to copy the direct link to the clipboard. The direct link encapsulates the URL of the story, plus the information needed to locate the quotation within the story.

Given such a direct link, it’s straightforward to use it in a Wikipedia citation. Back in the Wikipedia page I clicked the Edit link, switched to the visual editor, set my cursor at the end of the unsourced quote, and clicked the visual editor’s Cite button to invoke this panel:

There I selected the news template, and filled in the form in the usual way, providing the title of the news story, its date, its author, the name of the publication, and the date on which I accessed the story. There was just one crucial difference. Instead of using the Press Democrat URL, I used the Hypothesis direct link.

And voilà! There’s my citation, number 69, nestled among all the others.

Citation, as we’ve known it, begs to be reinvented in the era of standard web annotation. When I point you to a document in support of a claim, I’m often thinking of a particular statement in that document. But the burden is on you to find that statement in the document to which my citation links. And when you do, you may not be certain you’ve found the statement implied by my link. When I use a direct link, I relieve you of that burden and uncertainty. You land in the cited document at the right place, with the supporting statement highlighted. And if it’s helpful we can discuss the supporting statement in that context.

I can envision all sorts of ways to turbocharge Wikipedia’s workflow with annotation-powered tools. But no extra tooling is required to use Hypothesis and Wikipedia in the way I’ve shown here. If you find an unsourced quote in Wikipedia, just annotate it in its source context, capture the direct link, and use it in the regular citation workflow. For a reader who clicks through Wikipedia citations to check original sources, this method yields a nice improvement over the status quo.

Annotating Web Audio

On a recent walk I listened to Unmasking Misogyny on Radio Open Source. One of the guests, Danielle McGuire, told the story of Rosa Parks’ activism in a way I hadn’t heard before. I wanted to capture that segment of the show, save a link to it, and bookmark the link for future reference.

If you visit the show page and click the download link, you’ll load the show’s MP3 file into your browser’s audio player. Nowadays that’s almost always going to be the basic HTML5 player. Here’s what it looks like in various browsers:

The show is about an hour long. I scrubbed along the timeline until I heard Danielle McGuire’s voice, and then zeroed in on the start and end of the segment I wanted to capture. It starts at 18:14 and ends at 21:11. Now, how to link to that segment?

I first investigated this problem in 2004. Back then, I learned that it’s possible to fetch and play arbitrary segments of MP3 files, and I made a web app that would figure out, given start and stop times like 18:14 and 21:11, which part of the MP3 file to fetch and play. Audio players weren’t (and still aren’t) optimized for capturing segments as pairs of minute:second parameters. But once you acquired those parameters, you could form a link that would invoke the web app and play the segment. Such links could then be curated, something I often did using the del.icio.us bookmarking app.

Revisiting those bookmarks now, I’m reminded that Doug Kaye and I were traveling the same path. At ITConversations he had implemented a clipping service that produced URLs like this:

http://www.itconversations.com/clip.php?showid=378&start=4:37&stop=6:04

Mine looked like this:

http://udell.infoworld.com/?url=http://www.itconversations.com/audio/download/ITConversations-378.mp3&beg=4:37&end=6:04

Both of those audio-clipping services are long gone. But the audio files survive, thanks to the exemplary cooperation between ITConversations and the Internet Archive. So now I can resurrect that ITConversations clip — in which Doug Engelbart, at the Accelerating Change conference in 2004, describes the epiphany that inspired his lifelong quest — like so:

http://jonudell.net/av/audio.html?url=http://jonudell.net/audio/ITC.AC2004-DougEngelbart-2004.11.07.mp3&startmin=4&startsec=37&endmin=6&endsec=4

And here’s the segment of Danielle McGuire’s discussion of Rosa Parks that I wanted to remember:

http://jonudell.net/av/audio.html?url=http://ia601506.us.archive.org/5/items/171207OSPODCASTSexualHarassment/171207-OS-PODCAST-SexualHarassment.mp3&startmin=18&startsec=14&endmin=21&endsec=11

This single-page JavaScript app aims to function both as a player of predefined segments, and as a tool that makes it as easy as possible to define segments. It’s still a work in progress, but I’m able to use it effectively even as I continue to refine the interaction details.

For curation of these clips I am, of course, using Hypothesis. Here are some of the clips I’ve collected on the tag AnnotatingAV:

To create these annotations I’m using Hypothesis page notes. An annotation of this type is like a del.icio.us or pinboard.in bookmark. It refers to the whole resource addressed by a URL, rather than to a segment of interest within a resource.

Most often, a Hypothesis user defines a segment of interest by selecting a passage of text in a web document. But if you’re not annotating any particular selection, you can use a page note to comment on, tag, and discuss the whole document.

Since each audio clip defines a segment as a standalone web page with a unique URL, you can use a Hypothesis page note to annotate that standalone page:

It’s a beautiful example of small pieces loosely joined. My clipping tool is just one way to form URLs that point to audio and video segments. I hope others will improve on it. But any clipping tool that produces unique URLs can work with Hypothesis and, of course, with any other annotation or curation tool that targets URLs.