Renaming Hypothesis tags

Wherever social tagging is supported as an optional feature, its use obeys a power law. Some people use tags consistently, some sporadically, most never. This chart of Hypothesis usage illustrates the familiar long-tail distribution:

Those of us in the small minority of consistent taggers care a lot about the tag namespaces we’re creating. We tag in order to classify resources, and we want to classify them consistently, but we also want to morph our tag namespaces to reflect our changing intuitions about how to classify, and to adapt to evolving social conventions.

Consistent tagging requires a way to make, use, and perhaps share a list of controlled tags, and that’s a topic for another post. Morphing tag namespaces to satisfy personal needs, or to adapt to social conventions, requires the ability to rename tags, and that’s the focus of this post.

There’s nothing new about the underlying principles. When I first got into social tagging, with del.icio.us back in the day, I made a screencast to show how I was using del.icio.us’ tag renaming feature to reorganize my own classification system, and to evolve it in response to community tag usage. The screencast argues that social taggers form a language community in which metadata vocabularies can evolve in the same way natural languages do.

Over the years I’ve built tag renamers for other systems, and now I’ve made one for Hypothesis, as shown in this 90-second demo. If you’re among the minority who want to manage your tags in this way, you’re welcome to use the tool; here’s the link. But please proceed with care. When you reorganize a tag namespace, it’s possible to wind up on the wrong end of the arrow of entropy!
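Under the hood, a tag renamer is a small exercise in the Hypothesis API: search for your annotations that carry the old tag, then update each one’s tag list. Here’s a minimal sketch, assuming a personal API token; the actual tool’s code differs, and pagination is omitted for brevity.

```javascript
// Minimal sketch of a Hypothesis tag renamer.
// TOKEN and USER are placeholders for your own values.
const API = 'https://api.hypothes.is/api';
const TOKEN = 'YOUR-API-TOKEN';
const USER = 'acct:you@hypothes.is';

async function renameTag(oldTag, newTag) {
  // Find this user's annotations that carry the old tag.
  const params = new URLSearchParams({ user: USER, tag: oldTag, limit: '200' });
  const result = await fetch(`${API}/search?${params}`, {
    headers: { Authorization: `Bearer ${TOKEN}` }
  }).then(r => r.json());

  for (const anno of result.rows) {
    // Swap the old tag for the new one, preserving the other tags.
    const tags = anno.tags.map(t => (t === oldTag ? newTag : t));
    await fetch(`${API}/annotations/${anno.id}`, {
      method: 'PATCH',
      headers: {
        Authorization: `Bearer ${TOKEN}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ tags })
    });
  }
}
```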

Letters to Mr. Wilson’s Museum of Jurassic Technology

Dear Mr. Wilson,

Your Museum of Jurassic Technology (MJT) first came to my attention in 1995 when I read an excerpt from Lawrence Weschler’s Mr. Wilson’s Cabinet of Wonder in the New Yorker. Or anyway, that’s the origin story I’ve long remembered and told. As befits the reality-warping ethos of the MJT, it seems not to be true. I can find no record of such an article in the New Yorker’s archive. (Maybe I read it in Harper’s?) What I can find, in the New York Times’ archive, is a review of the book that nicely captures that ethos:

Run by its eccentric proprietor out of a storefront in Culver City, Calif., the museum is clearly a modern-day version, as Mr. Weschler astutely points out, of the “wonder-cabinets” that sprang up in late Renaissance Europe, inspired by all the discoveries in the New World. David Wilson comes off as an amusingly Casaubonesque figure who, in his own little way, seeks to amass all the various kinds of knowledge in the world; and if his efforts seem random and arcane, they at any rate sound scientifically specific. Yet when Mr. Weschler begins to check out some of the information in the exhibits, we discover that much of it is made up or imagined or so elaborately embroidered as to cease to resemble any real-world facts.

The key to the pleasure of this book lies in that “much of,” for the point of David Wilson’s museum is that you can’t tell which parts are true and which invented. In fact, some of the unlikeliest items — the horned stink ants, for instance — turn out to be pretty much true. In the wake of its moment of climactic exposure, “Mr. Wilson’s Cabinet of Wonder” turns into an expedition in which Lawrence Weschler tracks down the overlaps, correspondences and occasionally tenuous connections between historical and scientific reality on the one hand and the Museum of Jurassic Technology on the other.

We’d always wanted to visit the MJT, and finally made the pilgrimage a few weeks ago. Cruising down the California coast on a road trip, we made the museum our only LA destination. The miserable traffic we fought all day, both entering and leaving Culver City, was a high price to pay for several hours of joy at the museum. But it was worth it.

At one point during our visit, I was inspecting a display case filled with a collection of antique lace, each piece accompanied by a small freestanding sign describing the item. On one of the signs, the text degrades hilariously. As I remember it (no doubt imperfectly) there were multiple modes of failure: bad kerning, faulty line spacing, character misencoding, dropouts, faded print. Did I miss any?

While I was laughing at the joke, I became aware of another presence in the dark room, a man with white hair who seemed to pause near me before wandering off to an adjoining dark room. I thought it might be you, but I was too intimidated to ask.

The other night, looking for videos that might help me figure out if that really was you, I found a recording of an event at the USC Fisher Museum of Art. In that conversation you note that laughter is a frequent response to the MJT:

“We don’t have a clue what people are laughing about,” you said, “but it’s really hard to argue with laughter.”

I was laughing at that bit of hilariously degraded signage. I hope that was you hovering behind me, and I hope that you were appreciating my appreciation of the joke. But maybe not. Maybe that’s going to be another questionable memory. Whatever the truth may be, thank you. We need the laughs now more than ever.

Yours sincerely,

Jon Udell

Searching across silos, circa 2018

The other day, while searching across various information silos — WordPress, Slack, readthedocs.io, GitHub, Google Drive, Google Groups, Zendesk, Stack Overflow for Teams — I remembered the first time I simplified that chore. It was a fall day in 1996. I’d been thinking about software components since writing BYTE’s Componentware¹ cover story in 1994, and about web software since launching the BYTE website in 1995. On that day in 1996 I put the two ideas together for the first time, and wrote up my findings in an installment of my monthly column entitled On-Line Componentware. (Yes, we really did write “On-Line” that way.)

The solution described in that column was motivated by McGraw-Hill’s need to unify search across BYTE and some other McGraw-Hill pubs that were coming online. Alta Vista, then the dominant search engine, was indexing all the sites, but it didn’t offer a way to search across them. It did, however, offer a `site:` query, as Google and Bing do today, so you could run per-site queries. I saw that it was possible to regard Alta Vista as a new kind of software component, one that a new kind of web-based application could call once per site, stitching the results into a single web page.

Over the years I’ve done a few of these metasearch apps. In the pre-API era that meant normalizing results from scraped web pages. In the API era it often means normalizing results fetched from per-site APIs. But that kind of normalization was overkill this time around; I just needed an easier way to search for words that might be on our blog, or in our docs, or in our GitHub repos, or in our Google storage. So I made a web page that accepts a search term, runs a bunch of site-specific searches, and opens each into a new tab.
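The page needs hardly any code. Here’s a minimal sketch of the idea; the silo list and URL templates are illustrative stand-ins, not the ones I actually use.

```javascript
// Sketch of a one-page metasearcher: one URL template per silo,
// one new tab per silo. The templates below are illustrative only.
const siloSearches = [
  q => `https://github.com/search?q=${q}`,
  q => `https://stackoverflow.com/search?q=${q}`,
  q => `https://drive.google.com/drive/search?q=${q}`
];

function searchSilos(term) {
  const q = encodeURIComponent(term);
  // Browsers may block multiple window.open calls unless this runs
  // in response to a user gesture, e.g. a button click.
  siloSearches.forEach(makeUrl => window.open(makeUrl(q), '_blank'));
}
```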

This solution isn’t nearly as nice as some of my prior metasearchers. But it’s way easier than authenticating to APIs and merging their results, or using a bunch of different query syntaxes interactively. And it’s really helping me find stuff scattered across our silos.

But — there’s always a but — you’ll notice that Slack and Zendesk aren’t yet in the mix. All the other services make it possible to form a URL that includes the search term. That’s just basic web thinking. A set of web search results is an important kind of web resource that, like any other, should be URL-accessible. Unless I’m wrong (in which case someone please correct me), I can’t do the equivalent of `?q=notifications` in a Slack or Zendesk URL. Of course it’s possible to wrap their APIs in a way that enables URL query, but I shouldn’t have to. Every search deserves its own URL.


¹ Thanks as always to Brewster Kahle for preserving what publishers did not.

Annotation-powered apps: A “Hello World” example

Workflows that can benefit from annotation exist in many domains of knowledge work. You might be a teacher who focuses a class discussion on a line in a poem. You might be a scientist who marks up research papers in order to reason about connections among them. You might be an investigative journalist fact-checking a claim. A general-purpose annotation tool like Hypothesis, already useful in these and other cases, can be made more useful when tailored to a specific workflow.

I’ve tried a few different approaches to that sort of customization. For the original incarnation of the Digital Polarization Project, I made a Chrome extension that streamlined the work of student fact-checkers. For example, it assigned tags to annotations in a controlled way, so that it was easy to gather all the annotations related to an investigation.

To achieve that effect the Digipo extension repurposed the same core machinery that the Hypothesis client uses to convert a selection in a web page into the text quote and text position selectors that annotations use to identify and anchor to selections in web pages. Because the Digipo extension and the Hypothesis extension shared this common machinery, annotations created by the former were fully interoperable with those created by the latter. The Hypothesis client didn’t see Digipo annotations as foreign in any way. You could create an annotation using the Digipo tool, with workflow-defined tags, and then open that annotation in Hypothesis and add a threaded discussion to it.

As mentioned in Annotations are an easy way to Show Your Work, I’ve been unbundling the ingredients of the Digipo tool to make them available for reuse. Today’s project aims to be the “Hello World” of custom annotation-powered apps. It’s a simple working example of a powerful three-step pattern:

  1. Given a selection in a page, form the selectors needed to post an annotation that targets the selection.
  2. Lead a user through an interaction that influences the content of that annotation.
  3. Post the annotation.
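Here’s a minimal sketch of the pattern in JavaScript. The selector-forming machinery is stubbed out with a bare TextQuoteSelector (the real components also compute prefix, suffix, and text position), and the names and prompts are illustrative, not the actual component APIs.

```javascript
// Sketch of the three-step pattern: describe the selection,
// interact with the user, post the annotation to Hypothesis.
async function annotateSelection(apiToken, group) {
  // Step 1: form a selector for the current selection.
  // (The real machinery also computes prefix/suffix and offsets.)
  const exact = window.getSelection().toString();
  const selectors = [{ type: 'TextQuoteSelector', exact }];

  // Step 2: an interaction that shapes the annotation's content.
  const tag = window.prompt('Label for this selection:');
  const comment = window.prompt('Comment:');

  // Step 3: post the annotation.
  await fetch('https://api.hypothes.is/api/annotations', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      uri: location.href,
      group,
      tags: [tag],
      text: comment,
      target: [{ source: location.href, selector: selectors }]
    })
  });
}
```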

For the example scenario, the user is a researcher tasked with evaluating the emotional content of selected passages in texts. The researcher is required to label the selection as StrongPositive, WeakPositive, Neutral, WeakNegative, or StrongNegative, and to provide freeform comments on the tone of the selection and the rationale for the chosen label.

Here’s the demo in action:

The code behind the demo is here. It leverages two components. The first encapsulates the core machinery used to identify the selection. The second provides a set of functions that make it easy to create annotations and post them to Hypothesis. Some of these provide user interface elements, notably the dropdown list of Hypothesis groups that you can also see in other Hypothesis-oriented tools like facet and zotero. Others wrap the parts of the Hypothesis API used to search for, or create, annotations.

My goal is to show that you needn’t be a hotshot web developer to create a custom annotation-powered app. If the pattern embodied by this demo app is of interest to you, and if you’re at all familiar with basic web programming, you should be able to cook up your own implementations of the pattern pretty easily.

To work with selections in web pages, annotation-powered apps like these have to get themselves injected into web pages. There are three ways that can happen, and the Hypothesis client exploits all of them. Browser extensions are the most effective method. The Hypothesis extension is currently available only for Chrome, but there’s a Firefox extension waiting in the wings, made possible by an emerging consensus around the WebExtensions model.

A second approach entails sending a web page through a proxy that injects an annotation-powered app into the page. The Hypothesis proxy, via.hypothes.is, does that.

Finally there’s the venerable bookmarklet, a snippet of JavaScript that runs in the context of the page when the user clicks a button on the browser’s bookmarks bar. Recent evolution of the browser’s security model has complicated the use of bookmarklets. The simplest ones still work everywhere, but bookmarklets that load helper scripts — like the HelloWorldAnnotated demo — are thwarted by sites that enforce Content Security Policy.
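For concreteness, a script-loading bookmarklet is just a javascript: URL along these lines (the helper script URL here is a placeholder, not the demo’s); this kind of dynamic script injection is exactly what Content Security Policy can forbid.

```javascript
// A script-loading bookmarklet: inject a helper script into the
// current page. The script URL is a placeholder.
javascript:(function () {
  var s = document.createElement('script');
  s.src = 'https://example.com/helloWorldAnnotated.js';
  document.body.appendChild(s);
})();
```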

It’s a complicated landscape full of vexing tradeoffs. Here’s how I recommend navigating it. The HelloWorldAnnotated demo uses the bookmarklet approach. If you’re an instructional designer targeting Project Gutenberg, or a scientific developer targeting the scientific literature, or a newsroom toolbuilder targeting most sources of public information, you can probably get away with deploying annotation-powered apps using bookmarklets. It’s the easiest way to build and deploy such apps. If the sites you need to target do enforce Content Security Policy, however, then you’ll need to deliver the same apps by way of a browser extension or a proxy. Of those two choices, I’d recommend a browser extension, and if there’s interest, I’ll do a follow-on post showing how to recreate the HelloWorldAnnotated demo in that form.

Annotating on, and with, media


At the 2018 I Annotate conference I gave a flash talk on the topic covered in more depth in Open web annotation of audio and video. These are my notes for the talk, along with the slides.


Here’s something we do easily and naturally on the web: Make a selection in a page.

The web annotation standard defines a few different ways to describe the selection. Here’s how Hypothesis does it:
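The slide showed a Hypothesis annotation target along these lines; the selector values here are reconstructed for illustration.

```json
{
  "source": "https://example.com/page",
  "selector": [
    {
      "type": "TextQuoteSelector",
      "exact": "Make a selection in a page",
      "prefix": "naturally on the web: ",
      "suffix": "."
    },
    {
      "type": "TextPositionSelector",
      "start": 412,
      "end": 438
    }
  ]
}
```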

We use a variety of selectors to help make the annotation target resilient to change. I’d like you to focus here on the TextPositionSelector, which defines the start and the end of the selection. Which is something we just take for granted in the textual domain. Of course a selection has a start and an end. How could it not?

An annotation like this gives you a kind of URL that points to a selection in a document. There isn’t yet a standard way to write that URL along with the selectors, so Hypothesis points to that combination — that is, the URL plus the selectors — using an ID, like this:
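A reconstructed illustration, with an invented annotation ID; the direct link bundles the ID with the annotated page’s address:

```
https://hyp.is/AVvGpP1qH9ZO4OKSk1m2/www.example.com/page
```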

The W3C has thought about a way to bundle selectors with the URL, so you’d have a standard way to cite a selection, like this:
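Again a reconstruction, this time of the fragment syntax sketched in the W3C’s Selectors and States note; treat the details as approximate:

```
https://example.com/page#selector(type=TextQuoteSelector,exact=Make%20a%20selection%20in%20a%20page)
```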

In any case, the point is we’re moving into a world where selections in web resources have their own individual URLs.

Now let’s look again at this quote from Nancy Pelosi:

That’s not something she wrote. It’s something she said, at the Peter G. Peterson Fiscal Summit, that was recorded on video.

Is there a transcript that could be annotated? I looked, and didn’t find anything better than this export of YouTube captions:

But of course we lack transcriptions for quite a lot of web audio and video. And lots of it can’t be transcribed, because it’s music, or silent moving pictures.

Once you’ve got a media selection there are some good ways to represent it with an URL. YouTube does it like this:
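A reconstructed example: YouTube’s embed player takes start and end offsets in seconds, so the 28:20–28:36 selection discussed below would be written like this (VIDEO_ID is a placeholder):

```
https://www.youtube.com/embed/VIDEO_ID?start=1700&end=1716
```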

And with filetypes like mp3 and mp4, you can use media fragment syntax like this:
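That’s the W3C Media Fragments syntax, with start and end given in seconds; the host and filename here are illustrative:

```
https://example.com/fiscal-summit.mp4#t=1700,1716
```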

The harder problem in the media domain turns out to be just making the selection in the first place.

Here I am in that video, in the process of figuring out that the selection I’m looking for starts at 28:20 and ends at 28:36.

It’s not a fun or easy process. You can set the start, as I’ve done here, but then you have to scrub around on the timeline looking for the end, and then write that down, and then tack it onto the media fragment URL that you’ve captured.

It’s weird that something so fundamental should be missing, but there just isn’t an easy and natural way to make a selection in web media.

This is not a new problem. Fifteen years ago, these were the dominant media players.

We’ve made some progress since then. The crazy patchwork of plugins and apps has thankfully converged to a standard player that works the same way everywhere.

Which is great. But you still can’t make a selection!

So I wrote a tool that wraps a selection interface around the standard media player. It works with mp3s, mp4s, and YouTube videos. Unlike the standard player, which has a one-handled slider, this thing has a two-handled slider, which is kind of obviously what you need to work with the start and end of a selection.

You can drag the handles to set the start and end, and you can nudge them forward and backward by minutes and seconds, and when you’re ready to review the intro and outro for your clip, you can play just a couple of seconds on both ends to check what you’re capturing.

When you’re done, you get a YouTube URL that will play the selection, start to stop, or an mp3 or mp4 media fragment URL that will do the same.

So how does this relate to annotation? In a couple of ways. You can annotate with media, or you can annotate on media.

Here’s what I mean by annotating with media.

I’ve selected some text that wants to be contextualized by the media quote, and I’ve annotated that text with a media fragment link. Hypothesis turns that link into an embedded player (thanks, by the way, to a code contribution from Steel Wagstaff, who’s here, I think). So the media quote will play, start to stop, in this annotation that’s anchored to a text selection at Politifact.

And here’s what I mean by annotating on media.

If I’m actually on a media URL, I can just annotate it. In this case there’s no selection to be made; the URL encapsulates the selection, so I annotate the complete URL.

This is a handy way to produce media fragment URLs that you can use in these ways. I hope someone will come up with a better one than I have. But the tool is begging to be made obsolete when the selection of media fragments becomes as easy and natural as the selection of text has always been.

Annotations are an easy way to Show Your Work

Journalists are increasingly being asked to show their work. Politifact does that with a sources box, like this:

This is great! The more citation of sources, the better. If I want to check those sources, though, I often wind up spending a lot of time searching within source articles to find passages cited implicitly but not explicitly. If those passages are marked using annotations, the method I’ll describe here makes that material available explicitly, in ways that streamline the reporter’s workflow and improve the reader’s experience.

In A Hypothesis-powered Toolkit for Fact Checkers I described a toolkit that supported the original incarnation of the Digital Polarization Project. More recently I’ve unbundled the key ingredients of that toolkit and made them separately available for reuse. The ingredient I’ll discuss here, HypothesisFootnotes, is illustrated in this short clip from a 10-minute screencast about the original toolkit. Here’s the upshot: Given a web page that contains Hypothesis direct links, you can include a script that pulls the cited material into the page, and connects direct links in the page to citations gathered elsewhere in the page.
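To make the mechanism concrete, here’s a stripped-down sketch of what such a script can do. The real HypothesisFootnotes differs in detail; the footnote markup and naming here are invented for illustration.

```javascript
// Sketch: harvest Hypothesis direct links, fetch each annotation's
// quoted passage from the API, and gather the quotes into footnotes.
async function gatherFootnotes() {
  const api = 'https://api.hypothes.is/api/annotations/';
  const links = document.querySelectorAll('a[href^="https://hyp.is/"]');
  const footnotes = document.createElement('ol');

  for (const link of links) {
    // A direct link looks like https://hyp.is/<annotationId>/<targetUrl>
    const id = new URL(link.href).pathname.split('/')[1];
    const anno = await fetch(api + id).then(r => r.json());

    // The quoted passage is the TextQuoteSelector's exact text.
    const quote = anno.target[0].selector
      .find(s => s.type === 'TextQuoteSelector').exact;

    const note = document.createElement('li');
    note.id = 'footnote-' + id;
    note.textContent = quote;
    footnotes.appendChild(note);

    // Add an in-page reference beside the direct link, which still
    // points to the passage in its original context.
    const ref = document.createElement('a');
    ref.href = '#footnote-' + id;
    ref.textContent = ' [quote]';
    link.after(ref);
  }
  document.body.appendChild(footnotes);
}
```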

Why would such links exist in the first place? Well, if you’re a reporter working on an investigation, you’re likely to be gathering evidence from a variety of sources on the web. And you’re probably copying and pasting bits and pieces of those sources into some kind of notebook. If you annotate passages within those sources, you’ve captured them in a way that streamlines your own personal (or team) management of that information. That’s the primary benefit. You’re creating a virtual notebook that makes that material available not only to you, but to your team. Secondarily, it’s available to readers of your work, and to anyone else who can make good use of the material you’ve gathered.

The example here brings an additional benefit, building on another demonstration in which I showed how to improve Wikipedia citations with direct links. There I showed that a Wikipedia citation need not only point to the top of a source web page. If the citation refers specifically to a passage within the source page, it can use a direct link to refer the reader directly to that passage.

Here’s an example that’s currently live in Wikipedia:

It looks like any other source link in Wikipedia. But here’s where it takes you in the cited Press Democrat story:

Not every source link warrants this treatment. When a citation refers to a specific context in a source, though, it’s really helpful to send the reader directly to that context. It can be time-consuming to follow a set of direct links to see cited passages in context. Why not collapse them into the article from which they are cited? That’s what HypothesisFootnotes does. The scattered contexts defined by a set of Hypothesis direct links are assembled into a package of footnotes within the article. Readers can still visit those contexts, of course, but since time is short and attention is scarce, it’s helpful to collapse them into an included summary.

This technique will work in any content management system. If you have a page that cites sources using direct links, you can use the HypothesisFootnotes script to bring the targets of those direct links into the page.

Investigating ClaimReview

In Annotating the Wild West of Information Flow I discussed a prototype of a ClaimReview-aware annotation client. ClaimReview is a standard way to format “a fact-checking review of claims made (or reported) in some creative work.” Now that both Google and Bing are ingesting ClaimReview markup, and fact-checking sites are feeding it to them, the workflow envisioned in that blog post has become more interesting. A fact checker ought to be able to highlight a reviewed claim in its original context, then capture that context and push it into the app that produces the ClaimReview markup embedded in fact-checking articles and slurped by search engines.

That workflow is one interesting use of annotation in the domain of fact-checking, it’s doable, and I plan to explore it. But here I’ll focus instead on using annotation to project claim reviews onto the statements they review, in original context. Why do that? Search engines may display fact-checking labels on search results, but nothing shows up when you land directly on an article that includes a reviewed claim. If the reviewed claim were annotated with the review, an annotation client could surface it.

To help make that idea concrete, I’ve been looking at ClaimReview markup in the wild. Not unexpectedly, it gets used in slightly different ways. I wrote a bookmarklet to help visualize the differences (a sketch of it follows the examples), and used it to find these five patterns:

https://www.washingtonpost.com/news/fact-checker/wp/2018/03/16/does-president-trumps-border-wall-pay-for-itself

```json
{
  "urlOfFactCheck": "https://www.washingtonpost.com/news/fact-checker/wp/2018/03/16/does-president-trumps-border-wall-pay-for-itself/?utm_term=.31281569ab46",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "-1",
    "alternateName": "Fuzzy math",
    "worstRating": "-1",
    "bestRating": "-1",
    "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-b89013e8-729f-44aa-816e-93a9307b079c"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Donald Trump ",
      "jobTitle": "President",
      "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-b89013e8-729f-44aa-816e-93a9307b079c",
      "sameAs": [
        "www.whitehouse.gov"
      ]
    },
    "datePublished": "2018-03-13",
    "name": "in a tweet"
  },
  "claimReviewed": "\"The $18 billion wall will pay for itself by curbing the importation of crime, drugs and illegal immigrants who tend to go on the federal dole.”"
}
```

https://www.snopes.com/fact-check/did-captured-islamic-state-obama/

```json
{
  "urlOfFactCheck": "https://www.snopes.com/fact-check/did-captured-islamic-state-obama/",
  "reviewRating": {
    "alternateName": "False",
    "ratingValue": "",
    "bestRating": ""
  },
  "itemReviewed": {
    "datePublished": "2018-03-13T08:25:46+00:00",
    "name": "Snopes",
    "sameAs": "http://dailyworldupdate.com/2018/03/11/breaking-captured-isis-leader-had-obama-on-speed-dial/"
  },
  "claimReviewed": "A captured Islamic State leader's cell phone contained the phone numbers of world leaders, including former President Barack Obama."
}
```

https://www.factcheck.org/2018/03/trump-misses-mars-attack/

```json
{
  "urlOfFactCheck": "https://www.factcheck.org/2018/03/trump-misses-mars-attack/",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "-1",
    "alternateName": "Clinton Endorsed Mars Flight",
    "worstRating": "-1",
    "bestRating": "-1",
    "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-76945159-07c1-49d6-833e-717c1da2478f"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Donald Trump",
      "jobTitle": "President of the United States",
      "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-76945159-07c1-49d6-833e-717c1da2478f",
      "sameAs": [
        "http://www.whitehouse.gov"
      ]
    },
    "datePublished": "2018-03-13",
    "name": "Speech at military base"
  },
  "claimReviewed": "NASA “wouldn’t have been going to Mars if my opponent won.\""
}
```

http://www.politifact.com/texas/statements/2018/mar/15/ted-cruz/ted-cruz-says-beto-orourke-wants-open-borders-and-/

```json
{
  "urlOfFactCheck": "http://www.politifact.com/texas/statements/2018/mar/15/ted-cruz/ted-cruz-says-beto-orourke-wants-open-borders-and-/",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "2",
    "alternateName": "False",
    "worstRating": "1",
    "bestRating": "7",
    "image": "https://dhpikd1t89arn.cloudfront.net/rating_images/politifact/tom-false.jpg"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Ted Cruz",
      "jobTitle": "U.S. senator for Texas",
      "image": "https://dhpikd1t89arn.cloudfront.net/rating_images/politifatc/tom-false.jpg",
      "sameAs": [
        "https://www.youtube.com/watch?v=MvKElqdUxZg"
      ]
    },
    "datePublished": "2018-03-06",
    "name": "Houston, Texas"
  },
  "claimReviewed": "Says Beto O’Rourke wants \"open borders and wants to take our guns.\""
}
```

https://climatefeedback.org/claimreview/breitbart-repeats-bloggers-unsupported-claim-noaa-manipulates-data-exaggerate-warming/

```json
{
  "urlOfFactCheck": "https://climatefeedback.org/claimreview/breitbart-repeats-bloggers-unsupported-claim-noaa-manipulates-data-exaggerate-warming/",
  "reviewRating": {
    "@type": "Rating",
    "alternateName": "Unsupported",
    "bestRating": 5,
    "ratingValue": 2,
    "worstRating": 1
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "James Delingpole"
    },
    "headline": "TBD",
    "image": {
      "@type": "ImageObject",
      "height": "200px",
      "url": "https://climatefeedback.org/wp-content/uploads/Sources/Delingpole_Breitbart_News.png",
      "width": "200px"
    },
    "publisher": {
      "@type": "Organization",
      "logo": "Breitbart",
      "name": "Breitbart"
    },
    "datePublished": "20 February 2018",
    "url": "TBD"
  },
  "claimReviewed": "NOAA has adjusted past temperatures to look colder than they were and recent temperatures to look warmer than they were."
}
```
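The bookmarklet itself is nothing fancy. Here’s a hedged reconstruction of the approach (collect the page’s JSON-LD blocks, keep the items typed as ClaimReview, and display them), not the original code; real pages sometimes nest items in @graph, which this sketch ignores.

```javascript
// Bookmarklet sketch: show a page's ClaimReview JSON-LD, if any.
javascript:(function () {
  var reviews = [];
  document.querySelectorAll('script[type="application/ld+json"]')
    .forEach(function (s) {
      try {
        var data = JSON.parse(s.textContent);
        [].concat(data).forEach(function (item) {
          if (item['@type'] === 'ClaimReview') { reviews.push(item); }
        });
      } catch (e) { /* skip malformed blocks */ }
    });
  window.alert(JSON.stringify(reviews, null, 2));
})();
```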

To normalize the differences I’ll need to look at more examples from these sites. But I’m also looking for more patterns, so if you know of other sites that routinely embed ClaimReview markup, please drop me links!