Notes for an annotation SDK

While helping Hypothesis find its way to ed-tech it was my great privilege to explore ways of adapting annotation to other domains including bioscience, journalism, and scholarly publishing. Working across these domains showed me that annotation isn’t just an app you do or don’t adopt. It’s also a service you’d like to be available in every document workflow that connects people to selections in documents.

In my talk Weaving the Annotated Web I showcased four such services: Science in the Classroom, The Digital Polarization Project, SciBot, and ClaimChart. Others include tools to evaluate credibility signals, or review claims, in news stories.

As I worked through these and other scenarios, I accreted a set of tools for enabling any annotation-aware interaction in any document-oriented workflow. I’ve wanted to package these as a coherent software development kit; that hasn’t happened yet, but here are some of the ingredients that belong in such an SDK.

Creating an annotation from a selection in a document

Two core operations lie at the heart of any annotation system: creating a note that will bind to a selection in a document, and binding (anchoring) that note to its place in the document. A tool that creates an annotation reacts to a selection in a document by forming one or more selectors that describe the selection.

The most important selector is TextQuoteSelector. If I select the phrase “illustrative examples” on a web page and then use Hypothesis to annotate that selection, the payload sent from the client to the server includes this construct.

  {
    "type": "TextQuoteSelector",
    "exact": "illustrative examples",
    "prefix": "n\n    This domain is for use in ",
    "suffix": " in documents. You may use this\n"
  }

The Hypothesis client formerly used an NPM module, dom-anchor-text-quote, to derive that info from a selection. It no longer uses that module, and the equivalent code that it does use isn’t separately available. But annotations created using TextQuoteSelectors formed by dom-anchor-text-quote interoperate with those created using the Hypothesis client, and I don’t expect that will change since Hypothesis needs to remain backwards-compatible with itself.

You’ll find something like TextQuoteSelector in any annotation system. It’s formally defined in the W3C’s Web Annotation Data Model. In the vast majority of cases this is all you need to describe the selection to which an annotation should anchor.
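The idea can be sketched in a few lines of Python (a hypothetical helper of my own, not the dom-anchor-text-quote implementation, which is JavaScript): given a document’s visible text and a selection’s character offsets, a TextQuoteSelector is just the selected text plus short context windows on either side.

```python
def text_quote_selector(text, start, end, context=32):
    """Form a W3C TextQuoteSelector from a selection.

    `text` is the document's visible text; `start`/`end` are the
    selection's character offsets. A short prefix and suffix give
    the quote enough context to disambiguate most matches.
    """
    return {
        "type": "TextQuoteSelector",
        "exact": text[start:end],
        "prefix": text[max(0, start - context):start],
        "suffix": text[end:end + context],
    }
```

Run against the example text above, selecting characters 26–47 yields exactly the `exact`/`prefix`/`suffix` triple shown in the payload.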

There are, however, cases where TextQuoteSelector won’t suffice. Consider a document that repeats the same passage three times. Given a short selection in the first of those passages, how can a system know that an annotation should anchor there, and not in the second or third? Another selector, TextPositionSelector, enables a system to know which passage contains the selection.

  {
    "type": "TextPositionSelector",
    "start": 51,
    "end": 72
  }

It records the start and end of the selection in the visible text of an HTML document. Here’s the HTML source of that web page.

    <h1>Example Domain</h1>
    <p>This domain is for use in illustrative examples in documents. You may use this
    domain in literature without prior coordination or asking for permission.</p>
    <p><a href="">More information...</a></p>

Here is the visible text in which the TextPositionSelector’s start and end are counted.

\n\n Example Domain\n This domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission.\n More information…\n\n\n\n

The positions recorded by a TextPositionSelector can change for a couple of reasons. If the document is altered, it’s obvious that an annotation’s start and end numbers might change. Less obviously, that can happen even if the document’s text isn’t altered. A news website, for example, may inject different kinds of advertising-related text content from one page load to the next. In that case the positions for two consecutive Hypothesis annotations made on the same selection can differ. So while TextPositionSelector can resolve ambiguity, and provide hints to an annotation system about where to look for matches, the foundation is ultimately TextQuoteSelector.
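That interplay can be sketched as follows: every occurrence of the quote is a candidate, and the position selector’s (possibly stale) start is only a tiebreaking hint. This is a simplified Python sketch of my own, not Hypothesis’s actual fuzzy anchoring code.

```python
def anchor(text, selector, position_hint=None):
    """Find where a TextQuoteSelector anchors in a document's visible text.

    Every occurrence of the exact quote is a candidate; prefix/suffix
    context and an optional (possibly drifted) TextPositionSelector start
    break ties. Returns the chosen start offset, or -1 if the quote is gone.
    """
    candidates = []
    i = text.find(selector["exact"])
    while i != -1:
        candidates.append(i)
        i = text.find(selector["exact"], i + 1)
    if not candidates:
        return -1

    def score(start):
        s = 0
        if selector.get("prefix") and text[:start].endswith(selector["prefix"]):
            s -= 1000  # a context match outweighs any positional drift
        end = start + len(selector["exact"])
        if selector.get("suffix") and text[end:].startswith(selector["suffix"]):
            s -= 1000
        if position_hint is not None:
            s += abs(start - position_hint)  # prefer the nearest occurrence
        return s

    return min(candidates, key=score)
```

Note that if the quote has vanished from the document, no position hint can save the annotation: it becomes an orphan.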

If you try the first example in the repo’s README, you can form your own TextQuoteSelector and TextPositionSelector from a selection in a web page. That repo exists only as a wrapper around the set of modules — dom-anchor-text-quote, dom-anchor-text-position, and wrap-range-text — needed to create and anchor annotations.

Building on these ingredients, HelloWorldAnnotated illustrates a common pattern.

  • Given a selection in a page, form the selectors needed to post an annotation that targets the selection.
  • Lead a user through an interaction that influences the content of that annotation.
  • Post the annotation.

Here is an example of such an interaction. It’s a content-labeling scenario in which a user rates the emotional affect of a selection. This is the kind of thing that can be done with the stock Hypothesis client, but awkwardly because users must reliably add tags like WeakNegative or StrongPositive to represent their ratings. The app prompts for those tags to ensure consistent use of them.
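Creating such an annotation reduces to assembling a JSON payload and POSTing it to the Hypothesis API. Here’s a Python sketch; the helper name is mine, though the payload shape and endpoint reflect the public Hypothesis API.

```python
API = "https://api.hypothesis.is/api"

def build_annotation(uri, selectors, text, tags):
    """Assemble an annotation payload: the note's body text, its tags
    (e.g. the controlled WeakNegative/StrongPositive vocabulary), and a
    target that carries the selectors describing the selection."""
    return {
        "uri": uri,
        "text": text,
        "tags": tags,
        "target": [{"source": uri, "selector": selectors}],
    }

# Posting is then one authenticated request (API token elided):
# requests.post(API + "/annotations",
#               headers={"Authorization": "Bearer " + token},
#               data=json.dumps(build_annotation(...)))
```

Because the app, not the user, supplies the tags, the controlled vocabulary stays consistent across every rating.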

Although the annotation is created by a standalone app, the Hypothesis client can anchor it, display it, and even edit it.

And the Hypothesis service can search for sets of annotations that match the tags WeakNegative or StrongPositive.
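Searching for those tagged annotations is a GET against the API’s /api/search endpoint. A small helper (mine, for illustration) can build the query string from whatever filters apply:

```python
from urllib.parse import urlencode

def search_query(**filters):
    """Build a query string for the Hypothesis search API (GET /api/search),
    keeping only the filters actually supplied (tag, uri, user, group, ...)."""
    return urlencode({k: v for k, v in filters.items() if v is not None})
```

So `search_query(tag="WeakNegative", limit=200)` yields the query for one tag’s worth of ratings.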

There’s powerful synergy at work here. If your annotation scenario requires controlled tags, or a prescribed workflow, you might want to adapt the Hypothesis client to do those things. But it can be easier to create a standalone app that does exactly what you need, while producing annotations that interoperate with the Hypothesis system.

Anchoring an annotation to its place in a document

Using this same set of modules, a tool or system can retrieve an annotation from a web service and anchor it to a document in the place where it belongs. You can try the second example in the repo’s README to see how this works.

For a real-world demonstration of this technique, see Science in the Classroom. It’s a project sponsored by The American Association for the Advancement of Science. Graduate students annotate research papers selected from the Science family of journals so that younger students can learn about the terminology, methods, and outcomes of scientific research.

Pre-Hypothesis, annotations on these papers were displayed using Learning Lens, a viewer that color-codes them by category.

Nothing about Learning Lens changed when Hypothesis came into the picture; Hypothesis just provided a better way to record the annotations. Originally that was done as it’s often done in the absence of a formal way to describe annotation targets: by passing notes like “highlight the word ‘proximodistal’ in the first paragraph of the abstract, and attach this note to it.” This kind of thing happens a lot, and wherever it does there’s an opportunity to adopt a more rigorous approach. Nowadays at Science in the Classroom the annotators use Hypothesis to describe where notes should anchor, as well as what they should say. When an annotated page loads, it searches Hypothesis for annotations that target the page, and inserts them using the same format that has always driven the Learning Lens. Tags assigned by annotators align with Learning Lens categories. The search looks only for notes from designated annotators, so nothing unwanted will appear.

An annotation-powered survey

The Credibility Coalition is “a research community that fosters collaborative approaches to understanding the veracity, quality and credibility of online information.” We worked with them on a project to test a set of signals that bear on the credibility of news stories. Examples of such signals include:

  • Title Representativeness (Does the title of an article accurately reflect its content?)
  • Sources (Does the article cite sources?)
  • Acknowledgement of uncertainty (Does the author acknowledge uncertainty, or the possibility things might be otherwise?)

Volunteers were asked these questions for each of a set of news stories. Many of the questions were yes/no or multiple choice and could have been handled by any survey tool. But some were different. What does “acknowledgement of uncertainty” look like? You know it when you see it, and you can point to examples. But how can a survey tool solicit answers that refer to selections in documents, and record their locations and contexts?

The answer was to create a survey tool that enabled respondents to answer such questions by highlighting one or more selections. Like the HelloWorldAnnotated example above, this was a bespoke client that guided the user through a prescribed workflow. In this case, that workflow was more complex. And because it was defined in a declarative way, the same app can be used for any survey that requires people to provide answers that refer to selections in web documents.
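To make the declarative idea concrete, here’s a sketch using a schema I invented for illustration (not the actual format the survey tool used): each question is plain data, and highlight-type answers carry selectors into the resulting annotation.

```python
# Each question is data; the survey app walks this list and turns
# every answer into an annotation.
QUESTIONS = [
    {"name": "Title Representativeness", "type": "choice",
     "prompt": "Does the title of the article accurately reflect its content?",
     "options": ["yes", "partly", "no"]},
    {"name": "Acknowledgement of uncertainty", "type": "highlight",
     "prompt": "Highlight passages where the author acknowledges uncertainty."},
]

def answer_to_annotation(question, uri, value, selectors=None):
    """Record one answer as an annotation: choice answers become tags,
    highlight answers carry the selectors that locate the chosen passage."""
    target = [{"source": uri}]
    if selectors:
        target[0]["selector"] = selectors
    return {"uri": uri,
            "tags": ["%s:%s" % (question["name"], value)],
            "text": question["prompt"],
            "target": target}
```

Because the workflow lives in data rather than code, swapping in a new survey means swapping in a new question list.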

A JavaScript wrapper for the Hypothesis API

The HelloWorldAnnotated example uses functions from a library, hlib, to post an annotation to the Hypothesis service. That library includes functions for searching and posting annotations using the Hypothesis API. It also includes support for interaction patterns common to annotation apps, most of which occur in facet, a standalone tool that searches, displays, and exports sets of annotations. Supported interactions include:

– Authenticating with an API token

– Creating a picklist of groups accessible to the authenticated user

– Assembling and displaying conversation threads

– Parsing annotations

– Editing annotations

– Editing tags

In addition to facet, other tools based on this library include CopyAnnotations and TagRename.
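Thread assembly, one of the interactions listed above, hinges on a detail of the Hypothesis annotation JSON: a reply carries a `references` list naming its ancestors, root first. Here’s a Python sketch of the grouping (hlib itself is JavaScript):

```python
from collections import defaultdict

def build_threads(annotations):
    """Group annotations into conversation threads. A reply's `references`
    list names its ancestors, root first, so each note files under either
    its own id (a top-level note) or references[0] (a reply)."""
    threads = defaultdict(list)
    for ann in annotations:
        refs = ann.get("references") or []
        root = refs[0] if refs else ann["id"]
        threads[root].append(ann)
    return dict(threads)
```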

A Python wrapper for the Hypothesis API

If you’re working in Python, hypothesis-api is an alternative API wrapper that supports searching for, posting, and parsing annotations.


An annotation-powered notifier

If you’re a publisher who embeds Hypothesis on your site, you can use a wildcard search to find annotations. But it would be helpful to be notified when annotations are posted. h_notify is a tool that uses the Hypothesis API to watch for annotations on individual or wildcard URLs, or from particular users, or in a specified group, or with a specified tag.

When an h_notify-based watcher finds notes in any of these ways, it can send alerts to a Slack channel, or to an email address, or add items to an RSS feed.
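The watcher pattern itself is simple: poll the search API, keep a high-water mark, and alert on anything newer. Here’s a sketch of that core in Python (my simplification, not h_notify’s actual code); Hypothesis annotations carry ISO-8601 `updated` timestamps, which sort lexically.

```python
def fresh_annotations(fetched, last_seen):
    """Filter one poll's search results down to annotations updated since
    the previous pass, and advance the high-water mark."""
    fresh = [a for a in fetched if a["updated"] > last_seen]
    mark = max((a["updated"] for a in fresh), default=last_seen)
    return fresh, mark
```

Each fresh annotation then fans out to whichever channel the watcher is configured for: Slack, email, or RSS.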

At Hypothesis we mainly rely on the Slack option. In this example, user nirgendheim highlighted the word “interacting” in a page on the Hypothesis website.

The watcher sent this notice to our #website channel in Slack.

A member of the support team (Hypothesis handle mdiroberts) saw it there and responded to nirgendheim as shown above. How did nirgendheim know that mdiroberts had responded? The core Hypothesis system sends you an email when somebody replies to one of your notes. h_notify is for bulk monitoring and alerting.

A tiny Hypothesis server

People sometimes ask about connecting the Hypothesis client to an alternate server in order to retain complete control over their data. It’s doable: you can follow the instructions here to build and run your own server, and some people and organizations do that. Depending on need, though, that can entail more effort, and more moving parts, than may be warranted.

Suppose for example you’re part of a team of investigative journalists annotating web pages for a controversial story, or a team of analysts sharing confidential notes on web-based financial reports. The documents you’re annotating are public, but the notes you’re taking in a Hypothesis private group are so sensitive that you’d rather not keep them in the Hypothesis service. You’d ideally like to spin up a minimal server for that purpose: small, simple, and easy to manage within your own infrastructure.

Here’s a proof of concept. This tiny server clocks in at just 145 lines of Python with very few dependencies. It uses Python’s batteries-included SQLite module for annotation storage. The web framework is Pyramid, only because that’s what I’m familiar with; it could as easily be Flask, the ultra-light framework typically used for this sort of thing.

A tiny app wrapped around those ingredients is all you need to receive JSON payloads from a Hypothesis client, and return JSON payloads when the client searches for annotations to anchor to a page.
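The storage layer of such a server can be as small as this (my own simplification, not the proof-of-concept’s actual code): one SQLite table holding each annotation’s id, target URI, and full JSON payload.

```python
import json
import sqlite3

def init(db_path=":memory:"):
    """Create the one-table store: each row is an annotation's id,
    the URI it targets, and its full JSON payload."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS annotations
                    (id TEXT PRIMARY KEY, uri TEXT, body TEXT)""")
    return conn

def save(conn, anno_id, payload):
    """Store (or update) an annotation payload received from the client."""
    conn.execute("INSERT OR REPLACE INTO annotations VALUES (?, ?, ?)",
                 (anno_id, payload["uri"], json.dumps(payload)))
    conn.commit()

def search(conn, uri):
    """Return the payloads the client needs to anchor notes on a page."""
    rows = conn.execute("SELECT body FROM annotations WHERE uri = ?", (uri,))
    return [json.loads(r[0]) for r in rows]
```

The web framework’s only job is to route the client’s POST and GET requests to `save` and `search`.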

The service is dockerized and easy to deploy. To test it I used the speedrun to create an instance. Then I made the handful of small tweaks to the Hypothesis client shown in client-patches.txt. My method for doing that, typical for quick proofs of concept that vary the Hypothesis client in some small way, goes like this:

  • Clone the Hypothesis client.
  • Edit gulpfile.js to say const IS_PRODUCTION_BUILD = false. This turns off minification so it’s possible to read and debug the client code.
  • Follow the instructions to run the client from a browser extension. After establishing a link between the client repo and browser-extension repo, as per those instructions, use this build command — make build SETTINGS_FILE=settings/chrome-prod.json — to create a browser extension that authenticates to the Hypothesis production service.
  • In a Chromium browser (e.g. Chrome or Edge or Brave) use chrome://extensions, click Load unpacked, and point to the browser-extension/build directory where you built the extension.

This is the easiest way to create a Hypothesis client in which to try quick experiments. There are tons of source files in the repos, but just a handful of bundles and loose files in the built extension. You can run the extension, search and poke around in those bundles, set breakpoints, make changes, and see immediate results.

In this case I only made the changes shown in client-patches.txt:

  • In options/index.html I added an input box to name an alternate server.
  • In options/options.js I sync that value to the cloud and also to the browser’s localStorage.
  • In the extension bundle I check localStorage for an alternate server and, if present, modify the API request used by the extension to show the number of notes found for a page.
  • In the sidebar bundle I check localStorage for an alternate server and, if present, modify the API requests used to search for, create, update, and delete annotations.

I don’t recommend this cowboy approach for anything real. If I actually wanted to use this tweaked client I’d create branches of the client and the browser-extension, and transfer the changes into the source files where they belong. If I wanted to share it with a close-knit team I’d zip up the extension so colleagues could unzip and sideload it. If I wanted to share more broadly I could upload the extension to the Chrome web store. I’ve done all these things, and have found that it’s feasible — without forking Hypothesis — to maintain branches that carry small but strategic changes like this one. But when I’m aiming for a quick proof of concept, I’m happy to be a cowboy.

In any event, here’s the proof. With the tiny server deployed, I poked its address into the tweaked client.

And sure enough, I could search for, create, reply to, update, and delete annotations using that 145-line SQLite-backed server.

The client still authenticates to Hypothesis in the usual way, and behaves normally unless you specify an alternate server. In that case, the server knows nothing about Hypothesis private groups. The client sees it as the Hypothesis public layer, but it’s really the moral equivalent of a private group. Others will see it only if they’re running the same tweaked extension and pointing to the same server. You could probably go quite far with SQLite but, of course, it’s easy to see how you’d swap it out for a more robust database like Postgres.


I think of these examples as signposts pointing to a coherent SDK for weaving annotation into any document workflow. They show that it’s feasible to decouple and recombine the core operations: creating an annotation based on a selection in a document, and anchoring an annotation to its place in a document. Why decouple? Reasons are as diverse as the document workflows we engage in. The stock Hypothesis system beautifully supports a wide range of scenarios. Sometimes it’s helpful to replace or augment Hypothesis with a custom app that provides a guided experience for annotators and/or an alternative display for readers. The annotation SDK I envision will make it straightforward for developers to build solutions that leverage the full spectrum of possibility.

End notes

Weaving the annotated web


5 thoughts on “Notes for an annotation SDK”

  1. I once tried diving in to the Hypothesis-owned repos, especially the official client repo, with a few different aims. When it started taking more than an hour or two just to be able to reproduce a setup that would let me make a simple “hello world”-style change to the client that I already use (the bookmarklet), I bailed (even though at the time, it was supposed to be, “I’m going to set this aside for now and come back later,” but later still hasn’t happened, and I’m not sure it ever will.)

    The documentation is badly in need of an outsider’s perspective. Hollywood has the position of a script supervisor, which can also be thought of as something like a “viewer advocate”. You guys could really benefit from something similar with respect to your Development Guide.

    A sample: after the instructions describe the commands needed to build the client from the Makefile, the docs say, “You now have a development client built.” Good so far. “To run your development client in a browser you’ll need a local copy of either the Hypothesis Chrome extension or h[…]”. Wait, what? It continues, “Follow either Running the Client from the Browser Extension[…]”. Running the client—running the client—*from* the extension? The extension is *not* a client? Picking back up, “[…] or Running the Client From h below”. Pardon? Running the client from h—Hypothesis’s backend implementation? What does it mean to *run the client* *from the server*? Following up by reading through sections that the “below” was referring to, it can kind of be made to make some sense, but only by drawing upon a lot of experience and educated guessing, and not so much from the contents of what those sections actually say.

    Stumbling blocks in the documentation are one thing, but setting those aside and trying to move forward on the original goal—getting a working version of the client with a trivial customization, as proof that it’s actually moldable—other issues became clearer. Fundamental oddities noticeable when looking at what builds actually contain after being put together, obstacles to working out and reasoning about what the browser extension, for example, is actually executing; the last thing that made me throw up my hands was realizing that the extension—which targets a known runtime, since Chrome is the only browser supported—was still using a transpiler intended for use on the Web (for rewriting sources from modern ES.whatever into a form that legacy browsers can understand without causing e.g. syntax errors). *Incredibly* frustrating, since I was trying to track down at the time some magic that is apparently the result of the build, but I was endlessly thwarted since the bundle that build produces was so different from the sources you see in the repo.

    This was all before I ran into the bespoke tools in the namespace that I was delighted to find recently but haven’t been able to really look at.
