How you can help build the fever map

Rich Kilmer is an old friend who runs CargoSense, a logistics company that gathers and analyzes data from sensors attached to products moving through supply chains. As the pandemic emerged he realized that one particular kind of sensor — the household thermometer used to measure human body temperature — was an underutilized source of crucial data. So the CargoSense team built TrackMyTemp, a web app you can use to provide a stream of anonymized temperature readings to the research community.

It’s dead simple to use. When you first visit the site it asks for your age and current temperature, invites you to activate a virtual thermometer, and prompts for the thermometer’s type (oral, forehead, ear). You also have to allow access to your location so your data can be geocoded.

After you report your first reading, you land on your “personal virtual thermometer” page which you’re invited to bookmark. The URL of that page encodes an anonymous identity. You revisit it whenever you want to contribute a new temperature reading — ideally always at the same time of day.

The precision of your location varies, Rich told me, to account for differences in population density. In a less populous area, an exact location could be a personally identifying signal, so the service fuzzes the location.

Why participate? The site explains:

An elevated temperature can be an indicator that your body is fighting off an infection. Some people contract COVID-19 but never know they have it, because other than a minor increase in temperature, they never show any other symptoms. As we isolate ourselves in our homes for physical distancing purposes we wonder what else we can do. Using this site is something we all can do to help epidemiologists better model how the virus is spreading. By copying your temperature from your physical thermometer into a virtual thermometer using this site, you will help to build a community, and national and global real-time datasets that will help researchers track and combat the spread of COVID-19. We do this while maintaining your privacy, and you only need your mobile phone and your existing thermometer to participate.

You may have seen reports about the Kinsa smart thermometer fever map, which seems to show fevers dropping as social distancing takes hold. This kind of data can help us monitor the initial wave of COVID-19 outbreaks, and could be a critical early warning system later this year as a second wave begins to emerge.

With TrackMyTemp you don’t need a smart thermometer; you can use the ordinary one you already have. I plan to keep one in my office, take a reading when I start my work day, and send the number to TrackMyTemp. If even a tiny fraction of the 128 million U.S. households got into the habit of doing this, we’d produce an ongoing stream of data useful to health researchers.

The best motivation, of course, is enlightened self-interest. So here’s mine. Until this week I can’t remember the last time I took my temperature. Knowing it on a daily basis could help me detect the onset of the virus and amp up my efforts to protect others from it.

Controlling a browser extension with puppeteer

Web technology, at its best, has always occupied a sweet spot at the intersection of two complementary modes: interactive and programmatic. You interact with a web page by plugging its URL into a browser. Then you read what’s there or, if the page is an app, you use the app. Given that same URL, web software can read the page, analyze it, extract data from it, interact with the page, and weave its data and behavior into new apps. Search is the classic illustration of this happy synergy. A crawler reads and navigates the web using the same affordances available to people, then produces data that powers a search engine.

In the early days of the web, it was easy to automate the reading, navigation, analysis, and recombination of web pages because the pages were mostly just text and links. Such automation was (and still is) useful for analysis and recombination, as well as for end-to-end testing of web software. But things got harder as the web moved away from static server-generated pages and toward dynamic client-generated pages made with JavaScript.

All along the way, we’ve had another powerful tool in our kit. Web pages and apps, however they’re produced, can be enhanced with extra JavaScript that’s injected into them. My first experience of the power of that approach was a bookmarklet that, when activated on an Amazon page, checked your local library to see if the book was available there. Then came Greasemonkey, a framework for injecting “userscripts” into pages. It was the forerunner of browser extensions now converging on the WebExtensions API.

The Hypothesis web annotator is a good example of the kind of software that relies on the ability to inject JavaScript into web pages. It runs in two places. The annotator is injected into the web page you’re annotating; the sidebar (i.e. the app) runs in an iframe that’s also injected into the page. The annotator acquires page metadata, reacts to the selection event, forms W3C-style selectors that define the selection, and sends that information (by way of JavaScript messages) to the sidebar. The sidebar queries the server, retrieves annotations for the page into which the annotator was injected, displays the annotations, tells the annotator about navigation it should sync with, and listens for messages from the annotator announcing selections that serve as the basis of new annotations. When you create a new annotation in the sidebar, the sidebar messages the annotator so it can anchor the new annotation: that is, receive the annotation data from the sidebar and highlight the selection it refers to.
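The plumbing between those two places is the browser’s ordinary cross-frame messaging primitive. This isn’t Hypothesis’s actual code, just a generic sketch of that primitive, with the frame id and message shape invented for illustration:

// in the injected annotator: post selection data to the sidebar's iframe
const sidebarFrame = document.querySelector('iframe#sidebar')   // hypothetical frame id
sidebarFrame.contentWindow.postMessage(
  { type: 'selectionMade', selectors: [ /* W3C-style selectors */ ] },
  '*'
)

// in the sidebar: listen for messages arriving from the host page
window.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'selectionMade') {
    // look up existing annotations, or offer to create a new one, for these selectors
  }
})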

Early in my Hypothesis tenure we needed to deploy a major upgrade to the anchoring machinery. Would annotations created using the old system still anchor when using the new one? You could check interactively by loading HTML and PDF documents into two versions of the extension, then observing and counting the annotations that anchored or didn’t, but that was tedious and not easily repeatable. So I made a tool to automate that manual testing. At the time the automation technology of choice was Selenium WebDriver. I was able to use it well enough to validate the new anchoring system, which we then shipped.

Now there’s another upgrade in the works. We’re switching from a legacy version of PDF.js, the Mozilla viewer that Hypothesis uses to convert PDF files into web pages it can annotate, to the current PDF.js. Again we faced an automation challenge. This time around I used the newer automation technology that is the subject of this post. The ingredients were: headless Chrome, a browser mode that’s open to automation; puppeteer, an API for headless Chrome; and the devtools protocol, which is how the Chromium debugger, puppeteer, and other tools and APIs control the browser.

This approach was more powerful and effective than the Selenium-based one because puppeteer’s devtools-based API affords more intimate control of the browser. Anything you can do interactively in the browser’s debugger console — and that’s a lot! — you can do programmatically by way of the devtools protocol.

That second-generation test harness is now, happily, obsolete thanks to an improved (also puppeteer-based) version. Meanwhile I’ve used what I learned on a couple of other projects. What I’ll describe here, available in this GitHub repo, is a simplified version of one of them. It illustrates how to use puppeteer to load a URL into an extension, simulate user input and action (e.g., clicking a button), open a new extension-hosted tab in reaction to the click, and transmit data gathered from the original tab to the new tab. All this is wrapped in a loop that visits many URLs, gathers analysis results for each, combines the results, and saves them.
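Here, roughly, is the shape of that loop. This is a sketch rather than the repo’s actual harness: the selectors, file names, and extension path are placeholders I’ve invented for illustration.

const puppeteer = require('puppeteer')

async function main() {
  const extensionPath = './browser-extension'    // placeholder path to the unpacked extension
  const browser = await puppeteer.launch({
    headless: false,
    args: [
      `--disable-extensions-except=${extensionPath}`,
      `--load-extension=${extensionPath}`
    ]
  })

  const urls = ['https://example.com/']           // the real harness visits many URLs
  const results = []

  for (const url of urls) {
    const page = await browser.newPage()
    await page.goto(url, { waitUntil: 'networkidle2' })

    // simulate user input, then click the button the extension injects
    await page.type('#demo-input', 'automated value')       // hypothetical selector
    await page.click('#view-link-elements')                  // hypothetical selector

    // the click opens a new extension-hosted tab; wait for it, then read its data
    const target = await browser.waitForTarget(t => t.url().startsWith('chrome-extension://'))
    const viewPage = await target.page()
    const data = await viewPage.evaluate(() => document.querySelector('pre').innerText)

    results.push({ url, links: JSON.parse(data) })
    await viewPage.close()
    await page.close()
  }

  require('fs').writeFileSync('results.json', JSON.stringify(results, null, 2))
  await browser.close()
}

main()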

This is a specialized pattern, to be sure, but I’ve found important uses for it and expect there will be more.

To motivate the example I’ve made an extension that, while admittedly concocted, is plausibly useful as a standalone tool. When active it adds a button labeled view link elements to each page you visit. When you click the button, the extension gathers all the link elements on the page, organizes them, and sends the data to a new tab where you view it. There’s a menagerie of these elements running around on the web, and it can be instructive to reveal them on any given page.

The most concocted aspect of the demo is the input box that accompanies the view link elements button. It’s not strictly required for this tool, but I’ve added it because the ability to automate user input is broadly useful and I want to show how that can work.

The extension itself, in the repo’s browser-extension folder, may serve as a useful example. While minimal, it illustrates some key idioms: injecting code into the current tab; using shadow DOM to isolate injected content from the page’s CSS; sending messages through the extension’s background script in order to create a new tab. To run it, clone the repo (or just download the extension’s handful of files into a folder) and follow these instructions from https://developer.chrome.com/extensions/getstarted:

  1. Open the Extension Management page by navigating to chrome://extensions. The Extension Management page can also be opened by clicking on the Chrome menu, hovering over More Tools then selecting Extensions.

  2. Enable Developer Mode by clicking the toggle switch next to Developer mode.

  3. Click the LOAD UNPACKED button and select the extension directory.

This works in Chromium-based browsers including Chrome, Brave, and the new version of Edge. The analogous procedure for Firefox:

Open “about:debugging” in Firefox, click “Load Temporary Add-on” and select any file in your extension’s directory.

The Chromium and Firefox extension machinery is not fully compatible. That’s why Hypothesis has yet to ship a Firefox version of our extension, and why the demo extension I’m describing here almost works in Firefox but fails for a reason I’ve yet to debug. Better convergence would of course be ideal. Meanwhile I’m confident that, with some elbow grease, extensions can work in both environments. Chromium’s expanding footprint does tend to erode the motivation to make that happen, but that’s a topic for another day. The scope of what’s possible with Chromium-based browsers and puppeteer is vast, and that’s my topic here.
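Back to the extension’s idioms for a moment. The third one, sending messages through the background script in order to create a new tab, boils down to a couple of well-known WebExtensions calls. Here’s a minimal sketch; the message shape and page name are my own, not necessarily what the repo uses:

// content script: gather link elements, then ask the background script for a new tab
const links = Array.from(document.querySelectorAll('link')).map(l => ({
  rel: l.rel,
  href: l.href
}))
chrome.runtime.sendMessage({ type: 'viewLinkElements', payload: links })

// background script: open the extension-hosted tab in response
chrome.runtime.onMessage.addListener((message) => {
  if (message.type === 'viewLinkElements') {
    chrome.tabs.create({ url: chrome.runtime.getURL('view.html') })   // hypothetical page name
  }
})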

Of particular interest to me is the notion of automated testing of workflow. The demo extension hints at this idea. It requires user input; the automation satisfies that requirement by providing it. In a real workflow, one that’s driven by a set of rules, this kind of automation can exercise web software in a way that complements unit testing with end-to-end testing in real scenarios. Because the software being automated and the software doing the automation share intimate access to the browser, the synergy between them is powerful in ways I’m not sure I can fully explain. But it makes my spidey sense tingle and I’ve learned to pay attention when that happens.

Here’s a screencast of the running automation:

The results gathered from a selection of sites are here.

Some things I noticed and learned along the way:

Languages. Although puppeteer is written in JavaScript, there’s no reason it has to be. Software written in any language could communicate with the browser using the devtools protocol. The fact that puppeteer is written in JavaScript introduces potential confusion in places like this:

const results = await viewPage.evaluate((message) => {
  // this block runs in the browser's 2nd tab
  const data = JSON.parse(document.querySelector('pre').innerText)
  ...

The first line of JS runs in Node; the second runs in the browser. If puppeteer were written in a different language that would be obvious; since it isn’t, I remind myself with a comment.

Settings. If you need the automated instance of Chromium to remember login info or other things, you can add userDataDir: {PATH} to the object passed to puppeteer.launch.

Interactivity. It may seem odd that the demo is using headless Chrome but specifies headless: false in the object passed to puppeteer.launch. I do that in order to be able to watch the automation when running locally; I’d turn it off when running automation in the cloud. Although I’ve not yet explored the idea, an interactive/automated hybrid seems an intriguing foundation for workflow apps and other kinds of guided experiences.
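Putting those last two notes together, the object passed to puppeteer.launch ends up looking something like this; the profile path is a placeholder, and the extension-loading args are the same ones used in the earlier sketch:

const puppeteer = require('puppeteer')

async function launchForLocalRuns(extensionPath) {
  return puppeteer.launch({
    headless: false,                     // watch the automation locally; set true (or omit) in the cloud
    userDataDir: '/tmp/demo-profile',    // placeholder; persists logins and settings between runs
    args: [
      `--disable-extensions-except=${extensionPath}`,
      `--load-extension=${extensionPath}`
    ]
  })
}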

Web components used in the ClinGen app

A few years back I watched Joe Gregorio’s 2015 OSCON talk and was deeply impressed by the core message: “Stop using JS frameworks, start writing reusable, orthogonally-composable units of HTML+CSS+JS.” Earlier this year I asked Joe to reflect on what he’s since learned about applying the newish suite of web standards known collectively as web components. He replied with a blog post that convinced me to try following in his footsteps.

I’ve been thinking about how software components can be built and wired together since the dawn of the web, when the state-of-the-art web components were Java applets and Netscape browser plugins. Since then we’ve seen newer plugin mechanisms (Flash, Silverlight) rise and fall, and the subsequent rise of JavaScript frameworks (jQuery, Angular, React) that are now the dominant way to package, reuse, and recombine pieces of web functionality.

The framework-centric approach has never seemed right to me. My touchstone example is the tag editor in the Hypothesis web annotator. When the Hypothesis project started, Angular was the then-fashionable framework. The project adopted a component, ngTagsInput, which implements the capable and attractive tag editor you see in Hypothesis today. But Angular has fallen from grace, and the Hypothesis project is migrating to React — more specifically, to the React-API-compatible preact. Unfortunately ngTagsInput can’t come along for the ride. We’ll either have to find a React-compatible replacement or build our own tag-editing component.

Why should a web component be hardwired to an ephemeral JavaScript framework? Why can’t the web platform support framework-agnostic components, as Joe Gregorio suggests?

I’ve been exploring these questions, and my provisional answers are: “It shouldn’t!” and “It can!”

The demo I’ll describe here comes from an app used by biocurators working for the ClinGen Consortium. They annotate scientific literature to provide data that will feed machine learning systems. While these biocurators can and do use off-the-shelf Hypothesis to highlight words and phrases in papers, and link them to search results in databases of phenotypes, genes, and alleles, the workflow is complex and the requirements for accurate tagging are strict. The app provides a guided workflow, and assigns tags automatically. In the latest iteration of the app, we’ve added support for a requirement to tag each piece of evidence with a category — individual, family, or group — and an instance number from one to twenty.

The demo extracts this capability from the app. It’s live here, or you can just download index.html and index.js and open index.html locally. This is plain vanilla web software: no package.json, no build script, no transpiler, no framework.

To create a widget that enables curators to declare they are annotating in the context of individual 1 or family 3 or group 11, I started with IntegerSelect, a custom element that supports markup like this:

  <select is="integer-select" type="individual" count="20"></select>  

Here I digress to clarify a point of terminology. If you look up web components on Wikipedia (or anywhere else) you will learn that the term refers to a suite of web technologies, primarily Custom Elements, Shadow DOM, and HTML Template. What I’m showing here is only Custom Elements because that’s all this project (and others I’m working on) have required.

(People sometimes ask: “Why didn’t web components take off?” Perhaps, in part, because nearly all the literature implies that you must adopt and integrate a wider variety of stuff than may be required.)

The thing being extended by IntegerSelect is a native HTML element, HTMLSelectElement. When it first renders into the DOM it’s an empty select element, which the element’s code then fills with options. Because it inherits from HTMLSelectElement, that code can use expressions like this.options[this.selectedIndex].value.

Two key patterns for custom elements are:

  1. Use attributes to pass data into elements.

  2. Use messages to send data out of elements.

You can see both patterns in IntegerSelect’s class definition:

class IntegerSelect extends HTMLSelectElement {
  type
  constructor() {
    super()
    this.type = this.getAttribute('type')
  }
  relaySelection() {
    const e = new CustomEvent('labeled-integer-select-event', { 
      detail: {
        type: this.type,
        value: this.options[this.selectedIndex].value
      }
    })
    // if in a collection, tell the collection so it can update its display
    const closestCollection = this.closest('labeled-integer-select-collection')
    if (closestCollection) {
      closestCollection.dispatchEvent(e)
    }
    // tell the app so it can save/restore the collection's state
    dispatchEvent(e)    
  }
  connectedCallback() {
    const count = parseInt(this.getAttribute('count'))
    let options = ''
    const lookupInstance = parseInt(getLookupInstance(this.type))
    for (let i = 1; i <= count; i++) {
      let selected = ( i == lookupInstance ) ? 'selected' : ''
      options += `<option ${selected}>${i}`
    }
    this.innerHTML = options
    this.onclick = this.relaySelection
  }
}
customElements.define('integer-select', IntegerSelect, { extends: "select" })

As you can see in the first part of the live demo, this element works as a standalone tag that presents a picklist of count choices associated with the given type. When clicked, it creates a message with a payload like this:

   {"type":"individual","value":"8"}

And it beams that message at a couple of targets. The first is an enclosing element that wraps the full hierarchy of elements that compose the widget shown in this demo. The second is the single-page app that hosts the widget. In part 1 of the demo there is no enclosing element, but the app is listening for the event and reports it to the browser’s console.
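The app’s side of that arrangement is just a listener on the window. A minimal version of the handler the hosting page registers might look like this:

// in the hosting app: listen for the event dispatched by IntegerSelect
addEventListener('labeled-integer-select-event', (e) => {
  const { type, value } = e.detail
  console.log(`selected ${type} ${value}`)
})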

An earlier iteration relied on a set of radio buttons to capture the type (individual/family/group), and combined that with the instance number from the picklist to produce results like individual 1 and group 3. Later I realized that just clicking on an IntegerSelect picklist conveys all the necessary data. An initial click identifies the picklist’s type and its current instance number. A subsequent click during navigation of the picklist changes the instance number.

That led to a further realization. The radio buttons had been doing two things: identifying the selected type, and providing a label for the selected type. To bring back the labels I wrapped IntegerSelect in LabeledIntegerSelect whose tag looks like this:

  <labeled-integer-select type="individual" count="20"></labeled-integer-select>

Why not a div (or other) tag with is="labeled-integer-select"? Because this custom element doesn’t extend a particular kind of HTML element, like HTMLSelectElement. As such it’s defined a bit differently. IntegerSelect begins like so:

class IntegerSelect extends HTMLSelectElement

And concludes thusly:

customElements.define('integer-select', IntegerSelect, { extends: "select" })

LabeledIntegerSelect, by contrast, begins:

class LabeledIntegerSelect extends HTMLElement

And ends:

customElements.define('labeled-integer-select', LabeledIntegerSelect)

As you can see in part 2 of the demo, the LabeledIntegerSelect picklist is an IntegerSelect picklist with a label. Because it includes an IntegerSelect, it sends the same message when clicked. There is still no enclosing widget to receive that message, but because the app is listening for the message, it is again reported to the console.

Finally we arrive at LabeledIntegerSelectCollection, whose markup looks like this:

  <labeled-integer-select-collection>
    <labeled-integer-select type="individual" count="20"></labeled-integer-select>
    <labeled-integer-select type="family" count="20"></labeled-integer-select>
    <labeled-integer-select type="group" count="20"></labeled-integer-select>
  </labeled-integer-select-collection>
  

When this element is first connected to the DOM it visits the first of its LabeledIntegerSelects, sets its selected attribute true, and bolds its label. When one of its LabeledIntegerSelects is clicked, it clears that state and reestablishes it for the clicked element.
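I haven’t reproduced the collection’s code here, but the behavior just described reduces to a connectedCallback plus an event listener. This is a rough reconstruction, with the label lookup and bolding details as my own assumptions rather than the repo’s actual implementation:

class LabeledIntegerSelectCollection extends HTMLElement {
  connectedCallback() {
    // select and bold the first labeled-integer-select by default
    this.highlight(this.querySelector('labeled-integer-select'))
    // when a child relays a selection, move that state to the clicked element
    this.addEventListener('labeled-integer-select-event', (e) => {
      const clicked = this.querySelector(`labeled-integer-select[type="${e.detail.type}"]`)
      this.highlight(clicked)
    })
  }
  highlight(selected) {
    this.querySelectorAll('labeled-integer-select').forEach(el => {
      const isSelected = (el === selected)
      if (isSelected) { el.setAttribute('selected', 'true') } else { el.removeAttribute('selected') }
      const label = el.querySelector('label')                // assumes the label is a <label> child
      if (label) { label.style.fontWeight = isSelected ? 'bold' : 'normal' }
    })
  }
}
customElements.define('labeled-integer-select-collection', LabeledIntegerSelectCollection)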

The LabeledIntegerSelectCollection could also save and restore that state when the page reloads. In the current iteration it doesn’t, though. Instead it offloads that responsibility to the app, which has logic for saving to and restoring from the browser’s localStorage.
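That app-level logic can be as simple as mirroring the event payload into localStorage and reading it back on load; presumably a helper like the getLookupInstance call seen earlier gets its answer from the same place. A sketch, with the storage key invented for illustration:

// save the most recent choice whenever the widget reports one
addEventListener('labeled-integer-select-event', (e) => {
  localStorage.setItem('selectedInstance', JSON.stringify(e.detail))
})

// restore it on page load, e.g. { "type": "individual", "value": "8" }
const saved = JSON.parse(localStorage.getItem('selectedInstance') || 'null')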

You can see the bolding, saving, and restoring of choices in this screencast:

I’m still getting the hang of custom elements, but the more I work with them the more I like them. Joe Gregorio’s vision of “reusable, orthogonally-composable units of HTML+CSS+JS” doesn’t seem far-fetched at all. I’ll leave it to others to debate the merits of the frameworks, toolchains, and package managers that so dramatically complicate modern web development. Meanwhile I’ll do all that I can with the basic web platform, and with custom elements in particular. Larry Wall has always said of Perl, which enabled my earliest web development work, that it makes easy things easy and hard things possible. I want the web platform to earn that same distinction, and I’m more hopeful than ever that it can.

Products and Capabilities

I’ve come to depend on the outlining that’s baked into Visual Studio Code. So far as I can tell, it is unrelated to the tool’s impressive language intelligence. There’s no parsing or understanding of the text in which sections can fold and unfold, but indentation is significant and you can exploit that to make nested expand/collapse zones in any language. I use it for HTML, CSS, JavaScript, Python, Markdown, SQL, and maybe a few others I’m forgetting.

For a long time I thought of myself as someone who respected outlining tools as an important category of software but, having tried various of them going all the way back to ThinkTank, never adopted one as part of my regular work. My experience with VSCode, though, has shown me that while I never adopted an outliner product, I have most surely adopted outlining capability. I use it wherever it manifests. I make tables of contents in Google Docs, I use code folding in text editors that support it, I exploit any outlining affordance that exists in any tool I’m using. Recently I was thrilled to learn that the web platform offers such an affordance in the form of the <details> tag.
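If you haven’t met <details>, it costs one wrapper element per collapsible zone, and zones can nest. A minimal example:

  <details>
    <summary>Section heading (click to expand or collapse)</summary>
    <p>Nested content, which can itself contain further details elements.</p>
  </details>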

In thinking about this post, the first title that occurred to me was: Products vs. Capabilities. Were I still earning a living writing about tech I might have gone with that, or an editor might have imposed it, because conflict (in theory) sells content. But since I’m not selling content it’s my call. I choose Products and Capabilities because the two are, or anyway can (and arguably should) be complementary. For some people, an outliner is a product you use regularly because you value that way of visualizing and reorganizing stuff that you write. You might not be one of those people. I’m not. I don’t, for example, use outlining to organize my thoughts when I write prose, as I’m doing now. That just isn’t how my mind works. I’ve done enough decent writing over the years to accept that it’s OK to not explicitly outline my prose.

For the various kinds of code I spend a lot of time wrangling these days, though, outlining has become indispensable. An outlining expert might say that the capability I’m talking about isn’t real full-fledged outlining, and I’d agree. In a dedicated outliner, for example, you expect to be able to drag things up and down, and in and out, without messing them up. VSCode doesn’t have that, it’s “just” basic indentation-driven folding and unfolding. But that’s useful. And because it shows up everywhere, I’m more effective in a variety of contexts.

In web annotation, I see the same kind of product/capability synergy. Hypothesis is a product that you can adopt and use as a primary tool. You might do that if you’re the sort of person who fell in love with delicious, then moved on to Diigo or Pinboard. Or if you’re a teacher who wants to make students’ reading visible. But web annotation, like outlining, is also a capability that can manifest in — and enhance — other tools. I showed some examples in this 2017 talk; I’ve helped develop others since; I’m more excited than ever about what’s possible.

I’m sure there are other examples. What are some of them? Is there a name for this pattern?

Digital sweatshops then and now

In the early 1960s my family lived in New Delhi. In 1993, working for BYTE, I returned to India to investigate its software industry. The story I wrote, which appears below, includes this vignette:

Rolta does facilities mapping for a U.S. telephone company through its subsidiary in Huntsville, Alabama. Every night, scanned maps flow through the satellite link to Bombay. Operators running 386-based RoltaStations retrieve the maps from a Unix server, digitize them using Intergraph’s MicroStation CAD software, and relay the converted files back to Huntsville.

I didn’t describe the roomful of people sitting at those 386-based RoltaStations doing the work. It was a digital sweatshop. From the window of that air-conditioned room, though, you could watch physical laborers enacting scenes not unlike this one photographed by my dad in 1961.

I don’t have a picture of those rows of map digitizers in the city soon to be renamed Mumbai. But here’s a similar picture from a recent Times story, A.I. Is Learning From Humans. Many Humans.:

Labor-intensive data labeling powers AI. Web annotation technology is a key enabler for that labeling. I’m sure there will be digital sweatshops running the software I help build. For what it’s worth, I’m doing my best to ensure there will be opportunities to elevate that work, empowering the humans in the loop to be decision-makers and exception-handlers, not just data-entry clerks.

Meanwhile, here’s that 1993 story, nearly forgotten by the web but salvaged from the NetNews group soc.culture.indian.


Small-systems thinking makes India a strategic software partner

by Jon Udell

BYTE, 09/01/1993

Recently, I saw a demonstration of a new Motif-based programmer’s tool called Sextant. It’s a source code analyzer that converts C programs into labeled graphs that you navigate interactively. The demonstration was impressive. What made it unique for me was that it took place in the offices of Softek, a software company in New Delhi, India.

It’s well known that Indian engineering talent pervades every level of the microcomputer industry. But what’s happening in India? On a recent tour of the country, I visited several high-tech companies and discovered that India is evolving rapidly from an exporter of computer engineering talent into an exporter of computer products and services. Software exports, in particular, dominate the agenda. A 1992 World Bank study of eight nations rated India as the most attractive nation for U.S., European, and Japanese companies seeking offshore software-development partners.

The World Bank study was conducted by Infotech Consulting (Parsippany, NJ). When the opportunity arose to visit India, I contacted Infotech’s president, Jack Epstein, for advice on planning my trip. He referred me to Pradeep Gupta, an entrepreneur in New Delhi who publishes two of India’s leading computer magazines, PC Quest and DataQuest. Gupta also runs market-research and conference businesses. He orchestrated a whirlwind week of visits to companies in New Delhi and Bombay and generously took time to acquaint me with the Indian high-tech scene.

A Nation of Small Systems

Even among Indians, there’s a tendency to attribute India’s emerging software prowess to the innate mathematical abilities of its people. “After all, we invented zero,” says Dewang Mehta, an internationally known computer graphics expert. He is also executive director of the National Association of Software and Service Companies (NASSCOM) in New Delhi. While this cultural stereotype may hold more than a grain of truth, it’s not the whole story. As NASSCOM’s 1992 report on the state of the Indian software industry notes, India has the world’s second-largest English-speaking technical work force. Consequently, Indian programmers are in tune with the international language of computing, as well as the language spoken in the U.S., the world’s largest market.

Furthermore, India’s data-communications infrastructure is rapidly modernizing. And the Indian government has begun an aggressive program of cutting taxes and lifting import restrictions for export-oriented Indian software businesses while simultaneously clearing the way for foreign companies to set up operations in India.

Other countries share many of these advantages, but India holds an ace. It is a nation of small systems. For U.S. and European companies that are right-sizing mainframe- and minicomputer-based information systems, the switch to PC-based client/server alternatives can be wrenching. Dumping the conceptual baggage of legacy systems isn’t a problem for India, however, because, in general, those systems simply don’t exist. “India’s mainframe era never happened,” says Gupta.

When Europe, Japan, and the U.S. were buying mainframes left and right, few Indian companies could afford their high prices, which were made even more costly by 150 percent import duties. Also, a government policy limiting foreign investors to a 40 percent equity stake in Indian manufacturing operations drove companies like IBM away, and the Indian mainframe industry never got off the ground.

What did develop was an indigenous microcomputer industry. In the early 1980s, Indian companies began to import components and to assemble and sell PC clones that ran DOS. This trend quickened in 1984, when the late Rajiv Gandhi, prime minister and an avid computer enthusiast, lifted licensing restrictions that had prevented clone makers from selling at full capacity.

In the latter half of the 1980s, a computerization initiative in the banking industry shifted the focus to Unix. Front offices would run DOS applications, but behind the scenes, a new breed of Indian-made PCs — Motorola- and Intel-based machines running Unix — would handle the processing chores. Unfortunately, that effort stalled when the banks ran afoul of the unions; even today, many of the Bank of India’s 50,000 branches aren’t linked electronically.

Nevertheless, the die was cast, and India entered the 1990s in possession of a special advantage. Indian programmers are not only well educated and English-speaking, but out of necessity they’re keenly focused on client/server or multiuser solutions for PCs running DOS (with NetWare) or Unix — just the kinds of solutions that U.S. and European companies are rushing to embrace. India finds itself uniquely positioned to help foreign partners right-size legacy applications.

The small-systems mind-set also guides India’s fledgling supercomputer industry. Denied permission by the U.S. government to import a Cray supercomputer, the Indian government’s Center for the Development of Advanced Computers built its own — very different — sort of supercomputer. Called PARAM, it gangs Inmos T800 transputers in parallel and can also harness Intel 860 processors for vector work. Related developments include a transputer-based neural-network engine intended to run process-control applications. The designers of this system impressed me with their clear grasp of the way in which inexpensive transputers can yield superior performance, scalability, modularity, and fault tolerance.

Software Products and Services

Many of the companies I visited produce comparable offerings for LAN or Unix environments. In the realm of packaged software, Oberoi Software in New Delhi sells a high-end hotel management application using Sybase 4.2 that runs on Hewlett-Packard, DEC, and Sun workstations. A low-end version uses Btrieve for DOS LANs. Softek offers 1-2-3, dBase, and WordStar work-alikes for DOS and Unix.

Shrink-wrapped products, however, aren’t India’s strong suit at the moment. PCs remain scarce and expensive commodities. According to DataQuest, fewer than 500,000 PCs can be found in this nation of 875 million people. To a U.S. software engineer, a $3000 PC might represent a month’s wages. An equivalently prosperous Indian professional would have to work a full year to pay for the same system. To put this in perspective, the average per capita wage in India is about $320, and the government caps the monthly salary of Indian corporate executives at around $1600 per month.

Software piracy is another vexing problem. “The competition for a 5000-rupee [approximately $165] Indian spreadsheet isn’t a 15,000-rupee imported copy of Lotus 1-2-3,” says Gupta, “but rather a zero-rupee pirated copy of Lotus 1-2-3.”

Painfully aware of the effect piracy has on the country’s international reputation as a software power, government and industry leaders have joined forces to combat it. The Department of Electronics (DoE), for example, has funded an anti-piracy campaign, and Lotus has a $69 amnesty program that enables users of illegal copies of 1-2-3 to come clean.

Reengineering Is a National Strength

The real action in Indian software isn’t in products. It’s in reengineering services. A typical project, for example, might involve re-creating an IBM AS/400-type application for a LAN or Unix environment. A few years ago, Indian programmers almost invariably would perform such work on location in the U.S. or Europe, a practice called “body shopping.” This was convenient for clients, but it wasn’t very beneficial to India because the tools and the knowledge spun off from reengineering projects tended to stay overseas.

More recently, the trend is to carry out such projects on Indian soil. Softek, for example, used a contract to build a law-office automation system for a Canadian firm as an opportunity to weld a number of its own products into a powerful, general-purpose client/server development toolkit. Softek engineers showed me how that toolkit supports single-source development of GUI software for DOS or Unix (in character mode) as well as Windows. They explained that client programs can connect to Softek’s own RDBMS (relational DBMS) or to servers from Gupta, Ingres, Oracle, or Sybase. That’s an impressive achievement matched by few companies anywhere in the world, and it’s one that should greatly enhance Softek’s appeal to foreign clients.

While reengineering often means right-sizing, that’s not always the case. For example, the National Indian Institution for Training, a New Delhi-based computer-training institute rapidly expanding into the realm of software products and services, has rewritten a well-known U.S. commercial word processor. Rigorous development techniques are the watchword at NIIT. “We have a passion for methodology,” says managing director Rajendra S. Pawar, whose company also distributes Excelerator, Intersolv’s CASE tool.

Other projects under way at NIIT include an X Window System interface builder, Mac and DOS tutorials to accompany the Streeter series of math textbooks (for McGraw-Hill), a simple but effective multimedia authoring tool called Imaginet, a word processor for special-needs users that exploits an NIIT-designed motion- and sound-sensitive input device, and an instructional video system.

Although services outweigh products for now, and the Indian trade press has complained that no indigenous software product has yet made a splash on the world scene, the situation could well change. Indian programmers are talented, and they’re up-to-date with database, GUI, network, and object-oriented technologies. These skills, along with wages a tenth or less of what U.S. programmers earn, make Indian programming a force to be reckoned with. Software development is a failure-prone endeavor; many products never see the light of day. But, as Tata Unisys (Bombay) assistant vice president Vijay Srirangan points out, “The cost of experimentation in India is low.” Of the many software experiments under way in India today, some will surely bear fruit.

A major obstacle blocking the path to commercial success is the lack of international marketing, but some help has been forthcoming. Under contract to the U.K.’s Developing Countries Trade Agency, the marketing firm Schofield Maguire (Cambridge, U.K.) is working to bring selected Indian software companies to the attention of European partners. “India does have a technological lead over other developing countries,” says managing partner Alison Maguire. “But to really capitalize on its software expertise, it must project a better image.”

Some companies have heard the message. For example, Ajay Madhok, a principal with AmSoft Systems (New Delhi), parlayed his firm’s expertise with computer graphics and digital video into a high-profile assignment at the 1992 Olympics. On a recent U.S. tour, he visited the National Association of Broadcasters show in Las Vegas. Then he flew to Atlanta for Comdex. While there, he bid for a video production job at the 1996 Olympics.

Incentives for Exporters

According to NASSCOM, in 1987, more than 90 percent of the Indian software industry’s $52 million in earnings came from “on-site services” (or body shopping). By 1991, on-site services accounted for a thinner 61 percent slice of a fatter $179 million pie. Reengineering services (and, to a lesser extent, packaged products) fueled this growth, with help from Indian and U.S. government policies.

On the U.S. side, visa restrictions have made it harder to import Indian software labor. India, meanwhile, has developed a range of incentives to stimulate the software and electronics industries. Government-sponsored technology parks in Noida (near New Delhi), Pune (near Bombay), Bangalore, Hyderabad, and several other locations support export-oriented software development. Companies that locate in these parks share common computing and telecommunications facilities (including leased-line access to satellite links), and they can import duty-free the equipment they need for software development.

The Indian government has established export processing zones in which foreign companies can set up subsidiaries that enjoy similar advantages and receive a five-year tax exemption. Outside these protected areas, companies can get comparable tax and licensing benefits by declaring themselves fully export-oriented.

Finally, the government is working to establish a number of hardware technology parks to complement the initiative in software. “We want to create many Hong Kongs and Singapores,” says N. Vittal, Secretary of the DoE and a tireless reformer of bureaucracy, alluding to the economic powerhouses of the Pacific Rim.

The Indian high-tech entrepreneurs I met all agreed that Vittal’s tenacious slashing of government red tape has blazed the trail they now follow. How serious is the problem of government red tape? When the government recently approved a joint-venture license application in four days, the action made headlines in both the general and trade press. Such matters more typically take months to grind their way through the Indian bureaucracy.

The evolution of India’s telecommunications infrastructure shows that progress has been dramatic, though uneven. In a country where only 5 percent of the homes have telephone service, high-tech companies increasingly rely on leased lines, packet-switched data networks, and satellite links. The DoE works with the Department of Telecommunication (DoT) to ensure that software export businesses get priority access to high-bandwidth services.

But the slow pace of progress at the DoT remains a major frustration. For example, faxing can be problematic in India, because the DoT expects you to apply for permission to transmit data. And despite widespread Unix literacy, only a few of the dozens of business cards I received during my tour carried Internet addresses. Why? DoT regulations have retarded what would have been the natural evolution of Unix networking in India. I did send mail home using ERNET, the educational resource network headquartered in the DoE building in New Delhi that links universities throughout the country. Unfortunately, ERNET isn’t available to India’s high-tech businesses.

Vittal recognizes the critical need to modernize India’s telecommunications. Given the scarcity of an existing telecommunications infrastructure, he boldly suggests that for many scattered population centers, the solution may be to completely pass over long-haul copper and vault directly into the satellite era. In the meantime, India remains in this area, as in so many others, a land of extreme contrasts. While most people lack basic telephone service, workers in strategic high-tech industries now take global voice and data services for granted.

Powerful Partners

When Kamal K. Singh, chairman and managing director of Rolta India, picks up his phone, Rolta’s U.S. partner, Intergraph, is just three digits away. A 64-Kbps leased line carries voice and data traffic from Rolta’s offices, located in the Santacruz Electronics Export Processing Zone (SEEPZ) near Bombay, to an earth station in the city’s center. Thence, such traffic travels via satellite and T1 lines in the U.S. to Intergraph’s offices in Huntsville, Alabama.

Rolta builds Intel- and RISC-based Intergraph workstations for sale in India; I saw employees doing everything from surface-mount to over-the-network software installation. At the same time, Rolta does facilities mapping for a U.S. telephone company through its subsidiary in Huntsville. Every night, scanned maps flow through the satellite link to Bombay. Operators running 386-based RoltaStations retrieve the maps from a Unix server, digitize them using Intergraph’s MicroStation CAD software, and relay the converted files back to Huntsville.

Many Indian companies have partnerships with U.S. firms. India’s top computer company, HCL, joined forces with Hewlett-Packard to form HCL-HP. HCL’s roots were in multiprocessor Unix. “Our fine-grained multiprocessing implementation of Unix System V has been used since 1988 by companies such as Pyramid and NCR,” says director Arjun Malhotra.

HCL’s joint venture enables it to build and sell HP workstations and PCs in India. “People appreciate HP quality,” says marketing chief Ajai Chowdhry. But since Vectra PCs are premium products in the price-sensitive Indian market, HCL-HP also plans to leverage its newly acquired HP design and manufacturing technology to build indigenous PCs that deliver “good value for money,” according to Malhotra.

Pertech Computers, a system maker in New Delhi, recently struck a $50 million deal to supply Dell Computer with 240,000 motherboards. Currently, trade regulations generally prohibit the import of certain items, such as finished PCs. However, exporters can use up to 25 percent of the foreign exchange they earn to import and sell such items. Pertech director Bikram Dasgupta plans to use his “forex” money to buy Dell systems for resale in India and to buy surface-mount equipment so that the company can build work-alikes.

IBM returned to India last year, after leaving in 1978, to join forces with the Tatas, a family of Indian industrialists. The joint venture, Tata Information Systems, will manufacture PS/2 and PS/VP systems and develop software exports.

Citicorp Overseas Software, a Citicorp subsidiary, typifies a growing trend to locate software-development units in India. “Our charter is first and foremost to meet Citicorp’s internal requirements,” says CEO S. Viswanathan, “but we are a profit center and can market our services and products.” On a tour of its SEEPZ facility in Bombay, I saw MVS, Unix, VMS, and Windows programmers at work on a variety of projects. In addition to reengineering work for Citicorp and other clients, the company markets banking products called Finware and MicroBanker.

ITC (Bangalore) supplements its Oracle, Ingres, and AS/400 consulting work by selling the full range of Lotus products. “Because we have the rights to manufacture Lotus software locally,” says vice president Shyamal Desai, “1-2-3 release 2.4 was available here within a week of its U.S. release.” Other distributors of foreign software include Onward Computer Technologies in Bombay (NetWare) and Bombay-based Mastek (Ingres).

India’s ambitious goal is to quadruple software exports from $225 million in 1992 to $1 billion in 1996. To achieve that, everything will have to fall into place. It would be a just reward. India gave much to the international microcomputer industry in the 1980s. In the 1990s, the industry just might return the favor.

“It’s not your fault, mom.”

I just found this never-published 2007 indictment of web commerce, and realized that if mom were still here 12 years later I could probably write the same thing today. There hasn’t been much progress on smoother interaction for the impaired, and I don’t see modern web software development on track to get us there. Maybe it will require a (hopefully non-surgical) high-bandwidth brain/computer interface. Maybe Doc Searls’ universal shopping cart. Maybe both.


November 3, 2007

Tonight, while visiting my parents, I spent an hour helping my mom buy tickets to two Metropolitan Opera simulcasts. It went horribly wrong in six different ways. Here is the tale of woe.

A friend had directed her to Fandango with instructions to “Type ‘Metropolitan Opera’ in the upper-right search box.” Already my mom’s in trouble. Which upper-right search box? There’s one in the upper-right corner of the browser, and another in the upper-right corner of the Fandango page. She doesn’t distinguish between the two.

I steer her to the Fandango search box, she tries to type in ‘Metropolitan Opera’, and fails several times. Why? She’s unclear about clicking to set focus on the search box. And when she does finally aim for it, she misses. At age 86, arthritis and macular degeneration conspire against her.

I help her to focus on the search box, she types in ‘Metropolitan Opera’, and lands on a search results page. The title of the first show she wants to buy is luckily on the first page of a three-page result set, but it’s below the fold. She needs to scroll down to find it, but isn’t sure how.

I steer her to the show she wants, she clicks the title, and lands on another page where the interaction is again below the fold. Now she realizes she’s got to navigate within every page to the active region. But weak vision and poor manual dexterity make that a challenge.

We reach the page for the April 26th show. Except, not quite. Before she can choose a time for the show — unnecessarily, since there’s only one time — she has to reselect the date by clicking a link labeled ‘Saturday April 26th’. Unfortunately the link text’s color varies so subtly from the regular text’s color that she can’t see the difference, and doesn’t realize it is a link.

I steer her to the link, and she clicks to reveal the show time: 1:30. Again it’s unclear to her that this is a link she must follow.

I steer her to the 1:30 link, and finally we reach the purchase page. Turns out she already has an account with Fandango, which my sister must have helped her create some time ago. So mom just needs to sign in and…

You can see this coming from a mile away. She’s forgotten which of her usual passwords she used at this site. After a couple of failures, I steer her to the ‘Forgot password’ link, and we do the email-checking dance.

The email comes through, we recover the password, and proceed to checkout. The site remembers her billing address and credit card info. I’m seeing daylight at the end of the tunnel.

Except it’s not daylight, it’s an oncoming train.

Recall that we’re trying to buy tickets for two different shows. So now it’s time to go back and add the second one to the shopping cart. Unfortunately Fandango doesn’t seem to have a shopping cart. Each time through you can only buy one set of tickets.

I steer her back to the partially-completed first transaction. We buy those tickets and print the confirmation page.

Then we enter the purchase loop for the second time and…this one is not like the other one. This time through, it asks for the card’s 3-digit security code. She enters it correctly, but the transaction fails because something has triggered the credit card company’s fraud detector. Probably the two separate-but-identical charges in rapid succession.

We call the credit card company, its automated system wants her to speak or enter her social security number. She tries speaking the number, the speech recognizer gets it wrong, but mom can’t hear what it says back to her. In addition to macular degeneration and arthritis, she’s lost a lot of her hearing in the last few years. So she tries typing the social security number on the phone’s keypad, and fails.

I grab the phone and repeat ‘agent’ and ‘operator’ until somebody shows up on the line. He does the authentication dance with her (maiden name, husband’s date of birth, etc.), then I get back on the line and explain the problem. The following dialogue ensues:

Agent: “Whoa, this is going to be hard, we’re having a problem and I can’t access her account right now. Do you want to try later?”

Me: “Well, I’m here with my 86-year-old mom, and we’ve invested a whole lot of effort in getting to this point, is there any way to hang onto the context?”

He sympathizes, and connects me to another, more powerful agent. She overrides the refusal, authorizes the $63, and invites me to try again.

Now the site reports a different error: a mismatch in either the billing zip code, or the credit card’s 3-digit security code. To Fandango it looks like the transaction failed. To the credit card company it looks like it succeeded. What now?

The agent is pretty sure that if the transaction failed from Fandango’s perspective, it’ll come out in the wash. But we’re both puzzled. I’m certain the security code is correct. And all other account info must be correct, right? How else could we have successfully bought the first set of tickets?

But just to double-check, I visit mom’s account page on Fandango. The billing address zip code is indeed correct. So is everything else, except…wait…the credit card’s expiration date doesn’t match. The account page says 11/2007, the physical card says 11/2010.

Turns out the credit card company recently refreshed mom’s card. This is weird and disturbing because, if the successful transaction and the failed transaction were both using the same wrong expiration date, they both should have failed.

Sigh. I change the expiration date, and finally we buy the second set of tickets. It’s taken an hour. My mom observes that if Fandango accepted orders over the phone, we’d have been done about 55 minutes ago. And she asks:

“How could I possibly have done that on my own?”

I know all the right answers. Better web interaction design. Better assistive technologies for the vision-, hearing-, and dexterity-impaired. Better service integration between merchants and credit card companies. Better management of digital identity. Someday it’ll all come together in a way that would enable my mom to do this for herself. But today isn’t that day.

Her only rational strategy was to do just what she did, namely recruit me. For which she apologizes.

All I can say, in the end, is: “Mom, it’s not your fault.”

Walking the Kortum Trail

On my first trip to Sonoma County I flew to SFO and drove north through San Francisco, across the Golden Gate Bridge, and through Marin County on the 101 freeway.

The next time I drove highway 1 up the coast and was stunned by the contrast. The Marin headlands are twisty and challenging, but then the road opens out to a series of ocean and grassland vistas inhabited mainly, it seems, by a few lucky cattle. How, I wondered, could such pristine coastline exist so near the megalopolis?

One especially magical stretch runs north from Bodega Bay, on the Marin/Sonoma county line, to Jenner at the mouth of the Russian River. If you park at Wrights Beach you can walk the last four miles to the river, along dramatic coastal bluffs, on a trail named for Bill and Lucy Kortum. When Bill died in 2015 the Press Democrat described him as a man “who spent most of his life fighting to rein in urban sprawl and protect public access to the coast” and “saved the landscape of Sonoma County.”

Recently, for the first time, I hiked the Kortum Trail end-to-end both ways. Here’s the view from Peaked Hill:

For that view, and for so much else that we love about Sonoma County, we owe thanks to Bill Kortum. It began in the early 1960s when his older brother Karl led the fight to stop the construction of a nuclear power plant on Bodega Head. Although I’ve changed my mind about the need for nuclear power, siting one on the San Andreas fault was a crazy plan. Thankfully that never happened.

We’re frequent visitors to Bodega Head, and we’d heard about the “Hole in the Head” — the excavated pit left behind when PG&E abandoned the project — but until recently we weren’t sure quite where it was. Turns out we’d driven right past it every time we made the hairpin turn on Westshore Road at Campbell Cove. What would have been a nuclear plant is now a tranquil pond enjoyed by waterfowl.

Many of the things we love about Sonoma County can be traced to the activism that began there, and has continued for fifty years, led or influenced by Bill Kortum.

The planned community at Sea Ranch would have blocked public access to 13 miles of coastline. That didn’t happen. Now there are six access trails.

The entire California coastline would look very different were it not for the oversight of the California Coastal Commission, established by voters in 1972 partly in response to the Sea Ranch controversy.

The Sonoma County landscape would look very different were it not for urban growth boundaries that restrict sprawl and encourage densification.

The shameful six-decade-long lack of a passenger rail system in the North Bay would have continued.

The 1200-mile California Coastal Trail would not be more than halfway complete.

I was a bit surprised to find no Wikipedia page for Bill Kortum, so I’ve created a stub, about which Wikipedia says: “Review waiting, please be patient. This may take 8 weeks or more.”

As a relative newcomer to Sonoma County I feel ill-equipped to write more of this history. Perhaps others who know more will add to that stub. Here are some of the best sources I’ve found:

– John Crevelli’s Fifty Year History of Environmental Activism in Sonoma County.

– Gaye LeBaron’s 2006 interview with Bill Kortum.

– The Press Democrat’s obituary.

You were nearing the end of your life when we arrived here, Bill. I never had the chance to meet you, or to thank you for having the vision to preserve so much that is special about this place, along with the will and political savvy to make it happen. We are deeply grateful.

Why Maine embraces immigration

An excellent choice for 4th-of-July podcast listening would be Maine, Africa on Radio Open Source. The intro:

We went to Portland, Maine, this week to meet newcomers from Central Africa, Angolans and Congolese asking for U.S. asylum. Fox News hit the panic button two weeks ago: their line was that Maine is being overrun, inundated by African migrants. On a long day in Portland, however, we found nobody sounding scared.

I’ve noticed the strong African influence in Portland when visiting, and wondered: “Why is that?” The answer is that Maine’s population is stable but aging. Immigrants are needed, and the state welcomes them.

A couple of moments in this podcast stopped me in my tracks.

With mayor Ethan Strimling:

“You know what, we don’t consider you somebody else’s problem, we consider you our opportunity.”

And with a woman interviewed on the street:

Chris Lydon: “How did Portland get to be so welcoming?”

Interviewee: “We’re Americans.”

Not everyone’s on board. There’s an anti-immigrant minority, but it is a minority. Most of the people in Portland, it seems, look backward to their own immigrant roots and forward to a future that embraces an influx of world cultures.

On this Independence Day I choose to believe that a national majority feels the same way, and will prevail.

The Woodard projection

In a memorable episode of The West Wing, visitors from the Cartographers for Social Justice upend CJ’s and Josh’s worldviews.

Cartographer: “The Peters projection.”

CJ: “What the hell is that?”

Cartographer: “It’s where you’ve been living this whole time.”

I’m having the same reaction to Colin Woodard’s 2011 book American Nations. He sees North America as three federations of nations. The federation we call the United States comprises nations he calls Yankeedom, New Netherland, The Midlands, Tidewater, Greater Appalachia, The Deep South, El Norte, The Far West, and The Left Coast.

Here’s his definition of a nation:

A nation is a group of people who share — or believe they share — a common culture, ethnic origin, language, historical experience, artifacts, and symbols.

Worldwide some nations are stateless, some align with state borders, and some cut across state borders. North America’s eleven nations are, in his view, stateless, cutting across state boundaries in ways I find disorienting but enlightening.

On his map (from a 2013 Tufts alumni magazine article now readable only at the Internet Archive) I have marked the US places where I’ve lived.

Until now, if you asked me where I’ve lived, I’d have said East Coast and Midwest and recently California. According to the Woodard projection I have lived in four nations: Yankeedom, The Midlands, Tidewater, and The Left Coast. It wasn’t easy to locate my homes on his map. They all occupy a narrow band of latitude. On the East Coast, that band touches three of Woodard’s nations. In two of those, Yankeedom and The Midlands, I lived near the cradles of nations that spread far north and west.

I’m from near Philadelphia, in The Midlands, “founded by English Quakers, who welcomed people of many nations and creeds to their utopian colonies on the shores of Delaware Bay.” That resonates. I grew up in a place called Plymouth Meeting and went to Quaker kindergarten there. It would never have occurred to me that Philadelphia is culturally closer to places as far west as Nebraska, and as far north as the province of Ontario, than to Baltimore or Boston. Likewise I never thought of Ann Arbor, where I called myself a midwesterner, as part of a culture that flowed west from Boston. Or that Baltimore sits at the intersection of three nations.

These eleven nations have been hiding in plain sight throughout our history. You see them outlined on linguists’ dialect maps, cultural anthropologists’ maps of material culture regions, cultural geographers’ maps of religious regions, campaign strategists’ maps of political geography, and historians’ maps of the pattern of settlement across the continent.

Two of the eleven nations, Yankeedom and The Deep South, have been “locked in nearly perpetual combat for control of the federal government since the moment such a thing existed,” Woodard says.

The analysis, which builds on prior art that he cites, may be a helpful way to contextualize the 2016 US election.

“The Woodard projection.”

“What the hell is that?”

“It’s where you’ve been living this whole time.”

Don’t just Google it! First, let’s talk!

Asking questions in conversation has become problematic. For example, try saying this out loud: “I wonder when Martin Luther King was born?” If you ask that online, a likely response is: “Just Google it!” Maybe with a snarky link: http://lmgtfy.com/?q=when was martin luther king born?

Asking in conversation is frowned upon too. Why ask us when you could ask Siri or Alexa to ask Google?

There’s a kind of shaming going on here. We are augmented with superpowers, people imply; you’re an idiot not to use them.

Equally problematic is the way this attitude shuts down conversation. Sure, I can look up MLK’s birthday. Or you can. But what if our phones are dead? Now we need to rely on our own stored knowledge and, more importantly, on our collective pool of stored knowledge.

I think MLK was shot in 1968. I’d have been twelve. Does that sound right? Yeah, we were in the new house, it was sixth grade. And I know he died young, maybe 42? So I’ll guess 1968 – 42 = 1926.

Were you around then? If so, how do you remember MLK’s assassination? If not, what do you know about the event and its context?

As you can see in the snarky screencast, I’m wrong, MLK died even younger than I thought. You might know that. If we put our heads together, we might get the right answer.

Asking about something that can easily be looked up shouldn’t stop a conversation, it should start one. Of course we can look up the answer. But let’s dwell for a minute on what we know, or think we know. Let’s exercise our own powers of recall and synthesis.

Like all forms of exercise, this one can be hard to motivate, even though it’s almost always rewarding. Given a choice between an escalator and the stairs, I have to push myself to prefer the stairs. In low-stakes wayfinding I have to push myself to test my own sense of direction. When tuning my guitar I have to push myself to hear the tones before I check them. I am grateful to be augmented by escalators, GPS wayfinders, and tuners. But I want to use these powers in ways that complement my own. Finding the right balance is a struggle.

Why bother? Consider MLK. In The unaugmented mind I quoted from MLK confidant Clarence Jones’ Behind the Dream:

What amazed me was that there was absolutely no reference material for Martin to draw upon. There he was [in the Birmingham jail] pulling quote after quote from thin air. The Bible, yes, as might be expected from a Baptist minister, but also British prime minister William Gladstone, Mahatma Gandhi, William Shakespeare, and St. Augustine.

And:

The “Letter from a Birmingham Jail” showed his recall for the written material of others; his gruelling schedule of speeches illuminated his ability to do the same for his own words. Martin could remember exact phrases from several of his unrelated speeches and discover a new way of linking them together as if they were all parts of a singular ever-evolving speech. And he could do it on the fly.

Jones suggests that MLK’s power flowed in part from his ability to reconfigure a well-stocked working memory. We owe it to ourselves, and to one another, to nurture that power. Of course we can look stuff up. But let’s not always go there first. Let’s take a minute to exercise our working memories and have a conversation about them, then consult the oracle. I’ll bet that in many cases that conversation will turn out to be the most valuable part of the experience.

My new library superpower

The recommendations that matter to me — for books to read, movies to watch, products to buy, places to visit — almost never come from algorithms. Instead they come from friends, family, acquaintances, and now also, in the case of books, from fellow library patrons whom I’ve never met.

This last source is a recent discovery. Here’s how it works. When I enter the library, I walk past shelves of new books and staff picks. These are sometimes appealing, but there’s a much better source of recommendations to be found at the back of the library. There, on shelves and carts, are the recently-returned books ready to be reshelved. I always find interesting titles among them. Somehow I had never noticed that you can scan these titles, and check out the ones you want before they ever get reshelved. Did it always work this way at our library? At libraries in other places I’ve lived? If so I am woefully late to the party, but for me, at least, this has become a new superpower.

I reckon that most books in the library rarely if ever leave the shelves. The recently-returned section is a filter that selects for books that patrons found interesting enough to check out, read (or maybe not), and return. That filter has no bias with respect to newness or appeal to library staff. And it’s not being manipulated by any algorithm. It’s a pure representation of what our library’s patrons collectively find interesting.

The library’s obvious advantage over the bookstore is the zero price tag. But there’s a subtler advantage. In the bookstore, you can’t peruse an anonymized cache of recently-bought books. Online you can of course tap into the global zeitgeist, but there you’re usually being manipulated by algorithms. LibraryThing doesn’t do that, and in principle it’s my kind of place online, but in practice I’ve never succeeded in making it a habit. I think that’s because I really like scanning titles that I can just take off a shelf and read for free. Not even drone delivery can shorten the distance between noticing and reading to seconds. Ebooks can, of course. But that means another screen, and I already spend too much time looking at screens; the print medium provides a welcome alternative.

This method has been so effective that I’ve been (guiltily) a bit reluctant to describe it. After all, if everyone realized it’s better to bypass the new-book and staff-pick displays and head straight for the recently-returned area, the pickings would be slimmer there. But I’m nevertheless compelled to disclose the secret. Since few if any of my library’s patrons will ever read this post, I don’t think I’m diluting my new superpower by explaining it here. And who knows, maybe I’ll meet some interesting people as I exercise it.

 

Designing for least knowledge

In a post on the company blog I announced that it’s now possible to use the Hypothesis extension in the Brave browser. That’s great news for Hypothesis. It’s awkward, after all, to be an open-source non-profit software company whose product is “best viewed in” Chrome. A Firefox/Hypothesis extension has been in the works for quite a while, but the devil’s in the details. Although Firefox supports WebExtensions, our extension still has to be ported to Firefox; it isn’t plug-and-play. Thanks to Brave’s new ability to install unmodified Chrome extensions directly from the Chrome web store, Hypothesis can now plug into a powerful alternative browser. That’s a huge win for Hypothesis, and also for me personally, since I’m no longer tethered to Chrome in my daily work.

Another powerful alternative is forthcoming. Microsoft’s successor to Edge will (like Brave) be built on Chromium, the open-source engine of Google’s Chrome. The demise of EdgeHTML represents a loss of genetic diversity, and that’s unfortunate. Still, it’s a huge investment to maintain an independent implementation of a browser engine, and I think Microsoft made the right pragmatic call. I’m now hoping that the Chromium ecosystem will support speciation at a higher level in the stack. Ever since my first experiments with Greasemonkey I’ve been wowed by what browser extensions can do, and eager for standardization to unlock that potential. It feels like we may finally be getting there.

Brave’s support for Chrome extensions matters to me because I work on a Chrome extension. But Brave’s approach to privacy matters to me for deeper reasons. In a 2003 InfoWorld article I discussed Peter Wayner’s seminal book Translucent Databases, which explores ways to build information systems that work without requiring the operators to have full access to users’ data. The recipes in the book point to a design principle of least knowledge.
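
One of Wayner’s recipes gives the flavor. Instead of storing a user’s email address, store a salted hash of it; the service can then recognize a returning user without ever being able to say who that user is. A minimal sketch (mine, not Wayner’s, and the salt handling is simplified):

const crypto = require('crypto');

// The application holds a secret salt; the database never sees raw identities.
const SALT = process.env.APP_SALT;

function userKey(email) {
  return crypto.createHmac('sha256', SALT)
               .update(email.trim().toLowerCase())
               .digest('hex');
}

// Records are keyed by the opaque hash, so the operator can tally a user's
// activity (donations, say) without knowing who the user is.
// db.get(userKey('alice@example.com')) -> { donationTotal: ... }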

Surveillance capitalism knows way too much about us. Is that a necessary tradeoff for the powerful conveniences it delivers? It’s easy to assume so, but we haven’t really tried serious alternatives yet. That’s why this tweet made my day. “We ourselves do not know user identities or how donations might link via user,” wrote Brendan Eich, Brave’s founder. “We don’t want to know.”

We don’t want to know!

That’s the principle of least knowledge in action. Brave is deploying it in service of a mission to detoxify the relationship between web content and advertising. Will the proposed mechanisms work? We’ll see. If you’re curious, I recommend Brendan Eich’s interview with Eric Knorr. The first half of the interview is a deep dive into JavaScript, the second half unpacks Brave’s business model. However that model turns out, I’m grateful to see a real test of the principle. We need examples of publishing and social media services that succeed not as adversaries who monetize our data but rather as providers who deliver convenience at a reasonable price we’re happy to pay.

My hunch is that we’ll find ways to build profitable least-knowledge services once we start to really try. Successful examples will provide a carrot, but there’s also a stick. Surveillance data is toxic stuff, risky to monetize because it always spills. It’s a liability that regulators — and perhaps also insurers — will increasingly make explicit.

Don’t be evil? How about can’t be evil? That’s a design principle worth exploring.

Highlighting passages doesn’t aid my memory, but speaking them does

When I was in college, taking notes on textbooks and course readings, I often copied key passages into a notebook. There weren’t computers then, so like a medieval scribe I wrote out my selections longhand. Sometimes I added my own notes, sometimes not, but I never highlighted, even in books that I owned. Writing out the selections was a way to perform the work I was reading, record selections in my memory, and gain deeper access to the mind of the author.

Now we have computers, and the annotation software I help build at Hypothesis is ideal for personal note-taking. Close to half of all Hypothesis annotations are private notes, so clearly lots of people use it that way. For me, though, web annotation isn’t a private activity. I bookmark and tag web resources that I want to keep track of, and collaborate with others on document review, but I don’t use web annotation to enhance my private reading.

To be sure, I mostly read books and magazines in print. It’s a welcome alternative to the screens that otherwise dominate my life. But even when my private reading happens online, I don’t find myself using our annotation tool the way so many others do.

So, what’s a good way to mark and remember a passage in a book if you don’t want to highlight it, or in the case of a library book, can’t highlight it? I thought about the scribing I used to do in college, and realized there’s now another way to do that. Recently, when I read a passage in a book or magazine that I want to remember and contemplate, I’ve been dictating it into a note-taking app on my phone.

I’ve followed the evolution of speech-to-text technology with great interest over the years. When I reviewed Dragon NaturallySpeaking, I did what every reviewer does. I tried to use the tool to dictate my review, and got mixed results. Over time the tech improved but I haven’t yet adopted dictation for normal work. At some point I decided to forget about dictation software until it became something that civilians who weren’t early-adopter tech journos used in real life.

One day, when I received some odd text messages from Luann, I realized that time had arrived. She’d found the dictation feature on her phone. It wasn’t working perfectly, and the glitches were amusing, but she was using it in an easy and natural way, and the results were good enough.

I still don’t dictate to my computer. This essay is coming to you by way of a keyboard. But I dictate to my phone a lot, mostly for text messages. The experience keeps improving, and now this new practice — voicing passages that I read in books, in order to capture and remember them — seems to be taking hold.

I’m reminded of a segment in a talk given by Robert “R0ml” Lefkowitz at the 2004 Open Source Conference, entitled The Semasiology of Open Source (part 2), the second in a series structured as thesis (part 1), antithesis (part 2), and synthesis (part 3). ITConversations aptly described this luminous series of talks as “an intellectual joy-ride”; I’m going to revisit the whole thing on a hike later today.

Meanwhile, here’s a transcription of the segment I’m recalling. It appears during a review of the history of literacy. At this point we have arrived at 600 AD.

To be a reader was not to be the receiver of information, it was to be the transmitter of the information, because it was not possible to read silently. So things that were written were written as memory aids to the speaker. And the speaker would say the words to the listener. To read was to speak, and those were synonyms … The writing just lies there, whereas the speaking lifts it off the page. The writing is just there, but the speaking is what elevates the listener.

Had I merely read that passage I’m certain I would not remember it 14 years later. Hearing R0ml speak the words made an indelible impression. (Seeing him speak the words, of course, made it even more indelible.)

Silent reading, once thought impossible, had to be invented. But just because we can read silently doesn’t mean we always should, as everyone who’s read aloud to a young child, or to a vision-impaired elder, knows. It’s delightful that voice recognition affords new ways to benefit from the ancient practice of reading aloud.

A small blog neighborhood hiding in plain sight

For as long as I can remember, I’ve tweeted every blog post. As an experiment, I didn’t do that with this one. As a result, WordPress tells me that relatively few people have read it. But I’m not monetizing pageviews here, so why do I care? The interaction with those who did read the post was pleasant.

So, the experiment continues. Some items here, like this one, can enjoy a more intimate space than others, simply by not announcing themselves on Twitter. A small blog neighborhood hiding in plain sight.

Where’s my Net dashboard?

Yesterday Luann was reading a colleague’s blog and noticed a bug. When she clicked the Subscribe link, the browser loaded a page of what looked like computer code. She asked, quite reasonably: “What’s wrong? Who do I report this to?”

That page of code is an RSS feed. It works the same way as the one on her own blog. The behavior isn’t a bug, it’s a lost tradition. Luann has been an active blogger for many years, and once used an RSS reader, but for her and so many others, the idea of a common reader for the web has faded.
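
For anyone who has never peeked behind the curtain, here’s roughly what a feed reader does with that “page of code” (a sketch; the feed URL is a placeholder, though any WordPress blog publishes one at /feed/):

async function readFeed(url) {
  // Fetch the RSS XML and pull out the items a reader would display.
  const xml = await (await fetch(url)).text();
  const doc = new DOMParser().parseFromString(xml, 'text/xml');
  return [...doc.querySelectorAll('item')].map(item => ({
    title: item.querySelector('title')?.textContent,
    link: item.querySelector('link')?.textContent,
    date: item.querySelector('pubDate')?.textContent
  }));
}

// readFeed('https://example.wordpress.com/feed/').then(items => console.table(items));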

There was a time when most of the sources I cared about flowed into such a reader: mainstream news, a vibrant and growing blogosphere, podcasts, status updates, standing search queries, you name it. The unread item count could get daunting, but I was able to effectively follow a wide assortment of information flows in what I came to think of as my Net dashboard.

Where’s my next dashboard? I imagine a next-gen reader that brings me the open web and my social circles in a way that helps me attend to and manage all the flow. There are apps for that, a nice example being FlowReader, which has been around since 2013. I try these things hopefully but so far none has stuck.

Information overload, once called infoglut, remains a challenge. We’re all flooded with more channels than we can handle, more conversations happening in more places than we can keep track of.

Fear of missing out (FOMO) is the flip side of infoglut. We expect that we should be able to sanely monitor more than we actually can.

The first-gen reader didn’t solve infoglut/FOMO, nothing could, but for a while, for me, it was better than the alternative, which was (and now is again) email. Of course that was me, a tech journalist who participated in, researched, and wrote about topics in Net technology and culture — including RSS, which animated the dashboard I used to keep track of everything else. It was, however, a workflow that researchers and analysts in other fields will recognize.

Were I doing the same kind of work today, I’d cobble together the same kind of dashboard, while grumbling about the poorer experience now available. Instead my professional information diet is narrower and deeper than when analytical writing for commercial audiences was my work. My personal information diet, meanwhile, remains as diverse as everyone’s.

So I’m not sure that a next-gen reader can solve the same problems that my first-gen reader did, in the same ways. Still, I can’t help but envision a dashboard that subscribes to, and manages notifications from, all my sources. It seems wrong that the closest thing to that, once more, is email. Plugging the social silos into a common reader seems like the obvious thing. But if that were effective, we’d all be using FlowReader or something like it.

Why don’t we? Obviously the silos can refuse to cooperate, as FlowReader noted when announcing the demise of its Facebook integration:

These changes were made [by Facebook] to give users more control over their own data, which we support. It’s a great thing for users! However, it also means that Facebook data is no longer going to be easy to share between applications.

You know what would be a really great thing for users, though? A common reader that makes it easy to keep track of friends and family and coworkers along with news and all kinds of personal and professional information sources.

“What’s wrong?”

It’s not just that the silos can shut down their feeds. It’s that we allowed ourselves to get herded into them in the first place. For a while, quite a few people got comfortable with the notion of publishing and subscribing to diverse feeds in a common way, using systems that put them in charge of outflow and inflow. In one form or another that’s still the right model. Sometimes we forget things and have to relearn them. This is one of those things.

“Who do I report this to?”

Everyone.

The reengineering of three California lakes

If you drive up the eastern side of California, you’ll encounter three ancient lakebeds transformed by human engineering during the last century. The sequence, from south to north, goes like this: Salton Sea, Owens Valley, Mono Lake. We visited all three on a recent road trip. Since returning I’ve learned that their stories continue to unfold.

The Salton Sea

The Salton Sea, almost twice the size of Lake Tahoe, was created accidentally in 1905 when an irrigation project went out of control and, for a year and a half, sucked the Colorado river into what had been a dry lakebed. As a recent immigrant to California I can confirm that many natives don’t know much, if anything, about this place. A 2004 documentary narrated by John Waters, Plagues & Pleasures on the Salton Sea, traces its arc through a boom era of sport fishing and real estate speculation to what is now a living ghost town. For decades people have talked about saving the Salton Sea. That may once have meant restoring California’s “lost Riviera” but nowadays it’s all about mitigating an environmental disaster.

The Salton Sink is a low-lying basin that has flooded and dried many times over hundreds of thousands of years. What makes the current drying phase different is that the only inflow now is agricultural runoff with high concentrations of pesticides. As the lake evaporates, toxic dust goes windborne, threatening not only the Salton Sea communities, but also nearby Mexicali, Mexico, a metropolitan area with a million people. Meanwhile the lake’s increasing salinity is ruining the habitat of millions of migratory birds. These looming health and environmental crises motivated California to allocate $200 million as part of a successful June 2018 ballot initiative (Prop. 68) to stabilize the Salton Sea. (Another $200 million won’t be forthcoming because it was part of a failed follow-on ballot initiative in November 2018, Prop. 3.)

In an effort to buy time, a 2003 agreement to transfer water from the Imperial Valley to San Diego required the release of “mitigation water” to the Salton Sea. That ended in 2017 with no clear roadmap in place. What would it mean to stabilize the Salton Sea? A dwindling Colorado River won’t be the answer. It may be possible to import seawater from the Pacific, or the Gulf of California, to create a “smaller but sustainable” future that balances the needs of the region’s people and wildlife in a context of growing crises of drought and fire. Dry methods of dust suppression might also help. But all methods will require major engineering of one kind or another. The cost of that 1905 accident continues to grow.

The Owens Valley

Further north lies Owens Lake, mostly drained since 1913 when William Mulholland began siphoning its water into the LA aqueduct. The movie Chinatown is a fictionalized account of the battle for that water. What remains is mostly a vast salt flat that’s patrolled by mining vehicles and was, until recently, the nation’s worst source of dust pollution. From highway 395 it still looks like the surface of some other planet. But mitigation began at the turn of this century, and by 2017 LA’s Department of Water and Power had spent nearly $2 billion on what a modern western explorer known as StrayngerRanger describes as “a patchwork of dust-smothering techniques, including gravel, flooded ponds and rows of planted vegetation that cover nearly 50 square miles — an area more than twice the size of Manhattan.”

A key innovation, according to the LA Times, “involves using tractors to turn moist lake bed clay into furrows and basketball-sized clods of dirt. The clods will bottle up the dust for years before breaking down, at which point the process will be repeated.” This dry method was a big success, the dust has settled, public trails opened in 2016, and as Robin Black’s photography shows, life is returning to what’s left of the lake.

The LA Times:

In what is now hailed as an astonishing environmental success, nature quickly responded. First to appear on the thin sheen of water tinged bright green, red and orange by algae and bacteria were brine flies. Then came masses of waterfowl and shorebirds that feed on the insects.

Robin Black:

This was never supposed to happen, and it’s a BIG DEAL. This, the habitat creation and management. This, the welcoming of the public to the lake. This is a huge victory for the groups in the Owens Valley who fought so tirelessly to make this happen.

We need to celebrate environmental successes and learn from them. And on this trip, the best one was yet to come.

Mono Lake

Mono Lake, near the town of Lee Vining (“the gateway to Yosemite”), was the next object of LA’s thirst. Its inflow was diverted to the aqueduct, and the lake almost went dry. Now it’s filling up again, thanks to the dedicated activists who formed the Mono Lake Committee 40 years ago, sued the LA Department of Water and Power, won back 3/4 of the inflow, and as chronicled in their video documentary The Mono Lake Story, have been stewarding the lake’s recovery ever since.

Today you can walk along the newly-restored Lee Vining Creek trail and feel the place coming alive again. There’s been cleanup and trail-building and planting but, for the most part, all that was needed was to let the water flow again.

How did LA compensate for the lost flow? This 30-second segment of the documentary answers the question. The agreement included funding that enabled LA to build a wastewater reclamation plant to make up the difference. It wasn’t a zero-sum game after all. There was a way to maintain a reliable water supply for LA and to restore Mono Lake. It just took people with the vision to see that possibility, and the will to make it real.

Renaming Hypothesis tags

Wherever social tagging is supported as an optional feature, its use obeys a power law. Some people use tags consistently, some sporadically, most never. This chart of Hypothesis usage illustrates the familiar long-tail distribution:

Those of us in the small minority of consistent taggers care a lot about the tag namespaces we’re creating. We tag in order to classify resources, and we want to classify them consistently, but we also want to morph our tag namespaces to reflect our changing intuitions about how to classify, and to adapt to evolving social conventions.

Consistent tagging requires a way to make, use, and perhaps share a list of controlled tags, and that’s a topic for another post. Morphing tag namespaces to satisfy personal needs, or adapt to social conventions, requires the ability to rename tags, and that’s the focus of this post.

There’s nothing new about the underlying principles. When I first got into social tagging, with del.icio.us back in the day, I made a screencast to show how I was using del.icio.us’ tag renaming feature to reorganize my own classification system, and to evolve it in response to community tag usage. The screencast argues that social taggers form a language community in which metadata vocabularies can evolve in the same way natural languages do.

Over the years I’ve built tag renamers for other systems, and now I’ve made one for Hypothesis, as shown in this 90-second demo. If you’re among the minority who want to manage your tags in this way, you’re welcome to use the tool; here’s the link. But please proceed with care. When you reorganize a tag namespace, it’s possible to wind up on the wrong end of the arrow of entropy!
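
If you’re curious what’s under the hood, renaming a tag boils down to two Hypothesis API calls: search for your annotations that carry the old tag, then PATCH each one with an updated tag list. A rough sketch, not the tool’s actual code (you’d supply your own API token and username, and page through results if you have more than a couple hundred):

const API = 'https://api.hypothes.is/api';
const TOKEN = 'YOUR_DEVELOPER_TOKEN';
const USER = 'acct:you@hypothes.is';

async function renameTag(oldTag, newTag) {
  const headers = { Authorization: `Bearer ${TOKEN}`, 'Content-Type': 'application/json' };

  // Find this user's annotations that carry the old tag.
  const search = await fetch(
    `${API}/search?user=${encodeURIComponent(USER)}&tag=${encodeURIComponent(oldTag)}&limit=200`,
    { headers }
  ).then(r => r.json());

  // Rewrite each annotation's tag list in place.
  for (const row of search.rows) {
    const tags = row.tags.map(t => (t === oldTag ? newTag : t));
    await fetch(`${API}/annotations/${row.id}`, {
      method: 'PATCH',
      headers,
      body: JSON.stringify({ tags })
    });
  }
}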

Letters to Mr. Wilson’s Museum of Jurassic Technology

Dear Mr. Wilson,

Your Museum of Jurassic Technology (MJT) first came to my attention in 1995 when I read an excerpt from Lawrence Weschler’s Mr. Wilson’s Cabinet of Wonder in the New Yorker. Or anyway, that’s the origin story I’ve long remembered and told. As befits the reality-warping ethos of the MJT, it seems not to be true. I can find no record of such an article in the New Yorker’s archive. (Maybe I read it in Harper’s?) What I can find, in the New York Times’ archive, is a review of the book that nicely captures that ethos:

Run by its eccentric proprietor out of a storefront in Culver City, Calif., the museum is clearly a modern-day version, as Mr. Weschler astutely points out, of the “wonder-cabinets” that sprang up in late Renaissance Europe, inspired by all the discoveries in the New World. David Wilson comes off as an amusingly Casaubonesque figure who, in his own little way, seeks to amass all the various kinds of knowledge in the world; and if his efforts seem random and arcane, they at any rate sound scientifically specific. Yet when Mr. Weschler begins to check out some of the information in the exhibits, we discover that much of it is made up or imagined or so elaborately embroidered as to cease to resemble any real-world facts.

The key to the pleasure of this book lies in that “much of,” for the point of David Wilson’s museum is that you can’t tell which parts are true and which invented. In fact, some of the unlikeliest items — the horned stink ants, for instance — turn out to be pretty much true. In the wake of its moment of climactic exposure, “Mr. Wilson’s Cabinet of Wonder” turns into an expedition in which Lawrence Weschler tracks down the overlaps, correspondences and occasionally tenuous connections between historical and scientific reality on the one hand and the Museum of Jurassic Technology on the other.

We’d always wanted to visit the MJT, and finally made the pilgrimage a few weeks ago. We were cruising down the California coast on a road trip, and the museum was our only LA destination. The miserable traffic we fought all day, both entering and leaving Culver City, was a high price to pay for several hours of joy at the museum. But it was worth it.

At one point during our visit, I was inspecting a display case filled with a collection of antique lace, each accompanied by a small freestanding sign describing the item. On one of the signs, the text degrades hilariously. As I remember it (no doubt imperfectly) there were multiple modes of failure: bad kerning, faulty line spacing, character misencoding, dropouts, faded print. Did I miss any?

While I was laughing at the joke, I became aware of another presence in the dark room, a man with white hair who seemed to pause near me before wandering off to an adjoining dark room. I thought it might be you, but I was too intimidated to ask.

The other night, looking for videos that might help me figure out if that really was you, I found a recording of an event at the USC Fisher Museum of Art. In that conversation you note that laughter is a frequent response to the MJT:

“We don’t have a clue what people are laughing about,” you said, “but it’s really hard to argue with laughter.”

I was laughing at that bit of hilariously degraded signage. I hope that was you hovering behind me, and I hope that you were appreciating my appreciation of the joke. But maybe not. Maybe that’s going to be another questionable memory. Whatever the truth may be, thank you. We need the laughs now more than ever.

Yours sincerely,

Jon Udell

Searching across silos, circa 2018

The other day, while searching across various information silos — WordPress, Slack, readthedocs.io, GitHub, Google Drive, Google Groups, Zendesk, Stack Overflow for Teams — I remembered the first time I simplified that chore. It was a fall day in 1996. I’d been thinking about software components since writing BYTE’s Componentware[1] cover story in 1994, and about web software since launching the BYTE website in 1995. On that day in 1996 I put the two ideas together for the first time, and wrote up my findings in an installment of my monthly column entitled On-Line Componentware. (Yes, we really did write “On-Line” that way.)

The solution described in that column was motivated by McGraw-Hill’s need to unify search across BYTE and some other MgH pubs that were coming online. Alta Vista, then the dominant search engine, was indexing all the sites. It didn’t offer a way to search across them. But it did offer a site: query as Google and Bing do today, so you could run per-site queries. I saw it was possible to regard Alta Vista as a new kind of software component for a new kind of web-based application that called the search engine once per site and stitched the results into a single web page.

Over the years I’ve done a few of these metasearch apps. In the pre-API era that meant normalizing results from scraped web pages. In the API era it often means normalizing results fetched from per-site APIs. But that kind of normalization was overkill this time around; I just needed an easier way to search for words that might be on our blog, or in our docs, or in our GitHub repos, or in our Google storage. So I made a web page that accepts a search term, runs a bunch of site-specific searches, and opens each into a new tab.
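
The whole thing is barely more than a lookup table of URL templates plus window.open. A sketch of the idea (the templates here are illustrative, not the exact set I use):

// One query, many site-scoped searches, each opened in its own tab.
// (Your browser may ask you to allow pop-ups for this page.)
const searches = [
  q => `https://example.wordpress.com/?s=${encodeURIComponent(q)}`,            // WordPress blog
  q => `https://github.com/search?q=org%3Ayour-org+${encodeURIComponent(q)}`,  // GitHub org
  q => `https://your-project.readthedocs.io/en/latest/search.html?q=${encodeURIComponent(q)}`,
  q => `https://www.google.com/search?q=site%3Agroups.google.com+${encodeURIComponent(q)}`
];

function searchAll(q) {
  searches.forEach(template => window.open(template(q), '_blank'));
}

// searchAll('notifications');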

This solution isn’t nearly as nice as some of my prior metasearchers. But it’s way easier than authenticating to APIs and merging their results, or using a bunch of different query syntaxes interactively. And it’s really helping me find stuff scattered across our silos.

But — there’s always a but — you’ll notice that Slack and Zendesk aren’t yet in the mix. All the other services make it possible to form a URL that includes the search term. That’s just basic web thinking. A set of web search results is an important kind of web resource that, like any other, should be URL-accessible. Unless I’m wrong, in which case someone please correct me, I can’t do the equivalent of `?q=notifications` in a Slack or Zendesk URL. Of course it’s possible to wrap their APIs in a way that enables URL query, but I shouldn’t have to. Every search deserves its own URL.


[1] Thanks as always to Brewster Kahle for preserving what publishers did not.

Annotation-powered apps: A “Hello World” example

Workflows that can benefit from annotation exist in many domains of knowledge work. You might be a teacher who focuses a class discussion on a line in a poem. You might be a scientist who marks up research papers in order to reason about connections among them. You might be an investigative journalist fact-checking a claim. A general-purpose annotation tool like Hypothesis, already useful in these and other cases, can be made more useful when tailored to a specific workflow.

I’ve tried a few different approaches to that sort of customization. For the original incarnation of the Digital Polarization Project, I made a Chrome extension that streamlined the work of student fact-checkers. For example, it assigned tags to annotations in a controlled way, so that it was easy to gather all the annotations related to an investigation.

To achieve that effect the Digipo extension repurposed the same core machinery that the Hypothesis client uses to convert a selection in a web page into the text quote and text position selectors that annotations use to identify and anchor to selections in web pages. Because the Digipo extension and the Hypothesis extension shared this common machinery, annotations created by the former were fully interoperable with those created by the latter. The Hypothesis client didn’t see Digipo annotations as foreign in any way. You could create an annotation using the Digipo tool, with workflow-defined tags, and then open that annotation in Hypothesis and add a threaded discussion to it.

As mentioned in Annotations are an easy way to Show Your Work, I’ve been unbundling the ingredients of the Digipo tool to make them available for reuse. Today’s project aims to be the “Hello World” of custom annotation-powered apps. It’s a simple working example of a powerful three-step pattern:

  1. Given a selection in a page, form the selectors needed to post an annotation that targets the selection.
  2. Lead a user through an interaction that influences the content of that annotation.
  3. Post the annotation.

For the example scenario, the user is a researcher tasked with evaluating the emotional content of selected passages in texts. The researcher is required to label the selection as StrongPositive, WeakPositive, Neutral, WeakNegative, or StrongNegative, and to provide freeform comments on the tone of the selection and the rationale for the chosen label.
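
Step 3 is the easiest part to show in isolation. Once step 1 has produced the selectors and step 2 has gathered the label and comments, posting the annotation is a single call to the Hypothesis API. A sketch, not the demo’s actual code (the token and group ID are placeholders):

// Post an annotation, with a workflow-defined tag, to the Hypothesis API.
async function postAnnotation({ uri, selectors, label, comment, token, group }) {
  const payload = {
    uri,
    group,                                   // e.g. chosen from the groups dropdown
    tags: [label],                           // StrongPositive ... StrongNegative
    text: comment,                           // tone and rationale from step 2
    target: [{ source: uri, selector: selectors }]
  };
  const response = await fetch('https://api.hypothes.is/api/annotations', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
  return response.json();
}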

Here’s the demo in action:

The code behind the demo is here. It leverages two components. The first encapsulates the core machinery used to identify the selection. The second provides a set of functions that make it easy to create annotations and post them to Hypothesis. Some of these provide user interface elements, notably the dropdown list of Hypothesis groups that you can also see in other Hypothesis-oriented tools like facet and zotero. Others wrap the parts of the Hypothesis API used to search for, or create, annotations.

My goal is to show that you needn’t be a hotshot web developer to create a custom annotation-powered app. If the pattern embodied by this demo app is of interest to you, and if you’re at all familiar with basic web programming, you should be able to cook up your own implementations of the pattern pretty easily.

To work with selections in web pages, annotation-powered apps like these have to get themselves injected into web pages. There are three ways that can happen, and the Hypothesis client exploits all of them. Browser extensions are the most effective method. The Hypothesis extension is currently available only for Chrome, but there’s a Firefox extension waiting in the wings, made possible by an emerging consensus around the WebExtensions model.

A second approach entails sending a web page through a proxy that injects an annotation-powered app into the page. The Hypothesis proxy, via.hypothes.is, does that.

Finally there’s the venerable bookmarklet, a snippet of JavaScript that runs in the context of the page when the user clicks a button on the browser’s bookmarks bar. Recent evolution of the browser’s security model has complicated the use of bookmarklets. The simplest ones still work everywhere, but bookmarklets that load helper scripts — like the HelloWorldAnnotated demo — are thwarted by sites that enforce Content Security Policy.
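
For concreteness, the entire deployment story for a bookmarklet like HelloWorldAnnotated fits in a bookmark (the script URL below is a placeholder). It also shows why Content Security Policy bites: the helper script being injected is exactly what a strict script-src directive refuses to load.

javascript:(function () {
  // Inject a helper script into the current page.
  // On a site that enforces CSP, this load is blocked unless the
  // script's origin is allowed by the page's script-src directive.
  var s = document.createElement('script');
  s.src = 'https://example.com/helloWorldAnnotated.js';
  document.body.appendChild(s);
})();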

It’s a complicated landscape full of vexing tradeoffs. Here’s how I recommend navigating it. The HelloWorldAnnotated demo uses the bookmarklet approach. If you’re an instructional designer targeting Project Gutenberg, or a scientific developer targeting the scientific literature, or a newsroom toolbuilder targeting most sources of public information, you can probably get away with deploying annotation-powered apps using bookmarklets. It’s the easiest way to build and deploy such apps. If the sites you need to target do enforce Content Security Policy, however, then you’ll need to deliver the same apps by way of a browser extension or a proxy. Of those two choices, I’d recommend a browser extension, and if there’s interest, I’ll do a follow-on post showing how to recreate the HelloWorldAnnotated demo in that form.

Annotating on, and with, media


At the 2018 I Annotate conference I gave a flash talk on the topic covered in more depth in Open web annotation of audio and video. These are my notes for the talk, along with the slides.


Here’s something we do easily and naturally on the web: Make a selection in a page.

The web annotation standard defines a few different ways to describe the selection. Here’s how Hypothesis does it:

We use a variety of selectors to help make the annotation target resilient to change. I’d like you to focus here on the TextPositionSelector, which defines the start and the end of the selection. Which is something we just take for granted in the textual domain. Of course a selection has a start and an end. How could it not?
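
The slide showed a real example; the shape, with made-up position values and a placeholder URL, looks like this:

{
  "target": [
    {
      "source": "https://example.com/article",
      "selector": [
        {
          "type": "TextQuoteSelector",
          "exact": "Make a selection in a page",
          "prefix": "easily and naturally on the web: ",
          "suffix": "."
        },
        {
          "type": "TextPositionSelector",
          "start": 1841,
          "end": 1867
        }
      ]
    }
  ]
}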

An annotation like this gives you a kind of URL that points to a selection in a document. There isn’t yet a standard way to write that URL along with the selectors, so Hypothesis points to that combination — that is, the URL plus the selectors — using an ID, like this:

The W3C have thought about a way to bundle selectors with the URL, so you’d have a standard way to cite a selection, like this:

In any case, the point is we’re moving into a world where selections in web resources have their own individual URLs.

Now let’s look again at this quote from Nancy Pelosi:

That’s not something she wrote. It’s something she said, at the Peter G Peterson Fiscal Summit, that was recorded on video.

Is there a transcript that could be annotated? I looked, and didn’t find anything better than this export of YouTube captions:

But of course we lack transcriptions for quite a lot of web audio and video. And lots of it can’t be transcribed, because it’s music, or silent moving pictures.

Once you’ve got a media selection there are some good ways to represent it with an URL. YouTube does it like this:

And with filetypes like mp3 and mp4, you can use media fragment syntax like this:
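
Illustrative forms of both, with placeholder values (the slides showed the real URLs; the clip discussed below runs from second 1700 to second 1716):

https://www.youtube.com/embed/VIDEO_ID?start=1700&end=1716
https://example.com/video.mp4#t=1700,1716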

The harder problem in the media domain turns out to be just making the selection in the first place.

Here I am in that video, in the process of figuring out that the selection I’m looking for starts at 28:20 and ends at 28:36.

It’s not a fun or easy process. You can set the start, as I’ve done here, but then you have to scrub around on the timeline looking for the end, and then write that down, and then tack it onto the media fragment URL that you’ve captured.

It’s weird that something so fundamental should be missing, but there just isn’t an easy and natural way to make a selection in web media.

This is not a new problem. Fifteen years ago, these were the dominant media players.

We’ve made some progress since then. The crazy patchwork of plugins and apps has thankfully converged to a standard player that works the same way everywhere.

Which is great. But you still can’t make a selection!

So I wrote a tool that wraps a selection interface around the standard media player. It works with mp3s, mp4s, and YouTube videos. Unlike the standard player, which has a one-handled slider, this thing has a two-handled slider which is kind of obviously what you need to work with the start and end of a selection.

You can drag the handles to set the start and end, and you can nudge them forward and backward by minutes and seconds, and when you’re ready to review the intro and outro for your clip, you can play just a couple of seconds on both ends to check what you’re capturing.

When you’re done, you get a YouTube URL that will play the selection, start to stop, or an mp3 or mp4 media fragment URL that will do the same.

So how does this relate to annotation? In a couple of ways. You can annotate with media, or you can annotate on media.

Here’s what I mean by annotating with media.

I’ve selected some text that wants to be contextualized by the media quote, and I’ve annotated that text with a media fragment link. Hypothesis turns that link into an embedded player (thanks, by the way, to a code contribution from Steel Wagstaff, who’s here, I think). So the media quote will play, start to stop, in this annotation that’s anchored to a text selection at Politifact.

And here’s what I mean by annotating on media.

If I’m actually on a media URL, I can annotate it directly. In this case there’s no selection to be made, because the URL encapsulates the selection, so I can just annotate the complete URL.

This is a handy way to produce media fragment URLs that you can use in these ways. I hope someone will come up with a better one than I have. But the tool is begging to be made obsolete when the selection of media fragments becomes as easy and natural as the selection of text has always been.

Annotations are an easy way to Show Your Work

Journalists are increasingly being asked to show their work. Politifact does that with a sources box, like this:

This is great! The more citation of sources, the better. If I want to check those sources, though, I often wind up spending a lot of time searching within source articles to find passages cited implicitly but not explicitly. If those passages are marked using annotations, the method I’ll describe here makes that material available explicitly, in ways that streamline the reporter’s workflow and improve the reader’s experience.

In A Hypothesis-powered Toolkit for Fact Checkers I described a toolkit that supported the original incarnation of the Digital Polarization Project. More recently I’ve unbundled the key ingredients of that toolkit and made them separately available for reuse. The ingredient I’ll discuss here, HypothesisFootnotes, is illustrated in this short clip from a 10-minute screencast about the original toolkit. Here’s the upshot: Given a web page that contains Hypothesis direct links, you can include a script that pulls the cited material into the page, and connects direct links in the page to citations gathered elsewhere in the page.

Why would such links exist in the first place? Well, if you’re a reporter working on an investigation, you’re likely to be gathering evidence from a variety of sources on the web. And you’re probably copying and pasting bits and pieces of those sources into some kind of notebook. If you annotate passages within those sources, you’ve captured them in a way that streamlines your own personal (or team) management of that information. That’s the primary benefit. You’re creating a virtual notebook that makes that material available not only to you, but to your team. Secondarily, it’s available to readers of your work, and to anyone else who can make good use of the material you’ve gathered.

The example here brings an additional benefit, building on another demonstration in which I showed how to improve Wikipedia citations with direct links. There I showed that a Wikipedia citation need not only point to the top of a source web page. If the citation refers specifically to a passage within the source page, it can use a direct link to refer the reader directly to that passage.

Here’s an example that’s currently live in Wikipedia:

It looks like any other source link in Wikipedia. But here’s where it takes you in the cited Press Democrat story:

Not every source link warrants this treatment. When a citation refers to a specific context in a source, though, it’s really helpful to send the reader directly to that context. It can be time-consuming to follow a set of direct links to see cited passages in context. Why not collapse them into the article from which they are cited? That’s what HypothesisFootnotes does. The scattered contexts defined by a set of Hypothesis direct links are assembled into a package of footnotes within the article. Readers can still visit those contexts, of course, but since time is short and attention is scarce, it’s helpful to collapse them into an included summary.

This technique will work in any content management system. If you have a page that cites sources using direct links, you can use the HypothesisFootnotes script to bring the targets of those direct links into the page.
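
The script isn’t magic, either. Here’s the shape of it, a sketch rather than the actual HypothesisFootnotes code: find the Hypothesis direct links in the page, fetch each annotation from the API, and append the quoted passages as footnotes.

// Turn Hypothesis direct links (https://hyp.is/<id>/<url>) into in-page footnotes.
async function buildFootnotes() {
  const links = [...document.querySelectorAll('a[href^="https://hyp.is/"]')];
  const notes = document.createElement('ol');
  for (const link of links) {
    const id = new URL(link.href).pathname.split('/')[1];
    const anno = await fetch(`https://api.hypothes.is/api/annotations/${id}`)
      .then(r => r.json());
    const quote = anno.target[0].selector
      .find(s => s.type === 'TextQuoteSelector')?.exact || '';
    const li = document.createElement('li');
    li.textContent = `${quote} (${anno.uri})`;
    notes.appendChild(li);
  }
  document.body.appendChild(notes);
}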

Investigating ClaimReview

In Annotating the Wild West of Information Flow I discussed a prototype of a ClaimReview-aware annotation client. ClaimReview is a standard way to format “a fact-checking review of claims made (or reported) in some creative work.” Now that both Google and Bing are ingesting ClaimReview markup, and fact-checking sites are feeding it to them, the workflow envisioned in that blog post has become more interesting. A fact checker ought to be able to highlight a reviewed claim in its original context, then capture that context and push it into the app that produces the ClaimReview markup embedded in fact-checking articles and slurped by search engines.

That workflow is one interesting use of annotation in the domain of fact-checking; it’s doable, and I plan to explore it. But here I’ll focus instead on using annotation to project claim reviews onto the statements they review, in original context. Why do that? Search engines may display fact-checking labels on search results, but nothing shows up when you land directly on an article that includes a reviewed claim. If the reviewed claim were annotated with the review, an annotation client could surface it.

To help make that idea concrete, I’ve been looking at ClaimReview markup in the wild. Not unexpectedly, it gets used in slightly different ways. I wrote a bookmarklet to help visualize the differences, and used it to find these five patterns:

https://www.washingtonpost.com/news/fact-checker/wp/2018/03/16/does-president-trumps-border-wall-pay-for-itself

{
  "urlOfFactCheck": "https://www.washingtonpost.com/news/fact-checker/wp/2018/03/16/does-president-trumps-border-wall-pay-for-itself/?utm_term=.31281569ab46",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "-1",
    "alternateName": "Fuzzy math",
    "worstRating": "-1",
    "bestRating": "-1",
    "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-b89013e8-729f-44aa-816e-93a9307b079c"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Donald Trump ",
      "jobTitle": "President",
      "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-b89013e8-729f-44aa-816e-93a9307b079c",
      "sameAs": [
        "www.whitehouse.gov"
      ]
    },
    "datePublished": "2018-03-13",
    "name": "in a tweet"
  },
  "claimReviewed": "\"The $18 billion wall will pay for itself by curbing the importation of crime, drugs and illegal immigrants who tend to go on the federal dole.”"
}

https://www.snopes.com/fact-check/did-captured-islamic-state-obama/

{
  "urlOfFactCheck": "https://www.snopes.com/fact-check/did-captured-islamic-state-obama/",
  "reviewRating": {
    "alternateName": "False",
    "ratingValue": "",
    "bestRating": ""
  },
  "itemReviewed": {
    "datePublished": "2018-03-13T08:25:46+00:00",
    "name": "Snopes",
    "sameAs": "http://dailyworldupdate.com/2018/03/11/breaking-captured-isis-leader-had-obama-on-speed-dial/"
  },
  "claimReviewed": "A captured Islamic State leader's cell phone contained the phone numbers of world leaders, including former President Barack Obama."
}

https://www.factcheck.org/2018/03/trump-misses-mars-attack/

{
  "urlOfFactCheck": "https://www.factcheck.org/2018/03/trump-misses-mars-attack/",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "-1",
    "alternateName": "Clinton Endorsed Mars Flight",
    "worstRating": "-1",
    "bestRating": "-1",
    "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-76945159-07c1-49d6-833e-717c1da2478f"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Donald Trump",
      "jobTitle": "President of the United States",
      "image": "https://dhpikd1t89arn.cloudfront.net/custom-rating/custom-ratings-76945159-07c1-49d6-833e-717c1da2478f",
      "sameAs": [
        "http://www.whitehouse.gov"
      ]
    },
    "datePublished": "2018-03-13",
    "name": "Speech at military base"
  },
  "claimReviewed": "NASA “wouldn’t have been going to Mars if my opponent won.\""
}

http://www.politifact.com/texas/statements/2018/mar/15/ted-cruz/ted-cruz-says-beto-orourke-wants-open-borders-and-/

{
  "urlOfFactCheck": "http://www.politifact.com/texas/statements/2018/mar/15/ted-cruz/ted-cruz-says-beto-orourke-wants-open-borders-and-/",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "2",
    "alternateName": "False",
    "worstRating": "1",
    "bestRating": "7",
    "image": "https://dhpikd1t89arn.cloudfront.net/rating_images/politifact/tom-false.jpg"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "Ted Cruz",
      "jobTitle": "U.S. senator for Texas",
      "image": "https://dhpikd1t89arn.cloudfront.net/rating_images/politifatc/tom-false.jpg",
      "sameAs": [
        "https://www.youtube.com/watch?v=MvKElqdUxZg"
      ]
    },
    "datePublished": "2018-03-06",
    "name": "Houston, Texas"
  },
  "claimReviewed": "Says Beto O’Rourke wants \"open borders and wants to take our guns.\""
}

https://climatefeedback.org/claimreview/breitbart-repeats-bloggers-unsupported-claim-noaa-manipulates-data-exaggerate-warming/

{
  "urlOfFactCheck": "https://climatefeedback.org/claimreview/breitbart-repeats-bloggers-unsupported-claim-noaa-manipulates-data-exaggerate-warming/",
  "reviewRating": {
    "@type": "Rating",
    "alternateName": "Unsupported",
    "bestRating": 5,
    "ratingValue": 2,
    "worstRating": 1
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Person",
      "name": "James Delingpole"
    },
    "headline": "TBD",
    "image": {
      "@type": "ImageObject",
      "height": "200px",
      "url": "https://climatefeedback.org/wp-content/uploads/Sources/Delingpole_Breitbart_News.png",
      "width": "200px"
    },
    "publisher": {
      "@type": "Organization",
      "logo": "Breitbart",
      "name": "Breitbart"
    },
    "datePublished": "20 February 2018",
    "url": "TBD"
  },
  "claimReviewed": "NOAA has adjusted past temperatures to look colder than they were and recent temperatures to look warmer than they were."
}

To normalize the differences I’ll need to look at more examples from these sites. But I’m also looking for more patterns, so if you know of other sites that routinely embed ClaimReview markup, please drop me links!
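
If you want to poke around yourself, the bookmarklet doesn’t need to do much more than this sketch (which assumes the markup lives in JSON-LD script tags; sites that embed ClaimReview some other way would need different handling):

javascript:(function () {
  // Collect ClaimReview objects from the page's JSON-LD blocks and log them.
  var found = [];
  document.querySelectorAll('script[type="application/ld+json"]').forEach(function (s) {
    try {
      var data = JSON.parse(s.textContent);
      [].concat(data, data['@graph'] || []).forEach(function (item) {
        if (item && item['@type'] === 'ClaimReview') { found.push(item); }
      });
    } catch (e) { /* skip malformed blocks */ }
  });
  console.log(JSON.stringify(found, null, 2));
})();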

 

Open web annotation of audio and video

Hypothesis does open web annotation of text. Let’s unpack those concepts one at a time.

Open, a famously overloaded word, here means:

  • The software that reads, writes, stores, indexes, and searches for annotations is available under an open source license.

  • Open standards govern the representation and exchange of annotations.

  • The annotation system is a good citizen of the open web.

Web annotation, in the most general sense, means:

  • Selecting something within a web resource.

  • Attaching stuff to that selection.

Text, as the Hypothesis annotation client understands it, is HTML, or PDF transformed to HTML. In either case, it’s what you read in a browser, and what you select when you make an annotation.

What’s the equivalent for audio and video? It’s complicated because although browsers enable us to select passages of text, the standard media players built into browsers don’t enable us to select segments of audio and video.

It’s trivial to isolate a quote in a written document. Click to set your cursor to the beginning, then sweep to the end. Now annotation can happen. The browser fires a selection event; the annotation client springs into action; the user attaches stuff to the selection; the annotation server saves that stuff; the annotation client later recalls it and anchors it to the selection.
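
In browser terms that first step is just a few lines (a sketch, not the actual Hypothesis client code):

// React to a text selection the way an annotation client does.
document.addEventListener('selectionchange', function () {
  var selection = window.getSelection();
  if (selection && !selection.isCollapsed) {
    var range = selection.getRangeAt(0);
    // From this Range the client derives quote and position selectors,
    // offers its "annotate" affordance, and later re-anchors the annotation.
  }
});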

But selection in audio and video isn’t like selection in text. Nor is it like selection in images, which we easily and naturally crop. Selection of audio and video happens in the temporal domain. If you’ve ever edited audio or video you’ll appreciate what that means. Setting a cursor and sweeping a selection isn’t enough. You can’t know that you got the right intro and outro by looking at the selection. You have to play the selection to make sure it captures what you intended. And since it probably isn’t exactly right, you’ll need to make adjustments that you’ll then want to check, ideally without replaying the whole clip.

All audio and video editors support making and checking selections. But what’s built into modern browsers is a player, not an editor. It provides a slider with one handle. You can drag the handle back and forth to play at different times, but you can’t mark a beginning and an end.

YouTube shares that limitation, by the way. It’s great that you can right-click and choose “Copy video URL at current time.” But you can’t mark that as the beginning of a selection, and you can’t mark a corresponding end.

We mostly take this limitation for granted, but as more of our public discourse happens in audio or video, rarely supported by written transcripts, we will increasingly need to be able to cite quotes in the temporal domain.

Open web annotation for audio and video will, therefore, require standard players that support selection in the temporal domain. I expect we’ll get there, but meanwhile we need to provide a way to do temporal selection.

In Annotating Web Audio I described a first cut at a clipping tool that wraps a selection interface around the standard web audio and video players. It was a step in the right direction, but was too complex — and too modal — for convenient use. So I went back to the drawing board and came up with a different approach shown here:

This selection tool has nothing intrinsically to do with annotation. Its job is to make your job easier when you are constructing a link to an audio or video segment.

For example, in Welcome to the Sapiezoic I reflected on a Long Now talk by David Grinspoon. Suppose I want to support that post with an audio pull quote. The three-and-a-half-minute segment I want to use begins: “The Anthropocene has been proposed as a new epoch…” It ends: “…there has never been a geological force aware of its own existence. And to me that’s a very profound change.”

Try opening http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3 and finding the timecodes for the beginning (“The Anthropocene…”) and the end (“…profound change”). The link you want to construct is: http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3#t=730,949. That’s a Media Fragments link. It tells a standard media player when to start and stop.

If you click the bare link, your browser will load the audio file into its standard player and play the clip. If you paste the link into a piece of published text, it may even be converted by the publishing system into an inline player. Here’s an annotation on my blog post that does that:

That’s a potent annotation! But to compose it you have to find where the quote starts (12:10) and ends (15:49), then convert those timecodes to seconds. Did you try? It’s doable, but so fiddly that you won’t do it easily or routinely.
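
The arithmetic is trivial; the friction is in doing it by hand. Here’s a minimal sketch of the conversion in JavaScript (the helper names are mine, for illustration only):

// Convert "mm:ss" timecodes to seconds and build a Media Fragments URL.
// Hypothetical helpers, for illustration; not part of any published tool.
function toSeconds(timecode) {
  const [minutes, seconds] = timecode.split(':').map(Number);
  return minutes * 60 + seconds;
}

function mediaFragmentUrl(mediaUrl, start, end) {
  return `${mediaUrl}#t=${toSeconds(start)},${toSeconds(end)}`;
}

mediaFragmentUrl(
  'http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3',
  '12:10',  // 730 seconds
  '15:49'   // 949 seconds
);
// -> "http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3#t=730,949"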

The tools at http://jonudell.net/av/ aim to lower that activation threshold. They provide a common interface for mp3 audio, mp4 video, and YouTube video. The interface embeds a standard audio or video player, and adds a two-handled slider that marks the start and end of a clip. You can adjust the start and end and hear (or hear and see) the current intro and outro. At every point you’ve got a pair of synced permalinks. One is the player link, a media fragment URL like http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3#t=730,949. The other is the editor link. It records the settings that produce the player link.

As I noted in Annotating Web Audio, these clipping tools are just one way to ease the pain of constructing media fragment URLs. Until standard media players enable selection, we’ll need complementary tools like these to help us do it.

Once we have a way to construct timecoded segments, we can return to the question: “What is open web annotation for audio and video?”

At the moment, I see two complementary flavors. Here’s one:

I’m using a Hypothesis page note to annotate a media fragment URL. There’s no text to which Hypothesis can anchor an annotation, but it can record a note that refers to the whole page. In effect I’m bookmarking the page as I could also do in, for example, Pinboard. The URL encapsulates the selection; annotations that attach to the URL refer to the selection.

This doesn’t yet work as you might expect in Hypothesis. If you visit http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3#t=730,949 with Hypothesis you’ll see my annotation, but you’ll also see it if you visit the same URL at #t=1,10, or any other media fragment. That’s helpful in one way, because if there were multiple annotations on the podcast you’d want to discover all of them. But it’s unhelpful in a more important way. If you want to annotate a particular segment for personal use, or to mark it as a pull quote, or because you’re a teacher leading a class discussion that refers to that segment, then you want annotations to refer to the particular segment.

Why doesn’t Hypothesis distinguish between #t=1,10 and #t=730,949? Because what follows the # in a URL is most often not a media fragment but an ordinary fragment that marks a location in text. You’ve probably seen how that works. A page like example.com/page1 has intrapage links like example.com/page1#section1 and example.com/page1#section2. I can send you to those locations in the page by capturing and relaying those fragment-enhanced links. In that case, if we’re annotating the page, we’d most likely want all our annotations to show up, no matter which fragment is focused in the browser. So Hypothesis doesn’t record the fragment when it records the target URL for an annotation.
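
To make the distinction concrete, here’s a sketch of the kind of normalization involved. It is not Hypothesis’s actual code; it just illustrates the choice on the table: strip ordinary fragments like #section1, but recognize temporal media fragments of the form #t=start,end and keep them.

// Sketch only, not Hypothesis's actual normalization code.
// Strip ordinary fragments (#section1, #annotations:...) but keep
// temporal media fragments (#t=730,949) so they could target segments.
function normalizeTargetUrl(rawUrl) {
  const url = new URL(rawUrl);
  const isMediaFragment = /^#t=[\d.]+(,[\d.]+)?$/.test(url.hash);
  if (!isMediaFragment) {
    url.hash = '';   // discard the fragment
  }
  return url.toString();
}

normalizeTargetUrl('https://example.com/page1#section2');
// -> "https://example.com/page1"
normalizeTargetUrl('http://podcast.longnow.org/salt/salt-020170906-grinspoon-podcast.mp3#t=730,949');
// -> unchanged, fragment preserved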

But there can be special fragments. If I share a Hypothesis link, for example https://hyp.is/uT06DvGBEeeXDlMfjMFDAg, you’ll land on the annotated page at a URL that ends with #annotations:uT06DvGBEeeXDlMfjMFDAg. The Hypothesis client uses that information to scroll the annotated document to the place where the annotation is anchored.

Media fragments could likewise be special. The Hypothesis server, which normally discards media fragments, could record them so that annotations made on a media fragment URL would target the fragment. And I think that’s worth doing. Meanwhile, note that you can annotate the editor links provided by the tools at http://jonudell.net/av/:

This works because the editor links don’t use fragments, only URL parameters, and because Hypothesis regards each uniquely-parameterized URL as a distinct annotation target. Note also that you needn’t use the tools as hosted on my site. They’re just a small set of files that can be hosted anywhere.

Eventually I hope we’ll get open web annotation of audio and video that’s fully native, meaning not only that the standard players support selection, but also that they directly enable creation and viewing of annotations. Until then, though, this flavor of audio/video annotation — let’s call it annotating on media — will require separate tooling both for selecting quotes, and for creating and viewing annotations directly on those quotes.

We’ve already seen the other flavor: annotating with media. To do that with Hypothesis, construct a media fragment URL and cite it in a Hypothesis annotation. What should the annotation point to? That’s up to you. I attached the David Grinspoon pull quote to one of my own blog posts. When I watched a PBS interview with Virginia Eubanks, I captured one memorable segment and attached it to the page on her blog that features the book discussed in the interview.

(If I were Virginia Eubanks I might want to capture the pull quote myself, and display it on my book page for visitors who aren’t seeing it through the Hypothesis lens.)

Open web annotation of audio and video should encompass both of these flavors. You should be able to select a clip within a standard player, and annotate it in situ. And you should be able to use that clip in an annotation.

Until players enable selection, the first flavor — annotating on a segment — will require a separate tool. I’ve provided one implementation; there can be (perhaps already are?) others. However it’s captured, the selection will be represented as a media fragment link. Hypothesis doesn’t yet, but pretty easily could, support annotation of such links in a way that targets media fragments.

The second flavor — annotation with a segment — again requires a way to construct a media fragment link. With that in hand, you can just paste the link into the Hypothesis annotation editor. Links ending with .mp3#t=10,20 and links like youtube.com/watch?v=Avxm7JYjk8M&start=10&end=20 will become embedded players that start and end at the indicated times. Links like .mp4#t=10,20 and youtube.com/embed/Avxm7JYjk8M?start=10&end=20 don’t yet become embedded players but can.
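
A publishing system has to recognize those link shapes before it can swap in a player. Here’s a rough sketch of what that recognition might look like; the function and its rules are illustrative only, not the actual behavior of WordPress, Hypothesis, or YouTube.

// Illustrative sketch: classify a pasted link and extract start/end
// times so a publishing system could render an embedded player.
// Not the actual logic of any particular platform.
function parseClipLink(link) {
  const url = new URL(link);
  // Media fragment style: ...mp3#t=10,20 or ...mp4#t=10,20
  const frag = url.hash.match(/^#t=([\d.]+),([\d.]+)$/);
  if (frag) {
    return { kind: 'media-fragment', start: Number(frag[1]), end: Number(frag[2]) };
  }
  // YouTube style: watch?v=...&start=10&end=20 or embed/...?start=10&end=20
  const start = url.searchParams.get('start');
  const end = url.searchParams.get('end');
  if (url.hostname.endsWith('youtube.com') && start && end) {
    return { kind: 'youtube', start: Number(start), end: Number(end) };
  }
  return { kind: 'plain-link' };
}

parseClipLink('https://www.youtube.com/watch?v=Avxm7JYjk8M&start=10&end=20');
// -> { kind: 'youtube', start: 10, end: 20 }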

The ideal implementation of open web annotation for audio and video will have to wait for a next generation of standard media players. But you can use Hypothesis today to annotate on as well as with media.

How to improve Wikipedia citations with Hypothesis direct links

Wikipedia aims to be verifiable. Every statement of fact should be supported by a reliable source that the reader can check. Citations in Wikipedia typically refer to online documents accessible at URLs. But with the advent of standard web annotation we can do better. We can add citations that refer precisely to the specific statements that support Wikipedia articles.

According to Wikipedia’s policy on citing sources:

Wikipedia’s Verifiability policy requires inline citations for any material challenged or likely to be challenged, and for all quotations, anywhere in article space.

Last night, reading https://en.wikipedia.org/wiki/Tubbs_Fire, I noticed this unsourced quote:

Sonoma County has four “historic wildfire corridors,” including the Hanly Fire area.

I searched for the source of that quotation, found it in a Press Democrat story, annotated the quote, and captured a Hypothesis direct link to the annotation. In this screenshot, I’ve clicked the annotation’s share icon, and then clicked the clipboard icon to copy the direct link to the clipboard. The direct link encapsulates the URL of the story, plus the information needed to locate the quotation within the story.

Given such a direct link, it’s straightforward to use it in a Wikipedia citation. Back in the Wikipedia page I clicked the Edit link, switched to the visual editor, set my cursor at the end of the unsourced quote, and clicked the visual editor’s Cite button to invoke this panel:

There I selected the news template, and filled in the form in the usual way, providing the title of the news story, its date, its author, the name of the publication, and the date on which I accessed the story. There was just one crucial difference. Instead of using the Press Democrat URL, I used the Hypothesis direct link.

And voilà! There’s my citation, number 69, nestled among all the others.

Citation, as we’ve known it, begs to be reinvented in the era of standard web annotation. When I point you to a document in support of a claim, I’m often thinking of a particular statement in that document. But the burden is on you to find that statement in the document to which my citation links. And when you do, you may not be certain you’ve found the statement implied by my link. When I use a direct link, I relieve you of that burden and uncertainty. You land in the cited document at the right place, with the supporting statement highlighted. And if it’s helpful we can discuss the supporting statement in that context.

I can envision all sorts of ways to turbocharge Wikipedia’s workflow with annotation-powered tools. But no extra tooling is required to use Hypothesis and Wikipedia in the way I’ve shown here. If you find an unsourced quote in Wikipedia, just annotate it in its source context, capture the direct link, and use it in the regular citation workflow. For a reader who clicks through Wikipedia citations to check original sources, this method yields a nice improvement over the status quo.

Annotating Web Audio

On a recent walk I listened to Unmasking Misogyny on Radio Open Source. One of the guests, Danielle McGuire, told the story of Rosa Parks’ activism in a way I hadn’t heard before. I wanted to capture that segment of the show, save a link to it, and bookmark the link for future reference.

If you visit the show page and click the download link, you’ll load the show’s MP3 file into your browser’s audio player. Nowadays that’s almost always going to be the basic HTML5 player. Here’s what it looks like in various browsers:

The show is about an hour long. I scrubbed along the timeline until I heard Danielle McGuire’s voice, and then zeroed in on the start and end of the segment I wanted to capture. It starts at 18:14 and ends at 21:11. Now, how to link to that segment?

I first investigated this problem in 2004. Back then, I learned that it’s possible to fetch and play random parts of MP3 files, and I made a web app that would figure out, given start and stop times like 18:14 and 21:11, which part of the MP3 file to fetch and play. Audio players weren’t (and still aren’t) optimized for capturing segments as pairs of minute:second parameters. But once you acquired those parameters, you could form a link that would invoke the web app and play the segment. Such links could then be curated, something I often did using the del.icio.us bookmarking app.

Revisiting those bookmarks now, I’m reminded that Doug Kaye and I were traveling the same path. At ITConversations he had implemented a clipping service that produced URLs like this:

http://www.itconversations.com/clip.php?showid=378&start=4:37&stop=6:04

Mine looked like this:

http://udell.infoworld.com/?url=http://www.itconversations.com/audio/download/ITConversations-378.mp3&beg=4:37&end=6:04

Both of those audio-clipping services are long gone. But the audio files survive, thanks to the exemplary cooperation between ITConversations and the Internet Archive. So now I can resurrect that ITConversations clip — in which Doug Engelbart, at the Accelerating Change conference in 2004, describes the epiphany that inspired his lifelong quest — like so:

http://jonudell.net/av/audio.html?url=http://jonudell.net/audio/ITC.AC2004-DougEngelbart-2004.11.07.mp3&startmin=4&startsec=37&endmin=6&endsec=4

And here’s the segment of Danielle McGuire’s discussion of Rosa Parks that I wanted to remember:

http://jonudell.net/av/audio.html?url=http://ia601506.us.archive.org/5/items/171207OSPODCASTSexualHarassment/171207-OS-PODCAST-SexualHarassment.mp3&startmin=18&startsec=14&endmin=21&endsec=11

This single-page JavaScript app aims to function both as a player of predefined segments, and as a tool that makes it as easy as possible to define segments. It’s still a work in progress, but I’m able to use it effectively even as I continue to refine the interaction details.
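
To show how little machinery is involved, here’s a simplified sketch of what a page like audio.html can do with those query parameters. The parameter names match the URLs above; the code itself is illustrative, not the app’s actual source.

// Simplified sketch, not the app's actual source: read the query
// parameters, compute start/end in seconds, and point a standard
// HTML5 player at the corresponding media fragment.
const params = new URLSearchParams(window.location.search);

const mediaUrl = params.get('url');
const start = Number(params.get('startmin')) * 60 + Number(params.get('startsec'));
const end   = Number(params.get('endmin'))   * 60 + Number(params.get('endsec'));

const playerLink = `${mediaUrl}#t=${start},${end}`;  // the synced "player link"

const player = document.createElement('audio');
player.controls = true;
player.src = playerLink;
document.body.appendChild(player);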

For curation of these clips I am, of course, using Hypothesis. Here are some of the clips I’ve collected on the tag AnnotatingAV:

To create these annotations I’m using Hypothesis page notes. An annotation of this type is like a del.icio.us or pinboard.in bookmark. It refers to the whole resource addressed by a URL, rather than to a segment of interest within a resource.

Most often, a Hypothesis user defines a segment of interest by selecting a passage of text in a web document. But if you’re not annotating any particular selection, you can use a page note to comment on, tag, and discuss the whole document.

Since the clipping tool represents each audio clip as a standalone web page with a unique URL, you can use a Hypothesis page note to annotate that page:

It’s a beautiful example of small pieces loosely joined. My clipping tool is just one way to form URLs that point to audio and video segments. I hope others will improve on it. But any clipping tool that produces unique URLs can work with Hypothesis and, of course, with any other annotation or curation tool that targets URLs.

Syndicating annotations

Steel Wagstaff asks:

Immediate issue: we’ve got books on our dev server w/ annotations & want to move them intact to our production instance. The broader use case: I publish an open Pressbook & users make public comments on it. Someone else wants to clone the book including comments. How?

There are currently three URL-independent identifiers that can be used to coalesce annotations across instances of a web document published at different URLs. The first was the PDF fingerprint, the second was the DOI, and a third, introduced recently as part of Hypothesis’ EPUB support, uses Dublin Core metadata like so:

<meta name="dc.identifier" content="xchapter_001">
<meta name="dc.relation.ispartof" content="org.example.hypothesis.demo.epub-samples.moby-dick-basic">

If you dig into our EPUB.js and Readium examples, you’ll find those declarations are common to both instances of chapter 1 of Moby Dick. Here’s an annotation anchored to the opening line, Call me Ishmael. When the Hypothesis client loads, in a page served from either of the example URLs, it queries for two identifiers. One is the URL specific to each instance. The other is a URN formed from the common metadata, and it looks like this:

urn:x-dc:org.example.hypothesis.demo.epub-samples.moby-dick-basic/xchapter_001

When you annotate either copy, you associate its URL with this Uniform Resource Name (URN). You can search for annotations using either of the URLs, or just the URN, like so:

https://hypothes.is/search?q=url:urn:x-dc:org.example.hypothesis.demo.epub-samples.moby-dick-basic/xchapter_001
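
Given the two meta tags shown above, forming the URN is mostly string concatenation. Here’s a sketch of how a client could derive it; the helper is hypothetical, but the urn:x-dc: shape matches the example above.

// Sketch: derive the urn:x-dc: name from Dublin Core metadata.
// Hypothetical helper; the URN shape matches the example above.
function dublinCoreUrn(doc) {
  const get = (name) =>
    doc.querySelector(`meta[name="${name}"]`)?.getAttribute('content');
  const identifier = get('dc.identifier');       // "xchapter_001"
  const partOf = get('dc.relation.ispartof');    // "org.example.hypothesis.demo.epub-samples.moby-dick-basic"
  if (!identifier || !partOf) return null;
  return `urn:x-dc:${partOf}/${identifier}`;
}

dublinCoreUrn(document);
// -> "urn:x-dc:org.example.hypothesis.demo.epub-samples.moby-dick-basic/xchapter_001"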

Although it sprang to life to support ebooks, I think this mechanism will prove more broadly useful. Unlike PDF fingerprints and DOIs, which typically identify whole works, it can be used to name chapters and sections. At a conference last year we spoke with OER (open educational resource) publishers, including Pressbooks, about ways to coalesce annotations across their platforms. I’m not sure this approach is the final solution, but it’s usable now, and I hope pioneers like Steel Wagstaff will try it out and help us think through the implications.

Really, AT&T?

We woke up this morning in Santa Rosa to smoke and sirens. Last night’s winds fanned a bunch of wildfires in the North Bay, and parts of our town are destroyed. We’re a few miles south of the evacuation zone; things might shift around, but at the moment we’re staying put.

Information was hard to come by. Our Comcast service is down. Our AT&T phones were up and running though, so I turned on my mobile hotspot and read this: “Cannot turn on hotspot, please visit att.com or call 611.”

WTF?

Here’s what happened. Last month we started hitting data overage charges. That hadn’t been an issue before, but we dropped by the AT&T store to review our options. The sales rep pushed hard for an upgrade to an unlimited data plan for an extra $5/mo. Not really necessary, we’re almost always on WiFi, but OK, sure, five bucks, why not?

Turns out the rep neglected to mention that the upgrade removed our tethering capability. This is not something you want to find out while breathing smoke, hearing sirens, and trying to make sense of the latest evacuation map for your burning city. According to the rep we spoke with today, this critical fact about tethering is often omitted from the upsell pitch.

We’ve got it turned back on now. I’m awaiting a callback from a manager’s manager about the additional $30/mo they plan to charge to restore a capability they hadn’t told me they were taking away.

This is a small thing. We are, of course, infinitely luckier than a bunch of folks in our town who will return to the charred foundations of what were their homes. We’re mainly thinking about them today. But while we’re waiting for the ash to settle, I just want to say: Really, AT&T?

Welcome to the Sapiezoic

The latest Long Now podcast, by David Grinspoon, takes a very long view indeed. As we transition from the Holocene to the Anthropocene, he thinks, we’re not just entering a new geological epoch, as shown here:

That alone would be a big deal. But epochs are just geologic eyeblinks a few million years in duration. Grinspoon thinks we might be entering something way bigger. Not just a period or an era. We might happen to be alive now at an eon boundary, as shown here:

There have only been four eons so far. Each was a major transition in earth history — a shift in the relationship between life and the planet. Life first emerged at the beginning of the Archean eon around 4 billion years ago, when things cooled down enough. Around 2.5 billion years ago, cyanobacteria learned to photosynthesize. They bathed the world in oxygen, caused mass extinction, and deeply entangled life with the physical and chemical workings of the planet. At the boundary between the Proterozoic and Phanerozoic, around 500 million years ago, life went multicellular, plants and animals appeared, and the modern eon began.

Will our descendants look back on the Anthropocene as the dawn of a fifth eon? Grinspoon makes a compelling case that they might. The Archean cyanobacteria that poisoned the environment couldn’t know what they were doing. We can. Our infrastructure is taking over the planet as surely as the oxygenated atmosphere did. But it is, at least potentially, under our conscious control. The hallmark of the Sapiezoic eon he envisions: intentional re-terraforming.

Grinspoon cites one shining example: the Montreal Protocol that will, if we stay the course, reverse manmade ozone depletion. “We are as gods,” says Stewart Brand, “so we might as well get good at it.”

I’ve listened to all the Long Now podcasts, some more than once. This one rates very highly. It’s a great talk.

Fact-checking Naomi Klein’s “No Is Not Enough”

On a hike last week I heard an excellent episode of Radio Open Source, featuring Naomi Klein, David Graeber, and Pankaj Mishra. One segment that stuck with me is this remark by Naomi Klein:

We need to examine the way in which politics has been taken over by the logic of corporate branding, which is not something Trump started. Trump was just better at it than anybody else because he is himself a fully commercialized brand. So the table was set for Trump, he just showed up and said, “Well, I know this game better than you jokers, I’m the real thing, I’m a reality TV star and I’m a megabrand. Step aside!”

(If I could, I would link you directly to that segment in situ; that’s something I had working a long time ago. But since audio quotation still isn’t a ubiquitous feature of the web, here’s the compelling minute of audio that contains that quote.)

I was previously aware of Naomi Klein but had never heard her speak, had read none of her books, and was only slightly familiar with her critique of corporatized politics. Her conversation with Chris Lydon on that podcast prompted me to read her new book, No Is Not Enough, published just a few weeks ago.

I was also slightly familiar with criticism of Klein’s views. So, in a moment when the president of the United States had just tweeted a video of himself performing a mock attack during his time as a reality TV personality on the pro wrestling circuit, I was curious to know her thoughts but also prepared to take them with a grain of salt.

Here’s the book’s table of contents:

The first section elaborates on the above quote. Human megabrands are, Klein points out, a relatively new thing. She writes:

People keep asking — is he going to divest? Is he going to sell his businesses? Is Ivanka going to? But it’s not at all clear what these questions even mean, because their primary businesses are their names. You can’t disentangle Trump the man from Trump the brand; those two entities merged long ago. Every time he sets foot in one of his properties — a golf club, a hotel, a beach club — White House press corps in tow, he is increasing his overall brand value, which allows his company to sell more memberships, rent more rooms, and increase fees.

I hope we can agree across ideologies that this kind of thing is unhealthy. In the audio clip I cited above, Klein notes that the antidote is not a liberal megabrand, not Zuckerberg or Oprah. Conflation of brand power and political power is just a bad idea, and we need to reckon with that.

The rest of the book builds on arguments made in her earlier ones: Capitalism’s winners exploit natural and man-made crises to consolidate their winnings (The Shock Doctrine); climate change presents an existential challenge to that world order (This Changes Everything). Since I haven’t read those books, and have only just now read a few reviews pro and con, I lack the full context needed to evaluate the arguments in No Is Not Enough. But that’s exactly the right setup for the point about fact-checking that I want to make here.

How reliable is Naomi Klein on facts? I came to No Is Not Enough with no strong opinion one way or another. I raised an eyebrow, though, when I read this passage about Treasury secretary Steven Mnuchin:

Even among Goldman alumni, Steven Mnuchin has distinguished himself by his willingness to profit off misery. After the 2008 Wall Street collapse, and in the middle of the foreclosure crisis, Mnuchin purchased a California bank. The renamed company, OneWest, earned Mnuchin the nickname “Foreclosure King,” reportedly collecting $1.2 billion from the government to help cover the losses for foreclosed homes and evicting tens of thousands of people between 2009 and 2014. One attempted foreclosure involved a ninety-year-old woman who was behind on her payments by 27 cents.

The last sentence sent me to Google, where I quickly learned it had been debunked in a tweetstorm by Ted Frank in January 2017. He works for a libertarian think tank, and I doubt we’d see eye to eye on many issues, but his takedown of the 27-cent claim was accurate. Politico, for example, corrected its version of the story.

This is unfortunate because everything else in the above quote seems to check out. And you don’t have to be a liberal snowflake to worry legitimately about the Goldmanization of the US Cabinet.1

I went on to spot-check a number of other claims in No Is Not Enough and again, so far as I can tell with modest effort, everything checks out. So my conclusion is that Klein, who says she wrote this book quickly, to respond to the current moment, with less attention to endnotes than usual, is generally reliable on facts.

The way in which I reached that conclusion is a pretty good example of the strategies outlined in Web Literacy for Student Fact-Checkers, and a reminder that those methods aren’t just for students. All of us — me, you, Naomi Klein, everyone — need to build those muscles and exercise them regularly.


1. On another episode of Radio Open Source, in a remarkable dialogue between Pat Buchanan and Ralph Nader, the arch-conservative Buchanan aligned with the arch-liberal Nader on that point:

I agree completely with Ralph, I did not know we were going to make the world safe for Goldman Sachs, and I am a little surprised to find three or four or five of these guys, one or two might have been OK.