Componentware Revisited

I’m not a scholar, nor do I play one on TV, but when I search Google Scholar I find that I’m cited there a few times, most notably for a 1994 BYTE cover story, Componentware. The details there are at best of historical interest but the topic remains evergreen: How do we package software in ways that maximize its reusability while minimizing the level of skill required to achieve reuse?

By 1996 the web had booted up and I reprised the theme in On-Line Componentware¹. That’s when it dawned on me that the websites that people “surfed” to were also software components that could be woven together to meet a variety of needs. It was my first glimpse of what we later came to know as SOA (service-oriented architecture), then RESTful APIs, and most recently microservices. Ever since then, wearing one hat or another, I’ve been elaborating the theme of that column: “A powerful capability for ad hoc distributed computing arises naturally from the architecture of the Web.”

That architecture has in some ways remained the same, in other ways evolved dramatically, but its generative power continues to surprise and delight me. And I keep finding new ways to package and reuse web components.

Hypothesis has been a fascinating case study. Our web annotation system has two main components. The web service, written in Python, runs on a web server. The client, written in JavaScript, runs in your browser. Both are available for reuse in many different ways.

One way to reuse the web service is to embed views in web pages, as shown in this example from the Digital Polarization (Digipo) project:

The “Matching Annotations” widget embedded in that page is just this search result wrapped in an iframe. This is one of the most common and powerful ways to reuse web components.
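
To make the pattern concrete, here’s a minimal sketch of such a widget: a Hypothesis search view wrapped in an iframe and attached to the host page. The container id is hypothetical, and the search URL pattern is my assumption about how the hypothes.is search page accepts a tag query; the tag is the one used in the Digipo example below.

    // Embed a Hypothesis search view in a host page via an iframe.
    // The container id is hypothetical; the search URL pattern is an
    // assumption based on the hypothes.is search page.
    const tag = 'digipo:analysis:gulf_of_frackwater';
    const iframe = document.createElement('iframe');
    iframe.src = 'https://hypothes.is/search?q=' +
      encodeURIComponent('tag:' + tag);
    iframe.width = '100%';
    iframe.height = '400';
    document.getElementById('matching-annotations').appendChild(iframe);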

The Hypothesis API affords another way to reuse its server component. The Timeline widget, embedded on that same page, works that way. It searches Hypothesis for the URLs of annotations tagged with the id of the current wiki page. Then it searches the annotations on each of those URLs for another user-assigned tag that signifies the publication date, and arranges those results chronologically. (The Timeline widget could have been written in PHP to run in the wiki server, but I’m more familiar with JavaScript so instead it’s written in JS and runs in the browser.)
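
Here’s a rough sketch of that two-step search, using the public Hypothesis search API (GET https://hypothes.is/api/search) and the googledate:YYYY-MM-DD tag convention described in the toolkit post below. The function and variable names are mine, not the widget’s.

    // Build a timeline: find annotations tagged with the wiki page id,
    // then look on each annotated URL for a publication-date tag.
    async function buildTimeline(wikiPageId) {
      const api = 'https://hypothes.is/api/search';
      const r1 = await fetch(api + '?tag=' + encodeURIComponent(wikiPageId));
      const tagged = (await r1.json()).rows;
      const urls = [...new Set(tagged.map(a => a.uri))];
      const dated = [];
      for (const uri of urls) {
        const r2 = await fetch(api + '?uri=' + encodeURIComponent(uri));
        for (const ann of (await r2.json()).rows) {
          const t = (ann.tags || []).find(x => x.startsWith('googledate:'));
          if (t) dated.push({ uri, date: t.slice('googledate:'.length) });
        }
      }
      // ISO dates sort correctly as strings.
      return dated.sort((a, b) => a.date.localeCompare(b.date));
    }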

The Hypothesis client can also be reused in powerful ways. Most notably, you can add the client to a website by including this simple script tag in the site’s main template:

    <script src="https://hypothes.is/embed.js" async></script>

Or you can use the Hypothesis proxy, https://via.hypothes.is/, to inject the client into a web page, for example: https://via.hypothes.is/https://en.wikipedia.org/wiki/Proxy_server.

When you use Hypothesis to annotate a PDF file, it relies on a separate component, Mozilla’s PDF.js, to parse the PDF and render it in the browser so the Hypothesis client can operate on it. PDF.js is built into Firefox; the Hypothesis Chrome extension injects it when you annotate a PDF in that browser.

Another Hypothesis component, pdf.js-hypothesis, enables a web server to serve a PDF with PDF.js and Hypothesis both active. That makes PDF annotation available in any browser. We use it in our prototype Canvas app, for example, to serve annotation-enabled PDFs in the Canvas learning management system (LMS).

Still another component enables custom rendering of annotations. You can see it in action at Science in the Classroom, a collection of research papers annotated to serve as teaching materials.

Graduate students use Hypothesis to create the annotations. But Science in the Classroom prefers to display them using its own mechanism, Learning Lens. So when the page loads, it fetches annotations using the Hypothesis API and then paints them on the page using a component that’s part of the Hypothesis client but is also available as the standalone NPM module dom-anchor-text-quote.
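
Here’s a minimal sketch of that custom-rendering pattern. The toRange function is part of dom-anchor-text-quote’s actual API; the highlight styling, and the use of surroundContents, are simplifications of my own.

    // Fetch annotations for a document and re-anchor each quote in the
    // live page, painting it with the site's own highlight element.
    import { toRange } from 'dom-anchor-text-quote';

    async function paintAnnotations(docUri) {
      const res = await fetch('https://hypothes.is/api/search?uri=' +
        encodeURIComponent(docUri));
      for (const ann of (await res.json()).rows) {
        const selectors = (ann.target && ann.target[0].selector) || [];
        const quote = selectors.find(s => s.type === 'TextQuoteSelector');
        if (!quote) continue;          // skip page notes
        const range = toRange(document.body, quote);
        if (!range) continue;          // quote no longer anchors
        const mark = document.createElement('mark');
        mark.className = 'learning-lens';
        range.surroundContents(mark);  // fragile across element
                                       // boundaries; real code does more
      }
    }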

I am deliberately blurring the definition of web component because I think it properly encompasses many different things: a web page embedded in an iframe; an API-accessible web service; a rich client application like Hypothesis (or a simple widget like the Timeline) embedded in a web page; a standalone module like dom-anchor-text-quote; a repackaging of Hypothesis as a WordPress plugin or a Canvas external tool.

This is a rich assortment of ingredients! But there’s one that’s notably absent. We’ve seen lots of ways to use the Hypothesis client as a component that plugs into other environments and makes annotation available there. But what if you want to plug something into the Hypothesis client? There isn’t yet a mechanism for that. The code is open source and can be modified, as Marija Katic and Martin Eve have done with Annotran, a translation tool based on Hypothesis. That’s a great example of code reuse. But it isn’t, at least to my way of thinking, an example of component reuse. Although I recognize many different species of software components, they all share one piece of common DNA: reuse without internal modification.

In an essay on what I learned while building the Canvas app, I noted two critical aspects of the healthy ecosystem that Canvas and other learning management systems inhabit:

1. Standard protocols. In the LMS world, Learning Tools Interoperability (LTI) defines those protocols.

2. Frictionless component reuse. This flows from item 1. An LTI app expects to be launched from an LMS and to run embedded in an iframe there. Again, this is the most common and powerful way to reuse web components.

The question I asked there, and tried to answer: Could an iframe embed web components within a rich web client like Hypothesis? If so, that might open the way for features not yet in the Hypothesis core, like controlled tagging, that would otherwise require deep surgery on the Hypothesis client, and intimate knowledge of its JavaScript framework (Angular) and the nonstandard component model dictated by that framework.

I had already tried a couple of experiments to add controlled tagging to the Hypothesis client. In this one, the tag suggestions offered in the tag editor are bound to Hypothesis groups. In this one, tag suggestions are bound to an external web service. Both experiments entailed nontrivial alteration of the Hypothesis client.

In a third experiment, I modified the Hypothesis client in a way that could enable a family of components to plug into it. This customized client embedded an iframe in the annotation editor, and launched a user-defined web application into that iframe, passing it one parameter: the id of the annotation open in the editor. Because the embedded app was configured with the credentials of a Hypothesis user, it could work as a pluggable component that communicates with the active annotation and also with the full panoply of web resources. You could, perhaps, think of it as an annotation applet. Here’s a demo.
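
Sketching the applet side of that experiment: the code below reads the annotation id passed by the customized client, then fetches the annotation through the Hypothesis API (GET /api/annotations/:id is a real endpoint). The query parameter name and token handling are hypothetical.

    // An "annotation applet": fetch the annotation whose id was passed
    // into the iframe, using the Hypothesis credentials the applet was
    // configured with.
    const token = 'YOUR-HYPOTHESIS-API-TOKEN';  // stand-in credential
    const id = new URLSearchParams(location.search).get('annotation');
    fetch('https://hypothes.is/api/annotations/' + id, {
      headers: { Authorization: 'Bearer ' + token }
    })
      .then(r => r.json())
      .then(ann => {
        // From here the applet can read or update the annotation, and
        // call on the full panoply of web resources.
        console.log(ann.uri, ann.tags, ann.text);
      });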

This approach was intriguing and might serve some useful purposes, but an iframe is an ugly and awkward construct to stick into the middle of a richly designed web client. And this approach again fails my definition of component reuse because it requires internal modification of the client.

So as I began working to integrate Hypothesis into Digipo I was still looking for a way to control Hypothesis tags without modifying the Hypothesis client. As described in A toolkit for fact checkers, we initially used bookmarklets to do that, then began developing a Chrome extension for the Digipo project.

The Chrome extension immediately solved a couple of vexing problems. It enabled us to cleanly package a growing set of Digipo tools, by making them conveniently right-click-accessible. And it got around the security constraints that increasingly make bookmarklets untenable.

Just as importantly it enabled us to blend together a Digipo-specific set of tools, some but not all of which are Hypothesis-powered. For a Digipo fact checker, Hypothesis isn’t a primary part of the experience. It’s a supporting component that’s brought into the process as and where needed. It’s infrastructure.

The Digipo workflow relies on controlled tagging to accumulate evidence into several buckets associated with each investigation. When you’re on a page that you want to put into a bucket, you can use Digipo’s Tag this Page helper to create a Hypothesis page note with the tag for that investigation. It starts here:

That leads to a page that lists the Digipo investigations.

When you choose one, the extension uses the Hypothesis API to create a page note with the investigation’s tag.
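
A minimal sketch of that API call, assuming an API token stored by the extension. A page note is just an annotation whose target carries no selectors.

    // Create a Hypothesis page note carrying an investigation's tag.
    async function tagThisPage(pageUrl, investigationTag, token) {
      const res = await fetch('https://hypothes.is/api/annotations', {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer ' + token,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          uri: pageUrl,
          tags: [investigationTag],
          text: '',
          target: [{ source: pageUrl }]  // no selector: a page note
        })
      });
      return res.json();  // includes the new annotation's id
    }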

Thanks to Hypothesis direct linking, the interaction flows seamlessly from the Digipo extension to Hypothesis. You land in the annotation editor, where you can do more with Hypothesis: add comments and new tags, or discuss the target document with other Hypothesis users.

But this arrangement only creates Hypothesis page notes: annotations that refer to a target document but not to a selection within that document. More powerful uses of Hypothesis flow from selections within target documents. Could a selection-based annotation begin in the Digipo extension, acquire a tag, and then flow through to Hypothesis?

Happily the answer is yes. You can see that here.

The Digipo Chrome extension presents one set of helpers when you right-click on a page with nothing selected. Some of the helpers rely on Hypothesis, others just automate parts of the Digipo workflow — for example, launching advanced Google searches. When you right-click with a selection active, the Digipo Chrome extension presents another set of helpers which, again, may or may not rely on Hypothesis. One of them, Tag this Selection, works like Tag this Page in that it uses the Hypothesis API to create an annotation that includes a controlled tag. But Tag this Selection does a bit more work. It sends not only the URL of the target document, but also a Text Quote Selector that anchors the annotation within the document. In this case, too, the interaction then flows seamlessly into Hypothesis where you can edit the newly-created annotation and perhaps discuss the selected passage.
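
Here’s a sketch of that extra work, using the W3C-style TextQuoteSelector that Hypothesis annotations carry. Capturing the prefix and suffix context that makes anchoring robust is elided for brevity.

    // Create an annotation anchored to the current selection.
    async function tagThisSelection(pageUrl, tag, token) {
      const exact = window.getSelection().toString();
      const res = await fetch('https://hypothes.is/api/annotations', {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer ' + token,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          uri: pageUrl,
          tags: [tag],
          target: [{
            source: pageUrl,
            selector: [{ type: 'TextQuoteSelector', exact }]
            // real code would also send prefix and suffix context
          }]
        })
      });
      return res.json();
    }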

You can see more of the interplay between the Digipo and Hypothesis extensions in this screencast. I’m pretty excited by how this is turning out. The Digipo extension is Chrome-only for now, as is the Hypothesis extension, but WebExtensions should soon enable broader coverage. There’s still a need to plug packaged behavior directly into the Hypothesis client. But much can be accomplished with an extension that cooperates with Hypothesis using its existing set of affordances. The Digipo extension is one example. I can imagine many others, and I’m expanding my definition of componentware to include them.


¹ I love how our copy editor insisted on hyphenating On-Line!

A toolkit for fact checkers

Update: See this post (with screencasts!)

Mike Caulfield’s Digital Polarization Initiative (DigiPo) is a template for a course that will lead students through exercises to analyze and fact-check news stories. The pedagogical approach Mike describes here is evolving; in parallel I’ve been evolving a toolkit to help students research and organize the raw materials of the analyses they’ll be asked to produce. Annotation is a key component of the toolkit. I’ve been working to integrate it into the fact-checking workflow in ways that complement the use of other tools.

We’re not done yet but I’m pleased with the results so far. This post is an interim report on what we’ve learned about building an annotation-powered toolkit for fact checkers.

Here’s an example of a DigiPo claim to be investigated:

EPA Plans to Allow Unlimited Dumping of Fracking Wastewater in the Gulf of Mexico (see Occupy)

I start with no a priori knowledge of EPA rules governing release of fracking wastewater, and only a passing acquaintance with the cited source, occupy.com. So the first order of business is to marshal some evidence. Hypothesis is ideal for this purpose. It creates links that encapsulate both the URL of a page containing found evidence, and the evidence itself — that is, a quote selected in the page.

There’s a dedicated page for each DigiPo investigation. It’s a wiki, so you can manually include Hypothesis links as you create them. But fact-checking is tedious work, and students will benefit from any automation that helps them focus on the analysis.

The first step was to include Hypothesis as a widget that displays annotations matching the wiki id of the page. Here’s a standalone Hypothesis view that gathers all the evidence I’ve tagged with digipo:analysis:gulf_of_frackwater. From there it was an easy next step to tweak the wiki template so it embeds that view directly in the page:

That’s really helpful, but it still requires students to acquire and use the correct tag in order to populate the widget. We can do better than that, and I’ll show how later, but here’s the next thing that happened: the timeline.

While working through a different fact-checking exercise, I found myself arranging a subset of the tagged annotations in chronological order. Again that’s a thing you can do manually; again it’s tedious; again we can automate with a bit of tag discipline and some tooling.

If you do much online research, you’ll know that it’s often hard to find the publication date of a web page. It might or might not be encoded in the URL. It might or might not appear somewhere in the text of the page. If it does, there’s no predictable location or format. You can, however, ask Google to report the date on which it first indexed a page, and that turns out to be a pretty good proxy for the publication date.

So I made another bookmarklet to encapsulate that query. If you were to activate it on one of my posts it would lead you to this page:

I wrote the post on Oct 30; Google indexed it on Oct 31; that’s close enough for our purposes.
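
For the curious, the bookmarklet can be as simple as this. The inurl: query combined with Google’s as_qdr parameter (which makes Google display dates next to results) is my assumption about the recipe.

    // Ask Google when it first indexed the current page. The inurl: query
    // plus as_qdr=y15 (show dates, past 15 years) is an assumed recipe.
    javascript:(function () {
      var q = 'inurl:' + encodeURIComponent(location.href);
      location.href = 'https://www.google.com/search?q=' + q + '&as_qdr=y15';
    })();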

I made another bookmarklet to capture that date and add it, as a Hypothesis annotation, to the target page.

With these tools in hand, we can expand the widget to include:

  • Timeline. Annotations on the target page with a googledate tag, in chronological order.

  • Related Annotations. Annotations on the target page with a tag matching the id of the wiki page.

You can see a Related Annotations view above; here’s a Timeline:

So far, so good, but as Mike rightly pointed out, this motley assortment of bookmarklets spelled trouble. We wouldn’t want students to have to install them, and in any case bookmarklets are increasingly unlikely to work. So I transplanted them into a Chrome extension. It presents the growing set of tools in our fact-checking toolkit as right-click options on Chrome’s context menu:

It also affords a nice way to stash your Hypothesis credentials, so the tools can save annotations on your behalf:

(The DigiPo extension is Chrome-only for now, as is the Hypothesis extension, but WebExtensions should soon enable broader coverage.)

With the bookmarklets now wrapped in an extension we returned to the problem of simplifying the use of tags corresponding to wiki investigation pages. Hypothesis tags are freeform. Ideally you’d be able to configure the tag editor to present controlled lists of tags in various contexts, but that isn’t yet a feature of Hypothesis.

We can, though, use the Digipo extension to add a controlled-tagging feature to the fact-checking toolkit. The Tag this Page tool does that:

You activate the tool from a page that has evidence related to a DigiPo investigation. It reads the DigiPo page that lists investigations, captures the wiki ids of those pages, and presents them in a picklist. When you choose the investigation to which the current page applies, the current page is annotated with the investigation’s wiki id and will then show up in the Related Annotations bucket on the investigation page.

While I was doing all this I committed an ironic faux pas on Facebook and shared this article. Crazy, right? I’m literally in the middle of building tools to help people evaluate stuff like this, and yet I share without checking. Why did I not take the few seconds required to vet the source, bipartisanreport.com?

When I made myself do that I realized that what should have taken a few seconds took longer. There’s a particular Google advanced query syntax you need in this situation. You are looking for the character string “bipartisanreport.com” but you want to exclude the majority of self-referential pages. You only want to know what other sites say about this one. The query goes like this:

bipartisanreport.com -site:bipartisanreport.com

Just knowing the recipe isn’t enough. Using it needs to be second nature and, even for me, it clearly wasn’t. So now there’s Google this Site:

Which produces this:

It’s ridiculously simple and powerful. I can see at a glance that bipartisanreport.com shows up on a couple of lists of questionable sites. What does the web think about the sites that host those lists? I can repeat Google this Site to zoom in on them.
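
Under the hood, the helper only needs to build that self-excluding query for the current site and open it. A sketch, with illustrative names:

    // Ask the web what it says about this site, excluding the site itself.
    function googleThisSite() {
      const host = location.hostname.replace(/^www\./, '');
      const q = host + ' -site:' + host;
      window.open('https://www.google.com/search?q=' + encodeURIComponent(q));
    }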

Another tool in the kit, Save Facebook Share Count, supports the sort of analysis that Mike did in a post entitled Despite Zuckerberg’s Protests, Fake News Does Better on Facebook Than Real News. Here’s Data to Prove It.

How, for example, has this questionable claim propagated on Facebook? There’s a breadcrumb trail in the annotation layer. On Dec 26 I used Save Publication Date to assign the tag googledate:2016-08-31, and on the same day I used Save Facebook Share Count to record the number of shares reported by the Facebook API. On Dec 30 I again used Save Facebook Share Count. Now we can see that the article is past its sell-by date on Facebook and never was highly influential.
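
A sketch of how such a tool might work, assuming the circa-2016 Facebook Graph API shape (an unauthenticated GET returning share.share_count, which has since changed) and a hypothetical fbshares: tag convention; treat both as assumptions.

    // Record a page's Facebook share count as a Hypothesis annotation.
    async function saveFacebookShareCount(pageUrl, token) {
      const res = await fetch('https://graph.facebook.com/?id=' +
        encodeURIComponent(pageUrl));
      const count = (await res.json()).share.share_count;  // assumed shape
      return fetch('https://hypothes.is/api/annotations', {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer ' + token,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          uri: pageUrl,
          tags: ['fbshares:' + count],   // hypothetical tag convention
          target: [{ source: pageUrl }]
        })
      });
    }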

Finally there’s Summarize Quotes, which arose from an experiment of Mike’s to fact-check a single article exhaustively. Here’s the article he picked, along with the annotation layer he created:

Some of the annotations contain Hypothesis direct links to related annotations. If you open this annotation in the Politico article, for example, you can follow Hypothesis links to related annotations on pages at USA Today and Science.

These transitive annotations are potent but it gets to be a lot of clicking around. So the most experimental of the tools in the kit, Summarize Quotes, produces a page like this:

This approach doesn’t feel quite right yet, but I suspect there’s something there. Using these tools you can gather a lot of evidence pretty quickly and easily. It then needs to be summarized effectively so students can reason about the evidence and produce quality analysis. The toolkit embodies a few ways to do that summarization; I’m sure more will emerge.

Marshalling the evidence

In Bird-dogging the web I responded to questions raised by Mike Caulfield about how annotation can help us fact-check the web. He’s now written a definition of bird-dogging, the political technique we discussed in those posts. It’s a method of recording candidates’ positions on issues, but it has recently been mischaracterized as a way to incite violence. I’ve annotated a batch of articles that conflate bird-dogging with violence:

source: https://hypothes.is/api/search?tags=bird-dogging&user=judell

Each annotation links to Mike’s definition. Collectively they form a data set that can be used to trace the provenance of the bird-dogging = violence meme. A digital humanist could write an interesting paper on how the meme flows through a network of sources, and how it morphs along the way. But how will such evidence ever make a difference?

In Annotating the wild west of information flow I sketched an idea that weaves together annotation, a proposed standard for fact-checking called ClaimReview, and Google’s plan to use that standard to add Fact Check labels to news articles. These ingredients are necessary but not sufficient. The key missing ingredient? President Obama nailed it in his remarks at the White House Frontiers Conference: “We’re going to have to rebuild, within this wild west of information flow, some sort of curating function that people agree to.”

It can sometimes seem, in this polarized era, that we can agree on nothing. But we do agree, at least tacitly, on the science behind the technologies that sustain our civilization: energy, agriculture, medicine, construction, communication, transportation. When evidence proves that cigarettes can cause lung cancer, or that buildings in some places need to be earthquake-resistant, most of us accept it. Can we learn to honor evidence about more controversial issues? If that’s possible, annotation’s role will be to help us marshal that evidence.

Bird-dogging the web

In Annotating the wild west of information flow I responded to President Obama’s appeal for “some sort of curating function that people agree to” with a Hypothes.is thought experiment. What if an annotation tool could make claims about the veracity of statements on the web, and record those claims in a standard machine-readable format such as ClaimReview? The example I gave there: a climate scientist can verify or refute an assertion about climate change in a newspaper article.
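
For concreteness, here’s what such a machine-readable claim might look like, expressed with schema.org’s ClaimReview vocabulary (shown as a JavaScript object; the specific values are invented for illustration).

    // A ClaimReview record a fact-checking annotation could carry.
    const claimReview = {
      '@context': 'https://schema.org',
      '@type': 'ClaimReview',
      claimReviewed: 'A disputed assertion about climate change',
      itemReviewed: {
        '@type': 'CreativeWork',
        url: 'https://example.com/newspaper-article'  // hypothetical URL
      },
      author: { '@type': 'Person', name: 'A climate scientist' },
      reviewRating: {
        '@type': 'Rating',
        ratingValue: 1,    // on this scale, 1 means false...
        bestRating: 5,     // ...and 5 means true
        alternateName: 'False'
      }
    };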

Today Mike Caulfield writes about another kind of fact-checking. At http://www.mostdamagingwikileaks.com/ he found this claim:

“Bird-dogging is a term coined by high-level Clinton staffers who openly talk about it in the video. They boast about inciting violence at Trump rallies, paying for every protest…”

Mike knows better.

Wait, what? Bird-dogging is about violence?

I was a bird-dogger for some events in 2008 and got to know a bunch of bird-doggers in my work as a blogger. Clinton didn’t invent the term and it has nothing to do with violence.

So he annotates the statement. But he’s not just refuting a claim, he’s explaining what bird-dogging really means: you follow candidates around and film their responses to questions about your issues.

Now Mike realizes that he can’t find an authoritative definition of that practice. So, being an expert on the subject, he writes one. Which prompts this question:

Why the heck am I going to write a comment that is only visible from this one page? There are hundreds (maybe thousands) of pages on the internet making use of the fact that there is no clear explanation of this on the web.

Mike’s annotation does two things at once. It refutes a claim about bird-dogging on one specific page. That’s the sweet spot for annotation. His note also provides a reusable definition of bird-dogging that ought to be discoverable in other contexts. Here there’s nothing special about a Hypothes.is note versus a wiki page, a blog post, or any other chunk of URL-addressable content. An authoritative definition of bird-dogging could exist in any of these forms. The challenge, as Mike suggests, is to link that definition to many relevant contexts in a discoverable way.

The mechanism I sketched in Annotating the wild west of information flow lays part of the necessary foundation. Mike could write his authoritative definition, post it to his wiki, and then use Hypothes.is to link it, by way of ClaimReview-enhanced annotations, to many misleading statements about bird-dogging around the web. So far, so good. But how will readers discover those annotations?

Suppose Mike belongs to a team of political bloggers who aggregate claims they collectively make about statements on the web. Each claim links to a Hypothes.is annotation that locates the statement in its original context and to an authoritative definition that lives at some other URL.

Suppose also that Google News regards Mike’s team as a credible source of machine-readable claims for which it will surface the Fact Check label. Now we’re getting somewhere. Annotation alone doesn’t solve Mike’s problem, but it’s a key ingredient of the solution I’m describing.

If we ever get that far, of course, we’ll run into an even more difficult problem. In an era of media fragmentation, who will ever subscribe to sources that present Fact Check labels in conflict with beliefs? But given the current state of affairs, I guess that would be a good problem to have.

Reading and writing for our peers

The story Jan Dawson tells in The De-Democratization of Online Publishing is familiar to me. Like him, I was thrilled to be part of the birth of personal publishing in the mid-1990s. By 2001 my RSS feedreader was delivering a healthy mix of professional and amateur sources. Through the lens of my RSS reader, stories in the New York Times were no more or less important than blog posts from my peers in the tech blogosphere. And because RSS was such a simple format, there was no technical barrier to entry. It was a golden era of media democratization not seen before or since.

As Dawson rightly points out, new formats from Google (Accelerated Mobile Pages) and Facebook (Instant Articles) are “de-democratizing” online publishing by upping the ante. These new formats require skills and tooling not readily available to amateurs. That means, he says, that “we’re effectively turning back the clock to a pre-web world in which the only publishers that mattered were large publishers and it was all but impossible to be read if you didn’t work for one of them.”

Let’s unpack that. When I worked for a commercial publisher in 2003, my charter was to bring its audience to the web and establish blogging as a new way to engage with that audience. But my situation was atypical. Most of the bloggers I read weren’t, like me, working for employers in the business of manufacturing audiences. They were narrating their work and conserving keystrokes. Were they impossible to read? On the contrary, if you shared enough interests in common it was impossible not to read them.

When publishers manufactured audiences and connected advertisers to them, you were unlikely to be read widely unless you worked for one. Those odds don’t change when Google and Facebook become the publishers; only the gatekeepers do. But when publishing is personal and social, that doesn’t matter.

One of the bloggers I met long ago, Lucas Gonze, is a programmer and a musician who curates and performs 19th-century parlour music. He reminded me that before the advent of recording and mass distribution, music wasn’t performed by a small class of professionals for large audiences. People gathered around the piano in the parlour to play and sing.

Personal online publishing once felt like that. I don’t know if it will again, but the barrier isn’t technical. The tools invented then still exist and they work just fine. The only question is whether we’ll rekindle our enthusiasm for reading and writing for our peers.

From PDF to PWP: A vision for compound web documents

I’ve been in the web publishing game since it began, and for all this time I’ve struggled to make peace with the refusal of the Portable Document Format (PDF) to wither and die. Why, in a world of born-digital documents mostly created and displayed on computers and rarely printed, would we cling to a format designed to emulate sheets of paper bound into books?

For those of us who labor to extract and repurpose the contents of PDF files, it’s a nightmare. You can get the text out of a PDF file but you can’t easily reconstruct the linear stream that went in. That problem is worse for tabular data. For web publishers, it’s a best practice to separate content assets (text, lists, tables, images) from presentation (typography, layout) so the assets can be recombined for different purposes and reused in a range of formats: print, screens of all sizes. PDF authoring tools could, in theory, enable some of that separation, but in practice they don’t. Even if they did, it probably wouldn’t matter much.

Consider a Word document. Here the tools for achieving separation are readily available. If you want to set the size of a heading you don’t have to do it concretely, by setting it directly. Instead you can do it abstractly, by defining a class of heading, setting properties on the class, and assigning the class to your heading. This makes perfect sense to programmers and zero sense to almost everyone else. Templates help. But when people need to color outside the lines, it’s most natural to do so concretely (by adjusting individual elements) not abstractly (by defining and using classes).

It is arguably a failure of software design that our writing tools don’t notice repetition of concrete patterns and guide us to corresponding abstractions. That’s true for pre-web tools like Word. It’s equally true for web tools — like Google Docs — that ape their ancestors. Let’s play this idea out. What if, under the covers, the tools made a clean separation of layout and typography (defined in a style sheet) from text, images, and data (stored in a repository)? Great! Now you can restyle your document, and print it or display it on any device. And you can share with others who work with you on any of their devices.

What does sharing mean, though? It gets complicated. The statements “I’ll send you the document” or “I’ll share the document with you” can sometimes mean: “Here is a link to the document.” But they can also mean: “Here is a copy of the document.” The former is cognitively unnatural for the same reason that defining abstract styles is. We tend to think concretely. We want to manipulate things in the digital world directly. Although we’re learning to appreciate how the link enables collaboration and guarantees we see the same version, sending or sharing a copy (which affords neither advantage) feels more concrete and therefore more natural than sending or sharing a link.

Psychology notwithstanding, we can’t (yet) be sure that the recipient of a document we send or share will be able to use it online. So, often, sending or sharing can’t just mean transferring a link. It has to mean transferring a copy. The sender attaches the copy to a message, or makes the copy available to the recipient for download.

That’s where the PDF file shines. It bundles a set of assets into a single compound document. You can’t recombine or repurpose those assets easily, if at all. But transfer is a simple transaction. The sender does nothing extra to bundle it for transmission, and the recipient does nothing extra to unbundle it for use.

I’ve been thinking about this as I observe my own use of Google Docs. Nowadays I create lots of them. My web publishing instincts tell me to create sets of reusable assets and then link them together. Instead, though, I find myself making bigger and bigger Google Docs. One huge driver of this behavior has been the ability to take screenshots, crop them, and copy/paste them into a doc. It’s massively more efficient than the corresponding workflow in, say, WordPress, where the process entails saving a file, uploading to the Media Folder, and then sourcing the image from there.

Another driver has been the Google Docs table of contents feature. I have a 100-page Google Doc that’s pushing the limits of the system and really ought to be a set of interlinked files. But the workflow for that is also a pain: capture the link to A, insert it into B, capture the link to B, insert it into A. I’ve come to see the table of contents feature — which builds the TOC as a set of links derived from doc headings — as a link automation tool.

As the Google Drive at work accumulates more stuff, I’m finding it harder to find and assemble bits and pieces scattered everywhere. It’s more productive to work with fewer but larger documents that bundle many bits and pieces together. If I send you a link to a section called out in the TOC, it’s as if I sent you a link to an individual document. But you land in a context that enables you to find related stuff by scanning the TOC. That can be a more reliable method of discovery, for you, than searching the whole Google Drive.

Can’t I just keep an inventory of assets in a folder and point you to the folder? Yes, but I’ve tried, and it feels far less effective. I think there are two reasons why. First, there’s the overhead of creating and naming the assets. Second, the TOC conveys outline structure that the folder listing doesn’t.

This method is woefully imperfect for all kinds of reasons. A 100-page Google Doc is an unwieldy construct. Anonymous assets can’t be found by search. Links to headings lack human-readable information. And yet it’s effective because, I am coming to realize, there’s an ancient and powerful technology at work here. When I create a Google Doc in this way I am creating something like a book.

This may explain why the seeming immortality of the PDF format is less crazy than I have presumed. Even so, I’m still not ready to ante up for Acrobat Pro. I don’t know exactly what a book that’s born digital and read on devices ought to be. I do know a PDF file isn’t the right answer. Nor is a website delivered as a zip file. We need a thing with properties of both.

I think a W3C Working Draft entitled Portable Web Publications for the Open Web Platform (PWP) points in the right direction. Here’s the manifesto:

Our vision for Portable Web Publications is to define a class of documents on the Web that would be part of the Digital Publishing ecosystem but would also be fully native citizens of the Open Web Platform.

PWP usefully blurs distinctions along two axes: packed versus unpacked, and local versus remote.

That’s exactly what’s needed to achieve the goal. We want compound documents to be able to travel as packed bundles. We want to address their parts individually. And we want both modes available to us regardless of whether the documents are local or remote.

Because a PWP will be made from an inventory of managed assets, it will require professional tooling that’s beyond the scope of Google Docs or Word Online. Today it’s mainly commercial publishers who create such tools and use them to take apart and reconstruct the documents — typically still Word files — sent to them by authors. But web-native authoring tools are emerging, notably in scientific publishing. It’s not a stretch to imagine such tools empowering authors to create publication-ready books in PWP. It’s more of a stretch to imagine successors to Google Docs and Word Online making that possible for those of us who create book-like business documents. But we can dream.

Customer service and human dignity

It’s been a decade since I interviewed Paul English on the subject of customer service and human dignity (audio). He was CTO and co-founder at Kayak, but in this interview we talked more about GetHuman. It had begun as a list of cheats to help you hack through the automated defenses of corporate customer service and get to a real person. Here’s how I remember The IVR Cheat Sheet back then:

Finance                      Phone           Steps to find a human
America First Credit Union   800-999-3961    0 or say “member services”
American Express             800-528-4800    0 repeatedly
Bank of America              800-900-9000    00 or dial 813-882-1103 for Executive Office.
Bank of America              800-622-8731    *
Bank of America              800-432-1000    Say “operator” or “associate” at any point in the menu.
Charles Schwab               800-435-9050    3, 0
Chase                        800-CHASE24     5 pause 1 4
Chrysler Financial           800-700-0738    Select language, then press 00
Citi AAdvantage              888-766-2484    Ignore prompts and wait for a human.
Citi Card                    800-967-8500    0,0,0,0,0

In our interview Paul said:

Dignity is defined in part as giving people the right to make decisions. In particular if it’s a company I’m paying $100/month for cable or cell phone or whatever, and they don’t give me the ability to decide when I need to talk to a human, I find it really insulting.

When the CEO makes the terrible decision to treat customer service as a cost center, the bonus for the VP who runs it is based on one thing: shaving pennies off the cost of the call.

I responded:

Which is a tragedy, because customer service is a huge opportunity for business differentiation. If we set up a false dichotomy, where it’s either automated or human, we miss the real opportunity, which is to connect the right people to the right context at the right time. That’s what needs to happen, but it’s a tricky thing to orchestrate, and there doesn’t seem to be any vision for how to do that.

I’ve used GetHuman for 10 years. Yesterday I went there to gird for battle with Comcast and was delighted to see that the service has morphed into this:

Boston-based startup GetHuman on Wednesday unveiled a new service that lets you pay $5 to $25 to hire a “problem solver” who will call a company’s customer service line on your behalf to resolve issues. Prices vary depending on the company, but GetHuman offers to fight for your airline refund, deal with Facebook account issues, or perhaps even prevent a grueling call with Comcast to disconnect your service.

— CNET, May 4, 2016

I’m really curious about their hands-off problem-solving service and will try it in other circumstances, but my negotiation with Comcast was going to require my direct involvement. So this free call-back service made my day:

How our Comcast call-back works

First we call Comcast, wade through their phone maze, wait on hold for you, and then call you back when an agent can talk. We try 4 times, in case we don’t get through the first time. Of course, once you do talk to a Comcast rep, you still have to do the talking, negotiating, etc.

I went back to work. The call came. Normally I’d be feeling angry and humiliated in this situation. Instead I felt happy and empowered. Companies have used their robots to thwart me all these years. Now I’ve got a robot on my side of the table. It’s on!