Highlighting passages doesn’t aid my memory, but speaking them does

When I was in college, taking notes on textbooks and course readings, I often copied key passages into a notebook. There weren’t computers then, so like a medieval scribe I wrote out my selections longhand. Sometimes I added my own notes, sometimes not, but I never highlighted, even in books that I owned. Writing out the selections was a way to perform the work I was reading, record selections in my memory, and gain deeper access to the mind of the author.

Now we have computers, and the annotation software I help build at Hypothesis is ideal for personal note-taking. Close to half of all Hypothesis annotations are private notes, so clearly lots of people use it that way. For me, though, web annotation isn’t a private activity. I bookmark and tag web resources that I want to keep track of, and collaborate with others on document review, but I don’t use web annotation to enhance my private reading.

To be sure, I mostly read books and magazines in print. It’s a welcome alternative to the screens that otherwise dominate my life. But even when my private reading happens online, I don’t find myself using our annotation tool the way so many others do.

So, what’s a good way to mark and remember a passage in a book if you don’t want to highlight it, or in the case of a library book, can’t highlight it? I thought about the scribing I used to do in college, and realized there’s now another way to do that. Recently, when I read a passage in a book or magazine that I want to remember and contemplate, I’ve been dictating it into a note-taking app on my phone.

I’ve followed the evolution of speech-to-text technology with great interest over the years. When I reviewed Dragon NaturallySpeaking, I did what every reviewer does: I tried to use the tool to dictate my review, and got mixed results. Over time the tech improved, but I haven’t yet adopted dictation for normal work. At some point I decided to forget about dictation software until it became something that civilians, not early-adopter tech journos, used in real life.

One day, when I received some odd text messages from Luann, I realized that time had arrived. She’d found the dictation feature on her phone. It wasn’t working perfectly, and the glitches were amusing, but she was using it in an easy and natural way, and the results were good enough.

I still don’t dictate to my computer. This essay is coming to you by way of a keyboard. But I dictate to my phone a lot, mostly for text messages. The experience keeps improving, and now this new practice — voicing passages that I read in books, in order to capture and remember them — seems to be taking hold.

I’m reminded of a segment in a talk given by Robert “R0ml” Lefkowitz at the 2004 Open Source Conference, entitled The Semasiology of Open Source (part 2), the second in a series structured as thesis (part 1), antithesis (part 2), and synthesis (part 3). IT Conversations aptly described this luminous series of talks as “an intellectual joy-ride”; I’m going to revisit the whole thing on a hike later today.

Meanwhile, here’s a transcription of the segment I’m recalling. It appears during a review of the history of literacy. At this point we have arrived at 600 AD.

To be a reader was not to be the receiver of information, it was to be the transmitter of the information, because it was not possible to read silently. So things that were written were written as memory aids to the speaker. And the speaker would say the words to the listener. To read was to speak, and those were synonyms … The writing just lies there, whereas the speaking lifts it off the page. The writing is just there, but the speaking is what elevates the listener.

Had I merely read that passage I’m certain I would not remember it 14 years later. Hearing R0ml speak the words made an indelible impression. (Seeing him speak the words, of course, made it even more indelible.)

Silent reading, once thought impossible, had to be invented. But just because we can read silently doesn’t mean we always should, as everyone who’s read aloud to a young child, or to a vision-impaired elder, knows. It’s delightful that voice recognition affords new ways to benefit from the ancient practice of reading aloud.

A small blog neighborhood hiding in plain sight

For as long as I can remember, I’ve tweeted every blog post. As an experiment, I didn’t do that with this one. As a result, WordPress tells me that relatively few people have read it. But since I’m not monetizing pageviews here, why should I care? The interaction with those who did read the item was pleasant.

So, the experiment continues. Some items here, like this one, can enjoy a more intimate space than others, simply by not announcing themselves on Twitter. A small blog neighborhood hiding in plain sight.

Where’s my Net dashboard?

Yesterday Luann was reading a colleague’s blog and noticed a bug. When she clicked the Subscribe link, the browser loaded a page of what looked like computer code. She asked, quite reasonably: “What’s wrong? Who do I report this to?”

That page of code is an RSS feed. It works the same way as the one on her own blog. The behavior isn’t a bug; it’s a lost tradition. Luann has been an active blogger for many years, and once used an RSS reader, but for her and so many others, the idea of a common reader for the web has faded.
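For the record, here’s roughly what a feed reader does with that page of code: fetch it, parse it, and surface the human-readable parts. This is a minimal sketch in Python using the feedparser library, with a placeholder feed URL rather than Luann’s actual blog:

```python
# Fetch and parse an RSS feed, then print recent items.
# The URL is a placeholder; a WordPress blog typically serves its feed at /feed/.
import feedparser  # pip install feedparser

feed = feedparser.parse("https://example.wordpress.com/feed/")
print(feed.feed.title)
for entry in feed.entries[:5]:
    print("-", entry.title, entry.link)
```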

There was a time when most of the sources I cared about flowed into such a reader: mainstream news, a vibrant and growing blogosphere, podcasts, status updates, standing search queries, you name it. The unread item count could get daunting, but I was able to effectively follow a wide assortment of information flows in what I came to think of as my Net dashboard.

Where’s my next dashboard? I imagine a next-gen reader that brings me the open web and my social circles in a way that helps me attend to and manage all the flow. There are apps for that, a nice example being FlowReader, which has been around since 2013. I try these things hopefully but so far none has stuck.

Information overload, once called infoglut, remains a challenge. We’re all flooded with more channels than we can handle, more conversations happening in more places than we can keep track of.

Fear of missing out (FOMO) is the flip side of infoglut. We expect that we should be able to sanely monitor more than we actually can.

The first-gen reader didn’t solve infoglut/FOMO (nothing could), but for a while, for me, it was better than the alternative, which was (and now is again) email. Of course that was me, a tech journalist who participated in, researched, and wrote about topics in Net technology and culture, including RSS, which animated the dashboard I used to keep track of everything else. It was, however, a workflow that researchers and analysts in other fields will recognize.

Were I doing the same kind of work today, I’d cobble together the same kind of dashboard, while grumbling about the poorer experience now available. Instead my professional information diet is narrower and deeper than it was when analytical writing for commercial audiences was my work. My personal information diet, meanwhile, remains as diverse as everyone’s.

So I’m not sure that a next-gen reader can solve the same problems that my first-gen reader did, in the same ways. Still, I can’t help but envision a dashboard that subscribes to, and manages notifications from, all my sources. It seems wrong that the closest thing to that, once more, is email. Plugging the social silos into a common reader seems like the obvious thing. But if that were effective, we’d all be using FlowReader or something like it.

Why don’t we? Obviously the silos can refuse to cooperate, as FlowReader noted when announcing the demise of its Facebook integration:

These changes were made [by Facebook] to give users more control over their own data, which we support. It’s a great thing for users! However, it also means that Facebook data is no longer going to be easy to share between applications.

You know what would be a really great thing for users, though? A common reader that makes it easy to keep track of friends and family and coworkers along with news and all kinds of personal and professional information sources.

“What’s wrong?”

It’s not just that the silos can shut down their feeds. It’s that we allowed ourselves to get herded into them in the first place. For a while, quite a few people got comfortable with the notion of publishing and subscribing to diverse feeds in a common way, using systems that put them in charge of outflow and inflow. In one form or another that’s still the right model. Sometimes we forget things and have to relearn them. This is one of those things.

“Who do I report this to?”

Everyone.

The reengineering of three California lakes

If you drive up the eastern side of California, you’ll encounter three ancient lakebeds transformed by human engineering during the last century. The sequence goes like this: Salton Sea, Owens Valley, Mono Lake. We visited all three on a recent road trip. Since returning I’ve learned that their stories continue to unfold.

The Salton Sea

The Salton Sea, almost twice the size of Lake Tahoe, was created accidentally in 1905 when an irrigation project went out of control and, for a year and a half, sucked the Colorado River into what had been a dry lakebed. As a recent immigrant to California I can confirm that many natives don’t know much, if anything, about this place. A 2004 documentary narrated by John Waters, Plagues & Pleasures on the Salton Sea, traces its arc through a boom era of sport fishing and real estate speculation to what is now a living ghost town. For decades people have talked about saving the Salton Sea. That may once have meant restoring California’s “lost Riviera” but nowadays it’s all about mitigating an environmental disaster.

The Salton Sink is a low-lying basin that has flooded and dried many times over hundreds of thousands of years. What makes the current drying phase different is that the only inflow now is agricultural runoff with high concentrations of pesticides. As the lake evaporates, toxic dust goes windborne, threatening not only the Salton Sea communities, but also nearby Mexicali, Mexico, a metropolitan area with a million people. Meanwhile the lake’s increasing salinity is ruining the habitat of millions of migratory birds. These looming health and environmental crises motivated California to allocate $200 million as part of a successful June 2018 ballot initiative (Prop. 68) to stabilize the Salton Sea. (Another $200 million won’t be forthcoming because it was part of a failed follow-on ballot initiative in November 2018, Prop. 3.)

In an effort to buy time, a 2003 agreement to transfer water from the Imperial Valley to San Diego required the release of “mitigation water” to the Salton Sea. That ended in 2017 with no clear roadmap in place. What would it mean to stabilize the Salton Sea? A dwindling Colorado River won’t be the answer. It may be possible to import seawater from the Pacific, or the Gulf of California, to create a “smaller but sustainable” future that balances the needs of the region’s people and wildlife in a context of growing crises of drought and fire. Dry methods of dust suppression might also help. But all methods will require major engineering of one kind or another. The cost of that 1905 accident continues to grow.

The Owens Valley

Further north lies Owens Lake, mostly drained since 1913 when William Mulholland began siphoning its water into the LA aqueduct. The movie Chinatown is a fictionalized account of the battle for that water. What remains is mostly a vast salt flat that’s patrolled by mining vehicles and was, until recently, the nation’s worst source of dust pollution. From highway 395 it still looks like the surface of some other planet. But mitigation began at the turn of this century, and by 2017 LA’s Department of Water and Power had spent nearly $2 billion on what a modern western explorer known as StrayngerRanger describes as “a patchwork of dust-smothering techniques, including gravel, flooded ponds and rows of planted vegetation that cover nearly 50 square miles — an area more than twice the size of Manhattan.”

A key innovation, according to the LA Times, “involves using tractors to turn moist lake bed clay into furrows and basketball-sized clods of dirt. The clods will bottle up the dust for years before breaking down, at which point the process will be repeated.” This dry method was a big success: the dust has settled, public trails opened in 2016, and, as Robin Black’s photography shows, life is returning to what’s left of the lake.

The LA Times:

In what is now hailed as an astonishing environmental success, nature quickly responded. First to appear on the thin sheen of water tinged bright green, red and orange by algae and bacteria were brine flies. Then came masses of waterfowl and shorebirds that feed on the insects.

Robin Black:

This was never supposed to happen, and it’s a BIG DEAL. This, the habitat creation and management. This, the welcoming of the public to the lake. This is a huge victory for the groups in the Owens Valley who fought so tirelessly to make this happen.

We need to celebrate environmental successes and learn from them. And on this trip, the best one was yet to come.

Mono Lake

Mono Lake, near the town of Lee Vining (“the gateway to Yosemite”), was the next object of LA’s thirst. Its inflow was diverted to the aqueduct, and the lake almost went dry. Now it’s filling up again, thanks to the dedicated activists who formed the Mono Lake Committee 40 years ago, sued the LA Department of Water and Power, won back 3/4 of the inflow, and as chronicled in their video documentary The Mono Lake Story, have been stewarding the lake’s recovery ever since.

Today you can walk along the newly restored Lee Vining Creek trail and feel the place coming alive again. There’s been cleanup and trail-building and planting but, for the most part, all that was needed was to let the water flow again.

How did LA compensate for the lost flow? This 30-second segment of the documentary answers the question. The agreement included funding that enabled LA to build a wastewater reclamation plant to make up the difference. It wasn’t a zero-sum game after all. There was a way to maintain a reliable water supply for LA and to restore Mono Lake. It just took people with the vision to see that possibility, and the will to make it real.

Renaming Hypothesis tags

Wherever social tagging is supported as an optional feature, its use obeys a power law. Some people use tags consistently, some sporadically, most never. A chart of Hypothesis usage illustrates the familiar long-tail distribution.

Those of us in the small minority of consistent taggers care a lot about the tag namespaces we’re creating. We tag in order to classify resources, and we want to classify them consistently, but we also want to morph our tag namespaces, both to reflect our own changing intuitions about how to classify and to adapt to evolving social conventions.

Consistent tagging requires a way to make, use, and perhaps share a list of controlled tags, and that’s a topic for another post. Morphing tag namespaces to satisfy personal needs, or adapt to social conventions, requires the ability to rename tags, and that’s the focus of this post.

There’s nothing new about the underlying principles. When I first got into social tagging, with del.icio.us back in the day, I made a screencast to show how I was using del.icio.us’ tag renaming feature to reorganize my own classification system, and to evolve it in response to community tag usage. The screencast argues that social taggers form a language community in which metadata vocabularies can evolve in the same way natural languages do.

Over the years I’ve built tag renamers for other systems, and now I’ve made one for Hypothesis, as shown in this 90-second demo. If you’re among the minority who want to manage your tags this way, you’re welcome to use the tool; here’s the link. But please proceed with care. When you reorganize a tag namespace, it’s possible to wind up on the wrong end of the arrow of entropy!
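For the curious, the gist of a tag renamer is simple. Here’s a minimal sketch against the public Hypothesis API, not the code behind the demo; the token and username are placeholders, and pagination and error handling are elided:

```python
# Rename a tag across one user's Hypothesis annotations: search for
# annotations carrying the old tag, then PATCH each one with the old
# tag swapped for the new.
import requests

API = "https://api.hypothes.is/api"
TOKEN = "YOUR_DEVELOPER_TOKEN"            # from your Hypothesis profile
USER = "acct:YOUR_USERNAME@hypothes.is"   # placeholder username

def rename_tag(old, new):
    headers = {"Authorization": f"Bearer {TOKEN}"}
    params = {"user": USER, "tag": old, "limit": 200}
    rows = requests.get(f"{API}/search", headers=headers, params=params).json()["rows"]
    for row in rows:
        # Swap the old tag for the new one, preserving the others.
        tags = [new if t == old else t for t in row["tags"]]
        requests.patch(f"{API}/annotations/{row['id']}",
                       headers=headers, json={"tags": tags})

rename_tag("socialtagging", "social-tagging")
```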

Letters to Mr. Wilson’s Museum of Jurassic Technology

Dear Mr. Wilson,

Your Museum of Jurassic Technology (MJT) first came to my attention in 1995 when I read an excerpt from Lawrence Weschler’s Mr. Wilson’s Cabinet of Wonder in the New Yorker. Or anyway, that’s the origin story I’ve long remembered and told. As befits the reality-warping ethos of the MJT, it seems not to be true. I can find no record of such an article in the New Yorker’s archive. (Maybe I read it in Harper’s?) What I can find, in the New York Times’ archive, is a review of the book that nicely captures that ethos:

Run by its eccentric proprietor out of a storefront in Culver City, Calif., the museum is clearly a modern-day version, as Mr. Weschler astutely points out, of the “wonder-cabinets” that sprang up in late Renaissance Europe, inspired by all the discoveries in the New World. David Wilson comes off as an amusingly Casaubonesque figure who, in his own little way, seeks to amass all the various kinds of knowledge in the world; and if his efforts seem random and arcane, they at any rate sound scientifically specific. Yet when Mr. Weschler begins to check out some of the information in the exhibits, we discover that much of it is made up or imagined or so elaborately embroidered as to cease to resemble any real-world facts.

The key to the pleasure of this book lies in that “much of,” for the point of David Wilson’s museum is that you can’t tell which parts are true and which invented. In fact, some of the unlikeliest items — the horned stink ants, for instance — turn out to be pretty much true. In the wake of its moment of climactic exposure, “Mr. Wilson’s Cabinet of Wonder” turns into an expedition in which Lawrence Weschler tracks down the overlaps, correspondences and occasionally tenuous connections between historical and scientific reality on the one hand and the Museum of Jurassic Technology on the other.

We’ve always wanted to visit the MJT, and finally made the pilgrimage a few weeks ago. Cruising down the California coast on a road trip, we made the museum our only LA destination. The miserable traffic we fought all day, both entering and leaving Culver City, was a high price to pay for several hours of joy at the museum. But it was worth it.

At one point during our visit, I was inspecting a display case filled with a collection of antique lace, each accompanied by a small freestanding sign describing the item. On one of the signs, the text degrades hilariously. As I remember it (no doubt imperfectly) there were multiple modes of failure: bad kerning, faulty line spacing, character misencoding, dropouts, faded print. Did I miss any?

While I was laughing at the joke, I became aware of another presence in the dark room, a man with white hair who seemed to pause near me before wandering off to an adjoining dark room. I thought it might be you, but I was too intimidated to ask.

The other night, looking for videos that might help me figure out if that really was you, I found a recording of an event at the USC Fisher Museum of Art. In that conversation you note that laughter is a frequent response to the MJT:

“We don’t have a clue what people are laughing about,” you said, “but it’s really hard to argue with laughter.”

I was laughing at that bit of hilariously degraded signage. I hope that was you hovering behind me, and I hope that you were appreciating my appreciation of the joke. But maybe not. Maybe that’s going to be another questionable memory. Whatever the truth may be, thank you. We need the laughs now more than ever.

Yours sincerely,

Jon Udell

Searching across silos, circa 2018

The other day, while searching across various information silos — WordPress, Slack, readthedocs.io, GitHub, Google Drive, Google Groups, Zendesk, Stack Overflow for Teams — I remembered the first time I simplified that chore. It was a fall day in 1996. I’d been thinking about software components since writing BYTE’s Componentware [1] cover story in 1994, and about web software since launching the BYTE website in 1995. On that day in 1996 I put the two ideas together for the first time, and wrote up my findings in an installment of my monthly column entitled On-Line Componentware. (Yes, we really did write “On-Line” that way.)

The solution described in that column was motivated by McGraw-Hill’s need to unify search across BYTE and some other McGraw-Hill pubs that were coming online. Alta Vista, then the dominant search engine, was indexing all the sites, but it didn’t offer a way to search across them. It did, however, offer a site: query, as Google and Bing do today, so you could run per-site queries. I saw it was possible to regard Alta Vista as a new kind of software component for a new kind of web-based application, one that called the search engine once per site and stitched the results into a single web page.

Over the years I’ve done a few of these metasearch apps. In the pre-API era that meant normalizing results from scraped web pages. In the API era it often means normalizing results fetched from per-site APIs. But that kind of normalization was overkill this time around; I just needed an easier way to search for words that might be on our blog, or in our docs, or in our GitHub repos, or in our Google storage. So I made a web page that accepts a search term, runs a bunch of site-specific searches, and opens each into a new tab.
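Mine is a web page, but the same idea fits in a few lines of Python. In this sketch the site list is illustrative rather than my actual one; each site-scoped query opens in its own browser tab:

```python
# Open one site-scoped Google query per silo, each in a new browser tab.
# Substitute your own silos for this illustrative list.
import webbrowser
from urllib.parse import quote_plus

SITES = [
    "blog.hypothes.is",
    "h.readthedocs.io",
    "github.com/hypothesis",
    "groups.google.com",
]

def metasearch(term):
    for site in SITES:
        query = quote_plus(f"site:{site} {term}")
        webbrowser.open_new_tab(f"https://www.google.com/search?q={query}")

metasearch("notifications")
```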

This solution isn’t nearly as nice as some of my prior metasearchers. But it’s way easier than authenticating to APIs and merging their results, or using a bunch of different query syntaxes interactively. And it’s really helping me find stuff scattered across our silos.

But — there’s always a but — you’ll notice that Slack and Zendesk aren’t yet in the mix. All the other services make it possible to form a URL that includes the search term. That’s just basic web thinking. A set of web search results is an important kind of web resource that, like any other, should be URL-accessible. Unless I’m wrong, in which case someone please correct me, I can’t do the equivalent of `?q=notifications` in a Slack or Zendesk URL. Of course it’s possible to wrap their APIs in a way that enables URL query, but I shouldn’t have to. Every search deserves its own URL.


[1] Thanks as always to Brewster Kahle for preserving what publishers did not.