The reengineering of three California lakes

If you drive up the eastern side of California, you’ll encounter three ancient lakebeds transformed by human engineering during the last century. The sequence goes like this: Salton Sea, Owens Valley, Mono Lake. We visited all three on a recent road trip. Since returning I’ve learned that their stories continue to unfold.

The Salton Sea

The Salton Sea, almost twice the size of Lake Tahoe, was created accidentally in 1905 when an irrigation project went out of control and, for a year and a half, sucked the Colorado River into what had been a dry lakebed. As a recent immigrant to California I can confirm that many natives don’t know much, if anything, about this place. A 2004 documentary narrated by John Waters, Plagues & Pleasures on the Salton Sea, traces its arc through a boom era of sport fishing and real estate speculation to what is now a living ghost town. For decades people have talked about saving the Salton Sea. That may once have meant restoring California’s “lost Riviera” but nowadays it’s all about mitigating an environmental disaster.

The Salton Sink is a low-lying basin that has flooded and dried many times over hundreds of thousands of years. What makes the current drying phase different is that the only inflow now is agricultural runoff with high concentrations of pesticides. As the lake evaporates, toxic dust goes windborne, threatening not only the Salton Sea communities, but also nearby Mexicali, Mexico, a metropolitan area with a million people. Meanwhile the lake’s increasing salinity is ruining the habitat of millions of migratory birds. These looming health and environmental crises motivated California to allocate $200 million as part of a successful June 2018 ballot initiative (Prop. 68) to stabilize the Salton Sea. (Another $200 million won’t be forthcoming because it was part of a failed follow-on ballot initiative in November 2018, Prop. 3.)

In an effort to buy time, a 2003 agreement to transfer water from the Imperial Valley to San Diego required the release of “mitigation water” to the Salton Sea. That ended in 2017 with no clear roadmap in place. What would it mean to stabilize the Salton Sea? A dwindling Colorado River won’t be the answer. It may be possible to import seawater from the Pacific, or the Gulf of California, to create a “smaller but sustainable” future that balances the needs of the region’s people and wildlife in a context of growing crises of drought and fire. Dry methods of dust suppression might also help. But all methods will require major engineering of one kind or another. The cost of that 1905 accident continues to grow.

The Owens Valley

Farther north lies Owens Lake, mostly drained since 1913 when William Mulholland began siphoning its water into the LA aqueduct. The movie Chinatown is a fictionalized account of the battle for that water. What remains is mostly a vast salt flat that’s patrolled by mining vehicles and was, until recently, the nation’s worst source of dust pollution. From Highway 395 it still looks like the surface of some other planet. But mitigation began at the turn of this century, and by 2017 LA’s Department of Water and Power had spent nearly $2 billion on what a modern western explorer known as StrayngerRanger describes as “a patchwork of dust-smothering techniques, including gravel, flooded ponds and rows of planted vegetation that cover nearly 50 square miles — an area more than twice the size of Manhattan.”

A key innovation, according to the LA Times, “involves using tractors to turn moist lake bed clay into furrows and basketball-sized clods of dirt. The clods will bottle up the dust for years before breaking down, at which point the process will be repeated.” This dry method was a big success: the dust has settled, public trails opened in 2016, and, as Robin Black’s photography shows, life is returning to what’s left of the lake.

The LA Times:

In what is now hailed as an astonishing environmental success, nature quickly responded. First to appear on the thin sheen of water tinged bright green, red and orange by algae and bacteria were brine flies. Then came masses of waterfowl and shorebirds that feed on the insects.

Robin Black:

This was never supposed to happen, and it’s a BIG DEAL. This, the habitat creation and management. This, the welcoming of the public to the lake. This is a huge victory for the groups in the Owens Valley who fought so tirelessly to make this happen.

We need to celebrate environmental successes and learn from them. And on this trip, the best one was yet to come.

Mono Lake

Mono Lake, near the town of Lee Vining (“the gateway to Yosemite”), was the next object of LA’s thirst. Its inflow was diverted to the aqueduct, and the lake almost went dry. Now it’s filling up again, thanks to the dedicated activists who formed the Mono Lake Committee 40 years ago, sued the LA Department of Water and Power, won back 3/4 of the inflow, and as chronicled in their video documentary The Mono Lake Story, have been stewarding the lake’s recovery ever since.

Today you can walk along the newly restored Lee Vining Creek trail and feel the place coming alive again. There’s been cleanup and trail-building and planting but, for the most part, all that was needed was to let the water flow again.

How did LA compensate for the lost flow? This 30-second segment of the documentary answers the question. The agreement included funding that enabled LA to build a wastewater reclamation plant to make up the difference. It wasn’t a zero-sum game after all. There was a way to maintain a reliable water supply for LA and to restore Mono Lake. It just took people with the vision to see that possibility, and the will to make it real.

Where’s my Net dashboard?

Yesterday Luann was reading a colleague’s blog and noticed a bug. When she clicked the Subscribe link, the browser loaded a page of what looked like computer code. She asked, quite reasonably: “What’s wrong? Who do I report this to?”

That page of code is an RSS feed. It works the same way as the one on her own blog. The behavior isn’t a bug, it’s a lost tradition. Luann has been an active blogger for many years, and once used an RSS reader, but for her and so many others, the idea of a common reader for the web has faded.

There was a time when most of the sources I cared about flowed into such a reader: mainstream news, a vibrant and growing blogosphere, podcasts, status updates, standing search queries, you name it. The unread item count could get daunting, but I was able to effectively follow a wide assortment of information flows in what I came to think of as my Net dashboard.
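
Under the hood there was nothing exotic about that dashboard. A reader polls each subscribed feed, parses the entries, and merges them into one stream. Here’s a minimal sketch in Python, using only the standard library; the feed URLs are hypothetical placeholders, and a real reader would also handle Atom feeds, HTTP caching, and read/unread state.

```python
# Minimal sketch of what a feed reader does: poll RSS feeds, parse their
# items, and merge everything into one reverse-chronological stream.
# (Feed URLs are placeholders; a real reader would also handle Atom feeds,
# ETag/Last-Modified caching, and per-item read/unread state.)

import urllib.request
import xml.etree.ElementTree as ET
from datetime import timezone
from email.utils import parsedate_to_datetime

FEEDS = [
    "https://example.com/blog/feed",      # hypothetical blog feed
    "https://example.org/news/rss.xml",   # hypothetical news feed
]

def fetch_items(url):
    """Fetch one RSS 2.0 feed and yield (published, title, link) tuples."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default=url)
        pub = item.findtext("pubDate")
        published = parsedate_to_datetime(pub) if pub else None
        yield (published, title, link)

def sort_key(published):
    """Normalize dates so items with and without pubDate sort together."""
    if published is None:
        return 0.0
    if published.tzinfo is None:
        published = published.replace(tzinfo=timezone.utc)
    return published.timestamp()

def dashboard(feeds):
    """Merge all subscribed feeds into one stream, newest first."""
    items = []
    for url in feeds:
        try:
            items.extend(fetch_items(url))
        except Exception as err:  # a dead feed shouldn't kill the dashboard
            print(f"skipping {url}: {err}")
    items.sort(key=lambda t: sort_key(t[0]), reverse=True)
    return items

if __name__ == "__main__":
    for published, title, link in dashboard(FEEDS):
        stamp = published.strftime("%Y-%m-%d") if published else "????-??-??"
        print(f"{stamp}  {title}\n            {link}")
```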

Where’s my next dashboard? I imagine a next-gen reader that brings me the open web and my social circles in a way that helps me attend to and manage all the flow. There are apps for that, a nice example being FlowReader, which has been around since 2013. I try these things hopefully but so far none has stuck.

Information overload, once called infoglut, remains a challenge. We’re all flooded with more channels than we can handle, more conversations happening in more places than we can keep track of.

Fear of missing out (FOMO) is the flip side of infoglut. We expect that we should be able to sanely monitor more than we actually can.

The first-gen reader didn’t solve infoglut/FOMO, nothing could, but for a while, for me, it was better than the alternative, which was (and now is again) email. Of course that was me, a tech journalist who participated in, researched, and wrote about topics in Net technology and culture — including RSS, which animated the dashboard I used to keep track of everything else. It was, however, a workflow that researchers and analysts in other fields will recognize.

Were I doing the same kind of work today, I’d cobble together the same kind of dashboard, while grumbling about the poorer experience now available. Instead my professional information diet is narrower and deeper than it was when analytical writing for commercial audiences was my work. My personal information diet, meanwhile, remains as diverse as everyone’s.

So I’m not sure that a next-gen reader can solve the same problems that my first-gen reader did, in the same ways. Still, I can’t help but envision a dashboard that subscribes to, and manages notifications from, all my sources. It seems wrong that the closest thing to that, once more, is email. Plugging the social silos into a common reader seems like the obvious thing. But if that were effective, we’d all be using FlowReader or something like it.

Why don’t we? Obviously the silos can refuse to cooperate, as FlowReader noted when announcing the demise of its Facebook integration:

These changes were made [by Facebook] to give users more control over their own data, which we support. It’s a great thing for users! However, it also means that Facebook data is no longer going to be easy to share between applications.

You know what would be a really great thing for users, though? A common reader that makes it easy to keep track of friends and family and coworkers along with news and all kinds of personal and professional information sources.

“What’s wrong?”

It’s not just that the silos can shut down their feeds. It’s that we allowed ourselves to get herded into them in the first place. For a while, quite a few people got comfortable with the notion of publishing and subscribing to diverse feeds in a common way, using systems that put them in charge of outflow and inflow. In one form or another that’s still the right model. Sometimes we forget things and have to relearn them. This is one of those things.

“Who do I report this to?”

Everyone.

A small blog neighborhood hiding in plain sight

For as long as I can remember, I’ve tweeted every blog post. As an experiment, I didn’t do that with this one. As a result, WordPress tells me that relatively few people have read it. But since I’m not monetizing pageviews here, why do I care? The interaction with those who did read that item was pleasant.

So, the experiment continues. Some items here, like this one, can enjoy a more intimate space than others, simply by not announcing themselves on Twitter. A small blog neighborhood hiding in plain sight.

Highlighting passages doesn’t aid my memory, but speaking them does

When I was in college, taking notes on textbooks and course readings, I often copied key passages into a notebook. There weren’t computers then, so like a medieval scribe I wrote out my selections longhand. Sometimes I added my own notes, sometimes not, but I never highlighted, even in books that I owned. Writing out the selections was a way to perform the work I was reading, fix them in my memory, and gain deeper access to the mind of the author.

Now we have computers, and the annotation software I help build at Hypothesis is ideal for personal note-taking. Close to half of all Hypothesis annotations are private notes, so clearly lots of people use it that way. For me, though, web annotation isn’t a private activity. I bookmark and tag web resources that I want to keep track of, and collaborate with others on document review, but I don’t use web annotation to enhance my private reading.

To be sure, I mostly read books and magazines in print. It’s a welcome alternative to the screens that otherwise dominate my life. But even when my private reading happens online, I don’t find myself using our annotation tool the way so many others do.

So, what’s a good way to mark and remember a passage in a book if you don’t want to highlight it, or in the case of a library book, can’t highlight it? I thought about the scribing I used to do in college, and realized there’s now another way to do that. Recently, when I read a passage in a book or magazine that I want to remember and contemplate, I’ve been dictating it into a note-taking app on my phone.

I’ve followed the evolution of speech-to-text technology with great interest over the years. When I reviewed Dragon NaturallySpeaking, I did what every reviewer does. I tried to use the tool to dictate my review, and got mixed results. Over time the tech improved but I haven’t yet adopted dictation for normal work. At some point I decided to forget about dictation software until it became something that civilians who weren’t early-adopter tech journos used in real life.

One day, when I received some odd text messages from Luann, I realized that time had arrived. She’d found the dictation feature on her phone. It wasn’t working perfectly, and the glitches were amusing, but she was using it in an easy and natural way, and the results were good enough.

I still don’t dictate to my computer. This essay is coming to you by way of a keyboard. But I dictate to my phone a lot, mostly for text messages. The experience keeps improving, and now this new practice — voicing passages that I read in books, in order to capture and remember them — seems to be taking hold.

I’m reminded of a segment in a talk given by Robert “R0ml” Lefkowitz at the 2004 Open Source Conference, entitled The Semasiology of Open Source (part 2), the second in a series structured as thesis (part 1), antithesis (part 2), and synthesis (part 3). IT Conversations aptly described this luminous series of talks as “an intellectual joy-ride”; I’m going to revisit the whole thing on a hike later today.

Meanwhile, here’s a transcription of the segment I’m recalling. It appears during a review of the history of literacy. At this point we have arrived at 600 AD.

To be a reader was not to be the receiver of information, it was to be the transmitter of the information, because it was not possible to read silently. So things that were written were written as memory aids to the speaker. And the speaker would say the words to the listener. To read was to speak, and those were synonyms … The writing just lies there, whereas the speaking lifts it off the page. The writing is just there, but the speaking is what elevates the listener.

Had I merely read that passage I’m certain I would not remember it 14 years later. Hearing R0ml speak the words made an indelible impression. (Seeing him speak the words, of course, made it even more indelible.)

Silent reading, once thought impossible, had to be invented. But just because we can read silently doesn’t mean we always should, as everyone who’s read aloud to a young child, or to a vision-impaired elder, knows. It’s delightful that voice recognition affords new ways to benefit from the ancient practice of reading aloud.

Designing for least knowledge

In a post on the company blog I announced that it’s now possible to use the Hypothesis extension in the Brave browser. That’s great news for Hypothesis. It’s awkward, after all, to be an open-source non-profit software company whose product is “best viewed in” Chrome. A Firefox/Hypothesis extension has been in the works for quite a while, but the devil’s in the details. Although Firefox supports WebExtensions, our extension has to be ported to Firefox; it isn’t plug-and-play. Thanks to Brave’s new ability to install unmodified Chrome extensions directly from the Chrome web store, Hypothesis can now plug into a powerful alternative browser. That’s a huge win for Hypothesis, and also for me personally, since I’m no longer tethered to Chrome in my daily work.

Another powerful alternative is forthcoming. Microsoft’s successor to Edge will (like Brave) be built on Chromium, the open-source engine of Google’s Chrome. The demise of EdgeHTML represents a loss of genetic diversity, and that’s unfortunate. Still, it’s a huge investment to maintain an independent implementation of a browser engine, and I think Microsoft made the right pragmatic call. I’m now hoping that the Chromium ecosystem will support speciation at a higher level in the stack. Ever since my first experiments with Greasemonkey I’ve been wowed by what browser extensions can do, and eager for standardization to unlock that potential. It feels like we may finally be getting there.

Brave’s support for Chrome extensions matters to me because I work on a Chrome extension. But Brave’s approach to privacy matters to me for deeper reasons. In a 2003 InfoWorld article I discussed Peter Wayner’s seminal book Translucent Databases, which explores ways to build information systems that work without requiring the operators to have full access to users’ data. The recipes in the book point to a design principle of least knowledge.
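
The flavor of those recipes is easy to sketch. Here’s a minimal illustration of the least-knowledge idea — my own toy example, not a recipe copied from the book: a service keys its records by a hash derived from a secret only the user supplies, so the operator can answer queries without ever learning who its users are.

```python
# Sketch of a least-knowledge ("translucent") store. This is an illustration
# of the idea, not a recipe quoted from Wayner's book. Rows are keyed by a
# hash derived from a secret only the user supplies, so the operator can
# serve lookups without ever storing or learning user identities.

import hashlib

def opaque_key(user_id: str, passphrase: str) -> str:
    """Derive a lookup key from a secret the server never stores."""
    digest = hashlib.pbkdf2_hmac(
        "sha256",
        passphrase.encode("utf-8"),   # known only to the user
        user_id.encode("utf-8"),      # acts as a per-user salt
        100_000,                      # key-stretching iterations
    )
    return digest.hex()

# What the operator's database holds: opaque keys and amounts, no identities.
donations: dict[str, list[float]] = {}

def record_donation(user_id: str, passphrase: str, amount: float) -> None:
    donations.setdefault(opaque_key(user_id, passphrase), []).append(amount)

def donation_total(user_id: str, passphrase: str) -> float:
    """Only someone holding the user's secret can reassemble their history."""
    return sum(donations.get(opaque_key(user_id, passphrase), []))

record_donation("alice@example.com", "correct horse battery staple", 5.00)
record_donation("alice@example.com", "correct horse battery staple", 2.50)
print(donation_total("alice@example.com", "correct horse battery staple"))  # 7.5
print(list(donations.keys()))  # nothing here identifies a user
```

Assuming the passphrase is strong and held only by the user, neither the operator nor anyone who steals the table can link the rows back to people — the operator literally can’t know.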

Surveillance capitalism knows way too much about us. Is that a necessary tradeoff for the powerful conveniences it delivers? It’s easy to assume so, but we haven’t really tried serious alternatives yet. That’s why this tweet made my day. “We ourselves do not know user identities or how donations might link via user,” wrote Brendan Eich, Brave’s founder. “We don’t want to know.”

We don’t want to know!

That’s the principle of least knowledge in action. Brave is deploying it in service of a mission to detoxify the relationship between web content and advertising. Will the proposed mechanisms work? We’ll see. If you’re curious, I recommend Brendan Eich’s interview with Eric Knorr. The first half of the interview is a deep dive into JavaScript, the second half unpacks Brave’s business model. However that model turns out, I’m grateful to see a real test of the principle. We need examples of publishing and social media services that succeed not as adversaries who monetize our data but rather as providers who deliver convenience at a reasonable price we’re happy to pay.

My hunch is that we’ll find ways to build profitable least-knowledge services once we start to really try. Successful examples will provide a carrot, but there’s also a stick. Surveillance data is toxic stuff, risky to monetize because it always spills. It’s a liability that regulators — and perhaps also insurers — will increasingly make explicit.

Don’t be evil? How about can’t be evil? That’s a design principle worth exploring.

My new library superpower

The recommendations that matter to me — for books to read, movies to watch, products to buy, places to visit — almost never come from algorithms. Instead they come from friends, family, acquaintances, and now also, in the case of books, from fellow library patrons whom I’ve never met.

This last source is a recent discovery. Here’s how it works. When I enter the library, I walk past shelves of new books and staff picks. These are sometimes appealing, but there’s a much better source of recommendations to be found at the back of the library. There, on shelves and carts, are the recently-returned books ready to be reshelved. I always find interesting titles among them. Somehow I had never noticed that you can scan these titles, and check out the ones you want before they ever get reshelved. Did it always work this way at our library? At libraries in other places I’ve lived? If so I am woefully late to the party, but for me, at least, this has become a new superpower.

I reckon that most books in the library rarely if ever leave the shelves. The recently-returned section is a filter that selects for books that patrons found interesting enough to check out, read (or maybe not), and return. That filter has no bias with respect to newness or appeal to library staff. And it’s not being manipulated by any algorithm. It’s a pure representation of what our library’s patrons collectively find interesting.

The library’s obvious advantage over the bookstore is the zero price tag. But there’s a subtler advantage. In the bookstore, you can’t peruse an anonymized cache of recently-bought books. Online you can, of course, tap into the global zeitgeist, but there you’re usually being manipulated by algorithms. LibraryThing doesn’t do that, and in principle it’s my kind of place online, but in practice I’ve never succeeded in making it a habit. I think that’s because I really like scanning titles that I can just take off a shelf and read for free. Not even drone delivery can shorten the distance between noticing and reading to seconds. Ebooks can, of course. But that means another screen; I already spend too much time looking at screens, and print provides a welcome alternative.

This method has been so effective that I’ve been (guiltily) a bit reluctant to describe it. After all, if everyone realized it’s better to bypass the new-book and staff-pick displays and head straight for the recently-returned area, the pickings would be slimmer there. But I’m nevertheless compelled to disclose the secret. Since few if any of my library’s patrons will ever read this post, I don’t think I’m diluting my new superpower by explaining it here. And who knows, maybe I’ll meet some interesting people as I exercise it.


Don’t just Google it! First, let’s talk!

Asking questions in conversation has become problematic. For example, try saying this out loud: “I wonder when Martin Luther King was born?” If you ask that online, a likely response is: “Just Google it!” Maybe with a snarky link: http://lmgtfy.com/?q=when was martin luther king born?

Asking in conversation is frowned upon too. Why ask us when you could ask Siri or Alexa to ask Google?

There’s a kind of shaming going on here. We are augmented with superpowers, people imply, and you’re an idiot not to use them.

Equally problematic is the way this attitude shuts down conversation. Sure, I can look up MLK’s birthday. Or you can. But what if our phones are dead? Now we need to rely on our own stored knowledge and, more importantly, on our collective pool of stored knowledge.

I think MLK was shot in 1968. I’d have been twelve. Does that sound right? Yeah, we were in the new house, it was sixth grade. And I know he died young, maybe 42? So I’ll guess 1968 – 42 = 1926.

Were you around then? If so, how do you remember MLK’s assassination? If not, what do you know about the event and its context?

As you can see in the snarky screencast, I’m wrong; MLK died even younger than I thought. You might know that. If we put our heads together, we might get the right answer.

Asking about something that can easily be looked up shouldn’t stop a conversation, it should start one. Of course we can look up the answer. But let’s dwell for a minute on what we know, or think we know. Let’s exercise our own powers of recall and synthesis.

Like all forms of exercise, this one can be hard to motivate, even though it’s almost always rewarding. Given a choice between an escalator and the stairs, I have to push myself to prefer the stairs. In low-stakes wayfinding I have to push myself to test my own sense of direction. When tuning my guitar I have to push myself to hear the tones before I check them. I am grateful to be augmented by escalators, GPS wayfinders, and tuners. But I want to use these powers in ways that complement my own. Finding the right balance is a struggle.

Why bother? Consider MLK. In The unaugmented mind I quoted from MLK confidante Clarence Jones’ Behind the Dream:

What amazed me was that there was absolutely no reference material for Martin to draw upon. There he was [in the Birmingham jail] pulling quote after quote from thin air. The Bible, yes, as might be expected from a Baptist minister, but also British prime minister William Gladstone, Mahatma Gandhi, William Shakespeare, and St. Augustine.

And:

The “Letter from a Birmingham Jail” showed his recall for the written material of others; his gruelling schedule of speeches illuminated his ability to do the same for his own words. Martin could remember exact phrases from several of his unrelated speeches and discover a new way of linking them together as if they were all parts of a singular ever-evolving speech. And he could do it on the fly.

Jones suggests that MLK’s power flowed in part from his ability to reconfigure a well-stocked working memory. We owe it to ourselves, and to one another, to nurture that power. Of course we can look stuff up. But let’s not always go there first. Let’s take a minute to exercise our working memories and have a conversation about them, then consult the oracle. I’ll bet that in many cases that conversation will turn out to be the most valuable part of the experience.