
Tech’s inequality paradox

Travelers leaving from the San Francisco airport on morning flights know the drill: you stay over the night before at a motel on El Camino Real in San Bruno. Last week I booked the Super 8, which turned out to be perfectly serviceable. As a bonus, it’s right next door to Don Pico’s Mexican Bistro and Cevicheria, which is unlike anything else you’ll find on motel row:

The back bar in the new dining room is a 1925 mahogany Brunswick from the Cliff House in San Francisco; the large bullfight mural is an original painting by Roberto Leroy Smith; large mirrors came from Harry Denton’s; the chandeliers are of Austrian crystal, from the World Trade Center at the San Francisco Ferry Building; the trophy fish are from Bing Crosby’s private collection; the large elephant, floral, and deer paintings are from the movie Citizen Kane with Orson Welles; the sombreros are 1920s antiques from a Mexican hat collection acquired from Universal Studios; and the stylized modern art paintings are by California painter Rudy Hess. – http://www.donpicosbistro.com/history/

It was too late for dinner but I sat at the mahogany bar, had a drink and a snack, and talked with Angel, the bartender. He’s a veteran of San Francisco’s culture war. Born and raised in the Mission District, he was driven out seven years ago. At most he could afford a studio apartment and that was no place to raise a young child.

Angel didn’t express the anger that you can now see bubbling to the surface when you walk the streets of San Francisco. Just the sadness of the dispossessed. We talked about many things. At one point he answered a text on his iPhone and it suddenly hit me. That’s the same iPhone that San Francisco’s tech elite carry.

For most things you can buy, there’s almost no limit to what you can spend. A tech billionaire in San Francisco can own a home or a car that costs hundreds of times what Angel can pay for a home or a car. But while it’s possible to buy a gold-plated and diamond-encrusted iPhone, I’ve never seen one. The tech that’s at the heart of San Francisco’s crisis of inequality is a commodity, not a luxury good. It’s a great equalizer. Everybody has a smartphone, everybody has access to the services it provides. But if you’re Angel, you can’t use that phone in the neighborhood you grew up in.

Business registration as a framework for local data

In Crowdsourcing local data the right way I envisioned a different way for businesses to register with state governments. In this model, state governments invite and encourage businesses to be the authoritative sources for their own data, and to announce URLs at which that data is published in standard formats. Instead of plugging data into the state’s website, a business would transmit a URL. The state would sync the data at that URL, assign it a version number, and verify its copy (tethered to the URL) as an approved version. The state would also certify the URL as a source of additional data not required by the state but available from the business at that URL.
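Here’s a sketch of what I mean, in Python. The field names, the version scheme, and the function itself are all invented for illustration; a real registry would define its own schema and fetch the bytes from the business’s URL itself.

```python
import hashlib
import json

def certify_snapshot(url, raw_bytes, prior_versions):
    """Sync a copy of the business's published data and assign a version.

    Hypothetical sketch: `prior_versions` is the state's history of
    approved snapshots for this URL. A content digest detects whether
    anything changed since the last sync.
    """
    record = json.loads(raw_bytes)
    digest = hashlib.sha256(raw_bytes).hexdigest()
    if prior_versions and prior_versions[-1]["digest"] == digest:
        # Unchanged since last sync: keep the current approved version.
        return prior_versions[-1]
    version = {
        "source_url": url,
        "version": len(prior_versions) + 1,
        "digest": digest,
        "name": record.get("name"),
        "address": record.get("address"),
    }
    prior_versions.append(version)
    return version
```

The digest is what tethers the state’s approved copy to the URL: if the business changes its data at the source, the next sync yields a new version number.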

For businesses with calendars of public events, one kind of additional data would be those calendars. A while back I met with Steve Cook, deputy commissioner of Vermont’s department of tourism and marketing, to show him the Elm City “web of events” model. We discussed the central challenge: awakening event promoters to the possibility of using their own calendars as feeds that would flow directly into the statewide calendar. How do you light up those feeds? Steve got it. He pointed to another section of the building. “Those guys run the business registration site,” he said. “On the registration form, we already ask for the URL of a business’s home page. How hard would it be to also ask for a calendar URL if they have one?”

Exactly. And by asking for that URL, the state awakens the business to a possibility — authoritative self-publishing of data — that wouldn’t otherwise have occurred to it. This hasn’t yet happened in Vermont. But if Carl Malamud ever becomes Secretary of State in California I’ll bet it will happen there!

Crowdsourcing local data the right way

In How Google Map Hackers Can Destroy a Business at Will, Wired’s Kevin Poulsen sympathizes with local businesses trying to represent themselves online.

Maps are dotted with thousands of spam business listings for nonexistent locksmiths and plumbers. Legitimate businesses sometimes see their listings hijacked by competitors or cloned into a duplicate with a different phone number or website.

These attacks happen because Google Maps is, at its heart, a massive crowdsourcing project, a shared conception of the world that skilled practitioners can bend and reshape in small ways using tools like Google’s Mapmaker or Google Places for Business.

No, these attacks happen because Google Maps isn’t based on the right kind of crowdsourcing. The Wired story continues:

Google seeds its business listings from generally reliable commercial mailing list databases, including infoUSA and Acxiom.

Let’s back up a step. Where does infoUSA get its data? From sources like new business filings and company websites, and follow-up calls to verify the data.

Those calls shouldn’t be necessary. The source of truth should be an individual business owner who signs a state registration form and publishes a website. Instead, intermediaries govern what the web knows about that business. If that data were crowdsourced in the right way, it would flow directly from the business owner.

Here’s how that could happen. A state’s process for business registration asks for a URL. If data available at that URL conforms to an agreed-upon format, it populates the registration form. If the registration is approved, the state endorses that URL as the source of truth for basic facts about the business.

Of course the business might provide more information than the state can verify. That’s OK. The state’s website might only record and assure the name and address of the business, plus the URL at which additional facts — not verifiable by the state — are provided by the business owner. Those facts would include the hours of operation. The business owner is the source of truth for those facts. Changes made at the source ripple through the system.
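To make this concrete, here’s what the business-published document at that URL might look like. The format and field names are invented for illustration; the point is only that it’s a simple, agreed-upon structure the owner can maintain directly.

```json
{
  "name": "Don Pico's Mexican Bistro and Cevicheria",
  "address": "461 El Camino Real, San Bruno, CA",
  "phone": "+1-650-589-1163",
  "hours": [
    { "day": "Mon", "open": "11:00", "close": "21:00" },
    { "day": "Tue", "open": "11:00", "close": "21:00" }
  ]
}
```

The state verifies and endorses only the name and address; everything else, like the hours, is the owner’s to assert and update at will.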

The problem isn’t that information about local businesses is crowdsourced. We’re just doing it wrong.

Things in the era of dematerialization

As we clear out the house in order to move west, we’re processing a vast accumulation of things. This morning I hauled another dozen boxes of books from the attic, nearly all of which we’ll donate to the library. Why did I haul them up there in the first place? We brought them from our previous house, fourteen years ago. I could have spared myself a bunch of trips up and down the stairs by taking them directly to the library back then. But in 2000 we were only in the dawn of the era of dematerialization. You couldn’t count on being able to find a book online, search inside it, or have a used copy shipped to you in a couple of days for a couple of dollars.

Now I am both shocked and liberated to realize how few things matter to me. I joke that all I really need is my laptop, my bicycle, and my guitar, but in truth there isn’t much more. For Luann, though, it’s very different. Her cabinets of wonders are essential to who she is and what she does. So they will have to be a logistical priority.

In the age of dematerialization, some things will matter more than ever. Things that aren’t data. Things that are unique. Things made by hand. Things that were touched by other people, in other places, at other times. RadioLab’s podcast about things is a beautiful collection of stories that will help you think about what matters and why, or what doesn’t and why not.

Trails near me

I stayed this week at the Embassy Suites in Bellevue, Washington [1, 2]. Normally when visiting Microsoft I’m closer to campus, but the usual places were booked so I landed here. I don’t recommend the place, by the way, and not because of the door fiasco; that could have happened in any modern hotel. It’s the Hyatt-esque atrium filled with fake boulders and plastic plants that creeps me out. Also the location near the junction of 156th and Route 90. Places like this are made for cars, and I want to be able to hike and run away from traffic.

A web search turned up no evidence of running trails nearby. So I went down to the gym only to find people waiting in line for the treadmills. Really? It’s depressing enough to run on a treadmill; I’m not going to queue for the privilege. So I headed out, figuring that a run along busy streets is better than no run at all.

Not far from the hotel, on 160th, I found myself in a Boeing industrial park alongside a line of arriving cars. As I jogged past the guard booth a guy leaped out at me and asked for my badge. “I’m just out for a run,” I said. “This is private property,” he said, and pointed to a nearby field. “But I think there’s a trail over there.” I crossed the field and entered part of the Bellevue trail network. The section I ran was surfaced with gravel, with signs identifying landmarks, destinations, and distances. I ran for 45 minutes, exited into the parking lot of a Subaru dealership near my hotel, and congratulated myself on a nice discovery.

Later I went back to the web to learn more about the trails I’d run. And found nothing that would have enabled a person waiting in line for a treadmill at the Embassy Suites to know that, within a stone’s throw, there were several points of access to a magnificent trail system. The City of Bellevue lists trails alphabetically, but the name of the nearby Robinswood Park Trail had meant nothing to me until I found it myself. Nor did I find anything at the various trails and exercise sites that I checked — laboriously, one by one, because each is its own silo.

I knew exactly what I wanted: running trails near me. That the web didn’t help me find them is, admittedly, a first world problem. What’s more, I like exploring new places on foot and discovering things for myself. But still, the web ought to have enabled that discovery. Why didn’t it, and how could it?

The trails I found have, of course, been walked and hiked and cycled countless times by people who carry devices in their pockets that can record and publish GPS breadcrumbs. Some will have actually done that, but usually by way of an app, like Runtastic, that pumps the data into a siloed social network. You can get the data back and publish it yourself, but that’s not the path of least resistance. And where would you publish to?

Here’s a Thali thought experiment. I tell my phone that I want to capture GPS breadcrumbs whenever it detects that I’m moving at a walking or running pace along a path that doesn’t correspond to a mapped road and isn’t a path it’s seen before. The data lands in my phone’s local Thali database. When I’m done, the data just sits there. If there was nothing notable about this new excursion my retention policy deletes the data after a couple of days.
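The retention policy in that thought experiment could be as simple as this sketch. Everything here is hypothetical — the field names, the two-day window, and the function are mine, not Thali’s actual database API — but it models the behavior described above: keep a track only if it’s notable, shared, or still fresh.

```python
import datetime

# Hypothetical retention window: unremarkable tracks vanish after two days.
RETENTION = datetime.timedelta(days=2)

def apply_retention(tracks, now):
    """Drop unshared, unremarkable GPS tracks older than the window.

    `tracks` is a list of dicts with illustrative fields:
    'recorded_at' (datetime), 'notable' (bool), 'shared' (bool).
    """
    return [
        t for t in tracks
        if t["notable"] or t["shared"] or now - t["recorded_at"] <= RETENTION
    ]
```

A track I’ve flagged or shared persists; everything else quietly ages out of the phone’s local database.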

But maybe I want to contribute it to the commons, so that somebody else stuck waiting in line for a treadmill can know about it. In that case I tell my phone to share the data. Which doesn’t mean publish it to this or that social network silo. As Gary McGraw once memorably said: “I’m already a member of a social network. It’s called the Internet.”

Instead I publish the data to my personal cloud, using coordinates, tags, and a description so that search engines will index it, and aggregators will include it in their heat maps of active trails. Or maybe, because I don’t want my identity bound to those trails, I publish to an anonymizing service. Either way, I might also share with friends. I can do that via my personal cloud, of course, but with Thali I can also sync with them directly.

For now I have no interest in joining sites like Runtastic. Running for me is quiet meditation, I don’t want to be cheered on by virtual onlookers, or track my times and distances, or earn badges. But maybe I’ll change my mind someday. In that case I might join Runtastic and sync my data into it. Later I might switch to another service and sync there. The point is that it’s never not my data. I never have to download it from one place in order to upload it to another. The trails data lives primarily on my phone. Anyone else who interacts with it gets it from me, where “me” means the mesh of devices and personal cloud services that my phone syncs with. I can share it with my real friends without forcing them to meet me in a social network silo. And I can share it with the real social network that we call the web.

Turning it off and on again


In The Internet of Things That Used To Work Better I whined about rebooting my stove. This morning I was stuck outside a hotel room waiting for “engineering” to come and reboot the door. It eventually required a pair of technicians, Luis and Kumar, who jiggled and then replaced batteries (yes, it’s a battery-operated door), then attached two different diagnostic consoles. When they got it working I asked what the problem had been. They had no idea. “Hello, IT, have you tried turning it off and on again?” is the tagline for a civilization whose front-line technicians have no theory of operation. Will the door open when I return tonight? I have no idea. But at least now I know how to turn it off and on again.

How Thali could make the Smallest Federated Wiki even smaller

Thanks to my friend Mike Caulfield, an educational technologist who’s been digging into Ward Cunningham’s Smallest Federated Wiki, I’ve now got a much clearer idea of how SFW and Thali could play together and why they should.

Mike’s recent series on SFW is the best review and analysis of Ward’s newest creation that I’ve seen:

http://hapgood.us/2014/06/12/student-curation-in-smallest-federated-wiki/

http://hapgood.us/2014/06/11/letting-lots-of-people-host-your-stuff-in-their-collections-is-a-good-survival-strategy/

http://hapgood.us/2014/06/10/the-answer-to-project-based-work-in-moocs-is-federation/

I had dipped a toe into the SFW water but there’s a learning curve and Mike climbed it before I could. Today he jumpstarted me by setting me up with a node of an SFW federation he’s hosting on AWS. Here I am participating in a wiki federation with some friends in the ed-tech tribe. We are able to do this because Mike provisioned SFW instances for each of us.

What’s the Thali connection? Well, in the first few seconds of http://screencast.com/t/fRlahVd0EK5 you see Mike provisioning a node in a federation he’s hosting on AWS. That’s the minimum bar for SFW: you need an instance of the server. Most people can’t or won’t leap over that bar.

But the server’s a pretty small piece of the pie. Most of SFW runs in the browser. There’s a lot there, and it’s well-architected for growth.

A server implementation for Thali would enable lots more people to create and participate in Wiki federations, by running SFW on their own devices and syncing opportunistically with peers on friends’ devices. Since the existing Sinatra-based SFW is CouchDB-aware, Thali — based on Couchbase Lite — should provide a comfortable home.

Why would people want to use SFW? Mike’s posts and screencasts point to a world in which GitHub-like collaboration breaks out of the geek ghetto and becomes a natural way for all kinds of teachers and learners to collaborate.

Ward points to that possibility and others in a series of SFW screencasts at http://vimeo.com/channels/wiki. I’d seen a few; tonight I went back and watched the rest. Some highlights:

On forking and comparing

An inline calculator plugin (in 25 lines of CoffeeScript!)

Visualization of in-page data

These demos really capture the idea of the universal canvas (http://www.infoworld.com/d/developer-world/we-need-universal-canvas-doesnt-suck-130) that I’ve dreamed of for a long time.

My 2006 InfoWorld article said, by the way,

Here’s the best definition of the universal canvas: ‘Most people would prefer a single, unified environment that adapts to whichever environment they are working in, moves transparently between local and remote services and applications, and is largely device-independent — a kind of universal canvas for the Internet Age.’

You might expect to find that definition in a Google white paper from 2006. Ironically, it comes from a Microsoft white paper from 2000, announcing a “Next Generation Internet” initiative called .NET.

You never know how things will turn out.

Mapping the decentralization movement

“Right now we’re experiencing a moment of maximum centralization,” says Scott Rosenberg in his introduction to a new effort that combines “a tech-industry beat I will cover; a cultural investigation and conversation I will undertake; and a personal-publishing venture I am kicking off now.”

We’ve been here before. The Internet was a peer-to-peer network until it wasn’t. Likewise the Web. Some have forgotten, and most never knew, that Tim Berners-Lee’s original browser could write and publish as well as read pages. By the early 2000s the pendulum had swung so far toward centralization that, as it began to swing back, we called the “two-way web” one of the pillars of “Web 2.0.” Personal publishing flourished for a while, then the pendulum swung again toward centralized social media. If Scott’s right, and I hope he is, the pendulum is about to swing back toward a more distributed Web.

Thali is one project moving in that direction; there are many others. When we compared notes with Jeremie Miller the other day, he pointed us to a long list of fellow travelers. Another observer, Doc Searls, periodically issues updates with pointers to related (and some of the same) efforts.

It behooves all of us to sort out how these efforts are similar or different along various axes. Some are peer-to-peer, others not. Some bind identities to public keys, others don’t. Some skew toward messaging and social networking, others toward bulk data exchange or publishing. Some consider themselves personal data stores, others don’t. Many are “friend-to-friend” networks with peer-to-peer trust models, some aren’t. There are platforms, protocols, overlay networks, and apps in the mix.

In order to reason about these axes of comparison I loaded up a bunch of links into Pinboard, made a common tag (redecentralize) to unite all the links related to this exploration, and began tagging. Here’s what I’ve got at https://pinboard.in/u:judell/t:redecentralize/ so far:

What else belongs on this list? What are core attributes? What are the best axes along which to compare? The tag cloud is suggestive but it’s only my lens on the list; I’d love to see other lenses applied to the same (evolving) list.

Note that with Pinboard (as with del.icio.us long ago) such lenses can be applied — and in a decentralized way! You could import my redecentralize feed into your own Pinboard account and tag the links according to your world view. We could compare one another’s views, and see a combined view at https://pinboard.in/t:redecentralize/. While that’s a very cool way to do collaborative mind-mapping, it’s not likely to happen in this case. But comments here (or elsewhere) will be welcome.

A world without hearsay

If you received email from me in the early 2000s, it would have arrived with an attachment I routinely added to my messages. The attachment was my digital signature, the output of an algorithm that combined my message with the private half of my cryptographic key pair. If you had acquired my public key as part of a prior communication, and if your email client supported the protocol, you were assured that the message had been “signed” by me. Since those two conditions rarely applied, though, you were more likely to be puzzled or annoyed.
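For the curious, the mechanics look like this. Here’s a toy, textbook-RSA sketch — tiny numbers, wholly insecure, nothing like a real PGP or S/MIME implementation — just to show the shape of sign and verify: only the holder of the private exponent can produce the signature, but anyone with the public key can check it.

```python
import hashlib

# Textbook RSA parameters (toy-sized, NOT secure).
p, q = 61, 53
n = p * q            # public modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent: (17 * 2753) % 3120 == 1, where 3120 = (p-1)*(q-1)

def sign(message: bytes) -> int:
    # Hash the message, then "encrypt" the hash with the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)          # only the key holder can compute this

def verify(message: bytes, signature: int) -> bool:
    # Anyone with (n, e) can check: does the signature decrypt to the hash?
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h
```

Real systems use far larger keys and padding schemes, but the asymmetry is the whole point: a valid signature proves the message came from the keyholder, which is exactly the property I was trying to give my email.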

Why did I do this? As I look back, I think it had a lot to do with my experience in the early blogosphere. Back then blogs didn’t support comments. To comment on something you wrote, I’d write a blog post referring to your post. You could reply in the same way. The conversational thread didn’t exist in any one place, but in practice links wove the discussion together well enough. And because all our writing appeared on our own blogs, we owned our words. Discourse was typically much more civil than in discussion forums, or in the comment areas that blogs later evolved.

I liked the idea that my digital output was bound in some way to my identity. In the case of blogging, that identity was associated with a website that I controlled. Why not extend that idea to email? In that case, my identity was associated with a key pair, the private half of which I controlled. Signing messages was a way to say: “I own and stand behind these words.” And to say: “You should distrust a message ‘from’ me that isn’t properly signed.”

When I abandoned my digital signature experiment I chalked it up to a failure of technology adoption. It was, I thought, a good idea that never took off because people didn’t understand why it was a good idea, or because popular software didn’t make it accessible enough.

Now a hugely popular email program, Gmail, is about to make the idea more accessible than it’s ever been. Google is preparing a Chrome extension called End-to-End for encrypting (and signing) email[1]. I should rejoice! But Yaron Goland thinks otherwise. He argues here that routinely binding our identities to our messages is a really bad idea.

Imagine, for a moment, what your world would look like if every time you had a conversation with someone a permanent record was made of the conversation. The record would be fully authenticated and suitable for use in the court of public opinion and/or law.

In this world our everyday lives, our conversations, our exchanges, with anyone about anything become little permanent records that follow us around forever.

This is exactly the world we create with technologies like S/MIME and PGP Mail. Or, more generally, the world we create when we use digital signatures. A digital signature is intended to be an authenticator, a way for someone other than us to prove that we did/said something. When we use digital signatures for momentous things that should be on the public record, like mortgage documents perhaps, then they serve a good purpose. But with PGP Mail we suddenly sign… well… everything[2]. It’s like having a notary public walking behind you all day long stamping every statement, note, mail, etc. as provably and irrevocably yours.

I don’t think we want such records to exist. I think we want a much more ephemeral world where the bulk of what we do just quietly vanishes into the ether leaving as little of a trail as possible. The open source experiment I’ve spent the last year or so working on (and why I haven’t been blogging much, I’ve been insanely busy) is called Thali and we are trying to build that ephemeral world.

Yaron calls the dystopian vision he conjures “a world without hearsay” and Thali rejects it. When you communicate with a Thali peer, you and the peer strongly authenticate one another. But the data you exchange bears no trace of those identities. At least not by default. Thali applications will, of course, often need to add markers of identity to the documents they exchange. But the Thali system won’t do that automatically.

If email were exchanged directly among peers, rather than through relays, then I might never have felt the need to bind identity to individual messages. Since email travels through relays, though, I would still like to assure you that email “from” me really is from me, as well as protecting it from the prying eyes of intermediaries and servers. But I find Yaron’s argument persuasive. The potential harm to me may outweigh the benefit to you.


[1] End-to-End isn’t, by the way, a Gmail-only thing. You can use it in any text entry field in the Chrome browser to compose a signed/encrypted message. I don’t use Gmail but was able to use End-to-End to send a protected message from Outlook.com. That entails copying and pasting though, which presumably won’t be necessary if End-to-End is integrated into Gmail.

[2] Signing and encryption aren’t necessarily joined at the hip. Depending on how the technologies are implemented, it may be possible to sign without encrypting, encrypt without signing, or sign and encrypt. I tried the End-to-End extension and found that it doesn’t do bare signatures but does encrypt with or without signatures. So you could use it to protect messages without binding your identity to them.

Jeremy Dorn’s excellent JSON forms editor

The lingua franca for web data is a format called JSON (JavaScript Object Notation). It’s easier for people and machines to read and write than its predecessor, XML. And because JSON is native to JavaScript, it’s a natural choice for a web increasingly powered by JavaScript code.

Like XML before it, though, JSON is only easy for some people to read and write: programmers. Most people need their interaction with JSON data to be mediated by HTML forms. As you’d expect, there are a bunch of JavaScript libraries that can do that mediation. Until recently, though, I hadn’t found one that felt right.

There is an embarrassment of riches. In any category of open source software where that’s true, evaluating the choices becomes an interesting challenge. Some useful criteria:

- Activity. Recent activity (commits, discussion, pull requests) by one or more contributors sends a strong positive signal.

- Demo. A live online demo, when feasible and if comprehensive, can help you decide whether to invest in exploring the project more deeply.

- Documentation. Clear, concise, and complete documentation also helps you decide whether to investigate more deeply.

- Size. Other things being equal, less code is more.

- Self-sufficiency. Other things being equal, less dependency on other code is more.

In my search for a JSON forms editor, these criteria led me to Jeremy Dorn’s json-editor. It’s small, self-contained, and wonderfully capable. And the demo is spectacular. Using it, I learned everything I needed to know about the tool, and verified that it met all my requirements. Because json-editor supports JSON schema, I was even able to prototype the data structures I was planning to use and then interact with them directly in the live demo.
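To give a flavor of that workflow: you hand json-editor a JSON Schema and it renders the form. This schema is a made-up example of the kind of data structure I mean, not one from my actual project:

```json
{
  "type": "object",
  "title": "Business listing",
  "properties": {
    "name": { "type": "string" },
    "url": { "type": "string" },
    "hours": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "day": { "type": "string", "enum": ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"] },
          "open": { "type": "string" },
          "close": { "type": "string" }
        }
      }
    }
  }
}
```

Paste a schema like this into the live demo and you immediately get a working editor for it — which is how I prototyped and tested my own structures.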

Well done, Jeremy! You have not only created an excellent software tool. You have also exemplified how to present such a tool in a way that enables people to evaluate it quickly and thoroughly, then (if it meets the need) adopt it easily.

Everything is amazing and I am grateful

Last night I hung out with friends who hadn’t heard Louis CK’s profound rant, everything’s amazing and nobody’s happy. The central character is a guy experiencing WiFi on one of the first flights to offer it. He’s online at 30,000 feet, among the first ever to watch YouTube videos while hurtling across the sky. Then the service fails. “This is bullshit,” the guy gripes. Louis CK: “Really? The world owes you something you didn’t know existed 5 minutes ago?”

Today I’m on a flight from San Francisco to Boston. I just found out that a glitchy podcatcher failed to download the podcasts I was going to listen to for the next few hours. And, oh no! This isn’t a WiFi flight! The hipster response: “This sucks.” But I don’t feel that way. I’ll listen to those episodes of the Long Now Seminars and KUOW Speakers Forum some other time. (Perhaps, amazingly, streamed on demand to my phone while I’m out hiking with the dogs.) For now, instead, I am watching the blue dot on my phone’s map creep across Lake Tahoe. Which, if I incline my head to the right and look down, is visible directly below. And I am wishing my dad could be here to experience this miracle. He would have wept tears of joy.

Gene Udell was a Navy pilot, he flew the amphibious PBY Catalina, he understood the principles of flight and navigation. But he never took any of it for granted. Whenever we were in an airport, he’d look out at the jets on the concourse and say: “I know how it all works. But I still have a hard time believing those things can fly.”

It was the same when the Internet came along. He understood, roughly, how it worked. But it constantly amazed him. To have lived long enough to have such an experience was for him a privilege and an endless source of wonder and delight.

(What’s that town down there? Oh, Elko, Nevada.)

Was it always this way? I don’t think so. We learned how to make fire a long time ago but we’re still delighted to be able to do it today. Why do the newest and most advanced technologies provoke such ingratitude?

Maybe it’s because the newest and most advanced stuff builds on more layers of supporting technologies than we want to think about. Or maybe it’s that things evolve so quickly that we habituate and crave the next jolt of novelty. For one reason or another, our feeling of wonder gives way to a feeling of entitlement.

But dad, I’m here to tell you, it’s amazing. I wish you could be here sitting next to me watching that blue dot crawl across the map on my phone’s screen, identifying what we can see below. It would have made you so happy. And I promise you this: I will never take it for granted.

Can we tether email to “the truth”?

“I wish we had trackback for emails.” – Robert Scoble, circa 2006

My source for that quote is Jeff Sandquist, who hired both Robert Scoble and me to work at Microsoft. We are a company with a deeply-rooted email culture. Robert was bemoaning the lack of peripheral awareness that blogging culture had taught him to appreciate. In the blogosphere, as in later forms of social media, you are (mostly) guaranteed to discover responses to things you have written. In email culture there is no such guarantee, and that’s a bug.

I’m always on the lookout for ways to collaborate in shared spaces, and lately I’ve been getting good mileage out of Office 365 and OneDrive for Business. It’s a combination of hosted SharePoint, lightweight web apps, full-strength Windows apps, and sync among my various PCs, tablets, and phone. The pieces have come together in a way that reminds me of the excitement I felt when I first began experimenting with what we once called groupware.

But where email culture runs deep, it’s a challenge to build bridges between inboxes and shared spaces. Yesterday, for example, I sent an email with links to documents in my shared space on Office 365. One respondent advised me to use attachments rather than links. Another argued that links are superior because the reader is guaranteed to get the latest version. Both make valid points. You don’t want to hit a dead link if you’re reading email offline. But you don’t want to read a stale document if you’re online.

Why not do both? Use a link that resolves as an attachment when the email is sent, but retains its identity as a link. If the email is read offline, it functions as an attachment. But if the email is read online, it can function as a link too. Benefits include:

- If the document changed, the reader gets the current version

- If the document didn’t change, the reader knows it didn’t

- When the link resolves, the author sees a trackback
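The mechanism could be as simple as this sketch. Everything here is hypothetical — a real mail client would package the snapshot as a MIME part and the trackback would happen server-side — but it shows the idea: a snapshot plus a content digest, so the reader works offline and detects staleness online.

```python
import hashlib

def make_hybrid_attachment(url, snapshot_bytes):
    """Bundle a link with a snapshot of the document taken at send time.

    The digest lets an online reader detect whether the canonical copy
    at `url` has changed since the mail was sent. (Illustrative sketch.)
    """
    return {
        "href": url,
        "snapshot": snapshot_bytes,
        "digest": hashlib.sha256(snapshot_bytes).hexdigest(),
    }

def resolve(part, fetch=None):
    """Offline (fetch is None): behave like an attachment.
    Online: behave like a link, reporting whether the document changed."""
    if fetch is None:
        return part["snapshot"], None
    current = fetch(part["href"])   # fetching is also the trackback moment
    changed = hashlib.sha256(current).hexdigest() != part["digest"]
    return current, changed
```

When `resolve` fetches the canonical copy, the author’s server sees the request — and that request is the trackback.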

More broadly this idea reflects one of the core tenets of Thali. In distributed systems with copies of things floating around, there ought to be a canonical instance of each thing. “We call it the truth,” says Thali’s creator Yaron Goland. To the extent possible, we all need to be the source of truth for our own stuff, and we need to hold it as closely as we can.

Joint custody of data

Benjamin Mako Hill has long hosted his own email server. In Google Has Most Of My Email Because It Has All Of Yours, he rethinks that strategy after this conversation:

A few years ago, I was surprised to find out that my friend Peter Eckersley — a very privacy conscious person who is Technology Projects Director at the EFF — used Gmail. I asked him why he would willingly give Google copies of all his email. Peter pointed out that if all of your friends use Gmail, Google has your email anyway. Any time I email somebody who uses Gmail — and anytime they email me — Google has that email.

Benjamin goes on to analyze his email archive and arrives at this sobering conclusion:

Despite the fact that I spend hundreds of dollars a year and hours of work to host my own email server, Google has about half of my personal email!

How could we manage our hosted lifebits in a way that enables our bits to commingle without loss of control? It’s easy in principle, though hard in practice. Here’s the easy-in-principle approach. An email is not a bag of bits that I send to you. It’s a bag of bits that I park in my own personal cloud, which is a cloud service that I trust, and/or a set of devices I own. I don’t send you the bits, I send you a link. Access to the bits, via that link, is governed by permissions I set.

You, conversely, authorize me to follow links that invite me to access your messages and replies. We both end up with archives of our conversational threads. Yes, of course, there’s nothing to prevent either of us from violating trust and sharing those threads with the world. But there’s no intermediary, we communicate directly, and we have joint custody of our mutual data in what Groove called a shared space.
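The easy-in-principle mechanism can be modeled in a few lines. This is a toy, with invented names and plain strings standing in for cryptographic keys; the point is only that the bits never leave the sender's store, and the link grants access solely to keys the sender authorized.

```python
# Toy model of "joint custody" messaging: bits stay in the sender's
# personal cloud; a mailed link grants access only to authorized keys.
class PersonalCloud:
    def __init__(self):
        self.store = {}   # message id -> bits
        self.acl = {}     # message id -> set of authorized keys

    def post(self, msg_id, bits, authorized_keys):
        self.store[msg_id] = bits
        self.acl[msg_id] = set(authorized_keys)
        return f"mycloud://{msg_id}"          # the "link" we mail out

    def fetch(self, link, requester_key):
        msg_id = link.split("://", 1)[1]
        if requester_key not in self.acl.get(msg_id, set()):
            raise PermissionError("not authorized")
        return self.store[msg_id]

alice = PersonalCloud()
link = alice.post("msg1", b"hello bob", {"bob-key"})
```

Bob, holding an authorized key, can follow the link; anyone else is refused. No intermediary ever holds the bits.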

There are, of course, all sorts of reasons why this is hard-in-practice and may never happen. But are they good reasons?

Multi-persona architectures, then and now

Thali is, among other things, a powerful reminder of just how far ahead of the curve Groove was back in 2000. The other day I spoke with Omer Eiferman and Oren Ladaan about Cellrox, an isolation technology for Android that virtualizes the operating system’s kernel for multiple user spaces. It’s aimed at the BYOD (bring your own device) business market and driven by IT security. IT doesn’t, for example, want Facebook commingling with the corporate email client. But where end-user privacy is becoming paramount, especially in Europe, there’s grassroots demand as well. “You can’t put Facebook on a Blackphone,” says Omer Eiferman, “and you can’t swipe it at Starbucks to buy a latte.”

Each virtualized compartment is a configurable persona. One might run only corporate apps, another only Facebook. If a keylogger found its way into the Facebook persona, it would not be able to eavesdrop on the corporate persona. Conversely, users’ private personae can be configured without corporate MDM (mobile device management) controls.

Where had I heard this before? Groove. For a chapter on Groove security in the O’Reilly Peer to Peer book, I did extensive interviews with Ray Ozzie and his security team. Groove’s strong multi-persona architecture was one of its underappreciated features. Your personal, business, and gaming personae were cryptographically walled off from one another. It wasn’t obvious, to most people at the time, why that would matter. Now it starts to make sense.

Fellow travelers: Thali and telehash

Thali isn’t the only software project that wants to connect people and devices securely and directly. One of our fellow travelers is telehash, which Jeremie Miller describes as “a secure wire protocol powering a decentralized overlay network for apps and devices.” I caught up with Jeremie yesterday on a talky.io video chat to compare notes.

Jeremie’s roots as a networking innovator run deep. In 1999 he launched Jabber (now XMPP) along with the first Jabber server. Then came The Locker Project, a personal data store based on a vision of ownership and control that also guides Thali and other fellow travelers.

The Locker Project focused on data, expecting the right mechanisms for connecting lockers and exchanging the data would arrive. Telehash wants to hasten that arrival. And it’s ambitious. The goal, Jeremie says, is a networking stack that supports always-secure peer-to-peer networking over any available transport — Wi-Fi, 3G/4G, BlueTooth, you name it — and that uses local discovery to find the path of least resistance.

“Networking at the edge is blossoming,” Jeremie says, “there’s crazy growth that isn’t yet widely recognized.” What I’m hearing from potential Thali developers aligns with that perception. The cloud-first pattern dominates, and for many good reasons, but people are noticing that the devices on their desks and in their pockets are equipped not only with ever more powerful processors and capacious storage, but also with ever more robust (and diverse) network pipes. Those pipes connect us to the cloud. They also can and will connect us directly.

Could Thali use telehash? In theory, yes. Both use mutual authentication, both bind user identities to self-asserted public keys. Thali for now builds upon existing TLS machinery. Telehash aims to become an alternative to TLS that’s simpler, more flexible, and built from the ground up for decentralized use. For now we travel parallel roads but we would happily see them converge.

The P in P2P is People

When Groove launched somebody asked me to explain why it was an important example of peer-to-peer technology. I said that was the wrong question. What mattered was that Groove empowered people to communicate directly and securely, form ad-hoc networks with trusted family, friends, and associates, and exchange data freely within those networks. P2P, although then much in vogue — there were P2P books, P2P conferences — wasn’t Groove’s calling card, it was a means to an end.

The same holds true for Thali. Yes it’s a P2P system. But no that isn’t the point. Thali puts you in control of communication that happens within networks of trust. That’s what matters. Peer networking is just one of several enablers.

Imagine a different kind of Facebook, one where you are a customer rather than a product. You buy social networking applications, they’re not free. But when you use those apps you are not in an adversarial relationship with a social networking service. You (along with your trusted communication partners) are the service, and the enabling software works for you.

Thali, at its core, is a database that lives on one or more of your devices and is available to one or more apps running on those devices. Because you trust yourself you’ll authorize Thali apps to mesh your devices and sync data across that mesh. The sync happens directly, without traveling through a cloud relay, and is always secured by mutual SSL authentication. You can, of course, also push to the cloud for backup.
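What mutual SSL authentication means, concretely, is that both ends of the connection must present a certificate the other side trusts. Here's a minimal sketch using Python's standard library; the commented-out file names are placeholders, and this is an illustration of the general technique, not Thali's actual code.

```python
import ssl

def make_peer_context(server_side=True):
    proto = ssl.PROTOCOL_TLS_SERVER if server_side else ssl.PROTOCOL_TLS_CLIENT
    ctx = ssl.SSLContext(proto)
    # Require the other side to present a certificate too -- that's the
    # "mutual" in mutual SSL authentication. (Servers default to not
    # asking for a client certificate at all.)
    ctx.verify_mode = ssl.CERT_REQUIRED
    # In a real deployment you would also load an identity and a trust root:
    #   ctx.load_cert_chain("me.pem", "me.key")
    #   ctx.load_verify_locations(cafile="trusted-peers.pem")
    return ctx

server_ctx = make_peer_context(server_side=True)
```

With self-asserted keys, the trust root is simply the set of keys you've exchanged with people you trust, rather than a commercial certificate authority.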

Communicating with other people happens the same way. You exchange cryptographic keys with people you trust, you authorize them to see subsets of the data on your mesh of devices, and that data syncs to their device meshes. The default P2P mode means that you don’t depend on a cloud relay that wants access to your data in exchange for the service it provides.

For cloud services that don’t monetize your data, by the way, Thali delivers a huge benefit. Apps like Snapchat and Chess with Friends incur bandwidth costs proportional to their user populations. If users can exchange photos and gameplay directly, those costs vanish. And there’s no penalty for the user. Sending your photos and chess moves directly costs you no more than sending through the cloud.

But the key point is one that Dave Winer made back when P2P was in vogue: the P in P2P is people. With handheld computers (we call them phones) more powerful than the servers of that era we are now ready to find out what a people-to-people web can be.

Shiny old things

We’ve lived in New England for 25 years. It’s been a great place to raise a family but that’s done, so we’re moving to northern California. The key attractors are weather and opportunity.

Winter has never been our friend, and if we had needed convincing (we didn’t) the winter of 2013-2014 would have done it. I am half Sicilian, my happy place is 80-degree sunshine, I am not there nearly enough. Luann doesn’t crave the sun the way I do, but she’s ready to say goodbye to icy winters and buggy summers.

The opportunity, for Luann, revolves around her art. Ancient artifacts inspired by the Lascaux cave are not exactly in tune with the New England artistic sensibility. We think she’ll find a more appreciative audience out west.

For me it’s about getting closer to Seattle and San Francisco, the two poles of my professional life. Located between those two poles I’ll still be a remote employee, but I’ll be a lot less remote than I am here. That matters more than, until recently, I was willing to admit.

Earthquakes don’t worry me too much. I was in San Jose for the ’89 Loma Prieta quake. We were at an outdoor poolside meeting, heard it rumble toward us, watched the ground we had thought solid turn to liquid, got soaked by the tidal wave that jumped out of the pool, heard it rumble away. What impressed me most was the resiliency of the built environment. Given what I heard and saw I’d have expected much more to have broken than did.

What does worry me, a bit, is the recent public conversation about ageism in tech. I’m 20 years past the point at which Vinod Khosla would have me fade into the sunset. And I think differently about innovation than Silicon Valley does. I don’t think we lack new ideas. I think we lack creative recombination of proven tech, and the execution and follow-through required to surface its latent value.

Elm City is one example of that. Another is my current project, Thali, Yaron Goland’s bid to create the peer-to-peer web that I’ve long envisioned. Thali is not a new idea. It is a creative recombination of proven tech: Couchbase, mutual SSL authentication, Tor hidden services. To make Thali possible, Yaron is making solid contributions to Thali’s open source foundations. Though younger than me, he is beyond Vinod Khosla’s sell-by date. But he is innovating in a profoundly important way.

Can we draw a clearer distinction between innovation and novelty? That might help us reframe the conversation about ageism in tech.

The next thing

The Elm City project was my passion and my job for quite some time. It’s still my passion but no longer my job. The model for calendar syndication that I created is working well in a few places, but hasn’t been adopted widely enough to warrant ongoing sponsorship by my employer, Microsoft. And I’ll be the last person to complain about that. A free community information service based on open standards, open source software, and open data? Really? That’s your job? For longer than anyone could reasonably have expected, it was.

So now I’m on to the next project, one that you might think even more unlikely for a Microsoft employee. I’m helping Yaron Goland create something we are both passionate about: the peer-to-peer Web. Yaron’s project is called Thali, and I’ll say more about it later.

But first I want to sum up what I’ve learned from the Elm City effort.

The elevator pitch for Elm City is short and sweet. It’s RSS for calendars. That implies a pub/sub network based on a standard exchange format, in this case iCalendar. And an ecosystem of interoperable software components. And layered on top of that, an ecosystem of cooperating stakeholders.

On the interop front iCalendar doesn’t fare as well as you’d expect, given that it’s been around since 1999 and is baked into calendar software from Google, Microsoft, and Apple (among many others) that’s used every day by hundreds of millions of people. Why is interop still a problem? Because while in theory people and organizations can form iCalendar-based pub/sub networks, in practice few ever try. So iCalendar feeds don’t interoperate nearly as well as you’d expect.

One of the legacies of Elm City is the iCalendar Validator, inspired by the RSS/Atom feed validator and implemented by Doug Day. It has helped developers iron out some of the interop wrinkles. But the truth is that iCalendar itself isn’t the problem. It’s implemented well enough, in a wide variety of calendar app and services, to enable much more and much better synchronization of public calendars than we currently enjoy. The iCalendar ecosystem has issues but that’s not why the robust calendar networks I envision don’t exist in every city and town.
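To give a flavor of the structural checks a validator makes: iCalendar files are built from BEGIN:/END: blocks that must nest properly. Here's a toy version of that one check; the real iCalendar Validator of course checks far more (required properties, date formats, encodings).

```python
# Toy structural check: BEGIN:/END: blocks in an iCalendar file
# must nest and match, like parentheses.
def check_nesting(ics_text):
    stack = []
    for line in ics_text.splitlines():
        if line.startswith("BEGIN:"):
            stack.append(line[6:])
        elif line.startswith("END:"):
            if not stack or stack.pop() != line[4:]:
                return False
    return not stack

good = "BEGIN:VCALENDAR\nBEGIN:VEVENT\nEND:VEVENT\nEND:VCALENDAR"
bad = "BEGIN:VCALENDAR\nEND:VEVENT"
```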

It’s the stakeholder ecosystem that never came together. Here are the dramatis personae:

  • Local groups and organizations
  • Media (especially newspapers)
  • State and local governments
  • Non-profits and foundations
  • Vendors of content management systems

I’ve worked with each of them separately. But no one kind of stakeholder can push the Elm City model over the top. That will require collaboration, in cities and towns, among stakeholders. Which, as I’m hardly the first to learn, is a tough sell. I hope somebody smarter than me can figure that out. Maybe that will even be a smarter future version of myself. But meanwhile, I’ll be supporting Yaron Goland’s mission to enable a web of people and devices that communicate directly and securely.

Circular progress

Back when progress bars were linear, not circular, there was an idea that browser-based apps could be written in more than one programming language. One implementation of that idea was called ActiveX Scripting, which was supported by Internet Explorer (and other Windows apps). Of course the ActiveX moniker turned out to be inauspicious on the Web. But let’s recall, for a moment, what the essential idea was. The browser was equipped with an interface that enabled it to work with any scripting engine. I remember playing with a demo browser app that fetched and displayed data three different ways: using JavaScript, VBScript, and Perl. That was in, I think, 1997.

Today you can write a browser-based app in any language you choose, so long as you choose JavaScript. Which, like any programming language, is capable of amazing things. My current favorite example is Adrian Holovaty’s new Soundslice player. Here’s my 2012 writeup on Soundslice. It began as a fabulous tablature-based tool used to annotate and study music for string instruments. Now, with support for standard music notation, it’s becoming a general tool that will (I hope) revolutionize music education.

When he announced the new player, Adrian said:

HTML5 FTW! Screw native apps and their walled gardens.

It’s ironic that this liberation has been achieved by creating another kind of walled garden. Adrian is, after all, the creator of Django, a popular framework for server-based Web apps. Django is written in Python, a language with which Adrian has deep expertise, none of which could be leveraged in the creation of Soundslice.

But progress is circular. Maybe we’ll come back around to the idea that JavaScript need not be the only game in town.

What is a public information officer?

If you’re a public information officer, what do you do? According to Wikipedia:

Public Information Officers (PIOs) are the communications coordinators or spokespersons of certain governmental organizations (i.e. city, county, school district, state government and police/fire departments). They differ from public relations departments of private organizations in that marketing plays a more limited role. The primary responsibility of a PIO is to provide information to the media and public as required by law and according to the standards of their profession. Many PIOs are former journalists, bringing unique and relevant experience to the position. During crises and emergencies, PIOs are often identified by wearing helmets or vests with the letters “PIO” on them.

I have a different idea about what the job (in larger cities and states) or role (in smaller cities and towns) should be. Not only, or even mainly, a spokesperson. Rather, a mentor and coach, helping people, groups, and organizations become better online communicators. And not only, or mainly, those in government. In a city that thinks like the web every public-facing information resource will be bound to its creator’s online identity and linkable into other contexts.

The PIO’s measure of success won’t be the number of documents posted to the city website, or the number of pageviews they draw. It will be the degree to which public-facing entities — government of course, but also schools, hospitals, newspapers, churches, downtown merchants, sports leagues, environmental groups, and many others — properly manage and interconnect their own online spaces. Why? Because a shared understanding of how (and why) to do that will make the city a better place to live and a more attractive place to visit or migrate to.

It’s time to engineer some filter failure

The problem isn’t information overload, Clay Shirky famously said, it’s filter failure. Lately, though, I’m more worried about filter success. Increasingly my filters are being defined for me by systems that watch my behavior and suggest More Like This. More things to read, people to follow, songs to hear. These filters do a great job of hiding things that are dissimilar and surprising. But that’s the very definition of information! Formally it’s the one thing that’s not like the others, the one that surprises you.

So I’m always on the lookout for ways to defeat the filters and see things through lenses other than my own. On Facebook, for example, I stay connected to people with whom I profoundly disagree. As a tourist of other people’s echo chambers I gain perspective on my native echo chamber. Facebook doesn’t discourage this tourism, but it doesn’t actively encourage it either.

The other day an acquaintance posted a link to an article about a hot topic on which we disagree. Knowing my view, Facebook injected a link to an article that confirms it. There are two related problems here. First, in this context I don’t want Facebook to show me what it thinks is related to my view. I want to know more about the evidence that supports the opposing view, and the way in which my acquaintance’s thinking is informed by that evidence. That’s why I maintain the connection! I want to empathize with and understand The Other.

When I polled participants in the thread, I learned that nobody else saw the link that was suggested to me. That’s the second problem. If I hadn’t checked I might have assumed that Facebook was brokering a connection among echo chambers. That would have been cool but it’s not what actually happened.

As I think back on the evolution of social media I recall a few moments when my filters did “fail” in ways that delivered the kinds of surprises I value. Napster was the first. When you found a tune on Napster you could also explore the library of the person who shared that tune. That person had no idea who I was or what I’d like. By way of a tune we randomly shared in common I found many delightful surprises. I don’t have that experience on Pandora today.

Likewise the early blogosphere. I built my echo chamber there by following people whose lenses on the world complemented mine. For us the common thread was Net tech. But anything could and did appear in the feeds we shared directly with one another. Again there were many delightful surprises.

Remember when people warned us about the tyranny of The Daily Me? They were right, it’s happening big time. Of course it’s easy to escape The Daily Me. Try this, for example. Dump all your regular news sources and view the world through a different lens for a week. If you’re part of the US news nexus, for example, try Al Jazeera. It’s just a click away.

But that click isn’t on the path of least resistance. Our filters have become so successful that we fail to notice:

- We don’t control them

- They have agendas

- They distort our connections to people and ideas

I want my filters to fail, and I want dials that control the degrees and kinds of failures.

Names that mean things, names that do things

In Turing’s Cathedral: The Origins of the Digital Universe, George Dyson says of the engineers and mathematicians who birthed computing:

By breaking the distinction between numbers that mean things and numbers that do things, they unleashed the powers of coded sequences, and the world would never be the same.

Consider the number 30 stored in a computer. It can mean something: how many dollars in a bank account, how many minutes a meeting will last. But it can also do something, by representing part of a sequence of instructions that updates the amount in the bank account, or that notifies you when it’s time to go to the meeting. Depending on context, the same number, in the same memory location, can mean something or it can do something.
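The duality is easy to demonstrate with a toy stack machine. The opcode numbering below is arbitrary; what matters is that the very same number, 30, is an instruction in one slot of the program and plain data (the minutes a meeting will last) in the next.

```python
# Toy stack machine: the same number can mean something or do something.
PUSH, ADD, HALT = 30, 31, 32   # numbers that do things

def run(program):
    stack, pc = [], 0
    while True:
        op = program[pc]
        if op == PUSH:                     # here 30 acts as an instruction...
            stack.append(program[pc + 1])
            pc += 2
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == HALT:
            return stack.pop()

# ...and here 30 is plain data: minutes pushed onto the stack.
result = run([PUSH, 30, PUSH, 12, ADD, HALT])
```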

Deep inside the computer there are only numbers. Out here on the Net we humans prefer to operate in terms of names. Happily it turns out that names can exhibit the same magical duality. That’s particularly true for the special class of names we call Uniform Resource Locators (URLs).

In a 1997 keynote talk Andrew Schulman put up a slide that contained just a URL:

http://wwwapps.ups.com/tracking/tracking.cgi?tracknum=1Z742E220310270799

“Think about what this means,” he said. “Every UPS package has its own home page on the web!”

Also, potentially, every bank transaction, every calendar appointment, every book (or paragraph within every book), every song (or passage or track within every song), every appliance (or component within every appliance). If we needed to, we could create URLs for grains of sand, each as compact and easy to exchange as Andrew Schulman’s UPS URL. The supply of web names is inexhaustible, and the universe of their meaning is unbounded.

But these names don’t only mean things. They also do things. The URL of a UPS package does more than merely refer to the package with a unique identifier (though that’s miraculous enough). It also engages with the business process surrounding that package, drawing together status information from a network of cooperating systems and enabling you to interact with those systems.

It takes a while for the implications of all this to sink in. It’s seventeen years since Andrew’s epiphany, you’d think I would have adjusted to it by now, but I’m still constantly surprised and delighted by unanticipated consequences.

Consider this tweet from Etsy’s CTO Kellan Elliot-McCrea:

My new favorite pick me up, searching Twitter for “congrats”, scoped to folks I follow https://twitter.com/search?q=congrats&f=follows [1]

What does Kellan’s URL mean? The set of tweets, from the (currently) 1099 people that Kellan follows on Twitter, that include the word “congrats” — information that brings happiness to Kellan.

What does Kellan’s URL do? It activates a computation, inside Twitter’s network of systems, that assembles and displays that information.

Both the meaning and the doing are context-specific in several ways. In the temporal domain, each invocation of the URL yields newer results. In the social domain, Kellan’s invocation queries the 1099 people he follows, mine queries the 1046 I follow, yours will query the population you follow.
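Inventing such a name requires nothing more than composing query parameters. Here's how Kellan's URL can be reconstructed with the standard library; the `q` and `f` parameters are the ones visible in his link.

```python
from urllib.parse import urlencode

def scoped_search(term, scope="follows"):
    # Build a Twitter search URL scoped to the people the viewer follows.
    return "https://twitter.com/search?" + urlencode({"q": term, "f": scope})

url = scoped_search("congrats")
```

Anyone can mint names like this; the computation they trigger, and the results they yield, depend on who invokes them and when.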

Two factors conspire to bring us an ongoing stream of these delightful discoveries. First, systems (like Twitter) that think like the web. In this case that means, among other things, enabling people to invent powerful names. Second, people (like Kellan) who do the inventing.


[1] I have simplified Kellan’s URL slightly. His original tweet includes the parameter &src=typd. Its purpose is undocumented, and omitting it doesn’t change the result.

Hiroshimas, light bulbs, and touchstone facts

There’s a rough consensus that the heat gain attributable to man-made climate change is equivalent to about one watt per square meter. How can we visualize that? You could say it’s like we’ve added one always-on 100-watt light bulb to every ten-meter-by-ten-meter square of the planet’s surface. Or you could say that we’re adding the heat equivalent of 400,000 Hiroshima bombs per day.

The Hiroshima meme is fashionable in certain circles. You can even use a blog widget or Facebook app to dramatize the effect. Is that helpful?

Yes, according to Joe Romm:

In my quarter century communicating on climate change, I’ve found that many people in the media and the public have a visceral belief that “Humans are too insignificant to affect global climate.”

The anti-science CNBC anchor Joe Kernen voiced this conviction when he suggested that “as old as the planet is” there is no way “puny, gnawing little humans” could change the climate in “70 years.”

Certainly humans do seem tiny compared to the oceans or even a superstorm like Sandy. So I don’t see anything wrong with trying to find a quantitatively accurate metaphor that puts things in perspective.

Yes, but not without context. Suppose I told you the effect was an order of magnitude smaller: 40,000 bombs per day. Or an order of magnitude larger: 4 million bombs per day. Do you have any intuitions about those numbers? I don’t. And unless you’re a scientist working in this domain you don’t either.

The missing context, in this case, is the amount of solar power reaching Earth’s surface. It’s about 175 watts per square meter (1, 2). That’s a lot of Hiroshimas. But let’s focus on our representative square, ten meters on a side. That’s 100 square meters, roughly the footprint of an average house in Spain. How many 100-watt bulbs are we talking about?

175 W/m^2 * 100 m^2 = 17,500W

17,500W / 100W/bulb = 175 bulbs
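The back-of-envelope arithmetic can be checked in a few lines:

```python
flux = 175          # W/m^2, solar power reaching the surface
area = 100          # m^2, our ten-meter-on-a-side square
bulb = 100          # W per light bulb

bulbs = flux * area // bulb        # baseline bulbs per square: 175
added_fraction = 1 / bulbs         # one extra bulb, as a fraction of baseline
```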

So the baseline for our representative square is 175 bulbs. If we add one more bulb, we increase the wattage by about half of one percent. Some will intuit that the 176th bulb is significant. I do. We’re adding a measurable fraction of Earth’s insolation? Whoa.

Others will intuit that it’s negligible. But will this formulation at least enable us to discuss the effect in a way that everyone can meaningfully visualize? Maybe not. Because it depends on an intuition that varying a global parameter by half a percent is a big deal. Which is like having an intuition that varying the planet’s temperature by a degree or two is a big deal. Some will have it, others won’t.

I can’t imagine preventing 400,000 Hiroshimas. I would rather think about turning off every 176th light bulb. But I can’t imagine turning off 5 trillion light bulbs either. So maybe Joe Romm is right. If both visualizations are valid, and if the goal is to communicate the underlying intuition, then I suppose unfathomably many bombs says that more compellingly, to most people, than half a percent of the solar flux at Earth’s surface.

And yet: half a percent of the solar flux? Whoa. That’s a pretty useful touchstone fact.

3D printing isn’t the digital literacy that libraries most need to teach

Here’s a story that’s playing out in libraries everywhere:

Library Unveils 3D Printer

Keene residents now have access to a 3D printer allowing everyone the ability to turn the digital into the physical. A brand new MakerBot Replicator 2 is now plugged in at the Keene Public Library.

…

Libraries are now more than repositories of books for researching but active community centers inviting people to come, make, and create things.

That’s a great mission statement, and it’s one I’ve been suggesting for a long time. But when the Keene Public Library jumps on the 3D printer bandwagon, I’m reminded of its failure to embrace other opportunities to make and create, ones much closer to the library’s core competencies.

The LibraryLookup Project was born in Keene. Our library’s online catalog was the first one I connected to Amazon’s catalog. Over the years libraries around the world adopted the technique. It evolved through several iterations, culminating in a service that alerts you when a book on your Amazon wish list is available in the local library. But one library conspicuously refused to get involved: the Keene Public Library. Why not? One objection was that the method preferentially supported Amazon. So I added support for Barnes and Noble, but the answer was still no.
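The core of the LibraryLookup trick is tiny: pull the ISBN out of a bookstore URL and point it at a library catalog. Here's a sketch; the OPAC URL pattern is a made-up placeholder, since real catalogs vary widely in their query syntax.

```python
import re

def amazon_to_opac(amazon_url,
                   opac_base="https://catalog.example.org/search?isbn="):
    # Amazon product URLs carry the 10-character ISBN in the /dp/ segment.
    m = re.search(r"/dp/(\d{9}[\dXx])", amazon_url)
    if not m:
        return None
    return opac_base + m.group(1)

link = amazon_to_opac("https://www.amazon.com/dp/0596001109/ref=xyz")
```

The original implementation was a bookmarklet doing essentially this rewrite in the browser.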

Then there’s this ITConversations podcast I made with Mike Caulfield. At the time Mike lived in Keene as well, and on the appointed day things weren’t quiet enough to record in either of our homes, so we went to the library and asked to use one of the meeting rooms on the second floor, all of which were empty. The answer: No. Why? According to the rules the rooms are available only for use by “non-profit, civic, cultural, charitable and social organizations.” I pointed out that ITConversations was a non-profit. Still no. In the end we recorded in the upstairs hallway outside the forbidden room.

I don’t mean to pillory the Keene Public Library. It’s a great local library, it’s well used, visitors from towns much bigger than Keene are always impressed. And they’ve done some great work online, notably an archive of historical photos that’s now part of the Flickr commons. Why not encourage the community to engage in that kind of making and creating?

It’s not just the Keene library. At a gathering of makers and hackers last year I sat in a session on the future of libraries. The entire discussion revolved around 3D printers and maker spaces. I asked about other creative literacies: media, webmaking, curation, research. Nobody was interested. It was all about 3D printing.

Here’s my conclusion. 3D printing, and the maker movement for which it is emblematic, are memes that are being marketed with great success. So much so that Evgeny Morozov, who makes a living deflating memes, goes after them in this week’s New Yorker.

Criticism has its place, and all popular memes deserve scrutiny. But there’s no question that the maker movement has tapped into a fundamental urge. We are starting to realize that you can’t build a house, or heat it, or feed the family that lives in it, by manipulating bits. You need to lay hands on atoms. As we re-engage with the physical world we will help heal our economies and our cultures. That’s all good. But it’s not the first thing that comes to mind when libraries seek to transform themselves from centers of consumption into centers of production.

Libraries really are about bits. They are uniquely positioned to adopt and promote digital literacies. Why don’t they? Those literacies aren’t yet being marketed as effectively as 3D printing. We who care need to figure out how to fix that.

Spot the space station

The other night we looked up and saw an unusually large and slow satellite moving across the sky. Could it have been the space station? I found NASA’s Spot the Station page and looked up our location. Sure enough, there was a space station transit on that night, at that time, in that place in the sky.

Naturally I wondered if I could get that schedule of sightings onto my calendar. But sadly, as is so often the case, there is an RSS feed for upcoming sightings but no iCalendar feed. I wish more online services would realize that when your feed is purely a schedule of upcoming events, it’s really useful to render it in iCalendar format as well as RSS. Conversion from RSS to iCalendar is often possible, but it’s rarely trivial, and nobody is going to bother.
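The conversion itself isn't deep, but it does require guessing where the feed hides its dates. Here's a hedged sketch of the general technique; real feeds like NASA's bury the sighting time in title or description text, while this toy assumes a conveniently pre-formatted field.

```python
import xml.etree.ElementTree as ET

# A tiny RSS feed, inlined for illustration.
RSS = """<rss><channel>
  <item><title>ISS sighting</title><pubDate>20140605T213000Z</pubDate></item>
</channel></rss>"""

def rss_to_ics(rss_text):
    # Map each RSS item to an iCalendar VEVENT.
    root = ET.fromstring(rss_text)
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0"]
    for i, item in enumerate(root.iter("item")):
        lines += ["BEGIN:VEVENT",
                  f"UID:{i}@rss-convert.example",
                  "DTSTART:" + item.findtext("pubDate"),
                  "SUMMARY:" + item.findtext("title"),
                  "END:VEVENT"]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)   # iCalendar wants CRLF line endings

ics = rss_to_ics(RSS)
```

The hard part in practice is the date parsing and time-zone handling, which is exactly why "nobody is going to bother."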

Nobody except me, that is. I created an Elm City service to do the conversion, and a helper that a curator can use to invoke it. Here’s a picture of a synthesized NASA calendar feed merged into the Keene hub:

If anyone reading this has the right connections, please do invite NASA to publish iCalendar feeds natively alongside the RSS feeds they currently provide.

My superpower: 3-way calling

Flight and invisibility are fun to imagine, but what are the real superpowers that make a difference in your life? One of mine is 3-way calling. I deploy it when I’m caught in a bureaucratic tangle in which one or more parties don’t want to communicate with one another. Case in point: the ambulance bill from my son’s car accident almost two years ago. He’s fine, but I’m still wrangling to get the responsible insurer to settle with the ambulance service.

Back in August 2012 I mused about the predicament for wired.com. When insurer A’s responsibility ended it refused to communicate with insurer B. As a result of the long delay created by insurer A, the party now responsible – insurer B – denied the claim.

The other day I talked to ambulance service C, and they convinced me that insurer B was still on the hook and that they had the documentation to back that up. So I called B and, of course, got nowhere. They were relying on an insidious denial-of-service attack which works by routing all communication through a low-bandwidth channel: me. When each scrap of information extracted from B has to route through me on its way to C, and when C’s responses have to return to B by the same circuitous path, not much can get done. That’s what B wants, of course.

It can seem like a stalemate. B won’t answer C’s calls. When I ask B to call C that always turns out to be against the rules. Here’s where my superpower shines. With B on the phone I say:

“Hang on, I’m putting you on hold for a minute.”

Right there you’ve got them on the run. The hold maneuver is something they do to you, but don’t expect you to do to them.

Now I call C and join them to the call with B.

“Sheila, meet Frank. Frank, Sheila. Now please work this out.”

The negotiation that ensues always intrigues me. Invariably it entails differences in terminology, records, and interpretations. If systems were built to facilitate direct communication those differences could be worked out. But when systems are built to thwart direct communication it’s a logjam until the clock runs out.

Despite knocking their heads together I don’t yet have a final resolution to this matter. My superpower doesn’t always prevail. But it always makes me feel less like a pawn in other people’s games.

Opting out of line-of-business software

When transacting business in a store or a hospital or an auto repair shop I always watch what happens on the computer screen. I’ve never written line-of-business software but deeply respect those who do. It must be a huge challenge to abstract the data, terminology, and rules for some domain into software that can be sold to a lot of businesses operating in that domain. Of course there’s a tradeoff. Line-of-business applications typically aren’t user-innovation toolkits. People who use them learn specific procedures, not general skills. Businesses can’t be creative in their use of the software, nor profit from that creativity.

One notable exception is Fix, an auto repair shop in my town owned by my friend Jonah Erikson. Fix doesn’t use any line-of-business software; it runs on LibreOffice, GMail, and Google Calendar. That’s only possible because the team at Fix has an intuitive grasp of the technical fluencies I outlined in Seven ways to think like the web. For example, when you open a case with Fix they create a new spreadsheet. The spreadsheet will have a name like 2013-12-11-Luann’s Passat.ods. No software enforces that convention; it’s just something the front-office folks at Fix invented and do consistently. I’ve long practiced this method myself, and it’s something I wish were widely taught.

Why does something so simple matter so much? Let’s count the reasons.

First, it’s portable. The computer at Fix runs Linux, but if there were a need to switch platforms the choice would not be governed by the availability of a line-of-business application on that other platform. That kind of switch hasn’t happened but another did. The spreadsheet files used to reside on a local drive. Now, I noticed on my last visit, they’re on Dropbox. Fix didn’t need to wait for a vendor to cloud-enable their estimation and billing; it just happened naturally. No matter where the files live, and no matter what system navigates and searches them, two things will always be true. Date-labelled file names can be sorted in ascending or descending order. And customer names embedded in those file names can be searched for and found.
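The payoff of the convention is easy to demonstrate. Here’s a minimal sketch (the filenames are invented examples in Fix’s “YYYY-MM-DD-Customer” style):

```python
# Invented example filenames following Fix's date-first convention.
files = [
    "2013-12-11-Luann's Passat.ods",
    "2012-03-02-Harold's Civic.ods",
    "2013-07-19-Mary's Outback.ods",
]

# ISO-style date prefixes sort chronologically with a plain string sort,
# no date parsing required.
jobs_in_order = sorted(files)

# And customer names embedded in the filenames are trivially searchable.
luann_jobs = [f for f in files if "Luann" in f]

print(jobs_in_order[0])  # the oldest job comes first
print(luann_jobs)
```

Any file manager, shell, or search box delivers the same two guarantees, which is exactly why the convention survives a move from a local drive to Dropbox.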

Second, it’s flexible. There’s freeform annotation within a given job’s spreadsheet. That enables the capture of context that wouldn’t easily fit into a rigid template. But here too there are conventions. An annotation in bold, for example, signifies a task that is proposed but not yet accepted or completed.

Third, it’s free. Fix runs on a tight budget so that matters, but I think freedom to innovate matters more than freedom from a price tag. Using general-purpose rather than line-of-business software, Fix can tailor the software expression of its unique business culture, and both can evolve organically. That freedom is “priceless,” says Fix’s office manager Mary Kate Sheridan.

If you were to watch what happens on Fix’s computer screen you might object that the system requires users to know and do too much. People shouldn’t have to think about filenames and text-formatting conventions, right? Shouldn’t they just focus on doing their jobs? Shouldn’t the software know and enforce all the rules and procedures?

I’m not so sure. In another of my favorite examples, Hugh McGuire, creator of the free audiobooks service LibriVox, imagined a line-of-business application for LibriVox’s readers and quality checkers. He couldn’t afford to commission its development, though, so instead he adapted a web forum system, phpBB, to his needs. It remains the foundation of LibriVox to this day. Had Hugh been able to commission the application he wanted, I believe it would have failed. I don’t think lack of special-purpose software hampered the formation of LibriVox’s culture and methods. On the contrary I think use of general-purpose software enabled that culture and those methods to emerge and evolve.

I realize this approach isn’t for everyone. We need to strike a balance between special-purpose software that’s too rigid and general-purpose software that’s too open-ended. I’m not smart enough to figure out what that middle ground should look like, but I think Bret Victor is and I’ve been inspired by his recent explorations that point the way to great user innovation toolkits. Give people the right tools and they’ll be happier and more effective — not only as employees, but also as citizens of the world.

Podcasts for the blind

The first MP3 player I ever used was some version of the Creative MUVO shown at right. I’ve probably owned a half-dozen of them and I just bought two more on eBay. For me it’s the perfect gadget for listening to podcasts, or songs I’m learning to play and sing, while running or biking or hiking or gardening. In those conditions I don’t want a $500 gadget that I might drop, or dunk, or scratch, with a fancy user interface that can access a vast range of features and capabilities. I just want to press play and listen. If it falls on the ground it probably won’t break. If it does break, oh well, it was $20, get another.

The MUVOs I just bought aren’t for me, though; they’re for my mom. She’s 92, and macular degeneration has advanced past the point where the reading machine we tried to modify for her can be of any use for long-form reading. And yet mom, a former college professor and lifelong voracious reader, continues to read more books than just about anybody I know. She does so by way of audiobooks from the library, and digital audio tapes provided courtesy of a Library of Congress program for the blind. Despite significant hearing loss, she can still hear well enough to listen to spoken-word audio.

It occurred to me that she’d also enjoy Long Now Seminars, KUOW Speakers Forum, and other series of podcasts. On a recent visit I verified that the MUVO works great for her, precisely because of its minimalist design. We are, after all, talking about a woman who needs the sort of user interface shown in this TV remote brilliantly hacked by my sister.

Mom can’t use a computer now, and even if she could there’s no way she’d be able to find the podcasts she likes and sync them to a device. That’s OK. I’ve listened to tons of stuff that she’d like, so the plan is to keep a pair of MUVOs in rotation. I’ll load a batch of talks for her onto one MUVO and send it. While she’s listening to that one, she’ll have her aide send the other back to me for a reload. It’s a method that leading-edge technologists will wince to think about. Can’t cloud synchronization solve this poor woman’s problem?

No, it can’t. My method is the only one that will work for her. And it has another advantage too. Mom will periodically receive a little package of goodies from me via the old-fashioned, yet-to-be-assimilated-by-Amazon US postal service. All in all it’s another triumph for trailing-edge technologies!

Web servers and web clients working together

For me the most productive programming environments have always exhibited the same pattern. There’s something I think of as kernel space, something else I think of as user space, and most importantly a fluid boundary between them.

For example, my first programming job was writing application software for CD-ROM-based information systems. I wrote that software in a homegrown interpreted language inspired by LISP. The relationship between my software and the engine that powered the interpreter was a two-way street. Sometimes, when I’d find myself repeating a pattern, we’d abstract it and add new primitive constructs to the engine to make the application code cleaner and more efficient. At other times, though, we’d take constructs only available in the engine (kernel space) and export them into the interpreted language (user space). Why? Because user space wasn’t just me acting as a user of the kernel. It was also where the product we were building came into direct contact with its users. We needed to be able to try a lot of different things in user space to find out what would work. Sometimes when we got something working we’d leave it in user space. Other times we’d push it back into the kernel — again, for reasons of clarity and efficiency.

You see the same pattern over and over. In languages like Perl, Python, and Ruby there’s a fluid relationship between the core engines, written in low-level compiled languages, and the libraries written in the dynamic languages supported by the engines.

I realized today that the evolving fluid relationship between web servers and web clients is another example of the pattern. Early on you had to write web software for a server. Now the web client is ascendant and we can do incredible things in JavaScript. But for me, at least, the ascendant client in no way diminishes the server. Now that the two are on a more equal footing I feel more productive than ever.

In my case the server is Windows Azure, and the “kernel” of the system I’m building is written in C# (and a bit of Python). The client is the browser, and “user space” is powered by JavaScript. I’m finding that these two realms are intertwining in delightful ways. For example, one new feature required some additional data structures. Because they’re rebuilt periodically and cached it makes sense to have the server do this work. Initially the server produced a JSON file which the client acquired by means of an AJAX call. When the feature proved out, I decided to streamline things by eliminating that AJAX call. So now the server acquires the JSON and caches it as a C# object in memory. When the page loads the server converts the data back to JSON and makes it directly available to the client.
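The pattern described above is language-neutral; here’s a minimal sketch of it in Python rather than C# (fetch_data is a stand-in for the periodic rebuild, and the variable name `data` is invented). The server caches the parsed data as a native object, then re-serializes it into the page at load time so the client needs no extra AJAX round trip:

```python
# Sketch of server-side caching plus page-load embedding. fetch_data()
# stands in for whatever periodic rebuild the real server performs.
import json

_cache = None

def fetch_data():
    # Stand-in for acquiring and rebuilding the data structures.
    return {"events": ["talk", "concert"]}

def get_cached():
    # Cache the data as a native object in memory, not as a JSON string.
    global _cache
    if _cache is None:
        _cache = fetch_data()
    return _cache

def render_page():
    # At page-load time, convert the cached object back to JSON and
    # embed it where client-side JavaScript can read it directly,
    # eliminating the AJAX call.
    payload = json.dumps(get_cached())
    return "<script>var data = %s;</script>" % payload

print(render_page())
```

The design choice is the interesting part: the cache holds a native object so the server can also use the data, and serialization happens only at the page boundary where the two realms meet.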

What about the fact that this arrangement involves two different programming environments? If that bothered me I could be using JavaScript on the server too. But I don’t feel the need. For me, C# is appropriate for kernel space and JavaScript is appropriate for user space. Which language powers which realm isn’t really the point, though. What matters is that the two realms exist and collaborate productively.