When Groove launched somebody asked me to explain why it was an important example of peer-to-peer technology. I said that was the wrong question. What mattered was that Groove empowered people to communicate directly and securely, form ad-hoc networks with trusted family, friends, and associates, and exchange data freely within those networks. P2P, although then much in vogue — there were P2P books, P2P conferences — wasn’t Groove’s calling card, it was a means to an end.

The same holds true for Thali. Yes it’s a P2P system. But no that isn’t the point. Thali puts you in control of communication that happens within networks of trust. That’s what matters. Peer networking is just one of several enablers.

Imagine a different kind of Facebook, one where you are a customer rather than a product. You buy social networking applications, they’re not free. But when you use those apps you are not in an adversarial relationship with a social networking service. You (along with your trusted communication partners) are the service, and the enabling software works for you.

Thali, at its core, is a database that lives on one or more of your devices and is available to one or more apps running on those devices. Because you trust yourself you’ll authorize Thali apps to mesh your devices and sync data across that mesh. The sync happens directly, without traveling through a cloud relay, and is always secured by mutual SSL authentication. You can, of course, also push to the cloud for backup.
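"Mutual SSL authentication" has a concrete shape: both sides of the connection must present certificates, not just the server. As a hypothetical sketch (Thali's actual stack is different, and no real certificate files are loaded here), this is how Python's ssl module expresses the requirement that a peer prove its identity:

```python
import ssl

# Server-side TLS context that *requires* the connecting peer to
# present a certificate -- the essence of mutual authentication.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED

# In a real deployment you would also load identities and trust anchors
# (file names below are hypothetical placeholders):
#   ctx.load_cert_chain("device.pem", "device.key")   # our own identity
#   ctx.load_verify_locations("trusted_peers.pem")    # peers we trust

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The key exchange described below amounts to populating each device's trust anchors with the other party's certificate.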

Communicating with other people happens the same way. You exchange cryptographic keys with people you trust, you authorize them to see subsets of the data on your mesh of devices, and that data syncs to their device meshes. The default P2P mode means that you don’t depend on a cloud relay that wants access to your data in exchange for the service it provides.

For cloud services that don’t monetize your data, by the way, Thali delivers a huge benefit. Apps like Snapchat and Chess with Friends incur bandwidth costs proportional to their user populations. If users can exchange photos and gameplay directly, those costs vanish. And there’s no penalty for the user. Sending your photos and chess moves directly costs you no more than sending through the cloud.

But the key point is one that Dave Winer made back when P2P was in vogue: the P in P2P is people. With handheld computers (we call them phones) more powerful than the servers of that era we are now ready to find out what a people-to-people web can be.

We’ve lived in New England for 25 years. It’s been a great place to raise a family but that’s done, so we’re moving to northern California. The key attractors are weather and opportunity.

Winter has never been our friend, and if we had needed convincing (we didn’t) the winter of 2013-2014 would have done it. I am half Sicilian, my happy place is 80-degree sunshine, I am not there nearly enough. Luann doesn’t crave the sun the way I do, but she’s ready to say goodbye to icy winters and buggy summers.

The opportunity, for Luann, revolves around her art. Ancient artifacts inspired by the Lascaux cave are not exactly in tune with the New England artistic sensibility. We think she’ll find a more appreciative audience out west.

For me it’s about getting closer to Seattle and San Francisco, the two poles of my professional life. Located between those two poles I’ll still be a remote employee, but I’ll be a lot less remote than I am here. That matters more than, until recently, I was willing to admit.

Earthquakes don’t worry me too much. I was in San Jose for the ’89 Loma Prieta quake. We were at an outdoor poolside meeting, heard it rumble toward us, watched the ground we had thought solid turn to liquid, got soaked by the tidal wave that jumped out of the pool, heard it rumble away. What impressed me most was the resiliency of the built environment. Given what I heard and saw I’d have expected much more to have broken than did.

What does worry me, a bit, is the recent public conversation about ageism in tech. I’m 20 years past the point at which Vinod Khosla would have me fade into the sunset. And I think differently about innovation than Silicon Valley does. I don’t think we lack new ideas. I think we lack creative recombination of proven tech, and the execution and follow-through required to surface its latent value.

Elm City is one example of that. Another is my current project, Thali, Yaron Goland’s bid to create the peer-to-peer web that I’ve long envisioned. Thali is not a new idea. It is a creative recombination of proven tech: Couchbase, mutual SSL authentication, Tor hidden services. To make Thali possible, Yaron is making solid contributions to its open source foundations. Though younger than I am, he too is beyond Vinod Khosla’s sell-by date. But he is innovating in a profoundly important way.

Can we draw a clearer distinction between innovation and novelty? That might help us reframe the conversation about ageism in tech.

The Elm City project was my passion and my job for quite some time. It’s still my passion but no longer my job. The model for calendar syndication that I created is working well in a few places, but hasn’t been adopted widely enough to warrant ongoing sponsorship by my employer, Microsoft. And I’ll be the last person to complain about that. A free community information service based on open standards, open source software, and open data? Really? That’s your job? For longer than anyone could reasonably have expected, it was.

So now I’m on to the next project, one that you might think even more unlikely for a Microsoft employee. I’m helping Yaron Goland create something we are both passionate about: the peer-to-peer Web. Yaron’s project is called Thali, and I’ll say more about it later.

But first I want to sum up what I’ve learned from the Elm City effort.

The elevator pitch for Elm City is short and sweet. It’s RSS for calendars. That implies a pub/sub network based on a standard exchange format, in this case iCalendar. And an ecosystem of interoperable software components. And layered on top of that, an ecosystem of cooperating stakeholders.

On the interop front iCalendar doesn’t fare as well as you’d expect, given that it’s been around since 1999 and is baked into calendar software from Google, Microsoft, and Apple (among many others) that’s used every day by hundreds of millions of people. Why is interop still a problem? Because while in theory people and organizations can form iCalendar-based pub/sub networks, in practice few ever try, so iCalendar feeds interoperate far less well than they could.

One of the legacies of Elm City is the iCalendar Validator, inspired by the RSS/Atom feed validator and implemented by Doug Day. It has helped developers iron out some of the interop wrinkles. But the truth is that iCalendar itself isn’t the problem. It’s implemented well enough, in a wide variety of calendar apps and services, to enable much more and much better synchronization of public calendars than we currently enjoy. The iCalendar ecosystem has issues but that’s not why the robust calendar networks I envision don’t exist in every city and town.

It’s the stakeholder ecosystem that never came together. Here are the dramatis personae:

  • Local groups and organizations
  • Media (especially newspapers)
  • State and local governments
  • Non-profits and foundations
  • Vendors of content management systems

I’ve worked with each of them separately. But no one kind of stakeholder can push the Elm City model over the top. That will require collaboration, in cities and towns, among stakeholders. Which, as I’m hardly the first to learn, is a tough sell. I hope somebody smarter than me can figure that out. Maybe that will even be a smarter future version of myself. But meanwhile, I’ll be supporting Yaron Goland’s mission to enable a web of people and devices that communicate directly and securely.

Back when progress bars were linear, not circular, there was an idea that browser-based apps could be written in more than one programming language. One implementation of that idea was called ActiveX Scripting, which was supported by Internet Explorer (and other Windows apps). Of course the ActiveX moniker turned out to be inauspicious on the Web. But let’s recall, for a moment, what the essential idea was. The browser was equipped with an interface that enabled it to work with any scripting engine. I remember playing with a demo browser app that fetched and displayed data three different ways: using JavaScript, VBScript, and Perl. That was in, I think, 1997.

Today you can write a browser-based app in any language you choose, so long as you choose JavaScript. Which, like any programming language, is capable of amazing things. My current favorite example is Adrian Holovaty’s new Soundslice player. Here’s my 2012 writeup on Soundslice. It began as a fabulous tablature-based tool used to annotate and study music for string instruments. Now, with support for standard music notation, it’s becoming a general tool that will (I hope) revolutionize music education.

When he announced the new player, Adrian said:

HTML5 FTW! Screw native apps and their walled gardens.

It’s ironic that this liberation has been achieved by creating another kind of walled garden. Adrian is, after all, the creator of Django, a popular framework for server-based Web apps. Django is written in Python, a language with which Adrian has deep expertise, none of which could be leveraged in the creation of Soundslice.

But progress is circular. Maybe we’ll come back around to the idea that JavaScript need not be the only game in town.

If you’re a public information officer, what do you do? According to Wikipedia:

Public Information Officers (PIOs) are the communications coordinators or spokespersons of certain governmental organizations (i.e. city, county, school district, state government and police/fire departments). They differ from public relations departments of private organizations in that marketing plays a more limited role. The primary responsibility of a PIO is to provide information to the media and public as required by law and according to the standards of their profession. Many PIOs are former journalists, bringing unique and relevant experience to the position. During crises and emergencies, PIOs are often identified by wearing helmets or vests with the letters “PIO” on them.

I have a different idea about what the job (in larger cities and states) or role (in smaller cities and towns) should be. Not only, or even mainly, a spokesperson. Rather, a mentor and coach, helping people, groups, and organizations become better online communicators. And not only, or mainly, those in government. In a city that thinks like the web every public-facing information resource will be bound to its creator’s online identity and linkable into other contexts.

The PIO’s measure of success won’t be the number of documents posted to the city website, or the number of pageviews they draw. It will be the degree to which public-facing entities — government of course, but also schools, hospitals, newspapers, churches, downtown merchants, sports leagues, environmental groups, and many others — properly manage and interconnect their own online spaces. Why? Because a shared understanding of how (and why) to do that will make the city a better place to live and a more attractive place to visit or migrate to.

The problem isn’t information overload, Clay Shirky famously said, it’s filter failure. Lately, though, I’m more worried about filter success. Increasingly my filters are being defined for me by systems that watch my behavior and suggest More Like This. More things to read, people to follow, songs to hear. These filters do a great job of hiding things that are dissimilar and surprising. But that’s the very definition of information! Formally it’s the one thing that’s not like the others, the one that surprises you.

So I’m always on the lookout for ways to defeat the filters and see things through lenses other than my own. On Facebook, for example, I stay connected to people with whom I profoundly disagree. As a tourist of other people’s echo chambers I gain perspective on my native echo chamber. Facebook doesn’t discourage this tourism, but it doesn’t actively encourage it either.

The other day an acquaintance posted a link to an article about a hot topic on which we disagree. Knowing my view, Facebook injected a link to an article that confirms it. There are two related problems here. First, in this context I don’t want Facebook to show me what it thinks is related to my view. I want to know more about the evidence that supports the opposing view, and the way in which my acquaintance’s thinking is informed by that evidence. That’s why I maintain the connection! I want to empathize with and understand The Other.

When I polled participants in the thread, I learned that nobody else saw the link that was suggested to me. That’s the second problem. If I hadn’t checked I might have assumed that Facebook was brokering a connection among echo chambers. That would have been cool but it’s not what actually happened.

As I think back on the evolution of social media I recall a few moments when my filters did “fail” in ways that delivered the kinds of surprises I value. Napster was the first. When you found a tune on Napster you could also explore the library of the person who shared that tune. That person had no idea who I was or what I’d like. By way of a tune we happened to share I found many delightful surprises. I don’t have that experience on Pandora today.

Likewise the early blogosphere. I built my echo chamber there by following people whose lenses on the world complemented mine. For us the common thread was Net tech. But anything could and did appear in the feeds we shared directly with one another. Again there were many delightful surprises.

Remember when people warned us about the tyranny of The Daily Me? They were right, it’s happening big time. Of course it’s easy to escape The Daily Me. Try this, for example. Dump all your regular news sources and view the world through a different lens for a week. If you’re part of the US news nexus, for example, try Al Jazeera. It’s just a click away.

But that click isn’t on the path of least resistance. Our filters have become so successful that we fail to notice:

- We don’t control them

- They have agendas

- They distort our connections to people and ideas

I want my filters to fail, and I want dials that control the degrees and kinds of failures.

In Turing’s Cathedral: The Origins of the Digital Universe, George Dyson says of the engineers and mathematicians who birthed computing:

By breaking the distinction between numbers that mean things and numbers that do things, they unleashed the powers of coded sequences, and the world would never be the same.

Consider the number 30 stored in a computer. It can mean something: how many dollars in a bank account, how many minutes a meeting will last. But it can also do something, by representing part of a sequence of instructions that updates the amount in the bank account, or that notifies you when it’s time to go to the meeting. Depending on context, the same number, in the same memory location, can mean something or it can do something.
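A toy machine makes the duality concrete. In this hypothetical sketch the number 30, sitting in a single memory cell, is read once as a quantity and once as an opcode that drives a computation:

```python
# Toy machine: the same cell can hold a value or an instruction.
# By convention, opcode 30 means "add the next cell to the accumulator".
memory = [30, 12]  # is 30 an instruction, or a quantity? Depends on context.

# Context 1: cell 0 *means* something -- a $30 balance.
balance = memory[0]

# Context 2: cell 0 *does* something -- it's the ADD opcode.
acc = 5
pc = 0                        # program counter
if memory[pc] == 30:          # decode: 30 = ADD
    acc += memory[pc + 1]     # operand: 12

print(balance, acc)  # 30 17
```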

Deep inside the computer there are only numbers. Out here on the Net we humans prefer to operate in terms of names. Happily it turns out that names can exhibit the same magical duality. That’s particularly true for the special class of names we call Uniform Resource Locators (URLs).

In a 1997 keynote talk Andrew Schulman put up a slide that contained just a URL:


“Think about what this means,” he said. “Every FedEx package has its own home page on the web!”

Also, potentially, every bank transaction, every calendar appointment, every book (or paragraph within every book), every song (or passage or track within every song), every appliance (or component within every appliance). If we needed to, we could create URLs for grains of sand, each as compact and easy to exchange as Andrew Schulman’s FedEx URL. The supply of web names is inexhaustible, and the universe of their meaning is unbounded.

But these names don’t only mean things. They also do things. The URL of a FedEx package does more than merely refer to the package with a unique identifier (though that’s miraculous enough). It also engages with the business process surrounding that package, drawing together status information from a network of cooperating systems and enabling you to interact with those systems.

It takes a while for the implications of all this to sink in. It’s seventeen years since Andrew’s epiphany, you’d think I would have adjusted to it by now, but I’m still constantly surprised and delighted by unanticipated consequences.

Consider this tweet from Etsy’s CTO Kellan Elliott-McCrea:

My new favorite pick me up, searching Twitter for “congrats”, scoped to folks I follow https://twitter.com/search?q=congrats&f=follows [1]

What does Kellan’s URL mean? The set of tweets, from the (currently) 1099 people that Kellan follows on Twitter, that include the word “congrats” — information that brings happiness to Kellan.

What does Kellan’s URL do? It activates a computation, inside Twitter’s network of systems, that assembles and displays that information.

Both the meaning and the doing are context-specific in several ways. In the temporal domain, each invocation of the URL yields newer results. In the social domain, Kellan’s invocation queries the 1099 people he follows, mine queries the 1046 I follow, yours will query the population you follow.
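Kellan's URL decomposes into a base resource plus query parameters that scope the computation, and anyone can mint names of the same shape. A small sketch, using only the parameter names visible in the tweet above:

```python
from urllib.parse import urlencode

# Compose the "powerful name" from its parts: a search resource,
# a query term, and a scope ("folks I follow").
params = {"q": "congrats", "f": "follows"}
url = "https://twitter.com/search?" + urlencode(params)
print(url)  # https://twitter.com/search?q=congrats&f=follows
```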

Two factors conspire to bring us an ongoing stream of these delightful discoveries. First, systems (like Twitter) that think like the web. In this case that means, among other things, enabling people to invent powerful names. Second, people (like Kellan) who do the inventing.

[1] I have simplified Kellan’s URL slightly. His original tweet includes the parameter &src=typd. Its purpose is unexplained, and omitting it doesn’t change the result.

There’s a rough consensus that the heat gain attributable to man-made climate change is equivalent to about one watt per square meter. How can we visualize that? You could say it’s like we’ve added one always-on 100-watt light bulb to every ten-by-ten-meter patch of the planet’s surface. Or you could say that we’re adding the heat equivalent of 400,000 Hiroshima bombs per day.

The Hiroshima meme is fashionable in certain circles. You can even use a blog widget or Facebook app to dramatize the effect. Is that helpful?

Yes, according to Joe Romm:

In my quarter century communicating on climate change, I’ve found that many people in the media and the public have a visceral belief that “Humans are too insignificant to affect global climate.”

The anti-science CNBC anchor Joe Kernen voiced this conviction when he suggested that “as old as the planet is” there is no way “puny, gnawing little humans” could change the climate in “70 years.”

Certainly humans do seem tiny compared to the oceans or even a superstorm like Sandy. So I don’t see anything wrong with trying to find a quantitatively accurate metaphor that puts things in perspective.

Yes, but not without context. Suppose I told you the effect was an order of magnitude smaller: 40,000 bombs per day. Or an order of magnitude larger: 4 million bombs per day. Do you have any intuitions about those numbers? I don’t. And unless you’re a scientist working in this domain you don’t either.

The missing context, in this case, is the amount of solar power reaching Earth’s surface. It’s about 175 watts per square meter (1, 2). That’s a lot of Hiroshimas. But let’s focus on our representative square, ten meters on a side. That’s 100 square meters, roughly the footprint of an average house in Spain. How many 100-watt bulbs are we talking about?

175 W/m^2 * 100 m^2 = 17,500 W

17,500 W / 100 W per bulb = 175 bulbs

So the baseline for our representative square is 175 bulbs. If we add one more bulb, we increase the wattage by about half of one percent. Some will intuit that the extra 1/175th is significant. I do. We’re adding a measurable fraction of Earth’s insolation? Whoa.
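The arithmetic is easy to check, using the figures already given above (175 W/m² of insolation, a 100 m² square, 100 W bulbs):

```python
# Baseline insolation on a 10 m x 10 m square, in 100-watt bulbs,
# and the fractional increase from adding one more bulb.
insolation = 175                 # watts per square meter at the surface
area = 10 * 10                   # square meters
baseline_watts = insolation * area       # 17,500 W
baseline_bulbs = baseline_watts // 100   # 175 bulbs
increase = 1 / baseline_bulbs            # one extra bulb, as a fraction

print(baseline_bulbs)            # 175
print(round(increase * 100, 2))  # 0.57 (percent)
```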

Others will intuit that it’s negligible. But will this formulation at least enable us to discuss the effect in a way that everyone can meaningfully visualize? Maybe not. Because it depends on an intuition that varying a global parameter by half a percent is a big deal. Which is like having an intuition that varying the planet’s temperature by a degree or two is a big deal. Some will have it, others won’t.

I can’t imagine preventing 400,000 Hiroshimas. I would rather think about turning off every 176th light bulb. But I can’t imagine turning off 5 trillion light bulbs either. So maybe Joe Romm is right. If both visualizations are valid, and if the goal is to communicate the underlying intuition, then I suppose unfathomably many bombs says that more compellingly, to most people, than half a percent of the solar flux at Earth’s surface.

And yet: half a percent of the solar flux? Whoa. That’s a pretty useful touchstone fact.

Here’s a story that’s playing out in libraries everywhere:

Library Unveils 3D Printer

Keene residents now have access to a 3D printer allowing everyone the ability to turn the digital into the physical. A brand new MakerBot Replicator 2 is now plugged in at the Keene Public Library.


Libraries are now more than repositories of books for researching but active community centers inviting people to come, make, and create things.

That’s a great mission statement, and it’s one I’ve been suggesting for a long time. But when the Keene Public Library jumps on the 3D printer bandwagon, I’m reminded of its failure to embrace other opportunities to make and create, ones much closer to the library’s core competencies.

The LibraryLookup Project was born in Keene. Our library’s online catalog was the first one I connected to Amazon’s catalog. Over the years libraries around the world adopted the technique. It evolved through several iterations, culminating in a service that alerts you when a book on your Amazon wish list is available in the local library. But one library conspicuously refused to get involved: the Keene Public Library. Why not? One objection was that the method preferentially supported Amazon. So I added support for Barnes and Noble, but the answer was still no.

Then there’s this ITConversations podcast I made with Mike Caulfield. At the time Mike lived in Keene as well, and on the appointed day things weren’t quiet enough to record in either of our homes, so we went to the library and asked to use one of the meeting rooms on the second floor, all of which were empty. The answer: No. Why? According to the rules the rooms are available only for use by “non-profit, civic, cultural, charitable and social organizations.” I pointed out that ITConversations was a non-profit. Still no. In the end we recorded in the upstairs hallway outside the forbidden room.

I don’t mean to pillory the Keene Public Library. It’s a great local library, it’s well used, visitors from towns much bigger than Keene are always impressed. And they’ve done some great work online, notably an archive of historical photos that’s now part of the Flickr commons. Why not encourage the community to engage in that kind of making and creating?

It’s not just the Keene library. At a gathering of makers and hackers last year I sat in a session on the future of libraries. The entire discussion revolved around 3D printers and maker spaces. I asked about other creative literacies: media, webmaking, curation, research. Nobody was interested. It was all about 3D printing.

Here’s my conclusion. 3D printing, and the maker movement for which it is emblematic, are memes that are being marketed with great success. So much so that Evgeny Morozov, who makes a living deflating memes, goes after them in this week’s New Yorker.

Criticism has its place, and all popular memes deserve scrutiny. But there’s no question that the maker movement has tapped into a fundamental urge. We are starting to realize that you can’t build a house, or heat it, or feed the family that lives in it, by manipulating bits. You need to lay hands on atoms. As we re-engage with the physical world we will help heal our economies and our cultures. That’s all good. But it’s not the first thing that comes to mind when libraries seek to transform themselves from centers of consumption into centers of production.

Libraries really are about bits. They are uniquely positioned to adopt and promote digital literacies. Why don’t they? Those literacies aren’t yet being marketed as effectively as 3D printing. We who care need to figure out how to fix that.

The other night we looked up and saw an unusually large and slow satellite moving across the sky. Could it have been the space station? I found NASA’s Spot the Station page and looked up our location. Sure enough, there was a space station transit on that night, at that time, in that place in the sky.

Naturally I wondered if I could get that schedule of sightings onto my calendar. But sadly, as is so often the case, there is an RSS feed for upcoming sightings but no iCalendar feed. I wish more online services would realize that when your feed is purely a schedule of upcoming events, it’s really useful to render it in iCalendar format as well as RSS. Conversion from RSS to iCalendar is often possible, but it’s rarely trivial, and nobody is going to bother.

Nobody except me, that is. I created an Elm City service to do the conversion, and a helper that a curator can use to invoke it. Here’s a picture of a synthesized NASA calendar feed merged into the Keene hub:

If anyone reading this has the right connections, please do invite NASA to publish iCalendar feeds natively alongside the RSS feeds they currently provide.

Flight and invisibility are fun to imagine, but what are the real superpowers that make a difference in your life? One of mine is 3-way calling. I deploy it when I’m caught in a bureaucratic tangle in which one or more parties don’t want to communicate with one another. Case in point: the ambulance bill from my son’s car accident almost two years ago. He’s fine, but I’m still wrangling to get the responsible insurer to settle with the ambulance service.

Back in August 2012 I mused about the predicament for wired.com. When insurer A’s responsibility ended it refused to communicate with insurer B. As a result of the long delay created by insurer A, the party now responsible – insurer B – denied the claim.

The other day I talked to ambulance service C, they convinced me that insurer B was still on the hook and that they had the documentation to back that up. So I called B and, of course, got nowhere. They were relying on an insidious denial-of-service attack which works by routing all communication through a low-bandwidth channel: me. When each scrap of information extracted from B has to route through me on its way to C, and when C’s responses have to return to B by the same circuitous path, not much can get done. That’s what B wants, of course.

It can seem like a stalemate. B won’t answer C’s calls. When I ask B to call C that always turns out to be against the rules. Here’s where my superpower shines. With B on the phone I say:

“Hang on, I’m putting you on hold for a minute.”

Right there you’ve got them on the run. The hold maneuver is something they do to you, but don’t expect you to do to them.

Now I call C and join them to the call with B.

“Sheila, meet Frank. Frank, Sheila. Now please work this out.”

The negotiation that ensues always intrigues me. Invariably it entails differences in terminology, records, and interpretations. If systems were built to facilitate direct communication those differences could be worked out. But when systems are built to thwart direct communication it’s a logjam until the clock runs out.

Despite knocking their heads together I don’t yet have a final resolution to this matter. My superpower doesn’t always prevail. But it always makes me feel less like a pawn in other people’s games.

When transacting business in a store or a hospital or an auto repair shop I always watch what happens on the computer screen. I’ve never written line-of-business software but deeply respect those who do. It must be a huge challenge to abstract the data, terminology, and rules for some domain into software that can sell to a lot of businesses operating in that domain. Of course there’s a tradeoff. Line-of-business applications typically aren’t user-innovation toolkits. People who use them learn specific procedures, not general skills. Businesses can’t be creative in their use of the software, nor profit from that creativity.

One notable exception is Fix, an auto repair shop in my town owned by my friend Jonah Erikson. Fix doesn’t use any line-of-business software, it runs on LibreOffice, GMail, and Google Calendar. That’s only possible because the team at Fix has an intuitive grasp of the technical fluencies I outlined in Seven ways to think like the web. For example, when you open a case with Fix they create a new spreadsheet. The spreadsheet will have a name like 2013-12-11-Luann’s Passat.ods. No software enforces that convention, it’s just something the front-office folks at Fix invented and do consistently. I’ve long practiced this method myself, and it’s something I wish were widely taught.

Why does something so simple matter so much? Let’s count the reasons.

First, it’s portable. The computer at Fix runs Linux but if there were a need to switch platforms the choice would not be governed by the availability of a line-of-business application on that other platform. That kind of switch hasn’t happened but another did. The spreadsheet files used to reside on a local drive. Now, I noticed on my last visit, they’re on Dropbox. Fix didn’t need to wait for a vendor to cloud-enable their estimation and billing, it just happened naturally. No matter where the files live, and no matter what system navigates and searches them, two things will always be true. Date-labelled file names can be sorted in ascending or descending order. And customer names embedded in those file names can be searched for and found.

Second, it’s flexible. There’s freeform annotation within a given job’s spreadsheet. That enables the capture of context that wouldn’t easily fit into a rigid template. But here too there are conventions. An annotation in bold, for example, signifies a task that is proposed but not yet accepted or completed.

Third, it’s free. Fix runs on a tight budget so that matters, but I think freedom to innovate matters more than freedom from a price tag. Using general-purpose rather than line-of-business software, Fix can tailor the software expression of its unique business culture, and both can evolve organically. That freedom is “priceless,” says Fix’s office manager Mary Kate Sheridan.

If you were to watch what happens on Fix’s computer screen you might object that the system requires users to know and do too much. People shouldn’t have to think about filenames and text-formatting conventions, right? Shouldn’t they just focus on doing their jobs? Shouldn’t the software know and enforce all the rules and procedures?

I’m not so sure. In another of my favorite examples, Hugh McGuire, creator of the free audiobooks service LibriVox, imagined a line-of-business application for LibriVox’s readers and quality checkers. He couldn’t afford to commission its development, though, so instead he adapted a web forum system, phpBB, to his needs. It remains the foundation of LibriVox to this day. Had Hugh been able to commission the application he wanted, I believe it would have failed. I don’t think lack of special-purpose software hampered the formation of LibriVox’s culture and methods. On the contrary I think use of general-purpose software enabled that culture and those methods to emerge and evolve.

I realize this approach isn’t for everyone. We need to strike a balance between special-purpose software that’s too rigid and general-purpose software that’s too open-ended. I’m not smart enough to figure out what that middle ground should look like, but I think Bret Victor is and I’ve been inspired by his recent explorations that point the way to great user innovation toolkits. Give people the right tools and they’ll be happier and more effective — not only as employees, but also as citizens of the world.

The first MP3 player I ever used was some version of the Creative MUVO shown at right. I’ve probably owned a half-dozen of them and I just bought two more on eBay. For me it’s the perfect gadget for listening to podcasts, or songs I’m learning to play and sing, while running or biking or hiking or gardening. In those conditions I don’t want a $500 gadget that I might drop, or dunk, or scratch, with a fancy user interface that can access a vast range of features and capabilities. I just want to press play and listen. If it falls on the ground it probably won’t break. If it does break, oh well, it was $20, get another.

The MUVOs I just bought aren’t for me, though, they’re for my mom. She’s 92, and macular degeneration has advanced past the point where the reading machine we tried to modify for her can be of any use for long-form reading. And yet mom, a former college professor and lifelong voracious reader, continues to read more books than just about anybody I know. She does so by way of audiobooks from the library, and digital audio tapes provided courtesy of a Library of Congress program for the blind. Despite significant hearing loss, she can still hear well enough to listen to spoken-word audio.

It occurred to me that she’d also enjoy Long Now Seminars, KUOW Speakers Forum, and other series of podcasts. On a recent visit I verified that the MUVO works great for her, precisely because of its minimalist design. We are, after all, talking about a woman who needs the sort of user interface shown in this TV remote brilliantly hacked by my sister.

Mom can’t use a computer now, and even if she could there’s no way she’d be able to find the podcasts she likes and sync them to a device. That’s OK. I’ve listened to tons of stuff that she’d like, so the plan is to keep a pair of MUVOs in rotation. I’ll load a batch of talks for her onto one MUVO and send it. While she’s listening to that one she’ll have her aide send the other back to me for a reload. It’s a method that leading-edge technologists will wince to think about. Can’t cloud synchronization solve this poor woman’s problem?

No, it can’t. My method is the only one that will work for her. And it has another advantage too. Mom will periodically receive a little package of goodies from me via the old-fashioned, yet-to-be-assimilated-by-Amazon US postal service. All in all it’s another triumph for trailing-edge technologies!

For me the most productive programming environments have always exhibited the same pattern. There’s something I think of as kernel space, something else I think of as user space, and most importantly a fluid boundary between them.

For example, my first programming job was writing application software for CD-ROM-based information systems. I wrote that software in a homegrown interpreted language inspired by LISP. The relationship between my software and the engine that powered the interpreter was a two-way street. Sometimes, when I’d find myself repeating a pattern, we’d abstract it and add new primitive constructs to the engine to make the application code cleaner and more efficient. At other times, though, we’d take constructs only available in the engine (kernel space) and export them into the interpreted language (user space). Why? Because user space wasn’t just me acting as a user of the kernel. It was also where the product we were building came into direct contact with its users. We needed to be able to try a lot of different things in user space to find out what would work. Sometimes when we got something working we’d leave it in user space. Other times we’d push it back into the kernel — again, for reasons of clarity and efficiency.

You see the same pattern over and over. In languages like Perl, Python, and Ruby there’s a fluid relationship between the core engines, written in low-level compiled languages, and the libraries written in the dynamic languages supported by the engines.

I realized today that the evolving fluid relationship between web servers and web clients is another example of the pattern. Early on you had to write web software for a server. Now the web client is ascendant and we can do incredible things in JavaScript. But for me, at least, the ascendant client in no way diminishes the server. Now that the two are on a more equal footing I feel more productive than ever.

In my case the server is Windows Azure, and the “kernel” of the system I’m building is written in C# (and a bit of Python). The client is the browser, and “user space” is powered by JavaScript. I’m finding that these two realms are intertwining in delightful ways. For example, one new feature required some additional data structures. Because they’re rebuilt periodically and cached it makes sense to have the server do this work. Initially the server produced a JSON file which the client acquired by means of an AJAX call. When the feature proved out, I decided to streamline things by eliminating that AJAX call. So now the server acquires the JSON and caches it as a C# object in memory. When the page loads the server converts the data back to JSON and makes it directly available to the client.
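The author’s stack here is C# on Azure with JavaScript in the browser; as a language-neutral sketch in Python (with hypothetical names like `CACHE` and `render_page`, not the actual system), the “embed the cached JSON at page-load time” pattern looks roughly like this:

```python
import json

# Hypothetical cached structure, rebuilt periodically on the server.
CACHE = {'events': ['open mic', 'book fair'], 'updated': '2013-08-01'}

def render_page(template):
    # Serialize the cached object back to JSON and inline it into the
    # page as it's served, so the client needs no separate AJAX call.
    return template.replace('__DATA__', json.dumps(CACHE))

TEMPLATE = '<script>var appData = __DATA__;</script>'
html = render_page(TEMPLATE)
```

The client-side script then reads `appData` directly, trading a network round trip for a slightly larger page.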

What about the fact that this arrangement involves two different programming environments? If that bothered me I could be using JavaScript on the server too. But I don’t feel the need. For me, C# is appropriate for kernel space and JavaScript is appropriate for user space. Which language powers which realm isn’t really the point, though. What matters is that the two realms exist and collaborate productively.

A recent Twitter exchange reminded me of a 2005 blog post that included this Ray Ozzie quote:

Each fall, as I manually enter the entire Celtics season schedule, my company’s holidays and my childrens’ school calendars into my own personal calendar, I am again reminded how ridiculous it is that The Net has not yet ubiquitously embraced the everyday exchange of virtual objects so basic as calendars and as vCards – which can also likewise be subscribed-to, aggregated into Contact Lists and auto-updated via personal RSS feeds. Bizarre.

We are, of course, still in that ridiculous situation. Dan Brickley asks:

@judell @rozzie any thoughts on why? Technicalities of iCalendar format or something larger?

I can’t answer in 140 characters so I’ll try to answer here. Although I can’t really answer here either. A while ago I concluded that writing prose, at any length, wouldn’t help. I needed to write code, so that’s what I’ve mainly been up to. But from time to time it’s good to pause and reflect.

So, are “technicalities of the iCalendar format” the problem? No. And by no I mean NO, NO, A THOUSAND TIMES NO! Members of the geek tribe really want that to be the problem. We look at the spec, crafted in 1998, with its antique pre-XML format and its quaint line-folding, and we think: Seriously?

But that’s really not the problem. To put this in Chomskyan terms, there’s deep structure and surface structure. iCalendar’s deep structure comprehends dates, times, timezones, recurrence, and a wealth of related things necessary for reliable exchange of time-ordered information. Mapping that deep structure onto other surface structures is something you can do, and people have done, but that hardly matters. Today’s calendar software can convey the deep structure perfectly well using the original format. But for the most part it doesn’t get used that way, and that’s the larger issue.

If you are an ordinary person living in one of the places where the system I’m working on is up and running, and you want to post an event to the newspaper’s community calendar, you will be invited to consider a possibility that you did not even know existed. Don’t email us a copy of your event info, the newspaper will say. And don’t input a copy of it into our database either. Instead manage your public schedule of events using your own calendar program, whatever that may be, then publish it to the web and give us the URL of that calendar feed. You’ll be the authoritative source of the information. You’ll type it in once, it’ll show up on your website, your audience can get it directly onto their personal calendars, and we’ll get it into the newspaper automatically too. If you change a time or location, the change is reflected automatically in all those contexts.
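A calendar feed of this kind is just an iCalendar file served at a stable URL. As a minimal hand-rolled sketch (a real publisher would use a calendar program or a library, and would handle time zones and recurrence), a single-event feed can be built like this:

```python
from datetime import datetime

def make_ics(summary, start, end, location):
    # Emit a minimal iCalendar feed for one event, using the CRLF line
    # endings and floating local times of the simplest case.
    stamp = lambda dt: dt.strftime('%Y%m%dT%H%M%S')
    return '\r\n'.join([
        'BEGIN:VCALENDAR',
        'VERSION:2.0',
        'PRODID:-//example//community-calendar//EN',
        'BEGIN:VEVENT',
        'UID:%s@example.org' % stamp(start),
        'DTSTART:%s' % stamp(start),
        'DTEND:%s' % stamp(end),
        'SUMMARY:%s' % summary,
        'LOCATION:%s' % location,
        'END:VEVENT',
        'END:VCALENDAR',
    ])

ics = make_ics('Open mic night', datetime(2013, 9, 5, 19, 0),
               datetime(2013, 9, 5, 21, 0), 'Main St. Cafe')
```

Any calendar program, and any aggregator like the one described above, can subscribe to the URL serving this text and stay current automatically.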

Editors tell me that people are delighted to learn that things can work this way. Deep down people have always felt that computers and networks ought to enable this kind of thing, and always felt vaguely disgruntled that they didn’t.

The change I envision happens when you see your church’s supper or your restaurant’s open mic or your school’s fundraiser or your city’s hazardous waste disposal schedule flowing automatically from your own calendar into other contexts. Then, and only then, the light bulb flicks on. You’ve often wondered why this doesn’t happen everywhere, all the time, for all kinds of information. Now you’ll know how it can.

I’m trying to create that transformative experience for as many people as I can. Writing more prose won’t move the needle so I mostly don’t these days, but below the fold are some of the essays I’ve written on this topic.

Why Johnny can’t syndicate

Indie theaters and open data

We bought the wrong kind of software?

A great disturbance in the force

Calendars in the cloud: No more copy and paste

Ann Arbor’s public schools are thinking like the web

Calendar feeds are a best practice for bookstores

A civic scorecard for public calendars

The long tail of the iCalendar ecosystem

Seven ways to think like the web

AOL’s Patch enshrines the event anti-pattern

In 1995 I attended Novell’s BrainShare conference in Salt Lake City. It was an interesting moment for a local-area-networking company on the cusp of the Internet era. Then-CEO Bob Frankenberg rose to the occasion. His keynote was my first introduction to the now-fashionable Internet of Things. Frankenberg talked up the idea of billions of connected appliances ranging from Las Vegas slot machines to refrigerators.

Almost two decades later that vision is coming into focus. It’ll happen, I’m sure. My vacuum cleaner, microwave, and stove will all be able to phone home. What worries me, though, is that the news they report is unlikely to be good news. Embedded chips won’t compensate for the crummy quality of today’s appliances. Things fail and break at an alarming rate.

That microwave oven we bought new in 2012? When the motherboard failed it was cheaper to junk the whole unit than to fix it. The new stove we bought last year? The ignition is failing and I have to reboot it to make it work. Rebooting a stove? That just ain’t right. And don’t even get me started on the many vacuum cleaners I’ve hated since I foolishly got rid of my mom’s vintage Hoover.

This isn’t just a first world problem, it’s a uniquely 21st-century problem. I’m sure we’ll have an Internet of Things. But I fear it will be an Internet of Things That Used To Work Better.

Next week I’ll be speaking at a conference on technology in higher education. The new online course platforms will, of course, be a central topic. I’m not an educator and I haven’t spent serious time using any of the MOOCs so how can I add value to a discussion of them?

Well, I’ve spent my whole career exploring and explaining many of the technologies that enable — or could enable — networked education. And while I was often seen as an innovator, the truth is that much of my work happened on the trailing edge, not the leading edge. The Network News Transfer Protocol (NNTP) was already ancient when I was experimenting with ways to adapt it for intranet collaboration. Videos of software in action had been possible long before I demonstrated the power of what we now call screencasting. And iCalendar, the venerable standard at the heart of my current effort to bootstrap a calendar web, has been around forever too.

There’s a reason I keep finding novel uses for these trailing-edge technologies. I see them not as closed products and services, but rather as toolkits that invite their users to adapt and extend them. In Democratizing Innovation, Eric von Hippel calls such things “user innovation toolkits” — products or services that, while being used for their intended purposes, also enable their users to express unanticipated intents and find ways to realize them.

Thanks to the philosophical foundations of the Internet — open standards, collaborative design, layered architecture — its technologies typically qualify as user innovation toolkits. That wasn’t true, though, for the Internet era’s first wave of educational technologies. That’s why my friends in that field led a rebellion against learning management systems and sought out their own innovation toolkits: BlueHost, del.icio.us, MediaWiki, WordPress.

My hunch is that those instincts will serve them well in the MOOC era. Educational technologists who thrive will do so by adroitly blending local culture with the global platforms. They’ll package their own offerings for reuse, they’ll find ways to compose hybrid services powered by a diverse mix of human and digital resources, and they’ll route around damage that blocks these outcomes.

These values, skills, and attitudes will help keep a diverse population of universities alive. And to the extent students at those universities absorb them, they’ll be among the most useful lessons learned there.

Once upon a time I’d go down to the kitchen in the morning, turn on the radio, and listen to NHPR while making breakfast. Now I turn on a Logitech Squeezebox to do the same thing. But this morning it failed.

The list of things that could have gone wrong includes:

1. The box itself (hardware, firmware)

2. My Internet router

3. My cable modem

4. My ISP

5. The Internet fabric between my ISP and Logitech’s ISP

6. The Squeezebox service itself

I guess most people would just turn off the Squeezebox, wait a while, and turn it back on. Sometimes I wish I were one of those people. But being me I had to put on my detective hat and work through the checklist. After resetting the box to factory defaults, reconnecting to my local router, and verifying that my connections through the Internet fabric were otherwise OK, I was left with #6 and called Logitech support.

Sure enough, their servers are down. The ETA for a fix is 2-4 hours. It’s tempting to attribute this failure to the complexity of our modern systems. Like when guys bitch about how you used to be able to work on your own car, and now you can’t.

It’s true that the Squeezebox is more complex than the radio I used to have. And the Internet is more complex than the terrestrial radio I used to listen to. But that isn’t really the problem. Dependency on a single point of failure is the real culprit. And it’s worse than I thought:

Logitech leaves Squeezebox fans wondering what’s next

The Squeezebox platform is officially discontinued, but Logitech hasn’t told current owners what they should expect from now on.

In my review of the Logitech UE Smart Radio, there’s a single parenthetical line mentioning that the company is discontinuing the Squeezebox line of products. Incredibly, that’s more than Logitech has officially said on the matter, leaving the passionate fans of the Squeezebox platform wondering what’s going to happen to their network audio streamers.


The point of failure is not the box, or the Internet, but the Squeezebox service. And it doesn’t have to be that way.

The Squeezebox service is just a gateway to other services: Internet radio, Pandora. Those services are all up and running. The Squeezebox could have been built to be able to connect directly to them. But it wasn’t. So when the Squeezebox service is down the box is dead. And if Logitech discontinues the service, the box is not just mostly dead, it’s all dead.

I want my next Internet radio to work like my pre-Internet radio. If it really breaks then OK, that happens. But otherwise it keeps working. Some stations might not be reachable at some times. OK, that happens too. But a single point of failure that kills the whole box? That’s just lame.

In Schneier as a technology leader Dave Winer reacts to this comment about SOAP made by Bruce Schneier at the 2002 Emerging Technology conference: “SOAP is a firewall-friendly protocol like a bullet is skull-friendly.” I’m pretty sure that was the quote because I jotted it down in the notes I took that day. It’s funny how things change. Back then, during the first flush of excitement about web services, SOAP was how the tech industry imagined web services would talk to one another. And REST was, as it still is, how in most cases they actually do talk to one another.

If REST had SOAP’s approval rating back then, Schneier might as easily have said: “REST is firewall-friendly like a bullet is skull-friendly.” That would have been equally true. And equally irrelevant. Because as it turns out, enabling web services to tunnel “securely” through HTTPS is the least of our concerns. If governments have compromised the endpoints, and/or the encryption protocol itself, all bets are off.

In Dave Winer’s notes from that 2002 talk he wrote:

Jon Udell, who I respect enormously said that Schneier was the leading authority on security. My impression, and it’s just an impression, is that this kind of praise has gone to his head.

Dave’s recollection of that conference is accurate. Bruce was snarky. He did bash Microsoft. He also put forward the visionary idea that we can best secure computer networks by managing risks the way the insurance industry does. That was a conclusion he reached after fundamentally rethinking his own long-held assumptions about the capabilities and relevance of cryptography. In my review of his book Secrets and Lies, which describes that intellectual journey, I wrote:

It’s a rare book that distills a lifetime of experience. It’s a rarer one that chronicles the kind of crisis and transformation that Bruce Schneier has undergone in the last few years. He’s emerged with a vital perspective. Cryptography is an amazingly powerful tool, but it’s only a tool. We need to use it for all it’s worth. But at the same time we have to be clear about its limitations, and locate its use within a real-world context that is scarier and more complicated than we dare imagine.

The people I most respect nowadays are those who can change their minds in response to new information and changing circumstances. In 2000, when Secrets and Lies was published, we didn’t dare imagine that our worst adversaries were elements of our own governments. Now that we know that’s true, can Bruce Schneier help lead the way forward? I hope so. And while I agree that a snarky attitude can be a problem, if deployed carefully in the right context — say, a congressional hearing — it might come in handy.

It’s been 3 months since I began rehab for the injury I wrote about in Learning to walk again. Six weeks ago I began working with a team of excellent physical therapists, and I’m making good progress. I’ve started to do a bit of running and biking, but only in an exploratory way. I’m far from being able to resume those activities at normal levels.

Meanwhile I’ve thought a lot about what it takes to make a major biomechanical correction. The effort required is at least as much mental as physical. To recover strength and range of motion in my right leg I’ve got to make sure that it moves in certain ways and not in other ways. That sucks up a huge amount of conscious attention. As a lifelong athlete I know how to marshal that kind of attention, and I’m highly motivated to recover, so there’s a good chance I’ll succeed. But it’s a significant challenge. The PTs say that many folks can’t sustain the long-term focus needed to turn something like this around.

So I continue to imagine a wearable device that would help people offload the supervisory function. I’m envisioning buttons you’d stick onto your major joints. They serve both diagnostic and corrective purposes. In diagnostic mode they do 3D motion capture. You give the data to your physical therapist, she uses it to confirm or enhance her analysis of your case. Then she beams a prescription to your buttons. In corrective mode they embody that prescription, vibrating or buzzing when you move in the wrong way.

Even when uninjured, of course, we’re not biomechanically perfect. We could all improve our posture and gait, and we’d all feel better for it. So an effective device-plus-service solution could help a lot of people.

Would it work? Beats me. I’d love to try but wearable computing isn’t really my sweet spot. If it’s yours, and if you take a crack at this, let me know how it goes.

On the fiftieth anniversary of the I Have a Dream speech I heard a couple of interviews with Clarence Jones, a close associate of Martin Luther King who had helped Dr. King write the speech. In a blog post about Clarence Jones’ book Behind the Dream I reflected on an observation that Jones made about Dr. King’s memory. It was Jones who conveyed the Letter from Birmingham Jail to the world. He was struck by the fact that the letter was full of literary quotations that Dr. King, having no reference materials at hand, recalled from memory. Jones wrote:

What amazed me was that there was absolutely no reference material for Martin to draw upon. There he was [in the Birmingham jail] pulling quote after quote from thin air. The Bible, yes, as might be expected from a Baptist minister, but also British prime minister William Gladstone, Mahatma Gandhi, William Shakespeare, and St. Augustine.

To which I added in my post:

It’s interesting to note that the quotes Clarence Jones seems to recall being in the letter aren’t all there. I don’t find Gladstone, Gandhi, or Shakespeare. I do find, along with St. Augustine, Socrates, Thomas Aquinas, Paul Tillich, Abraham Lincoln, Thomas Jefferson, T.S. Eliot and others.

I revisited that blog post today because I heard something new in one of those recent interviews with Jones. He was sure at the time that the FBI was recording all the phone conferences in which King, Jones, and others planned the march on Washington. He was later proved right, and eventually he acquired the transcripts. From the NPR story:

All these years later, Jones is actually grateful for those wiretaps. Thanks to the FBI, he has a vast — and accurate — archive of the time.

“If I have a fuzzy memory or hazy memory, I look at it, and there’s a verbatim transcript of the conversations about a certain event, a certain person or a certain problem we were discussing,” Jones says.

The jokes practically write themselves nowadays:

@pryderide: Lost all my iPhone contacts. No backup. Anyone got the number to #NSA…? #surveillance #privacy #Snowden

@tefanauss: Introducing nsync – A command-line tool for NSA’s free backup services

@conservJ: Wondering when the email & social media sites are going to change the wording of “lost password” to “Ask the NSA”.

But seriously. Now that we know about the cloud that works against us, where’s the cloud that works for us? It exists, but it’s always been marginal and is now in great peril.

I’ve long advocated for translucent or zero-knowledge systems that manage our data without being able to read it or surrender it.

It used to be apathy that mainly blocked adoption of these systems. Nobody saw why they mattered. Now that we do, they’re suddenly on the ropes. Lavabit. Silent Circle. Will SpiderOak be next?

I’m not into outlining, therefore I’m not a user of Fargo. But if I were I’d jump on the new encryption feature. Do it even if you don’t think you’re storing any secrets you need to protect. Do it just to prove that you can do it, and to challenge those who would deny that.

Last week some friends at a local marketing firm invited me to join them in Boston at a conference called Inbound. I’m glad I went. Not because I learned much about inbound marketing, whatever that is. (Is there a parallel conference called Outbound? How would it differ?) But mainly because I got to hear Kathy Sierra give a really useful talk on optimizing human performance.

The overt purpose of the talk was to invite “content marketers” to create (here I search in vain for another word) “content” that aims not only to engage and inform but also to help its “users” improve their performance in some domain. That’s a stretch goal for marketing. And I was delighted to see Kathy put it in front of an audience mainly focused on social media best practices, list segmentation, and landing page strategy.

Those aren’t my top concerns. But lately I’ve been working hard at learning to play music. And from that perspective three of Kathy’s themes resonated powerfully with me:

1. Tacit knowledge

2. Abundant examples

3. Deliberate practice

Kathy doesn’t use the phrase tacit knowledge but it’s a touchstone for me so that’s what I’ll call it. She gives the example of chick sexing, a famously hard task. Not many people are able to differentiate male from female chicks. Those who can don’t know, and can’t say, how they do it. Kathy talks about a study showing that novice chick sexers who hung around with experts picked up the skill rapidly by osmosis.

Key to the transmission of this tacit knowledge is an abundance of examples. Brains can use pattern matching to learn directly from other brains. It can happen under the radar, without conscious articulation of technique, but it requires a lot of data. You need to expose your brain to hundreds or thousands of examples of things other people do without knowing quite how they do them.

I think this helps explain why YouTube is so extraordinarily valuable to aspiring musicians. Pick a tune you want to learn. It’s wonderful to find a performance for your instrument that you can see and hear. But typically you won’t find just one; there will often be dozens. I’ve been aware for quite some time that my ability to see and hear many performances of the same tune, by many performers whose skills and styles vary, accelerates my learning to play the tune. Until now, though, I haven’t been clear about the reason why. Pattern matching requires a lot of data. For a range of skills that can be demonstrated in the medium of online video, YouTube is becoming a robust source of that data.

Of course we can’t learn everything by osmosis. We often need to drag tacit knowledge to the surface, study it, practice it, and then submerge it. As Herbert Simon and William Chase pointed out decades ago, and as Malcolm Gladwell more recently popularized, it can take a long time to acquire expertise this way. Ten thousand hours is the now-famous rule of thumb.

I’ve gotten a late start with music so I’m not sure I’ll be able to clock my ten thousand hours. But in any case the interesting question to me is how best to spend the time I’ve got. I know that I don’t practice as efficiently as I should, and that I’m prone to burning in bad habits. Kathy suggests the following strategy. Pick a tune, or section of a tune, and aim to be able to play it with 95% reliability after practicing for at most 3 sessions of at most 45 minutes each. If you don’t get there, stop. Move the goalpost. Pick a different tune, or a smaller section of the tune, or a slower tempo, and nail that.

It’s hard to be that disciplined. Especially when your head is full of so many examples of the tunes you want to play. Seeing and hearing whole tunes, at tempo, and trying to play along with them, is one crucial mode of learning. Analyzing passages note by note, and trying to perfect them (maybe with the help of a tool like Soundslice), is another. They’re complementary, and I need them both. So thanks, Kathy, for helping me think about how to combine them. And … welcome back!

If you received an email message from me during the early 2000s, it came with an attachment that likely puzzled or annoyed you. The attachment was my digital ID. In theory you could use it for a couple of purposes. One was to verify that I was the authentic sender of the message, and that the content of my message had not been altered en route.

You could also save my public key and then use it to send me an encrypted message. During the years I was routinely including my digital ID in outbound messages I think I received an encrypted reply once. Maybe twice.
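The verify step works because a signature is a function of both the message and a key, so any change to the message breaks it. Public-key signatures of the S/MIME sort aren’t in Python’s standard library, but the same flow can be illustrated with an HMAC over a shared secret — a different technique, substituted here only to show the shape of signing and verifying:

```python
import hashlib
import hmac

SECRET = b'shared-secret'  # hypothetical; S/MIME uses public/private key pairs

def sign(message):
    # Derive a signature from the message and the key.
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message, signature):
    # Recompute and compare in constant time; True only if unaltered.
    return hmac.compare_digest(sign(message), signature)

sig = sign(b'meet at noon')
assert verify(b'meet at noon', sig)     # authentic and intact
assert not verify(b'meet at one', sig)  # tampering detected
```

With public-key signatures the idea is the same, except that I sign with a private key only I hold, and you verify with the public key in my digital ID.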

I’ve always thought that everyone should have the option to communicate securely. Once there was little chance any ordinary person would be able to figure out how to do it. Even for me, as a tech journalist who had learned both the theory and practice of secure communication, it was a challenge to get things working. And when I did, who could I talk to? Only someone else who’d traveled the same path. The pool of potential communication partners was too small to matter.

But during the 2000s I hoped for, and then encouraged, developments that promised to democratize private communication. Mainstream email software implemented the relevant Internet standards and integrated the necessary encryption tools. Now if you and I wanted to communicate securely we could just tick some options in our email programs.

But it still hardly ever happened. Why not? It comes down to a question of defaults. In order to make use of the integrated encryption tools you needed a digital ID. The default was that you didn’t have one. And that’s still the default. You have to go out of your way to get a digital ID. You have to alter the default state of your system, and that’s something people mostly won’t do.

Broadly there are two kinds of secure communication. One kind is implemented in programs like Apple’s Mail and Microsoft’s Outlook. (You likely didn’t know that, and almost surely have never used it, but it’s there.) This kind of secure communication relies on a hierarchical system of trust. To use it you acquire a digital ID issued by, and backed by, some authority. It could be a government or a commercial provider; in practice it’s usually the latter. Your communication software is configured to trust certain of these providers. And to use it you must trust those providers too.

Another kind of secure communication relies on no higher authority. Instead communication partners trust one another directly, and exchange their digital IDs in pairwise (peer-to-peer) fashion. Among systems that use this approach, PGP (Pretty Good Privacy) is most notable. Another, now discontinued, was Groove.

Much ink has been spilled, and many pixels lit, debating hierarchical/centralized versus peer-to-peer/distributed methods of storing and transmitting data. Of course the definitions of these methods wind up being a bit fuzzy because hierarchical systems can have peer-to-peer aspects and vice versa.

I would bet that Edward Snowden, Laura Poitras, and Glenn Greenwald are using a purely peer-to-peer approach. When the stakes are astronomically high, and when your pool of communication partners is very small, that would be the only way to go. It would be a huge inconvenience. You’d need to massively alter the default state of an off-the-shelf computer to enable secure communication. But there’d be no choice. You’d have to do it.

Could standard systems come with software that communicates securely by default? Yes. Methods based on a hybrid of hierarchical and peer-to-peer trust could be practical and convenient. And they could deliver far better than the level of privacy we now enjoy by default, which is none. Would people want them? Until recently the answer was clearly no. Probably the answer is still no. But now, for the first time in my long experience with this topic, ordinary citizens may be ready to entertain the question. Please do.

I’ve written before about why I subscribe to the Ann Arbor Chronicle. As of today, my Ann Arbor Chronicle Number is 10. That’s the number of months I’ve been sending a modest donation to the Chronicle. The data comes from this page, which also gives me the Chronicle Numbers of some other Ann Arborites I’ve met in my travels there:

23 Bill Tozier
32 Peter Honeyman
50 Linda Feldt

Here’s a chart showing the growth in numbers of donors per month¹:

The Chronicle’s evolving policy on donation — and disclosure thereof — is, like everything else about the publication, thoughtful and nuanced.

I have two reasons to hope that the trend shown in that chart¹ will continue. One is professional. The Chronicle was the first publication to adopt the web of events model that I am trying to establish more widely. So the Chronicle’s success helps me advance that cause.

The other reason is personal. Though I’m a refugee from journalism I care deeply about it. I wish the kind of journalism practiced by the Chronicle on behalf of Ann Arbor could happen in my town. And in yours.

I’m glad there’s a foundation chartered to help journalism reinvent itself. But while I deem the Chronicle eminently worthy of funding from that source it has thus far received none. And maybe that’s a good thing. Over the long run only broad community support will be sustainable. So I hope the Chronicle achieves that, and shows other communities the way.

¹ July 2013 notwithstanding. But maybe, as of today, August 1, the July data remains incomplete?

² The spreadsheet behind the chart is here. And the code that created the spreadsheet is here:

import re

# Count donors per month from text copied from the Chronicle's
# subscribe page (input and output formats shown below).
with open('donors.txt') as f:
    s = f.read()

s = s.replace('\n\n', '\n')
months = re.findall(r'\d{4}.+', s)    # month headings like "2013 July"
lists = re.split(r'\d{4}.+', s)[1:]   # donor names following each heading
assert len(months) == len(lists)

for month, names in zip(months, lists):
    count = len(names.strip('\n').split('\n'))
    print('%3s\t%s' % (count, month))

Data copied/pasted from http://annarborchronicle.com/subscribe/ looks like this:

2013 July
Linda Diane Feldt
Nancy Quay
Jeremy Peters
Bruce Amrine
Mary Hathaway
Katherine Kahn
Sally Petersen
2013 June

output looks like this:

 93	2013 July
117	2013 June
120	2013 May
120	2013 April


Minds change rarely. I wonder a lot about what happens when they do, and I often ask people this question:

What’s something you believed deeply, for a long time, and then changed your mind about?

This often doesn’t go well. You’ll ask me, naturally enough, for an example — some belief that I once held and then revised. But since any topic I offer as an example intersects with your existing belief system in some way, we wind up talking about that topic and my original question goes unanswered.

It’s easy to discuss positions you support, or oppose, within the framework of your existing belief system. It’s much harder to consider how that belief system has changed, or could change.

Facebook has become a laboratory in which to observe this effect. I’m connected to people across the continuum of ideologies. At both extremes I see the same behavior. News stories are selected, refracted through the lens of ideology, and posted with comments that I can predict with great certainty. These utterances, by definition, convey little information. Nor are they meant to. Their purpose is to reinforce existing beliefs, not to examine them.

Echo chambers aren’t new, of course, and they have nothing to do with the Internet. We seek the like-minded and avoid the differently-minded. On Facebook, though, it’s not so easy to avoid the differently-minded. I regard that as a feature, not a bug. I’m open to re-examining my own beliefs and I welcome you to challenge them. But if you’re not similarly open to re-examining your own beliefs then I can’t take you seriously.

See also the Edge Annual Question for 2008: What Have You Changed Your Mind About?

Over the years I’ve had a number of overuse injuries: tendinitis from too much typing or mousing or music playing, a sore shoulder from too much swimming, painful knees and ankles from too much running. The key phrase here is “too much” and you’d think I’d learn my lesson eventually. But no. When I get excited about doing things I overdo and then, periodically, must back off and recover.

Often, during recovery, as I analyze what’s gone wrong, I find that the problem is not simply overuse but more specifically asymmetric use. Once, during a bout of pain in my right thumb joint, while pondering what the cause might be, I looked down at my hands while I was typing. Clatter clatter clatter BAM! Clatter clatter clatter BAM! The BAM was my right thumb pounding the space bar. I could feel a twinge every time I saw it happen.

In some cases, and that was one of them, shifting to a symmetrical pattern of use is helpful. (As is, of course, not pounding.) I’ve trained myself to alternate thumbs while typing (although, as I look down at my hands now I see that needs reinforcement), to breathe alternately left and right while swimming, to change mouse hands from time to time, to become a switch hitter with the garden shovel.

Every time I go through one of these retraining exercises I reflect on the difficulty of the process. The steps are:

- surface a bad habit that was unconscious

- consciously develop a good habit

- submerge the new habit back into the unconscious

In the latest iteration of the process I am relearning how to walk. It sounds ridiculous. It is ridiculous. But here’s what happened — or rather, my best current understanding of what happened. About a year ago I strained one of the adductors in my right groin. Usually things like that resolve with a bit of rest and some stretching. But this time it didn’t. Last summer I was having trouble lifting my right leg over the bicycle seat when mounting. When the same thing happened on the first ride of this season I knew something had to be corrected. But what?

An acquaintance who does massage asked me to observe the angles of my upper legs while cycling. Next time out I looked down and could hardly believe it. My right knee was out of line by at least 25 degrees! That misalignment was clearly aggravating the injury and not allowing it to heal.

When I got home I put cycling and running on hold and went back to basics. I stood in what felt like a normal position and looked down. Sure enough, my right foot was pointing out noticeably. When I aligned it with my left foot I felt like I was forcing it to pigeon-toe. Then I started to walk. Each step required a conscious effort to align the right foot. It didn’t feel correct. But I could see that it was.

So that’s how it’s gone for the past 5 days. Instead of cycling or running I take the dogs for a hike and focus on alignment. I have to supervise my right foot closely and, when I go up and down over obstacles, I have to supervise my right knee to make sure it stays aligned too.

I can tell that it’s working. But clearly a bad habit that took a year to develop will take more than a few days to correct.

Every time something like this happens I wonder how I could fail to notice something so fundamental. But it really isn’t surprising. We can’t consciously monitor how we use our bodies all the time, and bad habits develop gradually. If there’s any application of wearable computing that will matter to me I think it will be the one that warns me when these kinds of bad habits begin to develop, and helps me correct them. We’re not great analysts of the forces in play as we use our bodies, but computers could be.

Here’s Andy Baio’s farewell to Upcoming, a service I’ve been involved with for a decade. In a March 2005 blog post I wrote about what I hoped Upcoming would become, in my town and elsewhere, and offered some suggestions to help it along. One was a request for an API which Upcoming then lacked. Andy soon responded with an API. It was one of the pillars of my Elm City project for a long while until, as Andy notes in his farewell post, it degraded and became useless.

Today I pulled the plug and decoupled Upcoming from all the Elm City hubs.

In 2009 Andy and I both spoke at a conference in London. Andy was there to announce a new project that would help people crowdsource funding for creative projects. I was there to announce a project that would help people crowdsource public calendars. Now, of course, Kickstarter is a thing. The Elm City project not so much. But I’m pretty sure I’m on the right track, I’m lucky to be in a position to keep pursuing the idea, and although it’s taking longer than I ever imagined I’m making progress. Success, if it comes, won’t look like Upcoming did in its heyday, but it will be a solution to the same problem that Upcoming addressed — a problem we’ve yet to solve.

That same March 2005 blog post resonates with me for another reason. That was the day I walked around my town photographing event flyers on shop windows and kiosks. When I give presentations about the Elm City project I still show a montage of those images. They’re beautiful, and they’re dense with information that isn’t otherwise accessible.

Event flyers outperform web calendars, to this day, because they empower groups and organizations to be the authoritative sources for information about their public events, and to bring those events to the attention of the public. The web doesn’t meet that need yet but it can, and I’m doing my best to see that it does.

My next community calendar workshop will be at the Peninsula Fine Arts Center in Newport News, on Tuesday April 23 at 6PM. It’s for groups and organizations in the Hampton Roads region of Virginia, including Chesapeake, Hampton, Newport News, Norfolk, Portsmouth, Suffolk, Virginia Beach, Williamsburg, and Yorktown. If you’re someone there who’d like to help change the way public calendars work in your region, please sign up on EventBrite so we know you’re coming, or contact me directly.

Here’s the pitch from the workshop’s sponsor and host, the Daily Press:

The Community Calendar Project

It’s about time someone came up with a way to get all community events in one place so everyone, everywhere can find out what’s going on at any given time, on any given day.

It’s about time creators of those events – the people, agencies and organizations who work so hard to bring quality education, support and entertainment to the community – had a way to get their messages out there effortlessly.

It’s about time the public can find out about the happenings and events they really care about and never miss an important event again.

AND it’s “time” – or the lack of it – that makes this community initiative being spearheaded by the Daily Press so valuable to everyone. This community calendar will SAVE time – for the event creators, the event seekers and the websites and platforms that work to make this information available.

The Daily Press is partnering with Jon Udell of Microsoft to bring this project to Hampton Roads and make it among the first communities in the country to have an easily searchable, FREE database of events available to the public. And we want to get all of Hampton Roads involved. The only thing required to participate is to agree to use an iCalendar formatted calendar on your own websites or to create events through Facebook. That’s it. Participation guaranteed.

What is an iCalendar? Simply, iCalendar is a computer file format that allows Internet users to exchange calendars with other Internet users. iCalendar is used and supported by personal calendars such as Google Calendar, Apple Calendar (formerly iCal), Microsoft Outlook and Hotmail, Lotus Notes, Yahoo! Calendar, and others, and by web content management systems including WordPress, Drupal, Joomla, and others.

Many of you may already use one of these applications to publish your calendars online, and that is great! That means you can already participate in the calendar network we are bringing together. The rest of you can easily convert and get on board. We’ll tell you how.

On April 23 you are invited to a presentation of the Community Calendar Project. Jon will be on hand to tell you what it is, why it matters and how to get involved. The gathering will take place at 6 p.m. at the Peninsula Fine Arts Center, 101 Museum Drive (across from The Mariners’ Museum) in Newport News.

Light refreshments will be served. Get your FREE tickets so we know how many are attending.

Hope to see you there.
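For readers curious what the iCalendar format mentioned in the pitch actually looks like, here is a minimal, hypothetical file describing a single event — in practice the calendar applications listed above generate this automatically:

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example//Community Calendar//EN
BEGIN:VEVENT
UID:ccp-workshop-20130423@example.org
DTSTAMP:20130401T120000Z
DTSTART;TZID=America/New_York:20130423T180000
SUMMARY:The Community Calendar Project workshop
LOCATION:Peninsula Fine Arts Center\, 101 Museum Drive\, Newport News
END:VEVENT
END:VCALENDAR
```

Because it’s a simple, widely supported text format, any publisher’s feed of such events can be merged into a community hub without manual retyping.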

My dad died of congestive heart failure in 2009. The last weeks of his life weren’t what they could have been had we known enough to get him into hospice care. But we didn’t know, and I’ve felt ashamed about that.

If we had it to do over again things would be very different. We’d have brought him home much sooner, made him comfortable, helped him work through a life review, hung out with him, heard and said some things that needed to be heard and said.

As it was we only managed to bring him home for his last day. It was better than not bringing him home at all, but not much better, at least not for him. For us, though, it was transformative. Two generations of our family — my wife and I, our children — had never seen the kind of death that was normal until the modern era. We didn’t know why or how to shift gears from medical treatment to palliative care. Now we do and we’re deeply changed — Luann especially. She’s become a hospice volunteer who comforts the dying, supports their families, and counsels survivors.

From her I’ve learned a lot about hospice care. What happened to us, it turns out, is typical. Many people don’t realize how comfortable a dying person can often be at home with proper medication. As a result many delay until the bitter end, and miss out on the emotional and psychological richness that’s possible in a home hospice setting.

A big reason for the delay is the chasm that divides the culture of hospitals from the culture of hospice. Nobody in the hospital advised us to bring dad home a month before he died. A social worker mentioned it, but dad didn’t know what it could mean to make that choice, we didn’t know enough to advocate for it, and medical professionals speak with vastly more authority than do social workers in our current regime.

What hospitals don’t know about hospice is astonishing. Last night, while reading an anthology of science writing, I happened on an essay by Atul Gawande, a physician/writer who, like Oliver Sacks, Perri Klass, and Abraham Verghese, opens windows into the medical world. In 2010, the year after our experience with my dad, he wrote a New Yorker piece called Letting Go that included these revelations:

One Friday morning this spring, I went on patient rounds with Sarah Creed, a nurse with the hospice service that my hospital system operates. I didn’t know much about hospice. I knew that it specialized in providing “comfort care” for the terminally ill, sometimes in special facilities, though nowadays usually at home. I knew that, in order for a patient of mine to be eligible, I had to write a note certifying that he or she had a life expectancy of less than six months. And I knew few patients who had chosen it, except maybe in their very last few days, because they had to sign a form indicating that they understood their disease was incurable and that they were giving up on medical care to stop it. The picture I had of hospice was of a morphine drip. It was not of this brown-haired and blue-eyed former I.C.U. nurse with a stethoscope, knocking on Lee Cox’s door on a quiet street in Boston’s Mattapan neighborhood.


Like many people, I had believed that hospice care hastens death, because patients forgo hospital treatments and are allowed high-dose narcotics to combat pain. But studies suggest otherwise. In one, researchers followed 4,493 Medicare patients with either terminal cancer or congestive heart failure. They found no difference in survival time between hospice and non-hospice patients with breast cancer, prostate cancer, and colon cancer. Curiously, hospice care seemed to extend survival for some patients; those with pancreatic cancer gained an average of three weeks, those with lung cancer gained six weeks, and those with congestive heart failure gained three months.

These things once surprised me too. Now, thanks to our brief hospice experience with dad and Luann’s volunteer work since, I take them for granted. And while I’ve felt ashamed not to have arrived at this understanding sooner, in time to help dad, I guess I should cut myself some slack. Atul Gawande didn’t get there any sooner than me.

How could that be? How could a leading medical practitioner (and explainer) reach mid-career lacking such basic and useful knowledge? All too easily when we carve the world into fields of knowledge and then build walls around them.

Last month I wrote a column for Wired.com, Rebooting web comments, that attracted some unsavory feedback. Had the flamers read beyond the second paragraph they might have seen that I wasn’t insisting everyone must use verifiable identities online. But they didn’t. So I wrote another column last week, Own your words, to clarify my position.

My first blogging tool, back in 2001, was Dave Winer’s Radio UserLand. One of Dave’s mantras was: “Own your words.” As the blogosphere became a conversational medium, I saw what that could mean. Radio UserLand didn’t support comments. That turned out to be a good constraint to embrace. When conversation emerged, as it always will in any system of communication, it was a cross-blog affair. I’d quote something from your blog on mine, and discuss it. You’d notice, and perhaps write something on your blog referring back to mine.

This cross-blog conversational mode had an interesting property: You owned your words. Everything you wrote went into your own online space, was bound to your identity, became part of your permanent record. As a result, discourse tended to be more civil than what often transpired in Usenet newsgroups or web forums. In those kinds of online spaces, your sense of identity is attenuated. You may or may not be pseudonymous, but either way the things you say don’t stick to you in the same way they do if you say them in your own permanent online space.

Later, blogs evolved forum-style comments, which concentrated discussion but recreated the old problems: attenuation of identity, loss of ownership of data. Then came Twitter and Facebook and, so the story goes, “social killed the blogosphere.” It was easier to read and write in those online spaces, blogging declined, and Google’s recent decision to retire its RSS reader is widely regarded as the nail in the blogosphere’s coffin.

Of course that’s wrong. One of the staples of tech punditry is the periodic declaration that something — Unix, the Web, Microsoft, Apple, the blogosphere — is dead.

Will Google Reader’s exit spell the end of the blogosphere or its rebirth? Nobody knows, and since I’m no longer in the pageview business I won’t even hazard a prediction. Instead I want to highlight something that’s bigger than blogs, bigger even than social media. Owning your words is a fundamental principle. It seemed new at the dawn of the blogosphere but its roots ran deeper. They were woven into the fabric of the Internet which, at its core, is a network of peers.

For technical reasons I won’t explore here, it’s not possible (or, I should say, not believed possible) for our computers to be first-class peers on that network, as early Internet-connected computers were. But it is possible for various of our avatars — our websites, our blogs, our calendars — to represent us as first-class peers. That means:

- They use domain names that we own

- They converse with other peers in ways that we enable and can control

- They store data in systems that we authorize and can manage

Your Twitter and Facebook avatars are not first-class peers on the network in these ways. Which isn’t to say they aren’t useful. Second-class peers are incredibly useful, largely because they enable us to avoid the complexities that make it challenging to operate first-class peers.

Those challenges are real. But they’re not insurmountable unless we believe that they are. I don’t believe that. I hope you won’t. What some of us learned at the turn of the millennium — about how to use first-class peers called blogs, and how to converse with other first-class peers — gave us a set of understandings that remain critical to the effective and democratic colonization of the virtual realm. It’s unfinished business, and it may never be finished, but don’t let the tech pundits or anyone else convince you it doesn’t matter. It does.
