
The everyday exchange of virtual objects

A recent Twitter exchange reminded me of a 2005 blog post that included this Ray Ozzie quote:

Each fall, as I manually enter the entire Celtics season schedule, my company’s holidays and my children’s school calendars into my own personal calendar, I am again reminded how ridiculous it is that The Net has not yet ubiquitously embraced the everyday exchange of virtual objects so basic as calendars and as vCards – which can also likewise be subscribed-to, aggregated into Contact Lists and auto-updated via personal RSS feeds. Bizarre.

We are, of course, still in that ridiculous situation. Dan Brickley asks:

@judell @rozzie any thoughts on why? Technicalities of iCalendar format or something larger?

I can’t answer in 140 characters so I’ll try to answer here. Although I can’t really answer here either. A while ago I concluded that writing prose, at any length, wouldn’t help. I needed to write code, so that’s what I’ve mainly been up to. But from time to time it’s good to pause and reflect.

So, are “technicalities of the iCalendar format” the problem? No. And by no I mean NO, NO, A THOUSAND TIMES NO! Members of the geek tribe really want that to be the problem. We look at the spec, crafted in 1998, with its antique pre-XML format and its quaint line-folding, and we think: Seriously?

But that’s really not the problem. To put this in Chomskyan terms, there’s deep structure and surface structure. iCalendar’s deep structure comprehends dates, times, timezones, recurrence, and a wealth of related things necessary for reliable exchange of time-ordered information. Mapping that deep structure onto other surface structures is something you can do, and people have done, but that hardly matters. Today’s calendar software can convey the deep structure perfectly well using the original format. But for the most part it doesn’t get used that way, and that’s the larger issue.
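To make that deep structure concrete, here’s a minimal sketch using the third-party Python icalendar package. The event details are invented for illustration; the point is that a timezone-aware, recurring event expresses itself perfectly well in the original 1998 wire format:

from datetime import datetime
from zoneinfo import ZoneInfo

from icalendar import Calendar, Event

cal = Calendar()
cal.add('prodid', '-//example//calendar//EN')
cal.add('version', '2.0')

event = Event()
event.add('summary', 'Community supper')
# An explicit timezone travels with the event.
tz = ZoneInfo('America/New_York')
event.add('dtstart', datetime(2013, 9, 5, 18, 0, tzinfo=tz))
event.add('dtend', datetime(2013, 9, 5, 20, 0, tzinfo=tz))
# Recurrence: every Thursday, ten occurrences.
event.add('rrule', {'FREQ': 'WEEKLY', 'BYDAY': 'TH', 'COUNT': 10})
cal.add_component(event)

# Serialize to RFC-2445 text, quaint line-folding and all.
print(cal.to_ical().decode('utf-8'))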

If you are an ordinary person living in one of the places where the system I’m working on is up and running, and you want to post an event to the newspaper’s community calendar, you will be invited to consider a possibility that you did not even know existed. Don’t email us a copy of your event info, the newspaper will say. And don’t input a copy of it into our database either. Instead manage your public schedule of events using your own calendar program, whatever that may be, then publish it to the web and give us the URL of that calendar feed. You’ll be the authoritative source of the information. You’ll type it in once, it’ll show up on your website, your audience can get it directly onto their personal calendars, and we’ll get it into the newspaper automatically too. If you change a time or location, the change is reflected automatically in all those contexts.
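The consuming side of that bargain is just as simple. Here’s a hedged sketch of roughly what a hub does with the feed URL you hand it — the URL is hypothetical, and the icalendar package is again assumed:

from urllib.request import urlopen

from icalendar import Calendar

FEED_URL = 'https://example.org/events.ics'  # hypothetical published feed

with urlopen(FEED_URL) as response:
    cal = Calendar.from_ical(response.read())

# Each VEVENT arrives ready to merge, categorize, and republish.
for event in cal.walk('VEVENT'):
    print(event.get('summary'), event.decoded('dtstart'))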

Editors tell me that people are delighted to learn that things can work this way. Deep down people have always felt that computers and networks ought to enable this kind of thing, and always felt vaguely disgruntled that they didn’t.

The change I envision happens when you see your church’s supper or your restaurant’s open mic or your school’s fundraiser or your city’s hazardous waste disposal schedule flowing automatically from your own calendar into other contexts. Then, and only then, the light bulb flicks on. You’ve often wondered why this doesn’t happen everywhere, all the time, for all kinds of information. Now you’ll know how it can.

I’m trying to create that transformative experience for as many people as I can. Writing more prose won’t move the needle so I mostly don’t these days, but below the fold are some of the essays I’ve written on this topic.


Why Johnny can’t syndicate

Indie theaters and open data

We bought the wrong kind of software?

A great disturbance in the force

Calendars in the cloud: No more copy and paste

Ann Arbor’s public schools are thinking like the web

Calendar feeds are a best practice for bookstores

A civic scorecard for public calendars

The long tail of the iCalendar ecosystem

Seven ways to think like the web

AOL’s Patch enshrines the event anti-pattern

The Internet of Things That Used To Work Better

In 1995 I attended Novell’s BrainShare conference in Salt Lake City. It was an interesting moment for a local-area-networking company on the cusp of the Internet era. Then-CEO Bob Frankenberg rose to the occasion. His keynote was my first introduction to the now-fashionable Internet of Things. Frankenberg talked up the idea of billions of connected appliances ranging from Las Vegas slot machines to refrigerators.

Almost two decades later that vision is coming into focus. It’ll happen, I’m sure. My vacuum cleaner, microwave, and stove will all be able to phone home. What worries me, though, is that the news they report is unlikely to be good news. Embedded chips won’t compensate for the crummy quality of today’s appliances. Things fail and break at an alarming rate.

That microwave oven we bought new in 2012? When the motherboard failed it was cheaper to junk the whole unit than to fix it. The new stove we bought last year? The ignition is failing and I have to reboot it to make it work. Rebooting a stove? That just ain’t right. And don’t even get me started on the many vacuum cleaners I’ve hated since I foolishly got rid of my mom’s vintage Hoover.

This isn’t just a first world problem, it’s a uniquely 21st-century problem. I’m sure we’ll have an Internet of Things. But I fear it will be an Internet of Things That Used To Work Better.

MOOCs need to be user innovation toolkits

Next week I’ll be speaking at a conference on technology in higher education. The new online course platforms will, of course, be a central topic. I’m not an educator and I haven’t spent serious time using any of the MOOCs, so how can I add value to a discussion of them?

Well, I’ve spent my whole career exploring and explaining many of the technologies that enable — or could enable — networked education. And while I was often seen as an innovator, the truth is that much of my work happened on the trailing edge, not the leading edge. The Network News Transfer Protocol (NNTP) was already ancient when I was experimenting with ways to adapt it for intranet collaboration. Videos of software in action had been possible long before I demonstrated the power of what we now call screencasting. And iCalendar, the venerable standard at the heart of my current effort to bootstrap a calendar web, has been around forever too.

There’s a reason I keep finding novel uses for these trailing-edge technologies. I see them not as closed products and services, but rather as toolkits that invite their users to adapt and extend them. In Democratizing Innovation, Eric von Hippel calls such things “user innovation toolkits” — products or services that, while being used for their intended purposes, also enable their users to express unanticipated intents and find ways to realize them.

Thanks to the philosophical foundations of the Internet — open standards, collaborative design, layered architecture — its technologies typically qualify as user innovation toolkits. That wasn’t true, though, for the Internet era’s first wave of educational technologies. That’s why my friends in that field led a rebellion against learning management systems and sought out their own innovation toolkits: BlueHost, del.icio.us, MediaWiki, WordPress.

My hunch is that those instincts will serve them well in the MOOC era. Educational technologists who thrive will do so by adroitly blending local culture with the global platforms. They’ll package their own offerings for reuse, they’ll find ways to compose hybrid services powered by a diverse mix of human and digital resources, and they’ll route around damage that blocks these outcomes.

These values, skills, and attitudes will help keep a diverse population of universities alive. And to the extent students at those universities absorb them, they’ll be among the most useful lessons learned there.

Single points of failure

Once upon a time I’d go down to the kitchen in the morning, turn on the radio, and listen to NHPR while making breakfast. Now I turn on a Logitech Squeezebox to do the same thing. But this morning it failed.

The list of things that could have gone wrong includes:

1. The box itself (hardware, firmware)

2. My Internet router

3. My cable modem

4. My ISP

5. The Internet fabric between my ISP and Logitech’s ISP

6. The Squeezebox service itself

I guess most people would just turn off the Squeezebox, wait a while, and turn it back on. Sometimes I wish I were one of those people. But being me I had to put on my detective hat and work through the checklist. After resetting the box to factory defaults, reconnecting to my local router, and verifying that my connections through the Internet fabric were otherwise OK, I was left with #6 and called Logitech support.
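For the record, the first five layers of that checklist are scriptable. Here’s a rough sketch, with hypothetical hosts standing in for my router and for the vendor’s service endpoint:

import socket

CHECKS = [
    ('local router', '192.168.1.1', 80),        # assumption: router's LAN address
    ('internet fabric', 'www.google.com', 80),  # any well-connected host will do
    ('squeezebox service', 'service.example.com', 443),  # hypothetical endpoint
]

for label, host, port in CHECKS:
    try:
        socket.create_connection((host, port), timeout=5).close()
        print('%s: OK' % label)
    except OSError as e:
        print('%s: FAILED (%s)' % (label, e))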

Sure enough, their servers are down. The ETA for a fix is 2-4 hours. It’s tempting to attribute this failure to the complexity of our modern systems. Like when guys bitch about how you used to be able to work on your own car, and now you can’t.

It’s true that the Squeezebox is more complex than the radio I used to have. And the Internet is more complex than the terrestrial radio I used to listen to. But that isn’t really the problem. Dependency on a single point of failure is the real culprit. And it’s worse than I thought:

Logitech leaves Squeezebox fans wondering what’s next

The Squeezebox platform is officially discontinued, but Logitech hasn’t told current owners what they should expect from now on.

In my review of the Logitech UE Smart Radio, there’s a single parenthetical line mentioning that the company is discontinuing the Squeezebox line of products. Incredibly, that’s more than Logitech has officially said on the matter, leaving the passionate fans of the Squeezebox platform wondering what’s going to happen to their network audio streamers.

CNET

The point of failure is not the box, or the Internet, but the Squeezebox service. And it doesn’t have to be that way.

The Squeezebox service is just a gateway to other services: Internet radio, Pandora. Those services are all up and running. The Squeezebox could have been built to be able to connect directly to them. But it wasn’t. So when the Squeezebox service is down the box is dead. And if Logitech discontinues the service, the box is not just mostly dead, it’s all dead.

I want my next Internet radio to work like my pre-Internet radio. If it really breaks then OK, that happens. But otherwise it keeps working. Some stations might not be reachable at some times. OK, that happens. But there’s no single point of failure in the fabric. A box that dies when one vendor’s service dies? That’s just lame.

Friendly firewalls

In Schneier as a technology leader Dave Winer reacts to this comment about SOAP made by Bruce Schneier at the 2002 Emerging Technology conference: “SOAP is a firewall-friendly protocol like a bullet is skull-friendly.” I’m pretty sure that was the quote because I jotted it down in the notes I took that day. It’s funny how things change. Back then, during the first flush of excitement about web services, SOAP was how the tech industry imagined web services would talk to one another. And REST was, as it still is, how in most cases they actually do talk to one another.

If REST had SOAP’s approval rating back then, Schneier might as easily have said: “REST is firewall-friendly like a bullet is skull-friendly.” That would have been equally true. And equally irrelevant. Because as it turns out, enabling web services to tunnel “securely” through HTTPS is the least of our concerns. If governments have compromised the endpoints, and/or the encryption protocol itself, all bets are off.

In Dave Winer’s notes from that 2002 talk he wrote:

Jon Udell, who I respect enormously said that Schneier was the leading authority on security. My impression, and it’s just an impression, is that this kind of praise has gone to his head.

Dave’s recollection of that conference is accurate. Bruce was snarky. He did bash Microsoft. He also put forward the visionary idea that we can best secure computer networks by managing risks the way the insurance industry does. That was a conclusion he reached after fundamentally rethinking his own long-held assumptions about the capabilities and relevance of cryptography. In my review of his book Secrets and Lies, which describes that intellectual journey, I wrote:

It’s a rare book that distills a lifetime of experience. It’s a rarer one that chronicles the kind of crisis and transformation that Bruce Schneier has undergone in the last few years. He’s emerged with a vital perspective. Cryptography is an amazingly powerful tool, but it’s only a tool. We need to use it for all it’s worth. But at the same time we have to be clear about its limitations, and locate its use within a real-world context that is scarier and more complicated than we dare imagine.

The people I most respect nowadays are those who can change their minds in response to new information and changing circumstances. In 2000, when Secrets and Lies was published, we didn’t dare imagine that our worst adversaries were elements of our own governments. Now that we know that’s true, can Bruce Schneier help lead the way forward? I hope so. And while I agree that a snarky attitude can be a problem, if deployed carefully in the right context — say, a congressional hearing — it might come in handy.

A wearable physical therapy prescription

It’s been 3 months since I began rehab for the injury I wrote about in Learning to walk again. Six weeks ago I began working with a team of excellent physical therapists, and I’m making good progress. I’ve started to do a bit of running and biking, but only in an exploratory way. I’m far from being able to resume those activities at normal levels.

Meanwhile I’ve thought a lot about what it takes to make a major biomechanical correction. The effort required is at least as much mental as physical. To recover strength and range of motion in my right leg I’ve got to make sure that it moves in certain ways and not in other ways. That sucks up a huge amount of conscious attention. As a lifelong athlete I know how to marshal that kind of attention, and I’m highly motivated to recover, so there’s a good chance I’ll succeed. But it’s a significant challenge. The PTs say that many folks can’t sustain the long-term focus needed to turn something like this around.

So I continue to imagine a wearable device that would help people offload the supervisory function. I’m envisioning buttons you’d stick onto your major joints. They serve both diagnostic and corrective purposes. In diagnostic mode they do 3D motion capture. You give the data to your physical therapist, she uses it to confirm or enhance her analysis of your case. Then she beams a prescription to your buttons. In corrective mode they embody that prescription, vibrating or buzzing when you move in the wrong way.

Even when uninjured, of course, we’re not biomechanically perfect. We could all improve our posture and gait, and we’d all feel better for it. So an effective device-plus-service solution could help a lot of people.

Would it work? Beats me. I’d love to try but wearable computing isn’t really my sweet spot. If it’s yours, and if you take a crack at this, let me know how it goes.

Why encrypt? Because (for now) we can.

On the fiftieth anniversary of the I Have a Dream speech I heard a couple of interviews with Clarence Jones, a close associate of Martin Luther King who had helped Dr. King write the speech. In a blog post about Clarence Jones’ book Behind the Dream I reflected on an observation that Jones made about Dr. King’s memory. It was Jones who conveyed the Letter from Birmingham Jail to the world. He was struck by the fact that the letter was full of literary quotations that Dr. King, having no reference materials at hand, recalled from memory. Jones wrote:

What amazed me was that there was absolutely no reference material for Martin to draw upon. There he was [in the Birmingham jail] pulling quote after quote from thin air. The Bible, yes, as might be expected from a Baptist minister, but also British prime minister William Gladstone, Mahatma Gandhi, William Shakespeare, and St. Augustine.

To which I added in my post:

It’s interesting to note that the quotes Clarence Jones seems to recall being in the letter aren’t all there. I don’t find Gladstone, Gandhi, or Shakespeare. I do find, along with St. Augustine, Socrates, Thomas Aquinas, Paul Tillich, Abraham Lincoln, Thomas Jefferson, T.S. Eliot and others.

I revisited that blog post today because I heard something new in one of those recent interviews with Jones. He was sure at the time that the FBI was recording all the phone conferences in which King, Jones, and others planned the march on Washington. He was later proved right, and eventually he acquired the transcripts. From the NPR story:

All these years later, Jones is actually grateful for those wiretaps. Thanks to the FBI, he has a vast — and accurate — archive of the time.

“If I have a fuzzy memory or hazy memory, I look at it, and there’s a verbatim transcript of the conversations about a certain event, a certain person or a certain problem we were discussing,” Jones says.

The jokes practically write themselves nowadays:

@pryderide: Lost all my iPhone contacts. No backup. Anyone got the number to #NSA…? #surveillance #privacy #Snowden

@tefanauss: Introducing nsync – A command-line tool for NSA’s free backup services

@conservJ: Wondering when the email & social media sites are going to change the wording of “lost password” to “Ask the NSA”.

But seriously. Now that we know about the cloud that works against us, where’s the cloud that works for us? It exists, but it’s always been marginal and is now in great peril.

I’ve long advocated for translucent or zero-knowledge systems that manage our data without being able to read it or surrender it.
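The core idea is simple enough to sketch. Using the Python cryptography package’s Fernet recipe, a client encrypts before uploading, so the service stores bytes it cannot read:

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays on the client, never uploaded
cipher = Fernet(key)

# The service stores only this opaque token.
token = cipher.encrypt(b'my private appointments')

# Only the key holder can reverse it.
assert cipher.decrypt(token) == b'my private appointments'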

It used to be apathy that mainly blocked adoption of these systems. Nobody saw why they mattered. Now that we do, they’re suddenly on the ropes. Lavabit. Silent Circle. Will SpiderOak be next?

I’m not into outlining, so I’m not a user of Fargo. But if I were I’d jump on the new encryption feature. Do it even if you don’t think you’re storing any secrets you need to protect. Do it just to prove that you can do it, and to challenge those who would deny that.

Tacit knowledge, abundant examples, and deliberate practice

Last week some friends at a local marketing firm invited me to join them in Boston at a conference called Inbound. I’m glad I went. Not because I learned much about inbound marketing, whatever that is. (Is there a parallel conference called Outbound? How would it differ?) But mainly because I got to hear Kathy Sierra give a really useful talk on optimizing human performance.

The overt purpose of the talk was to invite “content marketers” to create (here I search in vain for another word) “content” that aims not only to engage and inform, but also to help its “users” improve their performance in some domain. That’s a stretch goal for marketing. And I was delighted to see Kathy put it in front of an audience mainly focused on social media best practices, list segmentation, and landing page strategy.

Those aren’t my top concerns. But lately I’ve been working hard at learning to play music. And from that perspective three of Kathy’s themes resonated powerfully with me:

1. Tacit knowledge

2. Abundant examples

3. Deliberate practice

Kathy doesn’t use the phrase tacit knowledge but it’s a touchstone for me so that’s what I’ll call it. She gives the example of chick sexing, a famously hard task. Not many people are able to differentiate male from female chicks. Those who can don’t know, and can’t say, how they do it. Kathy talks about a study showing that novice chick sexers who hung around with experts picked up the skill rapidly by osmosis.

Key to the transmission of this tacit knowledge is an abundance of examples. Brains can use pattern matching to learn directly from other brains. It can happen under the radar, without conscious articulation of technique, but it requires a lot of data. You need to expose your brain to hundreds or thousands of examples of things other people do without knowing quite how they do them.

I think this helps explain why YouTube is so extraordinarily valuable to aspiring musicians. Pick a tune you want to learn. It’s wonderful to find a performance for your instrument that you can see and hear. But typically you won’t find just one; there will often be dozens. I’ve been aware for quite some time that my ability to see and hear many performances of the same tune, by many performers, whose skills and styles vary, accelerates my learning to play the tune. Until now, though, I haven’t been clear about the reason why. Pattern matching requires a lot of data. For a range of skills that can be demonstrated in the medium of online video, YouTube is becoming a robust source of that data.

Of course we can’t learn everything by osmosis. We often need to drag tacit knowledge to the surface, study it, practice it, and then submerge it. As Herbert Simon and William Chase pointed out decades ago, and as Malcolm Gladwell more recently popularized, it can take a long time to acquire expertise this way. Ten thousand hours is the now-famous rule of thumb.

I’ve gotten a late start with music so I’m not sure I’ll be able to clock my ten thousand hours. But in any case the interesting question to me is how best to spend the time I’ve got. I know that I don’t practice as efficiently as I should, and that I’m prone to burning in bad habits. Kathy suggests the following strategy. Pick a tune, or section of a tune, and aim to be able to play it with 95% reliability after practicing for at most 3 sessions of at most 45 minutes each. If you don’t get there, stop. Move the goalpost. Pick a different tune, or a smaller section of the tune, or a slower tempo, and nail that.

It’s hard to be that disciplined. Especially when your head is full of so many examples of the tunes you want to play. Seeing and hearing whole tunes, at tempo, and trying to play along with them, is one crucial mode of learning. Analyzing passages note by note, and trying to perfect them (maybe with the help of a tool like Soundslice), is another. They’re complementary, and I need them both. So thanks, Kathy, for helping me think about how to combine them. And … welcome back!

If we want private communication we can have it

If you received an email message from me during the early 2000s, it came with an attachment that likely puzzled or annoyed you. The attachment was my digital ID. In theory you could use it for a couple of purposes. One was to verify that I was the authentic sender of the message, and that the content of my message had not been altered en route.

You could also save my public key and then use it to send me an encrypted message. During the years I was routinely including my digital ID in outbound messages I think I received an encrypted reply once. Maybe twice.
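Both purposes are easy to demonstrate with raw public-key primitives. This sketch uses the Python cryptography package rather than real S/MIME machinery, but the division of labor is the same:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # the part carried in the attachment

message = b'Here is my column for Friday.'

# Purpose 1: verify the sender, and that the message was not altered.
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())  # raises if tampered

# Purpose 2: encrypt a reply only the private-key holder can read.
reply = public_key.encrypt(b'Received, thanks.', oaep)
assert private_key.decrypt(reply, oaep) == b'Received, thanks.'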

I’ve always thought that everyone should have the option to communicate securely. Once, there was little chance any ordinary person would be able to figure out how to do it. Even for me, as a tech journalist who had learned both the theory and practice of secure communication, it was a challenge to get things working. And when I did, who could I talk to? Only someone else who’d traveled the same path. The pool of potential communication partners was too small to matter.

But during the 2000s I hoped for, and then encouraged, developments that promised to democratize private communication. Mainstream email software implemented the relevant Internet standards and integrated the necessary encryption tools. Now if you and I wanted to communicate securely we could just tick some options in our email programs.

But it still hardly ever happened. Why not? It comes down to a question of defaults. In order to make use of the integrated encryption tools you needed a digital ID. The default was that you didn’t have one. And that’s still the default. You have to go out of your way to get a digital ID. You have to alter the default state of your system, and that’s something people mostly won’t do.

Broadly there are two kinds of secure communication. One kind is implemented in programs like Apple’s Mail and Microsoft’s Outlook. (You likely didn’t know that, and almost surely have never used it, but it’s there.) This kind of secure communication relies on a hierarchical system of trust. To use it you acquire a digital ID issued by, and backed by, some authority. It could be a government, it could be a commercial provider; in practice it’s usually the latter. Your communication software is configured to trust certain of these providers. And to use it you must trust those providers too.

Another kind of secure communication relies on no higher authority. Instead communication partners trust one another directly, and exchange their digital IDs in pairwise (peer-to-peer) fashion. Among systems that use this approach, PGP (Pretty Good Privacy) is most notable. Another, now discontinued, was Groove.

Much ink has been spilled, and many pixels lit, debating hierarchical/centralized versus peer-to-peer/distributed methods of storing and transmitting data. Of course the definitions of these methods wind up being a bit fuzzy because hierarchical systems can have peer-to-peer aspects and vice versa.

I would bet that Edward Snowden, Laura Poitras, and Glenn Greenwald are using a purely peer-to-peer approach. When the stakes are astronomically high, and when your pool of communication partners is very small, that would be the only way to go. It would be a huge inconvenience. You’d need to massively alter the default state of an off-the-shelf computer to enable secure communication. But there’d be no choice. You’d have to do it.

Could standard systems come with software that communicates securely by default? Yes. Methods based on a hybrid of hierarchical and peer-to-peer trust could be practical and convenient. And they could deliver far better than the level of privacy we now enjoy by default, which is none. Would people want them? Until recently the answer was clearly no. Probably the answer is still no. But now, for the first time in my long experience with this topic, ordinary citizens may be ready to entertain the question. Please do.

Why I subscribe to the Ann Arbor Chronicle (part 2)

I’ve written before about why I subscribe to the Ann Arbor Chronicle. As of today, my Ann Arbor Chronicle Number is 10. That’s the number of months I’ve been sending a modest donation to the Chronicle. The data comes from this page which also gives me the Chronicle Number of some other Ann Arborites I’ve met in my travels there:

23 Bill Tozier
32 Peter Honeyman
50 Linda Feldt

Here’s a chart showing the growth in numbers of donors per month¹:

The Chronicle’s evolving policy on donation — and disclosure thereof — is, like everything else about the publication, thoughtful and nuanced.

I have two reasons to hope that the trend shown in that chart¹ will continue. One is professional. The Chronicle was the first publication to adopt the web of events model that I am trying to establish more widely. So the Chronicle’s success helps me advance that cause.

The other reason is personal. Though I’m a refugee from journalism I care deeply about it. I wish the kind of journalism practiced by the Chronicle on behalf of Ann Arbor could happen in my town. And in yours.

I’m glad there’s a foundation chartered to help journalism reinvent itself. But while I deem the Chronicle eminently worthy of funding from that source it has thus far received none. And maybe that’s a good thing. Over the long run only broad community support will be sustainable. So I hope the Chronicle achieves that, and shows other communities the way.


¹ July 2013 notwithstanding. But maybe as of today, August 1, the July data remains incomplete?

² The spreadsheet behind the chart is here. And the code that created the spreadsheet is here:

import re

# Donor lists copied from the Chronicle's subscribe page: a "YYYY Month"
# header followed by one donor name per line (see the docstring below).
with open('donors.txt') as f:
    s = f.read()

s = s.replace('\n\n', '\n')            # collapse blank lines
months = re.findall(r'\d{4}.+', s)     # the "YYYY Month" header lines
lists = re.split(r'\d{4}.+', s)[1:]    # donor-name blocks between headers
assert len(months) == len(lists)

# Print each month with its donor count.
for i in range(len(months)):
    names = lists[i].strip('\n').split('\n')
    print('%3s\t%s' % (len(names), months[i]))

""" 
data copied/pasted from http://annarborchronicle.com/subscribe/
looks like this:

2013 July
Linda Diane Feldt
Nancy Quay
Jeremy Peters
Bruce Amrine
Mary Hathaway
Katherine Kahn
Sally Petersen
...
2013 June
...

output looks like this:

 93	2013 July
117	2013 June
120	2013 May
120	2013 April

"""

Changeable minds

Minds change rarely. I wonder a lot about what happens when they do, and I often ask people this question:

What’s something you believed deeply, for a long time, and then changed your mind about?

This often doesn’t go well. You’ll ask me, naturally enough, for an example — some belief that I once held and then revised. But since any topic I offer as an example intersects with your existing belief system in some way, we wind up talking about that topic and my original question goes unanswered.

It’s easy to discuss positions you support, or oppose, within the framework of your existing belief system. It’s much harder to consider how that belief system has changed, or could change.

Facebook has become a laboratory in which to observe this effect. I’m connected to people across the continuum of ideologies. At both extremes I see the same behavior. News stories are selected, refracted through the lens of ideology, and posted with comments that I can predict with great certainty. These utterances, by definition, convey little information. Nor are they meant to. Their purpose is to reinforce existing beliefs, not to examine them.

Echo chambers aren’t new, of course, and they have nothing to do with the Internet. We seek the like-minded and avoid the differently-minded. On Facebook, though, it’s not so easy to avoid the differently-minded. I regard that as a feature, not a bug. I’m open to re-examining my own beliefs and I welcome you to challenge them. But if you’re not similarly open to re-examining your own beliefs then I can’t take you seriously.


See also the Edge Annual Question for 2008: What Have You Changed Your Mind About?

Learning to walk (again)

Over the years I’ve had a number of overuse injuries: tendinitis from too much typing or mousing or music playing, a sore shoulder from too much swimming, painful knees and ankles from too much running. The key phrase here is “too much” and you’d think I’d learn my lesson eventually. But no. When I get excited about doing things I overdo and then, periodically, must back off and recover.

Often, during recovery, as I analyze what’s gone wrong, I find that the problem is not simply overuse but more specifically asymmetric use. Once, during a bout of pain in my right thumb joint, while pondering what the cause might be, I looked down at my hands while I was typing. Clatter clatter clatter BAM! Clatter clatter clatter BAM! The BAM was my right thumb pounding the space bar. I could feel a twinge every time I saw it happen.

In some cases, and that was one of them, shifting to a symmetrical pattern of use is helpful. (As is, of course, not pounding.) I’ve trained myself to alternate thumbs while typing (although, as I look down at my hands now I see that needs reinforcement), to breathe alternately left and right while swimming, to change mouse hands from time to time, to become a switch hitter with the garden shovel.

Every time I go through one of these retraining exercises I reflect on the difficulty of the process. The steps are:

- surface a bad habit that was unconscious

- consciously develop a good habit

- submerge the new habit back into the unconscious

In the latest iteration of the process I am relearning how to walk. It sounds ridiculous. It is ridiculous. But here’s what happened — or rather, my best current understanding of what happened. About a year ago I strained one of the adductors in my right groin. Usually things like that resolve with a bit of rest and some stretching. But this time it didn’t. Last summer I was having trouble lifting my right leg over the bicycle seat when mounting. When the same thing happened on the first ride of this season I knew something had to be corrected. But what?

An acquaintance who does massage asked me to observe the angles of my upper legs while cycling. Next time out I looked down and could hardly believe it. My right knee was out of line by at least 25 degrees! That misalignment was clearly aggravating the injury and not allowing it to heal.

When I got home I put cycling and running on hold and went back to basics. I stood in what felt like a normal position and looked down. Sure enough, my right foot was pointing out noticeably. When I aligned it with my left foot I felt like I was forcing it to pigeon-toe. Then I started to walk. Each step required a conscious effort to align the right foot. It didn’t feel correct. But I could see that it was.

So that’s how it’s gone for the past 5 days. Instead of cycling or running I take the dogs for a hike and focus on alignment. I have to supervise my right foot closely and, when I go up and down over obstacles, I have to supervise my right knee to make sure it stays aligned too.

I can tell that it’s working. But clearly a bad habit that took a year to develop will take more than a few days to correct.

Every time something like this happens I wonder how I could fail to notice something so fundamental. But it really isn’t surprising. We can’t consciously monitor how we use our bodies all the time, and bad habits develop gradually. If there’s any application of wearable computing that will matter to me I think it will be the one that warns me when these kinds of bad habits begin to develop, and helps me correct them. We’re not great analysts of the forces in play as we use our bodies, but computers could be.

Upcoming is downgoing, Elm City is ongoing

Here’s Andy Baio’s farewell to Upcoming, a service I’ve been involved with for a decade. In a March 2005 blog post I wrote about what I hoped Upcoming would become, in my town and elsewhere, and offered some suggestions to help it along. One was a request for an API which Upcoming then lacked. Andy soon responded with an API. It was one of the pillars of my Elm City project for a long while until, as Andy notes in his farewell post, it degraded and became useless.

Today I pulled the plug and decoupled Upcoming from all the Elm City hubs.

In 2009 Andy and I both spoke at a conference in London. Andy was there to announce a new project that would help people crowdsource funding for creative projects. I was there to announce a project that would help people crowdsource public calendars. Now, of course, Kickstarter is a thing. The Elm City project not so much. But I’m pretty sure I’m on the right track, I’m lucky to be in a position to keep pursuing the idea, and although it’s taking longer than I ever imagined I’m making progress. Success, if it comes, won’t look like Upcoming did in its heyday, but it will be a solution to the same problem that Upcoming addressed — a problem we’ve yet to solve.

That same March 2005 blog post resonates with me for another reason. That was the day I walked around my town photographing event flyers on shop windows and kiosks. When I give presentations about the Elm City project I still show a montage of those images. They’re beautiful, and they’re dense with information that isn’t otherwise accessible.

Event flyers outperform web calendars, to this day, because they empower groups and organizations to be the authoritative sources for information about their public events, and to bring those events to the attention of the public. The web doesn’t meet that need yet but it can, and I’m doing my best to see that it does.

Community calendar workshop next week in Newport News

My next community calendar workshop will be at the Peninsula Fine Arts Center in Newport News, on Tuesday April 23 at 6PM. It’s for groups and organizations in the Hampton Roads region of Virginia, including Chesapeake, Hampton, Newport News, Norfolk, Portsmouth, Suffolk, Virginia Beach, Williamsburg, and Yorktown. If you’re someone there who’d like to help change the way public calendars work in your region, please sign up on EventBrite so we know you’re coming, or contact me directly.

Here’s the pitch from the workshop’s sponsor and host, the Daily Press:

The Community Calendar Project

It’s about time someone came up with a way to get all community events in one place so everyone, everywhere can find out what’s going on at any given time, on any given day.

It’s about time creators of those events – the people, agencies and organizations who work so hard to bring quality education, support and entertainment to the community – had a way to get their messages out there effortlessly.

It’s about time the public can find out about the happenings and events they really care about and never miss an important event again.

AND it’s “time” – or the lack of it – that makes this community initiative being spearheaded by the Daily Press so valuable to everyone. This community calendar will SAVE time – for the event creators, the event seekers and the websites and platforms that work to make this information available.

The Daily Press is partnering with Jon Udell of Microsoft to bring this project to Hampton Roads and make it among the first communities in the country to have an easily searchable, FREE database of events available to the public. And we want to get all of Hampton Roads involved. The only thing required to participate is to agree to use an iCalendar formatted calendar on your own websites or to create events through Facebook. That’s it. Participation guaranteed.

What is an iCalendar? Simply, iCalendar is a computer file format that allows Internet users to exchange calendars with other Internet users. iCalendar is used and supported by personal calendars such as Google Calendar, Apple Calendar (formerly iCal), Microsoft Outlook and Hotmail, Lotus Notes, Yahoo! Calendar, and others, and by web content management systems including WordPress, Drupal, Joomla, and others.

Many of you may already use one of these applications to publish your calendars online, and that is great! That means you can already participate in the calendar network we are bringing together. The rest of you can easily convert and get on board. We’ll tell you how.

On April 23 you are invited to a presentation of the Community Calendar Project. Jon will be on hand to tell you what it is, why it matters and how to get involved. The gathering will take place at 6 p.m. at the Peninsula Fine Arts Center, 101 Museum Drive (across from The Mariners’ Museum) in Newport News.

Light refreshments will be served. Get your FREE tickets so we know how many are attending.

Hope to see you there.

Walled fields of knowledge

My dad died of congestive heart failure in 2009. The last weeks of his life weren’t what they could have been had we known enough to get him into hospice care. But we didn’t know, and I’ve felt ashamed about that.

If we had it to do over again things would be very different. We’d have brought him home much sooner, made him comfortable, helped him work through a life review, hung out with him, heard and said some things that needed to be heard and said.

As it was we only managed to bring him home for his last day. It was better than not bringing him home at all, but not much better, at least not for him. For us, though, it was transformative. Two generations of our family — my wife and I, our children — had never seen the kind of death that was normal until the modern era. We didn’t know why or how to shift gears from medical treatment to palliative care. Now we do and we’re deeply changed — Luann especially. She’s become a hospice volunteer who comforts the dying, supports their families, and counsels survivors.

From her I’ve learned a lot about hospice care. What happened to us, it turns out, is typical. Many people don’t realize how comfortable a dying person can often be at home with proper medication. As a result many delay until the bitter end, and miss out on the emotional and psychological richness that’s possible in a home hospice setting.

A big reason for the delay is the chasm that divides the culture of hospitals from the culture of hospice. Nobody in the hospital advised us to bring dad home a month before he died. A social worker mentioned it, but dad didn’t know what it could mean to make that choice, we didn’t know enough to advocate for it, and medical professionals speak with vastly more authority than do social workers in our current regime.

What hospitals don’t know about hospice is astonishing. Last night, while reading an anthology of science writing, I happened on an essay by Atul Gawande, a physician/writer who, like Oliver Sacks, Perri Klass, and Abraham Verghese, opens windows into the medical world. In 2010, the year after our experience with my dad, he wrote a New Yorker piece called Letting Go that included these revelations:

One Friday morning this spring, I went on patient rounds with Sarah Creed, a nurse with the hospice service that my hospital system operates. I didn’t know much about hospice. I knew that it specialized in providing “comfort care” for the terminally ill, sometimes in special facilities, though nowadays usually at home. I knew that, in order for a patient of mine to be eligible, I had to write a note certifying that he or she had a life expectancy of less than six months. And I knew few patients who had chosen it, except maybe in their very last few days, because they had to sign a form indicating that they understood their disease was incurable and that they were giving up on medical care to stop it. The picture I had of hospice was of a morphine drip. It was not of this brown-haired and blue-eyed former I.C.U. nurse with a stethoscope, knocking on Lee Cox’s door on a quiet street in Boston’s Mattapan neighborhood.

And:

Like many people, I had believed that hospice care hastens death, because patients forgo hospital treatments and are allowed high-dose narcotics to combat pain. But studies suggest otherwise. In one, researchers followed 4,493 Medicare patients with either terminal cancer or congestive heart failure. They found no difference in survival time between hospice and non-hospice patients with breast cancer, prostate cancer, and colon cancer. Curiously, hospice care seemed to extend survival for some patients; those with pancreatic cancer gained an average of three weeks, those with lung cancer gained six weeks, and those with congestive heart failure gained three months.

These things once surprised me too. Now, thanks to our brief hospice experience with dad and Luann’s volunteer work since, I take them for granted. And while I’ve felt ashamed not to have arrived at this understanding sooner, in time to help dad, I guess I should cut myself some slack. Atul Gawande didn’t get there any sooner than me.

How could that be? How could a leading medical practitioner (and explainer) reach mid-career lacking such basic and useful knowledge? All too easily when we carve the world into fields of knowledge and then build walls around them.

Networks of first-class peers

Last month I wrote a column for Wired.com, Rebooting web comments, that attracted some unsavory feedback. Had the flamers read beyond the second paragraph they might have seen that I wasn’t insisting everyone must use verifiable identities online. But they didn’t. So I wrote another column last week, Own your words, to clarify my position.

My first blogging tool, back in 2001, was Dave Winer’s Radio UserLand. One of Dave’s mantras was: “Own your words.” As the blogosphere became a conversational medium, I saw what that could mean. Radio UserLand didn’t support comments. That turned out to be a good constraint to embrace. When conversation emerged, as it always will in any system of communication, it was a cross-blog affair. I’d quote something from your blog on mine, and discuss it. You’d notice, and perhaps write something on your blog referring back to mine.

This cross-blog conversational mode had an interesting property: You owned your words. Everything you wrote went into your own online space, was bound to your identity, became part of your permanent record. As a result, discourse tended to be more civil than what often transpired in Usenet newsgroups or web forums. In those kinds of online spaces, your sense of identity is attenuated. You may or may not be pseudonymous, but either way the things you say don’t stick to you in the same way they do if you say them in your own permanent online space.

Later, blogs evolved forum-style comments, which concentrated discussion but recreated the old problems: attenuation of identity, loss of ownership of data. Then came Twitter and Facebook and, so the story goes, “social killed the blogosphere.” It was easier to read and write in those online spaces, blogging declined, and Google’s recent decision to retire its RSS reader is being widely regarded as the nail in the blogosphere’s coffin.

Of course that’s wrong. One of the staples of tech punditry is the periodic declaration that something — Unix, the Web, Microsoft, Apple, the blogosphere — is dead.

Will Google Reader’s exit spell the end of the blogosphere or its rebirth? Nobody knows, and since I’m no longer in the pageview business I won’t even hazard a prediction. Instead I want to highlight something that’s bigger than blogs, bigger even than social media. Owning your words is a fundamental principle. It seemed new at the dawn of the blogosphere but its roots ran deeper. They were woven into the fabric of the Internet which, at its core, is a network of peers.

For technical reasons I won’t explore here, it’s not possible (or, I should say, not believed possible) for our computers to be first-class peers on that network, as early Internet-connected computers were. But it is possible for various of our avatars — our websites, our blogs, our calendars — to represent us as first-class peers. That means:

- They use domain names that we own

- They converse with other peers in ways that we enable and can control

- They store data in systems that we authorize and can manage

Your Twitter and Facebook avatars are not first-class peers on the network in these ways. Which isn’t to say they aren’t useful. Second-class peers are incredibly useful, largely because they enable us to avoid the complexities that make it challenging to operate first-class peers.

Those challenges are real. But they’re not insurmountable unless we believe that they are. I don’t believe that. I hope you won’t. What some of us learned at the turn of the millennium — about how to use first-class peers called blogs, and how to converse with other first-class peers — gave us a set of understandings that remain critical to the effective and democratic colonization of the virtual realm. It’s unfinished business, and it may never be finished, but don’t let the tech pundits or anyone else convince you it doesn’t matter. It does.

Indie theaters and open data

Movie showtimes are easy to find. Just type something like “movies keene nh” into Google or Bing and they pop right up.

You might assume that this is open data, available for anyone to use. Not so, as web developers interested in such data periodically discover. For example, from MetaFilter:

Q: We initially thought it would be as easy as pulling in an RSS feed from somewhere, but places like Yahoo or Google don’t offer RSS feeds for their showtimes. Doing a little research brought up large firms that provide news and data feeds and that serve up showtimes, but that seems like something that’s designed for high-level sites with national audiences.

So, is there any solution for someone who is just trying to display local showtimes?

A: This is more complicated than you might think. Some theatres maintain that their showtimes are copyrighted, and (try to) control the publication of them. Others have proprietary agreements with favored providers and don’t publish their showtimes elsewhere, to give their media partners a content edge.

What applies to RSS feeds applies to calendar feeds as well. It would be nice to have your local showtimes as an overlay on your personal calendar. But since most theaters don’t make the data openly available, you can’t.

Some indie theaters, however, do serve up the data. Here are some movies that don’t appear when you type “movies keene nh” into Google or Bing.

These are listings from the Putnam Theater at Keene State College. They syndicate to the Elm City hub for the Monadnock region of New Hampshire by way of the college calendar which recently, thanks to Ben Caulfield, added support for standard iCalendar feeds. They appear in the film category of that hub. And in fact they’re all that can appear there.

I’ve decided I’m OK with that. I used to forget about movies at the Putnam because they didn’t show up in standard searches. Now I sync them to my phone and I’m more aware of them. Would I want all the mainstream movies there too? I used to think so, but now I’m not so sure. There are plenty of ways to find what’s playing at mainstream theaters. That doesn’t feel like an awareness problem that needs solving. The indie theaters, though, could use a boost. As I build out Elm City hubs in various cities, I’ve been able to highlight a few with open calendars:

- In Berkeley: UC Berkeley Art Museum / Pacific Film Archive (BAM/PFA)

- In Toronto: Bloor Cinema

And here are some indies whose calendars could be open, but aren’t:

- In Portland: Academy Theater

- In Cambridge: The Brattle Theatre

If you’re an indie theater and would like your listings to be able to flow directly to personal calendars, and indirectly through hubs to community portals, check out how the Putnam, BAM/PFA, and the Bloor Cinema are doing it.

Let’s think about what we’re doing right

In The Better Angels of our Nature: Why Violence Has Declined, Steven Pinker compiles massive amounts of evidence to show that we are becoming a more civilized species. The principal yardstick he uses to measure progress is the steady decline, over millennia, in per-capita rates of homicide. But he also measures declines in violence directed towards women, racial groups, children, homosexuals, and animals.

It’s hard to read the chapters about the routine brutality of life during the Roman empire, the Middle Ages, the Renaissance, and — until more recently than we like to imagine — the modern era. An early example:

Far from being hidden in dungeons, torture-executions were forms of popular entertainment, attracting throngs of jubilant spectators who watched the victim struggle and scream. Bodies broken on wheels, hanging from gibbets, or decomposing in iron cages where the victim had been left to die of starvation and exposure were a familiar part of the landscape.

A modern example:

Consider this Life magazine ad from 1952.

Today this ad’s playful, eroticized treatment of domestic violence would put it beyond the pale of the printable. It was by no means unique.

A reader of that 1950s ad would be as horrified as we are today to imagine cheering a public execution in the 1350s. A lot changed in 600 years. But in the 60 years since, more has changed. The ad that seemed OK to a 1950s reader would shock most of us here in the 2010s.

Over time we’ve grown less willing and able to commit or condone violence, and our definition of what counts as violence has grown more inclusive. And yet this is deeply counter-intuitive. We tend to feel that the present is more violent and dangerous than the recent past. And our intuition tells us that the 20th century must have been more so than the distant past. That’s why Pinker has to marshal so much evidence. It’s like Darwin’s rhetorical strategy in The Origin of Species. You remind people of a lot of things that they already know in order to lead them to a conclusion they wouldn’t reach on their own.

Will the trend continue? Will aspects of life in the 2010s seem alien to people fifty years hence in the same way that the coffee ad seems alien to us now, and that torture-execution seemed to our parents? (And if so, which aspects?)

Pinker acknowledges that the civilizing trend may not continue. He doesn’t make predictions. Instead he explores, at very great length, the dynamics that have brought us to this point. I won’t try to summarize them here. If you don’t have time to read the book, though, you might want to carve out an hour to listen to his recent Long Now talk. You’ll get much more out of that than from reading reviews and summaries.

Either way, you may dispute some of the theories and mechanisms that Pinker proposes. But if you buy the premise — that all forms of violence have steadily declined throughout history — I think you’ll have to agree with him on one key point. We’re doing something right, and we ought to know more about why and how.

Flash Fill: Text wrangling for non-programmers

As Elm City hubs grow, with respect to both raw numbers of events and numbers of categories, unfiltered lists of categories become unwieldy. So I’m noodling on ways to focus initially on a filtered list of “important” categories. The scare quotes indicate that I’m not yet sure how to empower curators to say what’s important. Categories with more than a threshold number of events? Categories that are prioritized without regard to number of events? Some combination of these heuristics?

To reason about these questions I need to evaluate some data. One source of data about categories is the tag cloud. For any Elm City hub, you can form this URL:

elmcity.cloudapp.net/HUBNAME/tag_cloud

If HUBNAME is AnnArborChronicle, you get a JSON file that looks like this:

[
{ "aadl":348},
{ "aaps":9},
{ "abbot":18},
...
]

This is the data that drives the category picklist displayed in the default rendering of the Ann Arbor hub. A good starting point would be to dump this data into a spreadsheet, sort by most populous categories, and try some filtering.

I could add a feature that serves up this data in some spreadsheet-friendly format, like CSV (comma-separated values). But I am (virtuously) lazy. I hate to violate the YAGNI (“You aren’t gonna need it”) principle. So I’m inclined to do something quick and dirty instead, just to find out if it’ll even be useful to work with that data in a spreadsheet.
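Quick and dirty could mean a few throwaway lines of Python, sketched here against the sample values shown above:

import csv
import json
import sys

raw = '[{"aadl": 348}, {"aaps": 9}, {"abbot": 18}]'  # stands in for the tag_cloud response

rows = []
for item in json.loads(raw):
    for tag, count in item.items():   # each object holds a single tag:count pair
        rows.append((tag, count))

# Emit CSV, sorted by most populous category, for pasting into a spreadsheet.
writer = csv.writer(sys.stdout)
for tag, count in sorted(rows, key=lambda r: r[1], reverse=True):
    writer.writerow([tag, count])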

One quick-and-dirty approach entails looking for some existing (preferably online) utility that does the trick. In this case I searched for things with names like json2csv and json2xls, found a few candidates, but nothing that immediately did what I wanted.

So some text needs to be wrangled. One source of text to wrangle is the HTML page that contains the category picklist. If you capture its HTML source, you’ll find a sequence of lines like this:

<option value="aadl">aadl (348)</option>
<option value="aaps">aaps (9)</option>
<option value="abbot">abbot (18)</option>

It’s easy to imagine a transformation that gets you from there to here:

aadl	348
aaps	9
abbot	18
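In code, that transformation is one regular expression away — a sketch over the HTML above:

import re

html = '''<option value="aadl">aadl (348)</option>
<option value="aaps">aaps (9)</option>
<option value="abbot">abbot (18)</option>'''

# Capture the name from the value attribute and the count from the parentheses.
for name, count in re.findall(r'<option value="([^"]+)">.*?\((\d+)\)</option>', html):
    print('%s\t%s' % (name, count))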

Although I’ve often written code to do that kind of transformation, if it’s a quick-and-dirty one-off I don’t even bother. I use the macro recorder in my text editor to define a sequence like:

  • Start selecting at the beginning of a line
  • Go to the first >
  • Delete
  • Go to whitespace
  • Replace with tab
  • Search for (
  • Delete
  • Search for )
  • Delete to end of line
  • Go to next line

This is a skill that’s second nature to me, and that I’ve often wished I could teach others. Many people spend crazy amounts of time doing mundane text reformatting; few take advantage of recordable macros.

But the reality is that recordable macros are the first step along the slippery slope of programming. Most people don’t want to go there, and I don’t blame them. So I’m delighted by a new feature in Excel 2013, called Flash Fill, that will empower everybody to do these kinds of routine text transformations.

Here’s a picture of a spreadsheet with HTML patterns in column A, an example of the name I want extracted in column B, and an example of the number I want in column C.

Given that setup, you invoke Flash Fill in the first empty B and C cells to follow the examples in B1 and C1. Here’s the resulting spreadsheet on SkyDrive. Wow! That’s going to make a difference to a lot of people!

Suppose your data source were instead JSON, as shown above. Here’s another spreadsheet I made using Flash Fill. As will be typical, this took a bit of prep. Flash Fill needs to work on homogeneous rows. So I started by dumping the JSON into JSONLint to produce text like this:

[
    {
        "aadl": 348
    },
    {
        "aaps": 9
    },
    {
        "abbot": 18
    },
...
]

I imported that text into Excel 2013 and sorted to isolate a set of rows with a column A like this:

"aadl": 348
"aaps": 9
"abbot": 18

At that point it was a piece of cake to get Flash Fill to carry the names over to column B and the numbers to column C.

Here’s a screencast by Michael Herman that does a nice job showing what Flash Fill can do. It also illustrates a fascinating thing about programming by example. At about 1:25 in the video you’ll see this:

Michael’s example in C1 was meant to tell Flash Fill to transform strings of 9 digits into the familiar nnn-nn-nnnn pattern. Here we see its first try at inferring that pattern. What should have been 306-60-4581 showed up as 306-215-4581. That’s wrong for two reasons. The middle group has three digits instead of two, and they’re the wrong digits. So Michael corrects it and tries again. At 1:55 we see Flash Fill’s next try. Here, given 375459809, it produces 375-65-9809. That’s closer, the grouping pattern looks good, but the middle digits aren’t 45 as we’d expect. He fixes that example and tries again. Now Flash Fill is clear about what’s wanted, and the rest of the column fills automatically and correctly.
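
Stated explicitly, the rule Michael is teaching by example is trivial. Here’s a Python sketch of the intended transformation (not, of course, of what Flash Fill infers internally):

def format_ssn(digits):
    # Group nine digits as nnn-nn-nnnn.
    return digits[:3] + "-" + digits[3:5] + "-" + digits[5:]

assert format_ssn("306604581") == "306-60-4581"
assert format_ssn("375459809") == "375-45-9809"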

But what was Flash Fill thinking when it produced those unintended transformations? And could it tell us what it was thinking?

From a Microsoft Research article about the new feature:

Gulwani and his team developed Flash Fill to learn by example, not demonstration. A user simply shows Flash Fill what he or she wants to do by filling in an Excel cell with the desired result, and Flash Fill quickly invokes an underlying program that can perform the task.

It’s the difference between teaching someone how to make a pizza step by step and simply showing them a picture of a pizza and minutes later eating a hot pie.

But that simplicity comes with a price.

“The biggest challenge,” Gulwani says, “is that learning by example is not always a precise description of the user’s intent — there is a lot of ambiguity involved.

“Take the example of Rick Rashid [Microsoft Research’s chief research officer]. Let’s say you want to convert Rick Rashid to Rashid, R. Where does that ‘R’ come from? Is it the ‘R’ of Rick or the ‘R’ of Rashid? It’s very hard for a program to understand.”

For each situation, Flash Fill synthesizes millions of small programs — 10-20 lines of code — that might accomplish the task. It sounds implausible, but Gulwani’s deep research background in synthesizing code makes it possible. Then, using machine-learning techniques, Flash Fill sorts through these programs to find the one best-suited for the job.

I suspect that while Flash Fill could tell you what it was thinking, you’d have a hard time understanding how it thinks. And for that reason I suspect that hard-core quants won’t rush to embrace it. But that’s OK. Hard-core quants can write code. Flash Fill is for everybody else. It will empower regular folks to do all sorts of useful transformations that otherwise entail ridiculous manual interventions that people shouldn’t waste time on. Be aware that you need to check results to ensure they’re what you expect. But if you find yourself hand-editing text in repetitive ways, get the Excel 2013 preview and give Flash Fill a try. It’s insanely useful.

Homicide rates in context

In U.N. Maps Show U.S. High in Gun Ownership, Low in Homicides, A.W.R. Hawkins presents the following two maps:

From these he concludes:

Notice the correlation between high gun ownership and lower homicide rates.

As these maps show, “more guns, less crime” is true internationally as well as domestically.

The second map depicts homicides per 100,000 people. That’s the same yardstick used in Steven Pinker’s monumental new book The Better Angels of Our Nature: Why Violence Has Declined. Pinker marshals massive amounts of data to show that over the long run, and at an accelerating pace, we are less inclined to harm one another. When you look at the data on a per capita basis, even the mass atrocities of the 20th century are local peaks along a steadily declining sawtooth trendline.

One of the most remarkable charts in the book ranks the 20 deadliest episodes in history. It’s adapted from Matthew White’s The Great Big Book of Horrible Things, and appears in a slightly different form in The New Scientist:

Ever heard of the An Lushan Revolt? Well, I hadn’t, but on a per capita basis it dwarfs the First World War.

Pinker says, in a nutshell, that we’re steadily becoming more civilized, and that data about our growing reluctance to kill or harm one another show that. The trend marches through history and spans the globe. There’s regional variation, of course. A couple of charts show the U.S. to be about 5x more violent than Canada and the U.K. But there isn’t one that ranks the U.S. in a world context. So A.W.R. Hawkins’ map of homicide rates got my attention.

The U.S. has the most guns, the first chart says. And it’s one of the safest countries, the second chart says. But that second map doesn’t tell us:

    Where does the U.S. rank?

    How many countries are in the red, pink, yellow, and green categories?

    Which countries are in those categories?

    How do countries rank within those categories?

Here’s another way to visualize the data:

There are a lot of countries mashed together in that green zone. And after Cuba we’re the most violent of them. Five homicides per 100,000 isn’t a number to boast about.

Scientific storytelling

It’s said that every social scientist must, at some point, write a sentence that begins: “Man is the only animal that _____.” Some popular completions of the sentence have been: uses tools, uses language, laughs, contemplates death, commits atrocities. In his new book Jonathan Gottschall offers another variation on the theme: storytelling is the defining human trait. For better and worse we are wired for narrative. A powerful story that captures our attention can help us make sense of the world. Or it can lead us astray.

A story we’ve been told about Easter Island goes like this. The inhabitants cut down all the trees in order to roll the island’s iconic 70-ton statues to their resting places. The ecosystem crashed, and they died off. This story is told most notably by Jared Diamond in Collapse and (earlier) in this 1995 Discover Magazine article:

In just a few centuries, the people of Easter Island wiped out their forest, drove their plants and animals to extinction, and saw their complex society spiral into chaos and cannibalism.

As we try to imagine the decline of Easter’s civilization, we ask ourselves, “Why didn’t they look around, realize what they were doing, and stop before it was too late? What were they thinking when they cut down the last palm tree?”

This is a cautionary tale of reckless ecocide. But according to recent work by Terry Hunt and Carl Lipo, Jared Diamond got the story completely wrong. A new and very different story emerged from their study of the archeological record. Here are some of the points of contrast:

Old story: Collapse resulted from the islanders’ reckless destruction of their environment (ecocide).
New story: Collapse resulted from European-borne diseases and European-inflicted slave trading (genocide).

Old story: The trees were cut down to move the statues.
New story: Trees weren’t used to move the statues, which were ingeniously designed to be walked along in a rocking motion using only ropes. The trees were destroyed mostly by rats, which wasn’t a problem anyway, because the islanders used the cleared land for agriculture.

Old story: Fallen and broken statues resulted from intertribal warfare.
New story: Fallen and broken statues resulted from earthquakes.

Old story: It must have taken a population of 25,000 or more to make and move all those statues. A population decline to around 4,000 at the moment of European contact was evidence of massive collapse.
New story: The mode of locomotion for which the statues were designed is highly efficient. There’s no need to suppose a much larger work force than was known to exist.

Old story: The people of Easter Island were warlike.
New story: The people of Easter Island were peaceful, because they had to be. Lacking hardwood trees for making new canoes, they were committed to staying once the canoes that brought them were gone. There was no escape. And it’s a hard place to make a living: no fresh water, poor soil, meager fishing. To survive for the hundreds of years that they did, the society had to be “optimized for stability.”

Hunt and Lipo tell this new story in a compelling Long Now talk. After the talk Stewart Brand asks how Jared Diamond has responded to their interpretation. Not well, apparently. Once we’re in the grip of a powerful narrative we don’t want to be released from it.

Hunt and Lipo didn’t go to Easter Island with a plan to overturn the old story. They went as scientists with open eyes and open minds, looked at all the evidence, realized it didn’t support the old story, and came up with a new one that better fits the facts. And it happens to be an uplifting one. These weren’t reckless destroyers of an ecosystem. They were careful stewards of limited resources whose artistic output reflects the ingenuity and collaboration that enabled them to survive as long as they did in that hard place.

We’re all invested in stories, and in the assumptions that flow from them. Check your assumptions. It’s a hard thing to do. But it can lead you to better stories.

Check your assumptions

In Computational thinking and life skills I asked myself how to generalize this touchstone principle from computer science:

Focus on understanding why the program is doing what it’s doing, rather than why it’s not doing what you wanted it to.

And here’s what I came up with:

Focus on understanding why your spouse or child or friend or political adversary is doing what he or she is doing, rather than why he or she is not doing what you wanted him or her to.

I’ve been working on that. It’s been a pleasant surprise to find that Facebook can be a useful sandbox in which to practice the technique. I keep channels of communication open to people who hold wildly different political views. It’s tempting to argue with, or suppress, some of them. Instead I listen and observe and try to understand the needs and desires that motivate utterances I find abhorrent.

My daughter, a newly-minted Master of Social Work, will soon be doing that for a living. She’s starting a new job as a dialogue facilitator. How do you nurture conversations that bridge cultures and ideologies? It’s important and fascinating work. And I suspect there are some other computational principles that can helpfully generalize to support it.

Here’s one: Encourage people to articulate and test their assumptions. In the software world, this technique was a revelation that’s led to a revolution in how we create and manage complex evolving systems. The tagline is test-driven development (TDD), and it works like this. You don’t just assume that a piece of code you wrote will do what you expect. You write corresponding tests that prove, for a range of conditions, that it does what you expect.
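
To make that concrete, here’s a minimal sketch in Python, with a hypothetical slugify function standing in for the code under test:

import unittest

def slugify(title):
    """Turn an event title into a URL-safe category tag (hypothetical)."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Each test articulates one assumption about what slugify does.

    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Open Mic"), "open-mic")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Open   Mic "), "open-mic")

if __name__ == "__main__":
    unittest.main()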

The technique is simple but profound. One of its early proponents, Kent Beck, has said of its genesis (I’m paraphrasing from a talk I heard but can’t find):

I was stumped, the system wasn’t working, I didn’t know what else to do, so I began writing tests for some of the most primitive methods in the system, things that were so simple and obvious that they couldn’t possibly be wrong, and there couldn’t possibly be any reason to verify them with tests. But some of them were wrong, and those tests helped me get the system working again.

Another early proponent of TDD, Ward Cunningham, stresses the resilience of a system that’s well-supported by a suite of tests. In the era of cloud-based software services we don’t ship code on plastic discs once in a while, we continuously evolve the systems we’re building while they’re in use. That wouldn’t be safe or sane if we weren’t continuously testing the software to make sure it keeps doing what we expect even as we change and improve it.

Before you can test anything, though, you need to articulate the assumption that you’re testing. And that’s a valuable skill you can apply in many domains.

Code

Assumption: The URL points to a calendar.

Tests: Does the URL even work? If so, does it point to a valid calendar?
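
Here’s a minimal sketch of those two tests in Python. It checks only that the response begins with BEGIN:VCALENDAR; a real validator would parse the whole feed:

import urllib.error
import urllib.request

def check_calendar_url(url):
    """Test two assumptions: the URL works, and it serves an iCalendar feed."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            head = resp.read(64)
    except (urllib.error.URLError, ValueError) as e:
        return "URL doesn't even work: %s" % e
    if not head.lstrip().startswith(b"BEGIN:VCALENDAR"):
        return "URL works, but doesn't look like an iCalendar feed"
    return "looks like a calendar"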

Interpersonal relationships

Assumption: You wouldn’t want to [watch that movie, go to that restaurant, take a walk].

Tests: I thought you wouldn’t want to [watch that movie, go to that restaurant, take a walk] but I shouldn’t assume, I should ask: Would you?

Tribal discourse

Assumption: They want to [take away our guns, proliferate guns].

Tests: ?

I’ll leave the last one as an exercise for the reader. If you feel strongly about that debate (or another) try asking yourself two questions. What do I assume about the opposing viewpoint? How might I test that assumption?

How John McPhee structures stories from his notes

John McPhee has lately been reflecting, in a series of New Yorker articles, on his long career as one of the world’s leading writers of nonfiction. In this week’s issue we learn that one of my favorites among his books, The Pine Barrens, was born on a picnic table. It was there that he lay prone for two weeks, in a panic, searching for a way to structure the vast quantity of material he’d gathered in a year of research. The solution, in this case, was Fred Brown, an elderly Pine Barrens dweller who “had some connection or other to at least three quarters of those Pine Barrens topics whose miscellaneity was giving me writer’s block.” Fred was the key to unlocking that book’s structure. But each book needed a different key.

The approach to structure in factual writing is like returning from a grocery store with materials you intend to cook for dinner. You set them out on the kitchen counter, and what’s there is what you deal with, and all you deal with.

For many years, that meant writing notes on pieces of paper, coding the notes, organizing the notes into folders, retyping notes, cutting and rearranging with scissors and tape. Then came computers, a text editor called KEDIT, and a Princeton colleague named Howard Strauss who augmented KEDIT with a set of macros that supported the methods McPhee had been evolving for 25 years. In the article, McPhee describes two KEDIT extensions: Structur and Alpha.

Structur exploded my notes. It read the codes by which each note was given a destination or destinations (including the dustbin). It created and named as many new KEDIT files as there were codes, and, of course, it preserved the original set.

Alpha implodes the notes it works on. It doesn’t create anything new. It reads codes and then churns a file internally, organizing it in segments in the order in which they are meant to contribute to the writing.

Alpha is the principal, workhorse program I run with KEDIT. Used again and again on an ever-concentrating quantity of notes, it works like nesting utensils. It sorts the whole business at the outset, and then, as I go along, it sorts chapter material and subchapter material, and it not infrequently rearranges the components of a single paragraph.
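
Nobody seems to have published the macros themselves, but the Structur behavior McPhee describes is easy to approximate. A speculative Python sketch, assuming notes separated by blank lines and tagged with destination codes on lines like “#GEOLOGY”; that convention is my guess, not a description of Howard Strauss’s actual macros:

import re
from collections import defaultdict
from pathlib import Path

# Explode notes.txt along McPhee-style codes. The #CODE convention is my
# invention; the real macros read codes in a format I don't know.
notes = Path("notes.txt").read_text().split("\n\n")

buckets = defaultdict(list)
for note in notes:
    for code in re.findall(r"^#(\w+)", note, re.MULTILINE):
        buckets[code].append(note)

# One new file per code; the original notes.txt is preserved untouched.
for code, group in buckets.items():
    Path(code + ".txt").write_text("\n\n".join(group))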

KEDIT is the only writing tool John McPhee has ever used. And as he is careful to point out, it’s a text editor, not a word processor. No pagination, headers, fonts, WYSIWYG, none of that. Just words and sentences. I can relate to that. My own writing tool of choice is an EMACS clone called Epsilon. I first used it on DOS around 1986 and I’m using it in Windows today to write these words. If I were a writer of long works I might have evolved my use of Epsilon in ways similar to what John McPhee describes. But I’ve only written one book, that was a long time ago, and since then I’ve written at lengths that don’t require that kind of tool support.

Still, I would love to find out more about John McPhee’s toolchain. My interest is partly historical. Howard Strauss died in 2005, and KEDIT is nearing the end of its life. (From kedit.com: “…we are in the process of gradually winding down Mansfield Software Group.”) But I’m also looking forward. Not everyone needs to organize massive quantities of unstructured information. But those who do require excellent tool support, and there’s room for innovation on that front. Anyone who’d like to tackle that challenge would benefit from understanding what John McPhee’s methods are, and how his toolchain supports them.

I’m going to write to John McPhee to ask him if he’d be willing to work with me on a screencast to document his methods. (And also to thank him for countless hours of reading enjoyment.) It’ll be a cold call, because we’ve never communicated, so if any reader of this post happens to have a personal connection, I would greatly appreciate an introduction.

Heating as a service: Xylogen points the way

I love a good story about a product becoming a service. Ray Anderson did it with floor covering, ZipCar does it with cars, Amazon and Microsoft are doing it with IT infrastructure. It’s a sweet model. Service providers own equipment and operations, earn recurring revenue, and are motivated to continuously improve efficiency and customer satisfaction.

There’s even been speculation about turning home heating into a service. Here in New England, where the dominant product is heating oil and oil-burning equipment, that would be a wonderful thing. Because now, for the millions of homeowners who burn oil — and for the businesses that support that system — the incentives are all wrong. We’re collectively abetting the nation’s addiction to oil, and customers’ need to use less oil conflicts with suppliers’ need to sell more.

In From oil to wood pellets: New England’s home heating future I documented my first foray into heating with biomass. In Central heating with a wood gasification boiler I presented the solution that’s actually working for us. Biomass is a viable alternative. But I’m still the owner, operator, and maintainer of the equipment, and the manager of the fuel supply (i.e. buying, stacking, loading). What would it be like to outsource those functions?

For single-family homes, biomass heating as a service is still just a dream. But for commercial buildings it’s a reality, and there’s a great example right in my own backyard. Well, almost. The Monadnock Waldorf School, right around the corner from my house, recently converted to a wood pellet boiler installed by Xylogen, a new company whose tagline is:

We do not sell heating systems. We do not sell fuel. We sell secure, local, renewable heat.

Xylogen’s blog tells the story of the project. Here are some of my favorite excerpts.

From What’s happening at MWS?:

We’re pleased to report that the oil boilers have used a total of 7 gallons of oil from day 1, the bulk consumed during initial tune-up and system testing. The remainder of the usage actually occurred during times when the pellet boiler could have kept up with the building’s requirement for heat. In other words, this operation was a mistake that has now been corrected in the control algorithms.

From We see the big picture too:

Today, an opening to an old ventilation shaft was discovered and promptly covered over. Heated air was escaping the building through the grating at such a clip that a small student might have gotten sucked in and trapped on it!

Also, there was an assembly today in the assembly room (makes sense!), so we decided to turn down the heat in advance to try to avoid overheating and waste. It turns out the audience itself raised the temperature at least 6F. Good thing we didn’t start out toasty.

Small, very simple steps can have a big impact. We’re looking at the high tech, the low tech, and everything in between to make a difference.

From True service:

The beauty of automatic real-time monitoring is that it’s possible to identify a problem with the equipment and rectify it before the customer even notices. That is service.

Xylogen is a collaboration between Mark Froling and Henry Spindler. I wish them well and look forward to reading more about their work.


PS: Thanks to Andrew Dey (whom I met last night at a talk by Sustainserv’s Matthew Gardner) for pointing out that Xylogen isn’t just about alternative fuel, but more importantly about an alternative business model.

Calendar feeds are a best practice for bookstores

Bookstores, for all the obvious reasons, are hanging on by their fingernails. What brings people into bookstores nowadays? Some of us still buy and read actual printed books. Some of us enjoy browsing the shelves and tables. Some of us value interaction with friendly and knowledgeable booksellers. And some of us like to see and hear authors when they come to speak and sign books.

There are lots of author events at bookstores. Recently LibraryThing’s Tim Spalding tweeted:

Upcoming bookish events on @LibraryThing Local now over 10,000! http://www.librarything.com/local/helpers

It’s great that LibraryThing “helpers” (individuals, libraries, bookstores) are adding all those events to LibraryThing’s database. But I’d really like to see bookstores help themselves by publishing standard calendar feeds. That way, LibraryThing could ingest those calendars automatically, instead of relying on dedicated helpers to input events one at a time. And the feeds would be available in other contexts as well, syndicating both to our personal calendars (desktop-, phone-, and cloud-based) and to community calendars.

When I saw Tim’s tweet I took a look at how bookstore events are feeding into various elmcity hubs. Here’s a snapshot of what I found:


location                  store                  iCalendar feed?

Bright Lights
Monadnock Region of NH    Toadstool              yes
Cambridge, MA             Harvard Bookstore      yes
Brookline, MA             Brookline Booksmith    yes
Boston, MA                Trident Booksellers    yes
Ann Arbor, MI             Crazy Wisdom           yes
Portland, OR              Powell’s               yes

Dim Lights
Berkeley, CA              East Wind Books        indirect
Canada                    Chapters Indigo        indirect
Seattle, WA               Third Place Books      indirect
… and some others …

Dark Matter
Berkeley, CA              City Lights            no
Various                   Barnes and Noble       no
Seattle, WA               Elliot Bay             no
… and many others …

There are three buckets:

Bright Lights: These are stores whose web calendars are accompanied by standard iCalendar feeds. Events from these stores appear automatically in the Monadnock, Boston, Ann Arbor, and Portland hubs. These stores’ calendars could also be ingested automatically into LibraryThing, and you could subscribe to them directly.

Dim Lights: These are stores whose web calendars are hosted on Facebook. There isn’t a standard iCalendar feed for Facebook calendars, but the elmcity service can synthesize one using the Facebook API. So I say that these stores have “indirect” iCalendar feeds.

Dark Matter: These are stores whose web calendars are available only in HTML format. Some of these calendars are handcrafted web pages, others are served up by content management systems that produce calendar widgets for display but fail to provide corresponding feeds.

There are a few Bright Lights and some Dim Lights, but most bookstore calendars, like most web calendars of all kinds, are Dark Matter. If you’re a bookstore I urge you to become a Bright Light. Making your calendar available to the web of data is as easy as using Google Calendar or Hotmail Calendar. It’s a best practice that bookstores disregard at their peril.

Harvard vs MIT

As I build out calendar hubs in various cities I’ve been keeping track of major institutions that do, or don’t, provide iCalendar feeds along with their web calendars. At one point I made a scorecard which shows that iCalendar support is unpredictably spotty across a range of cities and institutions. One of the surprises was Boston, where I found iCalendar feeds for neither Harvard nor MIT.

I’ve recently improved the Boston calendar hub and, as part of that exercise, I took another look at the public calendars for both universities. It turns out that Harvard does offer a variety of calendar feeds. I just hadn’t looked hard enough. There’s even an API:

The HarvardEvents API allows you to request HarvardEvents data programmatically in CSV, iCalendar, JSON, JSONP, serialized PHP, RSS, or XML format. The API provides a RESTful interface, which means that you can query it using simple HTTP GET requests.
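
A RESTful interface means a standard feed is one GET away. Here’s a hypothetical example in Python; the host and parameter names are my guesses, not the documented API, so check Harvard’s docs for the real endpoint:

import urllib.request

# Hypothetical query for an iCalendar rendering of upcoming events.
url = "http://events.harvard.edu/api/events?format=ical"
with urllib.request.urlopen(url) as resp:
    print(resp.read(200).decode("utf-8", "replace"))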

Nicely done! You’d think that, just down the road, MIT would be doing something similar. But if so I haven’t found it. So for now, the Boston hub includes way more Harvard events than MIT events.

Here’s hoping MIT will follow Harvard’s lead and equip its public calendars with standard data feeds.

Computational thinking and life skills

Surfing the Roku box last night I landed on the MIT OpenCourseWare channel and sampled Introduction to Computer Science and Programming. In one lecture Prof. John Guttag offers this timely reminder:

Focus on understanding why the program is doing what it’s doing, rather than why it’s not doing what you wanted it to.

It was timely because I was, in fact, writing a program that wasn’t doing what I expected. And I had, in fact, fallen into the psychological trap that Guttag warns about. When you’re writing software you use abstractions and also create them. What’s more, many of the abstractions you use are the very ones you created. When you live in a world of your own invention you can do amazing and wonderful things. But you can also do ridiculous and stupid things. To see the difference between them you must always be prepared to park your ego and consider the latter possibility.

Elsewhere in that lecture, Prof. Guttag talks about Jeannette Wing’s idea that computational thinking involves skills that transcend the computer and information sciences. In 2008, when that lecture was given, many of us were talking about how that might be true. We talked about computational thinking as a “Fourth R” — a cognitive tool as fundamental as Reading, Riting, and Rithmetic.

I never found an example that would resonate broadly. But maybe this will:

Focus on understanding why your spouse or child or friend or political adversary is doing what he or she is doing, rather than why he or she is not doing what you wanted him or her to.

Why I subscribe to the Ann Arbor Chronicle

The Ann Arbor city council met, most recently, on October 15. Why didn’t the Ann Arbor Chronicle’s story on the meeting land until October 24? It took a while for Dave Askins to compile his typically epic 15,000-word blog post. It’s an astonishingly detailed record of the meeting — more and better coverage, perhaps, than is available in any city.

The Chronicle describes itself thusly:

Launched in September 2008, the Ann Arbor Chronicle is an online newspaper that focuses on civic affairs and local government coverage. Although we’d likely be classified by most folks as “new media,” in many ways we embrace an ethos that runs contrary to current trends: Longer, in-depth articles; an emphasis on factual accuracy and thoroughness, not speed; and an assumption that our readers are thoughtful, intelligent and engaged in this community.

Who will read 15,000 words on a city council meeting? That depends partly on when the reading occurs. Because while the Chronicle is a newspaper, it is also a living history of the town’s public affairs. There’s no paywall. Every story is, and remains, fully available. That means the Chronicle isn’t just on the web, it is a web. What was said and decided about transportation in October 2012 can be reviewed in 2013 or 2014. The Chronicle is a community memory. In the short term it delivers news. Over the long run it assembles context.

Consider the list of links, below, that I extracted from the October 24 report. Of the 53 links, 23 point to prior Chronicle stories. Paywalled journalism can’t do that, and it’s a crippling limitation. If those who cannot remember the past are condemned to repeat it, mainstream journalism’s online amnesia won’t help move us forward. What happened today is only the tip of the iceberg. We need to know how we got to today. That can’t happen in print. It can only happen online. But tragically it almost never does, so context suffers.

If you scan that list of links you’ll notice something else that mainstream online journalism seldom allows: external links. The majority of the links in the Chronicle’s report point to other sources. Some are the websites of local organizations or local government. Others are documents that weren’t online but have been placed into the public record by the Chronicle. Paywalled journalism rarely does this. Once you’re in, it wants to keep you in, to rack up pageviews. This is another context killer.

Who will pay for all this luxurious context? Well, there’s me. I don’t live in Ann Arbor. But I went to school there, my daughter does now, I have another connection to the town, and I’m a huge fan of Dave Askins’ and Mary Morgan’s bold venture. So I’m a voluntary subscriber. And I hope I’ll get the chance to support something like the Chronicle in the town where I do live.

As a refugee from the pageview mills I can tell you that model leads nowhere good. I’m ready, willing, and able to back alternatives that use the web as it was meant to be used.


Links extracted from the Ann Arbor Chronicle’s report on the city council meeting of October 15, 2012.

  1. http://a2dda.org/current_projects/a2p5_/
  2. http://alphahouse-ihn.org/
  3. http://annarborchronicle.com/2010/03/03/to-do-bicycle-registry-transit-station/
  4. http://annarborchronicle.com/2010/03/10/county-offers-400k-match-for-skatepark/
  5. http://annarborchronicle.com/2010/04/05/ann-arbor-planning-priorities-take-shape/
  6. http://annarborchronicle.com/2011/03/24/ann-arbor-gives-initial-ok-to-pot-licenses/
  7. http://annarborchronicle.com/2012/02/10/um-ann-arbor-halt-fuller-road-project/
  8. http://annarborchronicle.com/2012/05/13/public-art-rehashed-by-ann-arbor-council/
  9. http://annarborchronicle.com/2012/06/04/ann-arbor-rail-study-moves-ahead/
  10. http://annarborchronicle.com/2012/06/11/city-council-action-focuses-on-transit-topics/
  11. http://annarborchronicle.com/2012/07/19/um-wall-street-parking-moves-ahead/
  12. http://annarborchronicle.com/2012/08/09/city-council-votes-down-park-amendment/
  13. http://annarborchronicle.com/2012/08/14/um-ann-arbor-agree-rail-costs-not-owed/
  14. http://annarborchronicle.com/2012/08/16/council-meeting-floods-fires-demolition/
  15. http://annarborchronicle.com/2012/08/20/planning-group-briefed-on-william-st-project/
  16. http://annarborchronicle.com/2012/09/01/city-council-to-focus-on-land-sale-policy/
  17. http://annarborchronicle.com/2012/09/07/aata-5-year-program-may-2013-tax-vote/
  18. http://annarborchronicle.com/2012/09/09/ann-arbor-dda-board-addresses-housing/
  19. http://annarborchronicle.com/2012/09/10/zoning-transit-focus-of-council-meeting/
  20. http://annarborchronicle.com/2012/09/13/county-tax-hike-for-economic-development/
  21. http://annarborchronicle.com/2012/09/24/council-punts-on-several-agenda-items/
  22. http://annarborchronicle.com/2012/09/27/transit-contract-contingent-on-local-money/
  23. http://annarborchronicle.com/2012/10/11/dda-green-lights-housing-transportation/
  24. http://annarborchronicle.com/2012/10/12/council-may-seek-voter-ok-on-rail-station/
  25. http://annarborchronicle.com/2012/10/12/positions-open-new-transit-authority-board/
  26. http://annarborchronicle.com/events-listing/
  27. http://annarborchronicle.com/wp-content/uploads/2012/08/MalletsDrainFloodingResolution.jpg
  28. http://annarborchronicle.com/wp-content/uploads/2012/10/600-Oct-15.jpg
  29. http://annarborchronicle.com/wp-content/uploads/2012/10/AAS-Conceptual-Construction-Costs-1.pdf
  30. http://annarborchronicle.com/wp-content/uploads/2012/10/AnnArbor-Congestion-now-future.jpg
  31. http://annarborchronicle.com/wp-content/uploads/2012/10/Appeal-12-263-Askins-map-historical-flooding1.pdf
  32. http://annarborchronicle.com/wp-content/uploads/2012/10/briere-derezinski-600.jpg
  33. http://annarborchronicle.com/wp-content/uploads/2012/10/cooper-deck-600.jpg
  34. http://annarborchronicle.com/wp-content/uploads/2012/10/EcologyCenter-Support-Resolution-Oct-2012.pdf
  35. http://annarborchronicle.com/wp-content/uploads/2012/10/greenbelt-hamstead-lane.jpg
  36. http://annarborchronicle.com/wp-content/uploads/2012/10/PA2PPositionPaper.pdf
  37. http://annarborchronicle.com/wp-content/uploads/2012/10/powers-ezekiel-600.jpg
  38. http://annarborchronicle.com/wp-content/uploads/2012/10/shiffler-mitchell-anglin-600.jpg
  39. http://annarborchronicle.com/wp-content/uploads/2012/10/smith-lax-600.jpg
  40. http://annarborchronicle.com/wp-content/uploads/2012/10/smith-listening-600.jpg
  41. http://annarborchronicle.com/wp-content/uploads/2012/10/teall-transit-map-600.jpg
  42. http://michigan.sierraclub.org/huron/
  43. http://protectourlibraries.org/
  44. http://wbwc.org/
  45. http://www.a2gov.org/government/city_administration/city_clerk/pages/default.aspx
  46. http://www.a2gov.org/government/communityservices/ParksandRecreation/parks/Features/Pages/KueblerLangford.aspx
  47. http://www.a2gov.org/government/communityservices/planninganddevelopment/planning/Pages/ZoningOrdinanceReorganizationProject.aspx
  48. http://www.a2gov.org/government/publicservices/fleetandfacility/airport/Pages/default.aspx
  49. http://www.ci.ann-arbor.mi.us/government/communityservices/planninganddevelopment/planning/Pages/NorthMainHuronRiverCorridorProject.aspx
  50. http://www.ecocenter.org/
  51. http://www.environmentalcouncil.org/
  52. http://www.michiganlcv.org/
  53. http://www.soloaviation.aero/