Tacit knowledge, abundant examples, and deliberate practice

Last week some friends at a local marketing firm invited me to join them in Boston at a conference called Inbound. I’m glad I went. Not because I learned much about inbound marketing, whatever that is. (Is there a parallel conference called Outbound? How would it differ?) But mainly because I got to hear Kathy Sierra give a really useful talk on optimizing human performance.

The overt purpose of the talk was to invite “content marketers” to create (here I search in vain for another word) “content” that aims not only to engage and inform, but also to help its “users” improve their performance in some domain. That’s a stretch goal for marketing. And I was delighted to see Kathy put it in front of an audience mainly focused on social media best practices, list segmentation, and landing page strategy.

Those aren’t my top concerns. But lately I’ve been working hard at learning to play music. And from that perspective three of Kathy’s themes resonated powerfully with me:

1. Tacit knowledge

2. Abundant examples

3. Deliberate practice

Kathy doesn’t use the phrase tacit knowledge but it’s a touchstone for me so that’s what I’ll call it. She gives the example of chick sexing, a famously hard task. Not many people are able to differentiate male from female chicks. Those who can don’t know, and can’t say, how they do it. Kathy talks about a study showing that novice chick sexers who hung around with experts picked up the skill rapidly by osmosis.

Key to the transmission of this tacit knowledge is an abundance of examples. Brains can use pattern matching to learn directly from other brains. It can happen under the radar, without conscious articulation of technique, but it requires a lot of data. You need to expose your brain to hundreds or thousands of examples of things other people do without knowing quite how they do them.

I think this helps explain why YouTube is so extraordinarily valuable to aspiring musicians. Pick a tune you want to learn. It’s wonderful to find a performance for your instrument that you can see and hear. But typically you won’t find just one; there will often be dozens. I’ve been aware for quite some time that my ability to see and hear many performances of the same tune, by many performers, whose skills and styles vary, accelerates my learning to play the tune. Until now, though, I haven’t been clear about the reason why. Pattern matching requires a lot of data. For a range of skills that can be demonstrated in the medium of online video, YouTube is becoming a robust source of that data.

Of course we can’t learn everything by osmosis. We often need to drag tacit knowledge to the surface, study it, practice it, and then submerge it. As Herbert Simon and William Chase pointed out decades ago, and as Malcolm Gladwell more recently popularized, it can take a long time to acquire expertise this way. Ten thousand hours is the now-famous rule of thumb.

I’ve gotten a late start with music so I’m not sure I’ll be able to clock my ten thousand hours. But in any case the interesting question to me is how best to spend the time I’ve got. I know that I don’t practice as efficiently as I should, and that I’m prone to burning in bad habits. Kathy suggests the following strategy. Pick a tune, or section of a tune, and aim to be able to play it with 95% reliability after practicing for at most 3 sessions of at most 45 minutes each. If you don’t get there, stop. Move the goalpost. Pick a different tune, or a smaller section of the tune, or a slower tempo, and nail that.

It’s hard to be that disciplined. Especially when your head is full of so many examples of the tunes you want to play. Seeing and hearing whole tunes, at tempo, and trying to play along with them, is one crucial mode of learning. Analyzing passages note by note, and trying to perfect them (maybe with the help of a tool like Soundslice), is another. They’re complementary, and I need them both. So thanks, Kathy, for helping me think about how to combine them. And … welcome back!

If we want private communication we can have it

If you received an email message from me during the early 2000s, it came with an attachment that likely puzzled or annoyed you. The attachment was my digital ID. In theory you could use it for a couple of purposes. One was to verify that I was the authentic sender of the message, and that the content of my message had not been altered en route.

You could also save my public key and then use it to send me an encrypted message. During the years I was routinely including my digital ID in outbound messages I think I received an encrypted reply once. Maybe twice.

I’ve always thought that everyone should have the option to communicate securely. Once, there was little chance any ordinary person would be able to figure out how to do it. Even for me, as a tech journalist who had learned both the theory and practice of secure communication, it was a challenge to get things working. And when I did, who could I talk to? Only someone else who’d traveled the same path. The pool of potential communication partners was too small to matter.

But during the 2000s I hoped for, and then encouraged, developments that promised to democratize private communication. Mainstream email software implemented the relevant Internet standards and integrated the necessary encryption tools. Now if you and I wanted to communicate securely we could just tick some options in our email programs.

But it still hardly ever happened. Why not? It comes down to a question of defaults. In order to make use of the integrated encryption tools you needed a digital ID. The default was that you didn’t have one. And that’s still the default. You have to go out of your way to get a digital ID. You have to alter the default state of your system, and that’s something people mostly won’t do.

Broadly there are two kinds of secure communication. One kind is implemented in programs like Apple’s Mail and Microsoft’s Outlook. (You likely didn’t know that, and almost surely have never used it, but it’s there.) This kind of secure communication relies on a hierarchical system of trust. To use it you acquire a digital ID issued by, and backed by, some authority. It could be a government, it could be a commercial provider; in practice it’s usually the latter. Your communication software is configured to trust certain of these providers. And to use it you must trust those providers too.

Another kind of secure communication relies on no higher authority. Instead communication partners trust one another directly, and exchange their digital IDs in pairwise (peer-to-peer) fashion. Among systems that use this approach, PGP (Pretty Good Privacy) is most notable. Another, now discontinued, was Groove.

Much ink has been spilled, and many pixels lit, debating hierarchical/centralized versus peer-to-peer/distributed methods of storing and transmitting data. Of course the definitions of these methods wind up being a bit fuzzy because hierarchical systems can have peer-to-peer aspects and vice versa.
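The two styles can be caricatured in a few lines of Python. This is a toy sketch of the trust relationships only; there is no real cryptography here, and every name in it is invented for illustration:

```python
# Toy models of the two trust styles; no real cryptography involved.

# Hierarchical: each identity names its issuer, and trust means the
# chain ends at a root authority the software is configured to trust.
issuers = {
    "alice@example.org": "ExampleIssuingCA",
    "ExampleIssuingCA": "ExampleRootCA",
}
trusted_roots = {"ExampleRootCA"}

def trusts_hierarchically(identity):
    """Follow issuer links; trust succeeds only at a trusted root."""
    while identity is not None:
        if identity in trusted_roots:
            return True
        identity = issuers.get(identity)
    return False

# Peer-to-peer: trust is nothing more than the set of keys two
# parties have exchanged directly, PGP-style.
exchanged_keys = {("me@example.org", "alice@example.org")}

def trusts_directly(me, other):
    return (me, other) in exchanged_keys

print(trusts_hierarchically("alice@example.org"))            # via the CA chain
print(trusts_directly("me@example.org", "alice@example.org"))  # via key exchange
```

Note how the hybrid case arises naturally: nothing stops a system from consulting both the chain and the pairwise set.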

I would bet that Edward Snowden, Laura Poitras, and Glenn Greenwald are using a purely peer-to-peer approach. When the stakes are astronomically high, and when your pool of communication partners is very small, that would be the only way to go. It would be a huge inconvenience. You’d need to massively alter the default state of an off-the-shelf computer to enable secure communication. But there’d be no choice. You’d have to do it.

Could standard systems come with software that communicates securely by default? Yes. Methods based on a hybrid of hierarchical and peer-to-peer trust could be practical and convenient. And they could deliver far better than the level of privacy we now enjoy by default, which is none. Would people want them? Until recently the answer was clearly no. Probably the answer is still no. But now, for the first time in my long experience with this topic, ordinary citizens may be ready to entertain the question. Please do.

Why I subscribe to the Ann Arbor Chronicle (part 2)

I’ve written before about why I subscribe to the Ann Arbor Chronicle. As of today, my Ann Arbor Chronicle Number is 10. That’s the number of months I’ve been sending a modest donation to the Chronicle. The data comes from this page, which also gives me the Chronicle Number of some other Ann Arborites I’ve met in my travels there:

23 Bill Tozier
32 Peter Honeyman
50 Linda Feldt

Here’s a chart showing the growth in numbers of donors per month [2]:

The Chronicle’s evolving policy on donation — and disclosure thereof — is, like everything else about the publication, thoughtful and nuanced.

I have two reasons to hope that the trend shown in that chart [1] will continue. One is professional. The Chronicle was the first publication to adopt the web of events model that I am trying to establish more widely. So the Chronicle’s success helps me advance that cause.

The other reason is personal. Though I’m a refugee from journalism I care deeply about it. I wish the kind of journalism practiced by the Chronicle on behalf of Ann Arbor could happen in my town. And in yours.

I’m glad there’s a foundation chartered to help journalism reinvent itself. But while I deem the Chronicle eminently worthy of funding from that source it has thus far received none. And maybe that’s a good thing. Over the long run only broad community support will be sustainable. So I hope the Chronicle achieves that, and shows other communities the way.


[1] July 2013 notwithstanding. But maybe as of today, August 1, the July data remains incomplete?

[2] The spreadsheet behind the chart is here. And the code that created the spreadsheet is here:

import re

# Count donors per month from the list copied off the Chronicle's subscribe page.
with open('donors.txt') as f:
    s = f.read()

s = s.replace('\n\n', '\n')             # collapse blank lines
months = re.findall(r'\d{4}.+', s)      # month headers like "2013 July"
lists = re.split(r'\d{4}.+', s)[1:]     # the donor names under each header
assert len(months) == len(lists)

for month, donors in zip(months, lists):
    names = donors.strip('\n').split('\n')
    print('%3s\t%s' % (len(names), month))

""" 
date copied/pasted from http://annarborchronicle.com/subscribe/ 
looks like this:

2013 July
Linda Diane Feldt
Nancy Quay
Jeremy Peters
Bruce Amrine
Mary Hathaway
Katherine Kahn
Sally Petersen
...
2013 June
...

output looks like this:

 93	2013 July
117	2013 June
120	2013 May
120	2013 April

"""

Changeable minds

Minds change rarely. I wonder a lot about what happens when they do, and I often ask people this question:

What’s something you believed deeply, for a long time, and then changed your mind about?

This often doesn’t go well. You’ll ask me, naturally enough, for an example — some belief that I once held and then revised. But since any topic I offer as an example intersects with your existing belief system in some way, we wind up talking about that topic and my original question goes unanswered.

It’s easy to discuss positions you support, or oppose, within the framework of your existing belief system. It’s much harder to consider how that belief system has changed, or could change.

Facebook has become a laboratory in which to observe this effect. I’m connected to people across the continuum of ideologies. At both extremes I see the same behavior. News stories are selected, refracted through the lens of ideology, and posted with comments that I can predict with great certainty. These utterances, by definition, convey little information. Nor are they meant to. Their purpose is to reinforce existing beliefs, not to examine them.

Echo chambers aren’t new, of course, and they have nothing to do with the Internet. We seek the like-minded and avoid the differently-minded. On Facebook, though, it’s not so easy to avoid the differently-minded. I regard that as a feature, not a bug. I’m open to re-examining my own beliefs and I welcome you to challenge them. But if you’re not similarly open to re-examining your own beliefs then I can’t take you seriously.


See also the Edge Annual Question for 2008: What Have You Changed Your Mind About?

Learning to walk (again)

Over the years I’ve had a number of overuse injuries: tendinitis from too much typing or mousing or music playing, a sore shoulder from too much swimming, painful knees and ankles from too much running. The key phrase here is “too much” and you’d think I’d learn my lesson eventually. But no. When I get excited about doing things I overdo it and then, periodically, must back off and recover.

Often, during recovery, as I analyze what’s gone wrong, I find that the problem is not simply overuse but more specifically asymmetric use. Once, during a bout of pain in my right thumb joint, while pondering what the cause might be, I looked down at my hands while I was typing. Clatter clatter clatter BAM! Clatter clatter clatter BAM! The BAM was my right thumb pounding the space bar. I could feel a twinge every time I saw it happen.

In some cases, and that was one of them, shifting to a symmetrical pattern of use is helpful. (As is, of course, not pounding.) I’ve trained myself to alternate thumbs while typing (although, as I look down at my hands now I see that needs reinforcement), to breathe alternately left and right while swimming, to change mouse hands from time to time, to become a switch hitter with the garden shovel.

Every time I go through one of these retraining exercises I reflect on the difficulty of the process. The steps are:

– surface a bad habit that was unconscious

– consciously develop a good habit

– submerge the new habit back into the unconscious

In the latest iteration of the process I am relearning how to walk. It sounds ridiculous. It is ridiculous. But here’s what happened — or rather, my best current understanding of what happened. About a year ago I strained one of the adductors in my right groin. Usually things like that resolve with a bit of rest and some stretching. But this time it didn’t. Last summer I was having trouble lifting my right leg over the bicycle seat when mounting. When the same thing happened on the first ride of this season I knew something had to be corrected. But what?

An acquaintance who does massage asked me to observe the angles of my upper legs while cycling. Next time out I looked down and could hardly believe it. My right knee was out of line by at least 25 degrees! That misalignment was clearly aggravating the injury and not allowing it to heal.

When I got home I put cycling and running on hold and went back to basics. I stood in what felt like a normal position and looked down. Sure enough, my right foot was pointing out noticeably. When I aligned it with my left foot I felt like I was forcing it to pigeon-toe. Then I started to walk. Each step required a conscious effort to align the right foot. It didn’t feel correct. But I could see that it was.

So that’s how it’s gone for the past 5 days. Instead of cycling or running I take the dogs for a hike and focus on alignment. I have to supervise my right foot closely and, when I go up and down over obstacles, I have to supervise my right knee to make sure it stays aligned too.

I can tell that it’s working. But clearly a bad habit that took a year to develop will take more than a few days to correct.

Every time something like this happens I wonder how I could fail to notice something so fundamental. But it really isn’t surprising. We can’t consciously monitor how we use our bodies all the time, and bad habits develop gradually. If there’s any application of wearable computing that will matter to me I think it will be the one that warns me when these kinds of bad habits begin to develop, and helps me correct them. We’re not great analysts of the forces in play as we use our bodies, but computers could be.

Upcoming is downgoing, Elm City is ongoing

Here’s Andy Baio’s farewell to Upcoming, a service I’ve been involved with for a decade. In a March 2005 blog post I wrote about what I hoped Upcoming would become, in my town and elsewhere, and offered some suggestions to help it along. One was a request for an API which Upcoming then lacked. Andy soon responded with an API. It was one of the pillars of my Elm City project for a long while until, as Andy notes in his farewell post, it degraded and became useless.

Today I pulled the plug and decoupled Upcoming from all the Elm City hubs.

In 2009 Andy and I both spoke at a conference in London. Andy was there to announce a new project that would help people crowdsource funding for creative projects. I was there to announce a project that would help people crowdsource public calendars. Now, of course, Kickstarter is a thing. The Elm City project not so much. But I’m pretty sure I’m on the right track, I’m lucky to be in a position to keep pursuing the idea, and although it’s taking longer than I ever imagined I’m making progress. Success, if it comes, won’t look like Upcoming did in its heyday, but it will be a solution to the same problem that Upcoming addressed — a problem we’ve yet to solve.

That same March 2005 blog post resonates with me for another reason. That was the day I walked around my town photographing event flyers on shop windows and kiosks. When I give presentations about the Elm City project I still show a montage of those images. They’re beautiful, and they’re dense with information that isn’t otherwise accessible.

Event flyers outperform web calendars, to this day, because they empower groups and organizations to be the authoritative sources for information about their public events, and to bring those events to the attention of the public. The web doesn’t meet that need yet but it can, and I’m doing my best to see that it does.

Community calendar workshop next week in Newport News

My next community calendar workshop will be at the Peninsula Fine Arts Center in Newport News, on Tuesday April 23 at 6PM. It’s for groups and organizations in the Hampton Roads region of Virginia, including Chesapeake, Hampton, Newport News, Norfolk, Portsmouth, Suffolk, Virginia Beach, Williamsburg, and Yorktown. If you’re someone there who’d like to help change the way public calendars work in your region, please sign up on EventBrite so we know you’re coming, or contact me directly.

Here’s the pitch from the workshop’s sponsor and host, the Daily Press:

The Community Calendar Project

It’s about time someone came up with a way to get all community events in one place so everyone, everywhere can find out what’s going on at any given time, on any given day.

It’s about time creators of those events – the people, agencies and organizations who work so hard to bring quality education, support and entertainment to the community – had a way to get their messages out there effortlessly.

It’s about time the public can find out about the happenings and events they really care about and never miss an important event again.

AND it’s “time” – or the lack of it – that makes this community initiative being spearheaded by the Daily Press so valuable to everyone. This community calendar will SAVE time – for the event creators, the event seekers and the websites and platforms that work to make this information available.

The Daily Press is partnering with Jon Udell of Microsoft to bring this project to Hampton Roads and make it among the first communities in the country to have an easily searchable, FREE database of events available to the public. And we want to get all of Hampton Roads involved. The only thing required to participate is to agree to use an iCalendar formatted calendar on your own websites or to create events through Facebook. That’s it. Participation guaranteed.

What is an iCalendar? Simply, iCalendar is a computer file format that allows Internet users to exchange calendars with other Internet users. iCalendar is used and supported by personal calendars such as Google Calendar, Apple Calendar (formerly iCal), Microsoft Outlook and Hotmail, Lotus Notes, Yahoo! Calendar, and others, and by web content management systems including WordPress, Drupal, Joomla, and others.

Many of you may already use one of these applications to publish your calendars online, and that is great! That means you can already participate in the calendar network we are bringing together. The rest of you can easily convert and get on board. We’ll tell you how.

On April 23 you are invited to a presentation of the Community Calendar Project. Jon will be on hand to tell you what it is, why it matters and how to get involved. The gathering will take place at 6 p.m. at the Peninsula Fine Arts Center, 101 Museum Drive (across from The Mariners’ Museum) in Newport News.

Light refreshments will be served. Get your FREE tickets so we know how many are attending.

Hope to see you there.
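To make the pitch concrete: an iCalendar file is just structured plain text. Here’s a minimal, hypothetical event of the kind a group’s feed would carry (the UID, date, and details are invented; a production feed per RFC 5545 would include a few more properties):

```python
# A minimal iCalendar event, assembled as a plain string.
# RFC 5545 specifies CRLF line endings, hence the "\r\n" join.
event = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//Example//Community Calendar//EN",
    "BEGIN:VEVENT",
    "UID:workshop-20130423@example.org",     # any globally unique ID
    "DTSTART:20130423T180000",               # April 23, 6 p.m. (local time)
    "SUMMARY:Community Calendar Workshop",
    "LOCATION:Peninsula Fine Arts Center",
    "END:VEVENT",
    "END:VCALENDAR",
])
print(event)
```

Any calendar program that speaks the standard, from Google Calendar to Outlook, can subscribe to a feed of events shaped like this one.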

Walled fields of knowledge

My dad died of congestive heart failure in 2009. The last weeks of his life weren’t what they could have been had we known enough to get him into hospice care. But we didn’t know, and I’ve felt ashamed about that.

If we had it to do over again things would be very different. We’d have brought him home much sooner, made him comfortable, helped him work through a life review, hung out with him, heard and said some things that needed to be heard and said.

As it was we only managed to bring him home for his last day. It was better than not bringing him home at all, but not much better, at least not for him. For us, though, it was transformative. Two generations of our family — my wife and I, our children — had never seen the kind of death that was normal until the modern era. We didn’t know why or how to shift gears from medical treatment to palliative care. Now we do and we’re deeply changed — Luann especially. She’s become a hospice volunteer who comforts the dying, supports their families, and counsels survivors.

From her I’ve learned a lot about hospice care. What happened to us, it turns out, is typical. Many people don’t realize how comfortable a dying person can often be at home with proper medication. As a result many delay until the bitter end, and miss out on the emotional and psychological richness that’s possible in a home hospice setting.

A big reason for the delay is the chasm that divides the culture of hospitals from the culture of hospice. Nobody in the hospital advised us to bring dad home a month before he died. A social worker mentioned it, but dad didn’t know what it could mean to make that choice, we didn’t know enough to advocate for it, and medical professionals speak with vastly more authority than do social workers in our current regime.

What hospitals don’t know about hospice is astonishing. Last night, while reading an anthology of science writing, I happened on an essay by Atul Gawande, a physician/writer who, like Oliver Sacks, Perri Klass, and Abraham Verghese, opens windows into the medical world. In 2010, the year after our experience with my dad, he wrote a New Yorker piece called Letting Go that included these revelations:

One Friday morning this spring, I went on patient rounds with Sarah Creed, a nurse with the hospice service that my hospital system operates. I didn’t know much about hospice. I knew that it specialized in providing “comfort care” for the terminally ill, sometimes in special facilities, though nowadays usually at home. I knew that, in order for a patient of mine to be eligible, I had to write a note certifying that he or she had a life expectancy of less than six months. And I knew few patients who had chosen it, except maybe in their very last few days, because they had to sign a form indicating that they understood their disease was incurable and that they were giving up on medical care to stop it. The picture I had of hospice was of a morphine drip. It was not of this brown-haired and blue-eyed former I.C.U. nurse with a stethoscope, knocking on Lee Cox’s door on a quiet street in Boston’s Mattapan neighborhood.

And:

Like many people, I had believed that hospice care hastens death, because patients forgo hospital treatments and are allowed high-dose narcotics to combat pain. But studies suggest otherwise. In one, researchers followed 4,493 Medicare patients with either terminal cancer or congestive heart failure. They found no difference in survival time between hospice and non-hospice patients with breast cancer, prostate cancer, and colon cancer. Curiously, hospice care seemed to extend survival for some patients; those with pancreatic cancer gained an average of three weeks, those with lung cancer gained six weeks, and those with congestive heart failure gained three months.

These things once surprised me too. Now, thanks to our brief hospice experience with dad and Luann’s volunteer work since, I take them for granted. And while I’ve felt ashamed not to have arrived at this understanding sooner, in time to help dad, I guess I should cut myself some slack. Atul Gawande didn’t get there any sooner than me.

How could that be? How could a leading medical practitioner (and explainer) reach mid-career lacking such basic and useful knowledge? All too easily when we carve the world into fields of knowledge and then build walls around them.

Networks of first-class peers

Last month I wrote a column for Wired.com, Rebooting web comments, that attracted some unsavory feedback. Had the flamers read beyond the second paragraph they might have seen that I wasn’t insisting everyone must use verifiable identities online. But they didn’t. So I wrote another column last week, Own your words, to clarify my position.

My first blogging tool, back in 2001, was Dave Winer’s Radio UserLand. One of Dave’s mantras was: “Own your words.” As the blogosphere became a conversational medium, I saw what that could mean. Radio UserLand didn’t support comments. That turned out to be a good constraint to embrace. When conversation emerged, as it always will in any system of communication, it was a cross-blog affair. I’d quote something from your blog on mine, and discuss it. You’d notice, and perhaps write something on your blog referring back to mine.

This cross-blog conversational mode had an interesting property: You owned your words. Everything you wrote went into your own online space, was bound to your identity, became part of your permanent record. As a result, discourse tended to be more civil than what often transpired in Usenet newsgroups or web forums. In those kinds of online spaces, your sense of identity is attenuated. You may or may not be pseudonymous, but either way the things you say don’t stick to you in the same way they do if you say them in your own permanent online space.

Later, blogs evolved forum-style comments, which concentrated discussion but recreated the old problems: attenuation of identity, loss of ownership of data. Then came Twitter and Facebook and, so the story goes, “social killed the blogosphere.” It was easier to read and write in those online spaces, blogging declined, and Google’s recent decision to retire its RSS reader is being widely regarded as the nail in the blogosphere’s coffin.

Of course that’s wrong. One of the staples of tech punditry is the periodic declaration that something — Unix, the Web, Microsoft, Apple, the blogosphere — is dead.

Will Google Reader’s exit spell the end of the blogosphere or its rebirth? Nobody knows, and since I’m no longer in the pageview business I won’t even hazard a prediction. Instead I want to highlight something that’s bigger than blogs, bigger even than social media. Owning your words is a fundamental principle. It seemed new at the dawn of the blogosphere but its roots ran deeper. They were woven into the fabric of the Internet which, at its core, is a network of peers.

For technical reasons I won’t explore here, it’s not possible (or, I should say, not believed possible) for our computers to be first-class peers on that network, as early Internet-connected computers were. But it is possible for various of our avatars — our websites, our blogs, our calendars — to represent us as first-class peers. That means:

– They use domain names that we own

– They converse with other peers in ways that we enable and can control

– They store data in systems that we authorize and can manage

Your Twitter and Facebook avatars are not first-class peers on the network in these ways. Which isn’t to say they aren’t useful. Second-class peers are incredibly useful, largely because they enable us to avoid the complexities that make it challenging to operate first-class peers.

Those challenges are real. But they’re not insurmountable unless we believe that they are. I don’t believe that. I hope you won’t. What some of us learned at the turn of the millennium — about how to use first-class peers called blogs, and how to converse with other first-class peers — gave us a set of understandings that remain critical to the effective and democratic colonization of the virtual realm. It’s unfinished business, and it may never be finished, but don’t let the tech pundits or anyone else convince you it doesn’t matter. It does.

Indie theaters and open data

Movie showtimes are easy to find. Just type something like “movies keene nh” into Google or Bing and they pop right up:

You might assume that this is open data, available for anyone to use. Not so, as web developers interested in such data periodically discover. For example, from MetaFilter:

Q: We initially thought it would be as easy as pulling in an RSS feed from somewhere, but places like Yahoo or Google don’t offer RSS feeds for their showtimes. Doing a little research brought up large firms that provide news and data feeds and that serve up showtimes, but that seems like something that’s designed for high-level sites with national audiences.

So, is there any solution for someone who is just trying to display local showtimes?

A: This is more complicated than you might think. Some theatres maintain that their showtimes are copyrighted, and (try to) control the publication of them. Others have proprietary agreements with favored providers and don’t publish their showtimes elsewhere, to give their media partners a content edge.

What applies to RSS feeds applies to calendar feeds as well. It would be nice to have your local showtimes as an overlay on your personal calendar. But since most theaters don’t make the data openly available, you can’t.
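To sketch what that overlay could draw on: given a theater’s iCalendar feed, even a naive script can pair start times with titles. The feed fragment below is invented, and a real parser would also need to handle line folding, time zones, and recurrence rules:

```python
import re

# A hypothetical fragment of an indie theater's iCalendar feed.
feed = """BEGIN:VCALENDAR
BEGIN:VEVENT
DTSTART:20130405T190000
SUMMARY:The Seventh Seal
END:VEVENT
BEGIN:VEVENT
DTSTART:20130406T210000
SUMMARY:Stalker
END:VEVENT
END:VCALENDAR"""

# Pair each event's start time with its title.
showtimes = re.findall(r'DTSTART:(\S+)\nSUMMARY:(.+)', feed)
for when, title in showtimes:
    print(when, title)
```

The point is not the parsing, which is trivial, but the availability of the data: once a theater publishes a feed like this, every downstream calendar can carry its showtimes.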

Some indie theaters, however, do serve up the data. Here are some movies that don’t appear when you type “movies keene nh” into Google or Bing:

These are listings from the Putnam Theater at Keene State College. They syndicate to the Elm City hub for the Monadnock region of New Hampshire by way of the college calendar which recently, thanks to Ben Caulfield, added support for standard iCalendar feeds. They appear in the film category of that hub. And in fact they’re all that can appear there.

I’ve decided I’m OK with that. I used to forget about movies at the Putnam because they didn’t show up in standard searches. Now I sync them to my phone and I’m more aware of them. Would I want all the mainstream movies there too? I used to think so, but now I’m not so sure. There are plenty of ways to find what’s playing at mainstream theaters. That doesn’t feel like an awareness problem that needs solving. The indie theaters, though, could use a boost. As I build out Elm City hubs in various cities, I’ve been able to highlight a few with open calendars:

– In Berkeley: UC Berkeley Art Museum / Pacific Film Archive (BAM/PFA)

– In Toronto: Bloor Cinema

And here are some indies whose calendars could be open, but aren’t:

– In Portland: Academy Theater

– In Cambridge: The Brattle Theatre

If you’re an indie theater and would like your listings to be able to flow directly to personal calendars, and indirectly through hubs to community portals, check out how the Putnam, BAM/PFA, and the Bloor Cinema are doing it.

Let’s think about what we’re doing right

In The Better Angels of our Nature: Why Violence Has Declined, Steven Pinker compiles massive amounts of evidence to show that we are becoming a more civilized species. The principal yardstick he uses to measure progress is the steady decline, over millennia, in per-capita rates of homicide. But he also measures declines in violence directed towards women, racial groups, children, homosexuals, and animals.

It’s hard to read the chapters about the routine brutality of life during the Roman empire, the Middle Ages, the Renaissance, and — until more recently than we like to imagine — the modern era. An early example:

Far from being hidden in dungeons, torture-executions were forms of popular entertainment, attracting throngs of jubilant spectators who watched the victim struggle and scream. Bodies broken on wheels, hanging from gibbets, or decomposing in iron cages where the victim had been left to die of starvation and exposure were a familiar part of the landscape.

A modern example:

Consider this Life magazine ad from 1952:

Today this ad’s playful, eroticized treatment of domestic violence would put it beyond the pale of the printable. It was by no means unique.

A reader of that 1950s ad would be as horrified as we are today to imagine cheering a public execution in the 1350s. A lot changed in those 600 years. But in the 60 years since, more has changed. The ad that seemed OK to a 1950s reader would shock most of us here in the 2010s.

Over time we’ve grown less willing and able to commit or condone violence, and our definition of what counts as violence has grown more inclusive. And yet this is deeply counter-intuitive. We tend to feel that the present is more violent and dangerous than the recent past. And our intuition tells us that the 20th century must have been more so than the distant past. That’s why Pinker has to marshal so much evidence. It’s like Darwin’s rhetorical strategy in The Origin of Species. You remind people of a lot of things that they already know in order to lead them to a conclusion they wouldn’t reach on their own.

Will the trend continue? Will aspects of life in the 2010s seem alien to people fifty years hence in the same way that the coffee ad seems alien to us now, and that torture-execution seemed to our parents? (And if so, which aspects?)

Pinker acknowledges that the civilizing trend may not continue. He doesn’t make predictions. Instead he explores, at very great length, the dynamics that have brought us to this point. I won’t try to summarize them here. If you don’t have time to read the book, though, you might want to carve out an hour to listen to his recent Long Now talk. You’ll get much more out of that than from reading reviews and summaries.

Either way, you may dispute some of the theories and mechanisms that Pinker proposes. But if you buy the premise — that all forms of violence have steadily declined throughout history — I think you’ll have to agree with him on one key point. We’re doing something right, and we ought to know more about why and how.

Flash Fill: Text wrangling for non-programmers

As Elm City hubs grow, with respect to both raw numbers of events and numbers of categories, unfiltered lists of categories become unwieldy. So I’m noodling on ways to focus initially on a filtered list of “important” categories. The scare quotes indicate that I’m not yet sure how to empower curators to say what’s important. Categories with more than a threshold number of events? Categories that are prioritized without regard to number of events? Some combination of these heuristics?

To reason about these questions I need to evaluate some data. One source of data about categories is the tag cloud. For any Elm City hub, you can form this URL:

elmcity.cloudapp.net/HUBNAME/tag_cloud

If HUBNAME is AnnArborChronicle, you get a JSON file that looks like this:

[
{ "aadl":348},
{ "aaps":9},
{ "abbot":18},
...
]

This is the data that drives the category picklist displayed in the default rendering of the Ann Arbor hub. A good starting point would be to dump this data into a spreadsheet, sort by most populous categories, and try some filtering.

I could add a feature that serves up this data in some spreadsheet-friendly format, like CSV (comma-separated values). But I am (virtuously) lazy. I hate to violate the YAGNI (“You aren’t gonna need it”) principle. So I’m inclined to do something quick and dirty instead, just to find out if it’ll even be useful to work with that data in a spreadsheet.

One quick-and-dirty approach entails looking for some existing (preferably online) utility that does the trick. In this case I searched for things with names like json2csv and json2xls, found a few candidates, but nothing that immediately did what I wanted.

So some text needs to be wrangled. One source of text to wrangle is the HTML page that contains the category picklist. If you capture its HTML source, you’ll find a sequence of lines like this:

<option value="aadl">aadl (348)</option>
<option value="aaps">aaps (9)</option>
<option value="abbot">abbot (18)</option>

It’s easy to imagine a transformation that gets you from there to here:

aadl	348
aaps	9
abbot	18

Although I’ve often written code to do that kind of transformation, if it’s a quick-and-dirty one-off I don’t even bother. I use the macro recorder in my text editor to define a sequence like:

  • Start selecting at the beginning of a line
  • Go to the first >
  • Delete
  • Go to whitespace
  • Replace with tab
  • Search for (
  • Delete
  • Search for )
  • Delete to end of line
  • Go to next line

This is a skill that’s second nature to me, and that I’ve often wished I could teach others. Many people spend crazy amounts of time doing mundane text reformatting; few take advantage of recordable macros.
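For comparison, the same one-off is a few lines of Python with a regular expression. This is a hypothetical sketch; the pattern assumes the exact <option> shape shown above, including the name repeating after the closing bracket:

```python
import re

# The picklist HTML shown above.
html = '''<option value="aadl">aadl (348)</option>
<option value="aaps">aaps (9)</option>
<option value="abbot">abbot (18)</option>'''

# Capture the category name and the parenthesized count on each line.
# The \1 backreference matches the repeated name after the >.
pattern = re.compile(r'<option value="([^"]+)">\1 \((\d+)\)</option>')

rows = [
    "\t".join(m.groups())
    for m in (pattern.match(line) for line in html.splitlines())
    if m
]
print("\n".join(rows))  # name, tab, count — one line per category
```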

But the reality is that recordable macros are the first step along the slippery slope of programming. Most people don’t want to go there, and I don’t blame them. So I’m delighted by a new feature in Excel 2013, called Flash Fill, that will empower everybody to do these kinds of routine text transformations.

Here’s a picture of a spreadsheet with HTML patterns in column A, an example of the name I want extracted in column B, and an example of the number I want in column C.

Given that setup, you invoke Flash Fill in the first empty cells of columns B and C, and it follows the examples in B1 and C1. Here’s the resulting spreadsheet on SkyDrive. Wow! That’s going to make a difference to a lot of people!

Suppose your data source were instead JSON, as shown above. Here’s another spreadsheet I made using Flash Fill. As will be typical, this took a bit of prep. Flash Fill needs to work on homogeneous rows. So I started by dumping the JSON into JSONLint to produce text like this:

[
    {
        "aadl": 348
    },
    {
        "aaps": 9
    },
    {
        "abbot": 18
    },
...
]

I imported that text into Excel 2013 and sorted to isolate a set of rows with a column A like this:

"aadl": 348
"aaps": 9
"abbot": 18

At that point it was a piece of cake to get Flash Fill to carry the names over to column B and the numbers to column C.

Here’s a screencast by Michael Herman that does a nice job showing what Flash Fill can do. It also illustrates a fascinating thing about programming by example. At about 1:25 in the video you’ll see this:

Michael’s example in C1 was meant to tell Flash Fill to transform strings of 9 digits into the familiar nnn-nn-nnnn pattern. Here we see its first try at inferring that pattern. What should have been 306-60-4581 showed up as 306-215-4581. That’s wrong for two reasons. The middle group has three digits instead of two, and they’re the wrong digits. So Michael corrects it and tries again. At 1:55 we see Flash Fill’s next try. Here, given 375459809, it produces 375-65-9809. That’s closer, the grouping pattern looks good, but the middle digits aren’t 45 as we’d expect. He fixes that example and tries again. Now Flash Fill is clear about what’s wanted, and the rest of the column fills automatically and correctly.
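The rule Michael intended is trivial to state explicitly once you can write code, which is exactly the step Flash Fill spares its users. A sketch (format_ssn is a hypothetical helper, not anything Flash Fill exposes):

```python
def format_ssn(digits: str) -> str:
    # Group a 9-digit string into the familiar nnn-nn-nnnn pattern.
    assert len(digits) == 9 and digits.isdigit()
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"

print(format_ssn("306604581"))  # → 306-60-4581
print(format_ssn("375459809"))  # → 375-45-9809
```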

But what was Flash Fill thinking when it produced those unintended transformations? And could it tell us what it was thinking?

From a Microsoft Research article about the new feature:

Gulwani and his team developed Flash Fill to learn by example, not demonstration. A user simply shows Flash Fill what he or she wants to do by filling in an Excel cell with the desired result, and Flash Fill quickly invokes an underlying program that can perform the task.

It’s the difference between teaching someone how to make a pizza step by step and simply showing them a picture of a pizza and minutes later eating a hot pie.

But that simplicity comes with a price.

“The biggest challenge,” Gulwani says, “is that learning by example is not always a precise description of the user’s intent — there is a lot of ambiguity involved.

“Take the example of Rick Rashid [Microsoft Research’s chief research officer]. Let’s say you want to convert Rick Rashid to Rashid, R. Where does that ‘R’ come from? Is it the ‘R’ of Rick or the ‘R’ of Rashid? It’s very hard for a program to understand.”

For each situation, Flash Fill synthesizes millions of small programs — 10-20 lines of code — that might accomplish the task. It sounds implausible, but Gulwani’s deep research background in synthesizing code makes it possible. Then, using machine-learning techniques, Flash Fill sorts through these programs to find the one best-suited for the job.

I suspect that while Flash Fill could tell you what it was thinking, you’d have a hard time understanding how it thinks. And for that reason I suspect that hard-core quants won’t rush to embrace it. But that’s OK. Hard-core quants can write code. Flash Fill is for everybody else. It will empower regular folks to do all sorts of useful transformations that otherwise entail ridiculous manual interventions that people shouldn’t waste time on. Be aware that you need to check results to ensure they’re what you expect. But if you find yourself hand-editing text in repetitive ways, get the Excel 2013 preview and give Flash Fill a try. It’s insanely useful.

Homicide rates in context

In U.N. Maps Show U.S. High in Gun Ownership, Low in Homicides, A.W.R. Hawkins presents the following two maps:

From these he concludes:

Notice the correlation between high gun ownership and lower homicide rates.

As these maps show, “more guns, less crime” is true internationally as well as domestically.

The second map depicts homicides per 100,000 people. That’s the same yardstick used in Steven Pinker’s monumental new book The Better Angels of Our Nature: Why Violence Has Declined. Pinker marshals massive amounts of data to show that over the long run, and at an accelerating pace, we are less inclined to harm one another. When you look at the data on a per capita basis, even the mass atrocities of the 20th century are local peaks along a steadily declining sawtooth trendline.

One of the most remarkable charts in the book ranks the 20 deadliest episodes in history. It’s adapted from Matthew White’s The Great Big Book of Horrible Things, and appears in a slightly different form in The New Scientist:

Ever heard of the An Lushan Revolt? Well, I hadn’t, but on a per capita basis it dwarfs the First World War.

Pinker says, in a nutshell, that we’re steadily becoming more civilized, and that data about our growing reluctance to kill or harm one another show that. The trend marches through history and spans the globe. There’s regional variation, of course. A couple of charts show the U.S. to be about 5x more violent than Canada and the U.K. But there isn’t one that ranks the U.S. in a world context. So A.W.R. Hawkins’ map of homicide rates got my attention.

The U.S. has the most guns, the first chart says. And it’s one of the safest countries, the second chart says. But that second map doesn’t tell us:

    Where does the U.S. rank?

    How many countries are in the red, pink, yellow, and green categories?

    Which countries are in those categories?

    How do countries rank within those categories?

Here’s another way to visualize the data:

There are a lot of countries mashed together in that green zone. And after Cuba we’re the most violent of them. Five homicides per 100,000 isn’t a number to boast about.

Scientific storytelling

It’s said that every social scientist must, at some point, write a sentence that begins: “Man is the only animal that _____.” Some popular completions of the sentence have been: uses tools, uses language, laughs, contemplates death, commits atrocities. In his new book Jonathan Gottschall offers another variation on the theme: storytelling is the defining human trait. For better and worse we are wired for narrative. A powerful story that captures our attention can help us make sense of the world. Or it can lead us astray.

A story we’ve been told about Easter Island goes like this. The inhabitants cut down all the trees in order to roll the island’s iconic 70-ton statues to their resting places. The ecosystem crashed, and they died off. This story is told most notably by Jared Diamond in Collapse and (earlier) in this 1995 Discover Magazine article:

In just a few centuries, the people of Easter Island wiped out their forest, drove their plants and animals to extinction, and saw their complex society spiral into chaos and cannibalism.

As we try to imagine the decline of Easter’s civilization, we ask ourselves, “Why didn’t they look around, realize what they were doing, and stop before it was too late? What were they thinking when they cut down the last palm tree?”

This is a cautionary tale of reckless ecocide. But according to recent work by Terry Hunt and Carl Lipo, Jared Diamond got the story completely wrong. A new and very different story emerged from their study of the archeological record. Here are some of the points of contrast:

Old story: Collapse resulted from the islanders’ reckless destruction of their environment (ecocide).
New story: Collapse resulted from European-borne diseases and European-inflicted slave trading (genocide).

Old story: The trees were cut down to move the statues.
New story: Trees weren’t used to move the statues. They were ingeniously designed to be walked along in a rocking motion using only ropes. The trees were destroyed mostly by rats. Which wasn’t a problem anyway because the islanders used the cleared land for agriculture.

Old story: Fallen and broken statues resulted from intertribal warfare.
New story: Fallen and broken statues resulted from earthquakes.

Old story: It must have taken a population of 25,000 or more to make and move all those statues. A population decline to around 4000 at the moment of European contact was evidence of massive collapse.
New story: The mode of locomotion for which the statues were designed is highly efficient. There’s no need to suppose a much larger work force than was known to exist.

Old story: The people of Easter Island were warlike.
New story: The people of Easter Island were peaceful. Because they had to be. Lacking hardwood trees for making new canoes, they were committed once the canoes that brought them were gone. There was no escape. And it’s a hard place to make a living. No fresh water, poor soil, meager fishing. To survive for the hundreds of years that they did, the society had to be “optimized for stability.”

Hunt and Lipo tell this new story in a compelling Long Now talk. After the talk Stewart Brand asks how Jared Diamond has responded to their interpretation. Not well, apparently. Once we’re in the grip of a powerful narrative we don’t want to be released from it.

Hunt and Lipo didn’t go to Easter Island with a plan to overturn the old story. They went as scientists with open eyes and open minds, looked at all the evidence, realized it didn’t support the old story, and came up with a new one that better fits the facts. And it happens to be an uplifting one. These weren’t reckless destroyers of an ecosystem. They were careful stewards of limited resources whose artistic output reflects the ingenuity and collaboration that enabled them to survive as long as they did in that hard place.

We’re all invested in stories, and in the assumptions that flow from them. Check your assumptions. It’s a hard thing to do. But it can lead you to better stories.

Check your assumptions

In Computational thinking and life skills I asked myself how to generalize this touchstone principle from computer science:

Focus on understanding why the program is doing what it’s doing, rather than why it’s not doing what you wanted it to.

And here’s what I came up with:

Focus on understanding why your spouse or child or friend or political adversary is doing what he or she is doing, rather than why he or she is not doing what you wanted him or her to.

I’ve been working on that. It’s been a pleasant surprise to find that Facebook can be a useful sandbox in which to practice the technique. I keep channels of communication open to people who hold wildly different political views. It’s tempting to argue with, or suppress, some of them. Instead I listen and observe and try to understand the needs and desires that motivate utterances I find abhorrent.

My daughter, a newly-minted Master of Social Work, will soon be doing that for a living. She’s starting a new job as a dialogue facilitator. How do you nurture conversations that bridge cultures and ideologies? It’s important and fascinating work. And I suspect there are some other computational principles that can helpfully generalize to support it.

Here’s one: Encourage people to articulate and test their assumptions. In the software world, this technique was a revelation that’s led to a revolution in how we create and manage complex evolving systems. The tagline is test-driven development (TDD), and it works like this. You don’t just assume that a piece of code you wrote will do what you expect. You write corresponding tests that prove, for a range of conditions, that it does what you expect.

The technique is simple but profound. One of its early proponents, Kent Beck, has said of its genesis (I’m paraphrasing from a talk I heard but can’t find):

I was stumped, the system wasn’t working, I didn’t know what else to do, so I began writing tests for some of the most primitive methods in the system, things that were so simple and obvious that they couldn’t possibly be wrong, and there couldn’t possibly be any reason to verify them with tests. But some of them were wrong, and those tests helped me get the system working again.

Another early proponent of TDD, Ward Cunningham, stresses the resilience of a system that’s well-supported by a suite of tests. In the era of cloud-based software services we don’t ship code on plastic discs once in a while, we continuously evolve the systems we’re building while they’re in use. That wouldn’t be safe or sane if we weren’t continuously testing the software to make sure it keeps doing what we expect even as we change and improve it.

Before you can test anything, though, you need to articulate the assumption that you’re testing. And that’s a valuable skill you can apply in many domains.

Code

Assumption: The URL points to a calendar.

Tests: Does the URL even work? If so, does it point to a valid calendar?
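Articulated as code, that assumption might look like this minimal Python sketch. The helper name is hypothetical, and a real check would also fetch the URL and inspect the HTTP status before looking at the body:

```python
def looks_like_icalendar(text: str) -> bool:
    # A minimal validity check: an iCalendar file begins with BEGIN:VCALENDAR.
    return text.lstrip().startswith("BEGIN:VCALENDAR")

# The tests make the assumption explicit instead of taking it on faith.
assert looks_like_icalendar("BEGIN:VCALENDAR\nVERSION:2.0\nEND:VCALENDAR")
assert not looks_like_icalendar("<html><body>404 Not Found</body></html>")
```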

Interpersonal relationships

Assumption: You wouldn’t want to [watch that movie, go to that restaurant, take a walk].

Tests: I thought you wouldn’t want to [watch that movie, go to that restaurant, take a walk] but I shouldn’t assume, I should ask: Would you?

Tribal discourse

Assumption: They want to [take away our guns, proliferate guns].

Tests: ?

I’ll leave the last one as an exercise for the reader. If you feel strongly about that debate (or another) try asking yourself two questions. What do I assume about the opposing viewpoint? How might I test that assumption?

How John McPhee structures stories from his notes

John McPhee has lately been reflecting, in a series of New Yorker articles, on his long career as one of the world’s leading writers of nonfiction. In this week’s issue we learn that one of my favorite of his books, The Pine Barrens, was born on a picnic table. It was there that he lay prone for two weeks, in a panic, searching for a way to structure the vast quantity of material he’d gathered in a year of research. The solution, in this case, was Fred Brown, an elderly Pine Barrens dweller who “had some connection or other to at least three quarters of those Pine Barrens topics whose miscellaneity was giving me writer’s block.” Fred was the key to unlocking that book’s structure. But each book needed a different key.

The approach to structure in factual writing is like returning from a grocery store with materials you intend to cook for dinner. You set them out on the kitchen counter, and what’s there is what you deal with, and all you deal with.

For many years, that meant writing notes on pieces of paper, coding the notes, organizing the notes into folders, retyping notes, cutting and rearranging with scissors and tape. Then came computers, a text editor called KEDIT, and a Princeton colleague named Howard Strauss who augmented KEDIT with a set of macros that supported the methods McPhee had been evolving for 25 years. In the article, McPhee describes two KEDIT extensions: Structur and Alpha.

Structur exploded my notes. It read the codes by which each note was given a destination or destinations (including the dustbin). It created and named as many new KEDIT files as there were codes, and, of course, it preserved the original set.

Alpha implodes the notes it works on. It doesn’t create anything new. It reads codes and then churns a file internally, organizing it in segments in the order in which they are meant to contribute to the writing.

Alpha is the principal, workhorse program I run with KEDIT. Used again and again on an ever-concentrating quantity of notes, it works like nesting utensils. It sorts the whole business at the outset, and then, as I go along, it sorts chapter material and subchapter material, and it not infrequently rearranges the components of a single paragraph.

KEDIT is the only writing tool John McPhee has ever used. And as he is careful to point out, it’s a text editor, not a word processor. No pagination, headers, fonts, WYSIWYG, none of that. Just words and sentences. I can relate to that. My own writing tool of choice is an EMACS clone called Epsilon. I first used it on DOS around 1986 and I’m using it in Windows today to write these words. If I were a writer of long works I might have evolved my use of Epsilon in ways similar to what John McPhee describes. But I’ve only written one book, that was a long time ago, and since then I’ve written at lengths that don’t require that kind of tool support.

Still, I would love to find out more about John McPhee’s toolchain. My interest is partly historical. Howard Strauss died in 2005, and KEDIT is nearing the end of its life. (From kedit.com: “…we are in the process of gradually winding down Mansfield Software Group.”) But I’m also looking forward. Not everyone needs to organize massive quantities of unstructured information. But those who do require excellent tool support, and there’s room for innovation on that front. Anyone who’d like to tackle that challenge would benefit from understanding what John McPhee’s methods are, and how his toolchain supports them.

I’m going to write to John McPhee to ask him if he’d be willing to work with me on a screencast to document his methods. (And also to thank him for countless hours of reading enjoyment.) It’ll be a cold call, because we’ve never communicated, so if any reader of this post happens to have a personal connection, I would greatly appreciate an introduction.

Heating as a service: Xylogen points the way

I love a good story about a product becoming a service. Ray Anderson did it with floor covering, ZipCar does it with cars, Amazon and Microsoft are doing it with IT infrastructure. It’s a sweet model. Service providers own equipment and operations, earn recurring revenue, and are motivated to continuously improve efficiency and customer satisfaction.

There’s even been speculation about turning home heating into a service. Here in New England, where the dominant product is heating oil and oil-burning equipment, that would be a wonderful thing. Because now, for the millions of homeowners who burn oil — and for the businesses that support that system — the incentives are all wrong. We’re collectively abetting the nation’s addiction to oil, and customers’ need to use less oil conflicts with suppliers’ need to sell more.

In From oil to wood pellets: New England’s home heating future I documented my first foray into heating with biomass. In Central heating with a wood gasification boiler I presented the solution that’s actually working for us. Biomass is a viable alternative. But I’m still the owner, operator, and maintainer of the equipment, and the manager of the fuel supply (i.e. buying, stacking, loading). What would it be like to outsource those functions?

For single-family homes, biomass heating as a service is still just a dream. But for commercial buildings it’s a reality, and there’s a great example right in my own backyard. Well, almost. The Monadnock Waldorf School, right around the corner from my house, recently converted to a wood pellet boiler installed by Xylogen, a new company whose tagline is:

We do not sell heating systems. We do not sell fuel. We sell secure, local, renewable heat.

Xylogen’s blog tells the story of the project. Here are some of my favorite excerpts.

From What’s happening at MWS?:

We’re pleased to report that the oil boilers have used a total of 7 gallons of oil from day 1, the bulk consumed during initial tune-up and system testing. The remainder of the usage actually occurred during times when the pellet boiler could have kept up with the building’s requirement for heat. In other words, this operation was a mistake that has now been corrected in the control algorithms.

From We see the big picture too:

Today, an opening to an old ventilation shaft was discovered and promptly covered over. Heated air was escaping the building through the grating at such a clip that a small student might have gotten sucked in and trapped on it!

Also, there was an assembly today in the assembly room (makes sense!), so we decided to turn down the heat in advance to try to avoid overheating and waste. It turns out the audience itself raised the temperature at least 6F. Good thing we didn’t start out toasty.

Small, very simple steps can have a big impact. We’re looking at the high tech, the low tech, and everything in between to make a difference.

From True service:

The beauty of automatic real-time monitoring is that it’s possible to identify a problem with the equipment and rectify it before the customer even notices. That is service.

Xylogen is a collaboration between Mark Froling and Henry Spindler. I wish them well and look forward to reading more about their work.


PS: Thanks to Andrew Dey (whom I met last night at a talk by Sustainserv’s Matthew Gardner) for pointing out that Xylogen isn’t just about alternative fuel, but more importantly about an alternative business model.

Calendar feeds are a best practice for bookstores

Bookstores, for all the obvious reasons, are hanging on by their fingernails. What brings people into bookstores nowadays? Some of us still buy and read actual printed books. Some of us enjoy browsing the shelves and tables. Some of us value interaction with friendly and knowledgeable booksellers. And some of us like to see and hear authors when they come to speak and sign books.

There are lots of author events at bookstores. Recently LibraryThing’s Tim Spalding tweeted:

Upcoming bookish events on @LibraryThing Local now over 10,000! http://www.librarything.com/local/helpers

It’s great that LibraryThing “helpers” (individuals, libraries, bookstores) are adding all those events to LibraryThing’s database. But I’d really like to see bookstores help themselves by publishing standard calendar feeds. That way, LibraryThing could ingest those calendars automatically, instead of relying on dedicated helpers to input events one at a time. And the feeds would be available in other contexts as well, syndicating both to our personal calendars (desktop-, phone-, and cloud-based) and to community calendars.

When I saw Tim’s tweet I took a look at how bookstore events are feeding into various elmcity hubs. Here’s a snapshot of what I found:


location | store | iCal feed?

Bright Lights
Monadnock Region of NH | Toadstool | yes
Cambridge, MA | Harvard Bookstore | yes
Brookline, MA | Brookline Booksmith | yes
Boston, MA | Trident Booksellers | yes
Ann Arbor, MI | Crazy Wisdom | yes
Portland, OR | Powell’s | yes

Dim Lights
Berkeley | East Wind Books | indirect
Canada | Chapters Indigo | indirect
Seattle | Third Place Books | indirect
… and some others …

Dark Matter
Berkeley | City Lights | no
Various | Barnes and Noble | no
Seattle, WA | Elliot Bay | no
… and many others …

There are three buckets:

Bright Lights: These are stores whose web calendars are accompanied by standard iCalendar feeds. Events from these stores appear automatically in the Monadnock, Boston, Ann Arbor, and Portland hubs. These stores’ calendars could also be ingested automatically into LibraryThing, and you could subscribe to them directly.

Dim Lights: These are stores whose web calendars are hosted on Facebook. There isn’t a standard iCalendar feed for Facebook calendars, but the elmcity service can synthesize one using the Facebook API. So I say that these stores have “indirect” iCalendar feeds.

Dark Matter: These are stores whose web calendars are available only in HTML format. Some of these calendars are handcrafted web pages, others are served up by content management systems that produce calendar widgets for display but fail to provide corresponding feeds.

There are a few Bright Lights and some Dim Lights, but most bookstore calendars, like most web calendars of all kinds, are Dark Matter. If you’re a bookstore I urge you to become a Bright Light. Making your calendar available to the web of data is as easy as using Google Calendar or Hotmail Calendar. It’s a best practice that bookstores disregard at their peril.

Harvard vs MIT

As I build out calendar hubs in various cities I’ve been keeping track of major institutions that do, or don’t, provide iCalendar feeds along with their web calendars. At one point I made a scorecard which shows that iCalendar support is unpredictably spotty across a range of cities and institutions. One of the surprises was Boston, where I found iCalendar feeds for neither Harvard nor MIT.

I’ve recently improved the Boston calendar hub and, as part of that exercise, I took another look at the public calendars for both universities. It turns out that Harvard does offer a variety of calendar feeds. I just hadn’t looked hard enough. There’s even an API:

The HarvardEvents API allows you to request HarvardEvents data programmatically in CSV, iCalendar, JSON, JSONP, serialized PHP, RSS, or XML format. The API provides a RESTful interface, which means that you can query it using simple HTTP GET requests.

Nicely done! You’d think that, just down the road, MIT would be doing something similar. But if so I haven’t found it. So for now, the Boston hub includes way more Harvard events than MIT events.

Here’s hoping MIT will follow Harvard’s lead and equip its public calendars with standard data feeds.

Computational thinking and life skills

Surfing the Roku box last night I landed on the MIT Open CourseWare channel and sampled Introduction to Computer Science and Programming. In one lecture Prof. John Guttag offers this timely reminder:

Focus on understanding why the program is doing what it’s doing, rather than why it’s not doing what you wanted it to.

It was timely because I was, in fact, writing a program that wasn’t doing what I expected. And I had, in fact, fallen into the psychological trap that Guttag warns about. When you’re writing software you use abstractions and also create them. What’s more, many of the abstractions you use are the very ones you created. When you live in a world of your own invention you can do amazing and wonderful things. But you can also do ridiculous and stupid things. To see the difference between them you must always be prepared to park your ego and consider the latter possibility.

Elsewhere in that lecture, Prof. Guttag talks about Jeannette Wing’s idea that computational thinking involves skills that transcend the computer and information sciences. In 2008, when that lecture was given, many of us were talking about how that might be true. We talked about computational thinking as a “Fourth R” — a cognitive tool as fundamental as Reading, Riting, and Rithmetic.

I never found an example that would resonate broadly. But maybe this will:

Focus on understanding why your spouse or child or friend or political adversary is doing what he or she is doing, rather than why he or she is not doing what you wanted him or her to.

Why I subscribe to the Ann Arbor Chronicle

The Ann Arbor city council met, most recently, on October 15. Why didn’t the Ann Arbor Chronicle’s story on the meeting land until October 24? It took a while for Dave Askins to compile his typically epic 15,000-word blog post. It’s an astonishingly detailed record of the meeting — more and better coverage, perhaps, than is available in any city.

The Chronicle describes itself thusly:

Launched in September 2008, the Ann Arbor Chronicle is an online newspaper that focuses on civic affairs and local government coverage. Although we’d likely be classified by most folks as “new media,” in many ways we embrace an ethos that runs contrary to current trends: Longer, in-depth articles; an emphasis on factual accuracy and thoroughness, not speed; and an assumption that our readers are thoughtful, intelligent and engaged in this community.

Who will read 15,000 words on a city council meeting? That depends partly on when the reading occurs. While the Chronicle is a newspaper, it is also a living history of the town’s public affairs. There’s no paywall. Every story is, and remains, fully available. That means the Chronicle isn’t just on the web, it is a web. What was said and decided about transportation in October 2012 can be reviewed in 2013 or 2014. The Chronicle is a community memory. In the short term it delivers news. Over the long run it assembles context.

Consider the list of links, below, that I extracted from the October 24 report. Of the 53 links, 23 point to prior Chronicle stories. Paywalled journalism can’t do that, and it’s a crippling limitation. If those who cannot remember the past are condemned to repeat it, mainstream journalism’s online amnesia won’t help move us forward. What happened today is only the tip of the iceberg. We need to know how we got to today. That can’t happen in print. It can only happen online. But tragically it almost never does, so context suffers.

If you scan that list of links you’ll notice something else that mainstream online journalism seldom allows: external links. The majority of the links in the Chronicle’s report point to other sources. Some are the websites of local organizations or local government. Others are documents that weren’t online but have been placed into the public record by the Chronicle. Paywalled journalism rarely does this. Once you’re in, it wants to keep you in, to rack up pageviews. This is another context killer.
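Tallies like this are easy to automate. Here is a rough sketch, using only the Python standard library, of extracting a page’s links and splitting them into internal and external (my own illustration, not necessarily how that list was produced):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify(html, home="annarborchronicle.com"):
    """Split a page's links into internal (same site) and external."""
    p = LinkCollector()
    p.feed(html)
    internal = [u for u in p.links if urlparse(u).netloc.endswith(home)]
    external = [u for u in p.links if u not in internal]
    return internal, external
```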

Who will pay for all this luxurious context? Well, there’s me. I don’t live in Ann Arbor. But I went to school there, my daughter does now, I have another connection to the town, and I’m a huge fan of Dave Askins’ and Mary Morgan’s bold venture. So I’m a voluntary subscriber. And I hope I’ll get the chance to support something like the Chronicle in the town where I do live.

As a refugee from the pageview mills I can tell you that model leads nowhere good. I’m ready, willing, and able to back alternatives that use the web as it was meant to be used.


Links extracted from the Ann Arbor Chronicle’s report on the city council meeting of October 15, 2012.

  1. http://a2dda.org/current_projects/a2p5_/
  2. http://alphahouse-ihn.org/
  3. http://annarborchronicle.com/2010/03/03/to-do-bicycle-registry-transit-station/
  4. http://annarborchronicle.com/2010/03/10/county-offers-400k-match-for-skatepark/
  5. http://annarborchronicle.com/2010/04/05/ann-arbor-planning-priorities-take-shape/
  6. http://annarborchronicle.com/2011/03/24/ann-arbor-gives-initial-ok-to-pot-licenses/
  7. http://annarborchronicle.com/2012/02/10/um-ann-arbor-halt-fuller-road-project/
  8. http://annarborchronicle.com/2012/05/13/public-art-rehashed-by-ann-arbor-council/
  9. http://annarborchronicle.com/2012/06/04/ann-arbor-rail-study-moves-ahead/
  10. http://annarborchronicle.com/2012/06/11/city-council-action-focuses-on-transit-topics/
  11. http://annarborchronicle.com/2012/07/19/um-wall-street-parking-moves-ahead/
  12. http://annarborchronicle.com/2012/08/09/city-council-votes-down-park-amendment/
  13. http://annarborchronicle.com/2012/08/14/um-ann-arbor-agree-rail-costs-not-owed/
  14. http://annarborchronicle.com/2012/08/16/council-meeting-floods-fires-demolition/
  15. http://annarborchronicle.com/2012/08/20/planning-group-briefed-on-william-st-project/
  16. http://annarborchronicle.com/2012/09/01/city-council-to-focus-on-land-sale-policy/
  17. http://annarborchronicle.com/2012/09/07/aata-5-year-program-may-2013-tax-vote/
  18. http://annarborchronicle.com/2012/09/09/ann-arbor-dda-board-addresses-housing/
  19. http://annarborchronicle.com/2012/09/10/zoning-transit-focus-of-council-meeting/
  20. http://annarborchronicle.com/2012/09/13/county-tax-hike-for-economic-development/
  21. http://annarborchronicle.com/2012/09/24/council-punts-on-several-agenda-items/
  22. http://annarborchronicle.com/2012/09/27/transit-contract-contingent-on-local-money/
  23. http://annarborchronicle.com/2012/10/11/dda-green-lights-housing-transportation/
  24. http://annarborchronicle.com/2012/10/12/council-may-seek-voter-ok-on-rail-station/
  25. http://annarborchronicle.com/2012/10/12/positions-open-new-transit-authority-board/
  26. http://annarborchronicle.com/events-listing/
  27. http://annarborchronicle.com/wp-content/uploads/2012/08/MalletsDrainFloodingResolution.jpg
  28. http://annarborchronicle.com/wp-content/uploads/2012/10/600-Oct-15.jpg
  29. http://annarborchronicle.com/wp-content/uploads/2012/10/AAS-Conceptual-Construction-Costs-1.pdf
  30. http://annarborchronicle.com/wp-content/uploads/2012/10/AnnArbor-Congestion-now-future.jpg
  31. http://annarborchronicle.com/wp-content/uploads/2012/10/Appeal-12-263-Askins-map-historical-flooding1.pdf
  32. http://annarborchronicle.com/wp-content/uploads/2012/10/briere-derezinski-600.jpg
  33. http://annarborchronicle.com/wp-content/uploads/2012/10/cooper-deck-600.jpg
  34. http://annarborchronicle.com/wp-content/uploads/2012/10/EcologyCenter-Support-Resolution-Oct-2012.pdf
  35. http://annarborchronicle.com/wp-content/uploads/2012/10/greenbelt-hamstead-lane.jpg
  36. http://annarborchronicle.com/wp-content/uploads/2012/10/PA2PPositionPaper.pdf
  37. http://annarborchronicle.com/wp-content/uploads/2012/10/powers-ezekiel-600.jpg
  38. http://annarborchronicle.com/wp-content/uploads/2012/10/shiffler-mitchell-anglin-600.jpg
  39. http://annarborchronicle.com/wp-content/uploads/2012/10/smith-lax-600.jpg
  40. http://annarborchronicle.com/wp-content/uploads/2012/10/smith-listening-600.jpg
  41. http://annarborchronicle.com/wp-content/uploads/2012/10/teall-transit-map-600.jpg
  42. http://michigan.sierraclub.org/huron/
  43. http://protectourlibraries.org/
  44. http://wbwc.org/
  45. http://www.a2gov.org/government/city_administration/city_clerk/pages/default.aspx
  46. http://www.a2gov.org/government/communityservices/ParksandRecreation/parks/Features/Pages/KueblerLangford.aspx
  47. http://www.a2gov.org/government/communityservices/planninganddevelopment/planning/Pages/ZoningOrdinanceReorganizationProject.aspx
  48. http://www.a2gov.org/government/publicservices/fleetandfacility/airport/Pages/default.aspx
  49. http://www.ci.ann-arbor.mi.us/government/communityservices/planninganddevelopment/planning/Pages/NorthMainHuronRiverCorridorProject.aspx
  50. http://www.ecocenter.org/
  51. http://www.environmentalcouncil.org/
  52. http://www.michiganlcv.org/
  53. http://www.soloaviation.aero/

The personal cloud series

In today’s column on wired.com I discuss ways to manage overlapping personal, team, and public calendars.

It’s the 21st in the series; here’s the whole list:

The future’s here, but unevenly distributed 2012-03-02
Kynetx pioneers the Live Web 2012-03-09
What’s in a name? In the cloud, a data service! 2012-03-16
The translucent cloud: balancing privacy and convenience 2012-03-23
The not-so-hidden risk of a cloud meltdown, and why I’m not so worried 2012-03-30
Picture this: hosted lifebits in the personal cloud 2012-04-06
The intentional cloud: say what you mean, become what you say 2012-04-20
Owning your words: personal clouds build professional reputations 2012-04-27
Your smart meter’s data belongs in your personal cloud 2012-05-04
The web is the cloud’s API 2012-05-18
Calendars in the cloud: no more copy and paste 2012-06-01
Publishing has perished: long live the personal cloud 2012-06-08
The personal cloud’s third dimension: webmakers 2012-06-22
Personal cloud as platform: mix and match wisely 2012-06-29
Cooperating services in the cloud 2012-07-13
A domain of one’s own 2012-07-27
I say movie, you say film, our personal clouds can still work together 2012-08-12
Hello personal cloud, goodbye fax 2012-08-31
From personal clouds to community clouds 2012-09-14
Why Johnny can’t syndicate 2012-10-09
Scoping the visibility of your personal cloud 2012-10-19

A great disturbance in the force

If you’re a coach, parent, or student involved with high school sports, you may know of a site called HighSchoolSports.net. It’s a service used by many schools, including the ones in my town, to manage information about teams and schedules. For the elmcity project it’s been a stalwart provider of iCalendar feeds, enabling me to show high school and middle school contests — soccer, football, lacrosse, swimming, etc. — on community-wide calendars in many cities.

Until recently.

One day I noticed a flood of errors from the HighSchoolSports.net feeds. USA Today, it turns out, had acquired HighSchoolSports.net. At first I hoped the errors were just a redirection snag. But when I visited the new site, at usatodayhss.com, I was shocked to see that the iCalendar feature had evidently been removed.

Could that really be true? I wrote to ask; here is the reply.

Thank you for contacting us regarding the iCal feature that was once located on HighSchoolSports.net

The iCal feature, for syncing to personal calendars, is no longer an available feature on USATodayhss.com

We will be launching a mobile version of USATodayhss.com in the very near future. With this mobile feature, you will be able to check schedules on the go with your smart phones or any available internet connective devices.

It was as if a million voices cried out in terror and were suddenly silenced.

USA Today, please reconsider. You are now the steward of data flows that matter to thousands of communities. The data is of a specific type. There is a longstanding standard Internet way to enable that specific type of data to syndicate not only to personal calendars but also to community calendars. A mobile app will be a nice addition. But it is not a replacement for standard data feeds that can syndicate into a variety of contexts.
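Feeds like the ones HighSchoolSports.net used to provide are plain iCalendar text. To show the shape of the data a consumer such as the elmcity project depends on, here is a toy reader for it (a deliberately minimal sketch; a real parser must also handle folded lines, character escaping, and time zones):

```python
def parse_ics(text):
    """Very minimal iCalendar (RFC 5545) reader: collect the
    properties of each VEVENT, keyed by property name with any
    parameters (e.g. TZID) stripped."""
    events, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            events.append(current)
            current = None
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            current[key.split(";")[0]] = value
    return events
```

Because the format is this simple and this standard, any calendar, aggregator, or community hub can syndicate the same feed, which is exactly what a mobile app alone cannot offer.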

Thought leadership at the Ann Arbor District Library

In Book: A Futurist’s Manifesto, subtitled Essays from the bleeding edge of publishing and co-edited by Brian O’Leary and Hugh McGuire, there’s a refreshingly forward-thinking chapter on public libraries by the Ann Arbor District Library’s Eli Neiburger. In The End of the Public Library (As We Knew It) Eli describes an intriguing model for libraries as purveyors of digital stuff. If you’re a creator of such stuff, and if you’re willing to sign the AADL Digital Content Agreement, you can license your stuff directly to the library:

The Agreement establishes that the library will pay an agreed-upon sum for a license to distribute to authenticated AADL cardholders, from our servers, an agreed-upon set of files, for an agreed-upon period of time. At the end of the term, we can either negotiate a renewal or remove the content from our servers.

The license specifies that no DRM, use controls, or encryption will be used, and no use conditions are presented to the AADL customer. In fact, our stock license also allows AADL users to download the files, use them locally and even create derivative works for personal use.

Pretty radical! Why would you, the creator of said stuff, want to take this crazy leap of faith? Eli explains:

Instead of looking at the license fee as compensation for something like a one-time sale, the pricing works when the rightsholder considers how much revenue they would like to expect during the license term from our 54,000-odd cardholders. For niche creators, it’s not hard for the library to beat that number, and all they have to do to get it is agree to the license and deliver the files to our server.

They’re not releasing their content to the world (especially because it’s already out there). They’re just granting a year or so of downloads to these 54,000 people. They get more revenue than they would likely get from those people up front, and the library gets sustainable, usable digital content for its users.

Eli thinks this won’t work for in-demand mass-market stuff anytime soon, if ever. But as he points out:

When everything is everywhere, libraries need to focus on providing — or producing — things that aren’t available anywhere else, not things that are available everywhere you look.

Of course public libraries have always been producers as well as providers. Things libraries produce include local collections, like the Keene Public Library’s exquisitely curated historical photos and postcards and the Ann Arbor District Library’s The Making of Ann Arbor.

Libraries are also producers of community events, and here’s one I’m delighted to announce will happen at the Ann Arbor District Library on September 26:


A Seminar on Community Information Management

Everybody lives online now. Knowing how to collect and exchange information is as important a skill as knowing how to drive, but it’s not enough: in order to make the web really work for you, you have to know how to project yourself online, and how to manage the boundary between what’s private and what’s public.

Cities and towns need to know this too. From the mayor’s office and local schools to the slow-pitch league and the local music scene, communities need to have these same skills if they are to survive and thrive in the 21st Century.

This seminar will explore what those skills are and how we can use them to make our communities stronger. We will use one particular case — sharing and synchronizing event calendars in Ann Arbor — to illustrate ideas, but the basic principles we will discuss can be applied to almost every aspect of community life.

While we’ll be talking about the web, this seminar is not for IT specialists, any more than knowing how to drive is something that only auto mechanics need to know.

A one-hour presentation by Jon Udell of Microsoft will be followed by another hour of Q&A and discussion.

This event is free of charge, and particularly of interest to those working for educational, civic and other not-for-profit organizations. It will be helpful to those who want better ways to get the word out about their own organization’s events and news, as well as those who are searching for such information and not always finding it easy to locate.

We’ll address these questions, among others:

– How can we, as a community, most effectively inform one another about goings-on in the region?

– How can our collective information management skills improve quality of life in the region?

– How can they also help us attract tourism and talent from outside the region?

– How do these same skills apply in other domains of public life such as political discourse and education?

We’re inviting organizations that — like public libraries — are significant producers of community events: public schools, colleges and universities, city governments, hospitals, cultural and environmental nonprofits, sports leagues, providers of social services, and more. We’re specifically looking for the people in such organizations who produce and promote events. If you’re one of those folks in or near Ann Arbor and would like to be invited, please let me know.

“Carol, meet Mrs. D; Mrs. D, meet Carol” (An ode to 3-way calling)

My latest wired.com column begins:

Back in February my son lost control of his car and landed in the hospital. Fortunately he has recovered from his injuries. And fortunately we have health insurance. So everything’s OK. However, I’m still — six months later — trying to untangle the bureaucratic mess that ensued.

Multiple health-care providers and multiple insurers are involved. They don’t talk to each other directly. It’s up to me to decode their communications meant for one another and route them appropriately. In the column I imagine a cloud-based tool that helps me do that, and I may live long enough to use it someday. But for now there’s a simple tool, often overlooked, that can help you hack through these bureaucratic thickets. It’s 3-way calling.

Once in an airport I saw a guy with a payphone in one hand and a cellphone in the other hand. He grew more and more agitated. Finally he removed his head from between the phones, rotated one of them, slapped them together microphone-to-speaker, and yelled “Talk to each other!”

I do the same thing all the time, a bit less dramatically. Just this morning I learned from Mrs. D at the hospital that the health insurer still isn’t processing a pile of bills. She was asking me questions that only the insurer could answer. So I called up the health insurer and got Carol. She was asking me questions that only the hospital could answer. I did a hook flash, called Mrs. D, and flashed again.

“Carol, meet Mrs. D; Mrs. D, meet Carol.”

And then I just listened to them work it out. When you do this kind of thing, it’s always fascinating to observe the degree to which organizations are hamstrung by their own org charts, acronyms, and methods. As detailed in the column, the initial sticking point was a magic token called the Exhaustion of Benefits letter, which was supposed to trigger the cutover from auto to health insurance. It took me a long time to figure out what that was and where to route it, but even after I did things remained stuck.

On the provider side, there were org-chart assumptions (e.g., that the hospital and the clinic are separate entities) not evident to the insurer. On the insurer side, there were acronyms (PIP) and terms (“ledger”) not understood by the provider. And on each side there were procedures unfamiliar to the other.

You would think that people who deal with these issues every day would know the drill. But there are many different ways to play the same game. It’s up to the customer to be the referee. If you haven’t tried it, use 3-way calling the next time you find yourself in the middle of one of these messes. For now it’s the best tool for the job.

Food safety and information safety revisited

In Food safety, information safety I reacted to Tim Eberly’s newspaper article on food safety, complaining that it didn’t acknowledge an important primary source: Elisabeth Hagen’s blog post, which states the rationale for the poultry inspection policy changes that were the subject of the article.

I’ve since spoken to Tim Eberly about how he researched and wrote that story. He had, in fact, seen Elisabeth Hagen’s blog post, along with many other documents, some of which I’ll cite and link to here. And the rationale she presented, which I complained was missing from the article, is in fact there, albeit in a more diluted form.

In theory the article could have used quotes from Hagen’s blog, or from a HuffPo blog by her boss Alfred Almanza. But in practice, as Tim Eberly points out, these are blogs in name only; effectively they are press releases, and a reporter wants to advance the story beyond that. Unfortunately FSIS declined to be interviewed for the story.

All that said, I was still left wondering about the crux of the story. A decade ago the FSIS began a pilot program that would shift responsibility for direct inspection of slaughtered chickens from FSIS workers to poultry-plant workers. On the face of it, that sounds like a terrible idea, and there’s been lots of criticism ever since. But the FSIS rationale — that its resources are better spent assuring all-up compliance with safety standards, to prevent upstream contamination so less needs to be found downstream — sounds credible too.

Now the pilot program is slated to expand. What should we think about that? You’ll have to decide for yourself. Here are some sources I’ve rounded up that may help.

  • The proposed rule: Modernization of Poultry Slaughter Inspection

    http://www.fsis.usda.gov/OPPDE/rdad/FRPubs/2011-0012E.htm, 4/26/2012

  • Comments on the rule

    http://www.regulations.gov/#!searchResults;rpp=25;po=2250;s=FSIS-2011-0012, April-May 2012, total of 2260 comments

  • Alfred Almanza, FSIS administrator: blog post / news release

    http://www.fsis.usda.gov/News_&_Events/NR_041312_01/index.asp, 4/26/2012 (also http://www.huffingtonpost.com/alfred-v-almanza/chicken-inspection-new-policy_b_1424136.html, 4/13/2012)

“We have more than a decade of experience with slaughter lines running at 175 bpm, the proposed maximum line speed in the rule. And the data is clear that in these plants, the poultry produced has lower rates of Salmonella, a pathogen that sickens more than 1 million people in the U.S. every year. These plants also maintain superior performance on removing the visual and quality defects that don’t make people sick. Those are the facts, based on the data.”

  • FSIS Self-evaluation of HACCP Inspection Models Project (HIMP)

    http://www.fsis.usda.gov/PDF/Evaluation_HACCP_HIMP.pdf, Aug 2011

    “Because fewer inspectors are required to conduct online carcass inspection in HIMP establishments, FSIS is able to conduct more offline food safety related inspection activities. HIMP establishments have higher compliance with SSOP and HACCP prevention practice regulations and lower levels of non-food safety defects, fecal defect rates, and Salmonella verification testing positive rates than non-HIMP establishments. These data indicate that HIMP inspection provides improvements in food safety and other consumer protections.”

  • Independent review of the HACCP-Based Inspection Models Project by the National Alliance for Food Safety Technical Team

    http://www.fsis.usda.gov/OPPDE/nacmpi/Nov2002/Papers/NAFS97.pdf, 2002

    Conclusions

    1. The authors urge continued FSIS oversight and continuous re-evaluation as HIMP is more broadly implemented.

    2. At this time, no convincing arguments were identified which indicate that adoption of the modified system, under regulatory supervision, would increase risk.

    3. More importantly, the authors find that there are several lines of evidence that strongly argue process improvements from the consumer perspective as related to adoption of the HIMP system

  • GAO report: Weaknesses in Meat and Poultry Inspection Pilot Should Be Addressed Before Implementation, 2001

    http://www.gao.gov/assets/240/233016.pdf

    “It is questionable whether the data generated by the project are indicative of how all of the chicken plants’ inspection systems would perform if modified inspections were adopted nationwide. First, the chicken pilot that USDA designed lacks a control group — a critical design flaw that precludes a comparison between the performance of the inspection systems at those plants that volunteered to participate in the pilot and that of plants that did not participate. Without a control group, USDA cannot determine whether changes in inspections systems are due to personnel changes or other possible explanations, such as the addition of chlorine rinses.”

That last item helps me contextualize the story, which ends like this:

At least one elected official wants FSIS to put the brakes on its proposal.

U.S. Sen. Kirsten Gillibrand, D-New York, asked GAO to conduct another audit of FSIS’s pilot program. The agency said it will do so soon. She then sent a letter to Vilsack, asking him to delay the changes.

“I do not believe USDA should yield inspection responsibilities to plant personnel that have an inherent conflict of interest unless [the pilot program] can be independently verified to be safe and effective,” Gillibrand wrote.

Vilsack wrote back to Gillibrand in a letter filled with FSIS’s talking points on the issue. But more notable is what’s missing: Vilsack doesn’t address the senator’s request.

Reading between the lines, advocates say, that doesn’t spell good news.

I’m now inclined to agree. It sounds like there should be an independent audit of the pilot, and a better analysis of the tradeoffs between using FSIS inspectors to monitor plants for all-up compliance with safety standards and using them side-by-side with plant workers looking at birds.

To get to this point, though, I had to work pretty hard to find and evaluate the reporter’s sources, and understand his process. I’m grateful to Tim Eberly for taking the time to help me. I sure wish, though, that journalistic convention permitted him to cite and link to the sources he used.

Food safety, information safety

Yesterday my family and I read this article on food safety which was syndicated to our local paper from the Atlanta Journal-Constitution. It begins provocatively:

One-third of a second.

That’s how long a federal inspector will have to examine slaughtered chickens for contaminants and disease under new rules proposed by the federal government.

In the ensuing 1,300 words of the main story that was syndicated to our paper, plus 1,100 words of sidebars not included, the reporter — Tim Eberly — explores how the proposal will shift responsibility for hands-on inspection from federal inspectors to poultry plant workers. It’s a portrait of yet another disturbing lapse of oversight in our national food safety system. That much was clear to us when we finished the article. But we were left wondering: why would the USDA so flagrantly subvert its mission?

From the article:

The USDA’s Food Safety and Inspection Service, which oversees poultry plants, believes the changes would “ensure and even enhance the safety of the poultry supply by focusing our inspectors’ efforts on activities more directly tied to improving food safety,” FSIS spokesman Dirk Fillpot said in a statement.

The agency says it wants inspectors to focus on issues that pose the greatest health risks to the public.

That still doesn’t really explain the USDA’s rationale, though. So I spent five minutes searching online and discovered the following facts:

  • The USDA has a blog.

  • To which USDA officials frequently contribute.

  • Including Dr. Elisabeth Hagen, who is not just an FSIS spokesperson but in fact the official who oversees the agency’s policies and programs.

On April 19, 2012, Dr. Hagen cited the rationale that was missing from Tim Eberly’s story (bold emphasis mine):

Today, USDA announced an extension to the public comment period for a proposed rule that would modernize the poultry slaughter inspection system.  This new plan would provide us with the opportunity to protect consumers from unsafe food more effectively.  We recognize that this proposal would represent a significant change from the current system and has sparked a debate on how poultry is inspected.  We also value the different opinions being expressed about the proposal and have extended the public comment period to ensure all sides are presented in this debate.

It may surprise you to learn that the USDA has been inspecting poultry in largely the same way since the 1950’s. So, while our scientific knowledge of what causes foodborne illness has evolved, our inspection process has not been updated to reflect this new information. Under this modernization proposal, significant public health benefits will be achieved and foodborne illness will be prevented by focusing our inspectors attention on activities that will better ensure the safety of the poultry you and your family enjoy.

One thing we have learned from the last few decades of advances in food safety technology is that the biggest causes of foodborne illness are the things you don’t see like the harmful pathogens Salmonella and Campylobacter. As part of a continual effort to improve our inspection system, FSIS is proposing to move some inspectors away from quality assurance tasks—namely checking carcasses for bruises and feathers—to focus on food safety tasks, such as ensuring sanitation standards are being met and verifying testing and antimicrobial process controls. This science based approach means our highly-trained inspectors would spend less time looking for obvious physical defects and more time making sure steps poultry processing facilities take to control food safety hazards are working effectively.

The increased emphasis on food safety tasks proposed under the rule is consistent with the agency’s focus on foodborne illness prevention.  Instead of focusing on quality assurance, inspectors will now be able to ensure plants are maintaining sanitary conditions and that food safety hazards are being reduced throughout the entire production process.

Under a pilot program started in 1999, known as the HACCP Inspection Models Program, 20 broiler plants have served as “trial plants” for this new proposal. Test results from the poultry produced in those plants shows lower rates of Salmonella before it goes to the grocery store. The data and test results from this pilot program demonstrate that quality assurance tasks, such as checking for bruises and blemishes, do not provide adequate food safety protections as once was thought over 60 years ago.

Over the years we have seen — again and again — the need to modernize to keep pace with the latest science and threats. This poultry slaughter modernization proposal is about protecting public health, plain and simple, and I encourage stakeholders and the public to read the proposal and then let us know what you think.

Why couldn’t Tim Eberly have found, quoted from, and cited the USDA’s authoritative statement? Why couldn’t the editor who syndicated it into my local paper have added value by doing so?

There’s an analog to food safety: information safety. Reporters (food producers) and editors (inspectors) are chained to a fast-moving production line. But science-based methods can help keep us safe. Use the precious few seconds available to find, and report, authoritative sources.

Where have all the bloggers gone?

When Dave Shields returned from a recent “software sabbatical” (no blogging, tweeting, or Facebooking since 2009) he wondered: Where have all the bloggers gone?

I suggest you visit Sam Ruby’s planet.intertwingly.net.

(A “planet” is just an aggregation of blogs. The planet hoster makes up a list of blogs, then puts together a simple program so that, whenever a new blog post is made by *anyone* on the list of bloggers, then the blog post is copied to the planet. In brief, readers of the planet see *all* the blogs posts in the list of chosen blogs.)

Now that I’m back blogging, I have found that if I write a post in the morning, and then write another later in the day, or the next morning, then there are only a handful of blog posts from all the other members of the planet in between.
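The planet mechanism Dave describes, copying every new post from a list of blogs into one stream, reduces to a merge by date. Here is a simplified sketch (my own illustration, not Sam Ruby’s actual planet code), assuming each feed has already been fetched and parsed into (date, title) pairs:

```python
from datetime import datetime

def merge_posts(feeds):
    """Core of a 'planet': flatten every blog's posts into one
    reverse-chronological stream. Each feed is (blog_name, posts),
    each post is (iso_date, title)."""
    stream = [
        (datetime.fromisoformat(date), blog, title)
        for blog, posts in feeds
        for date, title in posts
    ]
    return sorted(stream, reverse=True)
```

Seen this way, a planet is just a shared, always-current reading list: the sparser the member blogs become, the sparser the merged stream.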

I’m one of those listed at planet.intertwingly.net, and I’m guilty as charged.

Of course that’s a view of the tech blogosphere. But my wife Luann, who blogs in a very different sphere of interest, shows a similar pattern.

Perhaps a more interesting question than “Where have all the bloggers gone?” is “What were they doing in the first place?” In my case, from 2003 through 2006, blogging was part of my gig at InfoWorld. For many of the others listed at planet.intertwingly.net it was a professional activity too. Collectively we were the tech industry thinking out loud. We spoke to one another through our blogs, and we monitored our RSS readers closely. That doesn’t happen these days.

Obviously Twitter, Facebook, and (for geeks particularly) Google+ have captured much of that conversational energy. Twitter is especially seductive. Architecturally it’s the same kind of pub/sub network as the RSS-mediated blogosphere. But its 140-character data packets radically lowered the threshold for interaction.

It’s not just about short-form versus long-form, though. Facebook and Google+ are now hosting conversations that would formerly have happened on — or across — blogs. Keystrokes that would have been routed to our personal clouds are instead landing in those other clouds.

I’d rather route all my output through my personal cloud and then, if/when/as appropriate, syndicate pieces of it to other clouds including Twitter, Facebook, and Google+. A few weeks back, WordPress’s Stephane Daury reminded me that I can:

@judell: since your blog is on (our very own) @wordpressdotcom, you can setup the publicize option to push your posts: http://wp.me/PEmnE-1ff.

I replied that I knew about that but preferred to crosspost manually. But I spoke too soon. My reason for not wanting to automate that push was that I wanted to tweak whether (or how) it happens. I should have realized that WordPress had thought of that:

Nice! This is an excellent step in the right direction. Thanks for the reminder, Stephane!

What’s next? Here are some things that will help me consolidate my output in my own personal cloud where it primarily belongs.

  • Different messages to each foreign cloud. Because headlines often need to be audience-specific.

  • Private to my personal cloud, public to foreign clouds. Because the public persona I shape on my blog serves different purposes than the ones I project to foreign clouds. Much of what I say in those other places doesn’t rise to the level of a public blog entry, but I’d still like to route that stuff through my personal cloud so I can centrally control/monitor/archive it.

  • Federate the interaction silos. Because right now I can’t easily follow or respond to commentary directed to my blog echoes in foreign clouds. Or, again, centrally control/monitor/archive that stuff.

In my Wired.com column I often reflect on these kinds of issues. The personal cloud services I envision mostly don’t exist yet. But it’s great to see WordPress.com moving in that direction!