Movie showtimes are easy to find. Just type something like “movies keene nh” into Google or Bing and they pop right up:

You might assume that this is open data, available for anyone to use. Not so, as web developers interested in such data periodically discover. For example, from MetaFilter:

Q: We initially thought it would be as easy as pulling in an RSS feed from somewhere, but places like Yahoo or Google don’t offer RSS feeds for their showtimes. Doing a little research brought up large firms that provide news and data feeds and that serve up showtimes, but that seems like something that’s designed for high-level sites with national audiences.

So, is there any solution for someone who is just trying to display local showtimes?

A: This is more complicated than you might think. Some theatres maintain that their showtimes are copyrighted, and (try to) control the publication of them. Others have proprietary agreements with favored providers and don’t publish their showtimes elsewhere, to give their media partners a content edge.

What applies to RSS feeds applies to calendar feeds as well. It would be nice to have your local showtimes as an overlay on your personal calendar. But since most theaters don’t make the data openly available, you can’t.

Some indie theaters, however, do serve up the data. Here are some movies that don’t appear when you type “movies keene nh” into Google or Bing:

These are listings from the Putnam Theater at Keene State College. They syndicate to the Elm City hub for the Monadnock region of New Hampshire by way of the college calendar which recently, thanks to Ben Caulfield, added support for standard iCalendar feeds. They appear in the film category of that hub. And in fact they’re all that can appear there.

I’ve decided I’m OK with that. I used to forget about movies at the Putnam because they didn’t show up in standard searches. Now I sync them to my phone and I’m more aware of them. Would I want all the mainstream movies there too? I used to think so, but now I’m not so sure. There are plenty of ways to find what’s playing at mainstream theaters. That doesn’t feel like an awareness problem that needs solving. The indie theaters, though, could use a boost. As I build out Elm City hubs in various cities, I’ve been able to highlight a few with open calendars:

- In Berkeley: UC Berkeley Art Museum / Pacific Film Archive (BAM/PFA)

- In Toronto: Bloor Cinema

And here are some indies whose calendars could be open, but aren’t:

- In Portland: Academy Theater

- In Cambridge: The Brattle Theatre

If you’re an indie theater and would like your listings to be able to flow directly to personal calendars, and indirectly through hubs to community portals, check out how the Putnam, BAM/PFA, and the Bloor Cinema are doing it.

In The Better Angels of Our Nature: Why Violence Has Declined, Steven Pinker compiles massive amounts of evidence to show that we are becoming a more civilized species. The principal yardstick he uses to measure progress is the steady decline, over millennia, in per-capita rates of homicide. But he also measures declines in violence directed towards women, racial groups, children, homosexuals, and animals.

It’s hard to read the chapters about the routine brutality of life during the Roman empire, the Middle Ages, the Renaissance, and — until more recently than we like to imagine — the modern era. An early example:

Far from being hidden in dungeons, torture-executions were forms of popular entertainment, attracting throngs of jubilant spectators who watched the victim struggle and scream. Bodies broken on wheels, hanging from gibbets, or decomposing in iron cages where the victim had been left to die of starvation and exposure were a familiar part of the landscape.

A modern example:

Consider this Life magazine ad from 1952:

Today this ad’s playful, eroticized treatment of domestic violence would put it beyond the pale of the printable. It was by no means unique.

A reader of that 1950s ad would be as horrified as we are today to imagine cheering a public execution in the 1350s. A lot changed in 600 years. But in the 60 years since, more has changed. The ad that seemed OK to a 1950s reader would shock most of us here in the 2010s.

Over time we’ve grown less willing and able to commit or condone violence, and our definition of what counts as violence has grown more inclusive. And yet this is deeply counter-intuitive. We tend to feel that the present is more violent and dangerous than the recent past. And our intuition tells us that the 20th century must have been more so than the distant past. That’s why Pinker has to marshal so much evidence. It’s like Darwin’s rhetorical strategy in The Origin of Species. You remind people of a lot of things that they already know in order to lead them to a conclusion they wouldn’t reach on their own.

Will the trend continue? Will aspects of life in the 2010s seem alien to people fifty years hence in the same way that the 1952 ad seems alien to us now, and that torture-execution seemed alien to our parents? (And if so, which aspects?)

Pinker acknowledges that the civilizing trend may not continue. He doesn’t make predictions. Instead he explores, at very great length, the dynamics that have brought us to this point. I won’t try to summarize them here. If you don’t have time to read the book, though, you might want to carve out an hour to listen to his recent Long Now talk. You’ll get much more out of that than from reading reviews and summaries.

Either way, you may dispute some of the theories and mechanisms that Pinker proposes. But if you buy the premise — that all forms of violence have steadily declined throughout history — I think you’ll have to agree with him on one key point. We’re doing something right, and we ought to know more about why and how.

As Elm City hubs grow, with respect to both raw numbers of events and numbers of categories, unfiltered lists of categories become unwieldy. So I’m noodling on ways to focus initially on a filtered list of “important” categories. The scare quotes indicate that I’m not yet sure how to empower curators to say what’s important. Categories with more than a threshold number of events? Categories that are prioritized without regard to number of events? Some combination of these heuristics?

To reason about these questions I need to evaluate some data. One source of data about categories is the tag cloud. For any Elm City hub, you can form this URL:

elmcity.cloudapp.net/HUBNAME/tag_cloud

If HUBNAME is AnnArborChronicle, you get a JSON file that looks like this:

[
{ "aadl":348},
{ "aaps":9},
{ "abbot":18},
...
]

This is the data that drives the category picklist displayed in the default rendering of the Ann Arbor hub. A good starting point would be to dump this data into a spreadsheet, sort by most populous categories, and try some filtering.
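For programmers the dump is just a few lines of script. Here’s a rough sketch in Python (the output filename is arbitrary, and it assumes the tag_cloud URL above is reachable):

    import csv
    import json
    import urllib.request

    # Fetch the tag cloud for a hub and dump it to a CSV file for spreadsheet analysis.
    hub = "AnnArborChronicle"
    url = "http://elmcity.cloudapp.net/%s/tag_cloud" % hub

    with urllib.request.urlopen(url) as response:
        tag_cloud = json.load(response)  # a list like [{"aadl": 348}, {"aaps": 9}, ...]

    with open("tags.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["category", "count"])
        for entry in tag_cloud:
            for category, count in entry.items():
                writer.writerow([category, count])

Not everyone who works with this kind of data writes scripts, though, and this post is mostly about the alternatives.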

I could add a feature that serves up this data in some spreadsheet-friendly format, like CSV (comma-separated values). But I am (virtuously) lazy. I hate to violate the YAGNI (“You aren’t gonna need it”) principle. So I’m inclined to do something quick and dirty instead, just to find out if it’ll even be useful to work with that data in a spreadsheet.

One quick-and-dirty approach entails looking for some existing (preferably online) utility that does the trick. In this case I searched for things with names like json2csv and json2xls, found a few candidates, but nothing that immediately did what I wanted.

So some text needs to be wrangled. One source of text to wrangle is the HTML page that contains the category picklist. If you capture its HTML source, you’ll find a sequence of lines like this:

<option value="aadl">aadl (348)</option>
<option value="aaps">aaps (9)</option>
<option value="abbot">abbot (18)</option>

It’s easy to imagine a transformation that gets you from there to here:

aadl	348
aaps	9
abbot	18
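Here’s what that transformation might look like as a Python script, reading the captured HTML on standard input and writing tab-separated pairs on standard output (a sketch for illustration; as I explain next, it isn’t what I actually did):

    import re
    import sys

    # Turn lines like <option value="aadl">aadl (348)</option>
    # into tab-separated "name<TAB>count" pairs.
    pattern = re.compile(r'<option value="([^"]+)">[^(]*\((\d+)\)</option>')

    for line in sys.stdin:
        match = pattern.search(line)
        if match:
            name, count = match.groups()
            print(name + "\t" + count)

You’d run it as, say, python options_to_tsv.py < picklist.html, with both filenames hypothetical.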

Although I’ve often written code to do that kind of transformation, if it’s a quick-and-dirty one-off I don’t even bother. I use the macro recorder in my text editor to define a sequence like:

  • Start selecting at the beginning of a line
  • Go to the first >
  • Delete
  • Go to whitespace
  • Replace with tab
  • Search for (
  • Delete
  • Search for )
  • Delete to end of line
  • Go to next line

This is a skill that’s second nature to me, and that I’ve often wished I could teach others. Many people spend crazy amounts of time doing mundane text reformatting; few take advantage of recordable macros.

But the reality is that recordable macros are the first step along the slippery slope of programming. Most people don’t want to go there, and I don’t blame them. So I’m delighted by a new feature in Excel 2013, called Flash Fill, that will empower everybody to do these kinds of routine text transformations.

Here’s a picture of a spreadsheet with HTML patterns in column A, an example of the name I want extracted in column B, and an example of the number I want in column C.

Given that setup, you invoke Flash Fill in the empty cells of columns B and C, and it follows the examples in B1 and C1. Here’s the resulting spreadsheet on SkyDrive. Wow! That’s going to make a difference to a lot of people!

Suppose your data source were instead JSON, as shown above. Here’s another spreadsheet I made using Flash Fill. As will be typical, this took a bit of prep. Flash Fill needs to work on homogeneous rows. So I started by dumping the JSON into JSONLint to produce text like this:

[
    {
        "aadl": 348
    },
    {
        "aaps": 9
    },
    {
        "abbot": 18
    },
...
]

I imported that text into Excel 2013 and sorted to isolate a set of rows with a column A like this:

"aadl": 348
"aaps": 9
"abbot": 18

At that point it was a piece of cake to get Flash Fill to carry the names over to column B and the numbers to column C.

Here’s a screencast by Michael Herman that does a nice job showing what Flash Fill can do. It also illustrates a fascinating thing about programming by example. At about 1:25 in the video you’ll see this:

Michael’s example in C1 was meant to tell Flash Fill to transform strings of 9 digits into the familiar nnn-nn-nnnn pattern. Here we see its first try at inferring that pattern. What should have been 306-60-4581 showed up as 306-215-4581. That’s wrong for two reasons. The middle group has three digits instead of two, and they’re the wrong digits. So Michael corrects it and tries again. At 1:55 we see Flash Fill’s next try. Here, given 375459809, it produces 375-65-9809. That’s closer, the grouping pattern looks good, but the middle digits aren’t 45 as we’d expect. He fixes that example and tries again. Now Flash Fill is clear about what’s wanted, and the rest of the column fills automatically and correctly.
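The transformation Michael wanted is trivial to state in code. The hard part, and the whole point of Flash Fill, is inferring that rule from just an example or two. Here, for reference, is the intended rule as a Python sketch:

    # Format a 9-digit string as nnn-nn-nnnn.
    def format_ssn(digits):
        return digits[:3] + "-" + digits[3:5] + "-" + digits[5:]

    print(format_ssn("375459809"))  # prints 375-45-9809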

But what was Flash Fill thinking when it produced those unintended transformations? And could it tell us what it was thinking?

From a Microsoft Research article about the new feature:

Gulwani and his team developed Flash Fill to learn by example, not demonstration. A user simply shows Flash Fill what he or she wants to do by filling in an Excel cell with the desired result, and Flash Fill quickly invokes an underlying program that can perform the task.

It’s the difference between teaching someone how to make a pizza step by step and simply showing them a picture of a pizza and minutes later eating a hot pie.

But that simplicity comes with a price.

“The biggest challenge,” Gulwani says, “is that learning by example is not always a precise description of the user’s intent — there is a lot of ambiguity involved.

“Take the example of Rick Rashid [Microsoft Research’s chief research officer]. Let’s say you want to convert Rick Rashid to Rashid, R. Where does that ‘R’ come from? Is it the ‘R’ of Rick or the ‘R’ of Rashid? It’s very hard for a program to understand.”

For each situation, Flash Fill synthesizes millions of small programs — 10-20 lines of code — that might accomplish the task. It sounds implausible, but Gulwani’s deep research background in synthesizing code makes it possible. Then, using machine-learning techniques, Flash Fill sorts through these programs to find the one best-suited for the job.

I suspect that while Flash Fill could tell you what it was thinking, you’d have a hard time understanding how it thinks. And for that reason I suspect that hard-core quants won’t rush to embrace it. But that’s OK. Hard-core quants can write code. Flash Fill is for everybody else. It will empower regular folks to do all sorts of useful transformations that otherwise entail ridiculous manual interventions that people shouldn’t waste time on. Be aware that you need to check results to ensure they’re what you expect. But if you find yourself hand-editing text in repetitive ways, get the Excel 2013 preview and give Flash Fill a try. It’s insanely useful.

In U.N. Maps Show U.S. High in Gun Ownership, Low in Homicides, A.W.R. Hawkins presents the following two maps:

From these he concludes:

Notice the correlation between high gun ownership and lower homicide rates.

As these maps show, “more guns, less crime” is true internationally as well as domestically.

The second map depicts homicides per 100,000 people. That’s the same yardstick used in Steven Pinker’s monumental new book The Better Angels of Our Nature: Why Violence Has Declined. Pinker marshals massive amounts of data to show that over the long run, and at an accelerating pace, we are less inclined to harm one another. When you look at the data on a per capita basis, even the mass atrocities of the 20th century are local peaks along a steadily declining sawtooth trendline.

One of the most remarkable charts in the book ranks the 20 deadliest episodes in history. It’s adapted from Matthew White’s The Great Big Book of Horrible Things, and appears in a slightly different form in The New Scientist:

Ever heard of the An Lushan Revolt? Well, I hadn’t, but on a per capita basis it dwarfs the First World War.

Pinker says, in a nutshell, that we’re steadily becoming more civilized, and that data about our growing reluctance to kill or harm one another show that. The trend marches through history and spans the globe. There’s regional variation, of course. A couple of charts show the U.S. to be about 5x more violent than Canada and the U.K. But there isn’t one that ranks the U.S. in a world context. So A.W.R. Hawkins’ map of homicide rates got my attention.

The U.S. has the most guns, the first chart says. And it’s one of the safest countries, the second chart says. But that second map doesn’t tell us:

- Where does the U.S. rank?

- How many countries are in the red, pink, yellow, and green categories?

- Which countries are in those categories?

- How do countries rank within those categories?

Here’s another way to visualize the data:

There are a lot of countries mashed together in that green zone. And after Cuba we’re the most violent of them. Five homicides per 100,000 isn’t a number to boast about.

It’s said that every social scientist must, at some point, write a sentence that begins: “Man is the only animal that _____.” Some popular completions of the sentence have been: uses tools, uses language, laughs, contemplates death, commits atrocities. In his new book Jonathan Gottschall offers another variation on the theme: storytelling is the defining human trait. For better and worse we are wired for narrative. A powerful story that captures our attention can help us make sense of the world. Or it can lead us astray.

A story we’ve been told about Easter Island goes like this. The inhabitants cut down all the trees in order to roll the island’s iconic 70-ton statues to their resting places. The ecosystem crashed, and they died off. This story is told most notably by Jared Diamond in Collapse and (earlier) in this 1995 Discover Magazine article:

In just a few centuries, the people of Easter Island wiped out their forest, drove their plants and animals to extinction, and saw their complex society spiral into chaos and cannibalism.

As we try to imagine the decline of Easter’s civilization, we ask ourselves, “Why didn’t they look around, realize what they were doing, and stop before it was too late? What were they thinking when they cut down the last palm tree?”

This is a cautionary tale of reckless ecocide. But according to recent work by Terry Hunt and Carl Lipo, Jared Diamond got the story completely wrong. A new and very different story emerged from their study of the archeological record. Here are some of the points of contrast:

Old story: Collapse resulted from the islanders’ reckless destruction of their environment (ecocide).
New story: Collapse resulted from European-borne diseases and European-inflicted slave trading (genocide).

Old story: The trees were cut down to move the statues.
New story: Trees weren’t used to move the statues. They were ingeniously designed to be walked along in a rocking motion using only ropes. The trees were destroyed mostly by rats. Which wasn’t a problem anyway because the islanders used the cleared land for agriculture.

Old story: Fallen and broken statues resulted from intertribal warfare.
New story: Fallen and broken statues resulted from earthquakes.

Old story: It must have taken a population of 25,000 or more to make and move all those statues. A population decline to around 4000 at the moment of European contact was evidence of massive collapse.
New story: The mode of locomotion for which the statues were designed is highly efficient. There’s no need to suppose a much larger work force than was known to exist.

Old story: The people of Easter Island were warlike.
New story: The people of Easter Island were peaceful. Because they had to be. Lacking hardwood trees for making new canoes, they were committed once the canoes that brought them were gone. There was no escape. And it’s a hard place to make a living. No fresh water, poor soil, meager fishing. To survive for the hundreds of years that they did, the society had to be “optimized for stability.”

Hunt and Lipo tell this new story in a compelling Long Now talk. After the talk Stewart Brand asks how Jared Diamond has responded to their interpretation. Not well, apparently. Once we’re in the grip of a powerful narrative we don’t want to be released from it.

Hunt and Lipo didn’t go to Easter Island with a plan to overturn the old story. They went as scientists with open eyes and open minds, looked at all the evidence, realized it didn’t support the old story, and came up with a new one that better fits the facts. And it happens to be an uplifting one. These weren’t reckless destroyers of an ecosystem. They were careful stewards of limited resources whose artistic output reflects the ingenuity and collaboration that enabled them to survive as long as they did in that hard place.

We’re all invested in stories, and in the assumptions that flow from them. Check your assumptions. It’s a hard thing to do. But it can lead you to better stories.

In Computational thinking and life skills I asked myself how to generalize this touchstone principle from computer science:

Focus on understanding why the program is doing what it’s doing, rather than why it’s not doing what you wanted it to.

And here’s what I came up with:

Focus on understanding why your spouse or child or friend or political adversary is doing what he or she is doing, rather than why he or she is not doing what you wanted him or her to.

I’ve been working on that. It’s been a pleasant surprise to find that Facebook can be a useful sandbox in which to practice the technique. I keep channels of communication open to people who hold wildly different political views. It’s tempting to argue with, or suppress, some of them. Instead I listen and observe and try to understand the needs and desires that motivate utterances I find abhorrent.

My daughter, a newly-minted Master of Social Work, will soon be doing that for a living. She’s starting a new job as a dialogue facilitator. How do you nurture conversations that bridge cultures and ideologies? It’s important and fascinating work. And I suspect there are some other computational principles that can helpfully generalize to support it.

Here’s one: Encourage people to articulate and test their assumptions. In the software world, this technique was a revelation that’s led to a revolution in how we create and manage complex evolving systems. The tagline is test-driven development (TDD), and it works like this. You don’t just assume that a piece of code you wrote will do what you expect. You write corresponding tests that prove, for a range of conditions, that it does what you expect.
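Here’s a minimal sketch of the idea, using Python’s unittest framework. The slugify function is invented purely for illustration; the point is that each test states an assumption and then checks it:

    import unittest

    def slugify(title):
        # Lowercase a title and replace spaces with hyphens.
        return title.strip().lower().replace(" ", "-")

    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            # Assumption: spaces become hyphens and case is normalized.
            self.assertEqual(slugify("Film Series"), "film-series")

        def test_surrounding_whitespace(self):
            # Assumption: leading and trailing whitespace is ignored.
            self.assertEqual(slugify("  Film Series  "), "film-series")

    if __name__ == "__main__":
        unittest.main()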

The technique is simple but profound. One of its early proponents, Kent Beck, has said of its genesis (I’m paraphrasing from a talk I heard but can’t find):

I was stumped, the system wasn’t working, I didn’t know what else to do, so I began writing tests for some of the most primitive methods in the system, things that were so simple and obvious that they couldn’t possibly be wrong, and there couldn’t possibly be any reason to verify them with tests. But some of them were wrong, and those tests helped me get the system working again.

Another early proponent of TDD, Ward Cunningham, stresses the resilience of a system that’s well-supported by a suite of tests. In the era of cloud-based software services we don’t ship code on plastic discs once in a while, we continuously evolve the systems we’re building while they’re in use. That wouldn’t be safe or sane if we weren’t continuously testing the software to make sure it keeps doing what we expect even as we change and improve it.

Before you can test anything, though, you need to articulate the assumption that you’re testing. And that’s a valuable skill you can apply in many domains.

Code

Assumption: The URL points to a calendar.

Tests: Does the URL even work? If so, does it point to a valid calendar?
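As a sketch in Python’s unittest framework, that assumption and its tests might look like this. The feed URL is hypothetical, and “does it point to a valid calendar” is reduced here to checking for iCalendar’s BEGIN:VCALENDAR header; a real validator would check much more:

    import unittest
    import urllib.request

    FEED_URL = "http://example.org/events.ics"  # hypothetical feed URL

    def fetch(url):
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status, response.read().decode("utf-8", "replace")

    class TestCalendarFeed(unittest.TestCase):
        def test_the_url_even_works(self):
            status, _ = fetch(FEED_URL)
            self.assertEqual(status, 200)

        def test_it_points_to_a_valid_calendar(self):
            _, body = fetch(FEED_URL)
            self.assertIn("BEGIN:VCALENDAR", body)

    if __name__ == "__main__":
        unittest.main()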

Interpersonal relationships

Assumption: You wouldn’t want to [watch that movie, go to that restaurant, take a walk].

Tests: I thought you wouldn’t want to [watch that movie, go to that restaurant, take a walk] but I shouldn’t assume, I should ask: Would you?

Tribal discourse

Assumption: They want to [take away our guns, proliferate guns].

Tests: ?

I’ll leave the last one as an exercise for the reader. If you feel strongly about that debate (or another) try asking yourself two questions. What do I assume about the opposing viewpoint? How might I test that assumption?

Video of my recent talk at the University of Michigan is forthcoming. But since it’s the sort of talk I had to write in advance, I’ve posted the text and slides here.

At noon on Friday I’ll be giving a talk in Ann Arbor, sponsored by the University of Michigan School of Information. The previous speaker on the school’s calendar is Richard Stallman, who is scheduled for Thursday. Apparently we will both be talking about open source software and open data.

John McPhee has lately been reflecting, in a series of New Yorker articles, on his long career as one of the world’s leading writers of nonfiction. In this week’s issue we learn that one of my favorites among his books, The Pine Barrens, was born on a picnic table. It was there that he lay prone for two weeks, in a panic, searching for a way to structure the vast quantity of material he’d gathered in a year of research. The solution, in this case, was Fred Brown, an elderly Pine Barrens dweller who “had some connection or other to at least three quarters of those Pine Barrens topics whose miscellaneity was giving me writer’s block.” Fred was the key to unlocking that book’s structure. But each book needed a different key.

The approach to structure in factual writing is like returning from a grocery store with materials you intend to cook for dinner. You set them out on the kitchen counter, and what’s there is what you deal with, and all you deal with.

For many years, that meant writing notes on pieces of paper, coding the notes, organizing the notes into folders, retyping notes, cutting and rearranging with scissors and tape. Then came computers, a text editor called KEDIT, and a Princeton colleague named Howard Strauss who augmented KEDIT with a set of macros that supported the methods McPhee had been evolving for 25 years. In the article, McPhee describes two KEDIT extensions: Structur and Alpha.

Structur exploded my notes. It read the codes by which each note was given a destination or destinations (including the dustbin). It created and named as many new KEDIT files as there were codes, and, of course, it preserved the original set.

Alpha implodes the notes it works on. It doesn’t create anything new. It reads codes and then churns a file internally, organizing it in segments in the order in which they are meant to contribute to the writing.

Alpha is the principal, workhorse program I run with KEDIT. Used again and again on an ever-concentrating quantity of notes, it works like nesting utensils. It sorts the whole business at the outset, and then, as I go along, it sorts chapter material and subchapter material, and it not infrequently rearranges the components of a single paragraph.

KEDIT is the only writing tool John McPhee has ever used. And as he is careful to point out, it’s a text editor, not a word processor. No pagination, headers, fonts, WYSIWYG, none of that. Just words and sentences. I can relate to that. My own writing tool of choice is an EMACS clone called Epsilon. I first used it on DOS around 1986 and I’m using it in Windows today to write these words. If I were a writer of long works I might have evolved my use of Epsilon in ways similar to what John McPhee describes. But I’ve only written one book, that was a long time ago, and since then I’ve written at lengths that don’t require that kind of tool support.

Still, I would love to find out more about John McPhee’s toolchain. My interest is partly historical. Howard Strauss died in 2005, and KEDIT is nearing the end of its life. (From kedit.com: “…we are in the process of gradually winding down Mansfield Software Group.”) But I’m also looking forward. Not everyone needs to organize massive quantities of unstructured information. But those who do require excellent tool support, and there’s room for innovation on that front. Anyone who’d like to tackle that challenge would benefit from understanding what John McPhee’s methods are, and how his toolchain supports them.

I’m going to write to John McPhee to ask him if he’d be willing to work with me on a screencast to document his methods. (And also to thank him for countless hours of reading enjoyment.) It’ll be a cold call, because we’ve never communicated, so if any reader of this post happens to have a personal connection, I would greatly appreciate an introduction.

I love a good story about a product becoming a service. Ray Anderson did it with floor covering, ZipCar does it with cars, Amazon and Microsoft are doing it with IT infrastructure. It’s a sweet model. Service providers own equipment and operations, earn recurring revenue, and are motivated to continuously improve efficiency and customer satisfaction.

There’s even been speculation about turning home heating into a service. Here in New England, where the dominant product is heating oil and oil-burning equipment, that would be a wonderful thing. Because now, for the millions of homeowners who burn oil — and for the businesses who support that system — the incentives are all wrong. We’re collectively abetting the nation’s addiction to oil, and customers’ need to use less oil conflicts with suppliers’ need to sell more.

In From oil to wood pellets: New England’s home heating future I documented my first foray into heating with biomass. In Central heating with a wood gasification boiler I presented the solution that’s actually working for us. Biomass is a viable alternative. But I’m still the owner, operator, and maintainer of the equipment, and the manager of the fuel supply (i.e. buying, stacking, loading). What would it be like to outsource those functions?

For single-family homes, biomass heating as a service is still just a dream. But for commercial buildings it’s a reality, and there’s a great example right in my own backyard. Well, almost. The Monadnock Waldorf School, right around the corner from my house, recently converted to a wood pellet boiler installed by Xylogen, a new company whose tagline is:

We do not sell heating systems. We do not sell fuel. We sell secure, local, renewable heat.

Xylogen’s blog tells the story of the project. Here are some of my favorite excerpts.

From What’s happening at MWS?:

We’re pleased to report that the oil boilers have used a total of 7 gallons of oil from day 1, the bulk consumed during initial tune-up and system testing. The remainder of the usage actually occurred during times when the pellet boiler could have kept up with the building’s requirement for heat. In other words, this operation was a mistake that has now been corrected in the control algorithms.

From We see the big picture too:

Today, an opening to an old ventilation shaft was discovered and promptly covered over. Heated air was escaping the building through the grating at such a clip that a small student might have gotten sucked in and trapped on it!

Also, there was an assembly today in the assembly room (makes sense!), so we decided to turn down the heat in advance to try to avoid overheating and waste. It turns out the audience itself raised the temperature at least 6F. Good thing we didn’t start out toasty.

Small, very simple steps can have a big impact. We’re looking at the high tech, the low tech, and everything in between to make a difference.

From True service:

The beauty of automatic real-time monitoring is that it’s possible to identify a problem with the equipment and rectify it before the customer even notices. That is service.

Xylogen is a collaboration between Mark Froling and Henry Spindler. I wish them well and look forward to reading more about their work.


PS: Thanks to Andrew Dey (whom I met last night at a talk by Sustainserv’s Matthew Gardner) for pointing out that Xylogen isn’t just about alternative fuel, but more importantly about an alternative business model.

Bookstores, for all the obvious reasons, are hanging on by their fingernails. What brings people into bookstores nowadays? Some of us still buy and read actual printed books. Some of us enjoy browsing the shelves and tables. Some of us value interaction with friendly and knowledgeable booksellers. And some of us like to see and hear authors when they come to speak and sign books.

There are lots of author events at bookstores. Recently LibraryThing’s Tim Spalding tweeted:

Upcoming bookish events on @LibraryThing Local now over 10,000! http://www.librarything.com/local/helpers

It’s great that LibraryThing “helpers” (individuals, libraries, bookstores) are adding all those events to LibraryThing’s database. But I’d really like to see bookstores help themselves by publishing standard calendar feeds. That way, LibraryThing could ingest those calendars automatically, instead of relying on dedicated helpers to input events one at a time. And the feeds would be available in other contexts as well, syndicating both to our personal calendars (desktop-, phone-, and cloud-based) and to community calendars.

When I saw Tim’s tweet I took a look at how bookstore events are feeding into various elmcity hubs. Here’s a snapshot of what I found:


Bright Lights (iCalendar feed: yes)

- Monadnock Region of NH: Toadstool
- Cambridge, MA: Harvard Bookstore
- Brookline, MA: Brookline Booksmith
- Boston, MA: Trident Booksellers
- Ann Arbor, MI: Crazy Wisdom
- Portland, OR: Powell’s

Dim Lights (iCalendar feed: indirect)

- Berkeley: East Wind Books
- Canada: Chapters Indigo
- Seattle: Third Place Books
- … and some others …

Dark Matter (iCalendar feed: no)

- Berkeley: City Lights
- Various: Barnes and Noble
- Seattle, WA: Elliot Bay
- … and many others …

There are three buckets:

Bright Lights: These are stores whose web calendars are accompanied by standard iCalendar feeds. Events from these stores appear automatically in the Monadnock, Boston, Ann Arbor, and Portland hubs. These stores’ calendars could also be ingested automatically into LibraryThing, and you could subscribe to them directly.

Dim Lights: These are stores whose web calendars are hosted on Facebook. There isn’t a standard iCalendar feed for Facebook calendars, but the elmcity service can synthesize one using the Facebook API. So I say that these stores have “indirect” iCalendar feeds.

Dark Matter: These are stores whose web calendars are available only in HTML format. Some of these calendars are handcrafted web pages, others are served up by content management systems that produce calendar widgets for display but fail to provide corresponding feeds.

There are a few Bright Lights and some Dim Lights, but most bookstore calendars, like most web calendars of all kinds, are Dark Matter. If you’re a bookstore I urge you to become a Bright Light. Making your calendar available to the web of data is as easy as using Google Calendar or Hotmail Calendar. It’s a best practice that bookstores disregard at their peril.
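Even a store whose content management system won’t emit a feed can publish one with a few lines of script, or by hand. Here’s a minimal sketch that writes a one-event iCalendar file; every detail of the event is an invented placeholder:

    from datetime import datetime

    def make_ics(uid, summary, location, start, end):
        # Build a minimal single-event iCalendar document.
        fmt = "%Y%m%dT%H%M%S"
        lines = [
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "PRODID:-//Example Bookstore//Events//EN",
            "BEGIN:VEVENT",
            "UID:" + uid,
            "DTSTAMP:" + datetime.utcnow().strftime(fmt) + "Z",
            "DTSTART:" + start.strftime(fmt),
            "DTEND:" + end.strftime(fmt),
            "SUMMARY:" + summary,
            "LOCATION:" + location,
            "END:VEVENT",
            "END:VCALENDAR",
        ]
        return "\r\n".join(lines) + "\r\n"

    ics = make_ics(
        uid="reading-20121115@example-bookstore.com",
        summary="Author reading and signing",
        location="Example Bookstore, Main St.",
        start=datetime(2012, 11, 15, 19, 0),
        end=datetime(2012, 11, 15, 20, 30),
    )

    with open("events.ics", "w") as f:
        f.write(ics)

Put a file like that on a web server, link to it from the events page, and the store’s calendar joins the web of data.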

As I build out calendar hubs in various cities I’ve been keeping track of major institutions that do, or don’t, provide iCalendar feeds along with their web calendars. At one point I made a scorecard which shows that iCalendar support is unpredictably spotty across a range of cities and institutions. One of the surprises was Boston, where I found iCalendar feeds for neither Harvard nor MIT.

I’ve recently improved the Boston calendar hub and, as part of that exercise, I took another look at the public calendars for both universities. It turns out that Harvard does offer a variety of calendar feeds. I just hadn’t looked hard enough. There’s even an API:

The HarvardEvents API allows you to request HarvardEvents data programmatically in CSV, iCalendar, JSON, JSONP, serialized PHP, RSS, or XML format. The API provides a RESTful interface, which means that you can query it using simple HTTP GET requests.

Nicely done! You’d think that, just down the road, MIT would be doing something similar. But if so I haven’t found it. So for now, the Boston hub includes way more Harvard events than MIT events.

Here’s hoping MIT will follow Harvard’s lead and equip its public calendars with standard data feeds.

Surfing the Roku box last night I landed on the MIT OpenCourseWare channel and sampled Introduction to Computer Science and Programming. In one lecture Prof. John Guttag offers this timely reminder:

Focus on understanding why the program is doing what it’s doing, rather than why it’s not doing what you wanted it to.

It was timely because I was, in fact, writing a program that wasn’t doing what I expected. And I had, in fact, fallen into the psychological trap that Guttag warns about. When you’re writing software you use abstractions and also create them. What’s more, many of the abstractions you use are the very ones you created. When you live in a world of your own invention you can do amazing and wonderful things. But you can also do ridiculous and stupid things. To see the difference between them you must always be prepared to park your ego and consider the latter possibility.

Elsewhere in that lecture, Prof. Guttag talks about Jeannette Wing’s idea that computational thinking involves skills that transcend the computer and information sciences. In 2008, when that lecture was given, many of us were talking about how that might be true. We talked about computational thinking as a “Fourth R” — a cognitive tool as fundamental as Reading, Riting, and Rithmetic.

I never found an example that would resonate broadly. But maybe this will:

Focus on understanding why your spouse or child or friend or political adversary is doing what he or she is doing, rather than why he or she is not doing what you wanted him or her to.

The Ann Arbor city council met, most recently, on October 15. Why didn’t the Ann Arbor Chronicle’s story on the meeting land until October 24? It took a while for Dave Askins to compile his typically epic 15,000-word blog post. It’s an astonishingly detailed record of the meeting — more and better coverage, perhaps, than is available in any city.

The Chronicle describes itself thusly:

Launched in September 2008, the Ann Arbor Chronicle is an online newspaper that focuses on civic affairs and local government coverage. Although we’d likely be classified by most folks as “new media,” in many ways we embrace an ethos that runs contrary to current trends: Longer, in-depth articles; an emphasis on factual accuracy and thoroughness, not speed; and an assumption that our readers are thoughtful, intelligent and engaged in this community.

Who will read 15,000 words on a city council meeting? That depends partly on when the reading occurs. Because while the Chronicle is a newspaper, it is also a living history of the town’s public affairs. There’s no paywall. Every story is, and remains, fully available. That means the Chronicle isn’t just on the web, it is a web. What was said and decided about transportation in October 2012 can be reviewed in 2013 or 2014. The Chronicle is a community memory. In the short term it delivers news. Over the long run it assembles context.

Consider the list of links, below, that I extracted from the October 24 report. Of the 53 links, 23 point to prior Chronicle stories. Paywalled journalism can’t do that, and it’s a crippling limitation. If those who cannot remember the past are condemned to repeat it, mainstream journalism’s online amnesia won’t help move us forward. What happened today is only the tip of the iceberg. We need to know how we got to today. That can’t happen in print. It can only happen online. But tragically it almost never does, so context suffers.

If you scan that list of links you’ll notice something else that mainstream online journalism seldom allows: external links. The majority of the links in the Chronicle’s report point to other sources. Some are the websites of local organizations or local government. Others are documents that weren’t online but have been placed into the public record by the Chronicle. Paywalled journalism rarely does this. Once you’re in it wants to keep you in to rack up pageviews. This is another context killer.

Who will pay for all this luxurious context? Well, there’s me. I don’t live in Ann Arbor. But I went to school there, my daughter does now, I have another connection to the town, and I’m a huge fan of Dave Askins’ and Mary Morgan’s bold venture. So I’m a voluntary subscriber. And I hope I’ll get the chance to support something like the Chronicle in the town where I do live.

As a refugee from the pageview mills I can tell you that model leads nowhere good. I’m ready, willing, and able to back alternatives that use the web as it was meant to be used.


Links extracted from the Ann Arbor Chronicle’s report on the city council meeting of October 15, 2012.

  1. http://a2dda.org/current_projects/a2p5_/
  2. http://alphahouse-ihn.org/
  3. http://annarborchronicle.com/2010/03/03/to-do-bicycle-registry-transit-station/
  4. http://annarborchronicle.com/2010/03/10/county-offers-400k-match-for-skatepark/
  5. http://annarborchronicle.com/2010/04/05/ann-arbor-planning-priorities-take-shape/
  6. http://annarborchronicle.com/2011/03/24/ann-arbor-gives-initial-ok-to-pot-licenses/
  7. http://annarborchronicle.com/2012/02/10/um-ann-arbor-halt-fuller-road-project/
  8. http://annarborchronicle.com/2012/05/13/public-art-rehashed-by-ann-arbor-council/
  9. http://annarborchronicle.com/2012/06/04/ann-arbor-rail-study-moves-ahead/
  10. http://annarborchronicle.com/2012/06/11/city-council-action-focuses-on-transit-topics/
  11. http://annarborchronicle.com/2012/07/19/um-wall-street-parking-moves-ahead/
  12. http://annarborchronicle.com/2012/08/09/city-council-votes-down-park-amendment/
  13. http://annarborchronicle.com/2012/08/14/um-ann-arbor-agree-rail-costs-not-owed/
  14. http://annarborchronicle.com/2012/08/16/council-meeting-floods-fires-demolition/
  15. http://annarborchronicle.com/2012/08/20/planning-group-briefed-on-william-st-project/
  16. http://annarborchronicle.com/2012/09/01/city-council-to-focus-on-land-sale-policy/
  17. http://annarborchronicle.com/2012/09/07/aata-5-year-program-may-2013-tax-vote/
  18. http://annarborchronicle.com/2012/09/09/ann-arbor-dda-board-addresses-housing/
  19. http://annarborchronicle.com/2012/09/10/zoning-transit-focus-of-council-meeting/
  20. http://annarborchronicle.com/2012/09/13/county-tax-hike-for-economic-development/
  21. http://annarborchronicle.com/2012/09/24/council-punts-on-several-agenda-items/
  22. http://annarborchronicle.com/2012/09/27/transit-contract-contingent-on-local-money/
  23. http://annarborchronicle.com/2012/10/11/dda-green-lights-housing-transportation/
  24. http://annarborchronicle.com/2012/10/12/council-may-seek-voter-ok-on-rail-station/
  25. http://annarborchronicle.com/2012/10/12/positions-open-new-transit-authority-board/
  26. http://annarborchronicle.com/events-listing/
  27. http://annarborchronicle.com/wp-content/uploads/2012/08/MalletsDrainFloodingResolution.jpg
  28. http://annarborchronicle.com/wp-content/uploads/2012/10/600-Oct-15.jpg
  29. http://annarborchronicle.com/wp-content/uploads/2012/10/AAS-Conceptual-Construction-Costs-1.pdf
  30. http://annarborchronicle.com/wp-content/uploads/2012/10/AnnArbor-Congestion-now-future.jpg
  31. http://annarborchronicle.com/wp-content/uploads/2012/10/Appeal-12-263-Askins-map-historical-flooding1.pdf
  32. http://annarborchronicle.com/wp-content/uploads/2012/10/briere-derezinski-600.jpg
  33. http://annarborchronicle.com/wp-content/uploads/2012/10/cooper-deck-600.jpg
  34. http://annarborchronicle.com/wp-content/uploads/2012/10/EcologyCenter-Support-Resolution-Oct-2012.pdf
  35. http://annarborchronicle.com/wp-content/uploads/2012/10/greenbelt-hamstead-lane.jpg
  36. http://annarborchronicle.com/wp-content/uploads/2012/10/PA2PPositionPaper.pdf
  37. http://annarborchronicle.com/wp-content/uploads/2012/10/powers-ezekiel-600.jpg
  38. http://annarborchronicle.com/wp-content/uploads/2012/10/shiffler-mitchell-anglin-600.jpg
  39. http://annarborchronicle.com/wp-content/uploads/2012/10/smith-lax-600.jpg
  40. http://annarborchronicle.com/wp-content/uploads/2012/10/smith-listening-600.jpg
  41. http://annarborchronicle.com/wp-content/uploads/2012/10/teall-transit-map-600.jpg
  42. http://michigan.sierraclub.org/huron/
  43. http://protectourlibraries.org/
  44. http://wbwc.org/
  45. http://www.a2gov.org/government/city_administration/city_clerk/pages/default.aspx
  46. http://www.a2gov.org/government/communityservices/ParksandRecreation/parks/Features/Pages/KueblerLangford.aspx
  47. http://www.a2gov.org/government/communityservices/planninganddevelopment/planning/Pages/ZoningOrdinanceReorganizationProject.aspx
  48. http://www.a2gov.org/government/publicservices/fleetandfacility/airport/Pages/default.aspx
  49. http://www.ci.ann-arbor.mi.us/government/communityservices/planninganddevelopment/planning/Pages/NorthMainHuronRiverCorridorProject.aspx
  50. http://www.ecocenter.org/
  51. http://www.environmentalcouncil.org/
  52. http://www.michiganlcv.org/
  53. http://www.soloaviation.aero/

In today’s column on wired.com I discuss ways to manage overlapping personal, team, and public calendars.

It’s the 21st in the series; here’s the whole list:

The future’s here, but unevenly distributed 2012-03-02
Kynetx pioneers the Live Web 2012-03-09
What’s in a name? In the cloud, a data service! 2012-03-16
The translucent cloud: balancing privacy and convenience 2012-03-23
The not-so-hidden risk of a cloud meltdown, and why I’m not so worried 2012-03-30
Picture this: hosted lifebits in the personal cloud 2012-04-06
The intentional cloud: say what you mean, become what you say 2012-04-20
Owning your words: personal clouds build professional reputations 2012-04-27
Your smart meter’s data belongs in your personal cloud 2012-05-04
The web is the cloud’s API 2012-05-18
Calendars in the cloud: no more copy and paste 2012-06-01
Publishing has perished: long live the personal cloud 2012-06-08
The personal cloud’s third dimension: webmakers 2012-06-22
Personal cloud as platform: mix and match wisely 2012-06-29
Cooperating services in the cloud 2012-07-13
A domain of one’s own 2012-07-27
I say movie, you say film, our personal clouds can still work together 2012-08-12
Hello personal cloud, goodbye fax 2012-08-31
From personal clouds to community clouds 2012-09-14
Why Johnny can’t syndicate 2012-10-09
Scoping the visibility of your personal cloud 2012-10-19

If you’re a coach, parent, or student involved with high school sports, you may know of a site called HighSchoolSports.net. It’s a service used by many schools, including the ones in my town, to manage information about teams and schedules. For the elmcity project it’s been a stalwart provider of iCalendar feeds, enabling me to show high school and middle school contests — soccer, football, lacrosse, swimming, etc. — on community-wide calendars in many cities.

Until recently.

One day I noticed a flood of errors from the HighSchoolSports.net feeds. USA Today, it turns out, had acquired HighSchoolSports.net. At first I hoped the errors were just a redirection snag. But when I visited the new site, at usatodayhss.com, I was shocked to see that the iCalendar feature had evidently been removed.

Could that really be true? I wrote to ask; here is the reply.

Thank you for contacting us regarding the iCal feature that was once located on HighSchoolSports.net

The iCal feature, for syncing to personal calendars, is no longer an available feature on USATodayhss.com

We will be launching a mobile version of USATodayhss.com in the very near future. With this mobile feature, you will be able to check schedules on the go with your smart phones or any available internet connective devices.

It was as if a million voices cried out in terror and were suddenly silenced.

USA Today, please reconsider. You are now the steward of data flows that matter to thousands of communities. The data is of a specific type. There is a longstanding standard Internet way to enable that specific type of data to syndicate not only to personal calendars but also to community calendars. A mobile app will be a nice addition. But it is not a replacement for standard data feeds that can syndicate into a variety of contexts.

In Book: A Futurist’s Manifesto, which is taglined Essays from the bleeding edge of publishing and is co-edited by Brian O’Leary and Hugh McGuire, there’s a refreshingly forward-thinking chapter on public libraries by the Ann Arbor District Library’s Eli Neiburger. In The End of the Public Library (As We Knew It) Eli describes an intriguing model for libraries as purveyors of digital stuff. If you’re a creator of such stuff, and if you’re willing to sign the AADL Digital Content Agreement, you can license your stuff directly to the library:

The Agreement establishes that the library will pay an agreed-upon sum for a license to distribute to authenticated AADL cardholders, from our servers, an agreed-upon set of files, for an agreed-upon period of time. At the end of the term, we can either negotiate a renewal or remove the content from our servers.

The license specifies that no DRM, use controls, or encryption will be used, and no use conditions are presented to the AADL customer. In fact, our stock license also allows AADL users to download the files, use them locally and even create derivative works for personal use.

Pretty radical! Why would you, the creator of said stuff, want to take this crazy leap of faith? Eli explains:

Instead of looking at the license fee as compensation for something like a one-time sale, the pricing works when the rightsholder considers how much revenue they would like to expect during the license term from our 54,000-odd cardholders. For niche creators, it’s not hard for the library to beat that number, and all they have to do to get it is agree to the license and deliver the files to our server.

They’re not releasing their content to the world (especially because it’s already out there). They’re just granting a year or so of downloads to these 54,000 people. They get more revenue than they would likely get from those people up front, and the library gets sustainable, usable digital content for its users.

Eli thinks this won’t work for in-demand mass-market stuff anytime soon, if ever. But as he points out:

When everything is everywhere, libraries need to focus on providing — or producing — things that aren’t available anywhere else, not things that are available everywhere you look.

Of course public libraries have always been producers as well as providers. Things libraries produce include local collections, like the Keene Public Library’s exquisitely-curated historical photos and postcards and the Ann Arbor District Library’s The Making of Ann Arbor.

Libraries are also producers of community events, and here’s one I’m delighted to announce will happen at the Ann Arbor District Library on September 26:


A Seminar on Community Information Management

Everybody lives online now. Knowing how to collect and exchange information is now as important a skill as knowing how to drive, but it’s not enough: in order to make the web really work for you, you have to know how to project yourself online, and how to manage the boundary between what’s private and what’s public.

Cities and towns need to know this too. From the mayor’s office and local schools to the slow-pitch league and the local music scene, communities need to have these same skills if they are to survive and thrive in the 21st Century.

This seminar will explore what those skills are and how we can use them to make our communities stronger. We will use one particular case — sharing and synchronizing event calendars in Ann Arbor — to illustrate ideas, but the basic principles we will discuss can be applied to almost every aspect of community life.

While we’ll be talking about the web, this seminar is not for IT specialists, any more than knowing how to drive is something that only auto mechanics need to know.

A one-hour presentation by Jon Udell of Microsoft will be followed by another hour of Q&A and discussion.

This event is free of charge, and particularly of interest to those working for educational, civic and other not-for-profit organizations. It will be helpful to those who want better ways to get the word out about their own organization’s events and news, as well as those who are searching for such information and not always finding it easy to locate.

We’ll address these questions, among others:

- How can we, as a community, most effectively inform one another about goings-on in the region?

- How can our collective information management skills improve quality of life in the region?

- How can they also help us attract tourism and talent from outside the region?

- How do these same skills apply in other domains of public life such as political discourse and education?

We’re inviting organizations that — like public libraries — are significant producers of community events: public schools, colleges and universities, city governments, hospitals, cultural and environmental nonprofits, sports leagues, providers of social services, and more. We’re specifically looking for the people in such organizations who produce and promote events. If you’re one of those folks in or near Ann Arbor and would like to be invited, please let me know.

My latest wired.com column begins:

Back in February my son lost control of his car and landed in the hospital. Fortunately he has recovered from his injuries. And fortunately we have health insurance. So everything’s OK. However, I’m still — six months later — trying to untangle the bureaucratic mess that ensued.

Multiple health-care providers and multiple insurers are involved. They don’t talk to each other directly. It’s up to me to decode their communications meant for one another and route them appropriately. In the column I imagine a cloud-based tool that helps me do that, and I may live long enough to use it someday. But for now there’s a simple tool, often overlooked, that can help you hack through these bureaucratic thickets. It’s 3-way calling.

Once in an airport I saw a guy with a payphone in one hand and a cellphone in the other hand. He grew more and more agitated. Finally he removed his head from between the phones, rotated one of them, slapped them together microphone-to-speaker, and yelled “Talk to each other!”

I do the same thing all the time, a bit less dramatically. Just this morning I learned from Mrs. D at the hospital that the health insurer still isn’t processing a pile of bills. She was asking me questions that only the insurer could answer. So I called up the health insurer and got Carol. She was asking me questions that only the hospital could answer. I did a hook flash, called Mrs. D, and flashed again.

“Carol, meet Mrs. D; Mrs. D, meet Carol.”

And then I just listened to them work it out. When you do this kind of thing, it’s always fascinating to observe the degree to which organizations are hamstrung by their own org charts, acronyms, and methods. As detailed in the column, the initial sticking point was a magic token called the Exhaustion of Benefits letter, which was supposed to trigger the cutover from auto to health insurance. It took me a long time to figure out what that was and where to route it, but even after I did things remained stuck.

On the provider side, there were org-chart assumptions (e.g., that the hospital and the clinic are separate entities) not evident to the insurer. On the insurer side, there were acronyms (PIP) and terms (“ledger”) not understood by the provider. And on each side there were procedures unfamiliar to the other.

You would think that people who deal with these issues every day would know the drill. But there are many different ways to play the same game. It’s up to the customer to be the referee. If you haven’t tried it, use 3-way calling the next time you find yourself in the middle of one of these messes. For now it’s the best tool for the job.

In Food safety, information safety I reacted to Tim Eberly’s newspaper article on food safety which, I said, didn’t acknowledge an important primary source, Elisabeth Hagen’s blog post, which states the rationale for the poultry inspection policy changes that were the subject of the article.

I’ve since spoken to Tim Eberly about how he researched and wrote that story. He had, in fact, seen Elisabeth Hagen’s blog post, along with many other documents, some of which I’ll cite and link to here. And the rationale she presented, which I complained was missing from the article, is in fact there, albeit in a more diluted form.

In theory the article could have used quotes from Hagen’s blog, or from a HuffPo blog by her boss Alfred Almanza. But in practice, as Tim Eberly points out, these are blogs in name only, effectively they are press releases, and a reporter wants to advance the story beyond that. Unfortunately FSIS declined to be interviewed for the story.

All that said, I was still left wondering about the crux of the story. A decade ago the FSIS began a pilot program that would shift responsibility for direct inspection of slaughtered chickens from FSIS workers to poultry-plant workers. On the face of it, that sounds like a terrible idea, and there’s been lots of criticism ever since. But the FSIS rationale — that its resources are better spent assuring all-up compliance with safety standards, to prevent upstream contamination so less needs to be found downstream — sounds credible too.

Now the pilot program is slated to expand. What should we think about that? You’ll have to decide for yourself. Here are some sources I’ve rounded up that may help.

  • The proposed rule: Modernization of Poultry Slaughter Inspection

    http://www.fsis.usda.gov/OPPDE/rdad/FRPubs/2011-0012E.htm, 4/26/2012

  • Comments on the rule

    http://www.regulations.gov/#!searchResults;rpp=25;po=2250;s=FSIS-2011-0012, April-May 2012, total of 2260 comments

  • Alfred Almanza, FSIS administrator: blog post / news release

    http://www.fsis.usda.gov/News_&_Events/NR_041312_01/index.asp, 4/26/2012 (also http://www.huffingtonpost.com/alfred-v-almanza/chicken-inspection-new-policy_b_1424136.html, 4/13/2012)

    “We have more than a decade of experience with slaughter running at 175 bpm, the proposed maximum line speed in the rule. And the data is clear that in these plants, the poultry produced has lower rates of Salmonella, a pathogen that sickens more than 1 million people in the U.S. every year. These plants also maintain superior performance on removing the visual and quality defects that don’t make people sick. Those are the facts, based on the data.”

  • FSIS Self-evaluation of HACCP Inspection Models Project (HIMP)

    http://www.fsis.usda.gov/PDF/Evaluation_HACCP_HIMP.pdf, Aug 2011

    “Because fewer inspectors are required to conduct online carcass inspection in HIMP establishments, FSIS is able to conduct more offline food safety related inspection activities. HIMP establishments have higher compliance with SSOP and HACCP prevention practice regulations and lower levels of non-food safety defects, fecal defect rates, and Salmonella verification testing positive rates than non-HIMP establishments. These data indicate that HIMP inspection provides improvements in food safety and other consumer protections.”

  • Independent review of the HACCP-Based Inspection Models Project by the National Alliance for Food Safety Technical Team

    http://www.fsis.usda.gov/OPPDE/nacmpi/Nov2002/Papers/NAFS97.pdf, 2002

    Conclusions

    1. The authors urge continued FSIS oversight and continuous re-evaluation as HIMP is more broadly implemented.

    2. At this time, no convincing arguments were identified which indicate that adoption of the modified system, under regulatory supervision, would increase risk.

    3. More importantly, the authors find that there are several lines of evidence that strongly argue for process improvements from the consumer perspective as related to adoption of the HIMP system.

  • GAO report: Weaknesses in Meat and Poultry Inspection Pilot Should Be Addressed Before Implementation, 2001

    http://www.gao.gov/assets/240/233016.pdf

    “It is questionable whether the data generated by the project are indicative of how all of the chicken plants’ inspection systems would perform if modified inspections were adopted nationwide. First, the chicken pilot that USDA designed lacks a control group — a critical design flaw that precludes a comparison between the performance of the inspection systems at those plants that volunteered to participate in the pilot and that of plants that did not participate. Without a control group, USDA cannot determine whether changes in inspections systems are due to personnel changes or other possible explanations, such as the addition of chlorine rinses.”

That last item helps me contextualize the story, which ends like this:

At least one elected official wants FSIS to put the brakes on its proposal.

U.S. Sen. Kirsten Gillibrand, D-New York, asked GAO to conduct another audit of FSIS’s pilot program. The agency said it will do so soon. She then sent a letter to Vilsack, asking him to delay the changes.

“I do not believe USDA should yield inspection responsibilities to plant personnel that have an inherent conflict of interest unless [the pilot program] can be independently verified to be safe and effective,” Gillibrand wrote.

Vilsack wrote back to Gillibrand in a letter filled with FSIS’s talking points on the issue. But more notable is what’s missing: Vilsack doesn’t address the senator’s request.

Reading between the lines, advocates say, that doesn’t spell good news.

I’m now inclined to agree. It sounds like there should be an independent audit of the pilot, and better analysis of the tradeoffs between using FSIS inspectors to monitor plants for all-up compliance with safety standards versus using them side-by-side with plant inspectors looking at birds.

To get to this point, though, I had to work pretty hard to find and evaluate the reporter’s sources, and understand his process. I’m grateful to Tim Eberly for taking the time to help me. I sure wish, though, that journalistic convention permitted him to cite and link to the sources he used.

Yesterday my family and I read this article on food safety which was syndicated to our local paper from the Atlanta Journal-Constitution. It begins provocatively:

One-third of a second.

That’s how long a federal inspector will have to examine slaughtered chickens for contaminants and disease under new rules proposed by the federal government.

In the ensuing 1300 words of the main story that was syndicated to our paper, plus 1100 words of sidebars not included, the reporter — Tim Eberly — explores how the proposal will shift responsibility for hands-on inspection from federal inspectors to poultry plant workers. It’s a portrait of yet another disturbing lapse of oversight in our national food safety system. That much was clear to us when we finished the article. But we were left wondering: why would the USDA so flagrantly subvert its mission?

From the article:

The USDA’s Food Safety and Inspection Service (FSIS), which oversees poultry plants, believes the changes would “ensure and even enhance the safety of the poultry supply by focusing our inspectors’ efforts on activities more directly tied to improving food safety,” FSIS spokesman Dirk Fillpot said in a statement.

The agency says it wants inspectors to focus on issues that pose the greatest health risks to the public.

That still doesn’t really explain the USDA’s rationale, though. So I spent five minutes searching online and discovered the following facts:

  • The USDA has a blog.

  • To which USDA officials frequently contribute.

  • Including Dr. Elisabeth Hagen, who is not just an FSIS spokesperson but in fact the official who oversees the agency’s policies and programs.

On April 19, 2012, Dr. Hagen cited the rationale that was missing from Tim Eberly’s story (bold emphasis mine):

Today, USDA announced an extension to the public comment period for a proposed rule that would modernize the poultry slaughter inspection system.  This new plan would provide us with the opportunity to protect consumers from unsafe food more effectively.  We recognize that this proposal would represent a significant change from the current system and has sparked a debate on how poultry is inspected.  We also value the different opinions being expressed about the proposal and have extended the public comment period to ensure all sides are presented in this debate.

It may surprise you to learn that the USDA has been inspecting poultry in largely the same way since the 1950′s. So, while our scientific knowledge of what causes foodborne illness has evolved, our inspection process has not been updated to reflect this new information. Under this modernization proposal, significant public health benefits will be achieved and foodborne illness will be prevented by focusing our inspectors attention on activities that will better ensure the safety of the poultry you and your family enjoy.

One thing we have learned from the last few decades of advances in food safety technology is that the biggest causes of foodborne illness are the things you don’t see like the harmful pathogens Salmonella and Campylobacter. As part of a continual effort to improve our inspection system, FSIS is proposing to move some inspectors away from quality assurance tasks—namely checking carcasses for bruises and feathers—to focus on food safety tasks, such as ensuring sanitation standards are being met and verifying testing and antimicrobial process controls. This science based approach means our highly-trained inspectors would spend less time looking for obvious physical defects and more time making sure steps poultry processing facilities take to control food safety hazards are working effectively.

The increased emphasis on food safety tasks proposed under the rule is consistent with the agency’s focus on foodborne illness prevention.  Instead of focusing on quality assurance, inspectors will now be able to ensure plants are maintaining sanitary conditions and that food safety hazards are being reduced throughout the entire production process.

Under a pilot program started in 1999, known as the HACCP Inspection Models Program, 20 broiler plants have served as “trial plants” for this new proposal. Test results from the poultry produced in those plants shows lower rates of Salmonella before it goes to the grocery store. The data and test results from this pilot program demonstrate that quality assurance tasks, such as checking for bruises and blemishes, do not provide adequate food safety protections as once was thought over 60 years ago.

Over the years we have seen — again and again — the need to modernize to keep pace with the latest science and threats. This poultry slaughter modernization proposal is about protecting public health, plain and simple, and I encourage stakeholders and the public to read the proposal and then let us know what you think.

Why couldn’t Tim Eberly have found, quoted from, and cited the USDA’s authoritative statement? Why couldn’t the editor who syndicated it into my local paper have added value by doing so?

There’s an analog to food safety: information safety. Reporters (food producers) and editors (inspectors) are chained to a fast-moving production line. But science-based methods can help keep us safe. Use the precious few seconds available to find, and report, authoritative sources.

When Dave Shields returned from a recent “software sabbatical” (no blogging, tweeting, or Facebooking since 2009), he wondered: Where have all the bloggers gone?

I suggest you visit Sam Ruby’s planet.intertwingly.net.

(A “planet” is just an aggregation of blogs. The planet hoster makes up a list of blogs, then puts together a simple program so that, whenever a new blog post is made by *anyone* on the list of bloggers, then the blog post is copied to the planet. In brief, readers of the planet see *all* the blogs posts in the list of chosen blogs.)

Now that I’m back blogging, I have found that if I write a post in the morning, and then write another later in the day, or the next morning, then there are only a handful of blog posts from all the other members of the planet in between.
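
A planet, in other words, is just a feed aggregator driven by an editorially chosen list of sources. Here’s a minimal sketch of the idea in Python, using the feedparser library; the feed URLs are placeholders, not the actual roster of planet.intertwingly.net.

    import time
    import feedparser  # pip install feedparser

    # Hypothetical feed list; a real planet uses whatever blogs its hoster has chosen.
    FEEDS = [
        "http://example.org/alice/feed.xml",
        "http://example.org/bob/feed.xml",
    ]

    entries = []
    for url in FEEDS:
        feed = feedparser.parse(url)
        source = feed.feed.get("title", url)
        for entry in feed.entries:
            when = entry.get("published_parsed") or time.gmtime(0)
            entries.append((when, source, entry))

    # Readers of the planet see every post from every listed blog, newest first.
    entries.sort(key=lambda item: item[0], reverse=True)
    for when, source, entry in entries:
        print(time.strftime("%Y-%m-%d", when), source, "-", entry.get("title", "(untitled)"))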

I’m one of those listed at planet.intertwingly.net, and I’m guilty as charged:

Of course that’s a view of the tech blogosphere. But my wife Luann, who blogs in a very different sphere of interest, shows a similar pattern:

Perhaps a more interesting question than “Where have the bloggers gone?” is “What were they doing in the first place?” In my case, from 2003 through 2006, blogging was part of my gig at InfoWorld. For many of the others listed at planet.intertwingly.net it was a professional activity too. Collectively we were the tech industry thinking out loud. We spoke to one another through our blogs, and we monitored our RSS readers closely. That doesn’t happen these days.

Obviously Twitter, Facebook, and (for geeks particularly) Google+ have captured much of that conversational energy. Twitter is especially seductive. Architecturally it’s the same kind of pub/sub network as the RSS-mediated blogosphere. But its 140-character data packets radically lowered the threshold for interaction.

It’s not just about short-form versus long-form, though. Facebook and Google+ are now hosting conversations that would formerly have happened on — or across — blogs. Keystrokes that would have been routed to our personal clouds are instead landing in those other clouds.

I’d rather route all my output through my personal cloud and then, if/when/as appropriate, syndicate pieces of it to other clouds including Twitter, Facebook, and Google+. A few weeks back, WordPress’s Stephane Daury reminded me that I can:

@judell: since your blog is on (our very own) @wordpressdotcom, you can setup the publicize option to push your posts: http://wp.me/PEmnE-1ff.

I replied that I knew about that but preferred to crosspost manually. But I spoke too soon. My reason for not wanting to automate that push was that I wanted to tweak whether (or how) it happens. I should have realized that WordPress had thought of that:

Nice! This is an excellent step in the right direction. Thanks for the reminder, Stephane!

What’s next? Here are some things that will help me consolidate my output in my own personal cloud where it primarily belongs.

  • Different messages to each foreign cloud. Because headlines often need to be audience-specific.

  • Private to my personal cloud, public to foreign clouds. Because the public persona I shape on my blog serves different purposes than the ones I project to foreign clouds. Much of what I say in those other places doesn’t rise to the level of a public blog entry, but I’d still like to route that stuff through my personal cloud so I can centrally control/monitor/archive it.

  • Federate the interaction silos. Because now I can’t easily follow or respond to commentary directed to my blog echoes in foreign clouds. Or, again, centrally control/monitor/archive that stuff.

In my Wired.com column I often reflect on these kinds of issues. The personal cloud services I envision mostly don’t exist yet. But it’s great to see WordPress.com moving in that direction!

The new documentary about Woody Allen is a fine portrait of a long creative life. For me the best part was seeing his drawer of ideas. Early in part two of the documentary, Dick Cavett says Woody Allen once told him that he had endless ideas for movies. Cavett was amazed. “It would take me a year to have just one idea! He has many?” But then we see how the trick is done. Cut to Woody’s bedroom. He opens a drawer in his bedstand, takes out a pile of scraps, spreads them out on the bed, and talks about his process.

Woody: This is my collection, all kinds of scraps, written on hotel stationery and whatnot. I’ll ponder these things. I go through this all the time, every time I start a project.

Interviewer: Read me one note.

Woody: A man inherits all the magic tricks of a great magician. That’s all I have there. But I could see a story.

Ideas, for the most part, are just seeds. They’re cheap and plentiful. A man wakes up in the future (Sleeper). The mother of a man’s genius adopted son turns out to be a prostitute (Mighty Aphrodite). Some ideas are better than others, no doubt. But to grow them into something that matters you have to see the story. And then tell the story.

The elmcity project enables curators to create and manage calendar syndication hubs. These were never intended as destination sites, but rather as infrastructure to support what I’m calling attention hubs: newspapers, hyperlocal blogs, chambers of commerce, arts councils. Such organizations often want to build and display a comprehensive community calendar. They always fail to do so because of what I’m calling the Submit Your Event Antipattern, which looks like this:

I want attention hubs to align themselves with syndication hubs, and to give their contributors an alternative to the copy and paste approach to data syndication. But the attention hubs thus far mostly don’t want to participate. So now I’m trying a complementary approach to building networks of calendar feeds.

The elmcity concept is, after all, radically decentralized. So why should central attention hubs be gatekeepers governing the growth of these networks? They shouldn’t! They should be participants, but so should all the contributing sites.

The model I’ve come up with harkens back to the old idea of a webring: a group of sites that declare a shared interest in some topic. So, consider the hub I’ve built for New Hampshire’s Monadnock Region, which includes Keene, various surrounding towns, and (honorarily) Brattleboro, Vermont. The events flowing through this hub are, of course, tagged. Here’s the default URL for music:

http://elmcity.cloudapp.net/MonadnockNH/html?view=music

Here are the sources that are currently feeding into that view:

  • Apple Hill Center for Chamber Music
  • Fritz Belgian Fries (eventful)
  • Inferno (eventful)
  • McCue’s (eventful)
  • Metropolis Wine Bar (eventful)
  • Mole Hill Theatre (facebook)
  • Monadnock Folklore Society
  • Peterborough Folk Music Society
  • Railroad Tavern (eventful)
  • The Beacon (eventful)
  • The Listening Room at MindFull Books & Ephemera (eventful)
  • The Starving Artist (facebook)
  • Vermont Jazz Center (facebook)
  • Waxy O’Connors (facebook)

These venues represent the music scene in the Monadnock Region, and they have a collective interest in branding and promoting the scene. I’d like to help them do that. So I’ve made a widget-maker that produces a widget they can embed on their sites. Of course it doesn’t only work for the music scene in my region. Here are some other variations on the theme.

This will work for any of the hubs featured at elmcity.cloudapp.net. They’re all still in a bootstrap phase. I’ve seeded them with every iCalendar feed I’ve been able to find and categorize. The resulting views capture enough to be interesting and somewhat useful, but they aren’t yet embraced by the organizers of events. If you’re one such organizer, I’d love for you and the others who collectively form the music or arts or sports or tech scene in one of the places I’m targeting to try making and using a calendar webring.

This month’s Long Now talk is Benjamin Barber’s If Mayors Ruled the World. In the talk, which is a warmup for a forthcoming book with the same title, Barber offers an intriguing view of global governance. It won’t arise from a formal coalition among nation states, he argues. Instead it will emerge — indeed already is emerging — as cities form networks and share best practices. Stewart Brand summarizes the argument:

New York City’s “hyperactive” mayor Michael Bloomberg says, “I don’t listen to Washington very much. The difference between my level of government and other levels of government is that action takes place at the city level. While national government at this time is just unable to do anything, the mayors of this country have to deal with the real world.” After 9/11, New York’s police chief sent his best people to Homeland Security to learn about dealing with terrorism threats. After 18 months they reported, “We’re learning nothing in Washington.” They were sent then to twelve other cities — Singapore, Hong Kong, Paris, Frankfurt, Rio — and built their own highly effective intelligence network city to city, not through Washington or Interpol.

It’s convergent evolution, Barber suggests. Jam a few million people into a limited space and, no matter which nation owns that territory, you’re dealing with the same social, economic, and environmental problems. Solutions that work in one city are likely to work elsewhere. And cities, unlike nations, can’t kick the can down the road indefinitely. The garbage has to be picked up sooner rather than later.

I’m interested in this notion because I’m working on a city-scale best practice that will, I hope, take root in a few cities and then spread to others.

My sister is writing a report for which she needs facts about the growth of New Jersey’s foreign-born population. She found some numbers at census.gov, and we explored them on a Facebook thread. For my friend Mike Caulfield, who’s writing a textbook called Making Fair Comparisons, the discussion reinforced a lot of what he’s been teaching lately. For me it was a reminder that the dream of straightforward access to canonical facts remains elusive.

I wanted to check my sister’s sources. She gave me this link: http://quickfacts.census.gov/qfd/states/34000.html. That page says New Jersey’s 2010 population was 8,791,894, of which 20.3% were foreign-born — so we can compute the number of those folks to be 1,784,754.

I never did find the 2000 counterpart to that report. While searching the FactFinder site, though, I found this page where, with further searching within the page — for Geography: New Jersey and “foreign born” — I landed on a report called “SELECTED CHARACTERISTICS OF THE NATIVE AND FOREIGN-BORN POPULATIONS 2010 ACS 1-year estimates” with an ID of S0501. According to it, there were 1,844,581 foreign-born New Jerseyans, or 21% (not 20.3%) of the same 8,791,894 total.

I cited that link in our Facebook discussion, but later was horrified to find that I hadn’t really cited anything specific. The base URL never changes. If I navigate to a report on foreign-born New Jerseyans, and you navigate to the same report for Texans, or the whole US, it’s the same URL. This is catastrophic if you’re trying to have a discussion informed by canonical citation of source data.

Meanwhile I still hadn’t found the 2000 counterpart to http://quickfacts.census.gov/qfd/states/34000.html. Back on the FactFinder site I searched in vain for “SELECTED CHARACTERISTICS OF THE NATIVE AND FOREIGN-BORN POPULATIONS 2000” and for combinations of terms like “foreign-born 2000.” So I searched the web for “foreign-born 2000 census”; both Google and Bing pointed me to http://www.census.gov/prod/2003pubs/c2kbr-34.pdf. From this PDF file I was able to extract New Jersey’s total (8,414,350) and foreign-born (1,476,327) populations in 2000. Now I could complete this table (using, arbitrarily, one of the values I found for 2010 foreign-born):

Year	Total population	Foreign-born	Foreign-born share
2000	8,414,350	1,476,327	17.5%
2010	8,791,894	1,784,754	20.3%

Now, finally, we could have the real discussion. Should growth be evaluated in terms of percentages, so (20.3-17.5)/17.5 = 15.7%, or absolute numbers, so (1.784-1.476)/1.476 = 20.9%? It depends, my friend Doug Smith said, on the point you’re trying to make:

When you do the calculation on the growth of the percentages it does not take into account that the total population also grew over the 10 years. So while the percentage of foreign-born people grew by 15.7%, the actual number of foreign-born people in the state grew by 20.9%. If you’re trying to make a case that depends on the total number, like services consumed or potential market size, then you should use the growth of total numbers. If you’re trying to make a case based on percentages, for example the likelihood of encountering a foreign-born individual, then growth based on percentages would be better.

Doug added this intriguing observation:

This small amount of data actually presents a very interesting picture. The total population of NJ grew 4.5% over ten years. During that time, the natural born population grew only 1%, while the foreign-born population grew 21%. This suggests that more than 80% of the population increase over these ten years came as a result of immigration. So, while going from 17.5% foreign-born to 20.3% foreign-born doesn’t seem like much of a change to me, the implications seem huge.
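
Doug’s figures check out. Here’s a minimal sketch of the arithmetic in Python, using the numbers from the table above; the variable names are mine.

    # New Jersey totals and foreign-born counts (2000 Census; 2010 figures as cited above).
    total_2000, foreign_2000 = 8_414_350, 1_476_327
    total_2010, foreign_2010 = 8_791_894, 1_784_754

    share_2000 = foreign_2000 / total_2000    # ~17.5%
    share_2010 = foreign_2010 / total_2010    # ~20.3%

    growth_in_share = (share_2010 - share_2000) / share_2000            # ~15.7%
    growth_in_numbers = (foreign_2010 - foreign_2000) / foreign_2000    # ~20.9%

    native_2000 = total_2000 - foreign_2000
    native_2010 = total_2010 - foreign_2010
    native_growth = (native_2010 - native_2000) / native_2000           # ~1.0%

    total_growth = (total_2010 - total_2000) / total_2000               # ~4.5%
    share_of_increase = (foreign_2010 - foreign_2000) / (total_2010 - total_2000)  # ~82%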

That made me wonder about comparable figures for other states. But the prospect of digging out the numbers from a mishmash of HTML pages and PDF files killed that curiosity. What would help? Let’s give every fact its own home page on the web. OData is one good way to do that. Imagine census.gov as a web of data. A top-level path might be:

http://odata.census.gov/states

A next-level path might be:

http://odata.census.gov/states/NewJersey

A path to the ACS survey might be:

http://odata.census.gov/states/NewJersey/S0501

By year:

http://odata.census.gov/states/NewJersey/S0501/2010

And finally, paths to individual facts might be:

http://odata.census.gov/states/NewJersey/S0501/2000/ForeignBorn

http://odata.census.gov/states/NewJersey/S0501/2010/ForeignBorn

Nothing’s hidden behind a JavaScript veil or stored in a cookie. The entire web of data is navigable in a standard browser, which displays human-readable Atom feeds if set for human viewing, or raw XML or JSON if used to discover URLs for machine processing. Every URL is a canonical home page for a data set or an individual datum. User-friendly search and navigational tools are built on top of this foundation. Nobody has to deal with raw URLs and feeds. But they’re always available.
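
To make that concrete, here’s a sketch of what consuming such a web of data might look like in Python. The odata.census.gov URLs are the imaginary ones from the paths above, and the JSON shape (a “value” field per fact) is my assumption; the point is only that canonical URLs would make comparisons like the one in this post a few lines of code.

    import json
    import urllib.request

    BASE = "http://odata.census.gov"  # imaginary service, per the paths sketched above

    def get_fact(path):
        # Ask the (hypothetical) service for JSON rather than the human-readable Atom view.
        req = urllib.request.Request(BASE + path, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req) as response:
            return json.load(response)

    fb_2000 = get_fact("/states/NewJersey/S0501/2000/ForeignBorn")["value"]
    fb_2010 = get_fact("/states/NewJersey/S0501/2010/ForeignBorn")["value"]
    print("growth in absolute numbers:", (fb_2010 - fb_2000) / fb_2000)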

I’m not ungrateful for what census.gov (and so many other sites) offer. Any kind of web access to data is infinitely better than no access. But there are better and worse ways to provide access. It’s 2012. We ought to be doing better by now.

I was delighted to read this month’s Milestone column in the Ann Arbor Chronicle. Not only because it features the elmcity calendar syndication service, but also because the Chronicle’s editor, Dave Askins, connects the dots to a larger vision of community information management based on syndication of authoritative sources. Dave makes a seemingly unlikely comparison between the syndication of calendars and of crime reports. He traces a story that was reported by one publication, rewritten and retransmitted by others, and then revised by the original source in a way that wasn’t echoed by the secondaries.

The problem with the approach those organizations take to reporting the “spot news” of crime incidents is that they disconnect the information from its single, authoritative source. And as a result, any update to their original reports would need to be undertaken manually — that is, someone would need to think to do it.

Yes, exactly! Here’s the same thing in the calendar domain. The Knights Chess Club in Keene, NH, meets on Monday evenings. The venue used to be the Best Western hotel. A couple of years ago, the chess club posted that information on its website and also relayed it to our local newspaper, the Keene Sentinel. Sometime later, acting as a proxy for the chess club, I added the same info to one of the calendar feeds that flows into the Keene hub. Then I noticed the event had moved from the Best Western to the E.F. Lane, so I adjusted the event accordingly. Months later, I noticed the listing in the Sentinel. You can guess the punchline: the event was still reported to be at the Best Western! (It has since moved to Langdon Place.)

In the world I imagine and am trying to bootstrap, the chess club itself is the authoritative source for this information. The Sentinel syndicates it from the chess club, as does the chamber of commerce, and the Monadnock Shopper, and What’s Up in the Valley, and any other attention hub that cares about the chess club. When the club updates its info at the source, everybody downstream gets refreshed automatically. Attention hubs compete not by trying to capture the info exclusively, but by “amplifying the signal,” as Dave Askins so nicely puts it, in ways appropriate to their unique editorial missions and capabilities.

Here’s an architectural view of what I have in mind:

Monadnock Arts Alive is a real organization chartered to advance arts and culture in the Monadnock region. It runs an instance of an elmcity hub into which flow calendar feeds from local arts organizations, including the ones shown here.

Monadnock Arts Alive has relationships with the Mariposa Museum, the Sharon Arts Center, the Monadnock Folklore Society, and a few dozen other local arts and culture organizations. It’s appropriate for Arts Alive to merge their calendars into a view that brands them as a collection and “amplifies the signal.” Arts Alive can then retransmit that signal to one or more attention hubs that can leverage the editorial work done by Arts Alive — that is, gathering a set of feeds that represent the local arts scene, and working with sources to refine those feeds.
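
Mechanically, a combined view like the one Arts Alive publishes is just a merge of iCalendar feeds. Here’s a minimal sketch in Python using the icalendar library; the feed URLs are placeholders, not the organizations’ actual addresses.

    import urllib.request
    from icalendar import Calendar  # pip install icalendar

    # Placeholder feed URLs standing in for the organizations named above.
    ARTS_FEEDS = [
        "http://example.org/mariposa-museum.ics",
        "http://example.org/sharon-arts-center.ics",
        "http://example.org/monadnock-folklore.ics",
    ]

    merged = Calendar()
    merged.add("prodid", "-//Arts Alive combined view (sketch)//EN")
    merged.add("version", "2.0")

    for url in ARTS_FEEDS:
        with urllib.request.urlopen(url) as f:
            source = Calendar.from_ical(f.read())
        # Copy every event from the source feed into the combined feed.
        for event in source.walk("VEVENT"):
            merged.add_component(event)

    with open("arts-alive-combined.ics", "wb") as out:
        out.write(merged.to_ical())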

Attention hubs aren’t restricted by Arts Alive’s choices, though. The Sentinel, the Chamber, or What’s Up in the Valley can use Arts Alive’s combined feed if it suits them. Alternatively they can create their own views that merge some of the feeds on Arts Alive’s list with other feeds not on that list.

What emerges, in theory if not yet in practice, is a pool of sources underlying a network of hubs. In this network sources are always authoritative for their data, just as intermediaries are always authoritative for the views of those sources they present. What’s more, all intermediaries can be bypassed. If as an individual I care a lot about a particular source, say the Monadnock Folklore Society, I can subscribe to that calendar directly on my desktop or phone. Similarly, if the Chamber of Commerce has a different idea than Arts Alive about which set of feeds best represents the local arts scene, it can go direct to those sources and synthesize its own view.

It’s a very general model. We can, for example, apply it to Dave Askins’ crime reporting example. Police reports aren’t, after all, the only possible authoritative basis for crime reporting. Citizens are another. Major incidents provoke online discussion. That discussion can be aggregated by emergent tags. And it can be filtered by whitelisting particular blogs, Twitter accounts, or other sources according to their reputations. Who establishes those reputations? Attention hubs whose editorial choices define views of reality that subscribers either will or won’t find useful. If you find that an attention hub usefully aggregates citizen chatter you may decide to peer through that lens. If not you can go direct to the sources which, in this model, will always be transparently cataloged by intermediaries.

I’m working the calendar angle because I see it as a way to get a wide variety of people and organizations engaged with this model. But my hope is that they’ll be able to generalize from it, and apply it creatively in other domains.

On certain summer nights in Keene, NH, you can wind the clock back 100 years and experience baseball from another era. Our town hosts an NECBL (New England Collegiate Baseball League) team called the Swamp Bats. Their home games, at the high school’s Alumni Field, are like Norman Rockwell paintings come to life. Fans arrive early to set up lawn chairs along the third base line. Between innings children compete in egg-balancing races. A few of the players will end up in the big show, but all of them remind us why baseball is the national pastime.

We like to get out to at least a few games each year. In years past the Bats’ Google Calendar helped remind me to do that. But this year the schedule became a silo. The data is trapped in a web page. It can no longer flow, as it once did, to Keene’s calendar hub or to my personal calendar.

Happily I found a way to liberate that data. So now you can once again see the Swamp Bats calendar here and subscribe to it here. The circumstances and techniques that made this possible are worth exploring.

It turns out that while the Swamp Bats’ own calendar is a data silo, the NECBL’s master calendar does the right thing and complements its HTML view with an iCalendar feed. Crucially the NECBL’s calendar does another right thing: it adopts a consistent naming convention. The titles of events on the calendar look like this:

VM@KS 6:30pm

DW@SM 6:30pm

NB@NG 6:30pm

That’s shorthand for:

Vermont Mountaineers vs Keene Swamp Bats at Keene

Danbury Westerners vs Sanford Mainers at Sanford

New Bedford Bay Sox vs Newport Gulls at Newport

To create an iCalendar feed for Keene home games, I used an iCalendar filter to find “@KS” in the titles of the NECBL feed and produce a new feed with just the matching events.
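
For the curious, here’s roughly what that filter amounts to, sketched in Python with the icalendar library. The source URL is a placeholder for the NECBL master feed, and “@KS” is the league’s naming convention described above.

    import urllib.request
    from icalendar import Calendar  # pip install icalendar

    SOURCE = "http://example.org/necbl/master.ics"  # placeholder for the NECBL master feed
    MATCH = "@KS"  # Keene Swamp Bats home games, per the naming convention

    with urllib.request.urlopen(SOURCE) as f:
        league = Calendar.from_ical(f.read())

    keene = Calendar()
    keene.add("prodid", "-//Swamp Bats home games (sketch)//EN")
    keene.add("version", "2.0")

    # Keep only the events whose titles mark Keene as the home team.
    for event in league.walk("VEVENT"):
        if MATCH in str(event.get("SUMMARY", "")):
            keene.add_component(event)

    with open("swamp-bats-home.ics", "wb") as out:
        out.write(keene.to_ical())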

Since I’ve recently started an elmcity hub for Montpelier, Vermont, I did the same thing there. Filtering on “@VM” gives a view for the Vermont Mountaineers’ home games that you can see here and subscribe to here.

These maneuvers suggest an interesting twist on the free rider problem. When one person upgrades the quality of an online information source and makes the improved source available to everyone, the free ride everyone can enjoy isn’t a problem at all. It’s a solution.

Last week, for example, I visited Montpelier to speak with a state official about the elmcity project. He described a problem that’s familiar to many parents. His kids are in a sports league; their schedule comes to him formatted as an Excel spreadsheet; he can’t merge that Excel schedule with his personal calendar. So he transcribed the data into Google Calendar. And having done so, he shared it with other parents. That’s a free ride the sports league ought to provide. But in this era of abundant free cloud services, any parent can offer it instead, merely by satisfying a personal need. Web users who become web makers can turn the free rider problem into a solution.

My recent column on smart meters came to the attention of Richard Stallman, who worries about the privacy and surveillance issues I alluded to. In the course of our email discussion a question came up that I’d like to answer but can’t. When a smart meter is utility-owned, rather than DIY like mine, do any of the providers offer choice with respect to the granularity of the data feed that’s phoned home? In theory it would be possible to opt out of a realtime feed and only use the meter to automate the monthly accounting that’s currently still done by a visiting person. I doubt that any utility offers that but I’d like to be proved wrong, and either way it would be nice to know for sure.

Richard, by the way, would like to add the privacy/surveillance issues arising from smart meter deployment to his list of causes, and is looking for someone who wants to lead that charge. If you’re interested and qualified you are welcome to contact him. Here, as I have now learned firsthand, is the protocol. You’ll write to him at rms@gnu.org. You’ll receive an autoreply that begins:

I am not on vacation, but I am at the end of a long time delay. I am located somewhere on Earth, but as far as responding to email is concerned, I appear to be well outside the solar system.

After your message arrives at gnu.org, I will collect it in my next batch of incoming mail, some time within the following 24 hours. I will spend much of the following day reading that batch of mail and will come across your message at some point. If I write a response immediately, it will go out in the next outgoing batch–typically around 24 hours after I collected your message, but occasionally sooner or later than that. Please expect a minimum delay of between 24 and 48 hours in receiving a response to your mail to me.

If a conversation ensues, it will happen on that cycle. This strikes most people as odd. As Jeremy Zawodny once noted, though, Richard Stallman is Getting Things Done. Old School.

A couple of weeks ago all the posts here became invisible. There didn’t seem to be anything I could have done wrong to cause that, so I wrote to the support team at WordPress.com about it. I got a prompt acknowledgement from Erica V. that something was, indeed, wrong. Soon after that she confirmed that the problem was fixed. That left me feeling pretty good about WordPress.com. It’s a free service, after all, and I’m only a customer in a rather minimal way: I pay for domain name redirection and for the ability to edit my CSS file. Yet the customer service I received was outstanding.

Then, last week, something else went wrong. The widgets in the right column were getting bumped down by a post that didn’t belong in that column. I tried a few debugging strategies and then wrote to WordPress.com support again. Here was the prompt response from macmanx:

You’re all fixed up now. You had an extra div tag in “Meta-tools for exploring explanations,” but I removed it.

Oops. I know how to validate HTML and should have caught that myself. It’s not something I’d have wanted to bother the support team with. But they didn’t make me feel like a jerk. Again the problem was handled promptly and cheerfully.

You know, we tip for all sorts of services in the physical world, including ones delivered far less capably. Why not tip for excellent online customer service? If there were a tip jar for WordPress.com I’d have used it both times.

Yesterday’s post contains an error so embarrassing that I was briefly tempted to yank the whole thing. But of course That Would Be Wrong. What’s more, the error supports the larger point I was trying to make before I derailed myself.

I was talking about Bret Victor’s notion of explorable explanations, which he illustrates on a page called Ten Brighter Ideas. I’d looked at it before, but when I revisited it yesterday I had trouble believing that the following claim could be true:

If every US household replaced 1 incandescent bulb with a compact fluorescent bulb, we’d save 11.6 TWh (terawatt hours), which is the energy equivalent of 1.5 nuclear reactors or 9.5 coal plants.

Some people intuit what these units and quantities mean. But most of us — me included — don’t. And even experts are prone to error. A few months ago I spotted one such error. A Ph.D. economist wrote an editorial that consistently used billions of barrels of oil rather than, as intended, millions. The column was syndicated to hundreds of newspapers, and so far as I know nobody noticed until I happened to check.

What prompted me to check? My friend Mike Caulfield, who’s been teaching and writing about quantitative literacy, says it’s because in this case I did have some touchstone facts parked in my head, including the number 10 million (roughly) for barrels of oil imported daily to the US.

The reason I’ve been working through a bunch of WolframAlpha exercises lately is that I know I don’t have those touchstones in other areas, and want to develop them. Having worked a few examples about global energy, I thought I’d built up some intuition in that realm. But in this case the intuition that prompted me to check Ten Brighter Ideas was wrong.

When I did check, things went completely off the rails:

If 111 million households each swap out one 75W bulb for a 25W bulb, saving 50W each for 180 hours (i.e. half of each day for a year), we’re looking at 100,000,000 * 50W * 180hr = 999GWh. We’re off by a factor of about 1000.

As Pasi points out in a comment:

Hmm, “half of each day for a year” is not 180 hours, but 365*24/2=4380 hours?

My brain thought days, my fingers wrote hours. I think I’m slightly dyslexic when it comes to units, and so I’m prone to that sort of error. It’s another reason why I use WolframAlpha to check myself. When I do that, I try to take advantage of WolframAlpha’s marvelous ability to automate conversions. For example, during an earlier exercise I needed to visualize the gallon equivalent of the energy released by combustion of one kilogram of gasoline. Normally this would entail looking up the density of gasoline, 0.726 g/cm3, applying that constant, and then converting to gallons. But in WolframAlpha the phrase density of gasoline is meaningful and can be used directly, like so:

http://www.wolframalpha.com/input/?i = ( 1 kilogram / density of gasoline ) in gallons

Similarly, here’s what I could have done to check the Ten Brighter Ideas claim:

http://www.wolframalpha.com/input/?i = (1/2 year) * 111,000,000 * 50W as TWh

That comes to 24 TWh, which is in the ballpark of the claimed 11.6. Maybe Bret assumed lights are cumulatively on 1/4 of the time; I haven’t checked, but if so that would nail it.
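
The same check works in ordinary code if a units library carries the conversions. Here’s a sketch using Python’s pint package; the one-quarter-of-the-time variant is just my guess at Bret’s assumption, as noted above.

    import pint  # pip install pint

    units = pint.UnitRegistry()

    # 111 million households, each saving 50 W, lights on half of each day for a year.
    saved = 111_000_000 * 50 * units.watt * 0.5 * units.year
    print(saved.to("terawatt_hour"))          # ~24 TWh

    # If the lights are instead assumed to be on a quarter of the time:
    print((saved / 2).to("terawatt_hour"))    # ~12 TWh, close to the claimed 11.6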

Why didn’t I write the WolframAlpha query that way in the first place? Because, I think, we still expect to do a lot of basic computation ourselves. You want the answer in hours? Put hours in. How many? You can figure that out. But should you?

I think it depends. It’s good to exercise your inboard computer — not only to calculate results but also to store and retrieve certain touchstone values. But it’s also good to delegate calculation, storage, and retrieval to outboard computers that can do these things better than we can — if that delegation can be smooth. WolframAlpha points to one way that can happen; Bret Victor’s simulations point to another.
