A geek anti-manifesto

The other day my colleague Scott Hanselman wrote a useful essay called 10 Guerilla Airline Travel Tips for the Geek-Minded Person. It’s a mixture of technical and social strategies. The tech strategies include marshaling data with the help of services like Tripit, FlightStats, and SMS alerts. The social strategies include being nice to service reps, and using the information you’ve marshaled in order to make precise requests that they’re most likely to be able to satisfy.

Scott writes:

I’m a geek, I like tools and I solve problems in my own niche way.

That statement, along with the essay’s tagline — …Tips for the Geek-Minded Person — has been bothering me ever since I read it. Why is it geeky to marshal the best available data? Why is it geeky to use that data to improve your interaction with people and processes?

My Wikipedia page includes this sentence:

Udell has said, “I’m often described as a leading-edge alpha geek, and that’s fair”. 1

I did say that, it’s true. But I’ve come to regret that I did. For a while I thought that was because geek was once defined primarily as a carnival freak. That’s changed, of course. Nowadays the primary senses of the word are obsessive technical enthusiasm and social awkwardness. Which is better than being somebody who bites the heads off chickens. But it’s still not how I want to identify myself. Much more importantly, it’s not how I want the world to identify the highest and best principles of geek identity and culture.

Fluency with digital tools and techniques shouldn’t be a badge of membership in a separate tribe. In conversations with Jeannette Wing and Joan Peckham I’ve explored the idea that what they and others call computational thinking is a form of literacy that needs to become a fourth ‘R’ along with Reading, Writing, and Arithmetic.

The term computational thinking is itself, of course, a problem. In comments here, several folks suggested systems thinking, which seems better.

Here’s a nice example of that kind of thinking, from Scott’s essay:

#3 Make their job easy

Speak their language and tell them what they can do to get you out of their hair. Refer to flights by number when calling reservations, it saves huge amounts of time. For example, today I called United and I said:

“Hi, I’m on delayed United 686 to LGA from Chicago. Can you get me on standby on United 680?”

Simple and sweet. I noted that UA680 was the FIRST of the 6 flights delayed and the next one to leave. I made a simple, clear request that was easy to grant. I told them where I was, what happened, and what I needed all in one breath. You want to ask questions where the easiest answer is “Sure!”

I see two related kinds of systems thinking at work here. One engages with an information system in order to marshal data. Another engages with a business process — and with the people who implement that process — in a way that leverages the data, reduces process friction, and also reduces interpersonal friction.

These are basic life skills that everyone should want to master. If we taught them broadly, and if everyone learned them, then this sort of mastery wouldn’t attract the geek label. But we don’t teach these skills broadly, most people don’t learn them, and the language we use isn’t our friend. If systems thinking is geeky then only geeks will be systems thinkers. We can’t afford for that to be true. We need everyone to be a systems thinker.


1 Actually I’d say that Scott Hanselman is a leading-edge alpha geek. I am, at best, a trailing-edge beta or gamma geek. But if someone were to remove the word entirely from my Wikipedia page, I’d be fine with that. I no longer want to be labeled as any kind of geek.

Atul Gawande on why heroes use checklists

The sound track for yesterday’s run was a compelling talk by Atul Gawande about his new book The Checklist Manifesto, which grew from an article in the New Yorker. Although his story is grounded in the practice of health care, the lessons apply much more broadly to every field in which we grapple with complexity.

For most of human history, he argues, we were limited by lack of knowledge. We just didn’t know how to do things right. Now that knowledge is abundant the enemy is no longer ignorance but rather ineptitude — the failure to marshal and apply what we know.

The surprising thing Atul Gawande learned, and now passionately conveys, is that simple checklists turn out to be extraordinarily powerful tools for marshalling knowledge and for ensuring its correct use.

The biggest roadblock is pushback from highly trained experts who are offended by the idea. After 8 years of medical school, and in a regime that already demands vast amounts of paperwork, why should a doctor have to check off basic items on a list? Because we are fallible in the face of complexity, Gawande says, and because checklists work. Although he led research in this area he was skeptical about adopting checklists in his own operating rooms. But when he did, he made two critical discoveries. First, well-made checklists were easy to use. Second, they almost always caught errors.

Most of those errors turned out to be non-critical. Only a few of the catches saved lives. That alone, of course, is enough reason to adopt checklist discipline. But it was shocking for the medical teams to discover that simple and basic procedures, which they thought were being carried out with 100% fidelity, in fact weren’t.

We are willing to tolerate failure when it results from unavoidable ignorance, Gawande says. If we really don’t know how to cure a disease, then OK. You tried your best, you failed, that’s how it is. But if we do know, and screw up, that’s unforgivable. What do you mean she died because somebody forgot to administer the antibiotic, or to wash his hands? Unacceptable.

The struggle with complexity I know best happens in the realm of software. What do our checklists look like? One obvious form is the test suite. If my software keeps passing its tests as I evolve it, there’s still plenty that can and will go wrong, but at least I know it still does what the tests say it does. Once, recently, I deployed a version of the service I’m building that failed in a way my tests would have caught. How did that happen? I was so sure I hadn’t changed anything that the tests would catch that I didn’t bother to rerun them. That’s an unforgivable lapse of discipline I don’t plan to repeat.
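
Here is a minimal sketch of that kind of checklist discipline in Python. It is illustrative only — the module and deploy script names are hypothetical, and my actual service isn't built this way — but it shows the shape of the idea: run the tests, and deploy only if every item on the list passes.

[sourcecode language="python"]
# A toy "pre-deploy checklist": the deploy step refuses to run unless
# every test passes. Module and script names here are hypothetical.
import subprocess
import sys
import unittest

class DeployChecklist(unittest.TestCase):
    def test_feed_parser_handles_empty_input(self):
        from myservice import parse_feed   # hypothetical module under test
        self.assertEqual(parse_feed(""), [])

if __name__ == "__main__":
    result = unittest.main(exit=False).result
    if not result.wasSuccessful():
        sys.exit("Checklist failed; not deploying.")
    subprocess.check_call(["./deploy.sh"])  # hypothetical deploy step
[/sourcecode]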

But software tests aren’t really the sort of checklist that Gawande writes and speaks about. Here’s something closer to what he means: Best practices in web development with Python and Django. That list comes from Christopher Groskopf, a web developer at the Chicago Tribune, who writes:

In our fast-paced environment there is little justification for being confused when it could have been avoided by simply writing it down.

We need to recognize and honor this kind of work. It is unsexy but heroic, and I use that word deliberately. The power of the checklist discipline, Gawande says, should prompt us to rethink our definition of heroism. Consider Capt. Chesley “Sully” Sullenberger:

It was fascinating to watch people responding to the miracle on the Hudson. All of us, staring in amazement, thinking what a hero he was. But none of us willing to listen to what he really was saying. He kept saying it wasn’t flight ability, but instead adherence to discipline, and teamwork. But it was as if we couldn’t process what he was trying to tell us.

Because there were checklists, and because everybody used them, Sully could rise above the dumb stuff and focus on the one key decision for which human judgement was required. The heroic part of that flight was not the flight ability of Capt. Sullenberger, it was the willingness of the entire team — including the flight attendants, who then acted through their protocols to get the passengers off that plane in three minutes — to acknowledge their fallibility, admit that they could fail by relying only on training and memory, and exercise the discipline to overcome that fallibility.

The talk raises important questions for practitioners in every field. What makes checklists easy to use? What makes them effective? In the realm of software, we have plenty of examples to look at: django, WordPress, C#, ASP.NET, etc. It might be fruitful to explore these, merge similar lists, and codify stylistic patterns that can govern all such lists.

Hey Honda, I paid for that data!

Yesterday at the Honda dealer’s service desk I found myself in an all-too-familiar situation, craning my head for a glimpse of a screenful of data that I paid for but do not own. Well, that’s not quite true. I do have a degraded form of the data: printouts of work orders. But I don’t have it in a useful form that would enable me to compute the ownership cost of my car, or share its maintenance history with owners of similar cars so we can know which repairs have been normal or abnormal.

Although we tend to focus on the portability of our health care data, the same principles apply to all kinds of service providers. And in many of those cases, we would be less concerned about the privacy of the data.

Why, then, don’t service providers and their customers co-own this data? Is it because providers want to keep high-quality electronic data, while only dispensing low-quality paper data, in order to make their services stickier? It would make a certain kind of sense for Honda to think that way, but I don’t think that’s the answer. Instead:

1. Nobody asks for the data.

2. There’s no convenient way to provide it.

We’ll get over the first hurdle as our cultural expectations evolve. Today it would be weird to find an OData URL printed on your paid work order. In a few years, I hope, that will be normal.

We’ll get over the second hurdle as service providers begin to colonize the cloud. One of the key points I tried to make in a recent interview about cloud computing is that cloud-based services can flip a crucial default setting. If you want to export access to data stored in today’s point-of-sale and back-end systems, you have to swim upstream. But when those systems are cloud-based, you can go with the flow. The data in those systems can still be held closely. But when you’re asked to share it, the request is much easier to satisfy.

Talking with Duncan Wilson about architecture in the age of networked services

My guest for this week’s Innovators show is Duncan Wilson, an engineer with the global consulting firm Arup. We met at the 2010 Microsoft Research Social Computing Symposium, where the theme was city as platform. His presentation, and our follow-on conversation, prompted me to read a couple of books that had long been in my queue: Stewart Brand’s How Buildings Learn and Christopher Alexander’s A Pattern Language.

Reading both of those books, I felt an implicit connection between principles that I’ve learned in an IT context (e.g., separation of concerns, networks of loosely-coupled services), and principles that can inform the practice of architecture — at the scale of buildings, but also of whole cities. Duncan Wilson, and others lucky enough to be working at the forefront of 21st-century architecture, are making that connection explicit.

Consider, for example, the movement of goods in and out of a city. You’d like to consolidate that activity at the perimeter and reduce truck traffic in the core. That’s doable, but only if retailers and suppliers are willing to share information about what they’re shipping. That began to happen in the late 1990s, Duncan says, when retailers and suppliers began to share trucks. Doing the same kind of thing for a city, as Arup’s engineers envision, would entail both a physical arrangement of consolidation centers on the perimeter, and a virtual arrangement of shared data.

Information, Duncan says, is becoming another of the raw materials from which the built environment is made.

Here’s a different example of IT principles crossing over into other realms, from a podcast I listened to on yesterday’s hike:

When you offer multiple services using the same devices, through the same interfaces, you open up opportunities for creative thinking in the storage community.

If you’re talking about data storage, and the frame of reference is IT, that’s not a very compelling statement. We haven’t fully internalized this service-oriented and network-based way of thinking, but we’re getting there.

But that quote doesn’t refer to data storage, it refers to energy storage. The podcast was Stephen Lacey’s excellent Inside Renewable Energy. In this episode, innovators at Ice Energy and A123 describe business models that are deeply informed by the idea of networks of shared services.

Upcoming talk at Kynetx Impact

As Phil Windley mentioned the other day, I’ll be speaking at the Kynetx Impact conference, April 27-28 in Salt Lake City. Last year I interviewed Phil about what Kynetx does. It’s hard to boil it down to an elevator pitch without examples, so here’s one that came up today: Scott Hanselman’s Put Missing Kids on your 404 Page application.

Inspired by a PHP solution to the problem, Scott set out to replicate it for ASP.NET.

But then I realized that a server-side solution wasn’t really necessary.

Could I do it all on the client side? This way anyone could add this feature to their site, regardless of their server-side choice.

One next step, as Scott points out, is to add geolocation so the list of kids you see will be more relevant to you. But there are lots of ways to contextualize that list based on aspects of your identity. And this is what Kynetx applications do: Contextualize your experience of the web based on aspects of your identity.

My own interest in this idea dates back to the LibraryLookup project, which was an early demonstration of the power of client-driven contextualization. It evolved from a bookmarklet to a browser plug-in, but then stalled there for lack of a ubiquitous client-side technology.

Now there is: jQuery. What Scott’s example shows, as do all Kynetx applications, is that we’re ready to make clients more equal partners in the dance of the web. Among other things, this possibility raises thorny issues about the control of content — issues that I explored in a 2005 screencast.

But there’s also a deep connection between Phil’s work and the ongoing saga of digital identity. Phil wrote a book on that subject, and has been a key organizer of the Internet Identity Workshop. When he started Kynetx he wasn’t really thinking about a tie-in to Information Cards and the identity metasystem. But the connection emerged organically.

In a Kynetx-enhanced version of the Missing Kids 404 Page application, your browser would present selected aspects of your identity to the services that provide the data, and a Kynetx application would personalize that data in ways meaningful to you.

The Internet began as a network of peers. That arrangement didn’t last long, and there have been several efforts to restore the original symmetry. In the early 2000s, during Napster’s heyday, there was a flurry of interest in peer-to-peer architectures. Thanks to today’s more capable and more standardized browsers, we’re seeing a new wave of interest. I’m looking forward to hanging out at the Kynetx conference and meeting folks who are riding that wave.

Talking with Eric Frank and Jon Williams about Flat World Knowledge, a commercial publisher of open textbooks

My guests for this week’s Innovators show are the co-founder (Eric Frank) and CTO (Jon Williams) of Flat World Knowledge, a new textbook publishing company with a refreshingly disruptive business model. Like any other textbook publishing company, Flat World is building up a stable of authors with whom it has exclusive (or, in this case, semi-exclusive) relationships. Authors assign the Creative Commons Attribution-Noncommercial-Share Alike (by-nc-sa) license to their work. Flat World makes the books freely available online, in HTML and PDF formats. It sells print-on-demand copies of the books direct to students, along with a variety of study aids.

Ebooks are another potential revenue source. But so far, Flat World has found that students overwhelmingly prefer to read printed books. Eric Frank has a wonderfully pragmatic view:

Publishers need to be device-agnostic in the broadest sense. The printed book is one of the devices we target.

As and when students indicate a preference for ebook formats, Flat World will provide them. It does seem that ebook readers are on the cusp of mainstream adoption. But it has seemed that way before. “It would be tragic,” Eric says, “to bet your business on that.”

The bet that Flat World is making is on a neutral format, DocBook XML, from which any other format can be automatically derived. I’ve done a lot of my own publishing to multiple formats from a single source. Back in 2003, when XML support was added to Microsoft Word — which was and is the tool of choice for book-length writing — I thought it would end the painful process of converting Word manuscripts into published formats. That mostly hasn’t happened yet. Jon Williams thinks that’s because, until recently, publishers didn’t need to automate the production of various electronic formats. As that need arises, we should finally begin to see end-to-end automation from original manuscript to published formats.

Flat World is a commercial publisher of open textbooks, and Eric is careful to spell out what he means by open. It doesn’t simply mean free, or collaborative, although there are both free and collaborative ways to use Flat World books. It means precisely what the by-nc-sa license says: You are free to use, share, and remix, with attribution, but not for commercial gain. Whenever a work yields commercial value, in any of the ways it might, that money must flow back to Flat World and its stable of authors.

The dominant revenue stream is print-on-demand. If Flat World is able to scale out its catalog — and that’s the biggest if the company faces — its printed textbooks will be an affordable alternative to conventional offerings. Meanwhile, teachers who adopt Flat World books can adapt them to their needs. In theory, a Flat World book can become a nexus of collaboration, grow more valuable as a result, convert some of that value into revenue, and share that revenue with a community of collaborators as well as with the publisher and author. It’s early days, and that hasn’t happened yet. But I like the way that Flat World has opened a door to that possibility, without betting its business on lots of people walking through it anytime soon.

Why the Maya used a 260-day calendar

Last night I attended a lecture by Vincent Malmström who, in 1973, published a paper in Science proposing an answer to the mysterious (and still controversial) question: Why did the Maya use a 260-day calendar?

Malmström’s 1997 book Cycles of the Sun, Mysteries of the Moon, which he has also made freely available here, tells the whole story from his point of view. It’s a remarkable tale of geography, religion, culture, computation, science, and human foibles.

The Maya actually used three different calendars. The Tzolk’in ran on a 260-day cycle, and the Haab’ used a 365-day cycle. Then there was the Long Count, which counted days since a mythical beginning of time and also included the other two.

The Long Count’s start date was written, in its full form, like this:

0.0.0.0.0, 4 Ahau, 8 Cumku

The first five digits measure days in units of 144,000, 7,200, 360, 20, and 1. 4 Ahau is a Tzolk’in day, based on a cycle of 13 numbers with a cycle of 20 day names. 8 Cumku is a Haab’ day, based on 18 20-day months.

Today’s date is 12.19.17.2.3, which Wikipedia’s Long Count page helpfully computes for you using this markup:

Today, {{CURRENTDATE}}, in the Long Count is {{Maya date}} (GMT correlation)

(Here GMT doesn’t stand for Greenwich Mean Time, but rather for Goodman-Martinez-Thompson.)

But today might be 12.19.17.2.2, according to this calculator. There has, evidently, been epic confusion and controversy about whether the mythical start date was 584,283 or 584,284 or 584,285 days ago. Thompson originally thought 584,285, then changed his mind and decided on 584,283.
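
The arithmetic behind those competing answers is simple enough to sketch in a few lines of Python. The place values come from the notation described above, and the correlation constant is treated as the day number assigned to 0.0.0.0.0, which is how the GMT correlation is usually stated. The sample date is one on which the 584,283 constant reproduces the 12.19.17.2.3 quoted above.

[sourcecode language="python"]
# A rough sketch: Gregorian date -> Maya Long Count, for a chosen correlation
# constant (the Julian Day Number assigned to 0.0.0.0.0, 4 Ahau, 8 Cumku).
from datetime import date

PLACE_VALUES = [144000, 7200, 360, 20, 1]   # baktun, katun, tun, uinal, kin

def long_count(d, correlation=584283):
    jdn = d.toordinal() + 1721425   # Julian Day Number (at noon) for a Gregorian date
    days = jdn - correlation        # days elapsed since the Maya epoch
    digits = []
    for unit in PLACE_VALUES:
        digits.append(days // unit)
        days %= unit
    return ".".join(str(n) for n in digits)

for c in (584283, 584284, 584285):
    print(c, long_count(date(2010, 2, 18), c))
# 584283 -> 12.19.17.2.3
# 584284 -> 12.19.17.2.2
# 584285 -> 12.19.17.2.1
[/sourcecode]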

Prof. Malmström likes 584,285, which fixes the start date as August 13, 3114 B.C. Why? Thompson didn’t think there was any astronomical basis for the 260-day calendar, but Malmström figured there had to have been. And he wondered where, in that part of the world, you might observe a 260-day astronomical cycle.

It turns out that at latitude 14.8° N, the sun is directly overhead on August 13 passing southward, and again on April 30 passing northward, an interval of 260 days. August 13 is also the day after the peak of the Perseid meteor shower. Malmström writes:

The signs were therefore unmistakable. First the heavens would give their notice. All night long the skygazer would watch as stars burst from behind the towering mountains to the northeast and flashed across the sky. And the following morning, as the sun arched higher and higher across the heavens, he would watch as the shadow it cast grew steadily shorter, until, as the sun reached its zenith, its shadow completely disappeared. This then, he decided, was the day for his count to begin.

Why count days? If you’re planting maize, you need to calibrate carefully to the arrival of the monsoon rains. The two solar passages correspond roughly to the beginning of the rainy season at the end of April, and the harvest in mid-August.

Note that these passages, and the associated latitude 14.8° N, don’t apply to the Maya in the Yucatán Peninsula, but instead to an earlier Olmec civilization to the southwest, on the Pacific coast near what is now the border between Mexico and Guatemala. The Mayan new year was July 26, not August 13. But the 260-day calendar predated the Mayans by a millennium.

Just a few decades after its inception, the 260-day “sacred” calendar was augmented by a 365-day “secular” calendar. The problem was that the sacred calendar didn’t quite work. There were 13 20-day cycles — or 20 13-day cycles — during the sun’s southward passage, and what seemed like 8 more 13-day cycles during the northward passage. So when the calendar started running, things seemed to work out — albeit in a delightfully curious way.

Each time the zenithal sun passed overhead on its way south, a new 260-day cycle would begin on a day numbered “1” but with a different name. Thus, the skygazer watched as the beginning of each successive cycle shifted from “1 Alligator” to “1 Snake” to “1 Water” to “1 Reed” and then to “1 Earthquake.”

That didn’t last long, though.

Where the priest had erred, of course, was in concluding that the cycle of the sun could be measured in 28 “bundles” of 13 days. This meant that he had equated its annual migration through the heavens with an interval of 364 days, when in actuality it took about a day and a quarter longer than that. Thus, after only four years had elapsed his count was already off by 5 days. This might go unnoticed by the commoners at first, but certainly, as the error increased with each passing year, it wouldn’t be long before “the cat was out of the bag.”

What a colossal screwup! I like to imagine the priests furiously backpedaling.

OK, wait, I know we said 260, but it’s really 365, but we’ll keep both, don’t worry, it’ll work out, trust us, we know what we’re doing.

Of course the fun never stops. We’re less than two years away from Y 13.0.0.0.0. That’s in 2012, on Dec 23. Or on Dec 22, or Dec 21, depending on which correlation constant you choose. On one of those dates the world will end. Or not. Prof. Malmström suggests you choose 584,285. That’ll give you two extra days to put your affairs in order.


For more on the endlessly weird human reckoning of time, see A literary appreciation of the Olson/Zoneinfo/tz database.

Visualizing the names of your Twitter lists

A while ago I asked the Lazy Web for a service that would produce a tag cloud of the names of the lists on which a Twitter user appears. Mine, for example, would look like this:

The Lazy Web seems not to have taken up the challenge, so I took a crack at it. The solution I came up with is a single-page application, which is just a web page that uses HTML, CSS, and Ajax to do something that’s (hopefully) interesting and useful.

Here’s the page: http://jonudell.net/NamesOfTwitterListsFor.html

It defaults to my Twitter name but you’ll of course want to try yours, and those of others you’re curious about. The first time through, you’ll be prompted to authenticate to api.twitter.com. This looks like the password anti-pattern, but really isn’t. You’re authenticating yourself to the Twitter API in the same way that you normally do to the Twitter website.

Note that since the API call used to build the tag cloud is rate-limited, queries through this page will be charged against your daily allotment of Twitter API usage, just as when you use client applications like TweetDeck or Seesmic.
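
Under the hood there isn’t much to it. The page’s JavaScript just counts the words that appear in your list names and scales them. Here’s the same idea sketched in Python rather than the page’s actual code, with a hardcoded example list standing in for the Twitter API response:

[sourcecode language="python"]
# The gist of the tag cloud: count the words that appear in the names of
# the lists a user belongs to, then map counts onto font sizes.
from collections import Counter

def tag_cloud(list_names, min_px=12, max_px=36):
    words = Counter()
    for name in list_names:
        words.update(name.lower().replace("-", " ").split())
    if not words:
        return []
    biggest = words.most_common(1)[0][1]
    return [(word, min_px + (max_px - min_px) * count // biggest)
            for word, count in words.most_common()]

# Example input; the real page fetches these names from api.twitter.com.
names = ["tech-writers", "python-people", "semantic-web", "writers", "web-thinkers"]
for word, size in tag_cloud(names):
    print('<span style="font-size:%dpx">%s</span>' % (size, word))
[/sourcecode]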

What will your tag cloud say about you? I don’t think you’ll be surprised. It’s just another of the unique signatures written for us by others. That those signatures do get written, though, and that they can be discovered and read, never ceases to surprise me.

The dynamics of single-page applications also never cease to surprise me. In this case, a tiny 4K web page is all that’s delivered from my modestly-equipped personal webserver. It would probably survive a Slashdotting. If not, the page could be hosted on any other server, or even on a local drive, and would continue to work the same way.

I’m also using jQuery, in this case served from the Microsoft content delivery network, so that’s unlikely to be a bottleneck. The only real limit is Twitter API usage, and that’s spread across all the Twitter users who authenticate through the page.

When you arrange and deploy a tiny amount of HTML, CSS, and JavaScript in this way, you can create a lot of leverage!

Uses of pattern language in the urban century

I’ve long been familiar with the idea of software patterns. But I didn’t connect it to its roots in the architectural writings of Christopher Alexander until I recently listened to Kent Beck’s keynote at the 2008 Rails conference. Kent was deeply influenced by The Timeless Way of Building. That book wasn’t available in my local library. But the companion volume, A Pattern Language: Towns, Buildings, Construction, was. It’s been a revelation to read it for the first time, more than thirty years after it was published, through lenses formed by my experience with software and networks.

Here’s how A Pattern Language summarizes a pattern called FOUR-STORY LIMIT:

Therefore, in any urban area, no matter how dense, keep the majority of buildings four stories high or less.

And here’s how the Portland Pattern Repository summarizes the Singleton Pattern:

Therefore, let the class create and manage the single instance of itself, the Singleton. Wherever in the system you need access to this single instance, query the class.

The stylistic allusion shows a direct literary influence flowing from architectural pattern language to software pattern language. Alexander’s book, by the way, is a pre-Web hypertext. The pattern called FOUR-STORY LIMIT (21), for example, refers to NUMBER OF STORIES (96), DENSITY RINGS (29), BUILDING COMPLEX (95), HOUSING HILL (39), and HIGH PLACES (62). Each of these numbered patterns links to a set of related patterns, as does each page in the Portland Pattern Repository — which was also, of course, the Ur-wiki from which all things wiki are descended.

I suspect we’ve yet to fully elaborate the connections between software, architecture, and networks. Consider these pattern names from Alexander’s 1977 book:

THE DISTRIBUTION OF TOWNS
WEB OF PUBLIC TRANSPORTATION
NETWORK OF LEARNING
WEB OF SHOPPING
ACTIVITY NODES
NECKLACE OF COMMUNITY PROJECTS
CONNECTED PLAY
NETWORK OF PATHS AND CARS
CIRCULATION REALMS

These evocative names, and the sketches that accompany them, arise from a deeply network-oriented way of thinking. Many of the higher-level patterns express core values about connectivity and decentralization. And those values resonate more powerfully now, in our Net-aware world of 2010, than they might have in 1977.

Some of the prescriptions in A Pattern Language can seem absurd. For example, Alexander argues that an optimal urban core should serve a “catch basin” of about 300,000 people, that these cores should be widely distributed, and that each should specialize in some way that makes it world-class. Why?

The problem is clear. On the one hand people will only expend so much effort to get goods and services and attend cultural events, even the very best ones. On the other hand, real variety and choice can only occur where there is concentrated, centralized activity; and when the concentration and centralization becomes too great, then people are no longer willing to take the time to go to it.

Which is fine in theory, but we’ve already built megalopolises surrounded by suburbs. It’s not like we can do it over.

Except when we can. In America the most striking examples are in Michigan where I lived for many years. Detroit, once a city of two million, is being recreated as a city that may end up at less than a third that size. What will become of the rest? It just might be plowed into farmland. If so, a pattern called CITY COUNTRY FINGERS may turn out to be a useful guide.

Likewise there are plans afoot to gather the remaining population of Flint into a few viable neighborhoods and let the vacated land become parks and forests. If that happens, many of the ideas in A Pattern Language, about how to organize neighborhoods and their transportation networks, could come into play.

In Asia, meanwhile, entire new cities are being built from scratch. I recently met an engineer who works for the global consulting firm Arup. For one of their projects, he told me they’re using a simulation of wind flow as one of the constraints on the layout of streets and buildings. The layout is also informed by RING ROADS and LOCAL TRANSPORT AREAS, patterns that yield a tiered distribution network which optimizes the use of delivery trucks.

Networked software is highly malleable, and we take for granted that we can try out different design patterns. The built environment rarely affords the same opportunity. But in this century of urbanization, as circumstances force us to rethink our energy, transportation, and settlement networks, it may turn out to be softer than we suppose, and more open to the influence of pattern languages.

Shiny new uses for familiar old things

Last year I applied for a grant from a philanthropic group, the Knight Foundation, that wants to save journalism by funding the development of new technological methods. I was conflicted about applying because the project I put forward is already well supported by my employer, Microsoft. But since my proposal was to redistribute all of the grant, as a way of exploring an idea about improving the flow of information in communities, I thought it was fair to give it a shot.

My proposal advanced to the final round and was then rejected. Given my initial ambivalence I was OK with that. But the stated rationale has been bugging me ever since. The letter said:

Because there are thousands of proposals and only a few of them advance, we are able to choose only the most innovative ideas. These are new kinds of technologies or techniques, usually things we have never heard of before.

The meme woven into that paragraph has a name: Shiny New Thing syndrome. It is a plague. Technology journalism feeds it. Thought leaders, including Dave Slusher, Jeremy Zawodny, and Jeff Atwood, have denounced it.

I’m clearly biased, since all my best work involves creative remixing of ideas and technologies that are as common as dirt. But I do wonder about the harm that’s done when we equate innovation with shiny new things.

Old things are full of latent value that we’ve yet to discover and unlock. Why? It takes a long time for real understanding to sink in. In Net infrastructure, consider how long it’s taken us to grok what HTTP, REST, HTML, and JavaScript really are and can do. In education, look at the high-value uses that Sal Khan and Dan Meyer find for low-tech screencasting and blogging tools. In journalism and civic life, read what Alan Rusbridger says about Will Perrin’s compelling — and yet so last-century — use of Typepad to activate communities.

Well, I try to do my part. On my show, which is called Interviews with Innovators, I feature people who are more likely to be evolutionary repurposers than revolutionary creators. Maybe I should rename the show Shiny Old Things.

Producing and consuming OData feeds: An end-to-end example

Having waxed theoretical about the Open Data Protocol (OData), it’s time to make things more concrete. I’ve been adding instrumentation to monitor the health and performance of my elmcity service. Now I’m using OData to feed the telemetry into Excel. It makes a nice end-to-end example, so let’s unpack it.

Data capture

The web and worker roles in my Azure service take periodic snapshots of a set of Windows performance counters, and store those to an Azure table. Although I could be using the recently-released Azure diagnostics API, I’d already come up with my own approach. I keep a list of the counters I want to measure in another Azure table, shown here in Cerebrata’s viewer/editor:

When you query an Azure table like this one, the records come back packaged as content elements within Atom entries:

[sourcecode language="xml"]
<entry m:etag="W/datetime'2010-02-09T00:00:53.7164253Z'">
<id>http://elmcity.table.core.windows.net/monitor(PartitionKey='ProcessMonitor',
RowKey='634012704503641218')</id>
<content type="application/xml">
<m:properties>
<d:PartitionKey>ProcessMonitor</d:PartitionKey>
<d:RowKey>634012704503641218</d:RowKey>
<d:HostName>RD00155D317B3F</d:HostName>
<d:ProcName>WaWorkerHost</d:ProcName>
<d:mem_available_mbytes m:type="Edm.Double">1320</d:mem_available_mbytes>
…snip…
<d:tcp_connections_established m:type="Edm.Double">24</d:tcp_connections_established>
</m:properties>
</content>
</entry>
[/sourcecode]

This isn’t immediately obvious if you use the storage client library that comes with the Azure SDK, which wraps an ADO.NET Data Services abstraction around the Azure table service. But if you peek under the covers using a tool like Eric Lawrence’s astonishingly capable Fiddler, you’ll see nothing but Atom entries. In order to get direct access to them, I don’t actually use the storage client library in the SDK, but instead use an alternate interface that exposes the underlying HTTP/REST machinery.

Exposing data services

If the Azure table service did not require special authentication, it would itself be an OData service that you could point any OData-aware client at. To fetch recent entries from my table of snapshots, for example, you could use this URL in any browser:

GET http://elmcity.table.core.windows.net/monitor?$filter=Timestamp+gt+datetime'2010-02-08'

(A table named ‘monitor’ is where the telemetry data are stored.)

The table service does require authentication, though, so in order to export data feeds I’m creating wrappers around selected queries. Until recently, I’ve always packaged the query response as a .NET List of Dictionaries. A record in an Azure table maps nicely to a Dictionary. Both are flexible bags of name/value pairs, and a Dictionary is easily consumed from both C# and IronPython.

To enable OData services I just added an alternate method that returns the raw response from an Azure table query. Then I extended the public namespace of my service, adding a /odata mapping that accepts URL parameters for the name of a table, and for the text of a query. I’m doing this in ASP.NET MVC, but there’s nothing special about the technique. If you were working in, say, Rails or Django, it would be just the same. You’d map out a piece of public namespace, and wire it to a parameterized service that returns Atom feeds.
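
A Django version of that wiring might look roughly like this. It’s only a sketch: query_azure_table() stands in for whatever code signs and issues the request to the table service, and my real implementation is ASP.NET MVC, not Django.

[sourcecode language="python"]
# urls.py -- carve the /odata piece out of the public namespace
from django.urls import path
from . import views

urlpatterns = [
    path("odata", views.odata),
]

# views.py -- wire it to a parameterized service that returns Atom
from django.http import HttpResponse

def odata(request):
    table = request.GET.get("table", "monitor")
    query = request.GET.get("query", "")
    hours_ago = request.GET.get("hours_ago", "")
    # query_azure_table() is a hypothetical helper that issues the signed
    # HTTP request against the Azure table service and returns the raw
    # Atom response body.
    atom = query_azure_table(table, query, hours_ago)
    return HttpResponse(atom, content_type="application/atom+xml")
[/sourcecode]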

Discovering data services

An OData-aware client can use an Atom service document to find out what feeds are available from a provider. The one I’m using looks kind of like this:

[sourcecode language="xml"]
<?xml version='1.0' encoding='utf-8' standalone='yes'?>
<service xmlns:atom='http://www.w3.org/2005/Atom'
xmlns:app='http://www.w3.org/2007/app' xmlns='http://www.w3.org/2007/app'>
<workspace>
<atom:title>elmcity odata feeds</atom:title>
<collection href='http://elmcity.cloudapp.net/odata?table=monitor&amp;hours_ago=48'>
<atom:title>recent monitor data (web and worker roles)</atom:title>
</collection>
<collection href="http://elmcity.cloudapp.net/odata?table=monitor&amp;hours_ago=48&amp;
query=ProcName eq 'WaWebHost'">
<atom:title>recent monitor data (web roles)</atom:title>
</collection>
<collection href="http://elmcity.cloudapp.net/odata?table=monitor&amp;hours_ago=48&amp;
query=ProcName eq 'WaWorkerHost'">
<atom:title>recent monitor data (worker roles)</atom:title>
</collection>
<collection href="http://elmcity.cloudapp.net/odata?table=counters">
<atom:title>performance counters</atom:title>
</collection>
</workspace>
</service>
[/sourcecode]

PowerPivot is an Excel add-in that knows about this stuff. Here’s a picture of PowerPivot discovering those feeds:

It’s straightforward for any application or service, written in any language, running in any environment, to enable this kind of discovery.

Using data services

In my case, PowerPivot — which is an add-in that brings some nice business intelligence capability to Excel — makes a good consumer of my data services. Here are some charts that slice my service’s request execution times in a couple of different ways:

Again, it’s straightforward for any application or service, written in any language, running in any environment, to do this kind of thing. It’s all just Atom feeds with data-describing payloads. There’s nothing special about it, which is the whole point. If things pan out as I hope, we’ll have a cornucopia of OData feeds — from our banks, from our Internet service providers, from our governments, and from every other source that currently publishes data on paper, or in less useful electronic formats like PDF and HTML. And we’ll have a variety of OData clients, on mobile devices and on our desktops and in the cloud, that enable us to work with those data feeds.
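
To make the “any language” claim concrete, here’s a small Python sketch — not part of the service — that fetches one of the collections named in the service document above and rebuilds each entry’s properties as a plain dictionary:

[sourcecode language="python"]
# A minimal OData consumer: fetch an Atom feed and pull the data-describing
# payload out of each entry's m:properties element.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
M = "{http://schemas.microsoft.com/ado/2007/08/dataservices/metadata}"

url = "http://elmcity.cloudapp.net/odata?table=monitor&hours_ago=48"
feed = ET.fromstring(urllib.request.urlopen(url).read())

records = []
for entry in feed.iter(ATOM + "entry"):
    props = entry.find(".//" + M + "properties")
    # Each child of m:properties is one column of an Azure table row.
    records.append({p.tag.split("}", 1)[-1]: p.text for p in props})

for r in records[:5]:
    print(r.get("ProcName"), r.get("mem_available_mbytes"))
[/sourcecode]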

Listen, talk, breathe

Linda Stone, coiner of the marvelous phrase continuous partial attention, has lately been exploring another modern pathology she calls email apnea, which means failure to breathe while checking email. In retrospect, we shouldn’t be surprised. Look:

  • The new 25-payline special edition of Wheel of Wealth will have you holding your breath in excitement…

  • Play Online Slot Machine Game. Coin in – spin – hold your breath……Watch those symbols…..Will it or won’t it?

  • After the first two hits you’re holding your breath for the third reel…

We don’t talk about slot-machine apnea but it’s the same syndrome, produced by the same cause: an intermittent, or variable-interval, schedule of reinforcement. Any activity that exhibits this pattern will be powerfully addictive. A dog begging for scraps of food at the table, rewarded only once in a thousand times, will always beg. Likewise a human begging for scraps of attention.

The link between variable-interval reinforcement and email addiction is well known. Less studied is how this plays out in other modes of electronic discourse. The architecture of those modes introduces another key variable: attention payoff. In a group-structured system, like email or Facebook, the payoff is bounded by group size. It’s true that email messages can escape and go viral, but when that happens the attention payoff is never the kind you want.

But in open pub/sub systems, like blogs or Twitter, the payoff is unlimited. Any item that you post could attract worldwide attention, boost your reputation, land you a job, or make a key personal or professional connection. However there’s no guarantee that you’ll get any reinforcement at all. So some fall by the wayside, others become addicted.

“Technology is here to stay,” Linda says. “Can our relationship to it change?”

It must, it can, and it will. But we’ll need to develop some intuitions about global scale and connectedness for which evolution did not prepare us. And then we’ll need to translate them back down to the human scale. Evolution has taught us how to be social. Technology amplifies our ability to give and receive attention, but it doesn’t change the rules of the game. There’s a time to listen, a time to talk, a time to breathe. We’ll remember, and we’ll figure it out.

Talking with Sal Khan about YouTube tutoring as guerilla public service

My guest for this week’s Innovators show is Sal Khan. He’s the creator of http://khanacademy.org, a catalog of more than 1000 YouTube video lessons in math, physics, biology, chemistry, and economics. All of these videos are made by Sal himself, in an engagingly personal style, using simple screencasting tools.

When I first got interested in screencasting, I envisioned the medium not only as a way to demonstrate software, but also as a way to share knowledge at Internet scale. Sal’s work fulfills that vision, and points the way toward a profound and much-needed disruption of our educational system.

At its core, Sal’s project isn’t about YouTube screencasts. It’s about intuition.

I always got frustrated by what went on in the classroom. You see otherwise intelligent peers memorizing facts and not really caring about the actual intuition. And because they didn’t care about the intuition in their junior year, when that same idea pops up in senior year, it’s like they’ve never seen it before. It boggled my mind. You’re just relabeling the same concept over and over.

Sal cares about the intuition, and he wants others to care about the intuition too. The first beneficiary of that desire was his cousin Nadia, whom he tutored remotely. Then followed other cousins and family friends. Then it dawned on him that there were no limits. The project could scale out. He could become a superempowered individual, reaching anyone who finds value in his method.

One of the key ingredients of that method is improvisation. These videos aren’t carefully planned, and they aren’t edited. As a viewer, you find yourself looking over the shoulder of a smart and broadly knowledgeable person who is solving problems by thinking on his feet. You watch a practitioner at work: engaged with his medium, wrestling with his tools, correcting false starts.

It was Chris Gemignani who first showed me the value of this approach, in a screencast that teaches how to do unexpectedly powerful and elegant Excel charting. He did it in one take. I’d have been tempted to edit out the false starts. But Chris knew better. Learning how a practitioner really thinks about solving a problem is even more valuable than learning the solution to the problem.

One thing that Sal’s lessons can’t be, of course, is interactive. Nor does he pretend that these videos will make teachers obsolete. But he does suggest, and I violently agree, that teachers can and should become curators of online assets like the ones Sal is creating, and should know when and how to weave those assets into their classes.

Teachers should also become connectors. Sal won’t be the only game in town. Other superempowered tutors will emerge. Each will have a unique style. For a given student, a given subject, and a given problem, one or another of those styles may be right. The best teachers will know their own strengths and limitations, will know which online tutors complement their strengths in a variety of ways, and will connect their students with those tutors.

Sal Khan is on fire. He burns with a passion to share his intuitions with anyone and everyone. It is a beautiful thing to see. He has abandoned a lucrative career in finance to do this fulltime, and I am quite sure he will find a way to keep doing it.


PS: The title of this piece refers to Richard Ankrom’s Los Angeles freeway project. At a busy intersection, millions of motorists have been directed to North 5 by a sign that Caltrans omitted. Ankrom created and installed that missing sign.

PPS: I wrote to my son’s math teacher about Sal Khan. She replied: “Thanks for that link to the Khan Academy. I was overwhelmed by how many video lessons he has! He does seem like an inspiring man. Unfortunately, You Tube is blocked here at the high school.”

OData for collaborative sense-making

OData, the Open Data Protocol, is described at odata.org:

The Open Data Protocol (OData) is a web protocol for querying and updating data. OData applies web technologies such as HTTP, Atom Publishing Protocol (AtomPub) and JSON to provide access to information from a variety of applications, services, and stores.

The other day, Pablo Castro wrote an excellent post explaining how developers can implement aspects of the modular OData spec, and outlining some benefits that accrue from each. One of the aspects is query, and Pablo gives this example:

http://ogdi.cloudapp.net/v1/dc/BankLocations?$filter=zipcode eq 20007

One benefit of exposing query to developers, Pablo says, is:

Developers using the Data Services client for .NET would be able to use LINQ against your service, at least for the operators that map to the query options you implemented.

I’d like to suggest that there’s a huge benefit for users as well. Consider Pablo’s example, based on some Washington, DC datasets published using the Open Government Data Initiative toolkit. Let’s look at one of those datasets, BankLocations, through the lens of Excel 2010’s PowerPivot.

PowerPivot adds heavy-duty business analytics to Excel in ways I’m not really qualified to discuss, but for my purposes here that’s beside the point. I’m just using it to show what it can be like, from a user’s perspective, to point an OData-aware client, which could be any desktop or web application, at an OData source, which could be provided by any backend service.

In this case, I pointed PowerPivot at the following URL:

http://ogdi.cloudapp.net/v1/dc/BankLocations

I previewed the Atom feed, selected a subset of the columns, and imported them into a pivot table. I used slicers to help visualize the zipcodes associated with each bank. And I wound up with a view which reports that there are three branches of WashingtonFirst Bank in DC, at three addresses, in two zipcodes.

If I were to name this worksheet, I’d call it WashingtonFirst Bank branches in DC. But it has another kind of name, one that’s independent of the user who makes such a view, and of the application used to make it. Here is that other name:

http://ogdi.cloudapp.net/v1/dc/BankLocations?$filter=name eq 'WashingtonFirst Bank'

If you and I want to have a conversation about banks in Washington, DC, and if we agree that this dataset is an authoritative list of them, then we — and anyone else who cares about this stuff — can converse using a language in which phrases like ‘WashingtonFirst Bank branches in DC’ or ‘banks in zipcode 20007’ are well defined.
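
Constructing those well-defined names is trivial, which is part of the point. Here’s a sketch; the helper is mine, not part of the OData spec:

[sourcecode language="python"]
# Turn a phrase like "banks in zipcode 20007" into its shared,
# application-independent name: an OData query URL.
from urllib.parse import quote

BASE = "http://ogdi.cloudapp.net/v1/dc/BankLocations"

def dataset_name(filter_expr):
    # Encode the $filter expression but keep quotes and spaces readable.
    return BASE + "?$filter=" + quote(filter_expr, safe="' ")

print(dataset_name("zipcode eq 20007"))
print(dataset_name("name eq 'WashingtonFirst Bank'"))
[/sourcecode]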

If we incorporate this kind of fully articulated web namespace into public online discourse, then others can engage with it too. Suppose, to take just one small example, I find what I think is an error in the dataset. Maybe I think one of the branch addresses is wrong. Or maybe I want to associate some extra information with the address. Today, the way things usually work, I’d visit the source website and look for some kind of feedback mechanism. If there is one, and if I’m willing to provide my feedback in a form it will accept, and if my feedback is accepted, then my effort to engage with that dataset will be successful. But that’s a lot of ifs.

When public datasets provide fully articulated web namespaces, though, things can happen in a more loosely coupled way. I can post my feedback anywhere — for example, right here on this blog. If I have something to say about the WashingtonFirst branch at 1500 K Street, NW, I can refer to it using a URL: 1500 K Street, NW.

That URL is, in effect, a trackback that points to one record in the dataset.1 The service that hosts the dataset could scan the web for these inbound links and, if desired, reflect them back to its users. Or any other service could do the same. Discourse about the dataset can grow online in a decentralized way. The publisher need not explicitly support, maintain, or be liable for that discourse. But it can be discovered and aggregated by any interested party.

The open data movement, in government and elsewhere, aims to help people engage with and participate in processes represented by the data. When you publish data in a fully articulated way, you build a framework for engagement, a trellis for participation. This is a huge opportunity, and it’s what most excites me about OData.


1 PowerPivot doesn’t currently expose that URL, but it could, and so could any other OData-aware application.

“That’s an engineer’s solution!”

I’m listening to the audio version of a very cool talk given by astronaut-turned-artist Alan Bean. (Skip the hokey intro, though, and jump in at minute 7 when he starts.)

He tells great stories about the space program, but also offers wider perspectives on life, art, and human potential.

Along the way, he tells an amusing anecdote about the famous picture of Neil Armstrong planting an American flag onto the moon’s surface. Armstrong told Bean it had been a scary moment, and Bean asked why. Armstrong said (as paraphrased by Bean):

Well, I couldn’t get that flag into the ground, like in training. Up there, those particles in the dirt aren’t rounded like regular sand. On Earth I would just do like that, and it would go in. But up there I did like that and it didn’t go in.

I imagined that when I let go, it would fall into the dirt, and people all over the world would see the American flag fall into the dirt. So I tipped it back until the center of gravity was over the hole. Then I put a little dirt around it. I knew that if I could get it balanced, and get away from it, that without any wind it would stay balanced. So that’s what we did. We got away from it, and we never got close to it again.

Bean adds: “It probably blew over when they launched, but it didn’t make any difference. That’s an engineer’s solution!”

What a great hack!

Today on a conference call I was reminded of another. A few years ago, in an airport, I saw a guy with a cellphone in one hand and a payphone in the other. His ear, brain, and mouth were trying to bridge two phone networks together, it wasn’t working well, and he was visibly frustrated. Finally he removed his head from between the two phones, stuck them together, and reversed them earphone-to-microphone, so the two parties were talking directly to each other.

My conference call today presented a different version of that scenario. It was scheduled as a VOIP call, then was switched to a POTS call, but not everybody got the memo. So I made the POTS call. And since I have a podcast rig that lets me do POTS calls through my computer, using the same headset I use for VOIP, I made the call that way.

Then people started to show up on both the POTS side and the VOIP side. I realized that, unexpectedly, I was hearing both sides and they were hearing me. Both were being conveyed through my computer’s audio subsystem. I was just like the guy with the cellphone on one ear and the payphone on the other.

It would have been cool to do the same kind of earphone-to-microphone hack. But before I got the chance to try, the VOIP folks hung up and dialed back in on the POTS side.

Oh well, maybe next time.

We = (what we eat) – (what they eat)

The sound track for yesterday’s run was a talk by primatologist Richard Wrangham1, author of Catching fire: how cooking made us human. Cooking, he says, has long been thought to be an optional cultural practice, like wearing jewelry. But really, he argues, cooking was the essential technological innovation that enabled us to produce the metabolic energy we needed to become human.

How? Cooked food is more digestible than raw food. And not just by a little, but by a lot. Learn how to control fire, use it to cook your food, and you free up extra energy — plus time that would otherwise be spent masticating. Spend that time hunting, and your metabolic equation gets even better.

Wrangham has fascinating things to say about how this surplus time and energy explains such cultural universals (or former universals) as marriage, sexual division of labor, and the family dinner. Whether you agree or disagree with this analysis, though, it’s supported by an attention-grabbing claim. Everything we thought we knew about absorption of energy from food is wrong.

To this day, Wrangham says, the USDA website2 publishes tables that make no distinction between the nutritional value of cooked and raw food. On this page, for example, the energy content of one large raw egg is given as 75 kcal. The value for one large hard-boiled egg is almost the same: 78 kcal.

This is wrong, Wrangham says. A cooked egg delivers way more energy than a raw egg. How could this be? And how could we not know it?3

Here’s the explanation. We have traditionally measured the energy content of food by comparing input (the food we eat) and output (the feces we excrete). Burn both in a calorimeter, subtract, and the difference is the energy that was extracted from the food.

Yes, but extracted by whom? Or rather, by what? The energy that we humans take from our food has almost all been extracted by the time it reaches the end of the small intestine. But it has a long way to go yet. It must also pass through the large intestine, where dwell a myriad of gut flora. And they, Wrangham says, are hungry. If you eat a raw banana you only get some of its energy, and they get most of the remainder. If you eat a cooked banana, though, you get a lot more of its energy and leave less for them. The end result looks the same, but the internal distribution is quite different.

So you need to compare the energy in food entering the mouth to the energy remaining in the digestive products leaving the small intestine.4 Only then does the dramatic difference between the energies we get from raw versus cooked food become evident.

This is a great parable about instrumentation, measurement, knowledge, and epistemology. What other profound errors of basic understanding arise from misplaced instrumentation? And what might we learn by making simple — and in retrospect obvious — adjustments?


1 Yet another podcast from KUOW’s Speakers’ Forum, which has become one of my most reliable sources of audio brain food.

2 A sad reminder that government website and chamber of horrors are still, too often, synonymous.

3 The error, if it is indeed an error, propagates to WolframAlpha, which sources the USDA data. Compare 100 g of raw egg to 100 g of cooked egg.

4 How do you tap in at that point? Recruit people who have had ileostomies.

Learning lessons from PLATO

If you’re interested in the use of computers and networks to support collaboration, you’ll have heard of PLATO. It was an early courseware system, and by early I mean circa 1960, running on vacuum tubes. But it was also a petri dish in which much of what we now know as online culture first evolved.

I’ve long known that PLATO inspired many other systems, including VAX Notes and Lotus Notes. But I never heard the backstory. So when I found out that Brian Dear is completing a history of PLATO, and planning a conference to commemorate its 50th anniversary, I invited him onto my weekly show to find out more about it. PLATO matters, Brian says, because

it challenges our assumptions of how the online world evolved. It rewrites the history. It’s as if we discovered Wilbur and Orville Wright were not the first to fly a powered plane — that it’d been done faster and longer with a jet aircraft 30 years earlier.

Of course the same can be said of other early technologies, notably Smalltalk, which introduced ideas and methods that are only now hitting the mainstream. It’s fun to wax nostalgic, but I’d rather explore how these systems arose, why they flourished, and what accounts for the propagation of their memes but not their genes.

From that perspective Brian reminds us, first, that PLATO was expensive. Few universities were willing or able to invest millions in a Control Data mainframe and a fleet of gas-plasma flat-panel bitmapped touch-screen display terminals. Those terminals enabled some extraordinary things, like the interactive music software that captivated Brian as a University of Delaware undergrad. They also enabled a now-extinct species of emoticons, which relied on the bitmapped graphics. But since much of what became PLATO’s essential DNA required only character-mapped graphics, those expensive bitmapped screens became an evolutionary bottleneck.

Another feature that didn’t pass through that bottleneck was PLATO’s ability to make sense of natural language input. Many thousands of programmer hours were invested in enabling PLATO to recognize a variety of human utterances. That in turn enabled courseware authors to create lessons that responded intelligently — and, Brian says, in ways that are sadly still not typical of modern courseware.

Today we can attack that problem by creating open source libraries, by reusing them, and by extending them. That’s a great way to create DNA that can propagate. But it’s useful to consider why it might not. We still, for the most part, create dependencies on specific programming languages, and on the environments in which they run.

As we move into an era of services, though, we can start to imagine a more fluid environment in which capabilities persist across language and system boundaries. Consider this exhibit from an antique PLATO library:

This is a screenshot from the live PLATO system running (in emulation) at cyber1.org. It’s a page from the catalog of functions in PLATO’s CYBIS library. Shown here are some of the methods available to process responses to questions.

Some of those methods might still be useful. And if they’d been packaged in a language- and system-independent way, some might conceivably still be in use.

PLATO programmers didn’t have the option to package their work in such a way. Now we’re on the cusp of an era in which these kinds of library services can also be language- and system-independent web services. Will we exploit this new possibility? Will some of today’s core services still be delivering value decades from now, freeing developers to add value farther up the stack? It’s worth pondering.

A reading strategy for low-vision users of SmartView Xtend

A while back I reviewed the reading machine that my mom, who suffers from macular degeneration, now depends on. I gave it a thumbs up, but also noted that she was having some problems.

On my last visit I came up with a method that will help, if she can get the hang of it. The method is non-obvious, and isn’t documented anywhere I’ve been able to find, so I made a short movie to illustrate it.

The key insights are:

Use the left margin screw to set a left margin somewhere

It almost doesn’t matter where, you just need a guide for carriage returns.

Position the book and the tray

Getting this right makes a huge difference. My mom was constantly fiddling with the position of the book on the tray. This frustrated her, and seriously impaired her ability to read fluidly.

But if you position the tray correctly, and the book relative to the tray, then you can easily read the whole page without touching or moving the book at all. Here’s how:

Align the bottom left corner of the book with the bottom left corner of the screen.

This is counter-intuitive. The natural expectation is to start at the top of the page. And you do want to start reading there. But I found that establishing a bottom margin is a crucial first maneuver, and it involves three steps:

1. Push tray all the way forward and rightward

2. Place book on tray

3. Move book to align bottom left corner of page with bottom left corner of screen

With the tray still as far forward and as far right as it will go, you have defined both a left margin and a bottom margin for the page. Now read the whole page without touching the book again. Here’s how:

Find the top of the page.

To do that you pull the tray out (forward, towards yourself) until the top margin of the page lines up with the top of the screen.

Read as many lines vertically as the screen can display.

Use only a two-stroke left/right motion of the tray. The sequence is:

1. Slide tray left to reveal ends of lines

2. Slide tray right for carriage return

My mom had been advancing the tray (by pushing it in) once per line. This wastes effort and disrupts context. If the left margin screw is set, a carriage return always goes to the same place. So it was easy — at least for her — to make a visual connection from the end of the previous line to the beginning of the next one.

I realize this part may not work for everyone, and maybe not even for her as her vision worsens. Right now, at her magnification, her screen can display 8 or 10 lines. At higher magnification, when only a few are visible, there will be less context to help make that connection. Then it may become necessary to scroll vertically once per line. But the longer that can be avoided, the better.

Why was this necessary?

Shouldn’t multi-thousand-dollar gizmos like this come with training materials that help people figure this stuff out? Yes, but I’ve given up being shocked that they don’t.

If you’ve got a friend or relative in the same boat, let me know if this writeup — and/or the accompanying video — makes sense.

A note on making the movie

The video combines slides with a side-by-side animation of the tray and the screen. I wound up using PowerPoint, which conveniently handles the three ingredients: text, bitmap graphics, and vector graphics.

Rather than use PowerPoint’s animation features, though, I made a sequence of frames, nudging objects by small increments from frame to frame. This turned out to be a surprisingly easy and approachable technique.

Then I turned on a screen recorder — I used Camtasia, but any screen recorder would do — and stepped through the frames.

Talking with Greg Wilson about software carpentry

On this week’s podcast, Greg Wilson tells the story of a university course he created, and has taught for many years, called Software Carpentry. I have known Greg for a long time. We are kindred spirits in several ways. Most notably, we like to mine veins of knowledge, experience, and technique that some practitioners take for granted, but that many others haven’t yet discovered — or don’t yet use as well as they could.

I, for example, wonder why we don’t teach everyone basic principles of structured information, namespace design, and syndication. Greg, similarly, wonders why student programmers — and student scientists whose careers increasingly depend on computational methods — are not taught basic principles of version control, debugging, and refactoring. And why we don’t read great software in the same way we read great literature or study landmark scientific experiments. And why the controlled reproducibility of commercial software development isn’t typical of computational science.

If you care about these issues, there are two ways you can help. First, take a look at the reboot of the Software Carpentry course that Greg’s experience has led him to propose. Second, help him find the funding to keep doing this work.

Two interpretations of US health care cost vs. life expectancy

On FiveThirtyEight.com the other day, Andrew Gelman posted this chart illustrating the high cost of US health care:

He did so to correct a “somewhat misleading (in my opinion) presentation of these numbers [that] has been floating around on the web recently.” The misleading graph, which appeared on a National Geographic blog, was — I agree — a confusing way to show information better represented in a scatterplot.

But I’ve seen this data before, and there’s more to the story. Neither the National Geographic nor FiveThirtyEight has anything to say about which numbers they’re charting.

Back in 2005, in a review of John Abramson’s excellent book Overdo$ed America, I noted that he had used a different source to reach a slightly different conclusion.

His chart, based on OECD health-expenditure data (link now 404) and WHO healthy life expectancy data (link still alive), looked like this:

He used it to make the oft-cited point that US healthcare isn’t just wildly expensive, but that it also correlates with worse life expectancy than in many countries that spend less.

I wondered what the chart would look like if based on the same OECD expenditure data but on the OECD’s rather than the WHO’s definition of life expectancy. The result looked like this:

The U.S. is the clear cost outlier on both charts. The first chart, however, places us near the low end of the life expectancy range, justifying Abramson’s assertion that we combine “poor health and high costs.” The second chart places us near the high end of the life expectancy range, suggesting that while value still isn’t proportional to cost, we’re at least buying more value than the first chart indicates.

Although based on older data, this second chart closely resembles the ones recently shown and discussed by the National Geographic and FiveThirtyEight.

My review of Abramson’s book concluded:

Has Abramson spun the data to make his point, just as he accuses the pharmaceutical industry of doing? Of course. Everybody spins the data. What matters is that:

  • Everybody can access the source data, as we can in the case of Abramson’s book but cannot (he argues) in the case of much medical research
  • The interpretation used to drive policy expresses the values shared by the citizenry

Would we generally agree that we should measure the value of our health care in terms of healthy life expectancy, not raw life expectancy? That the WHO’s way of assessing healthy life expectancy is valid? These are the kinds of questions that citizens have not been able to address easily or effectively. Pushing the data and surrounding discussion into the blogosphere is the best way — arguably the only way — to change that.

That was five years ago. The data was, and is, out there. So it’s disheartening to see the same chart pop up again without any further discussion of the sources of its data, or of the definitions underlying those sources.

Talking with Doug Day about the iCalendar validator

On this week’s Innovators show, Doug Day joins me to discuss the new iCalendar validator he has recently deployed on Azure.

The project draws inspiration from the pathbreaking RSS/Atom feed validator originally created by Mark Pilgrim and Sam Ruby. The RSS/Atom validator’s test-driven and advice-oriented approach is exemplary, and the iCalendar validator follows in its footsteps.

The tests, in this case, are iCalendar snippets that are, or are not, valid according to the spec. These snippets, packaged into XML files, form a library of examples that does not depend on the programming language used to run the tests. So although Doug’s validator, based on his open source parser, is written in C#, another validator written in Java or Python or Ruby could use the same test suite.
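To make that concrete, here’s a sketch, in Python, of what a runner for such a language-independent suite could look like. The XML layout (a suite of test elements, each wrapping an iCalendar snippet and an expectation) is my own invention for illustration, not Doug’s actual schema, and the validate function is a placeholder for whatever parser a given implementation wraps.

    import xml.etree.ElementTree as ET

    def validate(snippet):
        """Placeholder: plug in any iCalendar parser here and return a list
        of error strings (an empty list means the snippet passed)."""
        raise NotImplementedError

    def run_suite(path):
        # Hypothetical layout: <suite><test name="..." expectError="true|false">
        #   BEGIN:VCALENDAR ... </test></suite>
        passed = failed = 0
        for test in ET.parse(path).getroot().iter("test"):
            expect_error = test.get("expectError") == "true"
            errors = validate(test.text)
            if bool(errors) == expect_error:
                passed += 1
            else:
                failed += 1
                print("FAIL:", test.get("name"), errors)
        print(passed, "passed,", failed, "failed")

The point is only that the suite itself carries no C# dependency; any language with an XML parser and an iCalendar library can consume it.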

The advice offered is minimal so far, but I hope it will expand as the test suite grows. Sam Ruby observes:

Identifying real issues that prevent real feeds from being consumed by real consumers and describing the issue in terms that makes sense to the producer is what most would call value.

In that spirit, I am gathering examples of calendars in the wild and looking for ways to help Doug add value.

In the podcast we discuss a nice example that came up recently in the curators’ room of the elmcity project. A custom-built calendar contained events (VEVENT components, in iCalendar-speak) with no start or end times (DTSTART and DTEND properties). This, it turns out, is not prohibited by the spec. But reporting no error is unhelpful. The author of the calendar — or of the software that produced the calendar — ought to be warned that such a calendar won’t yield a useful or expected result.
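For illustration only, here’s roughly the kind of check I have in mind, as a few lines of Python rather than anything resembling Doug’s C# implementation. It scans a calendar for VEVENT components that lack a DTSTART and flags them as warnings rather than errors.

    import re

    def warn_on_timeless_events(ics_text):
        """Warn about VEVENTs with no DTSTART: not prohibited by the spec,
        but unlikely to yield a useful result in calendar applications."""
        warnings = []
        events = re.findall(r"BEGIN:VEVENT(.*?)END:VEVENT", ics_text, re.S)
        for i, event in enumerate(events, start=1):
            if "DTSTART" not in event:
                warnings.append("VEVENT #%d has no DTSTART" % i)
        return warnings

    sample = "\n".join([
        "BEGIN:VCALENDAR",
        "BEGIN:VEVENT",
        "SUMMARY:Community potluck",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

    print(warn_on_timeless_events(sample))   # ['VEVENT #1 has no DTSTART']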

Why would anyone produce such a calendar in the first place? This harkens back to the early days of RSS. Many of us found that we could craft simple ad-hoc feeds in order to leverage RSS as a lightweight data exchange. It was liberating to be able to do that. But hand-crafted feeds, or feeds written by hand-crafted software, were valuable only to the extent they would reliably interoperate. Often they would not. The feed validator, by showing what was wrong with these feeds, and explaining why and how to fix them, was a powerful ally for those of us trying to bootstrap a feed ecosystem.

The iCalendar validator has a long way to go yet. But the road ahead is well lit, and I’m grateful to Doug Day for resolving to travel it.

Contextual clothing for naked transparency

The other day I listened to a Spark (CBC Radio) interview with Larry Lessig about his New Republic essay Against Transparency, which begins:

We are not thinking critically enough about where and when transparency works, and where and when it may lead to confusion, or to worse. And I fear that the inevitable success of this movement–if pursued alone, without any sensitivity to the full complexity of the idea of perfect openness–will inspire not reform, but disgust. The “naked transparency movement,” as I will call it here, is not going to inspire change. It will simply push any faith in our political system over the cliff.

The essay was published in October 2009. In this interview from November, Prof. Lessig reflected on the reactions that it provoked. Although the delicious and bitly feedback now suggests that most people understood the essay to be a thoughtfully nuanced critique, there were evidently some early responders who read it as a retreat from openness and an assault on the Internet.

I’m glad I missed the essay when it first appeared. Reading it along with a cloud of feedback from readers and from the author amplifies one of the key points: We don’t really want naked transparency, we want transparency clothed in context.

The Net can be an engine for context assembly, a wonderful phrase I picked up years ago from Jack Ozzie and echoed in several essays. But it can also be a context destroyer.

In the interview, Lessig notes one example of context destruction. The article, which most people will read online, spans eleven pages, each of which wraps its nugget of “content” in layers of distraction. Some early negative comments, Lessig says, came from people who had clearly not read to the end.

Our increasingly compressed and fragmented attention can also be a context destroyer:

What about when the claims are neither true nor false? Or worse, when the claims actually require more than the 140 characters in a tweet?

This is the problem of attention-span. To understand something–an essay, an argument, a proof of innocence– requires a certain amount of attention. But on many issues, the average, or even rational, amount of attention given to understand many of these correlations, and their defamatory implications, is almost always less than the amount of time required. The result is a systemic misunderstanding–at least if the story is reported in a context, or in a manner, that does not neutralize such misunderstanding. The listing and correlating of data hardly qualifies as such a context. Understanding how and why some stories will be understood, or not understood, provides the key to grasping what is wrong with the tyranny of transparency.

Transparency is a necessary but not a sufficient condition. Recently my town’s crime data and council meetings have appeared online. But this remarkable transparency does not alone enable the sort of collaborative sense-making that we all rightly envision.

In the case of crime data, we require a context that includes historical trends, regional and national comparisons, guidance from government about how its local taxonomy relates to regional and national taxonomies, and reporting by newspapers and citizens.

In the case of city council meetings, we require a context that includes relevant state law and local code, and reporting by stakeholders, by newspapers, and by affected citizens.

To enable context assembly, we’ll need to organize the numeric and narrative data produced by the “naked transparency” movement in ways friendly to linking, aggregation, and discovery.

But these principles will need to be adopted more broadly than by governments alone. Everyone needs to understand the principles of linking, aggregation, and discovery, so that everyone can help create the context we crave.

Carbon theater

Borrowing Bruce Schneier’s wonderful term security theater, Rohit Khare has written about privacy theater. Not to be outdone, here’s a letter to my local newspaper about carbon theater.


To: Editors
Re: Carbon challenge in home stretch

We love our sports rivalries, and the classic contest between Keene and Portsmouth has riveted me to my sofa. Let’s recap. Back in April, seacoastonline.com (http://www.seacoastonline.com/articles/20090423-NEWS-904230413) reported:

Municipal employees in Portsmouth and Keene, the state’s two predominant ‘green’ cities, slugged it out over the course of three weeks and, in the end, Keene delivered the knockout punch.

This week, the Sentinel and the Portsmouth Herald advanced the story of this “carbon-busting throwdown” in a joint communique (http://keenesentinel.com/articles/2009/12/29/news/local/free/id_384393.txt):

Garry Dow of Clean Air-Cool Planet, which manages the carbon challenge, said the scales are tipped in Portsmouth’s favor in the second phase, which involves the number of residents in each city to sign up for the challenge.

The challenge? Check it out at http://necarbonchallenge.org/calculator.jsp. There you will find an online form that reminds you to tighten up your house, use compact fluorescent lights, air-dry your dishes, and recycle.

Back in April, more of Keene’s city employees took the survey than Portsmouth’s. But now, in phase two of the carbon-busting throwdown, Portsmouthians are taking the survey at a higher rate than Keeners.

Across New England, according to the Carbon Challenge website, this slugfest has reduced CO2 emissions by over 17 million pounds. That’s nothing to sneeze at. It’s two thirds of New York City’s daily waste stream, a third of the mass of the Titanic, a fifth of the CO2 produced by the recent Copenhagen conference.

Except, of course, none of the combatants has actually reduced their CO2 emissions. They’ve only taken an online survey, and pledged to do all sorts of things that might or might not get done.

I’d like to propose a different challenge. Let’s focus on one thing and really get it done. For example, what if every leaky window in Keene were equipped with an interior storm? John Leeke, who runs Historic HomeWorks in Portland, invented this cheap, appropriate, and effective technology. On his website (http://www.historichomeworks.com/forum/viewtopic.php?t=193) he shows how to build interior storms.

I’ve done this, and it’s a vast improvement over the stick-on window kits I’ve used in previous years. Interior storms are just cheap wooden frames with gaskets around the outside and shrink-wrap plastic facing. They press-fit into your window frames from the inside. You get all the benefits of the stick-on kits: zero air infiltration, a second layer of dead air. And there are none of the drawbacks: awkward yearly installation, destructive yearly removal.

“Keene’s down in the standings,” the Sentinel/Herald article says, “but there’s still plenty of time for residents to take the online survey and boost the city’s chance to take home the green prize.”

Well, OK, but I’d like to see Keene define — and then win — a different prize. What if we become the first city to outfit every leaky window in town with an interior storm? And what if we create jobs while doing so? That would be something worth shouting about.

Gov2.0 transparency: An enabler for collaborative sense-making

Recently my town has adopted two innovative web services that I’ve featured on my podcast: CrimeReports.com, which does what its name suggests, and Granicus.com, which delivers video of city council meetings along with synchronized documents.

You can see the Keene instance of CrimeReports here, and our Granicus instance here.

I’m delighted to finally become a user of these systems that I’ve advocated for, written about, and podcasted. I’m also eager to move forward. We’re still only scratching the surface of what Net-mediated democracy can and should become.

In the case of CrimeReports, the next step is clear: Publish the data. It’s nice to see pushpins on a map, but when you’re trying to answer questions — like “Are we having a crime wave?” — you need access to the information that drives the map. Greg Whisenant, the founder of CrimeReports.com, says he’d be happy to publish feeds. But so far the cities that hire him to do canned visualizations of crime data aren’t asking him to do so, because most people aren’t yet asking their city governments to provide source data. So a few intrepid hackers, like Ben Caulfield here in Keene, are reverse-engineering PDF files to get at the information. Check out Ben’s remixed police blotter — it’s awesome. Now imagine what Ben might accomplish if he hadn’t needed to move mountains to uncover the data.

In the case of Granicus, I’m reminded of this item from last year: Net-enhanced democracy: Amazing progress, solvable challenges. The gist of that item was that:

  • It’s amazing to be able to observe the processes of government.

  • It’s still a challenge to make sense of them.

  • Tools that we know how to build and use can help us meet that challenge.

Check out, for example, last week’s Keene city council meeting. Scroll down to an item labeled 2. Ordinance O-2009-21. In this clip, the council agrees to amend the city code for residential real estate tax exemptions. I wish I could link you directly to that portion of the video, which begins at 34:11, in the same way that I can link you to the associated document. But more broadly, I wish that a citizen who tunes in could understand — and help establish — the context for this amendment.

Here’s the new language:

Sec. 86-29 Residential real estate tax exemptions and credits

With regard to property tax exemptions, the city hereby adopts the provisions of RSA 72:37 (Blind); RSA 72:37-b (Disabled); RSA 72:38-b (Deaf or Severely Hearing Impaired); RSA 72:39-a (Elderly); RSA 72:62 (Solar); RSA 72:66 (Wind); and RSA 72:70 (Wood).

With regard to property tax credits, the city hereby adopts the provisions of RSA 72:28, II, (Optional Veterans’ tax credit); RSA 72:29-a , II, (Surviving Spouse); and RSA 72:35, I-a, (Optional Tax Credit for Service-Connected total disability).

In this case, I just happen to know a bit of this amendment’s backstory. Earlier this year I found out — only thanks to a serendipitous encounter with a city councilor at a social event — that my wood gasifier qualified me for an exemption. This was the first such exemption, and to my knowledge is still the only one granted.

If I hadn’t gone through that experience, though, the video clip and its associated document would mean nothing to me. There would be no way to make a connection between state law on the one hand, and a documented case study on the other.

On the next turn of the crank, I hope that services like Granicus will enable us to make those connections. Seeing the process of government in action is a great step forward. Now we need to be able to use links and annotations to help one another make sense of that process.

Talking with Howard Eglowstein about micro-CHP and the maker renaissance

My guest for this week’s Innovators show is my old BYTE pal Howard Eglowstein. Nowadays he’s working for freewatt, a residential micro-CHP (combined heat and power) system, and our conversation revolved partly around that technology.

But I also invited Howard to reflect on the cultural phenomenon that’s celebrated in the pages of Make. Hacking at the intersection of atoms and bits is nothing new for Howard; he’s been doing it his whole career. One of his epic projects was Thumper, a machine he built for the BYTE lab to test the battery life of notebook computers. Thumper used optical sensors to notice when power-saving features kicked in, and robotic fingers to defeat them by pressing keys. (I resurrected this article about Thumper from the (now-abandoned) BYTE archive.)

It makes perfect sense for Howard to be deploying his hybrid skillset in the realm of energy innovation. But why, I’ve wondered lately, did we devalue those skills and inclinations? Why the long lull between the heydays of Popular Mechanics and Make? Here are some of Howard’s observations:

On toys, cars, and patents:

My background is in electronic toys. Toy engineers know how to take a really cool concept and make it cheap. In the 80s — not so much any more — you could get something at the toy store, open it up with a Dremel, and make it do something different. That’s how some kids get their first taste of reverse engineering.

But how many of us can fix our cars anymore? You just can’t. Even car people don’t have the tools and the documentation. A lot of things are done better than everybody else, and they’re secret.

In the toy industry we never patented anything, there was no point. If you patent something you have to tell everybody how it works, and then they have what they need to make an improvement and then steal the idea from you. So you do something amazing and cool, you wow everybody, and by the time they figure out what you’ve done you’ve moved on to something else which is even cooler.

On a friend’s son who is a Make fan:

We’ve really encouraged people to absorb information. But that gets boring after a while. You browse the computer, it’s kind of fun to click on links and see where they go, but it gets old. Meanwhile we’ve got a lot of kids who, let’s face it, probably aren’t going to get together and throw a football around, they’d rather play video games. So in this kid’s case when he gets tired of looking at stuff he goes and builds stuff. I hope that we’re encouraging more people to do that.

After speaking with Howard I was reminded of one project that is providing that encouragement: Natalie Jeremijenko’s feral robotic dogs, which are “upgraded commercial robotic dog toys that have been transformed into activist instruments to find and display urban pollutants.”

So I guess the toy business still is giving some young people their first taste of reverse engineering!

Computational thinking and energy literacy

One of the themes I’ve been exploring for the past few years is computational thinking. It’s an evocative phrase that has led me in a few different directions. One is my intentional use of tagging and syndication as key strategies for social information management. Another is my growing interest in the kinds of uses of WolframAlpha outlined in Kill-A-Watt, WolframAlpha, and the itemized electric bill.

A lot of what I’ve read and heard about WolframAlpha seems to focus on its encyclopedic nature. But it aims to be a compendium of computable knowledge, and as such I think its highest and best use will be to enable computational thinking.

Here’s one small but telling example from my Kill-A-Watt essay:

Q: 9 W * (30 * 24 hours)

A: About half the energy released by combustion of one kilogram of gasoline.

Q: ( 1 kilogram / density of gasoline ) / 2

A: Less than a fifth of a gallon.

I was trying to understand what 9 Watts, over the course of a month, means. WA offered the comparison to the amount of energy in gasoline, but reported in kilograms. I still think in gallons. The conversion is:

(1 kg / 0.73 kg/L) / 2 = 0.685 L; 0.685 L * 0.264 gallons/L = 0.18 gallons

If you don’t do that kind of thing on a regular basis, though — as I don’t, and as many of us don’t — it’s hard to get over the activation threshold. Looking up and applying the relevant formulae is a multistep procedure. WA collapses it into a single step:

( 1 kilogram / density of gasoline ) / 2

It knows the density of gasoline, and when you do the computation it reports results in a variety of units, including gallons.
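If you’d rather check that arithmetic offline, here it is as a few lines of Python, using the same constants as above (gasoline at roughly 0.73 kg per liter, 0.264 gallons per liter):

    KG_PER_LITER = 0.73        # approximate density of gasoline
    GALLONS_PER_LITER = 0.264

    kilograms = 1 / 2.0        # "half the energy of one kilogram of gasoline"
    liters = kilograms / KG_PER_LITER
    gallons = liters * GALLONS_PER_LITER
    print(round(liters, 3), round(gallons, 2))   # ~0.685 L, ~0.18 gallons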

I was feeling a bit guilty about needing this sort of intellectual crutch. But then I heard from a friend who had just read the Kill-A-Watt/WA piece. It reminded him of an Energy Tribune article entitled Understanding E=mc2 which concludes:

A 1000-MW coal plant — our standard candle — is fed by a 110-car “unit train” arriving at the plant every 30 hours — 300 times a year. Each individual coal car weighs 100 tons and produces 20 minutes of electricity. We are currently straining the capacity of the railroad system moving all this coal around the country. (In China, it has completely broken down.)

A nuclear reactor, on the other hand, refuels when a fleet of six tractor-trailers arrives at the plant with a load of fuel rods once every eighteen months. The fuel rods are only mildly radioactive and can be handled with gloves. They will sit in the reactor for five years. After those five years, about six ounces of matter will be completely transformed into energy. Yet because of the power of E = mc2, the metamorphosis of six ounces of matter will be enough to power the city of San Francisco for five years.

This is what people find hard to grasp. It is almost beyond our comprehension. How can we run an entire city for five years on six ounces of matter with almost no environmental impact? It all seems so incomprehensible that we make up problems in order to make things seem normal again. A reactor is a bomb waiting to go off. The waste lasts forever, what will we ever do with it? There is something sinister about drawing power from the nucleus of the atom. The technology is beyond human capabilities.

But the technology is not beyond human capabilities. Nor is there anything sinister about nuclear power. It is just beyond anything we ever imagined before the beginning of the 20th century. In the opening years of the 21st century, it is time to start imagining it.

Six ounces of matter? Really? My friend wrote:

I remember at the time I tried to run simple order of magnitude calculations in my head to verify the number, but it got messy, I got sidetracked, and forgot.

This time I went to Wolfram-Alpha, and the answer was right there, clear as day, in seconds (and yes, it’s really 6 ounces of matter).

I went back to the article, and the only quantity of energy reported for San Francisco was that Hetch Hetchy Dam “provides drinking water and 400 megawatts of electricity to San Francisco.” That alone would come to:

400MW * 5 years = ~700 grams = ~25 ounces

Or, if Wikipedia is right and the dam yields only about 220MW, then:

220MW * 5 years = ~386 grams = ~14 ounces

Of course since San Francisco has other sources of power, the amount of matter would be more. Still, this doesn’t invalidate the author’s point: we’re talking ounces, not tons.
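Here’s that forward calculation spelled out in Python, just to make the E = mc2 step explicit. The 400 MW and 220 MW figures are the ones quoted above; everything else is a physical constant.

    C = 2.998e8                            # speed of light, m/s
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    OUNCES_PER_KG = 35.27

    def mass_equivalent(power_watts, years):
        """Mass converted to energy (m = E / c^2) to supply a given average power."""
        energy_joules = power_watts * years * SECONDS_PER_YEAR
        kg = energy_joules / C ** 2
        return round(kg * 1000), round(kg * OUNCES_PER_KG)   # grams, ounces

    print(mass_equivalent(400e6, 5))   # (702, 25)  -- the article's 400 MW
    print(mass_equivalent(220e6, 5))   # (386, 14)  -- Wikipedia's 220 MW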

When I mentioned this to my friend, though, he wrote back:

I went the other way around:

http://www.wolframalpha.com/input/?i=6oz * c^2 in gw hr

It gives 4247 GWhr which is definitely in the ballpark for San Francisco.

Sweet!

I didn’t actually follow up on that result just now, but over 5 years it comes to:

4247 GWh / 5 years = ~100MW (http://www.wolframalpha.com/input/?i=4247GWh / 5 years). That’s a quarter of what the article reports for Hetch Hetchy, half of what Wikipedia reports, and I still don’t know how it relates to San Francisco’s total power draw.
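And the reverse direction, again in Python: the energy locked up in six ounces of matter, and the average power it represents over five years (1 oz is about 28.35 g; 1 GWh is 3.6e12 joules):

    C = 2.998e8                  # speed of light, m/s
    KG_PER_OUNCE = 0.02835
    HOURS_PER_YEAR = 365.25 * 24
    JOULES_PER_GWH = 3.6e12

    energy_joules = 6 * KG_PER_OUNCE * C ** 2
    gwh = energy_joules / JOULES_PER_GWH
    average_mw = gwh / (5 * HOURS_PER_YEAR) * 1000
    print(round(gwh), round(average_mw))   # ~4247 GWh, ~97 MW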

Even so, we’re playing in the kind of ballpark we need to be able to play in if we’re going to have any kind of reasoned discussion about future energy mixes like Saul Griffith’s straw-man proposal of:

2TW Solar thermal, 2TW Solar PV, 2TW wind, 2TW geothermal, 3TW nukes, 0.5TW biofuels

What I find most striking about the energy literacy talks that Saul’s been giving lately is his ability to move fluidly between the personal quantities of energy we experience directly, the city-scale quantities we experience indirectly, and the global quantities that most of us can scarcely imagine.

My point here isn’t to revisit the dispute that Stewart Brand and Amory Lovins are having about the future role of nuclear power. Nor to endorse William Tucker, the author of that Energy Tribune article, who is a journalist, not a scientist or an engineer, and whose argument fails to address issues of security and waste disposal.

Instead I want to focus on how mental power tools like WolframAlpha, by making computable knowledge easier to access and manipulate, can augment our ability to think computationally. If we’re going to reason democratically about the energy, climate, and economic challenges we face, we’re going to need those power tools to be available broadly and used well.

Talking with Randy Julian about bioinformatics

My guest for this week’s Innovators show, Randy Julian, founded the bioinformatics company Indigo BioSystems to help modernize the process of drug discovery. The challenge — and opportunity — is partly to standardize the data formats used to represent experimental data, and to locate that data in shared spaces where it can be linked and recombined.

There’s also the crucial issue of reproducibility. One requirement, as Victoria Stodden said in my conversation with her, is to publish not just data but also the code that processes the data, ideally in an environment where data-transforming computation can be replayed and verified. One of the ways Indigo’s system does that is by hosting instances of R, the wildly popular statistical programming system, in the cloud.

Another key requirement for reproducing an experiment, Randy Julian says, is a robust and machine-readable representation of the design of the experiment. If I don’t know what you’re trying to prove, and how you’re trying to prove it, your data are just numbers to me. If I do know those things, I may be able to verify your results. And we may be able to automate more of the work using machine intelligence and machine labor — a vision that also inspires Jean-Claude Bradley, Cameron Neylon, and others to pursue open-notebook science.

A new validator for iCalendar

In January 2009 I wrote a series of entries [1, 2, 3] documenting examples of ill-formed iCalendar files. And I argued that we need an analog, in calendar space, to the incredibly useful RSS/Atom feed validator.

I’m delighted to report that Doug Day has taken up the challenge. The first incarnation of his validator is up and running at http://icalvalid.cloudapp.net. It’s based on Doug’s DDay.iCal, which is the same .NET-based iCalendar class library used by the elmcity aggregator. But, like the RSS/Atom validator, it’s driven by an extensible and language-independent suite of tests.

The validator reports numerical scores for an iCalendar file, and gives advice about how it will be handled by popular calendar applications. Some examples:

The Keene High Varsity Basketball schedule scores a 96.25: “This calendar has minor problems, but will likely work correctly in major calendar applications.”

The Hannah Grimes Center’s calendar, based on Drupal, scores 92.5: “This calendar has moderate problems, but may work correctly in major calendar applications.”

The Keene Chamber of Commerce calendar scores 0: “This calendar has severe problems; very few (if any) applications will accept this calendar.” (DDay.iCal does, in fact, overlook these problems, and does parse events from this calendar.)

I’m hugely grateful to Doug Day for doing this important work. Although calendars seem to be ubiquitous, familiar, and interoperable, the examples I’ve been collecting in the wild show that, even though the standard has been around for over a decade, the iCalendar ecosystem is still very immature. This validator will help that ecosystem evolve.

The validator itself, of course, will also evolve. You can send feedback to Doug at the address given on its home page. If you’re curating a location or a topic using the elmcity service, you can email me about problem calendars or bring them to the curators’ room.

Stewart Brand’s Whole Earth Discipline

I’ve deeply enjoyed every one of the Long Now seminars, but it wasn’t until this one by Stewart Brand in October that I really got what he’s up to as the convener of this remarkable series of talks. This time he appeared as speaker rather than host/interviewer, and he summarized his new book Whole Earth Discipline. Kevin Kelly calls the book “a short course on how to change your mind intelligently” — in this case, about cities, nuclear power, and genetic and planetary engineering. These are all things that Stewart Brand once regarded with suspicion but now sees as crucial tools for a sustainable world.

The book weaves together insights from many of my favorite Long Now talks.

I guess the Long Now seminar series is the long version of a course on changing your mind. I was already on board with genetic and planetary engineering, but now I think very differently about cities and nuclear power. The book joins these to a common principle: concentrate the harmful stuff. High-density populations and casks of nuclear waste do less harm than scattered populations and dispersed coal residue.

Don’t miss the annotations — a website that reproduces every paragraph containing citations, links those citations to their sources, and adds updates.

Kill-A-Watt, WolframAlpha, and the itemized electric bill

I’ve always imagined getting an itemized electric bill. We’re not there yet, but when I saw a Kill-A-Watt at Radio Shack last night I remembered the discussion thread at this 2007 blog post and impulsively bought it.

In a way I’m glad I waited until 2009 because a companion tool is available now that wasn’t then: WolframAlpha. Its fluency with units, conversions, and comparisons is really helpful if, like me, you can’t do that stuff quickly and easily in your head.

So, for example, I’m sitting at my desk with the Kill-A-Watt watching my main power strip. I have a mixer here that I use about an hour a week for podcast recording. There’s no power switch because, well, why bother, just leave it on, it’s a tiny draw. Negligible.

I reach over and unplug it. Now I’m drawing 9 fewer watts. But what does that mean? I consult Wolfram Alpha:

Q: 9 W

A: About half the power expended by the human brain.

On a monthly basis?

Q: 9 W * (30 * 24 hours)

A: About half the energy released by combustion of one kilogram of gasoline.

In gallons?

Q: ( 1 kilogram / density of gasoline ) / 2

A: Less than a fifth of a gallon.

Relative to my electric usage, which was 1291 kWh last month?

Q: 9 W / (1291kwh / ( 30 * 24 hours)) * 100

A: Half a percent.

In dollars?

Q: 9 W / (1291kwh / ( 30 * 24 hours)) * $205.60

A: One dollar.

I find these comparisons really helpful. A dollar a month is a rounding error. But if I think of it as the energy equivalent of driving my car 7.2 miles, that makes me want to reach over and unplug the mixer for the 715 hours per month I’m not using it.
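For the record, here’s the same back-of-the-envelope math in Python, using the figures from my bill as quoted above (1291 kWh, $205.60):

    standby_watts = 9
    hours_per_month = 30 * 24
    bill_kwh = 1291
    bill_dollars = 205.60

    device_kwh = standby_watts * hours_per_month / 1000.0   # ~6.5 kWh per month
    fraction = device_kwh / bill_kwh                         # ~0.005, i.e. half a percent
    dollars = fraction * bill_dollars                        # ~one dollar
    print(round(device_kwh, 1), round(fraction * 100, 1), round(dollars, 2))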

Saul Griffith has internalized these calculations, but most of us need help. A next-gen Kill-A-Watt that did these sorts of conversions and comparisons could be a real behavior changer.