Tag Archives: computationalthinking

Talking with Scott Rosenberg about Say Everything, Dreaming in Code, and MediaBugs

My guest for this week’s Innovators show is Scott Rosenberg. He’s the author of two books, most recently Say Everything, subtitled How blogging began, what it’s becoming, and why it matters. Before that he was the Chandler project’s embedded journalist, and told its story in Dreaming in Code. His current project is MediaBugs, a soon-to-be-launched service that aims to crowd-source the reporting and correction of errors in media coverage.

We began with a discussion of Say Everything. Its account of how blogging came to be is a great read, and a much-needed history of the era. Since I know that story quite well, though, we focused on the blogosphere’s present state and future prospects. Blogging is still a new medium. But those of us who experienced blogging as a conversation flowing through decentralized networks of blogs have now seen still newer (and more centralized) social media capture a lot of that conversation.

The good news is that more people are able to be involved. The fact that millions of people fired up blogs was, and remains, astonishing. But active blogging has proven to be a hard thing to sustain. Meanwhile hordes of people find it relatively easy to be active on Facebook and Twitter.

The bad news is that, as always, there’s no free lunch. While it’s easier to create and sustain network effects using Facebook and Twitter, you sacrifice control of your own data. Scott thinks we’re moving through a transitional phase, and I hope he’s right. We really need the best of two worlds. First, control of the avatars we project into the cloud, and of the data that surrounds them, insofar as that’s possible. Second, frictionless interaction. The tension between these two conflicting needs will define the future of social media.

Two of Scott’s other projects, Dreaming in Code and MediaBugs, are connected in an interesting way. The media project adopts terminology (“filing bugs”) and process (version control, issue tracking) from the realm of software. If MediaBugs helps make non-technical people aware of that crucial way of thinking and acting, it will be a bonus outcome.

A geek anti-manifesto

The other day my colleague Scott Hanselman wrote a useful essay called 10 Guerilla Airline Travel Tips for the Geek-Minded Person. It’s a mixture of technical and social strategies. The tech strategies include marshaling data with the help of services like Tripit, FlightStats, and SMS alerts. The social strategies include being nice to service reps, and using the information you’ve marshaled in order to make precise requests that they’re most likely to be able to satisfy.

Scott writes:

I’m a geek, I like tools and I solve problems in my own niche way.

That statement, along with the essay’s tagline — …Tips for the Geek-Minded Person — has been bothering me ever since I read it. Why is it geeky to marshal the best available data? Why is it geeky to use that data to improve your interaction with people and processes?

My Wikipedia page includes this sentence:

Udell has said, “I’m often described as a leading-edge alpha geek, and that’s fair”. 1

I did say that, it’s true. But I’ve come to regret that I did. For a while I thought that was because geek was once defined primarily as a carnival freak. That’s changed, of course. Nowadays the primary senses of the word are obsessive technical enthusiasm and social awkwardness. Which is better than being somebody who bites the heads off chickens. But it’s still not how I want to identify myself. Much more importantly, it’s not how I want the world to identify the highest and best principles of geek identity and culture.

Fluency with digital tools and techniques shouldn’t be a badge of membership in a separate tribe. In conversations with Jeannette Wing and Joan Peckham I’ve explored the idea that what they and others call computational thinking is a form of literacy that needs to become a fourth ‘R’ along with Reading, Writing, and Arithmetic.

The term computational thinking is itself, of course, a problem. In comments here, several folks suggested systems thinking, which seems better.

Here’s a nice example of that kind of thinking, from Scott’s essay:

#3 Make their job easy

Speak their language and tell them what they can do to get you out of their hair. Refer to flights by number when calling reservations; it saves huge amounts of time. For example, today I called United and I said:

“Hi, I’m on delayed United 686 to LGA from Chicago. Can you get me on standby on United 680?”

Simple and sweet. I noted that UA680 was the FIRST of the 6 flights delayed and the next one to leave. I made a simple, clear request that was easy to grant. I told them where I was, what happened, and what I needed all in one breath. You want to ask questions where the easiest answer is “Sure!”

I see two related kinds of systems thinking at work here. One engages with an information system in order to marshal data. Another engages with a business process — and with the people who implement that process — in a way that leverages the data, reduces process friction, and also reduces interpersonal friction.

These are basic life skills that everyone should want to master. If we taught them broadly, and if everyone learned them, then this sort of mastery wouldn’t attract the geek label. But we don’t teach these skills broadly, most people don’t learn them, and the language we use isn’t our friend. If systems thinking is geeky then only geeks will be systems thinkers. We can’t afford for that to be true. We need everyone to be a systems thinker.


1 Actually I’d say that Scott Hanselman is a leading-edge alpha geek. I am, at best, a trailing-edge beta or gamma geek. But if someone were to remove the word entirely from my Wikipedia page, I’d be fine with that. I no longer want to be labeled as any kind of geek.

Computational thinking and energy literacy

One of the themes I’ve been exploring for the past few years is computational thinking. It’s an evocative phrase that has led me in a few different directions. One is my intentional use of tagging and syndication as key strategies for social information management. Another is my growing interest in the kinds of uses of WolframAlpha outlined in Kill-A-Watt, WolframAlpha, and the itemized electric bill.

A lot of what I’ve read and heard about WolframAlpha seems to focus on its encyclopedic nature. But it aims to be a compendium of computable knowledge, and as such I think its highest and best use will be to enable computational thinking.

Here’s one small but telling example from my Kill-A-Watt essay:

Q: 9 W * (30 * 24 hours)

A: About half the energy released by combustion of one kilogram of gasoline.

Q: ( 1 kilogram / density of gasoline ) / 2

A: Less than a fifth of a gallon.

I was trying to understand what 9 Watts, over the course of a month, means. WA offered the comparison to the amount of energy in gasoline, but reported in kilograms. I still think in gallons. The conversion is:

( 1 kg / .73 kg/L ) / 2 = .685 L; .685 L * .264 gallons/L = .18 gallons

If you don’t do that kind of thing on a regular basis, though — as I don’t, and as many of us don’t — it’s hard to get over the activation threshold. Looking up and applying the relevant formulae is a multistep procedure. WA collapses it into a single step:

( 1 kilogram / density of gasoline ) / 2

It knows the density of gasoline, and when you do the computation it reports results in a variety of units, including gallons.
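The chain of conversions that WolframAlpha collapses can be sketched in a few lines of Python. The constants here are my own assumptions, not anything WA reports: roughly 46.4 MJ/kg for gasoline’s combustion energy, 0.73 kg/L density, and 3.785 liters per gallon.

```python
# A sketch of the multistep conversion that WolframAlpha collapses.
# Assumed constants: ~46.4 MJ/kg combustion energy for gasoline,
# 0.73 kg/L density, 3.785 L per gallon.

GASOLINE_MJ_PER_KG = 46.4
GASOLINE_KG_PER_L = 0.73
L_PER_GALLON = 3.785

# Energy of a 9 W draw over a 30-day month, in megajoules
energy_mj = 9 * 30 * 24 * 3600 / 1e6               # ~23.3 MJ

# Equivalent mass of gasoline: about half a kilogram
kg_equivalent = energy_mj / GASOLINE_MJ_PER_KG     # ~0.50 kg

# Volume of that gasoline, in gallons
gallons = kg_equivalent / GASOLINE_KG_PER_L / L_PER_GALLON   # ~0.18 gal

print(f"{energy_mj:.1f} MJ = {kg_equivalent:.2f} kg of gasoline = {gallons:.2f} gallons")
```

Each intermediate step is exactly the lookup-and-apply work that WA does behind the single query.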

I was feeling a bit guilty about needing this sort of intellectual crutch. But then I heard from a friend who had just read the Kill-A-Watt/WA piece. It reminded him of an Energy Tribune article entitled Understanding E=mc2 which concludes:

A 1000-MW coal plant — our standard candle — is fed by a 110-car “unit train” arriving at the plant every 30 hours — 300 times a year. Each individual coal car weighs 100 tons and produces 20 minutes of electricity. We are currently straining the capacity of the railroad system moving all this coal around the country. (In China, it has completely broken down.)

A nuclear reactor, on the other hand, refuels when a fleet of six tractor-trailers arrives at the plant with a load of fuel rods once every eighteen months. The fuel rods are only mildly radioactive and can be handled with gloves. They will sit in the reactor for five years. After those five years, about six ounces of matter will be completely transformed into energy. Yet because of the power of E = mc2, the metamorphosis of six ounces of matter will be enough to power the city of San Francisco for five years.

This is what people find hard to grasp. It is almost beyond our comprehension. How can we run an entire city for five years on six ounces of matter with almost no environmental impact? It all seems so incomprehensible that we make up problems in order to make things seem normal again. A reactor is a bomb waiting to go off. The waste lasts forever, what will we ever do with it? There is something sinister about drawing power from the nucleus of the atom. The technology is beyond human capabilities.

But the technology is not beyond human capabilities. Nor is there anything sinister about nuclear power. It is just beyond anything we ever imagined before the beginning of the 20th century. In the opening years of the 21st century, it is time to start imagining it.

Six ounces of matter? Really? My friend wrote:

I remember at the time I tried to run simple order of magnitude calculations in my head to verify the number, but it got messy, I got sidetracked, and forgot.

This time I went to Wolfram-Alpha, and the answer was right there, clear as day, in seconds (and yes, it’s really 6 ounces of matter).

I went back to the article, and the only power figure reported for San Francisco was that Hetch Hetchy Dam “provides drinking water and 400 megawatts of electricity to San Francisco.” That alone would come to:

400MW * 5 years = ~700 grams = ~25 ounces

Or, if Wikipedia is right and the dam yields only about 220MW, then:

220MW * 5 years = ~386 grams = ~14 ounces

Of course since San Francisco has other sources of power, the amount of matter would be more. Still, this doesn’t invalidate the author’s point: we’re talking ounces, not tons.
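The arithmetic behind those gram figures is just E = mc²: run a power level for five years, then divide the resulting energy by c squared. Here is a Python sketch; the 400MW and 220MW inputs come from the discussion above, the constants are standard.

```python
# The arithmetic behind "~700 grams" and "~386 grams": run a power level
# for five years, divide the energy by c squared to get the mass converted.

C = 2.998e8                   # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7
GRAMS_PER_OUNCE = 28.35

def grams_converted(power_watts, years):
    """Mass in grams fully converted to energy to supply this power."""
    energy_joules = power_watts * years * SECONDS_PER_YEAR
    return energy_joules / C**2 * 1000   # kg -> g

for mw in (400, 220):
    g = grams_converted(mw * 1e6, 5)
    print(f"{mw} MW for 5 years: ~{g:.0f} g, ~{g / GRAMS_PER_OUNCE:.0f} oz")
```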

When I mentioned this to my friend, though, he wrote back:

I went the other way around:

http://www.wolframalpha.com/input/?i=6oz * c^2 in gw hr

It gives 4247 GWhr which is definitely in the ballpark for San Francisco.

Sweet!

I hadn’t actually followed up on that result until just now. Over 5 years it comes to:

http://www.wolframalpha.com/input/?i=4247GWh / 5 years

That’s about 100MW: a quarter of what the article reports for Hetch Hetchy, half of what Wikipedia reports, and I still don’t know how it relates to San Francisco’s total power draw.
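My friend’s reverse calculation can be sketched the same way, assuming the six ounces are avoirdupois ounces (about 28.35 grams each):

```python
# Six ounces of matter, converted entirely to energy, expressed in GWh
# and as average power over five years. Assumes avoirdupois ounces.

C = 2.998e8                   # speed of light, m/s
KG_PER_OUNCE = 0.02835
JOULES_PER_GWH = 3.6e12

energy_j = 6 * KG_PER_OUNCE * C**2
energy_gwh = energy_j / JOULES_PER_GWH         # ~4247 GWh
avg_mw = energy_gwh * 1000 / (5 * 365 * 24)    # spread over 5 years: ~97 MW

print(f"{energy_gwh:.0f} GWh, ~{avg_mw:.0f} MW averaged over five years")
```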

Even so, we’re playing in the kind of ballpark we need to be able to play in if we’re going to have any kind of reasoned discussion about future energy mixes like Saul Griffith’s straw-man proposal of:

2TW Solar thermal, 2TW Solar PV, 2TW wind, 2TW geothermal, 3TW nukes, 0.5TW biofuels

What I find most striking about the energy literacy talks that Saul’s been giving lately is his ability to move fluidly between the personal quantities of energy we experience directly, the city-scale quantities we experience indirectly, and the global quantities that most of us can scarcely imagine.

My point here isn’t to revisit the dispute that Stewart Brand and Amory Lovins are having about the future role of nuclear power. Nor to endorse William Tucker, the author of that Energy Tribune article, who is a journalist, not a scientist or an engineer, and whose argument fails to address issues of security and waste disposal.

Instead I want to focus on how mental power tools like WolframAlpha, by making computable knowledge easier to access and manipulate, can augment our ability to think computationally. If we’re going to reason democratically about the energy, climate, and economic challenges we face, we’re going to need those power tools to be available broadly and used well.

Kill-A-Watt, WolframAlpha, and the itemized electric bill

I’ve always imagined getting an itemized electric bill. We’re not there yet, but when I saw a Kill-A-Watt at Radio Shack last night I remembered the discussion thread at this 2007 blog post and impulsively bought it.

In a way I’m glad I waited until 2009 because a companion tool is available now that wasn’t then: WolframAlpha. Its fluency with units, conversions, and comparisons is really helpful if, like me, you can’t do that stuff quickly and easily in your head.

So, for example, I’m sitting at my desk with the Kill-A-Watt watching my main power strip. I have a mixer here that I use about an hour a week for podcast recording. There’s no power switch because, well, why bother, just leave it on, it’s a tiny draw. Negligible.

I reach over and unplug it. Now I’m drawing 9 fewer watts. But what does that mean? I consult WolframAlpha:

Q: 9 W

A: About half the power expended by the human brain.

On a monthly basis?

Q: 9 W * (30 * 24 hours)

A: About half the energy released by combustion of one kilogram of gasoline.

In gallons?

Q: ( 1 kilogram / density of gasoline ) / 2

A: Less than a fifth of a gallon.

Relative to my electric usage, which was 1291 kWh last month?

Q: 9 W / (1291kwh / ( 30 * 24 hours)) * 100

A: Half a percent.

In dollars?

Q: 9 W / (1291kwh / ( 30 * 24 hours)) * $205.60

A: One dollar.

I find these comparisons really helpful. A dollar a month is a rounding error. But if I think of it as the energy equivalent of driving my car 7.2 miles, that makes me want to reach over and unplug the mixer for the 715 hours per month I’m not using it.
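For anyone who wants to check the arithmetic, here is the whole month-of-9-watts calculation in one Python sketch. The 40 miles-per-gallon figure is an assumption implied by my numbers (0.18 gallons ≈ 7.2 miles); everything else is stated above.

```python
# The month-of-9-watts arithmetic in one place. The 40 mpg figure is an
# assumption implied by the numbers above (0.18 gallons ~ 7.2 miles);
# everything else comes from the post.

watts = 9
hours_per_month = 30 * 24
bill_kwh = 1291               # last month's usage
bill_dollars = 205.60         # last month's bill

device_kwh = watts * hours_per_month / 1000   # 6.48 kWh
share = device_kwh / bill_kwh                 # ~0.5% of the month's usage
dollars = share * bill_dollars                # ~$1
miles = 0.18 * 40                             # gasoline-equivalent driving

print(f"{share:.1%} of the bill, ${dollars:.2f}, ~{miles:.1f} miles of driving")
```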

Saul Griffith has internalized these calculations, but most of us need help. A next-gen Kill-A-Watt that did these sorts of conversions and comparisons could be a real behavior changer.