Meta-tools for exploring explanations

At the Canadian University Software Engineering Conference in January, Bret Victor gave a brilliant presentation that continues to resonate in the technical community. No programmer could fail to be inspired by Bret’s vision, which he compellingly demonstrated, of a system that makes software abstractions visual, concrete, and directly manipulable. Among the inspired were Eric Maupin and Chris Granger, both of whom quickly came up with their own implementations — in C# and ClojureScript — of the ideas Bret Victor had fleshed out in JavaScript.

Others imagined what the educational value of such a system might, or might not, be. Noting that Bret’s demo “looks like a powerful visualization system,” Mark Guzdial wrote:

The problem is that visualization is about making information immediate and accessible, but learning is about changes in the mind — invisible associations and structures.

Ned Gulley echoed Mark’s concerns. Citing Bret’s 2009 essay Simulation as a Practical Tool — which contains live demos that use simulation to solve a physics word problem — Ned writes:

It’s beautiful! And it works well as long as you don’t want to modify the essential parameters of the problem. But Victor isn’t helping us learn the metatools that he uses to create this environment. To be fair, that wasn’t his goal, but as a user, I feel like I’m locked in a pretty small task box. More to the point, it’s expensive to create these interactive gems, and there’s only one Bret Victor.

Victor’s real power is his ability to rapidly create and deploy these tools. In a twinkling he can size up a task that is worth studying, put a box around it and spin a tool. He does this so effortlessly, with such mesmerizing legerdemain, that we lose sight of this meta-skill. What Victor was really doing in his talk was illustrating the power of tool spinning, the rapid creation of customized, context-sensitive, insight-generating tools. Direct manipulation is good, but the nature of direct manipulation changes with the context, and the context can’t always be anticipated.

My preferred goal is to make tool spinning (and tool sharing) as easy as possible. If tool spinning is easy, if that is the expressive skill that we give our users, then small task boxes aren’t a problem. You can always make more tools.

Don’t use the thing Bret made. Do the thing that Bret does.

Ned’s right about tool-spinning and I’ll come back to that. But first I want to consider Mark Guzdial’s critique, cited by Ned, that interactive simulations don’t help us build the mental muscles we need to reason analytically and symbolically. Indeed not. But Bret doesn’t claim that they do! In Simulation as a Practical Tool he says flatly:

Regardless of how it’s taught, most people will never be comfortable entering this level of abstraction in order to explore the problems of their lives.

That’s true for the physics problem Bret solves in that essay, it’s true for the programming challenges he tackles in his CUSEC demo, and it’s also true in all scientific and technological realms. Some people can reason effectively using abstract symbol systems. Most think more concretely and reason best about what they can see, touch, explore, control. What matters in the end is that people can reason effectively, about all sorts of things, especially about the scientific knowledge and technological machinery on which our civilization depends.

In his 2011 essay Explorable Explanations, and the companion demo Ten Brighter Ideas, Bret imagines a world in which public discourse about energy (or health or climate or economics) is facilitated by an explorable explanation that:

  • States assumptions, and ties them to supporting data.
  • Makes connections among interacting variables.
  • Enables the viewer/user to vary assumptions and observe consequences.

An example from Ten Brighter Ideas:

Suppose 100% of US households replaced 1 bulb at random with a compact fluorescent.

This, we learn, would save 11.6 TWh per year, which is equivalent to 1.5 nuclear reactors or 9.5 coal plants. The underlined elements — 100% of households, 1 bulb — become sliders you can use to vary these inputs. What if only 20% of households get with the program? If each replaces 5 bulbs, we can get the same 11.6 TWh savings.
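
The arithmetic behind those sliders can be sketched as a tiny parameterized model (a sketch only: the function name, household count, and wattage figures are my own illustrative assumptions, not Bret's actual code):

```python
def annual_savings_twh(fraction_of_households, bulbs_per_household,
                       households=111e6,      # assumed number of US households
                       watts_saved=50,        # e.g. a 75W incandescent -> 25W CFL
                       hours_per_year=4380):  # lit half of every day
    """Annual electricity savings, in TWh, for the given slider settings."""
    wh = (fraction_of_households * households * bulbs_per_household
          * watts_saved * hours_per_year)
    return wh / 1e12  # Wh -> TWh

# The sliders trade off against one another: 100% of households replacing
# 1 bulb saves the same energy as 20% of households replacing 5 bulbs.
assert abs(annual_savings_twh(1.0, 1) - annual_savings_twh(0.2, 5)) < 1e-9
```

In an explorable explanation, every one of those defaults would be exposed as a slider rather than buried in the code.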

Wait a sec. 11.6 TWh? Terawatt hours? Let’s check. If 111 million households each swap out one 75W bulb for a 25W bulb, saving 50W each for 180 hours (i.e. half of each day for a year), we’re looking at 111,000,000 * 50W * 180hr = 999 GWh. We’re off by a factor of about 1000. Let’s find out why.
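
The same back-of-envelope check, with each assumption named explicitly (the figures are the ones used in the paragraph above, taken as written):

```python
def estimate_savings_gwh(households, watts_saved_per_bulb, hours_lit):
    """Back-of-envelope annual savings, in GWh."""
    wh = households * watts_saved_per_bulb * hours_lit
    return wh / 1e9  # Wh -> GWh

# 111 million households, 50 W saved per bulb, 180 hours
print(estimate_savings_gwh(111e6, 50, 180))  # -> 999.0
```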

The constants Bret is using are right there in the HTML. Here’s the relevant one:

<div id="k_nuclearAnnualEnergy">796.488e12</div>
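
Since the constants live in the page markup, they can be checked programmatically (a minimal sketch; the regex is mine, and I hard-code the div here rather than fetching the page):

```python
import re

# The constant exactly as it appears in the page source.
html = '<div id="k_nuclearAnnualEnergy">796.488e12</div>'

m = re.search(r'id="(k_\w+)">([0-9.e+-]+)</div>', html)
name, value = m.group(1), float(m.group(2))
print(name, value)  # -> k_nuclearAnnualEnergy 796488000000000.0
```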

Likewise the data that Bret is using is right there in the HTML. Here’s the relevant citation:

US Nuclear Generation of Electricity: December 2009

The spreadsheet gives total US nuclear power production for 2009 as 798,854,585 MWh. That’s roughly 800 million MWh. I guess the constant should have been 796.488e9, not 796.488e12. (Or the units should have been GW not TW.) Checking that with Wolfram Alpha (http://www.wolframalpha.com/input/?i=800+million+MWh): 800 million MWh comes to about 1/4 of the yearly energy production of all the world’s nuclear power plants. OK, that makes sense. I can believe US nuclear energy production is in the ballpark of 1/4 of worldwide production.
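
Unit conversions are exactly where checks like this go astray, so it's worth writing them down (1 MWh = 1e6 Wh; 1 TWh = 1e6 MWh):

```python
# Total US nuclear generation for 2009, per the cited spreadsheet (MWh).
mwh_2009 = 798_854_585

wh_2009  = mwh_2009 * 1e6  # 1 MWh = 1e6 Wh
twh_2009 = mwh_2009 / 1e6  # 1 TWh = 1e6 MWh

print(f"{twh_2009:.1f} TWh")  # -> 798.9 TWh
```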

Now, here’s the claim that Bret’s active document explores:

If every U.S. household installed just one compact fluorescent light bulb, it would displace the electricity provided by one nuclear reactor. 1 = 1!

It comes from a brochure at BeyondNuclear.org entitled Ten Reasons to Say No to Nuclear and Ten Brighter Ideas. As we can see, that’s wrong. We’d need 1000 bulbs per household, not 1 bulb, to displace one nuclear reactor. 1000 = 1!

This kind of thing is embarrassing, but it happens all the time. And Bret will surely wince if he happens to read this (unless I’ve got it wrong somehow, in which case I’ll wince).

Wince. (See comments.)

We need robust explorable explanations that state assumptions, link to supporting data, and assemble context that enables us to cross-check assumptions and evaluate consequences.

And we need them everywhere, for everything. Consider, for example, the current debate about fracking. We’re having this conversation because, as Daniel Yergin explains in The Quest, a natural gas revolution has gotten underway pretty recently. There’s a lot more of it available than was thought, particularly in North America, and we can recover it and burn it a lot more cleanly than the coal that generates so much of our electric power. Are there tradeoffs? Of course. There are always tradeoffs. What cripples us is our inability to evaluate them. We isolate every issue, and then polarize it. Economist Ed Dolan writes:

These anti-frackers have a simple solution: ban it.

The pro-frackers, too, have a simple solution: get the government out of the way and drill baby, drill.

The environmental impacts of fracking are a real problem, but one to which neither prohibition nor laissez faire seems a sensible solution. Instead, we should look toward mitigation of impacts using economic tools that have been applied successfully in the case of other environmental harms.

In order to do that, we’ve got to be able to put people in both camps in front of an explorable explanation with a slider that varies how much natural gas we choose to produce, linked to other sliders that vary what we pay, in dollars, lives, and environmental impact, not only for fracking but also for coal production and use, for Middle East wars, and so on.

Where will these tools come from? As Ned Gulley says, they’re expensive to build, and you need a scarce talent like Bret Victor to build them. And even then, as we’ve seen here, it’s tricky to get right. Where will the tool-spinning meta-tools come from?

I actually think Bret knows, or at least intuits very well, how we can get there, and I think he has shown the way in his book-length 2006 essay Magic Ink. That’s what I meant to write about today, but after an unexpected detour through Ten Brighter Ideas I think I’ll do that another time. Meanwhile, if you haven’t read Magic Ink, I commend it to you.


9 thoughts on “Meta-tools for exploring explanations”

    1. Make data-driven tools, connected to our best-guess consensus on the facts, to help us follow the money, count it, and identify and interconnect the sources. Meanwhile, make more tools to help us analyze the energy content of the fuel in question, its abundance, and its environmental impact relative to other fuels. Sounds Pollyanna, I know. Easier said than done. But ultimately, why not? What would you rather see the people who could do it work on?

  1. Hmm, “half of each day for a year” is not 180 hours, but 365*24/2=4380 hours?

  2. On one side, you’re basically saying “we need widgets for everything” — you want to be able to manipulate *any* concept, hack *any* metaphor, as quickly as possible; for this, you need standard, reusable software blocks that take care of the “heavy lifting” of software UI, so that (scarce) resources can be dedicated to modelling the actual problems we want to tackle.

    On the other, what Bret says in Magic Ink (among various other things) and in the presentation is that we should *unlearn* our approach to software UI design, of which “widgetry” is a cornerstone, and build interfaces that are finely tuned for each particular task, because they remove the barrier between action and intention. (This is, nowadays, a common theme; after iOS successfully attacked a few long-established conventions in the field, many people feel that we need to completely rethink every element of software UI. I’m not entirely convinced this is a good thing, as we risk throwing out many babies just to get rid of some dirty bathwater, but I digress.)

    It’s clearly a tension that cannot produce a universal solution. We will have to keep implementing trade-offs as best as we can, depending on context and available resources.

  3. One thing I like about Bret Victor’s work is that it embodies all sorts of contradictions. The Explorable Explanations idea is heavy on interaction, but Magic Ink argues eloquently that we ought to minimize the need for interaction.

    Custom software UI design is certainly bottlenecked by the scarce resource of programmers. Custom information display is likewise bottlenecked by the scarce resource of designers who can imagine and implement Tuftean information visualization. In both realms widgetry enables mere mortals to get stuff done, but not as much, or as well, as we need. And the widgetry hasn’t evolved much. In software UI we’ve had the same menus and sliders, in infoviz the same bar charts and bubble charts, forever.

    What I think Magic Ink aims for, and anyway what I long for, is a “user innovation toolkit” (http://www.infoworld.com/d/developer-world/toolkits-user-innovation-934) that enables software designers and infoviz creators to do their work and, at the same time, express intentions that feed back to the creator of the toolkit and guide its evolution.

    Well, we can dream, can’t we?
