There was no pumpkin riot in Keene

Recently, in a store in Santa Rosa, my wife Luann was waiting behind another customer whose surname, the clerk was thrilled to learn, is Parrish. “That’s the name of the guy in Jumanji,” the clerk said. “I’ve seen that movie fifty times!”

“I’m from Keene, New Hampshire,” Luann said, “the town where that movie was filmed.”

It was a big deal when Robin Williams came to town. You can still see the sign for Parrish Shoes painted on a brick wall downtown. Recently it became the local Robin Williams memorial:

Then the penny dropped. The customer turned to Luann and said: “Keene? Really? Isn’t that where the pumpkin riot happened?”

The Pumpkin Festival began in 1991. In 2005 I made a short documentary film about the event.

It’s a montage of marching bands, face painting, music, kettle corn, folk dancing, juggling, and of course endless ranks of jack-o-lanterns by day and especially by night. We weren’t around this year to see it, but our friends in Keene assure us that if we had been, we’d have seen a Pumpkin Festival just like the one I filmed in 2005. The 2014 Pumpkin Festival was the same family event it’s always been. Many attendees had no idea that, at the other end of Main Street, in the neighborhood around Keene State College, the now-infamous riot was in progress.

No pumpkins were harmed in the riot. Bottles, cans, and rocks were thrown, a car was flipped, fires were set, but — strange as it sounds — none of these activities intersected with the normal course of the festival. Two very different and quite unrelated events occurred in the same town on the same day.

The riot had precursors. Things had been getting out of control in the college’s neighborhood for the past few years. College and town officials were expecting trouble again, and thought they were prepared to contain it. But things got so crazy this year that SWAT teams from around the state were called in to help.

In the aftermath there was an important discussion of white privilege, and of the double standard applied to media coverage of the Keene riot versus the Ferguson protests. Here’s The Daily Kos:

Black folks who are protesting with righteous rage and anger in response to the killing of Michael Brown in Ferguson have been called “thugs”, “animals”, and cited by the Right-wing media as examples of the “bad culture” and “cultural pathologies” supposedly common to the African-American community.

Privileged white college students who riot at a pumpkin festival are “spirited partiers”, “unruly”, or “rowdy”.

Unfortunately the title of that article, White Privilege and the ‘Pumpkin Fest’ Riot of 2014, helped perpetuate the false notion that the Pumpkin Festival turned into a riot. When I mentioned that to a friend he said: “Of course, the media always get things wrong.”

It would be easy to blame the media. In fact, the misconception about what happened in Keene is a collective error. On Twitter, for example, #pumpkinfest became the hashtag that gathered riot-related messages, photos, and videos, and that focused the comparison to Ferguson. Who made that choice? Not the media. Not anyone in particular. It was the network’s choice. And the network got it wrong. Our friends in Keene saw it happening and tried to flood the social media with messages and photos documenting a 2014 Pumpkin Festival that was as happy and peaceful as every other Pumpkin Festival. But once the world had decided there’d been a pumpkin riot it was impossible to reverse that decision.

Is Keene’s signature event now ruined? We’ll see. I don’t think anybody yet knows whether it will continue. Meanwhile it’s worth reflecting on how conventional and social media converged on the same error. There’s nothing magical about the network. It’s just us, and sometimes we get things wrong.

How recently has the website been updated?

Today’s hangout with Gardner Campbell and Howard Rheingold, part of the Connected Courses project, dovetailed nicely with a post I’ve been meaning to write. Our discussion topic was web literacy. One of the literacies that Howard has been promoting is critical consumption of information or, as he more effectively says, “crap detection.” His mini-course on the subject links to a page entitled The CRAP Test which offers this checklist:

  • Currency
      – How recent is the information?
      – How recently has the website been updated?
      – Is it current enough for your topic?
  • Reliability
      – What kind of information is included in the resource?
      – Is the content of the resource primarily opinion? Is it balanced?
      – Does the creator provide references or sources for data or quotations?
  • Authority
      – Who is the creator or author?
      – What are the credentials?
      – Who is the publisher or sponsor?
      – Are they reputable?
      – What is the publisher’s interest (if any) in this information?
      – Are there advertisements on the website?
  • Purpose/Point of View
      – Is this fact or opinion?
      – Is it biased?
      – Is the creator/author trying to sell you something?

The first criterion, Currency, seems more straightforward than the others. But it isn’t. Web servers often don’t know when the pages they serve were created or last edited. The pages themselves may carry that information, but not in any standard way that search engines can reliably use.

In an earlier web era there was a strong correspondence between files on your computer and pages served up on the web. In some cases that remains true. My home page, for example, is just a hand-edited HTML file. When you fetch the page into your browser, the server transmits the following information in HTTP headers that you don’t see:

HTTP/1.1 200 OK
Date: Thu, 23 Oct 2014 20:54:46 GMT
Server: Apache
Last-Modified: Wed, 06 Aug 2014 19:28:27 GMT

That page was served today but last edited on August 6th.
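If you’re curious about a particular page, you can ask for those headers yourself. Here’s a minimal Python sketch using only the standard library; the example.com URL is a placeholder for whatever page you want to check:

import urllib.request

# A HEAD request returns headers without the page body. Many dynamically
# generated pages omit Last-Modified, so the second print may show None.
req = urllib.request.Request("https://example.com/", method="HEAD")
with urllib.request.urlopen(req) as resp:
    print(resp.headers.get("Date"))
    print(resp.headers.get("Last-Modified"))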

Nowadays, though, for many good reasons, most pages aren’t hand-edited HTML. Most are served up by systems that assemble pages dynamically from many parts. Such systems may or may not transmit a Last-Modified header. If they do they usually report when the page was assembled, which is about the same time you read it.

Search engines can, of course, know when new pages appear on the web. And there are ways to tap into that knowledge. But such methods are arcane and unreliable. We take it for granted that we can list files in folders on our computers by date. Reviewing web search results doesn’t work that way, so it’s arduous to apply the first criterion of C.R.A.P. detection. If you’re lucky the URL will encode a publication date, as is often true for blogs. In such cases you can gauge freshness without loading the page. Otherwise you’ll need to click the link and look around for cues. Some web publishing systems report when items were published and/or edited, many don’t.
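When a URL does encode a date, even a crude pattern match can surface it. Here’s a rough sketch; date_from_url is a hypothetical helper and the example URL is made up:

import re

def date_from_url(url):
    # Look for a blog-style /YYYY/MM/DD/ (or /YYYY/MM/) path segment.
    m = re.search(r"/(\d{4})/(\d{1,2})(?:/(\d{1,2}))?/", url)
    if not m:
        return None
    year, month, day = m.group(1), int(m.group(2)), int(m.group(3) or 1)
    return "{}-{:02d}-{:02d}".format(year, month, day)

print(date_from_url("https://example.com/2014/10/23/some-post/"))  # 2014-10-23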

Social media tend to mask this problem because they encourage us to operate in what Mike Caulfield calls StreamMode:

StreamMode is the approach to organizing your thoughts as a history, integrated primarily as a sequence of events. You know that you are in StreamMode if you never return to edit the things you are posting on the web.

He contrasts StreamMode with StateMode:

In StateMode we want a body of work at any given moment to be seen as an integrated whole, the best pass at our current thinking. It’s not a journal trail of how we got here, it’s a description of where we are now.

The ultimate expression of StateMode is the wiki.

But not only the wiki. Any website whose organizing principle is not reverse chronology is operating in StateMode. If you’re publishing that kind of site, how can you make its currency easier to evaluate? If you can choose your publishing system, prefer one that can form URLs with publication dates and embed last-edited timestamps in pages.

In theory, our publishing tools could capture timestamps for the creation and modification of pages. Our web servers could encode those timestamps in HTTP headers and/or in generated pages, using a standard format. Search engines could use those timestamps to reliably sort results. And we could all much more easily evaluate the currency of those results.
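A filesystem-backed publishing tool already knows each source file’s modification time, so propagating it is not hard in principle. Here’s a rough sketch; the file path is hypothetical, and the Open Graph meta tag shown is just one of several conventions in the wild, which is exactly the problem:

import datetime
import email.utils
import pathlib

src = pathlib.Path("pages/example.md")  # hypothetical source file
mtime = src.stat().st_mtime

# The timestamp could travel in an HTTP header...
print("Last-Modified:", email.utils.formatdate(mtime, usegmt=True))

# ...and/or inside the generated page, e.g. as an Open Graph meta tag.
iso = datetime.datetime.fromtimestamp(mtime, datetime.timezone.utc).isoformat()
print('<meta property="article:modified_time" content="%s">' % iso)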

In practice that’s not going to happen anytime soon. Makers of publishing tools, servers, and search engines would have to agree on a standard approach and form a critical mass in support of it. Don’t hold your breath waiting.

Can we do better? We spoke today about the web’s openness to user innovation and cited the emergence of Twitter hashtags as an example. Hashtags weren’t baked into Twitter. Chris Messina proposed using them as a way to form ad-hoc groups, drawing (I think) on earlier experience with Internet Relay Chat. Now the scope of hashtags extends far beyond Twitter. The tag for Connected Courses, #ccourses, finds essays, images, and videos from all around the web. Nine keystrokes join you to a group exploration of a set of ideas. Eleven more, #2014-10-23, could locate you on that exploration’s timeline. Would it be worth the effort? Perhaps not. But if we really wanted the result, we could achieve it.

GitHub Pages For The Rest Of Us

In A web of agreements and disagreements I documented one aspect of a recent wiki migration: conversion of MediaWiki’s markup lingo, wikitext, to GitHub’s lingo, GitHub Flavored Markdown. Here I’ll describe the GitHub hosting arrangement we ended up with.

There were two ways to do it. We could use GitHub’s built-in per-repository wiki, powered by an engine called Gollum. Or we could use GitHub Pages, a general-purpose web publishing system that powers (most famously) the website for the Ruby language. The engine behind GitHub Pages, Jekyll, is also often used for blogs. If you scan the list of Jekyll sites you’ll see that a great many are software developers’ personal blogs. That’s no accident. They are the folks who most appreciate the benefits of GitHub Pages, which include:

Simple markup. You write Markdown, Jekyll converts it to HTML. Nothing prevents you from mixing in HTML, or using HTML exclusively, but the simplicity of Markdown — more accurately, GitHub Flavored Markdown — is a big draw.

No database. For simple websites and blogs, a so-called dynamic system, backed by a database, can be overkill. You have to install and maintain the database, which then regulates all access to your files. Why not just create and edit plain old files, either in a simple lingo like Markdown or in full-blown HTML, then squirt them through an engine that HTMLizes the Markdown (if necessary) and flows them through a site template? People call sites made this way static sites, which I think is a bit of a misnomer. It’s a mouthful but I prefer to call them dynamically generated and statically served. I’ve built a lot of web publishing systems over the years and they all work this way. If what you’re publishing is data that naturally resides in a database then of course you’ll need to feed the site from the database. But if what you’re publishing is stuff that you write, and that most naturally lives in the filesystem, why bother? (A sketch of that approach appears just after this list of benefits.)

Version control. A GitHub Pages site is just a branch in a GitHub repository associated with some special conventions. And a GitHub repository offers many powerful affordances, including an exquisitely capable system for logging, tracking, and visualizing the edits to a set of documents made by one or more people.

Collaborative editing. When more than one person edits a site, each can make a copy of the site’s pages, edit independently of others, and then ask the site’s owner to merge in the changes.

Issue tracking. Both authors and readers of the site can use GitHub to request changes and work collaboratively to resolve those requests.
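Here’s the promised sketch of “dynamically generated and statically served.” It assumes the third-party markdown package and a pages/ folder of Markdown source files; Jekyll does far more, of course, but the shape is the same: convert once, then serve plain files.

import pathlib
import markdown  # third-party package, e.g. pip install markdown

TEMPLATE = "<html><head><title>{title}</title></head><body>{body}</body></html>"

# Convert every Markdown page ahead of time; a web server then hands out
# the resulting .html files as ordinary static content.
for src in pathlib.Path("pages").glob("*.md"):
    body = markdown.markdown(src.read_text())
    html = TEMPLATE.format(title=src.stem, body=body)
    src.with_suffix(".html").write_text(html)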

You don’t have to be a programmer to appreciate these benefits. But GitHub is a programmer-friendly place. And its tools and processes are famously complex, even for programmers. If you already use those tools on a daily basis, GitHub Pages will feel natural. Otherwise you’ll need a brain transplant.

Which is a shame, really. All sorts of people would want to take advantage of the benefits of GitHub Pages. If it were packaged very differently, and presented as GitHub Pages For The Rest of Us, they might be able to. The collaborative creation and management of sets of documents is a general problem that’s still poorly solved for the vast majority of information workers. Mechanisms for collaborative editing, version control, and issue tracking often don’t exist. When they do they’re typically add-on features that every content management system implements in its own way. GitHub inverts that model. Collaborative editing, version control, and issue tracking are standard capabilities that provide a foundation on which many different workflows can be built. Programmers shouldn’t be the only ones able to exploit that synergy.

In this case, though, the authors of the wiki I was migrating are programmers. We use GitHub, and we know how to take advantage of the benefits of GitHub Pages. But there was still a problem. You don’t make a wiki with GitHub Pages, you make a conventional website. And while you can use GitHub Flavored Markdown to make it, the drill involves cloning your repository to a local working directory, then installing Jekyll and using it to compile your Markdown files into their HTML counterparts which you preview and finally push to the upstream repo. It’s a programmer’s workflow. We know the drill. But just because we can work that way doesn’t mean we should. Spontaneity is one of wiki’s great strengths. See something you want to change? Just click to make the page editable and do it. The activation threshold is as low as it can possibly be, and that’s crucial for maintaining documentation. Every extra step in the process is friction that impedes the flow of edits.

So we went with the built-in wiki. It’s easy to get started: you just click the Wiki link in the sidebar of your GitHub repository and start writing. You can even choose your markup syntax from a list that includes MediaWiki and Markdown. As we went along, though, we felt increasingly constrained by the fixed layout of the built-in wiki. Wide elements like tables and preformatted blocks of text got uncomfortably squeezed. You can create a custom sidebar but that doesn’t replace the default sidebar, which lists pages alphabetically in a way that felt intrusive. And we found ourselves using Markdown in strange ways to compensate for the inability to style the wiki.

If only you could use GitHub Pages in a more interactive way, without having to install Jekyll and then compile and push every little change. Well, it turns out that you can. Sort of.

I haven’t actually run Jekyll locally so I may be mistaken, but here’s how it looks to me. Jekyll compiles your site to a local directory which becomes the cache from which it serves up the results of its Markdown-to-HTML conversion. When you push your changes to GitHub, though, the process repeats. GitHub notices when you update the repo and runs Jekyll for you in the cloud. It compiles your Markdown to a cache that it creates and uses on your behalf.

If that’s how it works, shouldn’t you be able to edit your Markdown files directly in the repository, using GitHub’s normal interface for editing and proofing? And wouldn’t that be pretty close to the experience of editing a GitHub wiki?

Yes and yes, with some caveats. Creating new pages isn’t as convenient as in the wiki. You can’t just type in [see here](New Page) and then click the rendered link to conjure that new page into existence. Jekyll requires more ceremony. You have to manually create NewPage.md (not, evidently, “New Page.md”) in your repo. And then you have to edit NewPage.md and add something like this at the top:

---
title: New Page
layout: default
---

Since conjuring new pages by name is arguably the essence of a wiki, this clearly isn’t one. But you can create a page interactively using GitHub’s normal interface and, once that’s done, edit and preview NewPage.md the same way. To me the process feels more like using the built-in wiki than compiling locally with Jekyll. And it opens the door to the custom CSS and layouts that the built-in wiki precludes.

There are, alas, still more caveats. You can’t always believe the preview. Some things that look right in preview are wrong in the final rendering. And that final rendering isn’t immediate. Changes take more or less time to show up, depending (I suppose) on how busy GitHub’s cloud-based Jekyll service happens to be. So this is far from a perfect solution. If you only need something a bit more robust than your repository’s README.md file, then the built-in wiki is fine. If you’re creating a major site like ruby-lang.org, then you’ll want to install and run Jekyll locally. Between those extremes, though, there’s a middle ground. You can use GitHub Pages in a cloud-based way that delivers a wiki-like editing experience with the ability to use custom CSS and layouts.

I don’t think this particular patch of middle ground will appeal widely. Maybe a hypothetical GitHub Pages For The Rest Of Us will. Or maybe Ward Cunningham’s Smallest Federated Wiki (see also http://hapgood.us/tag/federated-wiki/) will. In any case, the ideas and methods that enable software developers to work together online are ones that everyone will want to learn and apply. The more paths to understanding and mastery, the better.

Voyage of the Captain Kirk Floating Arms Keyboard Chair

When we moved last month we let go of a great many things in order to compress our household and Luann’s studio into a set of ABF U-Pack containers. At one point we planned to shed all our (mostly second-hand) furniture, figuring it’d be cheaper to replace than to ship cross-country. But since Luann had acquired all that furniture, it was much harder for her to let go of it than me. So to put a bit more of my own skin into the game I sacrificed my beloved Captain Kirk chair with Floating Arms keyboard.

The idea was to preserve the essential one-of-a-kind keyboard and replace the commodity chair. Which was foolish: Bodybilt chairs don’t come cheap. But I was in the grip of an obsession to lighten our load, and there was no time left to sell it, so off to the curb it went. My friend John Washer and I immortalized the moment.

Then, happily, fate intervened. First another friend, George Ponzini, sensibly picked up the chair and took it home. Then we decided to use our reserve fourth U-Pack container to bring a sofa, some living room chairs, and other stuff we thought we’d leave behind. Now there was room for the Captain Kirk chair to come along on our voyage. George kindly brought it back, I packed it, and off to California we went.

Weeks later we unpacked our household containers in our rented home in Santa Rosa. When I set the chair down in my office, the hydraulic lifter broke. Not a disaster, I could live with it at the lowest setting until I could replace the lifter. But then, as we emptied box after box, I began to worry. The Floating Arms keyboard wasn’t showing up. Disaster!

Then, finally, it turned up. Joy!

But when I tried to reattach it to the chair, two crucial parts — the rods that connect the arms of the chair to the custom keyboard — were inexplicably missing. Disaster!

Eventually it dawned on me. This wasn’t the Floating Arms keyboard I’d been using for the past 15 years. It was the original prototype that I’d reviewed for BYTE, and that Workplace Designs had replaced with the production model. I’d had a backup Floating Arms keyboard all this time, forgotten in a box up in the attic. So now I could recreate my setup. I just needed to replace the connector rods and the hydraulic lifter. Joy! Maybe! If those parts were still available!

I called The Human Solution and spoke to the very friendly and helpful Jonah Gardner. He took down the serial number on the chair, asked for photos of the broken hydraulic lifter and the arms into which the missing connector rods needed to fit, and promised to get back to me.

The next day the missing Floating Arms keyboard turned up in the bottom of a bag of shoes. More joy! I hooked it up to my broken-but-still-functional chair and got to work. The first order of business was to contact Jonah and let him know I didn’t need those connector rods, they were attached to the missing-but-now-found keyboard. “You’re lucky,” he said. “We couldn’t have replaced those. But the lifter is still available for your chair, and you can order it.” So I did.

The lifter arrived today. It wasn’t immediately obvious how to extract the old one in order to replace it. There were no fasteners. Do you just need to pound on it with a sledgehammer? I wrote to Jonah and he responded with this video and these instructions:

Someone will need to use a 3-4 pound, short handle, steel-head sledge hammer. Timidity will not get the old cylinder out so do not be afraid to HIT the mechanism. After 20 some-odd years, they are going to have to HIT the mechanism.

That’s just what I needed to know. And he wasn’t kidding about the weight of the hammer. I didn’t have a sledgehammer handy, and a regular hammer didn’t work, so I improvised:

And that did the trick. I HIT the cylinder a bunch of times, it popped out, I popped the new one in, and I’m back in business.

Thank you, Workplace Designs, for inventing the best ergonomic keyboard ever. Thank you, Jonah, for helping me bring it back to life. Thank you, World Wide Web, for enabling Bodybilt to share a video showing exactly how hard to HIT when replacing a hydraulic lifter. And thank you, Captain Kirk Floating Arms Keyboard Chair, for being with me all these years. I’m sorry I threatened to abandon you. It’ll never happen again.

A web of agreements and disagreements

Recently I migrated a wiki from one platform to another. It was complicated in a couple of ways. The first wrinkle was hosting. The old wiki ran on a Linux-based virtual machine and the new one runs on GitHub. The second wrinkle was markup. MediaWiki uses one flavor of lightweight markup and GitHub uses (a variant of) another.

The process was confusing even for me. But logistics aside, it raised questions about standards, interoperability, and the challenge of working in an evolving digital realm.

The wiki in question is the documentation for the Thali project which I’ve mentioned in a number of posts. The project is mainly documented by Thali’s creator, Yaron Goland. Why use a wiki? Thali is a fast-moving project. Yaron has a blog, and he could use that to document Thali. But while blogs are agile publishing tools, they don’t shine when it comes to restructuring and spontaneous editing. Those are the great strengths of wikis.

Thali was originally hosted on CodePlex. Since that service doesn’t offer a built-in wiki, Yaron augmented it with a Bitnami MediaWiki image hosted in Azure. This was a DIY setup, not a managed service, which meant that when the Heartbleed Bug showed up he had to patch it himself, and he would have been on the hook again when Shellshock arrived. Life’s too short for that.

Also, with the project’s source code hosted on GitHub, it made sense to explore hosting the documentation there too. It’s simpler for readers of the code and the documentation to find everything in one place. And it’s simpler for writers of both forms of text to put everything in that place. There’s just one service to authenticate to, and tools for version control and issue tracking can be used for both forms of text.

I started by moving a few experimental pages from the MediaWiki to the GitHub wiki. Were there tools that could automate the translation? Maybe, but I’ve learned to walk before attempting to run. Converting a few pages by hand gave me an appreciation of the differences between the two markup languages. Each is a de facto standard with many derived variations. GitHub, for example, uses a variant of Markdown called GitHub Flavored Markdown (GFM). Tools that read and write “standard” Markdown don’t properly read and write GFM.

If I were teaching a course in advanced web literacy, I’d pose the following homework exercise:

You’re required to migrate a wiki from MediaWiki to GitHub. Possible strategies include:

  1. Use a tool that does the translation automatically.
  2. Create that tool if it doesn’t exist.
  3. Do the job manually.

Evaluate these options.

Of course there are assumptions buried in the problem statement. A web-literate student should first ask: “Why? Are we just chasing a fad? What problems will this migration solve? What problems will it create?”

Assuming we agree it makes sense, I’d like to see responses that:

  • Enumerate available translators.
  • Cite credible evaluations of them (and explain why they’re credible).
  • Analyze the source and target data to find out which markup features might or might not be supported by the available translators.
  • Consider the translators’ implementation costs. Are they local or cloud-based? If local, how much infrastructure must be installed, and how complex are its dependencies? If cloud-based, how will bulk operations work?
  • If no translators emerge, make a back-of-the-envelope estimate of the distance between the two formats and the effort required to create software to map between them.
  • Evaluate the time and effort required to research, acquire, and use an automated tool, vis-à-vis that required to do the job manually.
  • Estimate the break-even point at which a reusable automated tool pays off.
  • Recognize that there really isn’t a manual option. Doing the job “by hand” in a text editor means using a tool that enables a degree of automation.

In my case that last point proved salient. The tools landscape looked messy, there were only a few dozen pages to move over, the distance between the two markups wasn’t great, it was (for me) a one-time thing, and I wanted to make an editorial pass through the stuff anyway. So I wound up using a text editor. To bridge one gap between the two formats — different syntaxes for hyperlinks — I recorded a macro to convert one to the other.

To achieve this result in MediaWiki:

all about frogs

You type this:

[[Frog|all about frogs]]

In a GitHub wiki it’s this:

[all about frogs](Frog)

So much writing nowadays happens in browsers, never mind word processors, never mind old-school text editors, that it’s worth pointing out those old dogs can do some cool tricks. I won’t even mention which editor I use because people get religious about this stuff. Suffice it to say that it’s one of a class of tools that make it easy to record, and then play back, a sequence of actions like this:

  • Search for [[
  • Delete both brackets
  • Cut the page name (the text up to the |)
  • Delete the |
  • Type [
  • Search for ]]
  • Change it to ](
  • Paste the page name
  • Type )

You might find an automated translator that encodes that same recipe. You might be able to write code to implement it. But for a large class of textual transformations like this you can most certainly use an editor that records and runs macros. Given that the web is still a largely textual medium, where transformations like this one are often needed, it’s a shame that macros are a forgotten art. I often use them to prototype recipes that I’ll then translate into code. But sometimes, as in this case, they’re all the code I need. That’s something I’d want students of web literacy to realize.
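Prototyped as code, for example, that link-conversion recipe collapses to a single substitution. This is just an illustrative sketch, not the tool I used, and it only handles piped links like the one above:

import re

def wikitext_links_to_markdown(text):
    # [[Page|display text]]  ->  [display text](Page)
    return re.sub(r"\[\[([^|\]]+)\|([^\]]+)\]\]", r"[\2](\1)", text)

print(wikitext_links_to_markdown("See [[Frog|all about frogs]] for details."))
# prints: See [all about frogs](Frog) for details.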

What would really make my day, though, would be for one of those students to say:

“Hey, wait a sec. This doesn’t make sense. There is no such thing as GitHub Flavored HTML. Why is there GitHub Flavored Markdown?”

Or Standard Flavored Markdown, which quickly became Common Markdown, then CommonMark. How de facto standards become de jure standards, or don’t, is a fascinating subject. The web works as well as it does because we mostly agree on a set of tools and practices. But it evolves when we disagree, try different approaches, and test them against one another in a marketplace of ideas. Citizens of a web-literate planet should appreciate both the agreements and the disagreements.

A cost-effective way to winterize windows

D’Arcy Norman asks:

If there’s a better way to winterize windows than just taping plastic to the frame, I’d love to hear about it.

Indeed. In New Hampshire, when fuel prices first skyrocketed, we did that for a couple of years. It’s an incredibly effective way to stop the leaks that suck precious warm air out of your home. But it’s a royal pain to install the plastic sheeting every fall, and when you remove it in the spring you inevitably pull paint chips off your window frames.

The solution is interior storms, a really nice hack I learned about from John Leeke. He’s a restorer of historic homes, and — what brought him to my attention — a narrator of that work. Interior storms are just removable frames, surrounded by gaskets, to which you attach your plastic sheets permanently. Once made they pop into your window frames in a few seconds every fall, and pop out as easily in the spring.

Achieving that result is, however, not trivial; at least it wasn’t for me. My first generation of interior storms, based on John’s instructions, was suboptimal. The round backer rod material he recommended had to be split lengthwise to form a D profile. I wound up making a jig to do that by drilling a backer-rod-diameter hole in a piece of wood, splitting it in half, embedding a razor blade on an angle, and joining the pieces. Great idea in principle, but in practice it was still hard to draw hundreds of feet of backer rod through the jig and achieve a clean lengthwise split. It was also hard to apply hundreds of feet of double-sided tape to the split material.

The backer rod I used also turned out not to be sufficiently compressible. The critical thing with interior storms is a tight fit. When you tape plastic to your windows you’re guaranteed to get that result, which is why it’s so effective. Interior storms need to press into their surrounding window frames really snugly to achieve the same effect. Inconsistencies in the width of my split backer rod, and the relative incompressibility of the material, resulted in storms that didn’t always fit as snugly as they should have.

Another problem with the first-generation storms was flimsy frames. I ripped pine boards lengthwise to create inch-wide frame members. They really should have been inch-and-a-half.

So last year I rebooted and created second-generation storms. I started with inch-and-a-half wide frame members. Then I ditched the backer rod and went with pretaped rubber gasket. It’s a much more expensive material but it obviates the need for do-it-yourself taping and has the compressibility I was looking for.

Yet another problem with the first-gen storms was that I made all the frames from the same template. The windows were nominally all the same dimensions, but it turns out there were minor variations and those matter when you really need to achieve a snug fit.

So the second time around I customized each frame to its window. Yes, it was tedious. But for our house it was necessary, and it might be for many old houses. Here’s the algorithm I came up with:

1. Cut short dummy pieces from spare inch-and-a-half-wide frame members.

2. Attach gasket to one side of each dummy piece.

3. Make all long and short frame members a bit longer than needed.

4. For each frame member:

– Place dummy pieces on either end

– Place frame member between dummy pieces

– Compress the gasket at one end

– Mark frame member at the other end (accounting for gasket compression on that end)

– Cut frame member

– Label frame member (“living room west wall”)

Once the frame is made, you attach the plastic sheet in the usual way. I used Warp Brothers SK-38 kits which come with double-stick tape. You tape around the edge of the frame, lay down the plastic, smooth it by hand, press it down, trim the edges, and use a blow dryer or heat gun to shrink it tight.

This is the kind of job I hate doing. You spend lots of time climbing the learning curve, and then once you’re done you never reuse the knowledge you’ve painfully acquired. Since the method is so effective, though, I’ll toss out an idea that’s been percolating for a while.

Consider an older house in a northern climate, with older windows and storms, and adequate attic insulation. The walls may or may not be adequately insulated, but the first line of defense is to tighten up those windows. It’s expensive to replace them, and the replacements are going to be vinyl that will ruin the aesthetics of the house and won’t age well. It’s even more expensive to hire a restorer to rebuild the old windows.

Let’s say that interior storms deliver 80% of the benefit of replacement windows for 10% of the cost. Deploying this solution to all the eligible houses in a region is arguably the most cost-effective way to tighten up that population of houses. But the method I’ve described here won’t scale. It entails more effort, and more hassle, than most folks will be willing to put up with.

How could we scale out deployment of interior storms across a whole community? I’d love to see high schools take on the challenge. Set up a workshop for making interior storms. Market it as a makerspace. No, it’s not 3D printing, but low-tech interior storms deployed community-wide will mean way more to the community than anything a MakerBot can print. Also, turn the operation into a summer jobs program. Teach kids how to run it like a business and pay themselves better than minimum wage.

Since I am now living in Santa Rosa, winterization of windows is no longer a big concern. But I’ve been meaning to document what I learned and did back in New Hampshire. And I would really like to see John Leeke’s idea applied at scale in places where it’s needed. So I hope that the new owner of the house we sold in Keene will be successful with this method, that D’Arcy Norman and others will too, and that communities will figure out how to make it happen at scale.

3D Elastic Storage, part 3: Five stars to U-Pack!

It’s been a busy month. We sold our house in Keene, NH, drove across the country, and rented a house in Santa Rosa, CA. A move like that entails plenty of physical, emotional, and financial stress. The last thing you need is trouble with a fraudulent mover which, sadly, is so common that http://www.movingscam.com/ needs to exist. Luann spent a lot of time exploring the site and Jeff Walker, its founder, wrote her a couple of really helpful and supportive emails. When we realized that a full-service move wasn’t feasible in our case, Jeff agreed that ABF U-Pack — the do-it-yourself company I’d identified as our only viable option — was a good choice.

I’ve chronicled our experience with U-Pack before and during the move. Now that it’s done, I’m wildly positive about the service. Every aspect of it has been thoughtfully and intelligently designed.

The non-standard size and shape of U-Pack’s ReloCube is, at first, surprising. It’s 6’3″ x 7′ x 8’4″, and the long dimension is the height. As Marc Levinson’s The Box wonderfully explains, standardization of shipping containers created the original Internet of Things: a packet-switched network of 20′ and 40′ boxes. Those shapes don’t meet U-Pack’s requirements for granular storage, transport on flatbed trailers, and delivery to curbside parking spaces. But while the ReloCube’s dimensions are non-standard, the ReloCube system provides the key benefits of a packet-switched network: variable capacity, store-and-forward delivery. In our case, we’ve now taken delivery of the two cubes that held our household stuff. The two that hold Luann’s studio remain in storage until we figure out where that stuff will land. Smaller containers enable that crucial flexibility.

Smaller containers are also easier to load. Here’s a picture of a ReloCube interior:

All the surfaces are nicely smooth. And there are plenty of slots for hooking in straps. But I wound up using very few straps because I was able to pack the cubes tightly. It’s easier to do that in a smaller space.

I also like how the doors shut flush against the edge of the cube:

When you lever the doors shut on a tightly-packed container they compress and help stabilize the load. That wouldn’t be a significant factor with an 8x8x16 PODS container but with the smaller ReloCube it can be.

On the receiving end, I wondered how the cubes would be positioned. You’d want them snug to the curb, but then how could the doors open toward the house? The video linked to this picture documents the elegant solution:

The forklift driver placed the cube’s edge on top of the curb. Not shown in the video is the final tap with the forklift that aligned the cube perfectly. These folks really pay attention to details!

I can’t say enough good things about our U-Pack experience. No conventional service offered the flexibility we needed, so none was an option, but we did solicit estimates early on and they were astronomical: three to four times the $6300 we paid U-Pack to move four containers across the country and make them available to us on demand. (We’ll also now pay $100 per month per container for the two studio containers until we retrieve them.) There was very little paperwork involved. Every U-Pack employee I talked to was friendly and helpful. So I’m giving the service a five-star rating.

For me the experience was an echo of a time, fifty years ago, when our family moved from suburban Philadelphia to New Delhi. Here are some pictures of the “sea trunk” that was delivered, by bullock cart, to 102 Jorbagh.

Now the delivery vehicle is a flatbed trailer:

But the resemblance between our New Delhi sea trunk and our ReloCubes is, I think, not coincidental.

Actually the sea trunk trumped the ReloCube in one way. When it was delivered back home my dad arranged to keep it, and he turned it into a playhouse in the backyard: