Skype Translator will (also) be a tool for language learners

When I saw this video of Skype Translator I realized that beyond just(!) translation, it will be a powerful tool for language learning. Last night I got a glimpse of that near future. Our next door neighbor, Yolanda, came here from Mexico 30 years ago and is fluently bilingual. She was sitting outside with her friend, Carmen, who speaks almost no English. I joined them and tried to listen to their Spanish conversation. I learned a bit of Spanish in high school but I’ve never been conversational. Here in Santa Rosa I’m surrounded by speakers of Spanish, it’s an opportunity to learn, and Yolanda — who has worked as a translator in the court system — is willing to help.

I find myself on parallel tracks with respect to my learning of two different languages: music and Spanish. In both cases I’ve historically learned more from books than by ear. Now I want to put myself into situations that force me to set the books aside, listen intently, and then try to speak appropriately. I can use all the help I can get. Luckily we live in an era of unprecedented tool support. On the musical front, I’ve made good use of Adrian Holovaty’s SoundSlice, a remarkable tool for studying and transcribing musical performances it pulls from YouTube. I haven’t used SoundSlice much for annotation, because I’m trying to develop my ear and my ability to converse musically in realtime. But its ability to slow down part of a tune, and then loop it, has been really helpful in my efforts to interact with real performances.

I suspect that’s why Skype Translator will turn out to be great for language learning. Actually I’m sure that will happen, and here’s why. Last night I showed the Skype Translator video to Yolanda and Carmen. Neither is remotely tech-savvy but both instantly understood what was happening. Yolanda marveled to see machine translation coming alive. Carmen, meanwhile, was transfixed by the bilingual exchange. And when she heard the English translation of a Spanish phrase, I could see her mouthing the English words. I found myself doing the same for the Spanish translation of English phrases.

That’s already a powerful thing, and yet we were only observers of a conversation. When we can be participants, motivated to communicate, the service won’t just be a way to speak across a language gap. It’ll be a way to learn one another’s languages.

No disclosure is needed here, by the way, because I’m a free agent for now. My final day with Microsoft was last Friday. In the end I wasn’t able to contribute in the ways I’d hoped I could. But great things are happening there, and Skype Translator is only one of the reasons I’m bullish on the company’s future.

Human/machine partnership for problems otherwise Too Hard

My recent post about redirecting a page of broken links weaves together two different ideas. First, that the titles of the articles on that page of broken links can be used as search terms in alternate links that lead people to those articles’ new locations. Second, that non-programmers can create macros to transform the original links into alternate search-driven links.

There was lots of useful feedback on the first idea. As Herbert Van de Sompel and Michael Nelson pointed out, it was a really bad idea to discard the original URLs, which retain value as lookup keys into one or more web archives. Alan Levine showed how to do that with the Wayback Machine. That method, however, leads the user to sets of snapshots that don’t consistently mirror the original article, because (I think) Wayback’s captures happened both before and after the breakage.

So for now I’ve restored the original page of broken links, alongside the new page of transformed links. I’m grateful for the ensuing discussion about ways to annotate those transformed links so they’re aware of the originals, and so they can tap into evolving services — like Memento — that will make good use of the originals.

The second idea, about tools and automation, drew interesting commentary as well. dyerjohn pointed to NimbleText, though we agreed it’s more suited to tabular than to textual data. Owen Stephens reminded me that the tool I first knew as Freebase Gridworks, then Google Refine, is going strong as OpenRefine. And while it too is more tabular than textual, “the data is line based,” he says, “and sort of tabular if you squint at it hard.” In Using OpenRefine to manipulate HTML he presents a fully-worked example of how to use OpenRefine to do the transformation I made by recording and playing back a macro in my text editor.

Meanwhile, on Twitter, Paul Walk and Les Carr and I were rehashing the old permathread about coding for non-coders.

The point about MS Word styles is spot on. That mechanism asks people to think abstractly, in terms of classes and instances. It’s never caught on. So it goes with my text-transformation puzzle, Les suggests. Even with tools that enable non-coders to solve the puzzle, getting people across the cognitive threshold is Too Hard.

While mulling this over, I happened to watch Jeremy Howard’s TED talk on machine learning. He demonstrates a wonderful partnership between human and machine. The task is to categorize a large set of images. The computer suggests groupings, the human corrects and refines those groupings, the process iterates.

We’ve yet to inject that technology into our everyday productivity tools, but we will. And then, maybe, we will finally start to bridge the gap between coders and non-coders. The computer will watch the styles I create as I write, infer classes, offer to instantiate them for me, and we will iterate that process. Similarly, when I’m doing a repetitive transformation, it will notice what’s happening, infer the algorithm, offer to implement it for me, we’ll run it experimentally on a sample, then iterate.

Maybe in the end what people will most need to learn is not how to design stylistic classes and instances, or how to write code that automates repetitive tasks, but rather how to partner effectively with machines that work with us to make those things happen. Things that are Too Hard for most living humans and all current machines to do on their own.

Where’s the IFTTT for repetitive manual text transformation?

While updating my home page today, I noticed that the page listing my InfoWorld articles had become a graveyard of broken links. The stuff is all still there, but at some point the site switched to another content management system without redirecting old URLs. This happens to me from time to time. It’s always annoying. In some cases I’ve moved archives to my own personal web space. But I prefer to keep them alive in their original contexts, if possible. This time around, I came up with a quick and easy way to do that. I’ll describe it here because it illustrates a few simple and effective strategies.

My listing page looks like this:

<p><a href="http://www.infoworld.com/article/06/11/15/47OPstrategic_1.html">XQuery and the power of learning by example | Column | 2006-11-15</a></p>

<p><a href="http://www.infoworld.com/article/06/11/08/46OPstrategic_1.html">Web apps, just give me the data | Column | 2006-11-08</a></p>

It’s easy to see the underlying pattern:

LINK | CATEGORY | DATE

When I left InfoWorld I searched the site for everything I’d written there and made a list, in the HTML format shown above, that conformed to the pattern. Today I needed to alter all the URLs in that list. My initial plan was to search for each title using this pattern:

site:infoworld.com "jon udell" "TITLE"

For example, try this in Google or Bing:

site:infoworld.com "jon udell" "xquery and the power of learning by example"

Either way, you bypass the now-broken original URL (http://www.infoworld.com/article/06/11/15/47OPstrategic_1.html) and are led to the current one (http://www.infoworld.com/article/2660595/application-development/xquery-and-the-power-of-learning-by-example.html).

The plan was then to write a script that would robotically perform those searches and extract the current URL from each result. But life’s short, I’m lazy, and I realized a couple of things. First, the desired result is usually but not always first, so the script would need to deal with that. Second, what if the URLs change yet again?

That led to an interesting conclusion: the search URLs themselves are good enough for my purposes. I just needed to transform the page of links to broken URLs into a page of links to title searches constrained to infoworld.com and my name. So that’s what I did, it works nicely, and the page is future-proofed against future URL breakage.

I could have written code to do that transformation, but I’d rather not. Also, contrary to a popular belief, I don’t think everyone can or should learn to write code. There are other ways to accomplish a task like this, ways that are easier for me and — more importantly — accessible to non-programmers. I alluded to one of them in A web of agreements and disagreements, which shows how to translate from one wiki format to another just by recording and using a macro in a text editing program. I used that same strategy in this case.

Of course recording a macro is a kind of coding. It’s tricky to get it to do what you intend. So here’s a related strategy: divide a complex transformation into a series of simpler steps. Here are the steps I used to fix the listing page.

Step 1: Remove the old URLs

The old URLs are useless clutter at this point, so just get rid of them.

old: <p><a href="http://www.infoworld.com/article/06/11/15/47OPstrategic_1.html">XQuery and the power of learning by example | Column | 2006-11-15</a></p>

new: <p><a href="">XQuery and the power of learning by example | Column | 2006-11-15</a></p>

how: Search for href=", mark the spot, search for ">, delete the highlighted selection between the two search targets, go to the next line.

Step 2: Add query templates

We’ve already seen the pattern we need: site:infoworld.com "jon udell" "TITLE". Now we’ll replace the empty URLs with URLs that include the pattern. To create the template, search Google or Bing for the pattern. (I used Bing but you can use Google the same way.) You’ll see some funny things in the URLs they produce, things like %3A and %22. These are alternate ways of representing the colon and the double quote. They make things harder to read, but you need them to preserve the integrity of the URL. Copy this URL from the browser’s location window to the clipboard.

old: <p><a href="">XQuery and the power of learning by example | Column | 2006-11-15</a></p>

new: <p><a href="http://www.bing.com/search?q=site%3Ainfoworld.com+%22jon+udell%22+%22%5BTITLE%5D%22">XQuery and the power of learning by example | Column | 2006-11-15</a></p>

how: Copy the template URL to the clipboard. Then for each line, search for href="", put the cursor after the first double quote, paste, and go to the next line.

Step 3: Replace [TITLE] in each template with the actual title

old: <p><a href="http://www.bing.com/search?q=site%3Ainfoworld.com+%22jon+udell%22+%22%5BTITLE%5D%22">XQuery and the power of learning by example | Column | 2006-11-15</a></p>

new: <p><a href="http://www.bing.com/search?q=site%3Ainfoworld.com+%22jon+udell%22+%22XQuery and the power of learning by example%22">XQuery and the power of learning by example | Column | 2006-11-15</a></p>

how: For each line, search for >", mark the spot, search for |, copy the highlighted selection between the two search targets, search for [TITLE], put the cursor at [, delete the next 7 characters, paste from the clipboard.

Now that I’ve written all this down, I’ll admit it looks daunting, and doesn’t really qualify as a “no coding required” solution. It is a kind of coding, to be sure. But this kind of coding doesn’t involve a programming language. Instead you work out how to do things interactively, and then capture and replay those interactions.
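
For comparison, and only as an illustration (the point here is that you don’t have to go this route), here is roughly what the same transformation looks like as a conventional script. This is a minimal sketch in Python, not code I actually wrote; it assumes the listing lives in a local file with straight quotes, and the filenames are made up.

import re
from urllib.parse import quote_plus

def transform(line):
    # Rewrite one listing line: drop the broken InfoWorld URL and substitute
    # a Bing site-search URL built from the article's title.
    m = re.match(r'<p><a href="[^"]*">(.*?)</a></p>', line)
    if not m:
        return line
    text = m.group(1)                          # e.g. 'TITLE | Column | 2006-11-15'
    title = text.split('|')[0].strip()
    query = quote_plus(f'site:infoworld.com "jon udell" "{title}"')
    return f'<p><a href="http://www.bing.com/search?q={query}">{text}</a></p>'

# hypothetical filenames
with open('listing.html') as src, open('listing-fixed.html', 'w') as dst:
    for line in src:
        dst.write(transform(line.rstrip('\n')) + '\n')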

I’ll also admit that, even though word processors like Microsoft Word and LibreOffice can do capture and replay, you’ll be hard-pressed to pull off a transformation like this using those tools. They’re not set up to do incremental search, or to switch between searching and editing while recording. So I didn’t use a word processor, I used a programmer’s text editor. Mine’s an ancient one from Lugaru Software; there are many others, all of which will be familiar only to programmers. Which, of course, defeats my argument for accessibility. If you are not a programmer, you are not going to want to acquire and use a tool made for programmers.

So I’m left with a question. Are there tools — preferably online tools — that make this kind of text transformation widely available? If not, there’s an opportunity to create one. What IFTTT is doing for manual integration of web services is something that could also be done for manual transformation of text. If you watch over an office worker’s shoulder for any length of time, you’ll see that kind of manual transformation happening. It’s a colossal waste of time (and source of error). I could have spent hours reformatting that listing page. Instead it took me a few minutes. In the time I saved I documented how to do it. I wasn’t able to give you a reusable and modifiable online recipe, but that’s doable and would be a wonderful thing to enable.

Why shouting won’t help you talk to a person with hearing loss

I’ve written a few posts [1, 2] about my mom’s use of a reading machine to compensate for macular degeneration, and I made a video that shows the optimal strategy for using the machine. We’re past the point where she can get any benefit from the gadget, though. She needs such extreme magnification that it’s just not worth it any more.

So she’s more dependent than ever on her hearing. Sadly her hearing loss is nearly as profound as her vision loss, and hearing aids can’t compensate as well as we wish. She’s still getting good mileage out of audiobooks, and also podcasts which she listens to on MP3 players that I load up and send her. The clear and well-modulated voice of a single speaker, delivered through headphones that block out other sound, works well for her. But in real-world situations there are often several voices, not clear or well-modulated, coming from different parts of the room and competing with other ambient noise. She depends on hearing aids but as good as they’ve gotten, they can’t yet isolate and clarify those kinds of voices.

One of the best ways to communicate with my mom is to speak to her on the phone. That puts the voice directly in her ear while the phone blocks other sounds. And here’s a pro tip I got from the audiologist I visited today. If she removes the opposite hearing aid, she’ll cut down on ambient noise in the non-conversational ear.

In person, the same principle applies. Put the voice right into her ear. If I lean in and speak directly into her ear, I can speak in a normal voice and she can understand me pretty well. It’s been hard to get others to understand and apply that principle, though. People tend to shout from across the room or even from a few feet away. Those sounds don’t come through as clearly as sounds delivered much more softly directly into the ear. And shouting just amps up the stress in the room, which nobody needs.

Lately, though, the voice-in-the-ear strategy — whether on the phone or in person — had been failing us. We had thought maybe the hearing aids needed to be cleaned, but that wasn’t the problem. She’s been accidentally turning down the volume! There’s a button on each hearing aid that you tap to cycle through the volume settings. I don’t think mom understood that, and I know she can’t sense if she touches the button while reseating the device with her finger. To compound the problem, the button’s action defaults to volume reduction. If it went the other way she might have been more likely to notice an accidental change. But really, given that she’s also losing dexterity, the volume control is just a useless affordance for her.

Today’s visit to the audiologist nailed the problem. When he hooked the hearing aids up to his computer and read their logs(!), we could see they’d often been running at reduced volume. On her last visit he’d set them to boot up at a level we’ll call 3 on a scale of 1 to 5. That’s the level he’d determined was best for her. He’d already had an inkling of what could go wrong, because on that visit he’d disabled the button on the left hearing aid. Now both are disabled, and the setting will stick to 3 unless we need to raise it permanently.

Solving that problem will help matters, but hearing aids can only do so much. The audiologist’s digital toolkit includes a simulator that enabled us to hear a pre-recorded sample voice the way my mom hears it. That was rather shocking. The unaltered voice was loud and clear. Then he activated mom’s profile, and the voice faded so low I thought it was gone completely. I had to put my ear right next to the computer’s speaker to hear it at all, and then it was only a low murmur. When there aren’t many hair cells doing their job in the inner ear, it takes a lot of energy to activate the few that still work, and it’s hard to apply that energy with finesse.

I’m sure we’ll find ways to compensate more effectively. That won’t happen soon enough for my mom, though. I wonder if the audiologist’s simulator might play a useful role in the meantime. When we speak to a person with major hearing loss we don’t get any feedback about how we’re being heard. It’s easy to imagine a device that would record a range of speech samples, from shouting at a microphone from across the room to shouting at it from a few feet away to speaking softly directly into it. Then the gadget would play those sounds back two ways: first unaltered, then filtered through the listener’s hearing-loss profile. Maybe that would help people realize that shouting doesn’t help, but proper positioning does.

Alternative sources of data on police homicides

There were empty seats at the table on Thursday for young males of color who have been shot by police, most recently Tamir Rice, a 12-year-old boy who was carrying a toy gun. His case resonates powerfully in Santa Rosa where, last year, 13-year-old Andy Lopez was shot for the same reason. He is memorialized in this moveable mural currently on display at the Peace and Justice Center around the corner from Luann’s studio:

Our son is an Airsoft enthusiast, just like Tamir Rice and Andy Lopez were. Unlike them he is white. In circumstances like theirs, would that have made the crucial difference? We think so. But when you look for data to confirm or reject that intuition, it’s thin and unreliable.

Criminal justice experts note that, while the federal government and national research groups keep scads of data and statistics — on topics ranging from how many people were victims of unprovoked shark attacks (53 in 2013) to the number of hogs and pigs living on farms in the U.S. (upwards of 64,000,000 according to 2010 numbers) — there is no reliable national data on how many people are shot by police officers each year.

How many police shootings a year? No one knows, Washington Post, 09/08/2014

The one available and widely-reported statistic is that, in recent years, there have been about 400 justifiable police homicides annually. In Nobody Knows How Many Americans The Police Kill Each Year, FiveThirtyEight’s Reuben Fischer-Baum reviews several sources for that number, and concludes that while it’s a reasonable baseline, the real number is likely higher.

Fischer-Baum’s article, and others he cites, draw on a couple of key Bureau of Justice Statistics reports. They are not hard to find:

https://www.google.com/?q=site:bjs.gov+"Justifiable+Homicide+by+Police"

One report, Policing and Homicide, 1976-98: Justifiable Homicide by Police, Police Officers Murdered by Felons [1999], says this about race:

A growing percentage of felons killed by police are white, and a declining percentage are black (figure 4).

Race of felons killed

year  White  Black
----  -----  -----
1978    50%    49%
1988    59%    39%
1998    62%    35%

Felons justifiably killed by police represent a tiny fraction of the total population. Of the 183 million whites in 1998, police killed 225; of the 27 million blacks, police killed 127. While the rate (per million population) at which blacks were killed by police in 1998 was about 4 times that of whites (the figure below and figure 5), the difference used to be much wider: the black rate in 1978 was 8 times the white rate.

A more recent report, Homicide Trends in the United States, 1980-2008, is one key source for the widely-cited number of 400 justifiable police homicides per year:

Interestingly, the gap between justifiable police homicides and justifiable citizen homicides has widened in recent decades.

Table 14 addresses race:

It’s a complicated comparison involving the races of shooters and shootees when the shooters are civilians and also when they are police. In the latter case, the trend noted in the earlier report — “a declining percentage [of ‘felons’ killed by police] are black” — has reversed. Combining this table with the previous one, we get:

Blacks as a percentage of those killed by police

year  percent
----  -------
1978      49%
1988      39%
1998      35%
2008      38%

What is our level of confidence in this data? Low. From How many police shootings a year? No one knows:

“What’s there is crappy data,” said David A. Klinger, a former police officer and criminal justice professor at the University of Missouri who studies police use of force.

Several independent trackers, primarily journalists and academics who study criminal justice, insist the accurate number of people shot and killed by police officers each year is consistently upwards of 1,000 each year.

“The FBI’s justifiable homicides and the estimates from (arrest-related deaths) both have significant limitations in terms of coverage and reliability that are primarily due to agency participation and measurement issues,” said Michael Planty, one of the Justice Department’s chief statisticians, in an email.

Are there other sources we might use? Well, yes. Wikipedia is one place to start. It has lists of killings by law enforcement officers in the U.S. The introductory page says:

Listed below are lists of people killed by nonmilitary law enforcement officers, whether in the line of duty or not, and regardless of reason or method. Inclusion in the lists implies neither wrongdoing nor justification on the part of the person killed or the officer involved. The listing merely documents the occurrence of a death.

The lists below are incomplete, as the annual average number of justifiable homicides alone is estimated to be near 400.

Each entry cites a source, typically a newspaper report. About once a day (or maybe about twice a day), a police officer shoots a civilian somewhere in the U.S. That’s a rare and dramatic event that will almost certainly be noted in a local newspaper. The report may or may not provide complete details, but it’s an independent data point. Everything else we know about the phenomenon is based on self-reporting by law enforcement.

To an analyst, the data that lives in Wikipedia tables is semi-structured. It can be extracted into a spreadsheet or a database, but extracting fully structured data almost always requires some massaging. The script I wrote to massage Wikipedia’s lists of police homicides handles the following irregularities:

  1. Through 2009 the table lives in a single per-year page. From 2010 onward, the per-year pages subdivide into per-month pages.
  2. The city and state are usually written like this: Florida (Jacksonville). But for the first five months of 2012 they are written like this: Jacksonville, Florida.
  3. The city name, and/or the city/state combination, is sometimes written as plain text, and sometimes as a link to the corresponding Wikipedia page.
  4. The city name is sometimes omitted.
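
For concreteness, here is a minimal sketch of the kind of massaging script described above. It is not the script I wrote; it uses requests and BeautifulSoup, the page title and column order are assumptions that would need checking against the live Wikipedia lists, and it handles only the single per-year-page case.

import csv
import re
import requests
from bs4 import BeautifulSoup

BASE = "https://en.wikipedia.org/wiki/"

def parse_location(text):
    """Normalize 'Florida (Jacksonville)' and 'Jacksonville, Florida' to (city, state)."""
    m = re.match(r"(.+?)\s*\((.+)\)\s*$", text)      # State (City)
    if m:
        return m.group(2).strip(), m.group(1).strip()
    if "," in text:                                  # City, State
        city, state = [s.strip() for s in text.split(",", 1)]
        return city, state
    return "", text.strip()                          # city omitted; state only

def rows_for_page(title):
    """Yield (date, name, city, state) tuples from every wikitable on one page."""
    soup = BeautifulSoup(requests.get(BASE + title).text, "html.parser")
    for table in soup.find_all("table", class_="wikitable"):
        for tr in table.find_all("tr")[1:]:          # skip the header row
            cells = [td.get_text(" ", strip=True) for td in tr.find_all("td")]
            if len(cells) >= 3:                      # assumed column order: date, name, location
                city, state = parse_location(cells[2])
                yield cells[0], cells[1], city, state

# hypothetical page title; from 2010 onward you would loop over per-month pages instead
PAGE = "List_of_killings_by_law_enforcement_officers_in_the_United_States,_2009"

with open("police_homicides.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["date", "name", "city", "state"])
    for row in rows_for_page(PAGE):
        out.writerow(row)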

The script produces a CSV file. The data prior to 2009 is sparse, so I’ve omitted it. Here’s a yearly summary since 2009:

year  count
----  -----
2009     60
2010     82
2011    157
2012    580
2013    309
2014    472
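
Producing that summary from the CSV takes only a few lines. Here is a hedged sketch; it assumes the CSV has a date column containing a four-digit year somewhere, which is worth verifying against whatever the extraction step actually produces.

import csv
import re
from collections import Counter

counts = Counter()
with open("police_homicides.csv") as f:                   # the CSV described above
    for row in csv.DictReader(f):
        m = re.search(r"\b(19|20)\d{2}\b", row["date"])   # pull out the four-digit year
        if m:
            counts[m.group(0)] += 1

for year in sorted(counts):
    print(f"{year}  {counts[year]:5d}")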

It looks like this listmaking process didn’t really kick into high gear until 2012. Since then, though, it has produced a lot of data. And for one of those years, 2012, the count of police homicides is 580, versus the Uniform Crime Report’s 426. Each of those 580 incidents cites a source. Here’s one of them I’ve picked randomly:

One person is dead following an officer-involved shooting in Anderson, according to police.

Investigators said they were called to Kings Road about 2:45 a.m. Thursday for a domestic-related incident between a husband and wife.

Coroner Greg Shore said three officers went inside the home, and that’s when a man pointed what appeared to be a gun at officers. At least one officer fired his gun, and the suspect died on the scene, Shore said.

Investigators said the man’s wife, who was inside the home, was taken to the hospital due to injuries from a physical fight with the suspect. Shore said she was doing OK.

Shore said the man officers shot was 47-year-old Paul Leatherwood. Sgt. David Creamer said officers have been called to the same house six times in the past six months for domestic-related issues.

The State Law Enforcement Division is headed to the scene to investigate, which is standard protocol for officer-involved shootings in South Carolina.

One killed in Anderson officer-involved shooting, FoxCarolina.com

We don’t know the race of the shooter or the shootee. We don’t know whether what appeared to be a gun turned out to be a gun. We don’t know whether this was or wasn’t reported as a justifiable homicide. But we could find out.

Every week, a million people listen to the blockbuster podcast Serial, an investigation into a cold case from 1999. A staggering amount of cognitive surplus has been invested in trying to figure out whether Adnan Syed did or did not murder Hae Min Lee. In this blog post, which I picked randomly from the flood of Reddit commentary, a lawyer named Susan Simpson has produced a 14,000-word, densely illustrated “comparison of Adnan’s cell phone records to the witness statements provided by Adnan, Jay, Jenn, and Cathy.”

With a fraction of the investigative effort being poured into that one murder, we could know a lot more about many others.

The Church of One Tree: A civic parable

Juilliard Park is one of the jewels of Santa Rosa. It occupies 8.8 acres downtown, adjacent to the SOFA Arts District where, last weekend, thousands gathered for the 10th annual WinterBlast festival. Here’s how the Press Democrat describes the SOFA district:

Loosely gathered around the intersection of South A Street and Sebastopol Avenue, the neighborhood once had a shady reputation, but about a decade ago it began to change, and over those years it emerged as a destination for cuisine and culture.

And it continues to evolve. Today, you’ll find a picturesque cluster of small, independently owned shops, galleries, restaurants and even a live theater company.

We love the neighborhood and its energy. It was a major factor in our decision to move to Santa Rosa. When a small studio became available next door to the Atlas Coffee Company (labeled 1 on the map), Luann jumped at the opportunity. The timing was perfect. WinterBlast introduced hundreds of people to her work and to the stories that inspire it.

On the other side of the park is a landmark labeled Ripley Museum / Church of One Tree (2 on the map). Here’s the history:

The Church of One Tree was built in 1873 from a single redwood tree milled in Guerneville, California. The tree used to construct the Church stood 275 feet high and was 18 feet in diameter. Robert Ripley, a native of Santa Rosa, wrote about the Church of One Tree — where his mother attended services — as one of his earliest installments of “Believe It or Not!” In 1970, Ripley repurposed the Church of One Tree as the Ripley Memorial Museum which was stocked with curiosities and “Believe it or Not!” memorabilia for nearly two decades. In 2009, the City of Santa Rosa restored the site adding several modern upgrades so that it could be utilized for every type of occasion.

Although that city web page doesn’t say so, the building was moved to Juilliard Park in 1957. It’s one of several landmarks that the city rents out for private events, so it’s no longer open to the public. The building is oddly sited. During the meeting Mayor Scott Bartley called it “backwards.” The front entrance faces the park, not the street, and is sheltered by a grove of redwood trees. That’s made it a magnet for the homeless who use the park. Private events have been disrupted; prospective renters have been spooked; a martial arts class that regularly rented the space found the situation untenable and bailed out.

To address these issues the city’s recreation and parks department proposed a fence that would enclose both the building and the nearby grove of redwoods. I’m not sure when or how I heard about the proposal (Luann hadn’t), but I attended this week’s city council meeting in part because the fence was on the agenda. I wanted to learn more about the issue, and to see how Santa Rosa would handle it.

Both the process and the outcome made me feel good about our new home town. Here’s the item that appeared on the council’s agenda for November 18:

12.2 REPORT – JUILLIARD PARK MASTER PLAN AMENDMENT CHURCH OF ONE TREE SITE ADDING A FENCE

BACKGROUND: The Recreation and Parks Department desires to increase use of the Church of One Tree and protect the building. The Church of One Tree site abuts Juilliard Park. The Master Plan for Juilliard Park was established in 1932, and was most recently amended on February 11, 2014. The Church of One Tree (Church) was placed on the site adjacent to Juilliard Park in 1957, and the building was used as the Ripley Memorial Museum from 1970 to 1998. The building was designated by City Council as a Landmark in 1998. The building has been restored and also modified to comply with the Americans with Disabilities Act.

A motion to recommend approval of the fence was approved by the Board of Community Services on July 23, 2014. A resolution to approve the fence design, with conditions, was approved by the Cultural Heritage Board on November 5, 2014.

RECOMMENDATION: It is recommended by the Department of Recreation and Parks that the Council, by resolution, approve the Juilliard Park Master Plan Amendment, adding a fence to enclose the Church of One Tree.

When the item came up at the council meeting I thought it might be a done deal. Kelley Magnuson, deputy director of Recreation and Parks, opened her presentation with a slide summarizing the rationale for the fence:

  1. Increase use of the Church of One Tree
  2. Protect the building
  3. Incorporate redwoods within Church of One Tree

Slide 7 showed how the fence would reach into the park to enclose the redwood grove in front of the building. Slide 8 showed the view from Sonoma Avenue. Here, at the back of the building, the fence would block two entrances to the park. The gates (we later learned) would open only to admit guests to private events.

Although two bodies had endorsed the plan — the Cultural Heritage Board and the Board of Community Services — the council immediately began to ask about alternatives.

Councilor Erin Carlstrom:

Instead of building a fence, if we were to engage in more person-to-person contact — enforcement, security, interaction, funding homeless service providers…for example, what would it cost to increase our downtown bicycle patrol? Or adding security for events?

Councilor Ernesto Olivares:

I’m trying to understand how just having a fence solves the problem, it sounds like we have a bigger issue. If the drug use that was on the back porch of the church is now on the other side of the fence, we’re still dealing with an issue. What’s the broad plan to make the park — and the church — safer?

Kelley Magnuson was now in a tough spot. It was becoming clear that the problem she’d been tasked to address had been defined too narrowly. “Our objective today was to get your input on how to increase the use of the building, and protect it,” she said, “but I do agree there’s a larger issue.”

Mayor Scott Bartley, who is an architect, now made a not-entirely-facetious comment:

Why don’t we just pick the building up and turn it around so it faces Sonoma Avenue? Then we can just lock the door, it’ll look like a normal church, and nobody will think anything about it.

He then opened the public comment period.

First up was Ray Killion, who lives three doors down from the Church of One Tree. Here was the opening of his three-minute statement:

I’m against the direction of this fence. Aesthetically, a black iron fence is forbidding, it’s uninviting, it’s put there to say “you’re not welcome here,” and that’s not what I see as a proper message we want to send about our neighborhood.

Ray Killion made the following points, which were echoed by subsequent speakers:

  • Blocking both Sonoma Avenue entrances to the park would deny access to neighbors and visitors, as well as police, fire, and ambulance personnel.
  • The fence wouldn’t solve the crime and nuisance problems in the neighborhood.
  • If the fence must be built, at least keep the gates to the park open.
  • The Juilliard family had given the land to this city on the condition that “the whole of said property shall be forever used for park purposes only and for the use and benefit of the public in general and particularly the citizens of the city of Santa Rosa.” (This quotation from the deed was repeated several times during the evening.)

Referring to the language of the deed, Ray Killion concluded:

I would like to contrast that with what the parks department puts in the agenda tonight: “reserving this part of the park for the use of paid customers.” That’s not the purpose of a public park maintained with public money for the use of the public.

Jack Cabot has lived in the neighborhood for 24 years, owns 8 properties, and was deeply involved in the redevelopment of the SOFA district. In his statement he stressed how “eyes on the street,” which have multiplied thanks to the SOFA renaissance, would again diminish if the paths around the church were blocked.

Bob Wishard, another longtime resident, said that he and his wife had founded Juilliard Park’s original neighborhood watch 23 years ago. He added his voice to the “eyes on the street” chorus.

Floyd Fox reiterated the deed’s stipulation that the whole property was granted for public use. He added this quote: “a breach of any of the foregoing conditions shall cause said premises to revert to the said grantor, his heirs, or assigns.” He also cited resolution 23412 (1998), which established landmark status for the Church of One Tree. Quoting from the resolution — “the proposed Landmark has specific historical, cultural, and architectural value” — he concluded by asking: “Have you weighed the impact the fence would have on those values?”

Jim Macken extolled the park as a resource that should enhance the city’s ability to rent the building. Inappropriate uses of the park are “symptomatic of a larger problem” that the fence won’t fix, he said.

Edward Collins, a neighbor, opposed the fence because it would cut off access to the park and reduce citizen oversight. He also reiterated the deed’s stipulation of full public use. And he closed by referring to this clause in resolution 23412: “the Council found that the proposed Landmark designation is a Class 8 exemption under CEQA.” That exemption from the California Environmental Quality Act would, he argued, be jeopardized by the fence. He cited California Public Resources Code, Section 21084 and CEQA guideline 15300.2 in support of this argument. “If the city wants to proceed with the fence,” he said, “I think it will require a full CEQA review.”

(This was a nice civic moment. I don’t know what the councilors and city attorney were thinking but their faces said: uh oh.)

Next up was Jennifer Collins. “Being closed when not rented excludes the public from a cultural heritage landmark for the benefit of the paying few,” she said. “It punishes the neighborhood, not only by preventing us from using paths we all use regularly without incident, but also by sending a message to everyone that they are not welcome, and that this is a bad neighborhood.” She advocated for better lighting and for surveillance. And she argued that the city’s failure to maximize its revenue from the property is mainly a marketing failure. “There are no signs encouraging visitors to the Luther Burbank Gardens to come on over. Share a docent from there during peak tourist season to show off the church.”

Duane Dewitt, who often appears before the council, spoke next. “I’ve been going to this park since the 1950s,” he said. “When I was a boy we could sit under the redwoods on a hot day, and then go into the Ripley Museum.” He suggested using private security guards during events, and finding ways to open the building to the public at other times.

In her presentation, Lucinda Moore affirmed the historical value of the building, reiterated the importance of open access, and supported the idea of event security as an alternative to a fence.

Matt Martin, who is executive director for Social Advocates for Youth (SAY), was the next speaker. “The best practice for engaging with the homeless community,” he said, “is to do so face to face.” The Sonoma County Board of Supervisors recently allocated $925,000 for that purpose. In Santa Rosa, he said, it will fund outreach teams to engage with the homeless who live along the city’s creeks. He suggested that the city and SAY might be able to collaborate to bring such a team to Juilliard Park.

Next up was Cat Cvengros, who is chair of the Board of Community Services. It was her board’s recommendation to build the fence. Now that idea was clearly in trouble. “When this item came before our board back in July,” she said, “we looked at the church as a revenue generator.” The fence does address the revenue problem, she said. “But you’re right, it does not address the larger issue.”

Anne Seeley, chair of Concerned Citizens for Santa Rosa, put her finger on the underlying issue. “We have at war here two different concepts. One is that a previous council directed Recreation and Parks to maximize income (unlike all other departments that aren’t required to) versus all the people who want to keep the park open and free.”

That concluded the public comment period. Councilor Julie Combs now made a moving statement, part of which was quoted in the Press Democrat’s story (Santa Rosa council rejects fence at Juilliard Park) the next morning:

We are in some ways defining who we are as a community. We are making a decision about whether we put up fences and increase policing and security, or whether we fund park maintenance and alternatives for homelessness. If we fund park maintenance workers we put eyes on the park, we have a cleaner park, and we encourage people to attend. We have historically put our parks department in an untenable situation. We ask them to provide clean parks without providing them with alternative maintenance funds. I know that this council has turned down increased park maintenance funding on several occasions. So I ask staff to come back with a proposal for park maintenance.

The fence was now dead in the water. But since it was the active agenda item there needed to be a motion not to amend the park’s master plan to allow the fence. Some comments from discussion on that motion:

Councilor Robin Swinth:

As a number of the neighbors pointed out, we’re dealing with a larger issue here. It’s an issue of homelessness, and it’s actually a regional issue. We need to get all the stakeholders at the table to resolve this. It’s the neighbors, it’s the homeless advocates, it’s the business owners, it’s the council, there needs to be a broader discussion.

Councilor Carlstrom:

I serve as our representative to the Russian River Watershed Association. One day a very excited woman came to us from the city of Oakland, extolling the virtues of a project they had implemented to install a new water filtration whiz-bang deal, and they’d gone through and cleaned out this big old homeless encampment. I looked at her and said: “Where’d they go?” She looked at me with a blank stare. I get it. You’ve got a siloed job. That’s what we’ve got here. I want to make sure we recognize my appointee to the Board of Community Services, Cat Cvengros, as well as our Cultural Heritage Board, for their efforts on this. I know you took a lot of time to discuss this, and it was brought to you in a siloed way, and that’s your job. I want to be clear that I don’t overturn lower boards’ decisions lightly.

Mayor Bartley (echoing citizen comments about marketing the Church of One Tree):

We developed this building as a rental space. It was restored to be an income generator. The big issue — and it’s a different, more global issue — is how we do that. And I think it can be done. When I hear $350 to rent a church for three hours — that’s the bargain of the century. There should be more zeroes. We’re not marketing like we should. If we do that, and fill it with people, it’ll be a success.

Well done, Santa Rosa! Everyone involved was thoughtful, well-spoken, and open to compromise. Homelessness is a major issue here, and there’s plenty of frustration simmering, but the dominant tone wasn’t anger, it was compassion and a determination to work together to do the right things for the community as a whole. That’s part of what I came to see, and I wasn’t disappointed.

I also came to see how well the city’s online services support governance and citizen engagement. On that front there’s room for improvement. The video capture system works impressively well. You can find meetings — including the most recent one I attended on Tuesday — here. The service provider is Granicus, the same company that serves our former home town, Keene, as well as many other cities. It’s wonderful to be able to review council meetings online, anytime and anywhere. Back in 2008, in an interview with Tom Spengler, who was then CEO of Granicus, I was excited about the possibilities it opened up.

Soon after Keene implemented the Granicus service, though, I had to temper my enthusiasm. In Gov2.0 transparency: An enabler for collaborative sense-making I reflected on a key challenge: building accessible context around civic issues. Immediate stakeholders — government officials, citizens directly involved in decisions — create that context in meetings that are open, to be sure, but still often opaque to the uninitiated. Participants share a context that isn’t accessible to more casual observers.

Consider my situation. Our small investment in the SOFA district makes us minor stakeholders in issues affecting Juilliard Park. We’d like to be as well-informed about those issues as I am, now that I’ve plowed through hours of video and dozens of online documents. But that exercise was far too time-consuming to undertake on a regular basis, with respect to Juilliard Park or any other issue that affects us. And in fact, another such issue was on this week’s council agenda. We live in a neighborhood called the West End, near Railroad Square. There’s a train coming to town, and it runs right through our neighborhood. It’s a wonderful thing, and was in fact another factor in our decision to relocate here. But there’s always a tradeoff. In this case, it’s the possible closure of one of the streets in our neighborhood. Here’s a sign I pass every time I walk downtown:

It’s easy to joke about the URL for the draft environmental impact report, which is so long the sign can barely accommodate it. I’d rather my city’s content management system enabled it to form mnemonic URLs, like:

http://srcity.org/SantaRosaRailroadCrossings

Which, of course, would also make a nice hashtag. In Tags for Democracy I showed how a city can promote a tag, like #SantaRosaRailroadCrossings, as a magnet for conversations that span multiple social networks and institutional websites.

But here I just want to focus on the page behind that formidable URL. It’s an overview of the project, with links to the draft environmental impact report as a whole (600+ pages!) and to the report’s individual sections.

The SMART train will stop at two stations in Santa Rosa, one of which doesn’t yet exist. Construction of the new Guerneville Road station will require a new railroad crossing at Jennings Avenue. Whether it should be an at-grade crossing or an elevated crossing is one key issue under discussion. A related issue is the possible closure of an existing crossing. An at-grade crossing at Jennings Avenue would be the simplest solution, but the California Public Utilities Commission rations the number of these. So adding a new at-grade crossing would require closing a street in our neighborhood. The elevated crossing wouldn’t entail that tradeoff. But as the visualization in the report shows, it’s a monstrosity.

I’ll bet few Santa Rosans have seen that illustration. Yes, the document is online, but it’s daunting. During upcoming conversations about the tradeoffs involved in choosing an at-grade or elevated crossing, wouldn’t it be nice to be able to link directly to that illustration?

Actually you can, and in fact I did just that two paragraphs above. Here’s the link behind the word visualization in that paragraph:

http://ci.santa-rosa.ca.us/doclib/Documents/CDP_Jennings_Avenue_DEIR_Aesthetics.pdf#page=23

It’s a little-known fact that you can form a link to any page within a web-hosted PDF file by appending #page=NUMBER to the URL. It’s challenging to get people onto the same page in open civic discussions; I wish this mechanism were more widely known and used.

Here’s another bit of information that could usefully be highlighted with a link. James Duncan lives near the Jennings site, and has crossed the tracks there for decades. In his statement to the council, he zeroed in on the state requirement to ration the number of crossings:

The pivotal, threshold issue — that isn’t really being discussed — is the position of the Public Utilities Commission to close a crossing in exchange for an at-grade crossing at Jennings.

It’s true there’s a general policy to maintain the status quo. And the interpretation, as I understand it, is that the crossing that exists at Jennings, and has been used all these years, is not [air quote] legal. But there’s no information about what constitutes legal. The federal government maintains an inventory of railroad crossings in the entire nation. But they have a category for what they call uninventoried crossings, and there’s a simple procedure for adding uninventoried crossings to the inventory.

Has James Duncan correctly identified a way out of the painful tradeoff at the heart of this issue? I don’t know, the council doesn’t know, James Duncan doesn’t know, but somebody knows. That person might be a government official or a private citizen (residing in Santa Rosa or elsewhere). A connection between that person and this issue might be brokered by a government official or by a private citizen. But one thing’s for sure. That person won’t want to wade through a 600-page report and hours of video. We’ll want to focus his or her attention on specific parts of documents, and specific parts of video testimony.

The Granicus service enables such linking. That’s how I was able to form the above link to James Duncan’s three-minute statement within the nearly 7 hours of video from Tuesday’s marathon session. But it’s cumbersome to create a link that jumps into the video at specific points. And using those links requires a plugin (Flash or Silverlight), which rules out playback on most mobile devices.

If you do create a link, you’ll notice that the URL looks like this:

http://santa-rosa.granicus.com/MediaPlayer.php?clip_id=552&view_id=5&embed=1&entrytime=18150

In this example, entrytime=18150 denotes the number of seconds from the start of the video. It works out to 5 hours, 2 minutes, and 31 seconds, as you can see in this screenshot of the beginning of James Duncan’s statement:

Here’s what you see when you invoke the tool that helps you form a link:

The player pauses, and the clipping tool opens in a new window overlaid on top of it. Note that the beginning of the proposed clip doesn’t correspond to the 5:02:31 point at which the video is paused. (Click the image to enlarge it and see that more clearly.) You can scroll to that point within the clipping tool, but since the player is paused there’s no audio or video to guide you. To appreciate how clumsy that mechanism is, consider this screenshot from a Santa Rosa city council meeting that’s been posted to YouTube:

Right-clicking the video brings up a menu from which you can select Get video URL at current time. If you’re at the 1:20 mark in that video, the link you can copy and paste looks like this:

https://www.youtube.com/watch?v=zH9od5Tvuqg#t=80

That’s how simple and convenient it can be. And YouTube offers a further convenience. People navigate videos in terms of hours, minutes, and seconds. We’re not good at converting between that notation and raw numbers of seconds. But computers are really good at that. So YouTube supports this alternate syntax:

https://www.youtube.com/watch?v=zH9od5Tvuqg#t=1m20s

So really, you don’t even need a special clipping tool to link into a YouTube video at a specific point. You can just add minutes and seconds to the end of any YouTube URL. Their computers will figure out that 1m20s adds up to 80 seconds. Why should referring to a specific point in a city council meeting be any harder than that? It shouldn’t.
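
Converting between the two notations is trivial arithmetic. Here is a small sketch of my own (not anything YouTube or Granicus provides) that turns an h/m/s timecode into the raw seconds a Granicus entrytime parameter expects, and back again.

import re

def to_seconds(timecode):
    """'1m20s' or '3h6m40s' -> total seconds."""
    parts = {unit: int(value) for value, unit in re.findall(r"(\d+)([hms])", timecode)}
    return parts.get("h", 0) * 3600 + parts.get("m", 0) * 60 + parts.get("s", 0)

def to_timecode(seconds):
    """Total seconds -> 'XhYmZs', the style YouTube accepts in #t= fragments."""
    h, rest = divmod(seconds, 3600)
    m, s = divmod(rest, 60)
    return f"{h}h{m}m{s}s"

print(to_seconds("1m20s"))   # 80, matching the YouTube #t=1m20s example above

# Reusing the meeting video URL from above to jump to Floyd Fox's statement at 3:06:40:
base = "http://santa-rosa.granicus.com/MediaPlayer.php?clip_id=552&view_id=5&embed=1"
print(base + "&entrytime=" + str(to_seconds("3h6m40s")))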

The programming to make deep linking in the Granicus player as convenient as deep linking in the YouTube player isn’t rocket science. Why hasn’t it been done? In my experience, these omissions happen because people don’t expect or demand capabilities that software could easily deliver.

Here’s something else that would expand access to archived council videos. Closed captions aren’t part of the service package that Granicus provides to every city, but Santa Rosa’s service includes them. If you turn on the closed captions while watching a Santa Rosa council video, you’ll see that they’re quite good — much better than the auto-generated captions available for YouTube videos. I suspect that’s true because Granicus provides a human transcriber as an optional part of its service.

Transcription quality notwithstanding, text synced to video is a powerful asset. In the Granicus implementation, it enables videos to be searched. For example, you can search the closed captions for Floyd Fox. Here’s the result:

The search returns three items because Mayor Bartley mentioned Floyd Fox three times: twice as an on-deck speaker, and then once as the current speaker. The third link jumps to Floyd Fox’s statement. Although you don’t land in quite the right spot — Floyd’s remarks begin at 3:06:40, the link based on caption search takes you to 3:07:00 — it’s amazing that you can search nearly 7 hours of video and quickly locate Floyd Fox’s statement.

But what if you didn’t know Floyd Fox was speaking? The names of citizens who make public comments don’t appear on the agenda, because they aren’t known in advance. During a meeting, people who wish to speak submit requests written on yellow cards. If the closed caption transcript were available alongside the video, you could scan within it to quickly absorb the sense of various parts of the meeting, and to find things that you didn’t know to look for. The transcript obviously exists. It can be displayed during video playback, and it can be searched. Why isn’t it available as a complete document? Again, it’s trivial for the software to do that. But nobody expects that feature, so nobody asks for it, and it doesn’t happen.

I first wrote about open government technology back in 2006, when Washington DC became the first city to publish data directly from its internal systems. In 2008 I explored a then-new service called Granicus. All along I’ve envisioned a world in which governments run transparently, publishing data that enables citizens and governments to work together. We’ve come a long way. But I am not yet satisfied. Even when meetings and supporting documents are available online, as they often now are, it’s harder than it should be to create the contexts needed for effective collaboration.

Context is, ultimately, a service that we provide to one another. If you’ve read this far, you know more about the fence around the Church of One Tree than anyone who didn’t attend the meeting. I created that context for you. Somebody else could do the same for the railroad crossing issue, and for any other issue in any other town. But it’s so painful to assemble that context that few will try, and fewer will succeed. Better tools aren’t the whole answer. Engaging with online civic proceedings isn’t everyone’s cup of tea. But if it were easier to do — fun, even — the motivated few could do powerful good for their communities.

The Nelson diaspora

This will be our first winter in California. I won’t miss New Hampshire’s snow and ice. But I’ll sure miss our regular Friday night gatherings with friends in Keene. And on Monday nights my thoughts will turn to the village of Nelson, eleven miles up the road. There, for longer than anyone knows, people have been playing fiddle tunes and celebrating a great contra dance tradition. On a cold winter night, when the whirling bodies of the dancers warm up the old town hall, it’s magical.

Gordon Peery, who for decades has accompanied the dancers on piano, once lent me a DVD documentary about the Nelson contra dance tradition. In a scene filmed at the Newport Folk Festival in the mid-1960s, the Nelson contra dancers appeared on the same stage as Bob Dylan and Joan Baez. I had no idea!

Here’s a video of a couple of minutes during a typical Monday night dance. If you don’t know the building, or the village, or the people, or the tunes, or the tradition that’s stayed vibrant there for so many years, it won’t mean much to you. To a few, though, it will resonate powerfully. That’s because Nelson, NH is the origin of a contra dance diaspora that spread across the country.

Although we aren’t contra dancers, we visited from time to time just to savor the experience. Then, a few years ago, in search of musical companionship, I began attending the jam that precedes the dance. There, beginning and intermediate musicians learn how to play the dance tunes, mainly ones collected in these two books:

The Waltz Book opens with a tune written by Bob McQuillen, who played piano at the Nelson dance for decades until his death in early 2014. And the book closes with a tune by Niel Gow, the Scottish fiddler who died in 1807.

The Waltz Book also includes a couple of Jay Ungar tunes, including Ashokan Farewell. Most people think it’s a tune from the Civil War. In fact Jay Ungar wrote it in 1982, and it became famous in 1990 as the theme of Ken Burns’ documentary about the Civil War.

The New England Fiddler’s Repertoire might also have included a mix of recent and traditional tunes. Instead it restricts itself to “established tunes” — some attributed to composers from the 1700s or 1800s, others anonymous. But it’s full of reminders that people have never stopped dancing to those traditional tunes. Here’s the footnote to Little Judique:

February 12. Played for a Forestry Meet dance in a barn with a sawdust floor at the University of New Hampshire in Durham. The temperature was 15 degrees below zero.

– Randy Miller, dance journal, 1978

As I page through these books now, and continue to learn to play the tunes in them, I’m grateful to have lived in a place where they were celebrated so well, and to have participated (in a small way) in that celebration. How will I continue that here? I don’t know yet, but I’m sure I’ll find a way.

Swimming against the stream

Congratulations to Contributoria! The Guardian’s experiment in crowd-funded collaborative journalism is a finalist in the digital innovation category of the British Journalism Awards. (Disclosure: Contributoria’s CEO and cofounder, Matt McAlister, is a former InfoWorld colleague.) The site, which launched in January 2014, runs on a 3-month cycle. So the November 2014 issue is online now, the December 2014 issue is in production, and the January 2015 issue is in planning. There are now ten issues archived on the back issues page. Here are the numbers from that page in a spreadsheet:

Who pays the writers? Contributoria’s business page explains:

The writers’ commissions are provided by the community membership pool and other sources of funding such as sponsorship. Initial backing came from the Google sponsored International Press Institute News Innovation Contest. We are currently funded by the Guardian Media Group.

Contributoria is a market with an internal currency denominated in points. I’m currently signed up for a basic membership which comes with 50 points I can direct toward story proposals. If I upgrade to paid membership I’ll have more points to spend. As a Supporter you get 150 points. As a Patron it’s 250 points plus delivery of each issue in print and e-pub formats. I like to support innovative experiments in journalism, such as the (dearly departed) Ann Arbor Chronicle [1, 2], so I may upgrade my membership. But I’m not sure I want to vote, with points, for individual proposals. I might rather donate my points to the project as a whole.

What I would like to do, in any case, is keep an eye on the flow of stories, cherrypick items of interest, and perhaps follow certain writers. So I looked for the RSS feeds that would enable me to do those things, and was more than a bit surprised not to find them. Here’s the scoop:

That was back in February. Nine months later Contributoria’s RSS feeds are still, presumably, climbing the todo list. How could a prominent and potentially award-winning experiment in online journalism regard RSS as an afterthought?

I mean no disrespect, and I won’t point any fingers because they’d point right back at me. I was an early adopter of RSS and an original member of the RSS Advisory Board. For many years an RSS reader was my information dashboard. I used it to organize and monitor items of interest from formal and informal sources, from publications and from peers. It was often the first window open on my computer in the morning.

And then things changed. I don’t remember exactly when, but for me it was even before the demise of Google Reader in July 2013. By then I’d already resigned myself to the notion that social streams were the new RSS. The world had moved on. Many people had never used RSS readers. For those who had, manual control of explicit lists of feeds now seemed more trouble than it was worth. Social graphs make those lists implicit. Our networks have become our filters. Mark Hadman and I might wish Contributoria offered RSS feeds but we’re in a tiny minority.

Of course the network-as-filter model isn’t new. The early blogosphere gave me my first taste of it. Back in 2002, in a short blog entry entitled Using people as filters, I wrote:

As individuals become both producers and consumers of RSS feeds, they can use one another as filters.

It worked well for years. I subscribed to a mix of primary sources and bloggers who were, in turn, subscribed to their own mixes of primary sources and subscribers. I often likened Dave Winer’s notion of triangulation — what happens when several sources converge on the same idea or event — to the summation of action potentials in the human nervous system. It was all very organic. I could easily tune my filter network to stay informed without feeling overwhelmed.

I doubt many of us feel that way now. Armando Alves certainly doesn’t. In Beyond filter failure: the downfall of RSS he laments the decreasing availability of RSS feeds:

I’m afraid the web/tech community has done a lousy job promoting RSS, and even people I consider tech savvy aren’t aware how to use RSS or how it would improve the way they consume information. Between a Facebook newsfeed shaped by commercial interests and a raw stream of information powered by RSS, I’d rather have the latter.

Let’s pause to consider an irony. Armando’s Medium page invites me to follow him there, but not by means of RSS. Instead the Follow link takes me to Medium’s account registration page. There I can log in, using Facebook or Twitter, then await the account verification email that will enable me to enter yet another walled garden.

There is, in fact, an RSS feed for Armando’s Medium posts. But only geeks will find it. Embedded in the page is this bit of code:

<link id="feedLink" rel="alternate" type="application/rss+xml" title="RSS" href="/feed/@armandoalves">

That tells me that I can subscribe to Armando in an RSS reader at this URL: https://medium.com/feed/@armandoalves

More generally it tells me that I can form the feed URL for any author on Medium by appending the @ username to https://medium.com/feed/.
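That discovery step is mechanical enough to script. Here’s a minimal sketch in Python, using only the standard library, that fetches a page and collects any feed links declared the way Armando’s is. The profile URL is my assumption about where his page lives, and of course Medium could change its markup at any time.

# A minimal sketch of RSS feed autodiscovery: fetch a page and collect any
# <link rel="alternate" type="application/rss+xml"> elements like the one above.
from html.parser import HTMLParser
from urllib.request import urlopen

class FeedLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == 'link'
                and a.get('rel') == 'alternate'
                and a.get('type') == 'application/rss+xml'):
            self.feeds.append(a.get('href'))

page_url = 'https://medium.com/@armandoalves'  # assumed location of the profile page
html = urlopen(page_url).read().decode('utf-8', errors='replace')
finder = FeedLinkFinder()
finder.feed(html)
print(finder.feeds)  # relative hrefs like '/feed/@armandoalves' resolve against the site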

How would a less technical person know these things? They wouldn’t. At one time, when RSS was in vogue, browsers would show you that a page had a corresponding RSS feed and help you add it to your feed reader. That’s no longer true, and not because the web/tech community failed to promote RSS. We promoted it like crazy for years. We got it baked into browsers. Then social media made it seem irrelevant.

But as Armando and many others are beginning to see, we’ve lost control of our filters. In the early blogosphere my social graph connected me to people who followed primary sources. Those sources were, it bears repeating, both formal and informal, both publications and peers. We made lists of sources for ourselves, we curated those lists for one another, and we were individually and collectively accountable for those choices.

Can we regain the control we lost? Do we even want to? If so let’s appreciate that the RSS ecosystem was (and, though weakened, still is) an open network powered by people who make explicit choices about flows of information. And let’s start exercising our choice-making muscles. I’m flexing mine again. The path of least resistance hasn’t worked for me, so my vacation from RSS is over. I want unfiltered access to the publications and people that matter most to me, I want them to be my best filters, and I’m available to return the favor. I may be swimming against the stream but I don’t care. I need the exercise.

Getting the digital autonomy we pay for

As an armchair educational technologist I’ve applauded the emerging notion that we should encourage students to build personal cyberinfrastructure, rooted in a domain of one’s own, that empowers them to live and work effectively. Doing so requires some expertise, but not necessarily this kind:

Authorship has blossomed since the dawn of social media; but even in its rise, authorship has been controlled by the platforms upon which we write. Digital pages are not neutral spaces. As I write this in Google Docs, I’m subject to the terms of service that invisibly manipulate the page; and I am also subject to the whims of the designers of the platform.

Owning our own homes in the digital requires an expertise that this writer does not have. I don’t own my own server, I haven’t learned to code, I haven’t designed my own interfaces, my own web site, nor even my own font. I must content myself to rent, to squat, or to ride the rails.

[Risk, Reward, and Digital Writing]

That’s Sean Michael Morris writing in the journal Hybrid Pedagogy. I agree with the premise that many are disempowered, but not with the conclusion that they’re stuck with that fate. Digital autonomy isn’t a nirvana only geeks can attain. We can all get there if we appreciate some basic principles and help create markets around them.

I’ve owned and operated servers. Nowadays I mostly avoid doing so. I host this blog on WordPress.com, and accept the limitations that entails, because it’s a reasonable tradeoff. Here, on this blog, at this point in my life, I don’t need to engage in the kinds of experimentation that I’ve done (and will do) elsewhere. I just need a place to publish. So I’ve outsourced that function to WordPress.

I haven’t, though, outsourced the function of writing to WordPress. I still write with the same text editor I’ve used for 25 years. When I finish writing this essay I’ll paste it into WordPress and hit the Publish button. This is not an ideal arrangement. I would rather connect my preferred writing tool more directly to WordPress (also to Twitter, Facebook, and other contexts). I’d rather that you could do the same with your preferred writing tool. And I’d like the creators of our writing tools, and of WordPress, to get paid for the work required to make those connections robust and seamless.
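A rough version of that connection is already possible, for those willing to script it. Here’s a minimal sketch that pushes a locally written draft to a WordPress blog through the long-standing metaWeblog XML-RPC API; the endpoint, username, and password are placeholders, and the blog’s XML-RPC support is assumed to be enabled.

# A minimal sketch: publish a locally written draft to a WordPress blog via the
# metaWeblog XML-RPC API. The endpoint and credentials below are placeholders.
import xmlrpc.client

endpoint = 'https://example.wordpress.com/xmlrpc.php'  # placeholder blog address
server = xmlrpc.client.ServerProxy(endpoint)

post = {
    'title': 'Written in my favorite text editor',
    'description': '<p>Drafted locally, pushed to the blog by script.</p>',
}

# metaWeblog.newPost(blogid, username, password, struct, publish)
post_id = server.metaWeblog.newPost('', 'username', 'password', post, True)
print('published post', post_id)

It works, but it’s exactly the kind of fragile, geeks-only plumbing that a real market for robust connections would replace.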

The web is made of software components that communicate by means of standard protocols. Some of those standards, like the ones your browser uses to fetch and display web pages, are baked into all the browsers and servers in a way that enables many different makes and models to work together reliably. Other standards, including those that would enable you to connect your favorite writing tool to your favorite publishing environments, are nonexistent or nascent. If you would like those standards to exist, flourish, and work reliably everywhere — and if you are willing to support the work required — then say so!

One of the Elm City project’s core principles is the notion that you ought to be able to publicize events using any calendar application (or service) that you prefer. In this case there is a mature Internet standard. But many vendors of calendar publishing systems don’t bother to implement it. When I ask why not they always say: “Customers aren’t asking for it.”

I don’t want most people to run servers, write code, or design interfaces. I just want people to understand what’s possible in a world of connected, standards-based software components, to recognize when those possibilities aren’t being realized, to expect and demand that they will be, and to pay something for that outcome.

WordPress.com is a valuable service that I could be using for free. In fact I pay $13/year for domain mapping so I can refer to this blog as blog.jonudell.net instead of jonudell.wordpress.com. That’s useful. It helps me consolidate my online presence within a domain of my own. But is that really critical? My wife blogs at luannudell.wordpress.com. Her homepage at www.luannudell.com links to the blog. Writing this post reminds me that I keep forgetting to create the alias blog.luannudell.com. Maybe I will, now that I’m thinking about it, but I’m not sure she’d notice a difference. People find Luann online in the spaces she chooses to inhabit and call her own.

I’d rather she had the option to spend $13/year for a robust connection between Word, which is her preferred writing tool, and WordPress. And then be able to transfer that connection to another writing tool and/or publishing platform if she wants to. Gaining this autonomy doesn’t require deep technical expertise. We just need to understand what’s possible, demand it, and be willing to pay (a little) for it.

We are the media

In an item posted last week, There was no pumpkin riot in Keene, I drew a distinction between two different events that became conflated in the national awareness. There was no rioting during the pumpkin festival at one end of Keene’s Main Street. And no pumpkins were smashed during the riots in the college neighborhood at the other end of the street. But as Reed Hedges noted in a comment on my blog:

The imagined scene of a quaint and boring pumpkin festival erupting in anarchy and violence for no reason was too amusing to resist viral spread across national and internet news.

In conversations about why the story went off the rails, I keep hearing the same refrain. It was “the media’s” fault. Yes, but that raises the question: Which media? Stories are no longer framed exclusively by newspapers, TV, radio, and their counterparts online. Using social media we all participate in that framing, for better and for worse. When we point the finger of blame at “the media” we must also point back at ourselves.

We’re becoming more aware of how and why to be critical consumers of online information. The corollary is not yet widely acknowledged. Because we collectively shape the stories that inform public awareness, we must also learn to be careful producers of online information.

In the aftermath of that chaotic night in Keene, an acquaintance (and prominent local citizen) mentioned in a Facebook post that two people had died. His source? He’d heard it from someone who had in turn heard it on a police scanner. In fact nobody died. I don’t think that careless report amplified the collective misconception, but it easily could have. Our online utterances are news sources. When we like, retweet, and tag those utterances, we shape the flow of news. This is a new kind of power. We’ve got to use it responsibly, and hold ourselves accountable when we don’t.

Let’s talk

Ray Ozzie, in conversation with Ina Fried and Walt Mossberg last week, reflected on his decades-long effort to enlist computers in support of collaborative work. Ina asked whether the tools he’s built — Notes, Groove, now Talko — have been ahead of the curve. Ray’s response:

With Notes it was an uphill battle, and then once it took off, it took off. We built a very substantial business around that, which is what gave me confidence there’s a macro-economic basis for computer-supported collaborative work. If you solve collaboration problems, there’s money to be made.

With Groove that wasn’t the case. It was a niche audience. But Groove is where I got excited about voice. It was used primarily by non-governmental organizations — in Sri Lanka after the tsunami, after Katrina, in many situations where people from different organizations needed to get together very dynamically to get something done. People used the text and file-sharing features in Groove, but they also used the push-to-talk button much more than we expected. Because when you want to convey emotion and urgency, there’s nothing better than your voice.

It’s ironic that Talko’s effort to re-establish voice as a primary mode of communication is ahead of the curve. Somehow we’ve come to accept that talking to one another isn’t a primary function of the devices we call phones, but that typing on them with our thumbs is.

With Talko, you speak in a shared space that’s represented as an audio timeline. Conversations can be asynchronous (like email) or synchronous (like chat), and the transition between those modes is seamless. When I bought my first iPhone in order to try Talko — it’s iOS-only for now — I contacted Matt Pope, Talko’s co-founder, to let him know I was available for Talko-style conversation. Over a period of days we chatted asynchronously, creating an audio timeline made from his voice messages and mine. In that mode Talko is a kind of visual voicemail: a randomly-accessible record of voice messages sent and received.

At one point I happened to be reviewing that conversation when Matt came online, noticed I was active in the conversation, and switched into synchronous mode by saying “Cool! Serendipitous synch!” Just like that we were in a live conversation. But not an ephemeral live conversation. We were still adding to the audio timeline, still building a persistent and shareable construct.

I’d been revisiting my conversation with Matt because, in a parallel Talko conversation with Steve Gillmor, he and I wondered how to exchange Talko sessions. Matt used our live conversation to explain how, and inserted an iPhone screenshot into the stream to illustrate. (Mea culpa. It would have been obvious to me if I’d been a more experienced iPhone user.)

When I hung up with Matt I captured the link for our conversation, which was now a record of back-and-forth voice messages over several days, plus a live conversation, plus a screenshot injected into the conversation. And I added that link into the parallel conversation I’d been having, on and off for a few days, with Steve.

Talko’s business model is business. In that realm, voicemail is a last resort. And nobody loves a conference call that has to be scheduled in advance, that includes only invited attendees, that leaves no record for attendees (or others recruited later) to review and extend. Email is the universal solvent but it’s bandwidth-challenged with respect to both speed and emotional richness. Voice is a radically underutilized medium for communicating within and across organizations. It’s not a panacea, of course. On a plane, or in a meeting, you often need to communicate silently. But a smarter approach to voice communication will, I’m certain, solve vexing communication problems for business.

And not only for business. My most pressing collaboration challenge right now is helping my sister coordinate care for our elderly mother. My sister lives in New Jersey, I’m in California, mom’s in Pennsylvania. We are in ongoing conversations with mom, with each other, with staff at the facility where mom lives, with her friends there, with an agency that provides supplemental care, and from time to time with the hospital. Communication among all of us is a fragmented mess of emails, text messages, voicemails, pictures of handwritten notes, and of course phone calls. It’s really one ongoing conversation that would ideally leverage voice as much as possible, while enhancing voice with text, images, persistence, tagging, and sharing. I wish I could use Talko to manage that conversation. The iOS-only constraint prevents that for now; I hope it lifts soon.

There was no pumpkin riot in Keene

Recently, in a store in Santa Rosa, my wife Luann was waiting behind another customer whose surname, the clerk was thrilled to learn, is Parrish. “That’s the name of the guy in Jumanji,” the clerk said. “I’ve seen that movie fifty times!”

“I’m from Keene, New Hampshire,” Luann said, “the town where that movie was filmed.”

It was a big deal when Robin Williams came to town. You can still see the sign for Parrish Shoes painted on a brick wall downtown. Recently it became the local Robin Williams memorial:

Then the penny dropped. The customer turned to Luann and said: “Keene? Really? Isn’t that where the pumpkin riot happened?”

The Pumpkin Festival began in 1991. In 2005 I made a short documentary film about the event.

It’s a montage of marching bands, face painting, music, kettle corn, folk dancing, juggling, and of course endless ranks of jack-o-lanterns by day and especially by night. We weren’t around this year to see it, but our friends in Keene assure us that if we had been, we’d have seen a Pumpkin Festival just like the one I filmed in 2005. The 2014 Pumpkin Festival was the same family event it’s always been. Many attendees had no idea that, at the other end of Main Street, in the neighborhood around Keene State College, the now-infamous riot was in progress.

No pumpkins were harmed in the riot. Bottles, cans, and rocks were thrown, a car was flipped, fires were set, but — strange as it sounds — none of these activities intersected with the normal course of the festival. Two very different and quite unrelated events occurred in the same town on the same day.

The riot had precursors. Things had been getting out of control in the college’s neighborhood for the past few years. College and town officials were expecting trouble again, and thought they were prepared to contain it. But things got so crazy this year that SWAT teams from around the state were called in to help.

In the aftermath there was an important discussion of white privilege, and of the double standard applied to media coverage of the Keene riot versus the Ferguson protests. Here’s The Daily Kos:

Black folks who are protesting with righteous rage and anger in response to the killing of Michael Brown in Ferguson have been called “thugs”, “animals”, and cited by the Right-wing media as examples of the “bad culture” and “cultural pathologies” supposedly common to the African-American community.

Privileged white college students who riot at a pumpkin festival are “spirited partiers”, “unruly”, or “rowdy”.

Unfortunately the title of that article, White Privilege and the ‘Pumpkin Fest’ Riot of 2014, helped perpetuate the false notion that the Pumpkin Festival turned into a riot. When I mentioned that to a friend he said: “Of course, the media always get things wrong.”

It would be easy to blame the media. In fact, the misconception about what happened in Keene is a collective error. On Twitter, for example, #pumpkinfest became the hashtag that gathered riot-related messages, photos, and videos, and that focused the comparison to Ferguson. Who made that choice? Not the media. Not anyone in particular. It was the network’s choice. And the network got it wrong. Our friends in Keene saw it happening and tried to flood the social media with messages and photos documenting a 2014 Pumpkin Festival that was as happy and peaceful as every other Pumpkin Festival. But once the world had decided there’d been a pumpkin riot it was impossible to reverse that decision.

Is Keene’s signature event now ruined? We’ll see. I don’t think anybody yet knows whether it will continue. Meanwhile it’s worth reflecting on how conventional and social media converged on the same error. There’s nothing magical about the network. It’s just us, and sometimes we get things wrong.

How recently has the website been updated?

Today’s hangout with Gardner Campbell and Howard Rheingold, part of the Connected Courses project, dovetailed nicely with a post I’ve been meaning to write. Our discussion topic was web literacy. One of the literacies that Howard has been promoting is critical consumption of information or, as he more effectively says, “crap detection.” His mini-course on the subject links to a page entitled The CRAP Test which offers this checklist:

• Currency
  – How recent is the information?
  – How recently has the website been updated?
  – Is it current enough for your topic?

• Reliability
  – What kind of information is included in the resource?
  – Is the content of the resource primarily opinion? Is it balanced?
  – Does the creator provide references or sources for data or quotations?

• Authority
  – Who is the creator or author?
  – What are the credentials?
  – Who is the publisher or sponsor?
  – Are they reputable?
  – What is the publisher’s interest (if any) in this information?
  – Are there advertisements on the website?

• Purpose/Point of View
  – Is this fact or opinion?
  – Is it biased?
  – Is the creator/author trying to sell you something?

The first criterion, Currency, seems more straightforward than the others. But it isn’t. Web servers often don’t know when the pages they serve were created or last edited. The pages themselves may carry that information, but not in any standard way that search engines can reliably use.

In an earlier web era there was a strong correspondence between files on your computer and pages served up on the web. In some cases that remains true. My home page, for example, is just a hand-edited HTML file. When you fetch the page into your browser, the server transmits the following information in HTTP headers that you don’t see:

HTTP/1.1 200 OK
Date: Thu, 23 Oct 2014 20:54:46 GMT
Server: Apache
Last-Modified: Wed, 06 Aug 2014 19:28:27 GMT

That page was served today but last edited on August 6th.
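You don’t need special tools to check. Here’s a minimal sketch, using Python’s standard library, that asks a server for those headers; the URL is my assumption about where that hand-edited home page lives.

# A minimal sketch: issue a HEAD request and report when the page was served
# and when it was last modified, if the server says. The URL is an assumption.
from urllib.request import Request, urlopen

req = Request('http://jonudell.net/', method='HEAD')
with urlopen(req) as resp:
    print(resp.headers.get('Date'))           # when the page was served
    print(resp.headers.get('Last-Modified'))  # when the file was last edited, if reported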

Nowadays, though, for many good reasons, most pages aren’t hand-edited HTML. Most are served up by systems that assemble pages dynamically from many parts. Such systems may or may not transmit a Last-Modified header. If they do they usually report when the page was assembled, which is about the same time you read it.

Search engines can, of course, know when new pages appear on the web. And there are ways to tap into that knowledge. But such methods are arcane and unreliable. We take it for granted that we can list files in folders on our computers by date. Reviewing web search results doesn’t work that way, so it’s arduous to apply the first criterion of C.R.A.P. detection. If you’re lucky the URL will encode a publication date, as is often true for blogs. In such cases you can gauge freshness without loading the page. Otherwise you’ll need to click the link and look around for cues. Some web publishing systems report when items were published and/or edited; many don’t.
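For the lucky case, where the URL does encode a publication date, a simple pattern match is enough. Here’s a minimal sketch, assuming the /YYYY/MM/DD/ convention common to many blogs:

# A minimal sketch: pull a publication date out of a URL when one is present,
# so freshness can be gauged without loading the page. Assumes /YYYY/MM/DD/ or
# /YYYY/MM/ path segments, a convention many blog platforms follow.
import re

def date_from_url(url):
    m = re.search(r'/(\d{4})/(\d{1,2})(?:/(\d{1,2}))?/', url)
    if not m:
        return None
    year, month, day = m.group(1), m.group(2), m.group(3) or '1'
    return '%s-%02d-%02d' % (year, int(month), int(day))

print(date_from_url('https://example.wordpress.com/2014/10/23/some-post/'))
# -> '2014-10-23'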

Social media tend to mask this problem because they encourage us to operate in what Mike Caulfield calls StreamMode:

StreamMode is the approach to organizing your thoughts as a history, integrated primarily as a sequence of events. You know that you are in StreamMode if you never return to edit the things you are posting on the web.

He contrasts StreamMode with StateMode:

In StateMode we want a body of work at any given moment to be seen as an integrated whole, the best pass at our current thinking. It’s not a journal trail of how we got here, it’s a description of where we are now.

The ultimate expression of StateMode is the wiki.

But not only the wiki. Any website whose organizing principle is not reverse chronology is operating in StateMode. If you’re publishing that kind of site, how can you make its currency easier to evaluate? If you can choose your publishing system, prefer one that can form URLs with publication dates and embed last-edited timestamps in pages.

In theory, our publishing tools could capture timestamps for the creation and modification of pages. Our web servers could encode those timestamps in HTTP headers and/or in generated pages, using a standard format. Search engines could use those timestamps to reliably sort results. And we could all much more easily evaluate the currency of those results.

In practice that’s not going to happen anytime soon. Makers of publishing tools, servers, and search engines would have to agree on a standard approach and form a critical mass in support of it. Don’t hold your breath waiting.

Can we do better? We spoke today about the web’s openness to user innovation and cited the emergence of Twitter hashtags as an example. Hashtags weren’t baked into Twitter. Chris Messina proposed using them as a way to form ad-hoc groups, drawing (I think) on earlier experience with Internet Relay Chat. Now the scope of hashtags extends far beyond Twitter. The tag for Connected Courses, #ccourses, finds essays, images, and videos from all around the web. Nine keystrokes join you to a group exploration of a set of ideas. Eleven more, #2014-10-23, could locate you on that exploration’s timeline. Would it be worth the effort? Perhaps not. But if we really wanted the result, we could achieve it.

GitHub Pages For The Rest Of Us

In A web of agreements and disagreements I documented one aspect of a recent wiki migration: conversion of MediaWiki’s markup lingo, wikitext, to GitHub’s lingo, GitHub Flavored Markdown. Here I’ll describe the GitHub hosting arrangement we ended up with.

There were two ways to do it. We could use GitHub’s built-in per-repository wiki, powered by an engine called Gollum. Or we could use GitHub Pages, a general-purpose web publishing system that powers (most famously) the website for the Ruby language. The engine behind GitHub Pages, Jekyll, is also often used for blogs. If you scan the list of Jekyll sites you’ll see that a great many are software developers’ personal blogs. That’s no accident. They are the folks who most appreciate the benefits of GitHub Pages, which include:

Simple markup. You write Markdown, Jekyll converts it to HTML. Nothing prevents you from mixing in HTML, or using HTML exclusively, but the simplicity of Markdown — more accurately, GitHub Flavored Markdown — is a big draw.

No database. For simple websites and blogs, a so-called dynamic system, backed by a database, can be overkill. You have to install and maintain the database, which then regulates all access to your files. Why not just create and edit plain old files, either in a simple lingo like Markdown or in full-blown HTML, then squirt them through an engine that HTMLizes the Markdown (if necessary) and flows them through a site template? (See the sketch after this list.) People call sites made this way static sites, which I think is a bit of a misnomer. It’s a mouthful, but I prefer to call them dynamically generated and statically served. I’ve built a lot of web publishing systems over the years and they all work this way. If what you’re publishing is data that naturally resides in a database then of course you’ll need to feed the site from the database. But if what you’re publishing is stuff that you write, and that most naturally lives in the filesystem, why bother?

Version control. A GitHub Pages site is just a branch in a GitHub repository associated with some special conventions. And a GitHub repository offers many powerful affordances, including an exquisitely capable system for logging, tracking, and visualizing the edits to a set of documents made by one or more people.

Collaborative editing. When more than one person edits a site, each can make a copy of the site’s pages, edit independently of others, and then ask the site’s owner to merge in the changes.

Issue tracking. Both authors and readers of the site can use GitHub to request changes and work collaboratively to resolve those requests.
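To make the “no database” point concrete, here’s a minimal sketch of dynamically generated, statically served publishing. It assumes a posts/ directory of Markdown files and the third-party markdown package; Jekyll’s real pipeline adds layouts, front matter, and much more.

# A minimal sketch: HTMLize a folder of Markdown files and flow each one
# through a trivial site template, producing plain files a web server can serve.
import pathlib
import markdown  # third-party package: pip install markdown

TEMPLATE = "<html><body>{body}</body></html>"

site = pathlib.Path("site")
site.mkdir(exist_ok=True)

for src in pathlib.Path("posts").glob("*.md"):
    body = markdown.markdown(src.read_text(encoding="utf-8"))
    out = site / (src.stem + ".html")
    out.write_text(TEMPLATE.format(body=body), encoding="utf-8")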

You don’t have to be a programmer to appreciate these benefits. But GitHub is a programmer-friendly place. And its tools and processes are famously complex, even for programmers. If you already use those tools on a daily basis, GitHub Pages will feel natural. Otherwise you’ll need a brain transplant.

Which is a shame, really. All sorts of people would want to take advantage of the benefits of GitHub Pages. If it were packaged very differently, and presented as GitHub Pages For The Rest of Us, they might be able to. The collaborative creation and management of sets of documents is a general problem that’s still poorly solved for the vast majority of information workers. Mechanisms for collaborative editing, version control, and issue tracking often don’t exist. When they do they’re typically add-on features that every content management system implements in its own way. GitHub inverts that model. Collaborative editing, version control, and issue tracking are standard capabilities that provide a foundation on which many different workflows can be built. Programmers shouldn’t be the only ones able to exploit that synergy.

In this case, though, the authors of the wiki I was migrating are programmers. We use GitHub, and we know how to take advantage of the benefits of GitHub Pages. But there was still a problem. You don’t make a wiki with GitHub Pages, you make a conventional website. And while you can use GitHub Flavored Markdown to make it, the drill involves cloning your repository to a local working directory, then installing Jekyll and using it to compile your Markdown files into their HTML counterparts which you preview and finally push to the upstream repo. It’s a programmer’s workflow. We know the drill. But just because we can work that way doesn’t mean we should. Spontaneity is one of wiki’s great strengths. See something you want to change? Just click to make the page editable and do it. The activation threshold is as low as it can possibly be, and that’s crucial for maintaining documentation. Every extra step in the process is friction that impedes the flow of edits.

So we went with the built-in wiki. It’s easy to get started: you just click the Wiki link in the sidebar of your GitHub repository and start writing. You can even choose your markup syntax from a list that includes MediaWiki and Markdown. As we went along, though, we felt increasingly constrained by the fixed layout of the built-in wiki. Wide elements like tables and preformatted blocks of text got uncomfortably squeezed. You can create a custom sidebar, but that doesn’t replace the default sidebar, which lists pages alphabetically in a way that felt intrusive. And we found ourselves using Markdown in strange ways to compensate for the inability to style the wiki.

If only you could use GitHub Pages in a more interactive way, without having to install Jekyll and then compile and push every little change. Well, it turns out that you can. Sort of.

I haven’t actually run Jekyll locally so I may be mistaken, but here’s how it looks to me. Jekyll compiles your site to a local directory which becomes the cache from which it serves up the results of its Markdown-to-HTML conversion. When you push your changes to GitHub, though, the process repeats. GitHub notices when you update the repo and runs Jekyll for you in the cloud. It compiles your Markdown to a cache that it creates and uses on your behalf.

If that’s how it works, shouldn’t you be able to edit your Markdown files directly in the repository, using GitHub’s normal interface for editing and proofing? And wouldn’t that be pretty close to the experience of editing a GitHub wiki?

Yes and yes, with some caveats. Creating new pages isn’t as convenient as in the wiki. You can’t just type in [see here](New Page) and then click the rendered link to conjure that new page into existence. Jekyll requires more ceremony. You have to manually create NewPage.md (not, evidently, “New Page.md”) in your repo. And then you have to edit NewPage.md and add something like this at the top:

---
title: New Page
layout: default
---

Since conjuring new pages by name is arguably the essence of a wiki, this clearly isn’t one. But you can create a page interactively using GitHub’s normal interface. Once that’s done, you can edit and preview NewPage.md the same way. To me the process feels more like using the built-in wiki than compiling locally with Jekyll. And it opens the door to the custom CSS and layouts that the built-in wiki precludes.

There are, alas, still more caveats. You can’t always believe the preview. Some things that look right in preview are wrong in the final rendering. And that final rendering isn’t immediate. Changes take more or less time to show up, depending (I suppose) on how busy GitHub’s cloud-based Jekyll service happens to be. So this is far from a perfect solution. If you only need something a bit more robust than your repository’s README.md file, then the built-in wiki is fine. If you’re creating a major site like ruby-lang.org then you’ll want to install and run Jekyll locally. Between those extremes, though, there’s a middle ground: you can use GitHub Pages in a cloud-based way that delivers a wiki-like editing experience with the ability to use custom CSS and layouts.

I don’t think this particular patch of middle ground will appeal widely. Maybe a hypothetical GitHub Pages For The Rest Of Us will. Or maybe Ward Cunningham’s Smallest Federated Wiki (see also http://hapgood.us/tag/federated-wiki/) will. In any case, the ideas and methods that enable software developers to work together online are ones that everyone will want to learn and apply. The more paths to understanding and mastery, the better.

Voyage of the Captain Kirk Floating Arms Keyboard Chair

When we moved last month we let go of a great many things in order to compress our household and Luann’s studio into a set of ABF U-Pack containers. At one point we planned to shed all our (mostly second-hand) furniture, figuring it’d be cheaper to replace than to ship cross-country. But since Luann had acquired all that furniture, it was much harder for her to let go of it than it was for me. So to put a bit more of my own skin into the game I sacrificed my beloved Captain Kirk chair with Floating Arms keyboard.

The idea was to preserve the essential one-of-a-kind keyboard and replace the commodity chair. Which was foolish: Bodybilt chairs don’t come cheap. But I was in the grip of an obsession to lighten our load, and there was no time left to sell it, so off to the curb it went. My friend John Washer and I immortalized the moment.

Then, happily, fate intervened. First another friend, George Ponzini, sensibly picked up the chair and took it home. Then we decided to use our reserve fourth U-Pack container to bring a sofa, some living room chairs, and other stuff we thought we’d leave behind. Now there was room for the Captain Kirk chair to come along on our voyage. George kindly brought it back, I packed it, and off to California we went.

Weeks later we unpacked our household containers in our rented home in Santa Rosa. When I set the chair down in my office, the hydraulic lifter broke. Not a disaster, I could live with it at the lowest setting until I could replace the lifter. But then, as we emptied box after box, I began to worry. The Floating Arms keyboard wasn’t showing up. Disaster!

Then, finally, it turned up. Joy!

But when I tried to reattach it to the chair, two crucial parts — the rods that connect the arms of the chair to the custom keyboard — were inexplicably missing. Disaster!

Eventually it dawned on me. This wasn’t the Floating Arms keyboard I’d been using for the past 15 years. It was the original prototype that I’d reviewed for BYTE, and that Workplace Designs had replaced with the production model. I’d had a backup Floating Arms keyboard all this time, forgotten in a box up in the attic. So now I could recreate my setup. I just needed to replace the connector rods and the hydraulic lifter. Joy! Maybe! If those parts were still available!

I called The Human Solution and spoke to the very friendly and helpful Jonah Gardner. He took down the serial number on the chair, asked for photos of the broken hydraulic lifter and the arms into which the missing connector rods needed to fit, and promised to get back to me.

The next day the missing Floating Arms keyboard turned up in the bottom of a bag of shoes. More joy! I hooked it up to my broken-but-still-functional chair and got to work. The first order of business was to contact Jonah and let him know I didn’t need those connector rods; they were attached to the missing-but-now-found keyboard. “You’re lucky,” he said. “We couldn’t have replaced those. But the lifter is still available for your chair, and you can order it.” So I did.

The lifter arrived today. It wasn’t immediately obvious how to extract the old one in order to replace it. There were no fasteners. Do you just need to pound on it with a sledgehammer? I wrote to Jonah and he responded with this video and these instructions:

Someone will need to use a 3-4 pound, short handle, steel-head sledge hammer. Timidity will not get the old cylinder out so do not be afraid to HIT the mechanism. After 20 some-odd years, they are going to have to HIT the mechanism.

That’s just what I needed to know. And he wasn’t kidding about the weight of the hammer. I didn’t have a sledgehammer handy, and a regular hammer didn’t work, so I improvised:

And that did the trick. I HIT the cylinder a bunch of times, it popped out, I popped the new one in, and I’m back in business.

Thank you, Workplace Designs, for inventing the best ergonomic keyboard ever. Thank you, Jonah, for helping me bring it back to life. Thank you, World Wide Web, for enabling Bodybilt to share a video showing exactly how hard to HIT when replacing a hydraulic lifter. And thank you, Captain Kirk Floating Arms Keyboard Chair, for being with me all these years. I’m sorry I threatened to abandon you. It’ll never happen again.

A web of agreements and disagreements

Recently I migrated a wiki from one platform to another. It was complicated in a couple of ways. The first wrinkle was hosting. The old wiki ran on a Linux-based virtual machine and the new one runs on GitHub. The second wrinkle was markup. MediaWiki uses one flavor of lightweight markup and GitHub uses (a variant of) another.

The process was confusing even for me. But logistics aside, it raised questions about standards, interoperability, and the challenge of working in an evolving digital realm.

The wiki in question is the documentation for the Thali project which I’ve mentioned in a number of posts. The project is mainly documented by Thali’s creator, Yaron Goland. Why use a wiki? Thali is a fast-moving project. Yaron has a blog, and he could use that to document Thali. But while blogs are agile publishing tools, they don’t shine when it comes to restructuring and spontaneous editing. Those are the great strengths of wikis.

Thali was originally hosted on CodePlex. Since that service doesn’t offer a built-in wiki, Yaron augmented it with a Bitnami MediaWiki image hosted in Azure. This was a DIY setup, not a managed service, which meant that when the Heartbleed Bug showed up he had to patch it himself, and he would have been on the hook again when Shellshock arrived. Life’s too short for that.

Also, with the project’s source code hosted on GitHub, it made sense to explore hosting the documentation there too. It’s simpler for readers of the code and the documentation to find everything in one place. And it’s simpler for writers of both forms of text to put everything in that place. There’s just one service to authenticate to, and tools for version control and issue tracking can be used for both forms of text.

I started by moving a few experimental pages from the MediaWiki to the GitHub wiki. Were there tools that could automate the translation? Maybe, but I’ve learned to walk before attempting to run. Converting a few pages by hand gave me an appreciation of the differences between the two markup languages. Each is a de facto standard with many derived variations. GitHub, for example, uses a variant of Markdown called GitHub Flavored Markdown (GFM). Tools that read and write “standard” Markdown don’t properly read and write GFM.

If I were teaching a course in advanced web literacy, I’d pose the following homework exercise:

You’re required to migrate a wiki from MediaWiki to GitHub. Possible strategies include:

  1. Use a tool that does the translation automatically.
  2. Create that tool if it doesn’t exist.
  3. Do the job manually.

Evaluate these options.

Of course there are assumptions buried in the problem statement. A web-literate student should first ask: “Why? Are we just chasing a fad? What problems will this migration solve? What problems will it create?”

Assuming we agree it makes sense, I’d like to see responses that:

  • Enumerate available translators.
  • Cite credible evaluations of them (and explain why they’re credible).
  • Analyze the source and target data to find out which markup features might or might not be supported by the available translators.
  • Consider the translators’ implementation costs. Are they local or cloud-based? If local, how much infrastructure must be installed, and how complex are its dependencies? If cloud-based, how will bulk operations work?
  • If no translators emerge, make a back-of-the-envelope estimate of the distance between the two formats and the effort required to create software to map between them.
  • Evaluate the time and effort required to research, acquire, and use an automated tool, vis-à-vis that required to do the job manually.
  • Estimate the break-even point at which a reusable automated tool pays off.
  • Recognize that there really isn’t a manual option. Doing the job “by hand” in a text editor means using a tool that enables a degree of automation.

In my case that last point proved salient. The tools landscape looked messy, there were only a few dozen pages to move over, the distance between the two markups wasn’t great, it was (for me) a one-time thing, and I wanted to make an editorial pass through the stuff anyway. So I wound up using a text editor. To bridge one gap between the two formats — different syntaxes for hyperlinks — I recorded a macro to convert one to the other.

To achieve this result in MediaWiki:

all about frogs

You type this:

[[Frog|all about frogs]]

In a GitHub wiki it’s this:

[Frog](all about frogs)

So much writing nowadays happens in browsers, never mind word processors, never mind old-school text editors, that it’s worth pointing out those old dogs can do some cool tricks. I won’t even mention which editor I use because people get religious about this stuff. Suffice it to say that it’s one of a class of tools that make it easy to record, and then play back, a sequence of actions like this:

  • Search for [[
  • Put the cursor on the first [
  • Delete it
  • Search for |
  • Change it to ]
  • Type (
  • Search for ]]
  • Change it to )

You might find an automated translator that encodes that same recipe. You might be able to write code to implement it. But for a large class of textual transformations like this you can most certainly use an editor that records and runs macros. Given that the web is still a largely textual medium, where transformations like this one are often needed, it’s a shame that macros are a forgotten art. I often use them to prototype recipes that I’ll then translate into code. But sometimes, as in this case, they’re all the code I need. That’s something I’d want students of web literacy to realize.
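For the record, here’s what that macro’s recipe looks like when translated into code: a minimal sketch in Python that handles only the hyperlink pattern shown above.

# A minimal sketch of the same recipe as the editor macro: rewrite
# [[Frog|all about frogs]] as [Frog](all about frogs).
import re

def convert_links(wikitext):
    return re.sub(r'\[\[([^|\]]+)\|([^\]]+)\]\]', r'[\1](\2)', wikitext)

print(convert_links('See [[Frog|all about frogs]] for details.'))
# -> 'See [Frog](all about frogs) for details.'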

What would really make my day, though, would be for one of those students to say:

“Hey, wait a sec. This doesn’t make sense. There is no such thing as GitHub Flavored HTML. Why is there GitHub Flavored Markdown?”

Or Standard Flavored Markdown, which quickly became Common Markdown, then CommonMark. How de facto standards become de jure standards, or don’t, is a fascinating subject. The web works as well as it does because we mostly agree on a set of tools and practices. But it evolves when we disagree, try different approaches, and test them against one another in a marketplace of ideas. Citizens of a web-literate planet should appreciate both the agreements and the disagreements.

A cost-effective way to winterize windows

D’Arcy Norman asks:

If there’s a better way to winterize windows than just taping plastic to the frame, I’d love to hear about it.

Indeed. In New Hampshire, when fuel prices first skyrocketed, we did that for a couple of years. It’s an incredibly effective way to stop the leaks that suck precious warm air out of your home. But it’s a royal pain to install the plastic sheeting every fall, and when you remove it in the spring you inevitably pull paint chips off your window frames.

The solution is interior storms, a really nice hack I learned about from John Leeke. He’s a restorer of historic homes, and — what brought him to my attention — a narrator of that work. Interior storms are just removable frames, surrounded by gaskets, to which you attach your plastic sheets permanently. Once made they pop into your window frames in a few seconds every fall, and pop out as easily in the spring.

Achieving that result is, however, not trivial; at least it wasn’t for me. My first generation of interior storms, based on John’s instructions, was suboptimal. The round backer rod material he recommended had to be split lengthwise to form a D profile. I wound up making a jig to do that by drilling a backer-rod-diameter hole in a piece of wood, splitting it in half, embedding a razor blade on an angle, and joining the pieces. Great idea in principle, but in practice it was still hard to draw hundreds of feet of backer rod through the jig and achieve a clean lengthwise split. It was also hard to apply hundreds of feet of double-sided tape to the split material.

The backer rod I used also turned out not to be sufficiently compressible. The critical thing with interior storms is a tight fit. When you tape plastic to your windows you’re guaranteed to get that result, which is why it’s so effective. Interior storms need to press into their surrounding window frames really snugly to achieve the same effect. Inconsistencies in the width of my split backer rod, and the relative incompressibility of the material, resulted in storms that didn’t always fit as snugly as they should have.

Another problem with the first-generation storms was flimsy frames. I ripped pine boards lengthwise to create inch-wide frame members. They really should have been inch-and-a-half.

So last year I rebooted and created second-generation storms. I started with inch-and-a-half wide frame members. Then I ditched the backer rod and went with pretaped rubber gasket. It’s a much more expensive material but it obviates the need for do-it-yourself taping and has the compressibility I was looking for.

Yet another problem with the first-gen storms was that I made all the frames from the same template. The windows were nominally all the same dimensions, but it turns out there were minor variations and those matter when you really need to achieve a snug fit.

So the second time around I customized each frame to its window. Yes, it was tedious. But for our house it was necessary, and it might be for many old houses. Here’s the algorithm I came up with:

1. Cut short dummy pieces from spare inch-and-a-half-wide frame members.

2. Attach gasket to one side of each dummy piece.

3. Make all long and short frame members a bit longer than needed.

4. For each frame member:

– Place dummy pieces on either end

– Place frame member between dummy pieces

– Compress the gasket at one end

– Mark frame member at the other end (accounting for gasket compression on that end)

– Cut frame member

– Label frame member (“living room west wall”)

Once the frame is made, you attach the plastic sheet in the usual way. I used Warp Brothers SK-38 kits which come with double-stick tape. You tape around the edge of the frame, lay down the plastic, smooth it by hand, press it down, trim the edges, and use a blow dryer or heat gun to shrink it tight.

This is the kind of job I hate doing. You spend lots of time climbing the learning curve, and then once you’re done you never reuse the knowledge you’ve painfully acquired. Since the method is so effective, though, I’ll toss out an idea that’s been percolating for a while.

Consider an older house in a northern climate, with older windows and storms, and adequate attic insulation. The walls may or may not be adequately insulated, but the first line of defense is to tighten up those windows. It’s expensive to replace them, and the replacements are going to be vinyl that will ruin the aesthetics of the house and won’t age well. It’s even more expensive to hire a restorer to rebuild the old windows.

Let’s say that interior storms deliver 80% of the benefit of replacement windows for 10% of the cost. Deploying this solution to all the eligible houses in a region is arguably the most cost-effective way to tighten up that population of houses. But the method I’ve described here won’t scale. It entails more effort, and more hassle, than most folks will be willing to put up with.

How could we scale out deployment of interior storms across a whole community? I’d love to see high schools take on the challenge. Set up a workshop for making interior storms. Market it as a makerspace. No, it’s not 3D printing, but low-tech interior storms deployed community-wide will mean way more to the community than anything a MakerBot can print. Also, turn the operation into a summer jobs program. Teach kids how to run it like a business and pay themselves better than minimum wage.

Since I am now living in Santa Rosa, winterization of windows is no longer a big concern. But I’ve been meaning to document what I learned and did back in New Hampshire. And I would really like to see John Leeke’s idea applied at scale in places where it’s needed. So I hope that the new owner of the house we sold in Keene will be successful with this method, that D’Arcy Norman and others will too, and that communities will figure out how to make it happen at scale.

3D Elastic Storage, part 3: Five stars to U-Pack!

It’s been a busy month. We sold our house in Keene, NH, drove across the country, and rented a house in Santa Rosa, CA. A move like that entails plenty of physical, emotional, and financial stress. The last thing you need is trouble with a fraudulent mover which, sadly, is so common that http://www.movingscam.com/ needs to exist. Luann spent a lot of time exploring the site and Jeff Walker, its founder, wrote her a couple of really helpful and supportive emails. When we realized that a full-service move wasn’t feasible in our case, Jeff agreed that ABF U-Pack — the do-it-yourself company I’d identified as our only viable option — was a good choice.

I’ve chronicled our experience with U-Pack before and during the move. Now that it’s done, I’m wildly positive about the service. Every aspect of it has been thoughtfully and intelligently designed.

The non-standard size and shape of U-Pack’s ReloCube is, at first, surprising. It’s 6’3″ x 7′ x 8’4″, and the long dimension is the height. As Marc Levinson’s The Box wonderfully explains, standardization of shipping containers created the original Internet of Things: a packet-switched network of 20′ and 40′ boxes. Those shapes don’t meet U-Pack’s requirements for granular storage, transport on flatbed trailers, and delivery to curbside parking spaces. But while the ReloCube’s dimensions are non-standard, the ReloCube system provides the key benefits of a packet-switched network: variable capacity, store-and-forward delivery. In our case, we’ve now taken delivery of the two cubes that held our household stuff. The two that hold Luann’s studio remain in storage until we figure out where that stuff will land. Smaller containers enable that crucial flexibility.

Smaller containers are also easier to load. Here’s a picture of a ReloCube interior:

All the surfaces are nicely smooth. And there are plenty of slots for hooking in straps. But I wound up using very few straps because I was able to pack the cubes tightly. It’s easier to do that in a smaller space.

I also like how the doors shut flush against the edge of the cube:

When you lever the doors shut on a tightly-packed container they compress and help stabilize the load. That wouldn’t be a significant factor with an 8x8x16 PODS container but with the smaller ReloCube it can be.

On the receiving end, I wondered how the cubes would be positioned. You’d want them snug to the curb, but then how could the doors open toward the house? The video linked to this picture documents the elegant solution:

The forklift driver placed the cube’s edge on top of the curb. Not shown in the video is the final tap with the forklift that aligned the cube perfectly. These folks really pay attention to details!

I can’t say enough good things about our U-Pack experience. No conventional service offered the flexibility we needed so none was an option, but we did solicit estimates early on and they were astronomical: three to four times the $6300 we paid U-Pack to move four containers across the country and make them available to us on demand. (We’ll also now pay $100 per-month per-container for the two studio containers until we retrieve them.) There was very little paperwork involved. Every U-Pack employee I talked to was friendly and helpful. So I’m giving the service a five-star rating.

For me the experience was an echo of a time, fifty years ago, when our family moved from suburban Philadelphia to New Delhi. Here are some pictures of the “sea trunk” that was delivered, by bullock cart, to 102 Jorbagh.

Now the delivery vehicle is a flatbed trailer:

But the resemblance between our New Delhi sea trunk and our ReloCubes is, I think, not coincidental.

Actually the sea trunk trumped the ReloCube in one way. When it was delivered back home my dad arranged to keep it, and he turned it into a playhouse in the backyard:

3D Elastic Storage, part 2

Our U-Pack containers arrived on Thursday, August 21. We loaded them Friday through Monday, they departed on Wednesday, August 27. If your loading phase crosses a weekend you get 5 days to load. That’s enough time to consolidate and reconsolidate as you fill the cubes, and to make final decisions about what to take or toss as you go along.

I’ve always enjoyed the challenge of packing things into containers. It’s kind of like building a stone wall. You wind up with oddly-shaped spaces to fill, and you look for oddly-shaped things that will fill them.

In our case we had more odd shapes than normal. Luann collects, among other things, antique wooden boxes that she uses to frame her sculptures and jewelry. On the first iteration I nested them into one another and consolidated them into standard 6 cu ft boxes. The advantage of standard-size boxes is that you can pack them tightly into a container. But if there’s a lot of air inside those boxes you lose many precious cubic feet.

So we unbundled the boxes and began using them, instead of standard small (1.5 cu ft) or medium (3 cu ft) cardboard boxes, for all the loose stuff that wasn’t packed tightly in the drawers of Luann’s various cabinets of wonders. As we filled the wooden boxes we wrapped them with mover’s wrap. That stuff was incredibly useful! It comes in 20″ by 1000′ rolls, it’s cheap, and it’s wonderfully designed for the purpose. The plastic doesn’t shrink-wrap but it’s tough and sticks to itself. We must have wrapped more than a hundred boxes. As a bonus you can see into the boxes so there’s less need to label the contents.

Packing boxes of different sizes and shapes is like a game of Tetris, but in 3D and with irregular shapes. You pack as tightly as you can, but there will be gaps. Fortunately Luann’s studio offered another useful resource: collections of yarn and fabric. These were originally packed in plastic totes. But totes aren’t space-efficient so we tossed them, redistributed the contents into plastic bags of various shapes and sizes, and evacuated as much air from the bags as we could. The result was a supply of packing material to fill spaces and cushion the load. For the studio containers, in particular, we wound up using very few cardboard boxes. An unanticipated benefit of the wooden boxes: structural support. When you’re stacking into an 8-foot-high space cardboard boxes tend to crush, wooden ones don’t.

In the end we used all four of the containers I’d reserved. Containers #1 and #2 are now storing Luann’s studio, #3 and #4 are storing our household stuff. If we’d been really brutal about excluding furniture we could have used only three and returned the fourth unused at no charge. I liked the idea of starting from scratch with nothing but a table and the bed we bought last year. But in the end a sofa, some chairs, and a few other items came along for the ride.

The household containers held no surprises for U-Pack. But the studio containers, especially #1, raised an eyebrow. There are some heavy items in that load. So heavy that I wound up hiring Dave Gillerlain and his team at Affordable Movers to help me load containers #1 and #2. What weighs so much? Among other things, African trade beads. Luann’s been collecting them for a long time, and she put Keene on the map of places that traders visit. A couple of times a year, Ibrahim Kabba would show up in his van and stage a bead show in our house. The van always rode low, and Kabba wore a back brace to carry in his wares. A cabinet packed full of those beads is a surprisingly dense and heavy object.

Here’s container #1 nearly full:

I’d wanted Dave to distribute the heaviest cabinets between containers #1 and #2, but things went quickly and by the time we got to this point I realized #1 was going to be a beast to lift. A useful refinement for U-Pack would be to embed a scale in each container. That feedback would have helped us balance the studio load between #1 and #2.

We’d left the house by Wednesday morning when the truck showed up to fetch the containers. But I dropped by for a final check, just in time for the pickup. It was the same portly middle-aged guy who had delivered the empties. One person can do the job, but that person is heavily augmented with some serious exoskeletons. This time, I was relieved to see, the forklift was much beefier than the one that had unloaded the empties. Still, I was worried about #1. Sure enough, he’d gotten #2, #3, #4 loaded, had struggled with #1, and was about to reposition the forklift for a second try. “What’s in that one?” I explained as best I could, and asked if it’d be OK. “Yep, just need to come at it from another angle.” He was cheerful, like every U-Pack person I’ve talked to, but despite his optimism I couldn’t bear to watch and drove away. Nobody called from U-Pack, and an hour later the truck and all four cubes were gone.

We left Keene a week after the closing, on September 3, drove across the country visiting friends and family along the way, arrived in Santa Rosa on the evening of the 13th, and rented our new home yesterday, the 15th. It’ll be another week before we can move in, but it’s worth the wait. The place we’ve rented has enough space to unload everything and create a basic working studio for Luann. So we’ll be able to retrieve all four containers and end all the storage charges. But that’s an unexpectedly good outcome. This is the North Bay, space is at a premium, and the rental market is tight as a drum. We were prepared to rent a small apartment, retrieve only the two household containers, then later rent a separate studio and retrieve the two studio containers. Shipping a load to two unknown destinations, for retrieval on two unknown dates, with pay-as-you-use storage for each part of the load, was a tricky set of requirements. U-Pack has designed a really smart system that can, perhaps uniquely, meet those requirements.

Not the link Zillow was looking for

In For sale by owner I talked about the online tools that helped us sell our house. I gave Zillow high marks. Even though our buyers didn’t find us on Zillow — in the end, it was a good old-fashioned drive-by — the service was useful for the reasons I mentioned. But now I’m going to have to subtract some points.

A few days ago I received this email, misleadingly titled Zillow inquiry:

Hi Jon,

I work for Zillow, the online real estate network. When looking for groups that have cited our brand, I came across your great blog post discussing your marketing strategy when selling you (sic) home and noticed you mentioned Zillow. http://blog.jonudell.net/2014/08/05/for-sale-by-owner/

Would you consider linking the word ‘Zillow’ in the third paragraph within the text as a resource to your users? Here’s the URL to the Zillow City Page http://www.zillow.com/keene-nh/

We really appreciate your coverage and thank you for considering the link on your page. Feel free to use me as a point of contact here if you need any data or content in the future, and if nothing else, I’m just glad to have had the chance to connect!

If this is not the correct contact would you please forward it to someone that can be of any assistance, thanks.

Regards,

NAME WITHHELD

I’m withholding the name because the guy was just doing his job. But shame on Zillow for making that his job. It got worse. A few days later:

Hey Jon,

Just wanted to follow up to see if you can help with adding the link. Let me know, thanks!

Regards,

NAME WITHHELD

Where to start? First, this is my blog. I choose whether to link the word ‘Zillow’ in paragraph 3, and if so, where to point that link. And now, because you had the gall to tell me how to do that, and then bug me about it, I’m going to point here.

Second, people who need a link to Zillow in order to find Zillow, if such people exist, are not your customers.

Third, consider who you’re dealing with. Zillow’s users are by definition going through a seriously stressful phase of life. We are likely to be emotionally and physically exhausted by the process of buying and/or selling a home, and by preparing to move. We wake up in the middle of the night obsessing about our checklists. You presume to add to our lists? Disrespectful. Bad form. Don’t.

3D Elastic Storage

If all goes according to plan we’ll close the sale of our house on August 27th and begin meandering across the country, visiting friends and relatives en route to Santa Rosa. We’ve thrown all the cards up into the air. When we arrive we’ll look for an apartment in which to live for a year while we scope out the region. And we’ll look for a studio for Luann. Conventional movers aren’t set up for what we need to do: ship to storage, then retrieve from storage to several locations at different times. We got a few estimates just to see; they were astronomical, so a PODS-style solution was clearly in order.

I spoke to a PODS representative who was so rude that I immediately began checking out the competition. United/Mayflower offers the kind of service we need, but they only use one container size, 8′ x 8′ x 16′. One wouldn’t be enough, two would be overkill. Also they don’t ship to Santa Rosa, so that was the end of that. I did appreciate their comparison between PODS and United/Mayflower containers. Both are nominally 8 x 8 x 16 but they’re right about those interior beams. As I learned when helping friends load a PODS container, they really get in the way.

Next I talked to U-Pack and, unless the response to this blog post reveals an alternative I haven’t considered, we’re going with them. Their elastic storage service had me at hello. Like United/Mayflower they only offer one size of container. But their “ReloCubes” are smaller: 6′3″ L x 7′ W x 8′4″ H. I like that for a couple of reasons. It gives us more flexibility to divide the load into separate deliveries. And I think the smaller containers will be easier to pack well.

How many will we need? There’s no penalty for overestimating, but these are big boxes and we don’t want more in our driveway than necessary. So I want to measure the volume of our load as well as I can. There’s an online estimator but it’s geared toward a conventional household. In our case, more than half the load is Luann’s studio and it’s, well, take the tour and see for yourself. It’s more than an artist’s workspace; it’s really a museum of wonders. When she opens the studio to visitors people spend hours wandering around opening drawers and looking at collections of beads, rubber stamps, fabric, yarn, antique boxes, old tools, globes, maps, dolls. We can’t take everything but these collections are central to what Luann does and we need to recreate them as best we can.

The most generic feature of the U-Pack estimator is the Boxes section. You can enter a number for each of the four standard sizes: small (1.5 cubic feet), medium (3), large (4.5), and extra large (6). I picked up a few of each at Home Depot, assembled them, and began using them as cubic measuring sticks.

A stack of printer’s type trays = 1 small

Six antique drawers = 1 medium

A lot of stuff like this will just get wrapped and taped. It doesn’t need to be in a box, we just need to know how many box-equivalents of space it’ll consume.

Three vintage suitcases = 1 medium

Luann has a whole collection of vintage suitcases. Now that I’ve accounted for their volume, we can fill ’em up.

ProPanels = 2 extra large

These boxes hold the ProPanels that form the walls of Luann’s booth at shows.

Booth floor = 1.5 extra large

These interlock to form the floor of her booth. Now that we’ve accounted for the volume, we can use them in various configurations to fill space as we pack.

Two shipping boxes = 2 extra large

These were used when she traveled to shows in Philadelphia and Baltimore. They’re perfect for shipping her collection of, wait for it, beaver-chewed sticks.

Box of antlers = 1 extra large

Admit it, this is cool.

I like this method so much that I’m now using it to estimate the household part of the move. It’s far less complex because we’re taking very little. The stuff in the studio is unique and special. The stuff in the house, for the most part, isn’t. We like funky second-hand sofas and chairs, but it doesn’t make sense to transport bulky items like that so we’re unloading most of them and Luann can enjoy reacquiring on the other end.

The U-Pack estimator lists all kinds of household items but in a generic way. How many cubic feet does your bed or chair or small dresser really occupy? I’m measuring in terms of box equivalents. Tonight I’ll compile the data; tomorrow I’ll call U-Pack to reserve containers; when they arrive we’ll find out how well my method worked.
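The compilation itself is trivial arithmetic. Here’s a minimal sketch, in Python, that totals the box-equivalents listed above and compares the sum to a ReloCube’s nominal interior volume, computed naively from the stated dimensions and ignoring packing loss; the inventory shown is just the handful of items pictured, not our full list:

    # Box-equivalent arithmetic: total cubic feet, then a naive cube count.
    BOX_SIZES = {"small": 1.5, "medium": 3.0, "large": 4.5, "xlarge": 6.0}

    # Box-equivalents per item, from the measurements above (partial list).
    inventory = [
        ("stack of printer's type trays", "small",  1),
        ("six antique drawers",           "medium", 1),
        ("three vintage suitcases",       "medium", 1),
        ("ProPanels",                     "xlarge", 2),
        ("booth floor",                   "xlarge", 1.5),
        ("two shipping boxes",            "xlarge", 2),
        ("box of antlers",                "xlarge", 1),
    ]

    total_cuft = sum(BOX_SIZES[size] * count for _, size, count in inventory)

    # Nominal ReloCube interior from the stated 6'3" x 7' x 8'4",
    # ignoring packing loss and wall thickness.
    cube_cuft = (6 + 3 / 12) * 7 * (8 + 4 / 12)

    print(f"{total_cuft:.1f} cu ft, about {total_cuft / cube_cuft:.2f} cubes")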

For sale by owner

Barring the unforeseen we’ll close the sale of our house on August 27. When we sold our first house 14 years ago we used a realtor. This time around we used the web. Here were the three pillars of our online marketing strategy:

A website. I made it with WordPress and packed it with lots of information. In addition to photos, I included floor plans (made by an architect who considered buying the house for himself), an article about our European wood boiler, a page about the historic flag that belongs to our house, and a page about the neighborhood.

Zillow. It’s a great marketing tool, but also a great research tool. I was able to compare our listing to other listings, then tweak ours to differentiate it in the best possible ways. And when I wondered, for example, how significant a factor our barn might be, I was able to search Zillow to find out. Of 150 homes for sale in Keene, only 5 included the word “barn” in their descriptions.

Facebook. We’re moving from a big house to an apartment, so a lot of stuff has to go. Luann organized a series of in-home sales for which she staged the living room, the dining room, and an upstairs bedroom. She artfully arranged and decorated these rooms and put a price tag on everything in them. Facebook was the best way to advertise these sales. And of course everyone who came got a tour of the house.

The in-home sales began in the spring. On June 25 we listed on Zillow. On July 26 we got an offer that we wound up accepting. There are too many variables in this equation to draw sweeping conclusions. And too few data points: we’ve only owned two homes. But for me the biggest difference between a realtor sale and an owner sale is that realtors don’t want you around for showings. This now seems crazy to me. We love our home, we’re proud to show it, and you can’t outsource that love and pride.

The ebb and flow of curbside free stuff

We’re selling our house and unloading a ton of stuff. After many yard sales and many Craigslist postings, there’s still plenty to get rid of. In our town there’s a strong tradition of curbside giveaway. You just put stuff out on the treelawn and it vanishes. This animated GIF documents that process over a period of three days. (You can click it to enlarge the view.)

The kickboxing bag only lasted a few minutes. The kitty litter bins took a few days but eventually they went too. Fun!

Tech’s inequality paradox

Travelers leaving from the San Francisco airport on morning flights know the drill: you stay over the night before at a motel on El Camino Real in San Bruno. Last week I booked the Super 8, which turned out to be perfectly serviceable. As a bonus, it’s right next door to Don Pico’s Mexican Bistro and Cevicheria, which is unlike anything else you’ll find on motel row:

The back bar in the new dining room is a 1925 mahogany Brunswick from the Cliff House in San Francisco; the large bullfight mural is an original painting by Roberto Leroy Smith; large mirrors came from Harry Denton’s; the chandeliers are of Austrian crystal, from the World Trade Center at the San Francisco Ferry Building; the trophy fish are from Bing Crosby’s private collection; the large elephant, floral, and deer paintings are from the movie Citizen Kane with Orson Welles; the sombreros are 1920s antiques from a Mexican hat collection acquired from Universal Studios; and the stylized modern art paintings are by California painter Rudy Hess. – http://www.donpicosbistro.com/history/

It was too late for dinner but I sat at the mahogany bar, had a drink and a snack, and talked with Angel, the bartender. He’s a veteran of San Francisco’s culture war. Born and raised in the Mission District, he was driven out seven years ago. At most he could afford a studio apartment and that was no place to raise a young child.

Angel didn’t express the anger that you can now see bubbling to the surface when you walk the streets of San Francisco. Just the sadness of the dispossessed. We talked about many things. At one point he answered a text on his iPhone and it suddenly hit me. That’s the same iPhone that San Francisco’s tech elite carry.

For most things you can buy, there’s almost no limit to what you can spend. A tech billionaire in San Francisco can own a home or a car that costs hundreds of times what Angel can pay for a home or a car. But while it’s possible to buy a gold-plated and diamond-encrusted iPhone, I’ve never seen one. The tech that’s at the heart of San Francisco’s crisis of inequality is a commodity, not a luxury good. It’s a great equalizer. Everybody has a smartphone, everybody has access to the services it provides. But if you’re Angel, you can’t use that phone in the neighborhood you grew up in.

Business registration as a framework for local data

In Crowdsourcing local data the right way I envisioned a different way for businesses to register with state governments. In this model, state governments invite and encourage businesses to be the authoritative sources for their own data, and to announce URLs at which that data is published in standard formats. Instead of plugging data into the state’s website, a business would transmit a URL. The state would sync the data at that URL, assign it a version number, and verify its copy (tethered to the URL) as an approved version. The state would also certify the URL as a source of additional data not required by the state but available from the business at that URL.
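To make that concrete, here’s a minimal sketch of the state-side sync. The URL, field names, and storage layout are my illustrative assumptions, not any real registration system: fetch the record a business publishes at its announced URL, stamp the state’s copy with a version, and separate the facts the state verifies from the additional facts it merely certifies as coming from that URL.

    # Hypothetical sketch of a state-side sync; URL, field names, and layout
    # are illustrative assumptions, not a real registration system.
    import datetime
    import hashlib
    import json
    from urllib.request import urlopen

    STATE_VERIFIED_FIELDS = {"name", "address"}  # facts the state attests to

    def sync_registration(url):
        with urlopen(url) as response:
            record = json.load(response)
        # Version the state's copy of the data found at the URL.
        version = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()[:12]
        return {
            "source_url": url,
            "synced_at": datetime.datetime.utcnow().isoformat() + "Z",
            "version": version,
            "verified": {k: record[k] for k in STATE_VERIFIED_FIELDS if k in record},
            "additional": {k: v for k, v in record.items()
                           if k not in STATE_VERIFIED_FIELDS},
        }

    # e.g. sync_registration("https://example-business.com/registration.json")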

For businesses with calendars of public events, one kind of additional data would be those calendars. A while back I met with Steve Cook, deputy commissioner of Vermont’s department of tourism and marketing, to show him the Elm City “web of events” model. We discussed the central challenge: awakening event promoters to the possibility of using their own calendars as feeds that would flow directly into the statewide calendar. How do you light up those feeds? Steve got it. He pointed to another section of the building. “Those guys run the business registration site,” he said. “On the registration form, we already ask for the URL of a business’s home page. How hard would it be to also ask for a calendar URL if they have one?”

Exactly. And by asking for that URL, the state awakens the business to a possibility — authoritative self-publishing of data — that wouldn’t otherwise have occurred to it. This hasn’t yet happened in Vermont. But if Carl Malamud ever becomes Secretary of State in California I’ll bet it will happen there!

Crowdsourcing local data the right way

In How Google Map Hackers Can Destroy a Business at Will, Wired’s Kevin Poulsen sympathizes with local businesses trying to represent themselves online.

Maps are dotted with thousands of spam business listings for nonexistent locksmiths and plumbers. Legitimate businesses sometimes see their listings hijacked by competitors or cloned into a duplicate with a different phone number or website.

These attacks happen because Google Maps is, at its heart, a massive crowdsourcing project, a shared conception of the world that skilled practitioners can bend and reshape in small ways using tools like Google’s Mapmaker or Google Places for Business.

No, these attacks happen because Google Maps isn’t based on the right kind of crowdsourcing. The Wired story continues:

Google seeds its business listings from generally reliable commercial mailing list databases, including infoUSA and Axciom.

Let’s back up a step. Where does infoUSA get its data? From sources like new business filings and company websites, and follow-up calls to verify the data.

Those calls shouldn’t be necessary. The source of truth should be an individual business owner who signs a state registration form and publishes a website. Instead, intermediaries govern what the web knows about that business. If that data were crowdsourced in the right way, it would flow directly from the business owner.

Here’s how that could happen. A state’s process for business registration asks for a URL. If data available at that URL conforms to an agreed-upon format, it populates the registration form. If the registration is approved, the state endorses that URL as the source of truth for basic facts about the business.

Of course the business might provide more information than the state can verify. That’s OK. The state’s website might only record and assure the name and address of the business, plus the URL at which additional facts — not verifiable by the state — are provided by the business owner. Those facts would include the hours of operation. The business owner is the source of truth for those facts. Changes made at the source ripple through the system.
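For illustration only, the record a business publishes at its URL could be as simple as this; the field names loosely echo schema.org and are my invention, not a required format. The state would verify and endorse the name and address, while hours of operation stay under the owner’s direct control:

    # A hypothetical self-published business record; field names loosely
    # follow schema.org but are illustrative, not a required format.
    business = {
        "name": "Example Hardware",
        "address": "123 Main St, Keene, NH 03431",
        "url": "https://example-hardware.com",
        "telephone": "+1-603-555-0100",
        # Facts the state can't verify but the owner controls directly:
        "openingHours": ["Mo-Fr 08:00-18:00", "Sa 09:00-17:00"],
    }

    # Fields that would auto-populate the state's registration form:
    registration_form = {k: business[k] for k in ("name", "address", "url")}

A change to openingHours at the source would ripple out to anyone syncing from that URL, with no intermediary in the loop.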

The problem isn’t that information about local businesses is crowdsourced. We’re just doing it wrong.

Things in the era of dematerialization

As we clear out the house in order to move west, we’re processing a vast accumulation of things. This morning I hauled another dozen boxes of books from the attic, nearly all of which we’ll donate to the library. Why did I haul them up there in the first place? We brought them from our previous house, fourteen years ago. I could have spared myself a bunch of trips up and down the stairs by taking them directly to the library back then. But in 2000 we were only in the dawn of the era of dematerialization. You couldn’t count on being able to find a book online, search inside it, or have a used copy shipped to you in a couple of days for a couple of dollars.

Now I am both shocked and liberated to realize how few things matter to me. I joke that all I really need is my laptop, my bicycle, and my guitar, but in truth there isn’t much more. For Luann, though, it’s very different. Her cabinets of wonders are essential to who she is and what she does. So they will have to be a logistical priority.

In the age of dematerialization, some things will matter more than ever. Things that aren’t data. Things that are unique. Things made by hand. Things that were touched by other people, in other places, at other times. RadioLab’s podcast about things is a beautiful collection of stories that will help you think about what matters and why, or what doesn’t and why not.

Trails near me

I stayed this week at the Embassy Suites in Bellevue, Washington [1, 2]. Normally when visiting Microsoft I’m closer to campus, but the usual places were booked so I landed here. I don’t recommend the place, by the way, and not because of the door fiasco; that could have happened in any modern hotel. It’s the Hyatt-esque atrium filled with fake boulders and plastic plants that creeps me out. Also the location near the junction of 156th and route 90. Places like this are made for cars, and I want to be able to hike and run away from traffic.

A web search turned up no evidence of running trails nearby. So I went down to the gym only to find people waiting in line for the treadmills. Really? It’s depressing enough to run on a treadmill, I’m not going to queue for the privilege. So I headed out, figuring that a run along busy streets is better than no run at all.

Not far from the hotel, on 160th, I found myself in a Boeing industrial park alongside a line of arriving cars. As I jogged past the guard booth a guy leaped out at me and asked for my badge. “I’m just out for a run,” I said. “This is private property,” he said, and pointed to a nearby field. “But I think there’s a trail over there.” I crossed the field and entered part of the Bellevue trail network. The section I ran was paved with gravel, with signs identifying landmarks, destinations, and distances. I ran for 45 minutes, exited into the parking lot of a Subaru dealership near my hotel, and congratulated myself on a nice discovery.

Later I went back to the web to learn more about the trails I’d run. And found nothing that would have enabled a person waiting in line for a treadmill at the Embassy Suites to know that, within a stone’s throw, there were several points of access to a magnificent trail system. The City of Bellevue lists trails alphabetically, but the name of the nearby Robinswood Park Trail had meant nothing to me until I found it myself. Nor did I find anything at the various trails and exercise sites that I checked — laboriously, one by one, because each is its own silo.

I knew exactly what I wanted: running trails near me. That the web didn’t help me find them is, admittedly, a first world problem. What’s more, I like exploring new places on foot and discovering things for myself. But still, the web ought to have enabled that discovery. Why didn’t it, and how could it?

The trails I found have, of course, been walked and hiked and cycled countless times by people who carry devices in their pockets that can record and publish GPS breadcrumbs. Some will have actually done that, but usually by way of an app, like Runtastic, that pumps the data into a siloed social network. You can get the data back and publish it yourself, but that’s not the path of least resistance. And where would you publish to?

Here’s a Thali thought experiment. I tell my phone that I want to capture GPS breadcrumbs whenever it detects that I’m moving at a walking or running pace along a path that doesn’t correspond to a mapped road and isn’t a path it’s seen before. The data lands in my phone’s local Thali database. When I’m done, the data just sits there. If there’s nothing notable about the excursion, my retention policy deletes the data after a couple of days.
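Here’s a rough sketch of that capture-and-retention logic in plain Python, not any real Thali API; the pace band, the road test, and the two-day retention window are all assumptions of mine:

    # Illustrative capture/retention logic; not the Thali API.
    import time

    WALK_OR_RUN_MPS = (0.5, 5.0)        # assumed foot-pace band, meters/second
    RETENTION_SECONDS = 2 * 24 * 3600   # drop unshared trails after ~2 days

    local_db = []  # stand-in for the phone's local document store

    def maybe_record(point, speed_mps, on_mapped_road, seen_before):
        """Keep a GPS breadcrumb only for new, off-road, foot-pace movement."""
        if (WALK_OR_RUN_MPS[0] <= speed_mps <= WALK_OR_RUN_MPS[1]
                and not on_mapped_road and not seen_before):
            local_db.append({"point": point, "t": time.time(), "shared": False})

    def apply_retention():
        """Delete unshared breadcrumbs once the retention window has passed."""
        cutoff = time.time() - RETENTION_SECONDS
        local_db[:] = [d for d in local_db if d["shared"] or d["t"] > cutoff]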

But maybe I want to contribute it to the commons, so that somebody else stuck waiting in line for a treadmill can know about it. In that case I tell my phone to share the data. Which doesn’t mean publish it to this or that social network silo. As Gary McGraw once memorably said: “I’m already a member of a social network. It’s called the Internet.”

Instead I publish the data to my personal cloud, using coordinates, tags, and a description so that search engines will index it, and aggregators will include it in their heat maps of active trails. Or maybe, because I don’t want my identity bound to those trails, I publish to an anonymizing service. Either way, I might also share with friends. I can do that via my personal cloud, of course, but with Thali I can also sync with them directly.
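If I do share it, the published trail might look something like this GeoJSON-style record; the coordinates, description, and tags are invented for illustration, but the point is that everything a crawler or heat-map aggregator needs is right there in the data:

    # Hypothetical published trail as a GeoJSON Feature; coordinates and
    # tags are invented for illustration.
    import json

    trail = {
        "type": "Feature",
        "geometry": {
            "type": "LineString",
            "coordinates": [          # [longitude, latitude] pairs along the run
                [-122.1290, 47.5800],
                [-122.1270, 47.5815],
                [-122.1255, 47.5832],
            ],
        },
        "properties": {
            "description": "Gravel trail near the hotel, about a 45-minute run",
            "tags": ["running", "trail", "Bellevue"],
        },
    }

    print(json.dumps(trail, indent=2))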

For now I have no interest in joining sites like Runtastic. Running for me is quiet meditation, I don’t want to be cheered on by virtual onlookers, or track my times and distances, or earn badges. But maybe I’ll change my mind someday. In that case I might join Runtastic and sync my data into it. Later I might switch to another service and sync there. The point is that it’s never not my data. I never have to download it from one place in order to upload it to another. The trails data lives primarily on my phone. Anyone else who interacts with it gets it from me, where “me” means the mesh of devices and personal cloud services that my phone syncs with. I can share it with my real friends without forcing them to meet me in a social network silo. And I can share it with the real social network that we call the web.

Turning it off and on again


In The Internet of Things That Used To Work Better I whined about rebooting my stove. This morning I was stuck outside a hotel room waiting for “engineering” to come and reboot the door. It eventually required a pair of technicians, Luis and Kumar, who jiggled and then replaced batteries (yes, it’s a battery-operated door), then attached two different diagnostic consoles. When they got it working I asked what the problem had been. They had no idea. “Hello, IT, have you tried turning it off and on again?” is the tagline for a civilization whose front-line technicians have no theory of operation. Will the door open when I return tonight? I have no idea. But at least now I know how to turn it off and on again.