
The Nelson diaspora

This will be our first winter in California. I won’t miss New Hampshire’s snow and ice. But I’ll sure miss our regular Friday night gatherings with friends in Keene. And on Monday nights my thoughts will turn to the village of Nelson, eleven miles up the road. There, for longer than anyone knows, people have been playing fiddle tunes and celebrating a great contra dance tradition. On a cold winter night, when the whirling bodies of the dancers warm up the old town hall, it’s magical.

Gordon Peery, who for decades has accompanied the dancers on piano, once lent me a DVD documentary about the Nelson contra dance tradition. In a scene filmed at the Newport Folk Festival in the mid-1960s, the Nelson contra dancers appeared on the same stage as Bob Dylan and Joan Baez. I had no idea!

Here’s a video of a couple of minutes during a typical Monday night dance. To most people, who don’t know the building, or the village, or the people, or the tunes, or the tradition that’s stayed vibrant there for so many years, it won’t mean much. To a few, though, it will resonate powerfully. That’s because Nelson, NH is the origin of a contra dance diaspora that spread across the country.

Although we aren’t contra dancers, we visited from time to time just to savor the experience. Then, a few years ago, in search of musical companionship, I began attending the jam that precedes the dance. There, beginning and intermediate musicians learn how to play the dance tunes, mainly ones collected in these two books:

The Waltz Book opens with a tune written by Bob McQuillen, who played piano at the Nelson dance for decades until his death in early 2014. And the book closes with a tune by Niel Gow, the Scottish fiddler who died in 1807.

The Waltz Book also includes a couple of Jay Ungar tunes, including Ashokan Farewell. Most people think it’s a tune from the Civil War. In fact Jay Ungar wrote it in 1982, and it became famous in 1990 as the theme of Ken Burns’ documentary about the Civil War.

The New England Fiddler’s Repertoire might also have included a mix of recent and traditional tunes. Instead it restricts itself to “established tunes” — some attributed to composers from the 1700s or 1800s, others anonymous. But it’s full of reminders that people have never stopped dancing to those traditional tunes. Here’s the footnote to Little Judique:

February 12. Played for a Forestry Meet dance in a barn with a sawdust floor at the University of New Hampshire in Durham. The temperature was 15 degrees below zero.

- Randy Miller, dance journal, 1978

As I page through these books now, and continue to learn to play the tunes in them, I’m grateful to have lived in a place where they were celebrated so well, and to have participated (in a small way) in that celebration. How will I continue that here? I don’t know yet, but I’m sure I’ll find a way.

Swimming against the stream

Congratulations to Contributoria! The Guardian’s experiment in crowd-funded collaborative journalism is a finalist in the digital innovation category of the British Journalism Awards. (Disclosure: Contributoria’s CEO and cofounder, Matt McAlister, is a former InfoWorld colleague.) The site, which launched in January 2014, runs on a 3-month cycle. So the November 2014 issue is online now, the December 2014 issue is in production, and the January 2015 issue is in planning. There are now ten issues archived on the back issues page. Here are the numbers from that page in a spreadsheet:

Who pays the writers? Contributoria’s business page explains:

The writers’ commissions are provided by the community membership pool and other sources of funding such as sponsorship. Initial backing came from the Google sponsored International Press Institute News Innovation Contest. We are currently funded by the Guardian Media Group.

Contributoria is a market with an internal currency denominated in points. I’m currently signed up for a basic membership which comes with 50 points I can direct toward story proposals. If I upgrade to paid membership I’ll have more points to spend. As a Supporter you get 150 points. As a Patron it’s 250 points plus delivery of each issue in print and e-pub formats. I like to support innovative experiments in journalism, such as the (dearly departed) Ann Arbor Chronicle [1, 2], so I may upgrade my membership. But I’m not sure I want to vote, with points, for individual proposals. I might rather donate my points to the project as a whole.

What I would like to do, in any case, is keep an eye on the flow of stories, cherrypick items of interest, and perhaps follow certain writers. So I looked for the RSS feeds that would enable me to do those things, and was more than a bit surprised not to find them. Here’s the scoop:

That was back in February. Nine months later Contributoria’s RSS feeds are still, presumably, climbing the to-do list. How could a prominent and potentially award-winning experiment in online journalism regard RSS as an afterthought?

I mean no disrespect, and I won’t point any fingers because they’d point right back at me. I was an early adopter of RSS and an original member of the RSS Advisory Board. For many years an RSS reader was my information dashboard. I used it to organize and monitor items of interest from formal and informal sources, from publications and from peers. It was often the first window open on my computer in the morning.

And then things changed. I don’t remember exactly when, but for me it was even before the demise of Google Reader in July 2013. By then I’d already resigned myself to the notion that social streams were the new RSS. The world had moved on. Many people had never used RSS readers. For those who had, manual control of explicit lists of feeds now seemed more trouble than it was worth. Social graphs make those lists implicit. Our networks have become our filters. Mark Hadman and I might wish Contributoria offered RSS feeds but we’re in a tiny minority.

Of course the network-as-filter model isn’t new. The early blogosphere gave me my first taste of it. Back in 2002, in a short blog entry entitled Using people as filters, I wrote:

As individuals become both producers and consumers of RSS feeds, they can use one another as filters.

It worked well for years. I subscribed to a mix of primary sources and bloggers who were, in turn, subscribed to their own mixes of primary sources and subscribers. I often likened Dave Winer’s notion of triangulation — what happens when several sources converge on the same idea or event — to the summation of action potentials in the human nervous system. It was all very organic. I could easily tune my filter network to stay informed without feeling overwhelmed.

I doubt many of us feel that way now. Armando Alves certainly doesn’t. In Beyond filter failure: the downfall of RSS he laments the decreasing availability of RSS feeds:

I’m afraid the web/tech community has done a lousy job promoting RSS, and even people I consider tech savvy aren’t aware how to use RSS or how it would improve the way they consume information. Between a Facebook newsfeed shaped by commercial interests and a raw stream of information powered by RSS, I’d rather have the latter.

Let’s pause to consider an irony. Armando’s Medium page invites me to follow him there, but not by means of RSS. Instead the Follow link takes me to Medium’s account registration page. There I can log in, using Facebook or Twitter, then await the account verification email that will enable me to enter yet another walled garden.

There is, in fact, an RSS feed for Armando’s Medium posts. But only geeks will find it. Embedded in the page is this bit of code:

<link id="feedLink" rel="alternate" type="application/rss+xml" title="RSS" href="/feed/@armandoalves">

That tells me that I can subscribe to Armando in an RSS reader at this URL: https://medium.com/feed/@armandoalves

More generally it tells me that I can form the feed URL for any author on Medium by appending the @ username to https://medium.com/feed/.
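
A feed reader could do this discovery automatically. Here’s a minimal sketch in Python, using only the standard library, of how one might scrape that link element from a Medium page (Medium may change its markup, or insist on browser-like request headers, at any time):

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen
class FeedLinkFinder(HTMLParser):
    """Collect hrefs from <link rel="alternate" type="application/rss+xml">."""
    def __init__(self):
        super().__init__()
        self.feeds = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") == "application/rss+xml" and a.get("href")):
            self.feeds.append(a["href"])
page_url = "https://medium.com/@armandoalves"
req = Request(page_url, headers={"User-Agent": "Mozilla/5.0"})
html = urlopen(req).read().decode("utf-8", errors="replace")
finder = FeedLinkFinder()
finder.feed(html)
for href in finder.feeds:
    # Resolve relative hrefs like /feed/@armandoalves against the page URL.
    print(urljoin(page_url, href))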

How would a less technical person know these things? They wouldn’t. At one time, when RSS was in vogue, browsers would show you that a page had a corresponding RSS feed and help you add it to your feed reader. That’s no longer true, and not because the web/tech community failed to promote RSS. We promoted it like crazy for years. We got it baked into browsers. Then social media made it seem irrelevant.

But as Armando and many others are beginning to see, we’ve lost control of our filters. In the early blogosphere my social graph connected me to people who followed primary sources. Those sources were, it bears repeating, both formal and informal, both publications and peers. We made lists of sources for ourselves, we curated those lists for one another, and we were individually and collectively accountable for those choices.

Can we regain the control we lost? Do we even want to? If so let’s appreciate that the RSS ecosystem was (and, though weakened, still is) an open network powered by people who make explicit choices about flows of information. And let’s start exercising our choice-making muscles. I’m flexing mine again. The path of least resistance hasn’t worked for me, so my vacation from RSS is over. I want unfiltered access to the publications and people that matter most to me, I want them to be my best filters, and I’m available to return the favor. I may be swimming against the stream but I don’t care. I need the exercise.

Getting the digital autonomy we pay for

As an armchair educational technologist I’ve applauded the emerging notion that we should encourage students to build personal cyberinfrastructure, rooted in a domain of one’s own, that empowers them to live and work effectively. Doing so requires some expertise, but not necessarily this kind:

Authorship has blossomed since the dawn of social media; but even in its rise, authorship has been controlled by the platforms upon which we write. Digital pages are not neutral spaces. As I write this in Google Docs, I’m subject to the terms of service that invisibly manipulate the page; and I am also subject to the whims of the designers of the platform.

Owning our own homes in the digital requires an expertise that this writer does not have. I don’t own my own server, I haven’t learned to code, I haven’t designed my own interfaces, my own web site, nor even my own font. I must content myself to rent, to squat, or to ride the rails.

[Risk, Reward, and Digital Writing]

That’s Sean Michael Morris writing in the journal Hybrid Pedagogy. I agree with the premise that many are disempowered, but not with the conclusion that they’re stuck with that fate. Digital autonomy isn’t a nirvana only geeks can attain. We can all get there if we appreciate some basic principles and help create markets around them.

I’ve owned and operated servers. Nowadays I mostly avoid doing so. I host this blog on WordPress.com, and accept the limitations that entails, because it’s a reasonable tradeoff. Here, on this blog, at this point in my life, I don’t need to engage in the kinds of experimentation that I’ve done (and will do) elsewhere. I just need a place to publish. So I’ve outsourced that function to WordPress.

I haven’t, though, outsourced the function of writing to WordPress. I still write with the same text editor I’ve used for 25 years. When I finish writing this essay I’ll paste it into WordPress and hit the Publish button. This is not an ideal arrangement. I would rather connect my preferred writing tool more directly to WordPress (also to Twitter, Facebook, and other contexts). I’d rather that you could do the same with your preferred writing tool. And I’d like the creators of our writing tools, and of WordPress, to get paid for the work required to make those connections robust and seamless.
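
One such connection already exists in rough form: WordPress exposes an XML-RPC interface that a writing tool could target. Here’s a minimal sketch in Python of what “hit Publish from my own editor” could look like; the endpoint, credentials, and filename are placeholders, and XML-RPC must be enabled on the blog:

import xmlrpc.client
# Placeholder endpoint and credentials; XML-RPC must be enabled on the blog.
wp = xmlrpc.client.ServerProxy("https://example.wordpress.com/xmlrpc.php")
post = {
    "post_type": "post",
    "post_status": "publish",
    "post_title": "Drafted in my own editor",
    "post_content": open("essay.html").read(),  # the text I'd otherwise paste
}
# wp.newPost(blog_id, username, password, content) returns the new post's id.
post_id = wp.wp.newPost(0, "username", "password", post)
print("published post", post_id)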

The web is made of software components that communicate by means of standard protocols. Some of those standards, like the ones your browser uses to fetch and display web pages, are baked into all the browsers and servers in a way that enables many different makes and models to work together reliably. Other standards, including those that would enable you to connect your favorite writing tool to your favorite publishing environments, are nonexistent or nascent. If you would like those standards to exist, flourish, and work reliably everywhere — and if you are willing to support the work required — then say so!

One of the Elm City project’s core principles is the notion that you ought to be able to publicize events using any calendar application (or service) that you prefer. In this case there is a mature Internet standard. But many vendors of calendar publishing systems don’t bother to implement it. When I ask why not they always say: “Customers aren’t asking for it.”
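
That standard is iCalendar (RFC 5545), and the publishing side of it takes almost nothing to implement. Here’s a sketch that writes a minimal one-event feed; the event details are invented for illustration:

# iCalendar wants CRLF line endings; commas in text values are escaped.
ICS = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//example.org//event feed//EN",
    "BEGIN:VEVENT",
    "UID:monday-dance-20141110@example.org",
    "DTSTART:20141110T190000",
    "SUMMARY:Monday night dance\\, Nelson town hall",
    "END:VEVENT",
    "END:VCALENDAR",
    "",
])
# Serve or save this as events.ics (MIME type text/calendar) and any
# standards-aware calendar app can subscribe to it.
with open("events.ics", "w", newline="") as f:
    f.write(ICS)

Any system that can emit its event listings in this form, alongside the HTML it already produces, has implemented the standard.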

I don’t want most people to run servers, write code, or design interfaces. I just want people to understand what’s possible in a world of connected, standards-based software components, to recognize when those possibilities aren’t being realized, to expect and demand that they will be, and to pay something for that outcome.

WordPress.com is a valuable service that I could be using for free. In fact I pay $13/year for domain mapping so I can refer to this blog as blog.jonudell.net instead of jonudell.wordpress.com. That’s useful. It helps me consolidate my online presence within a domain of my own. But is that really critical? My wife blogs at luannudell.wordpress.com. Her homepage at www.luannudell.com links to the blog. Writing this post reminds me that I keep forgetting to create the alias blog.luannudell.com. Maybe I will, now that I’m thinking about it, but I’m not sure she’d notice a difference. People find Luann online in the spaces she chooses to inhabit and call her own.

I’d rather she had the option to spend $13/year for a robust connection between Word, which is her preferred writing tool, and WordPress. And then be able to transfer that connection to another writing tool and/or publishing platform if she wants to. Gaining this autonomy doesn’t require deep technical expertise. We just need to understand what’s possible, demand it, and be willing to pay (a little) for it.

We are the media

In an item posted last week, There was no pumpkin riot in Keene, I drew a distinction between two different events that became conflated in the national awareness. There was no rioting during the pumpkin festival at one end of Keene’s Main Street. And no pumpkins were smashed during the riots in the college neighborhood at the other end of the street. But as Reed Hedges noted in a comment on my blog:

The imagined scene of a quaint and boring pumpkin festival erupting in anarchy and violence for no reason was too amusing to resist viral spread across national and internet news.

In conversations about why the story went off the rails, I keep hearing the same refrain. It was “the media’s” fault. Yes, but that raises the question: Which media? Stories are no longer framed exclusively by newspapers, TV, radio, and their counterparts online. Using social media we all participate in that framing, for better and for worse. When we point the finger of blame at “the media” we must also point back at ourselves.

We’re becoming more aware of how and why to be critical consumers of online information. The corollary is not yet widely acknowledged. Because we collectively shape the stories that inform public awareness, we must also learn to be careful producers of online information.

In the aftermath of that chaotic night in Keene, an acquaintance (and prominent local citizen) mentioned in a Facebook post that two people had died. His source? He’d heard it from someone who had in turn heard it on a police scanner. In fact nobody died. I don’t think that careless report amplified the collective misconception, but it easily could have. Our online utterances are news sources. When we like, retweet, and tag those utterances, we shape the flow of news. This is a new kind of power. We’ve got to use it responsibly, and hold ourselves accountable when we don’t.

Let’s talk

Ray Ozzie, in conversation with Ina Fried and Walt Mossberg last week, reflected on his decades-long effort to enlist computers in support of collaborative work. Ina asked whether the tools he’s built — Notes, Groove, now Talko — have been ahead of the curve. Ray’s response:

With Notes it was an uphill battle, and then once it took off, it took off. We built a very substantial business around that, which is what gave me confidence there’s a macro-economic basis for computer-supported collaborative work. If you solve collaboration problems, there’s money to be made.

With Groove that wasn’t the case. It was a niche audience. But Groove is where I got excited about voice. It was used primarily by non-governmental organizations — in Sri Lanka after the tsunami, after Katrina, in many situations where people from different organizations needed to get together very dynamically to get something done. People used the text and file-sharing features in Groove, but they also used the push-to-talk button much more than we expected. Because when you want to convey emotion and urgency, there’s nothing better than your voice.

It’s ironic that Talko’s effort to re-establish voice as a primary mode of communication is ahead of the curve. Somehow we’ve come to accept that talking to one another isn’t a primary function of the devices we call phones, but that typing on them with our thumbs is.

With Talko, you speak in a shared space that’s represented as an audio timeline. Conversations can be asynchronous (like email) or synchronous (like chat), and the transition between those modes is seamless. When I bought my first iPhone in order to try Talko — it’s iOS-only for now — I contacted Matt Pope, Talko’s co-founder, to let him know I was available for Talko-style conversation. Over a period of days we chatted asynchronously, creating an audio timeline made from his voice messages and mine. In that mode Talko is a kind of visual voicemail: a randomly-accessible record of voice messages sent and received.

At one point I happened to be reviewing that conversation when Matt came online, noticed I was active in the conversation, and switched into synchronous mode by saying “Cool! Serendipitous synch!” Just like that we were in a live conversation. But not an ephemeral live conversation. We were still adding to the audio timeline, still building a persistent and shareable construct.

I’d been revisiting my conversation with Matt because, in a parallel Talko conversation with Steve Gillmor, he and I wondered how to exchange Talko sessions. Matt used our live conversation to explain how, and inserted an iPhone screenshot into the stream to illustrate. (Mea culpa. It would have been obvious to me if I’d been a more experienced iPhone user.)

When I hung up with Matt I captured the link for our conversation, which was now a record of back-and-forth voice messages over several days, plus a live conversation, plus a screenshot injected into the conversation. And I added that link into the parallel conversation I’d been having, on and off for a few days, with Steve.

Talko’s business model is business. In that realm, voicemail is a last resort. And nobody loves a conference call that has to be scheduled in advance, that includes only invited attendees, that leaves no record for attendees (or others recruited later) to review and extend. Email is the universal solvent but it’s bandwidth-challenged with respect to both speed and emotional richness. Voice is a radically underutilized medium for communicating within and across organizations. It’s not a panacea, of course. On a plane, or in a meeting, you often need to communicate silently. But a smarter approach to voice communication will, I’m certain, solve vexing communication problems for business.

And not only for business. My most pressing collaboration challenge right now is helping my sister coordinate care for our elderly mother. My sister lives in New Jersey, I’m in California, mom’s in Pennsylvania. We are in ongoing conversations with mom, with each other, with staff at the facility where mom lives, with her friends there, with an agency that provides supplemental care, and from time to time with the hospital. Communication among all of us is a fragmented mess of emails, text messages, voicemails, pictures of handwritten notes, and of course phone calls. It’s really one ongoing conversation that would ideally leverage voice as much as possible, while enhancing voice with text, images, persistence, tagging, and sharing. I wish I could use Talko to manage that conversation. The iOS-only constraint prevents that for now; I hope it lifts soon.

There was no pumpkin riot in Keene

Recently, in a store in Santa Rosa, my wife Luann was waiting behind another customer whose surname, the clerk was thrilled to learn, is Parrish. “That’s the name of the guy in Jumanji,” the clerk said. “I’ve seen that movie fifty times!”

“I’m from Keene, New Hampshire,” Luann said, “the town where that movie was filmed.”

It was a big deal when Robin Williams came to town. You can still see the sign for Parrish Shoes painted on a brick wall downtown. Recently it became the local Robin Williams memorial:

Then the penny dropped. The customer turned to Luann and said: “Keene? Really? Isn’t that where the pumpkin riot happened?”

The Pumpkin Festival began in 1991. In 2005 I made a short documentary film about the event.

It’s a montage of marching bands, face painting, music, kettle corn, folk dancing, juggling, and of course endless ranks of jack-o-lanterns by day and especially by night. We weren’t around this year to see it, but our friends in Keene assure us that if we had been, we’d have seen a Pumpkin Festival just like the one I filmed in 2005. The 2014 Pumpkin Festival was the same family event it’s always been. Many attendees had no idea that, at the other end of Main Street, in the neighborhood around Keene State College, the now-infamous riot was in progress.

No pumpkins were harmed in the riot. Bottles, cans, and rocks were thrown, a car was flipped, fires were set, but — strange as it sounds — none of these activities intersected with the normal course of the festival. Two very different and quite unrelated events occurred in the same town on the same day.

The riot had precursors. Things had been getting out of control in the college’s neighborhood for the past few years. College and town officials were expecting trouble again, and thought they were prepared to contain it. But things got so crazy this year that SWAT teams from around the state were called in to help.

In the aftermath there was an important discussion of white privilege, and of the double standard applied to media coverage of the Keene riot versus the Ferguson protests. Here’s The Daily Kos:

Black folks who are protesting with righteous rage and anger in response to the killing of Michael Brown in Ferguson have been called “thugs”, “animals”, and cited by the Right-wing media as examples of the “bad culture” and “cultural pathologies” supposedly common to the African-American community.

Privileged white college students who riot at a pumpkin festival are “spirited partiers”, “unruly”, or “rowdy”.

Unfortunately the title of that article, White Privilege and the ‘Pumpkin Fest’ Riot of 2014, helped perpetuate the false notion that the Pumpkin Festival turned into a riot. When I mentioned that to a friend he said: “Of course, the media always get things wrong.”

It would be easy to blame the media. In fact, the misconception about what happened in Keene is a collective error. On Twitter, for example, #pumpkinfest became the hashtag that gathered riot-related messages, photos, and videos, and that focused the comparison to Ferguson. Who made that choice? Not the media. Not anyone in particular. It was the network’s choice. And the network got it wrong. Our friends in Keene saw it happening and tried to flood social media with messages and photos documenting a 2014 Pumpkin Festival that was as happy and peaceful as every other Pumpkin Festival. But once the world had decided there’d been a pumpkin riot it was impossible to reverse that decision.

Is Keene’s signature event now ruined? We’ll see. I don’t think anybody yet knows whether it will continue. Meanwhile it’s worth reflecting on how conventional and social media converged on the same error. There’s nothing magical about the network. It’s just us, and sometimes we get things wrong.

How recently has the website been updated?

Today’s hangout with Gardner Campbell and Howard Rheingold, part of the Connected Courses project, dovetailed nicely with a post I’ve been meaning to write. Our discussion topic was web literacy. One of the literacies that Howard has been promoting is critical consumption of information or, as he more effectively says, “crap detection.” His mini-course on the subject links to a page entitled The CRAP Test which offers this checklist:

* Currency
  - How recent is the information?
  - How recently has the website been updated?
  - Is it current enough for your topic?
* Reliability
  - What kind of information is included in the resource?
  - Is the content of the resource primarily opinion? Is it balanced?
  - Does the creator provide references or sources for data or quotations?
* Authority
  - Who is the creator or author?
  - What are the credentials?
  - Who is the publisher or sponsor?
  - Are they reputable?
  - What is the publisher’s interest (if any) in this information?
  - Are there advertisements on the website?
* Purpose/Point of View
  - Is this fact or opinion?
  - Is it biased?
  - Is the creator/author trying to sell you something?

The first criterion, Currency, seems more straightforward than the others. But it isn’t. Web servers often don’t know when the pages they serve were created or last edited. The pages themselves may carry that information, but not in any standard way that search engines can reliably use.

In an earlier web era there was a strong correspondence between files on your computer and pages served up on the web. In some cases that remains true. My home page, for example, is just a hand-edited HTML file. When you fetch the page into your browser, the server transmits the following information in HTTP headers that you don’t see:

HTTP/1.1 200 OK
Date: Thu, 23 Oct 2014 20:54:46 GMT
Server: Apache
Last-Modified: Wed, 06 Aug 2014 19:28:27 GMT

That page was served today but last edited on August 6th.

Nowadays, though, for many good reasons, most pages aren’t hand-edited HTML. Most are served up by systems that assemble pages dynamically from many parts. Such systems may or may not transmit a Last-Modified header. If they do, they usually report when the page was assembled, which is about the same time you read it.

Search engines can, of course, know when new pages appear on the web. And there are ways to tap into that knowledge. But such methods are arcane and unreliable. We take it for granted that we can list files in folders on our computers by date. Reviewing web search results doesn’t work that way, so it’s arduous to apply the first criterion of C.R.A.P. detection. If you’re lucky, the URL will encode a publication date, as is often true for blogs. In such cases you can gauge freshness without loading the page. Otherwise you’ll need to click the link and look around for cues. Some web publishing systems report when items were published and/or edited; many don’t.
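
To make that arduousness concrete, here’s roughly what a best-effort freshness check has to do today: a sketch in Python, standard library only, and subject to the caveat above that a dynamically assembled page may report its assembly time rather than its edit time:

import re
import urllib.request
def estimate_currency(url):
    """Best-effort guess at when a page was last edited."""
    # 1. Ask the server for its headers only, via a HEAD request.
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            last_modified = resp.headers.get("Last-Modified")
            if last_modified:
                return "Last-Modified header: " + last_modified
    except OSError:
        pass  # unreachable, or the server rejects HEAD; fall through
    # 2. Look for a /YYYY/MM/ or /YYYY/MM/DD/ pattern in the URL itself,
    # since blog permalinks often encode publication dates.
    m = re.search(r"/(19|20)\d\d/\d\d(/\d\d)?/", url)
    if m:
        return "date embedded in URL: " + m.group(0).strip("/")
    return None  # no machine-readable evidence; inspect the page by hand
print(estimate_currency("https://jonudell.net/"))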

Social media tend to mask this problem because they encourage us to operate in what Mike Caulfield calls StreamMode:

StreamMode is the approach to organizing your thoughts as a history, integrated primarily as a sequence of events. You know that you are in StreamMode if you never return to edit the things you are posting on the web.

He contrasts StreamMode with StateMode:

In StateMode we want a body of work at any given moment to be seen as an integrated whole, the best pass at our current thinking. It’s not a journal trail of how we got here, it’s a description of where we are now.

The ultimate expression of StateMode is the wiki.

But not only the wiki. Any website whose organizing principle is not reverse chronology is operating in StateMode. If you’re publishing that kind of site, how can you make its currency easier to evaluate? If you can choose your publishing system, prefer one that can form URLs with publication dates and embed last-edited timestamps in pages.

In theory, our publishing tools could capture timestamps for the creation and modification of pages. Our web servers could encode those timestamps in HTTP headers and/or in generated pages, using a standard format. Search engines could use those timestamps to reliably sort results. And we could all much more easily evaluate the currency of those results.
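
schema.org’s dateModified property, embedded as JSON-LD, is one plausible candidate for that standard format. Here’s a hypothetical sketch of the publishing-tool side, stamping a generated page with its source file’s modification time (the function and its calling convention are invented for illustration):

import json
import os
from datetime import datetime, timezone
def stamp_page(source_path, html_page):
    """Hypothetical publishing step: embed the source file's last-modified
    time in the generated page as schema.org JSON-LD metadata."""
    mtime = os.path.getmtime(source_path)
    modified = datetime.fromtimestamp(mtime, tz=timezone.utc)
    metadata = {
        "@context": "https://schema.org",
        "@type": "WebPage",
        "dateModified": modified.isoformat(),
    }
    tag = ('<script type="application/ld+json">'
           + json.dumps(metadata) + "</script>")
    # Inject the metadata just before the closing </head> tag.
    return html_page.replace("</head>", tag + "\n</head>", 1)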

In practice that’s not going to happen anytime soon. Makers of publishing tools, servers, and search engines would have to agree on a standard approach and form a critical mass in support of it. Don’t hold your breath waiting.

Can we do better? We spoke today about the web’s openness to user innovation and cited the emergence of Twitter hashtags as an example. Hashtags weren’t baked into Twitter. Chris Messina proposed using them as a way to form ad-hoc groups, drawing (I think) on earlier experience with Internet Relay Chat. Now the scope of hashtags extends far beyond Twitter. The tag for Connected Courses, #ccourses, finds essays, images, and videos from all around the web. Nine keystrokes join you to a group exploration of a set of ideas. Eleven more, #2014-10-23, could locate you on that exploration’s timeline. Would it be worth the effort? Perhaps not. But if we really wanted the result, we could achieve it.