Ambient video awareness and visible conversations

A few years ago Marc Eisenstadt, chief scientist with the Open University’s Knowledge Media Institute, wrote to tell me about a system called BuddySpace. We’ve been in touch on and off since then, and when he heard I’d be in Cambridge for the Technology, Knowledge, and Society conference, he invited me to the OU’s headquarters in Milton Keynes for a visit. I wasn’t able to make that detour, but we got together anyway thanks to KMI’s new media maven Peter Scott, who was at the conference to demonstrate and discuss some of the Open University’s groundbreaking work in video-enhanced remote collaboration.

Peter’s talk focused mainly on Hexagon, a project in “ambient video awareness.” The idea is that a distributed team of webcam-equipped collaborators monitor one another’s work environments — at home, in the office, on the road — using hexagonal windows that tile nicely on a computer display. It’s a “room-based” system, Peter says. Surveillance occurs only when team members enter a virtual room, thereby announcing their willingness to see and be seen.

Why would anyone want to do that? Suppose Mary wants to contact Joe, and must choose between an assortment of communication options: instant messaging, email, phone, videoconferencing. If she can see that Joe is on the phone, she’ll know to choose email or IM over a phone call or a videoconference. Other visual cues might help her to decide between synchronous IM and asynchronous email. If Joe looks bored and is tapping his fingers, he might be on hold and thus receptive to instantaneous chat. If he’s gesticulating wildly and talking up a storm, though, he’s clearly non-interruptible, in which case Mary should cut him some slack and use email as a buffer.
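Just for fun, here’s how Mary’s decision procedure might look in code. This is my own sketch, not anything from Hexagon; the status cues and the rules are purely illustrative:

```python
# Hypothetical sketch of choosing a contact channel from ambient presence
# cues. The cues and rules are illustrative, not part of Hexagon.

def choose_channel(on_phone: bool, looks_idle: bool) -> str:
    """Pick the channel that best fits what the webcam shows."""
    if not on_phone:
        return "phone"   # free: a call or videoconference is fine
    if looks_idle:
        return "im"      # probably on hold: receptive to instant chat
    return "email"       # clearly engaged: buffer the message

print(choose_channel(on_phone=True, looks_idle=False))  # email
```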

Hexagon has been made available to a number of groups. Some used it enthusiastically for a while. But only one group so far has made it a permanent habit: Peter’s own research group. As a result, he considers it a failed experiment. Maybe so, but I’m willing to cut the project some slack. It’s true that in the real world, far from research centers dedicated to video-enhanced remote collaboration, you won’t find many people who are as comfortable with extreme transparency — and as fluent with multi-modal communication — as Marc and Peter and their crew. But the real world is moving in that direction, and the camera-crazy UK may be leading the way, as seen in the photo at right, which juxtaposes a medieval wrought-iron lantern and a modern TV camera.

Meanwhile, some of those not yet ready for Hexagon may find related Open University projects, like FlashMeeting, to be more approachable. FlashMeeting is a lightweight videoconferencing system based on Adobe’s Flash Communication Server. Following his talk, Peter used his laptop to set up a FlashMeeting conference that included the two of us, Marc Eisenstadt at OU headquarters, and Tony Hirst who joined from the Isle of Wight. It’s a push-to-talk system that requires speakers to take turns. You queue for the microphone by clicking on a “raise your hand” icon. Like all such schemes, it’s awkward in some ways and convenient in others.

There were two awkward bits for me. First, I missed the free-flowing give-and-take of a full duplex conversation. Second, I had to divide my attention between mastering the interface and participating in the conference. At one point, for example, I needed to dequeue a request to talk. That’s doable, but in focusing on how to do it I lost the conversational thread.

Queue-to-talk is a common protocol, of course — it’s how things work at conferences, for example. In the FlashMeeting environment it serves to chunk the conversation in a way that’s incredibly useful downstream. All FlashMeeting conferences are recorded and can be played back. Because people queue to talk, it’s easy to chunk the playback into fragments that map the structure of the conversation. You can see the principle at work in this playback. Every segment boundary has a URL. If a speaker runs long, his or her segment will be subdivided to ensure fine-grained access to all parts of the meeting.
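My reconstruction of the chunking idea, in a few lines of Python. The turn data and the 60-second subdivision threshold are assumptions of mine, not FlashMeeting’s actual values:

```python
# Sketch: each queue-to-talk turn becomes a playback segment, and any turn
# longer than a threshold is subdivided so every part of the meeting stays
# addressable at fine grain. Threshold is an assumed value.

MAX_SEGMENT = 60  # seconds

def segments(turns):
    """turns: list of (speaker, start_sec, end_sec) -> list of segments."""
    out = []
    for speaker, start, end in turns:
        t = start
        while t < end:
            out.append((speaker, t, min(t + MAX_SEGMENT, end)))
            t += MAX_SEGMENT
    return out

turns = [("Peter", 0, 150), ("Jon", 150, 170)]
print(segments(turns))
# [('Peter', 0, 60), ('Peter', 60, 120), ('Peter', 120, 150), ('Jon', 150, 170)]
```

Each tuple in the output would get its own URL, which is what makes the recording linkable at the granularity of a conversational turn.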

The chunking also provides data that can be used to visualize the “shape” of a meeting. These conversational maps clearly distinguish between, for example, meetings that are presentations dominated by one speaker, versus meetings that (like ours) are conversations among co-equals. The maps also capture subtleties of interaction. You can see, for example, when someone’s hand has been raised for a long time, and whether that person ultimately does speak or instead withdraws from the queue.
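A crude version of that shape analysis is easy to sketch from the same turn data. The 70% dominance threshold here is an arbitrary number I picked for illustration:

```python
# Sketch: classify a meeting's "shape" from per-speaker talk time.
# The 0.7 dominance threshold is an illustrative assumption.

from collections import defaultdict

def talk_shares(turns):
    """turns: (speaker, start_sec, end_sec) -> {speaker: share of talk time}."""
    totals = defaultdict(float)
    for speaker, start, end in turns:
        totals[speaker] += end - start
    grand = sum(totals.values())
    return {s: t / grand for s, t in totals.items()}

def shape(turns, threshold=0.7):
    """One dominant voice -> presentation; otherwise -> conversation."""
    shares = talk_shares(turns)
    return "presentation" if max(shares.values()) >= threshold else "conversation"

print(shape([("A", 0, 80), ("B", 80, 100)]))  # presentation
print(shape([("A", 0, 50), ("B", 50, 100)]))  # conversation
```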

A map of a conversation

I expect the chunking is also handy for random-access navigation. In the conversation mapped here, for example, I spoke once at some length. If I were trying to recall what I said at that point, seeing the structure would help me pinpoint where to tune in.

Although Hexagon hasn’t caught on outside the lab, Peter says there’s been pretty good uptake of FlashMeeting because people “know how to have meetings.” I wonder if that’s really true, though. I suspect we know less about meetings than we think we do, and that automated analysis could tell us a lot.

The simple act of recording and playback can be a revelation. Once, for example, I recorded (with permission) a slightly tense phone negotiation. When I played it back, I heard myself making strategic, tactical, and social errors. I learned a lot from that, and might have learned even more if I’d had the benefit of the kinds of conversational x-rays that the OU researchers are developing.

Virtual worlds with exotic modes of social interaction tend to be headline-grabbers. Witness the Second Life PR fad, for example. By contrast, technology that merely reflects our real-world interactions back to us isn’t nearly so sexy. For most of us, though, in most cases, it might be a lot more useful.

In Redmond next week

My new job at Microsoft starts Monday, and I’ll be on campus in Redmond all week. The first 1.5 days are all HR stuff, I’m told, but after that I’ll be available. I’m setting up various meetings, but of course I don’t know many of the folks I ought to meet. So if you’re one of those folks, would like to get together, and are within earshot of this blog, speak up. We could maybe coordinate right here in comments, or if that’s too weird, then email me at my permanent address, judell at mv dot com.

A case of suspected fraud

I had an odd experience a few weeks ago, related to the conference I just attended. The Australian organizers had volunteered to book my Boston-London flight. Then one afternoon I got a call from Charlotte, a travel agent who works in the U.S. branch of the organizers’ Australian travel agency. She thought the booking was probably fraudulent, and cited three reasons:

  1. The booking was issued in the name of one of the organizers, but I was listed as the traveler.
  2. The fare was unusually high.
  3. The email address, which concatenated the organizer’s name with the name of the travel agency, seemed odd to her.

She: “I’m pretty sure this is bogus.”

Me: “How would it be in someone’s interest to fraudulently book me a flight?”

She: “Who knows? It could be anything. When you’ve seen as much of this kind of thing as I have, you give up on trying to figure out people’s motives.”

Suddenly the whole thing felt wrong to me. I recalled how sparse the conference website had been when I’d last visited it the week before. The keynote speakers, including me, were listed, but everything else was placeholders. So I went back to the site and…nothing was there. Holy crap! Was it conceivable that the whole deal was some kind of malicious prank? That unlikely conclusion began to seem disturbingly likely when I googled around, found the organizer’s site and an affiliated academic site, and discovered that they were dead too.

Finally I found a page listing advisory board members, and called the person who lives closest to me, an academic in New Jersey. She verified that the company and conference were real. When I went back to recheck the websites they were up and running again, and the suspiciously sparse schedule was now fully populated.

My post-mortem analysis of this strange combination of circumstances raised a couple of interesting points:

Eyeballs on transactions.
After things got sorted out and the flight was booked, I had a long conversation with the travel agent. It seemed unusual that she had personally reviewed this transaction and, on her own initiative, flagged it as suspicious. Was that company policy, I asked? No, she said. The company mostly uses an automated system. It just happens that, in her remote branch office, Charlotte sees all the bookings, is motivated to review them, and brings substantial energy and intelligence to that task.

She told me she catches real fraud attempts every week or so. To the company at large, this is just spoilage. It gets written off as a cost of doing business. We assume that eyeballs on transactions are uneconomical. But is that really true? After this experience, and in view of my conversation with Paul English about the practicality of human-intensive customer service, I wonder if we should revisit that assumption.

Locality of trust.
This was an international conference, and the members of the advisory board live all around the world. The one I chose to contact, though, is the one who lives closest to me. Of course I’d be unlikely to call overseas first, because of long-distance tolls and time zones. But there were various folks in the U.S. I could have called, yet I picked the person who lives in New Jersey. Why? In retrospect I believe that’s because New Jersey is closer to my home than Illinois or California. Of course it’s completely irrational to trust a New Jerseyite more than a Californian for that reason. And yet, at a moment when nothing seemed certain, I acted out that irrational behavior. Trust shouldn’t diminish as the square of distance but, in our unconscious minds, I think it probably does. I’ll bet Jim Russell would agree.

All’s well that ends well. The conference organizers turned out to be really pleasant folks. (I’m downplaying their identities here, though you could triangulate them if you wanted to, because they’re naturally a bit embarrassed about what happened.) I enjoyed giving my talk, I met interesting people, I got to see Cambridge for the first time, it was a good trip. But for a couple of hours on that afternoon in December things were really weird!

Future tailors

I’m at the Technology, Knowledge, and Society conference in Cambridge UK, where I spoke this morning on the theme of network-enabled apprenticeship. It’s a topic I began developing last fall for a talk at the University of Michigan. I don’t feel that I nailed it the first time around, nor this time either, but it’s provoked a lot of interesting and helpful discussion.

My argument is that for most of human history, in tribal life, village life, or farm life, it was common to be able to watch people do their daily work. Kids who grew up on a farm, for example, saw the whole picture — animal husbandry, equipment maintenance, finance. They understood more about work than kids who only saw dad go to the office, do nobody knew what, and return at the end of the day.

To the extent that we now find it culturally acceptable to narrate our work online, in textual and especially in multimedia formats, we can among other things function as teachers and mentors. We can open windows into our work worlds through which people can find out, much more than was ever possible before, what it is like to do various kinds of work.

I claim this will help people, in particular younger people, sample different kinds of work and, in some cases, progress from transient web interactions to deeper relationships in cyberspace and/or in meatspace. And I suggest that those relationships could evolve into something resembling apprenticeships.

There are plenty of holes in this argument, and James Governor, whom I met for the first time yesterday in London, drove a truck through one of them. It’s nice to have loose coupling and lightweight affiliation, he said, but apprenticeship was always a durable commitment that involved submitting to a discipline. It wasn’t about window-shopping. Point taken.

Today on a walk in Cambridge I met Andrew Jackson, a bespoke tailor who’s just opened up a shop here, and we had a great conversation on this topic. Thanks to Thomas Mahon’s English Cut, which is a great example of work narration, I know a lot more than I otherwise would about this craft. When I asked Andrew if he’s having trouble bringing people into the business I touched a nerve. It’s a huge issue for him.

Maybe, I suggested, online narration of aspects of his craft would be a way to attract worthy apprentices. But he was way ahead of me. Among other things his firm trains tailors in other countries, and they deliver that training over the Internet, using video. That’s not the problem, he said. The problem is that young people just don’t want to do the work. They want to be rock-star fashion designers, not cutters and tailors, and they will not submit to the discipline of his trade. What’s worse, he added, is that little or no stigma now attaches to unemployment.

Andrew Jackson has a good job that nobody else seems to want. The same holds true, he says, for the guy who fixes all the lead-framed windows in Cambridge. He’s been doing it forever, he knows everybody in the town, he does lucrative and socially rewarding work, and yet he cannot find anyone who wants to help him and eventually step into his role.

So, back to the drawing board. I do think that online narration of work will be a necessary way to attract new talent. But it may not be sufficient. It may also be necessary to demonstrate the non-monetary rewards of doing the work. The window repairer, for example, may enjoy low stress and much autonomy, may see and hear a lot of the interior life of the town, and may enjoy pleasant relationships with long-term customers.

If he told you his story, or if someone else did, those rewards might become clear to you. Admittedly there’s no guarantee that outcome will occur. But if nobody tells the story, we can pretty much guarantee that it won’t.

A conversation with Graham Glass about the future of education

This week’s podcast is a conversation with Graham Glass, a software veteran who’s self-funding the development of edu 2.0, a web-based educational support system. It seems like a big change from Graham’s previous projects: ObjectSpace Voyager, The Mind Electric’s Glue and Gaia, webMethods Fabric. But not really, says Graham. It’s always been about the reuse of components, whether they’re software objects or learning objects.

Graham and I share a passion for project-based learning, and in the podcast he refers to an EdVisions video on that subject which you can find here. I also (again) referenced the extraordinary talk by John Willinsky which I discussed and linked to here.

I know that technologists always say that the latest inventions are going to revolutionize education, and I know that mostly hasn’t been true. Still, I can’t help but think that we’re on the verge of a dramatic overhaul of education, and that systems like the one Graham is building will play a key role in enabling that to happen.

Trusted feeds

As several folks rightly pointed out in comments here, a community site based on tagging and syndication is exquisitely vulnerable to abuse. In the first incarnation of the photos page, for example, a malicious person could have posted something awful to Flickr and it would have shown up on that page. Flickr has its own abuse-handling system, of course, but its response time might not be quick enough to avert bad PR for the site.

My first thought was to attach an abuse-reporting link to every piece of externally sourced content. It would be a hair trigger that would — in a Wiki-like way — allow anyone to shoot first (i.e., remove an offensive item immediately) and enable the site management to ask questions later (i.e., review logs, revert removals if need be.)
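In code, the hair-trigger idea might look something like this sketch (the class and method names are mine, not an existing system’s):

```python
# Sketch of the shoot-first / ask-questions-later moderation idea: anyone
# can hide an item instantly, and site management can review the log and
# revert. All names here are illustrative.

class ModeratedFeed:
    def __init__(self, items):
        self.items = dict(items)   # id -> content, currently visible
        self.hidden = {}           # id -> (content, reporter), the audit log

    def report(self, item_id, reporter):
        """Hair trigger: remove immediately, keep enough state to revert."""
        if item_id in self.items:
            self.hidden[item_id] = (self.items.pop(item_id), reporter)

    def revert(self, item_id):
        """Management review decided the removal was unwarranted."""
        content, _reporter = self.hidden.pop(item_id)
        self.items[item_id] = content
```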

I’m still interested in trying that approach, but not as the mainstay. Instead I want to promote the idea of trusted feeds. There are currently two on that photos page, one from Flickr and one from local blogger Lorianne DiSabato. I know Lorianne and I trust her to produce a stream of high-quality photos (and essays) about life in our community.

After reviewing the Flickr photostreams of the people whose recent photos match a Flickr search for “Keene NH” I decided to extend provisional trust to them, as well, so I put their names on a list of trusted feeds.

Then I restricted the page to just those feeds, and added a note explaining that anyone who sends an email request can join the list of trusted feeds.
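The mechanism itself is almost embarrassingly simple: aggregate everything, but publish only items whose author is on the whitelist. A sketch, with illustrative field names:

```python
# Sketch of the trusted-feeds filter. The item dict shape and author names
# are illustrative; the whitelist grows by email request.

TRUSTED = {"lorianne", "flickr_user_1"}

def publishable(items):
    return [item for item in items if item["author"] in TRUSTED]

items = [
    {"author": "lorianne", "title": "Ashuelot River in fog"},
    {"author": "stranger", "title": "something awful"},
]
print([i["title"] for i in publishable(items)])  # ['Ashuelot River in fog']
```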

Of course anything short of frictionless participation is an obstacle. On the other hand, based on my conversation with Paul English about customer service, there’s a lot to be said for a required step in the process that forms a human relationship — attenuated by email, true, but still, a relationship.

I think it’s even more interesting when the service, or site, is rooted in a geographical place. On the world wide web, I’m always forming those kinds of relationships with people I will never meet. But on a place-based site, I may already have met these folks. If I haven’t yet, I might still. Trust on the Internet has a very different flavor when the scope is local.

A couple of years ago I was on a panel of media types at a local community leadership seminar, where I was the token blogger. The topic was how the community gathers and disseminates news. NHPR’s executive editor Jon Greenberg said what needed to be said about blogging, which was helpful because it was more credible to that audience coming from him than from me. Even so, there was a lot of pushback. When it was suggested that people could consume a richer and more varied diet of news, they balked. “It’s your [the media’s] job to sift and summarize, not ours.”

Similarly, when it was suggested that people could produce news about the local issues where they are stakeholders and have important knowledge, the pushback was: “But you can’t trust random information on the Internet.”

I found that fascinating. Here were a bunch of folks — a hospital administrator, a fire chief, a school nurse, a librarian — who all know one another. What they seemed to be saying, though, is that the Internet would invalidate that trust.

Now I assume that they trust emails from one another. Likewise phone calls, which are increasingly carried over the Internet. And if the fire chief wrote a blog that the school nurse subscribed to, there would be no doubt in the mind of John, the school nurse, that the information blogged by Mary, the fire chief, was real and trustworthy.

Until you join the two-way web, though, you don’t really see how it’s like other familiar modes of communication: phone, email. Or how the nature of that communication differs depending on whether the communicating parties live near one another.

If feeds begin to flow locally, it’ll be easy to trust them in a way that’ll supply most of the moderation we need. The problem, of course, is getting those feeds to flow. Bill Seitz asked:

So you think the “average” person will have Flickr and accounts in addition to joining your site?

No, I don’t, though over time more will use these or equivalent services. So yes, I also need to show how any online resource that’s being created, anywhere, for any purpose, can flow into the community site. It only takes two agreements:

  1. An agreement on where to find the source.
  2. An agreement to trust the source.

In the short-to-medium term, those sources are not going to be friendly to me, the developer. So I’ll have to go the extra mile to bring them in, as I’m doing on the events page.

Conceptual barriers

I’ve planted the seed that I hope will grow into the kind of community site that defines community the old-fashioned way — people living in the same place — as well as in the modern sense of network affiliation. The project has raised a bunch of technical, operational, and aesthetic issues.

Technical: Django is working well for me, but I haven’t invested deeply in it yet. Patrick Phelan, a web developer I’ve corresponded with for years, reminded me the other day that my reluctance is strategic. With any framework, buy-in cuts two ways, and you should never take unnecessary dependencies. Patrick noted that I am using WSGI, a Python-based Web Server Gateway Interface, to connect Django by way of FastCGI to my commodity hosting service. And he pointed out that a rich WSGI ecosystem is evolving that could enable me to proceed in the minimalistic style I prefer, integrating best-of-breed middleware (e.g., URL mapping, templating) as needed. If the preceding sentence makes any sense to you, but you haven’t heard about Paste and Pylons (as I had not until Patrick pointed me at them), then you might want to watch the Google TechTalk that Patrick recommends.
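To make that concrete, here’s the flavor of minimalism I have in mind: a bare WSGI application plus a hand-rolled URL mapper, no framework required. (Paste and Pylons offer industrial-strength versions of both pieces; this is just the idea.)

```python
# A minimal WSGI app with a tiny hand-rolled URL mapper. Route paths and
# handler names are illustrative.

def events(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"upcoming events"]

def not_found(environ, start_response):
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"no such page"]

ROUTES = {"/events": events}

def app(environ, start_response):
    handler = ROUTES.get(environ.get("PATH_INFO", "/"), not_found)
    return handler(environ, start_response)

# On commodity hosting this would sit behind FastCGI; locally you can
# serve it with the standard library:
# from wsgiref.simple_server import make_server
# make_server("", 8000, app).serve_forever()
```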

Operational: I’m doing this project on $8/month commodity hosting because I want to understand, and explain, how much can be accomplished for how little. Bottom line: amazingly much for amazingly little. For years I’ve supplied my own infrastructure, so I never had the experience of using a hosting service that provides web wrappers to: create subdomains; provision databases and email accounts; deploy blogs and wikis. Sweet! At the same time, though, I’m struck by how much specialized cross-domain knowledge I’ve had to muster. For example, the first service I’ve built on the site, a community version of LibraryLookup, relies on programmatic use of authenticated SMTP to send signup confirmation messages and status alerts. I figured out how to do that in Python, but it took some head-scratching, and my solution isn’t particularly robust. For me, spending an extra buck a month for a more robust solution (ideally delivered as a language-independent web service) would be an option I’d consider. For many people, though, it would be an enabler for things that otherwise wouldn’t happen. There’s a ton of opportunity in this space for buck-a-month services like that.
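For the curious, the authenticated-SMTP piece boils down to something like this. The host, port, and credentials are placeholders for whatever your hosting service provides, and as I said, my real version isn’t much more robust than this sketch:

```python
# Sketch of sending signup confirmations via authenticated SMTP.
# Host, port, addresses, and password are placeholder assumptions.

import smtplib
from email.mime.text import MIMEText

def build_message(from_addr, to_addr, subject, body):
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = from_addr
    msg["To"] = to_addr
    return msg

def send_alert(msg, host="mail.example.org", port=587,
               user="alerts@example.org", password="app-password"):
    server = smtplib.SMTP(host, port)
    try:
        server.starttls()  # if the host advertises TLS
        server.login(user, password)
        server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    finally:
        server.quit()
```

Separating message construction from delivery at least lets you test the formatting without a live mail server, which is more than my first version did.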

Aesthetic: For now I’m going with an aggressively Web 0.1 style, a la craigslist. My wife’s first comment was: “So, you are going to pretty it up a bit, right?” I dunno, you can argue it both ways. The current arrangement has the advantage of being The Simplest Thing That Could Possibly Work. But virtuous laziness aside, it may be that craigslist, in particular, has validated the Web 0.1 aesthetic for community information services. Or it may be that my wife’s first reaction was correct, and I’ll have to look for a volunteer designer. We’ll see.

None of these issues are top of mind for me now, though, because they’re all trumped by a conceptual issue. How do I demonstrate methods of syndication, tagging, and service composition so that people will understand them and, more importantly, apply them?

Consider the version of LibraryLookup that I’ve built for this site. The protocol is, admittedly, abstract. It invites you to use your Amazon wishlist not only for its existing purposes — keeping track of stuff you’re interested in, registering for gifts you’d like to receive — but also as an interface to your local library.

Dan Chudnov thinks this is a questionable approach, and his point about interlibrary loan is well taken. But we don’t have through-the-web interlibrary loan in my town, and if we did, I’d still want to use Amazon as my primary interface to it. To me, it’s obvious why and how to wire those things together. To most people, it isn’t, and that’s the challenge.
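The core LibraryLookup move, reduced to a sketch, is just this: rewrite the ISBNs found on a wishlist into your library’s OPAC query URLs. The OPAC URL pattern below is made up; every catalog vendor’s differs:

```python
# Sketch of the LibraryLookup rewrite. The OPAC URL pattern is a made-up
# example; real catalog systems each have their own query syntax.

OPAC_PATTERN = "http://library.example.org/search?isbn={isbn}"

def lookup_urls(wishlist_isbns):
    """Turn ISBNs harvested from a wishlist into library catalog queries."""
    return [OPAC_PATTERN.format(isbn=isbn) for isbn in wishlist_isbns]

print(lookup_urls(["0812973011"]))
# ['http://library.example.org/search?isbn=0812973011']
```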

To meet that challenge, I’m stepping back from some things that have been articles of faith for me. For example, this service does not yet notify by way of RSS. Just email for now. Of course I can and will offer RSS, but in my community (as in most) that is not the preferred way to receive notifications.

Everything else about this service will be unfamiliar to most people:

  • That an Amazon wishlist can serve multiple purposes.
  • That LibraryLookup is OK with Amazon. (It is. Jeff Bezos told me so.)
  • That we should expect to be able to wire the web to suit our purposes.

The lone familiar aspect of this service, I realized, is that once in a while you get an email alerting you that something you want is available. Everyone will understand that. But the rest is going to be hard, and I’ve concluded that evangelizing RSS in this context would only muddy the waters even more.

In other ways, though, I’m pushing hard for the unfamiliar. It would be an obvious thing to use Django’s wonderful automation of database CRUD (create, read, update, delete) operations to directly manage events, businesses, outdoor activities, media, and other collections of items of local interest. People are familiar with the notion of a site that you contribute directly to, and I could do things that way, but for the most part I don’t want to. I want to show that you can contribute indirectly, from almost anywhere, and that services like Flickr can be the database.

I got a great idea about how to approach this from Mark Phippard, a software guy who lives in my town (though we’ve not yet met in person). Mark wrote to offer technical assistance, which I’m glad to receive, but I wrote back asking for help breaking through the conceptual barrier. How do I motivate the idea of indirect, loosely-coupled contribution?

Mark mentioned that one of his pet peeves is the dearth of online information about local restaurants. You can find their phone numbers on the web, but he’d like to see their menus. That’s a perfect opportunity to show how Flickr can be used as a database. If Mark, or I, or someone else scans or photographs a couple of restaurant menus and posts them to Flickr, tagged with ‘restaurant’ and ‘menu’ and ‘elmcityinfo’, we’ll have the seed of a directory that anyone can help populate very easily. Along the way, we might be able to show that Flickr isn’t the only way to do it. A blog can also serve the purpose, or a personal site with photo albums made and uploaded by JAlbum. So long as we agree on a tag vocabulary, I can federate stuff from a variety of sources.
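Once items from different sources are normalized, the federation step is trivial. Here’s a sketch; the dict shape is my own normalization, not any particular API’s:

```python
# Sketch of federation by tag vocabulary: items from any source qualify
# for the directory if they carry all the agreed-on tags. The item dict
# shape is an illustrative normalization.

REQUIRED_TAGS = {"restaurant", "menu", "elmcityinfo"}

def directory(items):
    return [i for i in items if REQUIRED_TAGS <= set(i["tags"])]

items = [
    {"source": "flickr", "title": "Luca's menu, p.1",
     "tags": ["restaurant", "menu", "elmcityinfo"]},
    {"source": "blog", "title": "vacation photo", "tags": ["travel"]},
]
print([i["title"] for i in directory(items)])  # ["Luca's menu, p.1"]
```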

And now, I’m off to collect some local restaurant menus. A nice little fieldwork project for my sabbatical!

A conversation with Paul English about customer service and human dignity

This week’s podcast features Paul English. He’s a software veteran who’s been VP of technology at Intuit and now runs an Internet travel search engine, but is best known for the IVR Cheat Sheet. Now part of the gethuman project, this popular database of voice-system shortcuts makes it easier for people to get the human assistance they crave when calling customer service centers.

The gethuman project isn’t just a list of IVR hacks anymore. It’s evolved into a consumer movement that publishes best practices for quality phone service and rates companies’ adherence to those best practices.

Although human-intensive customer service is usually regarded as costly and inefficient, operations like craigslist — where Craig Newmark’s title is, famously, customer service representative and founder — invite us to rethink that conventional wisdom. Paul English’s own customer service operation was likewise inspired by craigslist. He says that making his engineers directly responsible for customer service has done wonders for the software development process. Because they’re on the front lines dealing with the fallout from poor usability, they’re highly motivated to improve it.

We also discussed web-based data management. The original IVR Cheat Sheet was done with Intuit QuickBase, an early and little-known entrant into a category that’s now heating up: web databases.

Finally, we talked about Partners in Health, the organization to which Paul English donates his consulting fees. The story of Partners in Health is told in Tracy Kidder’s book Mountains Beyond Mountains: Healing the World: The Quest of Dr. Paul Farmer. At the end of the podcast I mention that I’d added that book to my Amazon wishlist. The other day, while looking for something to listen to on an afternoon run, I checked my RSS reader and saw that the book was available in my local library in audio format. Sweet! Two afternoon runs later, I’m halfway through. It’s both an inspirational tale about Paul Farmer’s mission and a case study in how holistic health care systems can operate far more cost-effectively than most do today.

PowerBook rot

Back in 2003 I wrote an essay on the dreaded syndrome of Windows rot. As fate would have it, I am still using that same machine, and it’s been quite stable since then. In a year or so, we’ll start hearing opinions about the relative rot-resistance of Vista versus XPSP2. But meanwhile, I’m plagued by a different syndrome: PowerBook rot.

Both of my PowerBooks are afflicted. About six months ago, my 2001-era Titanium G4 began to suffer sporadic WiFi signal loss along with the kinds of narcolepsy and spontaneous shutdowns that many new Intel Macs have exhibited. I’ve tried all the obvious things, including reseating the Airport card and resetting the NVRAM and PMU, but to no avail. The WiFi is negotiable, I could try a different card or just use the machine at home on a wired LAN, but if I can’t fix the worsening narcolepsy and shutdowns it’s all over. Something’s gone funky on the motherboard, I guess, and this machine’s too old and beat up to justify replacing it.

Then, a couple of weeks ago, my 2005-era G4 caught the spontaneous shutdown bug. I wondered if it might be protesting my new job, but when I noticed half my RAM was missing, I diagnosed lower memory slot failure. So now that machine is away having its motherboard replaced, a procedure that appears to be suspiciously routine for my local Apple store.

It’s always dangerous to extrapolate from anecdotal experience, and there’s never good data on this kind of thing, but I must say that while researching these problems I’ve seen a lot of bitching about PowerBooks. Is it just me, or are these things not built to last?

An experiment in online community

For my sabbatical project, I’m laying foundations for a community website. The project is focused on my hometown, but the idea is to do things in a way that can serve as a model for other towns. So I’m using cheaply- or freely-available infrastructure that can be deployed on commodity hosting services. The Python-based web development framework Django qualifies, with some caveats I’ve mentioned. And Django is particularly well-suited to this project because it distills the experiences of the developers of the top-notch community websites of Lawrence, Kansas. The story of those sites, told here by Rob Curley, is inspirational.

I’m also using Assembla for version control (with Subversion), issue tracking (with Trac), and other collaborative features described by my friend Andy Singleton in this podcast (transcript). It’s downright remarkable to be able to conjure up all this infrastructure instantly and for free. I’m a lone wolf on this project so far, but I hope to recruit collaborators, and I look forward to being able to work with them in this environment.

I have two goals for this project. First, aggregate and normalize existing online resources. Second, show people how and why to create online resources in ways that are easy to aggregate and normalize.

Online event calendars are one obvious target. The newspaper has one, the college has one, the city has one, and there’s also a smattering of local events listed in places like Yahoo Local, Upcoming, and Eventful. So far I’ve welded four such sources into a common calendar, and wow, what a messy job that’s been. The newspaper, the college, and the city offer web calendars only as HTML, which I can and do scrape. In theory the Yahoo/Upcoming and Eventful stuff is easier to work with, but in practice, not so much. Yahoo Local offers no structured outputs. Upcoming does; the events reflected into it from Yahoo Local use hCalendar format, but finding and using tools to parse that stuff always seems to involve more time and effort than I expect. Eventful’s structured outputs are RSS and iCal. If you want details about events, such as location and time, you need to parse the iCal, which is non-trivial but doable. If you just need the basics, though — date, title, link — it’s trivial to get that from the RSS feed.
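To show what the easy case looks like, here’s how the date/title/link basics come out of an RSS 2.0 feed with nothing but the Python standard library (scraped HTML and parsed iCal get normalized into the same tuples):

```python
# Sketch: extract (date, title, link) basics from an RSS 2.0 feed using
# only the standard library. The sample feed content is made up.

import xml.etree.ElementTree as ET

def events_from_rss(rss_text):
    root = ET.fromstring(rss_text)
    out = []
    for item in root.iter("item"):
        out.append((item.findtext("pubDate"),
                    item.findtext("title"),
                    item.findtext("link")))
    return out

rss = """<rss version="2.0"><channel><item>
<title>Contra dance</title><link>http://example.org/dance</link>
<pubDate>Fri, 09 Feb 2007 19:00:00 GMT</pubDate>
</item></channel></rss>"""
print(events_from_rss(rss))
```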

I’m pretty good at scraping and parsing and merging, but I don’t want to make a career out of it. The idea is to repurpose various silos in ways that are immediately useful, but also lead people to discover better ways to manage their silos — or, ultimately, to discover alternatives to the silos.

An example of a better way to manage a siloed calendar would be to publish it in structured formats as well as HTML. But while that would make things easier for me, I doubt that iCal or RSS have enough mainstream traction to make it a priority for a small-town newspaper, college, or town government. If folks could flip a switch and make the legacy web calendar emit structured output, they might do that. But otherwise — and I’d guess typically — it’s not going to happen.

For event calendars, switching to a hosted service is becoming an attractive alternative. In the major metro areas with big colleges and newspapers, it may make sense to manage event information using in-house IT systems, although combining these systems will require effort and is thus unlikely to occur. But for the many smaller communities like mine, it’s hard to justify a do-it-yourself approach. Services like Upcoming and Eventful aren’t simply free; they’re much more capable than homegrown solutions will ever be. If you’re starting from scratch, the choice would be a no-brainer — if more people realized these services were available, and understood what they can do. If you’re already using a homegrown service, though, it’ll be hard to overcome inertia and make a switch.

How to overcome that inertia? In theory, if I reflect screenscraped events out to Upcoming and/or Eventful, the additional value they’ll have there will encourage gradual migration. If anyone’s done something like that successfully, I’d be interested to hear about it.

On another front, I hope to showcase the few existing local blogs and encourage more local blogging activity. Syndication and tagging make it really easy to federate such activity. But although I know that, and doubtless every reader of this blog knows that, most people still don’t.

I think the best way to show what’s possible will be to leverage services like Flickr and YouTube. There are a lot more folks who are posting photos and videos related to my community than there are folks who are blogging about my community. Using text search and tag search, I can create a virtual community space in which those efforts come together. If that community space gains some traction, will people start to figure out that photos and videos described and tagged in certain simple and obvious ways are, implicitly, contributions to the community space? Might they then begin to realize that other self-motivated activities, like blogging, could also contribute to the community space, as and when they intersect with the public agenda?

I dunno, but I’d really like to see it happen. So, I’m doing the experiment.

A conversation with John Halamka about health information exchange

Dr. John Halamka joins me for this week’s podcast. He’s a renaissance guy: a physician, a CIO, and a healthcare IT innovator whose work I mentioned in a pair of InfoWorld columns. Lots of people are talking about secure exchange of medical records and portable continuity of care documents. John Halamka is on the front lines actually making these visions real. Among other activities he chairs the New England Health Electronic Data Interchange Network (NEHEN), which began exchanging financial and insurance data almost a decade ago and is now handling clinical data as well in the form of e-prescriptions. The technical, legal, and operational issues are daunting, but you’ll enjoy his pragmatic style and infectious enthusiasm.

We also discuss the national initiative to create a standard for continuity of care documents that will provide two key benefits. First, continuity both within and across regions. Second, data on medical outcomes that can be used by patients to choose providers, and by providers to study the effectiveness of procedures and medicines.

Websites mentioned in this podcast include:

Oh, and there’s a new feed address for this series:

Django gymnastics

Recently I’ve been noodling with Django, a Python-based web application framework that’s comparable in many ways to Ruby on Rails. It appeals to me for a variety of reasons. Python has been my language of choice for several years, and going forward I expect it to help me build bridges between the worlds of LAMP and .NET. Django’s templating and object-relational mapping features are, as in RoR, hugely productive. And Django’s through-the-web administration reminds me of a comparable feature in Zope that I’ve always treasured. It’s incredibly handy to be able to delegate basic CRUD operations to trusted associates, who can use the built-in interface in lieu of the friendlier one you’d want to create for the general public.

The recommended way to deploy Django is to run it under mod_python, the Apache module that keeps Python interpreters in memory for high performance. But a lot of popular web hosting services don’t support that arrangement. For example, I just signed up for an account at BlueHost, the service used by the instructional technologists at the University of Mary Washington, and I looked into what it would take to get Django working in that environment.

Despite helpful clues, it still took a while to work out the solution. In the process I reactivated dormant neurons in the parts of my brain dedicated to such esoterica as mod_rewrite and FastCGI, but I’d rather have been working with Django than working out how to configure it.
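For anyone retracing these steps, the arrangement amounts to an .htaccess fragment along these lines. This is a sketch, assuming a FastCGI-enabled Apache and a Django dispatcher script; “mysite.fcgi” is my placeholder name, not a file the host provides.

```apache
# Route every request that isn't a real file through
# Django's FastCGI dispatcher ("mysite.fcgi" is a placeholder)
AddHandler fastcgi-script .fcgi
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]
```

The dispatcher itself is a few lines of Python that set DJANGO_SETTINGS_MODULE and hand control to Django’s FastCGI runner.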

By way of contrast, setting up WordPress — a more well-known and popular application — was a one-click operation thanks to Fantastico, an add-on installer for the cPanel site manager.

I’ve heard it said that a compelling screencast is one key factor influencing the adoption of a new web-based application. One-click install in shared hosting environments has to be another. For a while, anyway, until the virtualization juggernaut gives everyone the illusion of dedicated hosting.

Video knowledge

Sean McCown is a professional database administrator who writes the Database Underground blog for InfoWorld. Lately his postings have been full of references to videos. One day, he watched a Sysinternals training flick, combining live video with screencasting, and made immediate use of it to pinpoint and fix a problem. Another day, he made his own training screencast:

I sat down last night and made a video of the restore procedure for one of our ETL processes. It was 10mins long, and it explained everything someone would need to know to recover the process from a crash. [Database Underground: Not just a DR plan anymore]

Screencasting is poised to become a routine tool of business communication, but there are still a few hurdles to overcome. For starters, video capture isn’t as accessible as it ought to be. Second Life gets it right: there’s always a camera available, and you can turn it on at any time. Every desktop OS should work like that.

Meanwhile, I’ll reiterate some advice: Camtasia is an excellent tool for capturing screen video on Windows, but its $300 price tag covers a lot of editing and production features that you may never use if you’re capturing in stream-of-consciousness mode for purposes of documentation. In that case, the free Windows Media Encoder is perfectly adequate.

On the Mac I’d been using Snapz Pro X for short flicks, but it takes forever to save long sessions. Next time I do a long-form Mac screencast I’ll try iShowU. That’s what Peter Wayner used for his AJAX screencasts. Peter says that iShowU saves instantly. I tried the demo, and it does.

Finally, there’s the odd hack I tried here: I used the camera’s display as the Mac’s screen, and captured to tape. If the 720×480 format is appropriate for your subject — and when the focus is a single application window, it can be — this is a nice way to collect a lot of raw material without chewing up a ton of disk space.

Capture mechanics aside, I think the bigger impediment is mindset. To do what Sean did — that is, narrate and show an internal process, for internal consumption — you have to overcome the same natural reticence that makes dictation such an awkward process for those of us who haven’t already incorporated it into our work style. You also have to overcome the notion, which we unconsciously absorb from our entertainment-oriented culture, that video is a form of entertainment. It can be. Depending on the producer, a screencast documenting a disaster recovery scenario could be side-splittingly funny. And if the humor didn’t compromise the message, a funny version would be much more effective than a dry recitation. But even a dry recitation is way, way better than what’s typically available: nothing.

Trailing-edge requirements for a community app

One of the projects I’m tackling on sabbatical is a community version of LibraryLookup. The service I wanted to create is described here: an RSS feed that’s updated when a book on your Amazon wishlist becomes available in your local library. Originally I planned to build a simple web application that would register Amazon wishlist IDs and produce custom RSS feeds for each registrant. But as I thought about what would make this service palatable to a community, I saw two problems with that approach:

  1. Familiarity. Most folks will not be familiar with RSS. If the primary goal is to get people using the service, rather than to evangelize RSS, it should use the more familiar style of email notification.
  2. Deployability. A web application needs to be hosted somewhere. In most communities, the library won’t be able to host the service on its own infrastructure. But if it’s hosted elsewhere, there will be a (rational) reluctance to take a dependency on that provider.

To address the first concern, I’m doing this as an old-fashioned email-based app. You subscribe or unsubscribe by sending email with a command and a wishlist ID in the Subject: header. And you receive notifications about book availability by way of email.
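The command protocol can be as simple as a verb plus a wishlist ID in the subject line. Here’s a minimal sketch of that parsing; the names and the exact command syntax are my own placeholders, not necessarily what the finished script will use.

```python
import re

# Accept subjects like "subscribe 1A2B3C4D5E" or "unsubscribe 1A2B3C4D5E"
# (hypothetical syntax; the real command format may differ)
CMD_PAT = re.compile(r'^(subscribe|unsubscribe)\s+(\w+)$', re.IGNORECASE)

def parse_command(subject):
    """Return (verb, wishlist_id), or None if the subject isn't a command."""
    m = CMD_PAT.match(subject.strip())
    if m is None:
        return None
    return (m.group(1).lower(), m.group(2))

print(parse_command('Subscribe 1A2B3C4D5E'))  # ('subscribe', '1A2B3C4D5E')
print(parse_command('Re: dinner plans'))      # None
```

Anything that doesn’t parse as a command is simply ignored, which keeps the mailbox robust against spam and chatter.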

To address the second concern, I’m doing it as a client-side Python script, so that the only dependency is some version of Python and an Internet connection.

Because a library might not even be able to dedicate an email address for this purpose, I’m exploring the use of Gmail as the communication engine. In order for that to work, Python has to be able to make secure and authenticated POP and SMTP connections. Happily, it can.

The recipe for connecting Python to Gmail’s POP service is trivial:

import poplib
p = poplib.POP3_SSL('pop.gmail.com', 995)

The recipe for connecting Python to Gmail’s SMTP service is less obvious:

import smtplib, base64
s = smtplib.SMTP("smtp.gmail.com", 587)
s.ehlo()
s.starttls()   # Gmail requires TLS before authenticating
s.ehlo()
auth = '\x00USERNAME\x00PASSWORD'   # your full Gmail address and password
eauth = base64.b64encode(auth)
s.docmd("AUTH PLAIN", eauth)
This won’t work without authentication, but neither will it work with the SMTP module’s login() method, which uses the wrong authentication type (LOGIN rather than PLAIN, I think).

Any POP/SMTP servers can be used, of course, so there’s no dependency on Gmail here, but it’s nice to see that Gmail can easily be pressed into service if need be.

It feels retro and trailing-edge to do an email-based app, but in order to make it familiar and deployable, that seems like the right approach.

Larry O’Brien serves up three hardball questions

It is both sobering and gratifying to see folks asking the same questions about my upcoming gig that I’ve been asking myself. Larry O’Brien serves up three hardballs:

1. To what extent will the inherent imperative to advocate MS technologies stifle him?

Note that there is also a weird corollary: What about the MS technologies that I’m deeply fond of? In the past, nobody (well, hardly anybody) would question my motives if I got fired up about Monad or LINQ or IronPython. Now that’s bound to happen.

In any case, the only way this will work is if I explore and advocate things I believe in. So that’s what I plan to do. Some of those things will exist within the MS portfolio, some outside. Identifying value wherever it exists, and finding useful ways to extract and recombine it, is what I do. I hope I can continue to do that effectively as an MS employee but, of course, we’ll all just have to wait and see.

2. Will he be reduced to just a conduit of information (Microsoft’s new A-list blogger) or will he continue to contribute new creations?

Hands-on tinkering is critical to my method. I have in mind a long-running project that will enable me to try out lots of interesting things, while creating something useful in the process. I don’t want to say more until I’ve laid some foundations, but yes, I do plan to keep on contributing in the modest ways that I always have.

3. Will direct knowledge of unannounced initiatives keep him quiet on the very subjects on which he’s passionate?

Part of this career change goes beyond switching employers. The disconnect between the geek world and the civilian world has really been bugging me lately. Leading edge aside, there’s so much potential at the trailing edge that languishes because nobody helps people connect the dots. On the desktop, on the web, and everywhere else touched by computers and networks, people are running on 2 cylinders. And when we upgrade their computers and operating systems, that doesn’t tend to change.

I really, really want to show a lot of people how to use more of what they’ve got. Smarter methods of communication. More powerful data analysis and visualization. Surprisingly simple kinds of integration. These are my passions, and as Larry points out, they tend to involve fairly simple and accessible tools and techniques. In theory, to pursue this part of my mission, I don’t need to know about every secret project in the pipeline. Whether it’ll actually work out that way in practice, I dunno.

Being here, being there

Mike Champion raises an interesting point that applies to Microsoft but also more broadly:

The culture at MS is very F2F-oriented…if you’re out of sight, you have to work hard not to be out of mind.

But then he adds:

Geographic distance will help keep you from getting sucked into the groupthink of whatever group you’re in. Microsoft collectively needs to be constantly reminded what the world looks like to people whose view isn’t fogged up by our typical drizzle or distracted by the scenery on the sunny days.

We’re entering an era in which our personal, social, and professional lives are increasingly network-mediated. Trust-at-a-distance is a new possibility, with economic ramifications that everyone from Yochai Benkler to Jim Russell is trying to figure out. As someone who’s worked remotely for 8 years, and is about to work remotely for a company with relatively few remote employees, this question is extremely interesting to me.

On the one hand, I’ve learned that I can accomplish a lot because I spend an abnormal percentage of my waking hours in flow rather than in meetings. I’ve also learned that network-mediated interactions can be more productive than F2F interactions. Consider my August screencast with Jim Hugunin, or my May screencast with Anders Hejlsberg, or indeed any of the other screencasts in that series. They’re all scheduled events, mediated by telephone and screensharing. I can’t see how physical colocation would improve them.

On the other hand, there’s the “watercooler” effect: being in a place, you see and hear and smell things that aren’t otherwise transmitted through the network. I have no doubt whatsoever that shared physical space matters in ways we can’t begin to describe or understand.

But as collaboration in shared virtual space takes its rightful place alongside collaboration in shared physical space, shouldn’t a company whose products are key enablers of virtual collaboration be eating its own dogfood?

Of course things are never as black-and-white as they appear. So I’m going to bookmark this posting and return to it in six months. Hopefully by then I’ll know more about the value of being here and of being there.

Turning 50

It’s been an unusual week. On December 3 I turned 50. On December 8 I announced that I’m leaving InfoWorld and joining Microsoft. It’s not a coincidence. When I saw 50 looming, a couple of years ago, I started to get really clear about what I want to do with the next 25. I’ve been laying out the vision to anyone who will listen, and I’ll continue to do so here, but first things first. Yesterday’s announcement left a couple of questions unasked and unanswered, so without further ado:

Q: Are you relocating to Redmond?

A: No. I’ll continue to work from my home office in New Hampshire. At first I’ll be spending maybe one week in four in Redmond, because there’s a lot of connecting to do. In the long run I may wind up traveling almost that much, but I hope to travel to locations other than Redmond as often as not.

In January, for example, I’ll be speaking at Technology, Knowledge, and Society in Cambridge, UK. And in May, at GOVIS in New Zealand. As was true for my recent talks in Guadalajara and Ann Arbor, I don’t expect to encounter any Silicon Valley regulars at these events. I do expect to give and to receive important insights about how people everywhere can use infotech to further their occupational, educational, personal, social, and civic agendas.

Q: What will happen to your archive?

A: I’ve experienced namespace disruption before, and am very keen to avoid it this time around. Fortunately it’s in InfoWorld’s best interest to preserve my blog archive. Worst case, the material will be rehosted because nobody else at InfoWorld uses Radio UserLand anymore. In that case, I’ve offered to help redirect the current namespace to a different one. I’m keeping my fingers crossed, but I hope there won’t be a problem.

Q: Why would you work for them? Not since Standard Oil has such a brutal vicious rapacious thuggish company with such power existed.

A: That question, in private email from someone I deeply respect, reminded me that yesterday’s Q and A left some important things unsaid. In particular, although I mentioned Ray Ozzie and Kim Cameron and Jean Paoli and Jim Hugunin and JJ Allaire, I egregiously failed to mention such equally important folks as:

Tim Fahlberg, who wants to use screencasting to reinvent math education, and who was thrilled that I picked up on his mission and amplified it in InfoWorld, but who even so gained only a tiny bit more of the exposure he deserves.

Dan Thomas, who’s pumping the operational data of city government out onto the web where, despite all my efforts so far, nobody except me sees that it’s there or imagines what to do with it.

Mike Frost, who’s building out a version of the energy web today instead of waiting for government to never do it.

To these stories I’ll add my own NHPR commentaries about online-map-enabled community work, rediscovery of the local library, and the social capital we can build when we work from home.

My proposal was to be an evangelist for the Net, to continue discovering and telling these kinds of stories, and to use them as the framework within which to explore and explain Microsoft’s current and emerging technologies.

When I met with Jeff Sandquist I had just finished this podcast with Jim Russell. It’s a story about migration and the mobility of intellectual capital, refracted through Jim’s experience with the Pittsburgh diaspora. Neither Microsoft’s nor any other vendor’s technologies are discussed. I’m certain that the ideas Jim lays out in this podcast will inspire new business models for social software, but it’s all rather speculative.

I explained to Jeff that it had taken me most of a day to interview Jim Russell, then edit our rambling two-hour discussion down to something more coherent. And I said: “Reality check, you’re OK with that?” He said yes. I do not regard that answer as evidence of thuggishness or rapaciousness. I regard it as a sign of enlightenment, and I am calibrating my expectations accordingly.