Tony Hammond works with the new technology team at Nature Publishing Group. His company publishes a flock of scientific journals in print and online including, most prominently, Nature. It also operates Connotea, a social bookmarking service for scientists. In this week’s podcast we talk about digital object identifiers which are, in effect, super-URLs designed to survive commercial churn and to work reliably for hundreds of years.
Many of us are becoming publishers nowadays, and we’d like to imagine that all our stuff could enjoy that level of consistency and durability. Few of us are prepared to make the necessary investment, but it’s interesting to hear from someone who has.
If you do decide to explicitly publish your bookmarks in a sidebar widget on your blog, it may change the way you bookmark. It did for me, anyway. The balance shifted away from purely personal information management and toward the kind of editorial sensibility that governs the blog. It was around this time that private bookmarks became available in del.icio.us, and that’s been helpful. If I’m researching something and I just want to collect a list of resources labeled with some obscure tag meaningful only to me, there’s no need to flow that stuff onto my blog page. Conversely, if I want to draw attention to something in a public way, I can. It sounds great in principle, but in practice I think the friction involved in making that choice on a per-item basis made me less likely to bookmark either publicly or privately.
Here’s another unintended consequence that illustrates the surprising things that can happen in this web of information we are spinning. I realized the other day that my public del.icio.us bookmarks are appearing on every page of my mothballed InfoWorld blog. To stop that happening, I’d need to tweak its template — but I no longer have access to it!
Meanwhile, ironically, I haven’t yet figured out how (or whether) I can inject the del.icio.us JSON feed into the hosted WordPress blog I’m running here. For now, I’ve decided to embrace this constraint. Perhaps if my del.icio.us account feels less directly connected to what I am publishing now, I’ll use it more freely.
These scenarios are rather odd, quite interesting, and slightly scary. As building software systems with components morphs into building information systems with feeds, they’ll become increasingly common.
As I mentioned the other day, it’d be useful to have audio-only versions of many of the Channel 9 videos for folks who (like me) have more time to listen away from the computer than to watch at the computer. Pretty soon that’ll start happening for new stuff. Then, of course, there’s the archive. One proposal was to sift out the most popular videos and convert them first. But what does “most popular” mean? It depends.
Pageviews and downloads are one way to measure, and of course the sites have those stats. Citations are another. I’m always interested to see how frequently things are being cited in blog postings and shared bookmark systems. The visualization I did here, which tracks citations of the ACLU Pizza fictional screencast as seen through the lenses of del.icio.us and Bloglines, is a nice example.
So I scooped up the video URLs mentioned in this RSS feed and ran the same sort of analysis. The Bloglines results were fine, but the del.icio.us results were wonky. Eventually I found the problem. Del.icio.us, unlike Bloglines, treats the URLs that you feed to its citation counter in a case-sensitive way. And there are multiple spellings of the “same” URL pattern on Channel 9.
Because del.icio.us citations attach to particular spellings, you’d need to query using each of them to get the whole story.
This case-sensitivity is one of the subtler forms of a much more general problem. Content management systems quite commonly provide different URLs for the same resource, and each of those is an invitation for citation indexing systems to wander down divergent paths.
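Since the querying side can’t change how a case-sensitive index keys its counts, the workaround is to enumerate the known spellings of a URL and sum their counts. Here’s a minimal Python sketch of that idea; the URLs and counts below are hypothetical stand-ins for whatever citation API you’d actually be querying:

```python
def aggregate_citations(url_variants, count_for):
    """Sum citation counts across case-variant spellings of one URL.

    url_variants: iterable of spellings that name the same resource.
    count_for: function mapping a URL string to its citation count
    (a stand-in for a real lookup against a case-sensitive index).
    """
    seen, total = set(), 0
    for url in url_variants:
        if url in seen:                 # guard against duplicate spellings
            continue
        seen.add(url)
        total += count_for(url)
    return total

# Hypothetical counts, keyed the way a case-sensitive index keys them:
counts = {
    "http://channel9.msdn.com/ShowPost.aspx?PostID=169962": 12,
    "http://channel9.msdn.com/showpost.aspx?PostID=169962": 5,
}
print(aggregate_citations(counts.keys(), lambda u: counts.get(u, 0)))  # 17
```

The real work, of course, is knowing which variants to enumerate, which is exactly why strong URL discipline on the publishing side is the better fix.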
For a long time I’ve thought that strong URL discipline was the best way to avoid this problem. But there are other approaches. In the academic world, where citations are taken very seriously, digital object identifiers play a much more important role than they do on the web. We web folk could learn a thing or two from our academic cousins, and that’s one reason why I’ll be interviewing Nature.com’s Tony Hammond for an upcoming podcast.
If you’re curious, by the way, here’s the data from Bloglines. I don’t have the del.icio.us data yet because I managed to get myself blocked from the server while futzing around — sorry Josh, I’ll query more slowly next time.
|158||169962||Otto Berkes – Origami’s Architect gives first look at Ultramobile PCs|
|134||151853||Robert Fripp – Behind the scenes at Windows Vista recording session|
|090||271984||Scott Guthrie – MIX07, Work, and Personal Details Revealed|
|057||270965||Windows Home Server|
|043||261254||Looking at XNA – Part Two|
|034||116347||Steve Ball – Learning about Audio in Windows Vista|
|029||116702||Paul Vick and Erik Meijer – Dynamic Programming in Visual Basic|
|028||019174||Andy Wilson – First look at MSR’s "touch light"|
|024||056393||Steve Swartz – Talking about SOA|
|010||268480||Special Holiday Episode IV: Don Box and Chris Anderson|
|010||256597||WCF Ships, Doug Purdy Dances, and Don Box Sings|
|009||252457||A Chat and Demo about LINQ with Wee Hyong (Singapore MVP SQL)|
|009||208891||What’s Microsoft Speech Server (Beta)?|
|008||039280||Herb Sutter – The future of Visual C++, Part I|
|008||010189||Anders Hejlsberg – What’s so great about generics?|
|006||273697||Anders Hejlsberg, Herb Sutter, Erik Meijer, Brian Beckman: Software Composability and the Future of Languages|
|006||272229||Ulrik Molgaard Honoré: Production planning with Dynamics AX|
|006||229585||Programming in the Age of Concurrency: The Accelerator Project|
|006||159231||Office 12 – Word to PDF File Translation|
|005||267098||The Best XNA Movie in the UNIVERSE|
|005||221610||Shankar Vaidyanathan – VC++ IDE: Past, Present and Future|
|004||274865||Scott Hanselman & Jeffrey Snover Discuss Windows PowerShell|
|004||009894||Building a Picture Frame with Windows CE 5.0 – Step 1|
|003||274069||Brad Abrams on AJAX for ISVs|
|003||266221||MultiPoint: What. How. Why.|
|003||248575||Software Security at Microsoft: ACE Team Tour, Part 2|
|003||246477||Exploring the new Domain-Specific Language (DSL) Tools with Stuart Kent|
|003||237142||VSTO 2005 Second Edition Beta: Martin Sawicki|
|003||029505||Gabriel Torok – Protecting .NET applications through obfuscation|
|002||271257||Adam Carter and Mike Adams on Managed Services|
|002||269462||Tara Roth: Not your father’s world of Software Test|
|002||265667||Revisiting WiMo – The Windows Mobile Robot|
|002||263358||Joe Stegman talks about the "WPF/E" CTP|
|002||013653||Jason Flaks – What is Windows Media Connect?|
|001||274644||Beam me over, Scotty: Introducing Transporter Suite|
|001||274641||Sharepoint Templates: What. How. Why.|
|001||273337||New Vista GUI Stuff For Devs|
|001||273061||Mike Barrett: Testing and Deploying IPV6|
|001||270453||Technology Roundtable #1|
|001||267604||UK Community: DeveloperDeveloper Day|
|001||263442||Expression – Part One: The Overview|
|001||232481||WPF Chart Control (from the perspective of summer interns)|
|000||273120||Ask The Experts! : Anders Hejlsberg|
|000||271378||Ask The Experts! : KD Hallman|
|000||264874||Rob Short: Operating System Evolution|
|000||263902||Windows 2000 to Windows Vista: Road to Compatibility|
|000||238608||Windows Vista: Ready for ReadyDrive|
Today’s 4-minute screencast, which explores Vista’s common feed system, serves multiple purposes. First, I wanted to familiarize myself with this stuff, and do so in a way that would elicit responses that help me understand how other folks are reacting to it. I am intensely interested in the reasons why people do or don’t take to the notion of reading RSS feeds. Mostly, as we know, they haven’t.
The assumption is that surfacing the concepts more prominently in the OS will help, and I think that’s true, but there’s a lot going on here. For example, even just explaining to people how feeds are like-but-unlike email is a huge challenge. When you start from the perspective of reading feeds versus reading email, it’s hard to see the difference. One key distinction — that feeds are by-invitation-only and can be easily and effectively shut down, versus email which is uninvited and can be very hard to deflect — is fairly abstract and hasn’t sunk in yet for most people.
When you start from the perspective of writing feeds versus writing email, the differences, and the benefits that flow from those differences, are even more compelling — at least to me. But the reasons why are even more abstract: manufactured serendipity, maximization of scope, awareness networking. How might Vista, or any desktop operating system, help surface these concepts?
I also made this screencast to find out what it’s like to make screencasts of Vista. I haven’t yet installed Camtasia on my newly-acquired Vaio laptop, because I want to repave that machine with a final version of Vista that I don’t have yet. But no worries, there’s always good old Windows Media Encoder. I’ve always said it’s an underappreciated jewel, and evidently that’s still true as it is not included in Vista.
After capturing with Windows Media Encoder I transferred the file to my XP box for editing in Camtasia. As always, the process reminded me of Pascal’s famous quote: “If I had time, I would write a shorter letter.” Boiling a screencast down to its essence is really hard. One of the biggest challenges is meshing the video footage with the audio narration. I want to produce a series of screencasts that illustrate this process, but I’m not sure how best to separate out the kinds of general principles I outlined here from details of specific applications and delivery formats.
A couple of final points about the RSS features shown in the screencast. It shows how to acquire feeds one at a time into the common pool using IE, and how to acquire batches of feeds into Outlook by importing an OPML file, but there’s no obvious way to load a batch from OPML into the common pool. I know I could write that app, but is there one lying around somewhere that I’ve missed? Also, how do you batch-delete feeds from Outlook once you’ve acquired them via OPML?
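Loading a batch from OPML into the common pool splits into two steps: pull the feed URLs out of the OPML file, then hand each one to the Windows RSS Platform. The first step is plain XML work. Here’s a sketch in Python; the OPML snippet is invented for illustration, and the actual subscription call, which would go through the Windows RSS Platform (the Microsoft.FeedsManager COM object), is left as a comment:

```python
import xml.etree.ElementTree as ET

def feed_urls_from_opml(opml_text):
    """Return the xmlUrl of every feed outline in an OPML document."""
    root = ET.fromstring(opml_text)
    return [node.attrib["xmlUrl"]
            for node in root.iter("outline")
            if "xmlUrl" in node.attrib]

# An invented two-feed OPML file:
opml = """<opml version="1.1">
  <body>
    <outline text="Feed one" xmlUrl="http://example.org/one/rss.xml"/>
    <outline text="Feed two" xmlUrl="http://example.org/two/rss.xml"/>
  </body>
</opml>"""

for url in feed_urls_from_opml(opml):
    # Subscribing each URL would then go through the Windows RSS
    # Platform, e.g. the Microsoft.FeedsManager COM object.
    print(url)
```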
Yesterday I noticed that my new home page was a disaster in IE7. I’d cribbed a CSS-based three-column layout from Google’s Page Creator. Then, as is my custom, I’d pruned away as much of the complexity as I could. But evidently I pruned too much of, or the wrong parts of, the CSS gymnastics encoded in that example. So I wrecked the layout for Internet Explorer.
As per comments here, standards support in IE7 is a thorny issue and discussions of it are heavily polarized. But I aim to be (cue Jon Stewart) a uniter not a divider. So here I simply want to give props to this article by Matthew Levine, from A List Apart. From it, I cribbed the wrapper-free Holy Grail. It’s a minimalistic CSS-based three-column layout that seems to work well in every browser I’ve tried: IE6, IE7, Firefox, Safari.
To be honest, although I’m hugely fond of CSS styling, I’ve always struggled with CSS layouts, and I know I’m not the only one in that boat. When you read the explanation in Matthew’s article, you can see why. CSS layout is like one of those games where you slide 15 tiles around in a 16-square matrix. In principle it is a declarative language, but in practice the techniques are highly procedural: Step 1, Step 2, etc.
Whether that’s good or bad, and to what extent CSS layout really does trump table-based layout — these are interesting questions but separate discussions. The bottom line here is that I wanted to do a CSS-based layout, I wanted it to be as minimal as possible so I’d have the best shot at understanding and maintaining it, and I wanted it to behave reasonably in a variety of browsers. For that I needed the pattern, or recipe, which Matthew’s article helpfully provided.
It’s appropriate that he calls the technique the Holy Grail because the three-column layout applies very broadly. Yet, though there are tips and tricks all around the web, I’m not aware of a well-known cookbook, or pattern library, that:
- Identifies the handful of most popular layouts.
- Illustrates minimal, bare-bones CSS treatments.
- Certifies those treatments for cross-browser use.
For extra credit, this cookbook could filter the recipes according to whether support in each of the major browsers is must have or nice to have or optional.
Cross-browser issues have always been a headache, they still are, and the reality is that dealing with them requires hacks. The more we consolidate and simplify the hacks, a la Matthew Levine’s holy grail, the better.
For a couple of years I’ve been trying to transfer my experience of listening to podcasts to my dad. There’s so much interesting stuff to listen to, and he has both the time and the interest to listen, so in theory it’s a perfect fit. But in practice, though he’s heard a few of the talks I’ve forwarded to him as links, I haven’t managed to create the “aha” moment for him. This past week, though, may have been the tipping point. He’d landed in the hospital and I was determined to give him an alternative to the in-room TV. So I loaded up my old 256MB Creative MuVo with a selection of favorite talks, bought him a pair of headphones, gave him the kit, and showed him how to turn it on and press play.
It’s been a huge success. The next challenge, of course, will be to show him how to refill the gadget once he’s listened to the dozen or so hours of stuff I gave him. But I hope I’ve won the important battle. Time will tell, and I could be wrong, but my hunch is that what remains — a conversation about feeds, podcatchers, and USB cables — will be a mop-up operation.
In the tech industry, though, I think we often pretend that the mop-up operation is the battle. We talk obsessively about the user experience, and we recognize that we invariably fail to make it as crisp and coherent as it should be. But user experience is an overloaded term. I propose that we unpack it into (at least) two separate concepts. One is the basis of the “aha” moment. For now I’ll call it the use experience. In this example, it’s the experience of listening to spoken-word podcasts from sources that, just a few years ago, weren’t available.
I’ll reserve the term user experience for something else: the tax we pay in order to enjoy the use experience. This tax is not the basis of an “aha” moment. It’s expressed in terms of the devices, cables, batteries, applications, menus, dialog boxes, and — last but not least — the concepts we must grapple with in order to reliably reproduce the use experience. A great user experience makes all this crap relatively less awkward, confusing, and annoying. A lousy user experience makes it relatively more so. But the point is that it’s all crap! It’s the tax we pay to enjoy the use experience, and we want to pay as little of it as we can get away with.
How do you engineer a great use experience, as opposed to a great user experience? Part of the answer is deep personalization. So while the talks I loaded onto that MuVo for my dad included some of my favorites from ITConversations, the Long Now Foundation, TED, and elsewhere, I also included some my own podcasts. And that’s what tipped it for my dad. He’s proud of the work I do, but most of it has always been inaccessible to him. So I picked a handful of my most general-interest podcasts on education, government, and health care. And that worked really well. Every time I visited last weekend, he was listening to one of mine. But when I talked to him mid-week he was listening to the Burt Rutan TED talk that I’d hoped he would enjoy.
This is a personal story, but I’m certain the principles apply more broadly. On the Microsoft campus this past week, for example, I got together with Mike Champion for coffee and a chat. Among other things we talked about Channel 9 which he rarely tunes into, although it features a variety of things that would interest him. We also discussed the fact that, while there are spaces in his life into which he might enjoyably and profitably inject podcast listening — long bike rides, for example — he hasn’t yet done so.
Right after our chat I walked into my first team meeting with the Channel 9 and 10 folks and recalled a point I’d made a while ago, which is that video isn’t the medium of choice for Mike’s bike ride. He knows what Anders Hejlsberg and Jim Gray look like. He doesn’t have time in front of a computer (or a handheld video player) to watch them talk. But he does have time on his bike to listen to them talk. Everybody in the meeting agreed that peeling off sound tracks from the videos and making them available as podcasts is a no-brainer, so it looks like that’ll happen. Thanks in advance, Adam, for agreeing to make it happen, and please don’t take this as arm-twisting. I know you’re busy and will get to this when you can. I’m telling this story to make a larger point which I think may provoke some useful discussion.
The larger point is that all of us, me included, tend to focus on engineering the user experience and tend to forget about engineering the use experience. A better user experience, in this case, is partly about making audio files available, and partly about organizing podcast feeds so that I could subscribe to everything that comes down the pipe featuring Anders Hejlsberg or Jim Gray or other folks I want to tune in.
Those tweaks will probably lower the activation threshold enough for Mike to hop over it. But I’m not certain of that. There are still obstacles to overcome. What will motivate Mike to overcome them? An “aha” moment, a good use experience. So how do you engineer that?
I like Mike, but not enough to give him a preloaded MP3 player. I could, however, make him a mix of some Channel 9 stuff. And as I did for my dad, I’d want to include some of the other stuff that I’m always recommending to people.
How exactly to do that is an interesting question. As a hip Internet citizen and podcast aficionado I’ll be inclined to find a podcast remixing service, use it to make Mike’s mix, then point him to the feed it emits. But I actually think that would be the wrong approach. If I point him to a podcast feed, I force him to grapple with the podcatching user experience. But I don’t want to clobber him right away with a user experience. First I want to give him a satisfying use experience.
Different requirements dictate different engineering solutions. If I’m trying to engineer a delightful use experience it might be best to hand him a ZIP file of MP3s. I know he’ll know what to do with that, using any kind of MP3 player, without having to deal with any new tools or concepts.
Now of course Mike, being a typically super-smart and super-technical Microsoft employee, is perfectly able to deal with new tools and concepts. That’s what he does for a living, after all, and he does it because he likes to.
But in this context, I think that’s a red herring. Just because Mike can power through the crap doesn’t mean that he should have to, at least not right away, at least not if it can be avoided. The less to distract him from that “aha” moment, the better.
There’s probably a whole literature on this topic, and the terminological distinction I’m trying to make here may have been made differently and better elsewhere, in which case I’ll appreciate pointers to that literature. Terminology aside, I think the distinction is important in lots of ways. In terms of Channels 9 and 10, for example, it suggests the following:
1. Like video stores, Channels 9 and 10 could offer staff picks.
2. The picks could be made available not only as feeds, but also as bundles.
3. The picks could mix store-brand stuff from 9 and 10 with related stuff from elsewhere.
4. Viewers and listeners who follow 9/10 (and other sources) could use remix services hosted at 9/10 (or elsewhere) to share their own picks as feeds (or bundles).
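Making a bundle out of a set of picks is mostly mechanical once you have a feed: collect the enclosure URLs and zip up the files they point to. Here’s a sketch of the collection step in Python; the feed snippet is invented, and the fetch-and-zip step is left as a comment:

```python
import xml.etree.ElementTree as ET

def enclosure_urls(rss_text):
    """Collect the enclosure URLs from the items of an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [enc.attrib["url"]
            for enc in root.iter("enclosure")
            if "url" in enc.attrib]

# An invented two-item podcast feed:
rss = """<rss version="2.0"><channel>
  <item><title>Talk one</title>
    <enclosure url="http://example.org/talks/one.mp3"
               type="audio/mpeg" length="1"/></item>
  <item><title>Talk two</title>
    <enclosure url="http://example.org/talks/two.mp3"
               type="audio/mpeg" length="1"/></item>
</channel></rss>"""

for url in enclosure_urls(rss):
    # Fetching each URL and writing it into a zipfile.ZipFile would
    # turn these picks into a hand-off-able bundle of MP3s.
    print(url)
```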
But I think this principle applies much more broadly. Recently, for example, I mentioned my positive reaction to the $8/month commodity hosting offered by BlueHost.com. Ironically, BlueHost’s founder recently blogged about how Microsoft doesn’t — and he thinks, can’t — play in that market. Could that change? If so, how? I can’t answer those questions at the moment, but I can say that good answers would lead with use experiences and follow with user experiences.
When you provision an instance of a MySQL database at BlueHost, you have a much better user experience than you have when you provision one from the command line, but to be honest it’s not a great user experience. Lots more could be done to clarify the concepts of databases, users, passwords, rights, and so on. Still, relative to the command-line alternative, you can much more quickly and more easily have the experience of deploying a world-accessible database-backed application. When you have that kind of use experience, you become an adopter of the enabling technology. It’s that powerful.
Last week’s Friday podcast ran afoul of travel craziness but the series continues this week with a further exploration of Dabble DB, the through-the-web database that was also featured in a screencast. In my conversation with Avi Bryant and Andrew Catton we explore some of the underpinnings of Dabble, including the remarkable fact that it’s written in the Squeak implementation of Smalltalk.
I’ve underplayed that point until now, because I’m trying to broaden the appeal of what I do, but it turns out that Dabble DB is a great example of how dynamic languages can produce effects that people see, interact with, and care very much about. Programmers aren’t the only ones to benefit from direct manipulation of objects, continuous refinement, and always-live data. We all need things to work that way, so it’s cool to see how the dynamic qualities of Dabble’s Smalltalk engine bubble up into the application.
In response to my item on media hacking the other day, this comment alerted me to a really sweet bookmarklet that adds a slider to a Flash movie. You don’t get timecodes but you do get start/pause/scrub which is a tremendous benefit.
When I tried it out, on both Firefox and IE, I was reminded again about the relative inaccessibility of bookmarklets in recent versions of IE. In Firefox it’s a drag-and-drop to the linkbar, and even that procedure eludes most people. In IE it’s a much more complicated dance which I illustrated in my Bookmarklets 101 screencast.
Because I now aim to improve digital literacy as broadly as I can, I’ll be focusing more than I have in the past on the browser that most people still use, which is IE. Here I’d like to toss out a couple of points for discussion and follow-up.
What’s the deal nowadays? At one time I heard about Trixie but not so much lately. I’ll revisit it myself, but I’m curious to hear reports on Trixie’s compatibility with Greasemonkey userscripts, its rate of adoption, and its security model.
The original plan to meet up at the Crossroads Mall on Thursday night turned out to conflict with what sounds like a popular and fun event: the Seattle podcasting network get-together. So I’ll be there instead, and if anyone wants to go out for drinks later, I’ll be up for that. Dennis Hamilton, who originally proposed the Crossroads Mall, has kindly volunteered to drop by there and see if anybody who does happen to show up wants to go over to the podcast meeting instead.
In honor of my first get-together with the MSDN Channel 9 and 10 folks later today, I thought I’d do a spot of media hacking in support of the cause. One of the things that caught my eye recently was Brian Jones’ screencast on data/view separation in Word 2007. It’s published as a SWF (Shockwave Flash) movie and, like other SWF files on Channel 9, it’s delivered into the browser directly, without a controls wrapper. So there’s no way to see the length of the screencast, or pause it, or scroll around in it, or — as I was inclined to do — refer to a segment within the screencast.
I figured it would be a snap to grab the controller that Camtasia Studio emits and tweak its configuration file to point to Brian’s screencast. But that seemingly simple hack turned into a merry chase. It turns out that the Camtasia controller isn’t entirely generic. It embeds (at least) the width and height of the controlled video. I could use Camtasia to create a new controller, but I don’t have that software here with me, and in any case it seems like there should be a way to override those values.
First, though, I took a step back and spent some time looking for a generic SWF component to play back SWFs. For FLV (Flash video) files, I’ve made great use of flvplayer.swf 1. It’s a nice simple widget that does just the one thing I want: it accepts the address of an FLV file as a parameter, and it plays that file. There has to be an analogous swfplayer.swf, right? Well, I looked hard and didn’t find it; maybe someone can enlighten us on that score.
Circling back to the Camtasia controller, I asked myself another question. There has to be an easy way to not only display, but also edit, the header of a SWF file, right? Again, I looked hard and came up empty handed. Now it was a challenge, so I dug into things like SWF::File and Flasm, tools for picking apart and reassembling SWF files. Neither quite did the trick. Then I remembered a tip from Rich Kilmer about Kinetic Fusion, a Java toolkit for roundtripping between SWF and XML. Using it, I was able to convert the SWF to XML, alter the width/height values, and recreate the SWF 2.
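Displaying the dimensions, at least, needn’t require a full SWF toolkit: in an uncompressed SWF, the stage bounds sit right in the header as a bit-packed RECT of twips (twentieths of a pixel). Here’s a minimal Python sketch of reading them; the synthetic 550x400 header at the end is constructed just for demonstration, and a real editor would also have to rewrite the bits in place:

```python
import struct

def swf_dimensions(header_bytes):
    """Parse the frame-size RECT from an uncompressed SWF header.

    Returns (width_px, height_px). The RECT begins at byte offset 8:
    5 bits of field width, then xmin, xmax, ymin, ymax in twips.
    """
    if header_bytes[:3] != b"FWS":
        raise ValueError("only uncompressed (FWS) SWF files handled here")
    rect = header_bytes[8:]
    nbits = rect[0] >> 3                           # first 5 bits
    bitstream = "".join(f"{b:08b}" for b in rect)
    fields, pos = [], 5
    for _ in range(4):                             # xmin, xmax, ymin, ymax
        fields.append(int(bitstream[pos:pos + nbits], 2))
        pos += nbits
    xmin, xmax, ymin, ymax = fields
    return (xmax - xmin) // 20, (ymax - ymin) // 20

# Build a synthetic 550x400 header (FWS, version 8) to demonstrate:
bits = f"{15:05b}" + "".join(f"{v:015b}" for v in (0, 550 * 20, 0, 400 * 20))
bits += "0" * (-len(bits) % 8)                     # pad to a byte boundary
rect_bytes = int(bits, 2).to_bytes(len(bits) // 8, "big")
header = b"FWS" + bytes([8]) + struct.pack("<I", 0) + rect_bytes
print(swf_dimensions(header))                      # (550, 400)
```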
I know, I know, this is crazy, there has to be a better way, and I hope someone will enlighten me. But in any case, I finally did succeed, sort of. Here’s a controllable version of Brian’s screencast:
One further complication: I’d hoped to publish only the modified controller and configuration file, leaving the screencast in situ on Channel 9. But the cross-domain nature of that arrangement seems to rule it out. So I wound up rehosting the video on the same server as the controller and configuration file. In this case, just to keep things interesting, that server happens to be my Amazon S3 account.
Anyway, if you’ve made it this far, I can now refer you to the segment of that screencast. At 6:45 (of 9:53), Brian shows how to swap out one batch of XML data associated with a Word document and swap in another. I’ll say more about why I found that interesting in another post. Meanwhile, I’ll be pondering how one of my perennial interests — URL-addressable and randomly accessible rich media — can help expose more of the considerable value that’s contained in the Channel 9 screencasts.
1. If you’re a Flash developer, it’s trivial to whip up your own playback control. But it’s non-trivial for regular folks who just want to embed videos in HTML pages. These folks find themselves rooting around on the net for components that should be way easier to find and use.
2. If you try this on the Camtasia controller, note that the decompiled XML won’t immediately recompile. The generated ActionScript contains a handful of references to this:componentName that instead should be this.componentName.
Once in a blue moon I find myself sitting in a new employee orientation. Today, as on other occasions, I was struck by how hard it is for the benefits people to explain their offerings. The presentation is necessarily general but everyone’s circumstances are particular. There’s no good way to bridge that gap in a large group session.
My guess is that a lot of the folks who were in that session today will, upon joining their teams tomorrow, ask for advice about various health and investment options. But team members won’t necessarily be the best sources of advice, because similar work circumstances don’t map to similar life circumstances. What new employees really need is to compare notes with other employees in similar life circumstances.
Benefits people and coworkers often won’t be in a position to meet that need. But a social application that matched up employees in similar life circumstances could be a great way to transfer highly particular kinds of benefits knowledge.
A few years ago Marc Eisenstadt, chief scientist with the Open University’s Knowledge Media Institute, wrote to tell me about a system called BuddySpace. We’ve been in touch on and off since then, and when he heard I’d be in Cambridge for the Technology, Knowledge, and Society conference, he invited me to the OU’s headquarters in Milton Keynes for a visit. I wasn’t able to make that detour, but we got together anyway thanks to KMI’s new media maven Peter Scott, who was at the conference to demonstrate and discuss some of the Open University’s groundbreaking work in video-enhanced remote collaboration.
Peter’s talk focused mainly on Hexagon, a project in “ambient video awareness.” The idea is that a distributed team of webcam-equipped collaborators monitor one another’s work environments — at home, in the office, on the road — using hexagonal windows that tile nicely on a computer display. It’s a “room-based” system, Peter says. Surveillance occurs only when team members enter a virtual room, thereby announcing their willingness to see and be seen.
Why would anyone want to do that? Suppose Mary wants to contact Joe, and must choose between an assortment of communication options: instant messaging, email, phone, videoconferencing. If she can see that Joe is on the phone, she’ll know to choose email or IM over a phone call or a videoconference. Other visual cues might help her to decide between synchronous IM and asynchronous email. If Joe looks bored and is tapping his fingers, he might be on hold and thus receptive to instantaneous chat. If he’s gesticulating wildly and talking up a storm, though, he’s clearly non-interruptible, in which case Mary should cut him some slack and use email as a buffer.
Hexagon has been made available to a number of groups. Some used it enthusiastically for a while. But only one group so far has made it a permanent habit: Peter’s own research group. As a result, he considers it a failed experiment. Maybe so, but I’m willing to cut the project some slack. It’s true that in the real world, far from research centers dedicated to video-enhanced remote collaboration, you won’t find many people who are as comfortable with extreme transparency — and as fluent with multi-modal communication — as Marc and Peter and their crew. But the real world is moving in that direction, and the camera-crazy UK may be leading the way as seen in the photo at right which juxtaposes a medieval wrought-iron lantern and a modern TV camera.
Meanwhile, some of those not yet ready for Hexagon may find related Open University projects, like FlashMeeting, to be more approachable. FlashMeeting is a lightweight videoconferencing system based on Adobe’s Flash Communication Server. Following his talk, Peter used his laptop to set up a FlashMeeting conference that included the two of us, Marc Eisenstadt at OU headquarters, and Tony Hirst who joined from the Isle of Wight. It’s a push-to-talk system that requires speakers to take turns. You queue for the microphone by clicking on a “raise your hand” icon. Like all such schemes, it’s awkward in some ways and convenient in others.
There were two awkward bits for me. First, I missed the free-flowing give-and-take of a full duplex conversation. Second, I had to divide my attention between mastering the interface and participating in the conference. At one point, for example, I needed to dequeue a request to talk. That’s doable, but in focusing on how to do it I lost the conversational thread.
Queue-to-talk is a common protocol, of course — it’s how things work at conferences, for example. In the FlashMeeting environment it serves to chunk the conversation in a way that’s incredibly useful downstream. All FlashMeeting conferences are recorded and can be played back. Because people queue to talk, it’s easy to chunk the playback into fragments that map the structure of the conversation. You can see the principle at work in this playback. Every segment boundary has a URL. If a speaker runs long, his or her segment will be subdivided to ensure fine-grained access to all parts of the meeting.
The chunking also provides data that can be used to visualize the “shape” of a meeting. These conversational maps clearly distinguish between, for example, meetings that are presentations dominated by one speaker, versus meetings that (like ours) are conversations among co-equals. The maps also capture subtleties of interaction. You can see, for example, when someone’s hand has been raised for a long time, and whether that person ultimately does speak or instead withdraws from the queue.
I expect the chunking is also handy for random-access navigation. In the conversation mapped here, for example, I spoke once at some length. If I were trying to recall what I said at that point, seeing the structure would help me pinpoint where to tune in.
Although Hexagon hasn’t caught on outside the lab, Peter says there’s been pretty good uptake of FlashMeeting because people “know how to have meetings.” I wonder if that’s really true, though. I suspect we know less about meetings than we think we do, and that automated analysis could tell us a lot.
The simple act of recording and playback can be a revelation. Once, for example, I recorded (with permission) a slightly tense phone negotiation. When I played it back, I heard myself making strategic, tactical, and social errors. I learned a lot from that, and might have learned even more if I’d had the benefit of the kinds of conversational x-rays that the OU researchers are developing.
Virtual worlds with exotic modes of social interaction tend to be headline-grabbers. Witness the Second Life PR fad, for example. By contrast, technology that merely reflects our real-world interactions back to us isn’t nearly so sexy. For most of us, though, in most cases, it might be a lot more useful.
My new job at Microsoft starts Monday, and I’ll be on campus in Redmond all week. The first day and a half is all HR stuff, I’m told, but after that I’ll be available. I’m setting up various meetings, but of course I don’t know many of the folks I ought to meet. So if you’re one of those folks, would like to get together, and are within earshot of this blog, speak up. We could maybe coordinate right here in comments, or if that’s too weird, then email me at my permanent address, judell at mv dot com.
I had an odd experience a few weeks ago, related to the conference I just attended. The Australian organizers had volunteered to book my Boston-London flight. Then one afternoon I got a call from Charlotte, a travel agent who works in the U.S. branch of the organizers’ Australian travel agency. She thought the booking was probably fraudulent, and cited three reasons:
- The booking was issued in the name of one of the organizers, but I was listed as the traveler.
- The fare was unusually high.
- The email address, which concatenated the organizer’s name with the name of the travel agency, seemed odd to her.
She: “I’m pretty sure this is bogus.”
Me: “How would it be in someone’s interest to fraudulently book me a flight?”
She: “Who knows? It could be anything. When you’ve seen as much of this kind of thing as I have, you give up on trying to figure out people’s motives.”
Suddenly the whole thing felt wrong to me. I recalled how sparse the conference website had been when I’d last visited it the week before. The keynote speakers, including me, were listed, but everything else was placeholders. So I went back to the site and…nothing was there. Holy crap! Was it conceivable that the whole deal was some kind of malicious prank? That unlikely conclusion began to seem disturbingly likely when I googled around, found the organizer’s site and an affiliated academic site, and discovered that they were dead too.
Finally I found a page listing advisory board members, and called the person who lives closest to me, an academic in New Jersey. She verified that the company and conference were real. When I went back to recheck the websites they were up and running again, and the suspiciously sparse schedule was now fully populated.
My post-mortem analysis of this strange combination of circumstances raised a couple of interesting points:
Eyeballs on transactions.
After things got sorted out and the flight was booked, I had a long conversation with the travel agent. It seemed unusual that she had personally reviewed this transaction and, on her own initiative, flagged it as suspicious. Was that company policy, I asked? No, she said. The company mostly uses an automated system. It just happens that, in her remote branch office, Charlotte sees all the bookings, is motivated to review them, and brings substantial energy and intelligence to that task.
She told me she catches real fraud attempts every week or so. To the company at large, this is just spoilage. It gets written off as a cost of doing business. We assume that eyeballs on transactions are uneconomical. But is that really true? After this experience, and in view of my conversation with Paul English about the practicality of human-intensive customer service, I wonder if we should revisit that assumption.
Locality of trust.
This was an international conference, and the members of the advisory board live all around the world. The one I chose to contact, though, is the one who lives closest to me. Of course I’d be unlikely to call overseas first, because of long-distance tolls and time zones. But there were various folks in the U.S. I could have called, yet I picked the person who lives in New Jersey. Why? In retrospect I believe that’s because New Jersey is closer to my home than Illinois or California. Of course it’s completely irrational to trust a New Jerseyite more than a Californian for that reason. And yet, at a moment when nothing seemed certain, I acted out that irrational behavior. Trust shouldn’t diminish as the square of distance but, in our unconscious minds, I think it probably does. I’ll bet Jim Russell would agree.
All’s well that ends well. The conference organizers turned out to be really pleasant folks. (I’m downplaying their identities here, though you could triangulate them if you wanted to, because they’re naturally a bit embarrassed about what happened.) I enjoyed giving my talk, I met interesting people, I got to see Cambridge for the first time, it was a good trip. But for a couple of hours on that afternoon in December things were really weird!
I’m at the Technology, Knowledge, and Society conference in Cambridge UK, where I spoke this morning on the theme of network-enabled apprenticeship. It’s a topic I began developing last fall for a talk at the University of Michigan. I don’t feel that I nailed it the first time around, nor this time either, but it’s provoked a lot of interesting and helpful discussion.
My argument is that for most of human history, in tribal life, village life, or farm life, it was common to be able to watch people do their daily work. Kids who grew up on a farm, for example, saw the whole picture — animal husbandry, equipment maintenance, finance. They understood more about work than kids who only saw dad go to the office, do nobody-knew-what, and return at the end of the day.
To the extent that we now find it culturally acceptable to narrate our work online, in textual and especially in multimedia formats, we can among other things function as teachers and mentors. We can open windows into our work worlds through which people can find out, much more than was ever possible before, what it is like to do various kinds of work.
I claim this will help people, in particular younger people, sample different kinds of work and, in some cases, progress from transient web interactions to deeper relationships in cyberspace and/or in meatspace. And I suggest that those relationships could evolve into something resembling apprenticeships.
There are plenty of holes in this argument, and James Governor, whom I met for the first time yesterday in London, drove a truck through one of them. It’s nice to have loose coupling and lightweight affiliation, he said, but apprenticeship was always a durable commitment that involved submitting to a discipline. It wasn’t about window-shopping. Point taken.
Today on a walk in Cambridge I met Andrew Jackson, a bespoke tailor who’s just opened up a shop here, and we had a great conversation on this topic. Thanks to Thomas Mahon’s English Cut, which is a great example of work narration, I know a lot more than I otherwise would about this craft. When I asked Andrew if he’s having trouble bringing people into the business I touched a nerve. It’s a huge issue for him.
Maybe, I suggested, online narration of aspects of his craft would be a way to attract worthy apprentices. But he was way ahead of me. Among other things his firm trains tailors in other countries, and they deliver that training over the Internet, using video. That’s not the problem, he said. The problem is that young people just don’t want to do the work. They want to be rock-star fashion designers, not cutters and tailors, and they will not submit to the discipline of his trade. What’s worse, he added, is that little or no stigma now attaches to unemployment.
Andrew Jackson has a good job that nobody else seems to want. The same holds true, he says, for the guy who fixes all the lead-framed windows in Cambridge. He’s been doing it forever, he knows everybody in the town, he does lucrative and socially rewarding work, and yet he cannot find anyone who wants to help him and eventually step into his role.
So, back to the drawing board. I do think that online narration of work will be a necessary way to attract new talent. But it may not be sufficient. It may also be necessary to demonstrate the non-monetary rewards of doing the work. The window repairer, for example, may enjoy low stress and much autonomy, may see and hear a lot of the interior life of the town, and may enjoy pleasant relationships with long-term customers.
If he told you his story, or if someone else did, those rewards might become clear to you. Admittedly there’s no guarantee that outcome will occur. But if nobody tells the story, we can pretty much guarantee that it won’t.
This week’s podcast is a conversation with Graham Glass, a software veteran who’s self-funding the development of edu 2.0, a web-based educational support system. It seems like a big change from Graham’s previous projects: ObjectSpace Voyager, The Mind Electric’s Glue and Gaia, webMethods Fabric. But not really, says Graham. It’s always been about the reuse of components, whether they’re software objects or learning objects.
Graham and I share a passion for project-based learning, and in the podcast he refers to an EdVisions video on that subject which you can find here. I also (again) referenced the extraordinary talk by John Willinsky which I discussed and linked to here.
I know that technologists always say that the latest inventions are going to revolutionize education, and I know that mostly hasn’t been true. Still, I can’t help but think that we’re on the verge of a dramatic overhaul of education, and that systems like the one Graham is building will play a key role in enabling that to happen.
As several folks rightly pointed out in comments here, a community site based on tagging and syndication is exquisitely vulnerable to abuse. In the first incarnation of the photos page, for example, a malicious person could have posted something awful to Flickr and it would have shown up on that page. Flickr has its own abuse-handling system, of course, but its response time might not be quick enough to avert bad PR for elmcity.info.
My first thought was to attach an abuse-reporting link to every piece of externally sourced content. It would be a hair trigger that would — in a Wiki-like way — allow anyone to shoot first (i.e., remove an offensive item immediately) and enable the site management to ask questions later (i.e., review logs, revert removals if need be.)
I’m still interested in trying that approach, but not as the mainstay. Instead I want to promote the idea of trusted feeds. There are currently two on that photos page, one from Flickr and one from local blogger Lorianne DiSabato. I know Lorianne and I trust her to produce a stream of high-quality photos (and essays) about life in our community.
After reviewing the Flickr photostreams of the people whose recent photos match a Flickr search for “Keene NH” I decided to extend provisional trust to them, as well, so I put their names on a list of trusted feeds.
Then I restricted the page to just those feeds, and added a note explaining that anyone who sends an email request can join the list of trusted feeds.
Of course anything short of frictionless participation is an obstacle. On the other hand, based on my conversation with Paul English about customer service, there’s a lot to be said for a required step in the process that forms a human relationship — attenuated by email, true, but still, a relationship.
I think it’s even more interesting when the service, or site, is rooted in a geographical place. On the world wide web, I’m always forming those kinds of relationships with people I will never meet. But on a place-based site, I may already have met these folks. If I haven’t yet, I might still. Trust on the Internet has a very different flavor when the scope is local.
A couple of years ago I was on a panel of media types at a local community leadership seminar, where I was the token blogger. The topic was how the community gathers and disseminates news. NHPR’s executive editor Jon Greenberg said what needed to be said about blogging, which was helpful because it was more credible to that audience coming from him than from me. Even so, there was a lot of pushback. When it was suggested that people could consume a richer and more varied diet of news, they balked. “It’s your [the media’s] job to sift and summarize, not ours.”
Similarly, when it was suggested that people could produce news about the local issues where they are stakeholders and have important knowledge, the pushback was: “But you can’t trust random information on the Internet.”
I found that fascinating. Here were a bunch of folks — a hospital administrator, a fire chief, a school nurse, a librarian — who all know one another. What they seemed to be saying, though, is that the Internet would invalidate that trust.
Now I assume that they trust emails from one another. Likewise phone calls, which are increasingly carried over the Internet. And if the fire chief wrote a blog that the school nurse subscribed to, there would be no doubt in the mind of John, the school nurse, that the information blogged by Mary, the fire chief, was real and trustworthy.
Until you join the two-way web, though, you don’t really see how it’s like other familiar modes of communication: phone, email. Or how the nature of that communication differs depending on whether the communicating parties live near one another.
If feeds begin to flow locally, it’ll be easy to trust them in a way that’ll supply most of the moderation we need. The problem, of course, is getting those feeds to flow. Bill Seitz asked:
So you think the “average” person will have Flickr and del.icio.us accounts in addition to joining your site?
No, I don’t, though over time more will use these or equivalent services. So yes, I also need to show how any online resource that’s being created, anywhere, for any purpose, can flow into the community site. It only takes two agreements:
- An agreement on where to find the source.
- An agreement to trust the source.
In the short-to-medium term, those sources are not going to be friendly to me, the developer. So I’ll have to go the extra mile to bring them in, as I’m doing on the events page.
I’ve planted the seed that I hope will grow into the kind of community site that defines community the old-fashioned way — people living in the same place — as well as in the modern sense of network affiliation. The project has raised a bunch of technical, operational, and aesthetic issues.
Technical: Django is working well for me, but I haven’t invested deeply in it yet. Patrick Phelan, a web developer I’ve corresponded with for years, reminded me the other day that my reluctance is strategic. With any framework, buy-in cuts two ways, and you should never take unnecessary dependencies. Patrick noted that I am using WSGI, a Python-based Web Server Gateway Interface, to connect Django by way of FastCGI to my commodity hosting service. And he pointed out that a rich WSGI ecosystem is evolving that could enable me to proceed in the minimalistic style I prefer, integrating best-of-breed middleware (e.g., URL mapping, templating) as needed. If the preceding sentence makes any sense to you, but you haven’t heard about Paste and Pylons (as I had not until Patrick pointed me at them), then you might want to watch the Google TechTalk that Patrick recommends.
Operational: I’m doing this project on $8/month commodity hosting because I want to understand, and explain, how much can be accomplished for how little. Bottom line: amazingly much for amazingly little. For years I’ve supplied my own infrastructure, so I never had the experience of using a hosting service that provides web wrappers to: create subdomains; provision databases and email accounts; deploy blogs and wikis. Sweet! At the same time, though, I’m struck by how much specialized cross-domain knowledge I’ve had to muster. For example, the first service I’ve built on the site, a community version of LibraryLookup, relies on programmatic use of authenticated SMTP to send signup confirmation messages and status alerts. I figured out how to do that in Python, but it took some head-scratching, and my solution isn’t particularly robust. For me, spending an extra buck a month for a more robust solution (ideally delivered as a language-independent web service) would be an option I’d consider. For many people, though, it would be an enabler for things that otherwise wouldn’t happen. There’s a ton of opportunity in this space for buck-a-month services like that.
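For anyone attempting the same thing, the core of authenticated SMTP in Python is compact. What follows is a minimal sketch, not the code running on the site; the host, port, account, and password are all placeholders, and the robustness caveats above still apply:

```python
import smtplib
from email.mime.text import MIMEText

# Placeholder settings for a commodity host's authenticated SMTP.
# Substitute the values your hosting provider gives you.
SMTP_HOST = "mail.example.com"
SMTP_PORT = 587
SMTP_USER = "alerts@example.com"
SMTP_PASS = "secret"

def build_message(to_addr, subject, body):
    """Compose a plain-text signup confirmation or status alert."""
    msg = MIMEText(body)
    msg["From"] = SMTP_USER
    msg["To"] = to_addr
    msg["Subject"] = subject
    return msg

def send_alert(to_addr, subject, body):
    """Send via authenticated SMTP. No retries here -- that's
    exactly the robustness a buck-a-month service could supply."""
    msg = build_message(to_addr, subject, body)
    server = smtplib.SMTP(SMTP_HOST, SMTP_PORT)
    try:
        server.starttls()  # most hosts require TLS before login
        server.login(SMTP_USER, SMTP_PASS)
        server.sendmail(SMTP_USER, [to_addr], msg.as_string())
    finally:
        server.quit()
```

Wrapping that send in retry, queuing, and error-reporting logic is where the real work starts, which is the argument for delivering it as a service.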
Aesthetic: For now I’m going with an aggressively Web 0.1 style, a la del.icio.us and craigslist. My wife’s first comment was: “So, you are going to pretty it up a bit, right?” I dunno, you can argue it both ways. The current arrangement has the advantage of being The Simplest Thing That Could Possibly Work. But virtuous laziness aside, it may be that craigslist, in particular, has validated the Web 0.1 aesthetic for community information services. Or it may be that my wife’s first reaction was correct, and I’ll have to look for a volunteer designer. We’ll see.
None of these issues are top of mind for me now, though, because they’re all trumped by a conceptual issue. How do I demonstrate methods of syndication, tagging, and service composition so that people will understand them and, more importantly, apply them?
Consider the version of LibraryLookup that I’ve built for this site. The protocol is, admittedly, abstract. It invites you to use your Amazon wishlist not only for its existing purposes — keeping track of stuff you’re interested in, registering for gifts you’d like to receive — but also as an interface to your local library.
Dan Chudnov thinks this is a questionable approach, and his point about interlibrary loan is well taken. But we don’t have through-the-web interlibrary loan in my town, and if we did, I’d still want to use Amazon as my primary interface to it. To me, it’s obvious why and how to wire those things together. To most people, it isn’t, and that’s the challenge.
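The heart of the service is just a polling loop that diffs a wishlist against a catalog. Here is a sketch with both lookups stubbed out; the real thing reads the Amazon wishlist and queries the local library’s OPAC, and these function names are mine, not part of any API:

```python
def find_available(wishlist_isbns, catalog_has, already_notified):
    """Return ISBNs from the wishlist that are newly available
    at the library and haven't been reported yet.

    catalog_has: a callable standing in for the real OPAC query.
    already_notified: ISBNs we've previously emailed about.
    """
    hits = []
    for isbn in wishlist_isbns:
        if isbn in already_notified:
            continue  # don't nag about books we've already reported
        if catalog_has(isbn):
            hits.append(isbn)
    return hits
```

Run that on a schedule, email the hits, add them to the notified set, and you have the whole protocol.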
To meet that challenge, I’m stepping back from some things that have been articles of faith for me. For example, this service does not yet notify by way of RSS. Just email for now. Of course I can and will offer RSS, but in my community (as in most) that is not the preferred way to receive notifications.
Everything else about this service will be unfamiliar to most people:
- That an Amazon wishlist can serve multiple purposes.
- That LibraryLookup is OK with Amazon. (It is. Jeff Bezos told me so.)
- That we should expect to be able to wire the web to suit our purposes.
The lone familiar aspect of this service, I realized, is that once in a while you get an email alerting you that something you want is available. Everyone will understand that. But the rest is going to be hard, and I’ve concluded that evangelizing RSS in this context would only muddy the waters even more.
In other ways, though, I’m pushing hard for the unfamiliar. It would be an obvious thing to use Django’s wonderful automation of database CRUD (create, read, update, delete) operations to directly manage events, businesses, outdoor activities, media, and other collections of items of local interest. People are familiar with the notion of a site that you contribute directly to, and I could do things that way, but for the most part I don’t want to. I want to show that you can contribute indirectly, from almost anywhere, and that services like Flickr and del.icio.us can be the database.
I got a great idea about how to approach this from Mark Phippard, a software guy who lives in my town (though we’ve not yet met in person). Mark wrote to offer technical assistance, which I’m glad to receive, but I wrote back asking for help breaking through the conceptual barrier. How do I motivate the idea of indirect, loosely-coupled contribution?
Mark mentioned that one of his pet peeves is the dearth of online information about local restaurants. You can find their phone numbers on the web, but he’d like to see their menus. That’s a perfect opportunity to show how Flickr can be used as a database. If Mark, or I, or someone else scans or photographs a couple of restaurant menus and posts them to Flickr, tagged with ‘restaurant’ and ‘menu’ and ‘elmcityinfo’, we’ll have the seed of a directory that anyone can help populate very easily. Along the way, we might be able to show that Flickr isn’t the only way to do it. A blog can also serve the purpose, or a personal site with photo albums made and uploaded by JAlbum. So long as we agree on a tag vocabulary, I can federate stuff from a variety of sources.
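Once sources agree on a tag vocabulary, the federation step itself is almost trivial. A sketch, where the item shape and field names are invented for illustration:

```python
# The agreed tag vocabulary for this use of the community site.
AGREED_TAGS = {"restaurant", "menu", "elmcityinfo"}

def federate(items, required=AGREED_TAGS):
    """Keep items carrying every tag in the agreed vocabulary,
    regardless of which service they came from -- Flickr, a blog,
    a personal photo album, anything with tags and a feed."""
    return [item for item in items if required <= set(item["tags"])]
```

All the interesting work lives upstream, in persuading people to apply the tags; the aggregation is the easy part.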
And now, I’m off to collect some local restaurant menus. A nice little fieldwork project for my sabbatical!
This week’s podcast features Paul English. He’s a software veteran who’s been VP of technology at Intuit and runs the Internet travel search engine at Kayak.com, but is best known for the IVR Cheat Sheet. Now available at gethuman.com, this popular database of voice-system shortcuts makes it easier for people to get the human assistance they crave when calling customer service centers.
The gethuman project isn’t just a list of IVR hacks anymore. It’s evolved into a consumer movement that publishes best practices for quality phone service and rates companies’ adherence to those best practices.
Although human-intensive customer service is usually regarded as costly and inefficient, operations like craigslist — where Craig Newmark’s title is, famously, customer service representative and founder — invite us to rethink that conventional wisdom. Kayak.com’s customer service was inspired by craigslist. Paul English says that making his engineers directly responsible for customer service has done wonders for the software development process. Because they’re on the front lines dealing with the fallout from poor usability, they’re highly motivated to improve it.
We also discussed web-based data management. The original IVR Cheat Sheet was done with Intuit QuickBase, an early and little-known entrant into a category that’s now heating up: web databases.
Finally, we talked about Partners in Health, the organization to which Paul English donates his consulting fees. The story of Partners in Health is told in Tracy Kidder’s book Mountains Beyond Mountains: Healing the World: The Quest of Dr. Paul Farmer. At the end of the podcast I mention that I’d added that book to my Amazon wishlist. The other day, while looking for something to listen to on an afternoon run, I checked my RSS reader and saw that the book was available in my local library in audio format. Sweet! Two afternoon runs later, I’m halfway through. It’s both an inspirational tale about Paul Farmer’s mission and a case study in how holistic health care systems can operate far more cost-effectively than most do today.
Back in 2003 I wrote an essay on the dreaded syndrome of Windows rot. As fate would have it, I am still using that same machine, and it’s been quite stable since then. In a year or so, we’ll start hearing opinions about the relative rot-resistance of Vista versus XPSP2. But meanwhile, I’m plagued by a different syndrome: PowerBook rot.
Both of my PowerBooks are afflicted. About six months ago, my 2001-era Titanium G4 began to suffer sporadic WiFi signal loss along with the kinds of narcolepsy and spontaneous shutdowns that many new Intel Macs have exhibited. I’ve tried all the obvious things, including reseating the Airport card and resetting the NVRAM and PMU, but to no avail. The WiFi is negotiable, I could try a different card or just use the machine at home on a wired LAN, but if I can’t fix the worsening narcolepsy and shutdowns it’s all over. Something’s gone funky on the motherboard, I guess, and this machine’s too old and beat up to justify replacing it.
Then, a couple of weeks ago, my 2005-era G4 caught the spontaneous shutdown bug. I wondered if it might be protesting my new job, but when I noticed half my RAM was missing, I diagnosed lower memory slot failure. So now that machine is away having its motherboard replaced, a procedure that appears to be suspiciously routine for my local Apple store.
It’s always dangerous to extrapolate from anecdotal experience, and there’s never good data on this kind of thing, but I must say that while researching these problems I’ve seen a lot of bitching about PowerBooks. Is it just me, or are these things not built to last?
For my sabbatical project, I’m laying foundations for a community website. The project is focused on my hometown, but the idea is to do things in a way that can serve as a model for other towns. So I’m using cheaply- or freely-available infrastructure that can be deployed on commodity hosting services. The Python-based web development framework Django qualifies, with some caveats I’ve mentioned. And Django is particularly well-suited to this project because it distills the experiences of the developers of the top-notch community websites of Lawrence, Kansas. The story of those sites, told here by Rob Curley, is inspirational.
I’m also using Assembla for version control (with Subversion), issue tracking (with Trac), and other collaborative features described by my friend Andy Singleton in this podcast (transcript). It’s downright remarkable to be able to conjure up all this infrastructure instantly and for free. I’m a lone wolf on this project so far, but I hope to recruit collaborators, and I look forward to being able to work with them in this environment.
I have two goals for this project. First, aggregate and normalize existing online resources. Second, show people how and why to create online resources in ways that are easy to aggregate and normalize.
Online event calendars are one obvious target. The newspaper has one, the college has one, the city has one, and there’s also a smattering of local events listed in places like Yahoo Local, Upcoming, and Eventful. So far I’ve welded four such sources into a common calendar, and wow, what a messy job that’s been. The newspaper, the college, and the city offer web calendars only as HTML, which I can and do scrape. In theory the Yahoo/Upcoming and Eventful stuff is easier to work with, but in practice, not so much. Yahoo Local offers no structured outputs. Upcoming does (the events reflected into it from Yahoo Local use hCalendar format), but finding and using tools to parse that stuff always seems to involve more time and effort than I expect. Eventful’s structured outputs are RSS and iCal. If you want details about events, such as location and time, you need to parse the iCal, which is non-trivial but doable. If you just need the basics, though — date, title, link — it’s trivial to get that from the RSS feed.
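To show just how trivial the RSS case is, here’s a minimal sketch using only the standard library. It assumes plain RSS 2.0 element names; real feeds need more defensive handling (and a proper feed parser) than this:

```python
import xml.etree.ElementTree as ET

def basic_events(rss_text):
    """Pull just the basics -- title, link, date -- from an RSS 2.0
    feed. Enough for a merged calendar listing; richer details like
    location and time live in the iCal."""
    root = ET.fromstring(rss_text)
    events = []
    for item in root.iter("item"):
        events.append({
            "title": item.findtext("title", ""),
            "link": item.findtext("link", ""),
            "date": item.findtext("pubDate", ""),
        })
    return events
```

Compare that to walking an iCal file’s VEVENT components, and you can see why the RSS route wins when the basics are all you need.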
I’m pretty good at scraping and parsing and merging, but I don’t want to make a career out of it. The idea is to repurpose various silos in ways that are immediately useful, but also lead people to discover better ways to manage their silos — or, ultimately, to discover alternatives to the silos.
An example of a better way to manage a siloed calendar would be to publish it in structured formats as well as HTML. But while that would make things easier for me, I doubt that iCal or RSS have enough mainstream traction to make it a priority for a small-town newspaper, college, or town government. If folks could flip a switch and make the legacy web calendar emit structured output, they might do that. But otherwise — and I’d guess typically — it’s not going to happen.
For event calendars, switching to a hosted service is becoming an attractive alternative. In the major metro areas with big colleges and newspapers, it may make sense to manage event information using in-house IT systems, although combining those systems will require effort and is thus unlikely to occur. But for the many smaller communities like mine, it’s hard to justify a do-it-yourself approach. Services like Upcoming and Eventful aren’t simply free; they’re also much more capable than homegrown solutions will ever be. If you’re starting from scratch, the choice would be a no-brainer — if more people realized these services were available, and understood what they can do. If you’re already using a homegrown service, though, it’ll be hard to overcome inertia and make a switch.
How to overcome that inertia? In theory, if I reflect screenscraped events out to Upcoming and/or Eventful, the additional value they’ll have there will encourage gradual migration. If anyone’s done something like that successfully, I’d be interested to hear about it.
On another front, I hope to showcase the few existing local blogs and encourage more local blogging activity. Syndication and tagging make it really easy to federate such activity. But although I know that, and doubtless every reader of this blog knows that, most people still don’t.
I think the best way to show what’s possible will be to leverage services like Flickr and YouTube. There are a lot more folks who are posting photos and videos related to my community than there are folks who are blogging about my community. Using text search and tag search, I can create a virtual community space in which those efforts come together. If that community space gains some traction, will people start to figure out that photos and videos described and tagged in certain simple and obvious ways are, implicitly, contributions to the community space? Might they then begin to realize that other self-motivated activities, like blogging, could also contribute to the community space, as and when they intersect with the public agenda?
I dunno, but I’d really like to see it happen. So, I’m doing the experiment.
Dr. John Halamka joins me for this week’s podcast. He’s a renaissance guy: a physician, a CIO, and a healthcare IT innovator whose work I mentioned in a pair of InfoWorld columns. Lots of people are talking about secure exchange of medical records and portable continuity of care documents. John Halamka is on the front lines actually making these visions real. Among other activities he chairs the New England Health Electronic Data Interchange Network (NEHEN), which began exchanging financial and insurance data almost a decade ago and is now handling clinical data as well in the form of e-prescriptions. The technical, legal, and operational issues are daunting, but you’ll enjoy his pragmatic style and infectious enthusiasm.
We also discuss the national initiative to create a standard for continuity of care documents that will provide two key benefits. First, continuity both within and across regions. Second, data on medical outcomes that can be used by patients to choose providers, and by providers to study the effectiveness of procedures and medicines.
Websites mentioned in this podcast include:
- Commission for Certification of Health Care Technology
- Healthcare Information Technology Standards Panel
- American Health Information Community
Oh, and there’s a new feed address for this series: feeds.feedburner.com/JonUdellFridayPodcasts.
Recently I’ve been noodling with Django, a Python-based web application framework that’s comparable in many ways to Ruby on Rails. It appeals to me for a variety of reasons. Python has been my language of choice for several years, and going forward I expect it to help me build bridges between the worlds of LAMP and .NET. Django’s templating and object-relational mapping features are, as in RoR, hugely productive. And Django’s through-the-web administration reminds me of a comparable feature in Zope that I’ve always treasured. It’s incredibly handy to be able to delegate basic CRUD operations to trusted associates, who can use the built-in interface in lieu of the friendlier one you’d want to create for the general public.
The recommended way to deploy Django is to run it under mod_python, the Apache module that keeps Python interpreters in memory for high performance. But a lot of popular web hosting services don’t support that arrangement. For example, I just signed up for an account at BlueHost, the service used by the instructional technologists at the University of Mary Washington, and I looked into what it would take to get Django working in that environment.
Despite helpful clues it still took a while to work out the solution. In the process I reactivated dormant neurons in the parts of my brain dedicated to such esoterica as mod_rewrite and FastCGI, but I’d rather have been working with Django than working out how to configure it.
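For anyone attempting the same thing, the heart of the arrangement is an .htaccess that routes requests through a Django FastCGI dispatcher. This is a sketch only — the dispatcher filename is a placeholder, not the exact file I used:

```apache
# .htaccess sketch (filenames are assumptions): send every request
# that isn't a real file through a Django FastCGI dispatcher script
AddHandler fastcgi-script .fcgi
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]
```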
By way of contrast, setting up WordPress — a better-known and more popular application — was a one-click operation thanks to Fantastico, an add-on installer for the cPanel site manager.
I’ve heard it said that a compelling screencast is one key factor influencing the adoption of a new web-based application. One-click install in shared hosting environments has to be another. For a while, anyway, until the virtualization juggernaut gives everyone the illusion of dedicated hosting.
Sean McCown is a professional database administrator who writes the Database Underground blog for InfoWorld. Lately his postings have been full of references to videos. One day, he watched a Sysinternals training flick, combining live video with screencasting, and made immediate use of it to pinpoint and fix a problem. Another day, he made his own training screencast:
I sat down last night and made a video of the restore procedure for one of our ETL processes. It was 10mins long, and it explained everything someone would need to know to recover the process from a crash. [Database Underground: Not just a DR plan anymore]
Screencasting is poised to become a routine tool of business communication, but there are still a few hurdles to overcome. For starters, video capture isn’t as accessible as it ought to be. Second Life gets it right: there’s always a camera available, and you can turn it on at any time. Every desktop OS should work like that.
Meanwhile, I’ll reiterate some advice: Camtasia is an excellent tool for capturing screen video on Windows, but its $300 price tag covers a lot of editing and production features that you may never use if you’re capturing in stream-of-consciousness mode for purposes of documentation. In that case, the free Windows Media Encoder is perfectly adequate.
On the Mac I’d been using Snapz Pro X for short flicks, but it takes forever to save long sessions. Next time I do a long-form Mac screencast I’ll try iShowU. That’s what Peter Wayner used for his AJAX screencasts. Peter says that iShowU saves instantly. I tried the demo, and it does.
Finally, there’s the odd hack I tried here: I used the camera’s display as the Mac’s screen, and captured to tape. If the 720×480 format is appropriate for your subject — and when the focus is a single application window, it can be — this is a nice way to collect a lot of raw material without chewing up a ton of disk space.
Capture mechanics aside, I think the bigger impediment is mindset. To do what Sean did — that is, narrate and show an internal process, for internal consumption — you have to overcome the same natural reticence that makes dictation such an awkward process for those of us who haven’t formerly incorporated it into our work style. You also have to overcome the notion, which we unconsciously absorb from our entertainment-oriented culture, that video is a form of entertainment. It can be. Depending on the producer, a screencast documenting a disaster recovery scenario could be side-splittingly funny. And if the humor didn’t compromise the message, a funny version would be much more effective than a dry recitation. But even a dry recitation is way, way better than what’s typically available: nothing.
One of the projects I’m tackling on sabbatical is a community version of LibraryLookup. The service I wanted to create is described here: an RSS feed that’s updated when a book on your Amazon wishlist becomes available in your local library. Originally I planned to build a simple web application that would register Amazon wishlist IDs and produce custom RSS feeds for each registrant. But as I thought about what would make this service palatable to a community, I saw two problems with that approach:
- Familiarity. Most folks will not be familiar with RSS. If the primary goal is to get people using the service, rather than to evangelize RSS, it should use the more familiar style of email notification.
- Deployability. A web application needs to be hosted somewhere. In most communities, the library won’t be able to host the service on its own infrastructure. But if it’s hosted elsewhere, there will be a (rational) reluctance to take a dependency on that provider.
To address the first concern, I’m doing this as an old-fashioned email-based app. You subscribe or unsubscribe by sending email with a command and a wishlist ID in the Subject: header. And you receive notifications about book availability by way of email.
To address the second concern, I’m doing it as a client-side Python script, so that the only dependency is some version of Python and an Internet connection.
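The command parsing is about as simple as it sounds. Here's a hypothetical sketch — the command names and wishlist-ID format are my placeholders, not necessarily the service's actual syntax:

```python
import re

# Hypothetical sketch: extract (command, wishlist_id) from a Subject:
# header. Command names and ID format are assumptions.
def parse_command(subject):
    m = re.match(r"\s*(subscribe|unsubscribe)\s+(\S+)\s*$", subject, re.I)
    return (m.group(1).lower(), m.group(2)) if m else None

print(parse_command("Subscribe 2K3GV88Q0OBAC"))  # ('subscribe', '2K3GV88Q0OBAC')
```

Anything that doesn't match — an empty subject, a stray reply — can simply be ignored.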
Because a library might not even be able to dedicate an email address for this purpose, I’m exploring the use of Gmail as the communication engine. In order for that to work, Python has to be able to make secure and authenticated POP and SMTP connections. Happily, it can.
The recipe for connecting Python to Gmail’s POP service is trivial:
p = poplib.POP3_SSL('pop.gmail.com')
The recipe for connecting Python to Gmail’s SMTP service is less obvious:
s = smtplib.SMTP("smtp.gmail.com", 587)
s.ehlo()
s.starttls()
s.ehlo()
eauth = base64.b64encode("\x00USERNAME\x00PASSWORD")
s.docmd("AUTH", "PLAIN " + eauth)
Gmail won’t accept an unauthenticated connection, but the SMTP module’s login() method doesn’t work either, because it uses the wrong authentication type (LOGIN rather than PLAIN, I think) — hence the hand-encoded PLAIN credentials.
Any POP/SMTP servers can be used, of course, so there’s no dependency on Gmail here, but it’s nice to see that Gmail can easily be pressed into service if need be.
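For completeness, here’s a minimal sketch of what a notification message might look like. The helper name, addresses, and wording are placeholders of mine, not the actual service:

```python
from email.mime.text import MIMEText

# Hypothetical sketch: build an availability notice.
# Addresses and wording are placeholders.
def make_notice(to_addr, titles):
    body = "Now available at your local library:\n\n"
    body += "\n".join("- " + t for t in titles)
    msg = MIMEText(body)
    msg["Subject"] = "LibraryLookup: %d wishlist item(s) available" % len(titles)
    msg["From"] = "librarylookup@example.com"
    msg["To"] = to_addr
    return msg

print(make_notice("reader@example.com", ["The Tipping Point"])["Subject"])
# LibraryLookup: 1 wishlist item(s) available
```

The resulting message goes out through the authenticated SMTP connection like any other mail.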
It feels retro and trailing-edge to do an email-based app, but in order to make it familiar and deployable, that seems like the right approach.
It is both sobering and gratifying to see folks asking the same questions about my upcoming gig that I’ve been asking myself. Larry O’Brien serves up three hardballs:
1. To what extent will the inherent imperative to advocate MS technologies stifle him?
Note that there is also a weird corollary: What about the MS technologies that I’m deeply fond of? In the past, nobody (well, hardly anybody) would question my motives if I got fired up about Monad or LINQ or IronPython. Now that’s bound to happen.
In any case, the only way this will work is if I explore and advocate things I believe in. So that’s what I plan to do. Some of those things will exist within the MS portfolio, some outside. Identifying value wherever it exists, and finding useful ways to extract and recombine it, is what I do. I hope I can continue to do that effectively as an MS employee but, of course, we’ll all just have to wait and see.
2. Will he be reduced to just a conduit of information (Microsoft’s new A-list blogger) or will he continue to contribute new creations?
Hands-on tinkering is critical to my method. I have in mind a long-running project that will enable me to try out lots of interesting things, while creating something useful in the process. I don’t want to say more until I’ve laid some foundations, but yes, I do plan to keep on contributing in the modest ways that I always have.
3. Will direct knowledge of unannounced initiatives keep him quiet on the very subjects on which he’s passionate?
Part of this career change goes beyond switching employers. The disconnect between the geek world and the civilian world has really been bugging me lately. Leading edge aside, there’s so much potential at the trailing edge that languishes because nobody helps people connect the dots. On the desktop, on the web, and everywhere else touched by computers and networks, people are running on 2 cylinders. And when we upgrade their computers and operating systems, that doesn’t tend to change.
I really, really want to show a lot of people how to use more of what they’ve got. Smarter methods of communication. More powerful data analysis and visualization. Surprisingly simple kinds of integration. These are my passions, and as Larry points out, they tend to involve fairly simple and accessible tools and techniques. In theory, to pursue this part of my mission, I don’t need to know about every secret project in the pipeline. Whether it’ll actually work out that way in practice, I dunno.
Mike Champion raises an interesting point that applies to Microsoft but also more broadly:
The culture at MS is very F2F-oriented…if you’re out of sight, you have to work hard not to be out of mind.
But then he adds:
Geographic distance will help keep you from getting sucked into the groupthink of whatever group you’re in. Microsoft collectively needs to be constantly reminded what the world looks like to people whose view isn’t fogged up by our typical drizzle or distracted by the scenery on the sunny days.
We’re entering an era in which our personal, social, and professional lives are increasingly network-mediated. Trust-at-a-distance is a new possibility, with economic ramifications that everyone from Yochai Benkler to Jim Russell is trying to figure out. As someone who’s worked remotely for 8 years, and is about to work remotely for a company with relatively few remote employees, this question is extremely interesting to me.
On the one hand, I’ve learned that I can accomplish a lot because I spend an abnormal percentage of my waking hours in flow rather than in meetings. I’ve also learned that network-mediated interactions can be more productive than F2F interactions. Consider my August screencast with Jim Hugunin, or my May screencast with Anders Hejlsberg, or indeed any of the other screencasts in that series. They’re all scheduled events, mediated by telephone and screensharing. I can’t see how physical colocation would improve them.
On the other hand, there’s the “watercooler” effect: being in a place, you see and hear and smell things that aren’t otherwise transmitted through the network. I have no doubt whatsoever that shared physical space matters in ways we can’t begin to describe or understand.
But as collaboration in shared virtual space takes its rightful place alongside collaboration in shared physical space, shouldn’t a company whose products are key enablers of virtual collaboration be eating its own dogfood?
Of course things are never as black-and-white as they appear. So I’m going to bookmark this posting and return to it in six months. Hopefully by then I’ll know more about the value of being here and of being there.
It’s been an unusual week. On December 3 I turned 50. On December 8 I announced that I’m leaving InfoWorld and joining Microsoft. It’s not a coincidence. When I saw 50 looming, a couple of years ago, I started to get really clear about what I want to do with the next 25. I’ve been laying out the vision to anyone who will listen, and I’ll continue to do so here, but first things first. Yesterday’s announcement left a couple of questions unasked and unanswered, so without further ado:
Q: Are you relocating to Redmond?
A: No. I’ll continue to work from my home office in New Hampshire. At first I’ll be spending maybe one week in four in Redmond, because there’s a lot of connecting to do. In the long run I may wind up traveling almost that much, but I hope to travel to locations other than Redmond as often as not.
In January, for example, I’ll be speaking at Technology, Knowledge, and Society in Cambridge UK. And in May, at GOVIS in New Zealand. As was true for my recent talks in Guadalajara and Ann Arbor, I don’t expect to encounter any Silicon Valley regulars at these events. I do expect to give and to receive important insights about how people everywhere can use infotech to further their occupational, educational, personal, social, and civic agendas.
Q: What will happen to your weblog.infoworld.com/udell archive?
A: I’ve experienced namespace disruption before, and am very keen to avoid it this time around. Fortunately it’s in InfoWorld’s best interest to preserve my blog archive. Worst case, the material will be rehosted because nobody else at InfoWorld uses Radio UserLand anymore. In that case, I’ve offered to help redirect the current namespace to a different one. I’m keeping my fingers crossed, but I hope there won’t be a problem.
Q: Why would you work for them? Not since Standard Oil has such a brutal vicious rapacious thuggish company with such power existed.
A: That question, in private email from someone I deeply respect, reminded me that yesterday’s Q and A left some important things unsaid. In particular, although I mentioned Ray Ozzie and Kim Cameron and Jean Paoli and Jim Hugunin and JJ Allaire, I egregiously failed to mention such equally important folks as:
Tim Fahlberg, who wants to use screencasting to reinvent math education, and who was thrilled that I picked up on his mission and amplified it in InfoWorld, but who, because of that, gained only a tiny bit more of the exposure he deserves.
Dan Thomas, who’s pumping the operational data of city government out onto the web where, despite all my efforts so far, nobody except me sees that it’s there or imagines what to do with it.
Mike Frost, who’s building out a version of the energy web today instead of waiting for government to never do it.
To these stories I’ll add my own NHPR commentaries about online-map-enabled community work, rediscovery of the local library, and the social capital we can build when we work from home.
My proposal was to be an evangelist for the Net, to continue discovering and telling these kinds of stories, and to use them as the framework within which to explore and explain Microsoft’s current and emerging technologies.
When I met with Jeff Sandquist I had just finished this podcast with Jim Russell. It’s a story about migration and the mobility of intellectual capital, refracted through Jim’s experience with the Pittsburgh diaspora. Neither Microsoft’s nor any other vendor’s technologies are discussed. I’m certain that the ideas Jim lays out in this podcast will inspire new business models for social software, but it’s all rather speculative.
I explained to Jeff that it had taken me most of a day to interview Jim Russell, then edit our rambling two-hour discussion down to something more coherent. And I said: “Reality check, you’re OK with that?” He said yes. I do not regard that answer as evidence of thuggishness or rapaciousness. I regard it as a sign of enlightenment, and I am calibrating my expectations accordingly.