Doug Kaye’s PodCorps launches today

When Doug Kaye first told me about the idea that was launched today as PodCorps, he had me at hello. Every day there are events somewhere that might usefully be audio-recorded and published on the Internet: lectures, meetings, political rallies. In many cases the participants would be happy to have their spoken words recorded and published, but wouldn’t have a clue about the mechanics of digital audio recording and Internet publishing.

Doug’s idea is to create a corps of volunteer stringers who can show up at these events with their digital recorders, process the digital audio, and then publish it — typically at the Internet Archive.

To ask a PodCorps volunteer to show up at an event, the event producer posts the event on Eventful.com with the tag podcorps. This is a lovely example of a technique that Esther Dyson calls visible demand. It’s also an illustration of another key idea: that most people will achieve lightweight service integration by simply using agreed-upon tags. I explore this idea at my own experimental community information site, elmcity.info, which hosts nothing directly but instead gathers tagged items from elsewhere. That idea has, to be honest, gotten very little traction so far. In particular, I’ve had no success getting people in my community on board with eventful.com or upcoming.org or any other online event service. But now that PodCorps reinforces the idea, I’ve got another arrow in my quiver and another chance to make the case.
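The "lightweight service integration via agreed-upon tags" idea can be sketched generically: pull a feed and keep only the items carrying the agreed-upon category tag. The feed below is a toy stand-in; Eventful's real feeds and API differ, so treat the structure and names here as illustrative only.

```python
import xml.etree.ElementTree as ET

# A toy feed standing in for an Eventful-style event listing.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Building Smart panel</title><category>podcorps</category></item>
  <item><title>Library bake sale</title><category>fundraiser</category></item>
</channel></rss>"""

def items_with_tag(rss_xml, tag):
    """Return titles of feed items carrying the agreed-upon tag."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title")
            for item in root.iter("item")
            if any(c.text == tag for c in item.findall("category"))]

print(items_with_tag(SAMPLE_FEED, "podcorps"))  # ['Building Smart panel']
```

The point is that no special API is needed on either side: the event producer tags, the aggregator filters, and the tag is the whole contract.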

There’s a huge opportunity here to transform communication patterns in a fundamental way. Checking my local events calendar, for example, I see that the following event is scheduled for tonight at the local college:

Mon., Apr. 16
7 to 8:30pm
Pond Side 2 located on Bruder St – Keene State College

Building Smart – Highlighting Local Best Practices

Come and join us in discussing the challenges and successes of implementing innovative building materials, technologies, and design solutions into the built environment.

The information exchanged at that meeting, and at countless meetings like it, has historically been available only to those who attend. There are a million reasons why local folks who might want to attend nevertheless cannot: no babysitter, schedule conflict, etc. And of course remote folks have no opportunity to attend, even though the information exchanged might be highly relevant to them.

In the same way that blogging can help you make optimal use of your keystrokes, podcasting can help you make optimal use of your spoken words.

Of course even if tonight’s smart building discussion were recorded and published, it would be unlikely to attract many listeners. But so what? If it only benefits a few, that’s fine. This isn’t podcasting to build audiences and “monetize” downloads. It’s podcasting to expand access to public discussion. And that’s just an inherently good idea. A few listeners who otherwise wouldn’t have been able to attend an event, multiplied by lots of events, adds up to a big collective benefit.

Podcast feeds for LibriVox

Yesterday I interviewed Hugh McGuire about LibriVox for next week’s ITConversations podcast. In the course of our conversation I was reminded that LibriVox catalog pages — like this one for White Fang — include MP3s and Oggs for individual chapters, plus a zip file containing the whole book, but not an RSS feed suitable for automatic downloading into a podcatcher. And as Hugh and I discussed, a typical reaction to the zip file is: “Now what?”

So I’ve written a little script to produce RSS feeds. It seems useful to me, and I hope it’ll prove useful to the LibriVox community, but before I release it I’d like to check my assumptions.

Here are three sample feeds.

William Blake, Songs of Innocence and Experience

Arthur Conan Doyle, The Hound of the Baskervilles

Jack London, White Fang

I mostly use a Creative MuVo, and sometimes an iPod, so these are the two scenarios I’ve tested. For my purposes, the requirements in both cases are:

  • The files display and play in order in iTunes and Windows Media Player¹
  • The files display and play in order on the player
  • Both the name and index of each file are easily legible on the player

The flash-memory-based MuVo seems to need the filenames shortened to 28 characters. And as I realized on a long bike ride last summer, when a book was playing out of order, it also seems to want the generated index numbers before, rather than after, the filenames. So I think the format should be:

01_hound-of-the-baskerv.mp3
02_hound-of-the-baskerv.mp3
...
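A hypothetical sketch of that naming scheme in Python: index first (so sort order equals play order), then a title slug truncated so the whole filename stays within the 28-character limit. The exact truncation rule is my guess at what the MuVo wants, not a documented spec.

```python
def player_safe_names(title, chapters, max_len=28):
    """Index-first MP3 names short enough for a filename-sorting flash player."""
    slug = "-".join(title.lower().split())
    names = []
    for i in range(1, chapters + 1):
        prefix = "%02d_" % i                       # index first: sort order = play order
        room = max_len - len(prefix) - len(".mp3") # budget left for the title slug
        names.append(prefix + slug[:room] + ".mp3")
    return names

print(player_safe_names("hound of the baskervilles", 2))
# ['01_hound-of-the-baskervi.mp3', '02_hound-of-the-baskervi.mp3']
```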

I’m assuming this format will work for other flash-memory-based non-iPod players out there, but that’s something I’d like to check. If you have one of those players I’d be curious to know how it handles these feeds.

For the iPod and iTunes, a different hack was required. A podcast feed is not the natural format for a multi-chapter book. The software expects to display and play items in reverse chronological order. I thought that if everything had the same pubDate the secondary sort would ascend by name, but that didn’t seem to work. So in these feeds, the (arbitrary) pubDate decrements by seconds as the index counter increments. You wind up with a format like this:

file: 01_hound-of-the-baskerv.mp3 pubdate: Sat, 14 Apr 2007 05:00:15 -0000
file: 02_hound-of-the-baskerv.mp3 pubdate: Sat, 14 Apr 2007 05:00:14 -0000
...
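The decrementing-pubDate trick can be scripted in a few lines. This is a minimal sketch, not the actual script: it emits RSS `<item>` fragments whose pubDates step back one second per chapter, so a newest-first podcatcher lists chapter 01 at the top. The URL and function names are made up for illustration.

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime
from xml.sax.saxutils import escape

def chapter_items(base_url, files, base_date):
    """Build RSS <item> fragments whose pubDates decrement one second per
    chapter, so a reverse-chronological client shows them in book order."""
    items = []
    for i, name in enumerate(files):
        pub = base_date - timedelta(seconds=i)  # chapter 01 gets the newest date
        items.append(
            '<item><title>%s</title>'
            '<enclosure url="%s/%s" type="audio/mpeg"/>'
            '<pubDate>%s</pubDate></item>'
            % (escape(name), base_url, escape(name), format_datetime(pub))
        )
    return items
```

Feeding it the White Fang or Baskervilles file lists would reproduce the pattern shown above.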

Kinda hokey, but it seems to work for me; see what you think. I’d like to be able to give something back to LibriVox. I haven’t gotten around to recording any chapters, but maybe this will help the cause.


¹ I forgot that WMP isn’t a podcatcher yet, alas.

A conversation with Bob Glushko and AnnaLee Saxenian about the interdisciplinary science of service design

For this week’s ITConversations podcast I got together with Bob Glushko and AnnaLee Saxenian to discuss their new program in services design at UC Berkeley’s school of information. I had earlier interviewed Bob Glushko about the book he co-authored, with Tim McGrath, on document engineering. Now a professor in the school of information at Berkeley, Bob headed up Commerce One’s XML architecture and technical standards activities from 1999 to 2002, and is now a member of the OASIS board.

AnnaLee Saxenian is the dean of Berkeley’s school of information. Her 1996 book, Regional Advantage: Culture and Competition in Silicon Valley and Route 128, is the classic and often-cited study of how gregarious engineers in the Valley created social capital that produced a competitive advantage for the region. In 2006 she followed that with The New Argonauts: Regional Advantage in a Global Economy.

To commemorate the announcement of their new program, Information and Service Design, a symposium was held in early March. Graduate students gave presentations based on papers they’d written, and in preparation for this podcast I watched more of the videos of those presentations than I had planned to. These are mostly older students who have returned to school with a combination of work experience and an appreciation for the contemporary digital lifestyle. Now they’re learning how to apply those perspectives to the new interdisciplinary science of service design. You can see, in those videos, that they’re having fun learning about this stuff. And you can hear, in this podcast, that Bob Glushko and AnnaLee Saxenian are having fun figuring out how to teach it.

Skype podcasting revisited

My podcasts are almost invariably recordings of phone calls. Following the advice of my audio guru, Doug Kaye, I’m using a Telos ONE to achieve decent audio quality using POTS (plain old telephone service). But from time to time I revisit the question of whether Internet calling, using Skype or another voice-over-IP solution, can produce results of equal (or better) quality.

The good news, since the last time I tried this, is that it’s easier to record Skype calls. The recipes used to involve a whole lot of baling wire and black magic. But now there are Skype add-ins that simplify things quite a bit.

For today’s test I used two different products: MX Skype Recorder for Windows, and the Ecamm Call Recorder for the Mac. Because friends and family now refuse my requests to involve them in audio recording experiments — and who can blame them? — I manned both ends of a Skype call, shuttling between a PC in one room (with an analog headset) and a Mac in the other (with a USB headset).

Doug Kaye’s recipe for using the Telos ONE involves splitting the caller and callee onto separate channels of a stereo recording. That enables the kind of editing I illustrated in this brief screencast. I’m happy to report that both of these Skype recorders enable the same kind of thing. MX Skype Recorder will directly produce a split-channel WAV file. The Ecamm Call Recorder produces a QuickTime movie with two stereo tracks, one for each half of the call, but you can extract them and recombine the parts to achieve the same result.

So that’s all good. But when I finished my test call and reviewed both recordings, I found in both cases that while the quality was fine for the local voice, it was sketchy for the remote voice. In particular, listening to the recordings made from each end of the call, I hear the occasional dropouts and compression artifacts that I always hear in every Internet call, whether it’s on Vonage or Skype or iChat or Windows Messenger.

Just for kicks, I took the two recordings apart, swapped channels, and put them back together to create two new versions of the test call. One combines both local voices and it sounds like this. The other combines both remote voices and it sounds like this.
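That take-apart-and-swap step is mechanical enough to script. Here's a minimal sketch, assuming both recordings are 16-bit stereo WAV files with one voice per channel; which channel holds the local versus the remote voice is my assumption, and the function names are invented for illustration.

```python
import array
import wave

def read_stereo(path):
    """Return (params, interleaved 16-bit samples) for a stereo WAV file."""
    with wave.open(path, "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        params = w.getparams()
        samples = array.array("h")
        samples.frombytes(w.readframes(w.getnframes()))
    return params, samples

def write_stereo(path, params, samples):
    with wave.open(path, "wb") as w:
        w.setparams(params)
        w.writeframes(samples.tobytes())

def swap_left_channels(path_a, path_b, out_a, out_b):
    """Swap the left channel of two stereo recordings, leaving rights alone."""
    pa, sa = read_stereo(path_a)
    pb, sb = read_stereo(path_b)
    n = min(len(sa), len(sb))
    for i in range(0, n, 2):          # samples interleave as L, R, L, R ...
        sa[i], sb[i] = sb[i], sa[i]   # swap only the left samples
    write_stereo(out_a, pa, sa)
    write_stereo(out_b, pb, sb)
```

Run against the two test-call recordings, this would yield the all-local and all-remote versions described above.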

I’ll be recording a podcast tomorrow and, since it’ll be an international call, I’d like to be able to use voice-over-IP. Based on these results, though, my conclusion is that combining two locally-made recordings — one of which the interviewee will upload and I will download after the call — will yield the best outcome.

In this case my interviewee is willing to play along, so we’ll give it a shot and see how it goes. In general, of course, that isn’t something you can expect an interviewee to do. But I don’t see a workable alternative. Skype-based podcasting still doesn’t feel like a first class option. Am I missing something?

Too busy to blog? Count your keystrokes.

Some years ago, very suddenly, I ran into the brick wall of repetitive stress injury. I had to lay off keyboards entirely for a couple of weeks, and wound up writing most of the first draft of my book in longhand on yellow legal pads. I got through it thanks to an innovative keyboard plus stretching and some weightlifting. Nowadays I’m fine so long as I’m diligent about stretching and lifting. And as a bonus, a strategy I developed at that time continues to serve me well. I call it the principle of keystroke conservation.

Although I no longer have to ration my keystroke output in order to avoid crossing a pain threshold, I still find it useful to think of keystroke output as a scarce resource, the use of which can (and should) be optimized. Blogging is a key part of that optimization, though I don’t think many people see it that way yet.

When people tell me they’re too busy to blog, I ask them to count up their output of keystrokes. How many of those keystrokes flow into email messages? Most. How many people receive those email messages? Few. How many people could usefully benefit from those messages, now or later? More than a few, maybe a lot more.

From this perspective, blogging is a communication pattern that optimizes for the amount of awareness and influence that each keystroke can possibly yield. Some topics, of course, are necessarily private and interpersonal. But a surprising amount of business communication is potentially broader in scope. If your choice is to invest keystrokes in an email to three people, or in a blog entry that could be read by those same three people plus more — maybe many more — why not choose the latter? Why not make each keystroke work as hard as it can?

I explored this idea in Practical Internet Groupware, and it’s coming around again now that I’m working for Microsoft. Although the company makes incredibly good use of public-facing blogs, internal communication revolves mostly around face-to-face meetings and one-to-few email. As a remote employee steeped in the blogosphere’s many-to-many communication pattern, I’d love to make more internal use of that pattern.

To that end, when people tell me they’re too busy to blog I invoke the principle of keystroke conservation. Was the email message you wrote to three people possibly of use to thirty, or three hundred, or thirty thousand? If so, consider blogging it — externally if that’s appropriate, or internally otherwise. Then, if you want to make sure those three people see the message, go ahead and email them a pointer to it.

That simple maneuver can have powerful network effects. To exploit them, you have to realize that the delivery of a message, and the notification of delivery, do not necessarily coincide. Most of the time, in email, they do. The message is both notification and payload. But a message can also notify and point to a payload which is available to the recipient but also to other people and processes in other contexts. That arrangement costs hardly any extra keystrokes, and hardly any extra time. But it’s an optimization that can radically expand influence and awareness.

Online incunabula

My latest podcast is up at ITConversations. Here’s the intro I wrote for the show:

Although Tim Berners-Lee once famously declared that “Cool URIs don’t change,” factors beyond our control make it hard for most of us to avoid link rot. Geoffrey Bilder is the director of strategic initiatives for CrossRef, a company whose mission is “to be the citation linking backbone for all scholarly information in electronic form.” CrossRef, in other words, is in the business of combating link rot.

The world of scholarly and professional publishing revolves around reliable citation. In previous podcasts with Tony Hammond and Dan Chudnov I’ve explored some of the technologies and methods used by these publishers — including digital object identifiers and OpenURL — to assure that reliability.

CrossRef plays a key role in that technological ecosystem. In this conversation, Geoffrey and I discuss how everyday blog publishing systems could offer the same kinds of persistence, integrity, and accountability provided by scholarly and professional publishing systems. And we explore why that might matter more than most people would think.

The title of this item refers to a fascinating riff by Geoffrey toward the end of the show. The word incunabula isn’t something you run into every day, or even (in my case) every decade. It refers to books that were produced before 1501¹, during the infancy of printing when, as Geoffrey explains:

People were clearly uncomfortable moving from manuscripts to printed books. They’d print these books, and then they’d decorate them by hand. They’d add red capitals to the beginnings of paragraphs, and illuminate the margins, because they didn’t entirely trust this printed thing. It somehow felt of less quality, less formal, less official, less authoritative. And here we are, trying to make our online stuff more like printed stuff. This is the incunabula of the digital age that we’re creating at the moment. And it’s going to change.

So much of the apparatus that we take for granted when we look at a book — the table of contents, page numbers, running heads, footnotes — that wasn’t common currency. It got developed. Page numbers didn’t make much sense if there was only one edition of something. This kind of stuff got developed and adopted over a fairly long period of time.

If you treat Vannevar Bush as Gutenberg, we haven’t even gotten to Martin Luther yet, we haven’t even gotten to 1525. In fact, whereas people stopped trying to decorate manuscripts by 1501, we’re still trying to replicate print online. So in some ways they were way ahead of us in building new mechanisms for communicating, and new apparatus for the stuff they were dealing with.

When I try to tell people what we’re doing at CrossRef, I say that we’re trying to help define what the new apparatus and infrastructure will be.

And that’s what we’re doing in the blogosphere too. One of the bridges I’d like to help build is one between these two domains, each of which has so much to learn from the other.


¹ It’s unclear (to me) why that cutoff date is always given as 1501.

It isn’t (yet) all about the Internet

I’ve been doing an occasional series of commentaries for New Hampshire Public Radio on topics at the intersection of technology and society. The latest one, which aired this weekend, riffs on an item posted here about using sites like YouTube and Blip to catalog video clips about candidates who visit New Hampshire.

About an hour after the spot first aired, on Friday evening, I received a heartwarming response from an independent documentary filmmaker who said in part:

Just heard your commentary on NHPR, and jumped on YouTube to see your clip.

I wanted to say that your idea is profound and powerful. It’s one of those ideas which is so simple and so obvious you wonder why we all didn’t think of it.

A database of significant clips from candidates is the first thing I’ve come across in a long time that feels fresh and hopeful.

So, the next time a candidate comes to Warner (and lots are scheduled) I’ll bring my camera and upload the clip.

Excellent!

There’s a lesson here for me as well. It’s profound, powerful, simple, and obvious, and I wonder why it has taken me so long to think of it. The lesson is that it isn’t (yet) all about the Internet. Using new media for all they’re worth, blogging and podcasting like crazy, I’ve mostly failed to make connections between a number of important ideas and the vast majority of the folks who could appreciate and advance them. By reaching out to public radio, I connect with people I’ve never reached before — people who mostly aren’t reading blogs or downloading podcasts, but who are listening to the radio while driving or making dinner.

I haven’t made many of those connections yet, but when I do it feels great and inspires me to try to make more. For example, I’ve struggled for several years to make concrete for people the abstract idea that tags are second-order addresses that create rendezvous points in information space. We in the vanguard just take that for granted. We’re used to attending conferences whose opening announcements include the declaration of the tag (e.g., etech2007) that will be used to aggregate photos and blog entries related to the event. But most people haven’t had that experience yet. So it was a real thrill to see NHPR’s primer on how to tag election-related clips on YouTube and Blip. Thanks to a single two-minute spot on the radio, I’ve helped make that idea concrete for people who will never read this blog.

Exploring Office Live


Today’s four-minute screencast explores Office Live. It shows how to codelessly create a database table in the cloud, add data-collection and -display widgets to pages of an Office Live site, and then manage that data through the web and also from a remote Access client. To be clear, although Office Live Basics, which includes domain name and web hosting plus email service, is free, I made this screencast using the $20/month Office Live Essentials which adds contact management, document libraries, blog and wiki features, secure private workspaces, and the ability to create customized data collection and display as shown in the screencast.

Among technical folk, the elevator pitch for Office Live is: hosted SharePoint. But that will mean nothing to many of the SMBs (small-to-medium businesses) that Office Live seeks to attract as customers. Which is fine because those folks don’t need to know anything about SharePoint. What they do need, but mostly don’t know that they need, is a way to manage public and private data in the cloud — but with an umbilical cord that connects back to the desktop applications they have and use.

Although Office Live can in fact meet that need, it’s not obvious that it can. When Walt Mossberg and Joe Wilcox tire-kicked the service, they produced nothing more than a couple of brochureware sites, and I can hardly blame them. Although the data-gathering and data-display features of my otherwise brochureware-only site required no coding to implement, I had to use a lot of technical savvy to achieve the codeless solution.

So, commentators — including Walt Mossberg and Robert Scoble — were shocked to discover that Office Live isn’t a hosted version of Office along the lines of Google Docs and Spreadsheets. Meanwhile, developers are scoping out the opportunity to build on the platform. And customers, so far as I can see, are mostly responding to free web hosting and email.

What about all the small-to-medium businesses who today manage data on the desktop, and who could benefit enormously from the ability to push some of that data management into the cloud while retaining the umbilical cord to the desktop? There’s an interesting do-it-yourself opportunity here which I’m pretty sure those folks are not seeing. And again, who can blame them? Although the screencast shows what’s possible, SharePoint is an ungainly contraption that I had to wrestle into submission in order to produce it.

Nevertheless, I’m fascinated by the possibilities here. Hundreds of millions of people manage data in desktop applications like Excel and Access. They’re the base. Vastly fewer people manage data in web applications like QuickBase or Dabble DB. They’re the vanguard. If Office Live can become a bridge between the base and the vanguard, that would be a good thing for everyone.

History or technology: Which is the better defense of identity? Both.

Kim Cameron had the same reaction to the Sierra affair as I did: Stronger authentication, while no panacea, would be extremely helpful. Kim writes:

Maybe next time Allan and colleagues will be using Information Cards, not passwords, not shared secrets. This won’t extinguish either flaming or trolling, but it can sure make breaking in to someone’s site unbelievably harder.

Commenting on Kim’s entry, Richard Gray (or, more precisely, a source of keystrokes claiming to be one of many Richard Grays) objects on the grounds that all is hopeless so long as digital and real identities are separable:

For so long identity technical commentators have pushed the idea that a person’s digital identity and their real identity can be tightly bound together then suddenly, when the weakness is finally exposed everyone once again is forced to say ‘This digital identity is nothing more than a string puppet that I control. I didn’t do this thing, some other puppet master did.’

Yep, it’s a problem, and there’s no bulletproof solution, but we can and should make it a lot harder for the impersonating puppet master to seize control of the strings.

Elsewhere, Stephen O’Grady asks whether history (i.e., a person’s observable online track record) or technology (i.e., strong authentication) is the better defense.

My answer to Stephen is: You need both. I’ve never met Stephen in person, so in one sense, to me, he’s just another source of keystrokes claiming to represent a person. But behind those keystrokes there is a mind, and I’ve observed the workings of that mind for some years now, and that track record does, as Stephen says, powerfully authenticate him.

“Call me naive,” Stephen says, “but I’d like to think that my track record here counts for something.”

Reprising the comment I made on his blog: it counts for a lot, and I rely on mine in just the same way for the same reasons. But: counts for whom? Will the millions who were first introduced to Kathy Sierra and Chris Locke on CNN recently bother to explore their track records and reach their own conclusions?

More to the point, what about Alan Herrell’s¹ track record? I would be inclined to explore it but I can’t, now, without digging it out of the Google cache.

The best defense is a strong track record and an online identity that’s as securely yours as is feasible.

The identity metasystem that Kim Cameron has been defining, building, and evangelizing is an important step in the right direction. I thought so before I joined Microsoft, and I think so now.

It’s not a panacea. Security is a risk continuum with tradeoffs all along the way. Evaluating the risk and the tradeoffs, in meatspace or in cyberspace, is psychologically hard. Evaluating security technologies, in both realms, is intellectually hard. But in the long run we have no choice: we have to deal with these difficulties.

The other day I lifted this quote from my podcast with Phil Libin:

The basics of asymmetric cryptography are fundamental concepts that any member of society who wants to understand how the world works, or could work, needs to understand.

When Phil said that, my reaction was, “Oh, come on, I’d like to think that could happen but let’s get real. Even I have to stop and think about how that stuff works, and I’ve been aware of it for many years. How can we ever expect those concepts to penetrate the mass consciousness?”

At 21:10-23:00 in the podcast², Phil answers in a fascinating way. Ask twenty random people on the street why the government can’t just print as much money as it wants, he said, and you’ll probably get “a reasonable explanation of inflation in some percentage of those cases.” That completely abstract principle, unknown before Adam Smith, has sunk in. Over time, Phil suggests, the principles of asymmetric cryptography, as they relate to digital identity, will sink in too. But not until those principles are embedded in common experiences, and described in common language.


¹ In various blog postings I have seen this name spelled Alan Herrell, Allan Herrell, and Allen Herrell. I presume the first spelling is probably correct, because it returns orders of magnitude more search hits. In principle, the various people who share each of these spellings could claim their unique identities by declaring biographical details about themselves (“I am the author of _____,” “I worked for _______”) and digitally signing those declarations. In practice nobody does, yet, but it’s starting to become clear why we’d want to.

² Hey Doug and Phil, the clip feature is gone?

Simple and automatic services

Concept count is a useful metric when you’re trying to figure out which technologies will or won’t be adopted. I mentioned this idea in a discussion of calendar cross-publishing, where I enumerated the numbingly long list of concepts I had to understand in order to achieve bidirectional synchronization of my Outlook (business) and Google (family) calendars.

Yesterday, when my Jurassic-era (i.e., 2003) LG cellphone died, I went out and bought a Motorola KRZR. Then, despite my best intentions, I stayed up way too late figuring out all the things it can do. Talk about concept count! MP3 player, still camera, video camera, USB drive, calendar, voice recognition and dialing, speakerphone, voice recorder, wired and wireless file transfer, assorted Bluetooth services, assorted data services, installable applications (email, news, weather), text messaging, multimedia messaging, GPS…oh, and by the way, it makes phone calls too.

One thing missing from this cornucopia, though, was contact and calendar synchronization. The old LG lasted long enough to surrender this data to U.S. Cellular’s all-purpose synchronizer, which squirted it into the new phone, but I lacked an ongoing solution. The salesman, of course, knew nothing about the available synchronization options. I guess most people don’t even expect that the data they accumulate on their phones will get backed up anywhere.

That is my expectation, though. So immediately I descended into the circle of hell where random postings on web forums lead you to obscure applications, strange device drivers, and contradictory advice.

Because I’d been down this road before, I had a clue where I was going. My first stop was BitPim, which almost worked on my Mac but in the end didn’t, and sort of worked on my Vista laptop once I installed the Motorola driver that makes the phone’s USB port look like a COM port.

Next I tried Sync Cell on Vista and got better results. It seems to connect to the emulated COM port more reliably than BitPim, and can perform the full range of transfers: contacts, calendar events, files.

But wait a second, this is a Bluetooth phone, can’t the synchronization go wireless? Yes, no, it depends what you mean. Both computers can pair with the phone, and both operating systems can natively see the phone’s file system and transfer its files back and forth. But contact (and calendar) synchronization requires third-party software. In the end I got Sync Cell to work on Vista, but achieving that result required a number of steps, the last of which was to enable a Bluetooth service called Dial-Up Networking to make the Bluetooth connection look like a COM port.

Let’s add up some of the key concepts involved in just the single activity of backing up your contacts:

  1. Device drivers
  2. File systems
  3. Network services
  4. Wired networks
  5. Wireless networks
  6. Pairing
  7. Port emulation

This is clearly unsustainable, which is why the salesfolk don’t even discuss the possibility of backing up your contacts.

There’s an obvious right answer: Provide a service that just works. On a recent episode of the Technometria podcast David Platt gave a nice example. Like most people he was backup-challenged. Then he found a service that quietly, in the background, squirts his data into the cloud.

Why can’t there be something like that for my phone?

Heh. As it turns out, there is. Simple and automatic services. That’s a concept people will be able to wrap their heads around.

Online accountability and the threat of impersonation

Tim O’Reilly has distilled the lessons of the Kathy Sierra affair, and Tim Bray further distills them into a single dictum: “You’re accountable for what appears on your Web site.” He elaborates:

if a Web site is yours, you are ethically and perhaps legally responsible for what’s there, whoever wrote it. This is reality; deal with it.

Agreed. I’ve always believed that, which is why for over a decade I’ve advocated cryptographically strong ways to assert online identity. So long as we depend on authentication by name and password, we are frighteningly vulnerable to impersonators who could irreparably damage our online reputations.

Let’s not lose sight of the message that Doc Searls received from Alan Herrell, who says in part:

Just about every online account that i have has been compromised. Most importantly my digital identity and user/password for typepad and wordpress.

The Kathy Sierra mess is horrific. I am not who ever used my identity and my picture!!

I’ve never read Alan Herrell’s now-discontinued blog, and know nothing about his involvement in this whole affair, but the fact is that we’re all vulnerable to the kind of impersonation that Alan Herrell describes.

There’s no perfect defense. But if I had to use cryptographically strong multi-factor authentication to log into my blog publishing system, and if I also had to digitally sign every one of my entries, I’d be far less vulnerable to malicious impersonation.
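To make the signing idea concrete, here's a hypothetical sketch of signing and verifying an entry at publish time. It uses HMAC from Python's standard library as a stand-in; genuine protection against impersonation calls for asymmetric signatures (the author signs with a private key, anyone verifies with the public key), which this symmetric scheme does not provide. All names here are invented.

```python
import hashlib
import hmac

SECRET_KEY = b"publishing-system-secret"  # hypothetical; would live in a keystore

def sign_entry(body):
    """Return a hex signature bound to this entry's exact text."""
    return hmac.new(SECRET_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_entry(body, signature):
    """True only if the entry is unmodified and was signed with our key."""
    return hmac.compare_digest(sign_entry(body), signature)

sig = sign_entry("You're accountable for what appears on your Web site.")
assert verify_entry("You're accountable for what appears on your Web site.", sig)
assert not verify_entry("Tampered text.", sig)
```

Even this weak form illustrates the payoff: a forged or altered entry fails verification, so an impersonated author has evidence to point to.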

As we project more of our personal and professional identities into the Net, we create new demands for supporting infrastructure, and thus new opportunities for commercial services. To the extent that you are your website, you will need — and will pay for — a website that’s as secure, as reliable, and as persistent as you can afford to make it.

Update: I’ve just learned that the anonymous sploggers who run biginternetmall convinced someone that this anonymous ripoff of this item of mine was a legitimate posting. Yet another facet of the same issue.

A conversation with Bill Crow about HD Photo

For the last five years, Bill Crow has been working on HD Photo, a new image file format that’s intended to supplant the JPEG format currently at the heart of the digital photography ecosystem.

I first met Bill many years ago when he came to BYTE to show us HP NewWave, which was probably the earliest effort to produce an object-oriented file system for Windows — originally, believe it or not, for Windows 1.0. The connection between NewWave and HD Photo is tenuous, but it does exist in the sense that the metadata strategies we’re seeing today (see the truth is in the file) point the way toward ending the tyranny of the hierarchical file system.

Today’s podcast begins by revisiting NewWave, but it’s mostly about HD Photo: Why it was created, how it works, what it will mean to both amateur photographers (“happy snappers”) as well as pros, and how it will be standardized and baked into a next generation of digital cameras.

Along the way I learned a huge amount about the current state of digital photography. For example, I knew that pros prefer to shoot in RAW format, but I wasn’t clear what that meant. According to Bill, a RAW image is just sensor data from a high-end camera, which photo processing software later turns into an image. The professional photographer trades away convenience for control and flexibility. In the case of the JPEG images produced by the vast majority of digicams, though, it’s the other way around. We get usable images without any fuss, but we give up the ability to reinterpret the data. HD Photo aims for the best of both worlds: ultimate control and flexibility if you desire, convenience when you don’t.

Although Bill guesses we’re two years away from commercial HD Photo cameras, the format is being used today to support Photosynth. As he explains on his blog, a compressed HD Photo image has a regular structure that makes it possible to extract images at various levels of detail without decoding the entire image.


There’s a whole lot more to the story. I hugely enjoyed this conversation, and I think you will too.

A conversation with Phil Libin about REAL ID

My first podcast on ITConversations is with Phil Libin, president of CoreStreet, a company to which I gave an InfoWorld Innovators Award in 2004 for its approach to massively scalable credentials validation. CoreStreet has worked with the U.S. Department of Defense on its Common Access Card program, so Phil has been a ringside observer of what may be the world’s most successful large-scale deployment of smart identity cards.

From that perspective, I invited Phil to comment on the Department of Homeland Security’s recently published guidelines for the more secure state driver’s licenses mandated by the REAL ID act.

Part of the context for our conversation was a letter to the editor I’d written to my local newspaper in response to an editorial that rejected the notion of REAL ID on the grounds that any government initiative toward stronger credentials will necessarily lead to the Orwellian Big Brother. What I’ve always thought, and what Phil Libin thinks too, is that the technologies of digital identity can be tools of empowerment or oppression, depending on how we understand and apply them, and that for that reason we’ve got to understand them properly.

At one point Phil said:

The basics of asymmetric cryptography are fundamental concepts that any member of society who wants to understand how the world works, or could work, needs to understand.

That’s a tall order. And in fact, it’s outside the scope of the current REAL ID proposal which calls for 2D barcodes rather than for smartcard technology. But Phil makes a great argument for why a broad understanding of the basics of cryptography is necessary, and for how as a society we might achieve it. This conversation is one small step toward that goal.
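Those basics really do fit in a few lines. Here is a toy textbook-RSA sketch in Python, with tiny numbers and no padding, purely for illustration of the asymmetry Phil is talking about: one exponent, kept private, signs; the other, published, verifies.

```python
# Toy textbook RSA -- illustration only. Real systems use very large
# keys and padding schemes, but the asymmetry is the same.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent: anyone can verify
d = pow(e, -1, phi)        # private exponent (2753): only the signer has it

message = 65               # a small number standing in for a message hash
signature = pow(message, d, n)          # sign with the private exponent
assert pow(signature, e, n) == message  # verify with the public exponent
```

The point a citizen needs to grasp is just this: knowing `e` and `n` lets anyone check the signature, but does not let them forge one.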

The essence of openness

A couple of nights ago I listened to one of Koranteng Ofosu-Amaah‘s appearances on Chris Lydon’s Open Source show. Here’s how Chris Lydon defines what he does:

We designed the show to invert the traditional relationship between broadcast and the web: we aren’t a public radio show with a web community, we’re a web community that produces a daily hour of radio.

Recently I’ve run into similarly expansive uses of the term open source. Here’s Brian Jones reflecting on his experience at a Harvard workshop entitled “Cross-Boundary Governance through Agreements and Standards”:

I had previously thought about open source more in terms of the licensing model chosen. Well obviously the folks from the defense department weren’t thinking they wanted to put all the content under the GPL, but instead they wanted a system where people could easily share information within their targeted community.

And here’s Dare Obasanjo reacting to Dave Winer’s proposal to create an open source implementation of this month’s cyber-craze, Twitter:

One of the primary benefits to customers of using Open Source software is that it denies vendor lock-in because the source code is available and freely redistributable. This is a strong benefit when the source code is physically distributed to the user either as desktop software or as server software that the user installs.


Things are different in the “Web 2.0” world of social software for two reasons. The obvious one being that the software isn’t physically distributed to the users but the less obvious reason is that social software depends on network effects.

For years now, I’ve been tracing an arc from open source software to open services to open data. What’s the common thread? Collaboration. People working together in shared information spaces, using shared technical and social protocols, to achieve shared goals.

If you asked me to define the essence of openness, I couldn’t say it any better than that.

Ink by the barrel

By a strange coincidence, Wired’s Fred Vogelstein and I were both on Microsoft’s Redmond campus in mid-January. He was there to finish reporting the story that just exploded on TechMeme, and I was there for my new employee orientation. Here are some key perspectives: Chris Anderson (Wired), Fred Vogelstein (Wired), Jeff Sandquist (MSFT, my boss), Frank Shaw (Waggener-Edstrom), and Mary Jo Foley (ZDNET). Bottom line: Vogelstein writes a “transparency” story about Microsoft’s Channel 9, an internal assessment of Vogelstein is inadvertently forwarded to him, and the irony becomes the story.

For me, there are some other Wired-related ironies swirling around here. I have long believed that we’re moving toward the Transparent Society foretold in David Brin’s seminal book. When I attended the Highlands Forum the other week, I learned a bit more of that book’s backstory. I’d known that it was a popular Wired story in 1996. I hadn’t known that the seed of that story was a talk given at the Highlands Forum a few years before.

The transparent society envisioned in that book is profoundly radical in ways that we’re only beginning to appreciate. For example, I was until recently an industry journalist who was studied by Microsoft and Waggener-Edstrom in the same way that Fred Vogelstein found out he was being studied. I’ve since found such reports about me floating around on the Microsoft intranet. As many of today’s commentators have noted, the existence of such documents should come as no surprise to anyone who’s played the journalism game on either side of the fence (or on both). Those documents weren’t published, either accidentally or on purpose, but if they had been, nobody would have been harmed. In fact, I’d have been interested to know how my views were coming across, and I might even have sharpened those views accordingly. That’s one example of a new way of thinking that David Brin’s book has opened up for me.

Here’s another. When I speak in public about the emergent blogosphere, I try to steer clear of the tired old debate in which bloggers and journalists are cast as antagonists. What today’s TechMeme cluster shows is that they are in fact collaborators. Wired doesn’t own this story, everyone involved has a piece of it, and all of those pieces are discoverable and interlinked. It’s a wonderful thing to see.

This will become my new benchmark example of network-based storytelling but, ironically, the old example I’ve been using for years also involves Wired. Five years ago the magazine ran a short piece about Mitch Kapor’s Chandler project entitled The Outlook Killer?. That was precisely what Mitch did not intend. “I don’t want to play into the meme that Chandler is an Outlook killer,” he wrote in an email which he also blogged. Later he elaborated that the headline “firmly bracketed the article in the David vs. Goliath trope I got agreement would not be used,” and concluded:

It’s fortunate that a weblog is a wonderful, alternate, and complementary forum in which to speak directly, thus by-passing the intermediation of formal media.

Back in 2002 his reaction wasn’t as visible as it would be today, but it got noticed, and that was a leading indicator that the famous saying “Never pick a fight with a man who buys ink by the barrel” — attributed variously to Twain, Liebling, Wilde and Dr. Johnson — was due for revision. Five years on it’s even clearer that ink — happily for the trees we used to sacrifice to it — matters less, electrons matter more, and the playing field is closer to level.

It’ll never be completely level, nor should it be, because society requires a healthy balance between professional storytellers who synthesize the work that other people do, and amateur storytellers who are the ones doing that work, all operating within a sphere of relative transparency. We have made remarkable strides toward achieving that balance.

Authenticated RSS feeds

Today I created a private blog site — that is, Internet-accessible but SSL-and-password-protected — and realized that there was no easy way for most people to subscribe to it. Even if the popular cloud-based readers like Bloglines and Google Reader supported authenticated feeds, I wouldn’t want to let them use my credentials to impersonate me.

What about the Microsoft RSS Platform? I discovered to my surprise that it won’t read authenticated feeds either. I’m way late to the party on this one. Scott Hanselman sounded the alarm last September. (He also speculated usefully about a CardSpace-strengthened approach to secure RSS.)

Way back in February, Dare Obasanjo had weighed in on why authenticated feeds would matter, and in March Sean Lyndersay explained on Charlie Wood’s blog why the feature didn’t make the cut.

My own case helps bolster Sean’s point that password-protected feeds are rare birds. Despite all the blog publishing and feedreading I’ve done over the years, today was the first time I’ve created, and then turned around and subscribed to, an authenticated feed.

Still, there are all kinds of messages that I’d rather receive from banks and credit card companies by way of RSS pull (under my control) rather than by way of email push (under their control). But if Windows itself doesn’t yet read authenticated feeds, it’s hard for those companies to justify producing such feeds. Chicken and egg.

So how did I finally subscribe to it? With Dare Obasanjo’s RSS Bandit, the first desktop-based reader I’ve touched in years.
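Under the covers, what any reader of such a feed has to do is simple: an HTTPS GET carrying credentials. Here is a minimal Python sketch; the URL, username, and password are placeholders, not a real service.

```python
import base64
from urllib import request

# Placeholders -- not a real feed or real credentials.
feed_url = "https://example.com/private/feed.xml"
user, password = "jon", "secret"

# HTTP Basic auth: base64-encode "user:password" into a request header.
token = base64.b64encode(f"{user}:{password}".encode()).decode()
req = request.Request(feed_url,
                      headers={"Authorization": f"Basic {token}"})

# Against a real authenticated feed, this line would fetch the XML:
# feed_xml = request.urlopen(req).read()

print(req.get_header("Authorization"))  # → Basic am9uOnNlY3JldA==
```

The SSL transport keeps those credentials off the wire in the clear, which is why an SSL-and-password-protected feed, unlike a merely obscure URL, is worth producing.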

Update: Thanks to this comment I have discovered that Outlook 2007 is one of the standalone RSS readers that can subscribe to authenticated feeds. I had originally thought otherwise but that was operator error on my part. It does work, provided that Outlook 2007 is set up to subscribe autonomously rather than to use the common feed store. Here’s a screencast that shows IE7 and Outlook 2007 interacting with the common feed store, as well as Outlook 2007 working autonomously.

Update 2: A followup question came up today. That screencast shows how to make Outlook 2007 use the common feed list. (See File->Import.) But how do you switch away from that choice in order to read authenticated feeds? ANSWER: Tools -> Options -> Other -> Advanced.

Koranteng!

This comment on the previous item brought a huge smile to my face. It’s from Koranteng Ofosu-Amaah, a kindred spirit whom I’ve never yet met in person but whose remarkable essays on music, politics, and technology I regard as some of the highest literary accomplishments this new form called blogging has so far produced. You know those services that offer to print blogs and bind them into books? In general I see no point to them, but I would like to hold the Book of Koranteng in my hand and read it by the fireplace. Except not really, because the Book of Koranteng is, appropriately to the medium, ferociously hypertextual.

I’d lost track of Koranteng for a while. A different RSS feed maybe? Anyway, it’s great to reconnect. The last piece I remember reading, from back in October, was this epic poem about journalism, misdirection, a buried lead, and the Middle East. It’s part of his Things Fall Apart series which…well, I can’t begin to describe it, but don’t try to read this stuff at work. Read it at home, by the fireplace, on a WiFi-connected laptop. It’s amazing.

Like a moth to the Freebase flame

Freebase is aptly named: I am drawn like a moth to its flame. I realize it can be annoying to discuss things that folks can’t try out for themselves, and I can’t (yet) do anything about that, but I hope that a few more observations will be welcome.

The comment attached to my first item about Freebase, by Metaweb’s Chris Maden, provides an enlightening glimpse into how knowledge gardening in a structured wiki like Freebase will differ from its counterpart in an unstructured wiki like Wikipedia. Here’s what Chris had to say about the Freebase record for me, which I had tweaked:

I noted that his place of birth was “Philadelphia,” which was odd; our cities tend to be named with their state included. Sure enough, “Philadelphia” had been created accidentally by some other user as a “location,” and then Jon had reused it. So I:
1) Changed Jon’s place of birth to “Philadelphia, Pennsylvania” (which is a “location” and a “city/town”).
2) Added a type to “Philadelphia”: “duplicate.”
3) Added a property to “Philadelphia”: it is a duplicate of “Philadelphia, Pennsylvania.”
4) Removed the “location” type from “Philadelphia” to keep it from coming up in autocomplete for other location properties.
By marking it as a duplicate, if someone does end up using it, our topic merge tool can find it and its namesake and combine their properties. This will be more heavily automated as we gain confidence in our detection algorithms.

Fascinating.
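The bookkeeping Chris describes can be sketched in a few lines of Python. This is a hypothetical model, not Metaweb’s implementation: topics carry types and properties, and marking a topic as a duplicate lets a merge pass fold its properties into the canonical topic.

```python
# Hypothetical sketch of typed topics and duplicate merging,
# modeled on Chris Maden's description -- not Metaweb's actual code.
topics = {
    "Philadelphia": {
        "types": ["duplicate"],      # "location" type removed (step 4)
        "props": {"duplicate_of": "Philadelphia, Pennsylvania"},
    },
    "Philadelphia, Pennsylvania": {
        "types": ["location", "city/town"],
        "props": {"state": "Pennsylvania"},
    },
}

def merge_duplicates(db):
    """Fold each duplicate's properties into its canonical topic."""
    for name, topic in list(db.items()):
        if "duplicate" in topic["types"]:
            canonical = db[topic["props"]["duplicate_of"]]
            for key, value in topic["props"].items():
                if key != "duplicate_of":
                    canonical["props"].setdefault(key, value)
            del db[name]

merge_duplicates(topics)
print(list(topics))  # → ['Philadelphia, Pennsylvania']
```

The interesting part is social, not algorithmic: any user can do the tagging, and the merge tool reaps the benefit later.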

Emboldened by this narrative, I created my first user-defined Freebase type. Because the system is so new, there are some quite fundamental things that (so far as I can see) haven’t yet been defined. I wanted to create entries for some of my personal projects, such as LibraryLookup and elmcity.info, so I created a type called Project and added the properties Goal and Collaborators. That enabled me to add entries for my two personal projects, describe their goals, and associate myself with them as a collaborator.

But as I said, it’s the social dimension that’ll kick this whole thing into high gear. When I did a text search in Freebase for the word “project” a bunch of things fell out, including the Helix digital media framework. The original Freebase record, sourced from Wikipedia, was typeless. I promoted it to an instance of Project, and by doing so I’ve invited anybody who visits that record to add a Goal and some Collaborators.

I’m not one of those collaborators, but I have an interest in the project and would like to be able to discover who’s working on it. More broadly, I’d like to be able to answer questions like: “Who among the Helix collaborators is also working on .NET projects?”

I can’t answer that question now, and I may never be able to in Freebase or its imminent competitor, Radar Networks. But the point is that it cost me very little to declare Helix as a Project — once the type was defined, that is — and that provides an immediate benefit just to me. As with social bookmarking, the act of public annotation is a useful aid to memory and recall.

If my invitation to contribute structured data about Helix is accepted by others, that’d be great. But there too, enlightened self-interest can be the prime mover, as it should be. By leaving their fingerprints on things that they care about, people can shape those things for their own purposes. When those fingerprints lead to mutual discovery and collaboration, that’s icing on the cake.

Of course there are all kinds of things that we care about, and would like to declare to be related. For example, I’ve recently been watching these two trend lines, which chart the relative fortunes of weblog.infoworld.com/udell and blog.jonudell.net in the Technorati ranking system:

I’d love to declare once that these two blogs are related to me, then ask Technorati and a bunch of other services to refer to that relationship. Maybe that’ll happen sooner than I thought.

Thinking about my InfoWorld friends

The blogosphere has pre-announced what IDG is now, as a result, “expected to confirm on Monday”: that InfoWorld will cease print publication. There will be plenty of time for fond remembrances of InfoWorld’s storied past, and for clever prognostications about its online future. But for now, as someone who’s loved and lost a magazine, I just want to say to my friends there who were blindsided and are losing sleep over this: Been there, done that, it’s no fun, good luck.

Semantic web as social enjoyment

The recent launch of Freebase.com, the first application of the semantic web engine being developed by Danny Hillis’ new company, Metaweb, was written up by, among others, Esther Dyson, Tim O’Reilly, and Martin Heller, from whom I received an invitation to try Freebase. (Note: I don’t yet seem to have invitations that I can dispense.)

If you scan those articles and the blogospheric halo surrounding them, you’ll soon glean the essentials. Freebase is like Wikipedia in the sense that it’s an open data project. But where Wikipedia is a database of unstructured articles, Freebase is a database of categorized and related items. You can use it to add or edit items and, more ambitiously, to create or extend the categories themselves.

There’s been a lot of discussion about how this approach does or doesn’t match up with the W3C’s vision for the semantic web, and the suite of standards and technologies associated with it. I’ll leave that to the experts and simply reiterate one crucial point. The authors of the semantic web are going to be people, not machines. And people will only want to play the game if it’s easy, natural, and fun.

Early indications are that Freebase is going to be a whole lot of fun. In his walkthrough Tim O’Reilly calls it addictive, and explains why. Because the system thinks in terms of relationships among types of items, a single act of data entry can produce multiple outcomes.

Tim’s writeup gives a couple of examples of what that’s like. Here’s mine. I found a record for myself in the system, sourced from Wikipedia. I updated it to say that I’m the author of the book Practical Internet Groupware. Then I added that Tim O’Reilly was the editor of my book. That single edit altered the records on both ends of the author/editor relationship. My book’s record now showed Tim O’Reilly as its editor, and Tim’s record sprouted a Books Edited list that contained my book as its first item.

Nice. This is just a Hello World example, of course, but it has the feel of something that people will be able to understand, will want to use, and will enjoy in a social way.
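A hypothetical sketch shows why one edit can ripple to both records: if the author/editor relationship is stored as a single tuple, then each record’s view of it is just a query over the same data. The names come from my example above; the store itself is my invention, not Freebase’s.

```python
# Hypothetical relation store -- one assertion, visible from both ends.
relations = set()

def add_relation(subject, predicate, obj):
    relations.add((subject, predicate, obj))

def editor_of(book):
    return sorted(o for (s, p, o) in relations
                  if p == "edited_by" and s == book)

def books_edited(person):
    return sorted(s for (s, p, o) in relations
                  if p == "edited_by" and o == person)

# A single act of data entry...
add_relation("Practical Internet Groupware", "edited_by", "Tim O'Reilly")

# ...produces multiple outcomes: the book's record shows its editor,
# and Tim's record sprouts a Books Edited list, from the same tuple.
print(editor_of("Practical Internet Groupware"))  # → ["Tim O'Reilly"]
print(books_edited("Tim O'Reilly"))  # → ['Practical Internet Groupware']
```

Because there is one fact rather than two denormalized copies, the two views can never disagree.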

A couple of years ago, I wrote a column entitled WinFS and social information management. It concluded like so:

Developers have always tried, and so far always failed, to define reusable objects that meet the needs of knowledge workers in the real world. Meanwhile, in the era of social computing, we’re learning to watch for the patterns that emerge as people interact in information-rich contexts, and then pave those cow paths. The first WinFS-aware applications, which will be personal information managers with hooks for sharing and synchronization, won’t align with this strategy.

These WinFS applications will, however, enable you to pave your own cow paths, for example by storing and reusing queries. Nobody can know how people will ultimately want to share these contexts among WinFS clients in a peer-to-peer fashion, on WinFS servers when they emerge, and on the global XML Web. So I hope Microsoft will come to see WinFS not only as a platform for developers, but also as an environment in which users can do simple things that yield powerful social effects.

Nowadays a lot of folks say WinFS was doomed from the start and should never have been attempted. I didn’t think that then and don’t now. I did, clearly, wish that WinFS had been part of a strategy of cooperation with the cloud. And I’d still like to see some version of that scenario play out.

Citizen ads, no thanks. Citizen analysis, yes please.

Phil de Vellis begins and ends his 15 minutes of fame with this remark:

This ad was not the first citizen ad, and it will not be the last. The game has changed.

Yeah, but that’s not the game-changing behavior I’m looking for. This is just an attack ad produced by a citizen rather than by an ad agency, not an ironic deconstruction of attack ads. Even if it were, that would be helpful only in small doses.

The game-changing behavior I am looking for is something completely different. As I suggested here, we’re now in a position to slice and dice what politicians and pro pundits say, by candidate and by issue, across venues, and to recombine that material to support a whole new level of scrutiny and analysis.

It’s doable but, admittedly, not an easy thing to incent or coordinate. So how about this. Along with the opt-in $3 Presidential Campaign Fund, let’s have an opt-in $3 Citizen Media Fund. Use the proceeds to collect raw video footage of candidates, and create Mechanical Turk HITs (human intelligence tasks) to parcel out the editing and tagging. If there’s money left over, apply the same treatment to all of the ads.

I have met the enemy and it is tribalism

In 2001 I went to an O’Reilly conference with the unwieldy name Peer-to-Peer and Web Services. It was, in retrospect, the forerunner of the more succinctly named Web 2.0 conference. The 2001 conference, which had originally been scheduled for late September, was pushed into November by the 9/11 disaster. But it wound up being one of the most eclectic I’ve ever attended, and for that reason one of the best. I rubbed elbows with hackers, musicians, lawyers, journalists, venture capitalists, and most unusually for me, soldiers.

It was a soldier who made the most lasting impression. Earl Wardell, who worked for the Joint Chiefs, said things I would never have thought a soldier could say to a crowd like us. Conventional command-and-control wasn’t working. The enemy had mastered the art of network agility. It was now imperative for our military services to understand and apply the ways of the web, and we in the vanguard were invited to help guide that historic transformation.

It was a stunning moment. Since then, I’ve wondered from time to time whether that invitation had remained open, whether it had been accepted, and if so what were the outcomes.

This month that invitation was extended to me, and by a strange coincidence not once but twice. Last week I spoke to an intelligence advisory board at an undisclosed location near Washington, on a panel where I was flanked by a Google executive and a Nashville music promoter. This week I spoke at the Highlands Forum in Carmel, California, where members of the Web 2.0 tribe met with our military counterparts.

Both gatherings were extraordinary events for me. The rules of engagement between “my” tribe and “their” tribe are loosely defined, so I’m not sure how much I can or should say here, but I will report the following observations.

First, the invitation I heard in 2001 was real, and remains open. Some of the best and brightest minds in the US military are keenly aware that the emerging web will be a fundamental enabler of the transformation they urgently wish to effect.

Second, the future is as unevenly distributed inside the DoD as it is everywhere else. I have met folks who are discouraged, cynical, and who see no signs of the needed transformation. And I’ve met other folks who are energized, hopeful, and deeply engaged in making that transformation happen.

Third, I have met the enemy and it is tribalism. I recently heard an interview with E.O. Wilson in which he was asked to react to the critiques of religion that Sam Harris and Richard Dawkins have famously been making. The problem isn’t religion, Wilson said, it’s tribalism. The two often coincide but they are not the same thing. Religion is not a pernicious force in the world. Tribalism is.

I said when I joined Microsoft that my goal was to build bridges. We need to build bridges within the technical world, between the Microsoft tribe and the open source tribe. We also need to build bridges between the geek tribe as a whole and the rest of the world, because when you strip away the Linux and Vista T-Shirts we geeks share much more DNA with one another than with the vast majority born without the hacking chromosome.

We also need to build bridges between the civilian tribe and the military tribe. I’ve now had the rare opportunity to see that those bridges are in the process of being built, and I’ll do whatever I can to keep that momentum going.

Meanwhile, here’s my takeaway. Tribalism is an aspect of human nature, so it must once have served a purpose, but it no longer does. It’s a piece of evolutionary baggage that we can no longer afford to carry around. I don’t know if we can let go of it, but we had better at least try.

Rich application engines and user innovation

Last week Brendan Eich and Dare Obasanjo were batting around the topics of openness, rich Internet applications, and user innovation. The statement that found its way into my del.icio.us stream was this one from Brendan:

I assert that there is something wrong with web-like “rich” formats that aren’t hyperlink-able or indexable by search-engines.

Me too. Linking and indexing, which are enabled by open standards, are in turn key enablers of user innovation. To the extent that Flash can be web-like, or that WPF/E can, we surely want them to be.

But to Dare’s point, that’s not the endgame. Rich Internet apps should also expand what linking and indexing mean, suggest new standards, and create opportunities for new kinds of user innovation.

I see one such opportunity in the realm of audio and video. I’ve been wrangling both this weekend for an upcoming talk, and it’s a huge chore. The kinds of standard affordances that we take for granted on the textual web — select, copy, reorganize, link, paste — are missing in action on the audio-visual web. The lack of such affordances in our current crop of (mostly) proprietary media players suggests that open source and open standards can help move things along. But nobody in the open world or in the proprietary world has really figured out what those affordances need to be in the first place. So I guess we’ll keep on running parallel R&D efforts until we do.

Friday podcast moving to IT Conversations

The Friday podcast will be on vacation this week and next. When it returns, on March 30, I’m thrilled to report that the show will have a new home on IT Conversations. I started out there with the Gillmor Gang back in May 2004. In the summer of that year, according to Lucas Gonze’s alternate history, podcasting was born.

Two years later I struck out on my own in order to pursue a different style. I’ve done 42 episodes in that style, I’ve worked hard to make each of them worth listening to, and I’m proud of the results.

I’m delighted to now bring my show to IT Conversations, a groundbreaking operation that continues to thrive under the guidance of my friends and mentors Doug Kaye and Phil Windley, and thanks to a crew of other folks who keep the bits flowing.

Art Rhyno’s science project

Art Rhyno’s title is Systems Librarian but he should consider adding Mad Scientist to his business card because he is full of wild and crazy and — to me, at least — brilliant ideas. Last year, when I was a judge for the Talis “Mashing up the Library” competition, one of my favorite entries was this one from Art. The project mirrors a library catalog to the desktop and integrates it with desktop search. The searcher in this case is Google Desktop, but could be another, and the integration is accomplished by exposing the catalog as a set of Web Folders, which Art correctly describes as “Microsoft’s in-built and oft-overlooked WebDAV option.”

There’s more going on in this example than even I can easily wrap my head around, but let’s step back and consider the document itself, which Art provides at this URL:

http://librarycog.uwindsor.ca:8087/artblog/librarycog/indexcat

That’s a very special URL. Art explains:

This document was created in OpenOffice and is served directly on the web using Cocoon’s nifty Zip support and the elegant and sensible XML syntax of OpenDocument.

In other words, Art writes and maintains an OpenOffice document, but an intermediary translates it on the fly into an HTML document. Other translations are equally feasible — to Word’s XML format, for example. What’s more:

Add in WebDAV support, and the barriers between the desktop and the Web start to blur, and the options for repurposing content achieve megaton levels.
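The unzip-and-transform step at the heart of Art’s intermediary is easy to sketch with nothing but a zip reader and an XML parser. Here is a minimal Python illustration, using a tiny stand-in document rather than Art’s actual Cocoon pipeline:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# An OpenDocument file is a zip whose content.xml holds the text, so a
# proxy can unzip and re-render it on the fly. We build a tiny stand-in
# document here; a real .odt has more parts (styles, metadata, etc.).
ODF_TEXT = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"
ODF_OFFICE = "urn:oasis:names:tc:opendocument:xmlns:office:1.0"
content = (
    f'<office:document-content xmlns:office="{ODF_OFFICE}" '
    f'xmlns:text="{ODF_TEXT}">'
    f'<office:body><office:text>'
    f'<text:p>Hello from OpenDocument</text:p>'
    f'</office:text></office:body></office:document-content>'
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("content.xml", content)

# The intermediary's job: unzip, parse, and emit HTML paragraphs.
with zipfile.ZipFile(buf) as z:
    tree = ET.fromstring(z.read("content.xml"))

paragraphs = [p.text for p in tree.iter(f"{{{ODF_TEXT}}}p")]
html = "".join(f"<p>{t}</p>" for t in paragraphs)
print(html)  # → <p>Hello from OpenDocument</p>
```

Because OpenDocument’s syntax is, as Art says, elegant and sensible XML, the translation to HTML — or to Word’s XML format — is a mechanical transform rather than a reverse-engineering project.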

Now let’s switch gears and look at some remarkable developments in the realm of Office. In this entry, Doug Mahugh explores the anatomy of a Word document that embeds within itself a contact record that’s in hCard format. The exact same chunk of XHTML data that you’d find on a web page, like this one, lives inside the Word document.

What’s more, the fields of the contact record can be individually read and written because they’re bound to controls. So if you download the file from Doug’s blog and open it up in Word 2007, you can modify the contact record in situ. The rewritten fields appear inline with the text of the document, but under the covers they’re written into a custom XML part — that is, a file of XML that lives inside the ZIP file that is the new format for Word docs.

(The mechanism that Doug describes for wiring the interactive controls to the custom part in which they’re stored is radically simplified by Matthew Scott’s Content Control Toolkit, a really nice visual editor and mapper that’s freely available as both an executable .NET program and as C# source code.)

The style of intermediation that Art Rhyno’s been developing — based on the notion of what he calls a WebDAV proxy — could produce powerful effects in this realm too, blurring the boundaries between XML file formats on the one hand and between the desktop and the web on the other. For example, the act of opening a file containing an embedded hCard could silently trigger the extraction of that contact information, and the storage of it locally or remotely or both.
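The extraction step, at least, is straightforward to sketch. Here is a hedged Python illustration that pulls hCard fields out of an XHTML fragment by their microformat class names; the sample markup is my own invention, not Doug’s actual document.

```python
from html.parser import HTMLParser

# Illustrative hCard field extraction: microformats identify fields by
# class name ("fn", "org", ...), so a simple parser can recover them.
class HCardParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        for name in ("fn", "org", "email"):
            if name in classes:
                self._current = name

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data.strip()
            self._current = None

# A hypothetical hCard fragment, like the XHTML embedded in the document.
hcard = ('<div class="vcard"><span class="fn">Doug Mahugh</span>'
         '<span class="org">Microsoft</span></div>')
parser = HCardParser()
parser.feed(hcard)
print(parser.fields)  # → {'fn': 'Doug Mahugh', 'org': 'Microsoft'}
```

A WebDAV proxy sitting between the desktop and the web could run exactly this kind of extraction as a side effect of opening the file, then route the contact record wherever it needs to go.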

I’d like to explore this theme on the Windows desktop and find out what’s possible. Art’s weapon of choice is Cocoon, the Apache project’s XML pipelining framework. And of course Cocoon can run on the Windows desktop. But deploying it there isn’t something many people are likely to want to do. So I’m looking for a way to achieve similar effects with infrastructure that’s based on (or ideally already contained within) the .NET Framework. Does it exist?

Direct-to-camcorder screen recording

For several of my screencasts I used an unusual method which I mentioned here. I made my camcorder be the computer’s display, and dubbed the output to tape [1]. My reasons were twofold. First, I wanted to capture a lot of raw footage without having to wait for the captured data to get written to a file, which can be slow. Second, I wanted to be able to edit in iMovie. Although I have Camtasia and use it often, I reach for iMovie when I need precise frame-by-frame control, and when I’m laying down audio narration in a precise way. Camtasia isn’t good at those things, and neither is Windows Movie Maker. I’ve tried Adobe Premiere but it does way more than I need and the learning curve intimidated me. (It also ain’t cheap.) If there is a basic Windows movie editor that meets my requirements, I’d love to hear about it, and so would my screencasting colleagues at MSDN Channel 9. Meanwhile I’ll continue to reach for iMovie. But moving files from a Windows-based capture tool over to iMovie on the Mac, and then back to Windows where I continue to rely on Camtasia for final production, is a huge hassle. Hence the notion of using the camcorder as a bridge between the two worlds.

For the screencasts mentioned above, I connected my Mac to the camcorder with an S-Video cable, detected the camcorder as a display, and captured at 720×480. It’s a challenge to arrange a presentation in that small rectangle, but — particularly when you’re demonstrating a single application window — it can be done.

Today, after I updated the Vista video driver for my Compaq nc8340, which has an ATI Mobility Radeon X1600, I repeated the experiment in Vista. This 20-second screencast shows the results for two different capture resolutions: 1024×768 and 800×600. (With this Windows-based setup, talking to the same camcorder, 720×480 doesn’t seem to be an option.) Both captures get squashed down to the standard digital video resolution of 720×480, and neither is crystal clear, but I think both are usable, though you should judge for yourself. I’d lean toward 800×600, which I’ve found ideal for two reasons. First, it minimizes the amount of video data you have to ship over the wire to your viewers, and that still matters. Second, it forces the demo to focus on where the action is, rather than displaying the full panoply of the modern GUI, which can often be overwhelming.
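For a quick sanity check on the distortion, here’s a small sketch. The 720×480 DV frame size and the two capture resolutions are from the experiment above; the arithmetic is mine. It shows that each capture shrinks by a different factor on each axis, which is part of why neither result is crystal clear:

```python
def squash(src_w, src_h, dst_w=720, dst_h=480):
    """Per-axis scale factors when a capture is squashed to the DV frame."""
    return round(dst_w / src_w, 3), round(dst_h / src_h, 3)

for w, h in [(1024, 768), (800, 600)]:
    sx, sy = squash(w, h)
    # Unequal sx and sy means the 4:3 capture is distorted into the 3:2 frame
    print(f"{w}x{h} -> 720x480: x-scale {sx}, y-scale {sy}")
```

The 800×600 capture shrinks less (0.9 horizontally, 0.8 vertically) than the 1024×768 one, which is consistent with its somewhat crisper result.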


¹ One of my goals in writing that post was to ensure that a future search for ‘udell pv-gs400 s-video’ would find the reminder to myself, embedded in that post, about how to dub to tape. And now, sure enough, it does.

Installing Flash on Vista

I’ve been meaning to mention that on two different Vista boxes, the Flash runtime did not install in its usual seamless way. In both cases I wound up going to the Adobe website and installing manually from there. I believe I tried from both IE and Firefox, but I can’t remember for sure.

I promptly forgot about the issue, but heard about it recently from some other folks. Does anyone within earshot of this blog know what the issue might be?

Greasing the skids for network travelers: Burger Kings versus ATM machines

In a few different ways and places, lately, I’ve asked the question: “How many social networks can one person join?” The context for that question was the recent appearance of the social network fatigue meme.

Discussion of this topic has focused on how to make it easier to join and participate in multiple networks. That’s an interesting challenge, and it invites technical solutions in the realm of data portability. If I could carry my reputation and group affiliations with me from one social network to another, the argument goes, I could more easily hop from one network to another.

Maybe so. But that argument raises two key questions. First, why would I want to hop from one network to another? Second, is making that easier a good thing for me and for everyone else involved?

On one level the answer to the first question is obvious. Every social network provides its own unique experiences and enforces its own set of rules. Those experiences and rules are designed for different purposes. Flickr is about sharing photos, not finding a mate, though I’m sure Flickr has made more than a few matches by now. Conversely Match.com is about matchmaking, not photos, though photos play a central role on Match.com. So in order to satisfy multiple needs we may be obliged to join different networks, have different experiences, learn different rules.

A subtler answer to the first question is that having different experiences and learning different rules is inherently valuable. That’s why travel is a good thing. When we visit other places, and observe (or ideally participate in) other cultures, we’re better for it. We learn new things about the cultures we visit, and we deepen our understanding of the culture we return to.

From this perspective, the answer to the second question is that there are wrong ways and right ways to grease the skids for culture-hoppers. An example of the wrong way is the Burger King on the Champs-Élysées. An example of the right way is the ATM machine next door that takes my American debit card and dispenses Euros.

I mention all this because I’m returning from a meeting that brought together people from very different networks and cultures. The purpose of the meeting was, somewhat reflexively, to discuss how to build bridges of understanding among people from different networks and cultures. These kinds of cross-disciplinary efforts are always fun and interesting, but in my experience they end when the meeting ends.

In principle we could visit one another’s worlds more often by visiting them virtually. In practice I’ve never seen a virtual exchange program, but it’s a conceivable application of online multiplayer gaming. And it would be a valuable one.

Culture-hopping is a skill that, like any other, improves with practice. In the real world it’s one that’s slow, expensive, and arduous to improve, so most of us don’t improve it as much as we could. Simulation could help make the process faster, cheaper, and easier.

The best culture-hoppers are the ones Malcolm Gladwell calls connectors. They are scarce resources and prime movers. Lois Weisberg was able to bring people together and make things happen, according to Gladwell, because she could move in many different social circles and adapt to a variety of cultural protocols. That’s a hardwired talent fully expressed in relatively few Lois Weisbergs. But it’s also a talent that we all possess in some degree, that we could improve with practice, and that would make us more effective to the extent we did.

A good simulator would make it easy to visit a foreign culture, but not too easy. Your ATM card should work, because without it you wouldn’t be able to do anything at all. But no Burger Kings, that’d be cheating. To play the game properly you’d have to sample the local cuisine.

GoDaddy’s bad buffness day

Last week Kim Cameron wrote about a problem at Flickr that resulted in wrong photos being displayed. Flickr’s acknowledgement and explanation of the problem earned this commendation from Axel Eble, which Kim cited:

Folks, this is one of the best pieces of crisis management I have ever seen! It states the problem; it states the solution; it takes the blame where necessary and it gives a promise to the future. Now, if we could set this as mandatory teaching for all companies worldwide I would feel so much better. [The Quiet Earth]

Kim went on to note that while this new transparency is a great thing, it’s not enough to be transparent, you must also be competent. And he borrowed this wonderful phrase from Don Tapscott: “If you are going to be naked, you had better be buff.”

Yesterday my DNS provider, GoDaddy, had a bad buffness day. My site was offline for hours, during which time the blogosphere speculated wildly about problems related to Daylight Saving Time. GoDaddy had nothing to say about it when I checked yesterday, and has nothing now, though it seems that at some point a note about technical difficulties was posted.

Scanning the commentary on various sites yesterday yielded no conclusion. The outage either was, or wasn’t, a denial of service attack unrelated to DST. I never knew which, yesterday, and I still don’t today.

The corollary to “If you are going to be naked, you had better be buff” is clearly not “On a bad buffness day, cover up.”

Primary sources? You don’t need ’em. Trust us.

When the inspector general of the US Department of Justice issues a special report, it tends to make news. The latest report, a dissection of the FBI’s use of “national security letters” under the Patriot Act, is no exception. References to this report are everywhere in the news today. But links to the report are less plentiful.

I made the chart below by scanning the first three pages of Google’s cluster of stories on this topic. After eliminating duplicates, I found 12 sites linking to the original report and 42 sites not linking.

In the blogosphere, you could scarcely imagine mentioning a publicly available report without also linking to it (e.g., Technorati, Bloglines). But in the mainstream media, linking is still the exception rather than the rule.

(PS: I went to junior high school with the DOJ’s inspector general, whose name is Glenn Fine. I’ve mused before about the anomaly that makes my web presence so much larger than his. But in the real world, he’s the one who commands the respect of the US attorney general. Way to go, Glenn!)

(PPS: Ryan Tomayko was surprised to see that any of the sites linked to the report. It’s a good point. Things are progressing.)

Sites linking to the DOJ report (12):

charlotte.com
www.businessweek.com
www.chron.com
www.ft.com
www.guardian.co.uk
www.helenair.com
www.journaltimes.com
www.kansascity.com
www.npr.org
www.nytimes.com
www.redding.com
www.theglobeandmail.com

Sites NOT linking to the DOJ report (42):

abcnews.go.com
english.people.com.cn
in.today.reuters.com
news.bostonherald.com
online.wsj.com
sportsillustrated.cnn.com
timesofindia.indiatimes.com
www.570news.com
www.abcnews.go.com
www.baltimoresun.com
www.boston.com
www.canada.com
www.casperstartribune.net
www.centredaily.com
www.chicagotribune.com
www.chinapost.com.tw
www.columbusdispatch.com
www.dailynews.com
www.detnews.com
www.forbes.com
www.foxnews.com
www.guardian.co.uk
www.guelphmercury.com
www.helenair.com
www.kansascity.com
www.kentucky.com
www.latimes.com
www.mlive.com
www.msnbc.msn.com
www.myfoxdc.com
www.newsday.com
www.nysun.com
www.pressofatlanticcity.com
www.smh.com.au
www.startribune.com
www.tbo.com
www.time.com
www.washingtonpost.com
www.wfaa.com
www.whbf.com
www.wstm.com
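A tally like this one is easy to mechanize. Here’s a hypothetical sketch: the URL fragment and the page snippets are invented, and a real version would fetch each story page from the Google News cluster, but the counting logic is the same:

```python
# Fragment assumed to identify links to the primary-source report (invented)
REPORT_URL = "usdoj.gov/oig/special"

# Stand-ins for fetched story pages; site names are real, HTML is made up
pages = {
    "www.nytimes.com": '<p>See the <a href="http://www.usdoj.gov/oig/special/report.pdf">report</a>.</p>',
    "www.foxnews.com": '<p>The inspector general issued a report today.</p>',
    "www.npr.org":     '<a href="http://www.usdoj.gov/oig/special/report.pdf">Full report (PDF)</a>',
}

# Partition the sites by whether their story links to the report
linking     = sorted(site for site, body in pages.items() if REPORT_URL in body)
not_linking = sorted(site for site in pages if site not in linking)

print(f"linking: {len(linking)}, not linking: {len(not_linking)}")
```

Substring matching on the href is crude but sufficient here; a stricter version would parse out the anchor tags before testing.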