Kudos to the New York Times for its remarkable interactive transcript of the Republican debate. The display has two tabs: Video Transcript and Transcript Analyzer. The Video Transcript is a side-by-side display of the video and the transcript, linked together and randomly accessible from either side, plus a list of topics you can jump to. These are the kinds of features you’d like to take for granted, but which aren’t always implemented as cleanly as they are here.

But the Transcript Analyzer takes the game to a new level. Or at least, one that I haven’t seen before in a mainstream publication. The entire conversation is chunked and can be visualized in several ways. It’s reminiscent of the Open University’s FlashMeeting technology, which I mentioned here.

In the Times’ visualizer, you can see at a glance the length and word count of all participants’ contributions, including the YouTube participants, who are aggregated as “YouTube user”. Selecting a participant highlights their contributions, and when you mouse over the colored bars, that section of the transcript pops up.

Even more wonderful is the ability to search for words and see at a glance which chunks from which participants contain those words. The found chunks are highlighted and, in a really nice touch, the locations of the found words within the chunks are indicated with small dark bars. Mouse over a found chunk, and the transcript pops up with the found words in bold. Wow! It’s just stunningly well done.
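
If you’re curious how a feature like this might work under the hood, here’s a minimal sketch of chunked-transcript search, purely as an illustration. The Times hasn’t published its implementation; the data model and the names TranscriptChunk and find_matches are my own stand-ins, and the sample lines are made up.

```python
from dataclasses import dataclass
import re

@dataclass
class TranscriptChunk:
    speaker: str   # e.g. "Mitt Romney" or the aggregated "YouTube user"
    text: str      # the words spoken in that chunk

def find_matches(chunks, query):
    """For each chunk containing the query, return the chunk index,
    speaker, match offsets, and chunk length. A renderer could use
    offset / length to place small marker bars within a chunk."""
    pattern = re.compile(re.escape(query), re.IGNORECASE)
    hits = []
    for i, chunk in enumerate(chunks):
        offsets = [m.start() for m in pattern.finditer(chunk.text)]
        if offsets:
            hits.append((i, chunk.speaker, offsets, len(chunk.text)))
    return hits

# Toy example (invented lines, not the real transcript):
chunks = [
    TranscriptChunk("Moderator", "Our next question comes from a YouTube user."),
    TranscriptChunk("Mitt Romney", "On energy, we need a plan that makes us energy independent."),
]
for i, speaker, offsets, length in find_matches(chunks, "energy"):
    # normalized positions, suitable for drawing bars along a chunk's width
    print(speaker, [round(o / length, 2) for o in offsets])
```

Normalizing each match offset by the chunk’s length is one plausible way to get the relative positions you’d need to draw those little dark bars along a chunk of proportional width.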

The point of all this, of course, is not to exhibit stunning technical virtuosity, although it does. The point is to be able to type in a word like, say, energy, and instantly discover that only one candidate said anything substantive on the topic. (It was Mitt Romney, by the way.) Somehow, in all of the presidential campaigning, that topic continues to languish. But with tools like this, citizens can begin to focus with laserlike precision not only on what candidates are saying, but also — and in some ways more crucially — on what they are not.

Hats off to the Times’ Shan Carter, Gabriel Dance, Matt Ericson, Tom Jackson, Jonathan Ellis, and Sarah Wheaton for their great work on this amazing conversation visualizer.