I’ve just wrapped up a screencast about the elmcity project. It’ll stand in for me at an upcoming event I can’t attend, and serve as an explanation I can point others to. This is the first screencast I’ve worked on in ages, and also the first in which I appear as a picture-in-picture talking head. The process has been challenging, and I want to write about it while the details are fresh.
Software teleprompters
After writing the script, I realized I’d need a teleprompter in order to read it effectively into the camera. You’d expect to find lots of software prompters floating around on the web, including some free ones, and you’d be right about that. But I had to work through a bunch of them before I found one that worked well for me. I tried CuePrompter, TeleKast, and many others. All failed in some dimension of control: margins, speed, transport. Finally I settled on PromptDog, which is free to try but is the one I’ll buy when I go this route again. It does everything well, but what really put it over the top for me was the way it wires scroll speed to the mouse wheel.
If you’ve read from a software-based teleprompter before, you’ll already know this, but it was new to me. No matter what scroll speed you choose, you need to vary it as you go along. That’s because words and sentences take varying amounts of time to speak, but you need to keep your eyes focused near the top of the screen where the camera sits. With most of the programs I tried, you manage this focal zone by stopping and starting the scroll. But for me, at least, the stops and starts were distracting. PromptDog’s mouse-wheel-driven variable speed control made it much easier to stay in the focal zone. Reading from a software teleprompter is hard, at least for me. I was happy for all the help I could get.
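For the curious, here’s roughly what that wheel-as-throttle behavior looks like in code. This is just a toy sketch in Python/Tkinter, simplified and assumed (it’s not how PromptDog actually works): the script scrolls at a steady rate, and rolling the wheel nudges the rate up or down instead of jumping the view.

    # Toy prompter sketch: steady auto-scroll, mouse wheel acts as a throttle.
    import tkinter as tk

    SCRIPT = ("Your script text goes here. " * 8 + "\n\n") * 60  # placeholder copy

    root = tk.Tk()
    root.title("Prompter sketch")
    text = tk.Text(root, font=("Helvetica", 28), wrap="word",
                   bg="black", fg="white", padx=80)
    text.insert("1.0", SCRIPT)
    text.configure(state="disabled")
    text.pack(fill="both", expand=True)

    pos = 0.0       # current position, as a fraction of the whole script
    speed = 0.0003  # fraction advanced every 30 ms; the wheel adjusts this

    def on_wheel(event):
        global speed
        step = 0.0001 if event.delta > 0 else -0.0001  # roll up = faster
        speed = max(0.0, speed + step)
        return "break"  # swallow the default scroll so the view doesn't jump

    def tick():
        global pos
        pos = min(1.0, pos + speed)
        text.yview_moveto(pos)   # keep the reading zone near the top
        root.after(30, tick)     # ~33 updates per second

    text.bind("<MouseWheel>", on_wheel)  # wheel events on Windows / macOS
    tick()
    root.mainloop()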
Picture-in-picture video
For this screencast, I upgraded from Camtasia 5 to Camtasia 7. It can record directly from a camcorder, but my second-hand Panasonic PV-GS400 doesn’t seem to work well in that mode. So I recorded to tape, imported the results to a file, and imported that file into Camtasia as a PIP (picture-in-picture) video. On import you tell Camtasia how big your PIP window will be, and where it will show up in the larger video window. I made the PIP window a quarter the size of, vertically centered in, and flush right with the larger 1024×768 video window.
I’d assumed you could move the PIP window around, and grow it or shrink it, to accommodate different kinds of underlying screencast action. But that assumption was wrong. For a given segment of PIP video, the window stays where you put it. This leads to my first feature request for Camtasia 8: a PIP preview rectangle when recording the screen.
Often it’s OK to let the PIP video just overlay the screen action. But sometimes you don’t want it to hide an essential part of the screen. To avoid that, you have to compose the screen around the PIP window. Lacking a visual cue for the PIP window’s borders, I had to guess. Often I guessed wrong, and had to recompose and reshoot a piece of screen action.
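Absent that visual cue, a quick calculation of where the window will land is the next best thing. Here’s a small sketch of the arithmetic; reading “a quarter the size” as a quarter of each dimension is my assumption, so plug in whatever scale you actually told Camtasia to use.

    # Work out which screen region the PIP window will cover, so you can
    # compose the screen action around it. The quarter-of-each-dimension
    # scale is an assumption; adjust to match your Camtasia settings.

    FRAME_W, FRAME_H = 1024, 768

    def pip_rect(scale=0.25):
        """PIP rectangle, vertically centered and flush right in the frame."""
        w = round(FRAME_W * scale)
        h = round(FRAME_H * scale)   # same aspect ratio as the frame
        x = FRAME_W - w              # flush right
        y = (FRAME_H - h) // 2       # vertically centered
        return x, y, w, h

    x, y, w, h = pip_rect()
    print(f"Keep clear: {w}x{h} region at ({x},{y})-({x+w},{y+h})")
    # -> Keep clear: 256x192 region at (768,288)-(1024,480)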
Note that you can vary the size and location of the PIP window by splitting the PIP video into segments and assigning different sizes and locations to each segment. That’s a lot of work, though. And you don’t really want to split the PIP video into segments because then you can’t manipulate the whole track.
Editing audio, motion video, and screen video all together
I made things hard on myself because I’d forgotten that Camtasia invites you to do more integrated editing than you should. In principle you can, for example, run a noise-reduction pass on your audio in Camtasia. In practice, I would prefer to do that in Adobe Audition, which does the job faster and better. What I should have done is grab the sound track out of the captured motion video, run Audition’s noise reducer, recombine the audio and video, and then import into Camtasia for editing.
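For what it’s worth, here’s a sketch of that export/clean/recombine round trip. I’m assuming ffmpeg for the extract and remux steps (any tool that can pull out and swap an audio track would do); the noise reduction itself still happens in Audition or whichever audio editor you prefer, and the file names are made up.

    # Sketch of the split / clean / recombine workflow around the audio editor.
    import subprocess

    SRC = "camcorder.avi"           # hypothetical capture from the camcorder
    RAW_WAV = "audio_raw.wav"       # what you hand to the audio editor
    CLEAN_WAV = "audio_clean.wav"   # what the audio editor gives back
    OUT = "camcorder_clean.avi"     # video + cleaned audio, ready for Camtasia

    # 1. Pull the audio track out of the capture; the video stays untouched.
    subprocess.run(["ffmpeg", "-i", SRC, "-vn", "-acodec", "pcm_s16le", RAW_WAV],
                   check=True)

    # 2. (Manual step) run noise reduction on RAW_WAV in the audio editor and
    #    save the result as CLEAN_WAV -- keep the duration identical.

    # 3. Remux: copy the original video stream, swap in the cleaned audio.
    subprocess.run(["ffmpeg", "-i", SRC, "-i", CLEAN_WAV,
                    "-map", "0:v:0", "-map", "1:a:0",
                    "-c:v", "copy", "-c:a", "copy", OUT],
                   check=True)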
Instead I edited everything down in Camtasia, then tried to do an export/process/import pass on the audio. When you export, Camtasia renders the audio based on your edits. Unfortunately it came out a few seconds longer than expected. I think that’s because the differing frame rates for screen video on the one hand, and motion video plus audio on the other, make it hard to keep things in synch. Next time around I’m going to try matching the frame rates to see if that helps.
(In the end I decided it was worth redoing the edit anyway, so I split the AVI file I’d recorded from the camcorder, fixed the audio, imported it back into Camtasia, and redid the relatively few edits I’d made to the PIP video.)
In the past, I’ve done some carefully edited screencasts where things that I say are tightly synched to things happening on screen. (“…when I click on this link, we see that…”) It’s easy to pull that off when you can’t see the speaker, because you can mess with the screen video, or the audio, or both. When you can see the speaker, it’s much harder. Motion video isn’t nearly so forgiving as audio, so you have to do almost all the synch adjustment in the screen video. Or else re-record some or all of the motion video.
To PIP or not to PIP?
Is all this effort worth the trouble? When Scott Hanselman surveyed his readers about screencasts, he asked, among other things, “PIP or no PIP?” More than half agreed with the statement: “Too much PIP (Picture in Picture) video of the presenter is distracting.” And I think that’s true for screencasts that show how to do stuff with software.
When a screencast shows why to do stuff with software, though, I think the talking head may make more sense. Now, my instinct is to be a voice only, as I am on my podcast. But if the screencast is going to represent me at an event, it seems like I should try to project myself there.
More broadly, the topic is something I care about and have struggled to communicate effectively. If this method of presentation works better than others I’ve tried, even if only for some people, then it’s worth doing. My communication kit needs as many tools as I can pack into it.
Now that I’ve knocked the rust off my screencasting skills, I’m looking forward to redoing this video based on feedback. And since it was made for a ten-minute conference slot, I should probably also do some shorter versions that will work in different contexts.
One thing that’s becoming terribly clear: If I want these to make sense to broad audiences, I need to speak plainly and illustrate with simple everyday examples. I’ve been embarrassingly slow to figure that out, but I am learning. In the screencast I just wrapped, which is all about syndication, I never use that word. It’s a start!
when do we get to see it?
It wound up not being shown at the event because of a playback glitch involving the audio track.
However, I was later able to upload it to blip.tv where it plays back ok:
http://blip.tv/dashboard/episode/3761715
Feedback welcome. It’s long (10min) and I know I’ll probably need versions at 5 and maybe even 2 or 3 min.
But the main thing, in all versions, will be to tell the story in a way that doesn’t make non-geek eyes glaze over.
I’d prefer to have a crossfade between video commentary and the screencast. That is, when there’s a long exposition, show the video; when you need to demonstrate on the computer, fade to the screencast with audio.
PIP adds no value for me, as I either want to watch what’s happening in the screencast, or actually see the speaker’s face as he talks. Both are worse with PIP.
I agree. When I redo this, as I’ll need to do for other reasons anyway, that’s how I want to do it.
How does one get authorized to view that blip.tv clip? I’m logged into blip.tv, but it appears I need further authorization to view the video. :-(