I’m deeply fascinated by software instrumentation in all its varieties. When Scott Dart told me that he had hard data on how many people are using the tagging features in Photo Gallery, I wanted to know how. The answer is SQM, which is pronounced “squim” and which expands to Software Quality Metrics.
According to Partha Sundaram, my guest for today’s podcast, SQM was formerly used on a per-application basis, but is now, in Vista, also a piece of core infrastructure that can be used to analyze how the operating system itself is being used in the field. He reviews the current use of SQM in Vista, and some future goals for the technology.
One of those goals is to make it more obvious, to customers who have given consent to the anonymized and aggregated collection of their data, what’s been learned from that data, and how that knowledge has been used to improve the software.
That scenario is outside SQM’s scope, though, Partha says, and would require a style of data collection that’s way more granular than what SQM is designed for. You could potentially use SQM to find out how often tags are renamed — in itself an interesting question — but not what the tags were changed from or to.
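The distinction matters: counting that an event happened is a very different instrument from recording what the event contained. As a minimal sketch (in Python, purely illustrative — this is not SQM's actual API, and the names are invented), here's what payload-free event counting looks like:

```python
from collections import Counter

class EventLog:
    """Counts named events without recording any event payloads."""
    def __init__(self):
        self.counts = Counter()

    def record(self, event_name):
        # Only the event name is kept; arguments such as the old and
        # new tag values are never stored, so content stays private.
        self.counts[event_name] += 1

log = EventLog()

def rename_tag(old, new):
    log.record("tag.rename")   # count that a rename happened...
    # ...but deliberately do not log `old` or `new` anywhere.
    return new

rename_tag("vacation", "travel")
rename_tag("travel", "trips")
print(log.counts["tag.rename"])  # 2
```

The aggregate answers "how often are tags renamed?" while the tag text itself never leaves the user's machine.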
Privacy advocates will probably be relieved to know that. And indeed the whole idea of user-defined instrumentation might seem rather esoteric. But I’ll argue that it really isn’t. In my talk with Mary Czerwinski, for example, Mary noted that by using another internal logging tool she’d found she was spending almost two-thirds of her time in email; she resolved to change that, and she succeeded. For the many people who subscribe to the Getting Things Done methodology, it would be a boon to be able to ask and answer questions about personal habits of communication and information management.
To that end, you’d want a system-wide framework with which to define meaningful events and analyze them. Of course you couldn’t just watch events rattling around within Windows. You’d also want to insert a probe into your network connection so that you could watch, and correlate, events traveling across HTTP, SMTP, and other Internet connections.
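To make the correlation idea concrete, here's a hypothetical sketch in Python (the `EventBus` and its `correlate` grouping are my own invention, not any real framework): events from different sources — the OS, an HTTP probe, an SMTP probe — land on one timeline, and events that cluster in time get grouped together.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    source: str       # e.g. "windows", "http", "smtp"
    name: str
    timestamp: float

@dataclass
class EventBus:
    events: list = field(default_factory=list)

    def emit(self, source, name, timestamp=None):
        self.events.append(Event(source, name, timestamp or time.time()))

    def correlate(self, window=1.0):
        """Group events (from any source) that occur within `window` seconds."""
        groups, current = [], []
        for ev in sorted(self.events, key=lambda e: e.timestamp):
            if current and ev.timestamp - current[-1].timestamp > window:
                groups.append(current)
                current = []
            current.append(ev)
        if current:
            groups.append(current)
        return groups

bus = EventBus()
bus.emit("windows", "file.save", 100.0)
bus.emit("http", "POST /upload", 100.4)   # within 1s of the save -> same group
bus.emit("smtp", "message.sent", 205.0)   # much later -> its own group
print(len(bus.correlate()))  # 2
```

A real system would need smarter correlation than a fixed time window, but even this crude grouping shows how a save in Windows and an upload over HTTP could be recognized as one user action.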
Given privacy concerns, this whole notion would be a tough sell to say the least. But you can’t improve what you can’t measure. If we want to make software better, we’ll need more and better software instrumentation.
3 thoughts on “A conversation with Partha Sundaram about software instrumentation”
When you speak about granularity, I think you’re making a good case for dynamic languages’ capabilities here, especially pure OOP ones like Smalltalk. Next time you talk with Avi Bryant, ask to see his Seaside web platform … with profiling turned on. Since everything is an object, and Smalltalk’s built-in execution-time profiling works on all objects, all Avi’s team had to do was expose the built-in object profiling, and he gets per-page web-server profiling for free that he can toggle on and off anytime while the application is running. The same profiling is easy to expose to utilizers of native OS GUI apps as well. Why shouldn’t application utilizers have access to some/many/all of a programmer’s tool set and/or allow them or trusted friends to build/share more elegant tools built on top of the programmers’ tools? Yahoo Pipes would be so much more fun if it worked on all objects exposed in all apps. ;-)
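[The toggle-while-running idea the commenter describes isn’t unique to Smalltalk. As a rough analogue — a sketch, not Seaside’s actual mechanism — Python’s standard `cProfile` can be wrapped so profiling switches on and off in a live process:]

```python
import cProfile
import io
import pstats

class ToggleableProfiler:
    """Profiling that can be switched on and off while the app runs."""
    def __init__(self):
        self._profiler = None

    def toggle(self):
        if self._profiler is None:
            # First call: start collecting timing data.
            self._profiler = cProfile.Profile()
            self._profiler.enable()
            return None
        # Second call: stop collecting and return a text report.
        self._profiler.disable()
        stream = io.StringIO()
        pstats.Stats(self._profiler, stream=stream) \
              .sort_stats("cumulative").print_stats(5)
        self._profiler = None
        return stream.getvalue()

profiler = ToggleableProfiler()
profiler.toggle()                      # turn profiling on
sum(i * i for i in range(100_000))     # some work to measure
report = profiler.toggle()             # turn it off and get a report
print("function calls" in report)      # True
```

[What Smalltalk gets for free, of course, is that this works uniformly on every object in the image rather than needing to be wired in per application.]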
Come to think of it, you can ask Avi how easy it would be for him to collect, aggregate, and publish/syndicate users’ text/tag/widget/control use info in his Squeak/Smalltalk platform.