In response to some questions about creating standalone HealthVault applications, Eric Gunnerson responds on the HealthVault blog:
With client applications, we can verify what user account is being used, but we can’t verify which application they’re using. Given the importance of maintaining the privacy of health data, that makes us concerned.
There are, of course, cryptographic protocols that could be used to verify a client application. And the kinds of folks who read this blog are among the most likely to be able to make reliable use of those protocols. But I can appreciate the dilemma. The archetypal user of HealthVault is a mom who functions as a family's health manager. How are you going to walk her through the protocols necessary to assure that a client application she downloads from the Net is properly certified for use with HealthVault? A screwup isn't just her problem, of course. It's a big-time problem for HealthVault. Eric concludes:
In the longer term, it may be possible to construct an application verification that is sufficiently trustworthy to grant access similar to what web applications get.
Does anyone think this problem is more tractable in the near term? If so, how?
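To make the user's burden concrete, the simplest form of client-side verification is comparing the digest of a downloaded installer against a digest published by the vendor. A minimal sketch in Python (the digest value and the check itself are illustrative, not part of any HealthVault protocol) shows how little machinery is involved, and how completely the scheme depends on the user doing every step correctly:

```python
import hashlib

# Hypothetical published digest for the installer, e.g. copied from the
# vendor's HTTPS download page (the name and value are illustrative only;
# this particular value is the SHA-256 of the bytes b"test").
PUBLISHED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def installer_is_authentic(path: str) -> bool:
    """Hash the downloaded file and compare it to the published digest.

    This is the manual check a user would have to perform correctly for
    client-side verification to mean anything. A single mistake -- trusting
    a digest from a spoofed page, or skipping the check -- defeats it.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == PUBLISHED_SHA256
```

The mechanics are trivial for a developer; the open question in the post is whether any packaging of this ritual is usable, and abuse-resistant, for everyone else.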
There is, meanwhile, this interesting twist:
In the short term, we are considering allowing partners to build client applications that only have write privs – applications could use them to add data to HealthVault, but wouldn’t be able to read any data (an interesting case where write access is less privileged than read access). This would allow developers to write applications such as data importers.
A curious inversion indeed. HealthVault is going to create all kinds of fascinating thought experiments.
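To see the inversion in code, here is a minimal sketch of a token scoped so that writes succeed while reads are refused. The names and structure are my own invention for illustration, not the HealthVault API:

```python
# Illustrative only: these classes are invented, not HealthVault's API.
class ScopedToken:
    """An access token carrying independent read and write privileges."""
    def __init__(self, can_read: bool, can_write: bool):
        self.can_read = can_read
        self.can_write = can_write

class HealthRecord:
    """A record store that checks token scope on every operation."""
    def __init__(self):
        self._items = []

    def add_item(self, token: ScopedToken, item: dict) -> None:
        # A write-only importer can append new data...
        if not token.can_write:
            raise PermissionError("token lacks write privilege")
        self._items.append(item)

    def get_items(self, token: ScopedToken) -> list:
        # ...but any attempt to read existing data back is refused.
        if not token.can_read:
            raise PermissionError("token lacks read privilege")
        return list(self._items)

# A data importer would be issued a write-only token: it can contribute
# records but can never see what is already in the vault.
importer_token = ScopedToken(can_read=False, can_write=True)
```

The design choice mirrors the post's observation: since the privacy risk lies in data escaping, an unverified client that can only deposit data exposes nothing even if it turns out to be malicious.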
5 thoughts on “Can mom verify a HealthVault application?”
There was a fascinating series on CIO.com recently about the increasingly sophisticated service economy behind malware, and how it’s almost impossible for banks to protect against exploits running on infected machines. Key quote: “In the next generation, we will all do business with infected end points”.
But the first Immutable Law of (Computer) Security is: "If a bad guy can persuade you to run his program on your computer, it's not your computer anymore."
Of course you can try embedding "trusted computing" in the hardware, but that simply brings up the question of who do you trust?
"Of course you can try embedding "trusted computing" in the hardware, but that simply brings up the question of who do you trust?"
That movie is beautifully done. But at the end we’re back to the same place. How do we give mom a reasonable assurance that her family’s health data will not be phished?
You don’t and can’t.
We had fraud and falsification in the past, before computers, and we'll have it in the future. If people can make money by subverting things, things will be subverted. You can make it harder by adding encryption and trust mechanisms (PKI, Trusted Computing, HTTPS, AES), but in the end you have to draw a line somewhere (app, OS, BIOS, TPM chip, Marine with shiny shoes and a .45 ACP), and that's where the attacks will happen.