Kim Cameron had the same reaction to the Sierra affair as I did: Stronger authentication, while no panacea, would be extremely helpful. Kim writes:
Maybe next time Allan and colleagues will be using Information Cards, not passwords, not shared secrets. This won’t extinguish either flaming or trolling, but it can sure make breaking in to someone’s site unbelievably harder.
Commenting on Kim’s entry, Richard Gray (or, more precisely, a source of keystrokes claiming to be one of many Richard Grays) objects on the grounds that all is hopeless so long as digital and real identities are separable:
For so long identity technical commentators have pushed the idea that a person’s digital identity and their real identity can be tightly bound together; then suddenly, when the weakness is finally exposed, everyone once again is forced to say ‘This digital identity is nothing more than a string puppet that I control. I didn’t do this thing, some other puppet master did.’
Yep, it’s a problem, and there’s no bulletproof solution, but we can and should make it a lot harder for the impersonating puppet master to seize control of the strings.
Elsewhere, Stephen O’Grady asks whether history (i.e., a person’s observable online track record) or technology (i.e., strong authentication) is the better defense.
My answer to Stephen is: You need both. I’ve never met Stephen in person, so in one sense, to me, he’s just another source of keystrokes claiming to represent a person. But behind those keystrokes there is a mind, and I’ve observed the workings of that mind for some years now, and that track record does, as Stephen says, powerfully authenticate him.
“Call me naive,” Stephen says, “but I’d like to think that my track record here counts for something.”
Reprising the comment I made on his blog: it counts for a lot, and I rely on mine in just the same way, for the same reasons. But: counts for whom? Will the millions who were first introduced to Kathy Sierra and Chris Locke on CNN recently bother to explore their track records and reach their own conclusions?
More to the point, what about Alan Herrell’s1 track record? I would be inclined to explore it but I can’t, now, without digging it out of the Google cache.
The best defense is a strong track record and an online identity that’s as securely yours as is feasible.
The identity metasystem that Kim Cameron has been defining, building, and evangelizing is an important step in the right direction. I thought so before I joined Microsoft, and I think so now.
It’s not a panacea. Security is a risk continuum with tradeoffs all along the way. Evaluating the risks and the tradeoffs, in meatspace or in cyberspace, is psychologically hard. Evaluating security technologies, in both realms, is intellectually hard. But in the long run we have no choice: we have to deal with these difficulties.
The other day I lifted this quote from my podcast with Phil Libin:
The basics of asymmetric cryptography are fundamental concepts that any member of society who wants to understand how the world works, or could work, needs to understand.
When Phil said that, my reaction was, “Oh, come on, I’d like to think that could happen but let’s get real. Even I have to stop and think about how that stuff works, and I’ve been aware of it for many years. How can we ever expect those concepts to penetrate the mass consciousness?”
At 21:10-23:00 in the podcast2, Phil answers in a fascinating way. Ask twenty random people on the street why the government can’t just print as much money as it wants, he said, and you’ll probably get “a reasonable explanation of inflation in some percentage of those cases.” That completely abstract principle, unknown before Adam Smith, has sunk in. Over time, Phil suggests, the principles of asymmetric cryptography, as they relate to digital identity, will sink in too. But not until those principles are embedded in common experiences, and described in common language.
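To make the mechanics a little less abstract, here is a minimal sketch of that asymmetry in Python, using the third-party cryptography package. The names and messages are my own illustrations, not anything Phil described: one keypair, where anyone can lock a message with the public half but only the private half can unlock it.

    # A minimal sketch of the asymmetric idea, using the third-party Python
    # "cryptography" package (pip install cryptography). Names and messages
    # here are illustrative only.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # One keypair: the private half stays with its owner, the public half is published.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone can lock a message with the public key ...
    ciphertext = public_key.encrypt(b"for the keyholder's eyes only", oaep)

    # ... but only the holder of the private key can unlock it.
    assert private_key.decrypt(ciphertext, oaep) == b"for the keyholder's eyes only"

    # The reverse direction -- only the private key can sign, anyone can verify --
    # is the part that matters most for digital identity; see the signing sketch below.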
1 In various blog postings I have seen this name spelled Alan Herrell, Allan Herrell, and Allen Herrell. I presume the first spelling is correct, because it returns orders of magnitude more search hits. In principle, the various people who share each of these spellings could claim their unique identities by declaring biographical details about themselves (“I am the author of _____,” “I worked for _______”) and digitally signing those declarations. In practice nobody does, yet, but it’s starting to become clear why we’d want to.
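To make that footnote concrete, here is a rough sketch of such a signed declaration, again in Python with the cryptography package. The claim text and blog address are hypothetical, and this is my illustration rather than any existing service:

    # Sketch: signing a biographical claim so that only the holder of one
    # particular private key could have produced it. The claim text is
    # hypothetical; the library is the third-party "cryptography" package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    my_key = Ed25519PrivateKey.generate()        # stays private
    my_public_key = my_key.public_key()          # published alongside my blog

    claim = b"I am the Alan Herrell who writes the blog at example.org"  # hypothetical
    signature = my_key.sign(claim)

    # Anyone holding the public key can check the claim ...
    my_public_key.verify(signature, claim)       # passes silently

    # ... and any tampering, or a signature made with some other key,
    # fails verification.
    try:
        my_public_key.verify(signature, b"I am a different Alan Herrell")
    except InvalidSignature:
        print("signature does not match this text")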
The other biggie about asymmetric cryptography is that signed things (log-on entries, documents, posts) become highly non-repudiable. You may not know who did it, but you can be confident that it was done by the possessor of the secret key. If someone allows that key to stop being secret and fails to report it, or willingly shares the key, there will be harder-to-escape consequences.
It is the non-repudiation aspect that allowed digital signatures to be accepted as legal under appropriate conditions. That will also make life harder, along with the measures that make such keys harder to lose.
Oh, and I completely agree that it will take a long time. For one thing, developers of web sites and other on-line settings are not yet holding up their end in a consistent way, and it may take a while before sites stop externalizing the costs of their defective security approaches.
“You may not know who did it, but you can be confident that it was done by the possessor of the secret key. If someone allows that key to stop being secret and fails to report it, or willingly shares the key, there will be harder-to-escape consequences.”
Yes, thanks for pointing that out. An evil puppet master who controls your secret key is a fearsomely powerful opponent. And we have, as yet, little to no experience with those harder-to-escape consequences. But sooner or later we will have to grapple with them, and there will be ways to do it. Actions never occur in a vacuum; there’s always context.
To your point about sharing of keys: physical tokens may be a helpful way to discourage that behavior. At least, that’s what Denise Anthony’s research at Dartmouth suggests. I mentioned it here — http://www.infoworld.com/article/04/07/30/31OPstrategic_1.html — and it also came up in my podcast with Barry Ribbeck — http://blog.jonudell.net/2007/03/09/a-conversation-with-barry-ribbeck-about-digital-identity-in-higher-education/
“The best defense is a strong track record and an online identity that’s as securely yours as is feasible.”
agreed. track records are an imperfect defense here, given examples such as the ones cited. not to mention the drive-bys that will reach quick and lasting conclusions on the basis, very often, of a single entry.
unfortunately, imperfect as it may be, i’d maintain it’s the best defense in that it’s the only one currently available ;)
but i’ll be the first to sign up for strong authentication technologies as they’re made available. i just don’t have the same faith that phil does in the appetite for a solution, at least in a near term timeframe. i think we’ll have to wait either for a generational shift, or some dramatic rise in identity related issues.
Jon,
As you don’t have CardSpace enabled here, you can’t actually verify that I am the selfsame Richard from Kim’s blog. However, in a satisfyingly circular set of references, I imagine that what follows will serve to authenticate me in exactly the manner that Stephen described. :)
I’m going to mark a line somewhere between the view that reputation will protect us from harm and the view that the damage that can be done will be reversible. Reputation is a great authenticating factor; indeed, it fits most of the requirements of an identity. It’s trusted by the recipient, it requires lots of effort to create, and it’s easy to test against. Amongst people who know each other well it’s probably the source of information that is relied upon the most. (“That doesn’t sound like them” is a common phrase.)
However, this isn’t the way that our society appears to work. When my wife reads the celebrity magazines, she is unlikely to rely on reputation as a measure of their actions. Worse than this, when she does use reputation, it is built from a collection of previous celebrity offerings.
To lay it out simply: no matter who should steal my identity (phone, passwords, etc.), they would struggle to damage my relationship with my current employer, as they know me and have a reputation to authenticate my actions with. They could do a very good job of destroying any hope I have of getting a job anywhere else, though. Regardless of the truth, I would be forced to explain myself at every subsequent meeting. The public won’t have done the background checks; they’ll only know what they’ve heard. Why would they take the risk and employ me? I *might* be lying.
Incredibly, the private reputation that Allen has built up (and that Stephen and the rest of us rely on) has probably helped to save a large portion of his public reputation. Doing a Google search for “Allen Herrell” doesn’t find netizens baying for his blood; it finds a large collection of people who have rallied behind him to declare ‘He would not do this’.
Now what I’m about to say is going to seem a little crazy but please think it through to the end before cutting it down completely. So long as our online identities are fragile and easily compromised, people will be wary of trusting them. If we lower the probability of an identity failing, people will, as a result, place more faith in that identity. But if we can’t reduce the probability of failure to zero, then when some poor soul suffers the inevitable failure of their identity, so many more people will have placed faith in it that undoing the damage may be almost impossible. It would seem, then, that the unreliability of our identity is in fact our last line of defence.
My point, then, is that while it is useful to spend time improving authentication schemes, perhaps we are neglecting the importance of non-repudiation within the system. If it were impossible for anyone other than me to communicate my password string to an authentication system, then that password would be fine for authentication, and it wouldn’t even be necessary to encrypt the text wherever it was stored!
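Roughly speaking, public-key challenge-response gets close to that hypothetical: the server stores no reusable secret at all, and nothing secret ever crosses the wire. Here is a minimal sketch of that idea (my own illustration, in Python with the third-party cryptography package, not CardSpace or any deployed protocol):

    # Sketch of challenge-response login with an asymmetric keypair: the server
    # stores only a public key, and the private key never leaves the client.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Client side: the private key stays on the user's machine.
    client_key = Ed25519PrivateKey.generate()

    # Server side: stores only the public key -- there is no password table to steal.
    server_record = client_key.public_key()

    # Login: the server issues a fresh random challenge (a nonce) ...
    challenge = os.urandom(32)

    # ... the client signs it ...
    response = client_key.sign(challenge)

    # ... and the server verifies the signature against the stored public key.
    try:
        server_record.verify(response, challenge)
        print("authenticated: only the private-key holder could have signed this")
    except InvalidSignature:
        print("rejected")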
“Now what I’m about to say is going to seem a little crazy but please think it through to the end before cutting it down completely.”
There’s no reason whatsoever to cut it down. Your point is an excellent one, in line with orcmid’s point about “harder-to-escape” consequences.
As we all seem to agree, reputation is and will remain critical. Going forward, we’ll need to hold two seemingly contradictory ideas in our heads at the same time. First, that online identity can be a more certain construct than it is today. But second, that we must be prepared to doubt it when circumstances warrant, and use other mechanisms to triangulate on identity.
The problem is that none of this has ever really been tested. I’d like to think that a failure of digital identity would trigger other real-world mechanisms which, in turn, could help diagnose the failure and help improve the digital identity infrastructure. But until we do the experiment we’ll never know.
We can already do this with FOAF. Edd Dumbill even has a page on signing FOAF files.
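I haven’t re-checked Edd’s exact recipe (as I recall it is PGP-based), but the general shape is a detached signature over the FOAF file itself. A rough Python sketch of that shape, with the cryptography package and illustrative filenames, not his method:

    # Sketch: a detached signature over a FOAF file, so anyone with the
    # published public key can check that the file hasn't been altered.
    # Filenames and file content are illustrative only.
    from pathlib import Path
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # A stand-in FOAF file.
    Path("foaf.rdf").write_text(
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"/>')

    key = Ed25519PrivateKey.generate()

    # Detached signature: the FOAF file is untouched, the signature travels alongside it.
    Path("foaf.rdf.sig").write_bytes(key.sign(Path("foaf.rdf").read_bytes()))

    # Verification, by anyone who holds the published public key.
    key.public_key().verify(Path("foaf.rdf.sig").read_bytes(),
                            Path("foaf.rdf").read_bytes())
    print("foaf.rdf matches its signature")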