If you received an email message from me during the early 2000s, it came with an attachment that likely puzzled or annoyed you. The attachment was my digital ID. In theory you could use it for a couple of purposes. One was to verify that I was the authentic sender of the message, and that the content of my message had not been altered en route.
You could also save my public key and then use it to send me an encrypted message. During the years I was routinely including my digital ID in outbound messages I think I received an encrypted reply once. Maybe twice.
I’ve always thought that everyone should have the option to communicate securely. There was a time when an ordinary person had little chance of figuring out how to do it. Even for me, as a tech journalist who had learned both the theory and practice of secure communication, it was a challenge to get things working. And when I did, who could I talk to? Only someone else who’d traveled the same path. The pool of potential communication partners was too small to matter.
But during the 2000s I hoped for, and then encouraged, developments that promised to democratize private communication. Mainstream email software implemented the relevant Internet standards and integrated the necessary encryption tools. Now if you and I wanted to communicate securely we could just tick some options in our email programs.
But it still hardly ever happened. Why not? It comes down to a question of defaults. In order to make use of the integrated encryption tools you needed a digital ID. The default was that you didn’t have one. And that’s still the default. You have to go out of your way to get a digital ID. You have to alter the default state of your system, and that’s something people mostly won’t do.
Broadly there are two kinds of secure communication. One kind is implemented in programs like Apple’s Mail and Microsoft’s Outlook. (You likely didn’t know that, and almost surely have never used it, but it’s there.) This kind of secure communication relies on a hierarchical system of trust. To use it you acquire a digital ID issued by, and backed by, some authority. It could be a government, it could be a commercial provider; in practice it’s usually the latter. Your communication software is configured to trust certain of these providers. And to use it you must trust those providers too.
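You can see the hierarchical model baked into an off-the-shelf system. Here’s a minimal Python sketch (standard library only; it inspects the TLS trust store rather than an S/MIME one, but the certificate-authority hierarchy is the same idea): a default security context arrives preloaded with the authorities your operating system has already decided you trust.

```python
import ssl

# A default context comes preconfigured with the certificate
# authorities the operating system (or Python build) trusts.
# You never chose these providers -- they are the default.
ctx = ssl.create_default_context()
ctx.load_default_certs()

# Each entry describes one trusted authority's certificate.
authorities = ctx.get_ca_certs()
print(f"Authorities trusted by default: {len(authorities)}")
```

The point of the sketch is simply that the trust decisions were made for you, upstream, before you ever sent a message.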
Another kind of secure communication relies on no higher authority. Instead communication partners trust one another directly, and exchange their digital IDs in pairwise (peer-to-peer) fashion. Among systems that use this approach, PGP (Pretty Good Privacy) is most notable. Another, now discontinued, was Groove.
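The peer-to-peer model replaces the authority with an out-of-band check between the two partners. A minimal Python sketch of the idea (standard library only; the key bytes here are placeholders, not a real key): each party computes a short fingerprint of the other’s public key and they compare fingerprints over a separate channel, such as a phone call, the way PGP users traditionally do.

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short hex fingerprint of a public key, for out-of-band comparison."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group into 4-character chunks so it can be read aloud,
    # in the style of a PGP key fingerprint.
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4)).upper()

# Alice sends Bob her key over an untrusted channel (email, a key
# server). Bob then reads the fingerprint back to her by phone.
# A match means the key wasn't swapped in transit -- no authority needed.
alice_key = b"placeholder public key bytes"
print(fingerprint(alice_key))
```

If the fingerprints match, the partners trust each other directly; no government or commercial provider ever enters the picture.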
Much ink has been spilled, and many pixels lit, debating hierarchical/centralized versus peer-to-peer/distributed methods of storing and transmitting data. Of course the definitions of these methods wind up being a bit fuzzy because hierarchical systems can have peer-to-peer aspects and vice versa.
I would bet that Edward Snowden, Laura Poitras, and Glenn Greenwald are using a purely peer-to-peer approach. When the stakes are astronomically high, and when your pool of communication partners is very small, that would be the only way to go. It would be a huge inconvenience. You’d need to massively alter the default state of an off-the-shelf computer to enable secure communication. But there’d be no choice. You’d have to do it.
Could standard systems come with software that communicates securely by default? Yes. Methods based on a hybrid of hierarchical and peer-to-peer trust could be practical and convenient. And they could deliver far better than the level of privacy we now enjoy by default, which is none. Would people want them? Until recently the answer was clearly no. Probably the answer is still no. But now, for the first time in my long experience with this topic, ordinary citizens may be ready to entertain the question. Please do.
7 thoughts on “If we want private communication we can have it”
Hello Jon, I still provide S/MIME and PGP email options although I don’t receive them frequently. Too much friction is required especially on mobile devices.
The closure of two “ultra-private” e-mail services shows just how weak the system really is: https://business.silentcircle.com/articles/08-14-13-mit/
Nobody sane can be bothered to mess with encryption science when all they want to do is send a damn email.
I have written a really simple encryption app, which I would be happy to give away. It simply converts plain text to encrypted 64 bit Triple DES on screen and puts it on the clipboard.
It does the reverse too.
So you can encrypt the content – all or part of it, and anybody with the programme can read it. It works for social networks as well as any blog or plaintext.
It has one weakness, the classic – how to get people to use the right key.
On the other hand, they can configure any key they like, I have no control.
Hey, nice piece. I used to care about privacy-related topics, especially with my communications and social media. Steganography was something I adored; I even wrote a simple program that would encrypt messages, but I never used it… until I asked myself, “Do I need to encrypt my stuff?” The answer was that I realized most of my stuff is just regular stuff, and I wasn’t much worried about whether anyone read it. Still, I see that privacy is of great importance, and to each their own…
Keep up the good work
Encryption only secures the content of the message, not the headers. So recipients, subject and other information remains in plain text. So depending on your definition of ‘secure’, email (even encrypted) is not secure at all.
I agree with this in principle, but telling us that we may “save my public key and then use it to send me an encrypted message” does little good. You need to tell us HOW. Just including a link to instructions for common platforms would do.
Oh it’s much worse than that. The logistics of secure email are famously obtuse and have not improved in a decade. Because, chicken-and-egg, nobody ever had a reason to use secure mail. I’m not saying that current events will necessarily provide the reason, but if that were to happen, things would get fixed in a hurry.
I’d like to set up encrypted email between a friend and me. We both use Microsoft Outlook at our respective employers. Is there no step-by-step, simple, write-up explaining how to do this?