http://bitcoinism.blogspot.com/2013/09/building-pgp-web-of-trust-that-people.html?m=1
Sunday, September 8, 2013
Building a PGP web of trust that people will actually use
Back in the early 1990s, PGP introduced the concept of the web of trust (WoT) - a means by which users of personal encryption can know whether or not the key they are using actually belongs to the person they want to communicate with. Without some mechanism for verifying the connection between a cryptographic key and an identity, encryption is mostly useless. All an attacker needs to do is insert themselves between the sender and recipient and trick each party into encrypting with the attacker's key instead of the intended recipient's. This is the man-in-the-middle attack, and judging by the actual state of personal encryption as currently deployed, it remains an unsolved problem 22 years after PGP was introduced.
There are two basic ways to solve the problem - first, all users can register with a central authority who vouches for their identity. This is the model used by SSL certificates, and it's worse than useless. Despite the illusion of security they provide, certificate authorities are routinely subverted by governments and other types of criminals, and there is little that a user can do to avoid this weakness inherent to a centralized model.
The web of trust concept is based on the idea of decentralized trust and social networks. Instead of trusting Verisign to validate identities, you validate the identities of the people you know and export this information to a public database. Then you rely on your friends to vouch for the people they know, and on those friends to vouch for still more people, and so on until you can create a trust chain between any two arbitrary identities.
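The chaining step can be made concrete with a small sketch: if we treat each published signature as an edge in a graph, a trust chain is simply a path through that graph. The names and the graph representation below are illustrative only, not part of any PGP implementation.

```python
# A rough sketch of the chaining idea: given a public database of
# "A signed B's key" edges, a trust chain is just a path in that graph.
from collections import deque

def trust_chain(signatures: dict[str, set[str]], me: str, target: str):
    """Breadth-first search from my key to the target key over signature edges."""
    queue, seen = deque([[me]]), {me}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path                      # e.g. me -> friend -> ... -> target
        for nxt in signatures.get(path[-1], set()) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None                              # no chain: the key is unverified

# Example: I signed Alice's key and Alice signed Bob's, so I can reach Bob.
chain = trust_chain({"me": {"alice"}, "alice": {"bob"}}, "me", "bob")
```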
This approach avoids the inherent problems of central authorities, but in practice virtually nobody uses it outside the open source software community, and even there it is hit or miss. The rest of this article discusses two reasons for the failure to deploy this technology, and how to solve them. First, the problems:
- The software tools are hard to use, even for experts. As a result, even people who understand how important it is usually don't bother.
- Usability failures of privacy software should be regarded as just as severe as algorithmic or code failures, because in practice there is no difference between a message sent in cleartext because encryption was too hard and a message decrypted by an adversary because of an implementation flaw.
- Even among the tiny minority of users who bother to encrypt at all, and the still tinier minority of those who bother to sign keys, almost nobody agrees on what it actually means to sign a key.
- The definition of "identity" in a cryptographic sense does not directly map to how our brains naturally process it, and this impedance mismatch has never been addressed successfully.
- Most users of personal encryption can't explain what they are actually verifying when they sign another person's public key.
What is Identity?
The stereotypical way that PGP users build out the WoT is via a key signing party: a group of people meet in person, typically at a software conference, and exchange public keys. They then sign the public keys they collect and (hopefully) remember to upload those signatures to key servers where they can be used by others. The amount of identity verification applied is highly inconsistent. Some people might verify the government-issued ID card of the person handing them a key (or a key fingerprint); others might just blindly sign anything that gets handed to them. Most frequently of all, however, the key signing party never happens at all.
If we assume the purpose of a WoT is to unambiguously and unimpeachably map public keys to human beings, there are two ways in which the typical key signing party fails.
- The mere presentation of a public key or a key fingerprint does not prove that the person delivering it actually controls the associated private key. Ownership can only be proved if the person can sign, on the spot, data which could not have been predicted ahead of time (see the sketch after this list).
- Government-issued ID cards are useless when it comes to what we actually mean when we talk about identity. For example, I could meet someone at a key signing party with a valid government-issued ID card containing the name "Linus Torvalds". In principle, I could meet an arbitrarily high number of unique people all sharing that same name. They won't all be the Linus Torvalds, though.
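To make the first point concrete, here is a minimal sketch of an on-the-spot proof of key ownership. It assumes Ed25519 keys handled through the Python cryptography package; the flow (fresh random challenge, sign, verify) is the standard challenge-response idea rather than anything specified by PGP.

```python
# Minimal challenge-response sketch: proving control of a private key
# by signing unpredictable data on the spot.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

def make_challenge() -> bytes:
    """The verifier picks data the key holder could not have predicted."""
    return os.urandom(32)

def prove(private_key: Ed25519PrivateKey, challenge: bytes) -> bytes:
    """The claimed key owner signs the fresh challenge on the spot."""
    return private_key.sign(challenge)

def verify(public_key: Ed25519PublicKey, challenge: bytes, signature: bytes) -> bool:
    """Anyone holding the public key can check the proof."""
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

# Example: Alice proves she controls the private half of the key she handed over.
alice_key = Ed25519PrivateKey.generate()
challenge = make_challenge()
assert verify(alice_key.public_key(), challenge, prove(alice_key, challenge))
```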
Identity, as we humans understand it, is a set of shared experiences. We don't know our friends by a set of characters printed on a piece of plastic; we know our friends by the past interactions (direct or otherwise) we've had with them.
If I want to send an encrypted email to Linus Torvalds, and if I want to use some kind of public database to help me make sure I'm using the right key for encryption, the actual question I want the database to answer is not, "Does the owner of this public key possess an ID card containing the name Linus Torvalds?" The actual question I want answered is, "Is the owner of this public key the inventor of the Linux kernel?"
The signatures that form the basis of the existing WoT are thus useless because they don't certify the right data - the data that forms the basis for how we actually understand identity. Before we can have an effective WoT, one that normal people are willing to use, we first need a well-defined method of representing identity that matches our intuitive understanding.
Getting Identity Right
A successful WoT must be built very much like a social networking site, because that's how we obtain the shared experience information for certification, and that's the model that hundreds of millions of people all over the world are already comfortable using.
We also need to take advantage of mobile computing technology. Secure key exchange has to occur through tamperproof channels, and there's no way to achieve that in practice except in person. 22 years ago nobody had a smartphone and not many had laptops, but now enough people own smartphones that our key exchange protocols can rely on their capabilities.
The rest of this proposal assumes that we can trust the hardware we own. This is a known-false assumption, and an urgent problem, but solving it is something that will have to be handled via other efforts.
Given a more thorough understanding of the nature of identity, and with the understanding that the protocol must prioritize usability at least as much as cryptographic integrity, let's approach the problem by building an enjoyable social networking game that just happens to build a secure WoT as a side effect.
Imagine a social networking site called "iMet". Users register an account and fill out facts about themselves. The facts could be serious, like the kind you'd put on LinkedIn, or frivolous, like most Facebook posts. Users "friend" other users by meeting them in person and using a smartphone app to certify the meeting. They are then presented with a list of facts about the person they just met, each of which can be answered as true, false, or unsure. Their scores go up based on the number of people they meet and the accuracy of their answers. Users can also compete with their friends to obtain the shortest path to famous or otherwise noteworthy people. Properly implemented, this application would be sufficiently fun and compelling that people would participate for its own sake, without needing to care about cryptography.
Behind the scenes, however, these interactions can be leveraged to build a secure WoT. When users "iMeet" with their smartphones they are actually performing a secure key exchange over NFC or camera/QR code, whichever is available and most convenient.
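As a rough illustration of what might actually cross the NFC or QR channel during an "iMeet", here is a hypothetical payload: the raw public key plus a short fingerprint the receiving phone can recompute and display for a quick visual check. The field names, and the use of Ed25519 via the Python cryptography package, are assumptions made for the sketch, not a defined format.

```python
# Hypothetical "iMeet" exchange payload, shown as a QR code or sent over NFC.
import hashlib, json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def exchange_payload(private_key: Ed25519PrivateKey, display_name: str) -> str:
    raw = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return json.dumps({
        "name": display_name,                                 # what the other user sees
        "pubkey": raw.hex(),                                  # the key being exchanged
        "fingerprint": hashlib.sha256(raw).hexdigest()[:16],  # recomputed and shown on receipt
    })
```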
The specific facts are represented as text strings. When users answer questions about other users, their clients sign a (key id, string, ACK/NACK, date) tuple. These tuples are publicly searchable and can be used by PGP clients for WoT calculations.
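A minimal sketch of what such a certification record might look like, again assuming Ed25519 signing via the Python cryptography package; the JSON encoding and field names are illustrative, since the article does not define a wire format.

```python
# Sketch of a signed (key id, fact string, ACK/NACK, date) certification tuple.
import json
from datetime import date
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def certify_fact(signer_key: Ed25519PrivateKey, subject_key_id: str,
                 fact: str, ack: bool) -> dict:
    tuple_bytes = json.dumps({
        "key_id": subject_key_id,          # whose fact is being certified
        "fact": fact,                      # the text string being vouched for
        "verdict": "ACK" if ack else "NACK",
        "date": date.today().isoformat(),  # when the certification was made
    }, sort_keys=True).encode()
    return {
        "tuple": tuple_bytes.decode(),
        "signature": signer_key.sign(tuple_bytes).hex(),  # published for WoT calculations
    }

# Example: I met someone and confirm one fact about them as true.
record = certify_fact(Ed25519PrivateKey.generate(),
                      subject_key_id="0xDEADBEEF",
                      fact="invented the Linux kernel", ack=True)
```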
Usability details
Anyone who has ever tried to put user-supplied information into a database knows that regular people are terrible at structuring data. This protocol allows for arbitrary user-supplied text strings, but an actual implementation should go to great lengths to sanitize its inputs first. For example, the UI should ask for common facts, such as birth date, and format them in an agreed-upon way.
Another problem is that most people don't understand the difference between time-variant and time-invariant facts. My date of birth is time-invariant; most of the other facts that form my identity are not. The text strings should be formatted in a way that lets time-variant facts be sanely represented and verified. The UI should not accept a "street address" fact without an associated date range, and when I certify my friend's street address, by default my signature should be interpreted to mean "true as of the date of the signature" unless I specify otherwise. Clients parsing the public database must be able to intelligently handle the time element of truth.
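One possible way to handle the time element is to carry an explicit validity range inside the fact string itself, with "true as of the signing date" as the default interpretation. The format below is purely illustrative.

```python
# Illustrative encoding for time-variant facts with an explicit validity range.
from datetime import date
from typing import Optional

def fact_string(kind: str, value: str,
                valid_from: date, valid_to: Optional[date] = None) -> str:
    """Time-invariant facts (e.g. birth date) can omit the range entirely;
    time-variant ones like a street address must carry one."""
    until = valid_to.isoformat() if valid_to else "present"
    return f"{kind}={value} [{valid_from.isoformat()}..{until}]"

# A street address is only claimed for an explicit period ...
addr = fact_string("street_address", "12 Example Lane", valid_from=date(2012, 1, 1))
# ... and a certification of it defaults to "true as of the signing date",
# which a client parsing the public database must weigh accordingly.
```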
Unsolved problems
- Cheating by manually creating signatures without actually meeting in person: is this an actual problem, and if so, how could it be fixed?
- Distribution of data: Will there be just one site handling all this data, or will it be distributed somehow?
- If there are multiple providers, how do you make sure data is globally available?
- If there is a single provider, how do you prevent the CA failure mode?
What do we do with it?
Successfully building a secure, decentralized WoT is just the first step toward building a large number of other secure services. Once secure cryptographic identities exist and are available, the WoT forms a foundation that can be used by other projects:
- Encrypted communications
- Personal clouds
- Website logins
Really, the sky is the limit. We don't yet know what is ultimately possible once a WoT exists, because so far we've never gotten to the point of building one that works for a critical mass of the population. Given recent events, there's never been a better time to do it than now.