
Tag: Web of Trust

Google ventures into public key encryption

Google announced an early prototype of Key Transparency, its latest open source effort to make communications simpler, safer, and more secure for everyone.

The project’s goal is to make it easier for applications and services to share and discover users’ public keys, but it will be a while before it's ready for prime time. Secure communications should be de rigueur, but it remains frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).

Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.

Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys. An audit mechanism keeps the service accountable.

The security benefit is that everyone knows they are seeing the same published key, so any malicious attempt to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new; PGP previously attempted a similar task with its Global Directory.
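
To make the tamper-evidence idea concrete, here is a minimal, self-contained sketch of the general technique transparency logs use: directory entries are leaves of a Merkle tree, and the server must hand the client an inclusion proof that hashes up to the same published root that auditors see. This is not the actual Key Transparency protocol or API; the entry encoding and function names below are made up for illustration.

```python
# Conceptual sketch only: a Merkle-tree inclusion check in the style of
# transparency logs (RFC 6962). Key Transparency's real data structures and
# wire format differ; the entry encoding below is invented for illustration.
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # Domain-separated leaf hash, so a leaf can't be confused with an inner node.
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Recompute the root from a leaf and its audit path.

    `proof` lists (sibling_hash, sibling_is_left) pairs from leaf to root.
    If the result matches the root that auditors and other clients see,
    the server cannot quietly serve this user a different key.
    """
    h = leaf_hash(entry)
    for sibling, sibling_is_left in proof:
        h = node_hash(sibling, h) if sibling_is_left else node_hash(h, sibling)
    return h == root

# Toy directory with two entries.
alice = b"alice@example.com -> ed25519:AAAA..."
bob = b"bob@example.com -> ed25519:BBBB..."
root = node_hash(leaf_hash(alice), leaf_hash(bob))

# Alice's entry verifies against the published root...
assert verify_inclusion(alice, [(leaf_hash(bob), False)], root)
# ...while a swapped-in key does not.
assert not verify_inclusion(b"alice@example.com -> ed25519:EVIL", [(leaf_hash(bob), False)], root)
```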

The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.

Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities had repeatedly issued fraudulent or mistaken certificates. Google seems to be taking a “build it, and maybe applications will come” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in the early stages of design and development. “With this first open source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.

A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.
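
As a rough illustration of the “account update key” idea, an update to an account’s key list could be a signed message that the directory verifies before applying. The payload fields, function names, and encoding below are hypothetical rather than Google’s actual format, and the sketch uses Ed25519 via the third-party `cryptography` package.

```python
# Hypothetical sketch of an "account update key" authorising changes to an
# account's key list. Field names and encoding are invented for illustration;
# requires the third-party `cryptography` package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The account holder generates the update key once; the directory learns the
# public half when the account is registered.
update_key = Ed25519PrivateKey.generate()
update_pub = update_key.public_key()

def sign_key_list(account: str, keys: list[str], revision: int) -> tuple[bytes, bytes]:
    """Serialize a proposed key list and sign it with the account update key."""
    payload = json.dumps(
        {"account": account, "keys": keys, "revision": revision},
        sort_keys=True,
    ).encode()
    return payload, update_key.sign(payload)

def directory_accepts(payload: bytes, signature: bytes) -> bool:
    """The directory applies an update only if the signature checks out."""
    try:
        update_pub.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

payload, sig = sign_key_list("alice@example.com", ["ed25519:AAAA..."], revision=7)
assert directory_accepts(payload, sig)                                   # authorised change
assert not directory_accepts(payload.replace(b"AAAA", b"MALLORY"), sig)  # tampered change
```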

CONIKS is a user client that integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University’s Center for Information Technology Policy and the main author of CONIKS.

For example, Melara and her team are currently integrating CONIKS with Tor Messenger.

CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.

For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.

Google floats prototype Key Transparency to tackle secure swap woes

♪ I've got the key, I've got the secreeeee-eeet ♪ Google has released an open-source technology dubbed Key Transparency, which is designed to offer an interoperable directory of public encryption keys. Key Transparency offers a generic, secure way to discover public keys.

The technology is built to scale up to internet size while providing a way to establish secure communications through untrusted servers.

The whole approach is designed to make encrypted apps easier and safer to use. Google put together elements of Certificate Transparency and CONIKS to develop Key Transparency, which it made available as an open-source prototype on Thursday. The approach is a more efficient means of building a web of trust than older alternatives such as PGP, as Google security engineers Ryan Hurst and Gary Belvin explain in a blog post. Existing methods of protecting users against server compromise require users to manually verify recipients' accounts in person.

This simply hasn't worked. The PGP web-of-trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won't use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge. Key Transparency aims to make the relationship between online personas and public keys "automatically verifiable and publicly auditable" while supporting important user needs such as account recovery. "Users should be able to see all the keys that have been attached to an account, while making any attempt to tamper with the record publicly visible," Google's security boffins explain. The directory will make it easier for developers to create systems of all kinds with independently auditable account data, Google techies add.
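
A minimal sketch of what that self-auditing could look like from the account owner’s side: keep a local record of every key you have published and flag anything the directory returns that you never put there. The file name, record format, and the `served` response below are assumptions for illustration; a real client would also verify the log proofs rather than trust the directory’s answer.

```python
# Sketch of owner-side monitoring: compare the keys a directory serves for my
# account against a local record of keys I actually published. The directory
# lookup itself is out of scope here; `served` stands in for its response.
import json
from pathlib import Path

PIN_FILE = Path("my_published_keys.json")

def load_pinned() -> set[str]:
    """Keys I have knowingly published, recorded locally."""
    return set(json.loads(PIN_FILE.read_text())) if PIN_FILE.exists() else set()

def record_published(key: str) -> None:
    """Add a key to the local record when publishing it."""
    keys = load_pinned()
    keys.add(key)
    PIN_FILE.write_text(json.dumps(sorted(keys)))

def unexpected_keys(served: list[str]) -> list[str]:
    """Keys the directory lists for my account that I never published."""
    pinned = load_pinned()
    return [k for k in served if k not in pinned]

record_published("ed25519:AAAA...")

# A later monitoring run: the directory suddenly lists an extra key.
served = ["ed25519:AAAA...", "ed25519:MALLORY..."]
suspicious = unexpected_keys(served)
if suspicious:
    print("Directory lists keys I never published:", suspicious)
```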

Google is quick to emphasise that the technology is very much a work in progress. "It's still very early days for Key Transparency. With this first open-source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they explain. The project has already involved collaboration with the CONIKS team and Open Whisper Systems, as well as the security engineering teams at Yahoo! and Google. Early reaction to the project from some independent experts, such as Matthew Green, a professor of cryptography at Johns Hopkins University, has been positive. Kevin Bocek, chief cyber-security strategist at certificate management vendor Venafi, was much more sceptical. "Google's introduction of Key Transparency is a 'build it and hope the developers will come' initiative," he said. "There is not the clear compelling event as there was with Certificate Transparency, when the fraudulent issuance of digital [certificates] was starting to run rampant. Moreover, building a database of public keys not linked to digital certificates has been attempted before with PGP and never gain[ed] widespread adoption."

Op-ed: I’m throwing in the towel on PGP, and I work...

Filippo Valsorda is an engineer on the Cloudflare Cryptography team, where he's deploying and helping design TLS 1.3, the next revision of the protocol implementing HTTPS. He also created a Heartbleed testing site in 2014.

This post originally appeared on his blog and is reprinted with his permission.

After years of wrestling with GnuPG with varying levels of enthusiasm, I came to the conclusion that it's just not worth it, and I'm giving up—at least on the concept of long-term PGP keys.

This editorial is not about the gpg tool itself, or about tools at all. Many others have already written about that. It's about the long-term PGP key model—be it secured by Web of Trust, fingerprints or Trust on First Use—and how it failed me.

Trust me when I say that I tried. I went through all the setups. I used Enigmail. I had offline master keys on a dedicated Raspberry Pi with short-lived subkeys. I wrote custom tools to make handwritten paper backups of offline keys (which I'll publish sooner or later). I had YubiKeys. Multiple. I spent days designing my public PGP policy.

I traveled two hours by train to meet the closest Biglumber user in Italy to get my first signature in the strong set. I have a signature from the most connected key in the set. I went to key-signing parties in multiple continents. I organized a couple.

I have the arrogance of saying that I understand PGP. In 2013 I was dissecting the packet format to brute force short IDs. I devised complex silly systems to make device subkeys tie to both my personal and company master keys. I filed usability and security issues in GnuPG and its various distributions.

All in all, I should be the perfect user for PGP: competent, enthusiast, embedded in a similar community. But it just didn't work.

First, there's the adoption issue others talked about extensively. I get, at most, two encrypted e-mails a year.

Then, there's the UX problem: easy crippling mistakes; messy keyserver listings from years ago; "I can't read this e-mail on my phone" or "on the laptop"; "I left the keys I never use on the other machine."

But the real issues, I realized, are more subtle. I never felt confident in the security of my long-term keys.

The more time passed, the more I would feel uneasy about any specific key. YubiKeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in. A long-term key is as secure as the minimum common denominator of your security practices over its lifetime. It's the weak link.

Worse, long-term key patterns, like collecting signatures and printing fingerprints on business cards, discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization. Such practices actually encourage expanding the attack surface by making backups of the key.

We talk about Pets vs. Cattle in infrastructure; those concepts would apply just as well to keys! If I suspect I'm compromised, I want to be able to toss the laptop and rebootstrap with minimum overhead.

The worst outcome possible for a scheme is making the user stick with a key that has a suspicion of compromise, because the cost of rotating would be too high. And all this for what gain? "Well, of course, long-term trust." Yeah, about that.
I never, ever, ever successfully used the WoT to validate a public key.

And remember, I have a well-linked key.
I haven't done a formal study, but I'm almost positive that everyone that used PGP to contact me has, or would have done (if asked), one of the following:

- pulled the best-looking key from a keyserver, most likely not even over TLS
- used a different key if replied with "this is my new key"
- re-sent the e-mail unencrypted if provided an excuse like "I'm traveling"

Travel in particular is hostile to long-term keys, making this kind of fresh start impractical. Moreover, I'm not even sure there's an attacker that long-term keys make sense against. Your average adversary probably can't MitM Twitter DMs (which means you can use them to exchange fingerprints opportunistically, while still protecting your privacy).

The Mossad will do Mossad things to your machine, whatever key you use. Finally, these days I think I care much more about forward secrecy, deniability, and ephemerality than I do about ironclad trust.

Are you sure you can protect that long-term key forever? Because when an attacker decides to target you and succeeds, they won't have access just from that point forward; they'll have access to all your past communications, too.

And that's ever more relevant.

Moving forward

I'm not dropping to plaintext. Quite the opposite.

But I won't be maintaining any public long-term key. Mostly I'll use Signal or WhatsApp, which offer vastly better endpoint security on iOS, ephemerality, and smoother key rotation. If you need to securely contact me, your best bet is to DM me asking for my Signal number.
If needed we can decide an appropriate way to compare fingerprints.
If we meet in person and need to set up a secure channel, we will just exchange a secret passphrase to use with what's most appropriate: OTR, Pond, Ricochet. If it turns out we really need PGP, we will set up some ad-hoc keys, more à la Operational PGP.
Same for any signed releases or canaries I might maintain in the future. To exchange files, we will negotiate Magic Wormhole, OnionShare, or ad-hoc PGP keys over the secure channel we already have.

The point is not to avoid the gpg tool, but the PGP key management model. If you really need to cold-contact me, I might maintain a Keybase key, but no promises.
I like rooting trust in your social profiles better since it makes key rotation much more natural and is probably how most people know me anyway. I'm also not dropping YubiKeys.
I'm very happy about my new YubiKey 4 with touch-to-operate, which I use for SSH keys, password storage, and machine bootstrap.

But these things are one hundred percent under my control.

About my old keys and transitioning

I broke the offline seal of all my keys.
I don't have reason to believe they are compromised, but you should stop using them now. Below are detached signatures for the Markdown version of this document from all keys I could still find. In the coming weeks I'll import all signatures I received, make all the signatures I promised, and then publish revocations to the keyservers.
I'll rotate my Keybase key.

Eventually, I'll destroy the private keys. See you on Signal. (Or Twitter.)

Giving up on PGP.md
Giving up on PGP.md.B8CC58C51CAEA963.asc
Giving up on PGP.md.C5C92C16AB6572C2.asc
Giving up on PGP.md.54D93CBC8AA84B5A.asc
Giving up on PGP.md.EBF01804BCF05F6B.asc [coming once I recover the passphrase from another country]

Note: I expect the "Moving forward" section to evolve over time, as tools come and go.

The signed .md file won't change; an unauthenticated .diff will appear below for verification convenience.

PGP admins: Kill short keys now, or Alice will become Chuck

Someone's impersonating the likes of Linus Torvalds with attacks via keyservers

The issue of short PGP key IDs is back on the agenda, with unknown scammers spoofing identities like Linus Torvalds and Tor core developer Isis Agora Lovecruft. Short key IDs are just what the name describes: instead of someone passing their whole PGP fingerprint to someone else to get a message going, people would memorise the last eight hex characters of the full fingerprint. Hence, as explained back in March by Debian dev Gunnar Wolf, Alice might give Bob her short key ID (Wolf's is C1DB 921F), and Bob would search a key repository to find Alice's full fingerprint.

The problem: we've known for about five years that short key IDs are prone to collisions, and in 2014 the Evil32 project published colliding 32-bit keys for the entire strong set of the PGP Web of Trust. Colliding short IDs can be used for an impersonation attack: Chuck generates a key whose last 32 bits collide with Alice's, so by publishing that fake key he can convince Bob to use it for messages instead of the real one.

What seems to have happened is that short key impersonation is escalating: as well as Linus, this post to the Linux Kernel Mailing List identifies Greg Kroah-Hartman “and other kernel devs” as being impersonated in the MIT key server:

Search result for 0x00411886: https://pgp.mit.edu/pks/lookup?search=0x00411886&op=index
Fake Linus Torvalds: 0F6A 1465 32D8 69AE E438 F74B 6211 AA3B [0041 1886]
Real Linus Torvalds: ABAF 11C6 5A29 70B1 30AB E3C4 79BE 3E43 [0041 1886]

Search result for 0x6092693E: https://pgp.mit.edu/pks/lookup?search=0x6092693E&op=index
Fake Greg Kroah-Hartman: 497C 48CE 16B9 26E9 3F49 6301 2736 5DEA [6092 693E]
Real Greg Kroah-Hartman: 647F 2865 4894 E3BD 4571 99BE 38DB BDC8 [6092 693E]

A responder to Lovecruft's tweet reckons the fakes uploaded to the MIT keyserver number 20,000:

"@isislovecruft @ncardozo you and about 20k others. The https://t.co/qJK8g5kP8Z set has been uploaded to the keyservers." — Jeroen van der Ham (@1sand0s), August 16, 2016

Evil32's Eric Swanson has posted on Hacker News that he's generated revocation certificates for all of the fake keys. And for any sysadmin configuring a PGP system: turn short keys off.
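
For the curious, a short sketch of why the spoofing works: the "short key ID" is nothing more than the last eight hex characters (32 bits) of the full 160-bit fingerprint, so finding a key that displays the same short ID as a target is only a 2^32 brute-force search. The helper below is illustrative; the fingerprints are the real and fake Linus Torvalds keys quoted above.

```python
# The short ID is just the low 32 bits of the fingerprint, so colliding keys
# are cheap to generate. Fingerprints below are the ones quoted in the article.
def short_id(fingerprint: str) -> str:
    """Last 8 hex characters of a PGP v4 fingerprint (the 32-bit 'short key ID')."""
    return fingerprint.replace(" ", "").upper()[-8:]

real_linus = "ABAF 11C6 5A29 70B1 30AB E3C4 79BE 3E43 0041 1886"
fake_linus = "0F6A 1465 32D8 69AE E438 F74B 6211 AA3B 0041 1886"

print(short_id(real_linus))  # 00411886
print(short_id(fake_linus))  # 00411886 -- identical, so a short-ID lookup can't tell them apart
assert short_id(real_linus) == short_id(fake_linus)
```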

Convenience is never better than security.