
Tag: PGP

The since-deleted post gave the public and private keys for Adobe's incident response team.
Change the name to A-d'oh!-be: an absent-minded security staffer just accidentally leaked Adobe's private PGP key onto the internet.…
Meanwhile Huawei gets green light, despite failure to verify source code. UK spy agency GCHQ's cyber security arm, CESG, was left without PGP encryption for more than four months, according to a government report.…
Second of 2 AlphaBay sellers arrested in 2016 pleads guilty: Abdullah Almashwali.
Custom PGP BlackBerry smartphones allegedly used by criminal gangs for secure messaging have suffered a setback.
The security and privacy community was abuzz over the weekend after Google said it was open-sourcing E2Email, a Chrome plugin designed to ease the implementation and use of encrypted email. While this is welcome news, the project won't go anywhere i...

The security researcher cited in the report acknowledged that the word 'backdoor' was probably not the best choice.

WhatsApp this week denied that its app provides a "backdoor" to encrypted texts.

A report published Friday by The Guardian, citing cryptography and security researcher Tobias Boelter, suggests a security vulnerability within WhatsApp could be used by government agencies as a backdoor to snoop on users.

"This claim is false," a WhatsApp spokesman told PCMag in an email.

The Facebook-owned company will "fight any government request to create a backdoor," he added.

WhatsApp in April turned on full end-to-end encryption—using the Signal protocol developed by Open Whisper Systems—to protect messages from the prying eyes of cybercriminals, hackers, "oppressive regimes," and even Facebook itself.

The system, as described by The Guardian, relies on unique security keys traded and verified between users in an effort to guarantee communications are secure and cannot be intercepted. When any of WhatsApp's billion users get a new phone or reinstall the program, their encryption keys change—"something any public key cryptography system has to deal with," Open Whisper Systems founder Moxie Marlinspike wrote in a Friday blog post.

During that process, messages may back up on the phone, waiting their turn to be re-encrypted.

According to The Guardian, that's when someone could sneak in, fake having a new phone, and hijack the texts.

But according to Marlinspike, "the fact that WhatsApp handles key changes is not a 'backdoor,' it is how cryptography works.

"Any attempt to intercept messages in transmit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system," he wrote.

"We appreciate the interest people have in the security of their messages and calls on WhatsApp," co-founder Brian Acton wrote in a Friday Reddit post. "We will continue to set the record straight in the face of baseless accusations about 'backdoors' and help people understand how we've built WhatsApp with critical security features at such a large scale.

"Most importantly," he added, "we'll continue investing in technology and building simple features that help protect the privacy and security of messages and calls on WhatsApp."

In a blog post, Boelter said The Guardian's decision to use the word "backdoor" was probably not "the best choice there, but I can also see that there are arguments for calling it a 'backdoor.'" But Facebook was "furious and issued a blank denial, [which] polarized sides.

"I wish I could have had this debate with the Facebook Security Team in...private, without the public listening and judging our opinions, agreeing on a solution and giving a joint statement at the end," Boelter continued.
In an earlier post, Boelter said he reported the vulnerability in April 2016, but Facebook failed to fix it.

Boelter—a German computer scientist, entrepreneur, and PhD student at UC Berkeley focusing on security and cryptography—acknowledged that resolving the issue in public is a double-edged sword.

"The ordinary people following the news and reading headlines do not understand or do not bother to understand the details and nuances we are discussing now. Leaving them with wrong impressions leading to wrong and dangerous decisions: If they think WhatsApp is 'backdoored' and insecure, they will start using other means of communication. Likely much more insecure ones," he wrote. "The truth is that most other messengers who claim to have "end-to-end encryption" have the same vulnerability or have other flaws. On the other hand, if they now think all claims about a backdoor were wrong, high-risk users might continue trusting WhatsApp with their most sensitive information."

Boelter said he'd be content to leave the app as is if WhatsApp can prove that "1) too many messages get [sent] to old keys, don't get delivered, and need to be [re-sent] later and 2) it would be too dangerous to make blocking an option (moxie and I had a discussion on this)."

Then, "I could actually live with the current implementation, except for voice calls of course," provided WhatsApp is transparent about the issue, like adding a notice about key change notifications being delayed.

Google announced an early prototype of Key Transparency, its latest open source effort to ensure simpler, safer, and secure communications for everyone.

The project’s goal is to make it easier for applications and services to share and discover public keys for users, but it will be a while before it's ready for prime time. Secure communication should be de rigueur, but it remains frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).

Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.

Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys.

An audit mechanism keeps the service accountable.

There is the security protection of knowing that everyone is using the same published key, and any malicious attempts to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new, as PGP previously attempted a similar task with Global Directory.
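To make the audit idea concrete, here is a toy sketch of the general mechanism behind transparency logs: a Merkle-tree root computed over every key record, so that the record a sender fetches can be checked against the same root an auditor (or the account owner) computes. This illustrates the general approach only; it is not Key Transparency's actual data structure.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a Merkle root over a list of leaf hashes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Directory entries: (account, public key) pairs, hashed into leaves.
records = [b"alice@example.org:KEY_A", b"bob@example.org:KEY_B",
           b"carol@example.org:KEY_C"]
leaves = [h(r) for r in records]
root = merkle_root(leaves)

# A sender who looks up alice's key and an auditor who mirrors the whole
# directory can both recompute the same root; if the server showed alice's
# correspondents a different key than it shows everyone else, the roots
# would diverge.
assert merkle_root([h(r) for r in records]) == root
```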

The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.

Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities were repeatedly mistakenly issuing fraudulent certificates. Google seems to be taking a “build it, and maybe applications will come,” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in early stages of design and development. “With this first open source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.

A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.

A client-side system, CONIKS integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University’s Center for Information Technology Policy and the main author of CONIKS.

For example, Melara and her team are currently integrating CONIKS to work with Tor Messenger.

CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.

For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.
The Guardian roiled security professionals everywhere on Friday when it published an article claiming a backdoor in Facebook's WhatsApp messaging service allows attackers to intercept and read encrypted messages.
It's not a backdoor—at least as that term is defined by most security experts. Most would probably agree it's not even a vulnerability. Rather, it's a limitation in what cryptography can do in an app that caters to more than 1 billion users. At issue is the way WhatsApp behaves when an end user's encryption key changes.

By default, the app will use the new key to encrypt messages without ever informing the sender of the change.

By enabling a security setting, users can configure WhatsApp to notify the sender that a recently transmitted message used a new key. Critics of Friday's Guardian post, and most encryption practitioners, argue such behavior is common in encryption apps and often a necessary requirement.

Among other things, it lets existing WhatsApp users who buy a new phone continue an ongoing conversation thread. Tobias Boelter, a Ph.D. candidate researching cryptography and security at the University of California at Berkeley, told the Guardian that the failure to obtain a sender's explicit permission before using the new key challenged the often-repeated claim that not even WhatsApp or its owner Facebook can read encrypted messages sent through the service. He first reported the weakness to WhatsApp last April.
In an interview on Friday, he stood by the backdoor characterization. "At the time I discovered it, I thought it was not a big deal... and they will fix it," he told Ars. "The fact that they still haven't fixed it yet makes me wonder why."

A tale of two encrypted messaging apps

Boelter went on to contrast the way WhatsApp handles new keys with the procedure used by Signal, a competing messaging app that uses the same encryption protocol.
Signal allows a sender to verify a new key before using it. WhatsApp, on the other hand, by default trusts the new key with no notification—and even when that default is changed, it notifies the sender of the change only after the message is sent. Moxie Marlinspike, developer of the encryption protocol used by both Signal and WhatsApp, defended the way WhatsApp behaves. "The fact that WhatsApp handles key changes is not a 'backdoor,'" he wrote in a blog post. "It is how cryptography works.

Any attempt to intercept messages in transit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system." He went on to say that, while it's true that Signal, by default, requires a sender to manually verify keys and WhatsApp does not, both approaches have potential security and performance drawbacks.

For instance, many users don't understand how to go about verifying a new key and may turn off encryption altogether if it prevents their messages from going through or generates error messages that aren't easy to understand.
Security-conscious users, meanwhile, can enable security notifications and rely on a "safety number" to verify new keys. He continued: Given the size and scope of WhatsApp's user base, we feel that their choice to display a non-blocking notification is appropriate.
It provides transparent and cryptographically guaranteed confidence in the privacy of a user's communication, along with a simple user experience.

The choice to make these notifications "blocking" would in some ways make things worse.

That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully. Even if others disagree about the details of the UX, under no circumstances is it reasonable to call this a "backdoor," as key changes are immediately detected by the sender and can be verified.

In an interview, Marlinspike said Signal was in the process of moving away from strictly enforced blocking. He also said that WhatsApp takes strict precautions to prevent its servers from knowing which users have enabled security notifications, making it impossible for would-be attackers to target only those who have them turned off.

Boelter theorized that the lack of strict blocking could most easily be exploited by people who gain administrative control over WhatsApp servers, say by a government entity that obtains a court order.

The attacker could then change the encryption key for a targeted phone number.

By default, WhatsApp will use the imposter key to encrypt messages without ever warning the receiver of the crucial change.

By making the targeted phone temporarily unavailable over the network for a period of hours or days, the attacker causes messages sent during that time to be stored in a queue. Once the phone becomes available again, the queued messages are encrypted with the new attacker-controlled key. Of course, there are some notable drawbacks that make such an attack scenario highly problematic from the standpoint of most attackers.

For the attack to work well, it would require control of a WhatsApp server, which is something most people would consider extraordinarily difficult to do.

Absent control over a WhatsApp server, an attack would require abusing something like the SS7 routing protocol for cellular networks to intercept SMS messages.

But even then, the attacker who wanted to acquire more than a single message would have to figure out a way to make the targeted phone unavailable over the network before impersonating it. What's more, it wouldn't be hard for the sender to eventually learn of the interception, and that's often a deal-breaker in many government surveillance cases. Last, the attack wouldn't work against encrypted messages stored on a seized phone. In a statement, WhatsApp officials wrote: WhatsApp does not give governments a "backdoor" into its systems and would fight any government request to create a backdoor.

The design decision referenced in the Guardian story prevents millions of messages from being lost, and WhatsApp offers people security notifications to alert them to potential security risks. WhatsApp published a technical white paper on its encryption design and has been transparent about the government requests it receives, publishing data about those requests in the Facebook Government Requests Report. Ultimately, there's little evidence of a vulnerability and certainly none of a backdoor—which is usually defined as secret functionality for defeating security measures. WhatsApp users should strongly consider turning on security notifications by accessing Settings > Account > Security.
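As an aside, the "safety number" mentioned above boils down to both parties deriving the same short string from the pair of identity keys and comparing it over a trusted channel. The derivation below is invented purely for illustration; it is not the algorithm WhatsApp or Signal actually uses.

```python
import hashlib

def comparison_code(my_identity_key: bytes, their_identity_key: bytes) -> str:
    """Derive a short, order-independent code both parties can read aloud."""
    combined = b"".join(sorted([my_identity_key, their_identity_key]))
    digest = hashlib.sha256(combined).hexdigest()
    digits = str(int(digest, 16))[:30]          # take 30 decimal digits
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

# If a server substitutes its own key, the codes computed on the two phones
# no longer match, so an in-person or voice comparison exposes the MITM.
alice_view = comparison_code(b"ALICE_KEY", b"BOB_KEY")
bob_view = comparison_code(b"BOB_KEY", b"ALICE_KEY")
assert alice_view == bob_view
```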
Google has released an open-source technology dubbed Key Transparency, which is designed to offer an interoperable directory of public encryption keys. Key Transparency offers a generic, secure way to discover public keys.

The technology is built to scale up to internet size while providing a way to establish secure communications through untrusted servers.

The whole approach is designed to make encrypted apps easier and safer to use. Google put together elements of Certificate Transparency and CONIKS to develop Key Transparency, which it made available as an open-source prototype on Thursday. The approach is a more efficient means of building a web of trust than older alternatives such as PGP, as Google security engineers Ryan Hurst and Gary Belvin explain in a blog post. Existing methods of protecting users against server compromise require users to manually verify recipients' accounts in-person.

This simply hasn't worked. The PGP web-of-trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won't use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge. Key Transparency aims to make the relationship between online personas and public keys "automatically verifiable and publicly auditable" while supporting important user needs such as account recovery. "Users should be able to see all the keys that have been attached to an account, while making any attempt to tamper with the record publicly visible," Google's security boffins explain. The directory will make it easier for developers to create systems of all kinds with independently auditable account data, Google techies add.

Google is quick to emphasise that the technology is very much a work in progress. "It's still very early days for Key Transparency. With this first open-source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they explain. The project so far has already involved collaboration from the CONIKS team, Open Whisper Systems, as well as the security engineering teams at Yahoo! and internally at Google.

Early reaction to the project from some independent experts such as Matthew Green, a professor of cryptography at Johns Hopkins University, has been positive. Kevin Bocek, chief cyber-security strategist at certificate management vendor Venafi, was much more sceptical. "Google's introduction of Key Transparency is a 'build it and hope the developers will come' initiative," he said. "There is not the clear compelling event as there was with Certificate Transparency, when the fraudulent issuance of digital [certificates] was starting to run rampant. Moreover, building a database of public keys not linked to digital certificates has been attempted before with PGP and never gain[ed] widespread adoption." ®
It's time for security researchers and vendors to agree on a standard responsible disclosure timeline.

Animal Man, Dolphin, Rip Hunter, Dane Dorrance, the Ray. Ring any bells? Probably not, but these characters fought fictitious battles on the pages of DC Comics in the 1940s, '50s, and '60s. As part of the Forgotten Heroes series, they were opposed by the likes of Atom-Master, Enchantress, Ultivac, and other Forgotten Villains. Cool names aside, the idea of forgotten heroes seems apropos at a time when high-profile cybersecurity incidents continue to rock the headlines and black hats bask in veiled glory. But what about the good guys? What about the white hats, these forgotten heroes?

For every cybercriminal looking to make a quick buck exploiting or selling a zero-day vulnerability, there's a white hat reporting the same vulnerabilities directly to the manufacturers. Their goal is to expose dangerous exploits, keep users protected, and perhaps receive a little well-earned glory for themselves along the way. This process is called "responsible disclosure." Although responsible disclosure has been going on for years, there's no formal industry standard for reporting vulnerabilities. However, most responsible disclosures follow the same basic steps.

First, the researcher identifies a security vulnerability and its potential impact. During this step, the researcher documents the location of the vulnerability using screenshots or pieces of code. They may also create a repeatable proof-of-concept attack to help the vendor find and test a resolution.

Next, the researcher creates a vulnerability advisory report including a detailed description of the vulnerability, supporting evidence, and a full disclosure timeline. The researcher submits this report to the vendor using the most secure means possible, usually as an email encrypted with the vendor's public PGP key. Most vendors reserve a security@ email alias for security advisory submissions, but it could differ depending on the organization.

After submitting the advisory to the vendor, the researcher typically allows the vendor a reasonable amount of time to investigate and fix the exploit, per the advisory's full disclosure timeline. Finally, once a patch is available or the disclosure timeline (including any extensions) has elapsed, the researcher publishes a full disclosure analysis of the vulnerability. This full disclosure analysis includes a detailed explanation of the vulnerability, its impact, and the resolution or mitigation steps. For example, see the full disclosure analysis of a cross-site scripting vulnerability in Yahoo Mail by researcher Jouko Pynnönen.

How Much Time?

Security researchers haven't reached a consensus on exactly what "a reasonable amount of time" means to allow a vendor to fix a vulnerability before full public disclosure. Google recommends 60 days for a fix or public disclosure of critical security vulnerabilities, and an even shorter seven days for critical vulnerabilities under active exploitation. HackerOne, a platform for vulnerability and bug bounty programs, defaults to a 30-day disclosure period, which can be extended to 180 days as a last resort. Other security researchers, such as myself, opt for 60 days with the possibility of extensions if a good-faith effort is being made to patch the issue. I believe that full disclosure of security vulnerabilities benefits the industry as a whole and ultimately serves to protect consumers.
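As a sketch of the submission step described above, the snippet below encrypts an advisory to a vendor's published PGP key using the python-gnupg bindings. The file names, the recipient key handling, and the use of always_trust are illustrative assumptions; details vary between python-gnupg versions and vendor processes.

```python
import gnupg  # python-gnupg bindings around the gpg binary

gpg = gnupg.GPG()  # uses the default GnuPG home directory

# Import the vendor's published public key (placeholder file name).
with open("vendor-security-pubkey.asc") as f:
    import_result = gpg.import_keys(f.read())
vendor_fpr = import_result.fingerprints[0]

# Encrypt the advisory so only the vendor's security team can read it.
# always_trust=True skips the web-of-trust check for this example only.
with open("advisory.txt", "rb") as f:
    encrypted = gpg.encrypt_file(f, recipients=[vendor_fpr],
                                 always_trust=True,
                                 output="advisory.txt.asc")

if encrypted.ok:
    print("Advisory encrypted; attach advisory.txt.asc to the report e-mail.")
else:
    print("Encryption failed:", encrypted.status)
```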
In the early 2000s, before full disclosure and responsible disclosure were the norm, vendors had incentives to hide and downplay security issues to avoid PR problems instead of working to fix the issues immediately. While vendors attempted to hide the issues, bad guys were exploiting these same vulnerabilities against unprotected consumers and businesses. With full disclosure, even if a patch for the issue is unavailable, consumers have the same knowledge as the attackers and can defend themselves with workarounds and other mitigation techniques. As security expert Bruce Schneier puts it, full disclosure of security vulnerabilities is "a damned good idea."

I've been on both ends of the responsible disclosure process, as a security researcher reporting issues to third-party vendors and as an employee receiving vulnerability reports for my employer's own products. I can comfortably say responsible disclosure is mutually beneficial to all parties involved. Vendors get a chance to resolve security issues they may otherwise have been unaware of, and security researchers can increase public awareness of different attack methods and make a name for themselves by publishing their findings.

My one frustration as a security researcher is that the industry lacks a standard responsible disclosure timeline. We already have a widely accepted system for ranking the severity of vulnerabilities in the form of the Common Vulnerability Scoring System (CVSS). Perhaps it's time to agree on responsible disclosure time periods based on CVSS scores?

Even without an industry standard for responsible disclosure timelines, I would call for all technology vendors to fully cooperate with security researchers. While working together, vendors should be allowed a reasonable amount of time to resolve security issues, and white-hat hackers should be supported and recognized for their continued efforts to improve security for consumers. If you're a comic book fan, then you'll know even a vigilante can be a forgotten hero.

Marc Laliberte is an information security threat analyst at WatchGuard Technologies. Specializing in network security technologies, Marc's industry experience allows him to conduct meaningful information security research and educate audiences on the latest cybersecurity ...
Neal H. Walfield is a hacker at g10code working on GnuPG.

This op-ed was written for Ars Technica by Walfield, in response to Filippo Valsorda's "I'm giving up on PGP" story that was published on Ars last week. Every once in a while, a prominent member of the security community publishes an article about how horrible OpenPGP is. Matthew Green wrote one in 2014 and Moxie Marlinspike wrote one in 2015.

The most recent was written by Filippo Valsorda, here on the pages of Ars Technica, which Matthew Green says "sums up the main reason I think PGP is so bad and dangerous." In this article I want to respond to the points that Filippo raises.
In short, Filippo is right about some of the details, but wrong about the big picture.

For the record, I work on GnuPG, the most popular OpenPGP implementation.

Forward secrecy isn't always desirable

Filippo's main complaint has to do with OpenPGP's use of long-term keys.
Specifically, he notes that due to the lack of forward secrecy, the older a key is, the more communication will be exposed by its compromise.

Further, he observes that OpenPGP's trust model includes incentives to not replace long-term keys. First, it's true that OpenPGP doesn't implement forward secrecy (or future secrecy).

But, OpenPGP could be changed to support this. Matthew Green and Ian Miers recently proposed puncturable forward secure encryption, which is a technique to add forward secrecy to OpenPGP-like systems.

But, in reality, approximating forward secrecy has been possible since OpenPGP adopted subkeys decades ago. (An OpenPGP key is actually a collection of keys: a primary key that acts as a long-term, stable identifier, and subkeys that are cryptographically bound to the primary key and are used for encryption, signing, and authentication.) Guidelines on how to approximate forward secrecy were published in 2001 by Ian Brown, Adam Back, and Ben Laurie.

Although their proposal is only for an approximation of forward secrecy, it is significantly simpler than Green and Miers' approach, and it works in practice. As far as I know, Brown et al.'s proposal is not often used. One reason for this is that forward secrecy is not always desired.

For instance, if you encrypt a backup using GnuPG, then your intent is to be able to decrypt it in the future.
If you use forward secrecy, then, by definition, that is not possible; you've thrown away the old decryption key.
In the recent past, I've spoken with a number of GnuPG users including 2U and 1010data.

These two companies told me that they use GnuPG to protect client data.

Again, to access the data in the future, the encryption keys need to be retained, which precludes forward secrecy. This doesn't excuse the lack of forward secrecy when using GnuPG to protect e-mail, which is the use case that Filippo concentrates on.

The reason that forward secrecy hasn't been widely deployed here is that e-mail is usually left on the mail server in order to support multi-device access.
Since mail servers are not usually trusted, the mail needs to be kept encrypted.

The easiest way to accomplish this is to just not strip the encryption layer.
So, again, forward secrecy would render old messages inaccessible, which is often not desired. But, let's assume that you really want something like forward secrecy.

Then following Brown et al.'s approach, you just need to periodically rotate your encryption subkey.
Since your key is identified by the primary key and not the subkey, creating a new subkey does not change your fingerprint or invalidate any signatures, as Filippo states.
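A rough sketch of such a rotation, driving the gpg command line from Python (the fingerprint, algorithm, and validity period are placeholders; the --quick-add-key command is available in modern GnuPG 2.1+ releases):

```python
import subprocess

# Placeholder fingerprint of the primary key whose encryption subkey we rotate.
PRIMARY_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"

# Add a fresh encryption subkey that expires in 90 days. gpg will ask for the
# primary key's passphrase unless the agent already has it cached.
subprocess.run(
    ["gpg", "--quick-add-key", PRIMARY_FPR, "rsa3072", "encr", "90d"],
    check=True,
)

# After the new subkey is published (e.g. with gpg --send-keys), correspondents
# who refresh the key will encrypt to it automatically; the primary key, and
# hence the fingerprint and existing signatures, remain unchanged.
```

The approximation of forward secrecy comes from eventually destroying the secret material of expired encryption subkeys, so that old ciphertexts can no longer be decrypted if the remaining keys are later compromised.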

And, as long as your communication partners periodically refresh your key, rotating subkeys is completely transparent. Ideally, you'll want to store your primary key on a separate computer or smartcard so that if your computer is compromised, then only the subkeys are compromised.

But, even if you don't use an offline computer, and an attacker also compromises your primary key, this approach provides a degree of future secrecy: your attacker will be able to create new subkeys (since she has your primary key), and sign other keys, but she'll probably have to publish them to use them, which you'll eventually notice, and she won't be able to guess any new subkeys using the existing keys.

Physical attacks vs. cyber attacks

So, given that forward secrecy is possible, why isn't it enabled by default? We know from Snowden that when properly implemented, "encryption … really is one of the few things that we can rely on." In other words, when nation states crack encryption, they aren't breaking the actual encryption, they are circumventing it.

That is, they are exploiting vulnerabilities or using national security letters (NSLs) to break into your accounts and devices.

As such, if you really care about protecting your communication, you are much better off storing your encryption keys on a smartcard than storing them on your computer. Given this, it's not clear that forward secrecy is that big of a gain, since smartcards won't export private keys.
So, when Filippo says that he is scared of an evil maid attack and is worried that someone opened his safe with his offline keys while he was away, he's implicitly stating that his threat model includes a physical, targeted attack.

But, while moving to the encrypted messaging app Signal gets him forward secrecy, it means he can't use a smartcard to protect his keys and makes him more vulnerable to a cyber attack, which is significantly easier to conduct than a physical attack. Another problem that Filippo mentions is that key discovery is hard.
Specifically, he says that key server listings are hard to use.

This is true.

But, key servers are in no way authenticated and should not be treated as authoritative.
Instead, if you need to find someone's key, you should ask that person for their key's fingerprint. Unfortunately, our research suggests that for many GnuPG users, picking up the phone is too difficult. So, after our successful donation campaign two years ago, we used some of the money to develop a new key discovery technique called the Web Key Directory (WKD).

Basically, the WKD provides a canonical way to find a key given an e-mail address via HTTPS.
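Roughly, the lookup works by hashing the local part of the address and fetching the key from a well-known HTTPS path on the mail provider's domain. The sketch below follows a reading of the Web Key Directory draft's "direct" method; treat the exact URL layout as an assumption and rely on GnuPG's own WKD support in practice.

```python
import hashlib

# z-base-32 alphabet used by the Web Key Directory draft.
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    """Encode bytes with the z-base-32 alphabet, without padding."""
    value, bits, out = 0, 0, []
    for byte in data:
        value = (value << 8) | byte
        bits += 8
        while bits >= 5:
            out.append(ZB32[(value >> (bits - 5)) & 31])
            bits -= 5
    if bits:
        out.append(ZB32[(value << (5 - bits)) & 31])
    return "".join(out)

def wkd_url(email: str) -> str:
    """Build the 'direct method' WKD lookup URL for an e-mail address."""
    local, domain = email.split("@", 1)
    digest = hashlib.sha1(local.lower().encode("utf-8")).digest()
    return (f"https://{domain.lower()}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={local}")

print(wkd_url("neal@gnupg.org"))
```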

This is not as good as checking the fingerprint, but since only the mail provider and the user can change the key, it is a significant improvement over the de facto status quo. WKD has already been deployed by Posteo, and other mail providers are in the process of integrating it (consider asking your mail provider to support it). Other people have identified the key discovery issue, too. Micah Lee, for instance, recently published GPG Sync, and the INBOME group and the pretty Easy privacy (p≡p) project are working on opportunistically transferring keys via e-mail.

Signal isn't our saviour

Filippo also mentions the multi-device problem.
It's true that using keys on multiple devices is not easy. Part of the problem is that OpenPGP is not a closed ecosystem like Signal, which makes standardising a secret key exchange protocol much more difficult. Nevertheless, Tankred Hase did some work on private key synchronisation while at whiteout.io.

But, if you are worried about targeted attacks as Filippo is, then keeping your keys on a computer, never mind multiple computers, is not for you.
Instead, you want to keep your keys on a smartcard.
In this case, using your keys from multiple computers is easy: just plug the token in (or use NFC)! This assumes that there is an OpenPGP-capable mail client on your platform of choice.

This is the case for all of the major desktop environments, and there is also an excellent plug-in for K9 on Android called OpenKeychain. (There are also some solutions available for iOS, but I haven't evaluated them.) Even if you are using Signal, the multi-device problem is not completely solved.

Currently, it is possible to use Signal from a desktop and a smartphone or a tablet, but it is not possible to use multiple smartphones or tablets. One essential consideration that Filippo doesn't adequately address is that contacting someone on Signal requires knowing their mobile phone number. Many people don't want to make this information public.
I was recently chatting with Jason Reich, who is the head of OPSEC at BuzzFeed, and he told me that he spends a lot of time teaching reporters how to deal with the death and rape threats that they regularly receive via e-mail.

Based on this, I suspect that many reporters would opt to not publish their phone number even though it would mean missing some stories.
Similarly, while talking to Alex Abdo, a lawyer from the ACLU, I learned that he receives dozens of encrypted e-mails every day, and he is certain that some of those people would not have contacted him or the ACLU if they couldn't remain completely anonymous. Another point that Filippo doesn't cover is the importance of integrity; he focused primarily on confidentiality (i.e., encryption).
I love the fact that messages that I receive from DHL are signed (albeit using S/MIME and not OpenPGP).

This makes detecting phishing attempts trivial.
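(For OpenPGP-signed mail, such a check takes only a few lines with the python-gnupg bindings; the file name below is a placeholder, the signer's public key is assumed to be imported already, and the S/MIME signatures DHL actually uses would need different tooling.)

```python
import gnupg

gpg = gnupg.GPG()

# Verify a clearsigned message from a known sender.
with open("shipping-notification.txt.asc", "rb") as f:
    verified = gpg.verify_file(f)

if verified.valid:
    print("Good signature from", verified.username, verified.fingerprint)
else:
    print("Signature check failed:", verified.status)
```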
I wish more businesses would do this. Of course, Signal also provides integrity protection, but I definitely don't want to give all businesses my phone number given their record of protecting my e-mail address. Moreover, most of this type of communication is done using e-mail, not Signal. I want to be absolutely clear that I like Signal. When people ask me how they can secure their communication, I often recommend it.

But, I view Signal as complementary to OpenPGP.

First, e-mail is unlikely to go away any time soon.
Second, Signal doesn't allow transferring arbitrary data including documents.

And, importantly, Signal has its own problems.
In particular, the main Signal network is centralised, not federated like e-mail, the developers actively discourage third-party clients, and you can't choose your own identity.

These decisions are a rejection of a free and open Internet, and of pseudonymous communication.

In conclusion, Filippo has raised a number of important points.

But, with respect to long-term OpenPGP keys being fatally flawed and forward secrecy being essential, I think he is wrong and disagree with his compromises in light of his stated threat model.
I agree with him that key discovery is a serious issue.

But, this is something that we've been working to address. Most importantly, Signal cannot replace OpenPGP for many people who use it on a daily basis, and the developers' decision to make Signal a walled garden is problematic.
Signal does complement OpenPGP, though, and I'm glad that it's there.

Neal H. Walfield is a hacker at g10code working on GnuPG. His current project is implementing TOFU for GnuPG.

To avoid conflicts of interest, GnuPG maintenance and development is funded primarily by donations. You can find him on Twitter @nwalfield.
E-mail: neal@gnupg.org OpenPGP: 8F17 7771 18A3 3DDA 9BA4 8E62 AACB 3243 6300 52D9 This post originated on Ars Technica UK