
Tag: Pretty Good Privacy

PGP public key and self-service postal kiosk expose online drug dealers

Second of 2 AlphaBay sellers arrested in 2016 pleads guilty: Abdullah Almashwali.

Police hack PGP server with 3.6 million messages from organized crime...

Custom PGP BlackBerry smartphones allegedly used by criminal gangs for secure messaging have suffered a setback.

Chrome extension brings encryption to Gmail

The security and privacy community was abuzz over the weekend after Google said it was open-sourcing E2Email, a Chrome plugin designed to ease the implementation and use of encrypted email. While this is welcome news, the project won't go anywhere i...

WhatsApp: Encrypted Message Backdoor Reports Are 'Baseless'

The security researcher cited in the report acknowledged that the word 'backdoor' was probably not the best choice.

WhatsApp this week denied that its app provides a "backdoor" to encrypted texts.

A report published Friday by The Guardian, citing cryptography and security researcher Tobias Boelter, suggests a security vulnerability within WhatsApp could be used by government agencies as a backdoor to snoop on users.

"This claim is false," a WhatsApp spokesman told PCMag in an email.

The Facebook-owned company will "fight any government request to create a backdoor," he added.

WhatsApp in April turned on full end-to-end encryption—using the Signal protocol developed by Open Whisper Systems—to protect messages from the prying eyes of cybercriminals, hackers, "oppressive regimes," and even Facebook itself.

The system, as described by The Guardian, relies on unique security keys traded and verified between users in an effort to guarantee communications are secure and cannot be intercepted. When any of WhatsApp's billion users get a new phone or reinstall the program, their encryption keys change—"something any public key cryptography system has to deal with," Open Whisper Systems founder Moxie Marlinspike wrote in a Friday blog post.

During that process, messages may back up on the phone, waiting their turn to be re-encrypted.

According to The Guardian, that's when someone could sneak in, fake having a new phone, and hijack the texts.

But according to Marlinspike, "the fact that WhatsApp handles key changes is not a 'backdoor,' it is how cryptography works.

"Any attempt to intercept messages in transmit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system," he wrote.

"We appreciate the interest people have in the security of their messages and calls on WhatsApp," co-founder Brian Acton wrote in a Friday Reddit post. "We will continue to set the record straight in the face of baseless accusations about 'backdoors' and help people understand how we've built WhatsApp with critical security features at such a large scale.

"Most importantly," he added, "we'll continue investing in technology and building simple features that help protect the privacy and security of messages and calls on WhatsApp."

In a blog post, Boelter said The Guardian's decision to use the word "backdoor" was probably not "the best choice there, but I can also see that there are arguments for calling it a 'backdoor.'" But Facebook was "furious and issued a blank denial, [which] polarized sides."

"I wish I could have had this debate with the Facebook Security Team in...private, without the public listening and judging our opinions, agreeing on a solution and giving a joint statement at the end," Boelter continued.
In an earlier post, Boelter said he reported the vulnerability in April 2016, but Facebook failed to fix it.

Boelter—a German computer scientist, entrepreneur, and PhD student at UC Berkeley focusing on security and cryptography—acknowledged that resolving the issue in public is a double-edged sword.

"The ordinary people following the news and reading headlines do not understand or do not bother to understand the details and nuances we are discussing now. Leaving them with wrong impressions leading to wrong and dangerous decisions: If they think WhatsApp is 'backdoored' and insecure, they will start using other means of communication. Likely much more insecure ones," he wrote. "The truth is that most other messengers who claim to have "end-to-end encryption" have the same vulnerability or have other flaws. On the other hand, if they now think all claims about a backdoor were wrong, high-risk users might continue trusting WhatsApp with their most sensitive information."

Boelter said he'd be content to leave the app as is if WhatsApp can prove that "1) too many messages get [sent] to old keys, don't get delivered, and need to be [re-sent] later and 2) it would be too dangerous to make blocking an option (moxie and I had a discussion on this)."

Then, "I could actually live with the current implementation, except for voice calls of course," provided WhatsApp is transparent about the issue, like adding a notice about key change notifications being delayed.

Google ventures into public key encryption

Google announced an early prototype of Key Transparency, its latest open source effort to ensure simpler, safer, and secure communications for everyone.

The project’s goal is to make it easier for applications and services to share and discover public keys for users, but it will be a while before it's ready for prime time. Secure communications should be de rigueur, but they remain frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).

Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.

Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys.

An audit mechanism keeps the service accountable.

There is the security protection of knowing that everyone is using the same published key, and any malicious attempts to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new, as PGP previously attempted a similar task with Global Directory.
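The general idea behind such a transparency log can be sketched in a few lines of Python. This is a toy illustration, not Google's actual design (which uses Merkle trees so clients can check compact proofs instead of replaying the whole log); all names here are invented:

```python
import hashlib
import json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ToyKeyLog:
    """Append-only key directory: every change is folded into a running
    hash, so clients and auditors can detect divergence by comparing a
    single head value."""

    def __init__(self):
        self.entries = []        # full (account, public_key) history
        self.head = h(b"empty")  # rolling commitment over all entries

    def publish(self, account, public_key):
        entry = json.dumps([account, public_key]).encode()
        self.entries.append((account, public_key))
        self.head = h(self.head.encode() + entry)  # extends, never rewrites

    def lookup(self, account):
        """Latest key for the account, plus the head the client should
        compare with other clients and auditors."""
        key = next(k for a, k in reversed(self.entries) if a == account)
        return key, self.head

log = ToyKeyLog()
log.publish("alice@example.com", "PUBKEY_A1")
key, head = log.lookup("alice@example.com")
print(key, head)
# A server that shows different keys to different users must also fork
# the head; auditors comparing heads catch the split immediately.
```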

The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.

Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities were repeatedly mistakenly issuing fraudulent certificates. Google seems to be taking a “build it, and maybe applications will come,” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in early stages of design and development. “With this first open source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.

A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.

As a user client, CONIKS integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University’s Center for Information Technology Policy and the main author of CONIKS.

For example, Melara and her team are currently integrating CONIKS to work with Tor Messenger.

CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.

For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.

Reported “backdoor” in WhatsApp is in fact a feature, defenders say

The Guardian roiled security professionals everywhere on Friday when it published an article claiming a backdoor in Facebook's WhatsApp messaging service allows attackers to intercept and read encrypted messages.
It's not a backdoor—at least as that term is defined by most security experts. Most would probably agree it's not even a vulnerability. Rather, it's a limitation in what cryptography can do in an app that caters to more than 1 billion users. At issue is the way WhatsApp behaves when an end user's encryption key changes.

By default, the app will use the new key to encrypt messages without ever informing the sender of the change.

By enabling a security setting, users can configure WhatsApp to notify the sender that a recently transmitted message used a new key. Critics of Friday's Guardian post, and most encryption practitioners, argue such behavior is common in encryption apps and often a necessary requirement.

Among other things, it lets existing WhatsApp users who buy a new phone continue an ongoing conversation thread. Tobias Boelter, a Ph.D. candidate researching cryptography and security at the University of California at Berkeley, told the Guardian that the failure to obtain a sender's explicit permission before using the new key challenged the often-repeated claim that not even WhatsApp or its owner Facebook can read encrypted messages sent through the service. He first reported the weakness to WhatsApp last April.
In an interview on Friday, he stood by the backdoor characterization. "At the time I discovered it, I thought it was not a big deal... and they will fix it," he told Ars. "The fact that they still haven't fixed it yet makes me wonder why."

A tale of two encrypted messaging apps

Boelter went on to contrast the way WhatsApp handles new keys with the procedure used by Signal, a competing messaging app that uses the same encryption protocol.
Signal allows a sender to verify a new key before using it. WhatsApp, on the other hand, by default trusts the new key with no notification—and even when that default is changed, it notifies the sender of the change only after the message is sent. Moxie Marlinspike, developer of the encryption protocol used by both Signal and WhatsApp, defended the way WhatsApp behaves. "The fact that WhatsApp handles key changes is not a 'backdoor,'" he wrote in a blog post. "It is how cryptography works.

Any attempt to intercept messages in transit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system." He went on to say that, while it's true that Signal, by default, requires a sender to manually verify keys and WhatsApp does not, both approaches have potential security and performance drawbacks.

For instance, many users don't understand how to go about verifying a new key and may turn off encryption altogether if it prevents their messages from going through or generates error messages that aren't easy to understand.
Security-conscious users, meanwhile, can enable security notifications and rely on a "safety number" to verify new keys. He continued:

Given the size and scope of WhatsApp's user base, we feel that their choice to display a non-blocking notification is appropriate.
It provides transparent and cryptographically guaranteed confidence in the privacy of a user's communication, along with a simple user experience.

The choice to make these notifications "blocking" would in some ways make things worse.

That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully. Even if others disagree about the details of the UX, under no circumstances is it reasonable to call this a "backdoor," as key changes are immediately detected by the sender and can be verified.

In an interview, Marlinspike said Signal was in the process of moving away from strictly enforced blocking. He also said that WhatsApp takes strict precautions to prevent its servers from knowing which users have enabled security notifications, making it impossible for would-be attackers to target only those who have them turned off.

Boelter theorized that the lack of strict blocking could most easily be exploited by people who gain administrative control over WhatsApp servers, say by a government entity that obtains a court order.

The attacker could then change the encryption key for a targeted phone number.

By default, WhatsApp will use the imposter key to encrypt messages without ever warning the receiver of the crucial change.

If the attacker makes the targeted phone temporarily unavailable over the network for a period of hours or days, messages sent during that time will be stored in a queue. Once the phone becomes available again, the messages will be encrypted with the new attacker-controlled key.

Of course, there are some notable drawbacks that make such an attack scenario highly problematic from the standpoint of most attackers.

For the attack to work well, it would require control of a WhatsApp server, which is something most people would consider extraordinarily difficult to do.

Absent control over a WhatsApp server, an attack would require abusing something like the SS7 routing protocol for cellular networks to intercept SMS messages.

But even then, an attacker who wanted to acquire more than a single message would have to figure out a way to make the targeted phone unavailable over the network before impersonating it. What's more, it wouldn't be hard for the sender to eventually learn of the interception, and that's often a deal-breaker in many government surveillance cases. Last, the attack wouldn't work against encrypted messages stored on a seized phone.

In a statement, WhatsApp officials wrote: WhatsApp does not give governments a "backdoor" into its systems and would fight any government request to create a backdoor.

The design decision referenced in the Guardian story prevents millions of messages from being lost, and WhatsApp offers people security notifications to alert them to potential security risks. WhatsApp published a technical white paper on its encryption design and has been transparent about the government requests it receives, publishing data about those requests in the Facebook Government Requests Report.

Ultimately, there's little evidence of a vulnerability and certainly none of a backdoor—which is usually defined as secret functionality for defeating security measures. WhatsApp users should strongly consider turning on security notifications by accessing Settings > Account > Security.
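To make the trade-off concrete, here is a toy Python sketch of the two client behaviors discussed above. It is not WhatsApp or Signal code; the names, types, and "encryption" are invented for illustration:

```python
from dataclasses import dataclass

def encrypt(message: str, key: str) -> str:
    return f"enc[{key}]({message})"  # stand-in for real encryption

@dataclass
class Contact:
    number: str
    key: str                        # key on record (trust-on-first-use)
    notify_on_change: bool = False  # the opt-in security-notification setting

def deliver_queued(contact: Contact, reported_key: str, queue: list,
                   blocking: bool) -> list:
    """What a sender's client could do when the server reports a new key
    for a contact who still has undelivered messages queued."""
    if reported_key != contact.key:
        if blocking:
            # Signal-style: hold delivery until the user verifies the key.
            raise RuntimeError("key changed; verify safety number first")
        # WhatsApp-style default: accept the new key, deliver, then warn
        # (and only warn if the user opted in to notifications).
        contact.key = reported_key
        if contact.notify_on_change:
            print(f"Security code for {contact.number} changed")
    return [encrypt(m, contact.key) for m in queue]

bob = Contact("+15551234567", "KEY_OLD", notify_on_change=True)
print(deliver_queued(bob, "KEY_NEW", ["hi", "are you there?"], blocking=False))
```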

Google floats prototype Key Transparency to tackle secure swap woes

♪ I've got the key, I've got the secreeeee-eeet ♪

Google has released an open-source technology dubbed Key Transparency, which is designed to offer an interoperable directory of public encryption keys. Key Transparency offers a generic, secure way to discover public keys.

The technology is built to scale up to internet size while providing a way to establish secure communications through untrusted servers.

The whole approach is designed to make encrypted apps easier and safer to use. Google put together elements of Certificate Transparency and CONIKS to develop Key Transparency, which it made available as an open-source prototype on Thursday. The approach is a more efficient means of building a web of trust than older alternatives such as PGP, as Google security engineers Ryan Hurst and Gary Belvin explain in a blog post. Existing methods of protecting users against server compromise require users to manually verify recipients' accounts in-person.

This simply hasn't worked. The PGP web-of-trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won't use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge. Key Transparency aims to make the relationship between online personas and public keys "automatically verifiable and publicly auditable" while supporting important user needs such as account recovery. "Users should be able to see all the keys that have been attached to an account, while making any attempt to tamper with the record publicly visible," Google's security boffins explain. The directory will make it easier for developers to create systems of all kinds with independently auditable account data, Google techies add.

Google is quick to emphasise that the technology is very much a work in progress. "It's still very early days for Key Transparency. With this first open-source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they explain. The project so far has already involved collaboration from the CONIKS team, Open Whisper Systems, as well as the security engineering teams at Yahoo! and internally at Google. Early reaction to the project from some independent experts such as Matthew Green, a professor of cryptography at Johns Hopkins University, has been positive. Kevin Bocek, chief cyber-security strategist at certificate management vendor Venafi, was much more sceptical. "Google's introduction of Key Transparency is a 'build it and hope the developers will come' initiative," he said. "There is not the clear compelling event as there was with Certificate Transparency, when the fraudulent issuance of digital [certificates] was starting to run rampant. Moreover, building a database of public keys not linked to digital certificates has been attempted before with PGP and never gain[ed] widespread adoption."

A Look Inside Responsible Vulnerability Disclosure

It's time for security researchers and vendors to agree on a standard responsible disclosure timeline.

Animal Man, Dolphin, Rip Hunter, Dane Dorrance, the Ray. Ring any bells? Probably not, but these characters fought fictitious battles on the pages of DC Comics in the 1940s, '50s, and '60s. As part of the Forgotten Heroes series, they were opposed by the likes of Atom-Master, Enchantress, Ultivac, and other Forgotten Villains. Cool names aside, the idea of forgotten heroes seems apropos at a time when high-profile cybersecurity incidents continue to rock the headlines and black hats bask in veiled glory.

But what about the good guys? What about the white hats, these forgotten heroes? For every cybercriminal looking to make a quick buck exploiting or selling a zero-day vulnerability, there's a white hat reporting the same vulnerabilities directly to the manufacturers. Their goal is to expose dangerous exploits, keep users protected, and perhaps receive a little well-earned glory for themselves along the way. This process is called "responsible disclosure."

Although responsible disclosure has been going on for years, there's no formal industry standard for reporting vulnerabilities. However, most responsible disclosures follow the same basic steps.

First, the researcher identifies a security vulnerability and its potential impact. During this step, the researcher documents the location of the vulnerability using screenshots or pieces of code. They may also create a repeatable proof-of-concept attack to help the vendor find and test a resolution.

Next, the researcher creates a vulnerability advisory report including a detailed description of the vulnerability, supporting evidence, and a full disclosure timeline. The researcher submits this report to the vendor using the most secure means possible, usually as an email encrypted with the vendor's public PGP key. Most vendors reserve a dedicated security@ email alias for security advisory submissions, but the address can differ depending on the organization.

After submitting the advisory to the vendor, the researcher typically allows the vendor a reasonable amount of time to investigate and fix the exploit, per the advisory's full disclosure timeline.

Finally, once a patch is available or the disclosure timeline (including any extensions) has elapsed, the researcher publishes a full disclosure analysis of the vulnerability. This full disclosure analysis includes a detailed explanation of the vulnerability, its impact, and the resolution or mitigation steps. For example, see this full disclosure analysis of a cross-site scripting vulnerability in Yahoo Mail by researcher Jouko Pynnönen.

How Much Time?

Security researchers haven't reached a consensus on exactly what "a reasonable amount of time" means to allow a vendor to fix a vulnerability before full public disclosure. Google recommends 60 days for a fix or public disclosure of critical security vulnerabilities, and an even shorter seven days for critical vulnerabilities under active exploitation. HackerOne, a platform for vulnerability and bug bounty programs, defaults to a 30-day disclosure period, which can be extended to 180 days as a last resort. Other security researchers, such as myself, opt for 60 days with the possibility of extensions if a good-faith effort is being made to patch the issue. I believe that full disclosure of security vulnerabilities benefits the industry as a whole and ultimately serves to protect consumers.
In the early 2000s, before full disclosure and responsible disclosure were the norm, vendors had incentives to hide and downplay security issues to avoid PR problems instead of working to fix the issues immediately. While vendors attempted to hide the issues, bad guys were exploiting these same vulnerabilities against unprotected consumers and businesses. With full disclosure, even if a patch for the issue is unavailable, consumers have the same knowledge as the attackers and can defend themselves with workarounds and other mitigation techniques. As security expert Bruce Schneier puts it, full disclosure of security vulnerabilities is "a damned good idea."

I've been on both ends of the responsible disclosure process, as a security researcher reporting issues to third-party vendors and as an employee receiving vulnerability reports for my employer's own products. I can comfortably say responsible disclosure is mutually beneficial to all parties involved. Vendors get a chance to resolve security issues they may otherwise have been unaware of, and security researchers can increase public awareness of different attack methods and make a name for themselves by publishing their findings.

My one frustration as a security researcher is that the industry lacks a standard responsible disclosure timeline. We already have a widely accepted system for ranking the severity of vulnerabilities in the form of the Common Vulnerability Scoring System (CVSS). Perhaps it's time to agree on responsible disclosure time periods based on CVSS scores?

Even without an industry standard for responsible disclosure timelines, I would call for all technology vendors to fully cooperate with security researchers. While working together, vendors should be allowed a reasonable amount of time to resolve security issues and white-hat hackers should be supported and recognized for their continued efforts to improve security for consumers. If you're a comic book fan, then you'll know even a vigilante can be a forgotten hero.

Marc Laliberte is an information security threat analyst at WatchGuard Technologies. Specializing in network security technologies, Marc's industry experience allows him to conduct meaningful information security research and educate audiences on the latest cybersecurity ...
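As a concrete illustration of the submission step described above, here is a minimal sketch using the python-gnupg wrapper. The file names and the vendor key are assumptions for the example; any real submission should follow the vendor's published process:

```python
import gnupg  # pip install python-gnupg

gpg = gnupg.GPG()

# Import the vendor's published PGP key (file name is hypothetical).
with open("vendor-security-key.asc") as f:
    imported = gpg.import_keys(f.read())

advisory = open("advisory.md").read()

# Encrypt the advisory to the vendor's key; always_trust skips the
# web-of-trust check for this one-off submission.
encrypted = gpg.encrypt(advisory, imported.fingerprints[0], always_trust=True)
assert encrypted.ok, encrypted.status

with open("advisory.md.asc", "w") as out:
    out.write(str(encrypted))
```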

Op-ed: Why I’m not giving up on PGP

Neal H. Walfield is a hacker at g10code working on GnuPG.

This op-ed was written for Ars Technica by Walfield, in response to Filippo Valsorda's "I'm giving up on PGP" story that was published on Ars last week. Every once in a while, a prominent member of the security community publishes an article about how horrible OpenPGP is. Matthew Green wrote one in 2014 and Moxie Marlinspike wrote one in 2015.

The most recent was written by Filippo Valsorda, here on the pages of Ars Technica, which Matthew Green says "sums up the main reason I think PGP is so bad and dangerous." In this article I want to respond to the points that Filippo raises.
In short, Filippo is right about some of the details, but wrong about the big picture.

For the record, I work on GnuPG, the most popular OpenPGP implementation.

Forward secrecy isn't always desirable

Filippo's main complaint has to do with OpenPGP's use of long-term keys.
Specifically, he notes that due to the lack of forward secrecy, the older a key is, the more communication will be exposed by its compromise.

Further, he observes that OpenPGP's trust model includes incentives to not replace long-term keys.

First, it's true that OpenPGP doesn't implement forward secrecy (or future secrecy).

But, OpenPGP could be changed to support this. Matthew Green and Ian Miers recently proposed puncturable forward secure encryption, which is a technique to add forward secrecy to OpenPGP-like systems.

But, in reality, approximating forward secrecy has been possible since OpenPGP adopted subkeys decades ago. (An OpenPGP key is actually a collection of keys: a primary key that acts as a long-term, stable identifier, and subkeys that are cryptographically bound to the primary key and are used for encryption, signing, and authentication.) Guidelines on how to approximate forward secrecy were published in 2001 by Ian Brown, Adam Back, and Ben Laurie.

Although their proposal is only for an approximation of forward secrecy, it is significantly simpler than Green and Miers' approach, and it works in practice. As far as I know, Brown et al.'s proposal is not often used. One reason for this is that forward secrecy is not always desired.

For instance, if you encrypt a backup using GnuPG, then your intent is to be able to decrypt it in the future.
If you use forward secrecy, then, by definition, that is not possible; you've thrown away the old decryption key.
In the recent past, I've spoken with a number of GnuPG users including 2U and 1010data.

These two companies told me that they use GnuPG to protect client data.

Again, to access the data in the future, the encryption keys need to be retained, which precludes forward secrecy. This doesn't excuse the lack of forward secrecy when using GnuPG to protect e-mail, which is the use case that Filippo concentrates on.

The reason that forward secrecy hasn't been widely deployed here is that e-mail is usually left on the mail server in order to support multi-device access.
Since mail servers are not usually trusted, the mail needs to be kept encrypted.

The easiest way to accomplish this is to just not strip the encryption layer.
So, again, forward secrecy would render old messages inaccessible, which is often not desired. But, let's assume that you really want something like forward secrecy.

Then following Brown et al.'s approach, you just need to periodically rotate your encryption subkey.
Since your key is identified by the primary key and not the subkey, creating a new subkey does not change your fingerprint or invalidate any signatures, as Filippo states.
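Assuming GnuPG 2.1 or later, adding a fresh encryption subkey is a single command; the sketch below simply wraps it in Python, with a placeholder fingerprint standing in for your own primary key:

```python
import subprocess

# Placeholder: replace with the fingerprint of your own primary key.
PRIMARY_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"

# Add a fresh RSA encryption subkey that expires in 90 days
# (GnuPG 2.1+: gpg --quick-add-key FINGERPRINT [ALGO [USAGE [EXPIRE]]]).
subprocess.run(
    ["gpg", "--quick-add-key", PRIMARY_FPR, "rsa3072", "encr", "90d"],
    check=True,
)

# The approximation of forward secrecy comes from eventually deleting the
# secret material of expired subkeys; until you do, old messages remain
# decryptable.
```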

And, as long as your communication partners periodically refresh your key, rotating subkeys is completely transparent. Ideally, you'll want to store your primary key on a separate computer or smartcard so that if your computer is compromised, then only the subkeys are compromised.

But, even if you don't use an offline computer, and an attacker also compromises your primary key, this approach provides a degree of future secrecy: your attacker will be able to create new subkeys (since she has your primary key), and sign other keys, but she'll probably have to publish them to use them, which you'll eventually notice, and she won't be able to guess any new subkeys using the existing keys.

Physical attacks vs. cyber attacks

So, given that forward secrecy is possible, why isn't it enabled by default? We know from Snowden that when properly implemented, "encryption … really is one of the few things that we can rely on." In other words, when nation states crack encryption, they aren't breaking the actual encryption, they are circumventing it.

That is, they are exploiting vulnerabilities or using national security letters (NSLs) to break into your accounts and devices.

As such, if you really care about protecting your communication, you are much better off storing your encryption keys on a smartcard than storing them on your computer. Given this, it's not clear that forward secrecy is that big of a gain, since smartcards won't export private keys.
So, when Filippo says that he is scared of an evil maid attack and is worried that someone opened his safe with his offline keys while he was away, he's implicitly stating that his threat model includes a physical, targeted attack.

But, while moving to the encrypted messaging app Signal gets him forward secrecy, it means he can't use a smartcard to protect his keys and makes him more vulnerable to a cyber attack, which is significantly easier to conduct than a physical attack.

Another problem that Filippo mentions is that key discovery is hard.
Specifically, he says that key server listings are hard to use.

This is true.

But, key servers are in no way authenticated and should not be treated as authoritative.
Instead, if you need to find someone's key, you should ask that person for their key's fingerprint. Unfortunately, our research suggests that for many GnuPG users, picking up the phone is too difficult. So, after our successful donation campaign two years ago, we used some of the money to develop a new key discovery technique called the Web Key Directory (WKD).

Basically, the WKD provides a canonical way to find a key given an e-mail address via HTTPS.
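The lookup itself is easy to sketch. In the draft's "direct" method, a client hashes the lowercased local part of the address with SHA-1, encodes the digest in z-base-32, and fetches the key from a well-known HTTPS path; this is an illustrative sketch, so consult the WKD specification for the authoritative details:

```python
import hashlib

ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    """z-base-32 encode; a 20-byte SHA-1 digest yields exactly 32 chars."""
    n = int.from_bytes(data, "big")
    return "".join(ZBASE32[(n >> shift) & 0x1F]
                   for shift in range(len(data) * 8 - 5, -1, -5))

def wkd_direct_url(email: str) -> str:
    local, domain = email.split("@", 1)
    digest = hashlib.sha1(local.lower().encode("utf-8")).digest()
    return f"https://{domain}/.well-known/openpgpkey/hu/{zbase32(digest)}"

print(wkd_direct_url("neal@gnupg.org"))
```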

This is not as good as checking the fingerprint, but since only the mail provider and the user can change the key, it is a significant improvement over the de facto status quo. WKD has already been deployed by Posteo, and other mail providers are in the process of integrating it (consider asking your mail provider to support it). Other people have identified the key discovery issue, too. Micah Lee, for instance, recently published GPG Sync, and the INBOME group and the pretty Easy privacy (p≡p) project are working on opportunistically transferring keys via e-mail.

Signal isn't our saviour

Filippo also mentions the multi-device problem.
It's true that using keys on multiple devices is not easy. Part of the problem is that OpenPGP is not a closed ecosystem like Signal, which makes standardising a secret key exchange protocol much more difficult. Nevertheless, Tankred Hase did some work on private key synchronisation while at whiteout.io.

But, if you are worried about targeted attacks as Filippo is, then keeping your keys on a computer, never mind multiple computers, is not for you.
Instead, you want to keep your keys on a smartcard.
In this case, using your keys from multiple computers is easy: just plug the token in (or use NFC)! This assumes that there is an OpenPGP-capable mail client on your platform of choice.

This is the case for all of the major desktop environments, and there is also an excellent plug-in for K9 on Android called OpenKeychain. (There are also some solutions available for iOS, but I haven't evaluated them.) Even if you are using Signal, the multi-device problem is not completely solved.

Currently, it is possible to use Signal from a desktop and a smartphone or a tablet, but it is not possible to use multiple smartphones or tablets. One essential consideration that Filippo doesn't adequately address is that contacting someone on Signal requires knowing their mobile phone number. Many people don't want to make this information public.
I was recently chatting with Jason Reich, who is the head of OPSEC at BuzzFeed, and he told me that he spends a lot of time teaching reporters how to deal with the death and rape threats that they regularly receive via e-mail.

Based on this, I suspect that many reporters would opt to not publish their phone number even though it would mean missing some stories.
Similarly, while talking to Alex Abdo, a lawyer from the ACLU, I learned that he receives dozens of encrypted e-mails every day, and he is certain that some of those people would not have contacted him or the ACLU if they couldn't remain completely anonymous. Another point that Filippo doesn't cover is the importance of integrity; he focused primarily on confidentiality (i.e., encryption).
I love the fact that messages that I receive from DHL are signed (albeit using S/MIME and not OpenPGP).

This makes detecting phishing attempts trivial.
I wish more businesses would do this. Of course, Signal also provides integrity protection, but I definitely don't want to give all businesses my phone number given their record of protecting my e-mail address. Moreover, most of this type of communication is done using e-mail, not Signal. I want to be absolutely clear that I like Signal. When people ask me how they can secure their communication, I often recommend it.

But, I view Signal as complementary to OpenPGP.

First, e-mail is unlikely to go away any time soon.
Second, Signal doesn't allow transferring arbitrary data including documents.

And, importantly, Signal has its own problems.
In particular, the main Signal network is centralised, not federated like e-mail, the developers actively discourage third-party clients, and you can't choose your own identity.

These decisions are a rejection of a free and open Internet, and of pseudonymous communication.

In conclusion, Filippo has raised a number of important points.

But, with respect to long-term OpenPGP keys being fatally flawed and forward secrecy being essential, I think he is wrong and disagree with his compromises in light of his stated threat model.
I agree with him that key discovery is a serious issue.

But, this is something that we've been working to address. Most importantly, Signal cannot replace OpenPGP for many people who use it on a daily basis, and the developers' decision to make Signal a walled garden is problematic.
Signal does complement OpenPGP, though, and I'm glad that it's there. Neal H. Walfield is a hacker at g10code working on GnuPG. His current project is implementing TOFU for GnuPG.

To avoid conflicts of interest, GnuPG maintenance and development is funded primarily by donations. You can find him on Twitter @nwalfield.
E-mail: neal@gnupg.org OpenPGP: 8F17 7771 18A3 3DDA 9BA4 8E62 AACB 3243 6300 52D9 This post originated on Ars Technica UK

Op-ed: I’m throwing in the towel on PGP, and I work...

Filippo Valsorda is an engineer on the Cloudflare Cryptography team, where he's deploying and helping design TLS 1.3, the next revision of the protocol implementing HTTPS. He also created a Heartbleed testing site in 2014.

This post originally appeared on his blog and is re-printed with his permission. After years of wrestling with GnuPG with varying levels of enthusiasm, I came to the conclusion that it's just not worth it, and I'm giving up—at least on the concept of long-term PGP keys.

This editorial is not about the gpg tool itself, or about tools at all. Many others have already written about that.
It's about the long-term PGP key model—be it secured by Web of Trust, fingerprints or Trust on First Use—and how it failed me. Trust me when I say that I tried.
I went through all the setups.
I used Enigmail.
I had offline master keys on a dedicated Raspberry Pi with short-lived subkeys.
I wrote custom tools to make handwritten paper backups of offline keys (which I'll publish sooner or later).
I had YubiKeys. Multiple.
I spent days designing my public PGP policy. I traveled two hours by train to meet the closest Biglumber user in Italy to get my first signature in the strong set.
I have a signature from the most connected key in the set.
I went to key-signing parties in multiple continents.
I organized a couple. I have the arrogance of saying that I understand PGP.
In 2013 I was dissecting the packet format to brute force short IDs.
I devised complex silly systems to make device subkeys tie to both my personal and company master keys.
I filed usability and security issues in GnuPG and its various distributions. All in all, I should be the perfect user for PGP: competent, enthusiast, embedded in a similar community. But it just didn't work. First, there's the adoption issue others talked about extensively.
I get, at most, two encrypted e-mails a year. Then, there's the UX problem: easy crippling mistakes; messy keyserver listings from years ago; "I can't read this e-mail on my phone" or "on the laptop;" "I left the keys I never use on the other machine." But the real issues, I realized, are more subtle.
I never felt confident in the security of my long-term keys.

The more time passed, the more I would feel uneasy about any specific key. Yubikeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe.
Vulnerabilities would be announced. USB devices would get plugged in. A long-term key is as secure as the minimum common denominator of your security practices over its lifetime. It's the weak link. Worse, long-term key patterns, like collecting signatures and printing fingerprints on business cards, discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization.
Such practices actually encourage expanding the attack surface by making backups of the key. We talk about Pets vs. Cattle in infrastructure; those concepts would apply just as well to keys! If I suspect I'm compromised, I want to be able to toss the laptop and rebootstrap with minimum overhead.

The worst outcome possible for a scheme is making the user stick with a key that has a suspicion of compromise, because the cost of rotating would be too high. And all this for what gain? "Well, of course, long-term trust." Yeah, about that.
I never, ever, ever successfully used the WoT to validate a public key.

And remember, I have a well-linked key.
I haven't done a formal study, but I'm almost positive that everyone that used PGP to contact me has, or would have done (if asked), one of the following:

- pulled the best-looking key from a keyserver, most likely not even over TLS
- used a different key if replied with "this is my new key"
- re-sent the e-mail unencrypted if provided an excuse like "I'm traveling"

Travel in particular is hostile to long-term keys, making this kind of fresh start impractical. Moreover, I'm not even sure there's an attacker that long-term keys make sense against. Your average adversary probably can't MitM Twitter DMs (which means you can use them to exchange fingerprints opportunistically, while still protecting your privacy).

The Mossad will do Mossad things to your machine, whatever key you use. Finally, these days I think I care much more about forward secrecy, deniability, and ephemerality than I do about ironclad trust.

Are you sure you can protect that long-term key forever? Because when an attacker decides to target you and succeeds, they won't have access just from that point forward; they'll have access to all your past communications, too.

And that's ever more relevant.

Moving forward

I'm not dropping to plaintext. Quite the opposite.

But I won't be maintaining any public long-term key. Mostly I'll use Signal or WhatsApp, which offer vastly better endpoint security on iOS, ephemerality, and smoother key rotation. If you need to securely contact me, your best bet is to DM me asking for my Signal number.
If needed we can decide an appropriate way to compare fingerprints.
If we meet in person and need to set up a secure channel, we will just exchange a secret passphrase to use with what's most appropriate: OTR, Pond, Ricochet. If it turns out we really need PGP, we will set up some ad-hoc keys, more à la Operational PGP.
Same for any signed releases or canaries I might maintain in the future. To exchange files, we will negotiate Magic Wormhole, OnionShare, or ad-hoc PGP keys over the secure channel we already have.

The point is not to avoid the gpg tool, but the PGP key management model. If you really need to cold-contact me, I might maintain a Keybase key, but no promises.
I like rooting trust in your social profiles better since it makes key rotation much more natural and is probably how most people know me anyway. I'm also not dropping YubiKeys.
I'm very happy about my new YubiKey 4 with touch-to-operate, which I use for SSH keys, password storage, and machine bootstrap.

But these things are one hundred percent under my control.

About my old keys and transitioning

I broke the offline seal of all my keys.
I don't have reason to believe they are compromised, but you should stop using them now. Below are detached signatures for the Markdown version of this document from all keys I could still find. In the coming weeks I'll import all signatures I received, make all the signatures I promised, and then publish revocations to the keyservers.
I'll rotate my Keybase key.

Eventually, I'll destroy the private keys. See you on Signal. (Or Twitter.) Giving up on PGP.mdGiving up on PGP.md.B8CC58C51CAEA963.ascGiving up on PGP.md.C5C92C16AB6572C2.ascGiving up on PGP.md.54D93CBC8AA84B5A.asc Giving up on PGP.md.EBF01804BCF05F6B.asc [coming once I recover the passphrase from another country] Note: I expect the "Moving forward" section to evolve over time, as tools come and go.

The signed .md file won't change, an unauthenticated .diff will appear below for verification convenience.
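For anyone holding those files, checking one of the detached signatures programmatically is a few lines with the python-gnupg wrapper; this is a sketch and assumes the signing key has already been imported into your keyring:

```python
import gnupg  # pip install python-gnupg

gpg = gnupg.GPG()

# Check the detached .asc signature against the signed Markdown file.
with open("Giving up on PGP.md.B8CC58C51CAEA963.asc", "rb") as sig:
    verified = gpg.verify_file(sig, "Giving up on PGP.md")

print("valid:", verified.valid, "signed by:", verified.key_id)
```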

A beginner’s guide to beefing up your privacy and security online

With Thanksgiving behind us, the holiday season in the US is officially underway.
If you're reading Ars, that can only mean one thing: you'll be answering technical questions that your relatives have been saving since the last time you visited home. This year in addition to doing the regular hardware upgrades, virus scans, and printer troubleshooting, consider trying to advise the people in your life about better safeguarding their security and privacy. Keeping your data safe from attackers is one of the most important things you can do, and keeping your communications and browsing habits private can keep that data from being used to track your activities. This is not a comprehensive guide to security, nor should it be considered good enough for professional activists or people who suspect they may be under targeted surveillance.

This is for people who use their phones and computers for work and in their personal lives every single day and who want to reduce the chances that those devices and the accounts used by those devices will be compromised.

And while security often comes at some cost to usability, we've also done our best not to impact the fundamental utility and convenience of your devices. These recommendations simply don't represent the absolute best in security and privacy—the Electronic Frontier Foundation (EFF) has excellent, more in-depth guides on security for activists and protesters that you can read if you want to get even further out into the weeds.

But these are all good, basic best practices you can use if, like so many of us, you want to protect yourself against security breaches and trolls.

Feel free to share it directly with those in your life who insist on doing the computer work themselves.

Protecting your devices

Install updates, especially for your operating system and your browser

This ought to be self-evident, but: install updates for your phones, tablets, and computers as soon as you can when they’re made available.

The most important kinds of software updates are those for the operating system itself and for your browser, since Chrome, Firefox, Safari, Edge, and the rest are common points of entry for attackers. Updates for password managers and other apps on your system are also important, though, so don't ignore those update prompts when you see them. Waiting a day or two to make sure these updates don’t break anything major is fine, but don’t ignore update prompts for days or weeks at a time.

By the time an update exists for a security flaw, it is often already being used in attacks, which is why it’s important to install updates as quickly as possible. On this note, also be careful about using Android phones, which often run out-of-date software without current security patches.

Google’s Nexus and Pixel phones, which get software updates promptly and directly from Google, are the best way to make sure you’re up to date; while Samsung’s newer smartphones are also patched relatively promptly, everything else in the Android ecosystem is hit or miss.

Use strong passwords and passcodes

Having your accounts hacked is what you should be the most worried about—more on this later—but it’s also important to secure the devices you’re using to access those accounts. It goes without saying that you should use a good, strong password to protect every single user account on any PCs or Macs. On smartphones, you should use as strong a PIN or password as you reasonably can.
If your phone uses a fingerprint reader, take advantage of that added convenience by locking your phone with a strong alphanumeric password.

Target a 12- to 14-character minimum, since shorter passwords are more susceptible to brute force attacks.

Encrypt your phones and computers

If you need an oversimplified but easily understood way to explain "encryption" to someone, think of it as a massively complex decoder ring; when data is encrypted, it can only be accessed and read by a person or device that has the “key” needed to translate it back into its original form.
It’s important to encrypt your communications, and it’s also important to encrypt the devices you use to access any sensitive data since that data can be stored on them locally whether you realize it or not. The basic encryption guide we ran last year is still current; I’ll cover basic guidelines here, but refer to that for fuller details.

- iPhones and iPads are encrypted by default. Use a strong passcode and you’ll generally be fine.
- Macs are not encrypted by default, but FileVault disk encryption is fairly easy to enable in the Security section of the System Preferences.
- Some newer Android phones are encrypted by default, but go to the Settings and check under Security to confirm (this may differ depending on the phone you use). If the phone isn’t encrypted, it’s fairly easy to turn it on in the Security settings; protect the phone with a strong passcode afterward. Older phones and tablets may suffer a performance hit, but anything made in the last two or so years should have no major problems.
- Windows PCs tend not to be encrypted by default, and it’s only easy to enable encryption on newer PCs with the more expensive “Pro” versions of Windows. Windows can be encrypted by default, but only by supporting an esoteric list of requirements that few PCs meet.

Protecting your accounts

Two-factor authentication

The most significant thing you can do to protect your e-mail, bank, Apple, Facebook, Twitter, Google, Amazon, Dropbox, and other accounts is still to enable two-factor authentication (sometimes called two-step authentication or 2FA).

This means using a combination of multiple credentials to get into your account, usually a password and a six-digit code sent to your phone or generated by an authenticator app. There are three primary types of authentication: something you know (i.e. a password), something you have (i.e. your phone or a secure key), or something you are (i.e. your fingerprint or face).

To be considered “true” two-factor authentication, each factor needs to be from a different one of those three categories.
So, for instance, something that requires a password plus your phone is two-factor authentication.
Something that just asks you for two passwords is not, since they’re both something you know. SMS texts sent to your phone are relatively easy to hijack for determined attackers, so you should definitely use an authenticator app whenever possible.
I prefer Authy, but Google Authenticator is also widely used. When you enable two-factor authentication on an account, the first time you log in to an account on a new phone or computer, you’ll generally be asked to enter a special code after you enter your password.
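Those six-digit codes come from the TOTP algorithm (RFC 6238): the app and the service share a secret, and each independently derives a code from that secret and the current time. A minimal sketch:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC the current 30-second counter with the shared
    secret, then dynamically truncate to a short decimal code."""
    key = base64.b32decode(secret_b32.upper())
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The Base32 secret below is a made-up documentation value, not a real one.
print(totp("JBSWY3DPEHPK3PXP"))
```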

Anyone who has your password but doesn’t have the code can’t get into your accounts. You may also need to sign back in on all of your other devices before you can use them with your account again. Here are instructions for setting up two-factor authentication for a variety of services (Apple, Google, Microsoft, Twitter, Facebook, Dropbox, Slack, Amazon, Paypal, Venmo, and Stripe); if you can’t find yours on this list, Google is your friend; twofactorauth.org is also a helpful resource.

Using a password manager (and good password practices)

Two-factor authentication is great, but it’s only extra protection on top of good, strong passwords and password practices.
Security researcher Brian Krebs has a good primer on password security here, but the most important things to remember are:

- Don’t use the same password for multiple sites/services, especially if you use those sites/services to store personal data.
- Change your password regularly, and change it immediately if you suspect that the service has been hacked or that someone else has tried to use your account.
- Use the strongest passwords you can. Using various characters (capital and lowercase letters, numbers, punctuation) is important, but password length is also important. Consider a 12-to-14-character password to be a useful minimum, depending on the site’s password policies.

Remembering passwords is annoying, especially if you’re changing them all the time. One solution to this problem is to use a password manager.

These are apps that generate long, random, complex passwords and store them for you in encrypted form either on your device or in the cloud. You have to set and remember one strong master password (we recommend perhaps writing this down and putting it in a safe and secure place), but the app does the rest. There are lots of password managers available, but 1Password is probably the best known and best supported.
It costs $2.99 a month for one person and $4.99 a month for a family of up to five people, and there’s a 30-day free trial available as well. LastPass is also an OK free alternative, though this sort of protection is worth the cost.
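For a sense of what these generators do under the hood, Python's secrets module can produce the same kind of long, random password in a few lines; this is a generic sketch, not any particular manager's algorithm:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh password on every run
```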
It’s also generally a good idea to support companies that do security- and privacy-related work going forward.

Protecting your communications and Internet use

Using Signal for SMS and voice calls

Protecting your communications from being intercepted and read is one of the most important things you can do, but it’s also more difficult than other security measures we've discussed so far. Using an encrypted messaging service is the best way to protect your texts from prying eyes.
If you’re using Apple’s iMessage service (i.e. blue bubbles), you’re already using an encrypted service, but the downside is that it only works between two Apple devices and that Apple may still be able to hand out your data if asked. For communications between an iPhone and an Android phone or between two Android phones, your best option is Signal, a secure SMS app by Open Whisper Systems that provides encryption for both texting and voice calls.

Both you and your recipient will need to have Signal installed for it to work, but the app makes it easy to send out download links to your recipients and it’s easy to set up and use.

The EFF has detailed installation instructions for iOS and for Android. Another encrypted messaging service you may have heard of is WhatsApp, but the company’s acquisition by Facebook in early 2014 has given rise to some concerns among security and privacy advocates.
Still, depending on what the people you know already use, it could be better than just plain SMS or other chat services.

Using VPNs, especially on public Wi-Fi

You know those unsecured public networks that you log into when you’re at the cafe or coffee shop? Not only can anyone also get on that network and potentially exploit it, but attackers with relatively simple, inexpensive tools can see all of the data that travels between your phone or laptop and the wireless router.

Even networks with passwords (like those you’d use at work or in a hotel, for instance) can expose your data to other people who have the network password. The answer here is to use a Virtual Private Network, or VPN.
If you think of the streams of data going between a router and everything connected to it as an actual stream, then a VPN is a sort of straw or tube that keeps your stream separate from everyone else’s.
VPN services can also hide your browsing data from your Internet service provider, and they can give you some degree of protection from trackers used by websites and ad networks. (Again, like most measures, this is not a guaranteed way to achieve perfect security.) Subscribing to a VPN does cost money, but there are many options that will run $10 or less per month. Private Internet Access offers support for Windows, macOS, iOS, Android, and Linux; will let you use the service on up to five devices simultaneously; and costs a relatively inexpensive $6.95 a month or $39.95 a year (which breaks down to $3.33 a month).
If you use public wireless networks with any frequency at all, a VPN is a must-have and well worth the cost. VPNs aren't cure-alls, since some public networks are set up to keep them from working—sometimes deliberately, so the operator can show you ads, and sometimes by accident, because the network is configured to block anything beyond basic Internet use. Using a Mi-Fi hotspot or your phone's tethering feature when you're in public can be expensive, but it can also provide some peace of mind when you're having trouble getting your VPN to work on a public network.

E-mail security (is hard to do)

E-mail security is difficult; both of our security experts on staff have described it to me as a "lost cause" and "fundamentally broken." The primary problem is that even if you take precautions to protect your end of the conversation, you can do little to secure the servers and clients in between and on the receiving end.
Some services like Gmail offer enabled-by-default encryption between your computer and their servers, but messages sent from one server to another are still often unencrypted. Paid services like ProtonMail are promising—the company pledges enhanced security and privacy and says it won't read your messages or scrape data from them to sell you ads—but it hasn't been thoroughly audited, and it only really works as intended when sending mail between ProtonMail accounts. And longstanding e-mail encryption tools like PGP ("Pretty Good Privacy") are notoriously difficult to set up and use. You should definitely do what you can to secure your e-mail from casual snooping, and you should protect your account with the tools we've already mentioned—using an account from a major provider like Google, Microsoft, or Yahoo with a strong password and two-factor authentication enabled is a good way to start.
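If you're curious whether a given mail server even offers encrypted transport for that first server-to-server hop, you can ask it directly. A minimal sketch using Python's standard smtplib; the hostname is a placeholder, and note that outbound port 25 is often blocked on consumer connections:

    import smtplib

    # Placeholder: substitute the MX host you want to test.
    HOST = "smtp.example.com"

    with smtplib.SMTP(HOST, 25, timeout=10) as server:
        server.ehlo()
        if server.has_extn("starttls"):
            print(HOST, "advertises STARTTLS; mail to it can be encrypted in transit.")
        else:
            print(HOST, "does not advertise STARTTLS; mail may travel in the clear.")

Even when STARTTLS is advertised, though, you have no control over the remaining hops a message takes.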

But for truly sensitive communications that you want to keep private, using Signal or WhatsApp or even Facebook Messenger's "Secret Conversations" feature is a better way to do it.

Deleting old e-mails

Another mitigating factor for the e-mail problem is message retention—someone with ten years' worth of data to dig through is naturally going to reveal more about themselves than someone who only has six months of messages. Even free e-mail providers often give you so much storage space that it can be tempting to be a digital packrat and just keep everything, both for nostalgic reasons and just in case you ever need it for something.

But the more communications you store, the more information companies, law enforcement, and hackers have with which to track your wheeling and dealing. Consider how important or sensitive your communications are, and consider how often you actually need old e-mails.
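If you decide to prune, the job can be scripted. A rough sketch using Python's standard imaplib; the server, credentials, and cutoff date are all placeholders:

    import imaplib

    # Placeholders: use your own IMAP server and credentials.
    HOST, USER, PASSWORD = "imap.example.com", "you@example.com", "app-password"

    imap = imaplib.IMAP4_SSL(HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")

    # Find everything received before a cutoff date (IMAP's date format).
    status, data = imap.search(None, '(BEFORE "01-Jan-2016")')
    for num in data[0].split():
        imap.store(num, "+FLAGS", "\\Deleted")  # mark for deletion

    imap.expunge()  # permanently remove the flagged messages
    imap.logout()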

Consider deleting e-mails at regular intervals—deleting things after one year or even six months can be a good way to start if this is something you’re worried about, and think about deleting unimportant messages even more frequently.

Next steps

If you’ve done all of these things and you’re looking to do more, the EFF’s Surveillance Self-Defense page is a good resource.
It has more in-depth technical explanations for many of the concepts discussed here, as well as further recommendations.

The EFF also offers Chrome and Firefox plugins like Privacy Badger and HTTPS Everywhere, which (respectively) attempt to keep ads from tracking you across multiple sites and load content over an encrypted HTTPS connection rather than a standard HTTP connection whenever HTTPS is available. You could also look into things like the Tor project, which goes to greater lengths to obstruct surveillance and ensure privacy.

17 essential tools to protect your online identity, privacy

Make no mistake: Professional and state-sponsored cybercriminals are trying to compromise your identity -- either at home, to steal your money; or at work, to steal your employer’s money, sensitive data, or intellectual property. Most users know the basics of computer privacy and safety when using the internet, including using HTTPS and two-factor authentication whenever possible, and checking haveibeenpwned.com to verify whether their email addresses or user names and passwords have been compromised by a known attack. But these days, computer users should go well beyond tightening their social media account settings.

The security elite run a variety of programs, tools, and specialized hardware to ensure their privacy and security is as strong as it can be. Here, we take a look at this set of tools, beginning with those that provide the broadest security coverage down to each specific application for a particular purpose. Use any, or all, of these tools to protect your privacy and have the best computer security possible.

Everything starts with a secure device

Good computer security starts with a verified secure device, including safe hardware and a verified and intended boot experience.
If either can be manipulated, there is no way higher-level applications can be trusted, no matter how bulletproof their code. Enter the Trusted Computing Group.
Supported by the likes of IBM, Intel, Microsoft, and others, TCG has been instrumental in the creation of open, standards-based secure computing devices and boot pathways, the most popular of which are the Trusted Platform Module (TPM) chip and self-encrypting hard drives. Your secure computing experience begins with TPM.

TPM. The TPM chip provides secure cryptographic functions and storage.
It stores trusted measurements and private keys of higher-level processes, enabling encryption keys to be stored in the most secure manner possible for general-purpose computers. With TPM, computers can verify their own boot processes, from the firmware level up.
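As a concrete illustration, on a Linux machine with the tpm2-tools package installed you can inspect those boot measurements yourself; a minimal sketch, assuming the tpm2_pcrread utility is available:

    import subprocess

    # Assumes Linux with the tpm2-tools package installed. PCRs (Platform
    # Configuration Registers) hold the boot measurements described above;
    # unexpected changes between boots suggest a tampered boot chain.
    result = subprocess.run(
        ["tpm2_pcrread", "sha256:0,1,2,3"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)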

Almost all PC manufacturers offer models with TPM chips.
If your privacy is paramount, you’ll want to ensure the device you use has an enabled TPM chip.

UEFI. Unified Extensible Firmware Interface is an open-standards firmware specification that replaces the far less secure BIOS firmware chips. When enabled, UEFI 2.3.1 and later allow device manufacturers to “lock in” the device’s originating firmware instructions; any future updates must be signed and validated in order to update the firmware.

BIOS, on the other hand, can be corrupted with a minimal number of malicious bytes to “brick” the system and make it unusable until it's sent back to the manufacturer. Without UEFI, sophisticated malicious code can be installed to bypass all of your OS’s security protections. Unfortunately, there is no way to convert from BIOS to UEFI, if that’s what you have.

Secure operating system boot. Your operating system will need self-checking processes to ensure its intended boot process hasn’t been compromised. UEFI-enabled systems (v2.3.1 and later) can use UEFI’s Secure Boot process to begin a trusted boot process. Non-UEFI systems may have a similar feature, but it’s important to understand that if the underlying hardware and firmware do not have the necessary self-checking routines built in, upper-level operating system checks cannot be trusted as much.

Secure storage. Any device you use should have secure, default, encrypted storage, for both its primary storage and any removable media storage it allows. Local encryption makes it significantly harder for physical attacks to read your personal data. Many of today’s hard drives are self-encrypting, and many OS vendors (including Apple and Microsoft) offer software-based drive encryption. Many portable devices offer full-device encryption out of the box. You should not use a device and/or OS that does not enable default storage encryption.

Two-factor authentication. Two-factor authentication is fast becoming a must in today’s world, where passwords are stolen by the hundreds of millions annually. Whenever possible, use and require 2FA for websites storing your personal information or email.
If your computing device supports 2FA, turn it on there. When 2FA is required, it ensures an attacker can’t simply guess or steal your password. (Note that using a single biometric factor, such as a fingerprint, is not even close to being as secure as 2FA.
It’s the second factor that gives the strength.) 2FA ensures that an attacker cannot phish you out of your logon credentials as easily as they could if you were using a password alone.

Even if they get your password or PIN, they will still have to obtain the second logon factor: a biometric trait, USB device, cellphone, smart card, TPM chip, and so on.
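Many of those second factors boil down to a time-based one-time password (TOTP) under the hood. As a rough illustration of why a stolen password alone isn't enough, here is a minimal RFC 6238 sketch using only Python's standard library (the base32 secret is made up):

    import base64, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Generate an RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period          # 30-second time step
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    # Example with a made-up secret of the kind a site shows in a QR code:
    print(totp("JBSWY3DPEHPK3PXP"))

Because the code changes every 30 seconds and is derived from a secret that never leaves your device, a phished password is useless on its own.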
Stealing that second factor has been done, but it is significantly more challenging. Be aware, though, that if an attacker gains total access to the database that authenticates your 2FA logon, they will have the super-admin access necessary to read your data without your 2FA credentials.

Logon account lockout. Every device you use should lock itself after a certain number of bad logon attempts.

The number isn’t important: any value between 5 and 101 is reasonable enough to keep an attacker from guessing your password or PIN, though lower values mean that unintentional bad logons might end up locking you out of your device.

Remote find. Device loss or theft is one of the most common means of data compromise. Most of today’s devices (or OSes) come with a feature, often not enabled by default, to find a lost or stolen device. Real-life stories abound in which people have been able to find their devices, often at a thief’s location, by using remote-find software. Of course, no one should confront a thief.

Always get law enforcement involved.

Remote wipe. If you can’t find a lost or stolen device, the next best thing is to remotely wipe all personal data. Not all vendors offer remote wipe, but many, including Apple and Microsoft, do. When activated, the device, which is hopefully already encrypted and protected against unauthorized logons, will wipe all private data either after a certain number of incorrect logons or, once you issue a wipe command, the next time it connects to the internet.

All of the above provide a foundation for an overall secure computing experience. Without firmware, boot, and storage encryption protection mechanisms, a truly secure computing experience cannot be ensured.

But that’s only the start.

True privacy requires a secure network

The most paranoid computer security practitioners want every network connection they use to be secured.

And it all starts with a VPN.

Secure VPN. Most of us are familiar with VPNs from connecting remotely to our work networks.

Corporate VPNs provide secure connectivity from your offsite remote location to the company network, but they often offer limited or no protection for traffic to other network locations. Many hardware devices and software programs allow you to use a secure VPN no matter where you connect. With these boxes or programs, your network connection is encrypted from your device to your destination, as far as possible.

The best VPNs hide your originating information and/or randomly tunnel your connection among many other participating devices, making it harder for eavesdroppers to determine your identity or location. Tor is the most widely used free option today, though strictly speaking it is an onion-routing anonymity network rather than a VPN. Using a Tor-enabled browser, all of your network traffic is routed over randomly selected intermediate nodes, encrypting as much of the traffic as possible.
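As a quick illustration of how Tor slots in at the application layer, here is a minimal sketch that routes a web request through a local Tor client; it assumes Tor is listening on its default SOCKS port and that the third-party requests library is installed with SOCKS support (pip install requests[socks]):

    import requests

    # Assumes a local Tor client on its default SOCKS port (9050). The
    # socks5h scheme makes DNS resolution happen inside the tunnel, too.
    proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }
    resp = requests.get("https://check.torproject.org/api/ip",
                        proxies=proxies, timeout=30)
    print(resp.json())  # reports whether the request arrived via Tor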

Tens of millions of people rely on Tor to provide a reasonable level of privacy and security.

But Tor has many well-known weaknesses, ones that other solutions, such as MIT’s Riffle or Freenet, are attempting to solve. Most of these attempts, however, are more theoretical than deployed (for example, Riffle) or require opt-in, exclusionary participation to be more secure (such as Freenet).

Freenet, for example, will only connect (when in “darknet” mode) to other participating Freenet nodes that you know of in advance. You can’t connect to people and sites outside of Freenet when in this mode.

Anonymity services. Anonymity services, which may or may not provide a VPN as well, act as an intermediate proxy that completes a network request on behalf of the user.

The user submits his or her connection attempt or browser connection to the anonymity site, which completes the query, obtains the result, and passes it back to the user.

Anyone eavesdropping on the destination connection is likely to be stopped from tracing the request beyond the anonymity site, which hides the originator’s information.

There are loads of anonymity services available on the web. Some anonymity sites store your information, and some of these have been compromised or forced by law enforcement to provide user information. Your best bet for privacy is to choose an anonymity site, like Anonymizer, that doesn’t store your information for longer than the current request.

Another popular, commercial secure VPN service is HideMyAss.

Anonymity hardware. Some people have attempted to make Tor and Tor-based anonymity easier using specially configured hardware. My favorite is Anonabox (model: anbM6-Pro), which is a portable, Wi-Fi-enabled VPN and Tor router.
Instead of having to configure Tor on your computer or device, you can simply route your traffic through the Anonabox. Secure VPNs, anonymity services, and anonymity hardware can greatly enhance your privacy by securing your network connections.

But one big note of caution: No device or service offering security and anonymity has proved to be 100 percent secure.

Determined adversaries and unlimited resources can probably eavesdrop on your communications and determine your identity.

Everyone who uses a secure VPN, anonymity services, or anonymity hardware should communicate with the knowledge that any day their private communications could become public.

Secure applications are a must as well

With a secure device and secure connections, security experts use the most secure applications they can reasonably find. Here’s a rundown of some of your best bets for protecting your privacy.

Secure browsing. Tor leads the way for secure, almost end-to-end Internet browsing. When you can’t use Tor or a Tor-like VPN, make sure the browser you use is set to its most secure settings. You want to prevent unauthorized code (and sometimes legitimate code) from executing without your being aware.
If you have Java installed, uninstall it if you aren't using it, or make sure critical security patches are applied. Most browsers now offer “private browsing” modes. Microsoft calls this feature InPrivate; Chrome, Incognito.

These modes erase or decline to store browsing history locally, which makes unauthorized local forensic investigation less fruitful. Use HTTPS for all internet searches (and connections to any website), especially in public locations.

Enable your browser’s Do Not Track features.

Additional software can prevent your browser experience from being tracked, including browser extensions such as Adblock Plus, Ghostery, Privacy Badger, and DoNotTrackPlus.
Some popular sites try to detect these extensions and block your use of their sites unless you disable them.

Secure email. The original “killer app” for the internet, email is well known for violating users’ privacy.

The internet’s original open standard for securing email, S/MIME, is used less and less over time.
S/MIME requires each participating user to exchange public encryption keys with other users.

This requirement has proved too daunting for less savvy internet users. These days, most corporations that require end-to-end email encryption use commercial email services or appliances that allow secure email to be sent via HTTPS-enabled sites. Most commercial users of these services or devices say they are easy to implement and work with but can sometimes be very expensive. On the personal side, there are dozens of secure email offerings.

The most popular (and widely used in many businesses) is Hushmail. With Hushmail, you either use the Hushmail website to send and receive secure email or install and use a Hushmail email client program (available for desktops and some mobile devices). You can use your own, original email address, which gets routed through Hushmail’s proxy services, or obtain a Hushmail email address, a cheaper solution. Hushmail is one among dozens of secure email providers currently available.

Secure chat. Most OS- and device-provided chat programs do not offer strong security and privacy.

For strong end-to-end security you need to install an additional chat program. Luckily, there are dozens of chat programs, both free and commercial, that claim to offer greater security.
Some require installation of a client app; others offer website services. Most require all parties to communicate with the same program or use the same website (or at least the same chat protocol and protection). Common secure chat programs include ChatCrypt, ChatSecure, and Cryptocat. Most secure chat clients have the same basic features, so pick the one that enables you to communicate with the broadest set of people you need to securely chat with.

Secure payments. Most payment systems are required to store lots of information about you and your purchases, and they are usually required to provide payment or payer details when asked by law enforcement.

Even if they aren’t required to provide detailed data to the police or governments, many payment databases are compromised each year by malicious hackers. Most users wishing for greater payment anonymity on the internet are turning to online cryptocurrencies, such as bitcoin. Users must first buy bitcoins, usually via traditional online payment methods, and must go through bitcoin exchanges to get their bitcoin value back out into traditional currencies.

Each exchange into and out of bitcoin typically takes a small payment fee. Of course, the privacy and anonymity of virtual currencies comes with real risk.

They are usually not considered legal currency and may not be provided the same protections under law as “real” currencies.

They may also have incredible price volatility, with the value of your holdings potentially jumping or declining by huge margins in a single day.
It’s also possible that a single crypto attack could result in permanent, unrecoverable loss. Hackers have been successful in stealing millions of dollars' worth of bitcoins, and sometimes the compromised holders are never reimbursed. As for credit cards, you can buy and use temporary online (or physical) credit cards. Most credit card agencies offer temporary cards, often at slightly higher fee rates, which can be used for a set period of time or even for a single purchase.
If a website gets compromised, exposing your temporary credit card, you won’t be at a loss because you’ll never use it again.

Secure file transfers. Probably the only class of applications that offers more alternatives than secure email is secure file transfer.

Any program using SSH or SCP allows encrypted and secure file sharing (a small scripted example appears below), and there are dozens, if not hundreds, of commercial offerings. Users who wish to securely share files while also preserving their anonymity have a myriad of choices. One of the most popular commercial services is BTGuard.
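As an example of the SSH route, here is a minimal SFTP upload sketch using the third-party paramiko library (pip install paramiko); the host, credentials, and file paths are placeholders:

    import paramiko

    client = paramiko.SSHClient()
    # Demo only: in practice, verify host keys instead of auto-accepting them.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("files.example.com", username="you", password="app-password")

    # Everything below travels inside the encrypted SSH channel.
    sftp = client.open_sftp()
    sftp.put("report.pdf", "/home/you/report.pdf")  # upload a local file
    sftp.close()
    client.close()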
BTGuard provides file anonymity services over BitTorrent, a very popular peer-to-peer file-sharing protocol.

Anything Phil Zimmermann creates. Phil Zimmermann, creator of Pretty Good Privacy (PGP), cares deeply about privacy. He was willing to risk arrest, imprisonment, and potentially even the U.S. death penalty because he strongly believed that everyone on the planet deserved good privacy tools. Every good and experienced computer security person I know and trust uses PGP.

To work with PGP, each participant creates a private/public key pair and shares the public key with other participants, who use it to securely send files, emails, or other content. Symantec bought PGP and has supported it commercially since 2010, but dozens of trusted open source implementations of the OpenPGP standard, such as GnuPG, are also available.
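As a rough illustration of that workflow, here is a sketch that drives GnuPG through the third-party python-gnupg package (pip install python-gnupg); the email address and passphrase are placeholders:

    import gnupg

    gpg = gnupg.GPG()  # uses your local GnuPG installation and keyring

    # Generate a key pair (normally done once; guard the private key).
    key_input = gpg.gen_key_input(
        name_email="alice@example.com",
        passphrase="correct horse battery staple",
    )
    key = gpg.gen_key(key_input)

    # Encrypt a message to a recipient whose public key is in your keyring;
    # here we encrypt to our own new key for demonstration.
    encrypted = gpg.encrypt("meet me at noon", key.fingerprint)
    print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into email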
If you don’t have PGP, get it, install it, and use it. Zimmermann, who was also behind Hushmail, is a co-founder of Silent Circle, which offers secure solutions for a range of technologies.
It even offers the Blackphone, which was designed from the ground up to be the most secure, generally accessible cellphone ever.

There have been some hacks of the Blackphone, but it is still the cellphone that prizes privacy and security above all other features -- at least as much as one can while still selling the product to the general population. Whatever Phil Zimmermann creates or promotes can be assumed to be well thought out, delivering privacy and security in spades.