
Tag: Public Key Encryption

Google announced an early prototype of Key Transparency, its latest open source effort to ensure simpler, safer, and more secure communications for everyone.

The project’s goal is to make it easier for applications and services to share and discover public keys for users, but it will be a while before it's ready for prime time. Secure communication should be de rigueur, but it remains frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).

Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.

Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys.

An audit mechanism keeps the service accountable.
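Conceptually, the directory behaves like an append-only, publicly auditable log of (account, public key) records. The Python sketch below illustrates that idea with a naive hash chain; it is not Google's actual Key Transparency API, which is built on Merkle trees so clients can verify individual lookups efficiently, and every name in it is hypothetical.

import hashlib
import json
from typing import Optional

class TransparentKeyDirectory:
    """Toy append-only key directory: every published record extends a hash
    chain, so any two observers comparing log heads will notice tampering."""

    def __init__(self):
        self.entries = []          # append-only list of JSON records
        self.head = b"\x00" * 32   # rolling hash over everything published

    def publish(self, account: str, public_key: str) -> None:
        record = json.dumps({"account": account, "key": public_key}).encode()
        self.entries.append(record)
        self.head = hashlib.sha256(self.head + record).digest()

    def lookup(self, account: str) -> Optional[str]:
        # Return the most recently published key for an account.
        for record in reversed(self.entries):
            data = json.loads(record)
            if data["account"] == account:
                return data["key"]
        return None

    def log_head(self) -> str:
        # Auditors and clients compare this value; silently rewriting an old
        # record would change the head and be immediately visible.
        return self.head.hex()

directory = TransparentKeyDirectory()
directory.publish("alice@example.com", "BASE64-ENCODED-PUBLIC-KEY")  # placeholder
print(directory.lookup("alice@example.com"))
print(directory.log_head())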

There is the security protection of knowing that everyone is using the same published key, and any malicious attempts to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new, as PGP previously attempted a similar task with Global Directory.

The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.

Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities were repeatedly and mistakenly issuing fraudulent certificates. Google seems to be taking a “build it, and maybe applications will come” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in the early stages of design and development. “With this first open source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.

A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.

CONIKS is a client-based system that integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University's Center for Information Technology Policy and the main author of CONIKS.

For example, Melara and her team are currently integrating CONIKS with Tor Messenger.

CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.

For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.
Whether quantum computing is 10 years away or is already here, it promises to make current encryption methods obsolete, so enterprises need to start laying the groundwork for new encryption methods. A quantum computer uses qubits instead of bits.

A bit can be a zero or a one, but a qubit can be both simultaneously. That makes quantum computers strange and hard to program, but once the technology is working, it has the potential to be significantly more powerful than any of today's computers. And it will make many of today's public key algorithms obsolete, said Kevin Curran, IEEE senior member and a professor at the University of Ulster, where he heads the Ambient Intelligence Research Group. That includes today's most popular algorithms, he said.

For example, one common encryption method is based on the fact that it is extremely difficult to find the factors of very large numbers. "All of these problems can be solved on a powerful quantum computer," he said. He added that the problems mostly lie with public key systems, where the information is encoded and decoded by different parties.
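To see why hard factoring matters, consider a deliberately tiny RSA-style example: anyone who can factor the public modulus can recompute the private key, and Shor's algorithm on a sufficiently large quantum computer would make that factoring step easy. The numbers below are far too small to be secure; the sketch is purely illustrative.

# Toy RSA with tiny primes: factoring the public modulus n reveals the
# private key. Shor's algorithm on a large quantum computer could factor
# real-sized moduli, which is why RSA is considered at risk.
p, q = 61, 53                     # secret primes (far too small to be secure)
n = p * q                         # public modulus (3233)
e = 17                            # public exponent
phi = (p - 1) * (q - 1)           # totient, known only to the key owner
d = pow(e, -1, phi)               # private exponent (requires Python 3.8+)

message = 65
ciphertext = pow(message, e, n)   # encrypt with the public key
recovered = pow(ciphertext, d, n) # decrypt with the private key
assert recovered == message

# An attacker who factors n = 61 * 53 can recompute phi and d above; with
# real key sizes, that factoring step is infeasible on classical hardware.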
Symmetric algorithms, commonly used to encrypt local files and databases, don't have the same weaknesses and will survive a bit longer.

And increasing the length of the encryption keys will make those algorithms more secure. For public key encryption, such as that used for online communications and financial transactions, possible post-quantum alternatives include lattice-based, hash-based, and multivariate cryptographic algorithms, as well as those that update today's Diffie-Hellman algorithm with supersingular elliptic curves. Google is already experimenting with some of these, Curran said. "Google is working with the Lattice-based public-key New Hope algorithm," he said. "They are deploying it in Chrome where a small fraction of connections between desktop Chrome and Google's servers will use a post-quantum key-exchange algorithm. By adding a post-quantum algorithm on top of the existing one, they are able to experiment without affecting user security."

Flexibility is key

Some future-proof encryption algorithms have already been developed and are now being tested, but enterprises need to start checking now whether their systems, both those they have developed themselves and those provided by vendors, are flexible enough to allow old, obsolete algorithms to be easily replaced by new ones. Fortunately, according to Curran, there are already algorithms out there that seem to be workable replacements and that can run on existing computers.

One company that is paying very close attention to this is Echoworx, which provides on-premises and cloud-based enterprise encryption software. Quantum computing will break all of today's commonly used encryption algorithms, said Sam Elsharif, vice president of software development at Echoworx.

Encryption that today's most sophisticated computers could break only after thousands of years of work will be beaten by a quantum computer in minutes. "This is obviously very troubling, since it's the core of our business," he said. "Echoworx will be in trouble -- but so will all of today's infrastructure."

Since longer keys won't save public key encryption, companies will need to replace their algorithms, and that means the encryption technology needs to be modular. "It's called cryptographic agility," he said. "It means that you don't hard-wire encryption algorithms into your software, but make them more like pluggable modules. This is how software should be designed, and this is what we do at Echoworx."

Once post-quantum algorithms have been tested and become standards, Echoworx will be able to swap out the old ones for the new ones, he said. "You will still have a problem with old data," he said. "That data will either have to be destroyed or re-encrypted." Hardware-based encryption appliances will also need to be replaced if they can't be upgraded, he said.

Don't worry, it's still a long way off

How soon is this going to be needed? Not right away, some experts say. "The threat is real," said Elsharif. "The theory is proven, it's just a matter of engineering." But that engineering could take 10, 15, or 20 years, he said.

Ulster University's Curran says that quantum computers will need at least 500 qubits before they can start breaking current encryption, and the biggest current quantum computer has fewer than 15 qubits. "So there is no immediate worry," said Curran. However, research organizations should be working on the problem now, he said. "We may very well find that we do not actually need post-quantum cryptography but that risk is perhaps too large to take and if we do not conduct the research now, then we may lose years of critical research in this area."

Meanwhile, there's no reason for an attacker to try to break encryption by brute force if they can simply hack into users' email accounts or use stolen credentials to access databases and key files. Companies still have lots of work to do on improving authentication, fixing bugs, and patching outdated, vulnerable software. "Many steps need to be taken to tighten up a company’s vulnerability footprint before even discussing encryption," said Justin Fier, director of cyber intelligence and analysis at Darktrace. In addition, when attackers are able to bypass encryption, they usually do it because the technology is not implemented correctly, or uses weak algorithms. "We still have not employed proper protection of our data using current cryptography, let alone a future form," he said. "Quantum computing is still very much theoretical," he added. "Additionally, even if a prototype had been designed, the sheer cost required to build and operate the device within the extreme temperature constraints would make it difficult to immediately enter the mainstream marketplace."

No, go right ahead and panic

Sure, the typical criminal gang might not have a quantum computer right now with which to break encryption. But that's not necessarily true for all attackers, said Mike Stute, chief scientist at security firm Masergy Communications. There have already been public announcements from China about breakthroughs in both quantum computing and in unbreakable quantum communications. "It's probably safe to say that nation states are not on the first generation of the technology but are probably on the second," he said. There are even some signs that nation states are able to break encryption, Stute added.
It might not be a fast process, but it's usable. "They have to focus on what they really want," he said. "And bigger quantum computer will do more." That means companies with particularly sensitive data might want to start looking at upgrading their encryption algorithms sooner rather than later. Plus, there are already some quantum computers on the market, he added. The first commercial quantum computer was released by D-Wave Systems more than a year ago, and Google was one of its first customers. "Most everyone was skeptical, but they seem to have passed the test," said Stute. The D-Wave computer claims to have 1,000 qubits -- and the company has announced a 2,000-qubit computer that will be coming out in 2017. But D-Wave is talking about a different kind of qubit, Stute said.
The D-Wave machine has a very limited set of uses, he said, unlike a general-purpose quantum computer like IBM's, which would be well suited for cracking encryption. IBM's quantum computer has five qubits and is commercially available. "You can pay them to do your calculations," he said. "I was able to do some testing, and it all seems on the up and up. It's coming faster than we think."

This story, "Prepare now for the quantum computing revolution in encryption," was originally published by CSO.
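To make those two ideas concrete (the layered, hybrid key exchange Curran describes in Chrome's experiment and the "cryptographic agility" Elsharif advocates), here is a rough Python sketch. Everything in it is invented for illustration: the registry, the toy Diffie-Hellman parameters, and the placeholder standing in for a lattice-based post-quantum exchange.

import hashlib
import os
import secrets

# A registry of pluggable key-exchange suites ("cryptographic agility"):
# callers never hard-wire an algorithm, they just ask for a suite by name.
KEX_SUITES = {}

def register(name):
    def wrap(fn):
        KEX_SUITES[name] = fn
        return fn
    return wrap

@register("classical-dh")
def classical_dh() -> bytes:
    # Toy finite-field Diffie-Hellman; real code would use vetted parameters.
    p, g = 0xFFFFFFFB, 5               # 2^32 - 5 is prime
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    shared = pow(pow(g, a, p), b, p)   # both sides would compute g^(ab) mod p
    return shared.to_bytes(8, "big")

@register("pq-placeholder")
def pq_placeholder() -> bytes:
    # Stand-in for a post-quantum exchange such as a lattice-based KEM.
    return os.urandom(32)

@register("hybrid")
def hybrid() -> bytes:
    # Layering as in Chrome's experiment: mix both secrets so the session key
    # stays safe as long as either component remains unbroken.
    classical = KEX_SUITES["classical-dh"]()
    post_quantum = KEX_SUITES["pq-placeholder"]()
    return hashlib.sha256(classical + post_quantum).digest()

session_key = KEX_SUITES["hybrid"]()   # swapping suites is a one-line change
print(session_key.hex())

Because callers request a key-exchange suite by name, retiring an obsolete algorithm later becomes a configuration change rather than a rewrite, which is the point of the pluggable-module design.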
Researchers have devised a way to place undetectable backdoors in the cryptographic keys that protect websites, virtual private networks, and Internet servers.

The feat allows hackers to passively decrypt hundreds of millions of encrypted communications as well as cryptographically impersonate key owners. The technique is notable because it puts a backdoor—or in the parlance of cryptographers, a "trapdoor"—in 1,024-bit keys used in the Diffie-Hellman key exchange.

Diffie-Hellman significantly raises the burden on eavesdroppers because it regularly changes the encryption key protecting an ongoing communication.

Attackers who are aware of the trapdoor have everything they need to decrypt Diffie-Hellman-protected communications over extended periods of time, often measured in years. Knowledgeable attackers can also forge cryptographic signatures that are based on the widely used digital signature algorithm. As with all public key encryption, the security of the Diffie-Hellman protocol is based on number-theoretic computations involving prime numbers so large that the problems are prohibitively hard for attackers to solve.

The parties are able to conceal secrets within the results of these computations.
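A toy Diffie-Hellman run with deliberately small numbers shows how the two parties hide their secrets inside those computations; real deployments use primes of 1,024 bits or, preferably, far more.

import secrets

# Publicly agreed parameters: a prime modulus p and a generator g.
# (Toy values; real Diffie-Hellman uses primes of 1,024 bits and up.)
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)                    # Alice sends g^a mod p in the clear
B = pow(g, b, p)                    # Bob sends g^b mod p in the clear

# Each side combines the other's public value with its own secret.
alice_key = pow(B, a, p)            # (g^b)^a mod p
bob_key = pow(A, b, p)              # (g^a)^b mod p
assert alice_key == bob_key         # both arrive at the same shared secret

# An eavesdropper sees p, g, A, and B, but must solve a discrete logarithm
# (recover a from A = g^a mod p) to learn the shared key.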

A special prime devised by the researchers, however, contains certain invisible properties that make the secret parameters unusually susceptible to discovery.

The researchers were able to break one of these weakened 1,024-bit primes in slightly more than two months using an academic computing cluster of 2,000 to 3,000 CPUs.

Backdooring crypto standards—"completely feasible"

To the holder, a key with a trapdoored prime looks like any other 1,024-bit key.

To attackers with knowledge of the weakness, however, the discrete logarithm problem that underpins its security is about 10,000 times easier to solve.

This efficiency makes keys with a trapdoored prime ideal for the type of campaign former National Security Agency contractor Edward Snowden exposed in 2013, which aims to decode vast swaths of the encrypted Internet. "The Snowden documents have raised some serious questions about backdoors in public key cryptography standards," Nadia Heninger, one of the University of Pennsylvania researchers who participated in the project, told Ars. "We are showing that trapdoored primes that would allow an adversary to efficiently break 1,024-bit keys are completely feasible." While NIST—short for the National Institute of Standards and Technology—has recommended minimum key sizes of 2,048 bits since 2010, keys of half that size remain abundant on the Internet.

As of last month, a survey performed by the SSL Pulse service found that 22 percent of the top 200,000 HTTPS-protected websites performed key exchanges with 1,024-bit keys.

A belief that 1,024-bit keys can only be broken at great cost by nation-sponsored adversaries is one reason for the wide use. Other reasons include implementation and compatibility difficulties. Java version 8, released in 2014, for instance, didn't support Diffie-Hellman or DSA keys larger than 1,024 bits.

And, to this day, the DNSSEC specification for securing the Internet's domain name system limits keys to a maximum of 1,024 bits.

Poisoning the well

Solving a key's discrete logarithm problem is significant in the Diffie-Hellman arena. Why? Because a handful of primes are frequently standardized and used by a large number of applications. If the NSA or another adversary succeeded in getting one or more trapdoored primes adopted as a mainstream specification, the agency would have a way to eavesdrop on the encrypted communications of millions, possibly hundreds of millions or billions, of end users over the life of the primes.
So far, the researchers have found no evidence of trapdoored primes in widely used applications.

But that doesn't mean such primes haven't managed to slip by unnoticed. In 2008, the Internet Engineering Task Force published a series of recommended prime numbers for use in a variety of highly sensitive applications, including the transport layer security protocol protecting websites and e-mail servers, the secure shell protocol for remotely administering servers, the Internet key exchange for securing connections, and the secure/multipurpose Internet mail extensions standard for e-mail. Had the primes contained the type of trapdoor the researchers created, there would be virtually no way for outsiders to know, short of solving mathematical problems that would take centuries of processor time. Similarly, Heninger said, there's no way for the world at large to know that crucial 1,024-bit primes used by the Apache Web server aren't similarly backdoored.
In an e-mail, she wrote: We show that we are never going to be able to detect primes that have been properly trapdoored.

But we know exactly how the trapdoor works, and [we] can quantify the massive advantage it gives to the attacker.
So people should start asking pointed questions about how the opaque primes in some implementations and standards were generated. Why should the primes in RFC 5114 be trusted without proof that they have not been trapdoored? How were they generated in the first place? Why were they standardized and pretty widely implemented by VPNs without proof that they were generated with verifiable randomness?

Unlike prime numbers in RSA keys, which are always supposed to be unique, certain Diffie-Hellman primes are extremely common.
If the NSA or another adversary managed to get a trapdoored prime adopted as a real or de facto standard, it would be a coup.

From then on, the adversary would have possession of the shared secret that two parties used to generate ephemeral keys during a Diffie-Hellman-encrypted conversation. Remember Dual_EC_DRBG? Such a scenario, assuming it happened, wouldn't be the first time the NSA intentionally weakened standards so it could more easily defeat cryptographic protections.
In 2007, for example, NIST backed an NSA-developed algorithm for generating random numbers.

Almost from the start, the so-called Dual_EC_DRBG was suspected of containing a deliberately designed weakness that allowed the agency to quickly derive the cryptographic keys that relied on the algorithm for crucial randomness.
In 2013, some six years later, Snowden-leaked documents all but confirmed the suspicions. RSA Security, at the time owned by the publicly traded corporation EMC, responded by warning customers to stop using Dual_EC_DRBG.

At the time, Dual_EC_DRBG was the default random number generator in RSA's BSAFE and Data Protection Manager programs. Early this year, Juniper Networks also removed the NSA-developed number generator from its NetScreen line of firewalls after researchers determined it was one of two backdoors allowing attackers to surreptitiously decrypt VPN traffic.

In contrast to 1,024-bit keys, keys with a trapdoored prime of 2,048 bits take 16 million times longer to crack, or about 6.4 × 10^9 core-years, compared with the 400 core-years it took for the researchers to crack their trapdoored 1,024-bit prime. While even the 6.4 × 10^9 core-year threshold is considered too low by most security experts, the researchers—from the University of Pennsylvania and France's National Institute for Research in Computer Science and Control at the University of Lorraine—said their research still underscores the importance of retiring 1,024-bit keys as soon as possible. "The discrete logarithm computation for our backdoored prime was only feasible because of the 1,024-bit size, and the most effective protection against any backdoor of this type has always been to use key sizes for which any computation is infeasible," they wrote in a research paper published last week. "NIST recommended transitioning away from 1,024-bit key sizes for DSA, RSA, and Diffie-Hellman in 2010. Unfortunately, such key sizes remain in wide use in practice."

In addition to using sizes of 2,048 bits or bigger, the researchers said, keys must also be generated in a way that holders can verify the randomness of the underlying primes. One way to do this is to generate primes where most of the bits come from what cryptographers call "a 'nothing up my sleeve' number such as pi or e." Another method is for standardized primes to include the seed values used to ensure their randomness.
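One way to provide that kind of assurance, in the spirit of what the researchers describe, is to derive prime candidates deterministically from a published seed or a well-known constant so that anyone can reproduce the search. The sketch below illustrates the idea; it is not the procedure used by FIPS, Oakley, or any other standard.

import hashlib
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin primality test; probabilistic but reliable at 40 rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_from_public_seed(seed: bytes) -> int:
    """Derive 256-bit prime candidates from a published seed so anyone can
    re-run the search and confirm no hidden structure was slipped in."""
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        candidate = int.from_bytes(digest, "big") | (1 << 255) | 1  # fixed size, odd
        if is_probable_prime(candidate):
            return candidate
        counter += 1

# Anyone re-running this with the same published seed obtains the same prime.
print(hex(prime_from_public_seed(b"nothing up my sleeve: 3.14159265358979")))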
Sadly, such verifications are missing from a wide range of regularly used 1,024-bit primes. While the Federal Information Processing Standards imposed on US government agencies and contractors recommend that a seed be published along with the primes generated from it, the recommendation is marked as optional. The only widely used primes the researchers have seen come with such assurances are those generated using the Oakley key determination protocol, the negotiated Finite Field Diffie-Hellman Ephemeral Parameters for TLS version 1.3, and the Java Development Kit.

Cracking crypto keys most often involves the use of what's known as the number field sieve (NFS) algorithm to solve, depending on the key type, either the key's discrete logarithm problem or its factorization problem.

To date, the biggest prime known to have had its discrete logarithm problem solved was 768 bits long, a computation completed last year.

The feat took about 5,000 core-years.

By contrast, solving the discrete logarithm problem for the researchers' 1,024-bit key with the trapdoored prime required about a tenth of the computation.

"More distressing"

Since the early 1990s, researchers have known that certain composite integers are especially susceptible to being factored by NFS.

They also know that primes with certain properties allow for easier computation of discrete logarithms.

This special set of primes can be broken much more quickly than regular primes using NFS.

For some 25 years, researchers believed the trapdoored primes weren't a threat because they were easy to spot.

The new research provided novel insights into the special number field sieve that proved these assumptions wrong. Heninger wrote: The condition for being able to use the faster form of the algorithm (the "special" in the special number field sieve) is that the prime has a particular property.

For some primes that's easy to see, for example if a prime is very close to a power of 2. We found some implementations using primes like this, which are clearly vulnerable. We did discrete log computations for a couple of them, described in Section 6.2 of the paper. But there are also primes for which this is impossible to detect. (Or, more precisely, would be as much work to detect as it is to just do the discrete log computation the hard way.) This is more distressing, since there's no way for any user to tell that a prime someone gives them has this special property or not, since it just looks like a large prime. We discuss in the paper how to construct primes that have this special property but the property is undetectable unless you know the trapdoor secret. It's possible to give assurance that a prime does not contain a trapdoor like this. One way is to generate primes where most of the bits come from a "nothing up my sleeve" number like e or pi.
Some standards do this.

Another way is to give the seeds used for a verifiable random generation algorithm.

With the current batch of existing 1,024-bit primes already well past their, well, prime, the time has come to retire them to make way for 2,048-bit or even 4,096-bit replacements.

Those 1,024-bit primes that can't be verified as truly random should be viewed with special suspicion and banished from accepted standards as soon as possible.
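The "easy to see" case Heninger mentions can even be checked mechanically: a prime sitting suspiciously close to a power of two is an immediate red flag, whereas a properly constructed trapdoor reveals nothing to this kind of inspection. A quick, purely illustrative check:

import hashlib

def distance_to_power_of_two(n: int) -> int:
    """How far an integer is from the nearest power of two."""
    k = n.bit_length()
    return min(n - (1 << (k - 1)), (1 << k) - n)

# A Mersenne prime such as 2^127 - 1 is visibly of special form:
print(distance_to_power_of_two(2**127 - 1))    # prints 1

# A random-looking value of similar size lands nowhere near a power of two:
r = int.from_bytes(hashlib.sha256(b"example").digest(), "big")
print(distance_to_power_of_two(r))             # prints a very large number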
Relying on passwords is no longer enough, and some kind of two-factor authentication is a necessary component to secure applications, networks, and systems. However, the most common kind of two-factor authentication -- sending special codes via SMS messages -- may no longer be an acceptable form. In the latest draft version of its Digital Authentication Guideline, the United States National Institute of Standards and Technology (NIST) is discouraging companies from using SMS-based authentication in their two-factor authentication schemes. Many services offer two-factor authentication by asking users to enter into the app or site one-time passcodes sent via SMS to verify the transaction.

Concerned about the weaknesses in the SMS mechanism, NIST is now recommending that developers use tokens and software cryptographic authenticators instead.  "OOB [out-of-band] using SMS is deprecated, and will no longer be allowed in future releases of this guidance," NIST wrote in a draft version of the DAG. Software companies follow the guidelines set by NIST in their applications since federal agencies aren't allowed to use applications that don't conform to NIST guidelines.

This is especially relevant for secure electronic communications. SMS-based two-factor authentication is considered an insecure process because someone other than the user may be in possession of the phone and would be able to trigger the login request.
In some cases, the contents of the text message appear on the lock screen, which means the code is exposed to anyone who glances at the screen. NIST isn't deprecating SMS-based methods just because someone may be able to intercept the codes by taking control of the handset, since that risk also exists with tokens and software authenticators.

The main reason NIST appears to be down on SMS is that it is insecure over VoIP. There has been a significant increase in attacks targeting SMS-based two-factor authentication recently.
SMS messages can be hijacked over some VoIP services.
Security researchers have used weaknesses in the SMS protocol to remotely interact with applications on the target phone and compromise users. A recent attack used social engineering to bypass Google's two-factor authentication.

Criminals sent users text messages informing them that someone was trying to break into their Gmail accounts and that they should enter the passcode to temporarily lock the account.

The passcode -- which was a real code generated by Google when the attackers tried to log in -- arrived in a separate text message, and users who didn't realize the first message was not legitimate would pass the unique code on to the criminals. "NIST's decision to deprecate SMS two-factor authentication is a smart one," said Keith Graham, CTO of authentication provider SecureAuth. "The days of vanilla two-factor approaches are no longer enough for security." NIST outlines the future of SMS-based authentication in the DAG.
If the out-of-band verification is to be made via SMS message on a public mobile phone network, the verifier has to verify that the phone number is on an actual mobile network and not associated with a Voice-over-IP or other software-based phone service.
It should also not be possible to change the phone number receiving the SMS message without using two-factor authentication. For now, applications and services using SMS-based authentication can continue to do so as long as it isn't a service that virtualizes phone numbers.

Developers and application owners should be exploring other options, including dedicated two-factor apps such as Google Authenticator, which uses a secret key and the current time to generate a unique code locally on the device for the user to enter into the application. Hardware tokens such as RSA's SecurID display a new code every few seconds.
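Google Authenticator and similar apps implement the time-based one-time password (TOTP) scheme from RFC 6238: the app and the server share a secret, and each derives a short-lived code from that secret and the current time, so no code ever travels over SMS. A minimal sketch (the base32 secret is a textbook example, not a real credential):

import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // time_step                 # 30-second window
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Both the app and the server run the same function with the same secret;
# the server accepts the code only for the current (and adjacent) window.
print(totp("JBSWY3DPEHPK3PXP"))   # example secret, not a real credential

Because the code is computed locally on the device, there is nothing for an attacker to intercept or redirect in transit.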

A hardware security dongle such as YubiKey, used by many companies including Google and GitHub, supports one-time passwords, public key encryption, and authentication. Knowing that NIST is not very happy with SMS will push the authentication industry towards more secure options. Many popular services and applications offer only SMS-based authentication, including Twitter and online banking services from major banks. Once the NIST guidelines are final, these services will have to make some changes. Many developers are increasingly looking at fingerprint recognition, especially since the latest mobile devices have fingerprint sensors. Organizations can also employ adaptive authentication techniques, such as layering device recognition, geo-location, login history, or even behavioral biometrics to continually verify the true identity of the user, Graham said. NIST acknowledged that biometrics is gaining steam as a method for authentication, but refrained from issuing a full recommendation because biometrics aren't considered secret and can be obtained and forged by attackers through various methods.

Biometric methods are acceptable only if they are used with another authentication factor, according to the draft guidelines. "[Biometrics] can be obtained online or by taking a picture of someone with a camera phone (e.g. facial images) with or without their knowledge, lifted from objects someone touches (e.g., latent fingerprints), or captured with high resolution images (e.g., iris patterns for blue eyes)," NIST wrote in the DAG. The current version of the DAG is in public preview, which means the guidelines are still under discussion and NIST is soliciting feedback from partners and NIST stakeholders.

At this point, it appears NIST is moving away from recommending SMS-based authentication as a secure method for out-of-band verification.
If it doesn't happen in this version, it will likely happen in future versions.

Anyone who wants to review and comment can use GitHub to do so. "It only seemed appropriate for us to engage where so much of our community already congregates and collaborates," NIST wrote. SMS was an easy way to get developers, application owners, and users started on the two-factor authentication journey, because it was also the simplest.
SMS is better than no two-factor at all, but the never-ending stream of data breaches indicates that better and stronger authentication methods are needed.