The ongoing evolution of the malware demonstrates that the cybercriminals are not about to bid farewell to their brainchild, which is providing them with a steady revenue stream.
This sample is what could be considered the “father” of other XPan ransomware variants.
A considerable number of indicators within the source code point to the early origins of this sample.
After penetrating an organization's network, the threat actors used the PsExec tool to install ransomware on all endpoints and servers in the organization.
The next interesting fact about this ransomware is that the threat actors decided to use the well-known Petya ransomware to encrypt user data.
The first malware program to lock up people’s files and ask for a ransom was the PC Cyborg Trojan in 1989.
It was created by Harvard-trained evolutionary biologist Dr. Joseph Popp, who was working on several AIDS-related projects at the time. Dr. Popp sent a floppy disk containing a program covering AIDS information, teaching, and testing to tens of thousands of mailing list subscribers.
At startup, a crude EULA warned users they had to pay for the program—and the author reserved the legal right to “ensure termination of your use of the programs ....
These program mechanisms will adversely affect other program applications on microcomputers.” Most people didn’t read the EULA and ran the program without paying for it. After 90 boots, the program crudely encrypted/obfuscated the user’s hard drive data, rendering it inaccessible, and asked for a payment of $189 to be sent to a Panamanian post office box. (Check out a great analysis of the Trojan.)

Ransomware evolution

Early ransomware used symmetric key encryption, and the cipher algorithm was often poorly constructed.
Encryption experts could frequently break the ransomware easily, and because the symmetric key was the same shared key in every infection, every computer touched by the same ransomware program could be unlocked at once. Eventually, ransomware authors learned to use public key cryptography (where a private key and a separate public key are involved) and started to use popular, well-known, well-tested cipher algorithms.
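That shared-key weakness can be sketched in a few lines. This is a toy illustration only: XOR stands in for the weak ciphers early ransomware actually used, and all filenames and key values are made up.

```python
# Toy sketch: why a single shared symmetric key doomed early ransomware.
def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: applying it twice with the same key restores the data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

SHARED_KEY = b"same-key-in-every-sample"   # hard-coded into every infection

victim_a = xor_cipher(SHARED_KEY, b"quarterly-report.xls")
victim_b = xor_cipher(SHARED_KEY, b"family-photos.zip")

# Once any analyst extracts the key from one sample...
recovered_key = SHARED_KEY
# ...every victim of that ransomware family can decrypt at once.
assert xor_cipher(recovered_key, victim_a) == b"quarterly-report.xls"
assert xor_cipher(recovered_key, victim_b) == b"family-photos.zip"
```

One recovered key unlocks every infection, which is exactly the property per-infection key pairs later removed.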
A different key pair was generated for each infection, which made ransomware a very difficult problem to solve. By the mid-2000s, tough-to-break ransomware was becoming very popular, but the problem of how hackers would collect their money remained. Real money and credit card transactions can be traced. Enter CryptoLocker, the first widespread ransomware program to demand bitcoin payments.
CryptoLocker first appeared in 2013. When matched with randomly generated email addresses and “darknet” pathways, it became almost impossible to catch ransomware hackers. Ransomware writers and distributors are now making tens, if not hundreds of millions, of dollars off their victims. These days ransomware keeps getting more dangerous and targeted. Ransomware programs are now being developed to attack specific types of data, such as database tables, mobile devices, IoT units, and televisions.
This page chronicles all the significant developments from the last year or so.

Defeating ransomware

First, you need to verify that you’ve actually been hit by ransomware. Less sophisticated programs merely take over your current browser session or computer screen.
They make the same blackmail claims as a more sophisticated ransomware program, but don’t encrypt any files.
All you need to do is reboot the computer and/or use a program like Process Explorer to remove the malicious file.

Nothing beats a good backup

Nothing beats a current, offline backup.
The “offline” part is important because many ransomware programs will look for your online backups and render them unusable, too.

Get patched

Making sure your system is fully patched is a great way to prevent any malware from infecting your computer.
But make sure they are the real patches from the real vendors. Unfortunately, fake patches often contain ransomware.

Don’t get tricked

Don’t let yourself get socially engineered into installing ransomware.
In other words, don’t install anything sent to you in email or offered to you when visiting a website.
If a website says you need to install something, either leave the website and don’t go back—or leave the website and install the software directly from the legitimate vendor’s website. Never let a website install another vendor’s software for you.

Use antimalware software

Everyone needs to run at least one antimalware program. Windows comes with Windows Defender, but there are dozens of commercial competitors and some good freebies. Ransomware is malware.
Antimalware software can stop the majority of variants before they hit.

Use a whitelisting program

Application control or whitelisting programs stop any unauthorized program from executing.
These programs are probably the best defense against ransomware (besides a good offline backup).
Although many people think application control programs are too cumbersome to use, expect them to become much more accepted as ransomware continues to grow, at least in business computing.
The days of allowing employees to run any program they want are numbered.

What to do if you’re locked up

If all your critical data is backed up and safe, then you’ll be back in business in a few hours’ time. You’ll still need to reformat/reset/restore your device, however. Luckily, that process gets easier with each new operating system version. Using another safe, uninfected computer, restore your backup.
Apply all critical security patches, restore your data, and resolve never to do what you did that got your device locked up in the first place. If you don’t have a clean backup copy of your critical data and absolutely need the data, you have two options: Find an unlock key or pay the ransomware demand. Using another safe, trusted computer, research as much as you can about the particular ransomware variant you have.
The screen message presented by the ransomware will help you identify the variant. If you’re lucky, your ransomware variant may already have been unlocked. Many antimalware vendors have programs to detect and unlock ransomware (if they recognize the variant and have the unlock key). Run that program first. It may take an offline scan to get rid of the ransomware.
Several websites also offer unlocking services, free and commercial, for particular ransomware variants. Here’s an example of a ransomware unlocker.
Also, believe it or not, ransomware distributors will even occasionally apologize and release their own unlocking programs. Lastly, many people choose to pay the ransomware to recover their files. Most experts and companies recommend against paying ransom because it only encourages the ransomware creators and distributors. Yet quite often it works.
It’s your computer and data, so it’s up to you whether to pay the ransom. Be aware that in many cases people have paid up and their files have remained encrypted.
But these cases seem to be in the minority.
If ransomware didn’t unlock files after the money was paid, everyone would learn that—and ransomware attackers would make less money. I hope you never become a ransomware victim.
The odds of infection, unfortunately, are getting worse as ransomware gains popularity and sophistication.
The security researcher cited in the report acknowledged that the word 'backdoor' was probably not the best choice.
WhatsApp this week denied that its app provides a "backdoor" to encrypted texts.
A report published Friday by The Guardian, citing cryptography and security researcher Tobias Boelter, suggests a security vulnerability within WhatsApp could be used by government agencies as a backdoor to snoop on users.
"This claim is false," a WhatsApp spokesman told PCMag in an email.
The Facebook-owned company will "fight any government request to create a backdoor," he added.
WhatsApp in April turned on full end-to-end encryption—using the Signal protocol developed by Open Whisper Systems—to protect messages from the prying eyes of cybercriminals, hackers, "oppressive regimes," and even Facebook itself.
The system, as described by The Guardian, relies on unique security keys traded and verified between users in an effort to guarantee communications are secure and cannot be intercepted. When any of WhatsApp's billion users get a new phone or reinstall the program, their encryption keys change—"something any public key cryptography system has to deal with," Open Whisper Systems founder Moxie Marlinspike wrote in a Friday blog post.
During that process, messages may back up on the phone, waiting their turn to be re-encrypted.
According to The Guardian, that's when someone could sneak in, fake having a new phone, and hijack the texts.
But according to Marlinspike, "the fact that WhatsApp handles key changes is not a 'backdoor,' it is how cryptography works.
"Any attempt to intercept messages in transit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system," he wrote.
"We appreciate the interest people have in the security of their messages and calls on WhatsApp," co-founder Brian Acton wrote in a Friday Reddit post. "We will continue to set the record straight in the face of baseless accusations about 'backdoors' and help people understand how we've built WhatsApp with critical security features at such a large scale.
"Most importantly," he added, "we'll continue investing in technology and building simple features that help protect the privacy and security of messages and calls on WhatsApp."
In a blog post, Boelter said The Guardian's decision to use the word "backdoor" was probably not "the best choice there, but I can also see that there are arguments for calling it a 'backdoor.'" But Facebook was "furious and issued a blank denial, [which] polarized sides.
"I wish I could have had this debate with the Facebook Security Team in...private, without the public listening and judging our opinions, agreeing on a solution and giving a joint statement at the end," Boelter continued.
In an earlier post, Boelter said he reported the vulnerability in April 2016, but Facebook failed to fix it.
Boelter—a German computer scientist, entrepreneur, and PhD student at UC Berkeley focusing on Security and Cryptography—acknowledged that resolving the issue in public is a double-edged sword.
"The ordinary people following the news and reading headlines do not understand or do not bother to understand the details and nuances we are discussing now. Leaving them with wrong impressions leading to wrong and dangerous decisions: If they think WhatsApp is 'backdoored' and insecure, they will start using other means of communication. Likely much more insecure ones," he wrote. "The truth is that most other messengers who claim to have "end-to-end encryption" have the same vulnerability or have other flaws. On the other hand, if they now think all claims about a backdoor were wrong, high-risk users might continue trusting WhatsApp with their most sensitive information."
Boelter said he'd be content to leave the app as is if WhatsApp can prove that "1) too many messages get [sent] to old keys, don't get delivered, and need to be [re-sent] later and 2) it would be too dangerous to make blocking an option (moxie and I had a discussion on this)."
Then, "I could actually live with the current implementation, except for voice calls of course," provided WhatsApp is transparent about the issue, like adding a notice about key change notifications being delayed.
The project’s goal is to make it easier for applications and services to share and discover public keys for users, but it will be a while before it's ready for prime time. Secure communications should be de rigueur, but they remain frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).
Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.
Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys.
An audit mechanism keeps the service accountable.
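A minimal sketch can illustrate what such an audit mechanism looks like. The toy below uses a plain hash chain; the real Key Transparency and CONIKS designs use Merkle trees and signed tree heads, and all names here are illustrative.

```python
# Minimal sketch of an auditable, append-only key directory.
import hashlib
import json

class KeyLog:
    def __init__(self):
        self.entries = []
        self.head = "0" * 64          # hash of the latest entry

    def publish(self, account: str, public_key: str):
        # Each record commits to the previous head, forming a hash chain.
        record = json.dumps({"account": account, "key": public_key,
                             "prev": self.head})
        self.head = hashlib.sha256(record.encode()).hexdigest()
        self.entries.append(record)

    def audit(self) -> bool:
        # Anyone can replay the chain; tampering with any entry breaks it.
        h = "0" * 64
        for rec in self.entries:
            if json.loads(rec)["prev"] != h:
                return False
            h = hashlib.sha256(rec.encode()).hexdigest()
        return h == self.head

log = KeyLog()
log.publish("alice@example.com", "PUBKEY_A1")
log.publish("alice@example.com", "PUBKEY_A2")   # key rotation stays visible
assert log.audit()
```

Because every record commits to its predecessor, silently swapping a published key for a malicious one changes the chain and is immediately detectable by any auditor replaying the log.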
There is the security protection of knowing that everyone is using the same published key, and any malicious attempts to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new, as PGP previously attempted a similar task with Global Directory.
The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.
Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities had repeatedly issued certificates mistakenly or fraudulently. Google seems to be taking a “build it, and maybe applications will come” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in early stages of design and development. “With this first open source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.
A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.
Unlike a single centralized directory, CONIKS integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University’s Center for Information Technology Policy and the main author of CONIKS.
For example, Melara and her team are currently integrating CONIKS to work with Tor Messenger.
CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.
For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.
The public key is downloaded to the computer, but the private key never leaves the server and remains in the attackers' possession.
This is the key that victims pay to get access to. The problem with reaching out to a server on the internet after installation of ransomware is that it creates a weak link for attackers.
For example, if the server is known by security companies and is blocked by a firewall, the encryption process doesn't start. Some ransomware programs can perform so-called offline encryption, but they use the same RSA public key that's hard-coded into the malware for all victims.
The downside with this approach for attackers is that a decryptor tool given to one victim will work for all victims because they share the same private key as well. The Spora creators have solved this problem, according to researchers from security firm Emsisoft who analyzed the program's encryption routine. The malware does contain a hard-coded RSA public key, but this is used to encrypt a unique AES key that is locally generated for every victim.
This AES key is then used to encrypt the private key from a public-private RSA key pair that's also locally generated and unique for every victim. Finally, the victim's public RSA key is used to encrypt the AES keys that are used to encrypt individual files. In other words, the Spora creators have added a second round of AES and RSA encryption to what other ransomware programs have been doing until now. When victims want to pay the ransom, they have to upload their encrypted AES keys to the attackers' payment website.
The attackers will then use their master RSA private key to decrypt it and return it to the victim -- likely bundled in a decryptor tool. The decryptor will use this AES key to decrypt the victim's unique RSA private key that was generated locally, and that key will then be used to decrypt the per-file AES keys needed to recover the files. In this way, Spora can operate without the need for a command-and-control server and avoid releasing a master key that would work for all victims, the Emsisoft researchers said in a blog post. "Unfortunately, after evaluating the way Spora performs its encryption, there is no way to restore encrypted files without access to the malware author’s private key." Other aspects of Spora also set it apart from other ransomware operations.
For example, its creators have implemented a system that allows them to ask different ransoms for different types of victims. The encrypted key files that victims have to upload on the payments website also contain identifying information collected by the malware about the infected computers, including unique campaign IDs. This means that if the attackers launch a Spora distribution campaign specifically targeted at businesses, they will be able to tell when victims of that campaign will try to use their decryption service.
This allows them to automatically adjust the ransom amount for consumers or organizations or even for victims in different regions of the world. Furthermore, in addition to file decryption, the Spora gang offers other "services" that are priced separately, such as "immunity," which ensures that the malware will not infect a computer again, or "removal" which will also remove the program after decrypting the files.
They also offer a full package, where the victim can buy all three for a lower price. The payments website itself is well designed and looks professional.
It has an integrated live chat feature and the possibility of getting discounts.
From what the Emsisoft researchers observed, the attackers respond promptly to messages. All this points to Spora being a professional and well-funded operation.
The ransom values observed so far are lower than those asked by other gangs, which could indicate the group behind this threat wants to establish itself quickly. So far, researchers have seen Spora distributed via rogue email attachments that pose as invoices from an accounting software program popular in Russia and other Russian-speaking countries.
In 2002, Adleman himself won the Turing Award, often referred to as the Nobel Prize of the computing world. Like many of his Turing Award-winning peers, Adleman is still actively involved in solving some of today’s most important computer and security problems. His love of math and number theory, combined with his interest in molecular biology, created a whole new way of thinking about computing that blurs the lines between silicon and life.
If we ever see bio-robots that think and act like humans, Dr.
Adleman will be one of the people you should thank. I asked Dr.
Adleman about his contributions to the creation of the RSA algorithm back in 1977.
I knew that Whitfield Diffie, Martin Hellman, and Ralph Merkle had first worked out public key crypto the previous year but hadn’t quite figured out a practical implementation -- RSA's use of large prime numbers, and the difficulty of factoring them, eventually took over the world.
Adleman had this to say: I was the number theorist in residence. Ron [Rivest] and Adi [Shamir] were really more interested in public crypto than I was initially.
I was more interested in math and number theory at the time, and at first I couldn’t see how great a role crypto would play in our lives in the future.
But as Ron and Adi came to understand that solving their problems would probably involve algorithmic number theory, I got involved. Basically, Ron and Adi would propose many different solutions [42 to be exact], and I would quickly shoot them down.
They would make many attempts over the months, and I would run into them at birthday parties and celebrations and find flaws. One night, at a Passover dinner, Ron drank a lot of wine.
After dinner, around midnight, Ron called me and told me about the large prime number and factoring idea that would eventually become RSA.
And right on the phone I said, “Congratulations, you’ve done it!” I knew we couldn’t prove it was unbreakable, but I couldn’t see any flaws. The RSA guys went on to form a company and popularize public key cryptography. Dr.
Adleman’s interest in molecular biology, especially the HIV virus, also bore fruit.
In 1983, one of Adleman’s students, Frederick Cohen, created the first (or one of the earliest) self-replicating programs, which copied itself to other programs to spread.
Adleman saw the similarities between his biological work on HIV and what Cohen was doing, and he called Cohen’s creation a computer “virus.” Cohen credited Adleman with creating the name in his 1984 paper, “Experiments with Computer Viruses.”

Computing with DNA

A decade later, in 1994, Dr.
Adleman introduced the world to DNA computing in his seminal paper, “Molecular Computation of Solutions to Combinatorial Problems.” I remember reading the news stories surrounding his announcement with a mix of astonishment and incredulity.
If it had been announced this year, I’d still probably be checking to see if it was fake news.
But it wasn’t and isn’t.
Someone had figured out how for the first time to use biological life to compute. I asked Dr.
Adleman how the concept of using DNA to compute came to him. He replied: It came to me because of my interest in theoretical computer science and HIV. My interest in HIV led me to ask a colleague if I could get into his lab to become more proficient in professional molecular biology.
There I saw the world of DNA.
It was like being in Disney World! Since I had read Alan Turing’s 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” I knew that computing was easy, that the basic components were all around us. All you had to do was find a way of storing information and a way of doing simple operations on it.
I realized that DNA was a magnificent way of storing information and that living things had created enzymes to manipulate that information.
So I knew DNA computing would work. Life and computation are not very different from one another after all. Maybe we can’t put silicon computers into human cells, but we might be able to put DNA computers into them. One of the best parts of my discovery is what my students and others have done with it.
They have started to make structures out of DNA.
It even has a name: DNA origami.
They have even made DNA smiley faces.
If you need 50 billion statues of yourself, they can build them out of DNA.

Cybercatastrophes

I asked Dr.
Adleman what concerned him the most about computer security. He acknowledged what he was about to say might sound a bit apocalyptic: It’s not any immediate problem.
There are a zillion immediate problems, and a whole industry trying to respond to those.
But I hope security experts will take a longer view. What I’ve thought about, worried about, and am actually writing a book about is the “compuverse,” its extremely rapid evolution, and its potential for catastrophe. For example, we are all aware that it is an easy thing to attack an internet site.
But the major powers, and perhaps others, are almost surely working to acquire the ability to take down an entire nation’s computation power for a prolonged period of time.
A first-world country with no computational infrastructure is a country with no economy, no food, no power, and ultimately not a country at all.
In the not too distant future, cyberweapons may become weapons of mass destruction.
Computer security experts might be able to prepare for or prevent that from happening. To end on a slightly more positive note, there may be a small silver lining to our difficulties protecting computer systems.
Suppose some leader decides to hit “the button” to launch nuclear weapons.
There are a lot of computations between that button and the weapons.
In today’s world, can the leader still be sure that what he thinks will happen will? Currently, Adleman is working on a new approach to complex analysis called strata and writing a book on memes.
That’s in addition to his day job as a computer science professor at the University of Southern California.
It’s great to see one of the earliest contributors to computers and networks as we know them still going strong and contributing important insights to problems we face today.
In the security realm, I’ve always been a huge fan of the Trusted Computing Group.
It’s one of the few vendor organizations that truly makes computers more secure in a holistic manner. The Fast Identity Online (FIDO) Alliance is another group with lots of vendor participation that’s making headway in computer security.
Formed in 2012, FIDO focuses on strong authentication, moving the online world past less secure password logons and emphasizing safer browsers and security devices when accessing websites, web services, and cloud offerings.
Its mission statement includes the words “open standards,” “interoperable,” and “scalable” — and the organization is actually doing it.
Better, FIDO wants to do this in a way that’s so easy, users actually want to use the methods and devices. All FIDO authentication methods use public/private key cryptography, which makes them highly resistant to credential phishing and man-in-the-middle attacks.
Currently, FIDO has two authentication-specification mechanisms: Universal Authentication Framework (UAF), a “passwordless” method, and Universal Second Factor (U2F), a two-factor authentication (2FA) method.
The latter may involve a password, which can be noncomplex, because the additional factor ensures the overall strength.
FIDO authentication must be supported by your device or browser, along with the authenticating site or service. With UAF, the user registers their device with the participating site or service and chooses to implement an authentication factor, such as PIN or biometric ID. When connecting to the site or service, or conducting a transaction that requires strong authentication, the device performs local authentication (verifying the PIN or biometric identity) and passes along the success or failure to the remote site or service. With U2F, an additional security device (a cellphone, USB dongle, or so on) is used as the second factor after the password or PIN has been provided. The public/private key cryptography used behind the scenes is very reminiscent of TLS negotiations.
Both the server and the client have a private/public key pair, and they only share the public key with each other to facilitate authentication over a protected transmission method. The web server’s public key is used to send randomly created “challenge” information back and forth between the server and client.
The client’s private key never leaves the client device and can be used only when the user physically interacts with the device. FIDO authentication goes much further than traditional TLS.
It links “registered” devices to their users and those devices to the eventual websites or services.
Traditional TLS only guarantees server authentication to the client. One authentication device can be linked to many (or all) websites and services.
A nice graphical overview of the FIDO authentication process can be found here.

Google Security Keys

Google recently touted the success of its physical, FIDO-enabled “Security Keys” in a new whitepaper.
The versions touted in the paper are small, USB-enabled dongles with touch-sensitive capacitors that act as the second factor.
Each dongle has a unique device ID, which is registered to the user on each participating website.
The public key cryptography is Elliptic Curve Cryptography (ECC), with 256-bit keys (aka ECDSA_P256) and SHA-256 for signing.
Google’s results? Zero successful phishing, faster authentication, and lower support costs—can’t beat that.
The only negative was the one-time purchase cost of the devices, although Google says consumers should be able to buy Security Key devices for as little as $6 each.
That’s not bad for greater peace of mind.

FIDO updates

FIDO recently announced the 1.1 version of its specification.
It includes support for Bluetooth Low Energy, smartcards, and near-field communications (NFC).
FIDO authentication can already be used by more than 1.5 billion user accounts, including through Dropbox, GitHub, PayPal, Bank of America, NTT DoCoMo, and Salesforce.
Six of the top 10 mobile handset vendors already support FIDO, at least on some devices; mobile wallet vendors say they will participate as well. The 2.0 version of the FIDO specification is already in the works.
FIDO 2.0 is partitioned into two parts: the Web Authentication Spec, which is now in the W3C Web Authentication working group; and the remaining parts, including remote device authentication—which should allow you, for example, to unlock your workstation with your cellphone. Reducing the use of stolen credentials takes a big bite out of online crime.
I can only hope that the web continues to adopt the FIDO authentication standards as fast as possible.
After years of previous attempts at similar initiatives, this one looks poised for broad success.
Prepare now for the quantum computing revolution in encryption

A bit can be a zero or a one, but a qubit can be both simultaneously. That makes quantum computers strange and hard to program, but once they work, they have the potential to be significantly more powerful than any of today's machines. They will also make many of today's public key algorithms obsolete, said Kevin Curran, IEEE senior member and a professor at the University of Ulster, where he heads the Ambient Intelligence Research Group. That includes today's most popular algorithms, he said.
For example, one common encryption method is based on the fact that it is extremely difficult to find the factors of very large numbers. "All of these problems can be solved on a powerful quantum computer," he said. He added that the problems mostly lie with public key systems, where the information is encoded and decoded by different people.
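The factoring method Curran refers to is RSA, and the weakness can be shown with a toy key. This sketch uses deliberately tiny primes (real keys use 2048-bit moduli that no classical computer can factor, but that Shor's algorithm on a large quantum computer could):

```python
# Toy RSA broken by factoring (illustrative only).
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)              # encrypt with the public key

def factor(m: int):
    """Trial division: trivial here, infeasible at real key sizes."""
    for f in range(2, int(m ** 0.5) + 1):
        if m % f == 0:
            return f, m // f

fp, fq = factor(n)                   # the step a quantum computer makes easy
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(cipher, d_recovered, n))   # 42: plaintext recovered from n alone
```

Anyone who can factor n can rebuild the private exponent and read every message, which is why a machine that factors efficiently retires RSA outright.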
Symmetric algorithms, commonly used to encrypt local files and databases, don't have the same weaknesses and will survive a bit longer.
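The reason symmetric ciphers fare better is that the best known quantum attack on them, Grover's algorithm, gives only a quadratic speedup on brute-force key search: an n-bit key that takes roughly 2**n classical guesses takes roughly 2**(n/2) quantum iterations. A back-of-the-envelope sketch of what that does to common key sizes:

```python
# Grover's algorithm roughly halves the effective bit-security of a
# symmetric key, so doubling the key length restores the margin.
def post_quantum_security_bits(key_bits: int) -> int:
    return key_bits // 2

for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~{post_quantum_security_bits(key_bits)} bits "
          f"of post-quantum security")
# AES-256 still leaves ~128 bits of security, which is why moving to
# longer symmetric keys is considered an adequate response.
```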
And increasing the length of the encryption keys will make those algorithms more secure.

For public key encryption, such as that used for online communications and financial transactions, possible post-quantum alternatives include lattice-based, hash-based, and multivariate cryptographic algorithms, as well as schemes that update today's Diffie-Hellman key exchange with supersingular elliptic-curve isogenies. Google is already experimenting with some of these, Curran said. "Google is working with the lattice-based public-key New Hope algorithm," he said. "They are deploying it in Chrome, where a small fraction of connections between desktop Chrome and Google's servers will use a post-quantum key-exchange algorithm. By adding a post-quantum algorithm on top of the existing one, they are able to experiment without affecting user security."

Flexibility is key

Some future-proof encryption algorithms have already been developed and are now being tested, but enterprises need to start checking now whether their systems, both those they have developed themselves and those provided by vendors, are flexible enough to allow old, obsolete algorithms to be easily replaced by new ones. Fortunately, according to Curran, there are already algorithms out there that seem to be workable replacements and that can run on existing computers.

One company that is paying very close attention to this is Echoworx, which provides on-premises and cloud-based enterprise encryption software. Quantum computing will break all of today's commonly used encryption algorithms, said Sam Elsharif, vice president of software development at Echoworx.
Encryption that today's most sophisticated computer can break only after thousands of years of work will be beaten by a quantum computer in minutes. "This is obviously very troubling, since it's the core of our business," he said. "Echoworx will be in trouble -- but so will all of today's infrastructure." Since longer keys won't work for public key encryption and companies will need to replace their algorithms, the encryption technology needs to be modular. "It's called cryptographic agility," he said. "It means that you don't hard-wire encryption algorithms into your software, but make them more like pluggable modules.
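The pluggable-module idea Elsharif describes can be sketched as a small registry that callers consult for the currently approved algorithm, rather than hard-wiring one. All names here are hypothetical, and the stand-in ciphers are toy XOR keystreams (not real cryptography), purely to exercise the plumbing:

```python
from typing import Callable, Dict

# Hypothetical sketch of cryptographic agility: application code never
# names an algorithm directly; it asks a registry for the default, so an
# obsolete cipher can be swapped out without rewriting any callers.

Cipher = Callable[[bytes, bytes], bytes]   # (key, data) -> transformed data

_registry: Dict[str, Cipher] = {}
_default: str = ""

def register(name: str, cipher: Cipher, make_default: bool = False) -> None:
    global _default
    _registry[name] = cipher
    if make_default or not _default:
        _default = name

def encrypt(key: bytes, data: bytes) -> bytes:
    return _registry[_default](key, data)

# Toy stand-in ciphers (XOR keystreams, NOT real cryptography):
def legacy_cipher(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def pq_cipher(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] ^ 0xFF for i, b in enumerate(data))

register("legacy", legacy_cipher)
out_old = encrypt(b"k", b"secret")
# When a post-quantum standard lands, swap the default; callers unchanged:
register("post-quantum", pq_cipher, make_default=True)
out_new = encrypt(b"k", b"secret")
print(out_old != out_new)  # True: algorithm replaced without touching callers
```

In a real system the registry entries would be vetted library implementations selected by configuration, which is what lets old data be re-encrypted under new algorithms as they are standardized.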
This is how software should be designed, and this is what we do at Echoworx." Once post-quantum algorithms have been tested and become standards, Echoworx will be able to swap out the old ones for the new ones, he said. "You will still have a problem with old data," he said. "That data will either have to be destroyed or re-encrypted." Hardware-based encryption appliances will also need to be replaced if they can't be upgraded, he said.

Don't worry, it's still a long way off

How soon is this going to be needed? Not right away, some experts say. "The threat is real," said Elsharif. "The theory is proven; it's just a matter of engineering." But that engineering could take 10, 15, or 20 years, he said. Ulster University's Curran says that quantum computers need at least 500 qubits before they can start breaking current encryption, and the biggest current quantum computer has fewer than 15 qubits. "So there is no immediate worry," said Curran. However, research organizations should be working on the problem now, he said. "We may very well find that we do not actually need post-quantum cryptography, but that risk is perhaps too large to take, and if we do not conduct the research now, then we may lose years of critical research in this area."

Meanwhile, there's no reason for an attacker to try to break encryption by brute force if they can simply hack into users' email accounts or use stolen credentials to access databases and key files. Companies still have lots of work to do on improving authentication, fixing bugs, and patching outdated, vulnerable software. "Many steps need to be taken to tighten up a company's vulnerability footprint before even discussing encryption," said Justin Fier, director of cyber intelligence and analysis at Darktrace. In addition, when attackers are able to bypass encryption, they usually do it because the technology is not implemented correctly or uses weak algorithms.
"We still have not employed proper protection of our data using current cryptography, let alone a future form," he said. "Quantum computing is still very much theoretical," he added. "Additionally, even if a prototype had been designed, the sheer cost required to build and operate the device within the extreme temperature constraints would make it difficult to immediately enter the mainstream marketplace." No, go right ahead and panic Sure, the typical criminal gang might not have a quantum computer right now with which to do encryption. But that's not necessarily true for all attackers, Mike Stute, chief scientist at security firm Masergy Communications. There have already been public announcements from China about breakthroughs in both quantum computing and in unbreakable quantum communications. "It's probably safe to say that nation states are not on the first generation of the technology but are probably on the second," he said. There are even some signs that nation states are able to break encryption, Stute added.
It might not be a fast process, but it's usable. "They have to focus on what they really want," he said. "And a bigger quantum computer will do more." That means companies with particularly sensitive data might want to start looking at upgrading their encryption algorithms sooner rather than later.

Plus, there are already some quantum computers on the market, he added. The first commercial quantum computer was released by D-Wave Systems more than a year ago, and Google was one of its first customers. "Most everyone was skeptical, but they seem to have passed the test," said Stute. The D-Wave computer claims to have 1,000 qubits, and the company has announced a 2,000-qubit computer coming out in 2017. But they're talking about a different kind of qubit, Stute said.
It has a very limited set of uses, he said, unlike a general-purpose quantum computer like IBM's which would be well suited for cracking encryption. IBM's quantum computer has five qubits, and is commercially available. "You can pay them to do your calculations," he said. "I was able to do some testing, and it all seems on the up and up.
It's coming faster than we think."

This story, "Prepare now for the quantum computing revolution in encryption," was originally published by CSO.