The ongoing evolution of the malware demonstrates that the cybercriminals are not about to bid farewell to their brainchild, which is providing them with a steady revenue stream.
This sample could be considered the “father” of other XPan ransomware variants. Numerous indicators in the source code point to the early origins of this sample.
After penetrating an organization's network, the threat actors used the PsExec tool to install ransomware on all endpoints and servers in the organization.
The next interesting fact about this ransomware is that the threat actors decided to use the well-known Petya ransomware to encrypt user data.
The first malware program to lock up people’s files and ask for a ransom was the PC Cyborg Trojan in 1989.
It was created by Harvard-trained evolutionary biologist Dr. Joseph Popp, who was working on several AIDS-related projects at the time. Dr. Popp sent a floppy disk containing a program covering AIDS information, teaching, and testing to tens of thousands of mailing list subscribers.
At startup, a crude EULA warned users they had to pay for the program—and the author reserved the legal right to “ensure termination of your use of the programs ....
These program mechanisms will adversely affect other program applications on microcomputers.” Most people didn’t read the EULA and ran the program without paying for it. After 90 boots, the program crudely encrypted/obfuscated the user’s hard drive data, rendering it inaccessible, and asked for a payment of $189 to be sent to a Panamanian post office box.

Ransomware evolution

Early ransomware used symmetric key encryption, and the cipher algorithm was often poorly constructed.
Encryption experts could frequently break the ransomware easily, and because the same symmetric key was shared across every infection, every computer touched by the same ransomware program could be unlocked at once. Eventually, ransomware authors learned to use public key cryptography (where both a private key and a public key are involved) and started to use popular, well-known, well-tested cipher algorithms.
A different key pair was generated for each infection, which made ransomware a very difficult problem to solve. By the mid-2000s, tough-to-break ransomware was becoming very popular, but the problem of how hackers would collect their money remained. Real money and credit card transactions can be traced. Enter CryptoLocker, the first widespread ransomware program to demand bitcoin payments.
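The per-infection hybrid approach can be sketched with toy primitives. Everything below is illustrative only: the tiny primes and hash-based stream cipher stand in for real RSA and AES and offer no actual security.

```python
# Toy sketch of a per-infection hybrid scheme (NOT real cryptography):
# each "infection" gets its own RSA-style key pair, files are encrypted
# with a symmetric key, and that key is wrapped with the public key.
import hashlib
import random

def toy_rsa_keypair(p=1009, q=1013, e=65537):
    # Tiny primes for illustration; real RSA uses ~2048-bit moduli.
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))

def xor_stream(key: int, data: bytes) -> bytes:
    # Hash-derived keystream standing in for a real symmetric cipher.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

pub, priv = toy_rsa_keypair()            # unique key pair per infection
file_key = random.randrange(2, pub[0])   # symmetric key for this victim's files
ciphertext = xor_stream(file_key, b"victim file contents")
wrapped = pow(file_key, pub[1], pub[0])  # wrap the file key with the public key

# Only the matching private key recovers the file key, so an unlock key
# from one infection is useless against another.
recovered_key = pow(wrapped, priv[1], priv[0])
assert xor_stream(recovered_key, ciphertext) == b"victim file contents"
```

This is why per-infection key pairs ended the era of one universal decryptor: each victim's wrapped key decrypts only under that victim's private key.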
CryptoLocker first appeared in 2013. When matched with randomly generated email addresses and “darknet” pathways, it became almost impossible to catch ransomware hackers. Ransomware writers and distributors are now making tens, if not hundreds, of millions of dollars off their victims. These days ransomware keeps getting more dangerous and targeted. Ransomware programs are now being developed to attack specific types of data, such as database tables, mobile devices, IoT units, and televisions.
This page chronicles all the significant developments from the last year or so.

Defeating ransomware

First, you need to verify that you’ve actually been hit by ransomware. Less sophisticated programs merely take over your current browser session or computer screen.
They make the same blackmail claims as a more sophisticated ransomware program, but don’t encrypt any files.
All you need to do is reboot the computer and/or use a program like Process Explorer to remove the malicious file.

Nothing beats a good backup

Nothing beats a current, offline backup.
The “offline” part is important because many ransomware programs will look for your online backups and render them unusable, too.

Get patched

Making sure your system is fully patched is a great way to prevent malware from infecting your computer.
But make sure they are real patches from the real vendors. Unfortunately, fake patches often contain ransomware.

Don’t get tricked

Don’t let yourself get socially engineered into installing ransomware.
In other words, don’t install anything sent to you in email or offered to you when visiting a website.
If a website says you need to install something, either leave the website and don’t go back—or leave the website and install the software directly from the legitimate vendor’s website. Never let a website install another vendor’s software for you.

Use antimalware software

Everyone needs to run at least one antimalware program. Windows comes with Windows Defender, but there are dozens of commercial competitors and some good freebies. Ransomware is malware.
Antimalware software can stop the majority of variants before they hit.

Use a whitelisting program

Application control or whitelisting programs stop any unauthorized program from executing.
These programs are probably the best defense against ransomware (besides a good offline backup).
Although many people think application control programs are too cumbersome to use, expect them to become much more accepted as ransomware continues to grow, at least in business computing.
The days of allowing employees to run any program they want are numbered.

What to do if you’re locked up

If all your critical data is backed up and safe, then you’ll be back in business in a few hours’ time. You’ll still need to reformat/reset/restore your device, however. Luckily, that process gets easier with each new operating system version. Using another safe, uninfected computer, restore your backup.
Apply all critical security patches, restore your data, and resolve never to do what you did that got your device locked up in the first place. If you don’t have a clean backup copy of your critical data and absolutely need the data, you have two options: Find an unlock key or pay the ransomware demand. Using another safe, trusted computer, research as much as you can about the particular ransomware variant you have.
The screen message presented by the ransomware will help you identify the variant. If you’re lucky, your ransomware variant may already have been unlocked. Many antimalware vendors have programs to detect and unlock ransomware (if they recognize the variant and have the unlock key). Run that program first. It may take an offline scan to get rid of the ransomware.
Several websites also offer unlocking services, free and commercial, for particular ransomware variants.
Also, believe it or not, ransomware distributors will even occasionally apologize and release their own unlocking programs. Lastly, many people choose to pay the ransom to recover their files. Most experts and companies recommend against paying because it only encourages the ransomware creators and distributors. Yet quite often it works.
It’s your computer and data, so it’s up to you whether to pay the ransom. Be aware that in many cases people have paid up and their files have remained encrypted.
But these cases seem to be in the minority.
If ransomware didn’t unlock files after the money was paid, everyone would learn that—and ransomware attackers would make less money. I hope you never become a ransomware victim.
The odds of infection, unfortunately, are getting worse as ransomware gains popularity and sophistication.
The security researcher cited in the report acknowledged that the word 'backdoor' was probably not the best choice.
WhatsApp this week denied that its app provides a "backdoor" to encrypted texts.
A report published Friday by The Guardian, citing cryptography and security researcher Tobias Boelter, suggests a security vulnerability within WhatsApp could be used by government agencies as a backdoor to snoop on users.
"This claim is false," a WhatsApp spokesman told PCMag in an email.
The Facebook-owned company will "fight any government request to create a backdoor," he added.
WhatsApp in April turned on full end-to-end encryption—using the Signal protocol developed by Open Whisper Systems—to protect messages from the prying eyes of cybercriminals, hackers, "oppressive regimes," and even Facebook itself.
The system, as described by The Guardian, relies on unique security keys traded and verified between users in an effort to guarantee communications are secure and cannot be intercepted. When any of WhatsApp's billion users get a new phone or reinstall the program, their encryption keys change—"something any public key cryptography system has to deal with," Open Whisper Systems founder Moxie Marlinspike wrote in a Friday blog post.
During that process, messages may back up on the phone, waiting their turn to be re-encrypted.
According to The Guardian, that's when someone could sneak in, fake having a new phone, and hijack the texts.
But according to Marlinspike, "the fact that WhatsApp handles key changes is not a 'backdoor,' it is how cryptography works.
"Any attempt to intercept messages in transit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system," he wrote.
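The detection Marlinspike describes (a sender noticing that a contact's key has changed) can be sketched as a trust-on-first-use fingerprint check. The function and storage names below are illustrative, not WhatsApp's actual code:

```python
# Sketch of key-change detection in a client, in the spirit of
# Signal-style safety numbers. Illustrative only.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    # Short, human-comparable digest of a public key.
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

known_keys = {}  # contact -> fingerprint seen previously

def check_key(contact: str, public_key_bytes: bytes) -> bool:
    """Trust on first use; flag any later change so the client can warn."""
    fp = fingerprint(public_key_bytes)
    if contact in known_keys and known_keys[contact] != fp:
        # Key changed: a new phone, a reinstall, or an interception attempt.
        return False
    known_keys[contact] = fp
    return True

assert check_key("alice", b"key-v1")      # first sight: remember the key
assert check_key("alice", b"key-v1")      # unchanged: no warning
assert not check_key("alice", b"key-v2")  # changed: surface a warning
```

The policy question in the WhatsApp debate is what happens on that last case: warn before sending, warn after, or block until the user verifies.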
"We appreciate the interest people have in the security of their messages and calls on WhatsApp," co-founder Brian Acton wrote in a Friday Reddit post. "We will continue to set the record straight in the face of baseless accusations about 'backdoors' and help people understand how we've built WhatsApp with critical security features at such a large scale."
"Most importantly," he added, "we'll continue investing in technology and building simple features that help protect the privacy and security of messages and calls on WhatsApp."
In a blog post, Boelter said The Guardian's decision to use the word "backdoor" was probably not "the best choice there, but I can also see that there are arguments for calling it a 'backdoor.'" But Facebook was "furious and issued a blank denial, [which] polarized sides.
"I wish I could have had this debate with the Facebook Security Team in...private, without the public listening and judging our opinions, agreeing on a solution and giving a joint statement at the end," Boelter continued.
In an earlier post, Boelter said he reported the vulnerability in April 2016, but Facebook failed to fix it.
Boelter—a German computer scientist, entrepreneur, and PhD student at UC Berkeley focusing on Security and Cryptography—acknowledged that resolving the issue in public is a double-edged sword.
"The ordinary people following the news and reading headlines do not understand or do not bother to understand the details and nuances we are discussing now. Leaving them with wrong impressions leading to wrong and dangerous decisions: If they think WhatsApp is 'backdoored' and insecure, they will start using other means of communication. Likely much more insecure ones," he wrote. "The truth is that most other messengers who claim to have 'end-to-end encryption' have the same vulnerability or have other flaws. On the other hand, if they now think all claims about a backdoor were wrong, high-risk users might continue trusting WhatsApp with their most sensitive information."
Boelter said he'd be content to leave the app as is if WhatsApp can prove that "1) too many messages get [sent] to old keys, don't get delivered, and need to be [re-sent] later and 2) it would be too dangerous to make blocking an option (moxie and I had a discussion on this)."
Then, "I could actually live with the current implementation, except for voice calls of course," provided WhatsApp is transparent about the issue, like adding a notice about key change notifications being delayed.
The project’s goal is to make it easier for applications and services to share and discover public keys for users, but it will be a while before it's ready for prime time. Secure communication should be de rigueur, but it remains frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).
Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.
Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys.
An audit mechanism keeps the service accountable.
There is the security protection of knowing that everyone is using the same published key, and any malicious attempts to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new, as PGP previously attempted a similar task with Global Directory.
The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.
Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities were repeatedly mistakenly issuing fraudulent certificates. Google seems to be taking a “build it, and maybe applications will come” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in early stages of design and development. “With this first open source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.
A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.
As a user client, CONIKS integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University’s Center for Information Technology Policy and the main author of CONIKS.
For example, Melara and her team are currently integrating CONIKS to work with Tor Messenger.
CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.
For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.
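The tamper-evident property that both Certificate Transparency and Key Transparency rely on can be sketched with a Merkle tree over directory records: auditors recompute the published root, so any silently modified entry is immediately visible. This is a minimal illustration, not Google's actual data structure:

```python
# Minimal auditable key directory: records are hashed into a Merkle tree
# whose root is published; changing any record changes the root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each record, then pair-and-hash up to a single root.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the odd node out
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A key directory: each record binds an account name to a public key.
directory = {"alice@example.com": b"alice-key", "bob@example.com": b"bob-key"}
leaves = [u.encode() + b"|" + k for u, k in sorted(directory.items())]
published_root = merkle_root(leaves)

# An auditor recomputing the tree from the same records gets the same root,
# so a server that silently swaps Alice's key is immediately detectable.
tampered = dict(directory, **{"alice@example.com": b"evil-key"})
tampered_leaves = [u.encode() + b"|" + k for u, k in sorted(tampered.items())]
assert merkle_root(tampered_leaves) != published_root
```

Real systems add inclusion and consistency proofs on top of this so that clients can verify a single record without downloading the whole directory.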
The public key is downloaded to the computer, but the private key never leaves the server and remains in the attackers' possession.
This is the key that victims pay to get access to. The problem with reaching out to a server on the internet after installation of ransomware is that it creates a weak link for attackers.
For example, if the server is known by security companies and is blocked by a firewall, the encryption process doesn't start. Some ransomware programs can perform so-called offline encryption, but they use the same RSA public key that's hard-coded into the malware for all victims.
The downside of this approach for attackers is that a decryptor tool given to one victim will work for all victims, because they all share the same private key. The Spora creators have solved this problem, according to researchers from security firm Emsisoft who analyzed the program's encryption routine. The malware does contain a hard-coded RSA public key, but it is used to encrypt a unique AES key that is locally generated for every victim.
This AES key is then used to encrypt the private key from a public-private RSA key pair that's also locally generated and unique for every victim. Finally, the victim's public RSA key is used to encrypt the AES keys that are used to encrypt individual files. In other words, the Spora creators have added a second round of AES and RSA encryption to what other ransomware programs have been doing until now. When victims want to pay the ransom, they have to upload their encrypted AES keys to the attackers' payment website.
The attackers will then use their master RSA private key to decrypt it and return it to the victim -- likely bundled in a decryptor tool. The decryptor will use this AES key to decrypt the victim's unique RSA private key that was generated locally, and that key will then be used to decrypt the per-file AES keys needed to recover the files. In this way, Spora can operate without the need for a command-and-control server and avoid releasing a master key that would work for all victims, the Emsisoft researchers said in a blog post: "Unfortunately, after evaluating the way Spora performs its encryption, there is no way to restore encrypted files without access to the malware author’s private key." Other aspects of Spora also set it apart from other ransomware operations.
For example, its creators have implemented a system that allows them to ask different ransoms for different types of victims. The encrypted key files that victims have to upload to the payment website also contain identifying information collected by the malware about the infected computers, including unique campaign IDs. This means that if the attackers launch a Spora distribution campaign specifically targeted at businesses, they will be able to tell when victims of that campaign try to use their decryption service.
This allows them to automatically adjust the ransom amount for consumers or organizations, or even for victims in different regions of the world. Furthermore, in addition to file decryption, the Spora gang offers other "services" that are priced separately, such as "immunity," which ensures that the malware will not infect a computer again, or "removal," which also removes the program after decrypting the files.
They also offer a full package, where the victim can buy all three for a lower price. The payments website itself is well designed and looks professional.
It has an integrated live chat feature and the possibility of getting discounts.
From what the Emsisoft researchers observed, the attackers respond promptly to messages. All this points to Spora being a professional and well-funded operation.
The ransom values observed so far are lower than those asked by other gangs, which could indicate the group behind this threat wants to establish itself quickly. So far, researchers have seen Spora distributed via rogue email attachments that pose as invoices from an accounting software program popular in Russia and other Russian-speaking countries.
In 2002, Adleman himself won the Turing Award, often referred to as the Nobel Prize of the computing world. Like many of his Turing Award-winning peers, Adleman is still actively involved in solving some of today’s most important computer and security problems. His love of math and number theory, combined with his interest in molecular biology, created a whole new way of thinking about computing that blurs the lines between silicon and life.
If we ever see bio-robots that think and act like humans, Dr.
Adleman will be one of the people you should thank. I asked Dr.
Adleman about his contributions to the creation of the RSA algorithm back in 1977.
I knew that Whitfield Diffie, Martin Hellman, and Ralph Merkle had first worked out public key crypto the previous year, but it was RSA's use of large prime numbers -- and the difficulty of factoring them -- that eventually took over the world.
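That factoring idea is easiest to see with textbook-sized numbers. A minimal walk-through (real keys use primes hundreds of digits long, which is what makes recovering the private exponent infeasible):

```python
# Textbook RSA with tiny numbers (p and q far too small for real use).
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # 2753: the private exponent

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # decrypt with the private key

# Recovering d from (n, e) alone requires factoring n back into p and q,
# which is what makes RSA hard to break at realistic key sizes.
```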
Adleman had this to say: I was the number theorist in residence. Ron [Rivest] and Adi [Shamir] were really more interested in public crypto than I was initially.
I was more interested in math and number theory at the time, and at first I couldn’t see how great a role crypto would play in our lives in the future.
But as Ron and Adi came to understand that solving their problems would probably involve algorithmic number theory, I got involved. Basically, Ron and Adi would propose many different solutions [42 to be exact], and I would quickly shoot them down.
They would make many attempts over the months, and I would run into them at birthday parties and celebrations and find flaws. One night, at a Passover dinner, Ron drank a lot of wine.
After dinner, around midnight, Ron called me and told me about the large prime number and factoring idea that would eventually become RSA.
And right on the phone I said, “Congratulations, you’ve done it!” I knew we couldn’t prove it was unbreakable, but I couldn’t see any flaws. The RSA guys went on to form a company and popularize public key cryptography. Dr.
Adleman’s interest in molecular biology, especially the HIV virus, also bore fruit.
In 1983, one of Adleman’s students, Frederick Cohen, created one of the earliest self-replicating programs, which copied itself to other programs to spread.
Adleman saw the similarities between his biological work on HIV and what Cohen was doing, and he called Cohen’s creation a computer “virus.” Cohen credited Adleman with creating the name in his 1984 paper, “Experiments with Computer Viruses.”

Computing with DNA

A decade later, in 1994, Dr.
Adleman introduced the world to DNA computing in his seminal paper, “Molecular Computation of Solutions to Combinatorial Problems.” I remember reading the news stories surrounding his announcement with a mix of astonishment and incredulity.
If it had been announced this year, I’d still probably be checking to see if it was fake news.
But it wasn’t and isn’t.
Someone had figured out how for the first time to use biological life to compute. I asked Dr.
Adleman how the concept of using DNA to compute came to him. He replied: It came to me because of my interest in theoretical computer science and HIV. My interest in HIV led me to ask a colleague if I could get into his lab to become more proficient in professional molecular biology.
There I saw the world of DNA.
It was like being in Disney World! Since I had read Alan Turing’s 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” I knew that computing was easy, that the basic components were all around us. All you had to do was find a way of storing information and a way of doing simple operations on it.
I realized that DNA was a magnificent way of storing information and that living things had created enzymes to manipulate that information.
So I knew DNA computing would work. Life and computation are not very different from one another after all. Maybe we can’t put silicon computers into human cells, but we might be able to put DNA computers into them. One of the best parts of my discovery is what my students and others have done with it.
They have started to make structures out of DNA.
It even has a name: DNA origami.
They have even made DNA smiley faces.
If you need 50 billion statues of yourself, they can build them out of DNA.

Cybercatastrophes

I asked Dr.
Adleman what concerned him the most about computer security. He acknowledged what he was about to say might sound a bit apocalyptic: It’s not any immediate problem.
There are a zillion immediate problems, and a whole industry trying to respond to those.
But I hope security experts will take a longer view. What I’ve thought about, worried about, and am actually writing a book about is the “compuverse,” its extremely rapid evolution, and its potential for catastrophe. For example, we are all aware that it is an easy thing to attack an internet site.
But the major powers, and perhaps others, are almost surely working to acquire the ability to take down an entire nation’s computation power for a prolonged period of time.
A first-world country with no computational infrastructure is a country with no economy, no food, no power, and ultimately not a country at all.
In the not too distant future, cyberweapons may become weapons of mass destruction.
Computer security experts might be able to prepare for or prevent that from happening. To end on a slightly more positive note, there may be a small silver lining to our difficulties protecting computer systems.
Suppose some leader decides to hit “the button” to launch nuclear weapons.
There are a lot of computations between that button and the weapons.
In today’s world, can the leader still be sure that what he thinks will happen actually will?

Currently, Adleman is working on a new approach to complex analysis called strata and writing a book on memes.
That’s in addition to his day job as a computer science professor at the University of Southern California.
It’s great to see one of the earliest contributors to computers and networks as we know them still going strong and contributing important insights to problems we face today.