Despite billions of dollars of infrastructure investment and relentless technological innovation, it doesn’t take much more than a short drive to notice an essential reality of the connected era: You can’t assume a network connection will be available every time you want it.
As mobile developers, it’s a truth that’s convenient to ignore. Offline states in apps can be confounding to handle, but the problem begins with a basic and incorrect assumption—that offline is, by default, an error state.
That made sense when we built apps for desktop computers with dedicated ethernet uplinks.
It doesn’t make sense when the closing of an elevator’s doors renders an app completely useless, or when it’s reasonable to expect that your application will be used in places that lack reliable cellular infrastructure.
Proprietary mobile apps can offer some useful features, but if a car thief gained access to a victim's phone with the app installed, wouldn't stealing the car become trivial?
In a letter to editors and publishers, retired RAF Air Vice-Marshal Andrew Vallance, who holds the post of defence and security media advisory secretariat, said on Wednesday: "In view of media stories alleging that a former SIS [Secret Intelligence Service; MI6] officer was the source of the information which allegedly compromises president-elect Donald Trump, would you and your journalists please seek my advice before making public that name."

The guidance was given through fear that revealing the identity of the ex-MI6 man "could assist terrorist or other hostile organisations." Nonetheless, the BBC and other major British news organisations have disclosed details of the individual, whose name and current directorship at a London-based private security firm were initially published in the US press and heavily shared on social media.

Such a decision by the BBC and others is a stark departure from the past, when publications and broadcasters that received a so-called D-notice (defence notice), later replaced by a DA-notice (defence advisory notice), would often fall into line with the MoD's request in a very British spirit of collaboration.

The D-notice first came into play in 1912, two years before World War I broke out, when Whitehall mandarins decided that an organisation should be created to address matters of national interest. Members of the press were included on the advisory panel, and they remain so to this day. However, the makeup has changed a little: Google representatives have sat on the committee, for example, though the US ad giant withdrew its voluntary support in light of Edward Snowden's damning disclosures about the NSA. Historically, publishers and editors have largely responded in kind to the frightfully polite requests from the MoD.
Members of the committee have long argued that it doesn't amount to censorship from the British government, instead insisting that they are simply exercising restraint with stories that may, on reflection, damage national security.
But Vallance and his predecessors can only gently nudge the press to consider the sensitive material in their possession before publishing it. Where disputes arise between the government and publications, Vallance works independently as a go-between to "help resolve disagreement about what should be disclosed" before any legal action is taken against the press to suppress information by way of a court injunction.

Today, however, the relevance of the D-notice—as it is still commonly described—seems to be slowly fading, as the decision by the likes of the BBC to publish the name of the ex-spy at the centre of the uncorroborated Trump dossier story, which claims that Russia has compromising information about the president-elect, demonstrates. In 2015, in acknowledgement that it was becoming increasingly difficult to put a lid on sensitive information being shared online, the UK government renamed the DA-notice the Defence and Security Media Advisory (DSMA) notice—a system which currently costs £250,000 a year to run.
The inclusion of the word "security" is perhaps there to try to make it crystal clear to the media that supposedly risky disclosures endanger not only military and spook-types, but also British citizens. But, while it continues to try to sign up more digital and social media representatives, the DSMA committee has admitted that there is "no obvious answer" to the challenges presented by the Web.
It has previously argued that the "mainstream media" remains the superior source for news, regardless of gossipy tittle-tattle—no matter how inflammatory or lacking in reality—that is shared online.
Events in recent months, though, seem to suggest that the line is more blurred than ever before because it is far less clear who is setting the news agenda. We're in for a long four years if the answer turns out to be Trump's Twitter account. This post originated on Ars Technica UK
The public key is downloaded to the computer, but the private key never leaves the server and remains in the attackers' possession.
This is the key that victims pay to get access to. The problem with reaching out to a server on the internet after installation of ransomware is that it creates a weak link for attackers.
For example, if the server is known by security companies and is blocked by a firewall, the encryption process doesn't start. Some ransomware programs can perform so-called offline encryption, but they use the same RSA public key that's hard-coded into the malware for all victims.
The downside with this approach for attackers is that a decryptor tool given to one victim will work for all victims because they share the same private key as well. The Spora creators have solved this problem, according to researchers from security firm Emsisoft who analyzed the program's encryption routine. The malware does contain a hard-coded RSA public key, but this is used to encrypt a unique AES key that is locally generated for every victim.
This AES key is then used to encrypt the private key from a public-private RSA key pair that's also locally generated and unique for every victim. Finally, the victim's public RSA key is used to encrypt the AES keys that are used to encrypt individual files. In other words, the Spora creators have added a second round of AES and RSA encryption to what other ransomware programs have been doing until now. When victims want to pay the ransom, they have to upload their encrypted AES keys to the attackers' payment website.
The attackers will then use their master RSA private key to decrypt it and return it to the victim -- likely bundled in a decryptor tool. The decryptor will use this AES key to decrypt the victim's unique RSA private key that was generated locally, and that key will then be used to decrypt the per-file AES keys needed to recover the files. In this way, Spora can operate without the need for a command-and-control server and avoid releasing a master key that will work for all victims, the Emsisoft researchers said in a blog post. "Unfortunately, after evaluating the way Spora performs its encryption, there is no way to restore encrypted files without access to the malware author’s private key." Other aspects of Spora also set it apart from other ransomware operations.
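The layered key chain the researchers describe can be sketched roughly as follows. This is an illustration only, not Spora's actual code: all names here are our own, it assumes the third-party Python cryptography package, and its Fernet recipe (an AES-based construction) stands in for the raw AES routines the malware uses.

```python
# Sketch of the layered key scheme Emsisoft describes (illustrative only).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. Attacker's master key pair; only the public half ships in the malware.
master_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
master_pub = master_priv.public_key()

# 2. Per victim: a locally generated AES key and a locally generated RSA pair.
victim_aes_key = Fernet.generate_key()
victim_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
victim_pub = victim_priv.public_key()

# 3. The victim's RSA private key is encrypted with the victim's AES key...
victim_priv_pem = victim_priv.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption())
locked_priv = Fernet(victim_aes_key).encrypt(victim_priv_pem)

# 4. ...and that AES key is encrypted with the hard-coded master public key.
locked_aes_key = master_pub.encrypt(victim_aes_key, oaep)

# 5. Each file gets its own AES key, encrypted with the victim's RSA public key.
file_key = Fernet.generate_key()
ciphertext = Fernet(file_key).encrypt(b"contents of some document")
locked_file_key = victim_pub.encrypt(file_key, oaep)

# Recovery requires the attacker's master private key to unwind the chain:
aes_key = master_priv.decrypt(locked_aes_key, oaep)
priv_pem = Fernet(aes_key).decrypt(locked_priv)
recovered_priv = serialization.load_pem_private_key(priv_pem, password=None)
recovered_file_key = recovered_priv.decrypt(locked_file_key, oaep)
assert Fernet(recovered_file_key).decrypt(ciphertext) == b"contents of some document"
```

Because only `locked_aes_key` ever leaves the victim's machine, the attackers can hand back a per-victim decryption key without exposing a master secret that would unlock every other victim's files.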
For example, its creators have implemented a system that allows them to ask different ransoms for different types of victims. The encrypted key files that victims have to upload on the payments website also contain identifying information collected by the malware about the infected computers, including unique campaign IDs. This means that if the attackers launch a Spora distribution campaign specifically targeted at businesses, they will be able to tell when victims of that campaign will try to use their decryption service.
This allows them to automatically adjust the ransom amount for consumers or organizations, or even for victims in different regions of the world. Furthermore, in addition to file decryption, the Spora gang offers other "services" that are priced separately, such as "immunity," which ensures that the malware will not infect a computer again, or "removal," which will also remove the program after decrypting the files.
They also offer a full package, where the victim can buy all three for a lower price. The payments website itself is well designed and looks professional.
It has an integrated live chat feature and the possibility of getting discounts.
From what the Emsisoft researchers observed, the attackers respond promptly to messages. All this points to Spora being a professional and well-funded operation.
The ransom values observed so far are lower than those asked by other gangs, which could indicate the group behind this threat wants to establish itself quickly. So far, researchers have seen Spora distributed via rogue email attachments that pose as invoices from an accounting software program popular in Russia and other Russian-speaking countries.
It's a reality that now has one phone-maker in some unusual legal crosshairs. Apple, maker of the ever-popular iPhone, is being sued on allegations that its FaceTime app contributed to the highway death of a 5-year-old girl named Moriah Modisette.
In Denton County, Texas, on Christmas Eve 2014, a man smashed into the Modisette family's Toyota Camry as it was stopped in traffic on southbound Interstate 35W. Police say that the driver was using the FaceTime application and never saw the brake lights ahead of him.
In addition to Moriah's death, father James, mother Bethany, and daughter Isabella all suffered non-fatal injuries in the crash two years ago. The Modisette family now wants Apple to pay damages for the crash.
The family alleges the Cupertino, California-based technology company had a duty to warn motorists against using the app and that it could have used patented technology to prohibit drivers from utilizing the app.
According to the suit (PDF) filed in Santa Clara County Superior Court: Plaintiffs allege APPLE, INC.'s failure to design, manufacture, and sell the Apple iPhone 6 Plus with the patented, safer alternative design technology already available to it that would automatically lock-out or block users from utilizing APPLE, INC.'s 'FaceTime' application while driving a motor vehicle at highway speed, and failure to warn users that the product was likely to be dangerous when used or misused in a reasonably foreseeable manner and/or instruct on the safe usage of this and similar applications, rendered the Apple iPhone 6 defective when it left defendant APPLE, INC's possession, and were a substantial factor in causing plaintiffs' injuries and decedent's death. The patent referenced, issued by the US patent office in April 2014, is designed to provide a "lock-out mechanism" to prevent iPhone use by drivers.
The patent claims a "motion analyzer" and a "scenery analyzer" help prevent phone use. The reliability of such lock-out services, however, has come into question. "The motion analyzer can detect whether the handheld computing device is in motion beyond a predetermined threshold level.
The scenery analyzer can determine whether a holder of handheld computing device is located within a safe operating area of a vehicle.
And the lock-out mechanism can disable one or more functions of the handheld computing device based on output of the motion analyzer, and enable the one or more functions based on output of the scenery analyzer," according to the patent. Apple has not commented on the lawsuit, but it has said that drivers are responsible for their behavior. "For those customers who do not wish to turn off their iPhones or switch into Airplane Mode while driving to avoid distractions, we recommend the easy-to-use Do Not Disturb and Silent Mode features," Apple said in a statement. The suit comes amid mounting reports of motorists crashing while being distracted with their phones.
Such accidents take place as drivers engage in everything from playing Pokémon Go to texting.
In the US, proposed solutions have popped up from the mundane (more officers watching for phone usage) to the unorthodox (Textalyzers that prevent cars from functioning; alarms that go off if a driver's hands leave the wheel for three seconds).
Elsewhere, British officials are to meet with phone makers in a push for a "drive safe" phone mode in the UK. Garrett Wilhelm, the driver accused of smashing into the Modisette vehicle, has been charged with manslaughter.
You have the right to remain silent. Anything that you say or do can be used against you. You have the right to an attorney.
If you cannot afford one, one will be provided to you.
Do you understand these rights as I have read them to you? The basic idea behind the Miranda warning is to provide someone being arrested with information about their constitutional rights against compelled self-incrimination (Fifth Amendment) during a custodial situation and to reassure them of their right to an attorney (Sixth Amendment). This warning stems from a 1966 Supreme Court case, Miranda v.
Arizona, in which a kidnapping and rape suspect, Ernesto Miranda, confessed to the crime without the benefit of a lawyer and without being fully informed of his right against self-incrimination.
Today, all American police officers must recite some version of the Miranda warning while taking someone into custody due to the Supreme Court’s landmark 5-4 decision. In the half-century since the Miranda decision, a lot has changed.
For one, many of us carry smartphones containing a rich trove of personal data in our pockets that might interest law enforcement.
In fact, it wasn’t until 2014 that police officers nationwide were specifically ordered not to search people’s phones without a warrant during an arrest. In 1966, no one envisioned a world where we carried powerful computers in our pockets, so it's time for an update to the Miranda warning.
A modernized version would need to make clear not only that anyone can refuse to speak, but that speaking might involve inputting a passcode to open up a phone.
After speaking with several legal experts, here’s our "digital Miranda," based on our best understanding of current law. You have the right to remain silent.
This right includes declining to provide information that does not require speaking, such as entering a passcode to unlock a digital device, like a smartphone.
Anything that you say or do can be used against you.
Any data retrieved from your device can also be used against you. You have the right to an attorney.
If you cannot afford one, one will be provided to you.
Do you understand these rights as I have read them to you? We recognize that this revised Miranda warning has no actual force of law.
It’s simply meant as a way to think about encryption, constitutional rights, and contemporary interactions with police.

Remember, you only get Mirandized during a “custodial situation”

Back in 2014, the court unanimously found in Riley v.
California that law enforcement must get a warrant before searching mobile phones during an arrest. Prior to Riley, at least some law enforcement officials were searching some suspects’ phones on the grounds that data on the phones could be used to aid their investigations. Writing for a rare unanimous court, Chief Justice John Roberts argued dismissively against the government, saying that searching a phone was not at all like searching a wallet. “That is like saying a ride on horseback is materially indistinguishable from a flight to the moon,” he concluded. Riley showed that the Supreme Court has started to think in fundamentally new ways about privacy in relation to the digital devices that are almost always with us.
So, then, we wondered, would most people even think to challenge law enforcement when asked to unlock their device, whether during an arrest, or otherwise? In fact, just after the Riley decision in 2014, a California Highway Patrol officer asked a woman to unlock her phone and hand it over during a traffic stop on suspicion of a DUI.
It’s worth noting that this was just a traffic stop, which is not generally considered a “custodial situation,” so she did not need to be given a Miranda warning. Recall that Riley only dealt with a very specific situation: requiring a warrant for a search incident to arrest. The officer, Sean Harrington, found semi-nude pictures on the woman’s phone, which he then sent to himself and shared with his buddies. (Harrington has since left the CHP, was prosecuted, took a plea deal, and is currently on probation.)

We guess that most people wouldn’t know about Riley, nor many of their other constitutional rights and how they apply in the modern world. Most people would probably follow whatever instructions, legal or not, were given to them by an (ideally well-intentioned) officer of the law. (To be clear, we’ve yet to find an example where evidence was tossed in a case because an officer blatantly ignored Riley.)

When in doubt, ask for a lawyer and stay quiet

One of the key elements of understanding post-Miranda criminal procedure is that suspects don’t always have to be read their rights. Miranda only kicks in during what’s called a “custodial” situation, typically an arrest. (A 2009 article from PoliceOne.com describes “how to talk to suspects without Mirandizing.”) When we asked around, Orin Kerr, a law professor at George Washington University, was quick to point out that there is a post-Miranda Supreme Court decision that involves what’s known as a “consent search.” In this 1973 decision, in a case known as Schneckloth v.
Bustamonte, the court found that a search is still allowed where consent is granted, even if the defendant is not expressly informed of his or her constitutional right to refuse such a search. In that case, Sunnyvale, California, Police Officer James Rand pulled over a car containing six people at 2:40am for a broken tail light. When Officer Rand asked the men to produce identification, only one, Joe Alcala, complied. Rand asked him if he could search the car, and Alcala agreed.
The search yielded stolen checks in the car. One of the passengers, Robert Bustamante, was eventually charged with possessing stolen checks.
The men challenged the search, and eventually, the Supreme Court found that the men were under no legal obligation to consent to a search. Moreover, the officer did not have to inform the men of their rights until one of them had been arrested. Similarly, the woman who had the unfortunate interaction with the CHP officer in 2014 was under no obligation to unlock her phone, much less hand it over. Harrington didn’t have to read “Jane Doe” a Miranda warning—she was not under arrest.
As many cops know, criminals often will still talk even after they are Mirandized. “The nice thing about Miranda is that it doesn’t require [police] to say too much,” Mark Jaffe, a criminal defense lawyer who specializes in computer crimes, told Ars. (Jaffe has represented defendants in cases that Ars has written about, including Matthew Keys and Deric Lostutter.) Jaffe explained that many law enforcement officers want a clear, bright line like Miranda, as to what is acceptable in certain situations. But what about a scenario where law enforcement simply comes knocking at your door, asking that you help out? What rights do you have in such a non-custodial setting? In February 2016, a woman in Glendale, California, was ordered to depress her fingerprint on a seized iPhone. Months later, in May 2016, federal law enforcement officials, also in Los Angeles County, were successful in getting judicial approval for two highly unusual searches of a seized smartphone at two different Southern California homes, one in Lancaster and one in West Covina, about 90 miles away.
The signed warrants allowed the authorities to force a resident reasonably believed to be a user to press their fingerprints on the phone to see if it would unlock. (Under iOS and Android, fingerprint unlocking only works for 48 hours; after that, the regular passcode is required.
Court records show that the warrants were presumably executed within that 48-hour window.) While there is no evidence that any of the residents attempted to challenge this order in court, it seems that someone could have. Presumably a person could have refused, possibly risking contempt of court and even the use of physical force to get a fingerprint onto the phone. “You shouldn’t resist a police order, you should lodge your dissent, and you should ask and clarify that they’re asking you to do it,” Alex Abdo, an attorney with the American Civil Liberties Union, told Ars. “But you should comply—as a lawyer that’s the advice you’re going to have to give.” Kerr didn’t think that a Lancaster-style situation would be considered custodial, and so wouldn’t trigger Miranda.
In other words, given the court’s holding in Schneckloth, our revised Miranda warning wouldn’t matter anyway. This seems reasonable—there are plenty of situations where many people might want to be helpful to police. Plus, we generally want police to be able to solve crimes.
But not everyone may be so forthcoming or trusting of police. Jaffe even proposed a short verbal warning that law enforcement could use as a Miranda-style warning in non-custodial situations: “I would like to search your car/house/phone. Please understand I don’t have a warrant to do so.”

Supreme Court has yet to rule

Being enticed or even compelled to hand over passcodes or fingerprint-enabled passcodes gets to the heart of the “going dark” problem. Law enforcement says that modern “unbreakable” encryption frustrates lawful investigations aimed at tech-savvy criminals who refuse to unlock their data. As Ars has reported before, under the Fifth Amendment, defendants cannot generally be compelled to provide self-incriminating testimony (“what you know”).
In 2012, the 11th US Circuit Court of Appeals ruled in favor of a defendant (“John Doe”) accused of possessing child pornography. “We conclude that the decryption and production would be tantamount to testimony by Doe of his knowledge of the existence and location of potentially incriminating files; of his possession, control, and access to the encrypted portions of the drives; and of his capability to decrypt the files,” the court wrote. The government did not pursue the issue further.
For now the 11th Circuit ruling, which covers Alabama, Florida and Georgia, remains the highest court to have directly addressed the subject. But that doesn’t mean that other judges see it this way, and some have ordered forced decryption. Shortly after the 11th Circuit ruling, a judge ordered a Colorado woman to decrypt her laptop computer so prosecutors could use the files against her in a criminal case.
The case, in which the judge also found that the woman's Fifth Amendment privilege against compelled self-incrimination was not violated, was ultimately resolved without her having to cough up the password and decrypt her computer for the authorities. More recently, a former Philadelphia police sergeant, referred to in court documents as yet another John Doe, remains in custody for refusing an April 2016 court order to decrypt hard drives that authorities believe contain child porn.
That case is currently pending before the 3rd US Circuit Court of Appeals, and a decision could come at any time.
In court filings, Doe’s lawyers largely relied on the 11th Circuit’s decision. But, giving a fingerprint (“what you are”) for the purposes of identification or matching to an unknown fingerprint found at a crime scene has been allowed.
It wasn’t until relatively recently, after all, that fingerprints could be used to unlock a smartphone.
The crux of the legal theory here is that a compelled fingerprint isn’t testimonial, it’s simply a compelled production—like being forced to hand over a key to a safe. In the Lancaster court filings, nearly all of the cases that the government cites predate the implementation of fingerprint readers, except for a 2014 state case from Virginia.
As Ars reported at the time, a Virginia Circuit Court judge ruled that a person does not need to provide a passcode to unlock their phone for the police.
The court also ruled that demanding a suspect provide a fingerprint to unlock a phone would be constitutional. However, the Virginia state case, while interesting, has little legal relevance to ongoing federal cases across the country. “I’m not sure that I would ever provide my passcode if it would incriminate myself,” Brian Owsley, a law professor at the University of North Texas and a former federal magistrate judge, told Ars. “What’s the max that you’re going to face for refusing to obey a court order? If you’re facing life sentence without parole you’re better off being obstinate—it’s not the job of the accused ever to make the job easier for the prosecution.”

What is custodial, anyway?

Situations where law enforcement demands passwords in what they believe are noncustodial situations are surely set to become standard practice, if they haven’t already. On a cold February morning earlier this year, no fewer than 10 armed officers from various law enforcement agencies, all wearing body armor, showed up to execute a search warrant on Justin Ashmore’s two-bedroom apartment in Arkansas. According to Ashmore’s lawyer, Carrie Jernigan, her client answered the door in his underwear. He was held to the side of the room as the search began.
Ashmore was then led upstairs to his bedroom to stay out of earshot of his eight-year-old son, who was also in the apartment and questioned. One of the federal agents began peppering Ashmore with questions and statements like: “Tell me why we are here today?” and “Don't play stupid, you know why we are here.” Ashmore initially thought perhaps it was because he had a small amount of marijuana in his freezer.
The questioning agent told him he didn’t care about the weed. As the interrogation went on, the agent eventually came out with it: “Tell me about the child porn movies you have been downloading.” According to the government, this was roughly when Ashmore confessed. As Jernigan wrote: At no time upstairs was Defendant Ashmore ever advised of his Miranda warnings or ever told he was free to leave.
In fact, he was denied his ability to leave.
Defendant Ashmore was also told he had to give the agents all passwords to all his electronic devices or it may be a very long time before he sees his own son again, to which he complied and gave the agents the information.

In their own filings, prosecutors dispute many parts of Jernigan's account. "When interviewing Defendant, Special Agent Cranor and TFO Heffner did not use strong arm tactics and deceptive stratagems during questioning," they wrote, adding: "The agents advised Defendant that they were there to serve a search warrant regarding child pornography downloads at his residence and did not ask him to guess as to why they were there." The government argued that this scenario was not custodial, and so Ashmore did not have to be read his rights.

According to US District Judge PK Holmes’ 13-page December 2016 opinion, it wasn’t until after Ashmore confessed to having the marijuana that he was Mirandized, at which point he allegedly confessed a second time to having downloaded child porn. Judge Holmes further explained that because the defendant had not been adequately given a Miranda warning before he gave up the passwords to his cellphone and computer, his two alleged confessions and the two passwords to his devices should be suppressed.

“The Government’s position is effectively that because officers never Mirandized Ashmore for his alleged confession related to child pornography, they could not have circumvented Miranda on purpose,” Judge Holmes continued. “Having listened to their testimony and observed their demeanor on this point, the Court does not believe the officers’ testimony and finds that they deliberately avoided giving Ashmore a Miranda warning.”

But in this case, Judge Holmes concluded, the data found on those devices would be allowed as evidence at trial because the warrants were valid.
Plus, police would have been able to access the data anyway: the computer's hard drive was unencrypted, and the passcode on Ashmore's Samsung Android phone could have been easily circumvented.
In legal terms, this falls under the "independent source doctrine," an exception to the exclusionary rule. For now, Ashmore is set to go to trial January 17, 2017 in Fort Smith, Arkansas. "Agencies are given tools to use to investigate crimes and they should entirely be allowed to use those tools," Erik Rasmussen, a lawyer and former Secret Service special agent who focused on computer crimes, told Ars. "It changes all the time, because the adversaries change all the time."
The Electronic Frontier Foundation aims to protect Web traffic by encrypting the entire Internet using HTTPS.
Chrome now puts a little warning marker in the Address Bar next to any non-secure HTTP address.
Encryption is important, and not only for Web surfing.
If you encrypt all of the sensitive documents on your desktop or laptop, a hacker or laptop thief won't be able to parlay their possession into identity theft, bank account takeover, or worse.
To help you select an encryption product that's right for your computer, we've rounded up a collection of current products.
As we review more products in this area, we'll keep the list up to date.
No Back Doors
When the FBI needed information from the San Bernardino shooter's iPhone, they asked Apple for a back door to get past the encryption.
But no such back door existed, and Apple refused to create one.
The FBI had to hire hackers to get into the phone.
Why wouldn't Apple help? Because the moment a back door or similar hack exists, it becomes a target, a prize for the bad guys.
It will leak sooner or later.
In a talk at Black Hat this past summer, Apple's Ivan Krstic revealed that the company applies the same principle to its cryptographic servers. Once a fleet of servers is up and running, Apple physically destroys the keys that would permit modification.
Apple can't update them, but the bad guys can't get in either.
All of the products in this roundup explicitly state that they have no back door, and that's as it should be.
It does mean that if you encrypt an essential document and then forget the encryption password, you've lost it for good.
Two Main Approaches
Back in the day, if you wanted to keep a document secret you could use a cipher to encrypt it and then burn the original. Or you could lock it up in a safe.
The two main approaches in encryption utilities parallel these options.
One type of product simply processes files and folders, turning them into impenetrable encrypted versions of themselves.
The other creates a virtual disk drive that, when open, acts like any other drive on your system. When you lock the virtual drive, all of the files you put into it are completely inaccessible.
Similar to the virtual drive solution, some products store your encrypted data in the cloud.
This approach requires extreme care, obviously.
Encrypted data in the cloud has a much bigger attack surface than encrypted data on your own PC.
Which is better? It really depends on how you plan to use encryption.
If you're not sure, take advantage of the 30-day free trial offered by each of these products to get a feel for the different options.
Secure Those Originals
After you copy a file into secure storage, or create an encrypted version of it, you absolutely need to wipe the unencrypted original. Just deleting it isn't sufficient, even if you bypass the Recycle Bin, because the data still exists on disk, and data recovery utilities can often get it back.
Some encryption products avoid this problem by encrypting the file in place, literally overwriting it on disk with an encrypted version.
It's more common, though, to offer secure deletion as an option.
If you choose a product that lacks this feature, you should find a free secure deletion tool to use along with it.
Overwriting data before deletion is sufficient to balk software-based recovery tools. Hardware-based forensic recovery works because the magnetic recording of data on a hard drive isn't actually digital.
It's more of a waveform.
In simple terms, the process involves nulling out the known data and reading around the edges of what's left.
If you really think someone (the feds?) might use this technique to recover your incriminating files, you can set your secure deletion tool to make more passes, overwriting the data beyond what even these techniques can recover.
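The overwrite-before-delete step described above can be sketched in a few lines of Python. This is a minimal illustration, not a substitute for a real shredding tool: the function name `secure_delete` is ours, and a simple overwrite like this doesn't account for SSD wear-leveling or filesystem journaling, either of which can leave copies of the data that the overwrite never touches.

```python
import os
import secrets

def secure_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then unlink it.

    Toy sketch of the overwrite-then-delete technique; real shredders
    also handle file names, slack space, and journaled filesystems.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            # Push the overwrite to the physical disk, not just the OS cache.
            os.fsync(f.fileno())
    os.remove(path)
```

Raising `passes` corresponds to the multi-pass option mentioned above: each pass replaces the file's bytes with fresh random data before the final removal.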
An encryption algorithm is like a black box.
Dump a document, image, or other file into it, and you get back what seems like gibberish. Run that gibberish back through the box, with the same password, and you get back the original.
The U.S. government has settled on the Advanced Encryption Standard (AES) as its standard, and all of the products gathered here support AES.
Even those that support other algorithms tend to recommend using AES.
If you're an encryption expert, you may prefer another algorithm, Blowfish, perhaps, or the Soviet government's GOST.
For the average user, however, AES is just fine.
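The black-box behavior is easy to demonstrate in code. The sketch below derives a key from a password with PBKDF2 (which Python's standard library does provide) and then uses a toy hash-based keystream in place of AES, since the standard library ships no AES implementation. The keystream construction is purely illustrative; real products use vetted AES code, not something like this.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash key||nonce||counter. Illustration only, NOT AES."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(password: str, plaintext: bytes) -> bytes:
    salt = secrets.token_bytes(16)
    nonce = secrets.token_bytes(16)
    # Derive a key from the password; PBKDF2 makes brute-forcing slower.
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    stream = _keystream(key, nonce, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
    return salt + nonce + ciphertext  # salt and nonce ride along in the clear

def decrypt(password: str, blob: bytes) -> bytes:
    salt, nonce, ciphertext = blob[:16], blob[16:32], blob[32:]
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))
```

Run a document through `encrypt` and you get gibberish; run that gibberish through `decrypt` with the same password and the original comes back. A wrong password derives a different key, so decryption yields more gibberish rather than the document.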
Public Key Cryptography and Sharing
Passwords are important, and you have to keep them secret, right? Well, not when you use Public Key Infrastructure (PKI) cryptography.
With PKI, you get two keys. One is public; you can share it with anyone, register it in a key exchange, tattoo it on your forehead—whatever you like.
The other is private, and should be closely guarded.
If I want to send you a secret document, I simply encrypt it with your public key. When you receive it, your private key decrypts it.
Using this system in reverse, you can create a digital signature that proves your document came from you and hasn't been modified. How? Just encrypt it with your private key.
The fact that your public key decrypts it is all the proof you need.

PKI support is less common than support for traditional symmetric algorithms.
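The public/private relationship is easy to show with textbook RSA and toy numbers (the classic 61 x 53 example). This is strictly illustrative: real PKI uses keys thousands of bits long plus padding schemes, and "encrypt with the private key" is shorthand for what actual signature schemes do to a hash of the document.

```python
# Textbook RSA with tiny primes -- a toy to show the key relationship,
# nowhere near secure.
p, q = 61, 53
n = p * q                # 3233, the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent (shared with the world)
d = pow(e, -1, phi)      # private exponent: 2753 (kept secret)

def rsa_apply(m: int, exp: int) -> int:
    """Modular exponentiation: the single operation behind both directions."""
    return pow(m, exp, n)

# Confidentiality: encrypt with the PUBLIC key, decrypt with the PRIVATE key.
message = 65
ciphertext = rsa_apply(message, e)           # anyone can do this
assert rsa_apply(ciphertext, d) == message   # only the key's owner can undo it

# Signing: apply the PRIVATE key; anyone can verify with the PUBLIC key.
signature = rsa_apply(message, d)
assert rsa_apply(signature, e) == message
```

The symmetry is the point: the two keys invert each other, so which one you publish determines whether you get secrecy (public encrypts, private decrypts) or authenticity (private signs, public verifies).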
If you want to share a file with someone and your encryption tool doesn't support PKI, there are other options for sharing. Many products allow creation of a self-decrypting executable file. You may also find that the recipient can use a free, decryption-only tool.
What's the Best?
Right now there are three Editors' Choice products in the consumer-accessible encryption field.
The first is the easiest to use of the bunch, the next is the most secure, and the third is the most comprehensive.
AxCrypt Premium has a sleek, modern look, and when it's active you'll hardly notice it.
Files in its Secured Folders get encrypted automatically when you sign out, and it's one of the few that support public key cryptography.
CertainSafe Digital Safety Deposit Box goes through a multistage security handshake that authenticates you to the site and authenticates the site to you. Your files are encrypted, split into chunks, and tokenized.
Then each chunk gets stored on a different server.
A hacker who breached one server would get nothing useful.
Folder Lock can either encrypt files or simply lock them so nobody can access them.
It also offers encrypted lockers for secure storage.
Among its many other features are file shredding, free space shredding, secure online backup, and self-decrypting files.
The other products here have their merits, too, of course. Read the capsules below and then click through to the full reviews to decide which one you'll use to protect your files. Have an opinion on one of the apps reviewed here, or a favorite tool we didn't mention? Let us know in the comments.
The latest research into the group of cybercriminals selling alleged NSA spy tools reinforced the idea that they'd received the classified materials from an insider within the intelligence agency, security company Flashpoint said. Analysis of the latest ShadowBrokers dump, which was announced earlier in the month on the blogging platform Medium by "Boceffus Cleetus," suggests the spy tools were initially taken directly from an NSA code repository by a rogue insider, Flashpoint said.
The company's researchers analyzed the sample file containing implants and exploits and various screenshots provided in the post and have "medium confidence" that an NSA employee or contractor initially leaked the tools, said Ronnie Tokazowski, senior malware analyst with Flashpoint. However, they were still "uncertain of how these documents were exfiltrated," he said. ShadowBrokers first began offering more than a dozen sophisticated tools for sale -- such as software for extracting decryption keys from Cisco PIX firewalls -- in underground marketplaces over the summer.
The post-exploitation tools, intended to give attackers a way to gain a foothold in the network or move around laterally after the initial breach, targeted flaws in commercial appliances and software.
The Cisco vulnerability (now patched) would have allowed attackers to spy on encrypted communications, for example. Flashpoint's investigators believe the files were taken from a code repository because the sample file was written in Markdown, a lightweight markup language commonly used in code repositories to simplify how files are parsed. "Looking at the dump and how the data is structured, we're fairly certain it's from internal code repository and likely an employee or contractor who had access to it," said Tokazowski. When the first set of ShadowBrokers tools was put up for sale, there was speculation that attackers had either successfully breached NSA infrastructure or that NSA operatives had mistakenly left sensitive files on a publicly accessible staging server.
Shortly afterwards, the FBI arrested NSA contractor Harold Martin for stealing government materials.
Some of the tools included in the ShadowBrokers dump were among the classified materials in Martin's possession, suggesting some kind of involvement with the theft and sale. While Flashpoint's Tokazowski rejected the idea that the cybercriminals had stolen the files directly through external remote access or discovered them on an external staging server, he did not draw any conclusions about whether Martin was involved. While the contractor denies giving anyone the files, it seems quite possible that someone else broke into his non-classified computer to steal the tools.

The timing of the ShadowBrokers theft overlaps somewhat with that of former Booz Allen Hamilton contractor Edward Snowden, who took thousands of NSA-related documents, but Flashpoint said there was nothing linking the theft of these tools to the former NSA contractor. "The close proximity of events raises the question if there were multiple insiders acting independently during 2013," Tokazowski said.

Nation-state attacks and flashy breaches tend to consume most of the security attention, but malicious insiders pose a significant threat to enterprise networks because they already have access to sensitive data and systems. Most IT teams will never have to deal with a nation-state attack, but every single one of them faces the prospect of an employee or administrator going rogue and stealing corporate secrets or damaging the network. Mistakes by careless insiders, such as when employees copy files for non-malicious reasons and the copies are then stolen by adversaries, are also common.

In the case of the ShadowBrokers, the contractor or employee may have had limited access to the tools, since the implants and exploits released thus far appear to be all Linux- and Unix-based.
An insider with wider access would theoretically have been able to grab different types of tools. There's not enough evidence to establish the rogue insider's motivations for stealing the spy tools, but Flashpoint doesn't think it was money. The implants and exploits in this set appear to have been developed between 2005 and 2013; the ElatedMonkey exploit, for example, targeted a local privilege-escalation flaw in a 2008 version of the cPanel web hosting control panel.
The attack tools are several years old, making it likely the NSA has already moved on to more modern exploitation tools.
If the insider wanted to sell them, the time to do so was shortly after the theft. "If The Shadow Brokers were trying to make a profit, the exploits would have been offered shortly after July 2013, when the information would have been most valuable," Flashpoint said.