
Tag: Plaintext

It's only 0.0026 per cent of traffic, but it's all in plaintext so deserves a red flag Google's Chrome browser will soon label file transfer protocol (FTP) services insecure.…
HPE's SiteScope is vulnerable to several cryptographic issues, insufficiently protected credentials, and missing authentication.
Researchers say an audio driver that comes installed on some HP-manufactured computers can record users' keystrokes and store them in a world-readable plaintext file.
Greyhound allows four-digit PINs and stores them in plaintext.
Plaintext passwords.
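Snippets like the two above keep recurring because the fix is well understood: never store a PIN or password as-is, only a salted, slow hash. A minimal sketch using Python's standard library (an illustration of the general technique, not any vendor's actual code) — though note that a four-digit PIN has only 10,000 candidates, so server-side rate limiting is still essential:

```python
import hashlib, hmac, os

def hash_pin(pin, salt=None):
    # Derive a slow, salted hash so a database leak doesn't reveal PINs directly.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify_pin(pin, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)
```

Only the salt and digest are persisted; the PIN itself never touches disk.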
In 2017, UK magazine publisher Future's FileSilo website was raided by hackers, who made off with, among other information, unencrypted user account passwords.…
Update your StruxureWare Data Center Expert to v7.4, quick! Schneider Electric has issued a patch for its StruxureWare Data Center Expert industrial control kit following the discovery of a flaw that could allow remote access to unencrypted passwords.…
Known oppressive regimes including Egypt, and er... the UK? Oh, the IP Act is law...

ProtonMail, the privacy-focused email business, has launched a Tor hidden service to combat the censorship and surveillance of its users. The move is designed to counter actions "by totalitarian governments around the world to cut off access to privacy tools", and the Swiss company specifically cited "recent events such as the Egyptian government's move to block encrypted chat app Signal, and the passage of the Investigatory Powers Act in the UK that mandates tracking all web browsing activity". Speaking to The Register, ProtonMail's CEO and co-founder Andy Yen said: "We do expect to see more censorship this year of ProtonMail and services like us."

First launched in 2014 by scientists who met at CERN and had become concerned by the mass surveillance suggested by the Edward Snowden revelations, ProtonMail is engineered to protect its users' communications using client-side encryption in users' browsers, meaning ProtonMail's servers never have access to any plaintext content. Combined with Switzerland's strong privacy laws, the freemium service has increasingly been seen as a popular destination for spooked citizens. It has faced enormous DDoS attacks by assumed nation-state adversaries, and following the election of Donald Trump, sign-ups at the service doubled.

Today, ProtonMail is announcing the introduction of a Tor hidden service, or onion site, which will allow users to connect directly to their encrypted email accounts through the Tor network at https://protonirockerxow.onion. ProtonMail said it expended "considerable CPU time" generating that address, for the sake of finding a hash that is more human-readable and less prone to phishing. Additionally, the onion site has a valid SSL certificate issued to Proton Technologies AG by DigiCert.

This is a reasonably novel move, as the classical Certificate Authority system isn't compatible with Tor, where onion addresses are self-generated rather than purchased from a registrar. Yen told The Register: "The problem is, if you act as your own CA, you run the issue of not trusting that certificate authority by default." As such, ProtonMail reached out to the Tor Project, which suggested it get in touch with DigiCert, which had previously provided the CA service for Facebook.

"Given ProtonMail's recent growth, we realize that the censorship of ProtonMail in certain countries is inevitable and we are proactively working to prevent this," said Yen. "Tor provides a way to circumvent certain Internet blocks, so improving our compatibility with Tor is a natural first step." In the coming months, ProtonMail said it would be "making additional security and privacy enhancements to ProtonMail, including finishing some of the leftover items from our 2016 Security Roadmap". ®
'Panic Button' could be pressed by miscreants, repeatedly

The Rave Panic Button app, designed to allow businesses to summon emergency services, let miscreants easily 'swat' targets by making false reports of emergencies, says security researcher Randy Westergren. The app has a small install base of up to 10,000 users, and its developers have since shuttered the holes Westergren identified.

The vulnerabilities allowed attackers to place a series of rapid 911 calls reporting active shooters, fires and other threats. Because it's aimed at businesses, the app also sends emergency services building plans and alerts staff to threats. Westergren says the flaws could therefore cause plans to be sent to unknown parties, and staff to be spooked by phantom emergencies.

Westergren found serious holes in the app that allowed external attackers to lodge false emergency call-outs, an act similar to swatting - maliciously summoning SWAT teams - if attackers were to select the app's active-shooter option. "As I reviewed the code, I began to realise the product had been designed without a fundamental concern for security — an extremely concerning issue given the nature of the app and how easily attackers could abuse it," Westergren says. "Not only were bad actors able to view and collect sensitive data about users and facilities, they would also be able to impersonate users and make requests on their behalf. An attacker would be able to spoof panic calls to legitimate facility locations; he could even interfere with real-life emergency panic calls."

Westergren found hardcoded plaintext authentication values that gave rise to easy spoofing attacks. Developers fixed the flaws in about six weeks, but Westergren still recommends users uninstall the app, citing suspicions that the software could have similar security shortfalls elsewhere. "... it remains highly concerning that the software was released in this condition at all," the hacker says. "Since it's probable that other components of the system have been designed with similarly insufficient security measures, I would recommend customers of Rave's Panic Button immediately suspend its use." ®
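Hardcoded shared secrets are what made the spoofing trivial: anyone who extracts the value from the app can impersonate any user. One common alternative — sketched here in Python purely as an illustration of the pattern, not Rave's actual design — is a per-client secret used to sign each request, with a timestamp to limit replay:

```python
import hashlib, hmac, os, time

def sign_request(secret: bytes, body: bytes) -> str:
    # Bind the request body to a timestamp under a per-client secret.
    ts = str(int(time.time())).encode()
    mac = hmac.new(secret, ts + b"." + body, hashlib.sha256).hexdigest()
    return ts.decode() + "." + mac

def verify_request(secret: bytes, body: bytes, token: str, max_age: int = 300) -> bool:
    ts, mac = token.split(".")
    # Reject stale tokens to narrow the replay window.
    if abs(time.time() - int(ts)) > max_age:
        return False
    expected = hmac.new(secret, ts.encode() + b"." + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```

A leaked client build then reveals nothing reusable against other users, and a tampered body fails verification.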
Christmas came early for Facebook bug bounty hunter Tommy DeVoss, who was paid $5,000 this week for discovering a security vulnerability that allowed him to view the private email addresses of any Facebook user. “The hack allowed me to harvest as many email addresses as I wanted from anybody on Facebook,” DeVoss said. “It didn’t matter how private you thought your email address was – I could have grabbed it.” DeVoss said he discovered the vulnerability on Thanksgiving Day and reported it to Facebook via its bug bounty program.

After weeks of going back and forth verifying what the exact bug was and how it was exploited, Facebook said it would award him $5,000 for the discovery.

And on Tuesday it did. The bug was tied to the user-generated Facebook Groups feature that allows any member to create an affinity group on the social network’s platform.

DeVoss discovered that, as an administrator of a Facebook Group, he could invite any Facebook member to have Admin Roles via Facebook’s system, to do things such as edit posts or add new members. Those invitations were handled by Facebook and sent to the invited recipient’s Facebook Messages inbox, but also to the email address associated with the recipient's Facebook account.
In many cases users choose to keep their email addresses private.

DeVoss discovered that, despite privacy settings set by Facebook members, he was able to gain access to any Facebook user’s email address, whether he was Friends with them or not. DeVoss found that when he cancelled pending invitations to those invited to be Facebook Group administrators, there was a glitch. “While Facebook waits for the confirmation, the user is forwarded to a Page Roles tab that includes a button to cancel the request,” he said.

Next, he switched to Facebook’s mobile view of the Page Roles tab. Here DeVoss was able to view the full email address of anyone he wanted to cancel from becoming a Facebook Group administrator. “I noticed that when you clicked to cancel the administrator invitation on the mobile page, you were redirected to a page with the email address in the URL,” he said. “Now all you have to do is pluck the plaintext version of the confidential email address straight from the URL.”

The impact of this vulnerability could be diverse, he wrote in a blog post outlining his discovery. “Harvesting email addresses this way contradicts Facebook’s privacy policy and could lead to targeted phishing attempts or other malicious purposes.” Facebook confirmed the hack and said it has no evidence the vulnerability was ever misused.
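The mechanics are easy to demonstrate: anything carried in a URL is plaintext to whoever sees that URL, and a query parameter is trivially machine-harvestable. A small Python sketch — the URL shape here is invented for illustration and is not Facebook's actual endpoint:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical redirect URL of the kind DeVoss describes, with the
# invitee's address carried as a percent-encoded query parameter.
url = "https://m.example.com/page_roles/cancel?invite=123&email=victim%40example.com"

# parse_qs percent-decodes values, yielding the plaintext address.
params = parse_qs(urlparse(url).query)
print(params["email"][0])  # victim@example.com
```

This is why sensitive identifiers belong in the response body over TLS (or better, not in the client flow at all), never in the URL, which also leaks via logs and Referer headers.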

Facebook said it has implemented a fix to prevent the issue from being exploited. DeVoss, a software developer in Virginia, said this is the largest bug bounty payment he has ever earned. He told Threatpost he participates in a number of bug bounty programs including Yahoo’s and the Hack the Pentagon program. For its part, in October Facebook announced it had paid out more than $5 million to 900 researchers in the five years since it implemented its bug bounty program.

The company said it paid out $611,741 to 149 researchers in the first half of 2016 alone. Facebook was one of the first websites to launch a bug bounty program when it followed in the footsteps of both Mozilla and Google in August 2011. In February, the company paid $10,000 to a 10-year-old boy from Finland after he discovered an API bug in the image-sharing app Instagram, which Facebook bought for $1B in 2012. The company awarded $15,000 to Anand Prakash in March for a bug that allowed him to crack open any of Facebook’s 1.1 billion accounts using a rudimentary brute-force password attack.
Filippo Valsorda is an engineer on the Cloudflare Cryptography team, where he's deploying and helping design TLS 1.3, the next revision of the protocol implementing HTTPS. He also created a Heartbleed testing site in 2014.

This post originally appeared on his blog and is re-printed with his permission. After years of wrestling with GnuPG with varying levels of enthusiasm, I came to the conclusion that it's just not worth it, and I'm giving up—at least on the concept of long-term PGP keys.

This editorial is not about the gpg tool itself, or about tools at all. Many others have already written about that. It's about the long-term PGP key model—be it secured by Web of Trust, fingerprints or Trust on First Use—and how it failed me.

Trust me when I say that I tried. I went through all the setups. I used Enigmail. I had offline master keys on a dedicated Raspberry Pi with short-lived subkeys. I wrote custom tools to make handwritten paper backups of offline keys (which I'll publish sooner or later). I had YubiKeys. Multiple. I spent days designing my public PGP policy. I traveled two hours by train to meet the closest Biglumber user in Italy to get my first signature in the strong set. I have a signature from the most connected key in the set. I went to key-signing parties in multiple continents. I organized a couple.

I have the arrogance of saying that I understand PGP. In 2013 I was dissecting the packet format to brute force short IDs. I devised complex silly systems to make device subkeys tie to both my personal and company master keys. I filed usability and security issues in GnuPG and its various distributions.

All in all, I should be the perfect user for PGP: competent, enthusiast, embedded in a similar community. But it just didn't work.

First, there's the adoption issue others talked about extensively. I get, at most, two encrypted e-mails a year. Then, there's the UX problem: easy crippling mistakes; messy keyserver listings from years ago; "I can't read this e-mail on my phone" or "on the laptop;" "I left the keys I never use on the other machine." But the real issues, I realized, are more subtle. I never felt confident in the security of my long-term keys.

The more time passed, the more I would feel uneasy about any specific key. YubiKeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in.

A long-term key is as secure as the minimum common denominator of your security practices over its lifetime. It's the weak link. Worse, long-term key patterns, like collecting signatures and printing fingerprints on business cards, discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization. Such patterns actually encourage expanding the attack surface by making backups of the key.

We talk about Pets vs. Cattle in infrastructure; those concepts would apply just as well to keys! If I suspect I'm compromised, I want to be able to toss the laptop and rebootstrap with minimum overhead. The worst outcome possible for a scheme is making the user stick with a key that has a suspicion of compromise, because the cost of rotating would be too high.

And all this for what gain? "Well, of course, long-term trust." Yeah, about that.
I never, ever, ever successfully used the WoT to validate a public key.

And remember, I have a well-linked key.
I haven't done a formal study, but I'm almost positive that everyone that used PGP to contact me has, or would have done (if asked), one of the following:

- pulled the best-looking key from a keyserver, most likely not even over TLS
- used a different key if replied with "this is my new key"
- re-sent the e-mail unencrypted if provided an excuse like "I'm traveling"

Travel in particular is hostile to long-term keys, making this kind of fresh start impractical. Moreover, I'm not even sure there's an attacker that long-term keys make sense against. Your average adversary probably can't MitM Twitter DMs (which means you can use them to exchange fingerprints opportunistically, while still protecting your privacy).

The Mossad will do Mossad things to your machine, whatever key you use. Finally, these days I think I care much more about forward secrecy, deniability, and ephemerality than I do about ironclad trust.

Are you sure you can protect that long-term key forever? Because when an attacker decides to target you and succeeds, they won't have access just from that point forward; they'll have access to all your past communications, too.

And that's ever more relevant.

Moving forward

I'm not dropping to plaintext. Quite the opposite.

But I won't be maintaining any public long-term key. Mostly I'll use Signal or WhatsApp, which offer vastly better endpoint security on iOS, ephemerality, and smoother key rotation. If you need to securely contact me, your best bet is to DM me asking for my Signal number.
If needed we can decide an appropriate way to compare fingerprints.
If we meet in person and need to set up a secure channel, we will just exchange a secret passphrase to use with what's most appropriate: OTR, Pond, Ricochet. If it turns out we really need PGP, we will set up some ad-hoc keys, more à la Operational PGP.
Same for any signed releases or canaries I might maintain in the future. To exchange files, we will negotiate Magic Wormhole, OnionShare, or ad-hoc PGP keys over the secure channel we already have.

The point is not to avoid the gpg tool, but the PGP key management model. If you really need to cold-contact me, I might maintain a Keybase key, but no promises.
I like rooting trust in your social profiles better since it makes key rotation much more natural and is probably how most people know me anyway. I'm also not dropping YubiKeys.
I'm very happy about my new YubiKey 4 with touch-to-operate, which I use for SSH keys, password storage, and machine bootstrap.

But these things are one hundred percent under my control.

About my old keys and transitioning

I broke the offline seal of all my keys.
I don't have reason to believe they are compromised, but you should stop using them now. Below are detached signatures for the Markdown version of this document from all keys I could still find. In the coming weeks I'll import all signatures I received, make all the signatures I promised, and then publish revocations to the keyservers.
I'll rotate my Keybase key.

Eventually, I'll destroy the private keys. See you on Signal. (Or Twitter.)

Giving up on PGP.md
Giving up on PGP.md.B8CC58C51CAEA963.asc
Giving up on PGP.md.C5C92C16AB6572C2.asc
Giving up on PGP.md.54D93CBC8AA84B5A.asc
Giving up on PGP.md.EBF01804BCF05F6B.asc [coming once I recover the passphrase from another country]

Note: I expect the "Moving forward" section to evolve over time, as tools come and go.

The signed .md file won't change; an unauthenticated .diff will appear below for verification convenience.
The Fifth Element is a problem - the input argument that didn't get checked is an RCE hole

The developers of open source webmail package Roundcube want sysadmins to push in a patch, because a bug in versions prior to 1.2.3 let an attacker compromise it remotely – by sending what looks like valid e-mail data.

The authors overlooked sanitising the fifth argument (the _from parameter) in mail() – and that meant someone only needed to compose an e-mail with malicious info in that argument to attack Roundcube. It works because of how the program flows in a default installation: user input from the Roundcube UI is passed to PHP's mail() function, and mail() calls sendmail. Because the user input wasn't sanitised until the bug-fix, the fifth argument when calling mail() could be used to execute sendmail with the -X option to log all mail traffic – and that, according to RIPS Technologies in this blog post, could be abused to spawn a malicious PHP file in the target server's webroot directory.

After looking over the code and the regex that was meant to sanitise the _from parameter, the RIPS Technologies analysts worked out that an HTTP request to the server could use that parameter to put a malicious PHP file onto the system, like this:

example@example.com -OQueueDirectory=/tmp -X/var/www/html/rce.php

The malicious rce.php can be populated with PHP code that's inserted in an e-mail's subject line. "Since the email data is unencoded, the subject parameter will be reflected in plaintext which allows the injection of PHP tags into the shell file", the post states.

Roundcube posted a patch to GitHub at the end of November, and issued a version 1.2.3 here. ®
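The underlying lesson is argument injection: data that ends up on a command line must be validated as data, not trusted as a single value. The kind of whitelist check that blocks the payload above can be sketched as follows — in Python for illustration, not Roundcube's actual PHP fix:

```python
import re

# Accept only a plain address: anything containing whitespace or
# option-like tokens such as -X or -OQueueDirectory=... fails the match.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def safe_envelope_from(addr: str) -> str:
    # fullmatch anchors the pattern over the entire string, so trailing
    # injected arguments cannot sneak past a partial match.
    if not EMAIL_RE.fullmatch(addr):
        raise ValueError("invalid envelope sender: %r" % addr)
    return addr

safe_envelope_from("example@example.com")  # passes
# safe_envelope_from(
#     "example@example.com -OQueueDirectory=/tmp -X/var/www/html/rce.php"
# )  # raises ValueError
```

The same principle applies to any wrapper that shells out: validate against a strict grammar, or pass arguments through an API that never re-parses them as options.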