
Tag: Plaintext

Keylogger Found in Audio Drivers on Some HP Machines

Researchers say an audio driver that comes installed on some HP-manufactured computers can record users' keystrokes and store them in a world-readable plaintext file.

Meet Greyhound.com, the site that doesn’t allow password changes

Greyhound allows four-digit PINs and stores them in plaintext.

Mag publisher Future stored your FileSilo passwords in plaintext. Then hackers...

Plaintext passwords.
In 2017, UK magazine publisher Future's FileSilo website was raided by hackers, who made off with, among other information, unencrypted user account passwords.…

Another Schneider vuln: Plaintext passwords on client-side RAM resolved

Update your StruxureWare Data Center Expert to v7.4, quick! Schneider Electric has issued a patch for its StruxureWare Data Center Expert industrial control kit following the discovery of a flaw that could allow remote access to unencrypted passwords.…

ProtonMail launches Tor hidden service to dodge totalitarian censorship

Known oppressive regimes including Egypt, and er... the UK? Oh, the IP Act is law... ProtonMail, the privacy-focused email business, has launched a Tor hidden service to combat the censorship and surveillance of its users. The move is designed to counter actions "by totalitarian governments around the world to cut off access to privacy tools" and the Swiss company specifically cited "recent events such as the Egyptian government's move to block encrypted chat app Signal, and the passage of the Investigatory Powers Act in the UK that mandates tracking all web browsing activity". Speaking to The Register, ProtonMail's CEO and co-founder Andy Yen said: "We do expect to see more censorship this year of ProtonMail and services like us."

First launched in 2014 by scientists who met at CERN and had become concerned by the mass surveillance suggested by the Edward Snowden revelations, ProtonMail is engineered to protect its users' communications by using client-side encryption through users' browsers, meaning ProtonMail's servers never have access to any plaintext content. Combined with Switzerland's strong privacy laws, the freemium service has increasingly been seen as a popular destination for spooked citizens. It has faced enormous DDoS attacks by assumed nation-state adversaries, and following the election of Donald Trump, sign-ups at the service doubled.

Today, ProtonMail is announcing the introduction of a Tor hidden service, or onion site, which allows users to connect directly to their encrypted email accounts through the Tor network at https://protonirockerxow.onion. ProtonMail said it expended "considerable CPU time" generating that address, in order to find a hash that was more human-readable and less prone to phishing. Additionally, the onion site has a valid SSL certificate issued to Proton Technologies AG by DigiCert.
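The "considerable CPU time" ProtonMail mentions is a brute-force search: keep generating keypairs until the derived onion address happens to start with the string you want. A toy sketch of that loop, where random bytes stand in for real RSA key generation and `find_vanity` is an illustrative name rather than ProtonMail's actual tooling:

```python
import base64
import hashlib
import os

def onion_address(pub_der: bytes) -> str:
    # A v2 onion address is the base32 encoding of the first 80 bits
    # (10 bytes) of the SHA-1 digest of the DER-encoded public key.
    digest = hashlib.sha1(pub_der).digest()[:10]
    return base64.b32encode(digest).decode("ascii").lower()

def find_vanity(prefix: str, max_tries: int = 500_000) -> str:
    # Toy stand-in: random bytes replace a real RSA keypair, but the
    # search loop has the same shape as a real vanity-address hunt.
    for _ in range(max_tries):
        candidate = os.urandom(140)  # placeholder "public key"
        addr = onion_address(candidate)
        if addr.startswith(prefix):
            return addr
    raise RuntimeError("no match within max_tries")
```

Each extra fixed character multiplies the expected work by 32 (the base32 alphabet size), which is why a mostly readable 16-character name like protonirockerxow is expensive to find.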

This is a reasonably novel innovation, as the classical Certificate Authority system isn't compatible with Tor, where onion addresses are self-generated rather than purchased from a registrar. Yen told The Register: "The problem is, if you act as your own CA, you run the issue of not trusting that certificate authority by default." As such, ProtonMail reached out to the Tor Project, which suggested it get in touch with DigiCert, which had previously provided the CA service for Facebook.

"Given ProtonMail's recent growth, we realize that the censorship of ProtonMail in certain countries is inevitable and we are proactively working to prevent this," said Yen. "Tor provides a way to circumvent certain Internet blocks, so improving our compatibility with Tor is a natural first step." In the coming months, ProtonMail said it would be "making additional security and privacy enhancements to ProtonMail, including finishing some of the leftover items from our 2016 Security Roadmap". ®

911 app is a joke, says security researcher Randy Westergren

'Panic Button' could be pressed by miscreants, repeatedly

The Rave Panic Button app, designed to allow businesses to summon emergency services, let miscreants easily 'swat' targets by making false reports of emergencies, says security researcher Randy Westergren. The app, which has a small install base of up to 10,000 users, has since had the holes Westergren identified closed. The vulnerabilities allowed attackers to place a series of rapid 911 calls reporting active shooters, fires and other threats. Because it's aimed at businesses, the app also sends emergency services building plans and alerts staff to threats. Westergren says the app could therefore cause plans to be sent to unknown parties, and staff to be spooked by phantom emergencies.

Westergren found serious holes in the app that allowed external attackers to lodge false emergency call-outs, an act similar to swatting - maliciously summoning SWAT teams - if attackers were to select the app's active shooter option. "As I reviewed the code, I began to realise the product had been designed without a fundamental concern for security — an extremely concerning issue given the nature of the app and how easily attackers could abuse it," Westergren says. "Not only were bad actors able to view and collect sensitive data about users and facilities, they would also be able to impersonate users and make requests on their behalf. An attacker would be able to spoof panic calls to legitimate facility locations; he could even interfere with real-life emergency panic calls."

Westergren found hardcoded plaintext authentication values that gave rise to easy spoofing attacks. Developers fixed the flaws in about six weeks, but Westergren still recommends users uninstall the app, citing suspicions that the software could have similar security shortfalls. "... it remains highly concerning that the software was released in this condition at all," the hacker says. "Since it’s probable that other components of the system have been designed with similarly insufficient security measures, I would recommend customers of Rave’s Panic Button immediately suspend its use." ®
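The hardcoded plaintext credentials Westergren describes are exactly what per-user secrets and signed requests are meant to replace. A minimal sketch of that pattern, assuming a hypothetical server that provisions each install with its own secret (this is not Rave's actual protocol):

```python
import hashlib
import hmac
import os
import time

def sign_request(secret: bytes, body: bytes) -> str:
    # Timestamped token: the MAC covers both the timestamp and the body,
    # so neither can be altered without invalidating the token.
    ts = str(int(time.time()))
    mac = hmac.new(secret, ts.encode() + b"." + body, hashlib.sha256).hexdigest()
    return ts + "." + mac

def verify_request(secret: bytes, body: bytes, token: str, max_age: int = 300) -> bool:
    ts, _, mac = token.partition(".")
    if not ts.isdigit() or abs(time.time() - int(ts)) > max_age:
        return False  # malformed or stale token
    expected = hmac.new(secret, ts.encode() + b"." + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)  # constant-time comparison
```

Because the timestamp is folded into the MAC, a captured token stops working after `max_age` seconds, and tampering with the request body invalidates it outright; neither property holds for a static credential baked into the app binary.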

Clever Facebook Hack Reveals Private Email Address of Any User

Christmas came early for Facebook bug bounty hunter Tommy DeVoss, who was paid $5,000 this week for discovering a security vulnerability that allowed him to view the private email addresses of any Facebook user. “The hack allowed me to harvest as many email addresses as I wanted from anybody on Facebook,” DeVoss said. “It didn’t matter how private you thought your email address was – I could have grabbed it.” DeVoss said he discovered the vulnerability on Thanksgiving Day and reported it to Facebook via its bug bounty program.

After weeks of going back and forth verifying what the exact bug was and how it was exploited, Facebook said it would award him $5,000 for the discovery.

And on Tuesday it did. The bug was tied to the user-generated Facebook Groups feature that allows any member to create an affinity group on the social network’s platform.

DeVoss discovered that, as an administrator of a Facebook Group, he could invite any Facebook member to have Admin Roles via Facebook’s system, to do things such as edit posts or add new members. Those invitations were handled by Facebook and sent to the invited recipient’s Facebook Messages inbox, but also to the email address associated with that user’s Facebook account.
In many cases users choose to keep their email addresses private.

DeVoss discovered that, despite privacy settings set by Facebook members, he was able to gain access to any Facebook user’s email address whether he was Friends with them or not. DeVoss found there was a glitch when he cancelled pending invitations to those invited to be Facebook Group Administrators. “While Facebook waits for the confirmation, the user is forwarded to a Page Roles tab that includes a button to cancel the request,” he said.

Next, he switched to Facebook’s mobile view of the Page Roles tab. Here DeVoss was able to view the full email addresses of anyone he wanted to cancel from becoming a Facebook Group Administrator. “I noticed that when you clicked to cancel the administrator invitation on the mobile page, you were redirected to a page with the email address in the URL,” he said. “Now all you have to do is pluck the plaintext version of the confidential email address straight from the URL.”

The impact of this vulnerability could be diverse, he wrote in a blog post outlining his discovery. “Harvesting email addresses this way contradicts Facebook’s privacy policy and could lead to targeted phishing attempts or other malicious purposes.” Facebook confirmed the hack and said it has no evidence the vulnerability was ever misused.
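The underlying mistake was exposing a private attribute in a URL. The usual fix is an opaque, single-use token that only the server can map back to the invitation; a minimal sketch under hypothetical names (this is not Facebook's actual endpoint design):

```python
import secrets

# Pending invitations keyed by opaque token; in a real system this would
# live in server-side storage, never in anything the client can inspect.
_pending = {}

def create_invitation(invitee_email: str) -> str:
    token = secrets.token_urlsafe(16)
    _pending[token] = invitee_email
    return f"/invites/cancel/{token}"  # the URL reveals nothing about the email

def cancel_invitation(token: str) -> bool:
    # pop() makes the token single-use: a second cancel attempt fails.
    return _pending.pop(token, None) is not None
```

With this shape, both the desktop and mobile cancel flows carry only the random token, so there is no plaintext email address for a page redirect to leak.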

Facebook said it has implemented a fix to prevent the issue from being exploited. DeVoss, a software developer in Virginia, said this is the largest bug bounty payment he has ever earned. He told Threatpost he participates in a number of bug bounty programs including Yahoo’s and the Hack the Pentagon program. For its part, in October Facebook announced it has paid out more than $5 million to 900 researchers in the five years since it implemented its bug bounty program.

The company said it paid out $611,741 to 149 researchers in the first half of 2016 alone. Facebook was one of the first websites to launch a bug bounty program when it followed in the footsteps of both Mozilla and Google in August 2011. In February, the company paid $10,000 to a 10-year-old boy from Finland after he discovered an API bug in the image sharing app Instagram, which Facebook bought for $1B in 2012. The company awarded $15,000 to Anand Prakash in March for a bug that allowed him to crack open any of Facebook’s 1.1 billion accounts using a rudimentary brute-force password attack.

Op-ed: I’m throwing in the towel on PGP, and I work...

Filippo Valsorda is an engineer on the Cloudflare Cryptography team, where he's deploying and helping design TLS 1.3, the next revision of the protocol implementing HTTPS. He also created a Heartbleed testing site in 2014.

This post originally appeared on his blog and is re-printed with his permission. After years of wrestling with GnuPG with varying levels of enthusiasm, I came to the conclusion that it's just not worth it, and I'm giving up—at least on the concept of long-term PGP keys.

This editorial is not about the gpg tool itself, or about tools at all. Many others have already written about that. It's about the long-term PGP key model—be it secured by Web of Trust, fingerprints or Trust on First Use—and how it failed me.

Trust me when I say that I tried. I went through all the setups. I used Enigmail. I had offline master keys on a dedicated Raspberry Pi with short-lived subkeys. I wrote custom tools to make handwritten paper backups of offline keys (which I'll publish sooner or later). I had YubiKeys. Multiple. I spent days designing my public PGP policy. I traveled two hours by train to meet the closest Biglumber user in Italy to get my first signature in the strong set. I have a signature from the most connected key in the set. I went to key-signing parties on multiple continents. I organized a couple. I have the arrogance of saying that I understand PGP. In 2013 I was dissecting the packet format to brute-force short IDs. I devised complex silly systems to make device subkeys tie to both my personal and company master keys. I filed usability and security issues in GnuPG and its various distributions.

All in all, I should be the perfect user for PGP: competent, enthusiastic, embedded in a similar community. But it just didn't work.

First, there's the adoption issue others talked about extensively. I get, at most, two encrypted e-mails a year. Then, there's the UX problem: easy crippling mistakes; messy keyserver listings from years ago; "I can't read this e-mail on my phone" or "on the laptop;" "I left the keys I never use on the other machine."

But the real issues, I realized, are more subtle. I never felt confident in the security of my long-term keys.

The more time passed, the more I would feel uneasy about any specific key. YubiKeys would get exposed to hotel rooms. Offline keys would sit in a faraway drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in. A long-term key is as secure as the minimum common denominator of your security practices over its lifetime. It's the weak link.

Worse, long-term key patterns, like collecting signatures and printing fingerprints on business cards, discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization. Such patterns actually encourage expanding the attack surface by making backups of the key.

We talk about Pets vs. Cattle in infrastructure; those concepts would apply just as well to keys! If I suspect I'm compromised, I want to be able to toss the laptop and re-bootstrap with minimum overhead.

The worst outcome possible for a scheme is making the user stick with a key that has a suspicion of compromise, because the cost of rotating would be too high. And all this for what gain? "Well, of course, long-term trust." Yeah, about that. I never, ever, ever successfully used the WoT to validate a public key. And remember, I have a well-linked key.

I haven't done a formal study, but I'm almost positive that everyone that used PGP to contact me has, or would have done (if asked), one of the following:

- pulled the best-looking key from a keyserver, most likely not even over TLS
- used a different key if replied with "this is my new key"
- re-sent the e-mail unencrypted if provided an excuse like "I'm traveling"

Travel in particular is hostile to long-term keys, making this kind of fresh start impractical. Moreover, I'm not even sure there's an attacker that long-term keys make sense against. Your average adversary probably can't MitM Twitter DMs (which means you can use them to exchange fingerprints opportunistically, while still protecting your privacy).

The Mossad will do Mossad things to your machine, whatever key you use. Finally, these days I think I care much more about forward secrecy, deniability, and ephemerality than I do about ironclad trust.

Are you sure you can protect that long-term key forever? Because when an attacker decides to target you and succeeds, they won't have access just from that point forward; they'll have access to all your past communications, too.

And that's ever more relevant.

Moving forward

I'm not dropping to plaintext. Quite the opposite.

But I won't be maintaining any public long-term key. Mostly I'll use Signal or WhatsApp, which offer vastly better endpoint security on iOS, ephemerality, and smoother key rotation. If you need to securely contact me, your best bet is to DM me asking for my Signal number.
If needed we can decide an appropriate way to compare fingerprints.
If we meet in person and need to set up a secure channel, we will just exchange a secret passphrase to use with what's most appropriate: OTR, Pond, Ricochet. If it turns out we really need PGP, we will set up some ad-hoc keys, more à la Operational PGP.
Same for any signed releases or canaries I might maintain in the future. To exchange files, we will negotiate Magic Wormhole, OnionShare, or ad-hoc PGP keys over the secure channel we already have.

The point is not to avoid the gpg tool, but the PGP key management model. If you really need to cold-contact me, I might maintain a Keybase key, but no promises.
I like rooting trust in your social profiles better since it makes key rotation much more natural and is probably how most people know me anyway. I'm also not dropping YubiKeys.
I'm very happy about my new YubiKey 4 with touch-to-operate, which I use for SSH keys, password storage, and machine bootstrap.

But these things are one hundred percent under my control.

About my old keys and transitioning

I broke the offline seal of all my keys.
I don't have reason to believe they are compromised, but you should stop using them now. Below are detached signatures for the Markdown version of this document from all keys I could still find. In the coming weeks I'll import all signatures I received, make all the signatures I promised, and then publish revocations to the keyservers.
I'll rotate my Keybase key.

Eventually, I'll destroy the private keys. See you on Signal. (Or Twitter.)

- Giving up on PGP.md
- Giving up on PGP.md.B8CC58C51CAEA963.asc
- Giving up on PGP.md.C5C92C16AB6572C2.asc
- Giving up on PGP.md.54D93CBC8AA84B5A.asc
- Giving up on PGP.md.EBF01804BCF05F6B.asc [coming once I recover the passphrase from another country]

Note: I expect the "Moving forward" section to evolve over time, as tools come and go.

The signed .md file won't change; an unauthenticated .diff will appear below for verification convenience.

Open source Roundcube webmail can be attacked … by sending it...

The Fifth Element is a problem - the input argument that didn't get checked is an RCE hole

The developers of open source webmail package Roundcube want sysadmins to push in a patch, because a bug in versions prior to 1.2.3 let an attacker compromise it remotely – by sending what looks like valid e-mail data. The authors overlooked sanitising the fifth argument (the _from parameter) in mail() – and that meant someone only needed to compose an e-mail with malicious info in that argument to attack Roundcube.

It works because of how the program flows in a default installation. User input from the Roundcube UI is passed to PHP's mail() function, and mail() calls sendmail. Because the user input wasn't sanitised until the bug-fix, the fifth argument when calling mail() could be used to execute sendmail with the -X option to log all mail traffic – and that, according to RIPS Technologies in this blog post, could be abused to spawn a malicious PHP file in the target server's webroot directory.

After looking over the code and the regex that was meant to sanitise the _from parameter, the RIPS Technologies analysts worked out that an HTTP request to the server could use that parameter to put a malicious PHP file onto the system, like this: example@example.com -OQueueDirectory=/tmp -X/var/www/html/rce.php

The malicious rce.php can be populated with PHP code that's inserted in an e-mail's subject line. “Since the email data is unencoded, the subject parameter will be reflected in plaintext which allows the injection of PHP tags into the shell file”, the post states. Roundcube posted a patch to GitHub at the end of November, and issued version 1.2.3. ®
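The general defence is strict validation of anything that can reach a sendmail command line. A sketch of such a check, written in Python for illustration rather than PHP, and not Roundcube's actual patch:

```python
import re

# Illustrative allow-list for an envelope sender: one local part, one
# domain, nothing else. Spaces and extra arguments such as -OQueueDirectory
# or -X can never match, so argument-injection strings are rejected.
_ADDR_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

def safe_envelope_from(addr: str) -> str:
    if not _ADDR_RE.fullmatch(addr):
        raise ValueError(f"refusing suspicious envelope sender: {addr!r}")
    return addr
```

Rejecting on failure (rather than trying to strip the dangerous parts) is the safer posture here, since sendmail's option surface is large and any stripping logic risks missing a variant.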

In the three years since IETF said pervasive monitoring is an...

IETF Security director Stephen Farrell offers a report card on evolving defences

FEATURE After three years of work on making the Internet more secure, the Internet Engineering Task Force (IETF) still faces bottlenecks: ordinary people's perception of risk, sysadmins worried about how to manage encrypted networks, and – more even than state snooping – an advertising-heavy 'net business model that relies on collecting as much information as possible. In a wide-ranging 45-minute, 4,000-word interview (full transcript in this PDF), IETF Security Area Director Stephen Farrell gave a report card of what's happened since the Internet Architecture Board declared that “pervasive monitoring is an attack”, in RFC 7258. Much of the discussion used Farrell's presentation to the NORDUnet conference in September, and the slides are here.

Let's boil the ocean, so we can cook an elephant. And eat it.

Given the sheer scale of the effort involved – the IETF's list of RFCs passed the 8,000 mark in November – nobody expected the world to get a private Internet quickly, but Farrell told The Register some of the key in-IETF efforts have progressed well: its UTA (Using TLS in Applications), DPRIVE (DNS Privacy), and TCPINC (TCP INCreased security, which among other things is working to revive the tcpcrypt proposal rejected earlier in the decade).

UTA: The idea is to get rid of the nasty surprises that happen when someone realises a standard (and therefore code written to that standard) still references a “laggard” protocol – so, for example, nobody gets burned complying with a standard that happens to reference a deprecated SSL or TLS standard. “The UTA working group produced RFC 7525 (Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS), https://tools.ietf.org/html/rfc7525).

The last time I looked, there were something like 50 RFCs that are referencing that [The Register checked this list, provided by Farrell – it seems to be close to 70 already].” The idea of UTA is that a protocol written 10 or 15 years ago should be updated so it no longer references the then-current version of TLS, he said. “That's being used in order to provide a common reference: as people update their implementations, they'll reference a more modern version of TLS, currently TLS 1.2, and as TLS 1.3 is finished, we have an automated-ish way of getting those updates percolating through to the documentation sets. “That's quite successful, I think, because it normalises and updates and modernises a bunch of recommendations.”

DPRIVE: Readers will recall that IETF 97 was the venue for the launch of Stubby, a demonstrator for securing DNS queries from the user to their DNS responder. That, Farrell said, is a good example of where DPRIVE is at – on the user side, it's ready for experimental code to go into service. “DNS privacy is something that is ready to experiment with.

The current work in DPRIVE was how to [secure] the hop between you and the next DNS provider you talk to. “That's an easy problem to tackle – you talk to that DNS resolver a lot, and you have some shared space, so the overhead of doing the crypto stuff is nowhere.” Getting upstream to where DNS queries become recursive – your ISP can't answer, so they pass the query upwards – is much harder, he said. “Assuming that [the ISP] needs to find “where is theregister.co.uk?”, he'll eventually talk to the UK ccTLD, and then he'll go talk to .co.uk and then he'll go talk to theregister.co.uk – it's forking the communications a lot more, and it's a little harder to see how to efficiently amortise the crypto. “The DPRIVE working group are now examining whether they think they can produce some technology that will work for that part of the problem.”

TCPINC: Some of the questions in this working group may never be seen by ordinary Internet users, but they're still important, Farrell said. “I think we're close to having some TCP-crypt-based RFCs issued, there's been code for that all along. Whether or not we'll get much deployment of that, we'll see.” “I think there are a bunch of applications that maybe wouldn't be visible to the general public. Let's say you have an application server that has to run over a socket – an application that runs on top of the Linux kernel, say, where you have to use the kernel because of the interfaces involved, and you can't provide the security above the kernel because you need it inside. “That's where TCPINC fits in.
Storage – they have really complex interfaces between the network-available storage server and the kernel, and there's lots of complex distributed processing going on.” That's important to “the likes of NetApp and EMC and so on”, he said: “For some of those folks, being able to slot in security inside the kernel, with TCPINC, is attractive.
Some, I might expect, will adopt that sort of thing – but it may never be seen on the public Internet.”

Security and the end-to-end model

Farrell said more encryption is changing the Internet in ways the general public probably doesn't think about – but which they'll appreciate. The old end-to-end model – the “neutral Internet” – has been under both overt and covert attack for years: carriers want to be more than passive bit-pipes, so they look for ways that traffic management can become a revenue stream; while advertisers want access to traffic in transit so they can capture information and inject advertisements. Ubiquitous encryption changes both of these models, by re-empowering the endpoints.

Along the way, perhaps surprisingly, Farrell sees this as something that can make innovation on the Internet more democratic. He cited HTTP/2 and QUIC as important non-IETF examples: “there's a whole bunch of people motivated to use TLS almost ubiquitously, not only because they care about privacy, but because of performance: it moves the point of control back towards the endpoint, not the middle of the network.

“One of the interesting and fun things of trying to improve the security properties and privacy properties of the network is that it changes who controls what. “If you encrypt a session, nobody in the middle can do something like inject advertising. “It reasserts the end-to-end argument in a pretty strong way.
If you do the crypto right, then the middlebox can't jump in and modify things – at least not without being detectable.” He argues that the carriers' / network operators' “middleboxes” became an innovation roadblock. “The real downside of having middleboxes doing things is that they kind of freeze what you're doing, and prevent you innovating. “One of the reasons people did HTTP2 implementations, that only ever talk ciphertext, is because they found a lot of middleboxes would break the connection if they saw anything that wasn't HTTP 1.1. “In other words, the cleartext had the effect that the middleboxes, that were frozen in time, would prevent the edges from innovating. Once they encrypted the HTTP2 traffic, the middleboxes were willing to say 'it's TLS so I won't go near it', and the innovation can kick off again at the edges.”

Won't somebody think of the sysadmin?

Systems administrators – in enterprises as well as in carriers – are less in love with crypto. “Network management people have been used to managing cleartext networks,” he said. For more than 20 years, for perfectly legitimate reasons – and without betraying their users – sysadmins would look into packets, see what they contained, and when sensible do something about them. “Not for nefarious reasons – in order to detect attacks, in order to optimise traffic, and so on. We're changing that, and that also means the technology they're using will be undergoing change, to deal with much more ciphertext than plaintext. “We need to learn better ways of how to fulfil those same functions on the network,” he said.
“If you had some security mechanism in your network for detecting some malware attack traffic, instead of being able to operate that from the middle of the network, it pushes a requirement on you to move that to the edge.” Commercial services are starting to understand how this can work, he said: “If you look at some of the commercial instant messaging providers, that have introduced end-to-end encryption of their messaging – they have found they can move those functions in their networks to new places to do what they need to do. “It means change, but it doesn't make network management impossible.”

Advertising models will change

Companies collaborating to collect advertising data remains a big challenge, he said.

That's likely to change – “there's no reason why a particular business model has to last forever”, but in the meantime, “it's hard to see how we make a dramatic improvement in privacy. “We can make some improvements, but how we make it dramatically better – it's hard.

The incentives are aligned to make all the service providers want to be privacy-unfriendly, from the point of view of “me”, but not perhaps the point of view of 99 per cent of people who use the Internet, who seem happy enough with it.” Breaches and leaks are frightening the service providers, which helps, because providers “realise that storing everything, forever, is toxic, and in the end they'll get caught by it.”

About the cough NSA cough

The Register also asked: what protects future standards against security organisations polluting standards, as they did with DUAL-EC? “As an open organisation, we need to be open to technical contributions from anywhere,” Farrell said, “be that an employee of the NSA, or be that – as we've had in one case – a teenager from the Ukraine who was commenting on RFCs five or six years ago.” It has to be handled socially, rather than by process, he argued, citing the IETF's creation of the Crypto Forum Research Group, chaired by Alexey Melnikov and Kenny Paterson and designed to bring together IETF standards authors and the academic crypto community. He described it as a “lightweight process” designed to assess crypto proposals – have they been reviewed? Is the proposal novel and maybe not ready for prime time?

“The number of NSA employees that attend IETF [meetings] – I don't think it's a useful metric at all.
I think how well people's contributions are examined is a much more useful metric, and there, things like having the CFRG, having academic cryptographers interacting much more with the standards community – those are more effective ways of doing that. “We've set up a thing called the Applied Networking Research Prize, which is a prize for already-published academic work. It pays for the academic to come to an IETF meeting, give us a talk, and get them involved” (Paterson first became involved in the CFRG as an invited academic who won the prize). Spooks want to monitor everyone because they believe everyone might be guilty, he added, and that's a mistake. “We should not think people are guilty by association.

That's a fallacy – if you believe that NSA employees are not allowed to contribute, you're making the same mistake they're making.” ®

Adult FriendFinder Hack Exposes 400 Million Accounts

Account data for more than 400 million users of adult-themed FriendFinder Network has been exposed.

The breach includes personal account data from five sites including Adult FriendFinder, Penthouse.com and Stripshow.com.

FriendFinder Network did not confirm the breach and is investigating reports. According to LeakedSource, which obtained the data and reported the breach Sunday, a total of 412 million accounts are impacted. LeakedSource reports that the hack occurred in the October 2016 timeframe and was not related to a similar breach at that time by hacker Revolver. In a statement issued to Threatpost, FriendFinder Network said: “Our investigation is ongoing but we will continue to ensure all potential and substantiated reports of vulnerabilities are reviewed and if validated, remediated as quickly as possible.” According to the statement, the company has received a number of reports of “potential” security vulnerabilities from a “variety of sources” over the past several weeks.
It says it has hired external resources to support its investigation. According to a news report by ZDNet, this most recent breach was conducted by an “underground Russian hacking site” that took advantage of a local file inclusion flaw first revealed by Revolver in October. A local file inclusion vulnerability can allow a hacker to add local files to web servers via script and execute code. Hackers can take advantage of an LFI vulnerability when sites allow user-supplied input without proper validation, something Adult FriendFinder is guilty of, according to an October interview by Threatpost with Revolver, who also goes by the handle 1×0123. In the case of the FriendFinder Network, said Dale Meredith, ethical hacking expert and author at Pluralsight, hackers implemented an LFI attack allowing them to move through folder structures on targeted servers in what is called a directory traversal. “This means they can issue commands to a system that would allow the attacker to move around and download any file on this computer,” he said. LeakedSource bills itself as independent researchers who run a site that acts as a repository for breached data.
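A directory traversal of the kind Meredith describes relies on path components like `../` escaping the intended directory. A minimal sketch of the server-side check that blocks it (`WEB_ROOT` is a hypothetical location, not FriendFinder's layout):

```python
from pathlib import Path

WEB_ROOT = Path("/var/www/uploads")  # hypothetical serving directory

def resolve_upload(name: str) -> Path:
    # Resolve the requested name and refuse anything that escapes the
    # web root, which is exactly the "move around and download any file"
    # trick described above (e.g. "../../etc/passwd").
    root = WEB_ROOT.resolve()
    candidate = (root / name).resolve()
    if root != candidate and root not in candidate.parents:
        raise PermissionError(f"path escapes web root: {name!r}")
    return candidate
```

Resolving before comparing is the important step: a naive string check on the raw input misses tricks like `a/../../etc/passwd`, while the resolved path exposes where the request actually lands.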

The website sells one-time or paid subscriptions to such breached data.
In May, LeakedSource faced a cease-and-desist order from LinkedIn for offering a paid subscription granting access to 117 million breached LinkedIn user logins. LeakedSource did not return requests for comment for this story. According to a blog post by LeakedSource, the FriendFinder Network data included 20 years of customer data.

The breach includes data tied to 340 million AdultFriendFinder.com accounts, 62 million accounts from Cams.com, 7 million from Penthouse.com and 15 million “deleted” accounts that were not purged from the databases.

Also impacted was a site called iCams.com and account data for 1 million users. “We have decided that this data set will not be searchable by the general public on our main page temporarily for the time being,” according to the blog post on LeakedSource’s website. According to several independent reviews of the breached data supplied by LeakedSource, the datasets included usernames, passwords, email addresses and dates of last visits.

According to LeakedSource, passwords were stored as plaintext or protected using the weak SHA-1 hash function. LeakedSource claims it has cracked 99 percent of the 412 million passwords. This most recent breach follows an unconfirmed breach in October in which hacker Revolver claimed to have compromised “millions” of Adult FriendFinder accounts by leveraging a local file inclusion vulnerability to access the site’s backend servers.
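Unsalted SHA-1 is why LeakedSource could crack 99 percent of the haul so quickly: identical passwords hash to identical digests, so a precomputed table turns cracking into a single lookup. A toy illustration (the password list here is hypothetical):

```python
import hashlib

def sha1_hex(password: str) -> str:
    # Unsalted SHA-1, as in the breached databases: no per-user salt,
    # so the same password always yields the same digest.
    return hashlib.sha1(password.encode()).hexdigest()

common = ["123456", "password", "qwerty", "letmein"]
lookup = {sha1_hex(p): p for p in common}  # attacker's precomputed table

leaked_digest = sha1_hex("letmein")  # stands in for a row from the dump
cracked = lookup.get(leaked_digest)  # instant recovery, no brute force
```

A salted, deliberately slow scheme such as bcrypt or scrypt defeats this: every user's hash differs even for identical passwords, so the precomputed-table shortcut no longer applies.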
In 2015, more than 3.5 million Adult FriendFinder customers had intimate details of their profiles exposed.

At the time, hackers put user records up for sale on the Dark Web for 70 Bitcoin, or $16,000 at the time.

According to third-party reviews of this most recent FriendFinder Network breach, no sexual preference data was contained in the breached data. In 2012, the website MilitarySingles.com fell victim to a similar local file inclusion vulnerability.

The social network said, at the time, the vulnerability was tied to user generated content uploaded to the site. “Allowing the upload of user-generated content to the Web site can be extremely dangerous as the server which is usually considered by other users and the application itself as ‘trusted’ now hosts content that can be generated by a malicious source,” MilitarySingles.com said in a statement at the time of the intrusion.