Saturday, December 16, 2017

Tag: BuzzFeed

Professor, reporter say meetings with Draper years ago turned inappropriate.
BuzzFeed reports results after sending supplements to an independent lab for testing.
At least three federal agencies are looking into Universal Health Services.
Currently, Amazon neither offers an Apple TV app nor sells the Apple TV at all.
"Six ways Buzzfeed has misled the court... and a picture of a kitten."
Prosecutor notes: "All of the Rioter Cell Phones were locked."
President-elect Donald Trump at his news conference on January 11, 2017. (PBS)

President-elect Donald Trump took to Twitter early Wednesday to blast the leak of unsubstantiated anti-Trump documents that touch on everything from Russia allegedly having leverage to extort him, to sexual escapades, and even to a potentially treasonous act. The soon-to-be 45th president tweeted, "Intelligence agencies should never have allowed this fake news to 'leak' into the public. One last shot at me.

Are we living in Nazi Germany?" Later at a news conference Wednesday, he called the report "a disgrace," "fake news," "phony stuff," and "crap." "It certainly never should have been released," he said. The documents, and portrayals of them, began widely circulating on the Internet Tuesday and were even published in full by BuzzFeed, which said they were unverified.
Shortly after, Trump went on Twitter for the first time to address the issue. "FAKE NEWS - A TOTAL Political Witch HUNT!" The documents were said to have been drawn up by operatives seeking to derail his presidency and were allegedly passed from Sen. John McCain (R-Ariz.) to FBI director James Comey.

Trump and President Barack Obama were briefed on them a week ago. Russia denied it had a dossier with incriminating details on Trump; US intelligence officials maintain they have not yet vetted the documents. "The Kremlin has no compromising dossier on Trump, such information isn’t consistent with reality and is nothing but an absolute fantasy," Putin spokesman Dmitri Peskov told a news conference. Trump on Wednesday tweeted again, "Russia just said the unverified report paid for by political opponents is "A COMPLETE AND TOTAL FABRICATION, UTTER NONSENSE. Very unfair!" But beyond the claims of sex videos and alleged proposed Russian business deals, there's a potentially treasonous claim in the documents. One of the opposition research documents cites an unidentified Russian source as saying that the hack of the Democratic National Committee happened "with the full knowledge and support of TRUMP and senior members of his campaign team." The memo said that, in exchange, the Trump campaign "agreed to sideline Russian intervention in Ukraine as a campaign issue." The Tuesday disclosure came the same day that Sen. Ron Wyden (D-Ore.) asked Comey during a hearing whether the FBI was investigating Trump, and Comey did not say. However, Comey did announce days before the election that the agency was investigating newly discovered e-mails connected to the Hillary Clinton e-mail server debacle. The Guardian, meanwhile, has an unconfirmed report that the FBI applied for a warrant from the Foreign Intelligence Surveillance Court to monitor four members of the Trump team and their contacts with Russian officials.
Neal H. Walfield is a hacker at g10code working on GnuPG.

This op-ed was written for Ars Technica by Walfield, in response to Filippo Valsorda's "I'm giving up on PGP" story that was published on Ars last week. Every once in a while, a prominent member of the security community publishes an article about how horrible OpenPGP is. Matthew Green wrote one in 2014 and Moxie Marlinspike wrote one in 2015.

The most recent was written by Filippo Valsorda, here on the pages of Ars Technica, which Matthew Green says "sums up the main reason I think PGP is so bad and dangerous." In this article I want to respond to the points that Filippo raises.
In short, Filippo is right about some of the details, but wrong about the big picture.

For the record, I work on GnuPG, the most popular OpenPGP implementation.

Forward secrecy isn't always desirable

Filippo's main complaint has to do with OpenPGP's use of long-term keys.
Specifically, he notes that due to the lack of forward secrecy, the older a key is, the more communication will be exposed by its compromise.

Further, he observes that OpenPGP's trust model includes incentives to not replace long-term keys. First, it's true that OpenPGP doesn't implement forward secrecy (or future secrecy).

But, OpenPGP could be changed to support this. Matthew Green and Ian Miers recently proposed puncturable forward secure encryption, which is a technique to add forward secrecy to OpenPGP-like systems.

But, in reality, approximating forward secrecy has been possible since OpenPGP adopted subkeys decades ago. (An OpenPGP key is actually a collection of keys: a primary key that acts as a long-term, stable identifier, and subkeys that are cryptographically bound to the primary key and are used for encryption, signing, and authentication.) Guidelines on how to approximate forward secrecy were published in 2001 by Ian Brown, Adam Back, and Ben Laurie.

Although their proposal is only for an approximation of forward secrecy, it is significantly simpler than Green and Miers' approach, and it works in practice. As far as I know, Brown et al.'s proposal is not often used. One reason for this is that forward secrecy is not always desired.

For instance, if you encrypt a backup using GnuPG, then your intent is to be able to decrypt it in the future.
If you use forward secrecy, then, by definition, that is not possible; you've thrown away the old decryption key.
In the recent past, I've spoken with a number of GnuPG users including 2U and 1010data.

These two companies told me that they use GnuPG to protect client data.

Again, to access the data in the future, the encryption keys need to be retained, which precludes forward secrecy. This doesn't excuse the lack of forward secrecy when using GnuPG to protect e-mail, which is the use case that Filippo concentrates on.

The reason that forward secrecy hasn't been widely deployed here is that e-mail is usually left on the mail server in order to support multi-device access.
Since mail servers are not usually trusted, the mail needs to be kept encrypted.

The easiest way to accomplish this is to just not strip the encryption layer.
So, again, forward secrecy would render old messages inaccessible, which is often not desired. But, let's assume that you really want something like forward secrecy.

Then following Brown et al.'s approach, you just need to periodically rotate your encryption subkey.
Since your key is identified by the primary key and not the subkey, creating a new subkey does not change your fingerprint or invalidate any signatures, as Filippo suggests it would.

And, as long as your communication partners periodically refresh your key, rotating subkeys is completely transparent. Ideally, you'll want to store your primary key on a separate computer or smartcard so that if your computer is compromised, then only the subkeys are compromised.
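As a concrete sketch of the rotation step (my own illustration, not from the op-ed or the official GnuPG documentation): with GnuPG 2.1 or later, adding a fresh encryption subkey is a single --quick-add-key invocation. The helper below assumes gpg is on your PATH and that the key is not passphrase-protected; the function name is hypothetical.

```python
import os
import subprocess


def rotate_encryption_subkey(fingerprint, gnupghome=None):
    """Attach a fresh one-year encryption subkey to the given primary key.

    The primary key -- and therefore the fingerprint and any signatures
    on it -- is untouched; only a new subkey is bound to it, which is
    what makes Brown et al.-style rotation transparent to correspondents.
    """
    env = dict(os.environ)
    if gnupghome:
        env["GNUPGHOME"] = gnupghome
    # `default` picks the compiled-in algorithm, `encr` marks the subkey
    # as encryption-only, and `1y` expires it after a year.
    subprocess.run(
        ["gpg", "--batch", "--pinentry-mode", "loopback", "--passphrase", "",
         "--quick-add-key", fingerprint, "default", "encr", "1y"],
        check=True,
        env=env,
    )
```

Approximating forward secrecy then amounts to running something like this periodically and destroying the secret material of expired subkeys once any mail encrypted to them has been dealt with.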

But, even if you don't use an offline computer, and an attacker also compromises your primary key, this approach provides a degree of future secrecy: your attacker will be able to create new subkeys (since she has your primary key), and sign other keys, but she'll probably have to publish them to use them, which you'll eventually notice, and she won't be able to guess any new subkeys using the existing keys.

Physical attacks vs. cyber attacks

So, given that forward secrecy is possible, why isn't it enabled by default? We know from Snowden that when properly implemented, "encryption … really is one of the few things that we can rely on." In other words, when nation states crack encryption, they aren't breaking the actual encryption, they are circumventing it.

That is, they are exploiting vulnerabilities or using national security letters (NSLs) to break into your accounts and devices.

As such, if you really care about protecting your communication, you are much better off storing your encryption keys on a smartcard than storing them on your computer. Given this, it's not clear that forward secrecy is that big of a gain, since smartcards won't export private keys.
So, when Filippo says that he is scared of an evil maid attack and is worried that someone opened his safe with his offline keys while he was away, he's implicitly stating that his threat model includes a physical, targeted attack.

But, while moving to the encrypted messaging app Signal gets him forward secrecy, it means he can't use a smartcard to protect his keys and makes him more vulnerable to a cyber attack, which is significantly easier to conduct than a physical attack. Another problem that Filippo mentions is that key discovery is hard.
Specifically, he says that key server listings are hard to use.

This is true.

But, key servers are in no way authenticated and should not be treated as authoritative.
Instead, if you need to find someone's key, you should ask that person for their key's fingerprint. Unfortunately, our research suggests that for many GnuPG users, picking up the phone is too difficult. So, after our successful donation campaign two years ago, we used some of the money to develop a new key discovery technique called the Web Key Directory (WKD).

Basically, the WKD provides a canonical way to find a key given an e-mail address via HTTPS.
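To make the mechanism concrete, here is a small sketch (mine, not from the op-ed) of how a client can derive the lookup URL under WKD's "direct" method: the local part of the address is lowercased, SHA-1 hashed, and z-base-32 encoded, then appended to a well-known path on the mail domain. The function names are my own.

```python
import hashlib

# z-base-32 alphabet, as used by the Web Key Directory specification.
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"


def zbase32(data):
    """Encode bytes in z-base-32 (5 bits per output character, MSB first)."""
    bits = nbits = 0
    out = []
    for byte in data:
        bits = (bits << 8) | byte
        nbits += 8
        while nbits >= 5:
            nbits -= 5
            out.append(ZB32[(bits >> nbits) & 31])
    if nbits:  # pad the final partial group with zero bits
        out.append(ZB32[(bits << (5 - nbits)) & 31])
    return "".join(out)


def wkd_direct_url(address):
    """Return the WKD "direct method" URL for an e-mail address."""
    local, domain = address.rsplit("@", 1)
    hashed = zbase32(hashlib.sha1(local.lower().encode("utf-8")).digest())
    return f"https://{domain}/.well-known/openpgpkey/hu/{hashed}?l={local}"
```

A client fetches that URL over HTTPS and, on success, receives the binary OpenPGP key. Only the domain owner can serve that path, which is where the trust improvement over unauthenticated key servers comes from.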

This is not as good as checking the fingerprint, but since only the mail provider and the user can change the key, it is a significant improvement over the de facto status quo. WKD has already been deployed by Posteo, and other mail providers are in the process of integrating it (consider asking your mail provider to support it). Other people have identified the key discovery issue, too. Micah Lee, for instance, recently published GPG Sync, and the INBOME group and the pretty Easy privacy (p≡p) project are working on opportunistically transferring keys via e-mail.

Signal isn't our saviour

Filippo also mentions the multi-device problem.
It's true that using keys on multiple devices is not easy. Part of the problem is that OpenPGP is not a closed ecosystem like Signal, which makes standardising a secret key exchange protocol much more difficult. Nevertheless, Tankred Hase did some work on private key synchronisation while at whiteout.io.

But, if you are worried about targeted attacks as Filippo is, then keeping your keys on a computer, never mind multiple computers, is not for you.
Instead, you want to keep your keys on a smartcard.
In this case, using your keys from multiple computers is easy: just plug the token in (or use NFC)! This assumes that there is an OpenPGP-capable mail client on your platform of choice.

This is the case for all of the major desktop environments, and there is also an excellent plug-in for K9 on Android called OpenKeychain. (There are also some solutions available for iOS, but I haven't evaluated them.) Even if you are using Signal, the multi-device problem is not completely solved.

Currently, it is possible to use Signal from a desktop and a smartphone or a tablet, but it is not possible to use multiple smartphones or tablets. One essential consideration that Filippo doesn't adequately address is that contacting someone on Signal requires knowing their mobile phone number. Many people don't want to make this information public.
I was recently chatting with Jason Reich, who is the head of OPSEC at BuzzFeed, and he told me that he spends a lot of time teaching reporters how to deal with the death and rape threats that they regularly receive via e-mail.

Based on this, I suspect that many reporters would opt to not publish their phone number even though it would mean missing some stories.
Similarly, while talking to Alex Abdo, a lawyer from the ACLU, I learned that he receives dozens of encrypted e-mails every day, and he is certain that some of those people would not have contacted him or the ACLU if they couldn't remain completely anonymous. Another point that Filippo doesn't cover is the importance of integrity; he focused primarily on confidentiality (i.e., encryption).
I love the fact that messages that I receive from DHL are signed (albeit using S/MIME and not OpenPGP).

This makes detecting phishing attempts trivial.
I wish more businesses would do this. Of course, Signal also provides integrity protection, but I definitely don't want to give all businesses my phone number given their record of protecting my e-mail address. Moreover, most of this type of communication is done using e-mail, not Signal. I want to be absolutely clear that I like Signal. When people ask me how they can secure their communication, I often recommend it.

But, I view Signal as complementary to OpenPGP.

First, e-mail is unlikely to go away any time soon.
Second, Signal doesn't allow transferring arbitrary data, including documents.

And, importantly, Signal has its own problems.
In particular, the main Signal network is centralised, not federated like e-mail, the developers actively discourage third-party clients, and you can't choose your own identity.

These decisions are a rejection of a free and open Internet and of pseudonymous communication.

In conclusion, Filippo has raised a number of important points.

But, with respect to long-term OpenPGP keys being fatally flawed and forward secrecy being essential, I think he is wrong and disagree with his compromises in light of his stated threat model.
I agree with him that key discovery is a serious issue.

But, this is something that we've been working to address. Most importantly, Signal cannot replace OpenPGP for many people who use it on a daily basis, and the developers' decision to make Signal a walled garden is problematic.
Signal does complement OpenPGP, though, and I'm glad that it's there. Neal H. Walfield is a hacker at g10code working on GnuPG. His current project is implementing TOFU for GnuPG.

To avoid conflicts of interest, GnuPG maintenance and development is funded primarily by donations. You can find him on Twitter @nwalfield.
E-mail: neal@gnupg.org OpenPGP: 8F17 7771 18A3 3DDA 9BA4 8E62 AACB 3243 6300 52D9 This post originated on Ars Technica UK
Evernote is testing out machine learning algorithms on all the reams of content it has accumulated over the past eight years. But when it announced this move with a new privacy policy that goes into effect January 24, 2017, the company also pointed out something that many users hadn't realized: Evernote staffers will sometimes look at the content of your notes. There are actually a number of perfectly good reasons why Evernote employees might need to read note content, and they are explained clearly in Evernote's Privacy Policy. These include complying with a lawful court order, investigating whether there has been a violation of the Terms of Service, and "protect[ing] against potential spam, malware or other security concerns."

What concerned some users, including journalist and former BuzzFeed News Editor Stacy-Marie Ishmael, is a vaguely worded section of the Privacy Policy stating that employees will look at your notes "for troubleshooting purposes or to maintain and improve the Service." She noted on Twitter that this clause is "so broad as to be all inclusive" and that it's particularly worrying for a "minority journalist in 2016." Given the hostile stance President-elect Donald Trump and some of his supporters have shown toward journalists, it's possible that journalists who want to preserve the anonymity of sources will have to stop using services like Evernote.

Evernote spells out when its employees can read your notes; you cannot opt out. The vague wording about "troubleshooting" is what has concerned users. Of course, Evernote employees might have to read notes to deal with bugs and abuse, but aren't those problems already covered by the "protect against potential spam, malware or other security concerns" language? It's unclear why Evernote chose to add the capacious "troubleshooting" clause to its policy, given how broadly it could be interpreted.
Anthropologist Michael Oman-Reagan warned followers on Twitter that they need to be aware of the policy, adding, "If you're using Evernote for research with human subjects, it may be necessary to export your data and leave." That's because researchers must be able to guarantee the privacy of human subjects in experiments. Evernote no longer allows any researcher to make that guarantee. For its part, Evernote has assured users that they can opt out of the machine learning features, so they're guaranteed that no algorithms will look at their notes. If users with sensitive data want to benefit from machine learning, they also have the option to encrypt private notes and make them off-limits to algorithms. As long as you choose a good password, it should be off-limits to humans, too. It's worth noting that other companies have similar disclosures in their privacy policies, though with less alarming wording than Evernote's. In its privacy policy, Google states that its employees do have access to personal information, with this caveat: "We restrict access to personal information to Google employees, contractors and agents who need to know that information in order to process it for us, and who are subject to strict contractual confidentiality obligations and may be disciplined or terminated if they fail to meet these obligations." So Google employees can read your mail, your docs, and anything else. Facebook explains in its privacy policy that it shares personal data with employees of Facebook and Facebook companies like WhatsApp. Evernote did not immediately respond to a request for comment on its new policy.
But ad networks still power 'fake news'

Faced with a report showing Google’s advertising network allowed big brands' ad money to be spent funding criminal operations, Google welcomed initiatives to “drain the swamp” in 2013, three and a half years ago. But guess what? The swamp’s still here.

And it's feeding a different sort of creature now. Today the Wall Street Journal reports how “fake news” clickbait sites are richly rewarded by Google’s advertising network. Of course they are.

That’s why they do it.

Buzzfeed (oh, the irony) traced hundreds of clickbait domains to one operation run out of a provincial town in Macedonia. “Well-known brands’ appearance on fake-news sites reflects the complexity [our emphasis] of online advertising, where computers can place a different ad each time a user clicks on a webpage. Multiple middlemen are often involved, leaving both publishers and advertisers uncertain about which ads will appear where.” This might sound familiar.

Around four years ago, independent musicians, songwriters and filmmakers tried to find out how big brand advertisements were funding criminal piracy operations. Here’s how David Lowery described it: "When things get complex, it's typically to hide some institution from liability.
In finance, there's a saying: 'Complexity is fraud'". The brands didn’t like it when their ads showed up on porn sites.
It was bad for the brand.

The industry body the IAB took an interest.

And Google made a promise, which today reads better than ever. Google's Theo Bertram welcomed initiatives to "drain the swamp of dodgy networks, dodgy agencies and dodgy sites” and pointed to its own efforts "in partnership" with the IAB. That was in 2013. How’s the draining operation doing? The swamp must be bone-dry by now. “Many Google-placed ads, including those for big brands, continue to appear on the sites, even including ads for Google’s new Pixel smartphones,” the WSJ tells us. Oh. The WSJ saw the tip of an iceberg: according to the World Federation of Advertisers, many ads are bought, paid for, the cheques cashed, but never seen by a human.

The system is “fraudulent by design”, to ensure the parties involved continue to profit, undisturbed. The ad body described the many flavours of fraud in a report this summer, warning: "Until the industry can prove that it has the capability to effectively deal with ad fraud, advertisers should use caution in relation to increasing their digital media investment, to limit their exposure to fraud." An exodus by big brands from the ad networks is not impossible to imagine. ®
A rifle-wielding North Carolina man was arrested Sunday in Washington, DC for carrying his weapon into a pizzeria that sits at the center of the fake news conspiracy theory known as "Pizzagate," authorities said Monday. DC's Metropolitan Police Department said it had arrested 28-year-old Edgar Maddison Welch on allegations of assault with a dangerous weapon. "During a post arrest interview this evening, the suspect revealed that he came to the establishment to self-investigate 'Pizza Gate' (a fictitious online conspiracy theory)," the agency said in a statement. Welch was arrested without incident. According to police, the suspect entered the Comet Ping Pong restaurant in DC around 3pm and pointed the firearm at an employee. He then discharged it without anybody getting hurt. Witnesses said restaurant patrons scattered from the venue. "Pizzagate" concerns a baseless conspiracy theory about a secret pedophile group, the Comet Ping Pong restaurant, and Hillary Clinton's campaign chief, John Podesta.

The Pizzagate conspiracy names Comet Ping Pong as the secret headquarters of a non-existent child sex-trafficking ring run by Clinton and members of her inner circle. James Alefantis, the restaurant's owner, said he has received hundreds of death threats.

According to BuzzFeed, the Pizzagate theory is believed to have been fostered by a white supremacist's tweets, the 4chan message board, Reddit, Donald Trump supporters, and right-wing blogs. The day before Thanksgiving, Reddit banned a "Pizzagate" conspiracy board from the site because of a policy about posting personal information of others. Reddit CEO Steve Huffman, who was facing vitriolic online fire over the move, altered Reddit comments directed at him; he later apologized for doing so and pledged more proactive steps to clean up the Reddit online community, which he said was "not sustainable" in its current form. After the latest incident at the Comet Ping Pong, the Metropolitan Police Department said in a statement that it was "monitoring the situation and aware of general threats being made against this establishment." A fake story based on the conspiracy theory was even fueled by General Mike Flynn, Donald Trump's national security advisor pick.

Days before the election, he tweeted a fake news story about it and said "U decide" and "MUST READ!" Alefantis, the pizzeria's owner, told CNN, "What happened today demonstrates that promoting false and reckless conspiracy theories comes with consequences.
I hope that those involved in fanning these flames will take a moment to contemplate what happened here today, and stop promoting these falsehoods right away." The police said two weapons were found in the pizzeria.

Another weapon was discovered in the suspect's vehicle, the authorities said. Welch is expected to appear in a local court Monday afternoon.
NEWS ANALYSIS: With the possibility that fake news may have impacted the U.S. presidential election, people are giving a new look to a relatively old type of operation—propaganda and misinformation. Three days before the 2016 presidential election, Facebook users began sharing an anti-Hillary Clinton article.

The headline, in all caps, declared "FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE." The article was a fake, naming a fictional police chief and published on an incomplete site called the "Denver Guardian," a site whose domain had been registered only the year before, on July 17, 2015.

The Denver Post did a teardown of the article, listing all the warning signs that should have tipped off readers that the story was false.
Instead, credulous—or partisan—readers shared the article at least 568,000 times, rocketing it to more than 15.5 million impressions, according to the Huffington Post. While there is no evidence that fake news influenced the U.S. presidential election, information-security experts point to the surge in false stories before the election as an example of the increasing difficulty of delivering accurate information to the right people at the right time. "I think it definitely is a data-integrity challenge," Christopher Paul, senior social scientist at the Rand Corp., told eWEEK. "What the extent of it is? It is hard to know.

There [are] different audiences, who are differently vulnerable." The outcry over fake news has given new focus to a relatively old type of operation—propaganda and misinformation.

A variety of actors have used misinformation as a way to influence the decisions of targeted people, and propaganda – which originally had the positive meaning of propagating the truth – was first used in the 17th century by the Catholic Church. Unlike past propaganda and misinformation campaigns, modern efforts benefit from the internet’s efficiencies and from a chaotic information landscape in which widely diverse online opinions blur the factual and the make-believe. Social media has become an echo chamber that amplifies the phenomenon.

Facebook and other social media sites have borne the brunt of recent criticism, as such sites are the filter through which many U.S. citizens view the outside world.
Sixty-two percent of Americans get at least some of their news from social media, according to a Pew Research Center study.

About 70 percent of Reddit users, for example, get some of their news from the site, while 66 percent of Facebook users get news from its service. During the election, fake news stories surged in popularity.

The top-20 fake news stories jumped to 8.7 million Facebook engagements—a measure including the total number of shares, reactions and comments—between the beginning of August and Election Day, winning the popularity contest with mainstream news, which only saw 7.7 million Facebook engagements, according to an investigation by Buzzfeed.
Seventeen of the top-20 fake news stories were pro-Donald Trump, the publication said. Did these stories affect people's perceptions enough to change the outcome of the election? It's plausible, according to Filippo Menczer, a professor of computer science and informatics at Indiana University in Bloomington. "Each piece of misinformation contributes to the shaping of our opinions," he wrote in an editorial in The Conversation. “Overall, the harm can be very real.
If people can be conned into jeopardizing our children’s lives, as they do when they opt out of immunizations, why not our democracy?” Initially, Facebook CEO Mark Zuckerberg downplayed the impact that such stories could have on the election cycle, but then acknowledged the problem of disinformation and information pollution on Facebook. "The bottom line is: we take misinformation seriously," he said on his regular blog. Yet, election shenanigans are just the most public face of misinformation campaigns.

Governments regularly use propaganda to convince the populace of views that support the nation-state’s interests.

And misinformation campaigns could become a more popular way of conducting deniable attacks on rival nations. On Sept. 11, 2014, for example, residents in Louisiana reportedly began seeing a variety of reports concerning a toxic cloud coming from a chemical plant in Centerville.