
6 Hot Cybersecurity Startups: MACH37’s Spring Class Of 2016

Intense 90-day program mentors budding entrepreneurs in the finer points of developing a viable technology business for the real world of information security.


The race is on for six teams of technologists and entrepreneurs from the Mid-Atlantic, Pacific Northwest, Northeast United States, and Turkey to turn their ideas -- seeded by a $50,000 grant from the Virginia-based MACH37 Cyber Accelerator -- into thriving, investable companies. The teams were chosen from a pool of 61 applicants. “We were looking for management teams that include technology founders and first-term entrepreneurs who have vision and want to create something compelling,” says Rick Gordon, managing partner of MACH37. More specifically, information security and business leaders who:

- Are building a disruptive information security technology product
- Are delivering foundational security capabilities that enable entirely new products and markets
- Have the will and endurance to turn their labor into commercial success
- Have built a team of two to four co-founders
- Need help with startup capital, introductions, and navigating pitfalls
- Are seeking rapid growth through venture capital
- Are willing to be in Virginia for the entire 90-day program and commit to the venture full-time

Gordon says this year’s spring cohort offers a diverse range of innovative solutions attacking problems ranging from phishing and attribution in threat intelligence, to “security as a service” and regulatory compliance, to the Internet of Things with an intrusion detection system for automotive infotainment systems. Over the next 14 weeks, participants will draw on the expertise of MACH37’s large network of successful security professionals, business experts, and entrepreneurs.

The program will culminate in June with a “Demo Day” where the entrepreneurs pitch and demonstrate their technology to an audience of external mentors, investors, and stakeholders. Find out more about security threats at Interop 2016, May 2-6, at the Mandalay Bay Convention Center, Las Vegas.

Click here for pricing information and to register. Marilyn has been covering technology for business, government, and consumer audiences for over 20 years. Prior to joining UBM, Marilyn worked for nine years as editorial director at TechTarget Inc., where she launched six Websites for IT managers and administrators supporting ...

How To Share Threat Intelligence Through CISA: 10 Things To Know

If you want those liability protections the Cybersecurity Information Sharing Act promised, you must follow DHS's new guidelines. Share information about breaches, attacks, and threats with the federal government without fear of legal repercussions -- that's the alluring promise of the Cybersecurity Information Sharing Act (CISA, passed as the Cybersecurity Act of 2015). However, those liability protections do not apply to any and all sharing, so if you want to be safe from litigation, you must share information in accordance with the guidelines recently released by the US Department of Homeland Security. Security and privacy professionals alike were anxiously awaiting these guidelines because they answer some of the questions about privacy protection that have lingered since CISA passed.

They also provide some instructions -- particularly for non-federal entities -- on precisely how to conduct their information sharing activities under the new law. Here's what you need to know.

1. You need to remove individuals' personal data before sharing it. The guidelines require that, before sharing data, an organization remove any information "that it knows at the time of sharing" to be personally identifiable information of "a specific individual that is not directly related to a cybersecurity threat." If you don't do that, you won't get liability protection. The guidelines acknowledge that there may be occasions when PII is "directly related," such as in a social engineering attack. However, sometimes those individuals' relevant characteristics can be shared (job title, for example), but anonymized first. "The DHS Guidance does a decent job of explaining what ['directly related'] means, but I believe there is still a lot left to subjective decision making by the company doing the sharing," says Jason Straight, chief privacy officer of UnitedLex, and speaker at the upcoming Interop Las Vegas conference. "If they make a 'bad call,' and share something they shouldn’t have, what happens? Do they not get liability protection? Who decides?" Straight also points out that this requires that organizations put in place people, processes, and technology they might not have had before.

2. The personal data you need to remove may be more extensive than you think. The guidelines provide a list of private data types that are protected by regulation, are unlikely to be directly related to a cybersecurity threat, and should therefore be on your watch list when scrubbing. That list includes not just your basic PII and personal health information, but also human resources information (including performance reviews), consumer information protected by the Fair Credit Reporting Act, education history protected by the Family Educational Rights and Privacy Act, financial information (including investment advice) protected by the Gramm-Leach-Bliley Act, identifying information about property ownership (like vehicle identification numbers), and identifying information about children under 13 protected by the Children's Online Privacy Protection Act.
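The scrubbing step can be as simple as filtering known personal-data fields out of an indicator record before submission. A minimal sketch, assuming invented field names and a denylist chosen for illustration (they are not taken from the DHS guidance):

```python
# Hypothetical denylist of personal-data fields; a real program would
# derive this from the regulated data types listed in the guidelines.
PII_FIELDS = {"name", "email", "ssn", "dob", "performance_review"}

def scrub_indicator(record: dict) -> dict:
    """Return a copy of a threat-indicator record with fields that
    look like personal data removed before sharing."""
    return {k: v for k, v in record.items() if k.lower() not in PII_FIELDS}

raw = {
    "indicator": "198.51.100.7",
    "type": "malicious-ip",
    "email": "victim@example.com",  # PII: dropped before sharing
    "confidence": "high",
}
clean = scrub_indicator(raw)
```

In practice the hard part is the judgment call Straight describes -- deciding what is "directly related" to a threat -- not the filtering itself.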

3. Be particularly careful of Europeans' personal data. European privacy laws protecting personal data are much more rigorous than American ones, and the divide is only getting wider.

As we've explained before: The EU General Data Protection Regulation (GDPR), a replacement for the EU Data Protection Directive, is expected to be ratified by the European Parliament this spring session and to go into effect by 2018. The GDPR will expand the definition of "personal data" to "encompass other factors that could be used to identify an individual, such as their genetic, mental, economic, cultural or social identity," according to IT Governance. ... So, data on Europeans' shoe sizes, political affiliations, and more may be protected. Violations of GDPR carry proposed fines of up to 4% of annual global revenue, and many breaches of personal data must be reported within 72 hours of discovery. So, it's no small issue when the data is misused or lost.

Plus, the newly proposed trans-Atlantic data transfer agreement, EU-US Privacy Shield, if passed, will create a host of new regulations about how the US is permitted to handle data, and about European citizens' legal rights in the event that Americans violate their rights. You're better off upping your data classification game and avoiding sharing European citizens' data through CISA at all.

4. If you want liability protection, share with DHS or ISACs, not other federal agencies. Liability protection is only given when you share information with DHS’s National Cybersecurity and Communications Integration Center (NCCIC) -- the official hub for the sharing of cyber threat indicators between the private sector and the federal government -- or with the industry ISACs (like FS-ISAC) that will pass the data on to DHS. Again, this only happens if the data is scrubbed of personal information before you share it. CISA does allow you to share cyber threat indicators with other federal agencies, "as long as ... the sharing is conducted for a cybersecurity purpose," but you will not get the liability protections.

5. DHS scrubs it of personal information too, but... DHS will review all threat data submitted and -- with automatic and manual means -- remove any remaining pieces of personal information before sharing it with any other agencies. So, no data submitted will go to waste; but you won't get the liability protection. Plus, there is a privacy issue, considering that one federal agency (DHS) has already seen information that it should not have. CISA does, however, require federal entities to notify, "in a timely manner, any United States person whose personal information is known or determined to have been shared in violation of CISA." That notification is only required for US persons, according to CISA, but "as a matter of policy, DHS extends individual notification to United States and non-United States persons alike in accordance with its Privacy Incident Handling Guidelines."

6. Joining AIS and building a TAXII client makes all this easier. All that data scrubbing might sound like a nightmare! Who would bother sharing anything at all? Luckily, DHS NCCIC has automated and standardized the process to make it less painful. The Automated Indicator Sharing (AIS) initiative allows organizations to format and exchange threat indicators and defensive measures in a standardized way, using standard technical specifications that were developed to satisfy CISA's private data scrubbing requirements. The Structured Threat Information eXpression (STIX) and Trusted Automated eXchange of Indicator Information (TAXII) are standards for data fields and communication, respectively; OASIS now manages the specs. To share threat info, AIS participants acquire their own TAXII client, which communicates with the DHS TAXII server.

As a DHS representative explained in a statement to Dark Reading: "A TAXII client can be built by any organization that wishes to do so based on the TAXII specification (http://taxiiproject.github.io/).

DHS has built an open-source TAXII client for any organization that would like to use it free of charge, or incorporate the code into their existing systems.
In addition, there are a number of commercially available products that incorporate TAXII connectivity.

A list can be found at http://stixproject.github.io/supporters/."  To date, four federal agencies and 50 non-federal entities have signed up for AIS.
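For a sense of what building such a client involves, here is a sketch of constructing a TAXII 1.1 Poll_Request message in Python. The collection name is a placeholder, and a real client would also need the HTTP headers, endpoint URL, and authentication that the TAXII specification defines:

```python
import uuid
import xml.etree.ElementTree as ET

# TAXII 1.1 XML message binding namespace (per the TAXII specification).
TAXII_NS = "http://taxii.mitre.org/messages/taxii_xml_binding-1.1"

def build_poll_request(collection: str) -> bytes:
    """Serialize a minimal TAXII 1.1 Poll_Request for a collection."""
    ET.register_namespace("taxii_11", TAXII_NS)
    req = ET.Element(f"{{{TAXII_NS}}}Poll_Request", {
        "message_id": uuid.uuid4().hex,   # unique per request
        "collection_name": collection,
    })
    params = ET.SubElement(req, f"{{{TAXII_NS}}}Poll_Parameters",
                           {"allow_asynch": "false"})
    response_type = ET.SubElement(params, f"{{{TAXII_NS}}}Response_Type")
    response_type.text = "FULL"           # ask for full content blocks
    return ET.tostring(req, encoding="utf-8")

# "hypothetical-collection" is a placeholder, not a real AIS collection name.
body = build_poll_request("hypothetical-collection")
# A real client would POST `body` to the server's poll service over HTTPS.
```

This covers only message construction; the transport details live in the specification linked above.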

7. There are other ways to share indicators, too. Threat info can also be shared with DHS via:

- Web form: https://www.us-cert.gov/forms/share-indicators
- Email: [email protected], containing a title; a type (either indicator or defensive measure); the valid time of the incident or of knowledge of the topic; the tactics, techniques, and procedures (TTP), even if pointing to a very simple TTP with just a title; and a confidence assertion regarding the level of confidence in the value of the indicator (e.g., high, medium, low).
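Internally, the email checklist above might be captured as a simple record before submission. The class below is an invented convenience for illustration, not a DHS-defined schema:

```python
from dataclasses import dataclass

# Allowed values mirror the checklist above.
CONFIDENCE_LEVELS = {"high", "medium", "low"}
SUBMISSION_TYPES = {"indicator", "defensive measure"}

@dataclass
class IndicatorSubmission:
    title: str
    kind: str        # "indicator" or "defensive measure"
    valid_time: str  # time of incident, or of knowledge of the topic
    ttp: str         # tactics, techniques, and procedures
    confidence: str  # confidence in the value of the indicator

    def __post_init__(self):
        # Reject records that would not satisfy the checklist.
        if self.kind not in SUBMISSION_TYPES:
            raise ValueError("type must be indicator or defensive measure")
        if self.confidence not in CONFIDENCE_LEVELS:
            raise ValueError("confidence must be high, medium, or low")

sub = IndicatorSubmission(
    title="Phishing domain observed in the wild",
    kind="indicator",
    valid_time="2016-03-20T14:00:00Z",
    ttp="Credential phishing via lookalike domain",
    confidence="medium",
)
```

Validating the fields up front keeps malformed submissions from being rejected (or losing liability protection) later.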

8. There are rules government agencies must follow, and punishments if they don't. The federal agencies that receive the data shared through CISA must follow certain operational procedures that moderate authorized access and ensure timely dissemination of threat data.

From the interim procedure document: Failure by an individual to abide by the usage requirements set forth in these guidelines will result in sanctions applied to that individual in accordance with their department or agency’s relevant policy on Inappropriate Use of Government Computers and Systems. Penalties commonly found in such policies, depending on the severity of misuse, include: remedial training; loss of access to information; loss of a security clearance; and termination of employment.

9. There are still privacy concerns. Although the list of privacy-related laws mentioned in section 2 above might seem pretty extensive, Jadzia Butler of the Center for Democracy and Technology pointed out: "...the list does not include the Electronic Communications Privacy Act (ECPA) or the Wiretap Act -- the two laws most likely to be 'otherwise applicable' to information sharing authorized by the legislation because they prohibit (with exceptions) the intentional disclosure of electronic communications." Another question: what will agencies do with all that data once they have it? Will it be used only for cybersecurity purposes? The CISA and DHS Privacy and Civil Liberties Interim Guidelines state specifically how the federal government can make use of the information. Other uses are expressly prohibited, but some privacy experts say the language itself is not prohibitive enough, and the official privacy impact assessment (published here) says "Users of AIS may use AIS cyber threat indicators and defensive measures for purposes other than the uses authorized under CISA." As is, the uses permitted by CISA extend beyond direct cybersecurity attacks.

They may also use the submitted information for the purposes of: responding to, preventing, or mitigating a specific threat of death, serious bodily harm, serious economic harm, a terrorist act, or the use of a weapon of mass destruction; responding to, investigating, prosecuting, or preventing a serious threat to a minor, including sexual exploitation or physical threats to safety; or preventing, investigating, disrupting, or prosecuting espionage, censorship, fraud, identity theft, or IP theft. As Butler wrote: For example, even under these guidelines, information shared with the federal government for cybersecurity reasons could be stockpiled and mined for use in unrelated investigations of espionage, trade secret violations, and identity theft. Without additional limitations, the information sharing program could end up being used as a tool by law enforcement to obtain vast swaths of sensitive information that could otherwise be obtained only with a warrant or other court order.
In other words, privacy advocates’ warnings that CISA is really a surveillance bill dressed in cybersecurity clothing may still come to fruition.

10. The liability protections themselves aren't entirely clear. "The liability protection is fairly broad but not clear that it includes protection from disclosure through litigation process (discovery requests) or subpoenas," says Straight. "The big risk there, in my view, is that it would be potentially possible to use the fact that a breached company shared threat intel under CISA as evidence of when a company was aware of a threat or incident.

This could become part of a broader claim by a plaintiff that the breached company did not do enough to mitigate or respond effectively to the incident." Sharing threat data isn't the only thing that may come with risks; simply receiving threat feeds via AIS could have legal risks, according to Straight. "An organization that receives threat feeds should be prepared to take on the burden of assessing the threats and responding appropriately," he says. "This will create a burden on the receiving organization that did not exist before.

Also, I believe there is some risk in receiving threat data that you are not equipped to act upon.

Again, it is conceivable that the fact that you received 'notice' of a threat through threat sharing, did nothing, and were then compromised by that threat could be used against you in a litigation or even a regulatory action." So, sharing is caring, but do it carefully. "I should say that I am in favor of threat intel-sharing," says Straight, "but any organization seeking to do so should make sure it understands what it is getting into and can support an ongoing threat intelligence consumption, production, and sharing process.
In my view, none of the [government] documents or commentary I’ve seen so far, including DHS Guidance, sufficiently addresses the issues I have raised." Straight will present "Avoiding Legal Landmines Surrounding Your IT Infrastructure: Policies and Protocols" at Interop Las Vegas May 4.

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...

How 4 Startups Are Harnessing AI In The Invisible Cyberwar

Cybersecurity startups are setting their sights on a potential goldmine of automated systems they hope will be more effective than hiring human enterprise security teams. There is growing concern across the board that we might be losing control over cybersecurity.

The rapid changes in how we use technology to communicate and the increased number of connected devices means the points of entry or breach are growing.

Because the pace of change has been so rapid, security hasn't adapted fast enough and hackers are taking full advantage.

The traditional ways of dealing with cyber threats are beginning to look hopelessly inadequate. This concern goes right to the top.
Since entering the White House in 2009, President Obama has repeatedly called for improvements in cybersecurity and in December 2015 announced a new cybersecurity bill which allocated $14 billion of federal spending to further secure government information online. With global cyber spending expected to reach $170 billion by 2020, eyes are on the cybersecurity industry to see who can offer the best solutions. But while the rest of the industry gets up to speed, a number of forward-thinking cybersecurity startups are attempting to harness the power of artificial intelligence to strengthen the defenses of the good guys.

They are identifying, locating, and destroying potential threats in a manner that promises to be quicker and more effective than traditional methods.

Darktrace

Using machine learning techniques inspired by the self-learning intelligence of the human immune system, UK-based startup Darktrace tackles the challenge of detecting previously unidentifiable cyber threats in real time, and allows them to be eradicated more quickly than traditional approaches do. Unlike traditional cybersecurity systems, in which malicious threats and viruses are manually added to a list and then blocked, Darktrace uses a system based on machine learning and mathematics that can detect threats without any prior knowledge of what it is looking for, cutting out the need for human intervention.

The groundbreaking new system was developed by engineers and mathematicians from the University of Cambridge.

JASK

JASK, a San Francisco-based startup, is building what it calls “the world’s first, predictive security operations center" for enterprise-level cybersecurity.

The system aims to help enterprises of all sizes keep ahead of sophisticated cyberattackers by moving past the limitations of existing solutions with proactive AI security measures. With enterprises adding more and more software applications to their networks, and relying more heavily on the cloud for saving data, JASK’s approach “finds threats buried in data -- all in an automated way that doesn’t require [companies] to throw more bodies at the problem,” according to Greg Martin, JASK founder and CEO.

Deep Instinct

Launched in November 2015, this Tel Aviv-based startup is using sophisticated deep learning algorithms to improve cybersecurity in the banking, financial, and government spheres in the U.S. and Israel.

The Deep Instinct engine is modeled on the human brain’s ability to learn. Once a brain learns to identify an object, it can identify it again in the future instinctively.
Similarly, as Deep Instinct’s artificial brain learns to detect any type of cyber threat, its prediction capabilities become faster and more developed.

The company recently partnered with FireLayers to create the first commercially available AI solution for enterprise cloud applications.

The solution focuses on both detection and prevention, targeting the market for advanced persistent threat (APT) solutions.

harvest.ai

harvest.ai is approaching AI cybersecurity from a slightly different angle, based on the idea that to truly secure your defenses, you need to know your weak points and principal targets. Founder and CEO Alexander Watson is no stranger to industrial espionage and cyberattacks, having worked as a field agent for the NSA for nearly a decade.

The company has created AI-based algorithms that learn the business value of critical documents, monitor who is using or moving them, and detect and stop data breaches from targeted attacks and insider threats before data can be copied or stolen. harvest.ai’s MACIE system detects anomalies in how users access the network by analyzing changes in location of access, browsing habits, data transfers and other telemetry that can be harnessed from external systems.
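A toy version of that kind of access-anomaly detection illustrates the idea; the fields, threshold, and scoring rule below are invented for illustration and bear no relation to harvest.ai's actual algorithms:

```python
from collections import defaultdict

class AccessMonitor:
    """Flag logins from locations a user has rarely used before."""

    def __init__(self, min_seen: int = 2):
        # Per-user counts of accesses from each location.
        self.history = defaultdict(lambda: defaultdict(int))
        self.min_seen = min_seen

    def record(self, user: str, location: str) -> bool:
        """Record an access; return True if it looks anomalous,
        i.e. a rarely seen location for a user with a baseline."""
        seen = self.history[user][location]
        self.history[user][location] += 1
        total = sum(self.history[user].values())
        return seen < self.min_seen and total > self.min_seen

mon = AccessMonitor()
mon.record("alice", "London")  # building a baseline, not anomalous
mon.record("alice", "London")
mon.record("alice", "London")
```

A production system would weigh many more signals (browsing habits, data transfer volume, time of day), but the shape -- learn a per-user baseline, flag deviations -- is the same.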

The system can also alert users if an important document is accidentally shared publicly on a cloud or network, or sent to the wrong person.

Andrew Thomson is the CEO and Founder of VentureRadar, a big data and machine-learning company that discovers and ranks companies, making them visible to potential partners, customers, and investors.
In recent months the company has been commissioned to do various scouting ...

The Threat Of Security Analytics Complexity

Congratulations! You're protecting your organization with layered security...but now you're drowning in more security analytics data flows than you can handle.


Threat Intelligence's Big Data Problem

Security teams are drowning in often useless threat intel data, but signs of maturity are emerging in what IT-Harvest predicts will be a $1.5 billion market by 2018.

First in a series on the evolution of threat intelligence.

Something’s gotta give: nearly three-fourths of enterprises today say they ignore security events because they’re overwhelmed by the deluge of alerts.

And that doesn’t even take into account the firehose of threat intelligence data they’re funneling today, a new report shows. Mega-retailer Target was the poster child for security alert awareness gone bad -- the needle in the haystack Target dismissed was actually the clue that it was under a major attack in the fall of 2013. Nearly three years after that epic data breach, security events, alerts, and threat intelligence feeds are exploding in many enterprises hungry for hints that they are in the bullseye.

The tradeoff is that this deluge of data is drowning security teams who must sift, separate, and correlate the real threats from the false positives or irrelevant information. Security event overload alone is causing some dramatic fallout: more than half of all security events get ignored by IT security pros due to the overload of information, according to a new Enterprise Strategy Group (ESG) report that surveyed 125 IT security pros on the state of incident response in their organizations.

Around 30% of those organizations say they have some 11 different threat intelligence feeds flowing in as well, the Phantom-commissioned report -- published today -- found. Threat intelligence data is all about helping enterprises block or protect against the newest threats by providing in-the-wild attack and threat artifacts and intel that companies can compare and correlate with their own security data.

But for many organizations, the deluge of this type of information isn’t much help if they can’t triage and apply it effectively.

The threat intelligence market itself is booming, growing at a rapid clip of 84% annually, according to new data published today by IT-Harvest.

The threat intel market -- which was at $251 million in 2015 -- is expected to reach more than $460 million this year, says Richard Stiennon, chief research analyst for IT-Harvest. Threat intelligence platform products, such as those of ThreatConnect, ThreatStream (now Anomali), ThreatQuotient, and BrightPoint Security, made up $61 million of 2015’s total threat intel market revenues, according to IT-Harvest.

The market is on track to hit $1.5 billion in 2018 at the current rate of growth, according to the report, which includes a look at more than 20 threat intelligence vendors, including FireEye’s iSIGHT Partners, Cyveillance+LookingGlass, Digital Shadows, and Flashpoint Intel. “I expect a lot of churn and also a lot of startups,” Stiennon says of the threat intelligence space. Signs of churn started to show in the past month, with Norse Corp.’s mass layoffs and executive shakeout.
Security experts attributed Norse’s plight more to its own internal managerial problems, lack of a solid product, and some weak analysis reports than to any bellwether signal for the threat intel space.

‘Threat’ Rebrand

Meanwhile, recent moves by other threat intel vendors show signs of a logical evolution toward making threat intel more useful and manageable. Late last month, ThreatStream dropped the “threat” moniker and rebranded itself as Anomali, now focusing not just on delivering threat intel, but also on prioritizing and matching it for individual organizations.

Threat intel has its own big data problem, according to executives at Anomali, which now is filtering down indicators of compromise (IOCs) and other threat intel for security information and event management (SIEM) systems, which it says weren’t built to process millions of IOCs. “When we started [out], the volume of threat intelligence coming from feed vendors and open communities versus now was more manageable.

There were hundreds of thousands of indicators of compromise, and now there are tens of millions,” says Hugh Njemanze, CEO of Anomali. “We expect this year to [reach] 100 million IoCs.

There’s been an explosion.” That kind of threat intel volume is more than most in-house SIEM tools today can handle. “Even the most robust SIEM is not able to ingest more than 1 million IOCs,” he says.

Anomali’s new cloud-based products basically match event flows with IOCs, for example, and then feed contextual information about the incident to the SIEM. “We’re taking on the burden of discovery and matching and letting the SIEM do what it’s good at: analyzing the millions of events they are collecting,” Njemanze explains.
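The matching step Njemanze describes can be sketched in a few lines: hold the large IOC set outside the SIEM and forward only enriched hits. The IOC values and event fields below are invented for the example, not drawn from any vendor's product:

```python
# Hypothetical IOC set; in practice this would hold millions of entries
# and live outside the SIEM, as described above.
iocs = {
    "203.0.113.9": "known C2 server",
    "evil.example.net": "phishing domain",
}

def match_events(events):
    """Yield only events whose destination matches an IOC,
    annotated with context for the SIEM to analyze."""
    for ev in events:
        context = iocs.get(ev["dest"])
        if context is not None:
            # Enrich the event rather than forwarding the raw IOC list.
            yield {**ev, "ioc_context": context}

events = [
    {"src": "10.0.0.5", "dest": "203.0.113.9"},
    {"src": "10.0.0.6", "dest": "93.184.216.34"},  # no IOC match
]
hits = list(match_events(events))
```

The SIEM then sees one contextualized hit instead of every raw event plus millions of indicators, which is the division of labor Njemanze is arguing for.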
Security operations center teams need to know which IOCs are relevant, so that’s what Anomali is offering. Anomali still offers ThreatStream Optic, its threat intel feed, in addition to its new Harmony Breach Analytics and Anomali Reports products. “We still see ourselves as a threat intelligence player, but we’re radically shifting how threat intel can be operationalized,” he says. “I’m convinced TI platforms like ThreatStream’s [Anomali’s] have an opportunity.
I haven’t seen anyone targeting dealing with the data.

Building a distiller takes the good stuff out, and turns the SIEM into a log manager,” IT-Harvest’s Stiennon says. ThreatConnect, meanwhile, has upgraded its ThreatConnect platform to better integrate a company’s security incidents with threat intelligence. “The goal of my platform is to bring the two together: every data set and correlate it with events and incidents that are unfolding so human beings don’t have to look at the noise.
Instead, the most important things bubble up to the top, based on the underlying analytics,” says Adam Vincent, CEO of ThreatConnect. ThreatConnect has partnered with Splunk, Palo Alto Networks, and others, to integrate threat intel with an organization’s incident detection and response processes.
Version 4.0 of the ThreatConnect platform also lets companies customize reports for all levels of users, including C-level executives who want to see a map of which regions are targeting their company, for example, Vincent says. Threat intelligence is about empowering decision-making, he says. “It’s not the end goal in itself.” So rather than a retailer looking at 100 events in the order in which they occur, the threat intel platform would flag and prioritize events that appear to be connected or related to other attacks in the wild. “It would say this event is important because it looks coordinated, and it’s against equipment that has known vulnerabilities,” Vincent says. “And it looks at what type of techniques and tradecraft the [attacker] is using ...

As the [company] investigates it, they are collecting additional information that is going to inform their decision-making.” Most security vendors now offer some level of threat intelligence, and there are several open-source threat intel feeds as well. “The challenge right now is to tell high-quality threat intelligence from low-quality threat intelligence.
It’s tough to distinguish, given the abundancy of options” out there, says Oliver Friedrichs, founder and CEO of startup Phantom. “One of the biggest challenges is how to reconcile all the various feeds and how to actually make sense of them.

The threat intelligence platform space is really striving to solve that,” says Friedrichs, whose firm offers an automation and “orchestration engine” for an organization’s security tools.

Kelly Jackson Higgins is Executive Editor at DarkReading.com.
She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...

Anonymous To Launch Cyberattacks Against Trump Campaign Starting April 1

Planned attacks a response to the candidate's controversial campaign rhetoric, hacking collective says. In a reprise of numerous similar campaigns from the past, the Anonymous hacktivist collective has announced plans to disrupt Donald Trump’s presidential campaign by launching cyberattacks on websites associated with the controversial candidate, starting April 1. In a message on Anonymous’ YouTube channel, an individual purporting to be a spokesman for the collective urged those aligned with its cause to shut down Trump campaign websites and to “expose what he doesn’t want the public to know.” The spokesman, wearing the group’s signature Guy Fawkes mask, described the planned attacks as a response to Trump’s “appalling actions and ideas” in running his presidential campaign. “We need to dismantle his campaign and sabotage his brand,” the masked spokesman exhorted viewers. The Trump attack announcement, with its usual colorful rhetoric, has raised some predictable questions about whether Anonymous is still capable of mustering the support needed to launch a disruptive cyber campaign against the leading Republican presidential candidate. Rene Paap, security evangelist at A10 Networks, says the Trump campaign appears to have foreseen the threat and protected its domain by using a Content Delivery Network (CDN) service. “A CDN provides an extra caching layer in-between the content of a website and the client browser.
It is a large network with many points of presence around the world, aimed to redirect a browser to the nearest location where cached content is served,” says Paap. “For Anonymous to break through this is going to be difficult, as the CDN anticipates DDoS attacks,” he says. Anonymous and its collection of loosely affiliated followers around the world have pulled off several high-profile hacktivist campaigns in the past.

Among the examples that Anonymous itself touts are a 2008 campaign against the Church of Scientology, in which it crashed the church’s website; Operation Darknet, in which it exposed IP addresses of nearly 200 alleged pedophiles; and its release of an incriminating video in a 2012 case involving a sexual assault on a high school girl in Steubenville, Ohio. Following last year’s terrorist attacks on France’s satirical newspaper Charlie Hebdo, Anonymous launched a campaign to expose and disrupt websites spreading jihadist propaganda and, more recently, it has committed to doing the same to ISIS-affiliated websites.
Soon after launching the campaign last February, Anonymous claimed it had succeeded in taking down over 1,000 sites and over 9,000 Twitter accounts affiliated with the terror group. Whether Anonymous can replicate such campaigns in its planned attacks against Trump's websites and online presence remains to be seen. However successful the planned attacks turn out to be, Anonymous' call to attack the Trump campaign is another example of how the worlds of politics and cybersecurity are becoming increasingly intertwined. The Internet -- social media, in particular -- has become a primary vehicle for candidates to communicate with voters, raise campaign awareness, target specific demographics, gauge voter sentiment, and solicit donations.

But the growing use of these channels has given threat actors new ways to attack Internet users, security vendor Forcepoint noted last year in its 2016 predictions report (registration required). One danger is that attackers will use email lures related to 2016 campaign issues to distribute malicious payloads to unsuspecting users. “Attackers frequently see large events as an opportunity to launch cyber-attacks on a curious population,” Forcepoint pointed out in its report. “Political campaigns, platforms and candidates present a huge opportunity to tailor highly effective lures.” Another issue is the use of social media to misrepresent or misdirect public perception of candidates and events related to the presidential campaign.
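To make the lure idea concrete, here is a deliberately simplified sketch (not Forcepoint's product, and far too naive for real-world filtering) of how a mail gateway might flag messages that pair election-themed subject lines with risky attachment types:

```python
# Toy illustration of campaign-themed lure detection. The keyword and
# extension lists are invented examples, not vendor rules.
CAMPAIGN_KEYWORDS = {"election", "candidate", "debate", "ballot", "donation"}
RISKY_EXTENSIONS = (".exe", ".scr", ".js", ".docm", ".zip")

def looks_like_campaign_lure(subject, attachments):
    """Return True when an election-themed subject arrives with a
    potentially executable attachment -- the combination Forcepoint
    warns attackers exploit around big events."""
    words = set(subject.lower().split())
    themed = bool(words & CAMPAIGN_KEYWORDS)
    risky = any(name.lower().endswith(RISKY_EXTENSIONS) for name in attachments)
    return themed and risky

print(looks_like_campaign_lure("Urgent: debate donation receipt", ["invoice.docm"]))  # True
print(looks_like_campaign_lure("Team lunch Friday", ["menu.pdf"]))                    # False
```

Real filters weigh sender reputation, URL analysis, and sandboxing rather than bare keyword matches; the point of the sketch is only that topical events give attackers a cheap way past a curious reader's guard.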

As one example, the Forcepoint report pointed to a campaign by the Syrian Electronic Army (SEA) in which hackers supporting the government of President Bashar al-Assad targeted and defaced sites belonging to rival groups. Hackers affiliated with the same group also targeted the Facebook pages of former French President Nicolas Sarkozy and President Obama with spam messages supporting al-Assad, Forcepoint noted in its report. “The SEA also took over the Twitter accounts of legitimate news organizations, tweeting false news updates, creating uncertainty and alarm as the messages spread online before these accounts were again secured.”

Bob Hansmann, Forcepoint’s director of security analysis and strategy, says that campaigns that want to mitigate such threats need to make cybersecurity a core part of their planning. “A qualified CISO, as a ranking member of the campaign team, would be a game changer” for the presidential candidates, Hansmann says in comments to Dark Reading. “If a campaign team has one and, more importantly, if they listen to them, then the odds are in their favor,” he says. “They are likely less susceptible to an attack as well as more likely to maintain key operations in the face of a full or partially successful attack.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year ...

Hottest Topics To Come Out Of RSA Conference

Encryption, bug bounties, and threat intel dominated the mindshare of the cybersecurity hive mind at RSAC last week. SAN FRANCISCO, CALIF. – RSA Conference 2016 -- With one of the biggest crowds ever to hit Moscone for RSA Conference USA, the gathering last week of 40,000 security professionals and vendors was like a convergence of water cooler chatterboxes from across the entire infosec world. Whether at scheduled talks, in bustling hallways, or at cocktail hours in the bars nearby, a number of definite themes wound their way through discussions all week. Here's what kept the conversations flowing.

Encryption Backdoors

The topic of government-urged encryption backdoors was already promising to be big at the show, but the FBI-Apple bombshell ensured that this was THE topic of RSAC 2016.

According to Bromium, a survey of attendees showed that 86% of respondents sided with Apple in this debate, so much of the chatter was 100 different ways of explaining the inadvisability of the FBI's mandate. One of the most colorful quotes came from Michael Chertoff, former head of the U.S. Department of Homeland Security: "Once you’ve created code that’s potentially compromising, it’s like a bacteriological weapon. You’re always afraid of it getting out of the lab.”

Bug Bounties

In spite of the dark cast the backdoor issue set over the federal government's relations with the cybersecurity industry, there was plenty of evidence of positive public-private cooperation.

Exhibit A: the "Hack the Pentagon" bug bounty program announced by the DoD in conjunction with Defense Secretary Ash Carter's appearance at the show. While bug bounty programs are hardly a new thing, the announcement shows how completely these programs have become mainstream best practices. "There are lots of companies who do this,” Carter said in a town hall session with Ted Schlein, general partner at Kleiner Perkins Caufield & Byers. “It’s a way of kind of crowdsourcing the expertise and having access to good people and not bad people. You’d much rather find vulnerabilities in your networks that way than in the other way, with a compromise or shutdown.”

Threat Intel

There was no lack of vendors hyping new threat intelligence capabilities at this show, but as with many hot security product categories, threat intel is suffering a bit as the victim of its own success.

The marketing machine is now in full gear, slapping the threat intel label on any feature that even remotely resembles it; one vendor lamented to me off the record, "most threat intel these days is not even close to being real intelligence." In short, threat intel demonstrated at the show that it is reaching the peak of the classic hype cycle pattern. RSAC attendees had some great evidence of that hanging around their necks: just a month after the very public dismantling of Norse Corp., the show's badge holder lanyards still bore the self-proclaimed threat intelligence vendor's logos.

But as Robert Lee, CEO of Dragos Security, capably explained over a month ago in the Norse fallout, this kind of failure (and additional disillusionment from customers led astray by the marketing hype) is not necessarily a knock on the credibility of threat intel as a whole.
It is just a matter of people playing fast and loose with the product category itself. "Simply put, they were interpreting data as intelligence," Lee said. "There is a huge difference between data, information, and intelligence.
So while they may have billed themselves as significant players in the threat intelligence community, they were never really accepted by the community, or participating in it, in the eyes of most leading analysts and companies. Therefore, they aren’t a bellwether of the threat intelligence industry."

Related Content: Find out more about security threats at Interop 2016, May 2-6, at the Mandalay Bay Convention Center, Las Vegas. Register today and receive an early bird discount of $200.

Ericka Chickowski specializes in coverage of information technology and business innovation.
She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Hack The Pentagon: DoD Launches First-Ever Federal Bug Bounty Program

Defense Secretary Ash Carter offers insight into DoD's new vulnerability-hunting program that offers monetary awards. SAN FRANCISCO, CA – RSA Conference 2016 – The US Defense Department is inviting vetted white-hat hackers to hunt for vulnerabilities in its public web pages under a pilot bug bounty program.

The new “Hack the Pentagon” program announced today by DoD officials took the security industry by surprise. Bug bounty programs are gradually catching on in the commercial world, but no one expected the Pentagon—much less the feds—to launch one.

The DoD program aims to tap expertise from the private sector as the first step in a planned series of programs to test for bugs in DoD websites, applications, and networks.

DoD will give monetary awards to hackers who find bugs, but many of the program's details have not yet been disclosed. Defense Secretary Ash Carter, here today, shed more light on why DoD made such a bold move. “We’re trying to adopt what is a best practice. There are lots of companies who do this,” Carter said in a town hall session with Ted Schlein, general partner at Kleiner Perkins Caufield & Byers. “You invite people to come and attack you and find your vulnerabilities.
It’s a way of kind of crowdsourcing the expertise and having access to good people and not bad people. You’d much rather find vulnerabilities in your networks that way than in the other way, with a compromise or shutdown.”

Participants must be vetted, of course: they register and undergo a background check. “We have to make sure they are a white hat,” Carter said. He said the hackers who participate in the program won’t be hacking any of DoD’s other systems or networks, such as its mission-facing systems.

Katie Moussouris, chief policy officer of HackerOne, called the DoD’s bug bounty program a “landmark event” for the federal government as well as for security research. “This legitimizes hacking for defensive purposes,” she says. It’s also a powerful recruiting tool for the DoD, which like many other organizations faces a talent gap in cybersecurity, says Moussouris, whose company sells a platform for vulnerability coordination and bug bounty programs. “As a means of identifying talent, it’s very significant.” That doesn’t mean only young hacker talent will take on the DoD’s Hack the Pentagon challenge. Moussouris expects seasoned hackers to sign up as well, to be some of the first to find bugs in the DoD’s websites.

Carter told RSA attendees that the program also highlights a cultural shift for DoD in cybersecurity. “It’s okay to tell us where we screwed up or if something is wrong. That to me is one of the great messages” here, he said.

Meanwhile, Schlein asked Carter to weigh in on the FBI-Apple dispute, in which Apple is refusing to help the FBI unlock encryption on an iPhone used by San Bernardino terror suspect Syed Farook.

Carter declined to comment on specifics of the case, noting that it’s a “law enforcement matter,” but he did share his view on encryption backdoors: “I’m not a believer in backdoors or a single technical approach to what is a complex” issue, he said. “I don’t think we ought to let one case drive a particular conclusion or solution. We have to work together" to come up with a solution, he said. “I’m behind strong data security and strong encryption – no question about it,” he said.

Related Content: Find out more about security threats at Interop 2016, May 2-6, at the Mandalay Bay Convention Center, Las Vegas. Register today and receive an early bird discount of $200.

Kelly Jackson Higgins is Executive Editor at DarkReading.com.
She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...

Using Offensive Security Mindset To Create Best Defense

SPONSORED: Mike Viscuso, CTO of Carbon Black, and Ben Johnson, Chief Security Strategist of Carbon Black talk to Brian Gilloly at the RSA Conference about how their background in offensive security helps them think like attackers, and better defend ag...