
Hackers Attack Major US Law Firms

Hackers broke into the computer networks of prominent US law firms, and the FBI is investigating whether the stolen data was used for illegal trading, the Wall Street Journal reports. Major firms including Cravath Swaine & Moore and Weil Gotshal & Manges suffered cyberattacks last year, but it has yet to be determined what information was breached or whether it was used for insider trading. The Manhattan U.S. attorney's office and the FBI opened an investigation into the attacks last year. Officials from Weil Gotshal have not yet commented on the incident, while Cravath said in a statement that the incident did not have a major impact on its systems. Both firms represent Wall Street banks and Fortune 500 companies in matters such as merger negotiations and lawsuits. Security firm Flashpoint, like the FBI, had issued alerts and notices to law firms in the past few months warning of possible attacks. Read more about the cyberattack campaign against law firms in the Wall Street Journal report.


Machine Learning In Security: Seeing the Nth Dimension in Signatures

How adding "supervised" machine learning to the development of n-dimensional signature engines is moving the detection odds back to the defender. Second in a series of two articles about the history of signature-based detections and how the methodology has evolved to identify different types of cybersecurity threats. Many security vendors are now incorporating increasingly sophisticated machine learning into their cloud-based analysis and classification systems, and into their products.

All of these techniques have already proven their value in Internet search, targeted advertising and social networking business arenas. For example, supervised learning models lie at the heart of ensuring that the best and most applicable results are returned when searching for the phrase “never going to give you up.” In the information security world, supervised learning models are a natural progression of the one, two, and multi-dimensional signature systems discussed in my earlier article.

At its core, instead of humans arguing over which features and attributes of a threat are most relevant to a detection, mathematics and science are used to find and evaluate the most important artifacts, and to automatically construct a sophisticated signature.

N-dimensional Signatures
Multidimensional signatures, and the security products that use them, rely heavily on human researchers and analysts to observe and classify each behavior for efficacy. If a threat exhibits a new malicious behavior (or a false-positive behavior has been identified in the field), the analyst must manually create or edit a signature element and its classification, and include it in an update.

The assumption is that humans can identify the most relevant elements of a threat and label them. The application of machine learning to the problem largely removes humans and their biases from the development of an n-dimensional signature (often called a "classification model"). Instead of manually trying to figure out and label all the good, bad, and suspicious behaviors, a machine is fed a collection of "known bad" and "known good" samples, which could be binary files, network traffic, or even photographs. It then compares all the observable behaviors of the collected samples, automatically determines which behaviors are more or less prevalent in each class of samples, calculates a weighting factor for each behavior, and combines all that intelligence into a single model of n dimensions – where n is a variable size based upon the type and number of samples and behaviors the machine used.

Enter 'Supervised Learning'
Different sample volumes, and different samples supplied over time, will often affect n.
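As a rough sketch of the training process just described -- with made-up behavior names, and a simple log-odds weighting standing in for whatever mathematics a given vendor actually uses -- the machine learns one weight per observed behavior and combines them into a single scoring model:

```python
import math

def train(samples):
    """samples: list of (behavior_set, label) pairs, label "bad" or "good".
    Returns one weight per observed behavior: the log of how much more
    prevalent it is among bad samples than good ones (Laplace-smoothed)."""
    bad = [f for f, label in samples if label == "bad"]
    good = [f for f, label in samples if label == "good"]
    behaviors = set().union(*(f for f, _ in samples))
    weights = {}
    for b in behaviors:
        p_bad = (sum(b in f for f in bad) + 1) / (len(bad) + 2)
        p_good = (sum(b in f for f in good) + 1) / (len(good) + 2)
        weights[b] = math.log(p_bad / p_good)
    return weights

def classify(weights, behaviors):
    # The "signature" is the whole weight vector; scoring is one sum.
    score = sum(weights.get(b, 0.0) for b in behaviors)
    return "bad" if score > 0 else "good"

training = [
    ({"writes_registry", "disables_updates", "mass_email"}, "bad"),
    ({"writes_registry", "adds_startup_entry", "disables_updates"}, "bad"),
    ({"writes_registry", "reads_contacts"}, "good"),
    ({"reads_contacts"}, "good"),
]
model = train(training)  # n here is the number of distinct behaviors seen
print(classify(model, {"disables_updates", "mass_email"}))  # prints: bad
```

No human decided that disabling updates matters more than touching the registry; the weighting fell out of the labeled samples.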
In machine learning terminology, this process is called “supervised learning.”  Historically, there existed a class of threat detection referred to as “Anomaly Detection Systems” (ADS) that effectively operated on the premise of baselining a network or host activity.
In the case of network ADS (i.e. NADS), the approach entailed constructing a map of network devices and identifying who talks to whom, over which ports and protocols, how often, and in what kind of volume. Once that baseline was established (typically over a month), any new chatter that deviated from the model (e.g. a new host added to the network) generated an alert – subject to certain defined thresholds. That approach obviously generated incredibly high volumes of alerts, and detection quality was governed by those threshold settings.
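A toy version of that NADS baselining approach, with illustrative flows, shows why the alert volume balloons -- every flow not seen during the baseline period fires:

```python
# Baseline phase (typically about a month): record every
# (source, destination, port) flow observed on the network.
baseline = {
    ("10.0.0.5", "10.0.0.9", 445),
    ("10.0.0.5", "8.8.8.8", 53),
    ("10.0.0.7", "10.0.0.9", 443),
}

def is_anomalous(flow):
    """Detection phase: any flow absent from the baseline is an anomaly."""
    return flow not in baseline

observed = [
    ("10.0.0.5", "8.8.8.8", 53),     # known chatter: no alert
    ("10.0.0.99", "10.0.0.9", 445),  # new host on the network: alert
]
alerts = [flow for flow in observed if is_anomalous(flow)]
print(alerts)  # [('10.0.0.99', '10.0.0.9', 445)]
```

Any legitimate change -- a new laptop, a patched server reaching a new update host -- fires just as loudly as an intrusion, which is exactly the threshold-tuning problem described above.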

As a technology, ADS represented a failed branch of the threat detection evolutionary tree. Without getting into the math, unsupervised machine learning has allowed security vendors to revisit the ADS path and detection objectives – and overcome most of the alerting and threshold problems.

The detection models and engines that use unsupervised machine learning still require an element of baselining, but continually learn and reassess that baseline on an hourly or daily basis.  As such, these new detection systems are capable of identifying attack vectors such as “low-and-slow” data exfiltration, lateral movement, and staging servers.
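A minimal sketch of a continually re-learned baseline: an exponentially weighted mean and variance for a single per-host metric, re-estimated on every observation. The alpha, threshold, and warmup values are arbitrary illustrations, not anything a particular vendor uses:

```python
class RollingBaseline:
    """Tracks a per-host metric (e.g. bytes sent per hour) and flags
    values far outside the continually re-learned baseline."""
    def __init__(self, alpha=0.1, threshold=3.0, warmup=10):
        self.alpha = alpha          # how quickly the baseline adapts
        self.threshold = threshold  # alert at this many std deviations
        self.warmup = warmup        # observations before alerting starts
        self.n = 0
        self.mean = None
        self.var = 0.0

    def observe(self, value):
        self.n += 1
        if self.mean is None:       # first sample seeds the baseline
            self.mean = value
            return False
        std = self.var ** 0.5
        anomalous = (self.n > self.warmup and std > 0
                     and abs(value - self.mean) > self.threshold * std)
        # Update the baseline even on anomalies, so it keeps learning.
        delta = value - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return anomalous

b = RollingBaseline()
steady = [100, 102, 98, 101, 99, 103, 97, 100] * 5  # normal hourly volumes
alerts = [v for v in steady if b.observe(v)]        # steady traffic: no alerts
spike = b.observe(5000)                             # sudden burst: flagged
```

Because the baseline drifts with the data, slow organic growth in traffic never alerts, while a sharp burst -- the signature of a staging server or bulk exfiltration -- stands out; genuinely "low-and-slow" movement requires richer models than this single-metric sketch.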

These threats are difficult or cumbersome to detect using signature systems alone. This is why signature-based detection systems will continue to be valuable into the future – not replaced by, but serving as a companion to, the new advancements in unsupervised machine learning.
In other words, what the current generation of unsupervised machine learning brings to security is the ability to detect threats that are anomalies or unclassified events and behaviors. It is inevitable that machine learning approaches will play an increasingly important role in future generations of threat detection technology. Just as machine learning has been critical to the advancement of Internet search and social media applications, its application to information security will be just as significant.

Signature-based threat detection systems have been evolving for more than two decades, and the application of supervised machine learning to the development of n-dimensional signature engines over the last couple of years is already moving the detection odds back to the defender. When combined with the newest generation of unsupervised machine learning systems, we can expect that needle to shift even more rapidly in the defender's favor.

Return to part 1: Machine Learning In Security: Good & Bad News About Signatures

Related Content: Find out more about security threats at Interop 2016, May 2-6, at the Mandalay Bay Convention Center, Las Vegas.

Click here for pricing information and to register. Gunter Ollmann is chief security officer at Vectra. He has nearly 30 years of information security experience in an array of cyber security consulting and research roles.

Before joining Vectra, Günter was CTO of Domain Services at NCC Group, where he drove strategy ...

Machine Learning In Security: Good & Bad News About Signatures

Why security teams that rely solely on signature-based detection are overwhelmed by a high number of alerts. First in a series of two articles about the history of signature-based detections, and how the methodology has evolved to identify different types of cybersecurity threats. Because the term is associated with an outdated, manually intensive technology focused on older classes of threats, it's little wonder that vendors seek to distance the legacy term "signature" from their advanced detection technology.
Vendors haven’t necessarily been deceptive in the labeling of their latest generation of techniques; it’s often just easier to create a new label for something than to fully explain the context and evolution of what preceded it. Over the years, signature-based systems have changed and advanced, but the core concepts still lie at the heart of all modern detection systems – and will continue to be integral for the foreseeable future.

To understand what a "signature system" really is, we need to understand the evolution of the detection path as directed and discovered by human intervention.

One-dimensional signatures: Blacklists and whitelists are examples of one-dimensional signature systems.

They are found throughout security and exist in practically all detection and protection technologies.

They are by far the fastest and most efficient way of categorizing a data artifact (e.g. a domain name, IP address, user-agent, or MD5 hash).
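Because the check is a pure Boolean membership test, a one-dimensional signature amounts to a set lookup (illustrative domain names):

```python
# One-dimensional signature: a Boolean membership test against a list.
# A hash set makes each lookup O(1) per artifact.
domain_blacklist = {"evil.example", "malware-c2.example"}

def is_blocked(domain):
    return domain in domain_blacklist

print(is_blocked("evil.example"))  # True
print(is_blocked("safe.example"))  # False
```

The same shape works for any artifact type -- IP addresses, user-agents, file hashes -- which is why the pattern appears in practically every protection technology.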

As a Boolean operation, what you're looking for is either on the list or it is not.

Two-dimensional signatures: Classic regular-expression functions and string matching are examples of two-dimensional signature systems.

They are the fundamental building blocks of anti-malware, intrusion detection, and data leakage detection systems.
In malware analysis, they are often used to search a binary file for known strings that help label the type of threat it represents.

Two-dimensional signatures came to the fore as a means of detecting network-based threats within the content level of traffic – easily capable of identifying previously known exploits and host enumeration techniques. Data leakage prevention (DLP) is a more recent security technology that relies heavily upon two-dimensional signatures. Messages and file attachments are often scanned for specific strings (e.g. serial numbers, passwords, etc.) or construction formats (e.g. social security numbers of the format nnn-nn-nnnn, matched with a regular expression such as ^\d{3}-\d{2}-\d{4}$).

Multidimensional signatures: Security vendors developed a hybrid system as the threat spectrum grew and attackers found new ways to obfuscate the elements of their attacks that were most exposed to one-dimensional and two-dimensional signatures.
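As an aside, the SSN-style DLP check described above is a single regular-expression test (note the \d digit classes, which are easy to lose in print):

```python
import re

# The DLP pattern from the text, with the backslashes restored:
# \d matches a digit, so \d{3}-\d{2}-\d{4} matches nnn-nn-nnnn.
SSN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def contains_ssn(message):
    # Anchored match per token, so a longer run of digits is rejected.
    return any(SSN.match(token) for token in message.split())

print(contains_ssn("employee 078-05-1120 relocated"))  # True
print(contains_ssn("order 123-456 shipped today"))     # False
```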
Instead of triggering on a single signature, a multi-dimensional signature was created.
In both sandboxing and network behavioral monitoring, certain actions and activities are labeled as either suspicious or bad. When a threshold of suspicious or bad activities is reached, the threat is classified and labeled.

For example, a suspicious file is executed within a virtual environment.

The file attempts to write to the Windows registry (neither good nor bad), add a file to the Windows startup path (suspicious), disable Windows updates (bad), read from the user's contacts list (neither good nor bad), and then send email to every address listed in the contacts list (bad). Together, all of these individual actions (i.e. signatures) are combined and tallied, and a decision is made that the suspicious file is in fact malicious and most likely a spambot.

Signature systems all share the same characteristic of being able to promptly identify and label a threat.
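That sandbox verdict can be sketched as a weighted tally, with made-up action names, scores, and threshold:

```python
# Each observed action is itself a small signature, scored as
# neutral (0), suspicious (1), or bad (2); the verdict comes from
# the combined tally rather than any single trigger.
ACTION_SCORES = {
    "write_registry": 0,       # neither good nor bad
    "add_startup_entry": 1,    # suspicious
    "disable_updates": 2,      # bad
    "read_contacts": 0,        # neither good nor bad
    "mass_email_contacts": 2,  # bad
}
MALICIOUS_THRESHOLD = 4

def classify(observed_actions):
    score = sum(ACTION_SCORES.get(a, 0) for a in observed_actions)
    return "malicious" if score >= MALICIOUS_THRESHOLD else "benign"

sandbox_trace = ["write_registry", "add_startup_entry",
                 "disable_updates", "read_contacts", "mass_email_contacts"]
print(classify(sandbox_trace))  # prints: malicious (tally is 5)
```

No single action condemns the file; writing to the registry alone scores zero, but the combination crosses the threshold.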

As signature systems have evolved, they have become capable of detecting and classifying a broader range of threats.
In modern detection and prevention systems, a combination of different signature systems is used so that known threats can be labeled as accurately as possible; this, however, generates a high number of alerts that can overwhelm a team relying solely on signature-based detection.

Historically, the linear progression and sophistication of signature-based detection systems have been dependent upon human signature writers.

For each new threat, a unique signature or signature artifact is created by a skilled engineer or security researcher.

This pairing between a signature and its human creator means that as the number of threats has increased, so too has the number of skilled personnel needed to develop and support the signatures that detect them.

For obvious reasons, this is not a scalable business proposition – for neither the vendor nor the customer. New developments in machine learning – in particular, supervised and unsupervised learning algorithms – are now being applied to information security and are paving the way to a new class of signature systems capable of economically scaling to the threat.

Next in the series: Machine Learning In Security: Seeing the Nth Dimension in Signatures

Gunter Ollmann is chief security officer at Vectra. He has nearly 30 years of information security experience in an array of cyber security consulting and research roles.

Before joining Vectra, Günter was CTO of Domain Services at NCC Group, where he drove strategy ...

6 Hot Cybersecurity Startups: MACH37’s Spring Class Of 2016

Intense 90-day program mentors budding entrepreneurs in the finer points of developing a viable technology business for the real world of information security.

The race is on for six teams of technologists and entrepreneurs from the Mid-Atlantic, Pacific Northwest, Northeast United States, and Turkey to turn their ideas -- seeded by a $50,000 grant from the Virginia-based MACH37 Cyber Accelerator -- into thriving, investable companies.

The teams were chosen from a pool of 61 applicants. "We were looking for management teams that include technology founders and first-term entrepreneurs who have vision and want to create something compelling," says Rick Gordon, managing partner of MACH37. More specifically, information security and business leaders who:

- Are building a disruptive information security technology product
- Are delivering foundational security capabilities that enable entirely new products and markets
- Have the will and endurance to turn their labor into commercial success
- Have built a team of two to four co-founders
- Need help with startup capital, introductions, and navigating pitfalls
- Are seeking rapid growth through venture capital
- Are willing to be in Virginia for the entire 90-day program and commit to the venture full-time

Gordon says this year's spring cohort offers a diverse range of innovative solutions attacking problems ranging from phishing and attribution in threat intelligence, to "security as a service" and regulatory compliance, to the Internet of Things with an intrusion detection system for automotive infotainment systems. Participants for the next 14 weeks will draw on the expertise of MACH37's large network of successful security professionals, business experts, and entrepreneurs.

The program will culminate in June with a "Demo Day" where the entrepreneurs pitch and demonstrate their technology to an audience of external mentors, investors, and stakeholders.

Marilyn has been covering technology for business, government, and consumer audiences for over 20 years. Prior to joining UBM, Marilyn worked for nine years as editorial director at TechTarget Inc., where she launched six Websites for IT managers and administrators supporting ...

How To Share Threat Intelligence Through CISA: 10 Things To Know

If you want those liability protections the Cybersecurity Information Sharing Act promised, you must follow DHS's new guidelines. Share information about breaches, attacks, and threats with the federal government without fear of legal repercussions -- that's the alluring promise of the Cybersecurity Information Sharing Act (CISA, passed as the Cybersecurity Act of 2015). However, those liability protections do not apply to any and all sharing, so if you want to be safe from litigation, you must share information according to the guidelines recently released by the US Department of Homeland Security. Security and privacy professionals alike were anxiously awaiting these guidelines, because they answer some of the questions about how privacy would be protected that have persisted since CISA passed.

They also provide some instructions -- particularly for non-federal entities -- on precisely how to conduct their information sharing activities under the new law. Here's what you need to know.

1. You need to remove individuals' personal data before sharing it.
The guidelines require that, before sharing data, an organization remove any information "that it knows at the time of sharing" to be personally identifiable information of "a specific individual that is not directly related to a cybersecurity threat." If you don't do that, you won't get liability protection. The guidelines acknowledge that there may be occasions when PII is "directly related," such as in a social engineering attack. Sometimes, though, the relevant characteristics of those individuals (job title, for example) can be shared if they are anonymized first. "The DHS Guidance does a decent job of explaining what ['directly related'] means, but I believe there is still a lot left to subjective decision making by the company doing the sharing," says Jason Straight, chief privacy officer of UnitedLex and speaker at the upcoming Interop Las Vegas conference. "If they make a 'bad call' and share something they shouldn't have, what happens? Do they not get liability protection? Who decides?" Straight also points out that this requires organizations to put in place people, processes, and technology they might not have had before.

2. The personal data you need to remove may be more extensive than you think.
The guidelines provide a list of private data types that are protected by regulation, are unlikely to be directly related to a cybersecurity threat, and should therefore be on your watch list when scrubbing. That list includes not just basic PII and personal health information, but also human resources information (including performance reviews and the like), consumer information protected by the Fair Credit Reporting Act, education history protected by the Family Educational Rights and Privacy Act, financial information (including investment advice) protected by the Gramm-Leach-Bliley Act, identifying information about property ownership (like vehicle identification numbers), and identifying information about children under 13 protected by the Children's Online Privacy Protection Act.

3. Be particularly careful of Europeans' personal data.
European privacy laws protecting personal data are much more rigorous than American ones, and the divide is only getting wider.

As we've explained before:  The EU General Data Protection Regulation (GDPR), a replacement for the EU Data Protection Directive, is expected to be ratified by European Parliament this spring session, and go into effect by 2018. The GDPR will expand the definition of "personal data" to "encompass other factors that could be used to identify an individual, such as their genetic, mental, economic, cultural or social identity," according to IT Governance. ... So, data on Europeans' shoe sizes and political affiliations and more may be protected. Violations of GDPR have proposed fines of up to 4% of annual global revenue. Many breaches of personal data must be reported within 72 hours of discovery.
So, it's no small issue when the data is misused or lost. Plus, the newly proposed trans-Atlantic data transfer agreement, EU-US Privacy Shield, if passed, will create a host of new regulations about how the US is permitted to handle data, and what European citizens' legal rights are in the event that Americans violate their rights. You're better off upping your data classification game and avoiding sharing European citizens' data through CISA at all.

4. If you want liability protection, share with DHS or ISACs, not other federal agencies.
Liability protection is only given when you share information with DHS's National Cybersecurity and Communications Integration Center (NCCIC) -- the official hub for the sharing of cyber threat indicators between the private sector and the federal government -- or with the industry ISACs (like FS-ISAC) that will pass the data on to DHS. Again, this only happens if the data is scrubbed of personal information before you share it. CISA does allow you to share cyber threat indicators with other federal agencies, "as long as ... the sharing is conducted for a cybersecurity purpose," but you will not get the liability protections.

5. DHS scrubs it of personal information too, but...
DHS will review all threat data submitted and -- with automatic and manual means -- remove any remaining pieces of personal information before sharing it with any other agencies. So, no data submitted will go to waste, but you won't get the liability protection. Plus, there is a privacy issue, considering that one federal agency (DHS) has already seen information that it should not have. CISA does, however, require federal entities to notify, "in a timely manner, any United States person whose personal information is known or determined to have been shared in violation of CISA." That notification is only required for US persons, according to CISA, but "as a matter of policy, DHS extends individual notification to United States and non-United States persons alike in accordance with its Privacy Incident Handling Guidelines."

6. Joining AIS and building a TAXII client makes all this easier.
All that data scrubbing might sound like a nightmare! Who would bother sharing anything at all? Luckily, DHS NCCIC has automated and standardized the process to make it less painful. The Automated Indicator Sharing (AIS) initiative allows organizations to format and exchange threat indicators and defensive measures in a standardized way, using standard technical specifications that were developed to satisfy CISA's private data scrubbing requirements. The Structured Threat Information eXpression (STIX) and Trusted Automated eXchange of Indicator Information (TAXII) are standards for data fields and communication, respectively; OASIS now manages the specs. To share threat info, AIS participants acquire their own TAXII client, which communicates with the DHS TAXII server.
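As an illustration of the scrub-then-submit flow described above, the sketch below drops known PII fields from an indicator record and shapes what remains into the kinds of fields DHS asks for (title, type, valid time, TTP, confidence). The field names and the JSON layout are illustrative assumptions only -- real AIS submissions use the STIX/TAXII wire formats defined by those specifications:

```python
import json

# Illustrative field names only -- not DHS's actual schema.
PII_FIELDS = {"employee_name", "employee_email", "home_address"}

def scrub(record):
    """Drop fields known at the time of sharing to be personal data
    not directly related to the threat (the CISA liability standard)."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

def to_submission(record):
    """Shape a scrubbed record into the fields DHS asks for."""
    return json.dumps({
        "title": record["title"],
        "type": "indicator",
        "valid_time": record["valid_time"],
        "ttp": record["ttp"],
        "confidence": record["confidence"],
    })

observed = {
    "title": "Phishing sender seen in spear-phish campaign",
    "valid_time": "2016-04-01T00:00:00Z",
    "ttp": "Credential phishing via spoofed invoice",
    "confidence": "medium",
    "employee_name": "Jane Doe",  # targeted victim: must be removed
}
payload = to_submission(scrub(observed))
```

The point of the two-step shape is auditability: the scrub happens before anything is serialized for the wire, so the victim's name never leaves the organization.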

As a DHS representative explained in a statement to Dark Reading: "A TAXII client can be built by any organization that wishes to do so based on the TAXII specification (http://taxiiproject.github.io/).

DHS has built an open-source TAXII client for any organization that would like to use it free of charge, or incorporate the code into their existing systems.
In addition, there are a number of commercially available products that incorporate TAXII connectivity.

A list can be found at http://stixproject.github.io/supporters/." To date, four federal agencies and 50 non-federal entities have signed up for AIS.

7. There are other ways to share indicators, too.
Threat info can also be shared with DHS via a Web form (https://www.us-cert.gov/forms/share-indicators) or email ([email protected]). Submissions should contain:

- Title
- Type: either indicator or defensive measure
- Valid time of the incident or knowledge of the topic
- Tactics, techniques, and procedures (TTP), even if pointing to a very simple TTP with just a title
- A confidence assertion regarding the level of confidence in the value of the indicator (e.g. high, medium, low)

8. There are rules government agencies must follow, and punishments if they don't.
The federal agencies that receive the data shared through CISA must follow certain operational procedures that moderate authorized access and ensure timely dissemination of threat data.

From the interim procedure document: "Failure by an individual to abide by the usage requirements set forth in these guidelines will result in sanctions applied to that individual in accordance with their department or agency's relevant policy on Inappropriate Use of Government Computers and Systems. Penalties commonly found in such policies, depending on the severity of misuse, include: remedial training; loss of access to information; loss of a security clearance; and termination of employment."

9. There are still privacy concerns.
Although the list of privacy-related laws mentioned in section 2 above might seem pretty extensive, Jadzia Butler of the Center for Democracy and Technology pointed out: "...the list does not include the Electronic Communications Privacy Act (ECPA) or the Wiretap Act – the two laws most likely to be 'otherwise applicable' to information sharing authorized by the legislation, because they prohibit (with exceptions) the intentional disclosure of electronic communications." Another question: what will agencies do with all that data once they have it? Will it be used only for cybersecurity purposes? The CISA and DHS Privacy and Civil Liberties Interim Guidelines state specifically how the federal government can make use of the information. Other uses are expressly prohibited, but some privacy experts say the language itself is not prohibitive enough, and the official privacy impact assessment (published here) says "Users of AIS may use AIS cyber threat indicators and defensive measures for purposes other than the uses authorized under CISA." As is, the uses permitted by CISA extend beyond direct cybersecurity attacks.

They may also use the submitted information for the purposes of: responding to, preventing, or mitigating a specific threat of death, serious bodily harm, serious economic harm, a terrorist act, or the use of a weapon of mass destruction; responding to, investigating, prosecuting, or preventing a serious threat to a minor, including sexual exploitation or physical threats to safety; and preventing, investigating, disrupting, or prosecuting espionage, censorship, fraud, identity theft, or IP theft. As Butler wrote: For example, even under these guidelines, information shared with the federal government for cybersecurity reasons could be stockpiled and mined for use in unrelated investigations of espionage, trade secret violations, and identity theft. Without additional limitations, the information sharing program could end up being used as a tool by law enforcement to obtain vast swaths of sensitive information that could otherwise be obtained only with a warrant or other court order.
In other words, privacy advocates' warnings that CISA is really a surveillance bill dressed in cybersecurity clothing may still come to fruition.

10. The liability protections themselves aren't entirely clear.
"The liability protection is fairly broad, but it is not clear that it includes protection from disclosure through the litigation process (discovery requests) or subpoenas," says Straight. "The big risk there, in my view, is that it would be potentially possible to use the fact that a breached company shared threat intel under CISA as evidence of when a company was aware of a threat or incident.

This could become part of a broader claim by a plaintiff that the breached company did not do enough to mitigate or respond effectively to the incident." Sharing threat data isn't the only thing that may come with risks; simply receiving threat feeds via AIS could have legal risks, according to Straight. "An organization that receives threat feeds should be prepared to take on the burden of assessing the threats and responding appropriately," he says. "This will create a burden on the receiving organization that did not exist before.

Also, I believe there is some risk in receiving threat data that you are not equipped to act upon.

Again, it is conceivable that the fact that you received notice of a threat through threat sharing, did nothing, and were then compromised by that threat could be used against you in litigation or even a regulatory action." So, sharing is caring, but do it carefully. "I should say that I am in favor of threat intel-sharing," says Straight, "but any organization seeking to do so should make sure it understands what it is getting into and can support an ongoing threat intelligence consumption, production, and sharing process. In my view, none of the [government] documents or commentary I've seen so far, including the DHS Guidance, sufficiently addresses the issues I have raised." Straight will present "Avoiding Legal Landmines Surrounding Your IT Infrastructure: Policies and Protocols" at Interop Las Vegas, May 4.

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that, she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...

How 4 Startups Are Harnessing AI In The Invisible Cyberwar

Cybersecurity startups are setting their sights on a potential goldmine of automated systems they hope will be more effective than human enterprise security teams. There is growing concern across the board that we might be losing control over cybersecurity.

The rapid changes in how we use technology to communicate and the increased number of connected devices mean the points of entry or breach are growing.

Because the pace of change has been so rapid, security hasn't adapted fast enough and hackers are taking full advantage.

The traditional ways of dealing with cyber threats are beginning to look hopelessly inadequate. This concern goes right to the top.
Since entering the White House in 2009, President Obama has repeatedly called for improvements in cybersecurity, and in December 2015 announced a new cybersecurity bill that allocated $14 billion of federal spending to further secure government information online. With global cybersecurity spending expected to reach $170 billion by 2020, eyes are on the cybersecurity industry to see who can offer the best solutions. But while the rest of the industry gets up to speed, a number of forward-thinking cybersecurity startups are attempting to harness the power of artificial intelligence to strengthen the defenses of the good guys.

They are identifying, locating, and destroying potential threats in a manner that promises to be quicker and more effective than traditional methods.

Darktrace
Using machine learning techniques inspired by the self-learning intelligence of the human immune system, UK-based startup Darktrace tackles the challenge of detecting previously unidentifiable cyber threats in real time, and allows them to be eradicated more quickly than traditional approaches. Unlike traditional cybersecurity systems, in which malicious threats and viruses are manually added to a list and then blocked, Darktrace uses a system based on machine learning and mathematics that can detect threats without any prior knowledge of what it is looking for, cutting out the need for human intervention.

The groundbreaking new system was developed by engineers and mathematicians from the University of Cambridge.

JASK
JASK, a San Francisco-based startup, is building what it calls "the world's first predictive security operations center" for enterprise-level cybersecurity.

The system aims to help enterprises of all sizes stay ahead of sophisticated cyberattackers by moving past the limitations of existing solutions with proactive AI security measures. With enterprises adding more and more software applications to their networks, and relying more heavily on the cloud for saving data, JASK's approach "finds threats buried in data — all in an automated way that doesn't require [companies] to throw more bodies at the problem," according to Greg Martin, JASK founder and CEO.

Deep Instinct
Launched in November 2015, this Tel Aviv-based startup is using sophisticated deep learning algorithms to improve cybersecurity in the banking, financial, and government spheres in the US and Israel.

The Deep Instinct engine is modeled on the human brain’s ability to learn. Once a brain learns to identify an object, it can identify it again in the future instinctively.
Similarly, as Deep Instinct’s artificial brain learns to detect any type of cyber threat, its prediction capabilities become faster and more developed.
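Deep Instinct's engine is a proprietary deep-learning stack, but the "learn once, then identify instinctively" idea it borrows from the brain can be illustrated with the smallest possible learner: a single logistic neuron trained by gradient descent on hypothetical two-feature file samples (say, entropy and packed-section ratio). Training is the slow, one-time part; afterwards, classifying a new sample is one cheap arithmetic pass. Everything here is a toy sketch, not Deep Instinct's method.

```python
import math

def train(samples, labels, epochs=2000, lr=0.5):
    """Fit a single logistic neuron: slow, done once up front."""
    w = [0.0, 0.0]; b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y                      # gradient of the log-loss
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(model, x1, x2):
    """Inference is 'instinctive': two multiplies, an add, a squash."""
    w, b = model
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b))) > 0.5

# Hypothetical training set: (entropy, packed ratio) -> 1 = malicious
samples = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
labels = [1, 1, 0, 0]
model = train(samples, labels)
print(predict(model, 0.85, 0.9))  # resembles the malicious samples
print(predict(model, 0.15, 0.2))  # resembles the benign samples
```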

The company recently partnered with FireLayers to create the first commercially available AI solution for enterprise cloud applications.

The solution focuses on both detection and prevention, targeting the market for advanced persistent threat (APT) solutions.

harvest.ai

harvest.ai is approaching AI cybersecurity from a slightly different angle, based on the idea that to truly shore up your defenses, you need to know your weak points and principal targets. Founder and CEO Alexander Watson is no stranger to industrial espionage and cyberattacks, having worked as a field agent for the NSA for nearly a decade.

The company has created AI-based algorithms that learn the business value of critical documents, monitor who is using or moving them, and detect and stop data breaches from targeted attacks and insider threats before data can be copied or stolen. harvest.ai’s MACIE system detects anomalies in how users access the network by analyzing changes in access location, browsing habits, data transfers, and other telemetry that can be harnessed from external systems.
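What MACIE does at scale can be caricatured in a few lines: keep a per-user profile of known access locations and typical transfer sizes, and raise an alert when a new event deviates on either axis. The field names, users, and thresholds below are invented for illustration only.

```python
def check_access(profile, user, location, bytes_transferred):
    """Return a list of anomaly reasons for one access event (empty = normal)."""
    reasons = []
    seen = profile.setdefault(user, {"locations": set(), "max_bytes": 0})
    if seen["locations"] and location not in seen["locations"]:
        reasons.append(f"new access location: {location}")
    if seen["max_bytes"] and bytes_transferred > 10 * seen["max_bytes"]:
        reasons.append(f"transfer of {bytes_transferred}B far above historical maximum")
    # Update the profile so the baseline keeps learning
    seen["locations"].add(location)
    seen["max_bytes"] = max(seen["max_bytes"], bytes_transferred)
    return reasons

profile = {}
check_access(profile, "alice", "London", 5_000)   # first sighting: builds baseline
check_access(profile, "alice", "London", 8_000)   # normal
alerts = check_access(profile, "alice", "Sydney", 900_000)
print(alerts)  # flags both the new location and the outsized transfer
```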

The system can also alert users if an important document is accidentally shared publicly on a cloud or network, or sent to the wrong person.

Related content: Find out more about security threats at Interop 2016, May 2-6, at the Mandalay Bay Convention Center, Las Vegas.

Click here for pricing and to register.

Andrew Thomson is the CEO and Founder of VentureRadar, a big data and machine-learning company that discovers and ranks companies, making them visible to potential partners, customers, and investors.
In recent months the company has been commissioned to do various scouting ...

The Threat Of Security Analytics Complexity

Congratulations! You're protecting your organization with layered security...but now you're drowning in more security analytics data flows than you can handle.

Threat Intelligence's Big Data Problem

Security teams are drowning in often useless threat intel data, but signs of maturity are emerging in what IT-Harvest predicts will be a $1.5 billion market by 2018.

First in a series on the evolution of threat intelligence

Something’s gotta give: nearly three-fourths of enterprises today say they ignore security events because they’re overwhelmed by the deluge of alerts.

And that doesn’t even take into account the firehose of threat intelligence data they’re funneling today, a new report shows. Mega-retailer Target was the poster child for security alert handling gone bad: the needle in the haystack Target dismissed was actually the clue that it was under a major attack in the fall of 2013. Nearly three years after that epic data breach, security events, alerts, and threat intelligence feeds are exploding in many enterprises hungry for hints that they are in the bullseye.

The tradeoff is that this deluge of data is drowning security teams, who must sift the real threats from the false positives and irrelevant information. Security event overload alone is causing some dramatic fallout: more than half of all security events get ignored by IT security pros due to the overload of information, according to a new Enterprise Strategy Group (ESG) report that surveyed 125 IT security pros on the state of incident response in their organizations.

Around 30% of those organizations say they have some 11 different threat intelligence feeds flowing in as well, the Phantom-commissioned report, published today, found. Threat intelligence data is all about helping enterprises block or protect against the newest threats by providing in-the-wild attack and threat artifacts and intel that companies can compare and correlate with their own security data.

But for many organizations, the deluge of this type of information isn’t much help if they can’t triage and apply it effectively. The threat intelligence market itself is booming, growing at a rapid clip of 84% annually, according to new data published today by IT-Harvest.

The threat intel market, which was at $251 million in 2015, is expected to reach more than $460 million this year, says Richard Stiennon, chief research analyst for IT-Harvest. Threat intelligence platform products, such as those from ThreatConnect, ThreatStream (now Anomali), ThreatQuotient, and BrightPoint Security, made up $61 million of 2015’s total threat intel market revenues, according to IT-Harvest.

The market is on track to hit $1.5 billion in 2018 at the current rate of growth, according to the report, which includes a look at more than 20 threat intelligence vendors, including FireEye’s iSIGHT Partners, Cyveillance+LookingGlass, Digital Shadows, and Flashpoint Intel. “I expect a lot of churn and also a lot of startups,” Stiennon says of the threat intelligence space. Signs of churn started to show in the past month, with Norse Corp.’s mass layoffs and executive shakeout.
Security experts attributed Norse’s plight more to its own internal managerial problems, lack of a solid product, and some weak analysis reports than to any bellwether signal for the threat intel space.

‘Threat’ Rebrand

Meanwhile, recent moves by other threat intel vendors show signs of a logical evolution toward making threat intel more useful and manageable. Late last month, ThreatStream dropped the “threat” moniker and rebranded itself as Anomali, now focusing not just on delivering threat intel, but also on prioritizing and matching it for individual organizations.

Threat intel has its own big data problem, according to executives at Anomali, which now is filtering down indicators of compromise (IOCs) and other threat intel for security event and information management (SIEM) systems, which it says weren’t built to process millions of IOCs. “When we started [out], the volume of threat intelligence coming from feed vendors and open communities versus now was more manageable.

There were hundreds of thousands of indicators of compromise, and now there are tens of millions,” says Hugh Njemanze, CEO of Anomali. “We expect this year to [reach] 100 million IOCs.

There’s been an explosion.” That kind of threat intel volume is more than most in-house SIEM tools can handle today. “Even the most robust SIEM is not able to ingest more than 1 million IOCs,” he says.

Anomali’s new cloud-based products basically match event flows with IOCs, for example, and then feed contextual information about the incident to the SIEM. “We’re taking on the burden of discovery and matching and letting the SIEM do what it’s good at: analyzing the millions of events they are collecting,” Njemanze explains.
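The matching step Njemanze describes -- comparing raw event streams against a huge indicator store and sending only enriched hits to the SIEM -- reduces, in sketch form, to a dictionary lookup plus a context join. The IOC values, event fields, and campaign names below are made up for illustration; this is not Anomali's product code.

```python
# Hypothetical IOC store: indicator value -> context the SIEM should receive
iocs = {
    "203.0.113.7": {"threat": "C2 server", "campaign": "hypothetical-apt"},
    "evil.example": {"threat": "phishing domain", "campaign": "hypothetical-kit"},
}

def match_events(events, iocs):
    """Yield only events that touch a known indicator, enriched with context.
    The SIEM never sees the full indicator list -- just these hits."""
    for event in events:
        for field in ("src_ip", "dst_ip", "domain"):
            context = iocs.get(event.get(field))
            if context:
                yield {**event, "ioc_context": context}
                break  # one enrichment per event is enough

events = [
    {"src_ip": "198.51.100.4", "dst_ip": "203.0.113.7"},
    {"src_ip": "198.51.100.9", "domain": "benign.example"},
]
hits = list(match_events(events, iocs))
print(hits)  # only the first event survives, carrying its C2 context
```

The design point is exactly the one in the article: the expensive discovery-and-matching work happens outside the SIEM, which then only analyzes the pre-filtered, contextualized events.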
Security operations center teams need to know which IOCs are relevant, so that’s what Anomali is offering. Anomali still offers ThreatStream Optic, its threat intel feed, in addition to its new Harmony Breach Analytics and Anomali Reports products. “We still see ourselves as a threat intelligence player, but we’re radically shifting how threat intel can be operationalized,” he says.

“I’m convinced TI platforms like ThreatStream’s [Anomali’s] have an opportunity. I haven’t seen anyone targeting dealing with the data. Building a distiller takes the good stuff out, and turns the SIEM into a log manager,” IT-Harvest’s Stiennon says.

ThreatConnect, meanwhile, has upgraded its ThreatConnect platform to better integrate a company’s security incidents with threat intelligence. “The goal of my platform is to bring the two together: every data set and correlate it with events and incidents that are unfolding so human beings don’t have to look at the noise.
Instead, the most important things bubble up to the top, based on the underlying analytics,” says Adam Vincent, CEO of ThreatConnect. ThreatConnect has partnered with Splunk, Palo Alto Networks, and others, to integrate threat intel with an organization’s incident detection and response processes.
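The "bubble up the most important things" workflow Vincent describes amounts to scoring each event by the intel that correlates with it, then sorting. A minimal sketch, with invented event fields and arbitrary scoring weights (not ThreatConnect's actual analytics):

```python
def score(event):
    """Rank an event by how much corroborating intel it carries.
    Weights are arbitrary, for illustration only."""
    s = 0
    if event.get("matches_known_campaign"):
        s += 5   # looks coordinated with attacks seen in the wild
    if event.get("target_has_known_vulns"):
        s += 3   # aimed at equipment with known vulnerabilities
    s += len(event.get("related_events", []))  # part of a larger cluster
    return s

def prioritize(events):
    """Return events most-important-first instead of in arrival order."""
    return sorted(events, key=score, reverse=True)

events = [
    {"id": 1},                                   # routine noise
    {"id": 2, "target_has_known_vulns": True},
    {"id": 3, "matches_known_campaign": True,
     "target_has_known_vulns": True, "related_events": [2]},
]
print([e["id"] for e in prioritize(events)])  # → [3, 2, 1]
```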
Version 4.0 of the ThreatConnect platform also lets companies customize reports for all levels of users, including C-level executives who want to see, for example, a map of which regions are targeting their company, Vincent says. Threat intelligence is about empowering decision-making, he says. “It’s not the end goal in itself.”

So rather than a retailer looking at 100 events in the order in which they occur, the threat intel platform would flag and prioritize events that appear to be connected or related to other attacks in the wild. “It would say this event is important because it looks coordinated, and it’s against equipment that has known vulnerabilities,” Vincent says. “And it looks at what type of techniques and tradecraft the [attacker] is using ...

As the [company] investigates it, they are collecting additional information that is going to inform their decision-making.”

Most security vendors now offer some level of threat intelligence, and there are several open-source threat intel feeds as well. “The challenge right now is to tell high-quality threat intelligence from low-quality threat intelligence. It’s tough to distinguish, given the abundance of options” out there, says Oliver Friedrichs, founder and CEO of startup Phantom. “One of the biggest challenges is how to reconcile all the various feeds and how to actually make sense of them.

The threat intelligence platform space is really striving to solve that,” says Friedrichs, whose firm offers an automation and “orchestration engine” for an organization’s security tools.

Kelly Jackson Higgins is Executive Editor at DarkReading.com.
She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...

Anonymous To Launch Cyberattacks Against Trump Campaign Starting April 1

Planned attacks a response to candidate's controversial campaign rhetoric, hacking collective says.

In a reprise of numerous similar campaigns from the past, the Anonymous hacktivist collective has announced plans to disrupt Donald Trump’s presidential campaign by launching cyberattacks on websites associated with the controversial candidate, starting April 1. In a message on Anonymous’ YouTube channel, an individual purporting to be a spokesman for the collective urged those aligned with its cause to shut down Trump campaign websites and to “expose what he doesn’t want the public to know.” The spokesman, wearing the group’s signature Guy Fawkes mask, described the planned attacks as a response to Trump’s “appalling actions and ideas” in running his presidential campaign. “We need to dismantle his campaign and sabotage his brand,” the masked spokesman exhorted viewers.

The announcement, with its usual colorful rhetoric, has raised some predictable questions about whether Anonymous is still capable of mustering the support needed to launch a disruptive cyber campaign against the leading Republican presidential candidate. Rene Paap, security evangelist at A10 Networks, says the Trump campaign appears to have foreseen the threat and protected its domain by using a Content Delivery Network (CDN) service. “A CDN provides an extra caching layer in-between the content of a website and the client browser. It is a large network with many points of presence around the world, aimed to redirect a browser to the nearest location where cached content is served,” says Paap. “For Anonymous to break through this is going to be difficult, as the CDN anticipates DDoS attacks,” he says.

Anonymous and its collection of loosely affiliated followers around the world have pulled off several high-profile hacktivist campaigns in the past.

Among the examples that Anonymous itself touts are a 2008 campaign against the Church of Scientology, in which it crashed the church’s website; Operation Darknet, in which it exposed IP addresses of nearly 200 alleged pedophiles; and its release of an incriminating video in a 2012 case involving a sexual assault on a high school girl in Steubenville, Ohio.

Following last year’s terrorist attacks on France’s satirical newspaper Charlie Hebdo, Anonymous launched a campaign to expose and disrupt websites spreading jihadist propaganda and, more recently, it has committed to doing the same to ISIS-affiliated websites.
Soon after launching the campaign last February, Anonymous claimed it had succeeded in taking down over 1,000 sites and over 9,000 Twitter accounts affiliated with the terror group.

Whether Anonymous can replicate such campaigns in its planned attacks against Trump websites and online presence remains to be seen. Regardless of how successful the planned attack turns out to be, Anonymous’ call to attack the Trump campaign is another example of how the worlds of politics and cybersecurity are becoming increasingly intertwined. The Internet -- social media, in particular -- has become a primary vehicle for candidates to communicate with voters, raise campaign awareness, target specific demographics, gauge voter sentiment, and solicit donations.

But the growing use of these channels has given threat actors new ways to attack Internet users, security vendor Forcepoint noted last year in its 2016 predictions report (registration required). One of the dangers is that attackers will use email lures related to 2016 campaign issues to try to distribute malicious payloads to unsuspecting users. “Attackers frequently see large events as an opportunity to launch cyber-attacks on a curious population,” Forcepoint pointed out in its report. “Political campaigns, platforms and candidates present a huge opportunity to tailor highly effective lures.” Another issue is the use of social media to misrepresent or to misdirect public perception of candidates and events related to the presidential campaign.

As one example, the Forcepoint report pointed to a campaign by the Syrian Electronic Army (SEA) in which hackers supporting the government of President Bashar al-Assad targeted and defaced sites belonging to rival groups. Hackers affiliated with the same group also targeted the Facebook pages of former French President Nicolas Sarkozy and President Obama with spam messages supporting al-Assad, Forcepoint noted in its report. “The SEA also took over the Twitter accounts of legitimate news organizations, tweeting false news updates, creating uncertainty and alarm as the messages spread online before these accounts were again secured.”

Bob Hansmann, Forcepoint’s director of security analysis and strategy, says that campaigns that want to mitigate such threats need to make cybersecurity a core part of their planning. “A qualified CISO, as a ranking member of the campaign team, would be a game changer” for the presidential candidates, Hansmann says in comments to Dark Reading. “If a campaign team has one and, more importantly, if they listen to them, then the odds are in their favor,” he says. “They are likely less susceptible to an attack as well as more likely to maintain key operations in the face of a full or partially successful attack.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year ...