Threat Intelligence

Proof-of-Concept Exploit Sharing Is On The Rise

Research offers cyber defenders a view of which proof-of-concept (POC) exploits are being shared and distributed by threat actors.

Approximately 12,000 references to shared proof-of-concept software exploits were generated over the last year, with significant distribution among threat actors and researchers, according to a new report. That represents nearly a 200% increase in POC references compared to 2014, culled from a wide range of sources including social media, security researcher blogs and forums, hacker chats and forums, and hidden websites on the Dark Web, according to Nicholas Espinoza, senior solutions engineer with Recorded Future and an author of the report, Prove It: The Rapid Rise of 12,000 Shared Proof-of-Concept Exploits. The roughly 12,000 references to POCs were identified within Recorded Future’s dataset from March 22, 2015 to the present.

For a defender, that’s a lot of vulnerabilities and attack vectors to track, Espinoza says. The threat intelligence company gleans POC information from hundreds of thousands of sources and ingests the data into its intelligence platform to make it more searchable.

Proof-of-concept code is typically developed by security researchers, academics, and industry professionals to demonstrate possible vulnerabilities in software and operating systems, and to show the security risks of a particular method of attack. Malicious hackers develop and use such code to attack vulnerable applications, networks, and systems. “With 12,000 conversations occurring about proof-of-concept exploits, there is certainly just too much information to cover,” Espinoza says. Many security and product vendors inform customers when vulnerabilities are discovered in their software and provide patches to fix them.

The more difficult discussion, though, is determining which of the 100 vulnerabilities on a given system are exploitable, Espinoza says. Vendors try their best to maintain situational awareness, and organizations such as the National Institute of Standards and Technology are working to track and identify vulnerabilities that have the “existence of exploits.” However, POC exploits are developing “at such an insane speed there is no one to manage it,” says Espinoza. A lot is being missed and, in many cases, only reported a week or so after the exploit is in the wild, he says.

Shared Via Social Media

The report shows that POCs are disseminated primarily via social media platforms such as Twitter. Users are flagging POCs to view externally in a range of sources including code repositories like GitHub, paste sites like Pastebin, social media sites such as Facebook and Reddit, and Chinese and Spanish Deep Web forums, according to the report. Sharing of POCs makes sense because researchers and others who want to make their findings public need to share information in public-facing, high-visibility forums. “There’s a significant ‘echo’ effect seen in the data, though, with other users retweeting or re-syndicating original content with a slightly different tweet,” the report says.

Vulnerabilities that allow initial system access through privilege escalation and buffer overflow attacks are the primary focus of POC development, the research indicates. The primary POC targets are companies that create popular consumer software and products, such as Adobe, Google, Microsoft, and VMware. The underlying technologies being targeted include smartphones and office productivity software, as well as core functions in Microsoft Windows and Linux machines such as DNS requests and HTTP requests.
Some of the top POC vulnerabilities discussed or shared over the past year include:

GNU C Library vulnerability that allows buffer overflow attacks through malicious DNS responses (CVE-2015-7547 (glibc))

Microsoft Windows Server vulnerability allowing remote code execution (CVE-2015-1635 / MS15-034)

Microsoft Windows Server vulnerability allowing local privilege escalation (CVE-2016-0051)

Virtualization platform vulnerability allowing the execution of arbitrary code to escape virtual machines (CVE-2015-3456)

Windows Remote Procedure Call vulnerability allowing local privilege escalation (CVE-2015-2370 / MS15-076)

The report helps “shed light on not just the classes of vulnerabilities out there, but what is the active interest in the threat actor community,” says Rodrigo Bijou, an independent security researcher focused on intelligence, information security, and analytics. “It’s tough to say what is signal and what is noise when you are building a threat intelligence environment, pulling feeds from all the vulnerabilities of the day,” he says.
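The kind of signal-versus-noise triage Bijou describes often starts with something as simple as counting CVE mentions across collected chatter. A minimal sketch of that idea (the helper name and sample posts are invented for illustration, not taken from the report):

```python
import re
from collections import Counter

# Extract CVE identifiers from a stream of posts (tweets, paste titles,
# forum messages) and rank them by how many posts mention each one.
CVE_PATTERN = re.compile(r"\bCVE-\d{4}-\d{4,7}\b", re.IGNORECASE)

def rank_cve_mentions(posts):
    """Return (cve_id, mention_count) pairs, most-discussed first."""
    counts = Counter()
    for post in posts:
        # Deduplicate within a post so one message counts once per CVE.
        for cve in set(m.upper() for m in CVE_PATTERN.findall(post)):
            counts[cve] += 1
    return counts.most_common()

posts = [
    "PoC for CVE-2015-7547 (glibc getaddrinfo) now on GitHub",
    "RT: PoC for CVE-2015-7547 (glibc getaddrinfo) now on GitHub",
    "MS15-034 / cve-2015-1635 HTTP.sys RCE one-liner",
]
print(rank_cve_mentions(posts))
# [('CVE-2015-7547', 2), ('CVE-2015-1635', 1)]
```

Counting retweets and re-syndications this way is exactly what produces the "echo" effect the report describes; a real pipeline would also deduplicate near-identical posts before ranking.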

For example, a security engineer might find a vulnerability with a common vulnerability score of 10, which appears critical. “It might look like a gnarly vulnerability, but is it being exploited and does it have interest in the threat actor community?” Bijou says. “It is hard to say what vulnerabilities are necessarily in use until you actually take a look at the adversary.” So it is useful to see what is being distributed by the various types of threat actors, Bijou says. Rutrell Yasin has more than 30 years of experience writing about the application of information technology in business and government.

Silicon & Artificial Intelligence: The Foundation of Next Gen Data Security

Why new challenges like 'real-time, always-on' authentication and access control can only be met by a combination of smart hardware and software.

Data security is at an inflection point.

As threats faced by consumers, businesses and countries continue to grow, the need for smart security solutions that incorporate both silicon and software becomes even more important. Tackling today’s security threats means moving far beyond scanning files against a known list of threats.

This reactive model has been displaced by real-time analysis, using complex models, behavior analysis and artificial intelligence (AI) to quickly discern between valid and malicious user activity.

And behind these complex models is large-scale, high-performance computing composed of CPUs, GPUs and dedicated security silicon. Security is an engineering challenge because, to do it well, the system must weigh a number of factors, all of which rely on increasing levels of computation.

Take the most basic form of security -- authentication -- and the general concept that the person accessing data is, in fact, authorized to do so.

Traditionally, this process would involve validating a login and password, effectively matching text entry against a database. Now we see biometric authentication using fingerprint readers or facial recognition through web cameras, all of which need orders of magnitude more compute power to provide a good user experience.

Security is now a real-time problem

Authentication is an effective facet of security, and while we see great strides being made in improving it, security threats persist even after user verification.
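As a callback to the traditional "match text entry against a database" baseline described above, a password check reduces to a salted hash and a constant-time comparison. This is a minimal sketch; the iteration count and salt size are illustrative choices, not recommendations from the article:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store a password as (salt, PBKDF2-SHA256 digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False
```

The point of the contrast in the article: this check is cheap and stateless, whereas the biometric and behavioral checks that follow it require continuous, far heavier computation.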

The number of new security threats being detected on a daily basis is almost incomprehensible, with security vendors such as F-Secure, Trend Micro and Kaspersky Labs providing real-time data on the number of threats they are tracking.

These numbers should not only shock but serve to illustrate that security is a real-time problem; just because the user was authenticated two minutes ago doesn’t mean the threat has vanished.

There must be “real time, always on” security. The challenge of providing real-time security can only be met with a combination of smart hardware and software.

A growing trend in security is the use of AI and behavior analysis. One way of looking at this is that if traditional virus scanning and firewalling are the hammer and nails of security, AI and behavior analysis are the surgeon’s scalpel: pinpoint accuracy backed up with supreme knowledge and skill. Behavior analysis is the ability to carefully consider the behavior of the user and match it to previous activity to produce a confidence rating on whether the user is authentic or not. You may have already seen this in action through Google’s reCAPTCHA, which uses an “advanced risk analysis engine” to validate users.

Another incarnation of this technology is set to appear in online banking, where the banks can analyze the authenticity of the user even if an attacker has the correct login and password.

To do this, the system takes into account typing characteristics, mouse movements and other user behaviors to match them against an existing behavior profile.
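As a rough illustration of the matching step described above, a keystroke-dynamics check can reduce to comparing a session's inter-key timings against an enrolled profile. This toy scorer uses only the mean gap; production systems model many more features (digraph latencies, key hold times, mouse paths), and every number below is invented:

```python
from statistics import mean

def confidence(profile_ms, session_ms):
    """Confidence in [0, 1] that the session matches the enrolled
    profile, based on relative drift of the mean inter-key gap."""
    p, s = mean(profile_ms), mean(session_ms)
    deviation = abs(p - s) / p
    return max(0.0, 1.0 - deviation)

enrolled = [110, 95, 130, 120, 105]   # user's historical gaps (ms)
legit    = [115, 100, 125, 118, 108]  # same person, slight variation
scripted = [60, 55, 70, 65, 58]       # fast, uniform, bot-like typing

print(round(confidence(enrolled, legit), 2))     # 0.99
print(round(confidence(enrolled, scripted), 2))  # 0.55
```

A bank's system would combine many such feature scores into the overall confidence rating the article describes, rather than relying on one statistic.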

This type of technology is absolutely critical if we are to make fine-grained access control a practical reality, where authentication doesn’t rely on a single method to validate the user’s entire session.

Behavior analysis drives demand on backend compute systems

Behavior analysis doesn’t take place only on the user’s computer; the technology is also used in network threat detection, known more commonly as network behavior detection.

The goal is the same, analyzing behavior, but doing it across an entire organization’s network.
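One simple form of organization-wide behavior analysis is flagging hosts whose traffic deviates sharply from their own baseline. A toy sketch (the hosts and byte counts are made up; real network behavior detection uses far richer models than a z-score):

```python
from statistics import mean, stdev

def anomalous_hosts(history, current, threshold=3.0):
    """history: {host: [past outbound byte counts]};
    current: {host: bytes this interval}.
    Flags hosts whose current volume exceeds mean + threshold*stdev
    of their own history."""
    flagged = []
    for host, past in history.items():
        mu, sigma = mean(past), stdev(past)
        if sigma and (current.get(host, 0) - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

history = {
    "10.0.0.5": [200, 220, 210, 205, 215],  # steady workstation
    "10.0.0.9": [180, 190, 185, 200, 195],
}
current = {"10.0.0.5": 212, "10.0.0.9": 4000}  # sudden spike on .9
print(anomalous_hosts(history, current))  # ['10.0.0.9']
```

Scaling this per-host baseline across every host, port, and protocol in an enterprise is precisely where the compute demand the article describes comes from.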

The use of intelligent algorithms to determine whether an attack is taking place and learn from past usage patterns is important, but having the processing power to crunch the data and make effective decisions before an attack can cause significant damage is absolutely critical. So while behavior analysis and AI are smart ways to tackle the challenges of security, they require significant computation power to effectively protect the user while simultaneously providing a positive user experience. We know that users who experience slow or halting security interfaces are apt to avoid or undermine available functionality.

Achieving a favorable experience with behavior analysis technologies will place great demands on the backend compute systems that crunch the data and provide actionable answers. The silicon that powers the security back end will be a mix of CPUs, GPUs and dedicated security processors.

This combination of hardware will be backed up by a software ecosystem that allows consumers and businesses to seamlessly tap into the silicon’s security capabilities and have a good out-of-the-box experience.
It is absolutely critical that security software be able to leverage the tremendous growth in general-purpose and dedicated compute available in modern processors and systems-on-chip.

Malware, infrastructure, memory encryption & more

Rob Enderle, principal analyst at the Enderle Group, has also talked about the need for behavioral analysis in security, citing it as an important defense against the tremendous growth in vulnerabilities being discovered daily. He said, “We are seeing millions of security threats every day that attack consumers, enterprises and national infrastructure, and history shows us this number will continue to rise sharply. One of the cornerstones of a comprehensive defense in depth for this massive exposure is to utilize complex algorithms and AI that leverage compute in the datacenter to provide an intelligent adaptive solution to this massive and rapidly growing security exposure.” Behavior analysis isn’t merely a security tool that runs alongside existing ones; it is a key technique to improve existing tools, such as malware detection.
Software security vendors are modifying traditional security apparatuses such as anti-virus to make use of these technologies to identify and hunt emerging threats. In addition to individual consumers and businesses, smart security is vital in helping secure the nation’s infrastructure.

Compute power has long been used by nation states to further their economic development and protect their citizens; protecting intellectual property and a nation’s digital borders is a frontier in advanced security research and development. As we see security vendors develop ever more complex threat and behavior analysis models and rely on advances in artificial intelligence research, the onus will be on silicon to power these algorithms. Whether it be to run complex behavioral analysis models, implement hardware-enabled sandboxing, memory encryption and physical attack resistance, or power the next innovation in security, the computer processor’s silicon will help power the solution.

Mark Papermaster is chief technology officer and senior vice president at AMD, responsible for corporate technical direction, and AMD's intellectual property and system-on-chip product research and development. His more than 30 years of engineering experience includes ...

Government Cybersecurity Performance, Confidence Bottoms Out

In the wake of OPM and other big gov breaches, government cybersecurity performance scores and employee confidence ratings sink through the floor. Government agencies at all levels are falling far behind the private sector in cybersecurity measures, according to a pair of recent studies.
If the damage left behind by massive breaches at the Office of Personnel Management (OPM) and the Internal Revenue Service (IRS) weren't enough anecdotal evidence, now there's more data to back up government's lackluster performance. Most recently, a new study by SecurityScorecard's research team found that local, state, and federal government agencies have the worst performance indicators among 18 industry verticals, including education, healthcare, and legal organizations -- all known laggards on the cybersecurity front.

The scoring was based on SecurityScorecard's benchmarking platform, which aggregates from more than 30 million daily security-risk signals and sensors across the Web to form a picture of specific organizations and industries. Among the areas benchmarked, SecurityScorecard found that low-performing government agencies fared the worst compared to other organizations when it came to malware infection rates, network security indicators, and software patching cadence.

Among the 600 agencies included in this study, NASA fared dead last in performance scoring. "NASA’s primary threat indicators include a large number of detected malware signatures over the past 30 days, tracked P2P activity, various SSL certificate issues, and insecure open ports, varying from IMAP to Telnet to DB ports among others," the report stated. Leaders at federal agencies at least have a hunch that they aren't doing well: Another study out this month shows that cybersecurity confidence among the senior executive leadership at these agencies is at a low point.

Conducted by the Government Business Council and sponsored by Dell, this study is a follow-up to a similar one done in 2014.
Since that time, there's been a 30-point drop in the respondents who indicate they are confident or very confident in agency information security.
Similarly, there's been a 28-point drop in respondents indicating their confidence in their agency's ability to keep up with evolving cyber threats. "The federal government appears to still be in the beginning stages of constructing more robust cybersecurity strategies, and respondents cite budget constraints, slow technology acquisition processes, and bureaucratic inertia as the chief barriers to a more holistic agency cybersecurity posture," the report says. "Moving forward, agencies need to focus on tackling institutional obstacles in order to move forward with bolstering organizational cybersecurity." Ericka Chickowski specializes in coverage of information technology and business innovation.
She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Pro-ISIS Hacking Groups Growing, Unifying, But Still Unskilled

Flashpoint report outlines the patchwork of hacking groups and the validity of their claims to fame. Although ISIS has not officially acknowledged or laid claim to a hacktivist group, there are several acting on the terrorist organization's behalf. New groups are emerging at an accelerated rate, others are joining forces, and they're expanding their list of targets, but thankfully their capabilities are currently unsophisticated, according to a new report by Flashpoint. The groups are low on homegrown hacking talent and have had little success recruiting highly skilled attackers to the cause.

The most skilled hackers known to be connected to these groups: Junaid Hussain (a.k.a.

Abu Hussain al-Brittani, a.k.a. "TriCk"), British citizen and previously a member of TeaMp0isoN.
Served time in British prison for hacking Tony Blair. Upon release, fled the United Kingdom to fight with ISIS.

Became leader of Cyber Caliphate Army, the first pro-ISIS hacking squad. Killed by an American drone strike in Raqqa in August 2015.    Ardit Ferizi (a.k.a. "Th3Dir3ctorY"), Kosovo citizen.

Believed to be the leader of the Kosova Hacker’s Security (KHS) hacking group, which is not a pro-ISIS group.

Ferizi allegedly hacked an unnamed victim organization, stole personal data -- including physical location -- of approximately 1,350 U.S. government and military personnel, then passed it to Hussain.  Hussain then published it on Twitter, with a message encouraging attacks on the individuals (and branding the data dump for "Islamic State Hacking Division," not Cyber Caliphate Army).

Ferizi was arrested in October and is the first person to face charges of cyber terrorism in the U.S. courts.
If convicted, he faces up to 35 years in prison. Siful Haque Sujan, a British-educated Bangladeshi citizen, replaced Hussain as the leader of Cyber Caliphate after his death.
Sujan was also killed by a subsequent American drone strike in Raqqa in December 2015. One place where new recruits are both found and trained is the Gaza Hacker web forum, which is full of educational resources, according to the Flashpoint report. The pro-ISIS hacking groups tend to coordinate their attacks in private...but not very private. "We believe that while private communications between hackers takes place, they rely heavily on social media to generate support for their campaigns," the report states.

Flashpoint analysts have seen "security-savvy jihadists, but not necessarily hackers, [emphasis added] using encrypted online platforms for communications, such as Surespot and Telegram." Social media are used to declare intent of attacks, often with hashtags. Yet, some of the threats and claims may not be entirely genuine, according to analysts.

For example: When Hussain published the personal and location data on US government and military officials that Ferizi had allegedly provided, he stated they came from sensitive databases, but Flashpoint believes the data came from unclassified systems and that no military systems were compromised. The Islamic Cyber Army (users of the #AmericaUnderAttack hashtag) claimed to have "a list containing '300 FBI Agents emails hacked.' However, as purported FBI emails/passwords are a staple of low-level hacker dumps, Flashpoint analysts cross-checked the data and found that the list was a duplicate of a LulzSec leak from 2012." The Flashpoint report goes on to explain that the Islamic Cyber Army also defaced an Azerbaijani bank. "Lacking sophistication, ICA resorted to attacking any low-hanging fruit in its anti-American campaign, regardless of target relevance." Rabitat Al-Ansar was solely a propaganda engine until it added hacking to its repertoire.

A subgroup claimed to have obtained American credit card account information and told followers to use the information "for whatever Allah has made permissible." Yet Flashpoint analysts' findings suggest that the data was not pilfered by Rabitat Al-Ansar hackers themselves but rather "may have been sourced from the so-called 'Scarfaze Hack Store.'" Despite their current limitations, Flashpoint researchers state that pro-ISIS hackers' "willingness to adapt and evolve in order to be more effective and garner more support indicates that while these actors are still unsophisticated, their ability to learn, pivot, and reorganize represents a growing threat." Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that, she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...

How Hackers Have Honed Their Attacks

More organizations are getting breached, but data exfiltration is becoming harder for attackers, new data shows. A triad of vendor reports released this week contained some mixed news for enterprises on the evolving threat landscape and how organizations are responding to the resulting challenges. On the one hand the reports showed that attackers are getting better at bypassing enterprise defenses and breaching networks.

But their ability to do damage after breaking in appears to be increasingly limited as a result of better detection and response capabilities. Here, in no particular order, are five high-level takeaways from the reports by Vectra Networks, based on a post-intrusion analysis at 120 organizations in 2016; by Trustwave, based on data from hundreds of breach investigations; and by Cytegic, from data gathered from its DyTA intelligence platform.

More Breaches, Less Data Exfiltration

Vectra’s report showed that attackers increasingly managed to get through corporate defenses in 2015.

But data exfiltration itself was fairly small in comparison. In each of the 120 organizations that shared metadata with Vectra, the security firm detected at least one in-process malicious activity on the network, such as lateral movement, internal reconnaissance, or command-and-control traffic.

Trustwave, meanwhile, found that 97% of the applications it tested contained at least one security flaw. At the same time, though, only 3.1% of the incidents involved actual data exfiltration.

The evidence suggests that organizations did a relatively good job of detecting and blocking threats before significant damage occurred in the vast majority of cases, says Wade Williamson, director of threat analytics at Vectra. “On the front end pretty much every network let an attacker get inside,” Williamson says. “But the good news is that people who are paying attention are keeping data from getting out.

There is scary news on the front end, but it is manageable.”

Shifting Focus

Attackers shifted their focus away from e-commerce and point-of-sale environments to corporate and internal network breaches.
In 2015, e-commerce compromises represented 38% of the incidents that Trustwave was called in to remediate, compared to 42% in 2014. The proportion of PoS incidents, meanwhile, dropped from 40% in 2014 to 22% in 2015. The broadening adoption in the US of smartcards based on EMV technology may have had a role in that shift, according to Trustwave. “I would also say that, especially after the major PoS breaches of 2013 and 2014, companies have gotten better at protecting their PoS assets,” says Karl Sigler, threat intelligence manager at Trustwave. Still, payment cards remain an attractive target for attackers, according to Cytegic’s intelligence report.

About 42% of the targeted data on the Dark Web consisted of payment card data in March 2016, Cytegic said.

Improved Breach Detection

Trustwave’s data showed that North American organizations are getting better at detecting breaches on their own instead of having them reported by a third party.
Self-detection of breaches increased from 19% in 2014 to 41% in 2015.  That still meant that a majority of organizations did not detect breaches on their own last year.

Even so, the fact that so many did is encouraging, Sigler says.

That’s because organizations that self-detect breaches generally tend to do a better job of containing them as well. For instance, for breaches reported by third parties, it took organizations a median of 168 days from intrusion to containment.
In contrast, self-detected breaches had a median of 15 days between intrusion and containment. “Organizations are finally starting to prioritize security rather than making it a small earmark under the general IT department,” Sigler says. “These organizations have also discovered that security doesn’t just come from buying technology and tools, but from putting properly skilled people in place and empowering them.”

Stealthier Attackers

The evidence showed that attackers are getting stealthier inside the network, Vectra’s Williamson says. Until relatively recently, for instance, noisy brute-force attacks were the tactic most commonly employed by attackers attempting to move laterally inside a network after breaking into it. That shifted in 2015, with brute-force attacks dropping to third place behind quieter and subtler Kerberos-based and internal-replication techniques.

That suggests, among other things, that attackers are getting better at using stolen credentials to move quietly inside a breached network, Williamson says. Last year also saw a troubling increase in the use of hidden tunnels in HTTP and HTTPS as a way to conceal command and control traffic and to send stolen data out of a compromised network.
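A common heuristic for spotting the beaconing side of such hidden tunnels is timing regularity: command-and-control check-ins tend to be far more periodic than human browsing. A toy version of that idea (the thresholds and timestamps are illustrative, not drawn from Vectra's report):

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
    """Flag a client whose inter-request gaps are suspiciously uniform:
    the standard deviation of the gaps is a tiny fraction of their mean."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False  # too few requests to judge
    return pstdev(gaps) / mean(gaps) < max_jitter_ratio

human = [0, 2, 3, 9, 11, 30, 31]     # bursty, irregular browsing
bot   = [0, 60, 120, 181, 240, 300]  # ~60-second beacon with jitter
print(looks_like_beacon(human), looks_like_beacon(bot))  # False True
```

Real detectors combine timing with payload-size uniformity, upload/download ratios, and destination reputation, since attackers can deliberately add jitter to defeat a single-feature check like this one.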

The increase in the use of such techniques compared to a year ago was surprising, Williamson says.

More Geographically Widespread Attacks

A vast majority of victims continue to be in North America, but attackers are beginning to spread the pain.
In March this year, Cytegic counted an overall 17% increase in the number of cyberattacks worldwide. North American organizations, as usual, bore more attacks than any other region and accounted for more than 30% of all attacks. But for the first time, the Middle East emerged as a major target, registering what Cytegic reports as a staggering 64% increase in cyber activity in the last month.

That increase made the Middle East the second most active region for cyber attacks in March. “This may be attributed to the rising cyber activity regarding Iran, Israel and especially Syria this month and the ricochets from the Brussels attacks,” Cytegic said in its report. Western Europe was the next most cyber active region with over 16% of attacks, while the East Asian region accounted for about 12% of all attacks in March. Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year ...

Internal Pen-Testing: Not Just For Compliance Audits Anymore

How turning your internal penetration team into a 'Friendly Network Force' can identify and shut down the cracks in your security program. When evaluating or building a security operations program, one of my first steps is to ensure that a Friendly Network Forces (FNF) function is in place.
I wish I could take credit for creating this concept. Readers with US Department of Defense backgrounds will recognize the nomenclature, commonly referred to as FNF or Blue Teams.

What is FNF?

For most non-technical audiences, I describe FNF as internal penetration teams, but they do so much more.

A sound FNF team will:

Search for gaps and seams through both internal and external penetration testing

Provide surveillance on critical portions of an enterprise that sophisticated threat actors will try to leverage

“Shake the door knobs” on security controls and devices to ensure they are working as they should (e.g., are those access controls really in place between segmented VLANs?)

Conduct proofs-of-concept (POCs) on zero-day or globally known vulnerabilities to see if an environment is vulnerable

Conduct insider threat surveillance

To truly get the most out of these roles, it’s vital to look for certain attributes and skillsets. For me, it is really simple.

Team members must have extensive knowledge of how an enterprise environment is designed and possess a strong understanding of the most critical gaps and vulnerabilities.
In fact, one of Armor’s FNF team members is one of our first employees, so he is someone who has a long history with the environment and understands each and every dark corner.
It’s that important.

Understanding Threat Actors

FNF professionals should have the experience and contextual understanding of how real cyber threat actors target and attack their victims.

There are great courses — the SANS-certified Network Penetration & Ethical Hacker course comes to mind — that provide solid foundations for this knowledge base. However, experience in leveraging this knowledge to penetrate networks is a difficult skill for potential FNF team members to acquire. Normally, you can’t apply this knowledge without facing legal or ethical ramifications. That’s why alumni of national-level intelligence agencies and security consultants who have worked as both close- and remote-access penetration testers make great resources, as do former incident response and forensics investigators.

Strong Moral Compass

Members of the team must be of high moral character.

This might sound corny and obvious to some, but it is very important that members of the FNF team are very discreet in their activities and can be trusted with the highest levels of access within an organization.  These teams watch everyone.

They look for any and all anomalous activity — from the front-desk admin to the greater security team to the CSO.
It’s even a good idea to include them in HR situations where an employee could be an insider threat. I am confident that if my FNF discovered that my laptop was compromised, they would have the authority and moral courage to let me and the CEO know that I screwed up.

That is the litmus test for every member of our FNF team.

This type of character and visibility should be present in every organization.

More Than ‘Testers’

FNF teams are more than just internal pen-testers. When employed correctly, they will identify and shut down the cracks and seams in your security program.

They will validate that your security operations team is doing what they say they are doing.
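In code, that kind of validation can be as simple as "shaking the door knobs": attempting connections that segmentation policy says must fail. A minimal sketch (the expected-deny list below is a stand-in for a real network policy, and an FNF run would load it from documentation rather than hard-code it):

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """True if a TCP connection succeeds, i.e. the 'door' is open."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Every (host, port) pair here SHOULD be unreachable from this VLAN,
# so any successful connection is a segmentation violation.
expected_blocked = [("127.0.0.1", 47823)]
violations = [(h, p) for h, p in expected_blocked if port_reachable(h, p)]
print("segmentation violations:", violations)
```

Run from each network segment in turn, a script like this turns "are those access controls really in place?" from an assumption into a repeatable test.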

And they will lock in on any unscrupulous, suspicious or erratic behavior within your organization. I can’t imagine a mature security organization NOT having an FNF team.

That is, unless they are afraid to know the truth.

But as high-profile breaches have proven, this strategic ignorance will not prevent consequences.
It only exacerbates them.

Chief Security Officer at Armor, Jeff Schilling (Col., rtd.) is responsible for the cyber and physical security programs for the corporate environment and customer hosted capabilities. Jeff retired from the US Army after 24 years of service in July 2012.
In his last ...

MIT AI Researchers Make Breakthrough on Threat Detection

New artificial intelligence platform offers 3x detection capabilities with 5x fewer false positives. Researchers with MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) believe they can give the security world a huge boost in incident response and preparation with a new artificial-intelligence platform that could eventually become a secret weapon for squeezing the most productivity from security analyst teams. Dubbed AI2, the technology has shown the capability to offer three times the predictive capability and drastically fewer false positives than today's analytics methods. CSAIL gave a sneak peek at AI2 in a presentation to the academic community last week at the IEEE International Conference on Big Data Security, which detailed the specifics of a paper released to the public this morning.

The driving force behind AI2 is its blending of artificial intelligence with what researchers at CSAIL call "analyst intuition," essentially finding an effective way to continuously model data with unsupervised machine learning while layering in periodic human feedback from skilled analysts to inform a supervised learning model. "You can think about the system as a virtual analyst,” says CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with former CSAIL postdoc Ignacio Arnaldo, who is now a chief data scientist at PatternEx. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.” This offers the best of both worlds in what has become a bright line division in security analytics today.

For the most part, security systems today either depend on analyst-driven solutions that rely on rules created by human experts or they lean heavily on machine-learning systems for anomaly detection that trigger highly disruptive false-positive rates.

In the paper released today, Veeramachaneni, Arnaldo and their team showed how the system did when tested with 3.6 billion pieces of log data generated by millions of users over three months.

During this test, the platform was able to detect 85% of attacks, three times better than previous benchmarks, while at the same time reducing false positives by a factor of five. Melding human- and computer-based approaches to machine learning has long run into stumbling blocks because of the challenge of manually labeling cybersecurity data for algorithms.

The specialized nature of analyzing the data makes it a difficult data set to crack with typical crowdsourcing strategies employed in other arenas of big data analysis.

The average person on a site like Amazon Mechanical Turk would be hard-pressed to apply accurate labels to data indicating DDoS or exfiltration attacks, Veeramachaneni explained. Meanwhile, security experts have already tried several generations' worth of supervised machine-learning models, only to find that 'feeding' these systems ends up creating more work rather than saving analysts time.

This is what has led many organizations to dump early analytics solutions into the proverbial waste bin after experiencing those frustrations. AI2 is able to perform better by bringing together three different unsupervised learning models to sift through raw data before presenting it to the analyst.
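As a rough illustration of the human-in-the-loop pattern described here (not the AI2 implementation itself, which the paper details), the toy sketch below scores events with a simple unsupervised statistic, sends the top-scoring ones to a simulated analyst, and uses the resulting labels to tighten the next review queue:

```python
# Toy human-in-the-loop anomaly detection: an unsupervised scorer
# surfaces the top-k most anomalous events, a simulated analyst
# labels them, and the labels train a simple supervised threshold
# that shrinks the next review queue. Event values are assumed
# unique here to keep the index lookup unambiguous.
import statistics

def anomaly_scores(events):
    """Score each event by its distance from the mean, in std-dev units."""
    mean = statistics.fmean(events)
    stdev = statistics.pstdev(events) or 1.0
    return [abs(e - mean) / stdev for e in events]

def top_k(events, scores, k):
    """Return the k highest-scoring events for analyst review."""
    ranked = sorted(zip(scores, events), reverse=True)
    return [e for _, e in ranked[:k]]

def fit_threshold(labeled):
    """Pick the lowest score the analyst confirmed as a real attack."""
    attack_scores = [s for s, is_attack in labeled if is_attack]
    return min(attack_scores) if attack_scores else float("inf")

# Day one: the most abnormal events go to the analyst; here, a toy
# stream of per-hour login counts where values >= 50 are "attacks".
events = [10, 12, 11, 9, 60, 13, 55, 8, 70, 12]
scores = anomaly_scores(events)
queue = top_k(events, scores, k=5)

# Simulated analyst feedback: label each reviewed event.
labeled = [(scores[events.index(e)], e >= 50) for e in queue]
threshold = fit_threshold(labeled)

# Next pass: only events scoring above the learned threshold reach
# the analyst, so the review queue shrinks as feedback accumulates.
next_queue = [e for e, s in zip(events, scores) if s >= threshold]
print(sorted(next_queue))  # [55, 60, 70]
```

AI2's real models and features are far richer, but the cycle is the same: unsupervised ranking, human labels, retrained supervised filter.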
So on day one, the system offers 200 of the most abnormal events to an analyst, who then manually sifts through them to identify the real attacks.

That information is fed back into the system, and within a few days the unsupervised system is presenting as few as 30 to 40 events for verification. “The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni says. “That human-machine interaction creates a beautiful, cascading effect.”

Ericka Chickowski specializes in coverage of information technology and business innovation.
She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

'Threat Hunting' On The Rise

Rather than wait for the adversary to strike, many enterprises are actively going out looking for them. Rather than simply waiting for the inevitable data breach to happen, many organizations say they have begun more actively scouting around for and chasing down bad actors and malicious activity on their networks. Unlike the usual security approaches, threat hunting -- as some in the industry have taken to calling the trend -- combines the use of threat intelligence, analytics, and security tools with old-fashioned human smarts. Eighty-six percent of respondents in a recent SANS Institute survey of 494 IT professionals said their organizations were engaged in such activity.

About 75% said they had reduced their attack surface as a result of more aggressive threat hunting, while 59% credited the approach with enhancing incident response speed and accuracy.

All of this despite the fact that four in 10 did not have a formal threat-hunting program in place, and fewer still had any kind of repeatable process for hunting down threats. The survey results suggest that while organizations are benefiting from a more aggressive stance, many are still trying to figure out what a formal threat-hunting program needs to look like and how to attract the skills needed to make it work.

“Threat hunting plays a critical role in early detection of an adversary, as well as faster removal and repair of vulnerabilities uncovered during the hunt,” the SANS report noted. But the results also show that “threat hunting is still in its infancy in terms of formal processes and methods,” it said.

Ben Johnson, co-founder and chief security strategist at security vendor Carbon Black, says what separates threat hunting from the usual security practices is its emphasis on human skills. Threat hunting, Johnson says, is about “using humans to find bad versus having an alert fire from a piece of technology.” The concept is not new, he says. “[But it] is only now hitting the mainstream because it’s a sexy buzzword and organizations are tired of the long dwell times of the bad guys.”

The emphasis is on the application of the human mind to seek out activity that hasn’t been flagged yet by various detection technologies. “It’s a more open-ended action where hunches, gut-feelings, and general security and risk-based experience drive individuals to places and activity they should analyze,” he says. While tools are important, threat hunting is not specific to any technology, nor is it dependent on them. Rather, it is about knowing when, where, and what signs to look for.
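One simple form of that sign-seeking can be sketched in code. The example below is a hedged illustration (toy data, hypothetical log format, not any vendor's product): rather than waiting for an alert, the hunter asks "which parent-child process pairs are rare on this network?" and reviews the outliers by hand.

```python
# Hypothesis-driven hunt over toy endpoint telemetry: rare
# parent-child process pairs are surfaced as candidates for a
# human analyst to investigate, not as automatic alerts.
from collections import Counter

def rare_pairs(process_events, max_count=1):
    """Count (parent, child) process pairs across all hosts and
    return the ones seen at most max_count times network-wide."""
    counts = Counter((parent, child)
                     for _host, parent, child in process_events)
    return [pair for pair, n in counts.items() if n <= max_count]

# Toy telemetry: (host, parent_process, child_process)
events = [
    ("ws1", "explorer.exe", "chrome.exe"),
    ("ws2", "explorer.exe", "chrome.exe"),
    ("ws3", "explorer.exe", "chrome.exe"),
    ("ws2", "winword.exe", "powershell.exe"),  # classic phishing chain
]

suspects = rare_pairs(events)
print(suspects)  # [('winword.exe', 'powershell.exe')]
```

The output is a short list for a person to judge with context and experience, which is exactly the division of labor the hunters quoted here describe.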
“You might not know who’s going to rob a bank or when, but if you see what appears to be a getaway car sitting outside, that might tip you off to go look for a person with malicious intent inside the bank,” Johnson says.

For the most part, the industry has yet to coalesce around a clear definition for threat hunting, notes Tim Helming, director of product management at DomainTools. “But fundamentally, it's about not waiting to observe the effects of an attack.” Instead, it’s a strategy that begins with the assumption that the organization has been breached, and working backward from there to either detect the source -- or to make sure there isn’t an attack. “If you start from that assumption, you are more likely to find the evidence you're looking for.

Threat-hunting teams bring specific expertise to doing that,” he says. Getting there fully will take some time for the many organizations that say they are engaged in threat hunting.

The SANS survey showed that while organizations see the benefit in taking a more aggressive approach to finding threats on their network, few have allocated the necessary resources to make it happen.

A majority of the respondents in the survey still rely heavily on known indicators of compromise and manual analysis, for instance, and did not have the level of automation needed to enable a truly robust threat-hunting capability.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year ...

FBI Paid Hackers To Help Unlock San Bernardino Shooter's iPhone

The professional hackers were paid a one-time fee by the FBI to help break into Syed Farook’s iPhone. The FBI reportedly hired a group of professional hackers to help break into the San Bernardino shooter’s iPhone via a previously unknown software flaw in the device.

The information was used to develop hardware that helped the FBI unlock the iPhone’s four-digit passcode, the Washington Post reported. The method is only effective for a limited time. “The solution works only on iPhone 5Cs running the iOS 9 operating system,” said FBI Director James B. Comey.

Some security experts are requesting that the government disclose the flaw to Apple so the company can patch the vulnerability. Last week, Comey said that “If the government shares data on the flaws with Apple, they’re going to fix it and then we’re back where we started from.” But he later said the agency is considering whether or not to disclose it. According to The Post, the White House has set up a process in which federal officials evaluate whether a security vulnerability should be disclosed. Read the full story about the case in the Washington Post article.

Dark Reading's Quick Hits delivers a brief synopsis and summary of the significance of breaking news events.

For more information from the original source of the news item, please follow the link provided in this article.

Securing the Weakest Link: Insiders

No longer is a hoodie-wearing malicious hacker the most obvious perpetrator of an inside cyber attack. Massive, high-profile security breaches dominate today’s headlines, and consumers are swamped with notifications from organizations entrusted with private and sensitive data. But, increasingly, I am convinced that security professionals and the majority of security vendors are too focused on the wrong things. To many, it seems like the hoodie-wearing malicious hacker is the obvious enemy. We imagine that he (or she) has been sitting on a magical zero-day exploit, just waiting for the perfect moment to strike. While this type of attack can happen, it isn’t the most common form of attack that results in a breach; nor is it the biggest risk to your organization. Let’s look at what defines an “insider.” An insider is any individual who has authorized access to corporate networks, systems or data.

This may include employees, contractors, business partners, auditors or other personnel with a valid reason to access these systems.  Since we are increasingly operating in a connected fashion, businesses are more susceptible to insider threats than ever before.  The volume of critical data in organizations is exploding, causing more information to be available to more staff.  While this can boost productivity, it comes with inherent risks that need to be considered and mitigated, lest that privileged access be used against the organization.

Mitigating risk is all about identifying weak points in the security program.

The weakest point in any security program is people; namely, the insider.  Insider threats can be malicious; but more commonly, they are accidental.  Insiders can have ill intent, they can also be manipulated or exploited, or they can simply make a mistake and email a spreadsheet full of client information to the wrong email address.  They can lose laptops or mobile devices with confidential data, or misplace backup tapes.   These types of incidents are real and happen every day.

They can lead to disastrous results on par with any major, external cyberattack.  Traditionally, these threats are overlooked by most businesses because they are more concerned with the unknown malicious actor than the known staff member or business partner.  Organizations are sometimes reluctant to take the steps necessary to mitigate these threats and share important data through a trusted relationship.

They put little to no emphasis on implementing security controls for insiders. Those of you who believe that you can count on employees as a line of defense in the organization, think again.

A recent SailPoint Technologies survey found that 27 percent of U.S. office workers at large companies would sell their work password to an outsider for as little as $100. Years ago, a 2004 BBC News article reported that users were willing to trade passwords for chocolate bars. With employee engagement levels as low as 30 percent in some organizations, asking employees to be part of the solution may be asking too much.

Given the current insider situation, attackers need not resort to elaborate attack methods to achieve their objectives. A 2016 Balabit survey indicates that the top two attacker techniques are social engineering (e.g., phishing) and accounts compromised through weak passwords.

There are a number of ways that insiders can cause damage. In some cases, they are coerced by an outsider to extract data. This is common when organized crime is involved. In other cases, legitimate user access is used to extract data, but the user’s real credentials have been compromised and don't trigger security alerts focused on malware, compliance policies and brute-force attacks on accounts.

The good news is that organizations can do more now than ever before. Providers are responding with solutions that monitor email traffic, Web usage and network traffic, and use behavior-based pattern recognition to help detect who in the organization is trustworthy and who may be a risk. If a staff accountant is exporting customer data at 3 a.m., that behavior is flagged as anomalous and alerts security staff to a potential compromise. The employee who starts logging in later, leaving earlier and sending fewer emails to his manager may be disengaged or even disgruntled, and worth keeping an eye on. Although this is a murky area, HR can be a security advocate, identifying employees with discipline issues who could fit a risk profile.
While this may sound a little “big brother” in nature, some organizations may find it an appropriate way to mitigate the risks that come from insiders. Organizations without big security budgets still have some old-school mitigations available to them, such as employee awareness programs, employee background and reference checks, and exit interviews that gather information about attitudes toward the company and insights into working conditions. The clear lesson here is that organizations must look past the perimeter and know what is happening inside the network, in addition to what is happening outside.
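The behavior-based flagging described above can be sketched in a few lines. This is a minimal illustration with hypothetical field names and toy data, not any vendor's detection logic: build each user's baseline of active hours from historical events, then flag activity far outside that window, such as a 3 a.m. data export.

```python
# Toy insider-threat flagging: learn each user's normal working
# hours from history, then surface events outside that baseline.
from collections import defaultdict

def build_baselines(history):
    """Map each user to the set of hours they normally work."""
    baselines = defaultdict(set)
    for user, hour, _action in history:
        baselines[user].add(hour)
    return baselines

def flag_anomalies(events, baselines):
    """Return events occurring outside the user's normal hours."""
    return [(user, hour, action)
            for user, hour, action in events
            if hour not in baselines.get(user, set())]

# Historical activity: (user, hour_of_day, action)
history = [
    ("accountant", 9, "login"), ("accountant", 11, "report"),
    ("accountant", 14, "email"), ("accountant", 17, "logout"),
]

# Today's events include a 3 a.m. bulk export.
events = [
    ("accountant", 11, "report"),
    ("accountant", 3, "export_customer_data"),
]

alerts = flag_anomalies(events, build_baselines(history))
print(alerts)  # [('accountant', 3, 'export_customer_data')]
```

Real products model far more signals (email volume, login trends, data movement), but the principle is the same: the anomaly is defined relative to the insider's own established behavior.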

The most likely enemy won't fit the stereotype: beware that the threat could very well come from within.

Philip Casesa is one of the leading voices representing (ISC)², often commenting on high-profile cyberattacks, breaches and important cybersecurity topics. His expertise has been featured in Security Week, CIO, CSO, GovInfosecurity, Dark Reading, eSecurity Planet, Health ...