Trends and Analysis

Machine Learning In Cybersecurity Warrants A Silver Shotgun Shell Approach

When protecting physical or virtual endpoints, it's vital to have more than one layer of defense against malware. Cybersecurity is arguably the most rapidly evolving industry, driven by the digitalization of services, our dependency on Internet-connected devices, and the proliferation of malware and hacking attempts in search of data and financial gain. More than 600 million malware samples currently stalk the Internet, and that's just the tip of the iceberg in terms of cyber threats. Advanced persistent threats, zero-day vulnerabilities and cyber espionage cannot be identified and stopped by traditional signature-based detection mechanisms.

Behavior-based detection and machine learning are just a few technologies in the arsenal of some security companies, with the latter considered by some as the best line of defense.

What is Machine Learning?

The simplest definition is that it's a set of algorithms that can learn by themselves.

Although we’re far from achieving anything remotely similar to human-level capabilities – or even consciousness – these algorithms are pretty handy when properly trained to perform a specific repetitive task. Unlike humans, who tire easily, a machine learning algorithm doesn’t complain and can go through far more data in a short amount of time. The concept has been around for decades, starting with Arthur Samuel in 1959, and at its core is the drive to overcome static programming instructions by enabling an algorithm to make predictions and decisions based on input data.

Consequently, the quality of the training data used by a machine learning algorithm to build its model is what determines whether the algorithm's output is statistically sound.

The expression "garbage in, garbage out" is widely used to describe how poor-quality input produces incorrect or faulty output in machine learning algorithms.

Is There a Single Machine Learning Algorithm?

While the term is loosely used across all fields, machine learning is not an algorithm per se, but a field of study.

The various types of algorithms take different approaches towards solving some really specific problems, but it’s all just statistics-based math and probabilities.

Decision trees, neural networks, deep learning, genetic algorithms and Bayesian networks are just a few approaches towards developing machine learning algorithms that can solve specific problems. Machine learning is commonly broken down by the types of problems and tasks it tries to solve, and by the methods used to solve them.
Supervised learning is one such method, involving training the algorithm to learn a general rule based on examples of inputs and desired outputs. Unsupervised learning and reinforcement learning are also commonly used in cybersecurity to enable the algorithm to discover for itself hidden patterns in data, or dynamically interact with malware samples to achieve a goal (e.g. malware detection) based on feedback in the form of penalties and rewards.
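As a rough illustration of the supervised approach, the sketch below trains a classifier on labeled file features and asks it to judge a new sample, using the open source scikit-learn library. The feature names, values, and labels are invented for the example; a real product would train on millions of samples.

```python
# A minimal sketch of supervised learning on labeled file features.
# The features, values, and labels below are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row describes a file: [size_kb, byte_entropy, imported_api_count, is_packed]
X = [
    [120, 5.1, 140, 0],   # labeled benign
    [300, 5.6,  85, 0],   # labeled benign
    [ 95, 7.9,  12, 1],   # labeled malicious
    [410, 7.5,   7, 1],   # labeled malicious
]
y = [0, 0, 1, 1]          # 0 = benign, 1 = malware

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)           # the algorithm generalizes a rule from the examples

# Garbage in, garbage out: mislabeled training rows would corrupt this verdict.
print(model.predict([[100, 7.8, 10, 1]]))   # e.g. [1] -> flagged as likely malware
```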

Is Machine Learning Enough for Cybersecurity?

Some security companies argue that machine learning technologies are enough to identify and detect all types of attacks on companies and organizations. Regardless of how well trained an algorithm is, though, there is a chance it will "miss" some malware samples or behaviors. Even among a large set of machine learning algorithms, each trained to identify a specific malware strain or a specific behavior, chances are that one of them could miss something. Still, this silver shotgun shell approach towards security-centric machine learning algorithms is the best implementation, as multiple task-oriented algorithms are not only more accurate and reliable, but also more efficient.

But the notion that that's all cybersecurity should be about is misguided. When protecting physical or virtual endpoints, it's vital to have more than one layer of defense against malware.

Behavior-based detection that monitors processes and applications throughout their entire execution lifetime, web filtering, and application control are vital in covering all possible attack vectors that could compromise a system.

Liviu Arsene is a senior e-threat analyst for Bitdefender, with a strong background in security and technology. Reporting on global trends and developments in computer security, he writes about malware outbreaks and security incidents while coordinating with technical and ...

3 Lessons From The Yahoo Breach

Your organization must address these blind spots to detect sophisticated attacks. When an organization as established and trusted as Yahoo gets breached, it seems like there's no hope for the rest of us.

And in many ways, there isn't. Despite Yahoo's perimeter defenses, the company's network was still breached. Not once, but at least twice.

This indicates that these attacks were very sophisticated and carried out by highly motivated and well-funded attackers.

Although Yahoo's breaches demonstrate that it's virtually impossible to prevent every motivated attacker from getting past perimeter defenses and gaining access to a secure network, there are ways to detect breaches before massive exfiltration can occur.

When it comes to breach detection and response, most enterprises today still rely on sifting through logs from network appliances such as firewalls and web gateways.

This includes performing correlation using security information and event management systems to figure out how the breaches occurred.

The Yahoo breach exposed three key blind spots that need to be addressed to detect sophisticated attacks. (Editors' Note: In the spirit of transparency, SS8, the author's company, helps organizations detect and protect against network breaches using some of the concepts described in this article.)

1. Lack of application, identity, device, and geolocation information. Tools like NetFlow can't distinguish between multiple exchanges of information in a traffic flow (for example, an email session), and at best can only provide a summary of the entire flow.

They leave out valuable application-specific information such as To, CC, From, and Subject fields in an email, as well as the presence of any potential malicious attachments.
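To make the gap concrete, here is a minimal sketch of the application-layer detail a flow summary never captures, using Python's standard-library email parser on a captured message; the file name is hypothetical.

```python
# Minimal sketch: recovering the application-layer fields a flow summary leaves out.
# "captured_message.eml" is a hypothetical message reconstructed from traffic.
from email import policy
from email.parser import BytesParser

with open("captured_message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:   ", msg["From"])
print("To:     ", msg["To"])
print("CC:     ", msg["Cc"])
print("Subject:", msg["Subject"])

# Attachments are where a malicious payload usually hides.
for part in msg.iter_attachments():
    print("Attachment:", part.get_filename(), part.get_content_type())
```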
In addition, certain obfuscated protocols such as Tor can be difficult to detect on a network, but the ability to identify their presence and investigate these connections is critical to network security.

2. Challenges tied to archiving and network history lookup.
Although some tools can store network log data for long periods of time, it remains difficult to access that information quickly for the purpose of cyber investigations such as correlating potentially malicious network activity to an individual device or user. Meanwhile, packet recording tools can provide more granular detail into network data, but the economics of storing full packets over an extended period of time is often cost-prohibitive.

3. Lack of automated workflows for threat detection. The volume of new, constantly generated threat information, combined with a shortage of skilled cybersecurity personnel, often leads to "log and alert fatigue." This is generally due to a lack of automation for correlating the latest threat intelligence and tying it to actual events happening on the network.

Currently, most cyber investigators still have to manually perform a series of complicated steps to generate useful forensic information from log reports and the limited history of full packet capture tools. The Yahoo breach, like most advanced cyberattacks, was carried out over a long period of time, with attackers hiding their communications in the normal flow of network traffic.

According to the latest Verizon Data Breach Investigations report, dwell time — that is, the length of time an attacker is in a system before being detected — is averaging more than 200 days.  Perimeter defenses have to make point-in-time decisions to allow or block a specific communication.

Therefore, it isn't possible for them to detect advanced and persistent cyberattacks carried out over long periods of time.

Even though threats can breach the perimeter through a variety of attack vectors, most malicious activity can still be detected in the network before data exfiltration — the ultimate goal of the attack — takes place. If we want to prevent protracted infiltrations and exfiltrations, like the one experienced by Yahoo, we need to combine deeper network visibility, including the ability to rewind past activity, with constantly updated threat intelligence and automated workflows.
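A minimal sketch of what that automated correlation might look like, assuming a simple CSV of retained connection records and a feed of known-bad addresses (the file format, field names, and indicator values are invented for the example):

```python
# Minimal sketch: correlating a threat-intelligence feed with retained network history.
import csv

# Indicators of compromise pulled from an external feed (e.g., known C2 addresses).
iocs = {"203.0.113.45", "198.51.100.7"}

hits = []
with open("connection_log.csv") as f:          # columns: timestamp,src_host,dst_ip,bytes_out
    for row in csv.DictReader(f):
        if row["dst_ip"] in iocs:
            hits.append(row)

# Devices of interest: internal hosts that talked to known-bad infrastructure.
devices_of_interest = {row["src_host"] for row in hits}
for host in sorted(devices_of_interest):
    print("Investigate host:", host)
```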

This will allow us to discover indicators of compromise and devices of interest early in the breach cycle, which can be investigated using actual network history to pinpoint a compromise before massive data exfiltration takes place. Prevention is always the goal, but incident detection and fast response can save the day.

Dr. Cemal Dikmen is Chief Security Officer for SS8, which helps companies detect and protect against network breaches. He also works with the nation's leading telecommunications service providers as well as law enforcement and intelligence agencies on cybersecurity ...

Cyber Lessons From The NSA's Admiral Michael Rogers

Security teams must get better at catching intruders where we have the advantage: on our own networks. The Russians spent a year inside the Democratic National Committee before they were discovered.
It took five months for OPM to catch the thieves that stole the records of more than four million federal employees.
Intruders broke into Yahoo's systems in 2013, and we don't even know how long they were inside; Yahoo only discovered the hack when stolen data turned up for sale on the dark web. We invest more and more in our security, but the breaches just get bigger. How many more times does this have to happen before we accept that what we're doing isn't working?

Earlier this month, during a Senate Armed Services Committee hearing, Admiral Michael S. Rogers, the director of the National Security Agency, told us what we need to do to fix the problem, recognizing two different kinds of cybersecurity: keeping intruders out of networks, and identifying, containing, and ejecting them once they get inside. We must be able to do both, Admiral Rogers argued, noting that there is an entirely "different thought process, methodology, prioritization, and risk approach to dealing with someone who is already in your network versus trying to keep them out in the first place."

The head of the best offensive agency in the world is telling us exactly what we're missing, but we aren't listening. Most organizations still focus heavily on keeping attackers out, rather than trying to catch the ones that get in. A common bit of security wisdom is that hackers have the advantage because they only need to be right once to get in.

This is largely true today - hackers can launch assault after assault to try to break through your defenses, probing for a weakness until you slip.

And every security team, no matter how good, slips up eventually.

But once inside, the intruders are in your network - unfriendly territory.

They have to hide inside your environment, and they only have to slip up once to get caught.

Consider the White House, one of the most secure buildings on the planet. Jumping the wrought iron fence on Pennsylvania Avenue isn't the challenge.

The challenge is dealing with the Secret Service agents that tackle you as soon as your feet hit the lawn.

Cybersecurity teams should play to our strengths, and follow the example of both Admiral Rogers and the Secret Service. We should always work to keep intruders out, but some will always get in. We should heavily invest where we have the advantage: on our own networks.

At the White House, it is the Secret Service's visibility and control inside the grounds that shuts down intruders.

Crossing that lawn is exposed, and the Secret Service detects intruders in seconds.

Access within the compound is limited to only where you need to go for purposes of your meeting, so visitors that step out of bounds are easy to spot.

And once an intruder is detected, there is almost always an agent nearby, with a wide range of tools at their disposal to contain the intrusion.

This is the essence of the defender's advantage: visibility linked with control means that intruders are at a huge disadvantage once they get in.

Unfortunately, we have largely ceded this advantage on our networks.
Security teams often don’t know what devices are connected, or how those devices are talking to each other.

This offers an incredible opportunity for intruders, because by understanding our networks better than we do, they can operate at their strongest when they should be at their weakest.

If we are going to take Admiral Rogers' advice, this is what we must correct.

There are emerging technologies that could help us correct this imbalance. Organizations need real-time visibility into how their devices are communicating so they can identify intruders quickly. We should limit access to important systems; segment networks and important data; patch vulnerable systems; encrypt data.
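As a small illustration of the kind of visibility-plus-control check this enables, the sketch below flags device-to-device conversations that violate a segmentation policy; the zone map, allowed pairs, and flow records are invented for the example.

```python
# Minimal sketch: using visibility into device-to-device traffic to enforce segmentation.
zone_of = {
    "hr-laptop-17": "user",
    "web-01": "dmz",
    "db-01": "restricted",
}

# Which zone-to-zone conversations the policy allows (everything else is flagged).
allowed = {("user", "dmz"), ("dmz", "restricted")}

flows = [
    ("hr-laptop-17", "web-01"),   # expected traffic
    ("hr-laptop-17", "db-01"),    # a user device talking straight to the database
]

for src, dst in flows:
    pair = (zone_of.get(src, "unknown"), zone_of.get(dst, "unknown"))
    if pair not in allowed:
        print(f"ALERT: {src} -> {dst} violates segmentation policy {pair}")
```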

Each of these steps increases visibility and control.

They enable organizations to quickly identify intruders, act to constrain their movements, and eject them from the network. None of these tools are rocket science, but they require that we focus not just on keeping intruders out, but on catching them when they get in.

This reality makes Admiral Rogers' comments during the Senate hearing all the more poignant.
If there are two types of cybersecurity, why have we invested so heavily in the one where we are at a disadvantage, and given up the advantage we hold for the other?

As head of cybersecurity strategy, Nathaniel is responsible for thought leadership, public engagement, and overseeing Illumio's security technology strategy. Nathaniel is a regular speaker at leading industry events, and his writing has appeared in industry publications, the ...

Threat Attribution: Misunderstood & Abused

Despite its many pitfalls, threat attribution remains an important part of any incident response plan. Here's why. Threat attribution is the process of identifying actors behind an attack, their sponsors, and their motivations.
It typically involves forensic analysis to find evidence, also known as indicators of compromise (IOCs), and derive intelligence from them. Obviously, a lack of evidence or too little of it will make attribution much more difficult, even speculative.

But the opposite is just as true, and one should not assume that an abundance of IOCs will translate into an easy path to attribution. Let’s take a simple fictional example to illustrate: François is the chief information security officer (CISO) at a large US electric company that has just suffered a breach.

François’ IT department has found a malicious rootkit on a server which, after careful examination, shows that it was compiled on a system that supported pinyin characters. In addition, the intrusion detection system (IDS) logs show that the attacker may have been using an IP address located in China to exfiltrate data.

The egress traffic shows connections to a server in Hong Kong over a weekend, during which several archives containing blueprints for a new billion-dollar project were leaked. The logical conclusion might be that François' company was compromised by Chinese hackers stealing industrial secrets.

After all, strong evidence points in that direction and the motives make perfect sense, given many documented precedents. This illustrates one of the problems with attribution: evidence can be crafted in such a way that it points to a likely attacker, in order to hide the real perpetrator's identity.

To continue with our example, the attacker was in fact another US company and direct competitor.

The rootkit was bought on an underground forum, and the server used to exfiltrate data was vulnerable to SQL injection and had been taken over by the actual threat actor as a relay point. Another common problem leading to erroneous attribution is when the wrong IOCs have been collected or when they come with little context. How can leaders make a sound decision with flawed or limited information? Failing to properly attribute a threat to the right adversary can have moderate to serious consequences.
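One way to keep context front and center is to weight IOC types by how hard they are to fake before drawing conclusions. The sketch below is purely illustrative; the categories, weights, and threshold are assumptions, not an established scoring scheme.

```python
# Minimal sketch: weighting IOCs by how much attribution context they carry,
# rather than treating any match as proof. Weights are illustrative only.
WEIGHTS = {
    "source_ip_geolocation": 1,     # easily spoofed or relayed
    "compile_artifacts": 1,         # toolchains and rootkits can be bought
    "exfil_infrastructure": 2,      # may be a compromised relay point
    "attacker_controlled_domain": 4,
    "reused_private_key": 5,        # far harder to fake
}

def attribution_confidence(observed_iocs):
    """Return a rough score and a caution flag for a set of observed IOC types."""
    score = sum(WEIGHTS.get(ioc, 0) for ioc in observed_iocs)
    weak_only = all(WEIGHTS.get(ioc, 0) <= 2 for ioc in observed_iocs)
    return score, weak_only

score, weak_only = attribution_confidence(
    ["source_ip_geolocation", "compile_artifacts", "exfil_infrastructure"]
)
print(score, "speculative" if weak_only else "supported")   # -> 4 speculative
```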

Chasing down the wrong perpetrator can result in wasted resources, not to mention being blinded to the more pressing danger. But threat attribution is also a geopolitical tool, where flawed IOCs can come in handy to support assumptions and provide an acceptable motive for applying economic sanctions.

Alternatively, it can also be convenient to refute strong IOCs and a clear threat actor under the pretext that attribution is a useless exercise. Despite its numerous pitfalls, threat attribution remains an important part of any incident response plan.

The famous "know your enemy" quote from the ancient Chinese general Sun Tzu is often cited when it comes to computer security to illustrate that defending against the unknown can be challenging.
IOCs can help us bridge that gap by telling us whether attackers are simply opportunistic or the adversaries we did not expect.

Close The Gap Between IT & Security To Reduce The Impact...

IT and security teams work more effectively together than apart. Every modern organization operating today needs to rely on IT teams for service assurance within their networks and security professionals to keep everything safe. Organizations need both to operate effectively, not unlike a person employing both halves of the brain. However, because of the way IT and security have developed in siloed environments over the years, a gap has formed between them that decreases the effectiveness of both. It's probably no surprise to anyone working in technology why this gap has formed between these two teams.

For years, the primary focus of most organizations was IT.

The IT team had to get websites, applications, and communication systems up and running.

Then came a wave of cyberattacks, and a security team got added — some might say bolted on — as a separate entity with its own responsibilities. Even though both groups have the best interests of the host organization at heart, the gap still formed because they use different lexicons and tools, and have different priorities within network operations.
In some cases, they may not be physically based in the same location or, in the case of outsourcing, be managed by the same company.

That puts a lot of roadblocks in the way of collaboration, with the costs associated with effective systems integration being one of the leading factors impeding executives' decision making.

But leaders need to ask the question: Do I invest in securing my business effectively in a manner that allows systems to function to the full extent of their capabilities, or do I suffer the costs of being compromised? Hackers have already taken notice and are using this gap to their advantage.

The use of advanced tools, such as Hammertoss, is a perfect example.
It was designed to mimic normal user behavior, thus hiding from cybersecurity teams that didn't have enough visibility into network operations to spot the anomalies.

And it exfiltrated data so slowly, using such little bandwidth, that IT teams didn't detect anything amiss on their end. Had IT and security teams been working together on a unified platform with shared situational awareness, there likely would have been more than enough clues to unmask the threat before it could cause significant damage.
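Shared situational awareness could be as simple as both teams watching the same long-window egress statistics. The sketch below flags steady, modest outbound transfers of the kind a single-day bandwidth threshold would miss; the record format, host names, and thresholds are assumptions for illustration.

```python
# Minimal sketch: spotting low-and-slow exfiltration by looking at steady, small
# outbound transfers over many days instead of single large spikes.
from collections import defaultdict

# (day, internal_host, external_dst, bytes_out) records from egress monitoring.
records = [
    (1, "wkstn-42", "files.example.net", 180_000),
    (2, "wkstn-42", "files.example.net", 175_000),
    (3, "wkstn-42", "files.example.net", 190_000),
    (1, "wkstn-07", "cdn.example.com", 9_000_000),   # one-off large transfer
]

per_pair = defaultdict(list)
for day, host, dst, nbytes in records:
    per_pair[(host, dst)].append(nbytes)

for (host, dst), volumes in per_pair.items():
    # Many days of consistent, modest outbound volume to the same destination
    # is the pattern a single-day threshold would miss.
    if len(volumes) >= 3 and all(50_000 < v < 500_000 for v in volumes):
        print(f"Review {host} -> {dst}: {sum(volumes)} bytes over {len(volumes)} days")
```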

There are many advantages to having separate IT and security teams, with the most important being that it allows experts in both groups to hone specific skill sets that make them more effective at their jobs. But that doesn't mean that each must operate within a silo.

Combining security and IT operations can be as simple as encouraging more communication and providing tools that give each team visibility into areas supervised by the other. For security teams, a deeper understanding of how systems within the network are designed to perform would help them better spot and stop threats. Modern advanced persistent threats that use tools like Hammertoss, which have been successful at exploiting the gap, would have a much harder time.

Attacks that leverage native capabilities in the operating system or whitelisted websites/applications (such as tech support) would not be so invisible to those on the security team if they knew what day-to-day operations of those systems looked like from an IT perspective. IT would also benefit from better collaboration with security. One advantage would be allowing IT teams to think more like analysts when planning network expansions.

Generally, IT staffers working in a silo consider having more systems and more capacity to be a good thing, even though recent studies have found that as much as 40% of the capacity of most networks is not utilized.

From a security perspective, more capacity and more systems equal greater complexity, which adversaries can use to their advantage.

Combining those methodologies could lead to more efficient and less expensive network growth. I have seen firsthand the advantages when IT and security become more integrated.

During my time working in government, I managed a branch that performed root cause analysis and analyzed metrics for cyber defense across agencies. We worked with assessment teams as well as incident response teams, with a focus on identifying gaps across people, process, technology, and policy. What I always observed was that organizations that had their IT and security teams tightly integrated to foster collaboration had much greater success during our assessments. Ideally, security and IT should collaborate from the design phase of every new project.

That can act as a good starting point in bringing those two worlds together and teaching them how to work together. Once the advantages of doing that are observed by both groups, it will be much easier to get buy-in on the bigger goal of complete integration and collaboration. Only then will the gap finally start to close, eliminating a core danger to networks while also improving overall efficiency and cost savings.

Travis Rosiek serves as the CTO of Tychon, where he is responsible for product innovation and professional services. With nearly 20 years of experience in the security industry, Travis is a highly accomplished cyber-defense leader, having led several commercial and U.S. ...

Ransomware: How A Security Inconvenience Became The Industry's Most-Feared Vulnerability

There are all sorts of ways to curb ransomware, so why has it spread so successfully? The word "ransomware" conjures up images of dark cloaks and even darker alleys, and not surprisingly, the level of media attention has been unprecedented.

The fact that news stories measure the effect of ransomware in terms of cash helps grab the public's attention. (One analysis estimates more than $1 billion in ransoms were paid out in 2016). The most frightening thing about ransomware is that its success is built on trust. Ransomware often gains access by way of a clever email designed with the sole intention of winning the victim's confidence. "My skill is in my ability to get a bunch of people to click on the attachment," explains a malicious actor in a YouTube primer. Ransomware perpetrators have even started copying incentive tactics from legal industries.

There's the Christmas discount for victims who pay up, and a pyramid scheme offer, described in the press as "innovative": "If you pass this link and two or more people pay, we will decrypt your files for free!" This sophistication and business savvy speaks to ransomware's growth as an industry, and IT has had to take notice.

A recent survey of IT professionals from around the globe found that more than 50% of IT staff and more than 70% of CIOs see defending against ransomware as their #1 priority for 2017. What made ransomware into such a strong threat? Is it really a greater malice than traditional security threats or data theft? Or is it just more buzzworthy because the consequences are more dramatic? What's enabling the epidemic, and what produced the conditions for ransomware to flourish?

The Patching Conundrum

In a way, the rise of ransomware in 2016 was in the works for a long time.
Vulnerability patching has been a significant IT challenge for several years — among industrial control systems, 516 of 1,552 vulnerabilities discovered between 2010 and 2015 didn't have a vendor fix at the time of disclosure.

A full third of known "ways in" had to wait for a patch to be developed, providing ample time for criminals to do their worst. Reliance on distributed security appliances has only exacerbated the problem.

Even after patches become available, there's still a significant lag.

A combination of staff shortages, the volume of devices deployed across today's business networks, and distance has dramatically lengthened patch rollout times.
Varying reports put the gap at anywhere from 100 days to 18 months. Before ransomware even became a trend, the stage had been set for adversaries to gain access.

It Should Be Easy to Stop

From an IT perspective, one of the most aggravating things about ransomware is that even after the attack gains a foothold, it should be relatively easy to stop.

The file encryption — which actually does the damage — is the final stage of a multistep process.
In fact, there are several opportunities to block the attack before it affects valuable data.

First, if the attack is caught by URL filters or secure Web gateways, it will be averted. The second step is where the initial malware "drop" downloads the ransomware program.

To do this, it must connect back to the attacker's server from within the compromised network.
It's only after the ransomware program itself deploys inside the victim's environment that it encrypts local and network server files.

And still, before the process can launch, most ransomware must connect to a command-and-control server to create the public-private key pair that encrypts the data. At any point in the process, a network security stack has ample chance to block the malicious program from making these connections, and data lockdowns would never happen. With all these opportunities to stop the attack, how has ransomware been so successful?
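A minimal sketch of one such interception point, assuming an egress check against a blocklist of known command-and-control destinations (the destination names and addresses are invented for the example):

```python
# Minimal sketch: refusing new outbound connections to known command-and-control
# infrastructure before the key exchange can happen. Blocklist entries are invented.
KNOWN_C2 = {"update-checker.example-bad.com", "203.0.113.99"}

def allow_outbound(destination: str) -> bool:
    """Return False for destinations on the C2 blocklist; log and block them."""
    if destination in KNOWN_C2:
        print(f"BLOCKED outbound connection to {destination} (suspected C2)")
        return False
    return True

# If this check sits in the egress path, the ransomware never receives the
# public key it needs to start encrypting files.
allow_outbound("update-checker.example-bad.com")   # -> False, blocked
allow_outbound("api.github.com")                    # -> True
```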
Complexity upon Complexity

In November, security researchers discovered a mutation to exploit Scalable Vector Graphics (SVG), and this may provide a clue. SVG is an XML-based vector image format supported by Web-based browsers and applications.

Attackers were able to embed SVG files sent on Facebook Messenger with malicious JavaScript, ostensibly to take advantage of users' inclination to view interactive images. The way these files were manipulated is of much greater concern than either the app that was targeted, or the breach of users' trust: The SVG file had been loaded with obfuscated JavaScript code (see Figure 1).

These files automatically redirect users to malicious websites and open the door to eventual endpoint infection.
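Because a plain image has no legitimate need for script, one mitigation is to scan incoming SVG attachments for script elements or inline event handlers before they ever reach a browser. The sketch below is illustrative only; the file name is hypothetical and the check is structural rather than signature-based.

```python
# Minimal sketch: flagging SVG files that carry script, which a plain image never needs.
import xml.etree.ElementTree as ET

def svg_contains_script(path: str) -> bool:
    tree = ET.parse(path)
    for elem in tree.iter():
        tag = elem.tag.rsplit("}", 1)[-1].lower()   # strip the XML namespace
        if tag == "script":
            return True
        # Inline event handlers (onload=..., onclick=...) are another way to smuggle JavaScript in.
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True
    return False

print(svg_contains_script("incoming_attachment.svg"))
```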

The obfuscation tricks detection engines, and signature-based detection will always fall behind as code morphs to new signatures for the same threat.

Figure 1: The string "vqnpxl" is the obfuscation function. (Source: Cato Networks)

The above attack spotlights an urgent need to simplify. Modern networks see their vulnerability go up thanks to a patchwork of point solutions.
It's not sustainable to expect IT pros to update each point solution, and patch every existing firewall, when each new attack vector comes about.
Skilled attackers will always build new threats faster than IT can defend against them.

For ransomware, the critical test is, "How fast can you roll defenses out?"

Higher Stakes

When prevention is the only true cure, it's no wonder ransomware goes to the front of CIOs' agendas for 2017.

But the predominant trend toward cloud-based security and the promise of a "patch once, fix all" model are starting to correct the problem.

Cloud defenses promote quicker adaptation to ransomware mutations.

The idea is to consolidate all traffic from physical locations and mobile users, and integrate a single firewall service as a permanent "line of sight" between any given user, any given device, and a potential threat source.
In this respect, the cloud is not just about saving work, but also about improving speed to security. 2016 was the year that IT's reluctance to use the cloud backfired, and it played right into ransomware's hands.

Familiarity, comfort, and experience with using the cloud to keep networks safe may improve outcomes in 2017.

Gur is co-founder and CTO of Cato Networks. Prior to Cato Networks, he was the co-founder and CEO of Incapsula Inc., a cloud-based Web applications security and acceleration company.

Before Incapsula, Gur was Director of Product Development, Vice President of Engineering and ...

FBI Chief: Russian Hackers Exploited Outdated RNC Server, But Not Trump...

Russian state-sponsored hackers attacked Republican state political campaigns, and compromised an old Republican National Committee (RNC) server, but did not penetrate the "current RNC" or the campaign of president-elect Donald Trump, FBI director James Comey told lawmakers at a Senate hearing Tuesday. Reuters reports that Comey told lawmakers the FBI "'did not develop any evidence that the Trump campaign, or the current RNC, was successfully hacked.' He did not say whether Russia had tried to hack Trump's campaign." Russia did not release any information obtained through these compromises of state campaigns or old RNC email domains, Comey said, reports Reuters.

From the New York Times:

Mr. Comey said Tuesday that there was 'evidence of hacking directed at the state level' and at the R.N.C., 'but that it was old stuff.' He said there was no evidence 'that the current R.N.C.' — he appeared to be referring to servers at the committee's headquarters or contractors with current data — had been hacked. There is no evidence that computers used by the Trump campaign or the Clinton campaign were also compromised, though the personal email account of John D. Podesta, Hillary Clinton's campaign chairman, was copied and released as part of the Russian-ordered hack. According to the Times, the "old stuff" to which Comey referred appeared to be a single email server used by the RNC that was soon going out of service and contained outdated material.


Latest Ukraine Blackout Tied To 2015 Cyberattackers

A broad cyberattack campaign hitting finance, energy, and transportation in Ukraine was meant to disrupt but not cause major damage, researchers say.

S4x17 CONFERENCE -- Miami, Fla. -- A wave of fresh cyberattacks against power substations, defense, finance, and port authority systems in Ukraine last month appear to be the handiwork of the same attackers who in December 2015 broke in and took control of industrial control systems at three regional power firms in that nation and shut off the lights, researchers said here today.

A pair of researchers from Ukraine confirmed that a second power outage on Dec. 16, 2016, in the nation also was the result of a cyberattack. Ukrainian officials have identified Russian hackers as the perpetrators, and Ukraine President Petro Poroshenko recently revealed that his nation had suffered 6,500 cyberattacks at the hands of Russia in the past two months. But unlike the 2015 cyberattack that crippled some 27 power distribution operation centers across the country and affected three utilities in western Ukraine, the December 2016 attack hit the Pivnichna remote power transmission facility and shut down the remote terminal units (RTUs) that control circuit breakers, causing a power outage for about an hour.

Confirmation of yet another cyberattack campaign against Ukraine comes at a time when Russian nation-state hacking is a front-burner concern in the US and Western world, especially with the US intelligence community's recent report concluding that Russian president Vladimir Putin directed a wide-ranging campaign to influence the outcome of the 2016 US presidential campaign in favor of President-Elect Donald Trump. US officials say Russia employed cyber espionage attacks against policy groups, US primary campaigns, and the Democratic National Committee (DNC) in 2015, as well as propaganda to influence public opinion.

Marina Krotofil, a security researcher for Honeywell Industrial Cyber Security Labs, who today presented the newest findings on the Ukraine hacks, said the attackers appear to be using Ukraine "as a training ground for R&D" - basically a way to hone their attacks on critical infrastructure in general. She said in an interview that this testbed-type approach against Ukraine is considered by experts as a "standard practice" by Russian nation-state attackers for testing out their tools and attacks.

This recent campaign worries some US security experts. "The 'red lines' that conventional wisdom taught us would prevent disruptive or destructive attacks in critical infrastructure are dimming, if not gone," says Steve Ward, a senior director at Claroty. "With the 2015 Ukraine incident and the fact that no apparent repercussions followed, it is not surprising to be at the point where a follow-up attack has been confirmed … We should be very concerned with the potential of such attacks in America," Ward says.

Honeywell's Krotofil says the latest attacks began on Dec. 6 and lasted until Dec. 20, with each target getting hit one-by-one, via a combination of remote exploits and websites crumbling under distributed denial-of-service attacks. With the Ukraine rail system's server taken offline by the attacks, travelers were unable to purchase train tickets, and cargo shipments also were interrupted, she says. She said the attackers didn't appear to intend to wreak major damage on Ukraine's infrastructure, however. "It's hypothesized that this hacking campaign was to sabotage normal operations in Ukraine to cause disorganization and distrust," she said.
"The goal was to destabilize the economy and political situation." The attackers used many of the same tools that they deployed in the 2015 power grid blackout -- including BlackEnergy framework tools and KillDisk. "The attacks [grew] in sophistication," Krotofil said. "They were more organized, with several groups working together like a good orchestra.

That was different from" the 2015 attack that appeared to be more disjointed and disorganized, she said. A spear phish on July 14, 2016, kicked off the first phase of the attacks aimed at a Ukraine bank.

The attachment employed malicious macros that checked for sandboxes and hid its activity with obfuscation techniques.

The researchers did not confirm the initial attack vector for the electric grid, however. Via a translator, in a pre-recorded video shown during Krotofil's talk, Oleksii Yasynskyi - head of research for Information Systems Security Partners in Ukraine and a fellow investigator of the Ukraine attacks - said that the attackers were "several cybercriminal groups" working together. Yasynskyi said the groups employed legitimate IT administrative tools to evade detection as they gathered the necessary intelligence about the networks in the reconnaissance phase of the attacks. They gathered passwords for targeted servers and workstations, for instance, noted Yasynskyi, and they created custom malware for their targets. "The code was written by experts," he said.

Macro Got More Game

The attackers upped their malicious macro game significantly in the 2016 attacks in comparison to the 2015 attack.

Case in point: 69% of the code in their macro software was for obfuscation, 30% for duping forensic analysis, and only 1% of the code actually corresponded to the macro's ability to launch malware, according to Yasynskyi. "In essence, this macro is a sophisticated container for infiltrating and delivering malicious code for actual intrusion by the attackers," he said. The attackers this time around also put extra effort into making malware analysis as onerous as possible. "It writes itself into certain parts of memory, like a puzzle," he said. "It unwraps only parts it needs at the time." "This only confirms the theory that this was executed by several teams: infrastructure, instruments to automate the analysis and penetration, and to deliver the malicious code," he said. The dropper malware, a custom tool called Hancitor, had two different samples, but some 500 software builds during a two-week period, demonstrating the level of software development by the attackers, Krotofil noted. The attackers also obviously had done their homework on the inner workings of the industrial processes there in order to wreak havoc on the power grid. "You can't simply get" that information or documents on the Net, Krotofil said. Interestingly, while it took some four months to investigate the 2015 Ukraine power grid attack, it took Yasynskyi and the other investigators only two weeks to investigate the 2016 attacks.

They were able to detect similar methods and tools in the second attacks based on the research from the previous attacks. Michael Assante, SANS lead for ICS and SCADA security, in a presentation here today noted that the Ukraine attacks raise new issues for ICS/SCADA operators. "In the case of Ukraine, it opened up a lot of questions" after that 2015 attack about how to engage when such physically disruptive events hit, such as who should identify a cyberattack, how to respond, and what protocol to follow if the attack causes damage.

Kelly Jackson Higgins is Executive Editor at DarkReading.com.
She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...

Record Number of Vulns For Adobe, Microsoft, Apple In '16, Says...

Advantech makes surprise debut on vulnerability list at number two, right behind Adobe.

Like rules, records were made to be broken, and the security industry's largest vulnerability reporting and remediation program didn't disappoint, with 674 total advisories in 2016 - eight more than the year before, according to a report this week from the Zero Day Initiative. ZDI, launched in 2005, encourages responsible reporting of zero-day vulnerabilities to affected vendors by financially rewarding researchers, and protecting customers while the affected vendor creates, tests, and delivers a patch. ZDI paid out nearly $2 million in rewards in 2016, the group reported this week. As for the bigger software vulnerability picture, Adobe products accounted for 149 of the advisories, or 22% of the ZDI total, the same share as in 2015.

Adobe Reader, Acrobat, and Flash were the main culprits, and ZDI expects the trend to continue as more browsers block Flash by default.
In addition, Adobe doesn't operate its own bounty program for bugs and vulnerabilities, unlike Microsoft and Apple.

And Adobe is already off to an inauspicious start in 2017; ZDI communications manager Dustin Childs tells Dark Reading his organization just notified the vendor of eight new vulnerabilities. Microsoft fell to number three on ZDI's 2016 list and it had a lower percentage of published ZDI advisories - 11% - down from the previous year's 17%.

But those numbers don't tell the whole story, since Microsoft itself published more security bulletins in 2016 than ever before. Microsoft's biggest problem was the continued targeting of browsers; while its Edge browser was supposed to be much more secure than Internet Explorer, almost two-thirds (64%) of Microsoft-related ZDI advisories were related to browsers. Advisories for Apple products made a significant jump in 2016.

There were 61 ZDI advisories posted for the vendor's products in 2016, or 9% of the total, more than what it posted in 2014 and 2015 – 4% both years.

The jump isn't completely surprising to Childs, who notes Apple's more pervasive presence with desktop computing, not to mention its smartphone dominance.

The installed base of OSX and iOS combined is larger than Windows, Childs adds, and he predicts more Apple vulnerabilities in 2017 through ZDI and Apple's own bug bounty program. Trend Micro, which owns ZDI, also predicts the percentage of Microsoft advisories will continue to drop in 2017 while Apple's increases.

Industrial computing vendor Advantech made its debut on the ZDI list at number two with 112 advisories published – 17% of the published advisories. "This doesn't necessarily mean this vendor has a wide surface attack area," Childs writes in a ZDI blog post. "All of these cases came in through the same anonymous researcher, meaning the researcher found a specific type of bug prevalent in their systems," Childs says, adding that the same researcher reported no bugs from any other vendor in 2016. While Advantech's issues were a surprise, Childs says he also expected to see more enterprise software cropping up on the 2016 list from vendors like HP, Dell, or Oracle. "There's a bunch of enterprise software that hasn't been closely looked at, so there's a lot of bugs for researchers to find," he says.

And though browsers have become well-trod territory, this business middleware market is mostly untouched. Nonetheless, infosec professionals and executives should be careful with lists like these, since looking at the numbers without much context doesn't make for better security decisions in the future, warns Jeremiah Grossman, a security researcher and chief of security strategy at SentinelOne. "These figures see significant and subjective variation in what's included, how things are counted, and more, which can largely throw off the numbers from one year to the next," he tells Dark Reading in an email. "And of course cybercriminals really don't care how many reported vulns a particular product has, mostly because they only need one (or maybe a small handful) that's wired into their exploitation tools for easy deployment." Childs counters that it's important to understand how ZDI's list gets compiled. "It's important to see how the list is created - these are the bugs coming through our program," he says. "They may not be representative of all the research going on… we don't do anything with mobile yet, for example.

But if you look back at the last couple years, you can definitely see some trends," like Adobe's recurring presence.

Terry Sweeney is a Los Angeles-based writer and editor who has covered technology, networking, and security for more than 20 years. He was part of the team that started Dark Reading and has been a contributor to The Washington Post, Crain's New York Business, Red Herring, ...

'Molecular' Cybersecurity Vs. Information Cybersecurity

When it comes to industrial processes, security begins at the molecular level. Not all cybersecurity risk is created equal.

Case in point: when Sony was hacked, information was stolen, systems were wiped, and society was temporarily deprived of a Seth Rogen movie.

These were mostly bad outcomes, and Sony certainly suffered a significant financial loss. Now, imagine a similar attack on an oil refinery where compromised systems include the proprietary industrial control systems that manage volatile processes. When I say volatile, I'm referring to processes where a boiler is heating oil by hundreds of degrees separating molecules to produce gasoline and other products. With appropriate access, a bad actor can change how hot that boiler is configured to run.
If you combine that with disabled safety systems, production, environments —  even lives —  can be severely affected.

A German steel mill experienced this in 2014 when a boiler exploded after an industrial control system attack; and 225,000 Ukrainians lost power in December 2015 when a hacker group shut down substation systems. I don't want to diminish the impact that malicious attacks have on our financial industry and others. However, chemical, oil and gas, and power generation attacks can have much graver outcomes — yet, surprisingly, these industries are in some ways the most vulnerable.
If you examine cybersecurity within a typical industrial process company, you find many of the same protections you find in any other company — antivirus software, firewalls, application whitelisting, and more.

These security controls are focused on protecting workstations, servers, routers, and other IT-based technology.
In other words, they protect the flow of information. But systems that move and manipulate molecules (for example, oil separating into constituent parts) are not nearly as secure. Why? Because many of these systems were built and deployed before cybersecurity was even a thing.
Industrial facilities rely primarily on layered defenses in front of industrial control systems, security by obscurity (think complex systems on which it takes years to become an expert), and air gapping (physical isolation from other networks). The reality is that layered defenses and air gapping can be bypassed.
Industrial facilities, for instance, periodically have turnarounds where they perform maintenance or switch production output.

This requires hundreds of engineers — many of them third-party ones — working multiple shifts to get production back online.

They are authorized users who could accidentally (or intentionally) introduce malicious code or configuration changes into a control system. Relying on obscurity as a strategy only has limited effect. With the rise of nation-sponsored cyber warfare, the capability of manipulating complex control systems is also on the rise.

The Ukrainian power attack, for instance, included malicious firmware updates that were believed to have been developed and tested on the hacking group's own industrial control equipment. Heck, you can even buy a programmable logic controller (a type of industrial control system) on eBay.

Potential Impact

The Obama administration's Commission on Enhancing National Cybersecurity report was released in early December.

There were some good recommendations in the report, particularly around having a security rating system for Internet of Things devices. What I found disturbing was that the report stated the distinction between critical infrastructure systems (found in the industries highlighted in this post plus others, such as transportation, that also rely on industrial control systems) and other devices is becoming impractical.

The point is that in a connected world, everything is vulnerable and attacks can come from any quarter.
It's a fair point, but this idea diminishes the importance of impact, which is essential to driving priority, policy, and investment decisions. Protecting the systems that manipulate molecules must take priority and, in some cases, precedence over the ones that maintain information. So, where do you start? Where should investment flow? Most companies need to start at the beginning and simply begin to track the cyber assets they have in an industrial facility.

Another fun fact: many don't track that data today, or do so in a highly manual way, which means there are data gaps and errors. Without visibility into the cyber assets in a plant, you can't effectively secure them. And when we talk about cyber assets, any credible inventory plan must include the controllers, smart field instruments, and other systems that manage the volatile processes we've discussed (these systems, by the way, make up 80% of the cyber assets you find in an industrial facility).

This can't happen in a spreadsheet, but it must happen through automation software that can pull data from the many disparate, proprietary systems that can exist in a single facility. With an automated, detailed inventory that is updated regularly, companies can begin to do the things they know are important for securing any system — they can monitor for unauthorized changes, set security policies, and more.
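A minimal sketch of what that automation might do, assuming asset records can be exported from two plant systems and compared against a saved baseline (the source systems, fields, and values are invented for the example):

```python
# Minimal sketch: merging asset records from disparate plant systems into one
# inventory and flagging unexpected changes. Sources and fields are invented.
from datetime import date

# Records as they might come back from two different proprietary systems.
dcs_export   = [{"tag": "PLC-104", "firmware": "2.1.7", "zone": "boiler"}]
historian_db = [{"tag": "PLC-104", "firmware": "2.1.7", "zone": "boiler"},
                {"tag": "RTU-221", "firmware": "5.0.3", "zone": "substation"}]

inventory = {}
for source in (dcs_export, historian_db):
    for rec in source:
        inventory[rec["tag"]] = {**rec, "last_seen": date.today().isoformat()}

# Compare against yesterday's baseline to catch unauthorized changes.
baseline = {"PLC-104": {"firmware": "2.1.6"}, "RTU-221": {"firmware": "5.0.3"}}
for tag, rec in inventory.items():
    old = baseline.get(tag)
    if old and old["firmware"] != rec["firmware"]:
        print(f"Firmware change on {tag}: {old['firmware']} -> {rec['firmware']}")
```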

Doing so allows companies not only to secure information, but also secure the molecules — the lifeblood of an industrial process company.

As General Manager of the Cybersecurity Business Unit at PAS, David Zahn leads corporate marketing and strategic development of the PAS Integrity Software Suite.

David has held numerous leadership positions in the oil and gas, information technology, and outsourcing ...

DHS Designates Election Systems As Critical Infrastructure

The Department of Homeland Security has deemed the nation's voting system as part of its critical infrastructure, citing security reasons.

The US Department of Homeland Security (DHS) has designated the nation's election system as part of its critical infrastructure, a status change it has been debating for the past few months. There are 16 critical infrastructure sectors and 20 subsectors. In a statement issued Jan. 6, DHS Secretary Jeh Johnson explained why the US voting system will become a subsector of the Government Facilities critical infrastructure division. "Election infrastructure is vital to our national interests, and cyber attacks on this country are becoming more sophisticated, and bad cyber actors -- ranging from nation states, cyber criminals and hacktivists -- are becoming more sophisticated and dangerous," he said.

This infrastructure spans all systems used to manage elections, including storage facilities, polling locations, and voter registration databases. As critical infrastructure, these are eligible for prioritized security assistance from the DHS, if requested. Further, voting systems will be part of US efforts to improve incident response capabilities, as well as streamlined access to both classified and unclassified information shared by critical infrastructure operators.

Information sharing is a key benefit in this case, says Travis Farral, director of security strategy at Anomali and former elections judge in Texas. The United States' infrastructure for tallying votes is decentralized, which is a "double-edged sword" in terms of security. "It's harder for someone to attack a single authority," he says, because voting systems are different in each state. "But when trying to dictate security for varying apparatuses, it's difficult for the federal government to protect all that."

The elevation to critical infrastructure will enable local and state election organizations to quickly share information and connect with the DHS to receive updates related to elections, security events, or the geopolitical environment, Farral continues. It's a benefit to local municipalities where funding is low and officials want to ensure the integrity of elections. The critical infrastructure designation will give them multiple resources to stay connected and receive a coordinated, streamlined flow of information.

Johnson noted many state and local officials were against the designation, due to concerns about federal takeover of local election processes. He explained how the designation "does not mean a federal takeover, regulation, oversight or intrusion concerning elections in this country. This designation does nothing to change the role state and local governments have in administering and running elections." Farral echoes this, noting how the power of election processes still resides with each state. Greater steps would have to be taken in order to change how elections are run.

However, the future is unclear. "This may not be where things end," he notes, acknowledging the uncertainty of a new president and administration. "It's possible there may be additional changes, or some legislation in Congress designed to make more changes." Individual states may implement their own changes to improve election security, he adds.

This news arrived at a critical time for US cybersecurity. On the same day it was issued, the US Office of the Director of National Intelligence released a report explaining Russia's role in conducting cyberattacks to interfere with the US election.
This likely wasn't by chance. "This announcement was probably timed to coincide with the release of the report, but it's hard to say for certain," says Farral.

Kelly is an associate editor for InformationWeek. She most recently reported on financial tech for Insurance & Technology, before which she was a staff writer for InformationWeek and InformationWeek Education. When she's not catching up on the latest in tech, Kelly enjoys ...