
Tag: Fingerprinting

The Facebook malware that spread last week was dissected in a collaboration with Kaspersky Lab and Detectify. We were able to get help from the companies and cloud services involved to quickly shut down parts of the attack and mitigate it.
Legislation, to be signed by Texas Gov. Greg Abbott, paves the way for some comebacks.
Device fingerprinting was used to prevent account fraud.
"Such Fourth Amendment intrusions are [not] justified based on the facts articulated."
The way Firefox caches intermediate CA certificates could allow for the fingerprinting of users and the leakage of browsing details, a researcher warns.
Online tracking gets more accurate and harder to evade.
Odds are, software (or virtual) containers are in use right now somewhere within your organization, probably by isolated developers or development teams to rapidly create new applications.

They might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know if they are running in their companies. In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers.

Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system.

They are flexible, scalable, and easy to use, and they can pack a lot more applications into a given physical infrastructure than is possible with VMs.

And because they share the host operating system, rather than relying on a guest OS, containers can be spun up instantly (in seconds versus the minutes VMs require).

A June 2016 report from the Cloud Foundry Foundation surveyed 711 companies about their use of containers. More than half had either deployed containers or were in the process of evaluating them. Of those, 16 percent had already mainstreamed the use of containers, and 64 percent expected to do so within the next year.
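To make the speed difference concrete, here is a minimal sketch using the Docker SDK for Python (an assumption on my part; it presumes a local Docker daemon and the docker package installed) that times how long a throwaway container takes to run and exit:

```python
# Minimal sketch: start a disposable Alpine container, run one command, and time it.
# Assumes a local Docker daemon and the Docker SDK for Python (pip install docker).
import time
import docker

client = docker.from_env()

start = time.time()
output = client.containers.run("alpine:3.16", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
print(f"container started, ran, and exited in {time.time() - start:.2f}s")
```

On a typical host this completes in roughly a second, versus the minutes a full VM boot would take.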
If security teams want to seize the opportunity (borrowing a devops term) to “shift security to the left,” they need to identify and involve themselves in container initiatives now. Developers and devops teams have embraced containers because they align with the devops philosophy of agile, continuous application delivery. However, as is the case with any new technology, containers also introduce new and unique security challenges.

These include the following:

Inflow of vulnerable source code: Because containers are open source, images created by an organization’s developers are often updated, then stored and used as necessary. This creates an endless stream of uncontrolled code that may harbor vulnerabilities or unexpected behaviors.

Large attack surface: In a given environment, there would be many more containers than there would be applications, VMs, databases, or any other object that requires protecting. The large numbers of containers running on multiple machines, whether on premises or in the cloud, make it difficult to track what’s going on or to detect anomalies through the noise.

Lack of visibility: Containers are run by a container engine, such as Docker or Rkt, that interfaces with the Linux kernel. This creates another layer of abstraction that can mask the activity of specific containers or what specific users are doing within the containers.

Devops speed: The pace of change is such that containers typically have a lifespan four times shorter than that of VMs, on average. Containers can be executed in an instant, run for a few minutes, then stopped and removed. This ephemerality makes it possible to launch attacks and disappear quickly, with no need to install anything.

“Noisy neighbor” containers: A container might behave in a way that effectively creates a DoS attack on other containers. For example, opening sockets repeatedly will quickly bring the entire host machine to a crawl and eventually cause it to freeze up.

Container breakout to the host: Containers might run as a root user, making it possible to use privilege escalation to break the “containment” and access the host’s operating system.

“East-west” network attacks: A compromised container can be leveraged to launch attacks across the network, especially if its outbound network connections and ability to run with raw sockets were not properly restricted.

The best practices for securing container environments are not only about hardening containers or the servers they run on after the fact.

They’re focused on securing the entire environment.
Security must be considered from the moment container images are pulled from a registry to when the containers are spun down from a runtime or production environment.

Given that containers are often deployed at devops speed as part of a CI/CD framework, the more you can automate, the better. With that in mind, I present this list of best practices. Many of them are not unique to containers, but if they are “baked” into the devops process now, they will have a much greater impact on the security posture of containerized applications than if they are “bolted” on after the fact.

Implement a comprehensive vulnerability management program.

Vulnerability management goes way beyond scanning images when they are first downloaded from a registry.

Containers can easily pass through the development cycle with access controls or other policies that are too loose, resulting in corruption that causes the application to break down or leading to compromise at runtime.

A rigorous vulnerability management program is a proactive initiative with multiple checks from “cradle to grave,” triggered automatically and used as gates between the dev, test, staging, and production environments.

Ensure that only approved images are used in your environment.

An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the inflow of container images into your development environment.

This means using only approved private registries and approved images and versions.
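As one hedged illustration of such a control (a minimal sketch; the registry hostname and digest below are made-up placeholders, not anything from the article), a CI gate might refuse any image reference that is not pulled from the approved private registry and pinned to a known digest:

```python
# Minimal CI-gate sketch: allow only digest-pinned images from an approved registry.
# The registry name and digest are hypothetical placeholders.
APPROVED_REGISTRY = "registry.internal.example.com"
APPROVED_DIGESTS = {
    "registry.internal.example.com/base/alpine@sha256:" + "0" * 64,
}

def image_is_approved(image_ref: str) -> bool:
    """Allow only digest-pinned images from the approved private registry."""
    if not image_ref.startswith(APPROVED_REGISTRY + "/"):
        return False                      # unknown or public registry
    if "@sha256:" not in image_ref:
        return False                      # tag-only references can drift over time
    return image_ref in APPROVED_DIGESTS  # digest must be on the allowlist

if __name__ == "__main__":
    for ref in [
        "registry.internal.example.com/base/alpine@sha256:" + "0" * 64,
        "docker.io/library/ubuntu:latest",
    ]:
        print(ref, "->", "allowed" if image_is_approved(ref) else "blocked")
```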

For example, you might sanction a single Linux distro as a base image, preferably one that is lean (Alpine or CoreOS rather than Ubuntu) to minimize the surface for potential attacks.

Implement proactive integrity checks throughout the lifecycle.

Part of managing security throughout the container lifecycle is to ensure the integrity of the container images in the registry and enforce controls as they are altered or deployed into production.
Image signing or fingerprinting can be used to provide a chain of custody that allows you to verify the integrity of the containers.

Enforce least privileges in runtime.

This is a basic security best practice that applies equally in the world of containers. When a vulnerability is exploited, it generally provides the attacker with access and privileges equal to those of the application or process that has been compromised.
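One way to keep those privileges to a minimum is to set them explicitly at launch time. The following is a minimal sketch using the Docker SDK for Python; the specific options are illustrative assumptions about a hardening baseline, not a complete profile:

```python
# Minimal sketch: launch a container as an unprivileged user with a read-only
# root filesystem, all Linux capabilities dropped, and no privilege escalation.
# Assumes a local Docker daemon and the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3.16",
    ["sleep", "60"],
    user="10001:10001",                   # non-root UID/GID
    read_only=True,                       # read-only root filesystem
    cap_drop=["ALL"],                     # drop every Linux capability
    security_opt=["no-new-privileges"],   # block setuid-style escalation
    pids_limit=64,                        # keep a runaway process tree in check
    detach=True,
)
print(container.short_id, "running with reduced privileges")
```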

Ensuring that containers operate with the least privileges and access required to get the job done reduces your exposure to risk.

Whitelist files and executables that the container is allowed to access or run.

It’s a lot easier to manage a whitelist when it is implemented from the get-go.

A whitelist provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly, and it allows you to maintain a more stable and reliable environment. Limiting containers so that they can access or run only pre-approved or whitelisted files and executables is a powerful method to mitigate risk.
It not only reduces the attack surface, but also can be employed to provide a baseline for anomalies and prevent the use cases of the “noisy neighbor” and container breakout scenarios described above.

Enforce network segmentation on running containers.

Maintain network segmentation (or “nano-segmentation”) to segregate clusters or zones of containers by application or workload.
In addition to being a highly effective best practice, network segmentation is a must-have for container-based applications that are subject to PCI DSS.
It also serves as a safeguard against “east-west” attacks.

Actively monitor container activity and user access.

As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity.

Log all administrative user access to containers for auditing.

While strong user access controls can restrict privileges for the majority of people who interact with containers, administrators are in a class by themselves. Logging administrative access to your container ecosystem, container registry, and container images is a good security practice and a common-sense control.
It will provide the forensic evidence needed in the case of a breach, as well as a clear audit trail if needed to demonstrate compliance.

Much of the notion of “baking security into IT processes” relates to automating preventive processes from the outset.

Getting aggressive about container security now can allow for containerized applications to be inherently more secure than their predecessors. However, given that containers will be deployed ephemerally and in large numbers, active detection and response -- essential to any security program -- will be critical for containerized environments.

Container runtime environments will need to be monitored at all times for anomalies, suspected breaches, and compliance purposes. Although there’s a growing body of knowledge about container security in the public domain, it’s important to note that we’re still in the early stages.

As we discover new container-specific vulnerabilities (or new-old ones such as Dirty COW), and as we make the inevitable mistakes (like the configuration error in Vine’s Docker registry that allowed a security researcher to access Vine's source code), best practices are sure to evolve. The good news, as far as container adoption goes, is it’s still early enough to automate strong security controls into container environments.

The not-so-good news is security teams need to know about container initiatives early enough to make that happen, and more often than not they don’t.

To realize the potential security improvements that can be achieved in the transition to container-based application development, that needs to change ... soon.

Educating yourself about containers and the security implications of using them is a good start.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth.

The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers.
InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Send all inquiries to newtechforum@infoworld.com.
The U.S. Federal Trade Commission is scheduled to announce Wednesday a “prize competition” for a tool that can be used against security vulnerabilities in internet of things systems. The prize pot is up to $25,000, with $3,000 available for each honorable mention.

The winners will be announced in July.

The announcement is scheduled to be published Wednesday in the Federal Register. The tool, at the minimum, will “help protect consumers from security vulnerabilities caused by out-of-date software,” said the FTC. The government’s call for help cites the use of internet-enabled cameras as a platform for a Distributed Denial of Service (DDoS) attack last October. Weak default passwords were blamed. The FTC wants automatic software updates for IoT devices and up-to-date physical devices also.
Some devices will automatically update, but many require consumers to adjust one or more settings before they will do so, said the FTC in its announcement.

The winning entry could be a physical device, an app or a cloud-based service. This isn’t the first time the FTC has offered cash for software tools.
In 2015, it awarded $10,500 to developers of an app that could block robocalls. The winners of that contest were Ethan Garr and Bryan Moyles, the co-inventors of the RoboKiller app, both of whom work for TelTech Systems, a communications technology start-up.

Their winning app was initially developed as a side project. “It gave us something to work toward,” said Garr, of the FTC contest, in an interview. “It gave us a deadline, which in technology is really valuable because software projects can go on forever without one.” Their contest submission included an iPhone with the installed app.

They also had to pay their own expenses to attend a DefCon conference in Las Vegas for the FTC’s final judging. “I don’t think they get enough credit for how passionate they are in solving the problem,” said Garr, the vice president of product at TelTech, of the people involved in the FTC’s effort. The initial version of RoboKiller forwarded all calls to the app’s servers for analysis.
It used an “audio-fingerprinting algorithm” to quickly determine whether it was a robocall or not. A new version incorporates Apple’s new CallKit technology to identify robocalls. Users can also set up conditional call forwarding to TelTech’s servers for those calls that are declined, for instance.
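TelTech has not published how its algorithm works. Purely as an illustration of the general idea behind audio fingerprinting, a toy version can hash the dominant frequency of each short frame of call audio, so that replays of the same recorded robocall produce the same digest (everything below is a hypothetical sketch, assuming numpy is available):

```python
# Toy audio fingerprint: hash the dominant FFT bin of each short frame.
# This is NOT RoboKiller's algorithm -- only a minimal illustration of the idea.
import hashlib
import numpy as np

def fingerprint(samples: np.ndarray, sample_rate: int = 8000, frame_ms: int = 50) -> str:
    frame_len = int(sample_rate * frame_ms / 1000)
    peaks = []
    for start in range(0, len(samples) - frame_len, frame_len):
        frame = samples[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame))
        peaks.append(int(np.argmax(spectrum)))      # index of the loudest frequency
    # The same recording played again yields the same sequence of peaks, and
    # therefore the same digest, which can be matched against known robocalls.
    return hashlib.sha256(np.array(peaks, dtype=np.int32).tobytes()).hexdigest()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    call_audio = rng.standard_normal(8000 * 5)      # stand-in for 5 seconds of audio
    print(fingerprint(call_audio))
```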

The service will check multiple databases for information about the call, and the developers plan to soon roll out an additional feature that will show a photo of the caller from social media.
TelTech charges $1 per month for the service. The FTC’s IoT patching plan may have limits. One issue with IoT security is embedded devices that may continue to operate long after their last patch, and may even survive the companies that created the systems. This story, "FTC sets $25,000 prize for automatic IoT patching," was originally published by Computerworld.
A sandboxed version of the Tor Browser was released over the weekend, and while there are still some rough edges and bugs – potentially major, according to the developer – it could be the first step toward protecting Tor users from recent de-anonymization exploits.

Yawning Angel, a longtime Tor developer, unveiled version 0.0.2 in a post to the Tor developers mailing list on Saturday. Official binaries, available only for Linux distributions, won’t be out until later this week. Until then, prospective users who want to try it out can build it themselves by downloading the code on GitHub, according to the developer.

While the alpha release of a piece of software wouldn’t usually merit much attention, the fact that the Tor Browser has been targeted with several exploits intended to unmask users over the past two years makes it a welcome announcement for users who value their privacy. Developers with both Firefox and the Tor Browser, which is partially built on open source Firefox code, had to scramble last month to fix a zero-day vulnerability that was being exploited in the wild to unmask Tor users.

The FBI targeted Tor Browser users in 2015 after officials with the service seized servers belonging to a child pornography site called Playpen.
Instead of shuttering the site, the FBI used a network investigative technique to harvest the IP and MAC addresses of Tor users who visited the site for 13 days. In the sandboxed version of Tor, exploits against the browser are confined to the sandbox, limiting the disclosure of information about whatever machine the browser is running on.

Data like files and legitimate IP and MAC addresses is hidden as well. The browser has come a long way to even get to alpha; in October, when Yawning Angel discussed the prototype in a Q&A with the Tor Project, he called it “experimental,” “not user friendly” and something that only worked on his laptop.

The developer first mentioned that he was tinkering with a sandboxed version of the browser back in September, although at that point the concept was even more rudimentary.

"Yawning Angel has sandboxed the Tor Browser!" — torproject (@torproject), October 11, 2016

The browser is built around bubblewrap, a sandboxing utility for Linux designed to restrict an application’s access to parts of the operating system or user data.
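To give a feel for what that restriction looks like in practice, here is a rough sketch of launching a program under bwrap from Python. The flags are standard bubblewrap options, but the actual profile the sandboxed Tor Browser uses is far more elaborate; this is only an illustration, and assumes bubblewrap is installed:

```python
# Rough sketch: run a shell under bubblewrap with a read-only /usr, a private
# /tmp, and no network. Directories not bound in (like /home) simply don't exist
# inside the sandbox. This is an illustration, not Tor Browser's actual profile.
import subprocess

bwrap_cmd = [
    "bwrap",
    "--ro-bind", "/usr", "/usr",     # read-only view of system binaries/libraries
    "--symlink", "usr/bin", "/bin",
    "--symlink", "usr/lib", "/lib",
    "--proc", "/proc",               # fresh /proc for the sandboxed process
    "--dev", "/dev",                 # minimal /dev
    "--tmpfs", "/tmp",               # private, empty /tmp
    "--unshare-net",                 # no network access inside the sandbox
    "--die-with-parent",
    "/bin/sh", "-c", "echo inside the sandbox; ls /home",
]
subprocess.run(bwrap_cmd, check=False)   # 'ls /home' fails: /home was never bound in
```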
Since it is an alpha release, however, Yawning Angel is stressing that users should not assume the browser is free of flaws. “There are several unresolved issues that affect security and fingerprinting,” the developer wrote in a README packaged with code for the sandboxed Tor Browser on GitHub. Users seeking strong security should pair the sandbox with a Linux-based operating system designed to thwart exploit and malware attacks, such as Qubes, Subgraph, or Tails, he adds. While major browsers such as Chrome, Edge and Safari operate in secure sandboxes, developers with Tor haven’t had the time to build a sandbox until now.
In the Q&A that Yawning Angel gave in October, he acknowledged this is his third time trying to write code for the sandbox and that the process is “incredibly complicated” and not without “lots of design problems.” “We never have time to do this. We have a funding proposal to do this but I decided to do it separately from the Tor Browser team.
I’ve been trying to do this since last year,” Yawning Angel said at the time.
Yet another year has flown past and, as far as notable infosec happenings are concerned, this is one for the history books.

Drama, intrigue and exploits have plagued 2016 and, as we take stock of some of the more noteworthy stories, we once again cast our gaze forward to glean the shapes of the 2017 threat landscape. Rather than thinly-veiled vendor pitching, we hope to ground these predictions in trends we’ve observed in the course of our research and provide thought-provoking observations for researchers and visitors to the threat intelligence space alike.

Our record

Last year’s predictions fared well, with some coming to fruition ahead of schedule.
In case you didn’t commit these to memory, some of the more notable predictions included:

APTs: We anticipated a decreased emphasis on persistence as well as an increased propensity to hide in plain sight by employing commodity malware in targeted attacks. We’ve seen this, both with an increase in memory or fileless malware as well as through the myriad reported targeted attacks on activists and companies, which relied on off-the-shelf malware like NJRat and Alienspy/Adwind.

Ransomware: 2016 can be declared the year of ransomware.

Financial malware aimed at victimizing users has practically been galvanized into a ransomware-only space, with the more effective extortion scheme cannibalizing malware development resources from less profitable attempts at victimizing users.

Forecast for 2017: time to start using Yara rules more extensively as IoCs become less effective.

More Bank Heists: When we considered the looming expansion of financial crime at the highest level, our hypothetical included targeting institutions like the stock exchange.

But it was the attacks on the SWIFT network that brought these predictions to bear, with millions walking out the door thanks to crafty, well-placed malware.

Internet Attacks: Most recently, the oft-ignored world of sub-standard Internet-connected devices finally came to bear on our lives in the form of a nasty IoT botnet that caused outages for major Internet services, and hiccups for those relying on a specific DNS provider.

Shame: Shame and extortion have continued to great fanfare as strategic and indiscriminate dumps have caused personal, reputational, and political problems left and right. We must admit that the scale and victims of some of these leaks have been genuinely astonishing to us.

What does 2017 have in store?

Those dreaded APTs

The rise of bespoke and passive implants

As hard as it is to get companies and large-scale enterprises to adopt protective measures, we also need to admit when these measures start to wear thin, fray, or fail.
Indicators of Compromise (IoCs) are a great way to share traits of already known malware, such as hashes, domains, or execution traits that will allow defenders to recognize an active infection. However, the trendsetting one-percenters of the cyberespionage game have known to defend against these generalized measures, as showcased by the recent ProjectSauron APT, a truly bespoke malware platform whose every feature was altered to fit each victim and thus would not serve to help defenders detect any other infections.

That is not to say that defenders are entirely without recourse, but it’s time to push for the wider adoption of good Yara rules that allow us to scan far-and-wide across an enterprise, inspect and identify traits in binaries at rest, and scan memory for fragments of known attacks.

Forecast for 2017: passive implants showing almost no signs of infection come into fashion.

ProjectSauron also showcased another sophisticated trait we expect to see on the rise, that of the ‘passive implant’.

A network-driven backdoor, present in memory or as a backdoored driver in an internet gateway or internet-facing server, silently awaiting magic bytes to awaken its functionality. Until woken by its masters, passive implants will present little or no outward indication of an active infection, and are thus least likely to be found by anyone except the most paranoid of defenders, or as part of a wider incident response scenario. Keep in mind that these implants have no predefined command-and-control infrastructure to correlate and provide a more anonymous beachhead.

Thus, this is the tool of choice for the most cautious attackers, who must ensure a way into a target network at a moment’s notice.

Ephemeral infections

While adoption of PowerShell has risen as a dream tool for Windows administrators, it has also proven fruitful ground for the gamut of malware developers looking for stealthy deployment, lateral movement, and reconnaissance capabilities unlikely to be logged by standard configurations.

Tiny PowerShell malware stored in memory or in the registry is likely to have a field day on modern Windows systems.

Taking this further, we expect to see ephemeral infections: memory-resident malware intended for general reconnaissance and credential collection with no interest in persistence.
In highly sensitive environments, stealthy attackers may be satisfied to operate until a reboot wipes their infection from memory if it means avoiding all suspicion or potential operational loss from the discovery of their malware by defenders and researchers.

Ephemeral infections will highlight the need for proactive and sophisticated heuristics in advanced anti-malware solutions (see: System Watcher).

Espionage goes mobile

Multiple threat actors have employed mobile implants in the past, including Sofacy, RedOctober and CloudAtlas, as well as customers of HackingTeam and the suspected NSO Pegasus iOS malware suite. However, these have supplemented campaigns largely based on desktop toolkits.

As adoption of Desktop OS’s suffers from a lack of enthusiasm, and as more of the average user’s digital life is effectively transferred to their pockets, we expect to see the rise of primarily mobile espionage campaigns.

These will surely benefit from decreased attention and the difficulty of attaining forensic tools for the latest mobile operating systems.

Confidence in codesigning and integrity checks has stagnated visibility for security researchers in the mobile arena, but this won’t dissuade determined and well-resourced attackers from hunting their targets in this space.

The future of financial attacks

We heard you’d like to rob a bank…

The announcement of this year’s attacks on the SWIFT network caused uproar throughout the financial services industry due to its sheer daring, measured in zeros and commas to the tune of multi-million dollar heists.

This move was a natural evolution for players like the Carbanak gang and perhaps other interesting threat actors. However, these cases remain the work of APT-style actors with a certain panache and established capability.
Surely, they’re not the only ones interested in robbing a bank for sizable funds?

Forecast for 2017: growing popularity of short-lived infections, including those using PowerShell.

As cybercriminal interest grows, we expect to see the rise of the SWIFT-heist middlemen in the well-established underground scheme of tiered criminal enterprises. Performing one of these heists requires initial access, specialized software, patience, and, eventually, a money laundering scheme.

Each of these steps has a place for already established criminals to provide their services at a fee, with the missing piece being the specialized malware for performing SWIFT attacks. We expect to see the commodification of these attacks through specialized resources being offered for sale in underground forums or through as-a-service schemes.

Resilient payment systems

As payment systems became increasingly popular and widely adopted, we expected to see greater criminal interest in these. However, it appears that implementations have proven particularly resilient, and no major attacks have been noted at this time.

This relief for the consumer may, however, entail a headache for the payment system providers themselves, as cybercriminals are wont to target the latter through direct attacks on the payment system infrastructure. Whether these attacks will result in direct financial losses or simply outages and disruption, we expect increased adoption to attract more nefarious attention.

Dirty, lying ransomware

As much as we all hate ransomware (and with good reason), most ransomware thrives on the benefit of an unlikely trust relationship between the victim and their attacker.

This criminal ecosystem relies on the tenet that the attacker will abide by a tacit contract with the victim that, once payment is received, the ransomed files will be returned.

Cybercriminals have exhibited a surprising semblance of professionalism in fulfilling this promise and this has allowed the ecosystem to thrive. However, as the popularity continues to rise and a lesser grade of criminal decides to enter the space, we are likely to encounter more and more ‘ransomware’ that lacks the quality assurance or general coding capability to actually uphold this promise. We expect ‘skiddie’ ransomware to lock away files or system access or simply delete the files, trick the victim into paying the ransom, and provide nothing in return.

At that point, little will distinguish ransomware from wiping attacks and we expect the ransomware ecosystem to feel the effects of a ‘crisis of confidence’.

This may not deter larger, more professional outfits from continuing their extortion campaigns, but it may galvanize forces against the rising ransomware epidemic into abandoning hope for the idea that ‘just pay the ransom’ is viable advice for victims.

The big red button

The famous Stuxnet may have opened a Pandora’s Box by realizing the potential for targeting industrial systems, but it was carefully designed with a watchful eye towards prolonged sabotage on very specific targets.

Even as the infection spread globally, checks on the payload limited collateral damage and no industrial Armageddon came to pass.
Since then, however, any rumor or reporting of an industrial accident or unexplained explosion will serve as a peg to pin a cyber-sabotage theory on.

Forecast for 2017: espionage increasingly shifting to mobile platforms.

That said, a cyber-sabotage induced industrial accident is certainly not beyond the realm of possibility.

As critical infrastructure and manufacturing systems continue to remain connected to the internet, often with little or no protection, these tantalizing targets are bound to whet the appetite of well-resourced attackers looking to cause mayhem.
It’s important to note that, alarmism aside, these attacks are likely to require certain skills and intent.

An unfolding cyber-sabotage attack is likely to come hand-in-hand with rising geopolitical tensions and well-established threat actors intent on targeted destruction or the disruption of essential services.

The overcrowded internet bites back

A brick by any other name

Long have we prophesied that the weak security of the Internet of Things (or Threats) will come back to bite us, and behold, the day is here.

As the Mirai botnet showcased recently, weak security in needlessly internet-enabled devices provides an opportunity for miscreants to cause mayhem with little or no accountability. While this is no surprise to the infosec-aficionados, the next step may prove particularly interesting, as we predict vigilante hackers may take matters into their own hands.

Forecast for 2017: use of intermediaries in attacks against the SWIFT interbank messaging system.

The notion of patching known and reported vulnerabilities holds a certain sacrosanct stature as validation for the hard (and often uncompensated) work of security researchers.

As IoT-device manufacturers continue to pump out unsecured devices that cause wide-scale problems, vigilante hackers are likely to take matters into their own hands.

And what better way than to return the headache to the manufacturers themselves by mass bricking these vulnerable devices? As IoT botnets continue to cause DDoS and spam distribution headaches, the ecosystem’s immune response may very well take to disabling these devices altogether, to the chagrin of consumers and manufacturers alike.

The Internet of Bricks may very well be upon us.

The silent blinky boxes

The shocking release of the ShadowBrokers dump included a wealth of working exploits for multiple, major manufacturers’ firewalls. Reports of exploitation in-the-wild followed not long after as the manufacturers scrambled to understand the vulnerabilities exploited and issue patches. However, the extent of the fallout has yet to be accounted for. What were attackers able to gain with these exploits on hand? What sort of implants may lie dormant in vulnerable devices? Looking beyond these particular exploits (and keeping in mind the late 2015 discovery of a backdoor in Juniper’s ScreenOS), there’s a larger issue of device integrity that bears further research when it comes to appliances critical to enterprise perimeters.

The open question remains: ‘who’s your firewall working for?’

Who the hell are you?

The topic of False Flags and PsyOps is a particular favorite of ours and, to no surprise, we foresee the expansion of several trends in that vein…

Information warfare

The creation of fake outlets for targeted dumps and extortion was pioneered by threat actors like Lazarus and Sofacy.

After their somewhat successful and highly notorious use in the past few months, we expect information warfare operations to increase in popularity for the sake of opinion manipulation and overall chaos around popular processes.

Threat actors interested in dumping hacked data have little to lose from crafting a narrative through an established or fabricated hacktivist group; diverting attention from the attack itself to the contents of their revelations.

Forecast for 2017: ‘script kiddie’ extortionists compromise the idea of paying ransom to retrieve data.

The true danger at that point is not that of hacking, or the invasion of privacy, but rather that as journalists and concerned citizens become accustomed to accepting dumped data as newsworthy facts, they open the door to more cunning threat actors seeking to manipulate the outcome by means of data manipulation or omission.
Vulnerability to these information warfare operations is at an all-time high and we hope discernment will prevail as the technique is adopted by more players (or by the same players with more throwaway masks).

The promise of deterrence

As cyberattacks come to play a greater role in international relations, attribution will become a central issue in determining the course of geopolitical overtures.

Governmental institutions have some difficult deliberating ahead to determine what standard of attribution will prove enough for demarches or public indictments.

As precise attribution is almost impossible with the fragmented visibility of different public and private institutions, it may be the case that ‘loose attribution’ will be considered good enough for these. While advising extreme caution is important, we must also keep in mind that there is a very real need for consequences to enter the space of cyberattacks. Our bigger issue is making sure that retaliation doesn’t engender further problems as cunning threat actors outsmart those seeking to do attribution in the first place. We must also keep in mind that as retaliation and consequences become more likely, we’ll see the abuse of open-source and commercial malware begin to increase sharply, with tools like Cobalt Strike and Metasploit providing a cover of plausible deniability that doesn’t exist with closed-source proprietary malware. Doubling-down on False Flags While the examples reported in the False Flags report included in-the-wild cases of APTs employing false flag elements, no true pure false flag operation has been witnessed at this time.

By that we mean an operation by Threat Actor-A carefully and entirely crafted in the style and with the resources of another, ‘Threat Actor-B’, with the intent of inciting tertiary retaliation by the victim against the blameless Threat Actor-B. While it’s entirely possible that researchers have simply not caught onto this already happening, these sorts of operations won’t make sense until retribution for cyberattacks becomes a de facto effect.

As retaliation (be it overtures, sanctions, or retaliatory CNE) becomes more common and impulsive, expect true false flag operations to enter the picture.

Forecast for 2017: lack of security for the Internet of Things will turn it into an ‘Internet of Bricks’.

As this becomes the case, we can expect false flags to be worth even greater investment, perhaps even inciting the dumping of infrastructure or even jealously guarded proprietary toolkits for mass use.
In this way, cunning threat actors may cause a momentary overwhelming confusion of researchers and defenders alike, as script kiddies, hacktivists, and cybercriminals are suddenly capable of operating with the proprietary tools of an advanced threat actor, thus providing a cover of anonymity in a mass of attacks and partially crippling the attribution capabilities of an enforcing body.

What privacy?

Pulling the veil

There’s great value to be found in removing what vestiges of anonymity remain in cyberspace, whether for the sake of advertisers or spies.

For the former, tracking with persistent cookies has proven a valuable technique.

This is likely to expand further and be combined with widgets and other innocuous additions to common websites that allow companies to track individual users as they make their way beyond their particular domains, and thus compile a cohesive view of their browsing habits (more on this below).

Forecast for 2017: the question “Who is your firewall working for?” will become increasingly relevant.

In other parts of the world, the targeting of activists and tracking of social media activities that ‘incite instability’ will continue to inspire surprising sophistication, as deep pockets continue to stumble into curiously well-placed, unheard of companies with novelties for tracking dissidents and activists through the depth and breadth of the internet.

These activities tend to have a great interest in the social networking tendencies of entire geographic regions and how they’re affected by dissident voices. Perhaps we’ll even see an actor so daring as to break into a social network for a goldmine of PII and incriminating information.

The espionage ad network

No pervasive technology is more capable of enabling truly targeted attacks than ad networks.

Their placement is already entirely financially motivated and there is little or no regulation, as evidenced by recurring malvertising attacks on major sites.

By their very nature, ad networks provide excellent target profiling through a combination of IPs, browser fingerprinting, and browsing interest and login selectivity.

This kind of user data allows a discriminate attacker to selectively inject or redirect specific victims to their payloads and thus largely avoid collateral infections and the persistent availability of payloads that tend to pique the interest of security researchers.
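As a toy illustration of why such profiles are so discriminating (all attribute values below are made up), a handful of passively observable browser characteristics can be canonicalized and hashed into a fairly stable identifier:

```python
# Toy sketch: combine a few observable browser attributes into a stable identifier.
# All values are fabricated examples; real fingerprinting uses many more signals.
import hashlib
import json

observed = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "accept_language": "en-US,en;q=0.9",
    "timezone_offset_minutes": -300,
    "screen": "1920x1080x24",
    "installed_fonts": ["Arial", "Calibri", "Segoe UI"],
    "ip_prefix": "203.0.113.0/24",   # documentation prefix standing in for a real one
}

# Canonicalize, then hash: the same browser tends to yield the same digest across
# sites, which is what lets an ad network recognize a specific visitor.
canonical = json.dumps(observed, sort_keys=True).encode()
print(hashlib.sha256(canonical).hexdigest()[:16])
```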

As such, we expect the most advanced cyberespionage actors to find the creation or co-opting of an ad network to be a small investment for sizable operational returns, hitting their targets while protecting their latest toolkits.

Forecast for 2017: rapid evolution of false-flag cybercriminal operations.

The rise of the vigilante hacker

Following his indiscriminate release of the HackingTeam dump in 2015, the mysterious Phineas Fisher released his guide for aspiring hackers to take down unjust organizations and shady companies.

This speaks to a latent sentiment that the asymmetrical power of the vigilante hacker is a force for good, despite the fact that the HackingTeam dump provided live zero-days to active APT teams and perhaps even encouragement for new and eager customers.

As the conspiratorial rhetoric increases around this election cycle, fuelled by the belief that data leaks and dumps are the way to tip the balance of information asymmetry, more will enter the space of vigilante hacking for data dumps and orchestrated leaks against vulnerable organizations.

Forecast for 2017: cybercriminals increasingly turn to social and advertising networks for espionage.
People who are upset that Hillary Clinton’s personal email server may have been hacked are missing the big picture. Nearly everything that is worth hacking and connected to the internet is already hacked -- and that which is not can be hacked at will. I don’t want to get into the morass of whether Clinton’s use of personal email while she was Secretary of State was legal or ethical.

That’s been debated to death. Instead, I’m talking about whether it was hacked. Could it have been? I'll say it again: Everything is hackable.
Stuxnet took down Iranian centrifuges that were running on an air-gapped private network.

The State Department’s email was hacked -- very likely before, during, and after Clinton's tenure there.

Was Clinton's email server hacked?

As for Clinton's personal email server, the fact is we’ll never know whether it was hacked. Her server ran Microsoft Exchange 2010.

Arrested Romanian hacker Marcel Lazăr (aka Guccifer) claimed he had hacked it.

But beyond his public claim no evidence has come to light to back up his statement. The FBI forensic investigation into the server did not corroborate his statement.

As far as I can tell, Guccifer socially engineered her aide, Sidney Blumenthal, out of his AOL account password and nothing more.

The same hacking technique was used against her senior adviser John Podesta for the thousands of emails now shared via Wikileaks.
I’ve yet to hear any evidence that the server itself was exploited. Could someone have hacked the server without leaving evidence? Yes, although it seems unlikely. Most hackers leave behind lots of evidence because it doesn't matter if they do.

Almost no one gets caught, much less prosecuted.

Thus, hackers have become lazy and don’t attempt to clear log files or cover up evidence of their crimes. For the sake of argument, let's say a Russian superhacker broke into Clinton's server without leaving behind signs of compromise.
In that case, wouldn't we see emails other than those coming from two aides? It’s highly unlikely that a hacker would gain complete access, download every email, and fail to leak emails from Hillary and Bill Clinton. Don't get me wrong -- I think plenty of hackers are capable of hacking her server and not leaving behind evidence.

But I seriously doubt those hackers realized the importance of the email server serving up the @clintonemail.com domain.

The FBI’s own investigation revealed the server was scanned and a few hacks were attempted, but none seemed to get through.

How would you hack Clinton’s email server?

This is penetration testing 101.

First, you canvas your target.
It’s Microsoft Exchange 2010 running on Microsoft Windows -- you can get that much by sending a few SMTP query commands to the email service port or running a port scanner like Nmap against the IP address. Using a port scanner and a few fingerprinting apps, you’d likely come away with the Windows version and perhaps even its patch status, along with whatever other services it was running. We know from reports that it was running Microsoft Outlook Web Access (OWA) and Remote Desktop Protocol (RDP) for remote access.
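A banner grab of the kind described above takes only a few lines; the sketch below (with a placeholder hostname) simply reads the SMTP greeting, which typically names the mail server software. Point it only at systems you are authorized to test:

```python
# Minimal banner grab: connect to port 25 and read the greeting, which usually
# identifies the mail server software. The hostname below is a placeholder.
import socket

def smtp_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = sock.recv(1024).decode(errors="replace").strip()
        sock.sendall(b"QUIT\r\n")        # close the session politely
    return banner

if __name__ == "__main__":
    # Might return something like "220 mail.example.com Microsoft ESMTP MAIL Service ready ..."
    print(smtp_banner("mail.example.com"))
```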

That helps a lot. OWA means it’s also running Microsoft’s Internet Information Services (IIS).

Any hacker worth his or her salt already has all the possible exploits that might work against Microsoft Windows, IIS, Exchange, and RDP. Lots of hackers like to use the Metasploit Framework, but I’m partial to custom code for each vulnerability. RDP and OWA also give you remote logons to try.

Even if they have account lockout enabled, you can guess slowly.

Better yet, you can guess against the Administrator account.

As long as it hasn’t been renamed, you can guess forever as many times as you like and you won’t get locked out.
If you have Bill's or Hillary’s email address, the logon account name is likely to be the same as their email address.

One of my favorite penetration tests, when I have the time, is to identify all running software and wait until a new vulnerability appears. Microsoft releases new patches at least once a month, and almost every Windows server needs to be patched each time.

All you need to do is wait for the patch announcement and exploit the identified vulnerability before the system administrator can patch it. You usually have a day or so before the admin patches a server, if not longer. If the exploit gets you on the email server, you can then configure Exchange to forward copies of all new emails. Or you can use a program like ExMerge to suck up every existing email, including deleted ones. Once you're on the server, you can create new accounts, add backdoors, or do pretty much anything else. A few critics have noted that Clinton’s email server didn’t have SSL protection.

The SSL page was available, but the system admin didn’t populate it with an SSL certificate.

This means the connections to the server were in plaintext. While not having an SSL cert to protect the server isn’t great, it isn’t necessarily game over.
It isn’t easy to pop onto someone else’s network streams simply because you know they are there. You have to get close to the server’s origin point and perform a man-in-the-middle attack on the main connection.
It’s easy to do if you’re already on the local network, but not so easy if you’re not. One of the more interesting feats you can perform with a public email server is to try and take over its domain. Perhaps Clinton’s server is bulletproof -- fully patched and unhackable.

Email hackers are famous for gaining control over DNS domains (in this case, clintonemail.com and wjcoffice.com) and, if successful, redirecting all email and connections headed to those domains to a fraudulent email server. You wouldn’t be able to see preexisting emails, but you'd be able to capture new inbound emails (and all the long threads of previous emails they probably contain).
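The defensive counterpart is simple to automate: watch where the domain's mail is actually routed and alert on any change. The sketch below assumes the dnspython package, and the baseline MX host is a made-up example:

```python
# Sketch: alert if a domain's MX records ever differ from a known-good baseline.
# Assumes dnspython (pip install dnspython); names below are placeholders.
import dns.resolver

EXPECTED_MX = {"mail.example.com."}      # hypothetical known-good MX host

def current_mx(domain: str) -> set:
    answers = dns.resolver.resolve(domain, "MX")
    return {str(rdata.exchange) for rdata in answers}

def check(domain: str) -> None:
    seen = current_mx(domain)
    if seen != EXPECTED_MX:
        print(f"ALERT: MX for {domain} changed to {sorted(seen)}")
    else:
        print(f"{domain}: MX records match the baseline")

if __name__ == "__main__":
    check("example.com")
```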

Gmail had 2FA available back then, although I’m not sure about AOL.
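For reference, the six-digit codes behind most 2FA apps are nothing exotic: they are an HMAC of the current 30-second interval, as in this minimal RFC 6238-style sketch (the secret below is a throwaway example):

```python
# Minimal TOTP sketch (RFC 6238 style): HMAC the current 30-second counter with a
# shared secret and take six digits. The secret below is a throwaway example.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period              # current time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))                   # example shared secret
```

An attacker who phishes the password alone still lacks the current code, which is the point.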

Clinton should have been using the State Department systems for all business email, and her personal email server should have required 2FA (although the system admin would have to know how to set it up and show the Clintons how to use it). That’s water under the bridge now. What I’m sure Clinton really wishes she had used, besides the State Department email system, is a mechanism that prevents private email from being easily read by unauthorized parties.

There are myriad solutions, including Microsoft’s Rights Management System (RMS). Information protection software such as RMS is pretty nifty.
It encrypts all protected email and requires the user to retrieve an authorized personal digital certificate to view, print, or copy the email.

At any time the personal certificate can be revoked. Hence, if a hacker stole the email, as soon as someone noticed, the certificate could be revoked and the email would become unreadable.
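The underlying idea can be seen with any symmetric encryption library; the sketch below is not RMS, just an illustration that a protected message is worthless once the key is withheld (assumes the cryptography package):

```python
# Not RMS -- only the underlying idea: mail protected with a key held by an access
# service becomes unreadable once that key is no longer released.
# Assumes the cryptography package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()              # in an RMS-like system, held by the rights service
protected = Fernet(key).encrypt(b"Quarterly numbers attached -- do not forward.")

# A stolen copy of `protected` leaks nothing by itself ...
print(protected[:40], b"...")

# ... and "revocation" amounts to never releasing the real key again.
wrong_key = Fernet.generate_key()
try:
    Fernet(wrong_key).decrypt(protected)
except InvalidToken:
    print("ciphertext is unreadable without the authorized key")
```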

Try posting that to Wikileaks. After all the huge corporate hacking incidents, in which embarrassing private emails were leaked, I’m surprised the email information protection market isn’t growing faster. Remember, we are either hacked or the attackers haven't gotten around to it yet. Your confidential emails should be protected in a manner that prevents them from being so easily shared.

That’s the real lesson we all should take away.
As hackers take aim at financial services, there is an increasing need to find new ways to deflect attacks. The Society for Worldwide Interbank Financial Telecommunication (SWIFT) system has increasingly become a target of hackers in 2016, as attackers attempt to exploit banks.
Security vendor TrapX is now helping banks that use SWIFT to mitigate the risk of attack via deception technology. The objective of TrapX's deception technology platform is to trick attackers into thinking they are attacking a real service, when in fact they are not. The foundation of the TrapX system is a lightweight emulation engine that can mimic the way a real operating system works. With the new SWIFT capability, TrapX now can emulate a SWIFT terminal.

Attackers are attracted to the decoy SWIFT application, according to Greg Enriquez, CEO of TrapX. When interacting with TrapX's SWIFT deception, attackers think they are attacking the real SWIFT system, he said.

TrapX is able to monitor the emulated SWIFT terminal and see what procedures and methods attackers are using to infiltrate and exploit the system. "It's an emulation for SWIFT, and it's so much more sophisticated than just attracting and luring attackers; we're emulating the real devices," Enriquez told eWEEK. "Our deception grid platform emulates dozens of operating environments, so we can do the operating system fingerprinting for anything."

As TrapX sees new threats, it can build new deceptions to help enterprises, which it is now doing with the SWIFT capabilities, Enriquez said. He noted that TrapX didn't need to license any technology from SWIFT to enable the deception.
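TrapX's emulation engine is proprietary, but the general decoy idea, stripped to its simplest form, is a listener that presents a plausible banner, never performs a real transaction, and records everything an intruder sends. The following sketch is a generic illustration of that idea (the port and banner text are made up), not TrapX's product:

```python
# Generic low-interaction decoy sketch: show a fake banner, log whatever the
# client sends, and never do anything real. Port and banner are placeholders.
import datetime
import socket

HOST, PORT = "0.0.0.0", 10101
BANNER = b"terminal ready\r\n"

def serve() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(BANNER)
                data = conn.recv(4096)
                # In a real deployment this would feed an alerting pipeline,
                # since no legitimate user should ever touch the decoy.
                print(f"{datetime.datetime.utcnow().isoformat()} {addr[0]} sent {data!r}")

if __name__ == "__main__":
    serve()
```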

That said, TrapX has spoken with SWIFT, though there is no formal agreement or partnership between the two organizations, according to Enriquez. SWIFT terminals are supposed to be on separate, isolated network segments.

TrapX can be deployed on an organization's separate virtual LAN (VLAN), though Enriquez noted that in many cases, networks that companies think to be isolated are often still connecting out to the public internet. He noted that TrapX's goal is to help organizations lure hackers into the deception decoys, regardless of where they are coming in from.

Given that the SWIFT deception is intended to appear exactly like the real thing, there is a theoretical risk that a legitimate user could end up in the wrong place. However, according to Enriquez, TrapX's deceptions have a low false-positive rate and don't typically ensnare legitimate users. "SWIFT hacking is a real problem," he said. "It's a very sensitive area for financial institutions."

The SWIFT deception capability is being made available at no additional cost to existing TrapX customers. TrapX raised $5 million in an extended Series B round of funding from Strategic Cyber Ventures (SCV) in April, bringing total funding to date for the company to $19 million. "The investments we've taken have helped us to create more emulations for more environments," Enriquez said. "Deception is a platform that has become a doctrine for cyber-warfare to help protect organizations from the bad things that can happen."

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.

Follow him on Twitter @TechJournalist.