Thursday, November 23, 2017

Tag: Fingerprinting

Researchers watch publishers watching you, ignore privacy settings, run over mere HTTP
Researchers working on browser fingerprinting found themselves distracted by a much more serious privacy breach: analytical scripts siphoning off masses of user inte...
Firefox is to stop using the privacy-busting canvas-based browser fingerprinting that allows websites to track users’ online activities.
The 2017 VirusBulletin conference is upon us and, as in previous years, we’re taking the opportunity to dive into an exciting subject, guided by our experience from doing hands-on APT research.

This year we decided to put our heads together to understand the implications that the esoteric SIGINT practice of fourth-party collection could have on threat intelligence research.
The Facebook malware that spread last week was dissected in a collaboration with Kaspersky Lab and Detectify. We were able to get help from the involved companies and cloud services to quickly shut down parts of the attack to mitigate it as fast as possible.
Legislation, to be signed by Texas Gov. Greg Abbott, paves way for some comebacks.
Device fingerprinting was used to prevent account fraud.
"Such Fourth Amendment intrusions are [not] justified based on the facts articulated."
The way Firefox caches intermediate CA certificates could allow for the fingerprinting of users and the leakage of browsing details, a researcher warns.
Online tracking gets more accurate and harder to evade.
Odds are, software (or virtual) containers are in use right now somewhere within your organization, probably by isolated developers or development teams to rapidly create new applications.

They might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know if they are running in their companies. In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers.

Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system.

They are flexible, scalable, and easy to use, and they can pack a lot more applications into a given physical infrastructure than is possible with VMs.

And because they share the host operating system, rather than relying on a guest OS, containers can be spun up instantly (in seconds, versus the minutes VMs require). A June 2016 report from the Cloud Foundry Foundation surveyed 711 companies about their use of containers. More than half had either deployed containers or were in the process of evaluating them. Of those, 16 percent had already mainstreamed the use of containers, with 64 percent expecting to do so within the following year.
If security teams want to seize the opportunity (borrowing a devops term) to “shift security to the left,” they need to identify and involve themselves in container initiatives now. Developers and devops teams have embraced containers because they align with the devops philosophy of agile, continuous application delivery. However, as is the case with any new technology, containers also introduce new and unique security challenges.

These include the following:

Inflow of vulnerable source code: Because containers are largely built from open source code, images created by an organization’s developers are often updated, then stored and used as necessary. This creates an endless stream of uncontrolled code that may harbor vulnerabilities or unexpected behaviors.

Large attack surface: In a given environment, there are many more containers than there are applications, VMs, databases, or any other objects that require protecting. The large numbers of containers running on multiple machines, whether on premises or in the cloud, make it difficult to track what’s going on or to detect anomalies through the noise.

Lack of visibility: Containers are run by a container engine, such as Docker or Rkt, that interfaces with the Linux kernel. This creates another layer of abstraction that can mask the activity of specific containers or what specific users are doing within them.

Devops speed: The pace of change is such that containers typically have a lifespan four times shorter than that of VMs, on average. Containers can be started in an instant, run for a few minutes, then stopped and removed. This ephemerality makes it possible to launch attacks and disappear quickly, with no need to install anything.

“Noisy neighbor” containers: A container might behave in a way that effectively creates a DoS attack on other containers. For example, opening sockets repeatedly will quickly bring the entire host machine to a crawl and eventually cause it to freeze up.

Container breakout to the host: Containers might run as the root user, making it possible to use privilege escalation to break the “containment” and access the host’s operating system.

“East-west” network attacks: A compromised container can be leveraged to launch attacks across the network, especially if its outbound network connections and its ability to run with raw sockets were not properly restricted.

The best practices for securing container environments are not only about hardening containers or the servers they run on after the fact. They’re focused on securing the entire environment.
Security must be considered from the moment container images are pulled from a registry to when the containers are spun down from a runtime or production environment.

Given that containers are often deployed at devops speed as part of a CI/CD framework, the more you can automate, the better. With that in mind, I present this list of best practices. Many of them are not unique to containers, but if they are “baked” into the devops process now, they will have a much greater impact on the security posture of containerized applications than if they are “bolted” on after the fact.

Implement a comprehensive vulnerability management program. Vulnerability management goes way beyond scanning images when they are first downloaded from a registry. Containers can easily pass through the development cycle with access controls or other policies that are too loose, resulting in corruption that causes the application to break down, or leading to compromise at runtime.
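Such automated gates can be as simple as a script that parses a scanner's report and fails the pipeline on serious findings. The sketch below assumes a hypothetical report format; a real scanner's JSON output would differ, and the blocking policy shown is illustrative:

```python
import json

# Severities that should fail the pipeline gate (policy is illustrative).
BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def gate_image(report_json, blocking=BLOCKING_SEVERITIES):
    """Return (passed, blocking_ids) for a scanner report.

    The report format here is hypothetical: {"image": ..., "vulnerabilities":
    [{"id": ..., "severity": ...}, ...]} -- adapt it to your scanner's output.
    """
    report = json.loads(report_json)
    blockers = [v["id"]
                for v in report.get("vulnerabilities", [])
                if v.get("severity") in blocking]
    return len(blockers) == 0, blockers

# Example: a report containing Dirty COW should be stopped at the gate.
sample_report = json.dumps({
    "image": "app:1.2",
    "vulnerabilities": [
        {"id": "CVE-2016-5195", "severity": "CRITICAL"},  # Dirty COW
        {"id": "CVE-2017-0001", "severity": "LOW"},
    ],
})
passed, blockers = gate_image(sample_report)
```

Running the same check between dev, test, staging, and production is what turns a one-time scan into a gate.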

A rigorous vulnerability management program is a proactive initiative with multiple checks from “cradle to grave,” triggered automatically and used as gates between the dev, test, staging, and production environments.

Ensure that only approved images are used in your environment. An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the inflow of container images into your development environment.

This means using only approved private registries and approved images and versions.
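A minimal sketch of such an inflow control follows; the registry and image names are hypothetical placeholders, and real image references (digests, multiple registries with ports) would need richer parsing:

```python
# Hypothetical policy: one approved private registry and pinned image versions.
APPROVED_REGISTRIES = {"registry.internal.example.com"}
APPROVED_IMAGES = {
    ("registry.internal.example.com/base/alpine", "3.6"),
}

def parse_ref(ref):
    """Split a "name:tag" reference into (name, tag); default tag is "latest"."""
    name, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:  # no tag present; the ":" belonged to a port
        return ref, "latest"
    return name, tag

def is_approved(ref):
    """Accept only pinned images from the approved private registry."""
    name, tag = parse_ref(ref)
    registry = name.split("/", 1)[0]
    return registry in APPROVED_REGISTRIES and (name, tag) in APPROVED_IMAGES
```

A check like this, run before an image is pulled into a build, rejects both unknown registries and unpinned versions of otherwise-approved images.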

For example, you might sanction a single Linux distro as a base image, preferably one that is lean (Alpine or CoreOS rather than Ubuntu) to minimize the surface for potential attacks.

Implement proactive integrity checks throughout the lifecycle. Part of managing security throughout the container lifecycle is to ensure the integrity of the container images in the registry and to enforce controls as they are altered or deployed into production.
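One simple integrity control is content-addressed pinning: record a cryptographic digest when an image is approved and verify it again before deployment. The sketch below uses a stand-in byte string; in practice the input would be the image layer or manifest pulled from the registry:

```python
import hashlib

def verify_image_digest(image_bytes, expected_sha256):
    """Recompute the SHA-256 digest of an image blob and compare it
    to the pinned value recorded when the image was approved."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_sha256

# Stand-in blob for illustration; real inputs would be registry content.
blob = b"example-image-layer-bytes"
pinned = hashlib.sha256(blob).hexdigest()  # recorded at approval time
```

Any modification to the blob after approval, malicious or accidental, changes the digest and fails the check.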
Image signing or fingerprinting can be used to provide a chain of custody that allows you to verify the integrity of the containers.

Enforce least privileges in runtime. This is a basic security best practice that applies equally in the world of containers. When a vulnerability is exploited, it generally provides the attacker with access and privileges equal to those of the application or process that has been compromised.
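A least-privilege policy can be linted before deployment. The sketch below checks a simplified, hypothetical container spec (the field names are illustrative, not any particular engine's schema) for the most common over-privileging mistakes:

```python
def lint_privileges(spec):
    """Flag risky settings in a simplified, hypothetical container spec."""
    findings = []
    if spec.get("user", "root") == "root":
        findings.append("runs as root")
    if spec.get("privileged", False):
        findings.append("privileged mode enabled")
    for cap in spec.get("cap_add", []):
        findings.append("adds capability " + cap)
    if not spec.get("read_only_rootfs", False):
        findings.append("writable root filesystem")
    return findings

risky = {"user": "root", "privileged": True, "cap_add": ["SYS_ADMIN"]}
hardened = {"user": "app", "cap_add": [], "read_only_rootfs": True}
```

Gating deployments on an empty findings list keeps a compromised process from inheriting root, extra kernel capabilities, or a writable root filesystem.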

Ensuring that containers operate with the least privileges and access required to get the job done reduces your exposure to risk.

Whitelist files and executables that the container is allowed to access or run. It’s a lot easier to manage a whitelist when it is implemented from the get-go. A whitelist provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly, and it allows you to maintain a more stable and reliable environment. Limiting containers so that they can access or run only pre-approved or whitelisted files and executables is a powerful method to mitigate risk. It not only reduces the attack surface, but it can also be employed to provide a baseline for detecting anomalies and to prevent the “noisy neighbor” and container breakout scenarios described above.

Enforce network segmentation on running containers. Maintain network segmentation (or “nano-segmentation”) to segregate clusters or zones of containers by application or workload.
In addition to being a highly effective best practice, network segmentation is a must-have for container-based applications that are subject to PCI DSS.
It also serves as a safeguard against “east-west” attacks.

Actively monitor container activity and user access. As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity.

Log all administrative user access to containers for auditing. While strong user access controls can restrict privileges for the majority of people who interact with containers, administrators are in a class by themselves. Logging administrative access to your container ecosystem, container registry, and container images is a good security practice and a common-sense control. It will provide the forensic evidence needed in the case of a breach, as well as a clear audit trail if needed to demonstrate compliance.

Much of the notion of “baking security into IT processes” relates to automating preventive processes from the onset.

Getting aggressive about container security now can allow for containerized applications to be inherently more secure than their predecessors. However, given that containers will be deployed ephemerally and in large numbers, active detection and response -- essential to any security program -- will be critical for containerized environments.

Container runtime environments will need to be monitored at all times for anomalies, suspected breaches, and compliance purposes. Although there’s a growing body of knowledge about container security in the public domain, it’s important to note that we’re still in the early stages.

As we discover new container-specific vulnerabilities (or new-old ones such as Dirty COW), and as we make the inevitable mistakes (like the configuration error in Vine’s Docker registry that allowed a security researcher to access Vine's source code), best practices are sure to evolve. The good news, as far as container adoption goes, is that it’s still early enough to automate strong security controls into container environments.

The not-so-good news is that security teams need to know about container initiatives early enough to make that happen, and more often than not they don’t.

To realize the potential security improvements that can be achieved in the transition to container-based application development, that needs to change ... soon.

Educating yourself about containers and the security implications of using them is a good start.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth.

The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers.
InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Send all inquiries to newtechforum@infoworld.com.
The U.S. Federal Trade Commission is scheduled to announce Wednesday a “prize competition” for a tool that can be used against security vulnerabilities in internet of things systems. The prize pot is up to $25,000, with $3,000 available for each honorable mention.

The winners will be announced in July.

The announcement is scheduled to be published Wednesday in the Federal Register. The tool, at a minimum, will “help protect consumers from security vulnerabilities caused by out-of-date software,” said the FTC. The government’s call for help cites the use of internet-enabled cameras as a platform for a distributed denial-of-service (DDoS) attack last October. Weak default passwords were blamed. The FTC wants automatic software updates for IoT devices, so that the physical devices also stay up to date.
Some devices will automatically update, but many require consumers to adjust one or more settings before they will do so, said the FTC in its announcement.

The winning entry could be a physical device, an app or a cloud-based service. This isn’t the first time the FTC has offered cash for software tools.
In 2015, it awarded $10,500 to developers of an app that could block robocalls. The winners of that contest were Ethan Garr and Bryan Moyles, the co-inventors of the RoboKiller app, both of whom work for TelTech Systems, a communications technology start-up.

Their winning app was initially developed as a side project. “It gave us something to work toward,” Garr said of the FTC contest in an interview. “It gave us a deadline, which in technology is really valuable because software projects can go on forever without one.” Their contest submission included an iPhone with the app installed.

They also had to pay their own expenses to attend the DefCon conference in Las Vegas for the FTC’s final judging. “I don’t think they get enough credit for how passionate they are in solving the problem,” said Garr, TelTech’s vice president of product, of the people involved in the FTC’s effort. The initial version of RoboKiller forwarded all calls to the app’s servers for analysis.
It used an “audio-fingerprinting algorithm” to quickly determine whether or not a call was a robocall. A new version incorporates Apple’s new CallKit technology to identify robocalls. Users can also set up conditional call forwarding so that declined calls, for instance, are routed to TelTech’s servers.
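Audio fingerprinting of this sort generally works by hashing spectral features of the incoming audio so that a repeated recording matches even across separate calls. The toy sketch below (emphatically not RoboKiller's actual algorithm) hashes the sequence of per-frame spectral peaks:

```python
import cmath
import hashlib
import math

def peak_bin(frame):
    """Return the DFT bin (1..N/2-1) with the largest magnitude."""
    n = len(frame)
    mags = [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(1, n // 2)]
    return mags.index(max(mags)) + 1

def audio_fingerprint(samples, frame_size=64):
    """Hash the sequence of per-frame spectral peaks into a fingerprint."""
    peaks = [peak_bin(samples[i:i + frame_size])
             for i in range(0, len(samples) - frame_size + 1, frame_size)]
    return hashlib.sha256(bytes(peaks)).hexdigest()

# Two pure tones: the same tone always yields the same fingerprint,
# while a different tone yields a different one.
tone_a = [math.sin(2 * math.pi * 8 * t / 64) for t in range(640)]
tone_b = [math.sin(2 * math.pi * 12 * t / 64) for t in range(640)]
```

Production systems use far more robust features (peak pairs, noise tolerance, time-offset matching), but the lookup principle is the same: a known robocall recording hashes to a known fingerprint.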

The service will check multiple databases for information about the call, and the developers plan to soon roll out an additional feature that will show a photo of the caller from social media.
TelTech charges $1 a month for the service.

The FTC’s IoT patching plan may have limits. One issue with IoT security is embedded devices that may continue to operate long after their last patch, and may even outlive the companies that created them.

This story, "FTC sets $25,000 prize for automatic IoT patching" was originally published by Computerworld.
A sandboxed version of the Tor Browser was released over the weekend, and while there are still some rough edges and bugs – potentially major ones, according to the developer – it could be the first step toward protecting Tor users from recent de-anonymization exploits.

Yawning Angel, a longtime Tor developer, unveiled version 0.0.2 in a post to the Tor developers mailing list on Saturday. Official binaries, available only for Linux distributions, won’t be out until later this week. Until then, prospective users who want to try it out can build it from the code on GitHub, according to the developer.

While the alpha release of a piece of software wouldn’t usually merit much attention, the fact that the Tor Browser has been targeted with several exploits intended to unmask users over the past two years makes it a welcome announcement for users who value their privacy. Developers with both Firefox and the Tor Browser, which is partially built on open source Firefox code, had to scramble last month to fix a zero-day vulnerability that was being exploited in the wild to unmask Tor users. The FBI targeted Tor Browser users in 2015 after officials with the service seized servers belonging to a child pornography site called Playpen.
Instead of shuttering the site, the FBI ran it for 13 days, using a network investigative technique to harvest the IP and MAC addresses of Tor users who visited it.

In the sandboxed version of Tor, exploits against the browser are confined to the sandbox, limiting the disclosure of information about whatever machine the browser is running on.

Data such as files and the machine’s real IP and MAC addresses is hidden as well. The browser has come a long way to even reach alpha. In October, when Yawning Angel discussed the prototype in a Q&A with the Tor Project, he called it “experimental,” “not user friendly” and something that only worked on his laptop.

The developer first mentioned that he was tinkering with a sandboxed version of the browser back in September, although at that point the concept was even more rudimentary.

Yawning Angel has sandboxed the Tor Browser! https://t.co/5pbBvJgUn4 pic.twitter.com/q8lHA6Fib6 — torproject (@torproject) October 11, 2016

The browser is built around bubblewrap, a sandboxing utility for Linux designed to restrict an application’s access to parts of the operating system or user data.
Since it is an alpha release, however, Yawning Angel is stressing that users should not assume the browser is free of flaws. “There are several unresolved issues that affect security and fingerprinting,” the developer wrote in a README packaged with the code for the sandboxed Tor Browser on GitHub. Users seeking strong security should pair the sandbox with a Linux-based operating system designed to thwart exploit and malware attacks, such as Qubes, Subgraph, or Tails, he adds.

While major browsers such as Chrome, Edge and Safari operate in secure sandboxes, developers with Tor haven’t had the time to build one until now.
In the Q&A that Yawning Angel gave in October, he acknowledged this is his third time trying to write code for the sandbox and that the process is “incredibly complicated” and not without “lots of design problems.” “We never have time to do this. We have a funding proposal to do this but I decided to do it separately from the Tor Browser team.
I’ve been trying to do this since last year,” Yawning Angel said at the time.