
Tag: Hardening

Google Elevates Security in Android O

Android O, due in the third quarter, figures to elevate the security of the mobile OS with new features focused on improved third-party patching, a new permission model and hardening of existing features.

French spies warn politicians of hack risk as election draws near

Authorities uneasy in wake of alleged Russian interference in US presidential race

French authorities are warning political parties about the increased threat of cyber attacks as the country prepares to elect a new president in May. Last year's US presidential election was marred by cyber attacks and leaks. US intel agencies blame Russia for the hack [1] and subsequent leak of sensitive emails and other information from the Democratic National Committee (DNC).

French authorities fear the possibility of similar interference. The National Agency for the Security of Information Systems (L'Agence nationale de la sécurité des systèmes d'information or ANSSI) director Guillaume Poupard told FRANCE 24: "We're clearly not up against people who are throwing punches just to see what happens.

There's a real strategy that includes cyber [attacks], interference and leaked information...

These are people whom we're obviously following closely.

Even if we can't be sure that they're the same, they're attackers who regularly come knocking on our ministers' doors." Political parties and campaign staff are particularly vulnerable to hackers, with tactics likely to include spear phishing and website attacks.

"Fundamentally, political parties, like small and medium-size businesses... are not equipped to deal with the situation alone."

A spokesman for En Marche, the political movement of independent candidate and former economy minister Emmanuel Macron, admitted that an attack on its website in October took "at least one full night of work to repair". ANSSI is teaching political parties how to protect themselves, as well as referring them to a list of pre-approved companies for additional advice.

Ilia Kolochenko, chief exec of web security firm High-Tech Bridge, commented: "Cybersecurity awareness, additional security assessment and hardening of the critical national infrastructure is definitely a good move.

"We should all be aware of the risks associated with modern technologies, such as e-voting and mobile voting, especially the risks related to such an important process as a presidential election."

Bootnote

[1] New research on APT28 – the Russian state-sponsored hacking crew blamed for the DNC hack as well as previous assaults on NATO, TV5Monde and the World Anti-Doping Agency, among others – was published by FireEye last week.

APT28 (AKA Fancy Bear) is suspected by other security firms to be a unit of Russian military intelligence agency, the GRU.

Security hardened, pah! Expert doubts Kaymera's mighty Google Pixel

Kaymera: building on shoulders of a giant, claim

The arrival of a security-hardened version of Google's supposed "iPhone killer" Pixel phone from Kaymera has received a sceptical reception from one expert. Kaymera Secured Pixel is outfitted with Kaymera's own hardened version of the Android operating system and its security architecture. This architecture is made up of four layers: encryption, protection from malicious downloads, a prevention layer that monitors for unauthorised attempts to access OS functions (such as microphone, camera or GPS), and a detection and enforcement layer that monitors, detects and blocks malicious code or misbehaving apps.
Independent mobile security experts have questioned whether the technology offers much by way of benefits over that offered by native Pixel smartphones. Professor David Rogers, chief executive of Copper Horse and a lecturer in mobile systems security at the University of Oxford, questioned what exactly is new. "Many of the proposed functions are already built into Pixel (examples below), so what are the extra benefits Kaymera offers?"

For example, Pixel has full device encryption and file-based encryption, backed by TrustZone. Plus, as it's Google's own phone, Pixel is first in line for patching - an important security defence in itself. "Pixel has many other functions and capabilities built over many years including Position Independent Execution (PIE), Address Space Layout Randomisation (ASLR), SELinux and so on," Rogers added.

Kaymera responded that its kit offered benefits on this front by enforcing security controls built into Pixel but not actually enforced. Oded Zehavi, Kaymera chief operating officer, told El Reg: "In places where Google has good enough security, we leverage the existing functionality (in many of the examples given here, the functionality is not actually enforced.
In these cases we enforce and prevent disabling of the security functionality by negligent users or malicious hackers)."

Third parties building on Google security (Blackphone included) do not have a good track record in this space of getting their own code secure, tested properly and updated. Rogers is unconvinced that Kaymera will do any better with hardening Pixel than others have done with hardening Android. Zehavi responded that Kaymera devices have been tested to the most rigorous standards by governments around the world. "As a philosophy we always have more than one security layer against any attack vector hence we don't trust any single security measure including Google security measures.

For example, our prevention layer feeds with fake resources any payload that may overcome the OS hardening and get loaded onto the device,” Zehavi said. Rogers remains unconvinced about the security proposition of the Kaymera Secured Pixel, especially in the absence of NCSC certification or US security certification.
It's more like "some kind of Chimera rather than a Kaymera," he cuttingly concluded.

"If Kaymera really want to protect against comms interception, low-level malware attacks and so on, they would have to build some kind of firewall and introspection capability," Rogers said. "To do that they would need access inside the Radio Interface Layer and also to processes and app data." "Google's security architecture does not allow this unless you 'roll your own' in a big way, creating your own device and modifying the AOSP [Android Open Source Project] code to deliver a bespoke device," he added.

Creating a bespoke device risks undoing Google's security controls, Rogers warned. "Application sandboxing and isolation are there for a reason, including enforcing the Principle of Least Privilege," he said. The Israeli manufacturer said it had been careful to add extra security without breaking Google's existing controls. Zehavi explained: "Even though we embed our code deep into the AOSP code in layers that are beyond what regular applications can reach, we do not break any existing Google security measures including the sandboxing etc.
Instead, we add extra measures across the board that, as mentioned, leverage the existing mechanisms but bring the device to a totally different level of security which cannot be achieved via the application layer alone."

Rogers responded: "They admit to using AOSP which I guess means they self-sign the build of the device themselves.

That then comes down to a question of trust in who is digitally signing the product (that gives that signer access to absolutely everything, the radio path, the private data, the lot)."

The Kaymera Secured Pixel is aimed at business and government customers prepared to pay extra to avoid the security weaknesses associated with the 'off the shelf' Android operating system.

The device retains the original Google device's purpose-built hardware, features and ergonomics. Users can, for example, still use the fingerprint scanner. Kaymera devices are centrally managed via the company's management dashboard, enabling easy enforcement of security policies on the smartphone. Kaymera's secured Pixel phone is available immediately.

Kaymera was started in late 2013 by the founders of NSO, the surveillance tech provider whose iPhone spyware was used to target the phone of UAE human rights activist Ahmed Mansoor in August 2016. The spyware caused Apple to rush out emergency software patches to plug vulnerabilities in its iOS mobile operating system. The Israeli firm is open about its roots.
If NSO is a 'poacher', selling surveillance tools to governments, then Kaymera is the gamekeeper, its pitch runs. "I'm not sure I can buy in to the poacher turned gamekeeper thing here and I would rather trust Google in this case," Rogers concluded.

8 Docker security rules to live by

Odds are, software (or virtual) containers are in use right now somewhere within your organization, probably by isolated developers or development teams to rapidly create new applications.

They might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know if they are running in their companies. In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers.

Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system.

They are flexible, scalable, and easy to use, and they can pack a lot more applications into a given physical infrastructure than is possible with VMs.

And because they share the host operating system, rather than relying on a guest OS, containers can be spun up instantly (in seconds versus the minutes VMs require). A June 2016 report from the Cloud Foundry Foundation surveyed 711 companies about their use of containers. More than half had either deployed or were in the process of evaluating containers. Of those, 16 percent had already mainstreamed the use of containers, and 64 percent expected to do so within the next year.
If security teams want to seize the opportunity (borrowing a devops term) to “shift security to the left,” they need to identify and involve themselves in container initiatives now. Developers and devops teams have embraced containers because they align with the devops philosophy of agile, continuous application delivery. However, as is the case with any new technology, containers also introduce new and unique security challenges.

These include the following:

Inflow of vulnerable source code: Because containers are open source, images created by an organization's developers are often updated, then stored and used as necessary. This creates an endless stream of uncontrolled code that may harbor vulnerabilities or unexpected behaviors.

Large attack surface: In a given environment, there would be many more containers than there would be applications, VMs, databases, or any other object that requires protecting. The large numbers of containers running on multiple machines, whether on premises or in the cloud, make it difficult to track what's going on or to detect anomalies through the noise.

Lack of visibility: Containers are run by a container engine, such as Docker or Rkt, that interfaces with the Linux kernel. This creates another layer of abstraction that can mask the activity of specific containers or what specific users are doing within the containers.

Devops speed: The pace of change is such that containers typically have a lifespan four times shorter than that of VMs, on average. Containers can be executed in an instant, run for a few minutes, then stopped and removed. This ephemerality makes it possible to launch attacks and disappear quickly, with no need to install anything.

"Noisy neighbor" containers: A container might behave in a way that effectively creates a DoS attack on other containers. For example, opening sockets repeatedly will quickly bring the entire host machine to a crawl and eventually cause it to freeze up.

Container breakout to the host: Containers might run as a root user, making it possible to use privilege escalation to break the "containment" and access the host's operating system.

"East-west" network attacks: A jeopardized container can be leveraged to launch attacks across the network, especially if its outbound network connections and ability to run with raw sockets were not properly restricted.

The best practices for securing container environments are not only about hardening containers or the servers they run on after the fact.

They’re focused on securing the entire environment.
Security must be considered from the moment container images are pulled from a registry to when the containers are spun down from a runtime or production environment.

Given that containers are often deployed at devops speed as part of a CI/CD framework, the more you can automate, the better. With that in mind, I present this list of best practices. Many of them are not unique to containers, but if they are "baked" into the devops process now, they will have a much greater impact on the security posture of containerized applications than if they are "bolted" on after the fact.

Implement a comprehensive vulnerability management program. Vulnerability management goes way beyond scanning images when they are first downloaded from a registry.

Containers can easily pass through the development cycle with access controls or other policies that are too loose, resulting in corruption that causes the application to break down or leading to compromise in runtime.

A rigorous vulnerability management program is a proactive initiative with multiple checks from "cradle to grave," triggered automatically and used as gates between the dev, test, staging, and production environments.

Ensure that only approved images are used in your environment. An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the inflow of container images into your development environment.

This means using only approved private registries and approved images and versions.

For example, you might sanction a single Linux distro as a base image, preferably one that is lean (Alpine or CoreOS rather than Ubuntu) to minimize the surface for potential attacks.

Implement proactive integrity checks throughout the lifecycle. Part of managing security throughout the container lifecycle is to ensure the integrity of the container images in the registry and enforce controls as they are altered or deployed into production.
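One common way to enforce such integrity controls is digest pinning: the deploy gate recomputes a cryptographic fingerprint of the image artifact and refuses anything that doesn't match a pinned value. The sketch below is illustrative only; the image names and the `PINNED_DIGESTS` allowlist are hypothetical, and in practice the pinned digests would come from your registry's signed manifests rather than be computed inline.

```python
import hashlib

# Hypothetical allowlist mapping image names to pinned SHA-256 digests.
# Here the "known good" digest is computed from stand-in bytes so the
# sketch is self-contained; real digests come from signed manifests.
PINNED_DIGESTS = {
    "alpine:3.4": "sha256:" + hashlib.sha256(b"example-alpine-layer").hexdigest(),
}

def digest_of(image_bytes: bytes) -> str:
    """Fingerprint an image artifact by hashing its raw bytes."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def verify_image(name: str, image_bytes: bytes) -> bool:
    """Deploy gate: allow only images whose digest matches the pinned value."""
    expected = PINNED_DIGESTS.get(name)
    return expected is not None and digest_of(image_bytes) == expected

print(verify_image("alpine:3.4", b"example-alpine-layer"))  # True
print(verify_image("alpine:3.4", b"tampered-layer"))        # False: digest mismatch
print(verify_image("ubuntu:16.04", b"anything"))            # False: not pinned
```

The same check can run at each gate between dev, test, staging, and production, so a tampered image is caught wherever it was altered.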
Image signing or fingerprinting can be used to provide a chain of custody that allows you to verify the integrity of the containers.

Enforce least privileges in runtime. This is a basic security best practice that applies equally in the world of containers. When a vulnerability is exploited, it generally provides the attacker with access and privileges equal to those of the application or process that has been compromised.

Ensuring that containers operate with the least privileges and access required to get the job done reduces your exposure to risk.

Whitelist files and executables that the container is allowed to access or run. It's a lot easier to manage a whitelist when it is implemented from the get-go.
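A minimal sketch of such a whitelist check, using glob-style patterns: the paths and patterns below are hypothetical examples, and a real enforcement point would sit in the container runtime or an admission hook rather than in application code.

```python
import fnmatch

# Hypothetical per-container whitelist of paths the app may read or execute.
WHITELIST = [
    "/app/*",
    "/usr/bin/python3",
    "/etc/ssl/certs/*",
]

def is_allowed(path: str) -> bool:
    """Return True only if the path matches an approved pattern."""
    return any(fnmatch.fnmatch(path, pattern) for pattern in WHITELIST)

print(is_allowed("/app/server.py"))    # True
print(is_allowed("/usr/bin/python3"))  # True
print(is_allowed("/bin/nc"))           # False: not whitelisted
```

Anything outside the list is denied by default, which is what makes the whitelist useful both as a control and as a baseline for spotting anomalies.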

A whitelist provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly, and it allows you to maintain a more stable and reliable environment. Limiting containers so that they can access or run only pre-approved or whitelisted files and executables is a powerful method to mitigate risk.
It not only reduces the attack surface, but also can be employed to provide a baseline for anomalies and prevent the use cases of the "noisy neighbor" and container breakout scenarios described above.

Enforce network segmentation on running containers. Maintain network segmentation (or "nano-segmentation") to segregate clusters or zones of containers by application or workload.
In addition to being a highly effective best practice, network segmentation is a must-have for container-based applications that are subject to PCI DSS.
It also serves as a safeguard against "east-west" attacks.

Actively monitor container activity and user access. As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity.

Log all administrative user access to containers for auditing. While strong user access controls can restrict privileges for the majority of people who interact with containers, administrators are in a class by themselves. Logging administrative access to your container ecosystem, container registry, and container images is a good security practice and a common-sense control.
It will provide the forensic evidence needed in the case of a breach, as well as a clear audit trail if needed to demonstrate compliance.

Much of the notion of "baking security into IT processes" relates to automating preventive processes from the onset.
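The administrative audit trail described above amounts to structured, append-only records of who did what to which container. A minimal sketch, assuming hypothetical user, action, and target names; a real deployment would ship these events to tamper-resistant storage rather than stdout.

```python
import json
import logging
import sys
from datetime import datetime, timezone

# Minimal structured audit log for administrative container access.
audit = logging.getLogger("container.audit")
audit.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(message)s"))
audit.addHandler(handler)

def log_admin_access(user: str, action: str, target: str) -> dict:
    """Record who did what to which container, with a UTC timestamp."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "target": target,
    }
    audit.info(json.dumps(event))  # emit one JSON object per line
    return event

# Hypothetical example: an admin opening a shell in a production container.
event = log_admin_access("alice", "exec", "registry/payments:1.4")
```

Emitting one JSON object per line keeps the trail machine-parsable, so the same records can feed both breach forensics and compliance reporting.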

Getting aggressive about container security now can allow for containerized applications to be inherently more secure than their predecessors. However, given that containers will be deployed ephemerally and in large numbers, active detection and response -- essential to any security program -- will be critical for containerized environments.

Container runtime environments will need to be monitored at all times, for anomalies, suspected breaches, and compliance purposes. Although there’s a growing body of knowledge about container security in the public domain, it’s important to note that we’re still in the early stages.

As we discover new container-specific vulnerabilities (or new-old ones such as Dirty COW), and as we make the inevitable mistakes (like the configuration error in Vine’s Docker registry that allowed a security researcher to access Vine's source code), best practices are sure to evolve. The good news, as far as container adoption goes, is it’s still early enough to automate strong security controls into container environments.

The not-so-good news is that security teams need to know about container initiatives early enough to make that happen, and more often than not they don't.

To realize the potential security improvements that can be achieved in the transition to container-based application development, that needs to change ... soon.

Educating yourself about containers and the security implications of using them is a good start.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth.

The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers.
InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Send all inquiries to newtechforum@infoworld.com.

Hurricanes, Earthquakes & Threat Intelligence

You must be prepared for foreseeable attacks as well as the ones that sneak up on you.

Organizations deal with two types of cyberthreats: hurricanes and earthquakes. Hurricanes are those attacks you can see coming; earthquakes, you can't. Both are inevitable, and you need to plan and take action accordingly. This starts with an understanding of what threat intelligence is and how to make it relevant and actionable. Threat intelligence can help you transition from constantly reacting to being proactive. It allows you to prepare for the hurricanes and respond to the earthquakes with an efficient, integrated approach.

Eliminate Noise

Mention threat intelligence and most organizations think about multiple data feeds to which they subscribe — commercial sources, open source, and additional feeds from security vendors — each in a different format and most without any context to allow for prioritization. This global threat data gives some insight into activities happening outside of your enterprise — not only attacks themselves, but how attackers are operating and infiltrating networks. The challenge is that most organizations suffer from data overload. Without the tools and insights to automatically sift through mountains of disparate global data and aggregate it for analysts and action, this threat data becomes noise: you have alerts around attacks that aren't contextualized, relevant, or a priority. To make more effective use of this data, it must be aggregated in one manageable location and translated into a uniform format so that you can automatically get rid of the noise and focus on what's important.

Focus on Threats

With global threat data organized, you can focus on the hurricanes and earthquakes that threaten your organization. Hurricanes are the threats you know about, can prepare for, protect against, and anticipate based on past trends. For example, based on research, say that we know a file is malware. This intelligence should be operationalized — turned into a policy, a rule, or signature and sent to the appropriate sensor — so that it can prevent bad actors from stealing valuable data, creating a disruption, or causing damage. As security operations become more mature, you can start to get alerts on these known threats in addition to automatically blocking them so you can learn more about the adversary. This allows you to focus on the attacks that really matter.

Earthquakes are unknown threats, or threats that you may not have adequate countermeasures against, that have bypassed existing defenses. Once they're inside the network, your job is to detect, respond, and recover. This hinges on the ability to turn global threat data into threat intelligence by enriching that data with internal threat and event data and allowing analysts to collaborate for better decision making. Threat intelligence helps you better scope the campaign once the threat is detected, learn more about the adversary, and understand affected systems and how to best remediate.

By correlating events and associated indicators from inside your environment (e.g., SIEM alerts or case management records) with external data on indicators, adversaries, and their methods, you gain the context to understand the who, what, when, where, why, and how of an attack. Going a step further, applying context to your business processes and assets helps you assess relevance. Is anything the organization cares about at risk? If the answer is "no," then what you suspected to be a threat is low priority. If the answer is "yes," then it's a threat. Either way, you have the intelligence you need to quickly take action.

Make Intelligence Actionable

Intelligence has three attributes that help define "actionable":

Accuracy: Is the intelligence reliable and detailed?
Relevance: Does the intelligence apply to your business or industry?
Timeliness: Is the intelligence being received with enough time to do something?

An old industry joke is that you can only have two of the three, so you need to determine what's most important to your business. If you need intelligence as fast as possible to deploy to your sensors, then accuracy may suffer and you might expect some false positives. If the intelligence is accurate and timely, then you may not have been able to conduct thorough analysis to determine if the intelligence is relevant to your business. This could result in expending resources on something that doesn't present a lot of risk.

Ultimately, the goal is to make threat intelligence actionable. But actionable is defined by the user. The security operations center typically looks for IP addresses, domain names, and other indicators of compromise — anything that will help to detect and contain a threat and prevent it in the future. For the network team, it's about hardening defenses with information on vulnerabilities, signatures, and rules to update firewalls, and patch and vulnerability management systems. The incident response team needs intelligence about the adversary and the campaigns involved so they can investigate and remediate. And the executive team and board need intelligence about threats in business terms — the financial and operational impact — in order to increase revenue and protect shareholders and the company as a whole. Analysts must work together and across the organization to provide the right intelligence in the right format and with the right frequency so that it can be used by multiple teams.

Operationalizing threat intelligence takes time and a plan. Many organizations are already moving from a reactive mode to being more proactive. But to make time to look out at the horizon and see and prepare for hurricanes while also dealing with earthquakes, organizations need to move to an anticipatory model with contextual intelligence, relevance, and visibility into trends in the threat landscape.
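The aggregate, normalize, deduplicate, and prioritize steps described above can be sketched in a few lines. The feed shapes, indicator values, and the relevance rule below are all hypothetical stand-ins; real feeds arrive in formats such as STIX or CSV, and relevance would be judged against your actual asset inventory.

```python
# Two hypothetical feeds in different shapes, normalized to one uniform
# format, deduplicated, and scored by relevance so noise can be dropped.
feed_a = [{"indicator": "198.51.100.7", "kind": "ip", "seen": "2017-02-01"}]
feed_b = [("evil.example.com", "domain"), ("198.51.100.7", "ip")]

# Hypothetical relevance rule: this organization cares most about
# network-level indicators.
RELEVANT_TYPES = {"ip"}

def normalize(feed_a, feed_b):
    """Translate both feed shapes into one uniform record format."""
    uniform = []
    for item in feed_a:
        uniform.append({"value": item["indicator"], "type": item["kind"]})
    for value, kind in feed_b:
        uniform.append({"value": value, "type": kind})
    # Deduplicate: the same indicator often appears in several feeds.
    seen, deduped = set(), []
    for ioc in uniform:
        key = (ioc["value"], ioc["type"])
        if key not in seen:
            seen.add(key)
            deduped.append(ioc)
    return deduped

def prioritize(iocs):
    """Attach a priority so analysts see relevant indicators first."""
    return [dict(ioc, priority="high" if ioc["type"] in RELEVANT_TYPES else "low")
            for ioc in iocs]

iocs = prioritize(normalize(feed_a, feed_b))
print(iocs)
```

Even this toy pipeline shows why the uniform format matters: once every feed lands in the same shape, deduplication and prioritization become simple set and lookup operations instead of per-feed special cases.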
As Senior VP of Strategy of ThreatQuotient, Jonathan Couch utilizes his 20+ years of experience in information security, information warfare, and intelligence collection to focus on the development of people, process, and technology within client organizations to assist in ...

Printer security is so bad HP Inc will sell you services...

Finally, FINALLY, someone is turning off Telnet and FTP

Printer security is so awful HP Inc is willing to shut off shiny features and throw its own dedicated bodies at the perennial problem. The tech giant is offering the professional security services under its new and far-harder-than-before "Secure Managed Print Services" offering unveiled today. Security types will also provide ongoing risk assessments and help with passing audits for the horridly hackable hardware, and handle firmware updates and password resets.

The HP printers are shipped in a hardened state with shiny but dangerous features and ports closed by default, in a move that reduces the attack surface available to external hackers. The obvious hacker-bait Telnet and FTP facilities inexplicably included in printers are on the hardening chopping block, as are other unspecified geriatric features. More interfaces will be decommissioned in the future as HP successfully wrangles popular software providers to move to more secure networking options. Thankfully, remote capabilities remain to allow external HP experts to log in and monitor the security health of device fleets.

The tech company is continuing its hardening approach, decommissioning old cipher suites and protocols, and upping administration and encryption settings for new and old HP printers. "Networked printers can no longer be overlooked in the wake of weakening firewalls to the growing sophistication and volume of cyberattacks," HP South Pacific printer boss Ben Vivoda says.

Microsoft Bolsters Ransomware Protection in Windows 10 Anniversary Update

Microsoft says it hardened its ransomware defenses in Windows 10 Anniversary Update in the face of skyrocketing infection rates and a doubling in the number of ransomware variants released into the wild over the past 12 months. In a whitepaper (PDF) released last week, Microsoft explained its latest anti-ransomware solutions bundled this past August with the release of Windows 10 Anniversary Update. One of those improvements includes an updated Microsoft Edge browser with advanced sandboxing technology specific to the exploit-magnet Adobe Flash Player. Other anti-ransomware hardening includes applying a machine-learning infrastructure and cloud-based approach for identifying, classifying and protecting against specific ransomware attacks in seconds rather than hours, according to Microsoft. "The ways attackers are executing attacks are becoming more complex, and the results of the attack are becoming increasingly costly to its victims," Microsoft wrote in its report.
In fact, it’s fair to say, 2016 has been a devastating year for many individuals and companies stung by ransomware. Between April 2015 and March 2016 the number of users hit by ransomware rose 17.7 percent worldwide compared to the prior year, according to Kaspersky Lab.
Incidents of encryption-based ransomware that locks up data on a PC have risen five-fold over the past year, jumping from 6.6 percent in 2014/2015 to 31 percent the following year. Microsoft says it has bolstered Windows 10 to reflect those changing tactics of criminals.

The company, in a push to get customers to upgrade, says its Windows 10 Anniversary Edition users are 58 percent less likely to encounter ransomware than when running Windows 7. Windows 7 is still the dominant version of Microsoft's OS in use today with 48 percent market share compared to Windows 10 with 22 percent, according to the most recent data available from Net Market Share. Windows 8 is the third most popular Windows OS in use with 8.4 percent market share, followed closely by Windows XP with 8.2 percent. Much of Microsoft's anti-ransomware hardening centers on its Edge browser defenses. Microsoft said that between January and July, six of the top 10 ransomware threats used email and browser exploits, or browser-plug-in related exploits, with the remaining four using browser exploits. To that end, in Windows 10 Anniversary Update, Adobe's Flash Player is isolated when running in the Edge browser and has its own dedicated application container.
In addition to container management, Microsoft adds new kernel protection to Windows 10 Anniversary Update that limits the ways in which system calls can be used by Microsoft Edge, according to Microsoft. "If a malware author attempts using a vulnerable system call to escape the browser's sandbox and download and install ransomware in a way that does not fit within these new restrictions, Microsoft Edge will block the system call, preventing the attack," Microsoft said.

Part of Microsoft's anti-ransomware security also includes a combination of human and machine learning to protect against malware. On one hand, Microsoft bolsters its Edge browser with its SmartScreen Filter (introduced with Internet Explorer 8). Its SmartScreen Filter roughly competes with Google's Safe Browsing initiative in that they both use human- and machine-generated URL blacklists to block users from visiting unsafe sites. Over the past six months, Microsoft says it has also been leveraging its machine-learning technology and cloud-based automatic sample submission features in Windows Defender to help block ransomware from inboxes. Using both can block previously unidentified malware, Microsoft says. "Definition updates can take hours to prepare and deliver; our cloud service delivers this protection in seconds," Microsoft said.

As part of Windows Anniversary Edition the company introduced Windows Defender Advanced Threat Protection (ATP), a new service that gives remote security staff members a shared dashboard to view security events and alerts in their network and mitigate remote threats. Despite ransomware victories, such as CrySis being neutralized via the release of master decryption keys, the threat of ransomware persists with most strains targeting Windows users.
In August, ransomware called Fantom was discovered masquerading as a fake critical Windows update. Last summer, shortly after Windows 10 was released, attackers began launching spam and phishing email campaigns around the operating system.
Victims received messages claiming users could upgrade to Windows 10 for free.

Those who downloaded the malicious .zip archive were ultimately hit with CTB-Locker ransomware and had their files encrypted.

IDG Contributor Network: Cybersecurity Awareness Month: Shedding light on application security

October is Cybersecurity Awareness Month and, in that spirit, I’d like to shed some light on a cybersecurity topic that is both increasingly important and frequently misunderstood. In his proclamation declaring October National Cybersecurity Month, President Obama said, “Keeping cyberspace secure is a matter of national security, and in order to ensure we can reap the benefits and utility of technology while minimizing the dangers and threats it presents, we must continue to make cybersecurity a top priority.” I agree that cybersecurity is key to taking advantage of the incredible innovations technology brings to our lives.
I also feel strongly that software is at the core of most of these innovations. Yet the systemic risk to our economy and national security brought on by vulnerabilities in software code does not get the attention it deserves. The fact is Verizon’s Data Breach Incident Report shows that web application attacks are now the most frequent pattern in confirmed breaches.

But why isn’t every organization talking about application security? Part of the reason is that there are a lot of misconceptions, so I’d like to set the record straight and debunk a few of the more common application security “fallacies.” Fallacy: Implementing an application security program is expensive. Reality: Not so much anymore, especially when you consider the downstream cost savings. Cloud-based application security solutions changed the cost game. You no longer need to purchase or maintain expensive equipment or hire specialized staff.

And the cost of a breach can be staggering. Juniper Research recently predicted that the cost of data breaches will increase to $2.1 trillion globally by 2019.

The risk of an expensive breach, not to mention the long-term fallout that often follows, far outweighs the cost of implementing application security. Fallacy: Firewalls, antivirus, and network security cover applications. Reality: These technologies may protect you in some ways, but they do not protect against attacks targeting your applications. One of the reasons that cyberattackers have turned their attention to web-facing applications is that most enterprises are proficient at hardening traditional perimeters with next-generation firewalls, IDS/IPS systems, and endpoint security solutions, making applications a more attractive target. For instance, firewalls were designed to handle network events, such as finding and blocking botnets and remote access exploits, but they don’t get as granular as the application level.
Some network security solutions do address certain application-level events -- but they require significant effort to configure and monitor, leading to security inefficiencies. Ultimately, it’s like trying to eat spaghetti with a spoon. You can ... but it’s not the way to do it. Fallacy: One single technology can secure all applications. Reality: There is no solution to rule them all.
In my work at Veracode, I have found that there are significant differences in the types of vulnerabilities that are commonly discovered by looking at applications while they are running with dynamic testing compared to analyzing the raw code with static tests.
Static testing can help identify vulnerabilities inherent in the code while dynamic testing can provide a valuable outside perspective.
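As a toy illustration of what static testing does, the sketch below walks a Python syntax tree and flags calls to eval() without ever running the program. The function name and sample snippet are hypothetical, and real static analyzers are far more sophisticated.

```python
# Hypothetical static-analysis sketch: parse source into an AST and flag
# eval() calls, a class of issue static testing finds in raw code.
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of eval() calls found in the source."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            flagged.append(node.lineno)
    return flagged

sample = "x = 1\ny = eval(input())\n"
print(find_eval_calls(sample))  # [2]
```

Dynamic testing, by contrast, would probe the running application from the outside and so catches a complementary set of issues, which is why the article argues for combining both.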

By combining both techniques across the life of the application, organizations can find and address more kinds of vulnerabilities and drive down application risk. In other words, only using one kind of application security testing during the software development lifecycle means missing many vulnerabilities. Fallacy: Covering only business-critical applications is enough for success. Reality: Some of the recent major breaches stemmed from applications considered “non-critical.” For instance, JPMorgan was recently breached after a third-party website for its annual charity road race was compromised, exposing employee credentials that were leveraged to log into a misconfigured JPMorgan server.

The site was hardly a business-critical application and was not even under the direct control of JPMorgan, but hackers found a vulnerability in the third-party website and used it to their advantage. Cyberattackers look for the path of least resistance into an organization, and that path is often through less-critical and third-party applications. Fallacy: Developers won’t change their agile processes to incorporate application security. Reality: The data shows they do, and increasingly, they won’t need to. Application security is in the midst of a major transformation: it is becoming less driven by security professionals and more by frontline developers. With the rise of DevOps and continuous deployment/release models, the old ways of inserting security into software development after the fact are no longer viable.
In today’s development environment, developers can’t be held back by waiting for the security team’s review.

As such, security professionals must adapt to the new developer processes, not the other way around.

Application security solutions are already increasingly being designed to fit the way developers work and to integrate seamlessly and automatically into developers’ workflows.

Data from our customers, and third-party research from TechTarget, show that DevOps teams are seeing security as part of their responsibility. Ultimately, cybersecurity awareness should include software.

The reality is that the world is increasingly powered by software and overlooking application security is a dangerous oversight. Make sure you know the truth of how application security really works to make the right decisions to keep your businesses safe. This article is published as part of the IDG Contributor Network.

Got Ancient exploit but nowhere to use it? Try the horrid...

Audio: Aussie hacker shows even NSA hacks haven't schooled some telcos Ruxcon They've been warned for years, but scores of telcos are still making bone-headed configuration mistakes in their GPRS Global Roaming Exchange (GRX) networks, leaving mail and FTP servers vulnerable. The international phone routing system is used for passing and billing calls between providers, using encryption to funnel data over specific protocols. It is the same network leaker Edward Snowden revealed in 2013 was the NSA's attack vector to breach Belgian telco Belgacom. Aussie HP Enterprise Consulting Services managing principal Stephen Kho detailed in 2014 how anyone can access reams of leaky GRX data without hacking national telcos, with simple "light weight" scans. A year later, Kho is still finding data via some 40,000 live GRX hosts that responded to pings, although the number of exposed services has largely fallen. He shared his results and explained how GRX data can be obtained, including detailing the workings of the network and protocols, at the Ruxcon hacking conference in Melbourne, Australia. Presentation recording: Listen or download On her majesty's secret service - GRX and a spy agency. Kho listed the server banner scan results showing user services including mail servers and Cisco routers, and showed that many were unpatched and exposed to old, dangerous exploits, including remote code execution and denial of service. "So there's some email servers here, there's a root exploit on that, 10-year-old remote code execution on that, buffer overflow on this send mail … that's not good," Kho told the giggling assembled hackers. "We looked at some FTP servers, a whole bunch here ... again remote code execution on that, denial of service on that, overflow on that from 2001." "Clearly people are putting things on the GRX network that are running all services and filtering, not hardening," he says. "If you think your old exploits aren't gunna work anymore, well you're still good." ®
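The "light weight" scans Kho describes amount to banner grabbing: connect to a port, record whatever the service announces on connect, and match the banner against known-vulnerable software versions. A minimal sketch of the technique follows; the host and port in the commented example are placeholders, and scanning networks you don't own is illegal.

```python
# Banner-grab sketch: open a TCP connection and read the service's
# greeting line. Many mail/FTP daemons announce their exact version.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Return the first chunk of data a TCP service sends on connect."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        return sock.recv(1024).decode("ascii", errors="replace").strip()

# Example (placeholder host; only run against systems you control):
# print(grab_banner("ftp.example.net", 21))
```

Matching the returned version strings against public exploit databases is how a scan like this surfaces decade-old remote code execution bugs.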

Domain name resolution is a Tor attack vector, but don’t worry

Nation-state attackers probably pwn you anyhow This one needs the words “Don't Panic” in large friendly letters on the cover: privacy researchers have worked out that Tor's use of the domain name system (DNS) can be exploited to identify users. However, they say, right now an attacker with resources to drop Tor sniffers at “Internet scale” can already de-anonymise users. Rather, they hope to have Tor and relay operators start hardening their use of DNS against future attacks. So: read if you're interested in the interaction between Tor and the DNS, but not if you need the sensation of smelling salts after a faint. The basis of Tor is that your ISP can see you're talking to a Tor node, but nothing else, because your content is encrypted, while the website responding to your requests doesn't know your IP address. What Benjamin Greschbach (KTH Royal Institute of Technology) and his collaborators have done is add the DNS to the attack vectors. While the user's traffic is encrypted when it enters the network, what travels from the exit node to a resolver is a standard – unencrypted – DNS request. Described at Freedom to Tinker here and written up in full in this Arxiv pre-print, the attack is dubbed DefecTor. Google's DNS has a special place in the research, the paper states, because 40 per cent of DNS bandwidth from Tor exit nodes, and one-third of all Tor DNS requests, land on The Chocolate Factory's resolvers.

That makes Google uniquely placed to help snoop on users, if it were so minded. There's a second problem, and one The Register is sure has other researchers slapping their foreheads and saying “why didn't I think of that?”: DNS requests often traverse networks (autonomous systems, ASs) that the user's HTTP traffic never touches.

The requests leak further than the Tor traffic that triggers them. DefecTor components: a sniffer on the ingress TCP traffic, and another either on the DNS path or in a malicious DNS server. Like other attacks, DefecTor needs a network-level sniffer at ingress. While ingress traffic is encrypted, existing research demonstrates that packet length and direction provide a fingerprint that can identify the Website that originated the traffic. Egress sniffing is also needed: the attacker might capture traffic on the path between an exit relay and a resolver, or may operate a malicious DNS resolver to capture exit traffic. With the user's encrypted TCP traffic fingerprinted and paired with DNS requests and time-stamps, DefecTor can mount “perfectly precise” attacks. “Mapping DNS traffic to websites is highly accurate even with simple techniques, and correlating the observed websites with a website fingerprinting attack greatly improves the precision when monitoring relatively unpopular websites,” the paper states. Mitigations suggested in the paper include: Exit relay operators handling DNS resolution themselves, or using their ISP's resolver, rather than Google or OpenDNS; There's a “clipping bug” in Tor (notified to the operators) that expires DNS caches too soon, sending too many requests out to a resolver (and providing more sniffable material to an attacker); Site operators should create .onion services that don't raise DNS requests; and Tor needs hardening against Website fingerprinting. The researchers have published code, data, and replication instructions here. ®
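At its core, the correlation step pairs the time of a fingerprinted ingress burst with DNS lookups observed at the exit side. The toy sketch below keys on timing alone; the real DefecTor attack combines this with website fingerprints, so treat it as an illustration of the idea rather than the paper's method.

```python
# Toy DefecTor-style correlation: which DNS lookups at the exit side
# fall within a time window of a sniffed ingress traffic burst?
def correlate(ingress_ts: float,
              dns_events: list[tuple[float, str]],
              window: float = 1.0) -> list[str]:
    """Return domains looked up within `window` seconds of ingress_ts."""
    return [domain for ts, domain in dns_events
            if abs(ts - ingress_ts) <= window]

dns_log = [(100.2, "example.org"), (250.0, "unrelated.net")]
print(correlate(100.0, dns_log))  # ['example.org']
```

The window-matching explains why the DNS leak matters: even when the payload is encrypted, timing plus destination names is enough signal to narrow down what a user visited.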

Enterprises Slow to Share Cyber-Threat Data Despite Federal Protection

In January, President Obama signed the Cybersecurity Act of 2015, but companies remain in a holding pattern, waiting for legal clarity and demonstrable benefits before sharing sensitive information. Sharing information on cyber-threats has garnered a great deal of U.S. government support over the past 18 months. In February 2015, President Obama signed Executive Order 13691, encouraging collaboration among private companies and with the government through organizations known as information sharing and analysis organizations, or ISAOs. Nearly a year later, Congress passed a 2,009-page military spending bill that included among its provisions the Cybersecurity Act of 2015, a law that affords companies legal protections in exchange for sharing information with the government about cyber-attacks.

This past summer, the Department of Homeland Security released guidelines for sharing details of attacks with the federal government. Despite the government action, companies have been reticent to begin sharing data on the attacks hitting their networks. One report found that while nearly 140 organizations were connected to DHS's Automated Indicator Sharing system, only one company was sharing any significant amount of information. Nine months after the Cybersecurity Act became law, the complexity of information sharing and the natural human reluctance to reveal details about network and data breaches mean that convincing organizations to share continues to be difficult, Chris Coleman, CEO of threat-intelligence firm LookingGlass, told eWEEK. "I always question whether it's in human nature to share this type of information," he said. "For companies, the legal issues of a material breach ... mean that there is not a lot of established policy in regards to sharing.
So (many say) why take the risk?" Yet defenders need to exchange information on cyber-threats.
Such intelligence promises to aid companies in hardening their defenses against the most pervasive attacks and in assigning staff and resources to the most pressing threats. Still, very few companies have started sharing information. Large companies are studying the legal issues, concerned that talking about attacks will bring lawsuits and legal jeopardy.
Smaller firms generally just do not know where to begin, Greg White, executive director of the ISAO Standards Organization and a professor of computer science at University of Texas at San Antonio, told eWEEK. "Mostly our problem at this point is getting the word out," he said, adding that "if you are one of those entities that sign up for a feed and you are getting thousands of indicators, many don't know what to do with that." The Cybersecurity Act of 2015 should assuage fears of legal repercussions from limited sharing.

The law, which had been discussed in Congress in various forms for nearly a decade, orders government agencies to share information about threats with companies and other groups, and mandates new processes and systems to disseminate information about threats from the private sector to government agencies. Before the law, companies would only rarely voluntarily share breach information.

Lockdown! Harden Windows 10 for maximum security

You may have heard that Microsoft has made Windows 10 more secure than any of its predecessors, packing it with security goodies. What you might not know is that some of these vaunted security features aren’t available out of the box or they require additional hardware -- you may not be getting the level of security you bargained for. Features such as Credential Guard are available for only certain editions of Windows 10, while the advanced biometrics promised by Windows Hello require a hefty investment in third-party hardware. Windows 10 may be the most secure Windows operating system to date, but the security-savvy organization -- and individual user -- needs to keep the following hardware and Windows 10 edition requirements in mind in order to unlock the necessary features to achieve optimum security. Note: Presently, there are four desktop editions of Windows 10 -- Home, Pro, Enterprise, and Education -- along with multiple versions of each, offering varying levels of beta and preview software.
InfoWorld’s Woody Leonhard breaks down which version of Windows 10 to use.

The following Windows 10 security guide focuses on standard Windows 10 installations -- not Insider Previews or Long Term Servicing Branch -- and includes Anniversary Update where relevant.
The right hardware Windows 10 casts a wide net, with minimum hardware requirements that are undemanding.

As long as you have the following, you’re good to upgrade from Win7/8.1 to Win10: a 1GHz or faster processor, 2GB of memory (for Anniversary Update), 16GB (for 32-bit OS) or 20GB (64-bit OS) of disk space, a DirectX 9 or later graphics card with a WDDM 1.0 driver, and an 800-by-600-resolution display (for 7-inch or larger screens).
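The minimums above are easy to express as a checklist. The small sketch below transcribes the 64-bit thresholds from that list; the dictionary and function names are arbitrary, and it is only meant to show how a pre-upgrade audit might flag shortfalls.

```python
# Check a machine's specs against the Windows 10 Anniversary Update
# minimums quoted above (64-bit disk figure). Names are illustrative.
WIN10_MIN = {"cpu_ghz": 1.0, "ram_gb": 2, "disk_gb": 20}

def failed_requirements(cpu_ghz: float, ram_gb: int, disk_gb: int) -> list[str]:
    """Return the minimums this machine fails; empty list means OK."""
    failures = []
    if cpu_ghz < WIN10_MIN["cpu_ghz"]:
        failures.append("processor")
    if ram_gb < WIN10_MIN["ram_gb"]:
        failures.append("memory")
    if disk_gb < WIN10_MIN["disk_gb"]:
        failures.append("disk")
    return failures

print(failed_requirements(2.4, 8, 256))  # []
print(failed_requirements(1.2, 1, 16))   # ['memory', 'disk']
```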

That describes pretty much any computer from the past decade. But don’t expect your baseline machine to be fully secure, as the above minimum requirements won’t support many of the cryptography-based capabilities in Windows 10. Win10’s cryptography features require Trusted Platform Module 2.0, which provides a secure storage area for cryptographic keys and is used to encrypt passwords, authenticate smartcards, secure media playback to prevent piracy, protect VMs, and secure hardware and software updates against tampering, among other functions. Modern AMD and Intel processors (Intel Management Engine, Intel Converged Security Engine, AMD Security Processor) already support TPM 2.0, so most machines bought in the past few years have the necessary chip.
Intel’s vPro remote management service, for example, uses TPM to authorize remote PC repairs.

But it’s worth verifying whether TPM 2.0 exists on any system you upgrade, especially given that Anniversary Update requires TPM 2.0 support in the firmware or as a separate physical chip.

A new PC, or systems installing Windows 10 from scratch, must have TPM 2.0 from the get-go, which means having an endorsement key (EK) certificate preprovisioned by the hardware vendor as it is shipped.

Alternatively, the device can be configured to retrieve the certificate and store it in TPM the first time it boots up. Older systems that don’t support TPM 2.0 -- either because they don’t have the chip installed or are old enough that they have only TPM 1.2 -- will need to get a TPM 2.0-enabled chip installed. Otherwise, they will not be able to upgrade to Anniversary Update at all. While some of the security features work with TPM 1.2, it’s better to get TPM 2.0 whenever possible.

TPM 1.2 supports only the RSA and SHA-1 hashing algorithms, and considering the SHA-1 to SHA-2 migration is well under way, sticking with TPM 1.2 is problematic.
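The practical difference behind that migration is digest strength: SHA-1 produces a 160-bit digest, while SHA-256 produces a 256-bit one. A quick demonstration with Python's hashlib:

```python
# SHA-1 (the only hash TPM 1.2 offers) vs SHA-256 (supported by TPM 2.0).
import hashlib

data = b"firmware image"
print(hashlib.sha1(data).hexdigest())    # 40 hex chars = 160 bits
print(hashlib.sha256(data).hexdigest())  # 64 hex chars = 256 bits
```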

TPM 2.0 is much more flexible, as it supports SHA-256 and elliptic curve cryptography. Unified Extensible Firmware Interface (UEFI) BIOS is the next piece of must-have hardware for achieving the most secure Windows 10 experience.

The device needs to be shipped with UEFI BIOS enabled to allow Secure Boot, which ensures that only operating system software, kernels, and kernel modules signed with a known key can be executed during boot time.
Secure Boot blocks rootkits and BIOS-malware from executing malicious code.
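Conceptually, Secure Boot is a signature check at hand-off: the firmware refuses to execute a boot image whose signature does not verify against a key in its signature database. The sketch below uses HMAC as a stand-in for the real RSA/X.509 machinery, so it illustrates the gate, not the actual cryptography or key provisioning.

```python
# Conceptual Secure Boot gate: boot only images whose signature verifies
# against a trusted key. HMAC stands in for real public-key signatures.
import hashlib
import hmac

FIRMWARE_DB_KEY = b"platform-key"  # placeholder for a UEFI db entry

def sign(image: bytes) -> bytes:
    return hmac.new(FIRMWARE_DB_KEY, image, hashlib.sha256).digest()

def boot_allowed(image: bytes, signature: bytes) -> bool:
    """Firmware-side check: does the image's signature verify?"""
    return hmac.compare_digest(sign(image), signature)

kernel = b"signed kernel image"
print(boot_allowed(kernel, sign(kernel)))       # True
print(boot_allowed(b"tampered", sign(kernel)))  # False
```

A rootkit that modifies the boot chain changes the image bytes, so its signature no longer verifies and the firmware refuses to run it.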
Secure Boot requires firmware that supports UEFI v2.3.1 Errata B and has the Microsoft Windows Certification Authority in the UEFI signature database. While a boon from a security perspective, Microsoft designating Secure Boot mandatory for Windows 10 has run into controversy, as it makes it harder to run unsigned Linux distributions (such as Linux Mint) on Windows 10-capable hardware. Anniversary Update won’t install unless your device is UEFI 2.3.1-compliant or later. Beefing up authentication, identity Password security has been a significant issue in the past few years, and Windows Hello moves us closer to a password-free world as it integrates and extends biometric logins and two-factor authentication to "recognize" users without passwords. Windows Hello also manages to be simultaneously the most accessible and inaccessible security feature of Windows 10. Yes, it is available across all Win10 editions, but it requires significant hardware investment to get the most out of what it has to offer. To protect credentials and keys, Hello requires TPM 1.2 or later.

But for devices where TPM is not installed or configured, Hello can use software-based protection to secure credentials and keys instead, so Windows Hello is accessible to pretty much any Windows 10 device. But the best way to use Hello is to store biometric data and other authentication information in the on-board TPM chip, as the hardware protection makes it more difficult for attackers to steal them.

Further, to take full advantage of biometric authentication, additional hardware -- such as a specialized illuminated infrared camera or a dedicated iris or fingerprint reader -- is necessary. Most business-class laptops and several lines of consumer laptops ship with fingerprint scanners, enabling businesses to get started with Hello under any edition of Windows 10.

But the marketplace is still limited when it comes to depth-sensing 3D cameras for facial recognition and retina scanners for iris-scanning, so Windows Hello’s more advanced biometrics are a future possibility for most, rather than a daily reality. Available for all Windows 10 editions, Windows Hello Companion Devices is a framework for allowing users to use an external device -- such as a phone, access card, or wearable -- as one or more authenticating factors for Hello. Users interested in working with Windows Hello Companion Devices to roam with their Windows Hello credentials between multiple Windows 10 systems must have Pro or Enterprise installed on each one. Windows 10 formerly had Microsoft Passport, which enabled users to log in to trusted applications via Hello credentials. With Anniversary Update, Passport no longer exists as a separate feature but is incorporated into Hello.

Third-party applications that use the Fast Identity Online (FIDO) specification will be able to support single sign-on by way of Hello.

For example, the Dropbox app can be authenticated directly via Hello, and Microsoft’s Edge browser enables integration with Hello to extend to the web.
It’s possible to turn on the feature in a third-party mobile device management platform, as well. The password-less future is coming, but not quite yet. Keeping malware out Windows 10 also introduces Device Guard, technology that flips traditional antivirus on its head.

Device Guard locks down Windows 10 devices, relying on whitelists to let only trusted applications be installed. Programs aren’t allowed to run unless they are determined safe by checking the file’s cryptographic signature, ensuring unsigned applications and malware cannot execute.
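A simplified way to picture whitelisting is a trusted-digest lookup: hash the binary and allow it to run only if the digest is known, denying everything else by default. Note that Device Guard itself verifies code-signing signatures rather than bare hashes; this sketch only shows the allow-list idea, and the byte strings are placeholders.

```python
# Hash-based allowlisting sketch in the spirit of Device Guard: only
# binaries whose SHA-256 digest is pre-approved may execute.
import hashlib

TRUSTED_SHA256 = {
    # digests of approved binaries, provisioned ahead of time by an admin
    hashlib.sha256(b"approved app bytes").hexdigest(),
}

def is_trusted(file_bytes: bytes) -> bool:
    """Deny by default: run only if the digest is on the trusted list."""
    return hashlib.sha256(file_bytes).hexdigest() in TRUSTED_SHA256

print(is_trusted(b"approved app bytes"))  # True
print(is_trusted(b"unknown malware"))     # False
```

The deny-by-default posture is what "flips traditional antivirus on its head": instead of enumerating bad files, the system enumerates good ones.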

Device Guard relies on Microsoft’s own Hyper-V virtualization technology to store its whitelists in a shielded virtual machine that system administrators can’t access or tamper with.

To take advantage of Device Guard, machines must run Windows 10 Enterprise or Education and support TPM, hardware CPU virtualization, and I/O virtualization.

Device Guard relies on Windows hardening such as Secure Boot. AppLocker, available only for Enterprise and Education, can be used with Device Guard to set up code integrity policies.

For example, administrators can decide to limit which universal applications from the Windows Store can be installed on a device. Configurable code integrity is another Windows component that verifies the code running is trusted and safe. Kernel mode code integrity (KMCI) prevents the kernel from executing unsigned drivers.

Administrators can manage the policies at the certificate authority or publisher level as well as the individual hash values for each binary executable.
Since most commodity malware is unsigned, deploying code integrity policies lets organizations immediately protect against unsigned malware. Windows Defender, first released as standalone software for Windows XP, became Microsoft’s default malware protection suite, with antispyware and antivirus, in Windows 8.

Defender is automatically disabled when a third-party antimalware suite is installed.
If there is no competing antivirus or security product installed, make sure that Windows Defender, available across all editions and with no specific hardware requirements, is turned on. For Windows 10 Enterprise users, there is the Windows Defender Advanced Threat Protection, which offers real-time behavioral threat analysis to detect online attacks. Securing data BitLocker, which secures files in an encrypted container, has been around since Windows Vista and is better than ever in Windows 10. With Anniversary Update, the encryption tool is available for Pro, Enterprise, and Education editions. Much like Windows Hello, BitLocker works best if TPM is used to protect the encryption keys, but it can also use software-based key protection if TPM does not exist or is not configured. Protecting BitLocker with a password provides the most basic defense, but a better method is to use a smartcard or the Encrypting File System to create a file encryption certificate to protect associated files and folders. When BitLocker is enabled on the system drive and brute-force protection is enabled, Windows 10 can restart the PC and lock access to the hard drive after a specified number of incorrect password attempts. Users would have to type the 48-character BitLocker recovery key to start the device and access the disk.
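The brute-force lockout described above can be modeled as a simple failed-attempt counter: after a threshold of wrong PIN entries, the drive stops accepting PINs entirely and demands the recovery key. The toy model below is invented for illustration; the class, names, and threshold are not BitLocker's actual implementation.

```python
# Toy model of BitLocker-style brute-force lockout: after MAX_ATTEMPTS
# wrong PINs, even the correct PIN is refused until recovery-key entry.
MAX_ATTEMPTS = 3  # illustrative; the real threshold is policy-defined

class DriveLock:
    def __init__(self, pin: str):
        self._pin = pin
        self._failures = 0
        self.locked_out = False

    def try_pin(self, attempt: str) -> bool:
        if self.locked_out:
            return False  # only the recovery key will unlock now
        if attempt == self._pin:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= MAX_ATTEMPTS:
            self.locked_out = True
        return False

drive = DriveLock("1234")
for guess in ("0000", "1111", "2222"):
    drive.try_pin(guess)
print(drive.locked_out)       # True
print(drive.try_pin("1234"))  # False: recovery key required
```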

To enable this feature, the system would need to have UEFI firmware version 2.3.1 or later. Windows Information Protection, formerly Enterprise Data Protection (EDP), is available only for Windows 10 Pro, Enterprise, or Education editions.
It provides persistent file-level encryption and basic rights management, while also integrating with Azure Active Directory and Rights Management services.
Information Protection requires some kind of mobile device management -- Microsoft Intune or a third-party platform such as VMware’s AirWatch -- or System Center Configuration Manager (SCCM) to manage the settings.

An admin can define a list of Windows Store or desktop applications that can access work data, or block them entirely. Windows Information Protection helps control who can access data to prevent accidental information leakage. Active Directory helps ease management but is not required to use Information Protection, according to Microsoft. Virtualizing security defenses Credential Guard, available only for Windows 10 Enterprise and Education, can isolate “secrets” using virtualization-based security (VBS) and restrict access to privileged system software.
It helps block pass-the-hash attacks, although security researchers have recently found ways to bypass the protections.

Even so, having Credential Guard is still better than not having it at all.
It runs only on x64 systems and requires UEFI 2.3.1 or greater.
Virtualization extensions such as Intel VT-x, AMD-V, and SLAT must be enabled, as well as IOMMU such as Intel VT-d, AMD-Vi, and BIOS Lockdown.

TPM 2.0 is recommended in order to enable Device Health Attestation for Credential Guard, but if TPM is not available, software-based protections can be used instead. Another Windows 10 Enterprise and Education feature is Virtual Secure Mode, which is a Hyper-V container that protects domain credentials saved on Windows. Other security goodies Windows 10 supports mobile device management across all editions, but needs to be integrated with a separate MDM platform, such as Microsoft Intune or a third-party platform such as VMware’s AirWatch.
If MDM is on the list, the best scenario would be to avoid Windows 10 Home, as not all capabilities are available in that edition. MDM and SCCM platforms can also use the Windows Device Health Attestation Service, available across all editions, to manage conditional access scenarios. Group Policy is a powerful tool for Windows administrators, but it is available with only Pro, Enterprise, and Education editions.

Domain join and Azure Active Directory Domain join, which enable single sign-on for cloud-hosted applications, are also powerful administrator tools available for Pro, Enterprise, and Education editions.

Azure Active Directory Domain join requires a separate Azure Active Directory. Though not strictly a security feature, Assigned Access lets administrators lock down the interface on Windows 10 devices so that users are limited to specific tasks.

Available only with an Enterprise E3 subscription (or Education), Assigned Access can restrict access to services; block access to Shut Down, Restart, Sleep, and Hibernate commands; and prevent changes to the Start menu, the taskbar, or the Start screen. Organizations that have deployed DirectAccess infrastructure for remote access will need Windows 10 Enterprise or Education to connect. Picking what you need While Windows 10 Home may be the most limited of the desktop editions when it comes to security, that doesn’t mean users have to shell out for Enterprise to get any of the new features. Regardless of edition, Windows 10 is Microsoft’s most secure operating system to date, and a constant release of security patches, feature updates, and version upgrades will keep it that way.

Everyone’s security needs are different. Make sure to buy the edition and establish the configuration that gives you the optimal security you are looking for.