
Tag: Hardening

The built-in exploit mitigations are getting stronger and easier to configure.
Mobile app developers' casual use of back-end technology like Elasticsearch without security hardening puts unsuspecting enterprises at grave risk of exposure.
Android O, due in the third quarter, figures to elevate the security of the mobile OS with new features focused on improved third-party patching, a new permission model and hardening of existing features.
Authorities uneasy in wake of alleged Russian interference in US presidential race

French authorities are warning political parties about the increased threat of cyber attacks as the country prepares to elect a new president in May. Last year's US presidential election was marred by cyber attacks and leaks. US intel agencies blame Russia for the hack[1] and subsequent leak of sensitive emails and other information from the Democratic National Committee (DNC).

French authorities fear the possibility of similar interference. The National Agency for the Security of Information Systems (L'Agence nationale de la sécurité des systèmes d'information or ANSSI) director Guillaume Poupard told FRANCE 24: "We're clearly not up against people who are throwing punches just to see what happens.

There's a real strategy that includes cyber [attacks], interference and leaked information...

These are people whom we're obviously following closely.

Even if we can't be sure that they're the same, they're attackers who regularly come knocking on our ministers' doors."

Political parties and campaign staff are particularly vulnerable to hackers, with tactics likely to include spear phishing and website attacks.

"Fundamentally, political parties, like small and medium-size businesses... are not equipped to deal with the situation alone."

A spokesman for independent candidate and former economy minister Emmanuel Macron's political movement, En Marche, admitted that an attack on its website in October took "at least one full night of work to repair".

ANSSI is teaching political parties how to protect themselves, as well as referring them to a list of pre-approved companies for additional advice.

Ilia Kolochenko, chief exec of web security firm High-Tech Bridge, commented: "Cybersecurity awareness, additional security assessment and hardening of the critical national infrastructure is definitely a good move.

"We should all be aware of the risks associated with modern technologies, such as e-voting and mobile voting, especially the risks related to such an important process as a presidential election." ®

Bootnote

[1] New research on APT28 – the Russian state-sponsored hacking crew blamed for the DNC hack as well as previous assaults on NATO, TV5Monde, and the World Anti-Doping Agency, among others – was published by FireEye last week.

APT28 (AKA Fancy Bear) is suspected by other security firms to be a unit of the Russian military intelligence agency, the GRU.
Kaymera: building on the shoulders of a giant, claims expert

The arrival of a security-hardened version of Google's supposed "iPhone killer" Pixel phone from Kaymera has received a sceptical reception from one expert.

The Kaymera Secured Pixel is outfitted with Kaymera's own hardened version of the Android operating system and its security architecture. This architecture is made up of four layers: encryption, protection from malicious downloads, a prevention layer that monitors for unauthorised attempts to access OS functions (such as the microphone, camera or GPS), and a detection and enforcement layer that monitors, detects and blocks malicious code or misbehaving apps.
Independent mobile security experts have questioned whether the technology offers much by way of benefits over native Pixel smartphones. Professor David Rogers, chief executive of Copper Horse and a lecturer in mobile systems security at the University of Oxford, questioned what exactly is new. "Many of the proposed functions are already built into Pixel (examples below), so what are the extra benefits Kaymera offers?"

For example, Pixel has full device encryption and file-based encryption, backed by TrustZone. Plus, as it's Google's own phone, Pixel is first in line for patching, an important security defence in itself. "Pixel has many other functions and capabilities built over many years including Position Independent Execution (PIE), Address Space Layout Randomisation (ASLR), SELinux and so on," Rogers added.

Kaymera responded that its kit offers benefits on this front by enforcing security controls built into Pixel but not actually enforced. Oded Zehavi, Kaymera's chief operating officer, told El Reg: "In places where Google has good enough security, we leverage the existing functionality (in many of the examples given here, the functionality is not actually enforced.
In these cases we enforce and prevent disabling of the security functionality by negligent users or malicious hackers)."

Third parties building on Google's security, including Blackphone, do not have a good track record in this space in terms of getting their own code secure and properly tested, including updates. Rogers is unconvinced that Kaymera will do any better at hardening Pixel than others have done at hardening Android.

Zehavi responded that Kaymera devices have been tested to the most rigorous standards by governments around the world. "As a philosophy we always have more than one security layer against any attack vector, hence we don't trust any single security measure, including Google's security measures.

"For example, our prevention layer feeds with fake resources any payload that may overcome the OS hardening and get loaded onto the device," Zehavi said.

Rogers remains unconvinced about the security proposition of the Kaymera Secured Pixel, especially in the absence of NCSC certification or US security certification.
It's more like "some kind of Chimera rather than a Kaymera," he cuttingly concluded.

"If Kaymera really want to protect against comms interception, low-level malware attacks and so on, they would have to build some kind of firewall and introspection capability," Rogers said. "To do that they would need access inside the Radio Interface Layer and also to processes and app data."

"Google's security architecture does not allow this unless you 'roll your own' in a big way, creating your own device and modifying the AOSP [Android Open Source Project] code to deliver a bespoke device," he added.

Creating a bespoke device risks undoing Google's security controls, Rogers warned. "Application sandboxing and isolation are there for a reason, including enforcing the Principle of Least Privilege," he said.

The Israeli manufacturer said it had been careful to add extra security without breaking Google's existing controls. Zehavi explained: "Even though we embed our code deep into the AOSP code in layers that are beyond what regular applications can reach, we do not break any existing Google security measures including the sandboxing etc.
Instead, we add extra measures across the board that, as mentioned, leverage the existing mechanisms but bring the device to a totally different level of security which cannot be achieved via the application layer alone."

Rogers responded: "They admit to using AOSP which I guess means they self-sign the build of the device themselves.

That then comes down to a question of trust in who is digitally signing the product (that gives that signer access to absolutely everything, the radio path, the private data, the lot)."

The Kaymera Secured Pixel is aimed at business and government customers prepared to pay extra to avoid the security weaknesses associated with the off-the-shelf Android operating system.

The device retains the original Google device's purpose-built hardware, features and ergonomics. Users can, for example, still use the fingerprint scanner. Kaymera devices are centrally managed via the company's management dashboard, enabling easy enforcement of security policies on the smartphone. Kaymera's secured Pixel phone is available immediately.

Kaymera was started in late 2013 by the founders of NSO, the surveillance tech provider whose iPhone spyware was used to target the phone of UAE human rights activist Ahmed Mansoor in August 2016. The spyware prompted Apple to rush out emergency software patches to plug vulnerabilities in its iOS mobile operating system. The Israeli firm is open about its roots.
If NSO is a 'poacher', selling surveillance tools to governments, then Kaymera is the gamekeeper, its pitch runs. "I'm not sure I can buy in to the poacher turned gamekeeper thing here and I would rather trust Google in this case," Rogers concluded. ®
Odds are, software (or virtual) containers are in use right now somewhere within your organization, probably by isolated developers or development teams to rapidly create new applications.

They might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know if they are running in their companies. In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers.

Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system.

They are flexible, scalable, and easy to use, and they can pack a lot more applications into a given physical infrastructure than is possible with VMs.

And because they share the host operating system, rather than relying on a guest OS, containers can be spun up instantly (in seconds, versus the minutes VMs require).

A June 2016 report from the Cloud Foundry Foundation surveyed 711 companies about their use of containers. More than half had either deployed or were in the process of evaluating containers. Of those, 16 percent had already mainstreamed the use of containers, and 64 percent expected to do so within the next year.
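To put the "spun up instantly" claim in concrete terms, here is a minimal sketch using the Docker SDK for Python (the pip package docker; it assumes a local Docker daemon and an already-pulled image, both of which are assumptions of this example):

```python
import time

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

start = time.perf_counter()
# Assumes alpine:3.18 has already been pulled; a first run would add pull time.
output = client.containers.run("alpine:3.18", ["echo", "hello from a container"], remove=True)
elapsed = time.perf_counter() - start

print(output.decode().strip())            # -> hello from a container
print(f"start-to-exit: {elapsed:.2f}s")   # typically around a second, not minutes
```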
If security teams want to seize the opportunity (borrowing a devops term) to “shift security to the left,” they need to identify and involve themselves in container initiatives now. Developers and devops teams have embraced containers because they align with the devops philosophy of agile, continuous application delivery. However, as is the case with any new technology, containers also introduce new and unique security challenges.

These include the following:

Inflow of vulnerable source code: Because containers are open source, images created by an organization's developers are often updated, then stored and used as necessary. This creates an endless stream of uncontrolled code that may harbor vulnerabilities or unexpected behaviors.

Large attack surface: In a given environment, there are many more containers than there are applications, VMs, databases, or any other object that requires protecting. The large numbers of containers running on multiple machines, whether on premises or in the cloud, make it difficult to track what's going on or to detect anomalies through the noise.

Lack of visibility: Containers are run by a container engine, such as Docker or Rkt, that interfaces with the Linux kernel. This creates another layer of abstraction that can mask the activity of specific containers or what specific users are doing within them.

Devops speed: The pace of change is such that containers typically have a quarter of the lifespan of VMs, on average. Containers can be executed in an instant, run for a few minutes, then stopped and removed. This ephemerality makes it possible to launch attacks and disappear quickly, with no need to install anything.

"Noisy neighbor" containers: A container might behave in a way that effectively creates a DoS attack on other containers. For example, opening sockets repeatedly will quickly bring the entire host machine to a crawl and eventually cause it to freeze up.

Container breakout to the host: Containers might run as a root user, making it possible to use privilege escalation to break the "containment" and access the host's operating system.

"East-west" network attacks: A compromised container can be leveraged to launch attacks across the network, especially if its outbound network connections and ability to run with raw sockets were not properly restricted.

The best practices for securing container environments are not only about hardening containers or the servers they run on after the fact.

They’re focused on securing the entire environment.
Security must be considered from the moment container images are pulled from a registry to when the containers are spun down from a runtime or production environment.

Given that containers are often deployed at devops speed as part of a CI/CD framework, the more you can automate, the better. With that in mind, I present this list of best practices. Many of them are not unique to containers, but if they are "baked" into the devops process now, they will have a much greater impact on the security posture of containerized applications than if they are "bolted" on after the fact.

Implement a comprehensive vulnerability management program. Vulnerability management goes way beyond scanning images when they are first downloaded from a registry.

Containers can easily pass through the development cycle with access controls or other policies that are too loose, resulting in corruption that causes the application to break down, or leading to compromise at runtime.

A rigorous vulnerability management program is a proactive initiative with multiple checks from "cradle to grave," triggered automatically and used as gates between the dev, test, staging, and production environments.

Ensure that only approved images are used in your environment. An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the inflow of container images into your development environment.

This means using only approved private registries and approved images and versions.
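Automating that inflow control is straightforward to sketch. The following hedged Python example gates an image on an approved-registry allow list, a signed pull (anticipating the integrity checks discussed below), and a vulnerability scan. The registry name is hypothetical, and Trivy is just one example scanner whose flags may differ by version:

```python
import os
import subprocess

APPROVED_REGISTRIES = ("registry.example.com/",)  # hypothetical private registry

def admit(image_ref: str) -> bool:
    """Gate an image before it moves between dev, test, staging, and production."""
    # 1. Allow only approved private registries and pinned versions (no :latest).
    if not image_ref.startswith(APPROVED_REGISTRIES) or image_ref.endswith(":latest"):
        return False

    # 2. Pull via the CLI with Docker Content Trust enabled,
    #    so the image's signature is verified.
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    if subprocess.run(["docker", "pull", image_ref], env=env).returncode != 0:
        return False

    # 3. Fail the gate if the scanner reports high or critical vulnerabilities.
    scan = subprocess.run(
        ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", image_ref]
    )
    return scan.returncode == 0

if __name__ == "__main__":
    print(admit("registry.example.com/team/app:1.4.2"))
```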

For example, you might sanction a single Linux distro as a base image, preferably one that is lean (Alpine or CoreOS rather than Ubuntu) to minimize the surface for potential attacks.

Implement proactive integrity checks throughout the lifecycle. Part of managing security throughout the container lifecycle is to ensure the integrity of the container images in the registry and to enforce controls as they are altered or deployed into production.
Image signing or fingerprinting can be used to provide a chain of custody that allows you to verify the integrity of the containers.

Enforce least privileges in runtime. This is a basic security best practice that applies equally in the world of containers. When a vulnerability is exploited, it generally provides the attacker with access and privileges equal to those of the application or process that has been compromised.

Ensuring that containers operate with the least privileges and access required to get the job done reduces your exposure to risk.

Whitelist files and executables that the container is allowed to access or run. It's a lot easier to manage a whitelist when it is implemented from the get-go.

A whitelist provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly, and it allows you to maintain a more stable and reliable environment. Limiting containers so that they can access or run only pre-approved or whitelisted files and executables is a powerful method to mitigate risk.
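As a hedged sketch of these two runtime practices, least privilege and whitelisting, here is how such settings might look with the Docker SDK for Python. The image name, IDs, and limits are illustrative assumptions, and the seccomp profile (the syscall-level whitelist) is left as a placeholder you would supply yourself:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

container = client.containers.run(
    "registry.example.com/team/app:1.4.2",  # hypothetical approved image
    detach=True,
    user="10001:10001",          # run as an unprivileged UID, never root
    cap_drop=["ALL"],            # drop every Linux capability the app doesn't need
    read_only=True,              # immutable root filesystem...
    tmpfs={"/tmp": "size=64m"},  # ...with a small tmpfs for scratch space
    pids_limit=128,              # blunts a fork-bombing "noisy neighbor"
    security_opt=[
        "no-new-privileges",     # block escalation via setuid binaries
        # A seccomp profile is the syscall whitelist; supply your own:
        # "seccomp=<contents of profile.json>",
    ],
)
print(container.short_id)
```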
Such whitelisting not only reduces the attack surface, but it can also be employed to provide a baseline for anomalies and to prevent the "noisy neighbor" and container-breakout scenarios described above.

Enforce network segmentation on running containers. Maintain network segmentation (or "nano-segmentation") to segregate clusters or zones of containers by application or workload.
In addition to being a highly effective best practice, network segmentation is a must-have for container-based applications that are subject to PCI DSS.
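A minimal sketch of such segmentation with the Docker SDK for Python follows; the network and image names are assumptions for the example:

```python
import docker

client = docker.from_env()

# Two zones: "frontend" can reach the outside world; "backend" is marked
# internal, so its containers get no route out of the host at all.
frontend = client.networks.create("frontend", driver="bridge")
backend = client.networks.create("backend", driver="bridge", internal=True)

web = client.containers.run("registry.example.com/team/web:2.0",  # hypothetical images
                            detach=True, network="frontend")
db = client.containers.run("registry.example.com/team/db:5.7",
                           detach=True, network="backend")

# Only the web tier is attached to both zones, so it alone can reach the DB;
# a compromised frontend container elsewhere cannot move "east-west" to it.
backend.connect(web)
```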
Segmentation also serves as a safeguard against "east-west" attacks.

Actively monitor container activity and user access. As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity.

Log all administrative user access to containers for auditing. While strong user access controls can restrict privileges for the majority of people who interact with containers, administrators are in a class by themselves. Logging administrative access to your container ecosystem, container registry, and container images is a good security practice and a common-sense control.
It will provide the forensic evidence needed in the case of a breach, as well as a clear audit trail if needed to demonstrate compliance.

Much of the notion of "baking security into IT processes" relates to automating preventive processes from the outset.

Getting aggressive about container security now can allow for containerized applications to be inherently more secure than their predecessors. However, given that containers will be deployed ephemerally and in large numbers, active detection and response -- essential to any security program -- will be critical for containerized environments.

Container runtime environments will need to be monitored at all times for anomalies, suspected breaches, and compliance purposes.
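One hedged way to approach both the runtime monitoring and the administrative audit trail described earlier is to watch the container engine's own event stream. A sketch with the Docker SDK for Python (exact event fields can vary by Docker version, so treat the field names as assumptions):

```python
import json

import docker

client = docker.from_env()

# Stream daemon events and keep an audit trail of administrative access:
# every `docker exec` surfaces as an exec_create / exec_start event.
for event in client.events(decode=True):
    action = event.get("Action", "")
    if event.get("Type") == "container" and action.startswith(("exec_create", "exec_start")):
        print(json.dumps({
            "time": event.get("time"),
            "action": action,
            "container": event.get("Actor", {}).get("Attributes", {}).get("name"),
        }))
```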

As we discover new container-specific vulnerabilities (or new-old ones such as Dirty COW), and as we make the inevitable mistakes (like the configuration error in Vine's Docker registry that allowed a security researcher to access Vine's source code), best practices are sure to evolve. The good news, as far as container adoption goes, is that it's still early enough to automate strong security controls into container environments.

The not-so-good news is that security teams need to know about container initiatives early enough to make that happen, and more often than not they don't.

To realize the potential security improvements that can be achieved in the transition to container-based application development, that needs to change ... soon.

Educating yourself about containers and the security implications of using them is a good start. New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth.

The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers.
InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Send all inquiries to newtechforum@infoworld.com.
You must be prepared for foreseeable attacks as well as the ones that sneak up on you.

Organizations deal with two types of cyberthreats: hurricanes and earthquakes. Hurricanes are those attacks you can see coming; earthquakes, you can't. Both are inevitable, and you need to plan and take action accordingly. This starts with an understanding of what threat intelligence is and how to make it relevant and actionable. Threat intelligence can help you transition from constantly reacting to being proactive. It allows you to prepare for the hurricanes and respond to the earthquakes with an efficient, integrated approach.

Eliminate Noise

Mention threat intelligence and most organizations think about multiple data feeds to which they subscribe — commercial sources, open source, and additional feeds from security vendors — each in a different format and most without any context to allow for prioritization. This global threat data gives some insight into activities happening outside of your enterprise — not only attacks themselves, but how attackers are operating and infiltrating networks. The challenge is that most organizations suffer from data overload. Without the tools and insights to automatically sift through mountains of disparate global data and aggregate it for analysts and action, this threat data becomes noise: you have alerts around attacks that aren't contextualized, relevant, or a priority. To make more effective use of this data, it must be aggregated in one manageable location and translated into a uniform format so that you can automatically get rid of the noise and focus on what's important.

Focus on Threats

With global threat data organized, you can focus on the hurricanes and earthquakes that threaten your organization. Hurricanes are the threats you know about, can prepare for, protect against, and anticipate based on past trends. For example, based on research, say that we know a file is malware. This intelligence should be operationalized — turned into a policy, a rule, or a signature and sent to the appropriate sensor — so that it can prevent bad actors from stealing valuable data, creating a disruption, or causing damage. As security operations become more mature, you can start to get alerts on these known threats in addition to automatically blocking them, so you can learn more about the adversary. This allows you to focus on the attacks that really matter.

Earthquakes are unknown threats, or threats that you may not have adequate countermeasures against, that have bypassed existing defenses. Once they're inside the network, your job is to detect, respond, and recover. This hinges on the ability to turn global threat data into threat intelligence by enriching that data with internal threat and event data and allowing analysts to collaborate for better decision making. Threat intelligence helps you better scope the campaign once the threat is detected, learn more about the adversary, and understand affected systems and how best to remediate. By correlating events and associated indicators from inside your environment (e.g., SIEM alerts or case management records) with external data on indicators, adversaries, and their methods, you gain the context to understand the who, what, when, where, why, and how of an attack. Going a step further, applying context to your business processes and assets helps you assess relevance. Is anything the organization cares about at risk? If the answer is "no," then what you suspected to be a threat is low priority.
If the answer is "yes," then it's a threat. Either way, you have the intelligence you need to quickly take action.

Make Intelligence Actionable

Intelligence has three attributes that help define "actionable."

Accuracy: Is the intelligence reliable and detailed?

Relevance: Does the intelligence apply to your business or industry?

Timeliness: Is the intelligence being received with enough time to do something?

An old industry joke is that you can have only two of the three, so you need to determine what's most important to your business. If you need intelligence as fast as possible to deploy to your sensors, then accuracy may suffer and you might expect some false positives. If the intelligence is accurate and timely, then you may not have been able to conduct thorough analysis to determine if the intelligence is relevant to your business. This could result in expending resources on something that doesn't present a lot of risk.

Ultimately, the goal is to make threat intelligence actionable. But actionable is defined by the user. The security operations center typically looks for IP addresses, domain names, and other indicators of compromise — anything that will help to detect and contain a threat and prevent it in the future. For the network team, it's about hardening defenses with information on vulnerabilities, signatures, and rules to update firewalls, and patch and vulnerability management systems. The incident response team needs intelligence about the adversary and the campaigns involved so they can investigate and remediate. And the executive team and board need intelligence about threats in business terms — the financial and operational impact — in order to increase revenue and protect shareholders and the company as a whole. Analysts must work together and across the organization to provide the right intelligence in the right format and with the right frequency so that it can be used by multiple teams.

Operationalizing threat intelligence takes time and a plan. Many organizations are already moving from a reactive mode to being more proactive. But to make time to look out at the horizon and see and prepare for hurricanes while also dealing with earthquakes, organizations need to move to an anticipatory model with contextual intelligence, relevance, and visibility into trends in the threat landscape.
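As one illustration of the "aggregate it in one manageable location and translate it into a uniform format" step described above, here is a minimal Python sketch. The two vendor feed formats are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str       # e.g., an IP address, domain, or file hash
    kind: str        # "ip" | "domain" | "hash" ...
    source: str      # which feed it came from
    confidence: int  # normalized to a 0-100 scale

def from_vendor_a(row: dict) -> Indicator:
    # Hypothetical vendor A ships CSV rows like {"ioc": ..., "type": "ipv4", "score": "0.9"}
    kind = "ip" if row["type"].startswith("ip") else row["type"]
    return Indicator(row["ioc"], kind, "vendor_a", int(float(row["score"]) * 100))

def from_vendor_b(entry: dict) -> Indicator:
    # Hypothetical vendor B ships JSON like {"indicator": ..., "itype": "domain", "conf": 75}
    return Indicator(entry["indicator"], entry["itype"], "vendor_b", entry["conf"])

feeds = [
    from_vendor_a({"ioc": "203.0.113.7", "type": "ipv4", "score": "0.9"}),
    from_vendor_b({"indicator": "evil.example", "itype": "domain", "conf": 75}),
]

# One manageable location, one format: now dedupe, prioritize, and route to sensors.
high_priority = [i for i in feeds if i.confidence >= 80]
```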
Finally, FINALLY, someone is turning off Telnet and FTP

Printer security is so awful HP Inc is willing to shut off shiny features and throw its own dedicated bodies at the perennial problem. The tech giant is offering the professional security services under its new and far-harder-than-before "Secure Managed Print Services" offering unveiled today. Security types will also provide ongoing risk assessments and audit passing for the horridly hackable hardware, and handle firmware updates and password resets.

The HP printers are shipped in a hardened state with shiny but dangerous features and ports closed by default, in a move that reduces the attack surface available to external hackers. The obvious hacker-bait Telnet and FTP facilities inexplicably included in printers are on the hardening chopping block, as are other unspecified geriatric features. More interfaces will be decommissioned in the future as HP successfully wrangles popular software providers to move to more secure networking options. Thankfully, remote capabilities remain to allow external HP experts to log in and monitor the security health of device fleets.

The tech company is continuing its hardening approach, decommissioning old cipher suites and protocols, and upping administration and encryption settings for new and old HP printers.

"Networked printers can no longer be overlooked in the wake of weakening firewalls and the growing sophistication and volume of cyberattacks," HP South Pacific printer boss Ben Vivoda says. ®
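Bootnote

If you want to check whether a printer on your own network still exposes these legacy services, a quick and entirely unofficial Python sketch will do it (the address is a documentation placeholder):

```python
import socket

PRINTER = "192.0.2.10"  # placeholder address; substitute your printer's IP
LEGACY_PORTS = {21: "FTP", 23: "Telnet", 9100: "raw print (JetDirect)"}

for port, name in LEGACY_PORTS.items():
    try:
        with socket.create_connection((PRINTER, port), timeout=2):
            print(f"{name} (tcp/{port}) is OPEN - consider disabling it")
    except OSError:
        print(f"{name} (tcp/{port}) appears closed or filtered")
```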
Microsoft says it hardened its ransomware defenses in Windows 10 Anniversary Update in the face of skyrocketing infection rates and a doubling in the number of ransomware variants released into the wild over the past 12 months.

In a whitepaper (PDF) released last week, Microsoft explained its latest anti-ransomware solutions bundled this past August with the release of Windows 10 Anniversary Update. One of those improvements is an updated Microsoft Edge browser with advanced sandboxing technology specific to the exploit-magnet Adobe Flash Player. Other anti-ransomware hardening includes applying a machine-learning infrastructure and cloud-based approach for identifying, classifying and protecting against specific ransomware attacks in seconds rather than hours, according to Microsoft.

"The ways attackers are executing attacks are becoming more complex, and the results of the attack are becoming increasingly costly to its victims," Microsoft wrote in its report.
In fact, it’s fair to say, 2016 has been a devastating year for many individuals and companies stung by ransomware. Between April 2015 and March 2016 the number of users hit by ransomware rose 17.7 percent worldwide compared to the prior year, according to Kaspersky Lab.
Incidents of encryption-based ransomware that locks up data on a PC have risen five-fold over the past year, jumping from 6.6 percent in 2014/2015 to 31 percent the following year. Microsoft says it has bolstered Windows 10 to reflect those changing tactics of criminals.

The company, in a push to get customers to upgrade, says its Windows 10 Anniversary Edition users are 58 percent less likely to encounter ransomware than when running Windows 7.

Windows 7 is still the dominant version of Microsoft's OS in use today, with 48 percent market share compared to Windows 10's 22 percent, according to the most recent data available from Net Market Share. Windows 8 is the third most popular Windows OS in use with 8.4 percent market share, followed closely by Windows XP with 8.2 percent.

Much of Microsoft's anti-ransomware hardening centers on its Edge browser defenses. Microsoft said that between January and July, six of the top 10 ransomware threats used email and browser exploits, or browser plug-in-related exploits, with the remaining four using browser exploits. To that end, in Windows 10 Anniversary Update, Adobe's Flash Player is isolated when running in the Edge browser and has its own dedicated application container.
In addition to container management, Microsoft added new kernel protections to Windows 10 Anniversary Update that limit the ways in which system calls can be used by Microsoft Edge. "If a malware author attempts using a vulnerable system call to escape the browser's sandbox and download and install ransomware in a way that does not fit within these new restrictions, Microsoft Edge will block the system call, preventing the attack," Microsoft said.

Part of Microsoft's anti-ransomware security also includes a combination of human and machine learning to protect against malware. On one hand, Microsoft bolsters its Edge browser with its SmartScreen Filter (introduced with Internet Explorer 8), which roughly competes with Google's Safe Browsing initiative in that both use human- and machine-generated URL blacklists to block users from visiting unsafe sites.

Over the past six months, Microsoft says, it has also been leveraging its machine-learning technology and cloud-based automatic sample submission features in Windows Defender to help block ransomware from inboxes. Using both can block previously unidentified malware, Microsoft says. "Definition updates can take hours to prepare and deliver; our cloud service delivers this protection in seconds," Microsoft said.

As part of Windows Anniversary Edition, the company introduced Windows Defender Advanced Threat Protection (ATP), a new service that gives remote security staff members a shared dashboard to view security events and alerts in their network and mitigate remote threats.

Despite ransomware victories, such as CrySis being neutralized via the release of master decryption keys, the threat of ransomware persists, with most strains targeting Windows users.
In August, ransomware called Fantom was discovered masquerading as a fake critical Windows update. Last summer, shortly after Windows 10 was released, attackers began launching spam and phishing email campaigns around the operating system.
Victims received messages claiming users could upgrade to Windows 10 for free.

Those who downloaded the malicious .zip archive were ultimately hit with CTB-Locker ransomware and had their files encrypted.
October is Cybersecurity Awareness Month and, in that spirit, I'd like to shed some light on a cybersecurity topic that is both increasingly important and frequently misunderstood. In his proclamation declaring October National Cybersecurity Awareness Month, President Obama said, "Keeping cyberspace secure is a matter of national security, and in order to ensure we can reap the benefits and utility of technology while minimizing the dangers and threats it presents, we must continue to make cybersecurity a top priority." I agree that cybersecurity is key to taking advantage of the incredible innovations technology brings to our lives.
I also feel strongly that software is at the core of most of these innovations. Yet the systemic risk to our economy and national security brought on by vulnerabilities in software code does not get the attention it deserves. The fact is, Verizon's Data Breach Investigations Report shows that web application attacks are now the most frequent pattern in confirmed breaches.

But why isn't every organization talking about application security? Part of the reason is that there are a lot of misconceptions, so I'd like to set the record straight and debunk a few of the more common application security "fallacies."

Fallacy: Implementing an application security program is expensive.

Reality: Not so much anymore, especially when you consider the downstream cost savings. Cloud-based application security solutions changed the cost game. You no longer need to purchase or maintain expensive equipment or hire specialized staff.

And the cost of a breach can be staggering. Juniper Research recently predicted that the cost of data breaches will increase to $2.1 trillion globally by 2019.

The risk of an expensive breach, not to mention the long-term fallout that often follows, far outweighs the cost of implementing application security.

Fallacy: Firewalls, antivirus, and network security cover applications.

Reality: These technologies may protect you in some ways, but they do not protect against attacks targeting your applications. One of the reasons that cyberattackers have turned their attention to web-facing applications is that most enterprises are proficient at hardening traditional perimeters with next-generation firewalls, IDS/IPS systems, and endpoint security solutions, making applications a more attractive target. For instance, firewalls were designed to handle network events, such as finding and blocking botnets and remote-access exploits, but they don't get as granular as the application level.
Some network security solutions do address certain application-level events -- but they require significant effort to configure and monitor, leading to security inefficiencies. Ultimately, it's like trying to eat spaghetti with a spoon. You can ... but it's not the way to do it.

Fallacy: One single technology can secure all applications.

Reality: There is no solution to rule them all.
In my work at Veracode, I have found that there are significant differences in the types of vulnerabilities that are commonly discovered by looking at applications while they are running with dynamic testing compared to analyzing the raw code with static tests.
Static testing can help identify vulnerabilities inherent in the code while dynamic testing can provide a valuable outside perspective.
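To make the distinction concrete, consider a toy Python example. A static analyzer flags the dangerous pattern in the raw code, while a dynamic scan would only find the same flaw by probing the running application with hostile inputs. This is a hedged illustration, not a statement about any particular tool:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Static analysis flags this line: untrusted input concatenated into SQL.
    # Dynamic testing would find it by submitting input like: ' OR '1'='1
    return conn.execute("SELECT * FROM users WHERE name = '" + username + "'").fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the classic fix a static-analysis finding points to.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```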

By combining both techniques across the life of the application, organizations can find and address more kinds of vulnerabilities and drive down application risk. In other words, using only one kind of application security testing during the software development lifecycle means missing many vulnerabilities.

Fallacy: Covering only business-critical applications is enough for success.

Reality: Some of the recent major breaches stemmed from applications considered "non-critical." For instance, JPMorgan was recently breached after a third-party website for its annual charity road race was compromised, exposing employee credentials that were leveraged to log into a misconfigured JPMorgan server.

The site was hardly a business-critical application and was not even under the direct control of JPMorgan, but hackers found a vulnerability in the third-party website and used it to their advantage. Cyberattackers look for the path of least resistance into an organization, and that path is often through less-critical and third-party applications.

Fallacy: Developers won't change their agile processes to incorporate application security.

Reality: The data shows they do, and increasingly, they won't need to. Application security is in the midst of a major transformation: it is becoming less driven by security professionals and more by frontline developers. With the rise of DevOps and continuous deployment/release models, the old ways of inserting security into software development after the fact are no longer viable.
In today’s development environment, developers can’t be held back by waiting for the security team’s review.

As such, security professionals must adapt to the new developer processes, not the other way around.

Application security solutions are already increasingly being designed to fit the way developers work and to integrate seamlessly and automatically into developers’ workflows.

Data from our customers, and third-party research from TechTarget, show that DevOps teams are coming to see security as part of their responsibility. Ultimately, cybersecurity awareness should include software.

The reality is that the world is increasingly powered by software, and overlooking application security is a dangerous oversight. Make sure you know how application security really works so you can make the right decisions to keep your business safe. This article is published as part of the IDG Contributor Network.
Audio: Aussie hacker shows even NSA hacks haven't schooled some telcos

Ruxcon They've been warned for years, but scores of telcos are still making bone-headed configuration mistakes in their GPRS Global Roaming Exchange (GRX) networks, leaving mail and FTP servers vulnerable.

The international phone routing system is used for passing and billing calls between providers, using encryption to funnel data over specific protocols. It is the same network leaker Edward Snowden revealed in 2013 was the NSA's attack vector to breach Belgian telco Belgacom.

Aussie HP Enterprise Consulting Services managing principal Stephen Kho detailed in 2014 how anyone can access reams of leaky GRX data without hacking national telcos, with simple "light weight" scans. A year later, Kho is still finding data via some 40,000 live GRX hosts that responded to pings, although the number of exposed services has largely fallen. He shared his results and explained how GRX data can be obtained, including detailing the workings of the network and protocols, at the Ruxcon hacking conference in Melbourne, Australia.

Presentation recording: Listen or download On her majesty's secret service - GRX and a spy agency.

Kho listed the banner server scan results showing user services including mail servers and Cisco routers, and showed that many were unpatched and exposed to old, dangerous exploits including remote code execution and denial of service.

"So there's some email servers here, there's a root exploit on that, 10-year-old remote code execution on that, buffer overflow on this sendmail ... that's not good," Kho told the giggling assembled hackers. "We looked at some FTP servers, a whole bunch here ... again remote code execution on that, denial of service on that, overflow on that from 2001."

"Clearly people are putting things on the GRX network that are running all services and filtering, not hardening," he says. "If you think your old exploits aren't gunna work anymore, well, you're still good." ®
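Bootnote

The kind of "light weight" banner scan Kho describes amounts to little more than connecting and reading what a service announces about itself. A hedged Python sketch (the host is a documentation placeholder, not a live GRX node):

```python
import socket

def grab_banner(host: str, port: int = 25, timeout: float = 3.0) -> str:
    """Connect and read whatever the service announces about itself."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(256).decode(errors="replace").strip()

# An SMTP or FTP banner often reveals the exact server version, which is
# enough to look up decade-old RCE and DoS advisories against it.
print(grab_banner("192.0.2.25"))
```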
Nation-state attackers probably pwn you anyhow

This one needs the words "Don't Panic" in large friendly letters on the cover: privacy researchers have worked out that Tor's use of the domain name system (DNS) can be exploited to identify users.

However, they say, right now an attacker with resources to drop Tor sniffers at "Internet scale" can already de-anonymise users. Rather, they hope to have Tor and relay operators start hardening their use of DNS against future attacks. So: read if you're interested in the interaction between Tor and the DNS, but not if you need the sensation of smelling salts after a faint.

The basis of Tor is that your ISP can see you're talking to a Tor node, but nothing else, because your content is encrypted; while a Tor website is responding to your requests, but doesn't know your IP address. What Benjamin Greschbach (KTH Royal Institute of Technology) and his collaborators have done is add the DNS to the attack vectors. While the user's traffic is encrypted when it enters the network, what travels from the exit node to a resolver is a standard – unencrypted – DNS request.

Described at Freedom to Tinker here and written up in full in this Arxiv pre-print, the attack is dubbed DefecTor. Google's DNS has a special place in the research, the paper states, because 40 per cent of DNS bandwidth from Tor exit nodes, and one-third of all Tor DNS requests, land on The Chocolate Factory's resolvers.

That makes Google uniquely placed to help snoop on users, if it were so minded. There's a second problem, and one The Register is sure has other researchers slapping their foreheads and saying “why didn't I think of that?”: DNS requests often traverse networks (autonomous systems, ASs) that the user's HTTP traffic never touches.

The requests leak further than the Tor traffic that triggers them.

DefecTor components: a sniffer on the ingress TCP traffic, and another either on the DNS path or in a malicious DNS server.

Like other attacks, DefecTor needs a network-level sniffer at ingress. While ingress traffic is encrypted, existing research demonstrates that packet length and direction provide a fingerprint that can identify the website that originated the traffic. Egress sniffing is also needed: the attacker might capture traffic on the path between an exit relay and a resolver, or may operate a malicious DNS resolver to capture exit traffic.

With the user's encrypted TCP traffic fingerprinted, DNS requests, and time-stamps, DefecTor can mount "perfectly precise" attacks. "Mapping DNS traffic to websites is highly accurate even with simple techniques, and correlating the observed websites with a website fingerprinting attack greatly improves the precision when monitoring relatively unpopular websites," the paper states.

Mitigations suggested in the paper include:

Exit relay operators should handle DNS resolution themselves, or use their ISP's resolver, rather than Google or OpenDNS;

There's a "clipping bug" in Tor (notified to the operators) that expires DNS caches too soon, sending too many requests out to a resolver (and providing more sniffable material to an attacker);

Site operators should create .onion services that don't raise DNS requests; and

Tor needs hardening against website fingerprinting.

The researchers have published code, data, and replication instructions here. ®