
Tag: Fuzzing

More than 4.8 billion automated tests show that certain industries, such as industrial control systems and the Internet of Things, remain fertile ground for vulnerability researchers.
Microsoft's enterprise customers can soon use its Azure-hosted fuzzing service to ferret out bugs in their own Windows and Linux applications.
Chocolate Factory bearing gifts to improve open source projects
Google wants more open source projects to include fuzzing during their development cycle, and to help things along, it's announced a rewards program that goes as high as US$20,000.…
Oracle is patching a long list of different vulnerabilities in its software portfolio.

Oracle is out with its first Critical Patch Update (CPU) for 2017, and it's a big one. This time, it's the Oracle E-Business Suite that is getting the most patches.
I...
Segfault, segfault black out
Smart meters are 'dangerously insecure', according to researcher Netanel Rubin, with insecure encryption and known-pwned protocols - and, worryingly, attacks reach all the way to making them explode. The utility hacker and founder of Vaultra derided global governmental efforts to install the meters as reckless, saying the "dangerous" devices are a risk to all connected smart home devices. Smart meters can communicate with devices inside homes, such as air conditioners, fridges, and the like.

A hacker who could break into the meters could control those devices, potentially unlocking doors. They could also manipulate the meter's code to cause fires, something that's trivially easy at mains a.c. voltages. "An attacker who controls the meter also controls its software, allowing them to literally blow the meter up. If an attacker could hack your meter, he could have access to all the devices connected to the meter. The smart meter network in its current state is completely exposed to attackers."

Rubin acknowledged some complaints about fear-mongering from the security audience at the Chaos Communication Congress in Hamburg, Germany, but says his description of exploding boxes is meant to deliver the message of smart meter insecurity to the wider public. He fended off comments that triggering explosions through hacking was not possible, saying it had been acknowledged in the US [The Register could not at the time of writing independently verify that claim].

The physical security of the meter is strong, but hackers still have plenty of wireless vectors to attack. Rubin lists smart meters' use of Zigbee or GSM protocols, often left insecure and unencrypted, or at best secured with the GPRS A5 algorithm, which has been known to be broken for more than five years. Attackers can also broadcast over the top of meters' comms protocols, forcing all units in an area to connect to malicious base stations using hardcoded credentials. That gives hackers direct access to the smart meter firmware for deep exploitation. "All meters of the same utility use the same APN credentials," Rubin told the applauding audience. "One key to rule them all."

Worse, Rubin found smart meters hand over the critical network key to home devices as they are added, without first checking whether the gadgets should be added.

This opens an avenue for attackers to masquerade as home devices, steal the key, and impersonate the meter. "You can communicate with and control any device in the house from way across the street, open up locks, cause a short in the electricity system, whatever we want to do. A simple segmentation fault is enough to crash the meter, causing a blackout at the premises," Rubin says. He says the attack vectors would have been eliminated if proper encryption were used and the network were segmented instead of treated as a "giant LAN".

Similar attacks were realised in Puerto Rico in 2009, when hackers caused some US$400 million in billing fraud. Rubin says meters' ability to communicate with internal smart home devices is only the first step, as utilities expand in the future to form city-wide mesh networks with smart city appliances. "The entirety of the electricity grid, your home, your city, and everything in between will be in control of your energy utility, and that's a bit scary."

About 40 percent of the smart meter market is held by Itron, Landis+Gyr, and Elster. The European Union wants to replace more than 70 percent of electricity meters with smart versions at a cost of €45 billion.

Some 100 million smart meters are already installed globally. Rubin expects a sharp increase in hacking attempts, and has called on utilities to "step up". He released an open source fuzzing tool to help security researchers test their own meters. "Reclaim your home, before someone else does." ®
A new Google program aimed at continuously fuzzing open source software has already detected over 150 bugs. The program, OSS-Fuzz, currently in beta mode, is designed to help unearth programming errors in open source software via fuzz testing.

Fuzz testing, or fuzzing, is the practice of feeding randomly generated inputs into programs to discover coding and security flaws. Chrome security engineers Oliver Chang and Abhishek Arya, Google software engineers Kostya Serebryany and Mike Aizatsky, and Meredith Whittaker, who leads Google's Open Source Research group, announced the project last Thursday.

Our fuzzing-as-a-service for opensource software is now in beta! https://t.co/wYPxBNeEgO — Kostya Serebryany (@kayseesee) December 1, 2016

The program was developed with help from the Core Infrastructure Initiative, a Linux Foundation collaborative that counts Cisco, Facebook, and Microsoft among its members. "Open source software is the backbone of the many apps, sites, services, and networked things that make up 'the internet.' It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it," the engineers wrote Thursday. "OSS-Fuzz's goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution." The project is built on fuzzing engines such as libFuzzer, sanitizers such as AddressSanitizer, and ClusterFuzz, a distributed fuzzing infrastructure that catalogs fuzz statistics. The program has so far identified bugs in the pcre2, libchewing and FFmpeg projects.
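To make the idea concrete, here is a minimal sketch of the simplest kind of fuzzer: one that hurls random bytes at a target program and watches for crashes. It is illustrative only; the target path is a hypothetical placeholder, and OSS-Fuzz itself relies on far more sophisticated, coverage-guided engines such as libFuzzer rather than blind random input.

```c
/*
 * Minimal "dumb" fuzzer sketch: feed random bytes to a target program and
 * watch for crashes (fatal signals). The target path and iteration count
 * are illustrative assumptions, not part of OSS-Fuzz.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define MAX_INPUT 4096

int main(int argc, char **argv)
{
    /* Hypothetical target binary that takes an input file as argv[1]. */
    const char *target = (argc > 1) ? argv[1] : "./parser_under_test";
    srand((unsigned)time(NULL));

    for (int i = 0; i < 1000; i++) {
        /* Generate a random input file. */
        size_t len = (size_t)(rand() % MAX_INPUT) + 1;
        FILE *f = fopen("fuzz_input.bin", "wb");
        if (!f) { perror("fopen"); return 1; }
        for (size_t j = 0; j < len; j++)
            fputc(rand() & 0xff, f);
        fclose(f);

        /* Run the target on the input and check how it exited. */
        pid_t pid = fork();
        if (pid == 0) {
            execl(target, target, "fuzz_input.bin", (char *)NULL);
            _exit(127); /* exec failed */
        }
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("iteration %d: crash, signal %d (input saved as fuzz_input.bin)\n",
                   i, WTERMSIG(status));
    }
    return 0;
}
```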

Even more impressive is that Google claims OSS-Fuzz is cranking out four trillion test cases a week. Engineers say FreeType, an open source library that's used to display text, is a perfect example of what OSS-Fuzz can achieve. One of the FreeType library's developers, Werner Lemburg, adopted OSS-Fuzz early on.
In October, after a heap buffer overflow was identified in the library, OSS-Fuzz notified the maintainer, who went on to fix the bug.
It was confirmed fixed by OSS-Fuzz the same day, Google says. While the program is in its infancy, developers and open source proponents have lauded Google for it. Alex Gaynor, who writes a lot of open source code and previously served as a director of the Python Software Foundation and the Django Software Foundation, tested OSS-Fuzz late last week and called the experience "extremely good." "I definitely think it's something that every OSS project should take a look at," Gaynor told Threatpost Monday. Gaynor, who penned a blog entry about his experience on Saturday, used OSS-Fuzz to test libyaml, a C library that parses and emits YAML 1.1 and is the basis for both Python's and Ruby's YAML libraries.

The program ran 17 billion test cases against the library, roughly 30 days of CPU time, in less than a calendar day, according to Gaynor. While developers have to build fuzzers specific to their project, OSS-Fuzz does most of the work, Gaynor says. He added that in his short experience, sending pull requests for projects is easy enough.
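To give a sense of what such a project-specific fuzzer involves, below is a hedged sketch of a libFuzzer-style target for libyaml. It uses libyaml's public parser API, but it is an illustrative example rather than the actual fuzz target Gaynor or OSS-Fuzz ships, and the build command is only approximate.

```c
/*
 * Sketch of a libFuzzer-style fuzz target for libyaml, similar in spirit to
 * an OSS-Fuzz integration. Illustrative only; not the project's real target.
 *
 * Build (roughly): clang -g -fsanitize=address,fuzzer target.c -lyaml
 */
#include <stddef.h>
#include <stdint.h>
#include <yaml.h>

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    yaml_parser_t parser;
    yaml_event_t event;

    if (!yaml_parser_initialize(&parser))
        return 0;

    yaml_parser_set_input_string(&parser, data, size);

    /* Walk the event stream until the input ends or parsing fails. */
    int done = 0;
    while (!done) {
        if (!yaml_parser_parse(&parser, &event))
            break;                      /* malformed input: an error, not a crash */
        done = (event.type == YAML_STREAM_END_EVENT);
        yaml_event_delete(&event);
    }

    yaml_parser_delete(&parser);
    return 0;
}
```

OSS-Fuzz then takes a target like this, builds it with the sanitizers, and runs it continuously on the ClusterFuzz infrastructure described above.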
The program will file any bugs it discovers privately and leave a comment when it thinks a crash has been fixed. It makes the bug public seven days after it's been fixed and even "handles automatically rebuilding when the upstream source changes," Gaynor points out. "It was almost no work to write a fuzzing function and get it running, and OSS-Fuzz handles tons of the details around making fuzzing at scale practical; this makes the experience far more pleasant than if I'd jerry-rigged something together myself," wrote Gaynor, who currently works for the United States Digital Service, part of the Executive Office of the President.

Gaynor said Monday the only area where OSS-Fuzz might need a slight tweak is its user interface. "In terms of areas for improvement, the biggest one would be the UI used for looking at stats and crash reports, which needs a bit more polish for non-internal audiences," Gaynor said. Open source software experts like Jim Zemlin, executive director at the Linux Foundation, also took time to laud the project on Twitter last week.

This is making the internet more secure with thanks to @mer__edith and Google team. Devs should check this out: https://t.co/kaPfbytvw6 — jzemlin (@jzemlin) December 1, 2016

Now that the program has been announced, Google claims its main focus is fostering OSS-Fuzz usage. The company is encouraging open source projects, as long as they have a large user base, to join OSS-Fuzz.

By doing so, developers would have to subject themselves to Google's 90-day disclosure deadline, but they'd also be joining a rich open source community, the engineers say. While often viewed as a nuisance, low-level bugs like buffer overflows and use-after-free vulnerabilities can have a pivotal impact on software security, especially in the open source realm. Vulnerabilities in libStageFright, a C++ software library that is part of the Android Open Source Project, led to a series of bugs last year that went on to enable remote code execution and privilege escalation. The venture is one of the latest efforts from the CII to bolster open source software security.

Earlier this year the consortium unveiled a badge program designed to help developers self-certify their projects.

The program, which counted GitLab, Node.js, and OpenSSL as early adopters, encourages open source projects to follow best practices and self-disclose how they handle security. The CII was also behind the Open Crypto Audit Project, which was responsible for last year's TrueCrypt audit and also helped fund the current large-scale audit of OpenSSL.
Building software securely requires a verifiable method of reproduction, and that is why the Linux Foundation's Core Infrastructure Initiative is supporting the Reproducible Builds project. In an effort to help open-source software developers build more secure software, the Linux Foundation is doubling down on its efforts to help the Reproducible Builds project.

Among the most basic and often most difficult aspects of software development is making sure that the software end-users get is the same software that developers actually built. "Reproducible builds are a set of software development practices that create a verifiable path from human readable source code to the binary code used by computers," the Reproducible Builds project explains. Without the promise of a verified reproducible build, security can potentially be compromised, as the binary code might not be the same as the original developer intended.
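In its simplest form, verifying reproducibility means building the same source twice, ideally on independent machines, and checking that the resulting artifacts are bit-for-bit identical. The sketch below shows only that final comparison step, with made-up file names; real tooling such as diffoscope goes much further and explains what differs and why.

```c
/*
 * Minimal sketch of the verification step behind reproducible builds:
 * compare two independently built artifacts byte for byte and report the
 * first mismatching offset. File names are illustrative assumptions.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s build-a.bin build-b.bin\n", argv[0]);
        return 2;
    }
    FILE *a = fopen(argv[1], "rb");
    FILE *b = fopen(argv[2], "rb");
    if (!a || !b) { perror("fopen"); return 2; }

    long offset = 0;
    int ca, cb;
    do {
        ca = fgetc(a);
        cb = fgetc(b);
        if (ca != cb) {
            /* Different byte, or one file ended before the other. */
            printf("NOT reproducible: first difference at byte %ld\n", offset);
            return 1;
        }
        offset++;
    } while (ca != EOF && cb != EOF);

    printf("reproducible: artifacts are bit-for-bit identical (%ld bytes)\n",
           offset - 1);
    return 0;
}
```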

The Reproducible Builds project benefits from the support of the Linux Foundation's Core Infrastructure Initiative (CII). CII was founded in the aftermath of the open-source Heartbleed SSL vulnerability in 2014, as a way to help provide resources and direction in a bid to secure open-source code. CII has raised over $5.5 million from financial backers including Adobe, Bloomberg, Hewlett-Packard, VMware, Rackspace, NetApp, Microsoft, Intel, IBM, Google, Fujitsu, Facebook, Dell, Amazon and Cisco. In June 2015, CII announced its initial support for the Reproducible Builds effort, providing the project with $200,000 in funding. Now CII is committing to renewing its support for Reproducible Builds with an additional $225,000. "The first chunk of funding helped deliver reproducibility-related debugging tools such as diffoscope," Nicko van Someren, CTO of the Linux Foundation, told eWEEK. The diffoscope open-source tool provides developers with an in-depth comparison of files, archives and directories.
Van Someren added that CII's initial support of Reproducible Builds also enabled the project to spend significant time investing in a reliable and flexible framework for testing the reproducibility of software packages within Debian and other operating systems. "Using this framework, combined with efforts from the rest of the Reproducible Builds project, has resulted in 91 percent of the packages within the testing Debian distribution becoming reproducible," Van Someren said. With the renewed support for Reproducible Builds, Van Someren said that in addition to enabling the project to double down on its previous efforts, his expectation is that new tools will also be built.

Additionally, the plan is to rework the documentation for upstream open-source projects, as well as to experiment with and ultimately deliver tools to end-users. "For example, users may wish to forbid installation of packages on their system that are not reproducible," he said. While the ability to have reproducible builds is an important component of ensuring secure software, the delivery mechanism by which software gets to users also needs to be secure.
In January 2016, the popular open-source Drupal content management system (CMS), used by WhiteHouse.gov among other notable deployments, came under criticism for not providing a secure update mechanism. The challenge in that case, as with many others, is that the software wasn't always being delivered over an HTTPS-encrypted link. CII is also working to help improve security in the area of secure software delivery in several ways. One of those ways is through the Let's Encrypt project, an effort to provide free SSL/TLS certificates for websites.
Van Someren noted that Let's Encrypt helps developers in setting up HTTPS servers and getting certificates as simply as possible. Let's Encrypt is operated as a Linux Foundation Collaborative Project. CII also has a Best Practices Badge for Open Source Projects. "One of the requirements for achieving the badge is for the project to be in control of a HTTPS server from which the project can be downloaded," Van Someren said. "So far 45 projects have been awarded a badge, with more than 200 in process of obtaining one." In addition to renewing funding for the Reproducible Builds project, CII also recently renewed funding for The Fuzzing Project, the False-Positive-Free Testing Project, OpenSSL, OpenSSH, Network Time and the Best Practices Badging projects.

The CII is also likely to expand its supported projects roster, with more than 20 projects currently in the process of submitting applications. "The CII Steering Committee will of course assess these once they have been submitted and will award funds as budget allows," Van Someren said. "Our total grant budget is $1.6 million annually."

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.

Follow him on Twitter @TechJournalist.
The OpenStack Security project adds new tools and processes to help secure OpenStack technologies.

The project technical leader offers insight on the program.

Security is such a critical element of the open-source OpenStack cloud platform that there is an entire project—the OpenStack Security project—dedicated to the task of helping protect OpenStack technologies. In a well-attended session at the OpenStack Summit in Barcelona, Spain, on Oct. 27, Rob Clark, the project technical leader of the OpenStack Security project, detailed the group's most recent efforts. The OpenStack Security project focuses on building security tools that help identify potential vulnerabilities in the OpenStack project code and providing guidance and secure governance. "In many ways, we act as a group of consultants to the wider OpenStack organization," Clark said.

The OpenStack Security project engages in threat analysis to look at potential areas of risk. A threat analysis exercise should first start by identifying any points of entry into a system, as well as assets, Clark said, adding that the threat analysis should also be able to document where data goes and what formats are used. "A huge number of vulnerabilities come from changing from one format to another and not really thinking about what you're doing," Clark said. "It could be as easy as reading data from a disk into memory." The threat analysis exercise is also about identifying common deployment approaches as well as best practices for a given OpenStack technology.

Additionally, all assets used in a project are documented in an asset catalog that helps inform an asset-oriented threat analysis.

For each item, the project will look at confidentiality, integrity and availability for a given asset, Clark said. The threat analysis process the OpenStack Security project uses employs a clear diagram methodology and makes basic assertions about the confidentiality, integrity and availability for a given asset, he said. "The idea is to understand what's at risk, quantify it and describe the worst-case impact."

New Tools

The new Syntribos tool is an API fuzzing framework built specifically for OpenStack. With API fuzzing, unexpected inputs are generated and injected into an application to see what will happen, Clark said.

Among the issues that fuzzing can find are cross-site scripting (XSS), buffer overflow and string validation risks. So far, Syntribos has found more than 500 errors across the OpenStack Cinder (storage), Glance (image), Keystone (identity) and Neutron (networking) projects. "At this point, OpenStack has already been through many of the good quality commercial static and dynamic analysis tools, and none of them found the issues that Syntribos did," Clark said. The OpenStack Security project will continue to draft guidance and build tools to help OpenStack projects, he said.
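Syntribos itself is a Python framework, but the core idea behind API fuzzing can be sketched in a few dozen lines: substitute anomalous payloads into a request parameter and flag responses that look like server-side failures. The endpoint URL and payload list below are illustrative assumptions, not part of Syntribos.

```c
/*
 * Sketch of the core idea behind API fuzzers such as Syntribos: inject
 * anomalous payloads into a request parameter and flag suspicious responses.
 * The endpoint and payloads are illustrative placeholders.
 *
 * Build (roughly): cc api_fuzz.c -lcurl
 */
#include <stdio.h>
#include <curl/curl.h>

/* Discard response bodies; only the status code matters here. */
static size_t sink(void *ptr, size_t size, size_t nmemb, void *userdata)
{
    (void)ptr; (void)userdata;
    return size * nmemb;
}

int main(void)
{
    const char *payloads[] = {
        "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",  /* length abuse (shortened here) */
        "<script>alert(1)</script>",          /* XSS probe */
        "' OR '1'='1",                        /* SQL injection probe */
        "%00%ff%fe",                          /* odd encodings */
        "-1",                                 /* boundary value */
    };

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    for (size_t i = 0; i < sizeof(payloads) / sizeof(payloads[0]); i++) {
        char *escaped = curl_easy_escape(curl, payloads[i], 0);
        char url[1024];
        /* Hypothetical endpoint; a real run would target an OpenStack API. */
        snprintf(url, sizeof(url),
                 "http://localhost:8080/v2/servers?name=%s", escaped);
        curl_free(escaped);

        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, sink);

        long status = 0;
        if (curl_easy_perform(curl) == CURLE_OK)
            curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);

        if (status >= 500 || status == 0)
            printf("payload %zu triggered a suspicious response (HTTP %ld)\n",
                   i, status);
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

A real run would iterate over request templates for each API and a far larger payload corpus, which is what a framework like Syntribos automates.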

There is also a new idea for a security incubator that could take shape in 2017 to bring in small security projects that are applicable to the OpenStack cloud and to provide them a home and guidance. "I hope the community as a whole finds the things we do useful," Clark said.

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.

Follow him on Twitter @TechJournalist.
Researchers for the new 'Hacker's Playbook' analyzed 4 million breach methods from an attacker's point of view to gauge the real risks today to enterprises. No organization is immune to the risk of a data breach.
Security leaders who want to ensure the strongest protection must analyze their security posture from a hacker's point of view to understand risk, validate security controls, and prioritize resources. That is the premise behind the SafeBreach Hacker's Playbook, which was released in its second edition today.

The first edition of the playbook, published in January, details enterprise security threats and risky habits from the point of view of an attacker. Researchers at SafeBreach "play the hacker" by deploying simulators that assume the role of a "virtual hacker" across endpoints, networks, and the cloud.

The new Hacker's Playbook incorporates a total of 3,985,011 breach methods, all executed between January and September 2016. SafeBreach's research team had two main objectives in compiling this playbook, says CTO and co-founder Itzik Kotler. The first is to take highly publicized breaches such as those at Sony and Target, and to create artificial models so customers can better understand these attacks and how they happen. Researchers also figure out how to attack; they analyze different methods to create simulation events to give users a better idea of the threats they face. "They're [the researchers] pushing the envelope in creating new ideas and experimenting with existing ones," says Kotler. "It's all to show customers what kind of malicious ideas exist."

Successful breaches are sorted into three pillars: infiltration, how hackers enter a machine; lateral movement, how they jump from one server to the other, for instance; and exfiltration, how they steal valuable data out of the victim organization. The top infiltration methods used by attackers, according to the report, involved hiding executable files inside non-executable files.
Specifically, executable files embedded within Windows script files, macros, and Visual Basic had great success. Old exploit kits, many of which have been around for a year or longer, are still considered effective means of delivering malware.

These kits challenge endpoint security and secure web gateway products; top picks include Sweet Orange, Neutrino, and Rig Exploit Kit. Another finding, consistent with the last Hacker's Playbook, is the danger of misconfigured security products. Researchers passed malware between internal and external simulators and found many malware sandboxing solutions were not properly set up to safeguard all protocols, encrypted traffic, ports, and file formats. In exploring lateral movement, researchers were successful in infiltrating networks via brute-force methods and discovered issues with proxies, which can segment internal networks when deployed correctly.
If proxies are misconfigured, hackers can breach new network paths both internally and externally through proxy fuzzing. It's easy for hackers to pull data outside victim organizations because most have fairly open outbound communication channels to the Internet.

Top successful protocols include HTTP, IRC, SIP, and syslog, and IT support tools like externally bound syslog can also be used to steal data. "One thing we are continuously seeing from the previous Hacker's Playbook is the exfiltration of information, the ability of the hacker to steal something you care about, is still at 100%," says Kotler.

This is a proven problem that will continue to pose a business risk in the future. The means of mitigating these risks varies depending on the business, as different companies and security programs have different needs, he continues. Knowing where the business is lacking, investing in the right technology, and driving employee awareness are key. "It all begins with an understanding of what is the problem and where are the gaps, and making sure they are validated correctly," he says. There is a positive finding from this collection of research, however, Kotler notes. "Companies today are being more and more proactive when it comes to understanding their security posture, and the notion of running simulations ahead of the curve so they can mitigate [risk]," he says.

Kelly is an associate editor for InformationWeek.
She most recently reported on financial tech for Insurance & Technology, before which she was a staff writer for InformationWeek and InformationWeek Education. When she's not catching up on the latest in tech, Kelly enjoys ...
Microsoft is previewing a cloud-based bug detector, dubbed Project Springfield, that it calls one of its most sophisticated tools for finding potential security vulnerabilities. Project Springfield uses "whitebox fuzzing," which uncovered one-third of the "million dollar" security bugs during the development of Windows 7. Microsoft has been using a component of the project called SAGE since the mid-2000s to test products prior to release, including fuzzing both Windows and Office applications.  For this project, SAGE is bundled with other tools for fuzz testing, featuring a dashboard and other interfaces that enable use by people without an extensive security background.

The tests are run using Microsoft's Azure cloud. With fuzz testing, the system throws random inputs at software to find instances in which unforeseen actions cause software to crash.

This testing, according to Microsoft researcher David Molnar, is ideal for software regularly incorporating inputs like documents, images, videos, or other information that may not be trustworthy.

The aim is to seek out the flaws that bad actors could use to launch malicious attacks or simply crash a system. Whitebox fuzz testing uses artificial intelligence to ask a series of "what if" questions and make decisions about what might cause a crash and signal a security concern. The code name Springfield was previously used at Microsoft for the now-defunct Popfly web page and mashup creation service.

There's no relation between the two projects, a Microsoft representative said. Microsoft is extending preview invitations for Project Springfield to customers, with an initial group to evaluate it for free.
At Microsoft's Ignite conference in Atlanta yesterday, the company announced the availability of a new cloud-based service for developers that will allow them to test application binaries for security flaws before they're deployed.

Called Project Springfield, the service uses "whitebox fuzzing" (also known as "smart fuzzing") to test for common software bugs used by attackers to exploit systems. In standard fuzzing tests, randomized inputs are thrown at software in an effort to find something that breaks the code—a buffer overflow that would let malicious code be planted in the system's memory or an unhandled exception that causes the software to crash or processes to hang.

But the problem with this random approach is that it's hard to get deep into the logic of code.

Another approach, called static code analysis (or "whiteboxing"), looks instead at the source code and walks through it without executing it, using ranges of inputs to determine whether security flaws may be present. Whitebox fuzzing combines aspects of both approaches. Using sample inputs as a starting point, a whitebox fuzz tester dynamically generates new sets of inputs to exercise the code, walking deeper into its logic. Using machine learning techniques, the system repeatedly runs the code through fuzzing sessions, adapting its approach based on what it discovers with each pass.
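SAGE's actual machinery, symbolic execution of traces plus constraint solving, is too heavy to reproduce here, but the feedback loop that makes "smart" fuzzing smarter than purely random testing can be sketched. In the hedged example below, new inputs are derived by simple byte mutation and the coverage oracle is a stand-in that merely rewards byte diversity; both are illustrative placeholders, not Microsoft's implementation.

```c
/*
 * Sketch of the feedback loop shared by "smart" fuzzers: start from seed
 * inputs, derive new inputs, and keep only those that reach new behaviour.
 * SAGE derives new inputs via symbolic execution and constraint solving;
 * here they are derived by byte mutation, purely for illustration.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAX_LEN    256
#define MAX_CORPUS 64

struct testcase { unsigned char data[MAX_LEN]; size_t len; };

/* Stand-in coverage oracle: a real tool would execute an instrumented
 * target and report code coverage. This placeholder rewards inputs with
 * more distinct byte values so the loop below is runnable on its own. */
static unsigned run_and_measure_coverage(const unsigned char *input, size_t len)
{
    int seen[256] = {0};
    unsigned distinct = 0;
    for (size_t i = 0; i < len; i++)
        if (!seen[input[i]]++)
            distinct++;
    return distinct;
}

int main(void)
{
    struct testcase corpus[MAX_CORPUS] = { { "seed: {}", 8 } }; /* sample input */
    size_t corpus_size = 1;
    unsigned best_coverage = run_and_measure_coverage(corpus[0].data, corpus[0].len);

    srand((unsigned)time(NULL));

    for (int iter = 0; iter < 10000 && corpus_size < MAX_CORPUS; iter++) {
        /* Pick a known-interesting input and mutate a few random bytes. */
        struct testcase next = corpus[rand() % corpus_size];
        for (int m = 0; m < 4; m++)
            next.data[rand() % next.len] = (unsigned char)(rand() & 0xff);

        /* Keep the mutant only if it exercises new behaviour. */
        unsigned cov = run_and_measure_coverage(next.data, next.len);
        if (cov > best_coverage) {
            best_coverage = cov;
            corpus[corpus_size++] = next;
            printf("iter %d: coverage %u, corpus size %zu\n",
                   iter, cov, corpus_size);
        }
    }
    return 0;
}
```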

The approach is similar to some of the techniques developed by competitors in the Defense Advanced Research Projects Agency's Cyber Grand Challenge to allow for automated bug detection and patching. Microsoft Research scientist Patrice Godefroid led the development of Microsoft's internal whitebox fuzzing tool, called SAGE, which is the basis for the new service. In its earliest form, SAGE was used in testing of Windows 7 prior to its release and accounted for a third of the bugs discovered by fuzzing tools overall, despite being used after all other testing was complete.
SAGE is now the basis of Project Springfield, which Godefroid leads. Project Springfield puts the fuzz-testing system in the Azure cloud behind a Web dashboard. Users upload code for testing along with a "test driver"—an interface for pushing inputs to the code—and sample inputs. Currently, the service works with Windows binaries, but Linux testing will be available soon. Like other Microsoft Research projects before it (such as Project Oxford), Project Springfield is in limited preview, and Microsoft is screening interested customers for access.