
Tag: Fuzzing

Oracle Patches 270 Vulnerabilities in First Patch Update of 2017

Oracle is patching a long list of different vulnerabilities in its software portfolio.

This time, it's the Oracle E-Business Suite that is getting the most patches. Oracle is out with its first Critical Patch Update (CPU) for 2017 and it's a big one.
I...

Hackers could explode horribly insecure smart meters, pwn home IoT

Segfault, segfault, black out

Smart meters are 'dangerously insecure', according to researcher Netanel Rubin: insecure encryption, known-pwned protocols and, worryingly, attacks that reach all the way to making them explode. The utility hacker and founder of Vaultra derided global governmental efforts to install the meters as reckless, saying the "dangerous" devices are a risk to all connected smart home devices. Smart meters can communicate with devices inside homes, such as air conditioners, fridges, and the like.

A hacker who could break into the meters could control those devices, potentially unlocking doors. They could also manipulate the meter's code to cause fires, something that is trivially easy at mains AC voltages. "An attacker who controls the meter also controls its software, allowing them to literally blow the meter up," Rubin said. "If an attacker could hack your meter, he could have access to all the devices connected to the meter. The smart meter network in its current state is completely exposed to attackers."

Rubin acknowledged some complaints of fear-mongering from the security audience at the Chaos Communication Congress in Hamburg, Germany, but said his description of exploding boxes was meant to deliver the message of smart meter insecurity to the wider public. He fended off comments that triggering explosions through hacking was not possible, saying it had been acknowledged in the US [The Register could not at the time of writing independently verify that claim].

The physical security of the meters is strong, but hackers still have plenty of wireless vectors to attack. Rubin cited smart meters' use of the ZigBee and GSM protocols, often left insecure and unencrypted, or at best secured with the GPRS A5 algorithm, which has been known to be broken for more than five years. Attackers can also broadcast over the top of meters' communication protocols, forcing all units in an area to connect to malicious base stations using hardcoded credentials. That access grants hackers direct access to the smart meter firmware for deep exploitation. "All meters of the same utility use the same APN credentials," Rubin told the applauding audience. "One key to rule them all." Worse, Rubin found that smart meters add home devices by handing over the critical network key without first checking whether the gadgets should be added.

This opens an avenue for attackers to masquerade as home devices, steal the key, and impersonate the meter. "You can communicate with and control any device in the house from way across the street, open up locks, cause a short in the electricity system, whatever we want to do," Rubin said. "A simple segmentation fault is enough to crash the meter, causing a blackout at the premises." He says the attack vectors would have been eliminated if proper encryption were used and the network were segmented rather than treated as a "giant LAN".

Similar risks were realised in Puerto Rico in 2009, when hackers caused some US$400 million in billing fraud. Rubin says meters' ability to communicate with internal smart home devices is only the first step, as utilities expand in the future to form city-wide mesh networks with smart city appliances. "The entirety of the electricity grid, your home, your city, and everything in between will be in control of your energy utility, and that's a bit scary." About 40 percent of the smart meter market is held by Itron, Landis+Gyr, and Elster. The European Union wants to replace more than 70 percent of electricity meters with smart versions at a cost of €45 billion.

Some 100 million smart meters are already installed globally. Rubin expects a sharp increase in hacking attempts, and called on utilities to "step up". He released an open source fuzzing tool to help security researchers test their own meters. "Reclaim your home, before someone else does." ®

Google Debuts Continuous Fuzzer for Open Source Software

A new Google program aimed at continuously fuzzing open source software has already detected over 150 bugs. The program, OSS-Fuzz, currently in beta mode, is designed to help unearth programming errors in open source software via fuzz testing.

Fuzz testing, or fuzzing, is a technique in which randomly generated inputs are fed into programs as a means to discover coding and security flaws. Chrome security engineers Oliver Chang and Abhishek Arya, Google software engineers Kostya Serebryany and Mike Aizatsky, and Meredith Whittaker, who leads Google’s Open Source Research group, announced the project last Thursday. Our fuzzing-as-a-service for opensource software is now in beta! https://t.co/wYPxBNeEgO — Kostya Serebryany (@kayseesee) December 1, 2016 The program was developed with help from the Core Infrastructure Initiative, a Linux Foundation collaborative that counts Cisco, Facebook, and Microsoft among its members. “Open source software is the backbone of the many apps, sites, services, and networked things that make up ‘the internet.’ It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it,” the engineers wrote Thursday. “OSS-Fuzz’s goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution.” The project is built on fuzzing engines such as libFuzzer, sanitizers such as AddressSanitizer, and ClusterFuzz, a distributed fuzzing infrastructure that catalogs fuzzing statistics. The program has identified bugs in the pcre2, libchewing and FFmpeg projects so far.
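The core idea can be shown with a minimal random fuzzer (a hedged Python sketch, not OSS-Fuzz code; the toy parser and the fuzz loop here are hypothetical, purely to illustrate the technique):

```python
import random
import string

def parse_record(data: str) -> list:
    """Toy parser with a latent bug: it assumes every line contains a ':' separator."""
    records = []
    for line in data.splitlines():
        key, value = line.split(":", 1)  # raises ValueError on malformed lines
        records.append((key, value))
    return records

def fuzz(target, iterations=1000, seed=0):
    """Feed random strings to `target` and collect the inputs that crash it."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + ":\n"
    crashes = []
    for _ in range(iterations):
        data = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 20)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

crashes = fuzz(parse_record)
```

Real fuzzers such as libFuzzer are coverage-guided rather than purely random, but the principle is the same: generate inputs the developer never anticipated and watch for crashes.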

Even more impressive is that Google claims OSS-Fuzz is cranking out four trillion test cases a week. Engineers say FreeType, an open source library that’s used to display text, is a perfect example of what OSS-Fuzz can achieve. One of the FreeType library’s developers, Werner Lemberg, adopted OSS-Fuzz early on.
In October, after a heap buffer overflow was identified in the library, OSS-Fuzz notified the maintainer, who went on to fix the bug.
It was confirmed fixed by OSS-Fuzz the same day, Google says. While the program is in its infancy, developers and open source proponents have lauded Google for the program. Alex Gaynor, who writes a lot of open source code and previously served as the director of the Python Software Foundation and the Django Software Foundation, tested OSS-Fuzz late last week and called the experience “extremely good.” “I definitely think it’s something that every OSS project should take a look at,” Gaynor told Threatpost Monday. Gaynor, who penned a blog entry around his experience on Saturday, used OSS-Fuzz to test libyaml, a C library YAML 1.1 parser and emitter that’s the basis for both Python and Ruby’s YAML libraries.

The program ran 17 billion test cases against the library (roughly 30 days of CPU time) in less than a calendar day, according to Gaynor. While developers have to build fuzzers specific to their project, OSS-Fuzz does most of the work, Gaynor says. He added that in his short experience, sending pull requests for projects is easy enough. The program will file any bugs it discovers privately and leave a comment when it thinks a crash has been fixed.
It makes the bug public seven days after it’s been fixed and even “handles automatically rebuilding when the upstream source changes,” Gaynor points out. “It was almost no work to write a fuzzing function and get it running, and OSS-Fuzz handles tons of the details around making fuzzing at scale practical; this makes the experience far more pleasant than if I’d jerry-rigged something together myself,” wrote Gaynor, who currently works for the United States Digital Service, part of the Executive Office of the President. Gaynor said Monday the only area where OSS-Fuzz might need a slight tweak is its user interface. “In terms of areas for improvement, the biggest one would be the UI used for looking at stats and crash reports, which needs a bit more polish for non-internal audiences,” Gaynor said. Open source software experts like Jim Zemlin, executive director at the Linux Foundation, also took time to laud the project on Twitter last week. This is making the internet more secure with thanks to @mer__edith and Google team.

Devs should check this out: https://t.co/kaPfbytvw6 — jzemlin (@jzemlin) December 1, 2016 Now that the program has been announced, Google claims its main focus is fostering OSS-Fuzz usage. The company is encouraging open source projects, as long as they have a large user base, to join OSS-Fuzz.

By doing so, developers would have to subject themselves to Google’s 90-day disclosure deadline, but they’d also be joining a rich open source community, the engineers say. While often viewed as a nuisance, low-level bugs like buffer overflows and use-after-free vulnerabilities can have a pivotal impact on software security, especially in the open source realm. Vulnerabilities in libStageFright, a software library coded in C++ and part of the Android Open Source Project, led to a series of bugs last year that enabled remote code execution and privilege escalation. The venture is one of the latest efforts from the CII to bolster open source software security.

Earlier this year the consortium unveiled a badge program designed to help developers self-certify their projects.

The program, which counted GitLab, Node.js, and OpenSSL among its early adopters, encourages open source projects to follow best practices and self-disclose their security posture. The CII was also behind the Open Crypto Audit Project, which was responsible for last year’s TrueCrypt audit and also helped fund the current large-scale audit of OpenSSL.

Reproducible Builds Effort Advances With Linux Foundation Support

Building software securely requires a verifiable method of reproduction, which is why the Linux Foundation's Core Infrastructure Initiative is supporting the Reproducible Builds project. In an effort to help open-source software developers build more secure software, the Linux Foundation is doubling down on its support for the project.

Among the most basic and often most difficult aspects of software development is making sure that the software end users get is the same software that developers actually built. "Reproducible builds are a set of software development practices that create a verifiable path from human readable source code to the binary code used by computers," the Reproducible Builds project explains. Without the promise of a verified reproducible build, security can potentially be compromised, as the binary code might not be the same as the original developer intended.
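The check at the heart of reproducibility is simple to sketch (a hypothetical Python illustration, assuming build artifacts are available as bytes; real tooling such as diffoscope goes much further by explaining exactly where two builds differ):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """SHA-256 of a build artifact; identical digests mean bit-for-bit identical builds."""
    return hashlib.sha256(data).hexdigest()

def is_reproducible(build_a: bytes, build_b: bytes) -> bool:
    """Two independent builds are reproducible iff their artifacts hash identically."""
    return artifact_digest(build_a) == artifact_digest(build_b)

# A timestamp embedded at build time is a classic source of non-reproducibility:
# the same source yields different binaries on every rebuild.
stable = b"\x7fELF...code..."
with_timestamp = b"\x7fELF...code...built-2016-11-02"
```

Anyone can repeat the build and compare digests, which is what turns "trust the developer's machine" into a verifiable property.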

The Reproducible Builds project benefits from the support of the Linux Foundation's Core Infrastructure Initiative (CII). CII was founded in the aftermath of the open-source Heartbleed SSL vulnerability in 2014 as a way to help provide resources and direction in a bid to secure open-source code. CII has raised over $5.5 million from financial backers including Adobe, Bloomberg, Hewlett-Packard, VMware, Rackspace, NetApp, Microsoft, Intel, IBM, Google, Fujitsu, Facebook, Dell, Amazon and Cisco. In June 2015, CII announced its initial support for the Reproducible Builds effort, providing the project with $200,000 in funding. Now CII is committing to renewing its support for Reproducible Builds with an additional $225,000. "The first chunk of funding helped deliver reproducibility-related debugging tools such as diffoscope," Nicko van Someren, CTO of the Linux Foundation, told eWEEK. The diffoscope open-source tool provides developers with an in-depth comparison of files, archives and directories.
Van Someren added that CII's initial support of Reproducible Builds also enabled the project to spend significant time investing in a reliable and flexible framework for testing the reproducibility of software packages within Debian and other operating systems. "Using this framework, combined with efforts from the rest of the Reproducible Builds project, has resulted in 91 percent of the packages within the testing Debian distribution becoming reproducible," Van Someren said. With the renewed support for Reproducible Builds, Van Someren said that in addition to enabling the project to 'double down' on the previous efforts, his expectation is that new tools will also be built.

Additionally, the plan is to rework the documentation for upstream open-source projects, as well as to experiment with and ultimately deliver tools to end users. "For example, users may wish to forbid installation of packages on their system that are not reproducible," he said. While the ability to have reproducible builds is an important component of ensuring secure software, the delivery mechanism by which software gets to users also needs to be secure.
In January 2016, the popular open-source Drupal content management system (CMS), used by WhiteHouse.gov among other notable deployments, came under criticism for not providing a secure update mechanism. The challenge in that case, as with many others, was that the software wasn't always being delivered over an HTTPS-encrypted link. CII is also working to help improve security in the area of secure software delivery in several ways. One of those ways is through the Let's Encrypt project, an effort to provide free SSL/TLS certificates for websites.
Van Someren noted that Let's Encrypt helps developers set up HTTPS servers and get certificates as simply as possible. Let's Encrypt is operated as a Linux Foundation Collaborative Project. CII also has a Best Practices Badge for Open Source Projects. "One of the requirements for achieving the badge is for the project to be in control of an HTTPS server from which the project can be downloaded," Van Someren said. "So far 45 projects have been awarded a badge, with more than 200 in process of obtaining one." In addition to renewing funding for the Reproducible Builds project, CII also recently renewed funding for The Fuzzing Project, the False-Positive-Free Testing Project, OpenSSL, OpenSSH, Network Time and the Best Practices Badging projects.

The CII is also likely to expand its roster of supported projects, with more than 20 projects currently in the process of submitting applications. "The CII Steering Committee will of course assess these once they have been submitted and will award funds as budget allows," Van Someren said. "Our total grant budget is $1.6 million annually." Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.

Follow him on Twitter @TechJournalist.

OpenStack Security Project: Protecting Open-Source Cloud

The OpenStack Security project adds new tools and processes to help secure OpenStack technologies.

The project technical leader offers insight on the program. Security is such a critical element of the open-source OpenStack cloud platform that there is an entire project, the OpenStack Security project, dedicated to the task of helping protect OpenStack technologies. In a well-attended session at the OpenStack Summit in Barcelona, Spain, on Oct. 27, Rob Clark, the project technical leader of the OpenStack Security project, detailed the group's most recent efforts. The OpenStack Security project focuses on building security tools that help identify potential vulnerabilities in OpenStack project code, and on providing guidance and secure governance. "In many ways, we act as a group of consultants to the wider OpenStack organization," Clark said. The OpenStack Security project engages in threat analysis to look at potential areas of risk. A threat analysis exercise should start by identifying any points of entry into a system, as well as assets, Clark said, adding that the threat analysis should also document where data goes and what formats are used. "A huge number of vulnerabilities come from changing from one format to another and not really thinking about what you're doing," Clark said. "It could be as easy as reading data from a disk into memory." The threat analysis exercise is also about identifying common deployment approaches as well as best practices for a given OpenStack technology.

Additionally, all assets used in a project are documented in an asset catalog that helps inform an asset-oriented threat analysis.

For each item, the project will look at confidentiality, integrity and availability for a given asset, Clark said. The threat analysis process the OpenStack Security project uses employs a clear diagram methodology and makes basic assertions about the confidentiality, integrity and availability of a given asset, he said. "The idea is to understand what's at risk, quantify it and describe the worst-case impact."

New Tools

The new Syntribos tool is an API fuzzing framework built specifically for OpenStack. With API fuzzing, unexpected inputs are generated and injected into an application to see what will happen, Clark said.

Among the issues that fuzzing can find are cross-site scripting (XSS), buffer overflow and string validation risks. So far, Syntribos has found more than 500 errors across the OpenStack Cinder (storage), Glance (image), Keystone (identity) and Neutron (networking) projects. "At this point, OpenStack has already been through many of the good quality commercial static and dynamic analysis tools, and none of them found the issues that Syntribos did," Clark said. The OpenStack Security project will continue to draft guidance and build tools to help OpenStack projects, he said.
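The general shape of API fuzzing as described here can be sketched as follows (a hypothetical Python illustration, not Syntribos code; the payload list and `call_api` stand-in are assumptions for the example, where a real harness would issue HTTP requests and inspect status codes):

```python
# Take a known-good request template, substitute attack payloads into each
# field in turn, and flag any response that signals a server-side failure.
PAYLOADS = ["<script>alert(1)</script>", "A" * 10_000, "%00", "'; DROP TABLE users;--"]

def mutate_request(template: dict) -> list:
    """Yield one mutated copy of the request per (field, payload) pair."""
    mutated = []
    for field in template:
        for payload in PAYLOADS:
            request = dict(template)
            request[field] = payload
            mutated.append(request)
    return mutated

def call_api(request: dict) -> int:
    """Stand-in for an HTTP call; returns a status code like a real endpoint would."""
    if len(str(request.get("name", ""))) > 255:
        return 500  # oversized input crashes this toy endpoint
    return 200

def find_failures(template: dict) -> list:
    """Collect mutated requests that provoke a 5xx-style failure."""
    return [r for r in mutate_request(template) if call_api(r) >= 500]

failures = find_failures({"name": "alice", "role": "user"})
```

Each failure points at a field whose validation the service got wrong, which is exactly the class of bug the quote above says commercial analysis tools missed.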

There is also a new idea for a security incubator that could take shape in 2017, bringing in small security projects that are applicable to the OpenStack cloud and providing them with a home and guidance. "I hope the community as a whole finds the things we do useful," Clark said. Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.

Follow him on Twitter @TechJournalist.

Executable Files, Old Exploit Kits Top Most Effective Attack Methods

Researchers for the new 'Hacker's Playbook' analyzed 4 million breach methods from an attacker's point of view to gauge the real risks enterprises face today. No organization is immune to the risk of a data breach.
Security leaders who want to ensure the strongest protection must analyze their security posture from a hacker's point of view to understand risk, validate security controls, and prioritize resources. That is the premise behind the SafeBreach Hacker's Playbook, which was released in its second edition today.

The first edition of the playbook, published in January, details enterprise security threats and risky habits from the point of view of an attacker. Researchers at SafeBreach "play the hacker" by deploying simulators that assume the role of a "virtual hacker" across endpoints, networks, and the cloud.

The new Hacker's Playbook incorporates a total of 3,985,011 breach methods, all executed between January and September 2016. SafeBreach's research team had two main objectives in compiling this playbook, says CTO and co-founder Itzik Kotler. The first is to take highly publicized breaches such as those at Sony and Target, and to create artificial models so customers can better understand these attacks and how they happen. Researchers also figure out how to attack; they analyze different methods to create simulation events to give users a better idea of the threats they face. "They're [the researchers] pushing the envelope in creating new ideas and experimenting with existing ones," says Kotler. "It's all to show customers what kind of malicious ideas exist." Successful breaches are sorted into three pillars: infiltration, how hackers enter a machine; lateral movement, how they jump from one server to the other, for instance; and exfiltration, how they steal valuable data out of the victim organization. The top infiltration methods used by attackers, according to the report, involved hiding executable files inside non-executable files.
Specifically, executable files embedded within Windows script files, macros, and Visual Basic had great success. Old exploit kits, many of which have been around for a year or longer, are still considered effective means of delivering malware.

These kits challenge endpoint security and secure web gateway products; top picks include Sweet Orange, Neutrino, and Rig Exploit Kit. Another finding, consistent with the last Hacker's Playbook, is the danger of misconfigured security products. Researchers passed malware between internal and external simulators and found many malware sandboxing solutions were not properly set up to safeguard all protocols, encrypted traffic, ports, and file formats. In exploring lateral movement, researchers were successful in infiltrating networks via brute-force methods and discovered issues with proxies, which can segment internal networks when deployed correctly.
If proxies are misconfigured, hackers can breach new network paths both internally and externally through proxy fuzzing. It's easy for hackers to pull data outside victim organizations because most have fairly open outbound communication channels to the Internet.

Top successful exfiltration protocols include HTTP, IRC, SIP, and syslog, but IT support tools such as externally bound syslog can also be used to steal data. "One thing we are continuously seeing from the previous Hacker's Playbook is the exfiltration of information, the ability of the hacker to steal something you care about, is still at 100%," says Kotler.

This is a proven problem that will continue to pose a business risk in the future. The means of mitigating these risks varies depending on the business, as different companies and security programs have different needs, he continues. Knowing where the business is lacking, investing in the right technology, and driving employee awareness are key. "It all begins with an understanding of what is the problem and where are the gaps, and making sure they are validated correctly," he says. There is a positive finding from this collection of research, however, Kotler notes. "Companies today are being more and more proactive when it comes to understanding their security posture, and the notion of running simulations ahead of the curve so they can mitigate [risk]," he says. Kelly is an associate editor for InformationWeek.
She most recently reported on financial tech for Insurance & Technology, before which she was a staff writer for InformationWeek and InformationWeek Education.

Microsoft opens up its 'million dollar' bug-finder

Microsoft is previewing a cloud-based bug detector, dubbed Project Springfield, that it calls one of its most sophisticated tools for finding potential security vulnerabilities. Project Springfield uses "whitebox fuzzing," which uncovered one-third of the "million dollar" security bugs during the development of Windows 7. Microsoft has been using a component of the project called SAGE since the mid-2000s to test products prior to release, including fuzzing both Windows and Office applications.  For this project, SAGE is bundled with other tools for fuzz testing, featuring a dashboard and other interfaces that enable use by people without an extensive security background.

The tests are run using Microsoft's Azure cloud. With fuzz testing, the system throws random inputs at software to find instances in which unforeseen actions cause software to crash.

This testing, according to Microsoft researcher David Molnar, is ideal for software regularly incorporating inputs like documents, images, videos, or other information that may not be trustworthy.

The aim is to find the flaws that bad actors could exploit to launch malicious attacks or crash a system. Whitebox fuzz testing uses artificial intelligence to ask a series of "what if" questions and make decisions about what might cause a crash and signal a security concern. The code name Springfield was previously used at Microsoft for the now-defunct Popfly web page and mashup creation service.

There's no relation between the two projects, a Microsoft representative said. Microsoft is extending preview invitations for Project Springfield to customers, with an initial group to evaluate it for free.

Microsoft launches “fuzzing-as-a-service” to help developers find security bugs

At Microsoft's Ignite conference in Atlanta yesterday, the company announced the availability of a new cloud-based service for developers that will allow them to test application binaries for security flaws before they're deployed.

Called Project Springfield, the service uses "whitebox fuzzing" (also known as "smart fuzzing") to test for common software bugs used by attackers to exploit systems. In standard fuzzing tests, randomized inputs are thrown at software in an effort to find something that breaks the code—a buffer overflow that would let malicious code be planted in the system's memory or an unhandled exception that causes the software to crash or processes to hang.

But the problem with this random approach is that it's hard to get deep into the logic of code.

Another approach, called static code analysis (or "whiteboxing"), looks instead at the source code and walks through it without executing it, using ranges of inputs to determine whether security flaws may be present. Whitebox fuzzing combines some of the aspects of each of these approaches. Using sample inputs as a starting point, a whitebox fuzz tester dynamically generates new sets of inputs to exercise the code, walking deeper into processes. Using machine learning techniques, the system repeatedly runs the code through fuzzing sessions, adapting its approach based on what it discovers with each pass.
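A toy coverage-guided loop illustrates how this differs from purely random fuzzing (a hedged Python sketch; real whitebox fuzzers such as SAGE use symbolic execution and constraint solving rather than the random mutation shown here, and the four-letter alphabet is only to keep the demo fast):

```python
import random

def target(data: bytes):
    """Toy target with nested branches; returns the branch IDs it covered."""
    covered = {"entry"}
    if data[:1] == b"F":
        covered.add("F")
        if data[1:2] == b"U":
            covered.add("FU")
            if data[2:3] == b"Z":
                raise RuntimeError("crash deep inside the parser")
    return covered

def coverage_guided_fuzz(seed_input: bytes, rounds=5000, seed=1):
    """Mutate inputs, keeping any that reach new branches, until one crashes."""
    rng = random.Random(seed)
    corpus, seen = [seed_input], set()
    for _ in range(rounds):
        mutant = bytearray(rng.choice(corpus))
        if mutant:
            # Restricted mutation alphabet purely to keep this demo quick.
            mutant[rng.randrange(len(mutant))] = rng.choice(b"FUZA")
        mutant = bytes(mutant)
        try:
            covered = target(mutant)
        except RuntimeError:
            return mutant  # crashing input found
        if not covered <= seen:  # new branch reached: keep this mutant
            seen |= covered
            corpus.append(mutant)
    return None

crash = coverage_guided_fuzz(b"AAAA")
```

Because inputs that open new branches are kept and re-mutated, the search walks progressively deeper into the nested conditions, which a blind random fuzzer would almost never reach.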

The approach is similar to some of the techniques developed by competitors in the Defense Advanced Research Projects Agency's Cyber Grand Challenge to allow for automated bug detection and patching. Microsoft Research scientist Patrice Godefroid led the development of Microsoft's internal whitebox fuzzing tool, called SAGE, which is the basis for the new service. In its earliest form, SAGE was used in testing of Windows 7 prior to its release and accounted for a third of the bugs discovered by fuzzing tools overall, despite being used after all other testing was complete.
SAGE is now the basis of Project Springfield, which Godefroid leads. Project Springfield puts the fuzz-testing system in the Azure cloud behind a Web dashboard. Users upload code for testing along with a "test driver"—an interface for pushing inputs to the code—and sample inputs. Currently, the service works with Windows binaries, but Linux testing will be available soon. Like other Microsoft Research projects before it (such as Project Oxford), Project Springfield is in limited preview, and Microsoft is screening interested customers for access.

It’s open season for bug hunting – on Microsoft’s Azure cloud

Project Springfield offers fuzzing, which isn't nearly as titillating as it sounds

Ignite Microsoft's conviction that "fuzzing in the cloud will revolutionize security testing," voiced in a research paper six years ago, has taken form with the debut of Project Springfield: an Azure-based service for identifying software flaws by automatically subjecting the code to bad input.

Introduced at the Ignite conference in Atlanta, Georgia, on Monday, Project Springfield offers developers the ability to conduct continuous testing of binary files on virtual machines running atop Microsoft Azure, in order to identify and eliminate bugs.

Allison Linn, self-described writer and storyteller for Microsoft, says that Microsoft's research team thinks of Project Springfield as a "million-dollar bug detector" (not to be confused with the Million Dollar Homepage) because some software bugs cost that much to fix if left too long. Your costs may vary. A 2002 study released by the US National Institute of Standards and Technology estimated that software bugs cost the US economy between $22.2 and $59.5 billion annually (more like $79 billion today). Catching bugs before software gets released presumably can bring repair costs down, if that's your goal.

Microsoft insists a third of the "million dollar" security bugs in Windows 7 were found using its "whitebox fuzzing" technology, referred to internally as SAGE (scalable, automated, guided execution). SAGE is one of the components of Project Springfield.

Like other announcements echoing around Silicon Valley these days, artificial intelligence comes into play. Microsoft says its system employs AI to ask questions and make better decisions about conditions that might cause code to crash. Microsoft's whitebox fuzzing algorithm symbolically executes code from a starting input and develops subsequent input data based on constraints from the conditional statements it encounters along the way.
The technology is distinct from blackbox fuzzing, which involves the sending of malformed input data without ensuring all the target paths have been explored. Blackbox fuzzing thus has the potential to miss a critical test condition by chance. Fuzzing lends itself to cloud computing because fuzzing software can run different tests in parallel using large amounts of available infrastructure. But Microsoft researchers Patrice Godefroid and David Molnar, in their 2010 research paper, argue that such computational elasticity matters less than the benefits of shared cloud infrastructure. "Hosting security testing in the cloud simplifies the process of gathering information from each enrolled application, rolling out updates, and driving improvements in future development," they wrote. It also, it is claimed, simplifies billing. ®

What’s in your code? Why you need a software bill of...

Writing secure applications doesn't mean simply checking the code you've written to make sure there are no logic errors or coding mistakes.

Attackers are increasingly targeting vulnerabilities in third-party libraries as part of their attacks, so you h...

Fooling the ‘Smart City’

The concept of a smart city involves bringing together various modern technologies and solutions that can ensure comfortable and convenient provision of services to people, public safety, efficient consumption of resources, etc. However, something that often goes under the radar of enthusiasts championing the smart city concept is the security of smart city components themselves.

The truth is that a smart city’s infrastructure develops faster than security tools do, leaving ample room for the activities of both curious researchers and cybercriminals. Smart Terminals Have Their Weak Points Too Parking payment terminals, bicycle rental spots and mobile device recharge stations are abundant in the parks and streets of modern cities.

At airports and passenger stations, there are self-service ticket machines and information kiosks.
In movie theaters, there are ticket sale terminals.
In clinics and public offices, there are queue management terminals.

Even some paid public toilets now have payment terminals built into them, though these are not yet common.

Ticket terminals in a movie theater

However, the more sophisticated the device, the higher the probability that it has vulnerabilities and/or configuration flaws.

The probability that smart city component devices will one day be targeted by cybercriminals is far from zero. Cybercriminals could potentially exploit these devices for their own purposes, and the likely exploitation scenarios follow from the characteristics of the devices themselves:

Many such devices are installed in public places
They are available 24/7
They have the same configuration across devices of the same type
They have a high user trust level
They process user data, including personal and financial information
They are connected to each other, and may have access to other local area networks
They typically have an Internet connection

Increasingly often, we see news of another electronic road sign getting hacked to display a “Zombies ahead” or similar message, or news about vulnerabilities detected in traffic light management or traffic control systems. However, this is just the tip of the iceberg; smart city infrastructure is not limited to traffic lights and road signs. We decided to analyze some smart city components:

Touch-screen payment kiosks (tickets, parking, etc.)
Infotainment terminals in taxis
Information terminals at airports and railway terminals
Road infrastructure components: speed cameras, traffic routers

Smart City Terminals

From a technical standpoint, nearly all payment and service terminals, irrespective of their purpose, are ordinary PCs equipped with touch screens.

The main difference is that they have a ‘kiosk’ mode – an interactive graphical shell that blocks the user from accessing the regular operating system functions, leaving only a limited set of features that are needed to perform the terminal’s functions.
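Conceptually, a kiosk shell is an allow-list over system functionality: only the handful of actions the terminal needs are reachable, and everything else is refused. A minimal sketch (the action names are invented for illustration, not taken from any real kiosk product):

```python
# Hypothetical sketch of kiosk-mode dispatch: only whitelisted actions
# are reachable; any attempt to reach the underlying OS is refused.
ALLOWED_ACTIONS = {"buy_ticket", "show_schedule", "print_receipt"}

def dispatch(action: str) -> str:
    """Route a user request, refusing anything outside the allow-list."""
    if action not in ALLOWED_ACTIONS:
        return "denied"          # kiosk mode: no shell, no file manager
    return f"running {action}"

print(dispatch("buy_ticket"))    # a normal terminal function
print(dispatch("cmd.exe"))       # an attempt to reach the OS is refused
```

The attacks described below all amount to finding a path around this dispatch layer rather than through it.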

But this is theory.
In practice, as our field research has shown, most terminals do not have reliable protection preventing the user from exiting kiosk mode and gaining access to the operating system’s functions.

[Image: Exiting the kiosk mode]

Techniques for Exiting the Kiosk Mode

There are several types of vulnerabilities that affect a large proportion of terminals.

As a consequence, there are established attack methods that target them. The sequence of operations that can enable an attacker to exit the full-screen application is illustrated below.

[Image: Methodology for analyzing the security of public terminals]

Tap Fuzzing

The tap fuzzing technique involves trying to exit the full-screen application by exploiting incorrect handling of touch input.

A hacker taps screen corners with his fingers and tries to call the context menu by long-pressing various elements of the screen.
If he is able to find such weak points, he tries to call one of the standard OS menus (printing, help, object properties, etc.) and gain access to the on-screen keyboard.
If successful, the hacker gets access to the command line, which enables him to do whatever he wants in the system – explore the terminal’s hard drive in search of valuable data, access the Internet, or install unwanted applications such as malware.

Data Fuzzing

Data fuzzing is a technique that, if successful, also gives an attacker access to “hidden” standard OS elements, but by a different route.

To exit the full-screen application, the hacker tries filling in available data entry fields with various data in order to make the ‘kiosk’ work incorrectly.

This can work, for example, if the full-screen application’s developer did not properly configure the filter that checks user-entered data (string length, use of special symbols, etc.).
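A minimal sketch of both sides of this, with invented field rules: a validator that enforces length and character restrictions, and a naive handler with no such filter, whose unhandled exception is exactly what data fuzzing hunts for.

```python
import re

MAX_LEN = 64
ALLOWED = re.compile(r"^[\w .,-]*$")   # letters, digits, spaces, basic punctuation

def validate_field(text: str) -> bool:
    """The filter the article says developers often misconfigure:
    reject over-long input and special symbols before processing."""
    return len(text) <= MAX_LEN and bool(ALLOWED.match(text))

def naive_amount_handler(text: str) -> int:
    """A handler with no filter: unexpected input raises an unhandled
    exception, which on a real kiosk surfaces an OS error dialog."""
    return int(text)

assert validate_field("Ivan Petrov")            # normal input passes
assert not validate_field("A" * 1000)           # over-long fuzz string rejected
assert not validate_field("'; %n <script>")     # special symbols rejected
```

Feeding `naive_amount_handler` anything non-numeric raises `ValueError`, and it is that kind of uncaught error which hands the attacker a standard OS window.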

The attacker can then enter malformed data, triggering an unhandled exception: the OS responds to the error by displaying a notification window. Once an element of the operating system’s standard interface has been brought up, the attacker can access the control panel, e.g., via the help section.

The control panel is then the starting point for launching the virtual keyboard.

Other Techniques

Yet another technique for exiting the ‘kiosk’ is to search for external links that might let the attacker reach a search engine site and, from there, other sites.

Due to developer oversight, many full-screen applications used in terminals contain links to external resources or social networks such as VKontakte, Facebook or Google+. We found external links in the interfaces of cinema ticket vending machines and bike rental terminals, described below. One more way of exiting the full-screen application is to use standard elements of the operating system’s user interface. In a Windows-based terminal, an attacker is sometimes able to call up the control elements of an available dialog window, which lets him exit the virtual ‘kiosk’.

[Image: Exiting the full-screen application of a cinema ticket vending terminal]

Bike Rental Terminals

Cities in some countries, including Norway, Russia and the United States, are dotted with bicycle rental terminals.
Such terminals have touch-screen displays that people can use to register to rent a bike or to get help information.

[Image: Status bar containing a URL]

We found that the terminal system shown above has a curious feature.

The Maps section was implemented using Google maps, and the Google widget includes a status bar, which contains “Report an Error”, “Privacy Policy” and “Terms of Use” links, among other information.

Tapping any of these links brings up a standard Internet Explorer window, which provides access to the operating system’s user interface. The application includes other links as well: for example, when viewing some locations on the map, you can tap the “More Info” button and open a web page in the browser.

[Image: Internet Explorer opens not only a web page, but also a new opportunity for the attacker]

It turned out that calling up the virtual keyboard is not difficult either.

By tapping on links on help pages, an attacker can access the Accessibility section, which is where the virtual keyboard can be found.

This configuration flaw enables attackers to execute applications that are not needed for the device’s operation. Running cmd.exe demonstrates yet another critical configuration flaw: the operating system’s current session runs with administrator privileges, which means an attacker can easily execute any application.

[Image: The current Windows session is running with administrator privileges]

In addition, an attacker can obtain the NTLM hash of the administrator password.
It is highly probable that the password used on this device will work for other devices of the same type as well. Note that in this case an attacker can obtain not only the NTLM hash – which has to be brute-forced to recover the password – but the administrator password itself, because passwords can be extracted from memory in plain text. An attacker can also dump the memory of the application that collects information on people who wish to rent a bicycle, including their full names, email addresses and phone numbers.
It may well be that the database hosting this information is stored somewhere nearby.
Such a database would have an especially high market value, since it contains verified email addresses and phone numbers.
If the database cannot be obtained, an attacker can install a keylogger that intercepts all data entered by users and sends it to a remote server. Given that these devices work 24/7, they can be pooled together to mine cryptocurrency, or used as a staging point for further attacks, since an infected workstation is online around the clock. Particularly audacious cybercriminals could capture customer payment data by adding a payment card entry form to the main window of the bike rental application.
It is highly probable that deceived users would enter this information alongside their names, phone numbers and email addresses.

Terminals at Government Offices

Terminals at some government offices can also be easily compromised by attackers.

For example, we have found a terminal that prints payment slips based on the data entered by users.

After all fields have been filled in, the user taps the “Create” button, and for several seconds the terminal opens a standard print window with all the print parameters and control tools. Next, the “Print” button is automatically activated.

[Image: A detail of the printing process on one of the terminals]

An attacker has several seconds to tap the Change [printer] button and exit into the help section.

From there, they can open the control panel and launch the on-screen keyboard.

As a result, the attacker gets everything needed to enter information (the keyboard and the mouse pointer) and can use the computer for their own ends, e.g., to launch malware, gather information on printed files, or obtain the device’s administrator password.

Public Devices at Airports

Self-service check-in kiosks, found at every modern airport, have more or less the same security problems as the terminals described above.
It is highly probable that they can be successfully attacked.

An important difference is that some terminals at airports handle much more valuable information than terminals elsewhere.

[Image: Exiting the kiosk mode by opening an additional browser window]

Many airports have a network of computers that provide paid Internet access.

These computers handle the personal data that users have to enter to gain access, including people’s full names and payment card numbers.

These terminals also have a semblance of a kiosk mode but, due to design faults, exiting it is possible. On the computers we analyzed, the kiosk software uses Flash Player to show advertising, and at a certain point an attacker can bring up a context menu and use it to access other OS functions. It is worth noting that web address filtering policies are used on these computers. However, access to policy management was not restricted, so an attacker could add websites to the list or remove them from it, opening up a range of possibilities for compromising these devices.

For example, the ability to access phishing pages or sites used to distribute malware potentially puts such computers at risk.

And blacklisting legitimate sites increases the chances of a user following a phishing link.

[Image: List of addresses blocked by policies]

We also discovered that the configuration information used to connect to the database containing user data is stored openly in a text file.
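To illustrate why this matters, here is a hypothetical reconstruction of such a file (the section and key names, host and credentials are invented) and how trivially its contents are recovered with nothing more than the standard library:

```python
import configparser

# Invented example of the kind of file the researchers describe:
# database connection settings, credentials included, stored as plain
# text next to the kiosk application.
SAMPLE = """
[database]
host = 10.0.0.5
user = admin
password = P@ssw0rd!
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# Anyone who escapes kiosk mode and can open the file gets this instantly:
creds = (cfg["database"]["user"], cfg["database"]["password"])
print(creds)
```

Credentials like these belong in a secrets store, or at the very least encrypted at rest, never in a world-readable text file on the terminal itself.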

This means that, after finding a way to exit kiosk mode on one of these machines, anyone can get access to administrator credentials and, subsequently, to the customer database – with all the logins, passwords, payment details, etc.

[Image: A configuration file in which administrator logins and password hashes are stored]

Infotainment Terminals in Taxicabs

In recent years, many taxicabs have been fitted with Android devices embedded in the backs of the front seats. Passengers in the back seat can use these devices to view advertising, weather information, news and jokes that are not really funny.

These terminals have cameras installed in them for security reasons. The application that delivers the content also works in kiosk mode, and exiting this mode is likewise possible.

[Image: Exiting the kiosk mode on a device installed in a taxi makes it possible to download external applications]

In the terminals we were able to analyze, there was hidden text on the main screen.
It can be selected with standard Android tools via a context menu.

This leads to the search option being activated on the main screen.

As a result, the shell stops responding, terminates, and the device automatically restarts. While the device is booting, all the hacker needs to do is exit to the main menu at the right moment and open RootExplorer, an Android file manager.

[Image: Android interface and folder structure]

This gives an attacker access to the terminal’s OS and all of its capabilities, including the camera.
If the hacker has prepared a malicious application for Android in advance and hosted it on a server, that application can be used to remotely access the camera.
In this case, the attacker can remotely control the camera, recording video or taking photos of what is going on in the taxi and uploading them to his server.

[Image: Exiting the terminal’s full-screen application in a taxi gives access to the operating system’s functions]

Our Recommendations

A successful attack can disrupt a terminal’s operation and cause direct financial damage to its owners.

Additionally, a hacker can use a compromised terminal to hack into others, since terminals often form a network.

After this, there are extensive possibilities for exploiting the network – from stealing personal data entered by users and spying on them (if the terminal has a built-in camera or document scanner) to stealing money (if the terminal accepts cash or bank cards). To prevent malicious activity on public devices with a touch interface, the developers and administrators of terminals located in public places should keep the following recommendations in mind:

- The kiosk’s interactive shell should have no extra functions that allow the operating system’s menus to be called (such as a right mouse click or links to external sites)
- The application itself should be launched using sandboxing technology (a jail, sandbox, etc.), which keeps its functionality confined to that artificial environment
- Using a thin client is another method of protection: if a hacker manages to ‘kill’ the application, most of the valuable information will be stored on the server rather than on the compromised device
- The current operating system session should run with the restricted privileges of a regular user, which makes installing new applications much more difficult
- A unique account with a unique password should be created on each device, so that attackers who have compromised one terminal cannot reuse a cracked password on other similar devices

Elements of the Road Infrastructure

The road infrastructure of modern cities is being gradually equipped with a variety of intelligent sensors, regulators, traffic analyzers, etc.

All these sensors collect traffic density information and send it to data centers. We looked at speed cameras, which can be found everywhere these days.

Speed Cameras

We found speed camera IP addresses by pure chance, using the Shodan search engine.

After studying several of these cameras, we developed a dork (a search request that identifies devices or sites with pinpoint accuracy based on a specific attribute) to find as many IP addresses of these cameras as possible.

This enabled us to find those devices which were not shown in Shodan search results but which were on the same subnets with other cameras.

This means these devices are based on a common architecture, and there must be many such networks. Next, we scanned these and adjacent subnets for certain open ports and found a large number of such devices.
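The subnet-expansion step can be sketched with Python’s ipaddress module. The addresses below are documentation-range placeholders, not real cameras: given a few IPs found via Shodan, we derive each /24 and its neighbours to build a candidate scan list.

```python
import ipaddress

# Placeholder addresses (TEST-NET ranges), standing in for camera IPs
# discovered via a Shodan dork.
found = ["198.51.100.23", "198.51.100.87", "203.0.113.4"]

# Collapse the discovered IPs into their /24 subnets.
subnets = {ipaddress.ip_network(f"{ip}/24", strict=False) for ip in found}

# Expand each subnet to itself plus its immediate neighbours, since the
# researchers observed all cameras in a city sitting on adjacent ranges.
candidates = set()
for net in subnets:
    for offset in (-1, 0, 1):
        base = int(net.network_address) + offset * net.num_addresses
        neighbour = ipaddress.ip_network((base, net.prefixlen))
        candidates.update(neighbour.hosts())

print(len(subnets), "subnets,", len(candidates), "candidate hosts")
```

Each candidate host would then be probed for the handful of open ports characteristic of the cameras; the probing itself is omitted here.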

The protocol’s architecture enables streaming to be either private (accessible with a login and password) or public. We decided to check that passwords were being used.
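A check of this kind can be sketched as follows. The network round-trip is omitted, with canned responses standing in for a live camera, so only the request construction and status parsing are shown; a server that answers DESCRIBE with 200 rather than 401 is serving its stream to anyone.

```python
def build_describe(url: str, cseq: int = 1) -> bytes:
    """Build a minimal RTSP DESCRIBE request for the given stream URL."""
    return (f"DESCRIBE {url} RTSP/1.0\r\n"
            f"CSeq: {cseq}\r\n"
            f"Accept: application/sdp\r\n\r\n").encode()

def is_unauthenticated(response: bytes) -> bool:
    """True if the server answered DESCRIBE without demanding a login."""
    status_line = response.split(b"\r\n", 1)[0]   # e.g. b"RTSP/1.0 200 OK"
    parts = status_line.split()
    return len(parts) >= 2 and parts[1] == b"200"

# Canned responses standing in for a real socket round-trip:
assert is_unauthenticated(b"RTSP/1.0 200 OK\r\nCSeq: 1\r\n\r\n")
assert not is_unauthenticated(b"RTSP/1.0 401 Unauthorized\r\nCSeq: 1\r\n\r\n")
```

In a real probe the request bytes would be sent over a TCP socket to the camera’s RTSP port and the first response line examined exactly as above.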
Imagine our surprise when we realized there was no password and the entire video stream was available to all Internet users. Openly broadcast data includes not only the video stream itself, but additional data, such as the geographical coordinates of cameras, as well. Direct broadcast screenshot from a speed camera We found many more open ports on these devices, which can also be used to get many interesting technical details, such as a list of internal subnets used by the camera system or the list of camera hardware. We learned from the technical documentation that the cameras can be reprogrammed over a wireless channel. We also learned from documentation that cameras can detect rule violations on specified lanes, making it possible to disable detection on one of the lanes in the right place at the right time.

All of this can be done remotely. Let’s put ourselves in criminals’ shoes and assume they need to remain undetected in traffic after performing certain illegal actions.

They can take advantage of speed camera systems to achieve this.

They can disable vehicle detection on some or all lanes along their route, or monitor the actions of law-enforcement agents chasing them. In addition, a criminal can gain access to a database of vehicles registered as stolen, and can add vehicles to it or remove them from it. We have notified the organizations responsible for operating speed cameras in those countries where we identified the above security issues.

Routers

We also analyzed another element of the road infrastructure: the routers that transfer information between the various smart city elements, or from them to data centers.

Another widespread weakness is that the network name of most routers corresponds to their geographic location, i.e., street names and building numbers.

After getting access to the administration interface of one of these routers, an attacker can scan internal IP ranges to determine other routers’ addresses, thereby collecting information on their locations.

After this, traffic density information can be collected from the road load sensors attached to them. Such routers support recording traffic and uploading it to an FTP server, which an attacker can set up.

These routers can also be used to create SSH tunnels.

They provide access to their firmware (by creating a backup copy), support Telnet connections, and have many other capabilities. These devices are indispensable to the infrastructure of a smart city. However, after gaining access to them, criminals can use them for their own purposes.

For example, if a bank uses a secret route to move large amounts of cash, the route can be determined by monitoring information from all the sensors (using previously gained access to the routers). Next, the vehicles’ movements can be tracked using the cameras.

Our Recommendations

To protect speed cameras, a full-scale security audit and penetration test must first be carried out.

Based on the results, well-thought-out IT security recommendations should be prepared for those who install and maintain such speed monitoring systems.

The technical documentation that we were able to obtain does not include any information on security mechanisms that can protect cameras against external attacks.

Another thing that needs to be checked is whether such cameras are assigned an external IP address.

This should be avoided where possible.

For security reasons, none of these cameras should be visible from the Internet.

The main issue with the routers used in the road infrastructure is that there is no requirement to set a password during the device’s initial setup and configuration. Many administrators of such routers are too forgetful or lazy to do this simple thing.

As a result, gaining access to the network’s internal traffic is fairly easy.

Conclusion

The number of new devices used in the infrastructure of a modern city is steadily growing.

These new devices in turn connect to other devices and systems.

For this environment to be safe for the people who live in it, smart cities should be treated as information systems whose protection requires a custom approach and expertise.

This article was prepared as part of the support provided by Kaspersky Lab to “Securing Smart Cities”, an international non-profit initiative created to unite experts in smart city IT security technologies.

For further information about the initiative, please visit securingsmartcities.org

Google Patches 55 Android Flaws in September Update

More media server flaws surface as Google splits Android updates into three different patch levels.

Google's September Android security update provides users with patches for 55 vulnerabilities spread across three patch levels.

Google first began to split Android updates into two patch levels with the July update, which fixed 108 security vulnerabilities. The basic idea behind the new three-patch model is to make it easier for handset vendors to deliver the most important patches for a subset of vulnerabilities.

Google's own Nexus-branded devices receive the complete patch level, which this month is designated 2016-09-06, while the other two patch levels, 2016-09-05 and 2016-09-01, provide subsets of the patches. Overall, Google is patching eight critical vulnerabilities in Android, including one in the much-maligned media server component.

Android's media server and related libraries have been under scrutiny from security researchers since June 2015, when the first Stagefright flaw was revealed. To date, Google has patched more than 115 media server-related CVE (Common Vulnerabilities and Exposures) flaws in Android.

The new CVE-2016-3862, fixed in the September Android update, is a remote code execution vulnerability discovered by Tim Strazzere, director of mobile research at SentinelOne. While Google has patched many media server flaws over the past year, CVE-2016-3862 is particularly interesting because it sits in a different area than the other vulnerabilities, Strazzere said. "While it is part of the media framework, it is exposed to the system through a simple Java object, which many developers were using, and as a result were unknowingly exposing themselves to this issue," Strazzere told eWEEK. "Essentially, anyone who was using the ExifInterface object that was loading an image could trigger this vulnerability." If this vulnerability is exploited, an attacker could break into an application's sandbox, and they could also attack the media server, Strazzere explained.

A highly sophisticated attacker could use this to target a specific person or application, or simply try to compromise the entire device, he added. Strazzere found the CVE-2016-3862 issue through fuzzing techniques, and by looking at how people had reported vulnerabilities in the past and how they claimed to have been looking for them. "We then tried to see if we could beat some of these other companies, and Google, by fuzzing in different places," Strazzere said. "While the end vulnerability is attributed to a media server issue, it was a section of the code base no one else seemed to be looking at." While CVE-2016-3862 is the only media server issue in the September update marked critical, it's not the only media server issue that Google is patching this month.
In fact, Google is patching six additional media server flaws, including five high-severity denial-of-service vulnerabilities (CVE-2016-3899, CVE-2016-3878, CVE-2016-3879, CVE-2016-3880 and CVE-2016-388) and one information disclosure vulnerability that is marked moderate (CVE-2016-3895). Of note in the September update, Google is patching two of the so-called Quadrooter flaws that were publicly reported by security firm Check Point at the DefCon security conference in August.

The two flaws are CVE-2016-5340, which Google identifies as a critical vulnerability that enables privilege escalation in the kernel shared memory subsystem, and CVE-2016-2059, which is rated by Google as having high severity and is also a privilege-escalation flaw. "An elevation of privilege vulnerability in the Qualcomm networking component could enable a local malicious application to execute arbitrary code within the context of the kernel," Google warns in its advisory. There are also three patches in the September update for issues first patched in the upstream Linux kernel in 2015.

CVE-2015-1465 and CVE-2015-5364 are a pair of high-severity DoS vulnerabilities in the kernel networking subsystem, and CVE-2015-8839 is a high-severity DoS vulnerability in the kernel ext4 file system.

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.

Follow him on Twitter @TechJournalist.