
Tag: Reverse Engineering

XPan, I am your father

While we have previously written about the now infamous XPan ransomware family, some of its variants are still affecting users, primarily located in Brazil.

This sample is what could be considered the “father” of other XPan ransomware variants.

A considerable number of indicators within the source code point to the early origins of this sample.
If you've never been to SAS, ask around. You really are missing out on the best security conference in the industry: an event where the best connections are made and high-quality discoveries are shared in a fun, casual atmosphere.

Dissecting Malware

From March 30 through April 2, 2017, one of them, Kaspersky Lab Principal Security Researcher Nicolas Brulez, will deliver a course on malware reverse engineering, the subject he has been training people around the world in for 12 years.
The concept of a connected car, or a car equipped with Internet access, has been gaining popularity for the last several years.

Proprietary mobile apps make some useful features possible, but if a car thief were to gain access to the mobile device of a victim who has the app installed, wouldn't car theft become a mere trifle?
Kaspersky Lab Principal Security Researcher Nico Brulez talks with Ryan Naraine about his upcoming SAS 2017 training on the ins and outs of malware reverse engineering and how attendees can benefit from a wide range of tips and tricks.
The hacker group "ShadowBrokers" releases 61 files said to contain exploit tools used by the National Security Agency, which could fuel a race between attackers trying to create their own exploit tools and defenders. The ShadowBrokers, a hacking group, pledged to shut down their operation and go dark on Jan. 12.

But as a final act of spite, the group released 61 files from a cache of hundreds of programs allegedly belonging to an exploitation framework used by the U.S. National Security Agency. The files reportedly include programs for compromising systems and circumventing defensive software, including antivirus programs.

The group released the files because many of them (44, according to security experts) could be detected by at least one antivirus program, it said in a statement posted online. “So long, farewell peoples,” the group stated. “TheShadowBrokers is going dark, making exit.

Continuing is being much risk … not many bitcoins.” The group originally appeared in August 2016, claiming to have stolen files from an NSA server, files that matched those described in documents leaked by former NSA contractor Edward Snowden. The files also matched the telltale signatures of an exploitation kit discovered by Russian antivirus firm Kaspersky Lab.

The security firm dubbed the group behind the software “The Equation Group.” In August, the ShadowBrokers declared that they would release the files to anyone who paid them 10,000 bitcoins in an auction. Yet the chaotic manner in which the group announced the auction, along with the astronomical price tag, suggested to many researchers that the group was not serious. Other researchers believe that the group is likely linked to Russian intelligence. “I think the Shadowbrokers are a front for Russian intelligence and the auction was a smokescreen,” Jake Williams, principal consultant for Rendition Infosec, a cyber-security services firm, told eWEEK. “It is an insane auction method.
It was likely never about raising revenue.” Williams argues that the release of the information is a parting shot at the Obama administration and intelligence organizations.

Furthermore, the release of the code will likely result in an arms race as other nation-states try to reverse engineer the files and incorporate the exploits, as well as the vulnerabilities they target, into their own attacks and defenses. “This is definitely a game changer for the industry,” he said. “This is the first time that we have ever seen a nation-state’s toolkit.
It likely represents years of research, and in a matter of weeks other nation-states and cyber-criminal groups will have reverse-engineered it.
I don’t think there is a nation-state attack team on the planet that is not reverse engineering this code and figuring out how they can best use the technology.” Kaspersky Lab verified that the files released on Jan. 12 matched those from the Equation Group. “Most of the samples in the archive are EquationDrug plugins, GrayFish modules and EquationVector modules,” the company said in a statement. “These three are known malware platforms used by the Equation group, which we described in February 2015.

From the list of 61 files provided, our products already detect 44 of them. We are updating our products to detect all further samples.” However, the ShadowBrokers dropped another parting shot in fractured English.

They may be back. “TheShadowBrokers offer is still being good, no expiration,” the group said in a statement. “If TheShadowBrokers receiving 10,000 btc in bitcoin address then coming out of hiding and dumping password for Linux + Windows.”
At SAS 2017, on April 1st and 2nd on St. Maarten, Global Director of GReAT Costin Raiu and Principal Security Researchers Vitaly Kamluk and Sergey Mineev will provide YARA training for incident response specialists and malware researchers who need an effective arsenal for finding malware.

During the training, the experts will give participants access to some of Kaspersky Lab’s internal systems, which are otherwise closed to the public, to demonstrate how the company’s malware analysts catch rare samples.

After two days, even a newcomer will walk away with the ability to write rules and start using the tool to hunt malware. You can book your seat now; the class will be limited to a maximum of 15 participants. Each trainer has an impressive portfolio of cyber-espionage campaigns that they have investigated, including Stuxnet, Duqu, Flame, Gauss, Red October, MiniDuke, Turla, Careto/TheMask, Carbanak and Duqu2.

Why YARA training?

Protective measures that were effective yesterday don’t guarantee the same level of security tomorrow.
Indicators of Compromise (IoCs) can help you search for footprints of known malware or for an active infection.

But serious threat actors have started to tailor their tools to fit each victim, thus making IoCs much less effective.

Good YARA detection rules, however, still allow analysts to find malware, exploits and 0-days that couldn’t be found in any other way.

The rules can be deployed in networks and on various multi-scanner systems.

Giveaways

People who go through the training will be able to start writing relatively complex YARA rules for malware (from polymorphic keyloggers all the way up to highly complex samples) that can’t be detected easily with strings.

The GReAT trainers will teach participants how to balance rules: in other words, how to write detection rules while minimising the risk of false positives.

They will also share their experience of what exactly they look for when writing YARA rules as part of their everyday jobs.

What are the requirements for participation?

You don’t have to be an expert in order to go through this training.
It’s enough to have basic knowledge of how to use a text editor and the UNIX grep tool, and a basic understanding of what computer viruses are and what binary formats look like. You’ll also need to bring your laptop with YARA v3.4.0 installed.

Experience with malware analysis, reverse engineering and programming (especially in structured languages) will help you learn more quickly, but this doesn’t mean that you can’t learn without it.

Catching a 0-day with YARA

One of the most remarkable cases in which Kaspersky Lab’s GReAT used YARA was the very famous Silverlight 0-day: the team started hunting for this after Hacking Team, the Italian company selling “legal surveillance tools” to governments and LEAs, was hacked. One of the stories in the media attracted our researchers’ attention: according to the article, a programmer had offered to sell Hacking Team a Silverlight 0-day, an exploit for an obsolete Microsoft plug-in which at some point had been installed on a huge number of computers. GReAT decided to create a YARA rule based on this programmer’s older, publicly available proof-of-concept exploits. Our researchers found that he had a very particular style when coding the exploits: he used very specific comments, shell code and function names.

All of this unique information was used to write a YARA rule. The experts set it to carry out a clear task, basically saying: “Go and hunt for any piece of malware that shows the characteristics described in the rule”.
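As a purely illustrative aside (the rule GReAT actually wrote is not public), a rule of that kind keys on an author's distinctive artefacts: particular comments, function names and shellcode fragments. Here is a minimal sketch using the open-source yara-python bindings; the rule name, every string and the sample file name are hypothetical.

# pip install yara-python
import yara

# Hypothetical rule keyed on an exploit author's distinctive artefacts:
# specific comments, function names and a shellcode fragment.
RULE_SOURCE = r'''
rule exploit_author_style
{
    meta:
        description = "Hunts for samples sharing one author's coding style (illustrative only)"
    strings:
        $comment1 = "// TODO: spray the heap here" ascii
        $func1    = "CreateMagicObject" ascii wide
        // illustrative shellcode fragment
        $shell1   = { 90 90 E8 ?? ?? ?? ?? 5B }
    condition:
        2 of them
}
'''

rules = yara.compile(source=RULE_SOURCE)

# Scan a single file; in practice the rule would be deployed across a
# large sample collection or a multi-scanner system.
for match in rules.match("suspect_sample.bin"):
    print(match.rule, match.strings)

Requiring two of the three strings, rather than all of them, is the kind of balancing act mentioned above: strict enough to avoid false positives, loose enough to catch new samples that reuse only part of the author's toolkit.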

Eventually it caught a new sample; it was a 0-day, and the team reported it to Microsoft immediately.

If you’re a scholar…

Surprisingly enough, YARA can be used for any sort of classification, such as finding documents by metadata, email and so on.
If you work with any kind of rare information and lack a competitive tool for searching for it, come to St. Maarten in April and join the training: you’ll benefit greatly. You are welcome to listen to the podcast to learn how YARA can be used in malware hunting, data analysis and incident response activities. Book a seat at sas.kaspersky.com now to hunt APTs with YARA like a GReAT ninja!
Disk-erasing malware has been tweaked to encrypt data instead and to ask for a Bitcoin payment.

In an ominous but unsurprising development, threat actors appear to have begun targeting industrial companies in ransomware campaigns. Security firm CyberX’s threat intelligence research team recently analyzed a new version of the KillDisk disk-wiping malware that was used in cyber attacks against the Ukrainian power grid earlier this year. The analysis showed that KillDisk has been tweaked so that instead of erasing data, the malware now encrypts it and then asks for a Bitcoin payment.

The new version of KillDisk encrypts the local hard drives of the machines it infects, as well as any network-mapped folders shared across the organization, using RSA 1028 and AES algorithms, CyberX vice president of marketing Phil Neray said in a blog post this week. The security firm’s reverse engineering of the malware sample showed it containing a pop-up message demanding a ransom payment of 222 Bitcoins, or roughly $206,000, in return for the decryption key.

Ransomware attacks on companies in the industrial sector could cause significantly bigger problems than similar attacks on companies in other sectors.
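As general background on the hybrid scheme described above: bulk data is typically encrypted with a freshly generated AES key, and that key is then wrapped with an attacker-held RSA public key, so only the holder of the private key can recover it. The sketch below, using the Python cryptography library, is an illustration of that pattern under assumed key sizes and names, not reverse-engineered KillDisk code.

# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative key pair; in the ransomware pattern only the public key is
# embedded in the malware, while the private key stays with the attackers.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def hybrid_encrypt(plaintext: bytes):
    """Encrypt data with a random AES-256-GCM key, then wrap that key with RSA-OAEP."""
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, nonce, ciphertext

wrapped, nonce, ct = hybrid_encrypt(b"operational data")
print(len(wrapped), len(ct))

The point of the pattern is that capturing a sample yields only the public key, so encrypted files cannot be recovered without the private key the attackers hold back for ransom.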

An attack that succeeded in locking up the operational data on which physical processes rely, for example, could do serious and potentially even catastrophic damage to people and property.

Considering the severity of the potential consequences of a ransomware attack, plant owners are also likely to be more willing than others to quietly pay up any demanded ransom, CyberX said. The authors of the new KillDisk variant are a cybercriminal group called the TeleBots gang that appears to have evolved from another group called the Sandworm gang, Neray said. The Sandworm gang was responsible for a series of attacks on Industrial Control System (ICS) and SCADA networks in the US in 2014 involving the use of malware dubbed BlackEnergy.

The same group is also believed to be responsible for the attacks on the Ukrainian power grid in December 2015 and in early 2016, using the same BlackEnergy malware and the hard-disk-erasing version of KillDisk. The TeleBots gang itself has been associated with previous attacks on Ukrainian banks and now appears to have turned its sights on companies in the industrial sector.

“We know that both BlackEnergy and KillDisk were seen in the Ukraine power attacks and may also have been used in attacks against a large Ukrainian mining company and a large Ukrainian rail company,” says Neray in comments to Dark Reading. The new KillDisk ransomware variant has almost the same functionality as the previous version, but instead of deleting files it encrypts them. “For example, in both samples, the same string encoding algorithm is being used. So it's reasonable to assume that the new ransomware malware was designed [for use] against industrial companies too,” Neray says.

In addition, other security researchers have also seen evidence of cybercriminals already targeting chemical plants in Eastern Europe for extortion, he says. It is unclear what malware strains were used in those attacks.

But as with KillDisk, the threat actors behind those attacks used malicious email attachments to distribute their malware and to penetrate the operational networks and processing systems of chemical plants in a way that affects the purity of the output, Neray says.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year ...
In-flight entertainment systems create hacker risk, say researchers

Vulnerabilities in Panasonic in-flight entertainment systems create a possible mechanism for attackers to control in-flight displays, PA systems and lighting, say researchers. Ruben Santamarta, principal security consultant at IOActive, said the firm had found vulnerabilities in Panasonic Avionics In-Flight Entertainment (IFE) systems that it claims could allow hackers to "hijack" passengers’ in-flight displays and, in some instances, potentially access their credit card information.

The research revealed that such a vulnerability could, in theory, also present an entry point to the wider network, including the aircraft controls domain. “I’ve been afraid of flying for as long as I can remember,” said Santamarta. “It might sound like a sick cure to some but, as a hacker, learning everything I could about how planes work, from the aerodynamics to electronics, has reduced the fear significantly. On a 2014 flight from Warsaw to Dubai, I discovered I could access debug codes directly from a Panasonic inflight display.

A subsequent internet search allowed me to discover hundreds of publicly available firmware updates for multiple major airlines, which was quite alarming. Upon analysing backend source code for these airlines and reverse engineering the main binary, I’ve found several interesting functionalities and exploits.” IFE system vulnerabilities identified by Santamarta might most straightforwardly be exploited to gain control of what passengers see and hear from their in-flight screen, he claimed.

For example, an attacker might spoof flight information values such as altitude or speed, or show a bogus route on the interactive map.

An attacker might also compromise the "CrewApp" unit, which controls PA systems, lighting, or even the recliners on first class seating.
If all of these attacks are applied at the same time, a malicious actor may create a baffling and disconcerting situation for passengers.

Furthermore, the capture of personal information, including credit card details, is also technically possible due to backend systems that sometimes provide access to specific airlines’ frequent-flyer/VIP membership data, said the researcher.

An aircraft's data networks are divided into four domains, depending on the kind of data they process: passenger entertainment, passenger-owned devices, airline information services, and finally aircraft control.

Avionics is usually located in the Aircraft Control domain, which should be physically isolated from the passenger domains; however, this doesn’t always happen.

This means that as long as there is a physical path that connects both domains, there is potential for attack.

The specific devices, software and configuration deployed on the target aircraft would dictate whether an attack is possible or not.
Santamarta urged airlines to steer a cautious course. “I don’t believe these systems can resist solid attacks from skilled malicious actors,” he said. “As such, airlines must be incredibly vigilant when it comes to their IFE systems, ensuring that these and other systems are properly segregated and each aircraft's security posture is carefully analysed case by case.” IOActive reported these findings to Panasonic Avionics in March 2015.
It only went public this week after giving the firm “enough time to produce and deploy patches, at least for the most prominent vulnerabilities”. Panasonic Avionics’ technology is used by several major airlines, including Virgin, American and Emirates. El Reg asked Panasonic Avionics to comment on IOActive's research but we’ve yet to hear back. We’ll update this story as and when we learn more.

The avionics research has some parallels with IOActive’s remote hack of the Jeep Cherokee in 2015, in which hackers took control of the vehicle’s dashboard functions, including steering, brakes, and transmission, through vulnerabilities in the automobile’s entertainment system. Once again, it appears entertainment systems have created a potential route into sensitive systems that hackers might be able to exploit.

Stephen Gates, chief research intelligence analyst at NSFOCUS, commented: “In the light of this research, physical separation between in-flight entertainment systems and aircraft control systems could never be more important.

As airlines continue to add new customer-based entertainment and information technologies, they need to ensure that an impenetrable barrier is in place protecting aircraft control systems. “This research demonstrates that hackers could cause all sorts of issues that could impact a customer’s 'experience' while flying, but they have yet to prove they could impact flight control systems,” he added. ®
It just became harder to distinguish bot behavior from human behavior.

Inbar Raz, Principal Researcher at PerimeterX, also contributed to this commentary.

Ask just about anyone the question “What distinguishes an automated (bot) session from a human-driven session?” and you'll almost always get the same first answer: “Speed.” And no wonder: it's our first intuition.

Computers are just faster. If you focus the question on credential brute-forcing, then it's even more intuitive.

After all, the whole purpose of a brute-force attack is to cover as many options as possible in the shortest possible time. Working quickly is just elementary, right?

Well, it turns out that this is not always the case. Most defenders, if not all, are already looking at speed and have created volumetric detections that are nothing more than time-based signatures.
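Such a volumetric, time-based signature boils down to very little code. The following sketch is a hypothetical illustration, not any vendor's implementation; the window length, threshold and log handling are assumptions. It counts login attempts per source IP in a sliding window and flags anything above the threshold, which also shows why a distributed attack can slip underneath it.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60      # sliding window length (assumed)
THRESHOLD = 20           # max login attempts per IP per window (assumed)

attempts = defaultdict(deque)   # source IP -> timestamps of recent attempts

def is_suspicious(ip, now=None):
    """Return True if this login attempt pushes the IP over the volumetric threshold."""
    now = time.time() if now is None else now
    q = attempts[ip]
    q.append(now)
    # Drop attempts that have fallen out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > THRESHOLD

# A single noisy IP trips the detector quickly...
print(any(is_suspicious("198.51.100.7", now=i * 0.5) for i in range(30)))      # True
# ...but a handful of attempts from each of many different IPs stays under
# the per-IP threshold and is never flagged.
print(any(is_suspicious(f"203.0.113.{i}", now=i * 0.5) for i in range(200)))   # False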

And that works, most of the time.

But the attackers are getting smarter every day, and changing their attack methods.
Suddenly, checking speed is no longer enough. In the first week of October, we detected a credential brute-force attack on one of our customers that commenced around 03:30 UTC. The attack, which lasted a few minutes shy of 34 hours, spanned a whopping 366,000 login attempts.
Sounds like an easy case: 366K over 34 hours is more than 10,000 attempts per hour. But an easy catch? Not for existing volumetric detections, because the attack did not originate from a single IP address.
In fact, we discovered that well over 1,000 different IP addresses participated in this attack. Let's look at the distribution of attempts:

[Chart: distribution of login attempts per IP address. Image source: PerimeterX]

Of all the participating IP addresses, the vast majority (over 77%) appeared no more than 10 times during the entire attack. While the minority may trigger a volumetric detection, 77% of the attacking IP addresses would go unnoticed. One can argue that counting failed login attempts would come in handy here.

And it indeed could, except that many of the brute-force attacks don't actually enumerate passwords tirelessly.
Instead, they try username/password pairs that were likely obtained from leaked account databases, gathered from other vulnerable and hacked sites.
Since many people use the same password in more than one place, there is a good chance that some, if not many, of the login attempts will actually be successful.

How Motivated Attackers Adapt

In a different attack we observed, nearly 230,000 login attempts were performed over 20 minutes from more than 40,000 participating IP addresses.

The vast majority of IP addresses were the origin point of 10 or fewer attempts.
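Grouping source IPs by how many attempts each one made is what produces distributions like the ones described here. A minimal sketch, assuming a simple list of source IPs with one entry per observed login attempt (the addresses and log handling are illustrative, not PerimeterX's pipeline):

from collections import Counter

def attempts_per_ip_histogram(attempt_ips):
    """Group source IPs by how many attempts each made during the attack window."""
    per_ip = Counter(attempt_ips)        # IP -> number of attempts
    buckets = Counter(per_ip.values())   # attempt count -> number of IPs
    return dict(sorted(buckets.items()))

# Hypothetical log extract: one noisy IP plus a long tail of low-volume IPs.
log = ["203.0.113.5"] * 120 + [f"198.51.100.{i}" for i in range(1, 200)] * 2
print(attempts_per_ip_histogram(log))
# {2: 199, 120: 1} -- 199 IPs made only 2 attempts each; a per-IP threshold
# of, say, 20 would flag just the single noisy address.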

A volumetric detection would simply miss this attack. For comparison, a common volumetric detector is usually set to a minimum threshold of between 5 and 30 attempts, depending on the site's specific behavior. Our data suggest that motivated attackers will adapt and adjust their numbers to your threshold, no matter how low it is. We also observed that the attack was incredibly concentrated, within a very short detection window of only about 20 to 25 seconds.

Fake User Creation Attack

Let's look at one last distributed attack, on yet another client.

This time, the attack is not about credentials brute-forcing but rather fake user creation.
In this example, when grouping IP addresses by attempt count, the largest groups were those that made only one or two attempts:

[Chart: IP addresses grouped by number of attempts. Image source: PerimeterX]

The entire attack was conducted in less than six hours. How do the attackers get so many IP addresses to attack from? The answer lies in analyzing the IP addresses themselves. Our research shows that 1% were proxies, anonymizers or cloud vendors, and the other 99% were private IP addresses of home networks, likely indicating that the attacks were performed by some botnet (or botnets) of hacked computers and connected devices. Furthermore, the residential IP addresses constantly change (as in any home), rendering IP blacklisting irrelevant and even harmful to the real users' experience.

Suspicious Indicators

We included in this post just a few representative examples (out of many more we detected) of large-scale attacks originating from thousands of IP addresses over a short time span.
In the majority of these cases, detection was achieved by examining how users interacted with the website.

The suspicious indicators included users accessing only the login page, filling in the username and password too fast, or not using the mouse at all. The implications of these attacks vary.

They include theft of user credentials as well as fake user account creation, which in turn leads to user fraud, spam, malware distribution and even layer-7 DDoS on the underlying web application. In conclusion, volumetric detections are simple and useful, but they are not sufficient.

The attackers continue to improve their techniques, bypassing old-fashioned defenses.

The new frontier in defense is distinguishing bot behavior from human behavior, and blocking the bots.

Inbar Raz has been teaching and lecturing about Internet security and reverse engineering for nearly as long as he has been practicing them himself: he started programming at the age of 9 and reverse engineering at the age of 14.
Inbar specializes in outside-the-box approaches to analyzing security and finding vulnerabilities; the only reason he's not in jail right now is that he chose the right side of the law at an early age.

These days, Inbar is the principal researcher at PerimeterX, researching and educating the public on automated attacks on websites.
Amir Shaked is a software engineer and security researcher. He entered the software world at the age of 14 and has been developing and researching ever since at various startups and companies.
In recent years he managed several groups and recently started leading the research ...
Brit/Belgian research team deciphers signals and devises wounding wireless attacks

A global research team has hacked 10 different types of implantable medical devices and pacemakers, finding exploits that could allow wireless remote attackers to kill victims. Eduard Marin and Dave Singelée, researchers at KU Leuven, Belgium, began examining the pacemakers under black-box testing conditions, in which they had no prior knowledge of or special access to the devices, and used commercial off-the-shelf equipment to break the proprietary communications protocols.

From the position of blind attackers, the pair managed to hack pacemakers from up to five metres away, gaining the ability to deliver fatal shocks and turn off life-saving treatment. The wireless attacks could also breach patient privacy, reading device information that discloses location history, treatments and current state of health.

Singelée told The Register the pair has probed implantable medical devices and pacemakers, along with insulin pumps and neurostimulators, in a bid to improve security understanding and develop lightweight countermeasures. "So we wanted to see if these wireless attacks would be possible on these newer types of pacemakers, as this would show that there are still security problems almost 10 years after the initial security flaws have been discovered, and because the impact of breaking the long-range wireless communication channel would be much larger as adversaries can be further away from their victim," Singelée says. "We deliberately followed a black-box approach mimicking a less-skilled adversary that has no prior knowledge about the specification of the system. Using this black-box approach we just listened to the wireless communication channel and reverse-engineered the proprietary communication protocol. And once we knew all the zeros and ones in the message and their meaning, we could impersonate genuine readers and perform replay attacks etcetera."

[Image: laboratory setup, with a USRP (left) and DAQ with antennas below.]

Their work is detailed in the paper On the (in)security of the Latest Generation Implantable Cardiac Defibrillators and How to Secure Them [PDF], authored by Marin and Singelée, KU Leuven colleague Bart Preneel, Flavio D. Garcia and Tom Chothia of the University of Birmingham, and cardiologist Rik Willems of University Hospital Gasthuisberg. The team describes, in limited detail to protect patients, how the wireless communications used to maintain the implantable medical devices can be breached. "Adversaries may eavesdrop the wireless channel to learn sensitive patient information, or even worse, send malicious messages to the implantable medical devices. The consequences of these attacks can be fatal for patients as these messages can contain commands to deliver a shock or to disable a therapy."

No physical access to the devices is required to pull off the attacks. The researchers say attackers could install beacons in strategic locations such as train stations and hospitals to infer patient movements, revealing frequented locations, and to infer patient treatment. Attackers could trigger a reprogramming session in order to grab that data. Programming flaws relating to the devices' standby energy-saving mode allow denial-of-service attacks that keep units in a battery-draining alive state through continuous broadcasting of messages over long-range wireless. This could "drastically reduce" the units' battery life, the team says.
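The replay and impersonation attacks described above are possible because the messages carry neither freshness nor cryptographic authentication. As purely illustrative background (hypothetical commands and key handling, not the vendors' protocol or the researchers' exact proposal), a standard symmetric-key challenge-response scheme of the kind recommended below looks roughly like this:

import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(32)   # provisioned into device and programmer (assumed)

def device_issue_challenge():
    """Device side: generate a fresh nonce for every command session."""
    return secrets.token_bytes(16)

def programmer_sign(command, challenge):
    """Programmer side: bind the command to this session's challenge."""
    return hmac.new(SHARED_KEY, challenge + command, hashlib.sha256).digest()

def device_verify(command, challenge, tag):
    """Device side: accept only commands authenticated against the live challenge."""
    expected = hmac.new(SHARED_KEY, challenge + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

challenge = device_issue_challenge()
tag = programmer_sign(b"SET_THERAPY_ON", challenge)
print(device_verify(b"SET_THERAPY_ON", challenge, tag))                     # True

# A replayed message fails because the device has moved on to a new challenge.
print(device_verify(b"SET_THERAPY_ON", device_issue_challenge(), tag))      # False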
The research, like all medical device hacking, has scope limitations that mean mass targeting of pacemakers is not immediately possible. Nor can the attacks be extended over many metres. Another happy fact: the gear required isn't cheap. National Instruments sells its USRP-2920 for US$3670 (£2930, A$4972) and its USB-6353 for US$2886 (£2724, A$3910).

The team tells The Register they have been informed that the compromised vendor has issued a patch, but further details are not known. Medical devices' wireless could be jammed as a stop-gap measure, while the addition of shutdown commands to the devices would be the best long-term fix, as would the inclusion of standard symmetric key authentication. "We want to emphasise that reverse engineering was possible by only using a black-box approach," the team says. "Our results demonstrated that security-by-obscurity is a dangerous design approach that often conceals negligent designs."

Medical device hacking has picked up pace in recent years, with much of the work coming through the I Am The Cavalry research and activist group. ®
Backdoor slipped into phones sold outside China doesn't even hide itself successfully

Got a cheap-and-cheerful Android phone from BLU, Infinix, Doogee, Leagoo, IKU, Beeline or Xolo? It might be harbouring some badware in the firmware. The issue affects phones that use an over-the-air update mechanism from the Chinese company Ragentek, according to BitSight researcher Dan Dahlberg and Anubis Networks' João Gouveia and Tiago Pereira. Since the firmware update process runs as root, the phones in question are vulnerable to pretty much anything a malicious server might install, which means a keylogger, bugging software, or anything else an attacker might contemplate. In a twist that doesn't look like an accident, the vulnerable process tries to hide itself from the user and has a command that would let the manufacturer turn it off for six months or until the phone is rebooted.

The researchers say Ragentek doesn't encrypt firmware updates, making them vulnerable to a man-in-the-middle attack, and as well as the BLU Studio G they tested, they estimate about three million phones in America are vulnerable. The sub-$100 phone they tested started by trying to contact a Ragentek domain immediately after initialisation, then another two unregistered domains that Anubis acquired (watching the traffic to those domains provided the estimated number of affected phones). Like the backdoor discovered last week by Kryptowire, the Ragentek firmware phones home with information such as the IMEI (the device ID), phone numbers (there might be two, since the BLU Studio G is a dual-SIM unit), country and more.

Some reverse engineering of the software turned up snippets showing that the processes in question (/system/bin/debugsrun and /system/bin/debugs) try to hide themselves from the user and can be sent to sleep by the server. The Carnegie Mellon CERT has tagged the issue CVE-2016-6564 and is tracking affected vendors for updates. Like last week's Adups spyware, the Ragentek firmware is present on phones sold outside China, including kit offered by prominent retailers such as Best Buy and Amazon. ®