Sunday, October 22, 2017

Introducing Deep Learning: Boosting Cybersecurity With An Artificial Brain

With nearly the same speed and precision that the human eye can identify a water bottle, the technology of deep learning is enabling the detection of malicious activity at the point of entry, in real time.

Editor's Note: Last month, Dark Reading editors named Deep Instinct the most innovative startup in its first annual Best of Black Hat Innovation Awards program at Black Hat 2016 in Las Vegas.

For more details on the competition and other results, read Best Of Black Hat Innovation Awards: And The Winners Are.

It's hot outside and you're thirsty.

As you reach for a water bottle, you don’t pause to analyze its material, size or shape in order to determine whether it’s a water bottle.
Instead, you immediately reach for it, with complete confidence in its identification. If I show the same water bottle to any traditional computer vision module, it will easily recognize it.
If I partially obstruct the image with my fingers, then traditional computer vision modules will have difficulty recognizing it.

But if I apply an advanced form of artificial intelligence called deep learning, which is resistant to small changes and can generalize from partial data, the computer vision module can correctly recognize the water bottle even when most of the image is obstructed.

Deep learning, also known as neural networks, is "inspired" by the brain's ability to learn to identify objects.

Take vision as an example. Our brain can process raw data derived from our sensory inputs and learn the high-level features all on its own.
Similarly, in deep learning, raw data is fed through the deep neural network, which learns to identify the object on which it is trained.

Machine learning, on the other hand, requires manual intervention in selecting which features to process through the machine learning modules.

As a result, the process is slower and accuracy can be affected by human error.

Deep learning's more sophisticated, self-learning capability results in higher accuracy and faster processing.

Similar to image recognition, in cybersecurity more than 99% of new threats and malware are actually very small mutations of previously existing ones.

And even the remaining 1% of supposedly brand-new malware consists largely of substantial mutations of existing malicious threats and concepts.

But, despite this fact, cybersecurity solutions -- even the most advanced ones that use dynamic analysis and traditional machine learning -- have great difficulty in detecting a large portion of these new malware.

The result is vulnerabilities that leave organizations exposed to data breaches, data theft, ransomware seizure, data corruption, and destruction. We can solve this problem by applying deep learning to cybersecurity.

The history of malware detection in a nutshell

Signature-based solutions are the oldest form of malware detection, which is why they are also called legacy solutions.

To detect malware, the antivirus engine compares the contents of an unidentified piece of code to its database of known malware signatures.
If the malware hasn’t been seen before, these methods rely on manually tuned heuristics to generate a handcrafted signature, which is then released as an update to clients.
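The matching step at the heart of a legacy engine can be sketched in a few lines. Hash-based matching is one common variant (real engines also use byte-pattern signatures), and the "known malware" corpus here is invented for illustration:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Illustrative corpus; a real engine ships millions of signatures,
# distributed to clients as database updates.
KNOWN_SAMPLES = [b"malicious payload v1", b"dropper stub 2015"]
SIGNATURE_DB = {sha256_hex(s) for s in KNOWN_SAMPLES}

def is_known_malware(file_bytes: bytes) -> bool:
    """Exact-match lookup: the engine flags a file only if its
    digest is already in the signature database."""
    return sha256_hex(file_bytes) in SIGNATURE_DB

# A one-byte mutation produces a completely different digest, so the
# mutated sample sails past the exact-match check until a new
# signature ships.
assert is_known_malware(b"malicious payload v1")
assert not is_known_malware(b"malicious payload v2")
```

Because any single-byte change defeats an exact match, every new variant needs its own handcrafted signature, which is the scaling problem this detection method runs into.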

This process is time-consuming, and sometimes signatures are released months after the initial detection.

As a result, this detection method can't keep up with the roughly one million new malware variants that are created daily.

This leaves organizations vulnerable to new threats as well as to threats that have already been detected but have yet to have a signature released.

Heuristic techniques identify malware based on the behavioral characteristics in the code, which has led to behavioral-based solutions.

This malware detection technique analyzes the malware’s behavior at runtime, instead of considering the characteristics hardcoded in the malware code itself.

The main limitation of this malware detection method is that it is able to discover malware only once the malicious actions have begun.

As a result, prevention is delayed, sometimes available only once it's too late.

Sandbox solutions are a development of the behavioral-based detection method.

These solutions execute the malware in a virtual (sandbox) environment to determine whether the file is malicious or not, instead of detecting the behavioral fingerprint at runtime.

Although this technique has been shown to be quite effective in its detection accuracy, that accuracy is achieved at the cost of real-time protection because of the time-consuming process involved.

Additionally, newer types of malicious code that can evade sandbox detection by stalling their execution in a sandbox environment are posing new challenges to this type of malware detection and, consequently, to prevention capabilities.

Malware detection using AI: machine learning & deep learning

Incorporating AI capabilities to enable more sophisticated detection is the latest step in the evolution of cybersecurity solutions. Malware detection methods based on machine learning apply elaborate algorithms to classify a file's behavior as malicious or legitimate according to feature engineering that is conducted manually. However, this process is time-consuming and requires massive human resources to tell the technology which parameters, variables, or features to focus on during the file classification process.

Additionally, the rate of malware detection is still far from 100%.

Deep learning is an advanced branch of machine learning, also known as "neural networks" because it is "inspired" by the way the human brain works.
In our neocortex, the outer layer of our brain where high-level cognitive tasks are performed, we have several tens of billions of neurons.

These neurons, which are largely general purpose and domain-agnostic, can learn from any type of data.

This is the great revolution of deep learning because deep neural networks are the first family of algorithms within machine learning that do not require manual feature engineering.
Instead, they learn on their own to identify the object on which they are trained by processing and learning the high-level features from raw data -- very much like the way our brain learns on its own from raw data derived from our sensory inputs.

When applied to cybersecurity, the deep learning core engine is trained without any human intervention to determine whether a file is malicious or legitimate.
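As a sketch of the contrast: a deep model consumes raw bytes directly, with no hand-picked features. The network below is a toy with untrained random weights (the architecture, input length and layer sizes are illustrative assumptions); it shows the shape of the computation, not a working detector:

```python
import numpy as np

rng = np.random.default_rng(0)

def bytes_to_input(file_bytes: bytes, length: int = 256) -> np.ndarray:
    """Raw bytes in, no hand-engineered features: take the first
    `length` bytes, zero-pad, and scale to [0, 1]."""
    buf = np.zeros(length)
    data = np.frombuffer(file_bytes[:length], dtype=np.uint8)
    buf[:data.size] = data / 255.0
    return buf

# Toy, untrained weights. In a real system these are learned from
# millions of labeled samples -- that training is where the network
# discovers its own features.
W1, b1 = rng.normal(size=(256, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def malice_score(file_bytes: bytes) -> float:
    """Forward pass: hidden ReLU layer, then a sigmoid that squashes
    the output into a [0, 1] 'probability of malicious'."""
    h = np.maximum(bytes_to_input(file_bytes) @ W1 + b1, 0.0)  # ReLU
    logit = (h @ W2 + b2).item()
    return float(1.0 / (1.0 + np.exp(-logit)))                 # sigmoid
```

The classical machine learning pipeline described above would instead feed `W1` something like a manually chosen feature vector (imports, section entropy, string counts), which is exactly the human bottleneck deep learning removes.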

Deep learning exhibits potentially groundbreaking results in detecting first-seen malware, compared with classical machine learning.
In real-environment tests on publicly known databases of endpoint, mobile, and APT malware, for example, a deep learning solution detected over 99.9% of both substantially and slightly modified malicious code.

These results are consistent with the improvements deep learning has achieved in other fields, such as computer vision, speech recognition, and text understanding. In the same way humans can immediately identify a water bottle in the real world, the advances of deep learning -- applied to cybersecurity -- can enable the precise detection of new malware threats and fill in the critical gaps that leave organizations exposed to attacks.

Related Content:

Guy Caspi is a leading mathematician and a global expert in data science. He has 15 years of extensive experience in applying mathematics and machine learning in an elite technology unit of the Israel Defense Forces (IDF), financial institutions and intelligence organizations ...

Banking Trojan, Gugi, evolves to bypass Android 6 protection

Almost every Android OS update includes new security features designed to make cybercriminals’ life harder.

And, of course, the cybercriminals always try to bypass them.

We have found a new modification of the mobile banking Trojan, Trojan-Banker.AndroidOS.Gugi.c, that can bypass two new security features added in Android 6: permission-based app overlays and a dynamic permission requirement for dangerous in-app activities such as SMS or calls.

The modification does not use any vulnerabilities, just social engineering.

Initial infection

The Gugi Trojan is spread mainly by SMS spam that takes users to phishing webpages with the text "Dear user, you receive MMS-photo! You can look at it by clicking on the following link". Clicking on the link initiates the download of the Gugi Trojan onto the user's Android device.

Circumventing the security features

To help protect users from the impact of phishing and ransomware attacks, Android 6 introduced a requirement for apps to request permission to superimpose their windows/views over other apps.
In earlier versions of the OS, apps could automatically overlay other apps. The Trojan's ultimate goal is to overlay banking apps with phishing windows in order to steal user credentials for mobile banking. It also overlays the Google Play Store app to steal credit card details.

The Trojan-Banker.AndroidOS.Gugi.c modification gets the overlay permission it needs by forcing users to grant it, and then uses that permission to block the screen while demanding ever more dangerous access.

The first thing an infected user is presented with is a window with the text "Additional rights needed to work with graphics and windows" and one button: "provide". After clicking on this button, the user will see a dialog box that authorizes the app overlay ("drawing over other apps").

System request to permit Trojan-Banker.AndroidOS.Gugi.c to overlay other apps

But as soon as the user gives Gugi this permission, the Trojan will block the device and show its window over any other windows/dialogs.

Trojan-Banker.AndroidOS.Gugi.c window that blocks the infected device until it receives all the necessary rights

It gives the user no option, presenting a window that contains only one button: "Activate". Once the user presses this button, they will receive a continuous series of requests for all the rights the Trojan is looking for.

They won't get back to the main menu until they have agreed to everything. For example, following the first click of the button, the Trojan will ask for Device Administrator rights. It needs these for self-defense, because they make it much harder for the user to uninstall the app.

After successfully becoming the Device Administrator, the Trojan produces the next request.

This one asks the user for permission to send and view SMS and to make calls. Interestingly, Android 6 introduced dynamic permission requests as a new security feature. Earlier versions of the OS only showed app permissions at installation; starting from Android 6, the system asks users for permission to execute dangerous actions, such as sending SMS or making calls, the first time they are attempted, and also allows apps to ask at any other time -- and that is what the modified Gugi Trojan does.

System request for dynamic permission

The Trojan will continue to ask the user for each permission until they agree.
Should the user deny permission, subsequent requests will offer them the option of closing the request.
If the Trojan does not receive all the permissions it wants, it will completely block the infected device.
In such a case, the user's only option is to reboot the device in safe mode and try to uninstall the Trojan.

Repeating system request for dynamic permission

A standard banking Trojan

With the exception of its ability to bypass Android 6 security features, and its use of the WebSocket protocol, Gugi is a typical banking Trojan.
It overlays apps with phishing windows to steal credentials for mobile banking or credit card details.
It also steals SMS messages and contacts, makes USSD requests, and can send SMS on command from the C&C server.

The Trojan-Banker.AndroidOS.Gugi family has been known about since December 2015, with the modification Trojan-Banker.AndroidOS.Gugi.c first discovered in June 2016.

Victim profile

The Gugi Trojan mainly attacks users in Russia: more than 93% of attacked users to date are based in that country. Right now it is a trending Trojan -- in the first half of August 2016 there were ten times as many victims as in April 2016.

Number of unique users attacked by Trojan-Banker.AndroidOS.Gugi

We will shortly be publishing a detailed report on the Trojan-Banker.AndroidOS.Gugi malware family, its functionality and its use of the WebSocket protocol. All Kaspersky Lab products detect all modifications of the Trojan-Banker.AndroidOS.Gugi malware family.

Air-Gapped Systems Foiled Again, Via USB Drive

Researchers at Israel's Ben-Gurion University have come up with another novel way to extract data from air-gapped systems, at least theoretically.

Conventional wisdom has it that one of the best ways to protect a sensitive system from external attack is to air-gap it: to physically and logically separate the system from the Internet and other computers. The technique is very effective, but not foolproof, as researchers from Israel's Ben-Gurion University of the Negev (BGU) showed yet again this week with a proof-of-concept exploit for extracting data from air-gapped systems via radio frequency transmissions.

This is at least the third similar exploit that BGU researchers have developed in recent years involving data theft from isolated systems. Last year, they showed how attackers could establish a covert bi-directional communication channel between two air-gapped computers by leveraging the integrated thermal sensors in the systems. In 2014, they demonstrated a technique for getting an air-gapped computer to transmit sensitive data via radio signals to a nearby smartphone equipped with special software dubbed AirHopper.

The latest proof-of-concept also involves the use of controlled RF signals, but with a twist.
In this case, the researchers have developed software dubbed USBee that is designed to generate RF electromagnetic emissions from a system’s USB data bus.
In a technical paper, the researchers discuss how the RF signals can be modulated with arbitrary binary data, like passwords and encryption keys, and then sent to a nearby computer using a plugged-in USB drive as the transmitter. The attack builds on previous research that shows how data can be extracted from an air-gapped computer using USB connectors with embedded RF transmitters in them, the researchers said in their paper.

As one example, they pointed to a USB hardware implant dubbed COTTONMOUTH, developed by the National Security Agency for extracting data from disconnected systems. Unlike the previously described methods, however, USBee does not require any hardware modification.

An ordinary, unmodified USB stick plugged into a system running the USBee code can be used as a transmitter to send data to a nearby computer at a rate of about 80 bytes per second.
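As a toy illustration of the two numbers in play -- the on-off keying idea behind bus-generated emissions, and what the reported 80 bytes per second actually buys an attacker -- the sketch below uses invented slot lengths and pattern names, not USBee's actual modulation parameters:

```python
def bits(data: bytes):
    """MSB-first bit stream of the payload."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def ook_schedule(payload: bytes, slot_ms: int = 10):
    """On-off keying sketch: during a '1' slot the sender busy-writes
    a toggling byte pattern to the USB drive (strong emission); during
    a '0' slot it stays idle (weak emission). Slot length and pattern
    names are illustrative, not USBee's real parameters."""
    return [("WRITE_TOGGLING_PATTERN" if b else "IDLE", slot_ms)
            for b in bits(payload)]

RATE_BYTES_PER_SEC = 80  # throughput reported by the researchers

def exfil_seconds(num_bytes: int) -> float:
    return num_bytes / RATE_BYTES_PER_SEC

# At 80 B/s, a 512-byte key leaves in 6.4 s, while a 1 MiB file takes
# roughly 3.6 hours -- slow, but ample for credentials and keys.
```

The arithmetic is the point: the channel is far too slow for bulk theft, but more than fast enough for the small, high-value secrets the researchers highlight.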

Tests have shown that a computer equipped with an RF antenna can capture the signals from up to 30 feet away, the researchers said.

Significant as that is, the actual risk of attackers being able to leverage the method to steal data from air-gapped systems remains small.

By far the biggest reason is that in order for the attack to work, the air-gapped system has to be infected with the USBee code first, which means having physical access to the system.
Systems that are sensitive enough to be air-gapped are also unlikely to permit the use of USB drives on them, especially after the Stuxnet attacks of 2010, which were enabled via infected USB sticks.

"The whole chain-of-attack requires a motivated attacker with high skills," says Mordechai Guri, head of R&D of the cybersecurity center at BGU and the lead author of the technical paper. "Developing the USBee malware itself is not the hardest part. Infecting the victim with [the] malware and then intercepting the transmitted data is the more challenging part in this attack," he admits.

There are several well-documented examples of how isolated systems can be infected with malware like USBee, he says.

But often, even after that, it is hard to download data from such systems without being noticed.

The USBee exploit shows one way that can be accomplished without raising any red flags.

Assuming an attacker is able to infect an air-gapped system with USBee, the data that can be stolen this way is significant.

Guri pointed to examples such as encryption keys, keylogged data, personal records, small files, usernames, and passwords.

One of the most effective countermeasures for dealing with a threat like this is to define areas around air-gapped systems where RF receivers are prohibited, the researchers said. Partition walls with proper insulation can also help to lower the signal reception distance, they said.

Cris Thomas, strategist at Tenable Network Security, said that while there's a chance that a high-level nation-state actor might employ a tactic such as the one described by the BGU researchers, there are few real-world scenarios where such an attack would be useful. "However, the fact that this technology exists, even in the lab, does mean that people who are in control of air-gapped systems need to make sure that the physical security controlling access to those systems is adequate," he said.

Related stories:

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year ...

Improvements In Cybersecurity Require More Than Sharing Threat-Intelligence Information

Interoperability and automation are keys to defining success in computer network defense.

I read a recent article covering the cybersecurity marketplace that says the sharing of threat intelligence data could significantly disrupt malicious cyberactivity.

The author continues to use “could” in every sentence in the rest of that paragraph.

Cybersecurity professionals need more than "could."

Timely detection and response in the face of advanced targeted attacks are major challenges for security teams across every sector. Most organizations rely on a multivendor security infrastructure with products that rarely communicate well with one another.

The shortage of trained security staff and lack of automated processes result in inefficiencies and protection gaps. Interoperability and integration improve effectiveness.

The active sharing of data makes it practical and possible for every security control to leverage the strengths and experiences of the other tools in the security infrastructure. Rather than treating each malware interaction as a standalone event, adaptive threat prevention integrates processes and data through an efficient messaging layer.

This approach connects end-to-end components to generate and consume as much actionable intelligence as possible from each contact and process.

Tear Down The Fences

The shift to adaptive threat prevention helps overcome the functional fences that impede detection, response, and any chance of improved prevention.
Silos of data and point products complicate operations and increase risk.

The actions of each security control and the context of each situation are poorly captured and seldom shared within an organization, let alone among a larger community of trust.

Unintegrated security functions keep organizations in firefighting mode, always reacting and pouring human resources into every breach. Process inefficiency exhausts scarce investigative resources and lengthens the timeline during which data and networks are exposed to determined attackers.

The length of time from breach to detection has a direct correlation to the extent of damage. Separate islands of security products, data sets, and operations provide sophisticated attackers with ample space and noise that they can use to their advantage while their malicious code enters, hides, and persists within and throughout an organization.

Intel Security's DXL is the foundation for enabling the ideal adaptive security ecosystem.
It is a near real-time, bidirectional communications fabric that allows security components to share relevant data among endpoint, network, and other IP-enabled systems.
It provides command-and-control options for otherwise inaccessible systems, and benefits organizations by enabling automated response, vastly reduced response time, and better containment.

The goal of DXL is to promote open, collaborative security; enable active command and control; forge interoperability (plug-and-play) among distributed elements from disparate vendors; and ensure consistency and speed of outcomes.

The interactions among these components can use their own (standardized) layered application protocols, depending on the use case.

DXL acts as the foundational service -- just as standardized roads and transportation are foundational to commerce, or HTTP and browsers are foundational to the internet.

Traditionally, communication between security products has been application programming interface (API)-driven, resulting in a fragile patchwork of communicating pairs.
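The fabric idea can be sketched as a minimal in-process publish/subscribe bus. This is illustrative only -- a generic pub/sub pattern, not the DXL API -- and the topic name and event fields are invented:

```python
from collections import defaultdict
from typing import Callable

class MessageBus:
    """Toy publish/subscribe fabric: producers publish events to named
    topics; any number of subscribers react. There are no
    point-to-point API pairs to maintain."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subs[topic]:
            handler(event)

bus = MessageBus()
quarantined, blocked = [], []

# An endpoint agent and a firewall both act on one detection event,
# without knowing about each other.
bus.subscribe("threat/detected", lambda e: quarantined.append(e["hash"]))
bus.subscribe("threat/detected", lambda e: blocked.append(e["c2"]))

bus.publish("threat/detected", {"hash": "ab12cd", "c2": "203.0.113.9"})
```

The design point is the decoupling: adding a third responder is one more subscribe call, not another fragile pairwise API integration.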

As threats have grown more sophisticated, this model is simply no longer acceptable, as the time from detection to reaction to containment can take days.

To accelerate this process and keep up with the enormous volume of sophisticated threats, security architectures must undergo a significant evolution and be able to respond in minutes or seconds. Shared threat information and synchronized real-time enforcement are necessities, not luxuries.

Until now, such sharing has been available only for specific products or single point-to-point integrations.
Intel Security's DXL supplies a standardized communication solution to this real-time problem.

Ned Miller, a 30+ year technology industry veteran, is the Chief Technology Strategist for the Intel Security Public Sector division. Mr. Miller is responsible for working with industry and government thought leaders and worldwide public sector customers to ensure that ...

Malware Exposes Payment Card Data At Kimpton Hotels

The Sir Francis Drake Hotel in San Francisco is one of the Kimpton Hotels affected by the malware. (Image: Kimpton Hotels)

Kimpton Hotels has become the latest hotel operator to ...

Malware-Ridden Word Docs Lead To Microsoft Alert Blurt

MICROSOFT HAS taken the trouble to warn Windows users about an attack that takes what trust people have left in the software and throws it out of the window.

The firm explained that the problem involves macros and the use of social engineering. People are tricked into downloading and then enabling malicious content that ultimately leads to trouble when they innocently use Word.

"Attackers have been using social engineering to avoid the increasing costs of exploitation due to the significant hardening and exploit mitigation investments in Windows," said the firm in a Microsoft TechNet blog post suggesting that this is a cheap shot by hackers. "Tricking a user into running a malicious file or malware can be cheaper for an attacker than building an exploit which works on Windows 10. We recently came across a threat that uses the same social engineering trick but delivers a different payload."

Microsoft explained that the payload's primary purpose is to change a user's browser proxy server setting, which could result in the theft of authentication credentials or other sensitive information.

"We detect this JScript malware as Trojan:JS/Certor.A. What's not unique is that the malware gets into the victim's computer when the victim clicks the email attachment from a spam campaign," the post said.

Microsoft added that people really ought not to click on links from people or outfits that they do not know or trust.

This is good, if perhaps hoary and often ignored, advice.

"To avoid attacks like we have just detailed, it is recommended that you only open and interact with messages from senders and websites that you recognise and trust," explained the firm. "For added defence-in-depth, you can reduce the risk from this threat by following [our] guidance to adjust the registry settings to help prevent OLE Embedded Objects executing altogether or running without your explicit permission."

Just don't click untrusted links, people. µ

How Trojans manipulate Google Play

For malware writers, Google Play is the promised land of sorts. Once there, a malicious application gains access to a wide audience, gains the trust of that audience and experiences a degree of leniency from the security systems built into operating systems.

On mobile devices, users typically cannot install applications coming from sources other than the official store, which is a serious barrier for an app with malicious intent. However, it is far from easy for an app to get into Google Play: one of the main conditions is to pass a rigorous check for unwanted behavior by different analysis systems, both automatic and manual.

Some malware writers have given up on their efforts to push their malicious creations past security checks, and have instead learned how to use the store's client app for their unscrupulous gains. Lately, we have seen many Trojans use the Google Play app during promotion campaigns to download, install and launch apps on smartphones without the owners' knowledge, as well as leave comments and rate apps.

The apps installed by the Trojan do not typically cause direct damage to the user, but the victim may have to pay for the excess traffic they create. In addition, the Trojans may download and install paid apps as if they were free ones, further adding to the users' bills.

Let us look into the methods by which such manipulations of Google Play happen.

Level 1. N00b

The first method is to make the official Google Play client app undertake the actions the cybercriminal wants.

The idea is to use the Trojan to launch the client, open the page of the required app in it, and then use special code to interact with the interface elements (buttons) to trigger the download, installation and launch of the application.

The misused interface elements are outlined with red boxes in the screenshots below.

The exact methods of interaction with the interface vary. In general, the following techniques may be identified:

Use of the Accessibility services of the operating system (used by modules in Trojan.AndroidOS.Ztorg).
Imitation of user input (used by Trojan-Clicker.AndroidOS.Gopl.c).
Code injection into the process of the Google Play client to modify its operation (used by Trojan.AndroidOS.Iop).

To see how such Trojans operate, let us look at the example of Trojan.AndroidOS.Ztorg.n.

This malicious program uses Accessibility services originally intended to create applications to help people with disabilities, such as GUI voice control apps.

The Trojan receives a job from the command and control server (C&C) which contains a link to the required application, opens it in Google Play, and then launches the following code: This code is needed to detect when the required interface element appears on the screen, and to emulate the click on it.

This way, the following buttons are clicked in a sequence: “BUY” (the price is shown in the button), “ACCEPT” and “CONTINUE”.
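The click-emulation loop described above can be sketched as follows. Here `find_button` and `click` are hypothetical stand-ins for Accessibility-service calls, and the timeout is an invented parameter -- this is the shape of the technique, not real Android code:

```python
import time

# Button labels clicked in sequence, as described in the article.
CLICK_SEQUENCE = ["BUY", "ACCEPT", "CONTINUE"]

def purchase_via_ui(find_button, click, timeout_s: float = 30.0) -> bool:
    """Wait for each expected button to appear on screen, then click
    it. `find_button(label)` returns a button handle or None;
    `click(btn)` emulates the tap. Returns False on timeout."""
    for label in CLICK_SEQUENCE:
        deadline = time.monotonic() + timeout_s
        while (btn := find_button(label)) is None:
            if time.monotonic() > deadline:
                return False
            time.sleep(0.1)  # poll until the dialog is drawn
        click(btn)
    return True
```

Driving the store through its own trusted UI is what makes this approach "easy and reliable": the client performs a perfectly ordinary purchase, with the Trojan merely standing in for the user's finger.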

This is sufficient to purchase the app, provided the user has a credit card with sufficient balance connected to his/her Google account.

Level 2. Pro

Some malware writers take roads less traveled.
Instead of using the easy and reliable way described above, they create their own client for the app store using the HTTPS API. The difficult part of this approach is that operating a self-made client requires information (e.g. user credentials and authentication tokens) that is not available to a regular app. Fortunately for the cybercriminals, however, all the required data are stored on the device in clear text, in the convenient SQLite format.

Access to the data is limited by the Android security model, but malware can bypass it, e.g. by rooting the device and thus gaining unlimited access. For example, some versions of Trojan.AndroidOS.Guerrilla.a have their own client for Google Play, which is distributed with the help of the rooter Leech.

This client successfully fulfils the task of downloading and installing free and paid apps, and is capable of rating apps and leaving comments in the Google store.

After launch, Guerrilla starts to collect the following required information:

The credentials to the user's Google Play account. Activities in Google Play require special tokens that are generated when the user logs in. When the user is already logged in to Google Play, the Trojan can use the locally cached tokens.

They can be located through a simple search of the database located at /data/system/users/0/accounts.db. With the help of the code below, the Trojan checks whether there are ready tokens on the infected device, i.e. whether the user has logged on and can perform activities in Google Play. If no such tokens are available, the Trojan obtains the user's username and hashed password, and authenticates via OAuth.

Android_id is the device's unique ID.

Google Services Framework ID is the device's identifier across Google services. First, the Trojan attempts to obtain this ID using regular methods; if these fail for whatever reason, it executes the following code:

Google Advertising ID is the unique advertising ID provided by Google Play services. Guerrilla obtains it as follows:

In a similar way, the Trojan obtains hashed data about the device from the file "/data/data/com.google.android.gms/shared_prefs/Checkin.xml".

When the Trojan has collected the above data, it begins to receive tasks to download and install apps.
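The first of these steps -- checking for cached tokens in accounts.db -- can be sketched like this. The table layout, column names and token value below are assumptions for illustration, an approximation of the real database rather than an exact dump:

```python
import sqlite3

# Stand-in for /data/system/users/0/accounts.db; the schema here is
# an assumption for the sketch, not the exact Android layout.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE authtokens (accounts_id INTEGER, type TEXT, authtoken TEXT)"
)
db.execute(
    "INSERT INTO authtokens VALUES (1, 'androidmarket', 'example-cached-token')"
)

def cached_play_token(conn: sqlite3.Connection):
    """Return a cached Google Play auth token if the user has already
    logged in (i.e. a token row exists), else None -- in which case
    the malware would fall back to OAuth with stolen credentials."""
    row = conn.execute(
        "SELECT authtoken FROM authtokens WHERE type = 'androidmarket'"
    ).fetchone()
    return row[0] if row else None
```

On a real device this query only works because the rooter has already defeated the Android permission model; the SQL itself is trivial.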

Below is the structure of one such task.

The Trojan downloads the application by sending POST requests to the links below:

https://android.clients.google.com/fdfe/search: a search is undertaken for the request sent by the cybercriminals. This request is needed to simulate the user's interaction with the Google Play client. (The main scenario of installing apps from the official client presupposes that the user first performs the search request and only then visits the app's page.)

https://android.clients.google.com/fdfe/details: with this request, additional information needed to download the app is collected.

https://android.clients.google.com/fdfe/purchase: the token and purchase details are downloaded, to be used in the next request.

https://android.clients.google.com/fdfe/delivery: the Trojan receives the URL and the cookie files required to download the Android application package (APK) file.

https://android.clients.google.com/fdfe/log: the download is confirmed (so the download counter is incremented).

https://android.clients.google.com/fdfe/addReview: the app is rated and a comment is added.

When creating the requests, the cybercriminals attempted to simulate as accurately as possible the equivalent requests sent by the official client.
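The request sequence just described can be sketched as follows. The request body is a stand-in (real requests carry auth tokens, device IDs and protobuf payloads), and no network calls are made here:

```python
BASE = "https://android.clients.google.com/fdfe"

# Endpoint order mirrors a real user's flow, as described above:
# search first, then the app page, purchase, delivery, confirmation
# and finally the fake review.
STEPS = ["search", "details", "purchase", "delivery", "log", "addReview"]

def install_sequence(package: str):
    """Return the ordered POST endpoints a self-made client would hit
    to fetch, 'confirm' and review an app, with a placeholder body."""
    return [(f"{BASE}/{step}", {"doc": package}) for step in STEPS]
```

Walking the endpoints in the same order as the official client is itself part of the deception: a purchase request arriving without a preceding search would look anomalous.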

For example, the set of HTTP headers below is used in each request.

After the request is executed, the app may (optionally) be downloaded, installed (using the command 'pm install -r', which allows applications to be installed without the user's consent) and launched.

Conclusion

The Trojans that use the Google Play app to download, install and launch apps from the store on a smartphone without the device owner's consent are typically distributed by rooters -- malicious programs that have already gained the highest possible privileges on the device. It is this particular fact that allows them to launch such attacks on the Google Play client app.

This type of malicious program poses a serious threat: in Q2 2016, different rooters occupied more than half of the Top 20 mobile malware rankings.

What is more, rooters can download not only malicious programs that compromise the Android ecosystem and spend the user's money on unnecessary paid apps, but other malware as well.

Malware Markets: Exposing The Hype & Filtering The Noise

There's a lot of useful infosec information out there, but cutting through the clutter is harder than it should be. There is a wealth of genuine, educational information about the infosec industry available through the media, helping both the industry and the general public.

Then there are stories put forth by those who just want to get into the headlines for some quick recognition.

This makes it difficult for the tech audience to separate the signal from the noise. I have the perspective of over 20 years in the information security industry, and I’ve seen the cycle of hype continually chase its own tail. Over time, you start to recognize the patterns and learn to tell the difference between the genuine heroes who put out data for the greater good and the villains who are just generating hype. Those individuals and companies that go for the quick win with the press try to get their name associated with as many buzzwords as possible, but when you dig in, you find little to no substance.

There is no data behind many of their claims, they tend not to collaborate with the greater industry, and their overall goal seems to be to spread fear. With the rise of vendors of all-things-evil on the “Dark Web,” ransomware (which isn’t really new), and other threats, the problem has gotten worse.

Take, for example, the hype surrounding a new ransomware offering, Stampado, being sold on the Dark Web. Once news broke of the cheap ransomware’s availability, word quickly circulated. But this just shows how attention is diverted from real threats to unsophisticated, overhyped commodity malware. Stampado first appeared on the AlphaBay market, priced attractively at $39. While that’s an interesting footnote in the lengthy history of ransomware, it doesn’t deserve the headlines.

There are several reasons for this. First of all, there are other ransomware and ransomware-as-a-service offerings that are either free or cheaper. Many are available in the same market, and some are even more interesting from a code/malware research perspective. A second big issue is the sophistication level.
Stampado doesn’t have a truly automated payment mechanism.
It relies on sending email back and forth to get further instruction on decryption.

Beyond that, it’s fully written/scripted in AutoIt, meaning it’s easily deconstructed and simple to analyze without any advanced tools. All the code is right in front of you after a couple of clicks in Exe2Aut (image source: Jim Walter). Before press coverage around Stampado died down, there were multiple file-decryption tools available to undo the changes it made.

And speaking of those changes, unlike many of the more destructive ransomware families, Stampado doesn’t affect all files on the disk or all disks on the system.
It targets specific files within the user profile directory of the current user.
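That narrow scope is easy to illustrate. Below is a minimal Python sketch (the helper name is invented for illustration, not taken from any teardown) of the check that separates profile-only targeting from the whole-disk behavior of more destructive families:

```python
import os.path

def in_user_profile(path, profile_dir):
    """Illustrative check: does `path` lie inside the user's profile directory?

    Ransomware with Stampado's narrow scope would only touch paths for which
    this returns True, unlike families that walk every disk on the system.
    """
    path = os.path.abspath(path)
    profile_dir = os.path.abspath(profile_dir)
    # commonpath() returns the deepest shared ancestor of the two paths;
    # if that ancestor is the profile directory itself, path is inside it.
    return os.path.commonpath([path, profile_dir]) == profile_dir
```

The practical upshot is that system files and secondary drives are untouched, which is one more reason the threat was overstated.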

Also, according to AlphaBay, only 55 individuals have actually purchased Stampado to date. There are already plenty of technical teardowns of Stampado, done by genuine researchers who want to dispel the hype.

But for every Stampado, there are thousands of other “scary-sounding” things waiting for the next new-company-on-the-block to exploit.

Malware Clearance Bins

Much of what you find in markets like AlphaBay and entry-level forums are old, cracked (that is, modified to run for free) copies of malware builders, cracked versions of packers/crypters, and even versions of gray- and black-hat tools (Havij Pro, license key generators, etc.) that are booby-trapped with back doors.

There is a lot of scammy, useless garbage out there.

The item below appears in several markets.
It is a scam.

The site it links to (for its cut of the buyer’s profits) is the Encryptor RaaS service, which no longer exists (image source: Jim Walter). The issue of overhyped threats isn’t specific to Dark Web markets and forums -- it’s a trend that exists elsewhere in our industry.
It’s at a different level, but similar games are played with the release of flashy threat reports.

Almost daily, there is news of a recently discovered, state-sponsored, super-sophisticated advanced persistent threat campaign.
Some of these reports are valid and packed with vetted research.

And some are cut-and-paste jobs or technically inaccurate word salads, designed to grab headlines and notoriety.

Experts Must Help Sort Out The Hype

The bottom line is that the good guys need to do a better job of helping our customers and the general public spot the hype.

Those in the security industry must refrain from making the quick “press grab” and instead disseminate data that genuinely assists customers and the general public.

Building trust and becoming a source of valuable security information is only achieved over time.

There are no overnight heroes, and that needs to be accepted by everyone in the industry. I believe we’re on the right course.

Time and the market we exist in have a way of correcting these situations, allowing the valid, helpful entities in information security to rise to the top while those spreading fear, uncertainty, and doubt eventually fade away.

Jim Walter is a senior member of Cylance's SPEAR team. He focuses on next-level attacks, actors, and campaigns as well as 'underground' markets and associated criminal activity. Jim is a regular speaker at cybersecurity events and has authored numerous articles, whitepapers ...

The Hunt for Lurk

In early June 2016, the Russian police arrested the alleged members of the criminal group known as Lurk.

The police suspected Lurk of stealing nearly three billion rubles, using malicious software to systematically withdraw large sums of money from the accounts of commercial organizations, including banks.

For Kaspersky Lab, these arrests marked the culmination of a six-year investigation by the company’s Computer Incidents Investigation team. We are pleased that the police authorities were able to put the wealth of information we accumulated to good use: to detain suspects and, most importantly, to put an end to the theft. We ourselves gained more knowledge from this investigation than from any other.

This article is an attempt to share this experience with other experts, particularly the IT security specialists in companies and financial institutions that increasingly find themselves the targets of cyber-attacks. When we first encountered Lurk, in 2011, it was a nameless Trojan.
It all started when we became aware of a number of incidents at several Russian banks that had resulted in the theft of large sums of money from customers.

To steal the money, the unknown criminals used a hidden malicious program that was able to interact automatically with the financial institution’s remote banking service (RBS) software, replacing bank details in payment orders generated by an accountant at the attacked organization, or even generating such orders itself. In 2016, it is hard to imagine banking software that does not demand some form of additional authentication, but things were different back in 2011.
In most cases, the attackers only had to infect the computer on which the RBS software was installed in order to start stealing the cash. Russia’s banking system, like those of many other countries, was unprepared for such attacks, and cybercriminals were quick to exploit the security gap. We participated in the investigation of several incidents involving the nameless malware, and sent samples to our malware analysts.

They created a signature to see if any other infections involving it had been registered, and discovered something very unusual: our internal malware naming system insisted that what we were looking at was a Trojan that could be used for many things (spamming, for example) but not for stealing money. It does happen that our detection systems mistake a program with a certain set of functions for something completely different.
In the case of this particular program the cause was slightly different: an investigation revealed that it had been detected by a “common” signature because it was doing nothing that could lead the system to include it in any specific group, for example, that of banking Trojans. Whatever the reason, the fact remained that the malicious program was used for the theft of money. So we decided to take a closer look at the malware.

The first attempts to understand how the program worked gave our analysts nothing. Regardless of whether it was launched on a virtual or a real machine, it behaved in the same way: it didn’t do anything.

This is how the program, and later the group behind it, got its name.

To “lurk” means to hide, generally with the intention of ambush. We were soon able to help investigate another incident involving Lurk.

This time we got a chance to explore the image of the attacked computer.

There, in addition to the familiar malicious program, we found a .dll file with which the main executable file could interact.

This was our first piece of evidence that Lurk had a modular structure. Later discoveries suggest that, in 2011, Lurk was still at an early stage of development.
It was formed of just two components, a number that would grow considerably over the coming years. The additional file we uncovered did little to clarify the nature of Lurk.
It was clear that it was a Trojan targeting RBS and that it was used in a relatively small number of incidents.
In 2011, attacks on such systems were just starting to grow in popularity. Other, similar programs were already known about, the earliest detected as far back as 2006, with new malware appearing regularly since then.

These included ZeuS, SpyEye, Carberp and others.
In this series, Lurk represented yet another dangerous piece of malware. It was extremely difficult to make Lurk work in a lab environment. New versions of the program appeared only rarely, so we had few opportunities to investigate new incidents involving Lurk.

A combination of these factors influenced our decision to postpone our active investigation into this program and turn our attention to more urgent tasks.

A change of leader

For about a year after we first met Lurk, we heard little about it.
It later turned out that the incidents involving this malicious program were buried in the huge amount of similar incidents involving other malware.
In May 2011, the source code of ZeuS had been published on the Web and this resulted in the emergence of many program modifications developed by small groups of cybercriminals. In addition to ZeuS, there were a number of other unique financial malware programs.
In Russia, there were several relatively large cybercriminal groups engaged in financial theft via attacks on RBS.

Carberp was the most active among them.

At the end of March 2012, the majority of its members were arrested by the police.

This event significantly affected the Russian cybercriminal world as the gang had stolen hundreds of millions of rubles during a few years of activity, and was considered a “leader” among cybercriminals. However, by the time of the arrests, Carberp’s reputation as a major player was already waning.

There was a new challenger for the crown. A few weeks before the arrests, the sites of a number of major Russian media, such as the agency “RIA Novosti”, Gazeta.ru and others, had been subjected to a watering hole attack.

The unknown cybercriminals behind this attack distributed their malware by exploiting a vulnerability in the websites’ banner exchange system.

A visitor to the site would be redirected to a fraudulent page containing a Java exploit.
Successful exploitation of the vulnerability initiated the launch of a malicious program whose main function was to collect information on the attacked computer, send it to a malicious server and, in some cases, receive and install an additional payload from the server.

The code on the main page of RIA.ru used to download additional content from AdFox.ru

From a technical perspective, the malicious program was unusual. Unlike most other malware, it left no traces on the hard drive of the attacked system and worked only in the RAM of the machine.

This approach is not often used in malware, primarily because the resulting infection is “short-lived”: the malware exists in the system only until the computer is restarted, at which point the process of infection needs to be started anew.

But, in the case of these attacks, the secret “bodiless” malicious program did not have to gain a foothold in the victim’s system.
Its primary job was to explore; its secondary role was to download and install additional malware.

Another fascinating detail was the fact that the malware was only downloaded in a small number of cases, when the victim computer turned out to be “interesting”.

Part of the Lurk code responsible for downloading additional modules

Analysis of the bodiless malicious program showed that it was “interested” in computers with remote banking software installed – more specifically, RBS software created by Russian developers. Much later we learned that this nameless, bodiless module was “mini”, one of the malicious programs used by Lurk.

But at the time we were not sure whether the Lurk we had known since 2011 and the Lurk discovered in 2012 were created by the same people. We had two hypotheses: either Lurk was a program written for sale, and the 2011 and 2012 versions were the result of the activity of two different groups, each of which had bought the program from the author; or the 2012 version was a modification of the previously known Trojan. The second hypothesis turned out to be correct.

Invisible war with banking software

A small digression. Remote banking systems consist of two main parts: the bank and the client.

The client part is a small program that allows the user (usually an accountant) to remotely manage their organization’s accounts.

There are only a few developers of such software in Russia, so any Russian organization that uses RBS relies on software developed by one of these companies.

For cybercriminal groups specializing in attacks on RBS, this limited range of options plays straight into their hands. In April 2013, a year after we found the “bodiless” Lurk module, several families of malicious software specializing in attacks on banking software were circulating in the Russian cybercriminal underground.

Almost all operated in a similar way: during the exploration stage they found out whether the attacked computer had the necessary banking software installed.
If it did, the malware downloaded additional modules, including ones allowing for the automatic creation of unauthorized payment orders, changing details in legal payment orders, etc.

This level of automation became possible because the cybercriminals had thoroughly studied how the banking software operated and “tailored” their malicious software modules to a specific banking solution. The people behind the creation and distribution of Lurk had done exactly the same: studying the client component of the banking software and modifying their malware accordingly.
In fact, they created an illegal add-on to the legal RBS product. Through the information exchanges used by people in the security industry, we learned that several Russian banks were struggling with malicious programs created specifically to attack a particular type of legal banking software.
Some of them were having to release weekly patches to customers.

These updates would fix the immediate security problems, but the mysterious hackers “on the other side” would quickly release a new version of malware that bypassed the upgraded protection created by the authors of the banking programs. It should be understood that this type of work – reverse-engineering a professional banking product – cannot easily be undertaken by an amateur hacker.
In addition, the task is tedious and time-consuming and not the kind to be performed with great enthusiasm.
It would need a team of specialists.

But who in their right mind would knowingly take up illegal work, and who might have the money to finance such activities? In trying to answer these questions, we eventually came to the conclusion that each version of Lurk probably had an organized team of skilled specialists behind it. The relative lull of 2011–2012 was followed by a steady increase in reports of Lurk-based incidents resulting in the theft of money.

Due to the fact that affected organizations turned to us for help, we were able to collect ever more information about the malware.

By the end of 2013, the information obtained from studying hard drive images of attacked computers as well as data available from public sources, enabled us to build a rough picture of a group of Internet users who appeared to be associated with Lurk. This was not an easy task.

The people behind Lurk were pretty good at anonymizing their activity on the network.

For example, they were actively using encryption in everyday communication, as well as false data for domain registration, services for anonymous registration, etc.
In other words, it was not as easy as simply looking someone up on “Vkontakte” or Facebook using the name from Whois, which can happen with other, less professional groups of cybercriminals, such as Koobface.

The Lurk gang did not make such blunders. Yet mistakes, seemingly insignificant and rare, still occurred.

And when they did, we caught them. Not wishing to give away free lessons in how to run a conspiracy, I will not provide examples of these mistakes, but their analysis allowed us to build a pretty clear picture of the key characteristics of the gang. We realized that we were dealing with a group of about 15 people (although by the time it was shut down, the number of “regular” members had risen to 40).

This team provided the so-called “full cycle” of malware development, delivery and monetization – rather like a small, software development company.

At that time the “company” had two key “products”: the malicious program, Lurk, and a huge botnet of computers infected with it.

The malicious program had its own team of developers, responsible for developing new functions, searching for ways to “interact” with RBS systems, providing stable performance and fulfilling other tasks.

They were supported by a team of testers who checked the program performance in different environments.

The botnet also had its own team (administrators, operators, money flow manager, and other partners working with the bots via the administration panel) who ensured the operation of the command and control (C&C) servers and protected them from detection and interception. Developing and maintaining this class of malicious software requires professionals and the leaders of the group hunted for them on job search sites.

Examples of such vacancies are covered in my article about Russian financial cybercrime.

The description of the vacancy did not mention the illegality of the work on offer.

At the interview, the “employer” would question candidates about their moral principles: applicants were told what kind of work they would be expected to do, and why.

Those who agreed got in. A fraudster has advertised a job vacancy for java / flash specialists on a popular Ukrainian website.

The job requirements include a good level of programming skills in Java, Flash, knowledge of JVM / AVM specifications, and others.

The organizer offers remote work and full employment with a salary of $2,500.
So, every morning, from Monday to Friday, people in different parts of Russia and Ukraine sat down in front of their computer and started to “work”.

The programmers “tuned” the functions of malware modifications, after which the testers carried out the necessary tests on the quality of the new product.

Then the team responsible for the botnet and for the operation of the malware modules and components uploaded the new version onto the command server, and the malicious software on botnet computers was automatically updated.

They also studied information sent from infected computers to find out whether they had access to RBS, how much money was deposited in clients’ accounts, etc. The money flow manager, responsible for transferring the stolen money into the accounts of money mules, would press the button on the botnet control panel and send hundreds of thousands of rubles to accounts that the “drop project” managers had prepared in advance.
In many cases they didn’t even need to press the button: the malicious program substituted the details in the payment order generated by the accountant, and the money went directly to the accounts of the cybercriminals and on to the bank cards of the money mules, who cashed it via ATMs and handed it over to the money mule manager, who in turn delivered it to the head of the organization.

The head would then allocate the money according to the needs of the organization: paying a “salary” to the employees and a share to associates, funding the maintenance of the expensive network infrastructure, and of course, satisfying their own needs.

This cycle was repeated several times.

Each member of the typical criminal group has their own responsibilities

These were the golden years for Lurk.

The shortcomings in RBS transaction protection meant that stealing money from a victim organization through an accountant’s infected machine did not require any special skills and could even be automated.

But all “good things” must come to an end.

The end of “auto money flow” and the beginning of hard times

The explosive growth of thefts committed by Lurk and other cybercriminal groups forced banks, their IT security teams and banking software developers to respond. First of all, the developers of RBS software blocked public access to their products.

Before the appearance of financial cybercriminal gangs, any user could download a demo version of the program from the manufacturer’s website.

Attackers used this to study the features of banking software in order to create ever more tailored malicious programs for it.

Finally, after many months of “invisible war” with cybercriminals, the majority of RBS software vendors succeeded in perfecting the security of their products. At the same time, the banks started to implement dedicated technologies to counter the so-called “auto money flow”, the procedure which allowed the attackers to use malware to modify the payment order and steal money automatically. By the end of 2013, we had thoroughly explored the activity of Lurk and collected considerable information about the malware.

At our farm of bots, we could finally launch a consistently functioning malicious script, which allowed us to learn about all the modifications cybercriminals had introduced into the latest versions of the program. Our team of analysts had also made progress: by the year’s end we had a clear insight into how the malware worked, what it comprised and what optional modules it had in its arsenal. Most of this information came from the analysis of incidents caused by Lurk-based attacks. We were simultaneously providing technical consultancy to the law enforcement agencies investigating the activities of this gang. It was clear that the cybercriminals were trying to counteract the changes introduced in banking and IT security.

For example, once the banking software vendors stopped providing demo versions of their programs for public access, the members of the criminal group established a shell company to receive directly any updated versions of the RBS software. Thefts declined as a result of improvements in the security of banking software, and the “auto money flow” became less effective.

As far as we can judge from the data we have, in 2014 the criminal group behind Lurk seriously reduced its activity and “lived from hand to mouth”, attacking anyone they could, including ordinary users.

Even if an attack could bring in no more than a few tens of thousands of rubles, they would still stoop to it. In our opinion, this was caused by economic factors: by that time, the criminal group had an extensive and extremely costly network infrastructure, so, in addition to employees’ salaries, it was necessary to pay for renting servers, VPNs and other technical tools. Our estimates suggest that the network infrastructure alone cost the Lurk managers tens of thousands of dollars per month.

Attempts to come back

In addition to increasing the number of “minor” attacks, the cybercriminals tried to solve their cash flow problem by “diversifying” the business and expanding their field of activity.

This included developing, maintaining and renting the Angler exploit pack (also known as XXX).
Initially, this was used mainly to deliver Lurk to victims’ computers.

But as the number of successful attacks started to decline, the owners began to offer smaller groups paid access to the tools. By the way, judging by what we saw on Russian underground forums for cybercriminals, the Lurk gang had an almost legendary status.

Even though many small and medium-sized groups were willing to “work” with them, the Lurk gang always preferred to work alone.
So when Lurk provided other cybercriminals with access to Angler, the exploit pack became especially popular – a “product” from the top underground authority did not need advertising.
In addition, the exploit pack was actually very effective, delivering a very high percentage of successful vulnerability exploitations.
It didn’t take long for it to become one of the key tools on the criminal2criminal market. As for extending the field of activity, the Lurk gang decided to focus on the customers of major Russian banks and the banks themselves, whereas previously they had chosen smaller targets. In the second half of 2014, we spotted familiar pseudonyms of Internet users on underground forums inviting specialists to cooperate on document fraud.

Early the following year, several Russian cities were swamped with announcements about fraudsters who used fake letters of attorney to re-issue SIM cards without their owners being aware of it. The purpose of this activity was to gain access to one-time passwords sent by the bank to the user so that they could confirm their financial transaction in the online or remote banking system.

The attackers exploited the fact that, in remote areas, mobile operators did not always carefully check the authenticity of the documents submitted and released new SIM cards at the request of cybercriminals. Lurk would infect a computer, collect its owner’s personal data, generate a fake letter of attorney with the help of “partners” from forums and then request a new SIM card from the network operator. Once the cybercriminals received a new SIM card, they immediately withdrew all the money from the victim’s account and disappeared. Although initially this scheme yielded good returns, this didn’t last long, since by then many banks had already implemented protection mechanisms to track changes in the unique SIM card number.
In addition, the SIM card-based campaign forced some members of the group and their partners out into the open, and this helped law enforcement agencies to find and identify suspects. Alongside the attempts to “diversify” the business and find new cracks in the defenses of financial businesses, Lurk continued to regularly perform “minor thefts” using the proven method of auto money flow. However, the cybercriminals were already planning to earn their main money elsewhere.

New “specialists”

In February 2015, Kaspersky Lab’s Global Research and Analysis Team (GReAT) released its research into the Carbanak campaign targeting financial institutions.

Carbanak’s key feature, which distinguished it from “classical” financial cybercriminals, was the participation of professionals in the Carbanak team, providing deep knowledge of the target bank’s IT infrastructure, its daily routine and the employees who had access to the software used to conduct financial transactions.

Before any attack, Carbanak carefully studied the target, searched for weak points and then, at a certain moment in time, committed the theft in no more than a few hours.

As it turned out, Carbanak was not the only group applying this method of attack.
In 2015, the Lurk team hired similar experts.

How the Carbanak group operated

We realized this when we found incidents that resembled Carbanak in style, but did not use any of its tools.

This was Lurk.

The Lurk malware was used as a reliable “back door” to the infrastructure of the attacked organization rather than as a tool to steal money.

Although the functionality that had previously allowed for the near-automatic theft of millions no longer worked, in terms of its secrecy Lurk was still an extremely dangerous and professionally developed piece of malware. However, despite its attempts to develop new types of attacks, Lurk’s days were numbered.

Thefts continued until the spring of 2016.

But, either because of an unshakable confidence in their own impunity or because of apathy, day-by-day the cybercriminals were paying less attention to the anonymity of their actions.

They became especially careless when cashing money: according to our incident analysis, during the last stage of their activity, the cybercriminals used just a few shell companies to deposit the stolen money.

But none of that mattered any more, as both we and the police had collected enough material to arrest suspected group members, which happened early in June this year.

No one on the Internet knows you are a cybercriminal?

My personal experience of the Lurk investigation made me think that the members of this group were convinced they would never be caught.

They had grounds to be that presumptuous: they were very thorough in concealing the traces of their illegal activity, and generally tried to plan the details of their actions with care. However, like all people, they made mistakes.

These errors accumulated over the years and eventually made it possible to put a stop to their activity.
In other words, although it is easier to hide evidence on the Internet, some traces cannot be hidden, and eventually a professional team of investigators will find a way to read and understand them. Lurk is neither the first nor the last example to prove this.

The infamous banking Trojan SpyEye was used to steal money between 2009 and 2011.
Its alleged creator was arrested in 2013 and convicted in 2014. The first attacks involving the banking Trojan Carberp began in 2010; the members of the group suspected of creating and distributing this Trojan were arrested in 2012 and convicted in 2014.

The list goes on. The history of these and other cybercriminal groups spans the time when everyone (and members of the groups in particular) believed that they were invulnerable and the police could do nothing.

The results have proved them wrong. Unfortunately, Lurk is not the last group of cybercriminals attacking companies for financial gain. We know about some other groups targeting organizations in Russia and abroad.

For these reasons, we recommend that all organizations do the following:

- If your organization is attacked by hackers, immediately call the police and involve experts in digital forensics. The earlier you go to the police, the more evidence the forensic experts will be able to collect, and the more information the law enforcement officers will have to catch the criminals.
- Apply strict IT security policies on terminals from which financial transactions are made and for the employees working with them.
- Teach all employees who have access to the corporate network the rules of safe online behavior.

Compliance with these rules will not completely eliminate the risk of financial attacks, but it will make things harder for fraudsters and significantly increase the probability of their making a mistake while trying to overcome these obstacles.

And this will help law enforcement agencies and IT security experts in their work.

P.S.: Why does it take so long?

Law enforcement agencies and IT security experts are often accused of inaction, of allowing hackers to remain at large and evade punishment despite the enormous damage caused to their victims. The story of Lurk proves the opposite.
In addition, it gives some idea of the amount of work that has to be done to obtain enough evidence to arrest and prosecute suspects. Unfortunately, the rules of the “game” are not the same for all participants: the Lurk group used a professional approach to organizing a cybercriminal enterprise, but, for obvious reasons, did not find it necessary to abide by the law.

As we work with law enforcement, we must respect the law.

This can be a long process, primarily because of the large number of “paper” procedures and restrictions that the law imposes on the types of information we as a commercial organization can work with. Our cooperation with law enforcement in investigating the activity of this group can be described as a multi-stage data exchange. We provided the intermediate results of our work to the police officers; they studied them to understand if the results of our investigation matched the results of their research.

Then we got back our data “enriched” with the information from the law enforcement agencies. Of course, it was not all the information they could find; but it was the part which, by law, we had the right to work with.

This process was repeated many times until we finally got a complete picture of Lurk activity. However, that was not the end of the case. A large part of our work with law enforcement agencies was devoted to “translating” the information we could get from “technical” into “legal” language.

This ensured that the results of our investigation could be described in such a way that they were clear to the judge.

This is a complicated and laborious process, but it is the only way to bring to justice the perpetrators of cybercrimes.

DoD Taps DEF CON Hacker Traits For Cybersecurity Training Program

Famed capture-the-packet contest technology will become part of DoD training as well.

The Defense Department for the second year in a row sent one of its top directors to DEF CON in Las Vegas this month, but it wasn’t for recruiting purposes. So what was Frank DiGiovanni, director of force training in DoD’s Office of the Assistant Secretary of Defense for Readiness, doing at DEF CON?

“My purpose was to really learn from people who come to DEF CON … Who are they? How do I understand who they are? What motivates them? What sort of attributes” are valuable to the field, says the former Air Force officer and pilot, who heads overall training policy for the military.

DiGiovanni interviewed more than 20 security industry experts and executives during DEF CON. His main question: “If you’re going to hire someone to either replace you or eventually be your next cyber Jedi, what are you looking for?”

The DEF CON research is part of DiGiovanni’s mission to develop a state-of-the-art cyber training program that ultimately helps staff the military, as well as private industry, with the best possible cybersecurity experts, and to fill today’s infamous cybersecurity skills gap.

The program likely will employ a sort of ROTC-style model in which DoD trains the students, who then owe the military a certain number of years of employment.

With the help of DEF CON founder Jeff Moss, DiGiovanni over the past year has met, and then picked the brains of, seasoned hackers and the people who hire them about the types of skills, characteristics, and know-how needed for defending organizations from today’s attackers. DiGiovanni, who is also responsible for helping shape retention and recruitment policy in the DoD, has chatted with CEOs of firms that conduct penetration testing, as well as with pen testers and other security experts themselves, to get a clearer picture of the types of skills DoD should be teaching, testing, and encouraging in future cybersecurity warriors and civilians.

This is the second phase of the development of a prototype cyber training course he spearheads for DoD at Fort McNair: the intensive six-month prototype program currently consists of 30 students from all branches of the military as well as from the US Department of Homeland Security.
It’s all about training a new generation of cybersecurity experts.

The big takeaway from DiGiovanni’s DEF CON research: STEM (science, technology, engineering, and mathematics) skills were not among the top qualities organizations look for in their cyber-Jedis. “Almost no one talked about technical capabilities or technical chops,” he says. “That was the biggest revelation for me.” DiGiovanni compiled a list of attributes for the cyber-Jedi archetype based on his interviews.

The ultimate hacker/security expert, he found, has attributes such as creativity, curiosity, resourcefulness, persistence, and teamwork.

A training-exercise spinoff of DEF CON’s famed capture-the-packet (CTP) contest also will become part of the DoD training program.

DiGiovanni recruited DEF CON CTP and Wall of Sheep mastermind Brian Markus to repurpose his capture-the-packet technology as a training exercise module. “In October, he will submit to the government a repackaged capture-the-packet training capability for DoD, which is huge,” DiGiovanni says.

Also on tap is a capture-the-flag competition, DoD-style, he says.

One of the security experts DiGiovanni met with at DEF CON this year was Patrick Upatham, global director of advanced cybersecurity at Digital Guardian. “I was a little apprehensive at first,” Upatham says. “After learning what they are doing and the approach that they are taking, it totally made sense.”

“He [Frank] is looking for a completely different mindset and background, and [to] then train that person with the technical detail” to do the job, Upatham says. “They are looking for folks who are more resourceful and persistent, and creative in their mindset.”

DoD’s training program is about being more proactive in building out its cybersecurity workforce.

That’s how it has to work now, given that more than 200,000 cybersecurity jobs went unfilled last year.

DoD’s Cyber Mission Force is calling for some 6,200 positions to be filled. The goal is to train that workforce in both offensive and defensive security skills.

That means drilling down on the appropriate problem-based learning, for example.

The current prototype training program doesn’t require a four-year degree, and it’s more of a “journeyman apprentice” learning model, DiGiovanni says. About 80% or so is hands-on keyboard training, he says, with the rest lecture-based. “A lot of the lectures are by the students themselves, with a learn-by-teaching model,” he says.

From 'Cable Dog' To Hax0r

DiGiovanni gave an example of one student in the DoD training program who came in knowing nothing about security.

The young man was a self-professed “cable dog” at Fort Meade, a reference to his job of pulling cable through pipes.

But when he finished the six-month DoD course, he was reverse-engineering malware. “When he came to the course, he didn’t know what a ‘right-click’” of a mouse was, nor did he have any software technology experience, DiGiovanni recalls. “To me, that’s a heck of a success story.” The next step is determining how to scale the DoD training program so that it can attract and train enough cyber warriors for the future.

The goal is to hand off the training program to a partner organization to run it and carry it forward, possibly as early as this fall, he says. Meantime, DiGiovanni says the DEF CON hacker community is a key resource and potential partner. “The security of our nation is at stake.
I think it’s imperative for DoD to embrace the DEF CON community because of the unique skill they bring to the table,” he says. “They want to serve and contribute, and the nation needs them.”

Kelly Jackson Higgins is Executive Editor at DarkReading.com.
She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...

Multiple Apple iOS Zero-Days Enabled Firm To Spy On Targeted iPhone...

Victims of 'lawful intercepts' include human rights activists and journalists, researchers from Citizen Lab and Lookout say.

Apple’s much-vaunted reputation for security took a bit of a beating this week, with two separate reports identifying serious vulnerabilities in its iOS operating system for iPhones and iPads.

One of the reports, from security firm Lookout and the University of Toronto’s Citizen Lab, details a trio of zero-day vulnerabilities in iOS, dubbed Trident, that a shadowy company called the NSO Group has been exploiting for several years to spy on targeted iOS users. The NSO Group is based in Israel but owned by an American private-equity firm. The company has developed a highly sophisticated spyware product called Pegasus that takes advantage of the Trident zero-day exploit chain to jailbreak iOS devices and install malware on them for spying on users.

In an alert this week, security researchers at Citizen Lab and Lookout described Pegasus as one of the most sophisticated endpoint malware threats they had ever encountered.

The malware exploits a kernel base-mapping vulnerability, a kernel memory-corruption flaw, and a flaw in Safari’s WebKit that together let an attacker compromise an iOS device by getting the user to click on a single link. All three are zero-day flaws, which Apple has addressed via its iOS 9.3.5 patch.

The researchers are urging iOS users to apply the patch as soon as possible. Pegasus, according to the security researchers, is highly configurable and is designed to spy on SMS text messages, calls, emails, logs and data from applications like Facebook, Gmail, Skype, WhatsApp and Viber running on iOS devices. “The kit appears to persist even when the device software is updated and can update itself to easily replace exploits if they become obsolete,” the researchers said in their alert. Evidence suggests that Pegasus has been used to conduct so-called ‘lawful intercepts’ of iOS owners by governments and government-backed entities.

The malware kit has been used to spy on a noted human rights activist in the United Arab Emirates, a Mexican journalist who reported on government corruption, and potentially several individuals in Kenya, the security researchers said.

The malware places a heavy emphasis on stealth, and its authors have gone to considerable lengths to ensure that its source remains hidden. “Certain Pegasus features are only enabled when the device is idle and the screen is off, such as ‘environmental sound recording’ (hot mic) and ‘photo taking’,” the researchers noted. The spyware also includes a self-destruct mechanism, which can activate automatically when there is a probability that it will be discovered.

Like many attacks involving sophisticated malware, the Pegasus attack sequence starts with a phishing text, in this case a link in an SMS message, which, when clicked, initiates a sequence of actions leading to device compromise and installation of the malware.

Because of the level of sophistication required to find and exploit iOS zero-day vulnerabilities, exploit chains like Trident can fetch a lot of money on the black and gray markets, the researchers from Citizen Lab and Lookout said.

As an example, they pointed to an exploit chain similar to Trident that sold for $1 million last year.

The second report describing vulnerabilities in iOS this week came from researchers at North Carolina State University, TU Darmstadt in Germany, and University Politehnica of Bucharest. In a paper to be presented at an upcoming security conference in Vienna, the researchers said they focused on iOS’ sandbox feature to see if they could find any security vulnerabilities that could be exploited by third-party applications.

The exercise resulted in the researchers unearthing multiple vulnerabilities that would enable adversaries to launch different kinds of attacks on iOS devices via third-party applications. Among them were attacks that would let someone bypass iOS’ privacy setting for contacts, gain access to a user’s location search history, and prevent access to certain system resources.
In an alert, a researcher who co-authored the paper said that the vulnerabilities have been disclosed to Apple, which is now working on fixing them.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year ...

The Hidden Dangers Of 'Bring Your Own Body'

The use of biometric data is on the rise, causing new security risks that must be assessed and addressed. The term “BYOB” might have more interpretations than you think.
Increasingly, in the area of enterprise security and data, it could mean “bring your own body.” The use of biometric data, in both consumer and enterprise technology, is on the rise.

The average worker in a business environment now generates more types of data more quickly than ever, and at higher volumes.
Increasingly, some of that data might be biometric. To understand the sensitive role of biometric data in enterprise information governance, you first have to understand its basic nature -- mainly, that it is very difficult to alter and often inextricable from the individual it comes from. You can easily change your debit card number if it has been stolen, right? But doing the same for your fingerprints or irises is impossible.

Biometric information doesn’t simply provide a code or number permanently assigned to a person; it provides a measure of that person.

Biometric systems provide data on the fundamental physical identity of the self -- a self that has the right to change jobs, move on from an organization, and still have a reasonable expectation that his or her identity and data will remain protected. So, for professionals who work in information governance, this brings up two critical questions:

1. Who, exactly, has ownership of this data?
2. How should the business manage this data?

The first, unfortunately, is nearly impossible to answer now. Privacy laws for nonmedical biometric data are still nascent in the US, and determining data ownership between the enterprise and the individual can be difficult and is influenced by many variables. Many businesses harbor some sort of biometric data originating from employees.
So, while the first question may remain unanswered now, it’s clear that data management itself must be considered before biometric data becomes more commonplace.

Failure to think about governance and security practices today could mean beginning too late to prevent a breach or misappropriation tomorrow. There may not be that much biometric data currently in the average enterprise, but its use is on the rise.

Both the private and public sectors probably (and legally) have some of your biometric data right now.
If you’ve ever worked for a government-affiliated organization and achieved any type of security clearance, it has your fingerprint data.
If you have a US driver’s license  -- even if you have no criminal record -- there’s a good chance that the FBI is already analyzing your photo for a facial-recognition database.

The information that HR departments handle on a regular basis -- Social Security numbers, home addresses, health insurance details, tax information, etc. -- all pose threats to privacy and security that are practically incomparable to traditionally stolen data types such as credit card numbers. These hypothetical threats may seem nebulous given today’s relatively low use of biometrics in the average business, but they’re still a concern.
If a regular breach of business documents is a disaster, one involving inherently personal data is a legal, monetary, and PR catastrophe. As of 2016, the average cost of a data breach in the US is $4 million, and the average cost of an individual breached business record is $158.
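To put those figures in perspective, here is a back-of-envelope sketch that scales the $158 per-record average cited above. It assumes a purely linear model, which real breach costs do not actually follow; the record count is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope breach cost estimate using the $158-per-record
# average cited above. Illustrative only; real costs are not linear.

AVG_COST_PER_RECORD = 158  # USD, 2016 US average per breached record

def estimated_breach_cost(records_exposed: int) -> int:
    """Naive linear estimate: records exposed times average per-record cost."""
    return records_exposed * AVG_COST_PER_RECORD

# A hypothetical breach of 25,000 records already approaches the
# $4 million average total cost cited above.
print(estimated_breach_cost(25_000))  # 3950000
```

Even under this crude model, a modest number of records carrying unalterable biometric data quickly reaches the cost of an average breach, which is the point the paragraph above is making.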

Because most of these breaches until now have been of more traditional data types such as business records, emails, and financial data, the enterprise should expect increasing costs with the availability of increasingly granular data belonging to individuals.

The most-prized data types currently are those that the individual can’t change; medical records have far surpassed credit card numbers in their value on the black market.
It’s not unlikely that personal biometric data -- especially types that are unalterable -- will have similar value. The most logical first step for today’s information governance professionals would be to simply identify what biometric data may exist within the enterprise.

This can include (but isn’t limited to) the following:

- Fingerprints
- Iris scans/images
- Close-up facial photos
- EEGs (used in neuromarketing research)
- Fitness tracker and heart-rate data
- Personal handwriting and signatures

Once that’s done, mapping the potential locations where that data exists is necessary to determine where the most likely risks lie. Possible places where biometric data resides within the enterprise include:

- File-sharing environments
- Archives and information governance platforms
- Building entry and physical security systems
- Third-party password management software
- Productivity platforms (such as Evernote)
- Scanned and photographed note repositories
- Enterprise social media accounts
- Software-as-a-service products

The key objective for the immediate future is to determine what’s within the realm of control, and how security can be strengthened for the locations most likely to hold sensitive items.
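As a minimal sketch of the inventory step described above, one could walk a file share and flag file types that commonly hold biometric artifacts. The extension-to-category map here is an assumption for illustration (WSQ is a common fingerprint-image format, EDF a common EEG format); a real inventory would rely on content inspection and the governance platforms listed above, not file extensions alone:

```python
import os

# Hypothetical hint table: extensions that *may* indicate biometric
# artifacts. Extensions alone are a coarse heuristic, not a classifier.
BIOMETRIC_HINTS = {
    ".wsq": "fingerprint image (WSQ format)",
    ".jpg": "photo (possible close-up facial image)",
    ".png": "photo (possible close-up facial image)",
    ".edf": "EEG recording (European Data Format)",
}

def inventory_biometric_candidates(root: str) -> list[tuple[str, str]]:
    """Walk `root` and return (path, category) pairs for candidate files."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext in BIOMETRIC_HINTS:
                hits.append((os.path.join(dirpath, name), BIOMETRIC_HINTS[ext]))
    return hits
```

Such a scan only surfaces candidates for human review; deciding which hits actually contain biometric data, and how to govern and secure them, remains the separate policy step the article describes.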

This relatively simple task today will be important for the future, regardless of how common biometric data becomes in business. So “bring your own body” isn’t quite the HR policy violation it sounds like.
It’s a call to action for information governance and security.
It’s time to identify sources of employee biometric data, and to ensure that they are properly governed and secured within enterprise systems.

Kon Leong is CEO/Co-founder of ZL Technologies.

For two decades, he has been immersed in large-scale information technologies to solve "big data" issues for enterprises. His focus for the last 14+ years has been on massively scalable archiving technology to solve records ...