Tag: Tamper

Mass Effect update leaves pirates with rough facial animation

BioWare patches in new, uncracked Denuvo version alongside improvements.

Trump Extends Obama’s EO for Sanctioning Hackers

EO ultimately led to sanctions against Russia for hacking and other attempts to tamper with the outcome of the US election.

Denuvo: Our cracked RE7 protection is still better than nothing [Updated]

DRM says main goal is "keeping the game safe... during the initial sales window"

Virulent Android malware returns, gets >2 million downloads on Google Play

A virulent family of malware that infected more than 10 million Android devices last year has made a comeback, this time hiding inside Google Play apps that have been downloaded by as many as 12...

Go dark with the flow: Lavabit lives again

Another shot at spook-proofing e-mail

It's taken longer than first expected, but the first fruits of Lavabit founder Ladar Levison's Dark Mail Technical Alliance have landed with the relaunch of the encrypted mail service he closed in 2013. After shuttering Lavabit, Levison joined hands with Silent Circle to form the DMTA and promised Lavabit would flow again in 2014. In 2015, Levison posted a GitHub repository putting forward a protocol to support fully “dark” e-mail: the Dark Internet Mail Environment, or DIME, which has “multiple layers of key management and multiple layers of message encryption”. The Libdime implementation offered both libraries and command-line utilities, which is, after all, doing it the hard way: Lavabit Mark II puts that in the hands of users with the also-open-source Magma Webmail server implementation. Magma, the Lavabit mail server, first appeared on GitHub in 2016. Levison writes: “DIME provides multiple modes of security (Trustful, Cautious, & Paranoid) and is radically different from any other encrypted platform, solving security problems others neglect.

DIME is the only automated, federated, encryption standard designed to work with different service providers while minimising the leakage of metadata without a centralised authority.

DIME is end-to-end secure, yet flexible enough to allow users to continue using their email without a Ph.D. in cryptology.” So what's in the protocol? Let's look at the specification, published here (PDF).

DIME's message flow. Image: The Dark Mail Technical Alliance

You don't get perfect security while you've still got wetware involved.

The DIME document notes that if a user has a weak password or bad endpoint security, all bets are off. Within that constraint, the DMTA says DIME is designed to provide “secure and reliable delivery of email, while providing for message confidentiality, tamper protection, and a dramatic reduction in the leakage of metadata to processing agents encountered along the delivery path”. At the top level, the four components of the system architecture are e-mail clients; privacy processing agents; key stores (with a resolver architecture to retrieve keys, in DIME called “signets”); and the encrypted message objects. The Register will assume that, for most users, the only new concept here is the privacy processing agent (PPA).

There are two kinds, the organisational PPA, and the user PPA. The Organisation Privacy Agent (OPA) talks to both user e-mail clients and the Internet at large, handling user key management to create “a secure transit channel that hides all information about the message using transport layer security”.
It also “provides access to the envelope information needed for immediate handling.” The User Privacy Agent (UPA) handles user-side crypto functions, and can reside in the user's e-mail client or, in Webmail implementations, on the server. DIME has three modes of operation: Trustful – the user trusts the server to handle privacy; Cautious – the server stores and synchs encrypted data, including encrypted copies of private keys and messages.

Encryption can be carried out inside a user's browser; Paranoid – the server never sees a user's keys.

There's no Webmail, and if you want to use multiple devices, it's up to you to synch them across different keyrings. In technical terms, that means the system has to automate all aspects of key management; encrypt and sign messages without a user having to learn how to do it; and resist manipulation (including, ideally, even if a client is compromised). The layering of encryption, the standard says, is designed to protect messages even if (for example) a server along the way is compromised. DIME relies on a concept of “signets” for keys: organisational signets, which are keys associated with a domain; and user signets, the key associated with an individual e-mail address. “The basic validation model is to obtain a signet from a credible primary source and then confirm it with another pre-authenticated source.

The two pre-authenticated sources currently available are a management record signed using DNSSEC or a TLS certificate signed by a recognised Certificate Authority (CA).

Both can be cryptographically traced by a signet resolver back to a trusted key that is shipped with the resolver.” As well as the Webmail version, Lavabit says it wants to develop clients for Windows, Mac OS, iOS, Linux and Android. ®
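The dual-source validation model above can be sketched in a few lines. This is a hedged illustration only, not the real libdime API: the function names and fingerprint scheme are invented, and the "pre-authenticated sources" are reduced to a list of fingerprints a resolver has already traced back (via DNSSEC or a CA-signed TLS certificate) to its shipped trust root.

```python
import hashlib

# Illustrative sketch of DIME's signet validation: a signet from a primary
# source is accepted only if its fingerprint matches what the primary
# claimed AND at least one pre-authenticated source confirms it.
# All names here are hypothetical, not the actual libdime interface.

def fingerprint(signet_bytes: bytes) -> str:
    return hashlib.sha512(signet_bytes).hexdigest()

def validate_signet(signet_bytes: bytes, primary_fp: str,
                    preauth_fps: list) -> bool:
    """Cross-check the primary source against pre-authenticated sources."""
    fp = fingerprint(signet_bytes)
    if fp != primary_fp:
        return False                     # primary source disagrees
    return any(fp == confirmed for confirmed in preauth_fps)

signet = b"user-signet-for-alice@example.com"
fp = fingerprint(signet)
assert validate_signet(signet, fp, [fp])           # confirmed by second source
assert not validate_signet(signet, fp, ["bogus"])  # no second source agrees
```

The point of requiring two independently rooted sources is that an attacker must compromise both the primary channel and a DNSSEC or CA trust chain to pass off a forged key.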

Skills gap could hold back blockchain, AI, IoT advancements in 2017

Blockchains, AI, the internet of things (IoT), and other emergent technologies are all going to be major forces in the coming year, notes IT industry analyst firm CompTIA in its IT Industry Outlook 2017 report. But these new-school technologies are hemmed in by some of the industry's oldest and most pervasive problems: a lack of qualified people to make the most of them, laggardly approaches to security, and lingering questions about whether they are solutions still looking for a good problem.

What do we do with this?

In CompTIA's eyes, the emergent technologies of the coming year include software-defined components (the enablers of "hyperconverged infrastructure"), blockchain technology, and machine learning/artificial intelligence. As with the cloud environments most of these will prosper in, they're "primarily focused on the back end, and [we] will see initial adoption at the enterprise level before moving downstream into the SMB space." The hard part will be figuring out where they're genuinely useful. Blockchain's original name-brand application, bitcoin, has turned out to be only one of many possible uses for the tech, with automated contracts or tamper-proof databases on the list as well. Right now it's the biggest of the boys, the likes of IBM and Microsoft, that are making blockchain useful; when this trickles down to small businesses, it'll likely be through tech those companies are already familiar with -- such as databases -- and not necessarily entirely new applications. There's little question that machine learning and AI have achieved prominence and are already at work enriching many different kinds of applications. But CompTIA asserts that ML/AI "will provide a new layer for technology interaction" -- that is, the manner in which AI can enhance the utility of any given tech is still largely unexplored territory.
IoT, long believed to be a game-changer, also figured into CompTIA's analysis, but with plenty of caveats: "The complexity of IoT and the regulations and protocols required for integration will drive a long adoption cycle." Add to that a patchwork of competing standards, a lack of good expertise, and threats galore -- all good reasons why IoT still remains better in theory than in practice.

Experts (still) wanted

The lack of good expertise isn't confined to IoT; CompTIA sees the difficulties of finding the right people to work with emerging technologies across IT generally. "Finding workers with expertise in emerging tech fields" made the top slot on the report's list of "factors contributing to a more challenging hiring landscape in 2017," ahead of issues like "insufficient pool of talent in locale" and "competing with other tech firms." CompTIA's list of "emerging job titles to watch for" features many AI and data-centric specializations: chief data officer, data architect, AI/machine learning architect, and data visualizer. The title of "cloud services engineer" could also in theory be stretched to include ML/AI, as cloud platforms offer more. Machine learning may be getting easier to grasp thanks to such services (and the open source toolkits from which many of them spring), but they still require knowledgeable developers to get the most from them. Other specializations on the list reflect CompTIA's belief that many technologies that have become staple presences in the last couple of years still lack for manpower. Data visualization, for instance, is not new, but the manner in which data is accumulated and analyzed in the enterprise -- and the breadth of new tools available -- has given analytics a new prominence and a need for data visualization experts.
(CompTIA also sees room for data admins moving out of the dev side and interacting more directly with business units to "focus on aggregation, analysis, and visualization.") Likewise, container technology existed before Docker, but Docker consumerized it and made it broadly accessible, spurring a need for quality talent ("container developer," on CompTIA's list) to make sense of it and put it to good use.

Risky business

Another major skill set on CompTIA's list is security. "Computer security incident responder," "risk management specialist," and "information assurance analyst/security auditor" all show up there, a reflection of CompTIA's sentiment that security for enterprises and small businesses will "get worse before it gets better." As bad as the "headline-making breaches" of the past few years have been, big companies have shown they can shrug off the costs, and the report projects that any event that "creates a tipping point will need to have greater consequences before there is a broad shift in transforming security technology, processes, and education." One possible reason why many outfits "play catchup on recent technology moves" with security, as CompTIA puts it, is the disproportionate distribution and effects of security breaches. Health care companies, for instance, have been hammered heavily by attacks because of the payoffs involved, and they stand to be the most targeted sector for such attacks in 2017.

Google floats prototype Key Transparency to tackle secure swap woes

♪ I've got the key, I've got the secreeeee-eeet ♪

Google has released an open-source technology dubbed Key Transparency, which is designed to offer an interoperable directory of public encryption keys. Key Transparency offers a generic, secure way to discover public keys.

The technology is built to scale up to internet size while providing a way to establish secure communications through untrusted servers.

The whole approach is designed to make encrypted apps easier and safer to use. Google put together elements of Certificate Transparency and CONIKS to develop Key Transparency, which it made available as an open-source prototype on Thursday. The approach is a more efficient means of building a web of trust than older alternatives such as PGP, as Google security engineers Ryan Hurst and Gary Belvin explain in a blog post. Existing methods of protecting users against server compromise require users to manually verify recipients' accounts in-person.

This simply hasn't worked. The PGP web-of-trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won't use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge. Key Transparency aims to make the relationship between online personas and public keys "automatically verifiable and publicly auditable" while supporting important user needs such as account recovery. "Users should be able to see all the keys that have been attached to an account, while making any attempt to tamper with the record publicly visible," Google's security boffins explain. The directory will make it easier for developers to create systems of all kinds with independently auditable account data, Google techies add.
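The "publicly auditable" property described above can be illustrated with a toy log. To be clear, the real Key Transparency design builds on the Merkle-tree structures of Certificate Transparency and CONIKS; this sketch merely hash-chains entries to show why any attempt to rewrite an account's key history becomes detectable by an auditor.

```python
import hashlib
import json

# Toy tamper-evident key directory: every (account, key) change is
# appended to a hash chain, so auditors who recompute the chain will
# catch any rewritten entry. Not the actual Key Transparency protocol.

def entry_hash(prev_hash: str, account: str, key: str) -> str:
    record = json.dumps([prev_hash, account, key]).encode()
    return hashlib.sha256(record).hexdigest()

class KeyLog:
    def __init__(self):
        self.entries = []       # (account, key, running hash)
        self.head = "0" * 64

    def publish(self, account: str, key: str) -> None:
        self.head = entry_hash(self.head, account, key)
        self.entries.append((account, key, self.head))

def audit(entries) -> bool:
    """Recompute the chain; tampering with any entry breaks every
    later recorded hash."""
    head = "0" * 64
    for account, key, recorded in entries:
        head = entry_hash(head, account, key)
        if head != recorded:
            return False
    return True

log = KeyLog()
log.publish("alice@example.com", "pubkey-v1")
log.publish("alice@example.com", "pubkey-v2")   # key rotation stays visible
assert audit(log.entries)

tampered = [(a, "evil-key" if i == 0 else k, h)
            for i, (a, k, h) in enumerate(log.entries)]
assert not audit(tampered)
```

A user (or an auditor acting for them) can thus see every key ever attached to their account, which is exactly the guarantee the Google engineers describe.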

Google is quick to emphasise that the technology is very much a work in progress. "It's still very early days for Key Transparency. With this first open-source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they explain. The project so far has already involved collaboration from the CONIKS team, Open Whisper Systems, as well as the security engineering teams at Yahoo! and internally at Google. Early reaction to the project from some independent experts such as Matthew Green, a professor of cryptography at Johns Hopkins University, has been positive. Kevin Bocek, chief cyber-security strategist at certificate management vendor Venafi, was much more sceptical. "Google's introduction of Key Transparency is a 'build it and hope the developers will come' initiative," he said. "There is not the clear compelling event as there was with Certificate Transparency, when the fraudulent issuance of digital [certificates] was starting to run rampant. Moreover, building a database of public keys not linked to digital certificates has been attempted before with PGP and never gain[ed] widespread adoption." ®

Azure Security Center Now Guards Windows Server 2016 VMs

Microsoft has added Windows Server 2016, its latest server operating system, to the roster of virtual machines supported by its Azure Monitoring Agent cloud-based threat protection offering. With the holidays out of the way, Microsoft has returned to r...

Efforts to Improve IoT Security Advance in 2017

The U.S. Federal Trade Commission announces the IoT Home Inspector Challenge, while the Online Trust Alliance aims to improve security with a new version of the IoT Trust Framework. The emerging internet of things (IoT) world is rapidly taking shape, and with it have come a host of security-related concerns and challenges. Multiple organizations and vendors are working hard to help improve the state of IoT security with new initiatives that are being announced this week.

FTC

Among the biggest security issues that face consumers of IoT are unpatched devices that are at risk from security vulnerabilities. On Jan. 4, the U.S. Federal Trade Commission (FTC) announced a new IoT challenge to help improve security in connected home devices. The goal of the IoT Home Inspector Challenge is to develop some form of technology tool that can help protect consumers against the risks posed by out-of-date software that runs on IoT devices. Those risks also include the challenge of dealing with hard-coded and factory default passwords that are embedded in devices. The top prize in the contest is $25,000, with up to three honorable mention winners that will be awarded $3,000 each. Submissions to the contest will be accepted by the FTC starting on March 1, and the deadline for final submissions is May 22 at 12:00 p.m. EDT. The FTC expects to announce the winners of the contest on or about July 27, 2017. "Every day American consumers are offered innovative new products and services to make their homes smarter," Jessica Rich, Director of the Federal Trade Commission's Bureau of Consumer Protection, said in a statement. "Consumers want these devices to be secure, so we're asking for creativity from the public – the tinkerers, thinkers and entrepreneurs – to help them keep device software up-to-date."

Online Trust Alliance

The Online Trust Alliance (OTA) updated its IoT Trust Framework on Jan. 5, providing guidance on how to develop secure IoT devices and assess risk. "The IoT Trust Framework is a good example of the security culture that is needed in the connected devices space," Olaf Kolkman, Chief Internet Technology Officer for the Internet Society, said in a statement. "If companies are in the business of selling smart devices, they need to implement the requirements outlined in this framework before calling them smart." The framework comprises four key areas to help provide structure to understanding how to properly implement IoT security. The first category is security principles, which outline best practices for secure code development and deployment. The second category details requirements for user access and credentials security. The third area in the IoT Trust Framework is about privacy, disclosures and transparency. Among the required disclosures suggested by the OTA is for vendors to include disclosures around the impact to product features or functionality if connectivity is disabled. The fourth core category in the IoT Trust Framework defines notifications and related best practices for IoT security. "These principles include requiring email authentication for security notifications," the OTA Trust Framework states. "In addition, messages must be written for maximum user comprehension, and tamper-proof packaging and accessibility considerations are recommended." The OTA's attempt at helping to define IoT security is one of many efforts in the market to develop guidelines for secure IoT devices. In October 2016, the Cloud Security Alliance released a detailed 75-page report for the development of secure IoT products. Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.

5 enterprise-related things you can do with blockchain technology today

Diamonds. Bitcoin. Pork. If you think you’ve spotted the odd one out, think again: All three are things you can track using blockchain technologies today. Blockchains are distributed, tamper-proof, public ledgers of transactions, brought to public attention by the cryptocurrency bitcoin, which is based on what is still the most widespread blockchain. But blockchains are being used for a whole lot more than making pseudonymous payments outside the traditional banking system. Because blockchains are distributed, an industry or a marketplace can use them without the risk of a single point of failure. And because they can’t be modified, there is no question of whether the record keeper can be trusted. Those factors have prompted a number of enterprises to build blockchains into essential business functions, or at least to test them there. Here are five ways your business could use blockchain technology today.

Making payments

Bitcoin introduced the first blockchain as a tool for making payments without going through the banks. But what if you work for a bank? Strangely, many of the features that made bitcoin distasteful to the banks are making the underlying blockchain technology attractive as a way to settle transactions among themselves in dollars or sterling. It’s public, so banks can see whether their counterparties can afford to settle their debts, and distributed, so they can settle faster than some central banks will allow. Ripple is one of the first such blockchain-based settlement mechanisms: Its banking partners include UBS, Santander, and Standard Chartered. But UBS and Santander are also working on another blockchain project called Utility Settlement Coin, which will allow them to settle payments in multiple currencies, with Deutsche Bank, BNY Mellon, and others. If these systems catch on, it’s surely only a matter of time before such blockchain payments trickle down to compete with traditional inter-bank transfer mechanisms such as SWIFT.
Identity of Things

On the internet, famously, no one knows if you’re a dog, and on the internet of things, identity can be similarly difficult to pin down. That’s not great if you’re trying to securely identify the devices that connect to your network, and it’s what prompted the U.S. Department of Homeland Security to fund a project by Factom to create a timestamped log of such devices in a blockchain, recording their identification number, manufacturer, available device updates, known security issues, and granted permissions. That could all go in a regular device-management database, but the DHS hopes that the immutability of the blockchain will make it harder for hackers to spoof known devices by preventing them from altering the records.

Certifying certificates

It’s not just devices that can be spoofed, but also qualifications. If you were looking to hire someone with blockchain expertise, and the applicant told you they had a professional certification, what would you do to check the certificate’s validity? Software developer Learning Machine hopes candidates will present their certificates in its mobile app, and that you will check their validity using Blockcerts. This is a way of storing details of a certificate in the blockchain, so that anyone can verify its content and the identity of the person to whom it was issued without the need to contact a central issuing authority. The certificates can cover educational qualifications, professional training, membership of a group -- anything -- so if your organization issues certificates, you could issue them on the blockchain, too. Learning Machine and co-developer MIT Media Lab have published details of Blockcerts as an open standard and posted the code to GitHub.

Diamonds are forever

Diamonds, they say, are forever, so that means whatever system you use to track them is going to have to stand the test of time too.
Everledger is counting on blockchain technology to prove the provenance and ownership of diamonds recorded in its ledger. In fact, it’s using two blockchains: A private one to record information that diamond sellers need to share with buyers, but may not want widely known, and the public bitcoin blockchain to provide an indisputable timestamp for the private records. The company built its first diamond database on the Eris blockchain application platform developed by Monax but recently moved to a system running in IBM’s Bluemix cloud. Diamonds are eminently traceable as the uncut ones have unique physical characteristics and the cut ones are, these days, typically laser-etched with a tiny serial number. Recording each movement of such valuable items allows insurers to identify fraud and international bodies to ensure that trade in diamonds is not funding conflicts. Everledger CEO Leanne Kemp believes the system could transform trade in other valuable commodities, too. The company has identified luxury goods and works of art as possibilities.

And finally, the pork

But what about the pork? It may not be worth as much by weight as a diamond, but in China at least, it more than makes up for that in volume. And because pork is not forever, being able to demonstrate that a particular piece of it is fresh and fit for consumption can be vital. Pork is one of many products for which fine-grained tracking and tracing of inventory can be helpful, and happens to be the one Walmart is testing blockchain technology with. It’s using IBM’s blockchain to record where each piece of pork it sells in China comes from, where and how it is processed, its storage temperature and expected expiration date. If a product recall becomes necessary, it will be able to narrow down the batches affected and identify exactly where they are or, if they have already been sold, who bought them.
The project may extend to other products: The company has just opened the Walmart Food Safety Collaboration Center (WFSCC) in China to work with IBM and industry partners to make food supplies safer and healthier using blockchain technology.
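The "tamper-proof ledger" property that all five use cases rely on comes from one mechanism: each block commits to the hash of its predecessor. The following toy sketch (not any production blockchain — no consensus, signatures, or mining) shows why rewriting a past record, say a pork provenance entry, breaks verification for every later block.

```python
import hashlib
import json

# Toy hash-chained ledger: each block stores the hash of the previous
# block, so altering an earlier record invalidates every link after it.
# Purely illustrative; real blockchains add consensus and signatures.

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain: list = []
append_block(chain, ["pork batch 17: farm A -> processor B"])
append_block(chain, ["pork batch 17: processor B -> store C"])
assert verify(chain)

chain[0]["transactions"][0] = "pork batch 99: unknown origin"  # tamper
assert not verify(chain)
```

Note that this only makes tampering *evident*, not impossible: an attacker who controls the whole chain could rewrite everything, which is why real deployments distribute copies across parties (or, as Everledger does, anchor private records to a public chain).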

A Rowhammer ban-hammer for all, and it’s all in software

Sorry to go all MC Hammer on you, but boffins tell bit-flippers 'you can't touch this'

A group of German researchers reckon they've cracked a pretty hard nut indeed: how to protect all x86 architectures from the “Rowhammer” memory bug. It's been 18 months since “Rowhammer” first emerged, and responses have largely come from individual vendors working out how to block the “bit-flipping” attacks in their own environments. LWN reported last October that Linux kernel gurus are working on a generic defence that would work in their environment – but it would be even better if the whole x86 world could be protected. That's what the authors of this ArXiv paper claim: a software-only defence for x86 and ARM machines that isolates the memory of different system entities – for example, the kernel from the user space. The researchers hail from Germany's Technical University of Darmstadt and the University of Duisburg-Essen. To understand the protection, a Rowhammer refresher is probably useful, because if you can force corruption into DRAM, you can theoretically find a way to take over the kernel. It was Google's Project Zero that changed theory into proof-of-concept, and as The Register's Iain Thomson reported at the time: “The technique, dubbed 'rowhammer', rapidly writes and rewrites memory to force capacitor errors in DRAM, which can be exploited to gain control of the system. By repeatedly recharging one line of RAM cells, bits in an adjacent line can be altered, thus corrupting the data stored. “This corruption can lead to the wrong instructions being executed, or control structures that govern how memory is assigned to programs being altered – the latter case can be used by a normal program to gain kernel-level privileges.” With access to the physical RAM, the Project Zero attackers could then bypass memory protection and security mechanisms, and tamper with operating system structures to take over the machine.
As the new paper's authors write, most protections against Rowhammer have involved either modifying hardware or running heuristics-based counters against CPUs to raise alerts. The Duisburg-Essen group has taken a different approach: their G-CATT (Generic CAn't Touch This) is built on their x86-only B-CATT (Bootloader CAn't Touch This), which extended the bootloader to disable vulnerable physical memory. The bootloader approach, however, was Rowhammer-specific: “it does not yet tackle the fundamental problem of missing memory isolation in physical memory” – which is why the researchers extended their work to try to make it “generic”. G-CATT takes a different angle: instead of isolating memory, it defeats Rowhammer by stopping an attacker from exploiting its effects, by ensuring attackers can only flip bits in memory already under their control. That restriction “tolerates Rowhammer-induced bit flips, but prevents bit flips from affecting memory belonging to higher-privileged security domains” (such as the OS kernel, or co-located virtual machines); “to do so, G-CATT extends the physical memory allocator to partition the physical memory into security domains.”

B-CATT and G-CATT block diagrams

It's no good if the defence renders the system unusable, so the researchers also ran a bunch of benchmarks on their protected machines (using both B-CATT and G-CATT under Linux). Under SPEC2006 (which tests performance on a bunch of different programs), the average B-CATT performance hit was 0.43 per cent and 0.49 per cent for G-CATT. The Phoronix benchmark performance was better – B-CATT slowed things down by 0.19 per cent and G-CATT by 0.33 per cent. The two systems ran well on file and VM latency tests, and the two protection schemes had minimal impact on local bandwidth and memory latency. ®
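The partitioning idea can be pictured with a simulation. Rowhammer only flips bits in DRAM rows *physically adjacent* to rows the attacker hammers, so if the allocator keeps at least one unused guard row between security domains, attacker-induced flips land only in the attacker's own domain. This is a hedged toy model — row counts, names, and granularity are invented, and the real G-CATT operates inside the kernel's physical memory allocator.

```python
# Toy model of domain-partitioned row allocation in the spirit of G-CATT.
# Hypothetical layout, not the paper's actual allocator code.

ROWS = 16  # tiny pretend DRAM bank

def allocate(domains: dict) -> list:
    """Place each domain in contiguous rows, separated by guard rows."""
    layout = [None] * ROWS
    row = 0
    for name, count in domains.items():
        for _ in range(count):
            layout[row] = name
            row += 1
        row += 1                       # leave one guard row between domains
    return layout

def reachable_by_rowhammer(layout: list, attacker: str) -> set:
    """Domains owning rows adjacent to any attacker-controlled row."""
    hit = set()
    for i, owner in enumerate(layout):
        if owner == attacker:
            hit.update(x for x in (i - 1, i + 1) if 0 <= x < ROWS)
    return {layout[i] for i in hit} - {attacker, None}

layout = allocate({"kernel": 4, "user": 6})
# With guard rows in place, hammering user rows can't reach kernel rows.
assert reachable_by_rowhammer(layout, "user") == set()
```

In the real system the flips still happen; the defence simply guarantees they can only corrupt memory the attacker already controls, which is why the paper describes it as *tolerating* rather than preventing bit flips.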

How to seek and destroy advanced persistent threats

With the rise of ransomware against hospitals, attacks against the Democratic National Committee (DNC), and even major tech CEOs getting hacked, no one is immune to having their information stolen.

Today’s attackers are sophisticated, state sponsored, and armed with advanced techniques to hit specific targets.

Despite an estimated $75 billion per year spent on security, adversaries dwell undetected in networks for an average of 146 days—exposing organizations to massive theft and business disruption. Organizations must work to close this gap and thwart advanced threats before damage and loss occur. Endgame helps organizations close the protection gap with a unified platform for preventing, detecting, hunting, and responding to advanced threats at the earliest stages of a cyberattack.

Endgame enables Security Operations Center (SOC) and Incident Response (IR) teams to automate the threat hunting process, from asset discovery to response, dramatically reducing the time to detection and remediation.

Time ordinarily spent on forensic analysis and compromise assessment can be shifted to attack detection and response, pre-empting advanced attacks, and discovering and eliminating intruders before they cause damage and loss. The Endgame platform does not depend on signatures or indicators of compromise (IOCs). Rather, it draws on machine learning and other advanced analytics to detect not only malware, but patterns and signals of maliciousness.

The Endgame platform is built to discover attackers even when they are dormant, as in the case of the DNC attacks, or missed by traditional signature-based tools, reducing the time and cost associated with incident response and compromise assessment. The hunt for APTs The DNC attack was attributed to two different groups: APT28, or Fancy Bear, and APT29, or Cozy Bear.

APT28 is a Russian-based threat actor that has been active since the mid-2000s.

APT29 is the adversary group that last year successfully infiltrated the unclassified networks of the White House, State Department, and U.S. Joint Chiefs of Staff.

Both APT28 and APT29 have been responsible for targeted intrusion campaigns against the aerospace, defense, energy, government, and media sectors, among many others. Imagine a large enterprise with a SOC team consisting of tier 1, tier 2, and tier 3 security analysts.

The tier 1 and tier 2 analysts are focused on monitoring the flood of alerts and determining their root causes; high-priority alerts are then escalated to tier 3 analysts for resolution. Now imagine this large enterprise is at the receiving end of targeted attacks by groups using techniques similar to those of APT28 Fancy Bear and APT29 Cozy Bear. Unfortunately, this large enterprise is likely susceptible to being breached because techniques used by APT28 and APT29 can bypass their existing security products, and there are no known indicators of compromise (IOCs) or signatures of these attacks that this large enterprise can rely on. Using the recent DNC attack from APT29 Cozy Bear as an example, we’ll demonstrate how Endgame’s platform empowers security analysts to find never-before-seen, persistent attackers and evict them. With Endgame’s platform deployed, we’ll detect these attackers hiding in the network and remove them before critical assets are stolen.

Figure 1: From the Endgame dashboard, security analysts can survey, investigate, and remediate endpoints.

The top-right corner shows new alerts and the number of endpoints on which Endgame is deployed.

Surveying the network

Endgame operates with the basic premise that enterprises have already been compromised, and therefore, SOC teams should proactively look for threats within their network before damage and loss of information occurs. With Endgame, analysts can simply enter network IP address information to scan the network and gain situational awareness of endpoints and device connections. Once the security analyst identifies all critical assets that need monitoring, they can deploy the lightweight Endgame sensor to Windows and Linux endpoints with a single click.

Endgame also supports out-of-band deployment through endpoint management tools such as Microsoft’s System Center Configuration Manager and IBM’s BigFix. There are two modes of operation for Endgame sensors: dissolvable or persistent.

The analyst can choose to use a dissolvable sensor to conduct a short-term hunt mission or leverage the persistent mode to continuously monitor and explore the enterprise network. Endgame operates in stealth mode to hide its presence from the adversary.

This means the adversary will not find an Endgame process running on the system and will not be able to disable or tamper with it. Once the sensor is deployed, the analyst begins receiving alerts, generated by Endgame's heuristics and behavioral detections when attacker techniques are encountered.

Hunting for malicious persistence

In the recent DNC hack, APT29 persisted in the network for almost a year.
Attackers commonly use sophisticated persistence techniques to maintain an ongoing presence on compromised systems even when those systems are shut down and restarted. Persistence is a characteristic of every major compromise, and attackers constantly rotate and evolve persistence techniques to evade detection and stay hidden in an environment after every reboot. The APT29 attack relied primarily on the SeaDaddy implant, written in Python, and a PowerShell backdoor, with persistence accomplished via the Windows Management Instrumentation (WMI) service, which allowed the attacker to launch malicious code automatically after a specific period of system uptime or on a specific schedule. Endgame's platform covers more than 27 persistence techniques without relying on IOCs or signatures. Many of these techniques are enumerated in the MITRE ATT&CK matrix, a framework widely used by practitioners to categorize the techniques attackers use when operating in an enterprise network.
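WMI event-subscription persistence of the kind described above pairs a trigger (an `__EventFilter`) with a payload (typically a `CommandLineEventConsumer`). A hedged Python sketch of how a hunt might flag suspicious bindings follows; the records here are mocked (on a live Windows host they would come from a WQL query such as `SELECT * FROM __FilterToConsumerBinding` in the `root\subscription` namespace), and the names and command lines are illustrative only:

```python
# Sketch: flag suspicious WMI event-subscription persistence.
# Substrings that suggest a consumer is launching a script or interpreter.
SUSPICIOUS_MARKERS = ("powershell", "cmd.exe /c", "wscript", ".py")

def flag_wmi_bindings(bindings):
    """Return names of bindings whose consumer command looks like a script launcher."""
    hits = []
    for b in bindings:
        cmd = b["consumer_command"].lower()
        if any(marker in cmd for marker in SUSPICIOUS_MARKERS):
            hits.append(b["name"])
    return hits

# Mocked subscription records (purely illustrative data).
bindings = [
    {"name": "BVTFilter", "consumer_command": "powershell -enc JABz..."},
    {"name": "SCM Event Log", "consumer_command": "C:\\Windows\\System32\\scrcons.exe"},
]
```

String matching alone is noisy in practice; a production analytic would also weigh how rare the binding is across the fleet, which is exactly what the histogram investigation described later does.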

Endgame shares its discoveries of new attack techniques with MITRE to ensure that enterprise defenders are fully informed about new adversary behavior, including the recent addition of a new persistence technique to the model. With Endgame deployed on enterprise endpoints, the security teams receive alerts on suspicious activity.

Alerts provide the filename, full path, Endgame MalwareScore, and MD5 hash needed to identify malicious activity.
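The MD5 hash in an alert serves as a file identifier that analysts can reproduce or pivot on. A minimal sketch of computing it (MD5 is used here for identification, not as a security primitive):

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the MD5 hex digest of a file, reading in chunks to bound memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The resulting digest can be compared against alert metadata or threat-intelligence feeds.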

Further, SOC teams can run automated investigations to hunt for persistence techniques, such as those used by APT29. Endgame's advanced analytics detect suspicious persistence locations to quickly narrow the scope of the problem.
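One common analytic behind this kind of hunt is frequency ("stacking") analysis: persistence entries present across most of the fleet are probably benign, while entries seen on only one or two endpoints merit investigation. A simplified Python sketch, with illustrative fleet data that is not from the article:

```python
from collections import Counter

def rare_entries(endpoint_persistence, max_count=1):
    """Flag persistence entries observed on at most max_count endpoints.

    endpoint_persistence maps hostname -> set of persistence entries
    (e.g., autorun paths, WMI consumers, scheduled-task commands).
    """
    counts = Counter(
        entry for entries in endpoint_persistence.values() for entry in entries
    )
    return sorted(e for e, c in counts.items() if c <= max_count)

# Illustrative fleet: a ubiquitous autorun plus one outlier on a single host.
fleet = {
    "host-01": {"OneDrive.exe", "implant.ps1"},
    "host-02": {"OneDrive.exe"},
    "host-03": {"OneDrive.exe"},
}
```

This is the logic behind the histogram of unique occurrences discussed below: the analyst reads the rare tail of the distribution rather than every entry.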

By providing this powerful, comprehensive automated capability, Endgame saves analysts from manually exporting data and analyzing it in spreadsheets to determine suspicious behavior, significantly reducing time to detection.

Figure 2: With a few clicks, SOC analysts can launch investigations across a variety of resources to hunt for malicious behavior and infected boxes.

The analysis, shown below, results in a histogram view of unique occurrences among a population of endpoints. With this view, we can see that Endgame was able to detect a novel persistence technique used by APT29 without prior knowledge and with only a few clicks.

Figure 3: Security analysts use Endgame's investigation view to identify malicious persistence and take action.

Hunting for malicious files

In addition to the hunt for the APT29 persistence technique on compromised systems, the launch of the malicious code will trigger a prioritized alert.

By clicking on the alert, the security analyst can view the Endgame MalwareScore, a confidence score indicating how likely the file is to be malicious. Our signature-less malware classification capability runs a gradient-boosted decision tree model on the lightweight sensor, with low false-positive rates.

To investigate further, the security analyst can launch a hunt for running processes, find the malware and the infected endpoint, and kill the process. With a few clicks, a tier 1 analyst can not only investigate the alert but also remediate any issues with zero business disruption. Endgame gives SOC and IR teams a combination of prevention, detection, and response capabilities to protect their infrastructures.

Beyond the examples outlined above, Endgame allows analysts to further their investigations by executing additional hunts for persistence, privilege escalation, defense evasion, or credential theft.

Furthermore, the analyst can collect information on network connections, files, processes, registries, system configurations, and users to provide contextual information for more extensive investigation.

Endgame presents all of the data for an investigation in a single real-time view for analyst review. By automating the hunt, Endgame allows organizations to protect themselves against both known and never-before-seen attacks. Most important, it allows analysts to identify and thwart these attacks before damage and loss occur.

Jian Zhen is senior vice president of products at Endgame.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth.

The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers.
InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Send all inquiries to newtechforum@infoworld.com.