
Tag: Least Privilege

Security hardened, pah! Expert doubts Kaymera's hardened Google Pixel

Kaymera: building on the shoulders of a giant, it claims

The arrival of a security-hardened version of Google's supposed "iPhone killer" Pixel phone from Kaymera has received a sceptical reception from one expert. The Kaymera Secured Pixel is outfitted with Kaymera's own hardened version of the Android operating system and its security architecture. This architecture is made up of four layers: encryption, protection from malicious downloads, a prevention layer that monitors for unauthorised attempts to access OS functions (such as the microphone, camera or GPS), and a detection and enforcement layer that monitors, detects and blocks malicious code or misbehaving apps.
Independent mobile security experts have questioned whether the technology offers much by way of benefits over native Pixel smartphones. Professor David Rogers, chief executive of Copper Horse and a lecturer in mobile systems security at the University of Oxford, questioned what exactly is new. "Many of the proposed functions are already built into Pixel (examples below), so what are the extra benefits Kaymera offers?" For example, Pixel has full device encryption and file-based encryption, backed by TrustZone. Plus, as it's Google's own phone, Pixel is first in line for patching - an important security defence in itself. "Pixel has many other functions and capabilities built over many years including Position Independent Execution (PIE), Address Space Layout Randomisation (ASLR), SELinux and so on," Rogers added. Kaymera responded that its kit offered benefits on this front by enforcing security controls built into Pixel but not actually enforced. Oded Zehavi, Kaymera chief operating officer, told El Reg: "In places where Google has good enough security, we leverage the existing functionality (in many of the examples given here, the functionality is not actually enforced.
In these cases we enforce and prevent disabling of the security functionality by negligent users or malicious hackers)." Third parties building on Google's security do not have a good track record in this space (Blackphone included) when it comes to getting their own code secure, properly tested and updated. Rogers is unconvinced that Kaymera will do any better with hardening Pixel than others have done with hardening Android. Zehavi responded that Kaymera devices have been tested to the most rigorous standards by governments around the world. "As a philosophy we always have more than one security layer against any attack vector, hence we don't trust any single security measure, including Google's security measures.

For example, our prevention layer feeds with fake resources any payload that may overcome the OS hardening and get loaded onto the device,” Zehavi said. Rogers remains unconvinced about the security proposition of the Kaymera Secured Pixel, especially in the absence of NCSC certification or US security certification.
It's more like "some kind of Chimera rather than a Kaymera," he cuttingly concluded. "If Kaymera really want to protect against comms interception, low-level malware attacks and so on, they would have to build some kind of firewall and introspection capability," Rogers said. "To do that they would need access inside the Radio Interface Layer and also to processes and app data." "Google's security architecture does not allow this unless you 'roll your own' in a big way, creating your own device and modifying the AOSP [Android Open Source Project] code to deliver a bespoke device," he added. Creating a bespoke device risks undoing Google's security controls, Rogers warned. "Application sandboxing and isolation are there for a reason, including enforcing the Principle of Least Privilege," he said. The Israeli manufacturer said it had been careful to add extra security without breaking Google's existing controls. Zehavi explained: "Even though we embed our code deep into the AOSP code, in layers that are beyond what regular applications can reach, we do not break any existing Google security measures, including the sandboxing etc.
Instead, we add extra measures across the board that, as mentioned, leverage the existing mechanisms but bring the device to a totally different level of security which cannot be achieved via the application layer alone." Rogers responded: "They admit to using AOSP, which I guess means they self-sign the build of the device themselves.

That then comes down to a question of trust in who is digitally signing the product (that gives that signer access to absolutely everything, the radio path, the private data, the lot)." The Kaymera Secured Pixel is aimed at business and government customers prepared to pay extra to avoid the security weaknesses associated with the off-the-shelf Android operating system.

The device retains the original Google device's purpose-built hardware, features and ergonomics. Users can, for example, still use the fingerprint scanner. Kaymera devices are centrally managed via the company's management dashboard, enabling easy enforcement of security policies on the smartphone. Kaymera's secured Pixel phone is available immediately. Kaymera was started in late 2013 by the founders of NSO, the surveillance tech provider whose iPhone spyware was used to target the phone of UAE human rights activist Ahmed Mansoor in August 2016. The spyware prompted Apple to rush out emergency software patches to plug vulnerabilities in its iOS mobile operating system. The Israeli firm is open about its roots.
If NSO is a 'poacher', selling surveillance tools to governments, then Kaymera is the gamekeeper, its pitch runs. "I'm not sure I can buy in to the poacher turned gamekeeper thing here and I would rather trust Google in this case," Rogers concluded. ®

10 key security terms devops ninjas need to know

It’s no secret that devops and IT security, like oil and water, are hard to mix.

After all, devops is all about going fast, while security is all about proceeding carefully. However, both devops and security serve a higher authority—the business—and the business will be served only if devops and security learn to get along. Security can (and should) be baked into the devops process, resulting in what is often referred to as devsecops.
IT security teams are obliged to understand how applications and data move from development and testing to staging and production, and to address weaknesses along the way.

At the same time, devops teams must understand that security is at least partly their responsibility, not merely slapped onto the application at the very end.

Done right, security and devops go hand in hand. Because half of this equation is about making devops more security-aware, I’ve put together a primer on some basic security principles and described their applicability in devops environments. Of course, this list is only a start.

Feel free to comment and suggest other terms and examples.

Vulnerabilities vs. exploits

A vulnerability is a weakness that may allow an attacker to compromise a system.
Vulnerabilities usually happen due to bad code, design errors, or programming errors.

They are basically bugs, albeit bugs that may not interfere with normal operations of the application, except to open a door to a would-be intruder.

For a recent example, look at Dirty Cow. Whenever you’re using open source components, it is recommended that you scan the code for known vulnerabilities (CVEs), then remediate by updating the affected components to newer versions that are patched.
In some cases, it’s possible to neutralize the risk posed by a vulnerability by changing configuration settings. An exploit, on the other hand, is code that exploits the vulnerability—that is, a hack.
It’s very common for a vulnerability to be discovered by an ethical researcher (a “white hat”) and to be patched before it has ever been exploited. However, if an exploit has been used, it’s often referred to as existing “in the wild.” The situation where a known vulnerability has an exploit in the wild and has yet to be patched is obviously to be avoided.
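To make that concrete, here is a minimal sketch of the kind of automated check described next: querying the public OSV vulnerability database (https://osv.dev) for known advisories against a pinned dependency. The package name and the idea of failing the build on findings are illustrative, not a prescription.

```python
# Query the OSV database for known vulnerabilities affecting a
# specific package version. Package name/version are illustrative.
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the list of OSV advisories affecting name==version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    for v in known_vulns("django", "3.2.0"):
        print(v["id"], v.get("summary", ""))
    # A CI job could fail the build here if the list is non-empty.
```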
In devops environments, vulnerability management must be automated and integrated into the development and delivery cycle using automated steps in CI/CD tools, with a clear policy (typically created by security teams and compliance teams) as to what constitutes an acceptable level of risk, and success/fail criteria for scanned code.

Zero-day vs. known vulnerabilities (CVE)

Vulnerabilities in public software can be resolved by the developers, and fixes deployed to all users before malicious users become aware of them.
Such “known vulnerabilities” are recorded on the Common Vulnerabilities and Exposures (CVE) system, operated by MITRE. However, in some situations hackers discover new vulnerabilities before they’ve been publicly revealed and fixed.

These “zero-day vulnerabilities” (so called because the developers have zero days to work on a fix once the vulnerability becomes public) are the most dangerous, but they are also less common.

There is no way to detect a zero-day vulnerability up front. However, zero days can be mitigated through network segmentation, continuous monitoring, and encrypting secrets so that even if they are stolen, they are not exposed.

Behavioral analytics and machine learning can also be applied to understand normal usage patterns and flag anomalies as they happen, reducing the potential damage from zero days.

Attack surface

The attack surface is composed of all the possible entry points into a system through which an attacker could gain access.
It is always advised to minimize the attack surface by eliminating or shutting down parts of a system that are not needed for a particular workload. In devops environments, where applications are deployed and updated frequently, it’s easy to lose sight of the various components and code elements that are included, changed, or added with each update. Over time, this can result in a bloated attack surface, so it’s important to first understand the workloads and configure servers and applications in an optimal manner, removing unnecessary functions and components. Using one “cookie cutter” template will simply result in a larger attack surface, so you need to adjust to specific workloads or at least group workloads by application or trust level.
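As a starting point for that kind of review, here is a minimal sketch that lists a host's listening sockets, one concrete slice of its network-facing attack surface. It assumes the third-party psutil package and may need elevated privileges to resolve every process name.

```python
# Enumerate listening TCP sockets to see a host's network-facing
# attack surface. Requires the third-party psutil package.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_LISTEN:
        continue
    try:
        proc = psutil.Process(conn.pid).name() if conn.pid else "?"
    except psutil.Error:
        proc = "?"  # process vanished or access denied
    print(f"{conn.laddr.ip}:{conn.laddr.port}  {proc}")
# Anything listed here that the workload doesn't need is attack
# surface that can be removed.
```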

Then, it's highly recommended to review the configurations periodically to ensure there's no "creep up" of the attack surface.

Least-privilege principle

This principle dictates that users and application components should only have access to the minimum information and resources they need, in order to prevent both accidental and deliberate system misuse.

The principle relies on the notion that if you have access to only what you need, then the damage will be limited if your privileges are compromised. Applying least privilege can dramatically reduce the spread of malware, which tends to use the privileges of a user who was tricked into installing or activating the software.
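A classic application of the principle at the process level is dropping privileges as soon as the one privileged action is done. A minimal Unix-only sketch, with the "nobody" account standing in for whatever low-privilege service account you use:

```python
# Drop root privileges after performing the one action that needs
# them (here, binding a port below 1024). Unix-only sketch.
import os
import pwd
import socket

def drop_privileges(username: str = "nobody") -> None:
    """Switch the current process to an unprivileged user."""
    user = pwd.getpwnam(username)
    os.setgroups([])         # drop supplementary groups first
    os.setgid(user.pw_gid)   # group before user, or setuid blocks setgid
    os.setuid(user.pw_uid)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 80))  # needs root (or CAP_NET_BIND_SERVICE)
    drop_privileges()           # everything after this runs unprivileged
    sock.listen(5)
```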
It is also advised to perform periodic reviews of user privileges and trim them—especially with respect to users who have changed roles or left the company. In devops environments, it's also recommended to separately define access privileges to development, testing, staging, and production environments, minimizing the potential damage in case of an attack and making it easier to recover from one.

Lateral movement (east-west)

Lateral movement, sometimes described as "east-west attacks," refers to the ability of an attacker to move across the network sideways, from server to server or from application to application, thus expanding the attack or moving closer to valuable assets.

This is in contrast to north-south movement, which relates to moving across layers—from a web application into a database, for example. Network controls such as segmentation are crucial in preventing lateral movement and in limiting the damage that a successful attacker might inflict. Network segmentation is akin to the compartmentalization of a ship or submarine: If one section is breached, it is sealed off, preventing the entire ship from going down. Because one of the goals of devops is to remove barriers, this could be a tricky one to master.
It’s important to distinguish between openness in the delivery process, from development through to production, and openness across the network.

The former contributes to agility and process efficiency, but the latter seldom does. For example, there’s usually no cross-talk in the processes required to deliver different applications.
If you have a web retail application and an ERP application, and they are developed and run by different teams, then they belong on separate network segments.

There's absolutely no devops justification to have an open network between them.

Segregation of duties

Remember those movies where you need two people to simultaneously turn the key in order to launch nuclear missiles? Segregation of duties is about restricting the privileges users have when accessing systems and data, and limiting the ability of one privileged user to cause damage either by mistake or maliciously.
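To make the idea concrete, here is a toy sketch of a "two-person rule" check; the roles and the rule itself are purely illustrative:

```python
# Toy sketch of segregation of duties: a sensitive action requires
# sign-off from two people holding *different* roles, so no single
# account can trigger it alone. Role and user names are illustrative.
from typing import Dict

def launch_allowed(approvals: Dict[str, str]) -> bool:
    """approvals maps username -> role, e.g. {"alice": "dev", "bob": "ops"}."""
    distinct_users = len(approvals) >= 2
    distinct_roles = len(set(approvals.values())) >= 2
    return distinct_users and distinct_roles

assert launch_allowed({"alice": "dev", "bob": "ops"})
assert not launch_allowed({"alice": "dev", "carol": "dev"})  # same role
assert not launch_allowed({"alice": "dev"})                  # one person
```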

For example, it's best practice to separate the administration rights of a server from the administration rights of the application running on that server. In a devops environment, the key is to make the segregation of duties part of the CI/CD process and apply it equally to systems as well as users, so no single system or user would be able to compromise your deployment. Orchestrator admins should not also be the configuration management admins, for example.

Data exfiltration

Data exfiltration, or the unauthorized extraction of data from your systems, might result in sensitive data being accessed by unauthorized parties.
It’s often referred to as “data theft,” but data theft isn’t like physical theft: When data is stolen it still remains where it was, making it more difficult to detect the “loss.” To prevent exfiltration, ensure that “secrets” and sensitive data such as personal information, passwords, and credit card data are encrypted.
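For instance, a minimal sketch of encrypting a secret at rest with the third-party cryptography package (Fernet); in practice the key would live in a secrets manager rather than next to the data:

```python
# Encrypt a secret at rest so that exfiltrated data is useless
# without the key. Uses the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this in a secrets manager
f = Fernet(key)

token = f.encrypt(b"db_password=hunter2")  # ciphertext, safe to store
print(token)

plaintext = f.decrypt(token)               # only possible with the key
assert plaintext == b"db_password=hunter2"
```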

Also prevent outbound network connections where they are not required. In development environments, it's recommended to use data masking or fake data. Using real data means you have to protect your dev environment as you would a production environment, and many organizations don't want to invest the resources to do that.

Denial of service (DoS)

DoS is an attack vector whose purpose is to deny your users service from your systems, using a variety of methods that place a massive load on your servers, applications, or networks, paralyzing them or causing them to crash. On the internet, DoS attacks are usually distributed (DDoS).

DDoS attacks are much more difficult to block because they don’t originate from a single IP. That said, even a single-origin DoS can be devastating if it comes from within.

For example, a container may be compromised and used as a platform to repeatedly open processes or sockets on the host (attacks known respectively as fork bombs and socket bombs), which can cause the host to freeze or crash in seconds.
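One common mitigation is to cap what a single process tree can consume, so a bomb hits a limit instead of exhausting the host. A minimal Linux-only sketch using Python's resource module; the numbers are illustrative:

```python
# Cap the resources this process (and its children) may consume,
# limiting the blast radius of fork/socket bombs. Linux-only sketch.
import resource

# At most 256 processes/threads for this user context.
resource.setrlimit(resource.RLIMIT_NPROC, (256, 256))
# At most 1024 open file descriptors (sockets included).
resource.setrlimit(resource.RLIMIT_NOFILE, (1024, 1024))

# A runaway loop now fails with an OSError once the cap is hit,
# instead of taking down the whole host.
```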
There are many and varied ways to prevent and detect DoS attacks, but proper configuration and sticking to the basic tenets of a minimal attack surface, patching, and least privilege go a long way toward making DoS less likely. Organizations that adopt devops methods may actually recover faster from DoS when it does occur, because they can more easily relaunch their applications on different nodes (or different clouds) and roll back to previous versions without losing data.

Advanced persistent threat (APT)

APT is the name given to sophisticated attacks that often take many months to unfold.
In a typical scenario, an intruder will first find a point of infiltration, using a vulnerability or configuration error, and plant code that will collect network traffic or scan processes on the host. Using the data collected, the intruder will then progress to the next phase of the attack, perhaps infiltrating deeper into the network.

This step-by-step process continues until the intruder can lay his hands on a valuable asset, such as customer or financial data, at which point he will go for the final attack, typically data exfiltration. Because APT is not a single attack vector but a combination of many methods, there isn’t any one single thing you can do to protect yourself. Rather you must employ multiple layers of security and be sensitive to anomalies. In devops environments this is even more difficult because they are anything but static.
In addition to avoiding vulnerabilities, applying least privilege religiously, and making it difficult to breach your environment in the first place, you should also implement network segmentation to hinder an intruder's progress, and monitor your applications for abnormal activity.

"Left shift" of security

One of the results of continuous development and rapid devops cycles is that developers must bear more of the responsibility for delivering secure code.

Their commits are often integrated straight into the application, and the traditional security gates of penetration testing and code review simply don’t work fast enough to detect or stop anything.
Security tests must “shift left,” or move upstream into the development pipeline.
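A minimal sketch of such a shifted-left gate: a CI step that runs a static analysis tool and fails the build on findings. Bandit is used here as an example scanner; any SAST or dependency scanner slots into the same pattern.

```python
# A minimal build-gate script: run a static analysis tool as part of
# CI and fail the build on findings. Assumes bandit is installed.
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src", "-q"],  # recursive scan of src/, quiet mode
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:          # bandit exits non-zero on findings
    sys.exit("Security gate failed: fix the findings before merging.")
```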

The easiest way to do this is to integrate security tools with CI/CD tools and insert the necessary steps into the build process.

The 10 terms above comprise only a partial list, but in today's rapidly converging environments it is imperative that devops teams understand security better.

By the same token, security teams must understand that in devops environments security cannot be applied as an afterthought or without understanding how applications are developed and delivered through the pipeline, nor can they use the security tools of yesterday to gate or hinder speedy devops deployments. Learning to speak each other's lingo is a good start.

Amir Jerbi is co-founder and CTO of Aqua Security. Prior to Aqua, he was chief architect at CA Technologies, in charge of the host-based security product line.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth.

The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers.
InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Send all inquiries to newtechforum@infoworld.com.

Thycotic Releases Privilege Manager for Windows

Privilege Manager for Windows lets IT admins implement an array of policies and controls that best match their needs, such as deny-first whitelisting.

Thycotic announced Privilege Manager for Windows, which provides organizations with malware protection by enabling them to run only those applications that are both required and trustworthy, with the lowest possible privilege and access. Privilege Manager for Windows allows IT admins to implement an array of policies and controls that best match their needs, such as deny-first whitelisting, least privilege policy, application isolation, endpoint monitoring and logging, and application self-elevation.

"Security tools, processes and applications are only beneficial to an organization if they are actually used," Joseph Carson, head of global strategic alliances at Thycotic, told eWEEK. "Now, more than ever, we live in a faster-paced environment. IT and security administrators are having to do more and more every day, and they require tools that are not only easy to use but also highly customizable, to fit their individual needs."

When purchasing Privilege Manager for Windows, the installation process will install Secret Server v10.0 and Privilege Manager together; customers will also have the option to license and use Privilege Manager by itself. "When developing our solutions, Thycotic takes an approach that puts the human first, understanding that our products will only be widely adopted and used if they are easy to use and simple to manage," Carson said. "When talking to prospects who are evaluating vendors, we constantly hear from them how difficult competing solutions are to implement." He explained that it's important to the company that customers can begin seeing value from their investment on day one.

"As Thycotic has done with Privileged Account Management, making it easy and simple to use with Secret Server, Thycotic has taken the same approach in making Application Control easy and simple to use with the latest launch of Privilege Manager for Windows," Carson said. Combined with Secret Server v10.0 for Privileged Account Management, which is being released in conjunction with Privilege Manager for Windows, Thycotic is able to provide comprehensive security for businesses.

"It's important for companies, moving forward, to account for the two big vulnerabilities in a network: the compromised endpoints, and the malicious acquisition of privileged accounts," Carson said. "Protecting endpoints makes it easier to protect privileged accounts from being captured, and protecting your privileged credentials makes it easier to protect the endpoints." He explained that it's a cycle that feeds itself, and companies must evaluate solutions from vendors that can handle both of these attack vectors.

"With Thycotic, businesses now have the ability to control who can access their endpoints and what approved and trusted actions can occur on those endpoints, providing more control, security and visibility to stay safe and compliant," Carson said.

Remote hacker nabs Win10 logins in ‘won’t-fix’ Safe Mode* attack

*Turns out to be a very unsafe mode, thanks to this hack

Security researcher Doron Naim has cooked up an attack that abuses Windows 10's Safe Mode to help hackers steal logins. The CyberArk man says remote attackers need to have access to a PC before they can spring this trap, which involves rebooting a machine into Safe Mode to take advantage of the lesser security controls offered in that environment. Once in Safe Mode, logins can be stolen, and pass-the-hash lateral movement techniques that would otherwise be defeated can be used to compromise other networked machines. A fake login screen can be shown using a COM object technique to emulate a normal boot and cloak Safe Mode. Users who then type in their credentials, assuming a normal reboot, will hand their logins to attackers.

"Once attackers break through the perimeter and gain local administrator privileges on an infected Windows-based machine, they can remotely activate Safe Mode to bypass and manipulate endpoint security measures," Naim says. "In Safe Mode, the attackers are able to freely run tools to harvest credentials and laterally move to connected systems – all while remaining undetected. This exploit can also work in Windows 10, despite the presence of Microsoft's Virtual Secure Module."

Microsoft will not fix the attack vector, since it depends on hackers already having access to a Windows machine. However, Naim says gaining access to at least one Windows machine in an organisation is easy, and cites a FireEye study [PDF] which found most organisations had fallen for a targeted phishing attack last year. Entering Safe Mode avoids a host of security controls, including the Virtual Secure Module, which would otherwise serve to limit attackers' ability to deploy tools and steal password hashes. "This pattern of credential capture and lateral movement can be reused by an attacker multiple times until an eventual domain compromise is achieved," Naim says.

Attackers can either wait until victims reboot or generate a notice prompting that a reboot is necessary. Security controls can be disabled using the altered conditions under Safe Mode, which allow registry keys to be tampered with, Naim found. In lab tests, the popular post-exploitation tool Mimikatz went undetected when antivirus from Microsoft, Trend Micro, McAfee, and Avira was disabled from a safe boot. Naim recommends administrators cycle privileged account credentials to disrupt pass-the-hash attacks, enforce least privilege by stripping local administrator rights, and deploy security tools capable of running in Safe Mode. ®

5 security practices hackers say make their lives harder

Whether they identify as white hats, black hats, or something in between, a majority of hackers agree that no password is safe from them -- or the government for that matter. Regardless of where they sit with respect to the law, hackers mostly agree th...

Security Leadership & The Art Of Decision Making

What a classically trained guitarist with a master's degree in counseling brings to the table as head of cybersecurity and privacy at one of the world's major healthcare organizations.

Bishop Fox's Vincent Liu sat down recently with GE Healthcare Cybersecurity and Privacy General Manager Richard Seiersen in a wide-ranging chat about security decision making, how useful threat intelligence is, critical infrastructure, the Internet of Things, and his new book on measuring cybersecurity risk. We excerpt highlights below. You can read the full text here. Fourth in a series of interviews with cybersecurity experts by cybersecurity experts.

Vincent Liu: How has decision making played a part in your role as a security leader?

Richard Seiersen: Most prominently, it's led me to the realization that we have more data than we think and need less than we think when managing risk.
In fact, you can manage risk with nearly zero empirical data.
In my new book "How to Measure Anything in Cybersecurity Risk," we call this "sparse data analytics." I also like to refer to it as "small data." Sparse analytics are the foundation of our security analytics maturity model. The other end is what we term "prescriptive analytics." When we assess risk with near zero empirical data, we still have data, which we call "beliefs." Consider the example of threat modeling. When we threat model an architecture, we are also modeling our beliefs about threats. We can abstract this practice of modeling beliefs to examine a whole portfolio of risk as well. We take what limited empirical data we have and combine it with our subject matter experts' beliefs to quickly comprehend risk.

VL: If you're starting out as a leader, and you want to be more "decision" or "measurement" oriented, what would be a few first steps down this road?

RS: Remove the junk that prevents you from answering key questions. I prefer to circumvent highs, mediums, or lows of any sort, what we call in the book "useless decompositions." Instead, I try to keep decisions to on-and-off choices. When you have too much variation, risk can be amplified. Most readers have probably heard of threat actor capability.

This can be decomposed into things like nation-state, organized crime, etc. We label these "useless decompositions" when used out of context. Juxtapose these to useful decompositions, which are based on observable evidence.

For example, “Have we or anyone else witnessed this vulnerability being exploited?” More to the point, what is the likelihood of this vulnerability being exploited in a given time frame? If you have zero evidence of exploitability anywhere, your degree of belief would be closer to zero. And when we talk about likelihood, we are really talking about probability. When real math enters the situation, most reactions are, “Where did you get your probability?” My answer is usually something like, “Where do you get your 4 on a 1-to-5 scale, or your ‘high’ on a low, medium, high, critical scale?” A percentage retains our uncertainty.
Scales are placebos that make you feel as if you have measured something when you actually haven't. This type of risk management based on ordinal scales can be worse than doing nothing.

VL: My takeaway is the more straightforward and simple things are, the better.

The more we can make a decision binary, the better.

Take CVSS (Common Vulnerability Scoring System). You have several numbers that become an aggregate number that winds up devoid of context.

RS: The problem with CVSS is it contains so many useless decompositions.

The more we start adding in these ordinal scales, the more we enter this arbitrary gray area. When it comes to things like CVSS and OWASP, the problem also lies with how they do their math. Ordinal scales are not actually numbers. For example, let’s say I am a doctor in a burn unit.
I can return home at night when the average burn intensity is less than 5 on a 1-to-10 ordinal scale.
If I have three patients with burns that each rank a 1, 3, and 10 respectively, my average is less than a 5. Of course, I have one person nearing death, but it's quitting time and I am out of there! That makes absolutely no sense, but it is exactly how most industry frameworks and vendors implement security risk management.
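Seiersen's burn-unit arithmetic is worth spelling out, since it is exactly the failure mode of averaging ordinal scores; a trivial sketch:

```python
# Seiersen's burn-unit example: ordinal "intensity" scores of 1, 3
# and 10, with an action threshold of "average >= 5".
scores = [1, 3, 10]

mean = sum(scores) / len(scores)
print(f"mean = {mean:.2f}")      # 4.67 -> below threshold: "go home"
print(f"max  = {max(scores)}")   # 10   -> one patient is critical

# Ordinal labels are ranks, not quantities, so their sum and mean
# have no defined meaning; the "safe" average hides a catastrophic
# case, which is the objection to CVSS-style arithmetic.
```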

This is a real problem.

That approach falls flat when you scale out to managing portfolios of risk.

VL: How useful is threat intelligence, then?

RS: We have to ask—and not to be mystical here—what threat intelligence means.
If you’re telling me it is an early warning system that lets me know a bad guy is trying to steal my shorts, that’s fine.
It allows me to prepare myself and fortify my defenses (e.g., wear a belt) at a relatively sustainable cost. What I fear is that most threat intelligence data is probably very expensive, and oftentimes redundant noise.

VL: Where would you focus your energy then?

RS: For my money, I would focus on how I design, develop, and deploy products that persist and transmit or manage treasure.

Concentrate on the treasure; the bad guys have their eyes on it, and you should have your eyes directed there, too. This starts in design, and not enough of us who make products focus enough on design. Of course, if you are dealing with the integration of legacy "critical infrastructure"-based technology, you don't always have the tabula rasa of design from scratch.

VL: You mean the integration of critical infrastructure with emerging Internet of Things technology, is that correct?

RS: Yes; we need to be thoughtful and incorporate the best design practices here.

Also, due to the realities of legacy infrastructure, we need to consider the “testing in” of security.
Ironically, practices like threat modeling can help us focus our testing efforts when it comes to legacy.
I constantly find myself returning to concepts like the principle of least privilege, removing unnecessary software and services.
In short, focusing on reducing attack surface where it counts most. Oldies, but goodies!

VL: When you're installing an alarm system, you want to ensure it is properly set up before you worry about where you might be attacked. Reduce attack surface, implement secure design, execute secure deployments. Once you've finished those fundamentals, then consider the attackers' origin.

RS: Exactly! As far as the industrial IoT (IIoT) or IoT is concerned, I have been considering the future of risk as it relates to economic drivers...

Connectivity, and hence attack surface, will naturally increase due to a multitude of economic drivers.

That was true even when we lived in analog days before electricity. Now we have more devices, there are more users per device, and there are more application interactions per device per user.

This is an exponential growth in attack surface.

VL: And the more attack surface signals more room for breach.

RS: As a security professional, I consider what it means to create a device with minimal attack surface but that plays well with others.
I would like to add [that] threat awareness should be more pervasive, individually and collectively. Minimal attack surface means less local functionality exposed to the bad guy, and possibly less compute on the endpoint as well. Push things that change, and/or need regular updates, to the cloud. Plays well with others means making services available for use and consumption; this can include monitoring from a security perspective.

These two goals seem at odds with one another. Necessity then becomes the mother of invention.

There will be a flood of innovation coming from the security marketplace to address the future of breach caused by a massive growth in attack surface.

Richard Seiersen, General Manager of Cybersecurity and Privacy, GE Healthcare

PERSONALITY BYTES

First career interest: Originally a classical musician who transitioned into teaching music.

Start in security: My master's degree capstone project was focused on decision analysis.
It was through this study that I landed an internship at a company called TriNet, which was then a startup. My internship soon evolved into a risk management role with plenty of development and business intelligence.

Best decision-making advice for security leaders: Remove the junk that prevents you from answering key questions.

Most unusual academic credential: Earned a Master's in Counseling, with an emphasis on decision making, ages ago.
I focused on a framework that combined deep linguistics analysis with goal-setting to model effective decision making. You could call it “agile counseling” as opposed to open-ended soft counseling. More recently, I started a Master of Science in Predictive Analytics. My former degree has affected how I frame decisions and the latter brings in more math to address uncertainty.

Together they are a powerful duo, particularly when you throw programming into the mix.

Number one priority since joining GE: A talent-first approach in building a global team that spans device-to-cloud security.

Bio: Richard Seiersen is a technology executive with nearly 20 years of experience in information security, risk management, and product development.

Currently he is the general manager of cybersecurity and privacy for GE Healthcare. Richard now lives with his family of string players in the San Francisco Bay Area.
In his limited spare time he is slowly working through his MS in predictive analytics at Northwestern. He should be done just in time to retire. He thinks that will be the perfect time to take up classical guitar again.

Vincent Liu (CISSP) is a Partner at Bishop Fox, a cyber security consulting firm providing services to the Fortune 500, global financial institutions, and high-tech startups.
In this role, he oversees firm management, client matters, and strategy consulting.

Zero-interaction remote wormable hijack hole blasts Symantec kit

Google blasts AV security with 'patch or pay the price' red alert

Scores (or thousands, or millions) of enterprise and home Symantec users are open to remote compromise through multiple now-patched (where possible) wormable remote code execution holes described by Google as 'as bad as it gets'. The flaws are "100 percent" reliable against Symantec's Norton Antivirus and Endpoint Protection, according to renowned hacker Tavis Ormandy of Google's Project Zero initiative.

"These vulnerabilities are as bad as it gets," Ormandy says. "They don't require any user interaction, they affect the default configuration, and the software runs at the highest privilege levels possible." The flaws could easily result in a worm that spreads rapidly between Symantec users via email or web links. Victims would not even need to open the malicious files to be compromised. "An attacker could easily compromise an entire enterprise fleet using a vulnerability like this," Ormandy says. "Network administrators should keep scenarios like this in mind when deciding to deploy anti-virus [because] it's a significant tradeoff in terms of increasing attack surface."

Affected products include Norton Security, Norton 360, Endpoint Protection, Email Security, the Protection Engine, and others. Some of those platforms cannot be upgraded.

The many users of pirated copies of Symantec's products would also likely be affected, since many cracked applications block update mechanisms. The problems lie in part with Symantec's unpacking engines, which run in the kernel.

The company also used code for its decomposer that was derived from open source libraries such as libmspack and unrarsrc, which had not been updated for some seven years. Symantec is the latest to fall to Ormandy's security testing of antivirus products, but it has fallen hardest.

Comodo, ESET, Kaspersky, and FireEye are among those tested. The security company has posted a security notice confirming the flaws. It says it has added "additional checks" to its secure development lifecycle to spot similar flaws in the future, adding that it has not seen in-the-wild attacks. Users should:

- Restrict access to administrative or management systems to authorised privileged users.
- Restrict remote access, if required, to trusted/authorised systems only.
- Run under the principle of least privilege where possible to limit the impact of a potential exploit.
- Keep all operating systems and applications current with vendor patches.
- Follow a multi-layered approach to security. At a minimum, run both firewall and anti-malware applications to provide multiple points of detection and protection against both inbound and outbound threats.
- Deploy network- and host-based intrusion detection systems to monitor network traffic for signs of anomalous or suspicious activity. This may aid in the detection of attacks or malicious activity related to the exploitation of latent vulnerabilities.

®