
Tag: Threat Model

Reducing privacy and security risks starts with knowing what the threats really are.
There are generally accepted principles that developers of all secure operating systems strive to apply, but there can be completely different approaches to implementing these principles.
Just the Facts, sysadmins

Ansible sysadmins, make with the patch-fingers because the project's just gone public with a high-severity bug. CVE-2016-9587 is a peach: “a compromised remote system being managed via Ansible can lead to commands being run on the Ansible controller (as the user running the ansible or ansible-playbook command)”, Ansible lead at Red Hat James Cammarata writes. Dutch outfit Computest found the bug.
It writes that if an attacker can get access to one compromised machine, they can use that as a hop-off point to the controller, “gaining access to the entire server park managed by that controller”.

Its advisory explains that the problem exists in how the controller handles an API feature called Facts. Facts let the Ansible controller get information about remote systems for use in playbook variables, and users can write their own Facts (if this sounds all a bit post-truth, we apologise). Computest found that Facts allows “too many special cases that allow for the bypassing of filtering”.

Although finding the special cases didn't take the testers more than a few hours, they don't sideswipe Ansible for having poor security. Rather, Computest writes, filtering and quoting of Facts can be fixed, “and that when this has been done, the opportunity for attack in this threat model is very small.”

Ansible's fixes are in two release candidates it has released – 2.1.4 RC1 and 2.2.1 RC3.
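To illustrate the class of bug Computest describes (remote-supplied data being re-interpreted as template code on the controller), here is a minimal, hypothetical sketch in Python using Jinja2, the templating engine Ansible builds on. It is not Ansible's code and the payload is invented:

    # Sketch: why re-templating untrusted facts is dangerous. Facts come from
    # the managed host, so a compromised host controls their content. If the
    # controller later renders a fact value as a Jinja2 template, any template
    # syntax smuggled into the fact is evaluated on the controller.
    from jinja2 import Template

    reported_fact = "web01 {{ 7 * 7 }}"          # value a compromised host might report

    # Unsafe pattern: re-templating data that came from the remote side.
    print(Template(reported_fact).render())      # -> "web01 49": the expression ran locally

In Ansible, the same kind of evaluation combined with lookup plugins (the 'pipe' lookup runs shell commands, for instance) amounts to running commands on the controller; CVE-2016-9587 was about special cases that slipped past the filtering meant to prevent exactly this.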
SGX needs I/O protection, Austrian boffins reckon

Intel's Software Guard Extensions started rolling out with Skylake processors in October 2015, but it's got an Achilles heel: insecure I/O like keyboards or USB provides a vector by which sensitive user data could be compromised. A couple of boffins from Austria's Graz University of Technology reckon they've cracked that problem, with an add-on that creates protected I/O paths on top of SGX.

Instead of the handful of I/O technologies directly protected by SGX – most of which have to do with DRM rather than user security – the technology proposed in Samuel Weiser and Mario Werner's Arxiv paper, SGXIO, is a “generic” trusted I/O that can be applied to things like keyboards, USB devices, screens and so on.

And we're not talking about a merely esoteric technology that might soothe the fears of people running cloud apps on multi-tenant infrastructure. The Weiser/Werner proposal would create an SGX-supported trusted path all the way to a remote user's browser to protect (for example) an online banking session – and provide “attestation mechanisms to enable the bank as well as the user to verify that trusted paths are established and functional.”

(Figure: SGXIO as a way to protect a banking app)

The shortcoming SGXIO is trying to fix is that SGX's threat model considers everything outside itself a threat (which isn't a bad thing, in context). The usual approach for trusted paths is to use encrypted interfaces. The paper mentions the Protected Audio Video Path (PAVP) – but that's a DRM-specific example, and most I/O devices don't encrypt anything.

Hence SGXIO, an attempt to add a generic trusted path to the SGX environment – and with that trusted path reaching to the end user environment, it's an attempt to protect an application from nasties like keyloggers that a miscreant might have installed on a victim's box.

The key architectural concepts in SGXIO are:

- A trusted stack, which contains a security hypervisor, secure I/O drivers, and the trusted boot (TB) enclave; and
- The virtual machine, hosting an untrusted operating system that runs secure user applications.

A user application communicating with the end user:

1. Opens an encrypted channel to the secure I/O driver;
2. Tunnels through the untrusted operating system and establishes secure communication with the “generic” user I/O device.

The hypervisor binds user devices exclusively to the secure I/O drivers, while I/O on unprotected devices passes directly through the hypervisor. The trusted path covers both the encrypted user-app-to-driver communication and the exclusive driver-to-device binding. The TB enclave provides assurance of the trusted path setup by attesting the hypervisor.

(Figure: SGXIO's trusted stack components)

An implementation wouldn't be seamless: the SGXIO paper devotes a fair chunk of copy to application design, enclave programming (fortunately something Intel provides resources for), driver design, and hypervisor choice. Application developers, for example, have to work out a key exchange mechanism (Diffie-Hellman is supported, and SGXIO offers its own lightweight key protocol).

For hypervisors, the paper suggests the seL4 microkernel. Originally developed by Australia's NICTA and now handled by the CSIRO Data61 project, seL4 is a mathematically verified software kernel that was published as open source software in 2014.

SGXIO will get its first public airing at the CODASPY'17 conference in March, being held in Scottsdale, Arizona.
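Stripped to its essentials, the trusted path described above is a key exchange plus an authenticated-encryption channel that the untrusted OS merely ferries. The sketch below is a rough illustration of that pattern only: it is not the paper's actual protocol, it uses the Python cryptography package, and it omits the attestation step the TB enclave provides.

    # Rough sketch: a user application and a secure I/O driver agree on a key
    # and exchange data the untrusted OS cannot read or tamper with.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each endpoint generates an ephemeral key pair and learns the peer's public key.
    app_priv = X25519PrivateKey.generate()
    driver_priv = X25519PrivateKey.generate()

    # Diffie-Hellman: both sides derive the same shared secret.
    app_shared = app_priv.exchange(driver_priv.public_key())
    driver_shared = driver_priv.exchange(app_priv.public_key())
    assert app_shared == driver_shared

    # Derive a symmetric channel key from the shared secret.
    channel_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"trusted-path-demo").derive(app_shared)

    # The driver encrypts user input; only (nonce, ciphertext) crosses the untrusted OS.
    nonce = os.urandom(12)
    ciphertext = AESGCM(channel_key).encrypt(nonce, b"keystrokes: hunter2", None)

    # The application decrypts inside its enclave.
    assert AESGCM(channel_key).decrypt(nonce, ciphertext, None) == b"keystrokes: hunter2"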
Neal H. Walfield is a hacker at g10code working on GnuPG.

This op-ed was written for Ars Technica by Walfield, in response to Filippo Valsorda's "I'm giving up on PGP" story that was published on Ars last week. Every once in a while, a prominent member of the security community publishes an article about how horrible OpenPGP is. Matthew Green wrote one in 2014 and Moxie Marlinspike wrote one in 2015.

The most recent was written by Filippo Valsorda, here on the pages of Ars Technica, which Matthew Green says "sums up the main reason I think PGP is so bad and dangerous." In this article I want to respond to the points that Filippo raises.
In short, Filippo is right about some of the details, but wrong about the big picture.

For the record, I work on GnuPG, the most popular OpenPGP implementation.

Forward secrecy isn't always desirable

Filippo's main complaint has to do with OpenPGP's use of long-term keys.
Specifically, he notes that due to the lack of forward secrecy, the older a key is, the more communication will be exposed by its compromise.

Further, he observes that OpenPGP's trust model includes incentives to not replace long-term keys. First, it's true that OpenPGP doesn't implement forward secrecy (or future secrecy).

But, OpenPGP could be changed to support this. Matthew Green and Ian Miers recently proposed puncturable forward secure encryption, which is a technique to add forward secrecy to OpenPGP-like systems.

But, in reality, approximating forward secrecy has been possible since OpenPGP adopted subkeys decades ago. (An OpenPGP key is actually a collection of keys: a primary key that acts as a long-term, stable identifier, and subkeys that are cryptographically bound to the primary key and are used for encryption, signing, and authentication.) Guidelines on how to approximate forward secrecy were published in 2001 by Ian Brown, Adam Back, and Ben Laurie.

Although their proposal is only for an approximation of forward secrecy, it is significantly simpler than Green and Miers' approach, and it works in practice. As far as I know, Brown et al.'s proposal is not often used. One reason for this is that forward secrecy is not always desired.

For instance, if you encrypt a backup using GnuPG, then your intent is to be able to decrypt it in the future.
If you use forward secrecy, then, by definition, that is not possible; you've thrown away the old decryption key.
In the recent past, I've spoken with a number of GnuPG users including 2U and 1010data.

These two companies told me that they use GnuPG to protect client data.

Again, to access the data in the future, the encryption keys need to be retained, which precludes forward secrecy. This doesn't excuse the lack of forward secrecy when using GnuPG to protect e-mail, which is the use case that Filippo concentrates on.

The reason that forward secrecy hasn't been widely deployed here is that e-mail is usually left on the mail server in order to support multi-device access.
Since mail servers are not usually trusted, the mail needs to be kept encrypted.

The easiest way to accomplish this is to just not strip the encryption layer.
So, again, forward secrecy would render old messages inaccessible, which is often not desired. But, let's assume that you really want something like forward secrecy.

Then following Brown et al.'s approach, you just need to periodically rotate your encryption subkey.
Since your key is identified by the primary key and not the subkey, creating a new subkey does not change your fingerprint or invalidate any signatures, contrary to what Filippo suggests.

And, as long as your communication partners periodically refresh your key, rotating subkeys is completely transparent. Ideally, you'll want to store your primary key on a separate computer or smartcard so that if your computer is compromised, then only the subkeys are compromised.
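In GnuPG terms, the rotation itself is modest. Here is a rough sketch, assuming GnuPG 2.1 or later and its unattended --quick-add-key interface; the fingerprint is a placeholder, not a real key:

    # Brown/Back/Laurie-style rotation: keep the long-term primary key and
    # periodically add a short-lived encryption subkey.
    # Assumes gpg >= 2.1 is on PATH; the fingerprint below is a placeholder.
    import subprocess

    PRIMARY_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"

    def rotate_encryption_subkey(fingerprint):
        # Add a fresh encryption-capable subkey that expires in one month.
        subprocess.run(
            ["gpg", "--batch", "--quick-add-key", fingerprint, "rsa3072", "encr", "1m"],
            check=True,
        )
        # Publish the updated key so correspondents who refresh it pick up the
        # new subkey; old subkeys stay in the keyring for decrypting old mail.
        subprocess.run(["gpg", "--send-keys", fingerprint], check=True)

    rotate_encryption_subkey(PRIMARY_FPR)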

But, even if you don't use an offline computer, and an attacker also compromises your primary key, this approach provides a degree of future secrecy: your attacker will be able to create new subkeys (since she has your primary key) and sign other keys, but she'll probably have to publish them to use them, which you'll eventually notice, and she won't be able to guess any new subkeys using the existing keys.

Physical attacks vs. cyber attacks

So, given that forward secrecy is possible, why isn't it enabled by default? We know from Snowden that when properly implemented, "encryption … really is one of the few things that we can rely on." In other words, when nation states crack encryption, they aren't breaking the actual encryption, they are circumventing it.

That is, they are exploiting vulnerabilities or using national security letters (NSLs) to break into your accounts and devices.

As such, if you really care about protecting your communication, you are much better off storing your encryption keys on a smartcard than storing them on your computer. Given this, it's not clear that forward secrecy is that big of a gain, since smartcards won't export private keys.
So, when Filippo says that he is scared of an evil maid attack and is worried that someone opened his safe with his offline keys while he was away, he's implicitly stating that his threat model includes a physical, targeted attack.

But, while moving to the encrypted messaging app Signal gets him forward secrecy, it means he can't use a smartcard to protect his keys and makes him more vulnerable to a cyber attack, which is significantly easier to conduct than a physical attack. Another problem that Filippo mentions is that key discovery is hard.
Specifically, he says that key server listings are hard to use.

This is true.

But, key servers are in no way authenticated and should not be treated as authoritative.
Instead, if you need to find someone's key, you should ask that person for their key's fingerprint. Unfortunately, our research suggests that for many GnuPG users, picking up the phone is too difficult. So, after our successful donation campaign two years ago, we used some of the money to develop a new key discovery technique called the Web Key Directory (WKD).

Basically, the WKD provides a canonical way to find a key given an e-mail address via HTTPS.
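(Under the hood, the client fetches the key over HTTPS from a well-known path under the mail provider's domain, derived from a hash of the address's local part.) As a rough sketch of what a lookup can look like from a script, assuming a GnuPG 2.1.12-or-later binary on PATH and using a placeholder address:

    # Ask GnuPG to locate a key via the Web Key Directory only.
    import subprocess

    def locate_key_via_wkd(address):
        subprocess.run(
            ["gpg", "--auto-key-locate", "clear,wkd", "--locate-keys", address],
            check=True,
        )

    locate_key_via_wkd("someone@example.org")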

This is not as good as checking the fingerprint, but since only the mail provider and the user can change the key, it is a significant improvement over the de facto status quo. WKD has already been deployed by Posteo, and other mail providers are in the process of integrating it (consider asking your mail provider to support it).

Other people have identified the key discovery issue, too. Micah Lee, for instance, recently published GPG Sync, and the INBOME group and the pretty Easy privacy (p≡p) project are working on opportunistically transferring keys via e-mail.

Signal isn't our saviour

Filippo also mentions the multi-device problem.
It's true that using keys on multiple devices is not easy. Part of the problem is that OpenPGP is not a closed ecosystem like Signal, which makes standardising a secret key exchange protocol much more difficult. Nevertheless, Tankred Hase did some work on private key synchronisation while at whiteout.io.

But, if you are worried about targeted attacks as Filippo is, then keeping your keys on a computer, never mind multiple computers, is not for you.
Instead, you want to keep your keys on a smartcard.
In this case, using your keys from multiple computers is easy: just plug the token in (or use NFC)! This assumes that there is an OpenPGP-capable mail client on your platform of choice.

This is the case for all of the major desktop environments, and there is also an excellent plug-in for K9 on Android called OpenKeychain. (There are also some solutions available for iOS, but I haven't evaluated them.) Even if you are using Signal, the multi-device problem is not completely solved.

Currently, it is possible to use Signal from a desktop and a smartphone or a tablet, but it is not possible to use multiple smartphones or tablets. One essential consideration that Filippo doesn't adequately address is that contacting someone on Signal requires knowing their mobile phone number. Many people don't want to make this information public.
I was recently chatting with Jason Reich, who is the head of OPSEC at BuzzFeed, and he told me that he spends a lot of time teaching reporters how to deal with the death and rape threats that they regularly receive via e-mail.

Based on this, I suspect that many reporters would opt to not publish their phone number even though it would mean missing some stories.
Similarly, while talking to Alex Abdo, a lawyer from the ACLU, I learned that he receives dozens of encrypted e-mails every day, and he is certain that some of those people would not have contacted him or the ACLU if they couldn't remain completely anonymous. Another point that Filippo doesn't cover is the importance of integrity; he focused primarily on confidentiality (i.e., encryption).
I love the fact that messages that I receive from DHL are signed (albeit using S/MIME and not OpenPGP).

This makes detecting phishing attempts trivial.
I wish more businesses would do this. Of course, Signal also provides integrity protection, but I definitely don't want to give all businesses my phone number given their record of protecting my e-mail address. Moreover, most of this type of communication is done using e-mail, not Signal. I want to be absolutely clear that I like Signal. When people ask me how they can secure their communication, I often recommend it.

But, I view Signal as complementary to OpenPGP.

First, e-mail is unlikely to go away any time soon.
Second, Signal doesn't allow transferring arbitrary data including documents.

And, importantly, Signal has its own problems.
In particular, the main Signal network is centralised, not federated like e-mail, the developers actively discourage third-party clients, and you can't choose your own identity.

These decisions are a rejection of a free and open Internet, and of pseudonymous communication.

In conclusion, Filippo has raised a number of important points.

But, with respect to long-term OpenPGP keys being fatally flawed and forward secrecy being essential, I think he is wrong and disagree with his compromises in light of his stated threat model.
I agree with him that key discovery is a serious issue.

But, this is something that we've been working to address. Most importantly, Signal cannot replace OpenPGP for many people who use it on a daily basis, and the developers' decision to make Signal a walled garden is problematic.
Signal does complement OpenPGP, though, and I'm glad that it's there. Neal H. Walfield is a hacker at g10code working on GnuPG. His current project is implementing TOFU for GnuPG.

To avoid conflicts of interest, GnuPG maintenance and development is funded primarily by donations. You can find him on Twitter @nwalfield.
E-mail: neal@gnupg.org OpenPGP: 8F17 7771 18A3 3DDA 9BA4 8E62 AACB 3243 6300 52D9 This post originated on Ars Technica UK
A recently fixed security vulnerability that affected both the Firefox and Tor browsers had a highly unusual characteristic that caused it to threaten users only during temporary windows of time that could last anywhere from two days to more than a month. As a result, the cross-platform, malicious code-execution risk most recently visited users of browsers based on the Firefox Extended Release on September 3 and lasted until Tuesday, or a total of 17 days.

The same Firefox version was vulnerable for an even longer window last year, starting on July 4 and lasting until August 11.

The bug was scheduled to reappear for a few days in November and for five weeks in December and January.

Both the Tor Browser and the production version of Firefox were vulnerable during similarly irregular windows of time. While the windows were open, the browsers failed to enforce a security measure known as certificate pinning when automatically installing NoScript and certain other browser extensions.

That meant an attacker who had a man-in-the-middle position and a forged certificate impersonating a Mozilla server could surreptitiously install malware on a user's machine. While it can be challenging to hack a certificate authority or trick one into issuing the necessary certificate for addons.mozilla.org, such a capability is well within the means of nation-sponsored attackers, who are precisely the sort of adversaries included in the Tor threat model.
Such an attack, however, was only viable at certain periods when Mozilla-supplied "pins" expired. "It comes around every once in a while," Ryan Duff, an independent researcher and former member of the US Cyber Command, told Ars, referring to the vulnerability. "It's weird.
I've never seen a bug that presented itself like that." Certificate pinning is designed to ensure that a browser accepts only specific certificates for a specific domain or subdomain and rejects all others, even if the certificates are issued by a browser-trusted authority.

But because certificates inevitably must expire from time to time, the pins must periodically be updated so that newly issued certificates can be accepted. Mozilla used a static form of pinning for its extension update process that wasn't based on the HTTP Public Key Pinning protocol (HPKP).

Due to lapses caused by human error, older browser versions sometimes scheduled static pins to expire before new versions pushed out a new expiration date. During those times, pinning wasn't enforced.
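As a rough sketch of the pattern at issue (illustrative Python, not Mozilla's implementation; the digests and timestamp are placeholders): a static pin list ships with the browser together with an expiry date, and once that date passes the check silently stops constraining which certificates are accepted.

    # Static pinning with a baked-in expiry: the shape of the bug window.
    import hashlib
    import time

    PINS = {
        "addons.mozilla.org": {
            "spki_sha256": {"3b2c...placeholder...", "9f41...placeholder..."},
            "expires_at": 1480000000,    # placeholder Unix time shipped with the release
        },
    }

    def pin_check_ok(host, der_spki):
        entry = PINS.get(host)
        if entry is None:
            return True                  # no pins defined for this host
        if time.time() > entry["expires_at"]:
            return True                  # pins lapsed: enforcement is skipped (the bug window)
        digest = hashlib.sha256(der_spki).hexdigest()
        return digest in entry["spki_sha256"]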

And when pinning wasn't enforced, it was possible for man-in-the-middle attackers to use forged certificates to install malicious add-on updates when the add-on was obtained through Mozilla's add-on site. Mozilla on Tuesday updated Firefox to fix the faulty expiration pins, and over the weekend, the organization also updated the add-ons server to make it start using HPKP.

Tor officials fixed the weakness last week with the early release of a version based on Tuesday's release from Mozilla. Duff has a much more detailed explanation here. The vulnerability was first described here by a researcher who goes by the handle movrcx and who complained that his attempts to privately report the weakness to Tor were "ridiculed." Duff eventually confirmed the reported behavior.

The irregular windows in which the vulnerability was active likely contributed to some of the skepticism that initially greeted movrcx's report and made it hard to spot the problem. "I’d be lying if I said luck didn’t play a significant role in the discovery of this bug," Duff wrote in the above-linked postmortem. "If movrcx had tried his attack before 3 Sept or after 20 Sept, it would have failed in his tests.
It’s only because he conducted it within that 17 day window that this was discovered."
Signal, the mobile messaging app recommended by NSA leaker Edward Snowden and a large number of security professionals, just fixed a bug that allowed attackers to tamper with the contents of encrypted messages sent by Android users. The authentication-bypass vulnerability was one of two weaknesses found by researchers Jean-Philippe Aumasson and Markus Vervier in an informal review of the Java code used by the Android version of Signal.

The bug made it possible for attackers who compromised a Signal server or were otherwise able to monitor data passing between Signal users to replace a valid attachment with a fraudulent one.

A second bug possibly would have allowed attackers to remotely execute malicious code, but a third bug limited the exploit to a simple remote crash. "The results are not catastrophic, but show that, like any piece of software, Signal is not perfect," Aumasson wrote in an e-mail. "Signal drew the attention of many security researchers, and it's impressive that no vulnerability was ever published until today.

This pleads in favor of Signal, and we'll keep trusting it." The attachment-spoofing vulnerability was the result of an integer overflow bug that was triggered when extremely large files were attached to a message.
Instead of verifying the authenticity of the entire file, Signal would check only a small portion, making it possible for attackers to append fraudulent data that wouldn't be detected by the MAC (message authentication code) that's a standard part of most encryption schemes.
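To make the arithmetic concrete, here is a small sketch (in Python, mimicking the Java narrowing cast; it is not Signal's code, and the MAC length is a placeholder) of how casting a 64-bit file length down to a signed 32-bit value understates how much data is left to authenticate. Aumasson's quoted explanation below points to the exact line involved.

    # What happens when a 64-bit length is cast to a signed 32-bit int
    # before subtracting the MAC length (mirroring Java's `(int)` cast).
    import ctypes

    MAC_LENGTH = 32   # hypothetical MAC size in bytes

    def remaining_data(file_length):
        # ctypes.c_int32 keeps only the low 32 bits, interpreted as signed,
        # which is what the Java cast does to file.length().
        return ctypes.c_int32(file_length).value - MAC_LENGTH

    print(remaining_data(10_000_000))         # ~10 MB file: sensible value
    print(remaining_data(4 * 2**30 + 4096))   # just over 4 GiB: only ~4 KB would be checked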

To make such attacks practical, an adversary could use file compression that's supported by Signal to reduce the size of the malicious attachment to a manageable 4 megabytes. In his e-mail, Aumasson said the overflow bug was found in the following line of code:

    int remainingData = (int) file.length() - mac.getMacLength();

He explained: Here, the value "file.length()" is a number encoded on 64 bits (of type "long"), whereas the receiving variable "remainingData" is a number encoded on 32 bits (of type "int").

Therefore, when "file.length()" is longer than what fits in a 32-bit number, the value of "remainingData" (the number of bytes left to process) will be incorrect, as it will be much smaller than the real size of the file.

Consequently, a large part of the file will be ignored when Signal will verify the cryptographic authenticity.
Signal will only check the (small) beginning of the file, whereas the user will actually receive the much larger file.

One of the reasons for Signal's appeal is that it deploys end-to-end encryption, meaning it encrypts a message on the sender's device and doesn't decrypt it until it is safely stored on the receiving device.
Still, the encrypted message passes through a server.

The authentication bypass exploit could be carried out by hacking or impersonating such a server and then tampering with message attachments.

To circumvent transport-layer security protections, an impersonating attacker might compromise any one of the hundreds of certificate authorities trusted by the Android operating system or trick targets into installing a rogue CA certificate on their devices.

Additional details about the vulnerabilities are here. While the hack is by no means trivial to carry out, it's within the capability of the kind of nation-sponsored adversaries included in the threat model of many Signal users.

The researchers privately reported the vulnerabilities to Signal developer Open Whisper Systems on September 13 and the company has already issued an update.

Aumasson and Vervier—who are, respectively, principal research engineer at Kudelski Security and CEO and head of security research at X41—said they're still working to determine if the same bugs can be exploited in WhatsApp, the Facebook messaging app that also relies on Signal code.
What a classically-trained guitarist with a Master's Degree in counseling brings to the table as head of cybersecurity and privacy at one of the world's major healthcare organizations.

Bishop Fox’s Vincent Liu sat down recently with GE Healthcare Cybersecurity and Privacy General Manager Richard Seiersen in a wide-ranging chat about security decision making, how useful threat intelligence is, critical infrastructure, the Internet of Things, and his new book on measuring cybersecurity risk. We excerpt highlights below. You can read the full text here. Fourth in a series of interviews with cybersecurity experts by cybersecurity experts.

Vincent Liu: How has decision making played a part in your role as a security leader?

Richard Seiersen: Most prominently, it’s led me to the realization that we have more data than we think and need less than we think when managing risk.
In fact, you can manage risk with nearly zero empirical data.
In my new book “How to Measure Anything in Cybersecurity Risk,” we call this “sparse data analytics.” I also like to refer to it as “small data.” Sparse analytics are the foundation of our security analytics maturity model. The other end is what we term “prescriptive analytics.” When we assess risk with near zero empirical data, we still have data, which we call “beliefs.” Consider the example of threat modeling. When we threat model an architecture, we are also modeling our beliefs about threats. We can abstract this practice of modeling beliefs to examine a whole portfolio of risk as well. We take what limited empirical data we have and combine it with our subject matter experts’ beliefs to quickly comprehend risk.

VL: If you’re starting out as a leader, and you want to be more “decision” or “measurement” oriented, what would be a few first steps down this road?

RS: Remove the junk that prevents you from answering key questions. I prefer to circumvent highs, mediums, or lows of any sort, what we call in the book “useless decompositions.” Instead, I try to keep decisions to on-and-off choices. When you have too much variation, risk can be amplified. Most readers have probably heard of threat actor capability.

This can be decomposed into things like nation-state, organized crime, etc. We label these “useless decompositions” when used out of context. Juxtapose these to useful decompositions, which are based on observable evidence.

For example, “Have we or anyone else witnessed this vulnerability being exploited?” More to the point, what is the likelihood of this vulnerability being exploited in a given time frame? If you have zero evidence of exploitability anywhere, your degree of belief would be closer to zero. And when we talk about likelihood, we are really talking about probability. When real math enters the situation, most reactions are, “Where did you get your probability?” My answer is usually something like, “Where do you get your 4 on a 1-to-5 scale, or your ‘high’ on a low, medium, high, critical scale?” A percentage retains our uncertainty.
Scales are placebos that make you feel as if you have measured something when you actually haven’t. This type of risk management based on ordinal scales can be worse than doing nothing.

VL: My takeaway is the more straightforward and simple things are, the better.

The more we can make a decision binary, the better.

Take CVSS (Common Vulnerability Scoring System). You have several numbers that become an aggregate number that winds up devoid of context.

RS: The problem with CVSS is it contains so many useless decompositions.

The more we start adding in these ordinal scales, the more we enter this arbitrary gray area. When it comes to things like CVSS and OWASP, the problem also lies with how they do their math. Ordinal scales are not actually numbers. For example, let’s say I am a doctor in a burn unit.
I can return home at night when the average burn intensity is less than 5 on a 1-to-10 ordinal scale.
If I have three patients with burns that each rank a 1, 3, and 10 respectively, my average is less than a 5. Of course, I have one person nearing death, but it's quitting time and I am out of there! That makes absolutely no sense, but it is exactly how most industry frameworks and vendors implement security risk management.

This is a real problem.

That approach falls flat when you scale out to managing portfolios of risk.

VL: How useful is threat intelligence, then?

RS: We have to ask—and not to be mystical here—what threat intelligence means.
If you’re telling me it is an early warning system that lets me know a bad guy is trying to steal my shorts, that’s fine.
It allows me to prepare myself and fortify my defenses (e.g., wear a belt) at a relatively sustainable cost. What I fear is that most threat intelligence data is probably very expensive, and oftentimes redundant noise.

VL: Where would you focus your energy then?

RS: For my money, I would focus on how I design, develop, and deploy products that persist and transmit or manage treasure.

Concentrate on the treasure; the bad guys have their eyes on it, and you should have your eyes directed there, too. This starts in design, and not enough of us who make products focus enough on design. Of course, if you are dealing with the integration of legacy “critical infrastructure”-based technology, you don’t always have the tabula rasa of design from scratch.

VL: You mean the integration of critical infrastructure with emerging Internet of Things technology, is that correct?

RS: Yes; we need to be thoughtful and incorporate the best design practices here.

Also, due to the realities of legacy infrastructure, we need to consider the “testing in” of security.
Ironically, practices like threat modeling can help us focus our testing efforts when it comes to legacy.
I constantly find myself returning to concepts like the principle of least privilege, removing unnecessary software and services.
In short, focusing on reducing attack surface where it counts most. Oldies, but goodies!

VL: When you’re installing an alarm system, you want to ensure it is properly set up before you worry about where you might be attacked. Reduce attack surface, implement secure design, execute secure deployments. Once you’ve finished those fundamentals, then consider the attackers’ origin.

RS: Exactly! As far as the industrial IoT (IIoT) or IoT is concerned, I have been considering the future of risk as it relates to economic drivers...

Connectivity, and hence attack surface, will naturally increase due to a multitude of economic drivers.

That was true even when we lived in analog days before electricity. Now we have more devices, there are more users per device, and there are more application interactions per device per user.

This is an exponential growth in attack surface.

VL: And more attack surface means more room for breach.

RS: As a security professional, I consider what it means to create a device with minimal attack surface but that plays well with others.
I would like to add [that] threat awareness should be more pervasive individually and collectively. Minimal attack surface means less local functionality exposed to the bad guy and possibly less compute on the endpoint as well. Push things that change, and/or need regular updates, to the cloud. Plays well with others means making services available for use and consumption; this can include monitoring from a security perspective.

These two goals seem at odds with one another. Necessity then becomes the mother of invention.

There will be a flood of innovation coming from the security marketplace to address the future of breach caused by a massive growth in attack surface.

Richard Seiersen, General Manager of Cybersecurity and Privacy, GE Healthcare

PERSONALITY BYTES

First career interest: Originally a classical musician who transitioned into teaching music.

Start in security: My master’s degree capstone project was focused on decision analysis. It was through this study that I landed an internship at a company called TriNet, which was then a startup. My internship soon evolved into a risk management role with plenty of development and business intelligence.

Best decision-making advice for security leaders: Remove the junk that prevents you from answering key questions.

Most unusual academic credential: Earned a Master in Counseling with an emphasis on decision making ages ago.
I focused on a framework that combined deep linguistics analysis with goal-setting to model effective decision making. You could call it “agile counseling” as opposed to open-ended soft counseling. More recently, I started a Master of Science in Predictive Analytics. My former degree has affected how I frame decisions and the latter brings in more math to address uncertainty.

Together they are a powerful duo, particularly when you throw programming into the mix.

Number one priority since joining GE: A talent-first approach in building a global team that spans device to cloud security.

Bio: Richard Seiersen is a technology executive with nearly 20 years of experience in information security, risk management, and product development.

Currently he is the general manager of cybersecurity and privacy for GE Healthcare. Richard now lives with his family of string players in the San Francisco Bay Area.
In his limited spare time he is slowly working through his MS in predictive analytics at Northwestern. He should be done just in time to retire. He thinks that will be the perfect time to take up classical guitar again. Related Content: Vincent Liu (CISSP) is a Partner at Bishop Fox, a cyber security consulting firm providing services to the Fortune 500, global financial institutions, and high-tech startups.
In this role, he oversees firm management, client matters, and strategy consulting.