Wednesday, December 13, 2017

WannaCry’s ‘Kill Switch’ May Have Been a Sandbox-Evasion Tool

Massive ransomware worm attack appears to have come with a poorly planned anti-analysis feature.

5 Factors to Secure & Streamline Your Cloud Deployment

How a Midwestern credit union overcame the challenges of speed, cost, security, compliance and automation to grow its footprint in the cloud.

Cloud Security Lessons from the RNC Leak

A poorly configured Amazon S3 bucket that led to a massive data leak could easily happen to any organization not adopting proper cloud security measures.

The Looming War of Good AI vs. Bad AI

The rise of artificial intelligence, machine learning, hivenets, and next-generation morphic malware is leading to an arms race that enterprises must prepare for now.

Mandia Replaces DeWalt As CEO Of FireEye

In major shake-up of company's top brass, DeWalt moved to executive chairman.

In a major reshuffle of the company’s top management, FireEye has appointed existing president Kevin Mandia to take over as CEO from David DeWalt, who will stay on as executive chairman.

DeWalt was previously also board chairman. In addition, Mandiant president Travis Reese has been named FireEye president, while Mike Berry, chief financial officer, gets the additional responsibility of chief operating officer.

Board member Enrique Salem has been appointed lead independent director. These moves were made with a view to strengthening the company’s position globally and to “prepare for growth opportunities going forward,” according to FireEye. Mandia is the founder of Mandiant, which FireEye acquired in 2013.

“With the combination of FireEye services, intelligence, and products, I believe that our global threat management platform is poised to dominate the future of cybersecurity, and we’ve taken steps to create a senior leadership team that can build on this opportunity,” Mandia said. These executive changes take effect on June 15. For more on this story, see FireEye's announcement here.

Dark Reading's Quick Hits delivers a brief synopsis and summary of the significance of breaking news events.

For more information from the original source of the news item, please follow the link provided in this article.

Your Grandma Could Be the Next Ransomware Millionaire

Today's as-a-service technology has democratized ransomware, offering practically anyone with a computer and an Internet connection an easy way to get in on the game.

A Look Inside Responsible Vulnerability Disclosure

It's time for security researchers and vendors to agree on a standard responsible disclosure timeline.

Animal Man, Dolphin, Rip Hunter, Dane Dorrance, the Ray. Ring any bells? Probably not, but these characters fought fictitious battles on the pages of DC Comics in the 1940s, '50s, and '60s. As part of the Forgotten Heroes series, they were opposed by the likes of Atom-Master, Enchantress, Ultivac, and other Forgotten Villains. Cool names aside, the idea of forgotten heroes seems apropos at a time when high-profile cybersecurity incidents continue to rock the headlines and black hats bask in veiled glory. But what about the good guys? What about the white hats, these forgotten heroes?

For every cybercriminal looking to make a quick buck exploiting or selling a zero-day vulnerability, there's a white hat reporting the same vulnerabilities directly to the manufacturers. Their goal is to expose dangerous exploits, keep users protected, and perhaps receive a little well-earned glory for themselves along the way. This process is called "responsible disclosure."

Although responsible disclosure has been going on for years, there's no formal industry standard for reporting vulnerabilities. However, most responsible disclosures follow the same basic steps.

First, the researcher identifies a security vulnerability and its potential impact. During this step, the researcher documents the location of the vulnerability using screenshots or pieces of code. They may also create a repeatable proof-of-concept attack to help the vendor find and test a resolution.

Next, the researcher creates a vulnerability advisory report including a detailed description of the vulnerability, supporting evidence, and a full disclosure timeline. The researcher submits this report to the vendor using the most secure means possible, usually as an email encrypted with the vendor's public PGP key.
Most vendors reserve the [email protected] email alias for security advisory submissions, but it could differ depending on the organization. After submitting the advisory to the vendor, the researcher typically allows the vendor a reasonable amount of time to investigate and fix the exploit, per the advisory full disclosure timeline.

Finally, once a patch is available or the disclosure timeline (including any extensions) has elapsed, the researcher publishes a full disclosure analysis of the vulnerability. This full disclosure analysis includes a detailed explanation of the vulnerability, its impact, and the resolution or mitigation steps. For example, see this full disclosure analysis of a cross-site scripting vulnerability in Yahoo Mail by researcher Jouko Pynnönen.

How Much Time?

Security researchers haven't reached a consensus on exactly what "a reasonable amount of time" means to allow a vendor to fix a vulnerability before full public disclosure. Google recommends 60 days for a fix or public disclosure of critical security vulnerabilities, and an even shorter seven days for critical vulnerabilities under active exploitation. HackerOne, a platform for vulnerability and bug bounty programs, defaults to a 30-day disclosure period, which can be extended to 180 days as a last resort. Other security researchers, such as myself, opt for 60 days with the possibility of extensions if a good-faith effort is being made to patch the issue.

I believe that full disclosure of security vulnerabilities benefits the industry as a whole and ultimately serves to protect consumers. In the early 2000s, before full disclosure and responsible disclosure were the norm, vendors had incentives to hide and downplay security issues to avoid PR problems instead of working to fix the issues immediately. While vendors attempted to hide the issues, bad guys were exploiting these same vulnerabilities against unprotected consumers and businesses.
With full disclosure, even if a patch for the issue is unavailable, consumers have the same knowledge as the attackers and can defend themselves with workarounds and other mitigation techniques. As security expert Bruce Schneier puts it, full disclosure of security vulnerabilities is "a damned good idea."

I've been on both ends of the responsible disclosure process, as a security researcher reporting issues to third-party vendors and as an employee receiving vulnerability reports for my employer's own products. I can comfortably say responsible disclosure is mutually beneficial to all parties involved. Vendors get a chance to resolve security issues they may otherwise have been unaware of, and security researchers can increase public awareness of different attack methods and make a name for themselves by publishing their findings.

My one frustration as a security researcher is that the industry lacks a standard responsible disclosure timeline. We already have a widely accepted system for ranking the severity of vulnerabilities in the form of the Common Vulnerability Scoring System (CVSS). Perhaps it's time to agree on responsible disclosure time periods based on CVSS scores?

Even without an industry standard for responsible disclosure timelines, I would call for all technology vendors to fully cooperate with security researchers. While working together, vendors should be allowed a reasonable amount of time to resolve security issues, and white-hat hackers should be supported and recognized for their continued efforts to improve security for consumers. If you're a comic book fan, then you'll know even a vigilante can be a forgotten hero.

Marc Laliberte is an information security threat analyst at WatchGuard Technologies. Specializing in network security technologies, Marc's industry experience allows him to conduct meaningful information security research and educate audiences on the latest cybersecurity ...
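The CVSS-based timeline idea floated above could be sketched as a simple lookup. The severity bands below follow the standard CVSS v3 qualitative rating scale (Critical/High/Medium/Low), but the day counts are purely illustrative assumptions, not any published vendor or researcher policy:

```python
def disclosure_window_days(cvss_base_score: float) -> int:
    """Map a CVSS v3 base score to an illustrative disclosure window.

    Severity bands follow the CVSS v3 qualitative rating scale;
    the day counts are hypothetical, for discussion only.
    """
    if not 0.0 <= cvss_base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if cvss_base_score >= 9.0:   # Critical: shortest window
        return 30
    if cvss_base_score >= 7.0:   # High
        return 60
    if cvss_base_score >= 4.0:   # Medium
        return 90
    return 120                   # Low: most time to patch


print(disclosure_window_days(9.8))  # a critical bug gets 30 days
```

A scheme like this would give researchers and vendors a shared, predictable expectation up front, with extensions negotiated case by case as the author suggests.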

CrowdStrike Fails In Bid To Stop NSS Labs From Publishing Test...

NSS results are based on incomplete and materially incorrect data, CrowdStrike CEO George Kurtz says.

White House Announces Retaliatory Measures For Russian Election-Related Hacking

35 Russian intelligence operatives ejected from the US, and two of the "Cyber Most Wanted" are frozen out by the Treasury Department. UPDATED 4:00 PM E.T.

THURSDAY -- The US today formally ejected 35 Russian intelligence operatives from the United States and imposed sanctions on nine entities and individuals: Russia's two leading intelligence services (the G.R.U. and the F.S.B.), four individual G.R.U. officers, and three other organizations.

The actions are the Obama administration's response to a Russian hacking and disinformation campaign used to interfere in the American election process. The FBI and the Department of Homeland Security also released new declassified technical information on Russian civilian and military intelligence service cyber activity, in an effort to help network defenders protect against these threats. Further, the State Department is shutting down two Russian compounds, in Maryland and New York, used by Russian personnel for intelligence-related purposes. Plus, the US Department of Treasury sanctioned two members of the FBI's Cyber Most Wanted List, Evgeniy Mikhailovich Bogachev and Aleksey Alekseyevich Belan.
Infosec pros will recognize Bogachev especially as the alleged head of the GameOver Zeus botnet.

A $3 million reward for info leading to his arrest has been available for some time. Treasury sanctioned Bogachev and Belan "for their activities related to the significant misappropriation of funds or economic resources, trade secrets, personal identifiers, or financial information for private financial gain.

"As a result of today’s action, any property or interests in property of [Bogachev and Belan] within U.S. jurisdiction must be blocked and U.S. persons are generally prohibited from engaging in transactions with them." This is the first time sanctions have been issued under an Executive Order first signed by President Obama in April 2015, and expanded today.

The original executive order gives the president authorization to impose some sort of retribution or response to cyberattacks, and also allows the Secretary of Treasury, in consultation with the Attorney General and Secretary of State, to institute sanctions against entities behind cybercrime, cyber espionage, and other damaging cyberattacks.

That includes freezing the assets of attackers. The sanctions announced today are not expected to be the Obama administration's complete response to the Russian operations.
In a statement, the president said "These actions are not the sum total of our response to Russia’s aggressive activities. We will continue to take a variety of actions at a time and place of our choosing, some of which will not be publicized." The moves will put pressure on president-elect Donald Trump to either support or attempt to lift the sanctions on Russian officials and entities.

Trump has expressed skepticism at the validity of American intelligence agencies' assertions that such a campaign occurred at all. When asked by reporters Wednesday night about the fact that these sanctions were set to be announced, Trump said, “I think we ought to get on with our lives.
I think that computers have complicated lives very greatly.

The whole age of computer has made it where nobody knows exactly what is going on."

The NY Times reported today that immediate sanctions are being imposed on four Russian intelligence officials: Igor Valentinovich Korobov, the current chief of the G.R.U., as well as three deputies: Sergey Aleksandrovich Gizunov, the deputy chief of the G.R.U.; Igor Olegovich Kostyukov, a first deputy chief; and Vladimir Stepanovich Alekseyev, also a first deputy chief of the G.R.U.

From the Times: The administration also put sanctions on three companies and organizations that it said supported the hacking operations: the Special Technologies Center, a signals intelligence operation in St. Petersburg; a firm called Zor Security that is also known as Esage Lab; and the Autonomous Non-commercial Organization Professional Association of Designers of Data Processing Systems, whose lengthy name, American officials said, was cover for a group that provided special training for the hacking.

On Wednesday, the Russian Ministry of Foreign Affairs' official representative, Maria Zakharova, said in a statement on the ministry's website: "If Washington really does take new hostile steps, they will be answered ... any action against Russian diplomatic missions in the US will immediately bounce back on US diplomats in Russia."

'Proportional' response

The news comes after President Obama stated in October that the US would issue a "proportional" response to Russian cyber attacks on the Democratic National Committee. The administration has used the word "proportional" when discussing cyber attacks before.
In December 2014, while officially naming North Korea as the culprit behind the attacks at Sony Pictures Entertainment, President Obama said the US would "respond proportionately." That attack was against one entertainment company, however, and not a nation's election system, so the proportions are surely different.

"We have never been here before," said security expert Cris Thomas, aka Space Rogue, in a Dark Reading interview in October. "No one really knows what is socially acceptable and what is not when it comes to cyber. We have no 'Geneva Convention' for cyber."

According to Reuters reports, "One decision that has been made, [officials] said, speaking on the condition of anonymity, is to avoid any moves that exceed the Russian election hacking and risk an escalating cyber conflict."

As Christopher Porter, manager of the Horizons team at FireEye, explained in a Dark Reading interview in October, Russian doctrine supports escalation as a way to de-escalate tensions or conflict. "If the US administration puts in place a proportional response, Moscow could do something even worse to stop a future response … I think that is very dangerous."

"The administration, fellow lawmakers and general public must understand the potentially catastrophic consequences of a digital cyber conflict escalating into a kinetic, conventional shooting-war," said Intel Security CTO Steve Grobman, in a statement. "While offensive cyber operations can be highly precise munitions, in that they can be directed to only impact specific targets, the global and interconnected nature of computing systems can lead to unintended consequences.
Impacting digital infrastructure beyond the intended target opens the door to draw additional nation states into a conflict.

This increases risk to civilian populations as countries see the need to retaliate or escalate."

ORIGINAL STORY: Officials stated Wednesday that the White House will announce, as early as today, a series of measures the US will use to respond to Russian interference in the American election process.

The news comes after President Obama stated in October that the US would issue a "proportional" response to Russian cyber attacks on the Democratic National Committee.  Not all the measures will be announced publicly.

According to CNN, "The federal government plans some unannounced actions taken through covert means at a time of its choosing." Wednesday, CNN reported that as part of the public response, the administration is expected to name names -- specifically, individuals associated with a Russian disinformation operation against the Hillary Clinton presidential campaign.

The actions announced are expected to include expanded sanctions and diplomatic actions. Reuters reported Wednesday that "targeted economic sanctions, indictments, leaking information to embarrass Russian officials or oligarchs, and restrictions on Russian diplomats in the United States are among steps that have been discussed."

In April 2015, President Obama signed an Executive Order, which gives the president authorization to impose some sort of retribution or response to cyberattacks.

The EO has not yet been used.
It allows the Secretary of Treasury, in consultation with the Attorney General and Secretary of State, to institute sanctions against entities behind cybercrime, cyber espionage, and other damaging cyberattacks.

That includes freezing the assets of attackers.

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...

'Snoopers' Charter' Set To Become Law In UK

Surveillance bill goes through British Parliament and awaits only the Royal assent to become law before the year ends.

The 'Snoopers’ Charter,' officially known as the Investigatory Powers Bill, is all set to become law before the year ends after it was passed by the British Parliament and awaits the Queen’s stamp of approval, The Register reports. The bill, which is widely regarded as the most stringent of its kind, had its first draft published in November 2015 and was passed by both Houses of Parliament with the Labour Party abstaining.

Under the new legislation, Internet service providers will have to store a back-up of the browsing activities of their users for 12 months and make it available to authorities whenever needed. It will also legalize offensive hacking and bulk collection of personal data by the authorities, despite concerns that this could lead to flaws being exploited to reveal more data than required. This law will legalize what the British government had secretly been doing all along, Prime Minister Theresa May conceded when publishing the first draft.

Read full story here.

Kaspersky Lab and the AV Security Hole

It's unclear what happened in the reported theft of NSA data by Russian spies, but an attacker would need little help to steal if he or she had privileged access to an AV vendor's network, security experts say.

How To Share Threat Intelligence Through CISA: 10 Things To Know

If you want those liability protections the Cybersecurity Information Sharing Act promised, you must follow DHS's new guidelines.

Share information about breaches, attacks, and threats with the federal government without fear of the legal repercussions -- that's the alluring promise of the Cybersecurity Information Sharing Act (CISA, passed as the Cybersecurity Act of 2015). However, those liability protections do not apply to any and all sharing, so if you want to be safe from litigation, you must share information through the guidelines recently released by the US Department of Homeland Security. Security and privacy professionals alike were anxiously awaiting these guidelines because they answer some of the pervading questions about how privacy would be protected when CISA passed.

They also provide some instructions -- particularly for non-federal entities -- on precisely how to conduct their information sharing activities under the new law. Here's what you need to know.

1. You need to remove individuals' personal data before sharing it.
The guidelines require that, before sharing data, an organization remove any information "that it knows at the time of sharing" to be personally identifiable information of "a specific individual that is not directly related to a cybersecurity threat." If you don't do that, you won't get liability protection. The guidelines acknowledge that there may be occasions when PII is "directly related," such as in a social engineering attack. However, sometimes those individuals' relevant characteristics can be shared (job title, for example), but anonymized first.

"The DHS Guidance does a decent job of explaining what ['directly related'] means, but I believe there is still a lot left to subjective decision making by the company doing the sharing," says Jason Straight, chief privacy officer of UnitedLex, and speaker at the upcoming Interop Las Vegas conference. "If they make a 'bad call' and share something they shouldn’t have, what happens? Do they not get liability protection? Who decides?" Straight also points out that this requires that organizations put in place people, processes, and technology they might not have had before.

2. The personal data you need to remove may be more extensive than you think.
The guidelines provide a list of private data types that are protected by regulation, are unlikely to be directly related to a cybersecurity threat, and should therefore be on your watch list when scrubbing. That list includes not just your basic PII and personal health information, but also human resource information (including performance reviews and such), consumer information protected by the Fair Credit Reporting Act, education history protected by the Family Educational Rights and Privacy Act, financial information including investment advice protected by the Gramm-Leach-Bliley Act, identifying information about property ownership (like vehicle identification numbers), and identifying information about children under 13 protected by the Children's Online Privacy Protection Act.

3. Be particularly careful of Europeans' personal data.
European privacy laws protecting personal data are much more rigorous than American ones, and the divide is only getting wider.
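As a rough illustration of the pre-share scrubbing obligation described in items 1 and 2, a submitter might run indicator text through a redaction filter before sending it on. The patterns below cover only two easy PII types and are simplified assumptions for the sketch, not DHS's actual specification; a real scrubber would also have to handle the HR, health, financial, and education records listed above:

```python
import re

# Simplified patterns for two common PII types; a production scrubber
# would need many more rules plus human review for ambiguous cases.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def scrub_pii(text: str) -> str:
    """Redact email addresses and SSN-shaped strings from free text."""
    text = EMAIL_RE.sub("[REDACTED-EMAIL]", text)
    return SSN_RE.sub("[REDACTED-SSN]", text)


# Hypothetical incident note about to be shared as a threat indicator:
report = "Phish sent from badguy@evil.example to jdoe@corp.example, SSN 123-45-6789 in lure."
print(scrub_pii(report))
```

Note that the "directly related" judgment call Straight describes cannot be automated away: the attacker's address in the example above may well be directly related to the threat, so a filter like this is a safety net, not a substitute for review.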

As we've explained before:  The EU General Data Protection Regulation (GDPR), a replacement for the EU Data Protection Directive, is expected to be ratified by European Parliament this spring session, and go into effect by 2018. The GDPR will expand the definition of "personal data" to "encompass other factors that could be used to identify an individual, such as their genetic, mental, economic, cultural or social identity," according to IT Governance. ... So, data on Europeans' shoe sizes and political affiliations and more may be protected. Violations of GDPR have proposed fines of up to 4% of annual global revenue. Many breaches of personal data must be reported within 72 hours of discovery.
So, it's no small issue when the data is misused or lost. Plus, the newly proposed trans-Atlantic data transfer agreement, EU-US Privacy Shield, if passed, will create a host of new regulations about how the US is permitted to handle data, and what European citizens' legal rights are in the event that Americans violate their rights. You're better off upping your data classification game and avoiding sharing European citizens' data at all through CISA.

4. If you want liability protection, share with DHS or ISACs and not other federal agencies.
Liability protection is only given when you share information with DHS’s National Cybersecurity and Communications Integration Center (NCCIC) -- the official hub for the sharing of cyber threat indicators between the private sector and the federal government -- or with the industry ISACs (like FS-ISAC) that will pass the data on to DHS. Again, this only happens if the data is scrubbed of personal information before you share it. CISA does allow you to share cyber threat indicators with other federal agencies, "as long as ... the sharing is conducted for a cybersecurity purpose," but you will not get the liability protections.

5. DHS scrubs it of personal information too, but...
DHS will review all threat data submitted and -- with automatic and manual means -- remove any remaining pieces of personal information before sharing it with any other agencies. So, no data submitted will go to waste, but you won't get the liability protection. Plus, there is a privacy issue, considering that one federal agency (DHS) has already seen information that it should not have. CISA does, however, require federal entities to notify, "in a timely manner, any United States person whose personal information is known or determined to have been shared in violation of CISA." That notification is only required for US persons, according to CISA, but "as a matter of policy, DHS extends individual notification to United States and non-United States persons alike in accordance with its Privacy Incident Handling Guidelines."

6. Joining AIS and building a TAXII client makes all this easier.
All that data scrubbing might sound like a nightmare! Who would bother sharing anything at all? Luckily, DHS NCCIC has automated and standardized the process to make it less painful. The Automated Indicator Sharing (AIS) initiative allows organizations to format and exchange threat indicators and defense measures in a standardized way, using standard technical specifications that were developed to satisfy CISA's private data scrubbing requirements. The Structured Threat Information eXpression (STIX) and Trusted Automated eXchange of Indicator Information (TAXII) are standards for data fields and communication, respectively. OASIS now manages the specs. To share threat info, AIS participants acquire their own TAXII client, which communicates with the DHS TAXII server.

As a DHS representative explained in a statement to Dark Reading: "A TAXII client can be built by any organization that wishes to do so based on the TAXII specification (http://taxiiproject.github.io/).

DHS has built an open-source TAXII client for any organization that would like to use it free of charge, or incorporate the code into their existing systems.
In addition, there are a number of commercially available products that incorporate TAXII connectivity.
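To make the AIS data model concrete, here is a minimal sketch of a threat indicator as a STIX object, built with nothing but the standard library. At the time of the AIS rollout the wire format was STIX 1.x XML; the field names below instead follow the later STIX 2.1 JSON Indicator object, and the IP address and name are invented examples, so treat this as an illustration of the data model rather than the exact AIS payload. A real TAXII client would add authentication, validation, and transport on top:

```python
import json
import uuid
from datetime import datetime, timezone


def make_indicator(pattern: str, name: str) -> dict:
    """Build a minimal STIX 2.1-style Indicator object (illustrative only)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",  # STIX IDs are type--UUID
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,          # a STIX pattern matching the observable
        "pattern_type": "stix",
        "valid_from": now,
    }


# A hypothetical command-and-control address observed in an attack:
ioc = make_indicator("[ipv4-addr:value = '198.51.100.7']", "Example C2 server")
print(json.dumps(ioc, indent=2))
```

Standardizing on a machine-readable object like this is what lets DHS apply its automated PII scrubbing and redistribute indicators at scale.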

A list can be found at http://stixproject.github.io/supporters/." To date, four federal agencies and 50 non-federal entities have signed up for AIS.

7. There are other ways to share indicators, too.
Threat info can also be shared with DHS via:

Web form: https://www.us-cert.gov/forms/share-indicators

Email: [email protected], with each submission containing: a title; a type (either indicator or defensive measure); the valid time of the incident or knowledge of the topic; the tactics, techniques, and procedures (TTP) involved, even if pointing to a very simple TTP with just a title; and a confidence assertion regarding the level of confidence in the value of the indicator (e.g., high, medium, low).

8. There are rules government agencies must follow, and punishments if they don't.
The federal agencies that receive the data shared through CISA must follow certain operational procedures that moderate authorized access and ensure timely dissemination of threat data.

From the interim procedure document:

Failure by an individual to abide by the usage requirements set forth in these guidelines will result in sanctions applied to that individual in accordance with their department or agency’s relevant policy on Inappropriate Use of Government Computers and Systems. Penalties commonly found in such policies, depending on the severity of misuse, include: remedial training; loss of access to information; loss of a security clearance; and termination of employment.

9. There are still privacy concerns.
Although the list of privacy-related laws mentioned in section 2 above might seem pretty extensive, Jadzia Butler of the Center for Democracy and Technology pointed out:

...the list does not include the Electronic Communications Privacy Act (ECPA) or the Wiretap Act – the two laws most likely to be “otherwise applicable” to information sharing authorized by the legislation because they prohibit (with exceptions) the intentional disclosure of electronic communications.

Another question: what will agencies do with all that data once they have it? Will it only be for cybersecurity purposes, or what? The CISA and DHS Privacy and Civil Liberties Interim Guidelines state specifically how the federal government can make use of the information. Other uses are expressly prohibited, but some privacy experts say the language itself is not prohibitive enough, and the official privacy impact assessment (published here) says "Users of AIS may use AIS cyber threat indicators and defensive measures for purposes other than the uses authorized under CISA." As is, the uses permitted by CISA extend beyond direct cybersecurity attacks.

They may also use the submitted information for the purposes of: responding to, preventing, or mitigating a specific threat of death, serious bodily harm, serious economic harm, a terrorist act, or use of a weapon of mass destruction; responding to, investigating, prosecuting, or preventing a serious threat to a minor, including sexual exploitation or physical threats to safety; or preventing, investigating, disrupting, or prosecuting espionage, censorship, fraud, identity theft, or IP theft.

As Butler wrote: For example, even under these guidelines, information shared with the federal government for cybersecurity reasons could be stockpiled and mined for use in unrelated investigations of espionage, trade secret violations, and identity theft. Without additional limitations, the information sharing program could end up being used as a tool by law enforcement to obtain vast swaths of sensitive information that could otherwise be obtained only with a warrant or other court order.
In other words, privacy advocates’ warnings that CISA is really a surveillance bill dressed in cybersecurity clothing may still come to fruition.

10. The liability protections themselves aren't entirely clear.
"The liability protection is fairly broad but not clear that it includes protection from disclosure through litigation process (discovery requests) or subpoenas," says Straight. "The big risk there, in my view, is that it would be potentially possible to use the fact that a breached company shared threat intel under CISA as evidence of when a company was aware of a threat or incident.

This could become part of a broader claim by a plaintiff that the breached company did not do enough to mitigate or respond effectively to the incident." Sharing threat data isn't the only thing that may come with risks; simply receiving threat feeds via AIS could have legal risks, according to Straight. "An organization that receives threat feeds should be prepared to take on the burden of assessing the threats and responding appropriately," he says. "This will create a burden on the receiving organization that did not exist before.

Also, I believe there is some risk in receiving threat data that you are not equipped to act upon.

Again, it is conceivable that the fact that you received 'notice' of a threat through threat sharing, did nothing, and were then compromised by that threat could be used against you in litigation or even a regulatory action."

So, sharing is caring, but do it carefully. "I should say that I am in favor of threat intel-sharing," says Straight, "but any organization seeking to do so should make sure it understands what it is getting into and can support an ongoing threat intelligence consumption, production, and sharing process.
In my view, none of the [government] documents or commentary I’ve seen so far, including DHS Guidance, sufficiently addresses the issues I have raised."

Straight will present "Avoiding Legal Landmines Surrounding Your IT Infrastructure: Policies and Protocols" at Interop Las Vegas May 4.

Find out more about security threats at Interop 2016, May 2-6, at the Mandalay Bay Convention Center, Las Vegas.

Click here for pricing information and to register.

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...