Home Tags Database

Tag: Database

DDoS protection biz Incapsula knackers its customers’ websites

An unwelcome PITSTOP

Glitches at distributed denial-of-service mitigation biz Incapsula left the websites it defends offline twice on Thursday. Incapsula blamed "connectivity issues" for the global PITSTOP, aka the worldwide degradation of its services. "A rare case triggered an issue on the Incapsula service and caused two system-wide errors at 9:44 UTC and 14:50 UTC making sites inaccessible," a spokeswoman told us. "The issue was identified immediately and actions were taken to contain it and restore service.

The root cause has been identified and the Incapsula development and ops teams have corrected the issue. We apologize for the inconvenience to our customers." The data center security firm elaborated on the situation on its system status page and in a string of tweets. Affected sites included the blog of IT security industry veteran Graham Cluley. He tweeted: "Apologies to those trying to get to my site. @Incapsula_com is down for the second time today, bringing my site with it." ®

Incapsula Incapsulating Thursday's problems

Bootnote

PITSTOP – Partial Inability To Support Totally Optimal Performance: Not quite a full TITSUP, which is a Total Inability To Support Usual Packets.

OpenSSL flaw disclosure: Right thought, wrong time

Tech has plenty of holy wars -- Windows vs Linux, emacs vs vi, and Perl vs Python, to name a few -- and security has its own: vulnerability disclosure.

At times it makes sense to publicly disclose a security vulnerability, but the recently revealed out-of-bounds read flaw in OpenSSL isn't one of them. Attackers can trigger the out-of-bounds read in OpenSSL's b2i_PVK_bio() function with a specially crafted private key, which could lead to heap corruption and potentially leak memory contents, according to a post by Guido Vranken, a software engineer at Intelworks.
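
The bug belongs to a familiar class: a parser trusts a length field embedded in attacker-controlled input and reads that many bytes without checking the claim against the buffer it actually holds. The sketch below is a hypothetical illustration of that pattern, not OpenSSL's code (b2i_PVK_bio() is C, where the missing check silently pulls in adjacent heap memory rather than throwing an exception); the blob format and names here are invented.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical illustration of the out-of-bounds read bug class: a parser
// trusts a length field taken from attacker-controlled input. This is NOT
// OpenSSL's code; the real function is C, where the same mistake quietly
// reads adjacent heap memory instead of throwing.
public class KeyBlobParser {

    // Parses a toy "key blob": [4-byte little-endian payloadLength][payload...]
    static byte[] parseUnsafely(byte[] blob) {
        ByteBuffer buf = ByteBuffer.wrap(blob).order(ByteOrder.LITTLE_ENDIAN);
        int declaredLen = buf.getInt();          // attacker controls this value
        byte[] payload = new byte[declaredLen];  // no sanity check against blob.length
        buf.get(payload);                        // in C: memcpy(payload, p, declaredLen)
        return payload;                          // -> reads past the real buffer
    }

    // The fix is a bounds check on the declared length before it is used.
    static byte[] parseSafely(byte[] blob) {
        ByteBuffer buf = ByteBuffer.wrap(blob).order(ByteOrder.LITTLE_ENDIAN);
        int declaredLen = buf.getInt();
        if (declaredLen < 0 || declaredLen > buf.remaining()) {
            throw new IllegalArgumentException("declared length exceeds input");
        }
        byte[] payload = new byte[declaredLen];
        buf.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        // A crafted blob: claims 1024 bytes of payload but only carries 4.
        byte[] crafted = {0x00, 0x04, 0x00, 0x00, 1, 2, 3, 4};
        try {
            parseUnsafely(crafted);              // Java throws BufferUnderflowException;
        } catch (RuntimeException e) {           // C would quietly read heap memory here
            System.out.println("unchecked parse failed: " + e);
        }
        try {
            parseSafely(crafted);
        } catch (IllegalArgumentException e) {
            System.out.println("checked parse rejected input: " + e.getMessage());
        }
    }
}

The fix, in C as in this sketch, is a single bounds check on the declared length before it is used.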

The vulnerability was reported to OpenSSL on Feb. 24, but Vranken said the project team informed him on Feb. 26 that the report, along with other reports submitted around that time, would have to wait until the next release.
Vranken publicized the bug on his blog on Mar. 1, the same day OpenSSL released versions 1.0.2g and 1.0.1s. "It's not necessarily more secure to have vulnerable code running on servers for a month or more while attackers, if any (for this vulnerability), are not bound to release cycles and have the advantage of time," he wrote. The argument that administrators and users have to know about security vulnerabilities right away and can't wait for updates is frequently used to justify public disclosures.

Certainly, there are times when openly revealing a bug can spur a lagging company to prioritize the issue and get it fixed. That was the case with last year's automobile hack, as researchers Charlie Miller and Chris Valasek worked with Chrysler for nine months to fix the security flaw that could let attackers wirelessly break into some vehicles and remotely control a 2015 Jeep Cherokee.

Chrysler issued a recall notice within days after the duo's "stunt hack" with Wired's Andy Greenberg at the wheel. That isn't the case with the OpenSSL flaw since the project team acknowledged the report and indicated it was working on a fix.

Even Vranken acknowledged the team has to "conform to deadlines and schedules."

No better, no worse

While it should be fixed at some point, the bug doesn't seem critical enough to warrant pre-emptively disclosing it before a patch. Vranken didn't provide information regarding severity or exploitability in his post, but an entry on VulnDB, a comprehensive vulnerability database from Risk Based Security, suggests this is not a show-stopping, drop-everything-and-get-on-it flaw. VulnDB rated the flaw as "high," but assigned a base score of 7.8, an exploitability score of 8.6, and an impact score of 7.8.

The scores, based on the Common Vulnerability Scoring System as well as other internal classifications and metrics, are used to determine whether a vulnerability can be easily exploited and whether a public exploit is available. This makes the vulnerability "no better or no worse" than the 60-or-so OpenSSL flaws found over the past two years, said Bill Ledingham, CTO of Black Duck Software. "This is another in a long line of vulnerabilities reported against OpenSSL as researchers pore over the code." It would be a good idea for OpenSSL to make sure similar out-of-bounds read vulnerabilities aren't lurking in other sections of the code, which may wind up being more critical than this particular one.

Not under attack

Another good reason for a public disclosure would be if the flaw were actively under attack and being aware could help administrators beef up their defenses.

That isn't the case in this situation, as Vranken wasn't aware of any incidents, and the VulnDB entry doesn't list any, either.

Thanks to the disclosure, attackers who didn't know about the issue now have the details and can experiment to craft an exploit, while defenders have no easy way to protect their systems. IT teams have to wait for a new OpenSSL release, which they had to do before the disclosure anyway -- so nothing has been gained by jumping the gun.

For the moment, administrators with OpenSSL in their environments can rest assured they don't need to do anything about this specific bug.

Responsible disclosure may take longer and may not be as exciting, but it helps improve overall security because by the time the details are public, the fix is available.

There's some comfort in being able to say, yes, this is a serious issue, but look, here's what can be done to address it to protect the systems/network.

The endless drumbeat of software vulnerabilities can wear down even the most security-conscious IT administrator, especially when it's not clear how a flaw can be exploited, whether there's an active threat, or even what to do as a result of the bug report. Researchers need to think through a vulnerability's actual impact. Just because it's potentially serious doesn't automatically make it critical.

There are only so many times people can be told there is nothing they can do about a serious flaw before they start ignoring vulnerability reports altogether.

That's not what anyone wants to see happen in IT security.

Google says it won’t Google jurors in upcoming Oracle API copyright...

It was just days ago when the federal judge presiding over the upcoming Oracle v. Google API copyright trial said he was concerned that the tech giants were already preparing for a mistrial—despite the fact that the San Francisco jury hasn't even been picked yet. US District Judge William Alsup said he was suspicious that, during the trial, the two might perform intensive Internet searches on the chosen jurors in hopes of finding some "lie" or "omission" that could be used in a mistrial bid.

To placate the judge's fears, Google said (PDF) it won't do Internet research on jurors after a panel is picked for the closely watched trial, set to begin on May 9. "The Court stated that it is considering imposing on both sides a ban on any and all Internet research on the jury members prior to verdict. Provided the ban applies equally to both parties, Google has no objection to imposition of such a ban in this case," Google attorney Robert Van Nest wrote to the judge in a Tuesday filing.

Google was referring solely to Internet searches of the jury once jurors were picked. Oracle didn't go so far in its response Tuesday and said the dueling companies should be able to investigate jurors both before and after they are chosen. "...the parties should be permitted to conduct passive Internet searches for public information, including searches for publicly available demographic information, blogs, biographies, articles, announcements, public Twitter and other social media posts, and other such public information," Oracle attorney Peter Bicks wrote (PDF) to Alsup on Tuesday.

However, Oracle was concerned that Google might tap its vast database of "proprietary" information connected to jurors' Google accounts and said such research should be off-limits. "Neither party should access any proprietary databases, services, or other such sources of information, including by way of example information related to jurors', prospective jurors', or their acquaintances' use of Google accounts, Google search history information, or any information regarding jurors' or prospective jurors' Gmail accounts, browsing history, or viewing of Google served ads..." Oracle wrote. Google has never suggested it would violate its customers' privacy in such a way.

Oracle is seeking $1 billion in damages after successfully suing the search giant for infringing Oracle's Java APIs that were once used in the Android operating system.

A federal appeals court has ruled that the "declaring code and the structure, sequence, and organization of the API packages are entitled to copyright protection." The decision reversed the outcome of the first Oracle-Google federal trial before Alsup in 2012.

APIs are essential and allow different programs to work with one another. The new jury will be tasked with deciding solely whether Google has a rightful fair-use defense to that infringement.
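
It helps to separate an API's "declaring code" from the code that implements it, since the declarations are what the appeals court found copyrightable. The snippet below is an invented illustration in the style of the Java classes at issue, not the actual Java SE or Android source:

// Hypothetical illustration of "declaring code" vs. implementing code.
// The names mimic the java.lang.Math style at issue but are not the real sources.
package com.example.api;

public final class Numbers {

    // Declaring code: the package, class name, method name, parameter types and
    // return type. This "structure, sequence, and organization" is the material
    // the appeals court held can be covered by copyright.
    public static int max(int a, int b) {
        // Implementing code: the body behind the declaration. A clean-room
        // implementation can rewrite this part while keeping the declaration
        // identical, which is what lets existing programs and programmers
        // keep working unchanged.
        return (a >= b) ? a : b;
    }
}

Any program compiled against Numbers.max(int, int) keeps working however the body is rewritten, which is why a compatible reimplementation reproduces the declarations verbatim while supplying its own implementing code.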

CVE bug system has bugs – quick, use this alternative, say...

Allege critical software vulns ignored in huge backlog

Frustrated security professionals acting on behalf of equally irritated researchers unable to gain Common Vulnerabilities and Exposures (CVE) numbers for their bugs have started an alternative numbering system to help triage what they describe as a huge backlog of ignored software flaws. Several prominent researchers are now backing the Distributed Weakness Filing (DWF) System, badged as an alternative for the herds of researchers unable to gain CVE numbers for their legitimate vulnerabilities. The researchers say the movement is a response to inaction from US government-funded CVE-handler MITRE Corporation over the last six months.

They claim it has allocated far fewer CVE numbers to vulnerabilities and has been much less responsive to requests from researchers. MITRE has been contacted for comment.

Common Vulnerabilities and Exposures numbers are the numerical tags assigned to legitimate verified bugs; they act as a single source of truth that security companies and engineers in corporate offices use to assign and apply patches. The system is crucial to the security of software and has long been assigned – largely for US technology – by the MITRE Corporation. Dozens of researchers from multiple countries – from upstart hackers to competent experts with track records – have told this reporter they have been unable to gain a CVE number from MITRE.

The effects of the alleged radio silence are tangible; the Reg understands that many US government agencies do not react to disclosed vulnerabilities that are not catalogued by the National Vulnerability Database – which in turn ignores bugs that lack assigned CVEs. Some large private sector corporations also respond only to CVE-numbered bugs – leading to the possibility that legitimate and critical vulnerabilities may remain unpatched due to MITRE's alleged unwillingness to allocate a CVE number to them. With the number of new bugs outpacing the speed at which vulnerability numbers are allocated, researchers say not enough is being done to cover important but forgotten critical bugs in popular software.
Some researchers say they have held off disclosure as a result, while many bugs are published without CVE tracking. Kurt Seifried, who established the alternative system, is a Red Hat security staffer and MITRE board member but speaks to The Register in his personal capacity. He says the system could remain a bridge for those cut out of CVE allocation or, in the worst-case scenario, become a full-blown replacement with eventual co-opting of the CVE title.

"We are really seeking a response from MITRE," Seifried says, adding he would be glad to retire the effort should MITRE fill the gap. "Your first job is to get CVEs out the door, and the second is to engage with industry, and neither of those is happening.

"I planned to maybe launch this (DWF) in the summer, but I saw that it was getting worse and we as an industry just can't do another four months of no one getting CVEs."

Seifried and other researchers contacted by this reporter say they have tried hard to inquire and lobby MITRE for CVE allocation – to no avail. It has sparked a series of complaints sent to this reporter and posted in public online mailing lists. A researcher known as Radek said he'd failed to elicit a response when disclosing his OS X vulnerabilities. "I have not heard back from MITRE," he says. "I am a little bit confused why vulnerability like this one which affects few hundreds or even more applications do not have a CVE assigned.
It is ridiculous in my opinion."

Security researcher David Jorm says some prominent researchers able to gain immediate CVEs harbour such disdain for the alleged allocation failings that they have submitted entirely fake and mocking bugs and still received CVE numbers. "There are a lot of legitimate researchers who can't get CVE," Jorm says. "It seems that you need to be a rock star to get a number." Jorm, a respected security researcher in Australia, says the rules and procedures for allocation need to be clearly defined for the stability of the technology industry. "A lot of feeds aggregate CVEs for vulnerability and threat intelligence platforms, as do a lot of vulnerability scanners; the downstream impact is enormous," he says.

The DWF system will largely map and complement CVE, such that CVE-2016-0101 will become DWF-2016-0101.
It has, like the CVE system, corporations serving as numbering authorities.
Interested researchers can look over the DWF system at GitHub. ®
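
Because the mapping described above is mechanical, a tracking tool could translate identifiers with a one-line rule. A minimal sketch, assuming the DWF prefix simply replaces CVE for the same year and number (as in the CVE-2016-0101 to DWF-2016-0101 example):

// Minimal sketch of the CVE-to-DWF identifier mapping described above,
// assuming DWF mirrors the year-number pair (e.g. CVE-2016-0101 -> DWF-2016-0101).
public class DwfMapper {

    static String toDwf(String cveId) {
        if (!cveId.matches("CVE-\\d{4}-\\d{4,}")) {
            throw new IllegalArgumentException("not a CVE identifier: " + cveId);
        }
        return cveId.replaceFirst("^CVE-", "DWF-");
    }

    public static void main(String[] args) {
        System.out.println(toDwf("CVE-2016-0101")); // prints DWF-2016-0101
    }
}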

2016 Security 100: 20 Coolest Web And Application Security Vendors

There's been a lot of talk in the security industry about the death of the perimeter, as protection technologies on the edge of the network have proven insufficient to fully stop today's threats. W...

Google Offers Tool to Help Evaluate Vendor Security

The vendor security evaluation framework provides questions that organizations need to ask to accurately assess a third party's security and privacy readiness, Google said. Google has released to open source a framework that it implements internally to...

eWEEKchat March 9: Is Data-Centric Security the Future?

This will be a particularly timely eWEEKchat conversation on how security is moving ahead in the nascent IoT age. On Wednesday, March 9, at 11 a.m. PST/2 p.m. EST/7 p.m. GMT, @eWEEKNews will host its 41st monthly #eWEEKChat.

The topic will be "Is Data-Centric Security the Future?" It will be moderated by Chris Preimesberger, who serves as eWEEK's editor of features and analysis.

Some quick facts:

Topic: "Is Data-Centric Security the Future?"
Date/time: March 9, 2016 @11 a.m. PST/2 p.m. EST/7 p.m. GMT
Moderator: Chris Preimesberger: @editingwhiz
Tweetchat handle: Use #eWEEKChat to follow/participate, but it's easier and more efficient to use real-time chatroom links.
Chatroom real-time links: We have two: http://tweetchat.com/room/eweekchat or http://www.tchat.io/rooms/eweekchat.

Both work well.
Sign in via Twitter and use #eweekchat for the identifier.

"Is Data-Centric Security the Future?"

Data-centric security is designed to protect data at all times while allowing it to flow freely and securely anywhere, without the need for plug-ins, proxies, gateways or changes in user behavior. This defines a large trend in IT in which the primary function is the management and manipulation of data itself, rather than security focused primarily on the application, networking or storage layers.

This type of security follows the data item or store around wherever it travels—on-premises or off. This is as close to an airtight concept as there can be when it comes to securing the Internet of things, many industry observers say.

With the advent of virtualized IT systems, the worldwide explosion in the use of cloud and managed services, and the increasing use of data storage and big data analytics inside clouds, data is often separated into so-called "chunks" for security purposes and spread across various locations. Later, when the entire file is needed, systems reassemble these chunks—usually with a just-in-time methodology.

All this movement has made conventional security a central problem, and data-centric security—centered around government-level encryption—may have come to the rescue as the only way to handle all this travel in a reliable fashion.

Some of the leading innovators in this space include Thales Security, which recently bought Vormetric for this purpose; IONU, whose data isolation platform creates a separate and secure zone where data is insulated from the outside world; Dataguise, which specializes in data-centric security for NoSQL server shops; and Vera, which does both file-centric and data-centric security.

These are just a few of the data points we'll talk about on March 9. We also will pose questions such as:

--What do you personally see as the No. 1 advantage of using data-centric security?
--What other companies do you know will become data-centric security players in 2016?
--Do you see, or do you not see, data-centric security becoming mainstream in 2016?

Join us March 9 at 11 a.m. Pacific/2 p.m. Eastern/7 p.m. GMT for an hour.

Chances are good that you'll learn something valuable.
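
To make the "chunking" idea above concrete, here is a minimal toy sketch (not any vendor's product) that encrypts a record with AES-GCM via the standard javax.crypto API, splits the ciphertext into fixed-size chunks that could be scattered across separate stores, and reassembles and decrypts them just in time. The chunk size, in-memory storage and key handling are simplified assumptions.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy illustration of data-centric "chunking": encrypt first, then split the
// ciphertext into chunks that can live in different locations, and reassemble
// just in time. Not any vendor's implementation.
public class ChunkedStoreDemo {

    static final int CHUNK_SIZE = 32; // bytes per chunk, arbitrary for the demo

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        byte[] plaintext = "Sensitive record that should stay protected wherever it travels"
                .getBytes(StandardCharsets.UTF_8);

        // Encrypt with AES-GCM so the data is protected before it is scattered.
        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = enc.doFinal(plaintext);

        // Split the ciphertext into chunks; each could go to a different store.
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < ciphertext.length; off += CHUNK_SIZE) {
            chunks.add(Arrays.copyOfRange(ciphertext, off,
                    Math.min(off + CHUNK_SIZE, ciphertext.length)));
        }
        System.out.println("stored " + chunks.size() + " chunks");

        // Just-in-time reassembly: concatenate the chunks, then decrypt.
        byte[] reassembled = new byte[ciphertext.length];
        int pos = 0;
        for (byte[] chunk : chunks) {
            System.arraycopy(chunk, 0, reassembled, pos, chunk.length);
            pos += chunk.length;
        }
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(dec.doFinal(reassembled), StandardCharsets.UTF_8));
    }
}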

DataVisor Debuts User Analytics for Security

VIDEO: Yinglian Xie, CEO and co-founder of DataVisor, discusses her firm's new technology that makes use of unsupervised analytics to combat online fraud. There is an increasing consensus among security vendors and technology users that organizations c...

Secret court approves classified rule change on how FBI can use...

On Tuesday, The Guardian reported that the Federal Bureau of Investigation (FBI) has changed its rules regarding how it redacts Americans’ information when it takes international communications from the National Security Agency’s (NSA) database.

The paper confirmed the classified rule change with unnamed US officials, but details on the new rules remain murky. The new rules, which were approved by the secret US Foreign Intelligence Surveillance Court (FISC), deal with how the FBI handles information it gleans from the National Security Agency (NSA).

Although the NSA is technically tasked with surveillance of communications involving foreigners, information on US citizens is inevitably sucked up, too.

The FBI is then allowed to search through that data without any "minimization" from the NSA—a term that refers to redacting Americans' identifiable information unless there is a warrant to justify surveillance on that person. The FBI enjoys privileged access to this information trove, which includes e-mails, texts, and phone call metadata that are sent or received internationally. Recently, the Obama administration said it was working on new rules to allow other US government agencies similar access to the NSA's database.

But The Guardian notes that the Privacy and Civil Liberties Oversight Board (PCLOB), which was organized by the Obama administration in the wake of the Edward Snowden leaks, took issue with how the FBI accessed and stored NSA data in 2014. "As of 2014, the FBI was not even required to make note of when it searched the metadata, which includes the 'to' or 'from' lines of an e-mail," The Guardian wrote. "Nor does it record how many of its data searches involve Americans' identifying details." However, a recent report from PCLOB suggested that the new rules approved by FISC for the FBI involve a revision of the FBI's minimization procedures.
Spokespeople from both the FBI and PCLOB declined to comment on that apparent procedure change, saying it was classified, but PCLOB's spokesperson, Sharon Bradford Franklin, told The Guardian that the new rules "do apply additional limits." A spokesperson for the Office of the Director of National Intelligence said that the new procedures may be publicly released at some point.

ThreatStream Changes Name To Anomali, Adds New Products

The security vendor formerly known as ThreatStream used the RSA conference in San Francisco as a launching platform for its new name, Anomali. "Anomali is the world's leading threat intelligence platform," said the Redwood City, Calif.-based company's director, Rich Scott. Why did the company change names? "It's a reflection of a whole new generation of management of threat intelligence.

The problem that many organizations are faced with is an overwhelming amount of intelligence data," he said. ThreatStream first tackled that challenge in 2013, but now with a new name it has introduced two new products, Harmony Breach Analytics and Anomali Reports. Scott called security "a big data problem." "Harmony takes log data out of SIMs and matches that in a historic, retrospective way, to threat data that has gone on for, in some cases, over a year," he explained. Anomali Reports is a data breach detection service built for SMBs. Anomali's partner program targets both integration partners and service providers.

Rohde & Schwarz Cybersecurity redefines next generation UTM firewalls

In data transmission, bandwidths in the Gigabit range call for new IT security solutions.

This applies in particular to traditional unified threat management (UTM) firewalls, which have limited performance.

At this year's CeBIT, the IT security company Rohde & Schwarz Cybersecurity will present an innovative solution that for the first time meets the challenges posed by higher bandwidths: the UTM+ firewall series with an integrated next-generation engine.

The integrated software also comes with high-end features.

Munich, March 8, 2016 — The UTM+ firewall series was designed especially for the needs of medium-sized businesses.
It is just as powerful as a next-generation firewall (NGFW) due to the integrated single-pass technology. While the efficiency of traditional UTM appliances ends in the megabit range, UTM+ appliances provide performance in the Gigabit range.

And they offer even more: the UTM+ models are easy-to-use, all-in-one solutions and are significantly less expensive than next-generation firewalls. In addition to single-pass technology, further high-performance next-generation firewall features were integrated into the new UTM+ solution.

These include, for example, security mechanisms such as port-independent SSL decryption for automatic analysis of encrypted data traffic.

The permanent layer 7 scanner ensures complete and continuous analysis of data packets – even after successful validation.

The application control feature allows a fine-grained analysis of network traffic.

The firewall operating system is additionally protected with a highly secure firewall container system. Like all new Rohde & Schwarz Cybersecurity products to be showcased at CeBIT, the UTM+ firewalls follow the innovative "security by design" approach, which prevents attacks proactively rather than reactively.

Security certificate: made in Germany

At CeBIT 2016, the Rohde & Schwarz security companies gateprotect, Sirrix, Rohde & Schwarz SIT and ipoque will, for the first time, bundle their broad ranges of technologically leading IT and network security solutions under the umbrella of the new Rohde & Schwarz Cybersecurity GmbH.

The first product of this new big player is the UTM+ V16. The UTM+ V16 is the improved successor model to the successful GP series with V15 software from gateprotect.

The V16 software is not only more powerful, but can also be recognized visually as a Rohde & Schwarz product. Instead of the familiar red, it now comes in the blue and gray Rohde & Schwarz corporate colors.

Rohde & Schwarz Cybersecurity, a wholly owned subsidiary of the Rohde & Schwarz electronics group, develops and manufactures its products exclusively in Germany.

Customers can therefore rely on stringent German quality and data protection standards as well as maximum performance for all Rohde & Schwarz Cybersecurity products.

Contact: Svenja Borgschulte, Tel.: +49 (0)221 801087 85, Fax: +49 (0)221 801087 77, E-Mail: sb@moeller-pr.de
Contact for readers: Christian Reschke, Tel.: +49 (0)30 65884 232, Fax: +49 (0)30 65884 184, E-Mail: christian.reschke@rohde-schwarz.com
https://cybersecurity.rohde-schwarz.com/de

CeBIT 2016 in Hanover, March 14 to 18, hall 6/booth G16

Rohde & Schwarz Cybersecurity

The IT security company Rohde & Schwarz Cybersecurity protects companies and public institutions around the world against espionage and cyberattacks.

The company offers high-end encryption solutions, next-generation firewalls, network traffic analytics and endpoint security software in addition to producing cutting-edge technical solutions for IT and network security.

These “Made in Germany” IT security solutions range from compact all-in-one products to custom solutions for critical infrastructures.

The “security by design” approach, which employs a proactive rather than reactive approach to dealing with cyberattacks, is central to the development of trusted IT solutions.

Around 400 employees work at the current sites in Berlin, Bochum, Darmstadt, Hamburg, Leipzig, Munich and Saarbrücken.

R&S® is a registered trademark of Rohde & Schwarz GmbH & Co. KG. All press releases are available online at https://cybersecurity.rohde-schwarz.com/de. Image material can also be downloaded there.

Security Training for Developers Failing to Keep Up With Threats

NEWS ANALYSIS: Multiple speakers at the RSA conference said developers alone are not to blame for the current state of cyber-security in which threats evolve faster than the defenses. SAN FRANCISCO—It's the best of times and the worst of times to be a software developer.

There are lots of jobs and business opportunities for developers, but thousands of new applications reach the market each day with inadequate attention paid to the security flaws built into them.

Cloud computing, containers, new programming languages and continuous integration and delivery tools are changing the game and enabling developers to create new types of applications and reach new levels of agility.

Despite all the opportunity, there's one area in which developers can't catch a break—security.

Here at the RSA Conference this week there was a lot of talk about Apple vs. the FBI and the coming security market consolidation.

Dig a little deeper and the real issues confronting enterprise CIOs and security managers include the never-ending stream of insecure applications being put into production from vendors as well as corporate developers.

For enterprise developers, this is not necessarily their fault.

They are facing, in geek speak, the Kobayashi Maru Star Trek command test scenario: They can't win.

Either they push out apps quickly and insecurely, or slowly but more securely.
Security processes and agile development methodologies require their own schedules and resources. To that point, a new survey from CloudPassage found that 50 percent of security professionals don't believe security is capable of moving as fast as app release cycles; 65 percent said a lack of resources and organizational silos are the main barriers to getting security into release cycles earlier. Businesses, seeing great opportunities in increasing developer productivity, are pushing developers to get apps out as fast as possible.
Sometimes, security best practices are being ignored. More often, they are merely being put off until later.
Software producers will wait to work on security until hackers find the product's weak spots.

This symptom is already pervasive in the Internet of things.

Experts who monitor and test application security call this "security debt."

Which kinds of applications are the ones causing the most problems? "New ones.

That's the reality," said Amichai Shulman, CTO of Web application firewall vendor Imperva. "There are not bad programmers or bad languages.
It's mostly those apps that have very tight schedules—a very fast time to market—that are the most vulnerable. No one has enough time to weed out vulnerabilities and write secure code."

The biggest code culprits for security these days are APIs for mobile apps and server-side controls.

Companies are creating mobile versions of their legacy applications and in the process generating security bugs. "Companies say let's go mobile, they mobilize the apps and they end up with APIs that are vulnerable," he said.

Again, business imperatives are not necessarily the developer's fault. Nor do security flaws occur because student developers are not getting enough training on writing secure code and preventing exploits like SQL injection and cross-site scripting. It's also a simple numbers problem.
IT industry research shows that over the next few years millions of cyber-security jobs will go unfilled.
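
To make that training gap concrete, the classic SQL injection mistake and its standard fix are only a few lines apart. A hypothetical JDBC sketch, with table and column names invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical JDBC sketch of the classic SQL injection mistake and its fix.
// Table and column names are invented for illustration.
public class LoginDao {

    // Vulnerable: user input is concatenated into the SQL text, so an input like
    //   ' OR '1'='1
    // changes the query's meaning.
    static boolean userExistsUnsafe(Connection conn, String username) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE username = '" + username + "'";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next();
        }
    }

    // Safe: a parameterized query keeps the input as data, never as SQL syntax.
    static boolean userExistsSafe(Connection conn, String username) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE username = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, username);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}

The parameterized version keeps user input as data rather than executable SQL, which is the habit that eliminates most injection bugs.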