As US gov’t surveillance watchdog group opens for business, questions linger

Ars speaks with chairman of the Privacy and Civil Liberties Oversight Board.    

How Qualys network security tools protect University of Westminster from cyber...

Based in the heart of London, the University of Westminster caters for more than 20,000 students and 4,500 staff. That leaves the institution's network security officer, Ashley Pereira, with a huge task in protecting its networks from cyber risks such as zero-day threats and spear phishing, with vulnerability management, patching, device management and security policies all falling under his regular remit.

That's why the University of Westminster selected the QualysGuard Cloud Platform integrated solutions suite to simplify the process of keeping the network protected from cyber threats. "There's a lot to take on during my day-to-day, and Qualys saves a lot of time," Pereira told Computing. "I quickly navigate through to the portal, kick off a scan and then get back some very detailed information that I can use to inform the necessary people we've got vulnerabilities on our network," he said, adding that the subsequently generated report makes it simple to determine an "action plan" for combating vulnerabilities.

"A lot of that can be derived from the Qualys report because it tells you what the vulnerability is, where the patch can be downloaded from and then it's a case of the system admin owners making a request to upgrade that service," Pereira explained.

Prior to adopting Qualys, the university didn't have the capability to monitor vulnerabilities across the entirety of its IT estate, especially as more and more devices have become reliant on being connected to the internet. "We didn't really have a tool that could give us complete visibility of all of our IT assets across the whole of the university," admitted Pereira. "So while you might just take things into account from the server side, there are other endpoints that are connected as well: the student labs, the corporate machines and other things such as printers. They can potentially be a mode from which somebody may attack, so we need to be conscious of all endpoints that connect via IP."

While the previous solution didn't provide full visibility of connections to the network, Qualys has delivered that visibility without affecting network users. "Before, it was a cumbersome task to get scans working while having no impact on the endpoint, while the scans we're doing now have zero impact on service delivery," Pereira explained.
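The scan-report-to-action-plan workflow Pereira describes can be sketched as a small script that groups findings by the system owner responsible for patching. This is a hypothetical illustration only: the CSV column names (`owner`, `host`, `title`, `patch_url`) are assumptions for the sketch, not Qualys's actual report format.

```python
import csv
from collections import defaultdict

def build_action_plan(report_path):
    """Group scan findings by responsible owner, so each team receives
    the list of vulnerabilities on its hosts and where to get the patch."""
    plan = defaultdict(list)
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            plan[row["owner"]].append(
                {"host": row["host"],
                 "vulnerability": row["title"],
                 "patch_url": row["patch_url"]}
            )
    return dict(plan)
```

Grouping by owner reflects the hand-off Pereira describes: the report identifies the fix, and the system admin owners then request the upgrade.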

RBS Markets deploys IBM Rational for virtual testing

Royal Bank of Scotland is investigating how it could extend a virtual test environment to support future development projects. The environment, built using IBM Rational Integration Tester, was initially used to ensure the bank's systems would be able to cope with a new currency if Greece was forced to leave the euro.

Speaking at an IBM Innovate 2013 event in London, Stephen King, head of middleware at RBS Markets, said the bank deployed the test suite to virtualise Swift (Society for Worldwide Interbank Financial Telecommunication) transactions to ensure it would be easy to make code changes to support a new currency had Greece defaulted. "We built a virtualised Swift [payment] system environment in Rational to test changes with the new currency," said King.

He said such a virtual test environment could save the bank a considerable amount in transaction fees in the future, since it avoids testing using the real Swift service. The bank has spent the past three years building a test methodology on top of the Rational toolset. The project began when the bank needed to consolidate its four messaging hubs.
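The virtualised Swift environment King describes can be pictured as a stub that stands in for the real payment gateway, so currency-handling changes can be exercised without incurring live transaction fees. The sketch below is a hypothetical Python illustration of the service-virtualisation idea, not IBM Rational Integration Tester's actual mechanism; the message fields and ACK/NAK responses are assumptions.

```python
class VirtualSwiftService:
    """Minimal stand-in for a Swift gateway: records each payment message
    and returns a canned acknowledgement, so code changes (such as adding
    a new currency) can be tested without touching the real network."""

    def __init__(self, supported_currencies):
        self.supported = set(supported_currencies)
        self.received = []

    def send(self, message):
        # Record the message, then acknowledge or reject it based on
        # whether the currency is configured as supported.
        self.received.append(message)
        if message["currency"] not in self.supported:
            return {"status": "NAK", "reason": "unsupported currency"}
        return {"status": "ACK", "reference": f"REF{len(self.received):06d}"}
```

A regression test against such a stub can first confirm that an unconfigured currency is rejected, then add it to `supported` and confirm the same payment is acknowledged, mirroring the "new currency" scenario the bank tested.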

The hubs are critical to RBS's business by enabling payment and confirmation messages, clearing services and supporting regulatory compliance.

They handle two million messages a day, amounting to £9.5m per week. King admitted there was a climate of fear attached to changing the bank's messaging middleware, but the middleware messaging platform was one of several projects the middleware team needed to deliver. "We needed to consolidate [messaging middleware] within three years and deliver 35 major business change projects, including rewiring the Forex [foreign exchange] business system," he said.

Software development improvements

The bank also needed to improve the quality of its software development. "We wanted to instil quality right through the project and have a test strategy," he said. The bank adapted a software development lifecycle for middleware and took on middleware experts who were trained in the methodology.

"We selected IBM Rational Integration Tester and piloted a payment compliance application which had to be delivered quickly to prevent the bank facing £5m fines," said King. Using Rational and the new methodology, the developer team was able to deliver the application in six weeks. Following the success of the payment application, King said RBS rolled out the methodology and toolset across all the projects the team was working on.

Prior to deploying Rational and the software development methodology, King said regression testing used to take four weeks. It now takes a couple of hours, he said, saving the bank £40,000 in project costs.

Email Alerts
Register now to receive IT-related news, guides and more, delivered to your inbox. By submitting you agree to receive email from TechTarget and its partners.

If you reside outside of the United States, you consent to having your personal data transferred to and processed in the United States.

Update: CIA pays AT&T to search international call database

AT&T is paid more than $10 million annually to search records, NY Times reports.    

Microsoft zero-day vulnerability exploited more widely than expected

Hackers are exploiting a vulnerability in the graphics component of several key Microsoft products much more widely than was initially thought. The vulnerability (CVE-2013-3906) is in the TIFF graphics format used in Microsoft Windows Vista, Windows Server 2008, Microsoft Office 2003-2010 and Microsoft Lync. Microsoft confirmed this exploit had been used in limited attacks against "selected" computers, largely in the Middle East and South Asia.

But the research team at security firm FireEye analysed the zero-day exploit and found links with two cyber operations, indicating it had been used for more than a few targeted attacks. First, the researchers established a link with Operation Hangover, which adds India and Pakistan to the mix of targets. Information obtained from a command-and-control (CnC) server revealed that the Hangover group, believed to operate from India, has compromised 78 computers, 47% of them in Pakistan.

But FireEye researchers also found another group had access to the Microsoft exploit and was using it to deliver the Citadel Trojan malware. This hacker gang – dubbed "the Ark group" by FireEye – may have had access to the exploit before the Hangover group, the researchers found. Information obtained from CnCs operated by the Ark group revealed that 619 targets (4,024 unique IP addresses) had been compromised. Most of the targets are in India (63%) and Pakistan (19%).

Because of the links with Hangover and Ark, the researchers have concluded that the use of this zero-day exploit is more widespread than previously believed. Hangover had previously been connected with a targeted malware campaign.

The Ark group is operating a Citadel-based botnet for organised crime. Security firm Websense claims nearly 37% of Microsoft Office business users are susceptible to the exploit. Alex Watson, director of security research at Websense, urged IT administrators to install the Microsoft Fix it 51004 workaround to mitigate the vulnerability while waiting for a formal patch from Microsoft.


Malware Incidents Go Unreported, Particularly in Large Businesses

The largest companies, those with more than 500 employees, are even more likely to have had an unreported breach, according to a ThreatTrack survey.

Enterprises in the United States are facing mounting cyber-security challenges, with nearly six in 10 malware analysts reporting they have investigated or addressed a data breach that was never disclosed by their company. Moreover, the largest companies, those with more than 500 employees, are even more likely to have had an unreported breach, with 66 percent of malware analysts at enterprises of that size reporting undisclosed data breaches.

These are just two of the troubling findings of an independent blind survey of 200 security professionals dealing with malware analysis within U.S. enterprises, which was conducted by Opinion Matters on behalf of ThreatTrack Security in October. The results suggest that the data breach epidemic–totaling 621 confirmed data breaches in 2012, according to Verizon's 2013 Data Breach Investigations Report–may be significantly under-reported, leaving enterprises' customers and data-sharing partners unaware of a wide array of potential security risks.

When asked to identify the most difficult aspects of defending their companies' networks from advanced malware, more than two-thirds (67 percent) cited the complexity of malware as a chief factor, 67 percent cited the volume of malware attacks, and 58 percent cited the ineffectiveness of anti-malware solutions.

"While it is discouraging that so many malware analysts are aware of data breaches that enterprises have not disclosed, it is no surprise that the breaches are occurring," ThreatTrack CEO Julian Waits said in a statement. "Every day, malware becomes more sophisticated, and U.S. enterprises are constantly targeted for cyber-espionage campaigns from overseas competitors and foreign governments."

More than half (52 percent) of all malware analysts said it typically takes them more than two hours to analyze a new malware sample, while only 4 percent said they are capable of analyzing a new malware sample in less than one hour. More than one-third (35 percent) said one of the most difficult aspects of defending their organization from advanced malware is the lack of access to an automated malware analysis solution.

"This study reveals that malware analysts are acutely aware of the threats they face, and while many of them report progress in their ability to combat cyber-attacks, they also point out deficiencies in resources and tools," Waits continued. Four in 10 respondents reported that one of the most difficult aspects of defending their organization's network was the fact that they don't have enough highly skilled security personnel on staff.

Installing a malicious mobile app, allowing a family member to use a company-owned device, clicking on a malicious link in a phishing email and visiting adult Websites were among the top routes by which senior leadership teams infected the business with malware.

Fear of cyber attack driving a shift from risk-based security, says...

Fear of advanced cyber attacks is driving organisations away from tried-and-tested, risk-based security tactics, leaving them more vulnerable to emerging threats, a survey has found. Fear of attack is causing security professionals to shift focus away from disciplines such as enterprise risk management and risk-based information security towards technical security, according to Gartner's 2013 Global Risk Management Survey. Gartner surveyed 555 organisations in the US, UK, Canada and Germany.

This shift in focus is driven by what Gartner analysts refer to as fear, uncertainty and doubt (FUD), which often leads to reactionary and highly emotional decision making. "While the shift to strengthening technical security controls is not surprising, given the hype around cyberattacks and data security breaches, strong risk-based disciplines, such as enterprise risk management or risk-based information security, are rooted in proactive, data-driven decision making," said John Wheeler, research director at Gartner. "These disciplines focus squarely on the uncertainty (risk) as well as the methods or controls to reduce it. By doing so, the associated fear and doubt are subsequently eliminated," he said.

Gartner believes organisations that shift away from risk-based disciplines, or fail to adopt them, will find themselves at the mercy of the FUD trap. The survey results showed movement away from these disciplines, with just 6% focused on enterprise risk management in 2013, compared with 12% in 2012. Wheeler said that, as IT risk profiles and postures change in the future, an inevitable shift in focus back to these risk-based disciplines will need to occur. "If not, IT organisations may find that more critical, emerging risks will remain undetected, and the company as a whole will be left unprepared," he said.

While FUD sometimes leads to negative management behaviours, Gartner found it can also lead to positive budget impacts for an IT risk management programme. In the short term, this can add staff and resources to an area that is typically cost-constrained. The survey showed that 39% of respondents have been allocated funds totalling more than 7% of the total IT budget, compared with only 23% receiving a similar amount in 2011.

However, the added budget resources are not a given for future years. Unless there is a strong IT risk management programme in place to support the future need for similar levels of budget allocation, Gartner believes the resources will soon evaporate. Gartner recommends that CIOs, CISOs and senior business executives assess the current maturity of their IT risk management programme, and create a strategic road map for risk management to ensure continued funding.

The survey shows that governance of IT risk management is weakening at management levels. Overall, in 2013, 53% of respondents reported using either informal IT risk management steering committees or none at all. This compares with 39% in 2012.

"These incongruent survey findings seem to validate the observation that risk-based, data-driven approaches are falling by the wayside in favor of FUD-based, emotion-driven activities," said Wheeler. "Or, perhaps more disturbingly, they indicate that those who have concerns are simply burying their head in the sand, rather than proactively addressing emerging threats," he said. According to Wheeler, regular communication about emerging IT risks with board members and business leaders will result in better decision making and, ultimately, more desirable business outcomes.

Survey participants also indicated that progress is slowing in linking IT risk indicators to corporate performance indicators. Not only did activity supporting the formal mapping of key risk indicators (KRIs) to key performance indicators (KPIs) decline by seven percent from 2012 to 2013, but mapping also ceased altogether for 17% of respondents in 2013, compared with just 8% in 2012. Gartner believes this shift in activity could well be a result of the FUD-based, emotion-driven approaches.

"If done correctly, integrated risk and performance mapping exercises can yield tremendous benefits for companies and IT organisations that are seeking to develop a more effective risk-management dialogue with business leaders," said Wheeler. "However, if done incorrectly, the exercise can become too time- and resource-consuming, often resulting in an unwieldy process that ultimately fails," he said.
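The KRI-to-KPI mapping exercise the survey asks about can be pictured as a simple data structure: each key risk indicator is tied to the business performance indicators it can degrade, so risk reports can be expressed in business terms. The sketch below is a hypothetical illustration; the indicator names are invented for the example, not drawn from Gartner's survey.

```python
# Hypothetical mapping of key risk indicators (KRIs) to the key
# performance indicators (KPIs) each one threatens.
KRI_TO_KPI = {
    "unpatched_critical_vulns": ["system_uptime", "incident_cost"],
    "failed_phishing_tests": ["customer_trust_score"],
    "privileged_accounts_unreviewed": ["audit_pass_rate", "incident_cost"],
}

def kpis_at_risk(triggered_kris):
    """Return the business KPIs threatened by the KRIs that are
    currently breaching their thresholds."""
    return sorted({kpi for kri in triggered_kris
                   for kpi in KRI_TO_KPI.get(kri, [])})
```

Even a table this small supports the dialogue Wheeler describes: when a risk indicator trips, the mapping says which business outcomes to discuss with the board, rather than leaving the conversation at the level of technical controls.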


Microsoft, Google and Facebook offer bigger bounties for bug hunters

Microsoft, Google and Facebook have teamed up to offer bigger bounties for bug hunters under the HackerOne bug bounty project. The project is intended to reward hackers who find "issues" with internet tools such as PHP, OpenSSL, Perl, Ruby, Python and...

Latest payment card security standard takes new approach

The latest version of the Payment Card Industry's Data Security Standard (PCI DSS), to be published on 7 November, will advise companies to make security part of "business as usual". "Organisations should aim to make PCI DSS part of business as usual because the standard provides the best set of requirements and processes for protecting data," according to Jeremy King, European director of the PCI Security Standards Council (PCI SSC).

To that end, version three focuses on security training – particularly around passwords – on helping people understand that security is a shared responsibility, and on giving merchants more flexibility in how they adopt the standard. Other changes are aimed at ensuring card data security practices are updated to cope with new technologies and trends, such as bring your own device (BYOD) programmes in the workplace. For the first time, the standard also incorporates the previously separate guidance document into the requirements, as an extra column explaining in more detail what is meant and required.

PCI DSS compliance is necessary for any organisation that handles customer payment card data, and the standard specifies how that information must be held and protected. The PCI SSC, which administers the security standard, claims version three is designed to help organisations take a proactive approach to protecting cardholder data, and that it focuses on security, not compliance.

"It is extremely encouraging that the latest revision of PCI DSS is moving away from focusing solely on compliance, and moving towards best practice security," said Matt Middleton-Leal, regional director for UK & Ireland at security firm CyberArk. "As we continue to see privileged account credentials and passwords as primary targets in almost all major breaches, it is great that this latest version of the standard is taking steps towards addressing this crucial part of the problem," he said.

The revised standard advises that password policies should include guidance on choosing strong passwords, protecting credentials, and changing passwords on suspicion of compromise. "While this is certainly a step in the right direction, I would argue that we need to go further, to adequately protect these extremely powerful credentials," said Middleton-Leal.

Rather than waiting for suspicious activity before taking action, organisations should arm themselves with the best possible defence by establishing a centrally managed privileged account security policy, he said. Middleton-Leal said this approach enables organisations to determine how regularly passwords need to be changed, and enables users to set, manage and monitor password security from a single interface. "By simplifying the password management process and giving control back to the security, risk and audit teams, companies can be sure that they are not only compliant with PCI DSS V3.0, but also that they are doing everything they can to proactively protect their customers' payment card data," he said.

PCI DSS V3.0 goes into effect on 1 January 2014, but merchants who have not completed compliance with version two will have until the end of 2014 to begin working on compliance with version three.

Read more about PCI DSS
PCI DSS review: Assessing the PCI standard nine years later
Podcast: What's new in PCI-DSS and PA-DSS version 3.0?
Using encryption technology to achieve PCI DSS compliance objectives
Understanding the PCI DSS prioritized approach to compliance
Can predefined DLP rules help prevent HIPAA and PCI DSS violations?
PCI DSS 3.0 preview highlights passwords, providers, payment data flow
PCI validation: Requirements for merchants covered by PCI DSS
Analysis: Inside the new PCI DSS risk assessment
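The password guidance described above – strong passwords, protected credentials, rotation on suspected compromise – can be sketched as a small policy check. The specific thresholds below (minimum length, letter-and-digit mix, 90-day rotation) are assumptions chosen for illustration, not the actual text of PCI DSS.

```python
import re
from datetime import datetime, timedelta

def password_policy_check(password, last_changed, compromised=False,
                          min_length=8, max_age_days=90):
    """Illustrative checks in the spirit of PCI DSS password guidance.
    Returns a list of policy issues; an empty list means the password
    passes this (hypothetical) policy."""
    issues = []
    if len(password) < min_length:
        issues.append("too short")
    if not (re.search(r"[A-Za-z]", password) and re.search(r"\d", password)):
        issues.append("needs both letters and digits")
    if compromised:
        # Rotation on suspicion of compromise takes priority over age.
        issues.append("change immediately: suspected compromise")
    elif datetime.now() - last_changed > timedelta(days=max_age_days):
        issues.append("password expired")
    return issues
```

A centrally managed policy of the kind Middleton-Leal advocates would apply a check like this from one place, so the security, risk and audit teams can adjust the thresholds without touching each system individually.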


Spy agency encryption cracking foolish, says web inventor

The father of the World Wide Web, Tim Berners-Lee, has called for debate about "dysfunctional and unaccountable" supervision of US and UK intelligence agencies. The decision by the US National Security Agency (NSA) and the UK's GCHQ to crack the encryption that internet users rely on for privacy was "appalling and foolish", he told the Guardian.

Berners-Lee said undermining the protection afforded by encryption would benefit organised criminal hacker gangs and hostile states. The UK scientist said the move weakened online security, contradicted US and UK efforts to fight cyber crime and cyber warfare, and was a betrayal of the technology industry. He called for a "full and frank public debate" on internet surveillance, saying that the system of checks and balances to oversee the agencies has failed.

The call comes just ahead of a scheduled hearing by parliament's intelligence and security committee (ISC), which will raise questions about conduct with the heads of all the UK's spy agencies on 7 November. The ISC is expected to question GCHQ director Iain Lobban, MI5 director general Andrew Parker and MI6 chief Sir John Sawers about mass surveillance programmes, terror threats and cyber security. However, details of intelligence techniques and ongoing operations will be off-limits, according to the BBC.

While senior UK politicians, including prime minister David Cameron, have called the revelations by whistleblower Edward Snowden irresponsible, Berners-Lee believes the leaks are in the public's interest. He said whistleblowers play an important role in society, and while powerful agencies are needed to combat criminal activity online, any powerful agency needs checks and balances. "Based on recent revelations, it seems the current system of checks and balances has failed," he said.

As director of the World Wide Web Consortium (W3C), which seeks to promote global standards for the web, Berners-Lee is a leading authority on the power and the vulnerabilities of the internet. He said that although he had anticipated many of the surveillance activities exposed by Snowden, he had not been prepared for the scale of the NSA and GCHQ operations.


New FCC chair was industry lobbyist, but might not be industry...

Wheeler appoints consumer advocate to key post, says competition is good.    

BlackBerry Business Plan Must Leverage Customer Trust in Secure Network

NEWS ANALYSIS: With its last major suitor, Lenovo, rejected by the Canadian government, a beleaguered BlackBerry faces an uncertain future after it buys time to work on a turn-around plan. BlackBerry's last c...