Thursday, December 14, 2017

Computer system threats come in many different forms. Some of the most common threats today are software attacks, theft of intellectual property, identity theft, theft of equipment or information, sabotage, and information extortion. Most people have experienced software attacks of some sort; viruses, worms, phishing attacks and Trojan horses are a few common examples. The theft of intellectual property has also been an extensive issue for many businesses in the IT field. Intellectual property is intangible property, usually covered by some form of legal protection, and theft of software is probably its most common form in IT businesses today. Identity theft is the attempt to act as someone else, usually to obtain that person's personal information or to take advantage of their access to vital information. Theft of equipment or information is becoming more prevalent because most devices today are mobile; cell phones are prone to theft and have become far more desirable targets as their data capacity increases. Sabotage usually consists of the destruction of an organization's website in an attempt to undermine its customers' confidence. Information extortion is the theft of a company's property or information in an attempt to receive a payment in exchange for its return. There are many ways to help protect yourself from some of these attacks, but one of the most effective precautions is simple user vigilance.

The idea that cloud computing providers could suffer as a result of the US National Security Agency's (NSA) policy of obtaining data from web companies as part of its Prism surveillance programme is a "red herring", argues Bobby Soni, chief platform and services officer for risk management software company RMS. Soni is responsible for building end-to-end cloud hosting capabilities that support the company's high-end analytics software, RMS(one). He believes that cloud providers - and the organisations that adopt their solutions - have little to fear from Prism, details of which were leaked this summer by whistleblower Edward Snowden.

"I think it's a red herring because every government, regardless of what country, can get access to data under whatever reason, it doesn't matter," he said at RMS and Verne Global's Big Data Iceland event in Reykjavik when asked about the implications of the NSA's Prism programme. Soni argued that it is not in governments' interests to compromise cloud computing providers, as doing so would interfere with business. "Governments are not interested in harming business and the growth in cloud computing and the growth in business in general; they're not interested in that. It wouldn't be in their interests. They're interested in things that are illegal, that are terrorist related," he said. "So I don't think any honest businessman who does this has to fear that the government would interfere in their transactional business. There's absolutely zero chance of that. That's why I say it's a red herring," Soni added.

Soni, who was vice president of industry and cloud business solutions for IBM before joining RMS, also told Computing that the company does everything in its power to make sure data is secure. "We're doing several things - we're encrypting the data on the racks, so if some disgruntled person decided to take a drive out from the data centre, that drive is useless because the data is encrypted. Only the client can decrypt the data," he said. "Also, when the data travels from the data centre to the client, it's encrypted in motion, and then we have access control software that manages authorisation," Soni continued, adding that it is possible for the cloud to be more secure than the traditional data centre. "If you take all of that, our clients get comfortable very quickly, and it's as secure, even more secure than their internal data centres. A lot of client data centres aren't even as secure," he said.
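The model Soni describes - data encrypted at rest so that only the key holder can read it - can be illustrated with a toy symmetric cipher. The sketch below is purely illustrative (a hash-based keystream, not a vetted cipher such as AES, and none of the names refer to RMS's actual system); the point is simply that without the client-held key, a stolen drive is unreadable.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (counter mode over SHA-256)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: XOR with the keystream (the operation is its own inverse)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# The client generates and keeps the key; the data centre stores only ciphertext.
client_key = secrets.token_bytes(32)
record = b"policyholder exposure data"
at_rest = xor_cipher(client_key, record)     # what sits on the rack
recovered = xor_cipher(client_key, at_rest)  # only the key holder can do this
assert recovered == record and at_rest != record
```

A production deployment would instead use an authenticated cipher such as AES-GCM for data at rest and TLS for data in motion; this toy just makes the key-ownership point concrete.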
HP describes SureStart as the industry's first self-healing IT that automatically restores a BIOS to its previously safe state if it is corrupted.

SAN FRANCISCO—Hewlett-Packard, which has quietly been hiring new managers in various departments to help give the iconic but struggling company some fresh new ideas, unveiled a forward-thinking new security feature on Sept. 17. The Palo Alto, Calif.-based IT giant introduced a new self-healing basic input/output system (BIOS) called SureStart that will be installed in its forthcoming EliteBook enterprise laptops.

Eventually, it is likely to be added to HP's entire Windows computer lineup. HP describes SureStart as the industry's "first self-healing technology that automatically restores a system's BIOS to its previously safe state if attacked or corrupted." SureStart is part of a larger HP project, BIOSphere, a firmware ecosystem that automates data protection as well as configurability and manageability for all HP business PCs. For the record, the BIOS is a de facto standard firmware interface built into all IBM-compatible PCs and is the first software run by the PC when powered on.

The fundamental purposes of the BIOS are to initialize and test the system hardware components and to load a bootloader or an operating system from a mass storage device. The BIOS also provides an abstraction layer for the hardware—a consistent way for application programs and operating systems to interact with the keyboard, display and other input/output devices.

The BIOS, in effect, runs a startup script that tells the PC what to do. HP, not particularly known for being a security pioneer, has now taken security matters into its own hands. "We've taken the time to write our own proprietary HP BIOS, which thinks about device protection from the silicon up," Michael Park, HP's new vice president of strategy and product management for enterprise computing, told eWEEK. "What's happening now is that a lot of the malware coming into systems isn't coming into the OS (operating system) level. It comes in under the OS; it takes over the BIOS, and then it tricks the OS into thinking it's a secure machine," Park said. "It's called rooting. That's why people in enterprises are hesitant to bring in Android devices because they're easily rootable.

And if you can root the device, it doesn't matter what security you put on the OS; you can come in under it and take it over." What HP did was create a reference point in the PC processor "where we store an HP BIOS that only HP can access," Park said. "When the machine boots, before the BIOS runs, it does a sub-check against the reference BIOS. If it's the same, it goes right into the boot sequence. If it sees any kind of change, it initiates a sequence to rewrite the BIOS, so you will never have a blue-screen BIOS failure."

This is not all HP has been doing in security.
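The boot-time check Park describes amounts to comparing the running firmware image against a protected golden copy and restoring it on mismatch. The sketch below is a simplified software model of that idea (the names and in-memory bytes are invented for illustration; the real SureStart check happens in silicon, below the OS):

```python
import hashlib

def digest(image: bytes) -> str:
    """Fingerprint a firmware image."""
    return hashlib.sha256(image).hexdigest()

def verify_and_heal(current: bytes, golden: bytes) -> tuple[bytes, bool]:
    """Compare the current BIOS image against the protected reference copy.
    If they differ, restore from the golden copy before booting.
    Returns (image_to_boot, was_healed)."""
    if digest(current) == digest(golden):
        return current, False   # untouched: proceed straight to boot
    return golden, True         # corrupted: rewrite from the reference copy

golden_bios = b"\x55\xaa" + b"vendor firmware v1.0"
tampered = golden_bios.replace(b"v1.0", b"evil")

image, healed = verify_and_heal(tampered, golden_bios)
assert healed and image == golden_bios
```

The design point is that the reference copy must live somewhere the malware cannot reach - which is why HP stores it in a processor region "that only HP can access" rather than in rewritable flash.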

Other new products that will provide real-time threat disruption and self-healing IT include:

—HP Threat Central, which HP claims is the industry's first community-sourced security intelligence platform to facilitate automated, real-time collaboration among organizations in the battle against active cyber-threats.

—HP TippingPoint Next-Generation Firewall, which addresses risks introduced by cloud, mobile and bring your own device (BYOD) by delivering reliable security with granular application visibility and control.

—HP ArcSight and HP Fortify, which offer data-driven security technologies, including Application View, Management Center, Risk Insight and Enterprise Security Manager v6.5c, that empower security operations teams to run more effectively with accelerated, real-time application-level threat detection.

HP will be unveiling its fall lineup of enterprise and consumer laptops, notebooks, tablets and printers during the next few weeks. eWEEK will cover the launches.
Hewlett-Packard debuts its new Threat Central service and updates multiple platforms to take advantage of its new security intelligence capabilities.

Hewlett-Packard is doubling down on its security portfolio with the announcement Sept. 17 of its new Threat Central security intelligence service, which is all about enabling organizations to collaborate by sharing security information. HP is also updating other components of its security portfolio to take advantage of the new security intelligence, including the introduction of a new hardware firewall and an update to the ArcSight platform. Threat Central offers a way of bringing security intelligence into HP by way of a normalized data structure, said Art Gilliland, senior vice president and general manager for Enterprise Security Products at HP. Threat Central then adds value to that information with additional contextual relevance, and the Threat Central feed can be piped back into security operations consoles, providing data that is then actionable. The data that comes into Threat Central comes from HP's own data sources as well as open sources of intelligence. The Threat Central technology adds a key missing piece to the security puzzle, Gilliland said: it will enable HP to pull from multiple data sources instead of just its own. Going a step further, HP will be making sure the data can be understood in the context of everything else that might be going on at a given enterprise, or even across the larger threat landscape. The data from Threat Central can feed into HP's ArcSight Security Information and Event Management (SIEM) platform so that enterprises can then start to create policies and take actions to limit security risks. HP is also improving ArcSight with new capabilities that enable it to collect even more information from enterprise networks. Traditionally, the way a SIEM works is that it collects logs from servers and other endpoint systems that generate log information.
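A SIEM's first job, before any correlation, is normalizing heterogeneous log lines into a common event structure - the "normalized data structure" Gilliland refers to. A minimal sketch of the idea (the field names and log format here are invented for illustration, not ArcSight's actual schema):

```python
import re

# Illustrative pattern for an SSH authentication-failure line in syslog style.
AUTH_FAIL = re.compile(
    r"(?P<ts>\w{3}\s+\d+ \d\d:\d\d:\d\d) (?P<host>\S+) sshd\[\d+\]: "
    r"Failed password for (?P<user>\S+) from (?P<src_ip>[\d.]+)"
)

def normalize(line: str):
    """Turn a raw log line into a normalized event dict, or None if unrecognized."""
    m = AUTH_FAIL.search(line)
    if not m:
        return None
    return {"category": "auth_failure", **m.groupdict()}

event = normalize(
    "Sep 17 03:14:15 web01 sshd[4242]: Failed password for root from 203.0.113.9 port 52344"
)
assert event == {
    "category": "auth_failure",
    "ts": "Sep 17 03:14:15",
    "host": "web01",
    "user": "root",
    "src_ip": "203.0.113.9",
}
```

Once every source emits events in the same shape, correlation rules ("N auth failures from one source IP across any host") become simple queries - which is why normalization, not collection, is the hard part of SIEM.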

The challenge has always been about how to monitor and understand what is going on with machines that don't generate logs, as well as those devices whose logs are not complete. The new ArcSight improvements will enable enterprises to instrument their applications.

The application instrumentation will enable enterprises to add security monitoring to applications that were built without monitoring capabilities. If an enterprise is looking to find out if there are security risks and attacks against their own environment, the only way to do that is to watch how users interact with the environment, Gilliland told eWEEK. "We need to have full visibility and these new technologies give us that visibility," he said. Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.  
Proportionality is critical in cyber surveillance by intelligence services, says former MI5 head Eliza Manningham-Buller. “The more intrusive the tool, the higher the level of authorisation,” she told attendees of Trend Micro’s 25th anniversary customer conference in London. Adversaries have these tools, but applying that principle for intelligence services, she said, means that highly intrusive tools are approved only by the secretary of state with judicial supervision. Manningham-Buller was responding to questions after a presentation about what she learned about managing responses to threats and risk while head of MI5 from 2002 to 2007. First, she said, anyone in charge of responding to risks and threats needs to understand that choices need to be made because with finite resources no-one can do everything. Second, that accumulating information about threats is pointless unless that information is turned into action that can result in better protection. When faced with a new kind of threat or scale of threat, Manningham-Buller said it may be necessary to do things in a different way and agree that there will be no “sacred cows”. Another successful strategy for MI5, she said, was to build partnerships with industry and other government departments to tap into all the tools and skills needed to get the job done. “It is important not to say: this is exclusively our job,” she said. This approach resulted in a cross-departmental threat assessment team that Manningham-Buller said was copied by the US, France, Australia and others. “Through partnership we achieve a much richer intelligence capability,” she said. Manningham-Buller said she learned at MI5 that it is important for security leaders to communicate that they value what their staff members are doing, but that they will take responsibility if things go wrong. 
“I let my staff know that there would be no blame culture; that they would be able to discuss concerns; and that the aim was to work together to deliver better security,” she said. Manningham-Buller said it was important not to forget the “soft” things when dealing with threats on a daily basis. “When people work hard, they need encouragement and thanks,” she said. Overall, she said it is important to maintain clarity and simplicity, and to help people to deal with uncertainty. Information security professionals, she said, are working to protect valuable data and to allow organisations to work securely. “This is a motivating thing to do. But leaders can damage people’s natural motivation by not valuing, praising, and thanking,” she said.

The military geniuses from the US Department of Defence have come to the conclusion that the best way to defeat hackers is to put all of your data in one place. The cunning plan is to consolidate its 15,000 networks into a single “joint information environment” which would be protected by JIE, a new set of security protocols.  The Pentagon calls this a “single security architecture” while most hackers would call it a “target”. According to the Pentagon bigwigs, the protocols will make it easier to detect intrusions and identify unauthorised “insiders” who might be accessing a network. National Defence Magazine says this brilliant idea comes from the chairman of the Joint Chiefs Army General Martin Dempsey. The system will potentially save billions of dollars by eliminating redundant, overlapping systems, it is claimed. Some are unhappy with the move.

There is what planners call a “bureaucratic” reluctance to change the situation in the Pentagon, although we can't say we are surprised. The head of DISA, Air Force Lieutenant General Ronnie Hawkins, warned that JIE is pushing the Department into uncharted territory.

He characterised the venture as the digital equivalent of the Lewis and Clark expedition to the western United States. JIE will be financed under the Pentagon’s $23 billion cybersecurity budget. There is also the technology question of whether 15,000 networks can coalesce into a common environment. It will not be a single architecture but more of a “standard security architecture”. To stop insider leaks, the JIE will track network activity using “identity access management” technology. Supervisors will look for warning signs of a potential insider threat, such as whether people are authorised to be where they are, and whether they have the administrative privileges they are supposed to have. Snowden might still have slipped through the net, however, and with a consolidated network he would have had access to even more data.
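The "identity access management" checks described above - is this user authorised to be where they are, and do they hold the privileges they are supposed to? - reduce to comparing observed activity against an authorisation list. A toy sketch of that comparison (the users, network areas and events are all invented for illustration):

```python
# Who is cleared for which network areas (illustrative data, not a real policy).
AUTHORISED = {
    "analyst1": {"intel-reports"},
    "sysadmin7": {"intel-reports", "network-config", "user-accounts"},
}

def flag_insider_risks(events):
    """Return the events where a user touched an area they are not cleared for."""
    return [
        e for e in events
        if e["area"] not in AUTHORISED.get(e["user"], set())
    ]

activity = [
    {"user": "analyst1", "area": "intel-reports"},   # within clearance
    {"user": "analyst1", "area": "network-config"},  # outside clearance: flag it
]
assert flag_insider_risks(activity) == [{"user": "analyst1", "area": "network-config"}]
```

As the article notes, this only catches access outside someone's granted privileges; an insider like Snowden, operating inside an over-broad grant, passes every such check.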
Startup BitSight Technologies aims to measure how seriously a business takes security by analyzing a host of external factors. Before a company or individual borrows money, they have to undergo a credit check as proof of their creditworthiness.

In the future, companies that want to do business with each other may have to show a similar rating that grades their information security. On Sept. 10, security startup BitSight Technologies launched a service that uses a number of external measures to rate how likely a company is to fend off compromises and protect any data entrusted to them. Is spam coming from a firm's domain? Lower their score. Have they taken steps to protect their domain-name records? Raise their score. "Ratings have shown themselves to be a very robust way for organizations to understand and manage the risk in a way that abstracts some of the complexity," Stephen Boyer, founder and CTO for BitSight, told eWEEK. "There is a real chasm between the tech folks and the business decision makers, and so the credit-rating model—like it or love it—has introduced time and cost efficiencies into the system for people to quantitatively build models around." Because service providers and suppliers are under increasing attack, BitSight has focused first on rating companies that supply products and services to others, grading their security using a measure similar to credit scores.

The service, dubbed the Partner SecurityRating, will focus on giving companies insight into how secure their business partners are likely to be, Boyer said. Just as credit services cannot see into an actual household, BitSight's security service cannot peer into a company's internal network, but it can use external data to infer whether there is a potential security problem inside the company, or to deduce whether the company takes security seriously. BitSight rates businesses' information security on a scale of 250 to 900, based on publicly available information and threat-intelligence feeds. Negative factors include whether the company's computers are included on blacklists for spamming, have been used in a distributed denial-of-service attack or have communicated with a known botnet. Positive factors include visible steps taken by a company to increase its security, such as locking its domain, using the Sender Policy Framework (SPF) to authenticate its email server, and using a service or security product to protect its Website from known attacks. "If a company potentially loses a deal because they are not up to snuff, then they will likely allocate the budget to improve their security posture," Boyer said. Ratings for industries can show which groups of companies have a good security track record, and which ones need to pay more attention to security.

For example, the financial-services industry, which is widely considered an early adopter of security technologies, is the best performing industry, although it is not perfect, Boyer said. Meanwhile, legal service firms have done poorly as a group, he said. "Legal services [are] actually a real poor performer, and that concerns a lot of our customers, because they hold the keys to the kingdom," Boyer said. BitSight recently closed a $24 million round of venture funding.
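BitSight's actual model is proprietary, but the general shape of a 250-to-900 external rating can be sketched: start from a baseline, subtract for observed negative signals, add for visible good hygiene, and clamp to the scale. The signal names and weights below are invented purely for illustration:

```python
# Invented signals and weights; BitSight's real inputs and model are proprietary.
NEGATIVE = {"on_spam_blacklist": -120, "seen_in_ddos": -150, "botnet_traffic": -200}
POSITIVE = {"domain_locked": 40, "spf_configured": 40, "waf_in_front": 60}

def security_rating(signals, baseline: int = 575) -> int:
    """Score a company's externally observable signals on a 250-900 scale,
    credit-score style: penalties for bad hygiene, credit for visible good hygiene."""
    score = baseline
    score += sum(w for s, w in NEGATIVE.items() if s in signals)
    score += sum(w for s, w in POSITIVE.items() if s in signals)
    return max(250, min(900, score))  # clamp to the published scale

tidy_firm = security_rating({"domain_locked", "spf_configured", "waf_in_front"})
spammy_firm = security_rating({"on_spam_blacklist", "botnet_traffic"})
assert 250 <= spammy_firm < tidy_firm <= 900
```

The value of such a score, as Boyer argues, is not its precision but that it turns a messy technical posture into a single number business decision makers can compare and act on.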
Value-added distributor (VAD) Wick Hill and US-based security firm Guidance Software have joined forces to help UK businesses better address new and evolving cyber threats. Wick Hill supplies secure IP infrastructure to organisations ranging from large enterprises to small and medium-sized enterprises (SMEs) through a UK-wide network of accredited value-added resellers (VARs). This agreement adds support for Guidance Software’s digital forensics, cyber security and security analytics products to that network, said Ian Kilpatrick, chairman of the Wick Hill Group. “Most companies have a trusted advisor within a VAR.

Now those trusted advisors will be able to recommend Guidance Software where appropriate with full support,” he told Computer Weekly. According to Kilpatrick, VARs tend to be reluctant to recommend systems for which they do not have full support, for fear of risking their reputations as trusted advisors. Through its partnership with Guidance Software, Wick Hill will give its UK VAR network the confidence it needs through training and support alongside the supplier, he said. Support from Wick Hill includes technical and consultancy support throughout the sales process to ensure successful implementations. This will enable Guidance Software to scale up its support across the country, multiplying the reach and support of the 65-member UK team, said Sam Maccherola, vice-president of sales in Europe and Asia. For many UK companies this will mean gaining one more option for protecting critical information through current suppliers, as well as increased support for existing Guidance Software customers. Kilpatrick said one of the biggest challenges for UK companies is to be more proactive about cyber threats, and that Guidance Software will be key in the recommendations made by many trusted advisors. “For threat visibility, organisations need a security solution that provides a total overview, is increasingly granular, and can integrate with the existing security investment,” he said. After cyber attacks, organisations need to be able to carry out forensic investigations to uncover any intellectual property theft, employee fraud, and employee policy violations, said Kilpatrick. “They also need to preserve the evidence, and Guidance Software has considerable expertise and products in all these areas,” he said. Guidance Software's Maccherola said the risk and cost of data breaches have grown significantly for public and private organisations, and consequently mitigating the risk of critical data loss is a top priority.
“Wick Hill and its network of VARs understand that visibility at the end point is a critical factor in mitigating the risk to critical user and customer data,” he said.

With $62 million in funding, the CEO of open-source security startup AlienVault is gearing up for the challenge of PCI-DSS 3 compliance.

AlienVault is continuing to raise new funding as it pushes forward on its mission of expanding the market for its open-source-based Unified Security Management platform. The security vendor has raised an additional $26.5 million in Series D funding this month, bringing its total funding since its founding in 2007 to $62 million. AlienVault CEO Barmak Meftah told eWEEK that the company has been doubling revenues year-over-year for the past three years and wasn't really out looking for new capital. That said, he's happy to have the additional funding to help finance innovation and keep the company growing. The company, which raised $22.4 million in a Series C funding round in 2012, is often categorized as a security information and event management, or SIEM, vendor, though Meftah stresses that AlienVault is really much more. At the core of the AlienVault solution set is the company's Open Source Security Information Management (OSSIM) platform. That project is complemented by a commercial Unified Security Management (USM) offering that bundles additional enterprise scale, management and reporting capabilities. Meftah explained that AlienVault's main offering packages five core security capabilities into a single solution: threat detection, vulnerability analysis, behavioral analysis that looks for anomalous activities, automated asset inventory management, and security analytics from log management in a SIEM. Going a step beyond what USM and OSSIM provide, AlienVault also shares threat information via its Open Threat Exchange effort, which Meftah describes as a crowd-sourced threat data sharing platform.

Open Source

The OSSIM project has more than 230,000 downloads to date and is continuing to grow. Meftah explained that the open-source project is a core component of his company's commercial sales efforts.
"A typical sale for us is a bottoms-up approach, where an IT operations person has used the open-source product and decides that they want to expand the footprint and need enterprise management capabilities," Meftah said. OSSIM is currently at its 4.3 release, which is the same as the commercial USM release. Meftah stressed that AlienVault is obsessed with making sure that its open-source community is well taken care of.

He noted that open-source community forums for OSSIM are very active and that AlienVault staff are among the active participants.

Compliance

Regulatory compliance is a key growth area for AlienVault, both in terms of technology evolution and customer adoption. "The vast majority of our customers are ultimately driven by compliance, and the payment card industry (PCI) data security standards (DSS) is a huge driver of what we do and how we sell," Meftah said. PCI-DSS is currently in a state of evolution, with the new 3.0 specification in development and expected to become generally available in 2014. "We're in sync with the PCI rules and regulations," Meftah said. "What's great about our product is you can check off a lot of the requirements for PCI-DSS as it pertains to security analytics and visibility in one fell swoop." Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
Remember firewalls? They're simply a standard part of the overall security fabric now—analogous to XML in networks—but they've never gone away.

The firewall has been around since the earliest days of network security.

For a long time, they were the last line of defense in a network. However, with changing types of threats, ever-growing numbers of bad guys—and, in fact, organizations and countries—that are up to no good, and general IT advances, there has been increasing discussion about the firewall's place in the network. Is a firewall still relevant in an age in which almost any security measure can be bypassed in a workaround? eWEEK and security policy management provider AlgoSec outline some major milestones in the history of the firewall, beginning from its early days as a proxy to packet filtering and continuing to next-generation firewalls (NGFWs), which include cloud-based versions. In addition, this slide show will present predictions on how the firewall and firewall management are likely to evolve. Firewall Evolution: 5 Milestones, 5 Predictions By Chris Preimesberger Milestone: The Firewall as a Proxy In the early 1990s, the firewall was a primitive piece of technology—really just a proxy. During this period, the proxies were often pushed to the perimeter of a network and used to proxy traffic resources within the internal network.

The traffic could be filtered and shaped to certain resources.

Milestone: Packet Filters
During the early 1990s, there were also packet filters, which ran on servers that inspected traffic coming into the network.

This is where administrators would create security policies and, in effect, rudimentary rule bases, which performed packet filtering based on five attributes of TCP/IP: source IP, source port, destination IP, destination port and protocol.

Milestone: Stateful Firewalls
While packet filtering only looks at an individual packet at a time, firewalls using stateful packet inspection are able to retain packets until there is enough information to make a sound "yes" or "no" decision. Stateful firewalls are still used today, but that is starting to change.

Milestone: Unified Threat Management Becomes the Latest Buzzword
In the early 2000s, unified threat management (UTM) devices emerged in the market, providing an all-in-one appliance that combines Secure Sockets Layer (SSL) virtual private networks, anti-virus, intrusion-prevention systems (IPSes), and firewalls.

Milestone: Next-Generation Firewalls (NGFWs)
The latest evolution in firewall IT is the next-generation firewall, which filters packets based on much more granular policies for application and user traffic.
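The five-tuple rule bases of the packet-filter era can be sketched as a first-match rule list with a default deny, which is essentially how those rudimentary filters worked (the addresses and rules below are illustrative):

```python
# Each rule: (src_ip, src_port, dst_ip, dst_port, protocol, action).
# "*" is a wildcard; rules are evaluated top-down, first match wins.
RULES = [
    ("*", "*", "203.0.113.10", 80, "tcp", "allow"),  # anyone may reach the web server
    ("*", "*", "203.0.113.10", 22, "tcp", "deny"),   # no outside SSH to it
]
DEFAULT = "deny"  # anything unmatched is dropped

def filter_packet(packet) -> str:
    """Return 'allow' or 'deny' for a (src_ip, src_port, dst_ip, dst_port, proto) tuple."""
    for *pattern, action in RULES:
        if all(p == "*" or p == field for p, field in zip(pattern, packet)):
            return action
    return DEFAULT

assert filter_packet(("198.51.100.7", 51000, "203.0.113.10", 80, "tcp")) == "allow"
assert filter_packet(("198.51.100.7", 51000, "203.0.113.10", 22, "tcp")) == "deny"
assert filter_packet(("198.51.100.7", 51000, "203.0.113.99", 80, "tcp")) == "deny"
```

Each packet is judged in isolation here - which is exactly the limitation stateful inspection later addressed by tracking whether a packet belongs to an already-approved connection.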

Additionally, these NGFWs can integrate IPSes as well as many other security functions into the firewall's decisions to block malicious traffic.

Prediction: Firewalls Are Becoming Virtual
Over the next few years, organizations will see firewalls becoming much more virtual, instead of remaining static appliances on networks. Like a traditional firewall, these virtual/hypervisor-level firewalls will inspect packets and use security policy rules to block unapproved communication between virtual machines.

While these virtual/hypervisor-level firewalls will not replace dedicated firewalls operating at or near wire speeds, there will be more demand for them as organizations begin to mix workloads with different security requirements on the same physical box.

Prediction: Cloud-Based Firewalls
With the rise of both cloud computing and mobile devices, analysts have predicted an increase in cloud-based firewalls delivered as focused services, such as Web application firewalls (WAFs).

Prediction: More Cross-Pollination With Other Security Capabilities
We've already seen a lot of integration of UTM technology with NGFWs, and the industry will move beyond simply adding more capabilities onto a box toward more effectively integrating the data and capabilities so that faster and better decisions get made.

For example, this would mean having a security information and event management, or SIEM, platform correlate data from the gateway and dynamically adapt the firewall rules to mitigate specific threats.

Prediction: Deeper Content Inspection
Content inspection can always be improved as new generations of firewalls come into the market.

As each generation of inspection software enters the market, it runs leaner and faster and is generally more efficient.

Prediction: Managing Firewalls With the Business in Mind
As networks become increasingly complex, more decisions in larger organizations will be made from the perspective of a business application, rather than strictly from a firewall/security perspective.

This is a trend throughout the software industry. By business application, we mean—as one example—a credit card processing service that is necessary for an e-commerce organization to run and make money.

Therefore, if a firewall rule is preventing the application from working or slowing down the performance, then the organization will suffer.

This is a new way of looking at how firewalls are managed, and it continues to evolve.

Chris Preimesberger was named Editor-in-Chief of Features & Analysis at eWEEK in November 2011. Previously he served eWEEK as Senior Writer, covering a range of IT sectors that include data center systems, cloud computing, storage, virtualization, green IT, e-discovery and IT governance. His blog, Storage Station, is considered a go-to information source. Chris won a national Folio Award for magazine writing in November 2011 for a cover story on Salesforce.com and CEO-founder Marc Benioff, and he has served as a judge for the SIIA Codie Awards since 2005. In previous IT journalism, Chris was a founding editor of both IT Manager's Journal and DevX.com and was managing editor of Software Development magazine. His diverse resume also includes: sportswriter for the Los Angeles Daily News, covering NCAA and NBA basketball, television critic for the Palo Alto Times Tribune, and Sports Information Director at Stanford University.

He has served as a correspondent for The Associated Press, covering Stanford and NCAA tournament basketball, since 1983.

He has covered a number of major events, including the 1984 Democratic National Convention, a Presidential press conference at the White House in 1993, the Emmy Awards (three times), two Rose Bowls, the Fiesta Bowl, several NCAA men's and women's basketball tournaments, a Formula One Grand Prix auto race, a heavyweight boxing championship bout (Ali vs. Spinks, 1978), and the 1985 Super Bowl.

A 1975 graduate of Pepperdine University in Malibu, Calif., Chris has won more than a dozen regional and national awards for his work.

He and his wife, Rebecca, have four children and reside in Redwood City, Calif. Follow on Twitter: editingwhiz
Dropbox, Box, SugarSync, SkyDrive – or whatever it is now to be called.

These and other cloud storage suppliers should be enough to send any business person concerned with governance, risk and compliance into paroxysms. The uncontrolled use of external file storage/share systems is a major threat to the management of an organisation’s intellectual property, yet banning their use is not a real option either. Cloud file storage is here to stay – gaining better enterprise control over how it is used has to be the aim. The use of a cloud-based file sharing system can actually be a good thing.

In the past, the main way of “sharing” a document was to email it. A lot of “sharing” was not sharing at all – it was a case of the user needing to work on a document from home and so emailing it to themselves so that they could access it from a different machine.  The problem then became the number of different versions of a document that existed – and which one was deemed to be “live”. Too often, a user would pick up the wrong version and do extra work on it, leading to errors in the overall information contained in the document.
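The duplicate-version problem can be made concrete with a small sketch: hashing each copy's content shows immediately whether supposedly identical documents have diverged. The file paths and contents below are invented purely for illustration:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """A short content hash used to compare copies of a document."""
    return hashlib.sha256(content).hexdigest()[:12]

# Three copies of "the same" report, as might accumulate via email
# (paths and contents are hypothetical)
copies = {
    "inbox/report.docx":     b"Q3 figures, draft 2",
    "home-pc/report.docx":   b"Q3 figures, draft 2",
    "laptop/report(1).docx": b"Q3 figures, draft 3 with edits",
}

hashes = {path: fingerprint(data) for path, data in copies.items()}
divergent = len(set(hashes.values())) > 1
if divergent:
    print("Copies have diverged - there is no single 'live' version")
```

With a single cloud-held copy, there is only one fingerprint to worry about.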

The additional technical costs of storing, archiving and searching across multiple different copies of essentially the same file also counted against this kind of information sharing being a strategic option – as did the security issues around how easily emails can be moved around.

Accessing documents from different devices

The growth of mobile devices has led to an increasing need for an individual to be able to access documents from different devices. Rather than use email, users are now turning to cloud-based storage systems.

This gives them a single version of a document to work on, with the benefit of knowing that no matter what device they come in from, it will be the latest version. Cloud storage is also off-premise – this can be good news if anything happens to the primary file store; the cloud provides a direct backup for the files without the need for the user to go through internal help desks and data recovery processes. So, there is goodness in cloud storage systems but the amount of information lying outside the reach and control of the organisation still means that something has to be done to make it work for the user - and the organisation.

This means viewing cloud storage from a multi-user point of view, not just the individual’s. Teams often need to collaborate on documents, and making sure everyone is working with the correct version is more important than when just one person is involved. Versioning, access control and data security all have to be taken into account.

Choosing a user-friendly system

There is the option to install an on-premise equivalent – an environment where users can place files and access them from anywhere. Systems such as SharePoint or Alfresco are suitable examples – but they are not particularly liked by users.

They are “too stuffy and too enterprise”.

An on-premise system also suffers from that same problem – unless backups are off-site, any major issue with the datacentre could lead to all data being lost. What users want is something that is still “consumer” and easy to use, almost transparent to how they work.

An organisation will have to match whatever the user would choose themselves with something that gives the levels of functionality and security it needs. It could look at putting a nice web front-end on its file servers, but as these are likely to be distributed throughout the organisation, this may not be a good idea – and the same problem of backup and restore remains.

Leaving users to their own devices misses the value of team-working

However, there are alternatives appearing.

AppSense provides its DataNow virtual appliance that pulls together all corporate information systems and makes them available to users in a seamless manner, but still securely. RES Software has a similar concept with its HyperDrive technology. Another approach is to leave the user to their own devices, but capture the data stream as it passes to their chosen storage environment.

For this to work, you need to have total control of how they are accessing corporate information.

The use of a virtual private network (VPN) connection creates a point of control where the user has to touch the organisation’s network.

At this point, data leak prevention from suppliers including Symantec, Barracuda, Mimecast and Websense can be applied. Virtual desktops such as those provided through Citrix or VMware, with additional functionality from suppliers such as Centrix Software and RES Software, can apply even greater levels of control.
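As a minimal illustration of the kind of pattern matching a data leak prevention filter applies at such a control point (real products from these suppliers use far richer detection, such as Luhn checks and document fingerprinting; the regular expression here is a deliberately crude sketch):

```python
import re

# Crude pattern for a 16-digit card number, allowing spaces or dashes.
# Purely illustrative - not a substitute for a real DLP engine.
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def check_outbound(payload: str) -> bool:
    """Return True if the payload looks safe to pass to cloud storage."""
    return CARD_RE.search(payload) is None

print(check_outbound("Minutes of the Q3 planning meeting"))  # True
print(check_outbound("Customer card: 4111 1111 1111 1111"))  # False
```

A filter like this would sit at the VPN or proxy, blocking or quarantining any upload that fails the check.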

The user’s total work environment still resides in the datacentre, with the device acting purely as a window to that desktop. “Sandboxing” can help eliminate copying and pasting or other means of placing data directly on the device. In conjunction with a suitably secure cloud-based storage system, such as that provided by Egnyte, a highly functional enterprise-class information management platform can be put in place.

Numecent’s technology

Numecent’s cloud paging technology enables applications to be run securely on a device using its local compute power, while the data can be stored securely wherever a user wants it – and, if necessary, the data can be automatically deleted when the user’s session ends.

This allows the user to continue using Dropbox or a similar application – but with better corporate controls in place. But leaving users to their own devices misses the value of team-working. Using apps built for individuals prevents the value of team file sharing from being accrued – the single live document; the team’s capability to work across different locations and time zones with the latest correct information; and the capability for the organisation to search and report against all the information that should be at its disposal. Most of the consumer cloud-based storage systems have a business version as well.

This could be a good way forward, but it has to be done strategically. It is no good negotiating a licence for thousands of business seats of a system if the users still carry on using their individual consumer versions – particularly if that happens to be the one-person version of what you are trying to put in place.

If you sign up for a business version of a cloud storage system, then you have to make every employee aware of this – and make it easily available to them, maybe through the use of a corporate app portal.

This has the further advantage of providing users with a full menu of all approved apps that they can use. You also have to make them aware of why you have done this – enhanced team working, corporate risk mitigation and overall information value. You may also need to put in place a rule stating that use of a non-preferred system could be a disciplinary issue, as this may be the only way to force their hand. You may also need to swallow a bitter pill and allow the employee to use the cloud storage for their personal data as well – only by providing a single data storage system that they can use for everything can you hope to get them on board.

Finally, the use of a business version of a cloud storage system does not absolve the organisation of its data availability and security responsibilities. Encryption of data that needs to be “classified” should still be carried out, and data should be backed up or otherwise synchronised with an on-premise data store so that the organisation can carry on working should the cloud provider go out of business or suffer a major security breach.

Overall, the preferred way forward is a strategic, targeted replacement of what users are already using with an option that still works seamlessly for the user but adds teamwork functionality and information security, and helps address the governance, risk and compliance needs of the organisation.

Clive Longbottom is founder of analyst Quocirca.

This was first published in September 2013
I recently wrote a KuppingerCole analyst report on how cloud computing brings the threat of industrial espionage to the fore. With this in mind, it must be asked whether using low-cost or free services is ever a good idea. While there is clearly a risk to the confidentiality of your sensitive information in cloud computing, the question remains as to whether industrial espionage is an internet-era or cloud-era problem.

The internet made this type of attack far easier, but the cloud has put sensitive corporate information directly and willingly into the hands of a third party. Governance in the cloud is complex, and requires thorough consideration from the outset.

The legal environment of the cloud is global, and needs to be clarified before technical setup. Once the legal environment is clear, make sure your operational requirements can be met.

Ask your business the following questions:

Do you need to put your highly sensitive data in the cloud? If you can avoid it, do so – you are more likely to be able to protect it on-premises.

In the low-cost/free world, you will certainly be able to protect it better on your own terms.

Agree a level of risk you are happy with and perform a proper risk assessment that ensures any data above this level of risk is not exposed. You may be happy with transaction details being held in the cloud, but not customer credit card information, for example. Tokenisation solutions are available to deal with the latter issue.

What happens when the provider experiences downtime? What are its service level agreements (SLAs)? Do they meet your requirements? If you need the storage to be available 24x7 and the provider only guarantees a 9x5 service, consider what you can do outside working hours to mitigate – local data stores and asynchronous delivery, for example.

What happens if your provider pulls the plug? Do you have backups of all sensitive data? Ensure all data is backed up and encrypted to a write-once read-many (WORM) device or tape somewhere under your control. Make sure that any data in the provider’s possession is either returned to you or that you are shown a destruction certificate.

This requirement needs to be in the original contract. Some operators will not agree to this level of control, so again you have to ask yourself whether the level of risk is acceptable.

What would happen if your competitor acquired the company processing your data? Not so pie-in-the-sky these days. Does your service definition protect your data from being copied and worked on offline, at least in legal terms?

So, we can see that from the outset, the best approach is to define the value of your data assets, perform a risk assessment and consider all eventualities in governance.

Encryption, authentication and authorisation

Once you are happy with the level of risk involved in putting data in the cloud, encrypt everything and ensure your access controls are sound. Report on access regularly, and ensure that any incident management processes at the provider include an immediate report back to you about any potential data exposure. The key to feeling safe in the cloud is an information-centric security solution.

Encryption is a good way to physically protect your data once it is out of your control, and should certainly be used where data is kept in any publicly available store. Free or low-cost cloud storage does not have to be insecure. Check with the provider as to the controls used to separate access to data, and whether any system-level encryption is used to protect against a physical attack on its premises. Encryption is a fantastic means of implementing a preventative security control for information in the cloud, but it requires good access control.
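Tokenisation, mentioned among the questions above, can be sketched in a few lines: the real card number stays in a vault under your control, and only a meaningless token travels to the cloud. This toy in-memory vault is purely illustrative; production systems use hardened, audited token services:

```python
import secrets

# Toy token vault: the real card number never leaves your control;
# only the meaningless token is stored with the cloud transaction record.
_vault = {}  # token -> card number, kept on-premises

def tokenise(card_number: str) -> str:
    """Swap a card number for a random token that is safe to send to the cloud."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenise(token: str) -> str:
    """Recover the original card number, on-premises only."""
    return _vault[token]

t = tokenise("4111111111111111")
assert detokenise(t) == "4111111111111111"
```

The token reveals nothing about the card, so a breach at the cloud provider exposes only worthless strings.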

If possible, you should add an authentication and authorisation layer. Using good governance from the outset retains control of information, access to information and the monitoring of processes around it. Operational security is then paramount, and reporting and testing vital for continued assurance.

Good cloud service comes at a price

One area that may cause friction between your business and a service provider is security healthchecking.
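A minimal sketch of the authentication and authorisation layer suggested above, using a hypothetical gateway in front of the cloud store: every read passes an access-control check, and every decision is logged, which also supports the regular access reporting recommended earlier:

```python
# Hypothetical gateway in front of a cloud store: every read must pass
# an access-control check, and every decision is recorded for audit.
from dataclasses import dataclass, field

@dataclass
class CloudStoreGateway:
    acl: dict                 # document id -> set of users allowed to read it
    audit_log: list = field(default_factory=list)

    def read(self, user: str, doc_id: str) -> bool:
        allowed = user in self.acl.get(doc_id, set())
        self.audit_log.append((user, doc_id, "granted" if allowed else "denied"))
        return allowed

gw = CloudStoreGateway(acl={"board-minutes": {"alice"}})
print(gw.read("alice", "board-minutes"))    # True
print(gw.read("mallory", "board-minutes"))  # False
```

The audit log gives you the raw material for the regular access reports and incident investigations the provider alone cannot be relied on to supply.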

If the provider will not allow you to perform your own checks, ask for assurance that it has done the same.

If this is refused, it may be that you should be looking elsewhere – an unknown risk is a high risk.

My last word on this subject is not so positive. Having worked in highly regulated environments as an advisor for private (i.e. not free or low-cost) cloud providers serving a number of large businesses, I could not recommend a free solution. The cost of the solution relates to the contract, which gives you your right to demand a proper service. No cost means little or no contract, which means no recourse, and certainly no provider keen to keep you happy.

Always remember that a little skin in the game is a useful control.

Robert Newby is an analyst and managing partner at KuppingerCole UK.

This was first published in September 2013