Thursday, December 14, 2017

Tag: on-premise

DevOps platform provider Puppet has introduced its Puppet Cloud Discovery service for learning what, exactly, users have running in the cloud and what impact it has. As Puppet's first foray into SaaS, the service offers visibility into cloud workloads, pro...
NoSQL database specialist MongoDB unveiled a new free tier for its MongoDB Atlas database-as-a-service (DaaS) offering on Tuesday.

The company also released a utility to support live migration of data to MongoDB Atlas, whether that data is on-premise or in the cloud. “Since we first introduced MongoDB to the community in 2009, we have been laser-focused on one thing—building a technology that gets out of the way of developers and makes them more productive,” Eliot Horowitz, CTO and co-founder of MongoDB, said in a statement Tuesday. “Now, with these updates to MongoDB Atlas, we’re tearing down more of the barriers that stand between developers and their giant ideas.”
It’s 3 a.m. and your e-commerce site is down. You don’t know about it until you wake at 6 a.m.—OK, 7 a.m.
It turns out to be an issue in your cloud service, and you spend a few hours fixing the remote servers.

As with power-cycling the Wi-Fi router at home, things return to normal afterward. All in all, you were down for six hours. No biggie—except that you lost half a million dollars in revenue, and the cost to your reputation is estimated at $5 million. When people talk about such situations, what I hear is a lack of management.
Systems fail, both cloud-based and on-premise—that’s a given. What also should be a given, but often is not, is having the ability to correct issues or prevent them entirely.

For cloud systems, that takes a good understanding of cloud-management best practices and tools.
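The back-of-the-envelope outage math in the scenario above can be sketched as a tiny helper. The revenue and reputation figures are the article's illustrative numbers, not real data.

```python
def outage_cost(hours_down, revenue_per_hour, reputation_cost=0.0):
    """Rough outage cost: lost revenue for the downtime window plus an
    estimated reputation hit. Both figures are inputs, not predictions."""
    return hours_down * revenue_per_hour + reputation_cost

# The article's scenario: six hours down, $500,000 in revenue lost over
# that window, plus an estimated $5 million reputation cost.
total = outage_cost(6, 500_000 / 6, reputation_cost=5_000_000)
```

Even a crude model like this makes the case for monitoring and management: the fixed reputation cost dwarfs the hourly revenue loss.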
Google is finally giving administrators the ability to manage their encryption keys in Google Cloud Platform (GCP) with its Cloud Key Management Service (KMS). Google is the last of the three major cloud providers to offer a key management service, as Amazon and Microsoft already have similar offerings.

Cloud KMS, currently in beta, helps administrators manage the encryption keys for their organization without having to maintain an on-premise key management system or deploy hardware security modules. With Cloud KMS, administrators can manage all the organization's encryption keys, not only the ones used to protect data in GCP. Administrators can create, use, rotate, and destroy AES-256 symmetric encryption keys via the Cloud KMS API. Multiple versions of a key can be active at any time for decryption, but only one primary key version can be used for encrypting new data. The rotation schedule can be defined to automatically generate a new key version at fixed time intervals. There's also a built-in 24-hour delay when trying to destroy keys, to prevent accidental or malicious loss.

Cloud KMS integrates with GCP's Cloud Identity Access Management and Cloud Audit Logging services so that administrators can manage permissions for individual keys and monitor usage. Cloud KMS also provides a REST API that allows AES-256 encryption or decryption in Galois/Counter Mode, which is the same encryption library used internally to encrypt data in Google Cloud Storage. AES GCM is implemented in the BoringSSL library maintained by Google, and the company continually checks for weaknesses in the encryption library using several tools, "including tools similar to the recently open-sourced cryptographic test tool Project Wycheproof," said Google product manager Maya Kaczorowski on the Google Cloud Platform blog.

Compared to AWS and Microsoft Azure, GCP has lagged in encryption.
Amazon introduced customer-supplied encryption keys (CSEK) for its S3 service in June 2014, and it introduced the AWS Key Management Service later that year. Microsoft added CSEK via Key Vault in January 2015. Google began offering CSEK in June 2015 and is only now rolling out Cloud KMS.

Google Cloud Storage manages server-side encryption by default, and administrators have to specifically select "Cloud Key Management Service" to manage the keys in the cloud service, or "Customer Supplied Encryption Keys" to manage the keys on-premise. CSEK is also available with Compute Engine.

Kaczorowski said organizations in regulated industries, such as financial services and health care, can benefit from hosted key management services "for the ease of use and peace of mind that they provide." However, administrators should weigh that convenience against the fact that if the government obtains a legal order compelling Google to provide information about the keys, the company will have to comply, because it has access to all the keys managed by the service.

There's another potential hiccup for administrators to consider if the organization gathers personal information from Europeans. The European General Data Protection Regulation applies to European personal data, regardless of where it is stored in the world, and regulators in the past have recommended not storing encryption keys with the same cloud provider. If the key is kept securely with the organization, the cloud provider can't do anything beyond maintaining access to and availability of the data. Using GCP and Cloud KMS simultaneously may or may not be acceptable to European regulators.

"Encryption is only effective if you separate the encrypted data from the key storage. Using the same vendor, be it AWS or Google, to store the keys and data still raises compliance and security challenges for many businesses," said Pravin Kothari, founder, chairman, and CEO of cloud encryption company CipherCloud.
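The key operations the article lists (create, use, rotate, destroy, plus encrypt/decrypt over REST) are exposed through the Cloud KMS v1 API. As a rough sketch of what one call looks like, the helper below builds the URL and JSON body for an `:encrypt` request. The project, key ring, and key names are hypothetical, no network call is made, and the required OAuth2 bearer-token authentication is omitted.

```python
import base64
import json

KMS_ENDPOINT = "https://cloudkms.googleapis.com/v1"

def build_encrypt_request(project, location, key_ring, key, plaintext):
    """Build the URL and JSON body for a Cloud KMS :encrypt call.

    The key resource path and the base64-encoded "plaintext" field follow
    the Cloud KMS v1 REST API; the Authorization header is omitted here.
    """
    name = (f"projects/{project}/locations/{location}"
            f"/keyRings/{key_ring}/cryptoKeys/{key}")
    url = f"{KMS_ENDPOINT}/{name}:encrypt"
    body = json.dumps(
        {"plaintext": base64.b64encode(plaintext).decode("ascii")}
    )
    return url, body
```

The response would carry base64 ciphertext; a matching `:decrypt` call on the same key resource reverses it, which is how applications keep key material out of their own code entirely.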
Even the servers it colocates (!), says new doc revealing Alphabet sub's security secrets

Google has published an Infrastructure Security Design Overview that explains how it secures the cloud it uses for its own operations and for public cloud services. Revealed last Friday, the document outlines six layers of security and reveals some interesting factoids about the Alphabet subsidiary's operations, none more so than the revelation that “We also design custom chips, including a hardware security chip that is currently being deployed on both servers and peripherals.

These chips allow us to securely identify and authenticate legitimate Google devices at the hardware level.” That silicon works alongside cryptographic signatures employed “over low-level components like the BIOS, bootloader, kernel, and base operating system image.”

“These signatures can be validated during each boot or update,” the document says, adding that “The components are all Google-controlled, built, and hardened. With each new generation of hardware we strive to continually improve security: for example, depending on the generation of server design, we root the trust of the boot chain in either a lockable firmware chip, a microcontroller running Google-written security code, or the above-mentioned Google-designed security chip.”

Another interesting nugget of information the document reveals is that “Google additionally hosts some servers in third-party data centers,” a fact mentioned so the company can explain that when it works with others' bit barns it puts in place its own layers of physical security such as “independent biometric identification systems, cameras, and metal detectors.”

The document goes on to explain that Google's fleet of applications and services encrypt data before it is written to disk, to make it harder for malicious disk firmware to access data. Disks get the following treatment: “We enable hardware encryption support in our hard drives and SSDs and meticulously track each drive through its lifecycle.

Before a decommissioned encrypted storage device can physically leave our custody, it is cleaned using a multi-step process that includes two independent verifications.

Devices that do not pass this wiping procedure are physically destroyed (e.g. shredded) on-premise.”

Elsewhere, the document describes client security, which starts with universal second-factor authentication and then sees the company scan employees' devices to “ensure that the operating system images for these client devices are up-to-date with security patches and … control the applications that can be installed.”

“We additionally have systems for scanning user-installed apps, downloads, browser extensions, and content browsed from the web for suitability on corp clients.”

“Being on the corporate LAN is not our primary mechanism for granting access privileges. We instead use application-level access management controls which allow us to expose internal applications to only specific users when they are coming from a correctly managed device and from expected networks and geographic locations.”

Also explained are the automated and manual code review techniques Google uses to detect bugs in software its developers write.

The manual reviews “... are conducted by a team that includes experts across web security, cryptography, and operating system security.

The reviews can also result in new security library features and new fuzzers that can then be applied to other future products.”

There's also this description of the lengths Google goes to in its quest to protect source code: “Google’s source code is stored in a central repository where both current and past versions of the service are auditable.

The infrastructure can additionally be configured to require that a service’s binaries be built from specific reviewed, checked in, and tested source code.
Such code reviews require inspection and approval from at least one engineer other than the author, and the system enforces that code modifications to any system must be approved by the owners of that system.

These requirements limit the ability of an insider or adversary to make malicious modifications to source code and also provide a forensic trail from a service back to its source.”

There's plenty more in the document, like news that Google's public cloud runs virtual machines in a custom version of the KVM hypervisor.

Google also boasts in the document that it is “the largest submitter of CVEs and security bug fixes for the Linux KVM hypervisor.” We also learn that the Google cloud rests on the same security services as the rest of its offerings. There's also an explanation of the company's internal service identity and access management scheme, detailed in the diagram below, plus news that “We do not rely on internal network segmentation or firewalling as our primary security mechanisms”.

That's a little at odds with current interest in network virtualisation and microsegmentation.

[Diagram: Google's Service Identity and Access Management scheme]

The company's also published documents detailing each aspect of security discussed in the main document.

They're listed and linked to at the end of the master document. ®
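The boot-chain validation the document describes (each stage's signature checked before control passes to it) can be sketched as follows. This is an illustrative stand-in, not Google's implementation: real verified boot uses asymmetric signatures rooted in hardware, but the Python stdlib has no public-key primitives, so HMAC-SHA256 plays the signature role here.

```python
import hashlib
import hmac

def verify_component(image: bytes, signature: bytes, key: bytes) -> bool:
    """Check one component (BIOS, bootloader, kernel, ...) against its
    signature before handing control to it. compare_digest avoids
    timing side channels when comparing the two digests."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def verified_boot(chain, key):
    """Validate each (name, image, signature) stage in order; refuse to
    continue at the first mismatch."""
    for name, image, signature in chain:
        if not verify_component(image, signature, key):
            raise RuntimeError(f"signature check failed at {name}")
    return True
```

The design point is the same one the overview makes: trust is anchored in a root key (here a shared secret, in Google's case a lockable firmware chip or security chip), and every later stage is validated against it rather than trusted implicitly.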
CodeLathe FileCloud, version and earlier, is vulnerable to cross-site request forgery (CSRF).
Encryption got you down? Google will manage your secrets for you

Google on Wednesday introduced its Cloud Key Management Service in beta to help Google Cloud Platform customers deal with their encryption keys. "Cloud KMS offers a cloud-based root of trust that you can monitor and audit," said product manager Maya Kaczorowski in a blog post. "As an alternative to custom-built or ad-hoc key management systems, which are difficult to scale and maintain, Cloud KMS makes it easy to keep your keys safe."

Following the disclosures about the scope of online surveillance by former NSA contractor Edward Snowden in 2013, encryption became more important for cloud service providers – particularly encryption that allows customers to control the keys. Google began offering customer-supplied encryption keys (CSEK) in June 2015.

But it hasn't exactly led the way with encryption for cloud customers.

Amazon Web Services introduced CSEK for S3 in June 2014 and in November of that year introduced the AWS Key Management Service. Microsoft Azure added CSEK via Key Vault in January 2015. A Google spokesperson wasn't immediately available to discuss the service.

Garrett Bekker, an analyst with 451 Research, said in a statement provided by Google that KMS "fills a gap by providing customers with the ability to manage their encryption keys in a multi-tenant cloud service, without the need to maintain an on-premise key management system or HSM [hardware security module]."

GCP customers can use Cloud KMS to create, use, rotate (at will or on a schedule), and destroy AES-256 symmetric encryption keys.

Cloud KMS provides a REST API that can use a key to encrypt or decrypt data. Cloud KMS integrates with Cloud Identity Access Management and Cloud Audit Logging, two related GCP services. Kaczorowski says that Cloud KMS relies on the Advanced Encryption Standard (AES) in Galois/Counter Mode [PDF], a method for high-speed encryption.

Google constantly checks its implementation, residing in its BoringSSL library, using tools like Project Wycheproof, according to Kaczorowski. While key management offers convenience, the tradeoff is security, since service providers can be compelled to turn keys over to authorities when presented with lawful demands. ®
Red Hat OpenShift Enterprise release 2.2.11 is now available with updated packages that fix several bugs and add various enhancements. OpenShift Enterprise by Red Hat is the company's cloud computing Platform-as-a-Service (PaaS) solution designed for on-premise or private cloud deployments.

This update fixes the following bugs:

* The routing daemon (RD) can now be configured with multiple F5 BIG-IP hosts. During F5 configurations, the RD tries to connect to the first configured host. If it fails, it retries each successive host until it connects to a host or exhausts its host list.

The RD now correctly sends a NACK response to ActiveMQ when operations fail.

ActiveMQ redelivers the message, causing the RD to retry. The RD's communication with ActiveMQ, logging of errors, and handling of error responses from F5 BIG-IP have improved.

This enables the RD to continue operation with the F5 BIG-IP cluster even if the RD loses contact with the cluster, improving the RD's behavior when multiple instances are run in a clustered configuration.

The RD is more resilient against losing contact with individual F5 BIG-IP hosts in a cluster of F5 BIG-IP hosts and functions better when run in a clustered configuration.

The RD elicits fewer error responses from F5 BIG-IP and provides better logs, making error diagnosis easier. (BZ#1227472)

* Users can now allow the provided database connection helper functions mysql(), psql(), and mongo() to be overwritten.

This allows users to overwrite the helper functions to easily connect to external databases. Users can now define mysql(), psql(), and mongo() functions in their $OPENSHIFT_DATA_DIR/.bash_profile, which can be used within an SSH connection to a gear. (BZ#1258033)

* HAProxy cookies were inconsistently named. Requests to an HA application were not always being routed to the correct gear.

This fix changes the cookie naming logic so that the cookie name reflects which back-end gear is handling the request.

As a result, all back-end HAProxy gears should now return the same cookie name and the requests should be properly routed to the correct back-end gear. (BZ#1377433)

* EWS Tomcat 7 can now be configured on nodes to use either EWS 2 or EWS 3 channels, giving an administrator a choice of which EWS version the EWS 2 cartridge deploys.

This option was enabled to allow administrators to take advantage of the EWS 3 lifecycle and the security or bug updates it receives, compared to the maintenance lifecycle that EWS 2 is currently receiving. Administrators can mix and match EWS versions (with node profiles) to control which Tomcat version is installed when an EWS 2 cartridge is created. (BZ#1394328)

* The new version of PIP (7.1.0) no longer accepted insecure (HTTP) mirrors. Also, PIP attempted to create and then write files into the .cache directory, which users do not have permission to create post-installation.

As a result, Python dependencies failed to be installed. The default PyPI mirror URL is now updated to use a secure connection (HTTPS). The .cache directory is created in advance during installation so it can be used later by PIP. With this fix, Python dependencies can be fetched from the PyPI mirror and installed properly. (BZ#1401120)

* When using a gear's UUID in the logical volume name, a grep in oo-accept-node caused oo-accept-node to fail.

The grep was fixed with this update. Using the gear UUID in the logical volume name no longer causes oo-accept-node to fail. (BZ#1401124)

* Previously, moving a gear with many aliases reloaded Apache for each alias. The excess aliases caused the gear move to time out and fail. With this fix, a gear move will now update Apache once with an array of aliases instead of updating after each alias. (BZ#1401132)

* Previously, node-proxy did not specify to use the cipher order, so the order did not matter when using a custom cipher order.

This fix makes node-proxy honor the cipher order.

Custom cipher orders will now be taken into account when choosing a cipher. (BZ#1401133)

All OpenShift Enterprise 2 users are advised to upgrade to these updated packages.

Red Hat OpenShift Enterprise 2 SRPMS:
openshift-enterprise-upgrade-2.2.11-1.el6op.src.rpm     MD5: 7ec16aed5fc59ed2890c39c512535506 SHA-256: 684678600d7a39ada09613e3e8f2131ff1c0302d9e3041a187cebf76675ecaaa
openshift-origin-cartridge-haproxy-     MD5: a1f1449b05688c5a980633d6c7d944f3 SHA-256: 2929f1d04ea76635016830e108b098bbada8b45efc7bb53c73eb445ab77c830a
openshift-origin-cartridge-python-     MD5: 3dcfe8900468bbf667affe2bf00a696e SHA-256: 4d29292623e415e1d5775a3f7e097d7f6a6c315d66c2a29b68e806788180ce2d
openshift-origin-msg-node-mcollective-     MD5: d997b5a2ad85f8d336f207978d7bd6a3 SHA-256: 8894b0fdc2fb0a033626bbbd4e1ccb2eaeb3b3b8f9fb6b3d6c3904077f3d1d0c
openshift-origin-node-proxy-     MD5: 0a9ef5709ecdb7a38e2fb62c5be21a3d SHA-256: 5be7a48d2364bc0448f88d6a63a5be81270902695d674466c3a36d8fc5c6062c
openshift-origin-node-util-     MD5: de83fb1a8228c3965286c5ec20162e32 SHA-256: 832c41d74199362210989ef8c73b6e463f9116d23e3b934107f6135106e9e5a5
rubygem-openshift-origin-frontend-apache-mod-rewrite-     MD5: 16a356b09fa38aeb1c0dd6077b9170c6 SHA-256: c6fcb52c44e805b4a2d3bd52845d3aae477a15cc9b3eadea8db4d92cff6b9cb8
rubygem-openshift-origin-frontend-apache-vhost-     MD5: e8dd00e793be08b117ac994405b260b4 SHA-256: 09b5e3a38406ed813841204b7247faa840cdf9e5bc031b1acf4ae4e6ddf3ebb1
rubygem-openshift-origin-frontend-haproxy-sni-proxy-     MD5: 84be2c2e546dcf2d5e1c00f482347865 SHA-256: d8e741d5123a3b4702c431f61e2e4f19415268f15536c8aeb4d4148a113f0fda
rubygem-openshift-origin-frontend-nodejs-websocket-     MD5: 78a15fbefa3e00fe25cd350b59195172 SHA-256: 9e414c68803f45a0ec50a0a7f700bb80c168401ca3038310c45f624e33eb6354
rubygem-openshift-origin-node-     MD5: 21ef886a44b03c688d48846fed34b974 SHA-256: aeddbeafb1f58d2b2349ad5fa97fe3f5188bf5b905e0938aa3169bfe0746fdde
rubygem-openshift-origin-routing-daemon-     MD5: 1744e26a273c397078b83ea4946f7836 SHA-256: c039f8d023321d8eed0c09b123b171f27c866860705d45aa05b85f82faedf346

x86_64:
openshift-enterprise-release-2.2.11-1.el6op.noarch.rpm     MD5: 2014a606a47b5e5491341a1381f83ccf SHA-256: c211f0dd8c3efba9d8f2840a7e418f2096dbfbb47f13a8ec7cf7929e38e6162f
openshift-enterprise-upgrade-broker-2.2.11-1.el6op.noarch.rpm     MD5: 74e50b025859ef9d22efaea0771d1dfa SHA-256: e9fac95a23aa696dfb4c1e4cc8cf33d5cabfb0d9ea4a7f29925936635b6f6078
openshift-enterprise-upgrade-node-2.2.11-1.el6op.noarch.rpm     MD5: 43b23128a6f8508f872f199f11e99844 SHA-256: 2182ab628c84f5bdcc4fff537aadd260894787a2c2a47d2501912b7190b8ea4d
openshift-enterprise-yum-validator-2.2.11-1.el6op.noarch.rpm     MD5: af77a0545ff330278c6cd6b02671695a SHA-256: b867d00bda0f52d6ba6a98a74f4303c0df9b4b74405e0487131fb3180ec2150e
openshift-origin-cartridge-haproxy-     MD5: 749c76f4c105f7ad2b8b4599c393eb39 SHA-256: 51eccf1effbf4e287e5d7d22432c5c17e94ee5b03a082e40a38811a29fffb34f
openshift-origin-cartridge-python-     MD5: 5a2b1bc49dc51b6e1d27418dcbdebe92 SHA-256: d1d081769812ca7ff3a109144639e5f0fdfa6879354959e1a4907b21316565d1
openshift-origin-msg-node-mcollective-     MD5: 4f7a36fe214d0ff3c73b03f420455451 SHA-256: 3571f7067485b72a67d8de2d6f22ddc06bb8e09128047011cb1c54084eb9e6d4
openshift-origin-node-proxy-     MD5: f422b78254bc9e061281b769b6257905 SHA-256: 2d0fe749cbedb32b5feaa5c871bf38c6cad7f27a90cea0f8466f774974781166
openshift-origin-node-util-     MD5: 8a4247c0b621b63656b4fdbfaf48f9e7 SHA-256: ab960e297a55df5a662793af11e6b540ebab93df6c3edb32610597afbecaacc8
rubygem-openshift-origin-frontend-apache-mod-rewrite-     MD5: 95210c17c2f0cc126b6b0756f6ca3fc3 SHA-256: 22362fee3fa68b4ad59ed0a883948d5561d425b67a3396438e408c6df3bbab56
rubygem-openshift-origin-frontend-apache-vhost-     MD5: 59411dfa22500844ee7c995cbb3e855d SHA-256: 307fc8948cbbad0548562b7dfd01c7cc976346f9974c30f63801a6ae5925f540
rubygem-openshift-origin-frontend-haproxy-sni-proxy-     MD5: 19897e4896ccdf8f527eeef81334dd86 SHA-256: 2139ed1ff65db053d722c9a61c0490d5a1e3457bc05b7a746bb1e398c60786cb
rubygem-openshift-origin-frontend-nodejs-websocket-     MD5: a1d083fdbe96c3a50a44317d43f16f2a SHA-256: adad2d5496b14a6310eb947e4d07eecc2f892a4c8a6223473718ad006bcc761b
rubygem-openshift-origin-node-     MD5: f0863b65b63e9e85f9cfc3eef3029980 SHA-256: 3e1c1250766b63670687ff4ae1e8327229e82b738057bb22758544a24cdc3fc2
rubygem-openshift-origin-routing-daemon-     MD5: 1a08ee809815b4c0e231a98deec953d0 SHA-256: be88d6d1f339675e91ca18087c9af6825afbb26f9abc2570188fb715c83fe57c

(The unlinked packages above are only available from the Red Hat Network)

1258033 - Allow the override of pre-defined function for database connections
1377433 - haproxy configuration in HA gears sets inconsistent cookie values, breaking session affinity
1394328 - [RFE] EWS 2 cartridge should be able to use EWS 3 binaries.
1401120 - pip permission error prevents installing on python-2.7 cartridge
1401124 - oo-accept-node reports missing quota if filesystem name contains gear uuid
1401132 - Moving gears with many aliases causes excessive number of apache reloads

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
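The helper-function override behind BZ#1258033 above could look like the following sketch. The hostname and credential variables are illustrative placeholders, not values from the advisory.

```shell
# Hypothetical snippet for $OPENSHIFT_DATA_DIR/.bash_profile: override the
# provided mysql() helper so it connects to an external database instead of
# the gear-local one. Host, port, and credential names are placeholders.
mysql() {
    command mysql --host=db.example.com --port=3306 \
        --user="$EXTERNAL_DB_USER" --password="$EXTERNAL_DB_PASS" "$@"
}
```

Because the function shadows the `mysql` binary within the gear's SSH session, existing workflows keep working while transparently pointing at the external database; `command mysql` still invokes the real client underneath.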
One Identity Connect for Cloud removes a significant inhibitor to digital transformation: complexity.

Quest Software, the enterprise IT tool company Dell essentially rented for four years to be Dell Software, and the bigger company have officially parted ways.

Dell bought EMC and has new software companies like VMware and Pivotal on its team, and Quest is now going forward on its own under new ownership.

One Identity, a business entity now operating under Quest Software, has launched a new-gen identity and access-management package called One Identity Connect for Cloud to govern users of cloud applications.
It's the first new product from Quest since Dell's people packed up from Aliso Viejo, Calif. last summer and moved back to Austin.The announcement was made at the recent 2016 Gartner Identity and Access Management Summit in Las Vegas.One Identity Connect for Cloud removes a significant inhibitor to digital transformation: complexity. Using this toolset, enterprises now can rapidly extend identity governance, access controls, compliance reporting and provisioning/de-provisioning to popular cloud-based apps including Salesforce, ServiceNow,, DropBox, Concur and Amazon AWS and many more. Extension of One Identity Manager to Cloud One Identity Connect for Cloud extends the capabilities of the original One Identity Manager to cloud applications, representing a major identity and access management milestone.

The product is the first in the industry, Quest claims, to embrace the System for Cross-domain Identity Management (SCIM) as the common interface between on-premise and SaaS applications, enabling One Identity developers to address the IAM needs of multiple cloud applications from a single SCIM interface. The new SCIM support in Identity Manager uses Dell Boomi, an integration platform as a service (iPaaS) that enables cloud applications to talk directly to One Identity Manager for complete identity management of cloud-based applications in a traditionally difficult-to-address hybrid environment.

In contrast to competing offerings—which require custom connectors developed for each cloud application, each time—Connect for Cloud provides one interface, eliminating time-consuming custom coding and one-off connections to on-board hundreds of cloud applications today, and many more in the coming months.

Enabler of Digital Transformation

A 2016 survey of global businesses found that 97 percent of respondents are investing in digital technologies to transform their businesses, yet only 18 percent reported that security has been involved in these initiatives, most likely because security slows them down. Because cloud applications are a major component of any digital journey, Connect for Cloud mitigates ad-hoc security issues related to their use and enables their quicker and more complete adoption, Jackson Shaw, senior director of product management at One Identity, told eWEEK.

"Organizations can manage identity governance for on-premise, hybrid and cloud deployments from a single interface and with a single set of policies, rules, workflows, and identities, granting full visibility into users, rights, data and applications regardless of where they reside," Shaw said.

One Identity Connect for Cloud is available now worldwide for One Identity Manager 7.1 customers.
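SCIM, the interface One Identity standardized on, defines a common JSON schema for user resources (RFC 7643), which is what lets one connector speak to many cloud applications. A minimal sketch of the user payload a SCIM-based connector would exchange; only a few core attributes are shown, and real provisioning payloads carry emails, group memberships, and enterprise-extension attributes too.

```python
import json

SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"

def scim_user(user_name, display_name, active=True):
    """Build a minimal SCIM 2.0 User resource (RFC 7643 core schema).

    A provisioning system would POST this JSON to a service provider's
    /Users endpoint to create the account, and PATCH it to deactivate.
    """
    return {
        "schemas": [SCIM_USER_SCHEMA],
        "userName": user_name,
        "displayName": display_name,
        "active": active,
    }
```

Deprovisioning in SCIM is typically just flipping `active` to `False` on the same resource, which is why a single interface can cover the whole identity lifecycle across applications.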

The focus on digital is set to remain the key trend in the IT industry for the next 12 months

London, UK – 29 November 2016 - Dimension Data, the global ICT solutions and services provider, today published its top IT predictions for 2017, and the focus on digital is set to remain the key trend in the industry for the next 12 months.

Dimension Data’s Chief Technology Officer, Ettienne Reinecke, says digital is about building truly customer-centric business models on IT, including the network, data centre, applications, and other infrastructure, which may be on-premise or cloud-based. “Today, there’s no such thing as a digital strategy – just strategy in a digital world.

And while the digital age is creating a degree of uncertainty for some organisations, it’s also opening the doors to exciting possibilities and ushering in an era of infinite potential.”

Reinecke cites ownership and access to data – and metadata – as a key theme. “In the year ahead, control and ownership of data and metadata will emerge as a point of discussion - and indeed contention.

That’s because data and metadata are the ‘gold dust’ that allow organisations to glean rich insights about customer behaviour.
In addition, metadata allows organisations to identify specific behavioural patterns, derive business intelligence, and make informed business decisions,” Reinecke explains.

As a result, organisations are becoming increasingly protective of their metadata, and wary of who has access to it. “Organisations don’t just want ownership and control of their data for compliance reasons: they want it to perform analytics. We expect that this will trigger some interesting discussions between businesses and their cloud providers.

For example, where are the boundaries with respect to ownership, especially around metadata? We foresee this issue resulting in a bit of ‘push and pull’ among the various parties.”

Other IT trends that Dimension Data predicts will make their mark in 2017 include:

  • Intelligence is driving the predictive cybersecurity posture
    Cybercrime is big business. Over the last few years, cybercriminals have been re-investing much of the ill-gotten gains into developing more sophisticated capabilities, using more advanced technologies.

    Despite ongoing innovation in the cybersecurity industry, much of the effort remains reactive.

    Cybersecurity will become more predictive, rather than merely reactive.

  • Machines are being embedded in the workspace for tomorrow
    A new generation is starting to show up at work, and they’re not millennials, or even Gen Z: they’re machines.

    And it won’t be much longer before holographics, augmented reality, and virtual reality begin to move from B2C into B2B.

    Also, over the next two to three years these technologies will drive a fundamental transformation of the workspace.

  • The Internet of Things is delivering on the promise of big data
    IoT will deliver on the promise of big data.
    Increasingly, big data projects are going through multiple updates in a single year – and the Internet of Things (IoT) is largely the reason.

    That’s because IoT makes it possible to examine specific patterns that deliver specific business outcomes, and this increasingly has to be done in real time.

    This will drive healthier investment in, and faster returns from, big data projects.

  • Container technology is the new disruptor in the data centre and a key enabler for hybrid IT
    In 2017 we’ll see more widespread adoption of containers, but the transition to a fully containerised world will take a few more years.
    In addition, we’ll see increasing adoption of network function virtualisation (NFV) to cloud-enable existing networks, and new networks will be architected with hybrid cloud in mind.

Visit to read more about Dimension Data’s 2017 IT predictions.



About Dimension Data
Dimension Data uses the power of technology to help organisations achieve great things in the digital era.

As a member of the NTT Group, we accelerate our clients’ ambitions through digital infrastructure, hybrid cloud, workspaces for tomorrow, and cybersecurity. With a turnover of USD 7.5 billion, offices in 58 countries, and 31,000 employees, we deliver wherever our clients are, at every stage of their technology journey. We’re proud to be the Official Technology Partner of Amaury Sport Organisation, which owns the Tour de France, and the title partner of the cycling team, Team Dimension Data for Qhubeka.

Press Contacts
Charlotte Martin/Matthew Watkins
Finn Partners
tel +44 (0) 203 217 7060

SDL Secure Translation Solution offers a controlled environment where organizations determine who accesses their data, and how.

WAKEFIELD, MA and MAIDENHEAD, UK – November 17, 2016 – SDL (LSE:SDL) today announced it is redefining security standards within the translation industry with the launch of its SDL Secure Translation Solution.

As strict requirements continue to evolve in highly regulated industries – including Financial Services and Life Sciences – SDL is providing organizations the ability to be compliant as well as offering full traceability and access control of their content. SDL Secure Translation Solution provides a cost-effective, easily deployable and scalable option in two ways.

Customers can define content as highly sensitive and deliver it to SDL’s secure environment for translation.

They have the option to use their linguists of choice and only those who have relevant permissions can access content.
In addition, customers using SDL Language Service can be assured that their sensitive documents are sent via the secure environment and only accessed by translators that the customer defines. Within the environment, companies can control how and by whom their data is accessed from either their on-premise technology solution, or by using our Language Delivery Service.

Translation memory, terminology, translation workbench and machine translation are all available within the secure environment.

This ensures that nothing can be stored locally, copied and pasted, printed or added to unauthorized translation memories.

The explosion of content has brought great challenges to global organizations, most notably around data privacy and protection. However, these challenges do not outweigh the need for high-quality, localized translations to reach customers.
SDL fulfills the demand for cost-efficient, scalable and easy-to-implement solutions by building on its history of offering a secure translation supply chain for the most sensitive content.

The latest innovation allows customers to deliver highly sensitive information to SDL’s secure environment virtually, with the added safeguard of industry-leading standards.

Documents containing highly sensitive information sent using SDL’s Language Delivery Service will be forwarded via the secure environment, accessible only to translators with the appropriate access rights.

“Security continues to be crucial for business,” said Adolfo Hernandez, CEO, SDL. “We are now taking it to the next level, providing an unprecedented level of security and control for those industries that require the highest level of protection.”

SDL’s Secure Translation Solution will be available in early 2017.

To find out more about the technology, read our blog.

About SDL
SDL (LSE: SDL) is the global innovator in language translation technology, services and content management.

For more than 20 years, SDL has transformed business results by enabling nuanced digital experiences with customers across the globe so they can create personalized connections anywhere and on any device.

Are you in the know? Find out more at and follow us on Twitter, LinkedIn and Facebook.

###

Contacts:
SDL: Vicky
PAN Communications: Emily Holt / Jenny Radloff +1
New integrated SSO, MFA, user-provisioning and risk-based network and application policy capabilities close critical business security holes in IAM

LONDON, UK – November 16, 2016 – Kaseya®, the leading provider of complete IT management solutions for Managed Service Providers (MSPs) and small to mid-sized businesses (SMBs), today announced the immediate availability of the latest release of Kaseya AuthAnvil on-demand.

Delivering a new level of security within a single identity and access management (IAM) solution, the technology now provides easy single sign-on (SSO), multifactor authentication (MFA) and automated user provisioning for Microsoft Office 365.

AuthAnvil and Office 365
Microsoft Office 365 commercial subscriptions have ballooned in 2016, up 40 percent from the previous year to 85 million subscribers.

As companies around the world continue to embrace the shift to the cloud, millions of Microsoft Office 365 corporate users now have access to Kaseya AuthAnvil’s patented password management, MFA and SSO capabilities to further secure both employee and customer environments in the cloud. SSO support for Office 365 adds one of the most popular business applications to Kaseya's growing library of thousands of applications supported by AuthAnvil.

The technology provides end users with the simplicity and time savings of securely signing in once to access all applications, without having to remember multiple passwords, accounts and login details.

With its MFA capabilities, Kaseya AuthAnvil adds an easy-to-use layer of security that protects businesses from the increasingly common password breaches at popular services and sites such as LinkedIn, Dropbox and Yahoo. Kaseya's MFA supports common end-user devices, such as Apple iOS and Android phones and tablets, Windows 10 or the new U2F protocol, and meets the latest requirements of NIST 800-63 for mobile MFA, as well as HIPAA, FFIEC and PCI standards.

Kaseya AuthAnvil goes beyond security, compliance and usability by automatically provisioning Office 365 user accounts as part of the authentication process.

By directly tying into a company’s existing Active Directory or other user directory, AuthAnvil can automate both Office 365 user provisioning and privileges assignment.
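Directory-driven provisioning of this kind can be sketched briefly. This is a hypothetical illustration, not Kaseya's API: group memberships read from a directory record determine which Office 365 services an account receives, so no administrator assigns them by hand.

```python
# Illustrative sketch of directory-driven provisioning: a user's directory
# groups map to the services their new account should be granted.
# The group names and service names here are invented for the example.
GROUP_TO_SERVICES = {
    "staff":       {"mail", "OneDrive"},
    "engineering": {"SharePoint"},
}

def provision_user(directory_record: dict) -> dict:
    """Build an account whose privileges derive from directory group membership."""
    services = set()
    for group in directory_record["groups"]:
        services |= GROUP_TO_SERVICES.get(group, set())
    return {"user": directory_record["name"], "services": services}

account = provision_user({"name": "jdoe", "groups": ["staff", "engineering"]})
print(sorted(account["services"]))   # → ['OneDrive', 'SharePoint', 'mail']
```

Because the mapping lives in one place, changing a user's directory groups automatically changes what the next provisioning run grants them.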

As a result, IT administrators no longer have the burden of creating user accounts and assigning common services and privileges, such as SharePoint, OneDrive and mail.

Additionally, Kaseya introduces an industry first with its risk-based network and application authentication process engine.

This technology makes it straightforward for administrators to create fine-grained compliance controls for users, networks and applications. Using a simple, visual policy interface, administrators can quickly and easily create authentication rules using a risk-based approach.
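A minimal sketch of such risk-based rules, with entirely hypothetical rule names and thresholds, evaluates attributes of a sign-in attempt (time, network, location) and grants access only when no rule flags it:

```python
# Hypothetical risk-based access rules: each rule inspects one attribute of
# a sign-in attempt and returns True if it looks risky. Access is granted
# only if no rule fires. None of these names come from Kaseya's product.
from datetime import time

def outside_business_hours(attempt):
    return not (time(8, 0) <= attempt["time"] <= time(18, 0))

def untrusted_network(attempt):
    return attempt["network"] not in {"corporate", "vpn"}

def blocked_location(attempt):
    return attempt["country"] not in {"GB", "US"}

RULES = [outside_business_hours, untrusted_network, blocked_location]

def allow(attempt: dict) -> bool:
    """Grant access only when no rule flags the attempt as risky."""
    return not any(rule(attempt) for rule in RULES)

ok    = {"time": time(10, 30), "network": "corporate",   "country": "GB"}
risky = {"time": time(23, 0),  "network": "public-wifi", "country": "GB"}
print(allow(ok))     # → True
print(allow(risky))  # → False
```

Adding a new control is just adding a function to the rule list, which is roughly what a visual policy editor would generate behind the scenes.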

Access can be controlled not only based on any user attribute, but also based on geo-location, time, type of network or individual application usage.

To learn more about the new Kaseya AuthAnvil release, please visit:

Supporting Quotes
“Microsoft Office 365 is at the heart of many of our customers’ businesses, as well as our own.
So security is paramount for this application.

AuthAnvil’s latest support for O365 is a game changer.

The integration of single sign-on, multifactor authentication and automatic user provisioning means our customers can trust that our own environment is as secure as possible, and that we have the capability and know-how to safeguard theirs,” said Bill Burke, CIO, Corporate IT Solutions. “Plain and simple – AuthAnvil just works.”

“It’s not easy to strike a balance between a seamless user experience and top-notch security, but that is exactly what Kaseya AuthAnvil accomplishes,” said Jason Shirdon, vice president of operations, Ease Technologies. “With AuthAnvil, we’re able to comprehensively manage our IAM needs both efficiently and consistently.

As an MSP, our clients rely on us to be security conscious, which is why we use AuthAnvil.”

“The growth of the cloud is in a historic phase right now, and Microsoft Office 365 is one of the leading drivers of this advancement.

That said, security remains a leading concern for all organisations as corporate data increasingly resides outside IT firewalls.

The latest release of Kaseya AuthAnvil locks down two key threat vectors to hybrid cloud environments: endpoint security and network access,” said Mike Puglia, chief product officer for Kaseya. “With Kaseya AuthAnvil, our multiple layers of security checks and balances ensure that access is limited to authorised users only, giving IT leaders peace of mind in the security of their corporate information.”

About Kaseya
Kaseya is the leading provider of complete IT management solutions for Managed Service Providers and small to mid-sized businesses. Kaseya allows organisations to efficiently manage and secure IT in order to drive IT service and business success. Offered as both an industry-leading cloud solution and on-premise software, Kaseya solutions empower businesses to command all of IT centrally, manage remote and distributed environments with ease, and automate across IT management functions. Kaseya solutions currently manage over 10 million endpoints worldwide and are in use by customers in a wide variety of industries, including retail, manufacturing, healthcare, education, government, media, technology, finance and more. Kaseya, headquartered in Dublin, Ireland, is privately held with a presence in over 20 countries.

To learn more, please visit

###

Media Contact
Alex Sweeney
The Whiteoaks Consultancy
Phone: +44 1252 727 313
Email: