Data Center is intended for datacenter deployments, with capabilities like high availability and clustering.
Bitbucket Server is deployed on a single server. Also, smart mirror authentication caching in Data Center 5.0 lets global teams maintain mirror access by caching authentication credentials locally in the event of short outages, Atlassian said.
The Bitbucket upgrades are currently in a beta stage of release.
In some cases, those certificates made it possible to spoof HTTPS-protected websites. One of the most fundamental requirements Google and other major browser developers impose on CAs is that they issue certificates only to people who verify rightful control of the affected domain name or company name. On multiple occasions last year and earlier this month, the Symantec-owned CAs issued 108 credentials that violated these strict industry guidelines, according to research published Thursday by Andrew Ayer, a security researcher and founder of a CA reseller known as SSLMate.
These guidelines were put in place to ensure the integrity of the entire encrypted Web. Nine of the certificates were issued without the permission or knowledge of the affected domain owners.
The remaining 99 certificates were issued without proper validation of the company information in the certificate. Many of the improperly issued certificates—which contained the string "test" in various places in a likely indication they were created for test purposes—were revoked within an hour of being issued.
Still, the mis-issuance represents a major violation by Symantec, which in 2015 fired an undisclosed number of CA employees for doing much the same thing. Even when CA-issued certificates are discovered to be fraudulent and revoked, they can still be used to force browsers to verify an impostor site.
The difficulty browsers have in blacklisting revoked certificates in real-time is precisely why industry rules strictly control the issuance of such credentials.
There's no indication that the unauthorized certificates were ever used in the wild, but there's also no way to rule out that possibility, however remote it is. "Chrome doesn't [immediately] check certificate revocation, so a revoked certificate can be used in an attack just as easily as an unrevoked certificate," Ayer told Ars. "By default, other browsers fail open and accept a revoked certificate as legitimate if the attacker can successfully block the browser from contacting the revocation server." ("Fail open" is a term that means the browser automatically accepts the certificate in the event the browser can't access the revocation list.) The nine certificates issued without the domain name owners' permission affected 15 separate domains, with names including wps.itsskin.com, example.com, test.com, test1.com, test2.com, and others.
Three Symantec-owned CAs—known as Symantec Trust Network, GeoTrust Inc., and Thawte Inc.—issued the credentials on July 14, October 26, and November 15.
The other 99 certificates were issued on many dates between October 21 and January 18.
In an e-mail, a Symantec spokeswoman wrote: "Symantec has learned of a possible situation regarding certificate mis-issuance involving Symantec and other certificate authorities. We are currently gathering the facts about this situation and will provide an update once we have completed our investigation and verified information." This is the second major violation of the so-called baseline requirements over the past four months.
Those requirements were mandated by the CA/Browser Forum, an industry group made up of CAs and the developers of major browsers that trust them.
In November, Mozilla recommended blocking China-based WoSign for 12 months after that CA was caught falsifying the issuance date of certificates to get around a prohibition against use of the weak SHA1 cryptographic hashing algorithm. Other browser makers quickly agreed. Ayer discovered the unauthorized certificates by analyzing the publicly available certificate transparency log, a project started by Google for auditing the issuance of Chrome-trusted credentials. Normally, Google requires CAs to report only the issuance of so-called extended validation certificates, which offer a higher level of trust because they verify the identity of the holder, rather than just control of the domain.
Following Symantec's previously mentioned 2015 mishap, however, Google required Symantec to log all certificates issued by its CAs. Had Symantec not been required to report all certificates, there's a strong likelihood the violation never would have come to light.
The project’s goal is to make it easier for applications and services to share and discover public keys for users, but it will be a while before it's ready for prime time. Secure communications should be de rigueur, but it remains frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).
Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.
Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys.
An audit mechanism keeps the service accountable.
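Verifiable logs of this kind, in both Certificate Transparency and the CONIKS design that Key Transparency draws on, are typically built on a Merkle hash tree: a client can confirm that its key appears in the published directory using a proof whose size grows only logarithmically with the directory. Here is a minimal sketch in the style of RFC 6962, assuming a power-of-two number of entries for brevity:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(entry: bytes) -> bytes:
    return _h(b"\x00" + entry)          # RFC 6962 leaf prefix

def node_hash(left: bytes, right: bytes) -> bytes:
    return _h(b"\x01" + left + right)   # RFC 6962 interior-node prefix

def tree_head(entries):
    """Merkle root over a list of entries (power-of-two count for brevity)."""
    level = [leaf_hash(e) for e in entries]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(entries, index):
    """Sibling hashes on the path from leaf `index` up to the root."""
    level = [leaf_hash(e) for e in entries]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling at this level
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(entry, index, proof, root):
    """Recompute the root from one entry plus its proof; compare to the root."""
    h = leaf_hash(entry)
    for sibling in proof:
        h = node_hash(h, sibling) if index % 2 == 0 else node_hash(sibling, h)
        index //= 2
    return h == root
```

Tampering with any entry changes the root hash, so auditors who compare root hashes with one another can detect a directory that shows different keys to different users.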
There is the security protection of knowing that everyone is using the same published key, and any malicious attempts to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new, as PGP previously attempted a similar task with Global Directory.
The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.
Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities had repeatedly misissued certificates. Google seems to be taking a “build it, and maybe applications will come” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in the early stages of design and development. “With this first open source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.
A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.
Run as a user client, CONIKS integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University’s Center for Information Technology Policy and the main author of CONIKS.
For example, Melara and her team are currently integrating CONIKS to work with Tor Messenger.
CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.
For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.
Containers may already be running in your organization; they might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know whether any are running in their companies. In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers.
Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system.
They are flexible, scalable, and easy to use, and they can pack a lot more applications into a given physical infrastructure than is possible with VMs.
And because they share the host operating system, rather than relying on a guest OS, containers can be spun up instantly (in seconds versus the minutes VMs require). A June 2016 report from the Cloud Foundry Foundation surveyed 711 companies about their use of containers. More than half had either deployed or were in the process of evaluating containers. Of those, 16 percent have already mainstreamed the use of containers, with 64 percent expecting to do so within the next year.
If security teams want to seize the opportunity (borrowing a devops term) to “shift security to the left,” they need to identify and involve themselves in container initiatives now. Developers and devops teams have embraced containers because they align with the devops philosophy of agile, continuous application delivery. However, as is the case with any new technology, containers also introduce new and unique security challenges.
These include the following: Inflow of vulnerable source code: Because container images are built largely from open source code, images created by an organization’s developers are often updated, then stored and used as necessary.
This creates an endless stream of uncontrolled code that may harbor vulnerabilities or unexpected behaviors. Large attack surface: In a given environment, there would be many more containers than there would be applications, VMs, databases, or any other object that requires protecting.
The large numbers of containers running on multiple machines, whether on premises or in the cloud, make it difficult to track what’s going on or to detect anomalies through the noise. Lack of visibility: Containers are run by a container engine, such as Docker or Rkt, that interfaces with the Linux kernel.
This creates another layer of abstraction that can mask the activity of specific containers or what specific users are doing within the containers. Devops speed: The pace of change is such that containers typically have a lifespan about one-quarter that of VMs, on average.
Containers can be executed in an instant, run for a few minutes, then stopped and removed.
This ephemerality makes it possible to launch attacks and disappear quickly, with no need to install anything. “Noisy neighbor” containers: A container might behave in a way that effectively creates a DoS attack on other containers.
For example, opening sockets repeatedly will quickly bring the entire host machine to a crawl and eventually cause it to freeze up. Container breakout to the host: Containers might run as a root user, making it possible to use privilege escalation to break the “containment” and access the host’s operating system. “East-west” network attacks: A jeopardized container can be leveraged to launch attacks across the network, especially if its outbound network connections and ability to run with raw sockets were not properly restricted. The best practices for securing container environments are not only about hardening containers or the servers they run on after the fact.
They’re focused on securing the entire environment.
Security must be considered from the moment container images are pulled from a registry to when the containers are spun down from a runtime or production environment.
Given that containers are often deployed at devops speed as part of a CI/CD framework, the more you can automate, the better. With that in mind, I present this list of best practices. Many of them are not unique to containers, but if they are “baked” into the devops process now, they will have a much greater impact on the security posture of containerized applications than if they are “bolted” on after the fact. Implement a comprehensive vulnerability management program. Vulnerability management goes way beyond scanning images when they are first downloaded from a registry.
Containers can easily pass through the development cycle with access controls or other policies that are too loose, resulting in flaws that cause the application to break down or lead to compromise at runtime.
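One guard against this is an automated scan gate at every promotion step. The sketch below is hypothetical: the severity thresholds and the scanner output format are illustrative assumptions, not any particular tool's behavior:

```python
# Map scanner severity labels to a comparable scale.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

# Stricter environments tolerate less; these ceilings are illustrative only.
MAX_ALLOWED = {"dev": "high", "test": "medium", "staging": "low", "production": None}

def gate(findings, target_env):
    """Return True if an image may be promoted to target_env.

    `findings` is a list of severity strings reported by an image scanner
    (the scanner and its output format are assumptions here).
    """
    ceiling = MAX_ALLOWED[target_env]
    if ceiling is None:                      # production: no known vulnerabilities
        return not findings
    limit = SEVERITY[ceiling]
    return all(SEVERITY[f] <= limit for f in findings)
```

Run automatically in the CI/CD pipeline, a check like this turns vulnerability management from a one-time download scan into a gate at each stage boundary.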
A rigorous vulnerability management program is a proactive initiative with multiple checks from “cradle to grave,” triggered automatically and used as gates between the dev, test, staging, and production environments. Ensure that only approved images are used in your environment. An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the inflow of container images into your development environment.
This means using only approved private registries and approved images and versions.
For example, you might sanction a single Linux distro as a base image, preferably one that is lean (Alpine or CoreOS rather than Ubuntu) to minimize the surface for potential attacks. Implement proactive integrity checks throughout the lifecycle. Part of managing security throughout the container lifecycle is to ensure the integrity of the container images in the registry and enforce controls as they are altered or deployed into production.
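One common form of such a control, sketched here under the assumption of content-addressed sha256 digests in the style Docker registries use, is to record an image's digest at build time and refuse to deploy an artifact that no longer matches:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Content-addressed digest in the sha256:<hex> style registries use."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def verify_before_deploy(image_bytes: bytes, pinned_digest: str) -> bool:
    """Deploy only if the artifact matches the digest recorded at build time."""
    return fingerprint(image_bytes) == pinned_digest
```

Because the digest changes if even one byte of the image changes, comparing it at each handoff gives a simple chain of custody from registry to production.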
Image signing or fingerprinting can be used to provide a chain of custody that allows you to verify the integrity of the containers. Enforce least privileges in runtime. This is a basic security best practice that applies equally in the world of containers. When a vulnerability is exploited, it generally provides the attacker with access and privileges equal to those of the application or process that has been compromised.
Ensuring that containers operate with the least privileges and access required to get the job done reduces your exposure to risk. Whitelist files and executables that the container is allowed to access or run. It’s a lot easier to manage a whitelist when it is implemented from the get-go.
A whitelist provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly, and it allows you to maintain a more stable and reliable environment. Limiting containers so that they can access or run only pre-approved or whitelisted files and executables is a powerful method to mitigate risk.
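A toy illustration of such a check follows; the whitelist entries are hypothetical, and real enforcement would live in the container runtime or a dedicated security tool rather than in application code:

```python
import os

# Illustrative whitelist: in practice this would be learned during a
# baseline period, then enforced by the runtime or a security tool.
ALLOWED_EXECUTABLES = {"/usr/local/bin/app", "/bin/sh"}
ALLOWED_FILE_PREFIXES = ("/app/config/", "/tmp/app/")

def may_execute(path: str) -> bool:
    return os.path.normpath(path) in ALLOWED_EXECUTABLES

def may_access(path: str) -> bool:
    # normpath collapses "..", so traversal tricks resolve before checking.
    p = os.path.normpath(path)
    return may_execute(p) or any(p.startswith(pre) for pre in ALLOWED_FILE_PREFIXES)
```

Anything outside the approved set is denied by default, which is what makes the whitelist useful both as a control and as a baseline for spotting anomalous behavior.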
Whitelisting not only reduces the attack surface but also can be employed to provide a baseline for anomalies and prevent the “noisy neighbor” and container breakout scenarios described above. Enforce network segmentation on running containers. Maintain network segmentation (or “nano-segmentation”) to segregate clusters or zones of containers by application or workload.
In addition to being a highly effective best practice, network segmentation is a must-have for container-based applications that are subject to PCI DSS.
It also serves as a safeguard against “east-west” attacks. Actively monitor container activity and user access. As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity. Log all administrative user access to containers for auditing. While strong user access controls can restrict privileges for the majority of people who interact with containers, administrators are in a class by themselves. Logging administrative access to your container ecosystem, container registry, and container images is a good security practice and a common-sense control.
It will provide the forensic evidence needed in the case of a breach, as well as a clear audit trail if needed to demonstrate compliance. Much of the notion of “baking security into IT processes” relates to automating preventive processes from the onset.
Getting aggressive about container security now can allow for containerized applications to be inherently more secure than their predecessors. However, given that containers will be deployed ephemerally and in large numbers, active detection and response -- essential to any security program -- will be critical for containerized environments.
Container runtime environments will need to be monitored at all times for anomalies, suspected breaches, and compliance purposes. Although there’s a growing body of knowledge about container security in the public domain, it’s important to note that we’re still in the early stages.
As we discover new container-specific vulnerabilities (or new-old ones such as Dirty COW), and as we make the inevitable mistakes (like the configuration error in Vine’s Docker registry that allowed a security researcher to access Vine's source code), best practices are sure to evolve. The good news, as far as container adoption goes, is it’s still early enough to automate strong security controls into container environments.
The not-so-good news is security teams need to know about container initiatives early enough to make that happen, and more often than not they don’t.
To realize the potential security improvements that can be achieved in the transition to container-based application development, that needs to change ... soon.
Educating yourself about containers and the security implications of using them is a good start. New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth.
The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers.
InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Send all inquiries to email@example.com.
Facebook, however, may have filled that gap today with the release of a previously internal tool called the Certificate Transparency Monitoring Developer Tool. The tool checks major public CT logs at regular intervals for new certificates issued on domains singled out by the user. “We’ve been monitoring Certificate Transparency logs internally since last year, and found it very useful,” Facebook security engineer David Huang said. “It allowed us to discover unexpected certs that were issued for our domain that we previously were unaware of. We realized it might be useful for other developers and made this free for everyone.” The tool allows users to search CT logs for a particular domain and return certs that have been issued for the domain and its subdomains. Users can also subscribe to a domain feed and receive email notifications when new certs are issued. Facebook said the search interface is easy to use, and its infrastructure can process large amounts of data quickly, providing a reliable return for any domain.
Facebook has been promoting the use of CT logs to detect unexpected certificates; not all of these occurrences are malicious. “It’s not always necessarily a vulnerability or attack, but it may be a case where a site as large as Facebook with lots of domains—some run by ourselves or by external hosting vendors—where we may not have a full picture of how our certs are deployed on domains,” Huang said. “This tool provides easy information for us.
This is probably very interesting for individual sites or smaller sites that probably are not actively monitoring certificates for their domains.” The framework is set up to monitor, in a standard way, all publicly trusted TLS certificates issued on the internet.
It consists of logs, or records of TLS certs submitted by CAs or site owners; an auditing service that ensures submitted certs are included in the CT logs; and a monitoring service that queries CT logs for new cert data.
Facebook said since it adopted Certificate Transparency, it has observed more than 50 million certificates.
That data is collected and verified against a ruleset, and any variation triggers a notification. Huang said that Facebook’s tool is among the few free services that include a notification and subscriber option. “There are dozens of CT logs, and we periodically fetch them (hourly, or even every 15 minutes) and keep syncing across CT logs,” Huang said. “Once we fetch those certificates and process them through our pipeline, we generate alerts if we detect anything unexpected.” Google recently said it was making Certificate Transparency mandatory, and set an October 2017 deadline that was announced at the CA/Browser Forum in mid-October.
Sites that are not compliant will not display the green banner signifying a site is secure. “The level of transparency CT logs have provided is moving us in a very good direction,” Huang said. “In the future, all publicly published certificates will be required to be logged to CT Logs.
By that time, our monitoring tool will be able to have full coverage of any type of public certs.”
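The periodic fetch-and-alert pipeline Huang describes can be sketched as follows; `fetch_certs` here is a stand-in for the real CT-log query, which in practice is an HTTP call to each log's API:

```python
def check_domain(fetch_certs, domain, seen):
    """One polling pass: fetch the certificates currently logged for `domain`,
    alert on any fingerprint not seen before, and remember it.

    `fetch_certs` is an injected stand-in for the real CT-log query;
    `seen` is the set of certificate fingerprints already known.
    """
    alerts = []
    for fp in fetch_certs(domain):
        if fp not in seen:
            alerts.append(f"new certificate for {domain}: {fp}")
            seen.add(fp)
    return alerts
```

A scheduler would call this hourly (or every 15 minutes, as Huang describes) for each watched domain and mail out any returned alerts to subscribers.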
The company has contracted cryptography engineering expert Matthew Green, a professor at Johns Hopkins University in Baltimore, to carry out the evaluation with the goal of identifying any vulnerabilities in the code. Green has experience in auditing encryption software, being one of the founders of the Open Crypto Audit Project, which organized a detailed analysis of TrueCrypt, a popular open-source full-disk encryption application.
TrueCrypt was abandoned by its original developers in 2014, but its code has since been forked and improved as part of other projects. Green will evaluate OpenVPN 2.4, which is currently the release candidate for the next major stable version.
For now, he will look for vulnerabilities in the source code that’s available on GitHub, but he will compare his results with the final version when released in order to complete the audit. Any issues that are found will be shared with the OpenVPN developers and the results of the audit will only be made public after they have been patched, PIA’s Caleb Chen said in a blog post. “Instead of going for a crowdfunded approach, Private Internet Access has elected to fund the entirety of the OpenVPN 2.4 audit ourselves because of the integral nature of OpenVPN to both the privacy community as a whole and our own company,” Chen said. The OpenVPN software is cross-platform and can be used both in server or client modes.
It’s therefore used by end-users to connect to VPN servers and by companies to set up such servers.
The software is also integrated in commercial consumer and business products.
Vormetric Data Security Platform expansion includes patented, non-disruptive encryption deployment and advanced Docker encryption
December 8, 2016 – Thales, a leader in critical information systems, cybersecurity and data security, today announced the release of new capabilities for its leading Vormetric Data Security Platform.
These advances extend data-at-rest security capabilities with deeply integrated Docker encryption and access controls, the ability to encrypt and re-key data without having to take applications offline, FIPS certified remote administration and management of data security policies and protections, and the ability to accelerate the deployment of tokenization, static data masking and application encryption.
Announced today by Thales:
- General availability of Vormetric Transparent Encryption Live Data Transformation Extension: A patented solution that enables organisations to deploy and maintain encryption with minimal downtime.
Enables initial encryption and rekeying of previously encrypted data while in use.
Available previously as a pilot – now generally available.
- Vormetric Transparent Encryption Docker Extension: Extends Vormetric Transparent Encryption’s OS-level policy-based encryption, data access controls and data access logging capabilities to internal Docker container users, processes and resource sets.
Deploys and protects without the need to alter containers or applications.
Enables compliance and best practices for encryption, control of data access, and data access auditing for container accessible information.
Find additional information here: https://www.vormetric.com/products/containers.
- FIPS 140-2 level 3 certified remote data security management and policy control for Vormetric Data Security Manager V6100 appliance.
This innovation enables organisations with the most stringent compliance and best practice requirements to easily manage the full Thales line of Vormetric data security platform solutions without physical visits to data centers.
- Batch Data Transformation: Eases initial encryption or tokenization of sensitive database columns in environments that are protected with Vormetric Application Encryption or Vormetric Tokenization.
Also supports Static Data Masking requirements.
"IT system downtime is costly for any business, even when it is planned," said Bob Tarzey of UK-based Quocirca. "The financial consequences of IT disruptions arise from lost sales and productivity; in addition, consequent reputational damage can have a longer term knock-on effect," he added. "Downtime need not be caused by system outage, it can be due to data processing, which includes encryption.
The idea behind Vormetric's Live Data Transformation is to solve this problem, even for large databases with high transaction volumes.
Any organisation which needs to ensure both constant data security and availability should take a look at such technology."
Compliance requirements and best practices increasingly call for organisations to encrypt and control access to sensitive data, while also logging and auditing information about sensitive data access.
The company’s recent 2016 Vormetric Data Threat Report revealed that perceived “complexity” is the number-one reason that enterprises do not adopt data security tools and techniques that support these capabilities more widely.
These advanced data security controls directly address this problem by enabling enterprises to confidently support their digital transformation more easily and simply, and in more environments, than ever before.
“Thales continues to innovate by providing advanced data security solutions and services that deliver trust wherever information is created, shared, or stored,” said Derek Tumulak, vice president of product management for Thales e-Security. “No other organisation offers the depth and breadth of integrated data security solutions, or enables enterprises to confidently accelerate their organisation’s digital transformation, like Thales.”
Availability: All new offerings are planned to be available in Q1 2017.
About Thales e-Security
Thales e-Security + Vormetric have combined to form the leading global data protection and digital trust management company.
Together, we enable companies to compete confidently and quickly by securing data at-rest, in-motion, and in-use to effectively deliver secure and compliant solutions with the highest levels of management, speed and trust across physical, virtual, and cloud environments.
By deploying our leading solutions and services, organisations can thwart targeted attacks and reduce their exposure to sensitive data risk with the least business disruption and at the lowest life cycle cost.
Thales e-Security and Vormetric are part of Thales Group. www.thales-esecurity.com
Thales is a global technology leader for the Aerospace, Transport, Defence and Security markets. With 62,000 employees in 56 countries, Thales reported sales of €14 billion in 2015. With over 22,000 engineers and researchers, Thales has a unique capability to design and deploy equipment, systems and services to meet the most complex security requirements.
Its exceptional international footprint allows it to work closely with its customers all over the world.
Positioned as a value-added systems integrator, equipment supplier and service provider, Thales is one of Europe’s leading players in the security market.
The Group’s security teams work with government agencies, local authorities and enterprise customers to develop and deploy integrated, resilient solutions to protect citizens, sensitive data and critical infrastructure.
Thales offers world-class cryptographic capabilities and is a global leader in cybersecurity solutions for defence, government, critical infrastructure providers, telecom companies, industry and the financial services sector. With a value proposition addressing the entire data security chain, Thales offers a comprehensive range of services and solutions ranging from security consulting, data protection, digital trust management and design, development, integration, certification and security maintenance of cybersecured systems, to cyberthreat management, intrusion detection and security supervision through cybersecurity Operation Centres in France, the United Kingdom, The Netherlands and soon in Hong Kong.
Thales Media Relations – Security
+33 (0)1 57 77 90 89
Thales e-Security Media Relations
+44 (0)1223 723612