Tag: Auditing
Posing as Equifax employees, crooks are calling to verify your account information.
The “internal control processes for this program were really broken,” GAO says.
London, 06 July 2017 – Hampleton Partners, an international mergers and acquisitions and corporate finance advisory firm for technology companies, has advised the UK founders of Bangor, Wales-based NMi Metrology and Gaming Ltd on the sale of the company to New Jersey-based GLI Group (GLI). NMi Metrology & Gaming is a market-leading compliance testing and auditing laboratory in the fields of gaming and IT security, whilst GLI delivers the highest quality land-based, lottery and iGaming testing and... Source: RealWire
Tectonic, CoreOS's Linux platform built to run containers, was revamped this week to version 1.6.2. Underneath that minor point revision label lie some significant changes. According to an official CoreOS blog post, this version of Tectonic rolls in ...
Emphasizing enterprise devops, Atlassian is focusing on automation enhancements this week to its code management and continuous integration platforms. The 5.0 versions of Bitbucket Server and Data Center, Atlassian's Git code management tool, focus on compliance requirements with a committer verification capability. Only the author of a commit can push changes back to the central repository, and a log of code changes is kept for auditing purposes.
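Atlassian has not published the internals of that committer-verification feature here, but a rough sketch of how such a server-side check can work (a git pre-receive-style hook that compares commit authors against the authenticated pusher) looks like the following. The PUSHER_EMAIL variable is a hypothetical stand-in for whatever identity the hosting platform exposes to hooks; this is illustrative only, not Atlassian's implementation.

```python
#!/usr/bin/env python3
"""Conceptual committer-verification check as a git pre-receive-style hook:
reject a push if any pushed commit was authored by someone other than the
authenticated pusher."""
import os
import subprocess
import sys


def commit_authors(old_ref: str, new_ref: str) -> list:
    """Author e-mails of the commits being pushed to one ref."""
    # For a brand-new branch (old ref is all zeros), just walk the new ref.
    rev_range = new_ref if set(old_ref) == {"0"} else f"{old_ref}..{new_ref}"
    out = subprocess.check_output(["git", "log", "--format=%ae", rev_range],
                                  text=True)
    return [line.strip() for line in out.splitlines() if line.strip()]


def main() -> int:
    pusher = os.environ.get("PUSHER_EMAIL", "")   # hypothetical platform variable
    for line in sys.stdin:               # pre-receive input: "<old> <new> <ref>"
        old_ref, new_ref, _refname = line.split()
        if set(new_ref) == {"0"}:        # ref deletion, nothing to verify
            continue
        offenders = {a for a in commit_authors(old_ref, new_ref) if a != pusher}
        if offenders:
            print(f"rejected: commits authored by {offenders} "
                  f"but pushed by {pusher or '<unknown>'}", file=sys.stderr)
            return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```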

Data Center is intended for datacenter deployments, with capabilities like high availability and clustering.

Bitbucket Server is deployed on a single server.
Also, smart mirror authentication caching in Data Center 5.0 lets global teams maintain mirror access by caching authentication credentials locally in the event of short outages, Atlassian said.

The Bitbucket upgrades are currently in a beta stage of release.
There are generally accepted principles that developers of all secure operating systems strive to apply, but there can be completely different approaches to implementing these principles.
A security researcher has unearthed evidence showing that three browser-trusted certificate authorities owned and operated by Symantec improperly issued more than 100 unvalidated transport layer security certificates.
In some cases, those certificates made it possible to spoof HTTPS-protected websites. One of the most fundamental requirements Google and other major browser developers impose on CAs is that they issue certificates only to people who verify the rightful control of an affected domain name or company name. On multiple occasions last year and earlier this month, the Symantec-owned CAs issued 108 credentials that violated these strict industry guidelines, according to research published Thursday by Andrew Ayer, a security researcher and founder of a CA reseller known as SSLMate.

These guidelines were put in place to ensure the integrity of the entire encrypted Web. Nine of the certificates were issued without the permission or knowledge of the affected domain owners.

The remaining 99 certificates were issued without proper validation of the company information in the certificate. Many of the improperly issued certificates—which contained the string "test" in various places in a likely indication they were created for test purposes—were revoked within an hour of being issued.
Still, the mis-issuance represents a major violation by Symantec, which in 2015 fired an undisclosed number of CA employees for doing much the same thing. Even when CA-issued certificates are discovered as fraudulent and revoked, they can still be used to force browsers to verify an impostor site.

The difficulty browsers have in blacklisting revoked certificates in real-time is precisely why industry rules strictly control the issuance of such credentials.

There's no indication that the unauthorized certificates were ever used in the wild, but there's also no way to rule out that possibility, however remote it is. "Chrome doesn't [immediately] check certificate revocation, so a revoked certificate can be used in an attack just as easily as an unrevoked certificate," Ayer told Ars. "By default, other browsers fail open and accept a revoked certificate as legitimate if the attacker can successfully block the browser from contacting the revocation server." ("Fail open" is a term that means the browser automatically accepts the certificate in the event the browser can't access the revocation list.) The nine certificates issued without the domain name owners' permission affected 15 separate domains, with names including wps.itsskin.com, example.com, test.com, test1.com, test2.com, and others.
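To make the fail-open behavior concrete, here is a hedged sketch of a revocation check over OCSP using the Python cryptography and requests libraries. It is illustrative only: browsers implement this natively, the file paths and responder URL below are placeholders, and real clients read the responder URL from the certificate's Authority Information Access extension.

```python
"""Sketch of "fail open" vs. "fail closed" revocation checking via OCSP."""
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
import requests


def is_acceptable(cert, issuer, responder_url: str, fail_open: bool = True) -> bool:
    """Return True if the certificate should be accepted."""
    req = (ocsp.OCSPRequestBuilder()
           .add_certificate(cert, issuer, hashes.SHA1())
           .build())
    try:
        resp = requests.post(
            responder_url,
            data=req.public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"},
            timeout=5,
        )
        resp.raise_for_status()
    except requests.RequestException:
        # Responder unreachable: a fail-open client accepts the certificate
        # anyway, which is the behavior Ayer describes attackers exploiting.
        return fail_open
    status = ocsp.load_der_ocsp_response(resp.content).certificate_status
    return status == ocsp.OCSPCertStatus.GOOD


if __name__ == "__main__":
    with open("site.pem", "rb") as f:        # placeholder paths
        cert = x509.load_pem_x509_certificate(f.read())
    with open("issuer.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())
    print(is_acceptable(cert, issuer, "http://ocsp.example-ca.test/"))
```

A fail-closed client would pass fail_open=False and refuse the certificate whenever the responder cannot be reached, at the cost of breaking sites during responder outages.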

Three Symantec-owned CAs—known as Symantec Trust Network, GeoTrust Inc., and Thawte Inc.—issued the credentials on July 14, October 26, and November 15.

The other 99 certificates were issued on many dates between October 21 and January 18.
In an e-mail, a Symantec spokeswoman wrote: "Symantec has learned of a possible situation regarding certificate mis-issuance involving Symantec and other certificate authorities. We are currently gathering the facts about this situation and will provide an update once we have completed our investigation and verified information." This is the second major violation of the so-called baseline requirements over the past four months.

Those requirements were mandated by the CA/Browser Forum, an industry group made up of CAs and the developers of major browsers that trust them.
In November, Firefox recommended the blocking of China-based WoSign for 12 months after that CA was caught falsifying the issuance date of certificates to get around a prohibition against use of the weak SHA1 cryptographic hashing algorithm. Other browser makers quickly agreed. Ayer discovered the unauthorized certificates by analyzing the publicly available certificate transparency log, a project started by Google for auditing the issuance of Chrome-trusted credentials. Normally, Google requires CAs to report only the issuance of so-called extended validation certificates, which offer a higher level of trust because they verify the identity of the holder, rather than just the control of the domain.

Following Symantec's previously mentioned 2015 mishap, however, Google required Symantec to log all certificates issued by its CAs. Had Symantec not been required to report all certificates, there's a strong likelihood the violation never would have come to light.
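Anyone can do a smaller-scale version of the kind of CT-log analysis Ayer performed. As a hedged sketch, the crt.sh search service (which aggregates the public logs) exposes a JSON endpoint that can be queried for every logged certificate on a domain; filtering for the string "test" below mirrors the tell-tale sign mentioned above and is only an example heuristic, not Ayer's method.

```python
"""Query crt.sh, an aggregator of the public CT logs, for certificates
logged for a domain, and flag suspicious-looking entries."""
import requests


def certs_for_domain(domain: str) -> list:
    resp = requests.get("https://crt.sh/",
                        params={"q": domain, "output": "json"},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    for entry in certs_for_domain("example.com"):
        name = entry.get("common_name") or ""
        if "test" in name.lower():
            # Print the issuer and validity start for suspicious-looking certs.
            print(entry.get("issuer_name"), name, entry.get("not_before"))
```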
Google announced an early prototype of Key Transparency, its latest open source effort to ensure simpler, safer, and secure communications for everyone.

The project’s goal is to make it easier for applications and services to share and discover public keys for users, but it will be a while before it's ready for prime time. Secure communications should be de rigueur, but they remain frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).

Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.

Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys.

An audit mechanism keeps the service accountable.

There is the security protection of knowing that everyone is using the same published key, and any malicious attempts to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new, as PGP previously attempted a similar task with Global Directory.

The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.

Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities were repeatedly mistakenly issuing fraudulent certificates. Google seems to be taking a “build it, and maybe applications will come,” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in early stages of design and development. “With this first open source release, we're continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone," they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.

A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.

Implemented as a user client, CONIKS integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University’s Center for Information Technology Policy and the main author of CONIKS.

For example, Melara and her team are currently integrating CONIKS to work with Tor Messenger.

CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.

For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.
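The core transparency idea behind both projects can be illustrated with a toy Merkle tree. The sketch below is not Key Transparency's or CONIKS's actual protocol; it only shows why a directory operator that publishes a single root hash cannot quietly hand different keys to different people without the substitution becoming evident to anyone who verifies a proof. All names and keys are made up for the example.

```python
"""Toy illustration of a transparency log over an account->key directory."""
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def leaf_hash(account: str, pubkey: bytes) -> bytes:
    return h(b"leaf:" + account.encode() + b":" + pubkey)


def build_tree(leaves):
    """Return list of levels: level[0] = leaves, level[-1] = [root]."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        nxt = [h(cur[i] + cur[i + 1]) if i + 1 < len(cur) else cur[i]
               for i in range(0, len(cur), 2)]
        levels.append(nxt)
    return levels


def inclusion_proof(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1
        if sibling < len(level):
            proof.append((sibling < index, level[sibling]))
        index //= 2
    return proof


def verify(leaf, proof, root):
    cur = leaf
    for is_left, sibling in proof:
        cur = h(sibling + cur) if is_left else h(cur + sibling)
    return cur == root


# A tiny directory of accounts -> keys (made-up data).
directory = [("alice@example.com", b"KEY_A"), ("bob@example.com", b"KEY_B"),
             ("carol@example.com", b"KEY_C")]
leaves = [leaf_hash(a, k) for a, k in directory]
levels = build_tree(leaves)
root = levels[-1][0]            # the value everyone agrees on

# A sender checks Bob's key against the published root.
proof = inclusion_proof(levels, 1)
assert verify(leaf_hash("bob@example.com", b"KEY_B"), proof, root)
# A substituted key fails verification against the same root.
assert not verify(leaf_hash("bob@example.com", b"EVIL_KEY"), proof, root)
```

Real systems layer signatures, auditing, and privacy protections on top of this basic mechanism.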
Odds are, software (or virtual) containers are in use right now somewhere within your organization, probably by isolated developers or development teams to rapidly create new applications.

They might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know if they are running in their companies. In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers.

Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system.

They are flexible, scalable, and easy to use, and they can pack a lot more applications into a given physical infrastructure than is possible with VMs.

And because they share the host operating system, rather than relying on a guest OS, containers can be spun up instantly (in seconds versus the minutes VMs require). A June 2016 report from the Cloud Foundry Foundation surveyed 711 companies about their use of containers. More than half had either deployed or were in the process of evaluating containers. Of those, 16 percent have already mainstreamed the use of containers, with 64 percent expecting to do so within the next year.
If security teams want to seize the opportunity (borrowing a devops term) to “shift security to the left,” they need to identify and involve themselves in container initiatives now. Developers and devops teams have embraced containers because they align with the devops philosophy of agile, continuous application delivery. However, as is the case with any new technology, containers also introduce new and unique security challenges.

These include the following:

Inflow of vulnerable source code: Because containers are open source, images created by an organization’s developers are often updated, then stored and used as necessary. This creates an endless stream of uncontrolled code that may harbor vulnerabilities or unexpected behaviors.

Large attack surface: In a given environment, there would be many more containers than there would be applications, VMs, databases, or any other object that requires protecting. The large numbers of containers running on multiple machines, whether on premises or in the cloud, make it difficult to track what’s going on or to detect anomalies through the noise.

Lack of visibility: Containers are run by a container engine, such as Docker or Rkt, that interfaces with the Linux kernel. This creates another layer of abstraction that can mask the activity of specific containers or what specific users are doing within the containers.

Devops speed: The pace of change is such that containers typically have a lifespan four times shorter than that of VMs, on average. Containers can be executed in an instant, run for a few minutes, then stopped and removed. This ephemerality makes it possible to launch attacks and disappear quickly, with no need to install anything.

“Noisy neighbor” containers: A container might behave in a way that effectively creates a DoS attack on other containers. For example, opening sockets repeatedly will quickly bring the entire host machine to a crawl and eventually cause it to freeze up.

Container breakout to the host: Containers might run as a root user, making it possible to use privilege escalation to break the “containment” and access the host’s operating system.

“East-west” network attacks: A jeopardized container can be leveraged to launch attacks across the network, especially if its outbound network connections and ability to run with raw sockets were not properly restricted.

The best practices for securing container environments are not only about hardening containers or the servers they run on after the fact.

They’re focused on securing the entire environment.
Security must be considered from the moment container images are pulled from a registry to when the containers are spun down from a runtime or production environment.

Given that containers are often deployed at devops speed as part of a CI/CD framework, the more you can automate, the better. With that in mind, I present this list of best practices. Many of them are not unique to containers, but if they are “baked” into the devops process now, they will have a much greater impact on the security posture of containerized applications than if they are “bolted” on after the fact.

Implement a comprehensive vulnerability management program. Vulnerability management goes way beyond scanning images when they are first downloaded from a registry. Containers can easily pass through the development cycle with access controls or other policies that are too loose, resulting in corruption that causes the application to break down or leading to compromise in runtime. A rigorous vulnerability management program is a proactive initiative with multiple checks from “cradle to grave,” triggered automatically and used as gates between the dev, test, staging, and production environments.

Ensure that only approved images are used in your environment. An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the inflow of container images into your development environment. This means using only approved private registries and approved images and versions. For example, you might sanction a single Linux distro as a base image, preferably one that is lean (Alpine or CoreOS rather than Ubuntu) to minimize the surface for potential attacks.

Implement proactive integrity checks throughout the lifecycle. Part of managing security throughout the container lifecycle is to ensure the integrity of the container images in the registry and enforce controls as they are altered or deployed into production. Image signing or fingerprinting can be used to provide a chain of custody that allows you to verify the integrity of the containers (a minimal sketch of such an automated check appears after this list).

Enforce least privileges in runtime. This is a basic security best practice that applies equally in the world of containers. When a vulnerability is exploited, it generally provides the attacker with access and privileges equal to those of the application or process that has been compromised. Ensuring that containers operate with the least privileges and access required to get the job done reduces your exposure to risk.

Whitelist files and executables that the container is allowed to access or run. It’s a lot easier to manage a whitelist when it is implemented from the get-go. A whitelist provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly, and it allows you to maintain a more stable and reliable environment. Limiting containers so that they can access or run only pre-approved or whitelisted files and executables is a powerful method to mitigate risk. It not only reduces the attack surface, but also can be employed to provide a baseline for anomalies and prevent the use cases of the “noisy neighbor” and container breakout scenarios described above.

Enforce network segmentation on running containers. Maintain network segmentation (or “nano-segmentation”) to segregate clusters or zones of containers by application or workload. In addition to being a highly effective best practice, network segmentation is a must-have for container-based applications that are subject to PCI DSS. It also serves as a safeguard against “east-west” attacks.

Actively monitor container activity and user access. As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity.

Log all administrative user access to containers for auditing. While strong user access controls can restrict privileges for the majority of people who interact with containers, administrators are in a class by themselves. Logging administrative access to your container ecosystem, container registry, and container images is a good security practice and a common-sense control. It will provide the forensic evidence needed in the case of a breach, as well as a clear audit trail if needed to demonstrate compliance.

Much of the notion of “baking security into IT processes” relates to automating preventive processes from the onset.
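As one hedged illustration of that kind of automated, preventive control (the sketch promised above), the snippet below gates a build on an allowlist of approved image digests. The allowlist file name, image names, and the use of a locally exported image archive are assumptions for the example; production pipelines would typically verify registry manifest digests or image signatures (for example, Docker Content Trust) instead.

```python
"""Minimal sketch of an "approved images only" gate for a CI/CD pipeline."""
import hashlib
import json
import sys

# Hypothetical allowlist file, e.g. {"myapp/base": "sha256:..."}
APPROVED_DIGESTS_FILE = "approved-images.json"


def digest_of(image_tarball_path: str) -> str:
    """Content digest of a locally exported image archive (e.g. docker save)."""
    sha = hashlib.sha256()
    with open(image_tarball_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return "sha256:" + sha.hexdigest()


def gate(image_name: str, image_tarball_path: str) -> bool:
    """Return True only if the image matches its approved digest."""
    with open(APPROVED_DIGESTS_FILE) as f:
        approved = json.load(f)
    expected = approved.get(image_name)
    actual = digest_of(image_tarball_path)
    if expected != actual:
        print(f"BLOCKED: {image_name} digest {actual} is not on the allowlist",
              file=sys.stderr)
        return False
    return True


if __name__ == "__main__":
    # Usage (hypothetical): python image_gate.py myapp/base base.tar
    sys.exit(0 if gate(sys.argv[1], sys.argv[2]) else 1)
```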

Getting aggressive about container security now can allow for containerized applications to be inherently more secure than their predecessors. However, given that containers will be deployed ephemerally and in large numbers, active detection and response -- essential to any security program -- will be critical for containerized environments.

Container runtime environments will need to be monitored at all times, for anomalies, suspected breaches, and compliance purposes. Although there’s a growing body of knowledge about container security in the public domain, it’s important to note that we’re still in the early stages.

As we discover new container-specific vulnerabilities (or new-old ones such as Dirty COW), and as we make the inevitable mistakes (like the configuration error in Vine’s Docker registry that allowed a security researcher to access Vine's source code), best practices are sure to evolve. The good news, as far as container adoption goes, is it’s still early enough to automate strong security controls into container environments.

The not-so-good news is security teams need to know about container initiatives early enough to make that happen, and more often than not they don’t.

To realize the potential security improvements that can be achieved in the transition to container-based application development, that needs to change ... soon.

Educating yourself about containers and the security implications of using them is a good start.
The amount of insecure software tied to reused third-party libraries and lingering in applications long after patches have been deployed is staggering. It's a habitual problem perpetuated by developers failing to vet third-party code for vulnerabilities, and by some repositories taking a hands-off approach with the code they host. This scenario allows attackers to target one overlooked component flaw used in millions of applications instead of focusing on a single application security vulnerability.

The real-world consequences have been demonstrated in the past few years with the Heartbleed vulnerability in OpenSSL, Shellshock in GNU Bash, and a deserialization vulnerability exploited in a recent high-profile attack against the San Francisco Municipal Transportation Agency. These are three instances where developers reused libraries and frameworks that contain unpatched flaws in production applications.

Security researchers at Veracode estimate that 97 percent of the Java applications the company tested included at least one component with at least one known software vulnerability. "The problem isn't limited to Java and isn't just tied to obscure projects," said Tim Jarrett, senior director of security at Veracode. "Pick your programming language." Gartner, meanwhile, estimates that by 2020, 99 percent of vulnerabilities exploited will be ones known by security and IT professionals for at least one year.

Code Reuse Saves Time, Invites Bugs

According to security experts, the problem is two-fold. On one hand, developers use reliable code that at a later date is found to have a vulnerability. Second, insecure code is used by a developer who doesn't exercise due diligence on the software libraries used in their project. "They've heard the warnings and know the dangers, but for many developers open source and third-party components can be a double-edged sword – saving time but opening the door to bugs," said Derek Weeks, vice president and DevOps advocate at Sonatype. In an analysis of 25,000 applications, Sonatype found that seven percent of components had at least one security defect tied to the use of an insecure software component.

Repositories GitHub, Bitbucket, Python Package Index and NuGet Gallery are essential tools helping developers find pre-existing code that adds functionality for their software projects without having to reinvent the wheel. Java application developers, for example, rely on pre-existing frameworks to handle encryption, visual elements and libraries for handling data. "Software is no longer written from scratch," Weeks said. "No matter how new and unique the application, 80 percent of the code used in a software application relies on third-party libraries or components." He said enterprises are more reliant on the software supply chain than ever before. But he says many of the go-to open-source repositories that make up that supply chain are not vetted libraries of reliable code. Rather, they are warehouses with a varying percentage of outdated projects with security issues. According to an analysis of Sonatype's own Central Repository in 2015, developers had made 31 billion download requests for open source and third-party software components, compared to 17 billion requests the year before.
And when Sonatype analyzed its own code library, it found that 6.1 percent of code downloaded from its Central Repository had a known security defect. Weeks says Sonatype is doing better than other repositories that offer no tools, no guidance and no red flags to prevent developers from using frameworks with faulty code. "There is no Good Housekeeping Seal of Approval for third-party code."

"Faulty code can easily spawn more problems down the road for developers," said Stephen Breen, a principal consultant at NTT Com Security. "Even when development teams have the best intentions, it's easy for developers working under tight deadlines to not properly vet the third-party code used in their software." Breen said when insecure code is unknowingly used to build a component within a software program, problems snowball when that component is used inside other larger components.

One example of vulnerable third-party code reused repeatedly is a deserialization flaw in Apache Commons Collections (commons-collections-3.2.1.jar) – first reported in 2015 and patched in November of the same year. Breen found there are still 1,300 instances of the old vulnerable version of Commons Collections lurking inside Java applications using Spring and Hibernate libraries and hosted across multiple open source code repositories. "The developer knows they are picking Spring or Hibernate for their development project. They don't take it to the next level and realize they are also getting Commons Collections," Breen said. "That Commons Collections library is then used by thousands more projects." According to Veracode, Apache Commons Collections is the sixth-most common component used in Java applications. It found that unpatched versions of the software were in 25 percent of 300,000 Java applications scanned.

Even more challenging for developers is updating applications that are still using vulnerable versions of libraries and frameworks after flaws have been patched. "Think of it like a faulty airbag. Carmakers used those faulty airbags in millions of vehicles. Now it's the carmaker on the hook to fix the problem, not the airbag maker," Weeks said.

Leaky Apps, Bad Crypto, Injection Flaws Galore

Veracode said the Apache Commons Collections example is the tip of the iceberg. When Veracode examined vulnerabilities tied to insecure code, it found that application information leakage, where user or application data can be leveraged by an attacker, is the most prevalent type of vulnerability, accounting for 72 percent of third-party code flaws. Second are cryptographic issues, representing 65 percent of vulnerabilities. That was followed by Carriage Return Line Feed (CRLF) injection flaws and cross-site scripting bugs.

Compounding the problem is an increased dependency on open-source components used in a wide variety of software products. The federal government is typical. It has an open-source-first policy, as do many private companies. Relying on third-party libraries shortens development time and can improve the safety and quality of software projects, Weeks said. "Not only does code reuse save time but it also allows developers to be more innovative as they focus on creating new functionality and not writing encryption libraries from scratch," Weeks said. Done correctly, code reuse is a developer's godsend, he said. For those reasons, security experts say it's time for the industry to stop and consider where code originates.
Sonatype, which markets and sells code verification services, promotes the idea of documenting software's supply chain with what it calls a "software bill of materials." That way developers can better scrutinize open-source frameworks before and after they are used, making it easier to update applications that are using vulnerable old versions of libraries. Sonatype said it found that one in 16 components it analyzed had a vulnerability that was previously documented, verified and with additional information available on the Internet. "I can't imagine any other industry where it's okay that one in 16 parts have known defects."

The problem is that among developers there is a mix of denial and ignorance at play. "Developers choose component parts, not security," Weeks said. It should be the other way around. "If we are aware of malicious or bad libraries or code, of course we want to warn our users," said Logan Abbott, president of SourceForge, a software and code repository. "We scan binaries for vulnerabilities, but we don't police any of the code we host."

Repositories Say: 'We're Just the Host'

Repositories contacted by Threatpost say their platforms are a resource for developers akin to cloud storage services that allow people to store and share content publicly or privately. They don't tell users what they can and cannot host with their service. They say rooting out bugs in software should be on the shoulders of developers – not repositories. Writing good, vulnerability-free code starts with getting good code from healthy repositories with engaged users.

"We think of ourselves as the Home Depot of repositories," said Rahul Chhabria, product manager for Atlassian Bitbucket. "We provide the tools, material and platform to get the job done right." Chhabria said Bitbucket offers a range of tools to help sniff out bad or insecure components, such as the third-party tool SourceClear for scanning dependency chains. It also offers Bitbucket Pipelines, which allows for cloud-based team development of software projects and simplifies peer review.

GitHub is one of the largest repositories; it hosts 49 million public and private projects for its 18 million users. It does not scan or red-flag insecure code hosted on its platform, according to Shawn Davenport, VP of security at GitHub. Instead, developers can use third-party tools such as Gemnasium, Brakeman and Code Climate for static and dependency analysis. "There is a lot of hidden risk out there in projects," Davenport said. "We do our best to make sure our developers know what tools are available to them to vet their own code." He estimates that a minority of GitHub developers take advantage of software scanning and auditing tools. "Unfortunately security isn't a developer's first priority."

Other repositories told Threatpost they intentionally take a hands-off approach and say expecting them to police the software they host isn't feasible, not part of their mission and nothing they plan to do. They point out that, flawed or not, developers want access to all code – even older components. "An implementation of a library in one framework might not be a security risk at all," Breen said. He points out developers often temporarily revert to those old libraries as stopgaps should an updated version break a project.

Automated Scanning to the Rescue?

One attempt at nipping the problem in the bud is the use of automated security vulnerability and configuration scanning for open source components.
By 2019, more than 70 percent of enterprise DevOps initiatives will incorporate automated scanning, according to Gartner. Today only 10 percent of packages are scanned. The Node.js Foundation, an industry consortium designed to promote the Node.js platform, relies on a more community-based approach via the Node.js Security Project. The goal is to provide developers a process for discovering and disclosing security vulnerabilities found in the Node.js module ecosystem. According to Node.js the approach is a hybrid solution that consists of a database of vulnerabilities and a community communication channel for vetting and disclosing vulnerable code. “It’s not a story about security professionals solving the problem, it’s about how we empower development with the right information about the (software) parts they are consuming,” Weeks said. “In this case, the heart of the solution lies with development, and therefore requires a new approach and different thinking.”
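For teams that want to start somewhere before adopting full scanning tools, even a crude, locally maintained check can flag the kind of known-bad versions discussed above. The sketch below parses a Maven pom.xml for dependencies that match a small hand-kept table of vulnerable versions; the table (containing only commons-collections 3.2.1) and the pom.xml path are assumptions for the example, and real tools such as OWASP Dependency-Check or the commercial services mentioned in this article consult full vulnerability databases instead.

```python
"""Illustrative check of declared Maven dependencies against a small,
hand-maintained table of known-vulnerable versions."""
import xml.etree.ElementTree as ET

# (groupId, artifactId) -> set of versions known to be vulnerable (example data)
KNOWN_BAD = {
    ("commons-collections", "commons-collections"): {"3.2.1"},
}

NS = {"m": "http://maven.apache.org/POM/4.0.0"}


def flagged_dependencies(pom_path: str):
    """Yield (group, artifact, version) for declared deps on the bad list."""
    tree = ET.parse(pom_path)
    for dep in tree.getroot().findall(".//m:dependencies/m:dependency", NS):
        group = dep.findtext("m:groupId", default="", namespaces=NS)
        artifact = dep.findtext("m:artifactId", default="", namespaces=NS)
        version = dep.findtext("m:version", default="", namespaces=NS)
        if version in KNOWN_BAD.get((group, artifact), set()):
            yield group, artifact, version


if __name__ == "__main__":
    for group, artifact, version in flagged_dependencies("pom.xml"):
        print(f"known-vulnerable dependency: {group}:{artifact}:{version}")
```

Note that a check like this only sees directly declared dependencies; the transitive case Breen describes, where Spring or Hibernate pulls in Commons Collections, is exactly why dedicated dependency-analysis tools resolve the full dependency graph instead.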
The movement toward Certificate Transparency (CT) has brought about a healthy improvement, not only in the way organizations monitor and audit TLS certs, but also in cutting down the number of malicious or mistakenly issued certificates. CT, a framework developed by Google, works because Certificate Authorities are required to submit certificates to publicly accessible logs; as of next October, non-compliant sites will no longer be trusted by Chrome. For smaller organizations in particular, the cost is high to build out an infrastructure and search tool that interacts with all public CT logs.

Facebook, however, may have filled that gap today with the release of a previously internal tool called the Certificate Transparency Monitoring Developer Tool. The tool checks major public CT logs at regular intervals for new certificates issued on domains singled out by the user. “We’ve been monitoring Certificate Transparency logs internally since last year, and found it very useful,” Facebook security engineer David Huang said. “It allowed us to discover unexpected certs that were issued for our domain that we previously were unaware of. We realized it might be useful for other developers and made this free for everyone.” The tool allows users to search CT logs for a particular domain and return certs that have been issued for the domain and its subdomains. Users can also subscribe to a domain feed and receive email notifications when new certs are issued. Facebook said the search interface is easy to use, and its infrastructure can process large amounts of data quickly, providing a reliable return for any domain.

Facebook has been promoting the use of CT logs to detect unexpected certificates; not all of these occurrences are malicious. “It’s not always necessarily a vulnerability or attack, but it may be a case where a site as large as Facebook with lots of domains—some run by ourselves or by external hosting vendors—where we may not have a full picture of how our certs are deployed on domains,” Huang said. “This tool provides easy information for us.

This is probably very interesting for individual sites or smaller sites that probably are not actively monitoring certificates for their domains.” The framework is set up to monitor, in a standard way, all publicly trusted TLS certificates issued on the internet.
It consists of logs, or records of TLS certs submitted by CAs or site owners; an auditing service that ensures submitted certs are included in the CT logs; and a monitoring service that queries CT logs for new cert data.

Facebook said since it adopted Certificate Transparency, it has observed more than 50 million certificates.

That data is collected and verified against a ruleset, and any variation triggers a notification. Huang said that Facebook’s tool is among the few free services that include a notification and subscriber option. “There are dozens of CT logs, and we periodically fetch them (hourly, or even every 15 minutes) and keep synching across CT logs,” Huang said. “Once we fetch those certificates and process them through our pipeline, we generate alerts if we detect anything unexpected.” Google recently said it was making Certificate Transparency mandatory, and set an October 2017 deadline that was announced at the CA/Browser Forum in mid-October.
Sites that are not compliant will not display the green banner signifying a site is secure. “The level of transparency CT logs have provided is moving us in a very good direction,” Huang said. “In the future, all publicly published certificates will be required to be logged to CT Logs.

By that time, our monitoring tool will be able to have full coverage of any type of public certs.”
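A very reduced, hedged sketch of this kind of periodic monitoring, using the public crt.sh aggregator rather than Facebook's tool or direct CT-log APIs, might look like the following; the domain, polling interval, and alert function are placeholders.

```python
"""Sketch of periodic CT monitoring: poll for newly logged certs and alert."""
import time
import requests

DOMAIN = "example.com"
POLL_SECONDS = 3600          # the article mentions hourly or 15-minute fetches


def fetch_cert_ids(domain: str) -> set:
    """IDs of all certificates crt.sh has logged for the domain."""
    resp = requests.get("https://crt.sh/",
                        params={"q": domain, "output": "json"}, timeout=30)
    resp.raise_for_status()
    return {entry["id"] for entry in resp.json()}


def alert(new_ids: set) -> None:
    # Placeholder: a real deployment would e-mail or page someone.
    print(f"{len(new_ids)} new certificate(s) observed for {DOMAIN}: {sorted(new_ids)}")


if __name__ == "__main__":
    seen = fetch_cert_ids(DOMAIN)
    while True:
        time.sleep(POLL_SECONDS)
        current = fetch_cert_ids(DOMAIN)
        if current - seen:
            alert(current - seen)
        seen |= current
```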
The next major version of OpenVPN, one of the most widely used virtual private networking technologies, will be audited by a well-known cryptography expert. The audit will be fully funded by Private Internet Access (PIA), a popular VPN service provider that uses OpenVPN for its business.

The company has contracted cryptography engineering expert Matthew Green, a professor at Johns Hopkins University in Baltimore, to carry out the evaluation with the goal of identifying any vulnerabilities in the code. Green has experience in auditing encryption software, being one of the founders of the Open Crypto Audit Project, which organized a detailed analysis of TrueCrypt, a popular open-source full-disk encryption application.

TrueCrypt was abandoned by its original developers in 2014, but its code has since been forked and improved as part of other projects. Green will evaluate OpenVPN 2.4, which is currently the release candidate for the next major stable version.

For now, he will look for vulnerabilities in the source code that’s available on GitHub, but he will compare his results with the final version when released in order to complete the audit. Any issues that are found will be shared with the OpenVPN developers, and the results of the audit will only be made public after they have been patched, PIA’s Caleb Chen said in a blog post. “Instead of going for a crowdfunded approach, Private Internet Access has elected to fund the entirety of the OpenVPN 2.4 audit ourselves because of the integral nature of OpenVPN to both the privacy community as a whole and our own company,” Chen said. The OpenVPN software is cross-platform and can be used in both server and client modes.
It’s therefore used by end-users to connect to VPN servers and by companies to set up such servers.

The software is also integrated in commercial consumer and business products.