
Tag: Auditing

Features of secure OS realization

There are generally accepted principles that developers of all secure operating systems strive to apply, but there can be completely different approaches to implementing these principles.

Already on probation, Symantec issues more illegit HTTPS certificates

A security researcher has unearthed evidence showing that three browser-trusted certificate authorities owned and operated by Symantec improperly issued more than 100 unvalidated transport layer security certificates.
In some cases, those certificates made it possible to spoof HTTPS-protected websites. One of the most fundamental requirements Google and other major browser developers impose on CAs is that they issue certificates only to people who verify the rightful control of an affected domain name or company name. On multiple occasions last year and earlier this month, the Symantec-owned CAs issued 108 credentials that violated these strict industry guidelines, according to research published Thursday by Andrew Ayer, a security researcher and founder of a CA reseller known as SSLMate.

These guidelines were put in place to ensure the integrity of the entire encrypted Web. Nine of the certificates were issued without the permission or knowledge of the affected domain owners.

The remaining 99 certificates were issued without proper validation of the company information in the certificate. Many of the improperly issued certificates—which contained the string "test" in various places in a likely indication they were created for test purposes—were revoked within an hour of being issued.
Still, the move represents a major violation by Symantec, which in 2015 fired an undisclosed number of CA employees for doing much the same thing. Even when CA-issued certificates are discovered as fraudulent and revoked, they can still be used to force browsers to verify an impostor site.

The difficulty browsers have in blacklisting revoked certificates in real-time is precisely why industry rules strictly control the issuance of such credentials.

There's no indication that the unauthorized certificates were ever used in the wild, but there's also no way to rule out that possibility, however remote it is. "Chrome doesn't [immediately] check certificate revocation, so a revoked certificate can be used in an attack just as easily as an unrevoked certificate," Ayer told Ars. "By default, other browsers fail open and accept a revoked certificate as legitimate if the attacker can successfully block the browser from contacting the revocation server." ("Fail open" is a term that means the browser automatically accepts the certificate in the event the browser can't access the revocation list.) The nine certificates issued without the domain name owners' permission affected 15 separate domains, with names including wps.itsskin.com, example.com, test.com, test1.com, test2.com, and others.
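
To make the "fail open" point concrete, an application that performs its own revocation check can choose to fail closed instead. The sketch below is a minimal illustration using Python's cryptography and requests libraries; the certificate files and responder URL are placeholders, since a real client would read the OCSP URL from the certificate's Authority Information Access extension.

    # Minimal sketch: query an OCSP responder and fail closed on any error.
    # cert.pem, issuer.pem, and OCSP_URL are placeholders for this example.
    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    OCSP_URL = "http://ocsp.example-ca.test"  # placeholder responder address

    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("issuer.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    request = (
        ocsp.OCSPRequestBuilder()
        .add_certificate(cert, issuer, hashes.SHA1())
        .build()
    )

    try:
        resp = requests.post(
            OCSP_URL,
            data=request.public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"},
            timeout=5,
        )
        resp.raise_for_status()
        status = ocsp.load_der_ocsp_response(resp.content).certificate_status
    except Exception:
        status = None  # responder unreachable or response unparseable

    # Fail closed: anything other than an explicit GOOD answer is treated as untrusted.
    if status != ocsp.OCSPCertStatus.GOOD:
        raise SystemExit("Certificate is revoked or its status could not be verified")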

Three Symantec-owned CAs—known as Symantec Trust Network, GeoTrust Inc., and Thawte Inc.—issued the credentials on July 14, October 26, and November 15.

The other 99 certificates were issued on many dates between October 21 and January 18.
In an e-mail, a Symantec spokeswoman wrote: "Symantec has learned of a possible situation regarding certificate mis-issuance involving Symantec and other certificate authorities. We are currently gathering the facts about this situation and will provide an update once we have completed our investigation and verified information." This is the second major violation of the so-called baseline requirements over the past four months.

Those requirements were mandated by the CA/Browser Forum, an industry group made up of CAs and the developers of major browsers that trust them.
In November, Firefox recommended the blocking of China-based WoSign for 12 months after that CA was caught falsifying the issuance date of certificates to get around a prohibition against use of the weak SHA1 cryptographic hashing algorithm. Other browser makers quickly agreed. Ayer discovered the unauthorized certificates by analyzing the publicly available certificate transparency log, a project started by Google for auditing the issuance of Chrome-trusted credentials. Normally, Google requires CAs to report only the issuance of so-called extended validation certificates, which offer a higher level of trust because they verify the identity of the holder, rather than just the control of the domain.

Following Symantec's previously mentioned 2015 mishap, however, Google required Symantec to log all certificates issued by its CAs. Had Symantec not been required to report all certificates, there's a strong likelihood the violation never would have come to light.
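
The transparency logs Ayer combed through are publicly readable over the simple HTTP API defined in RFC 6962, so anyone can repeat this kind of analysis. The sketch below, which assumes the requests library and uses one long-running Google log purely as an example, fetches the log's signed tree head and a handful of raw entries:

    # Sketch: read a Certificate Transparency log over the RFC 6962 HTTP API.
    # The log URL is only an example; any public log exposes the same endpoints.
    import base64
    import requests

    LOG_URL = "https://ct.googleapis.com/pilot"  # example public log

    # Signed tree head: the log's current size and Merkle root hash.
    sth = requests.get(f"{LOG_URL}/ct/v1/get-sth", timeout=10).json()
    print("tree size:", sth["tree_size"])

    # Fetch the five most recent entries; each leaf encodes an issued certificate.
    end = sth["tree_size"] - 1
    start = max(0, end - 4)
    entries = requests.get(
        f"{LOG_URL}/ct/v1/get-entries",
        params={"start": start, "end": end},
        timeout=10,
    ).json()["entries"]

    for entry in entries:
        leaf = base64.b64decode(entry["leaf_input"])
        print("leaf of", len(leaf), "bytes")  # parsing the MerkleTreeLeaf structure is omitted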

Google ventures into public key encryption

Google announced an early prototype of Key Transparency, its latest open source effort to ensure simpler, safer, and secure communications for everyone.

The project’s goal is to make it easier for applications and services to share and discover public keys for users, but it will be a while before it's ready for prime time. Secure communications should be de rigueur, but it remains frustratingly out of reach for most people, more than 20 years after the creation of Pretty Good Privacy (PGP).

Existing methods where users need to manually find and verify the recipients’ keys are time-consuming and often complicated. Messaging apps and file sharing tools are limited in that users can communicate only within the service because there is no generic, secure method to look up public keys. “Key Transparency is a general-use, transparent directory, which makes it easy for developers to create systems of all kinds with independently auditable account data,” Ryan Hurst and Gary Belvin, members of Google’s security and privacy engineering team, wrote on the Google Security Blog. Key Transparency will maintain a directory of online personae and associated public keys, and it can work as a public key service to authenticate users.

Applications and services can publish their users’ public keys in Key Transparency and look up other users’ keys.

An audit mechanism keeps the service accountable.

There is the security protection of knowing that everyone is using the same published key, and any malicious attempts to modify the record with a different key will be immediately obvious. “It [Key Transparency] can be used by account owners to reliably see what keys have been associated with their account, and it can be used by senders to see how long an account has been active and stable before trusting it,” Hurst and Belvin wrote. The idea of a global key lookup service is not new, as PGP previously attempted a similar task with Global Directory.

The service still exists, but very few people know about it, let alone use it. Kevin Bocek, chief cybersecurity strategist at certificate management vendor Venafi, called Key Transparency an "interesting" project, but expressed some skepticism about how the technology will be perceived and used. Key Transparency is not a response to a serious incident or a specific use case, which means there is no actual driving force to spur adoption.

Compare that to Certificate Transparency, Google’s framework for monitoring and auditing digital certificates, which came about because certificate authorities were repeatedly and mistakenly issuing fraudulent certificates. Google seems to be taking a “build it, and maybe applications will come” approach with Key Transparency, Bocek said. The engineers don’t deny that Key Transparency is in the early stages of design and development. “With this first open source release, we’re continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone,” they wrote. While the directory would be publicly auditable, the lookup service will reveal individual records only in response to queries for specific accounts.
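
The consistency property described above is easy to sketch, even though Key Transparency's own client API was still a prototype at the time of writing. In the illustration below the directory lookup is a stand-in function, the account name is an example, and only the general idea is shown: pin the fingerprint of the key you first trusted and refuse to encrypt if the published record ever changes.

    # Conceptual sketch only: lookup_public_key() stands in for a query against
    # a Key Transparency-style directory; the real client API is not modeled here.
    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        """Short, human-comparable digest of a public key."""
        return hashlib.sha256(public_key_bytes).hexdigest()[:32]

    def lookup_public_key(account: str) -> bytes:
        """Placeholder for a lookup against the public, auditable directory."""
        return b"demo-public-key-bytes"  # hypothetical response

    account = "alice@example.com"
    pinned = fingerprint(lookup_public_key(account))  # recorded when first trusted

    # Later, before sending a message, look the key up again and compare.
    if fingerprint(lookup_public_key(account)) != pinned:
        # Everyone audits the same log, so a silently swapped key is detectable.
        raise SystemExit("Published key for %s changed unexpectedly" % account)
    print("Key for", account, "matches the pinned fingerprint")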

A command-line tool would let users publish their own keys to the directory; even if the actual app or service provider decides not to use Key Transparency, users can make sure their keys are still listed. “Account update keys” associated with each account—not only Google accounts—will be used to authorize changes to the list of public keys associated with that account. Google based the design of Key Transparency on CONIKS, a key verification service developed at Princeton University, and integrated concepts from Certificate Transparency.

As a user client, CONIKS integrates with individual applications and services whose providers publish and manage their own key directories, said Marcela Melara, a second-year doctoral fellow at Princeton University’s Center for Information Technology Policy and the main author of CONIKS.

For example, Melara and her team are currently integrating CONIKS to work with Tor Messenger.

CONIKS relies on individual directories because people can have different usernames across services. More important, the same username can belong to different people on different services. Google changed the design to make Key Transparency a centralized directory. Melara said she and her team have not yet decided if they are going to stop work on CONIKS and start working on Key Transparency. One of the reasons for keeping CONIKS going is that while Key Transparency’s design may be based on CONIKS, there may be differences in how privacy and auditor functions are handled.

For the time being, Melara intends to keep CONIKS an independent project. “The level of privacy protections we want to see may not translate to [Key Transparency’s] internet-scalable design,” Melara said. On the surface, Key Transparency and Certificate Transparency seem like parallel efforts, with one providing an auditable log of public keys and the other a record of digital certificates. While public keys and digital certificates are both used to secure and authenticate information, there is a key difference: Certificates are part of an existing hierarchy of trust with certificate authorities and other entities vouching for the validity of the certificates. No such hierarchy exists for digital keys, so the fact that Key Transparency will be building that web of trust is significant, Venafi’s Bocek said. “It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more,” Hurst and Belvin wrote.

8 Docker security rules to live by

Odds are, software (or virtual) containers are in use right now somewhere within your organization, probably by isolated developers or development teams to rapidly create new applications.

They might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know if they are running in their companies. In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers.

Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system.

They are flexible, scalable, and easy to use, and they can pack a lot more applications into a given physical infrastructure than is possible with VMs.

And because they share the host operating system, rather than relying on a guest OS, containers can be spun up instantly (in seconds versus the minutes VMs require). A June 2016 report from the Cloud Foundry Foundation surveyed 711 companies about their use of containers. More than half had either deployed or were in the process of evaluating containers. Of those, 16 percent have already mainstreamed the use of containers, with 64 percent expecting to do so within the next year.
If security teams want to seize the opportunity (borrowing a devops term) to “shift security to the left,” they need to identify and involve themselves in container initiatives now. Developers and devops teams have embraced containers because they align with the devops philosophy of agile, continuous application delivery. However, as is the case with any new technology, containers also introduce new and unique security challenges.

These include the following:

  • Inflow of vulnerable source code: Because containers are open source, images created by an organization’s developers are often updated, then stored and used as necessary. This creates an endless stream of uncontrolled code that may harbor vulnerabilities or unexpected behaviors.

  • Large attack surface: In a given environment, there would be many more containers than there would be applications, VMs, databases, or any other object that requires protecting. The large numbers of containers running on multiple machines, whether on premises or in the cloud, make it difficult to track what’s going on or to detect anomalies through the noise.

  • Lack of visibility: Containers are run by a container engine, such as Docker or Rkt, that interfaces with the Linux kernel. This creates another layer of abstraction that can mask the activity of specific containers or what specific users are doing within the containers.

  • Devops speed: The pace of change is such that containers typically have a lifespan four times shorter than that of VMs, on average. Containers can be executed in an instant, run for a few minutes, then stopped and removed. This ephemerality makes it possible to launch attacks and disappear quickly, with no need to install anything.

  • “Noisy neighbor” containers: A container might behave in a way that effectively creates a DoS attack on other containers. For example, opening sockets repeatedly will quickly bring the entire host machine to a crawl and eventually cause it to freeze up.

  • Container breakout to the host: Containers might run as a root user, making it possible to use privilege escalation to break the “containment” and access the host’s operating system.

  • “East-west” network attacks: A jeopardized container can be leveraged to launch attacks across the network, especially if its outbound network connections and ability to run with raw sockets were not properly restricted.

The best practices for securing container environments are not only about hardening containers or the servers they run on after the fact.

They’re focused on securing the entire environment.
Security must be considered from the moment container images are pulled from a registry to when the containers are spun down from a runtime or production environment.

Given that containers are often deployed at devops speed as part of a CI/CD framework, the more you can automate, the better. With that in mind, I present this list of best practices. Many of them are not unique to containers, but if they are “baked” into the devops process now, they will have a much greater impact on the security posture of containerized applications than if they are “bolted” on after the fact.

Implement a comprehensive vulnerability management program

Vulnerability management goes way beyond scanning images when they are first downloaded from a registry.

Containers can easily pass through the development cycle with access controls or other policies that are too loose, resulting in corruption that causes the application to break down or leading to compromise in runtime.

A rigorous vulnerability management program is a proactive initiative with multiple checks from “cradle to grave,” triggered automatically and used as gates between the dev, test, staging, and production environments.

Ensure that only approved images are used in your environment

An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the inflow of container images into your development environment.

This means using only approved private registries and approved images and versions.

For example, you might sanction a single Linux distro as a base image, preferably one that is lean (Alpine or CoreOS rather than Ubuntu) to minimize the surface for potential attacks.

Implement proactive integrity checks throughout the lifecycle

Part of managing security throughout the container lifecycle is to ensure the integrity of the container images in the registry and enforce controls as they are altered or deployed into production. Image signing or fingerprinting can be used to provide a chain of custody that allows you to verify the integrity of the containers.
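
As a small illustration of fingerprinting, the sketch below uses the Docker SDK for Python to pull an image and compare its registry digest against a pinned value before it is allowed any further into the pipeline. The digest is a placeholder, and in practice it would come from your signing or fingerprinting process.

    # Sketch: verify a pulled image against a pinned digest before deploying it.
    # EXPECTED_DIGEST is a placeholder; a real pipeline would read it from a
    # trusted manifest produced by the signing step.
    import docker

    EXPECTED_DIGEST = "alpine@sha256:<pinned-digest-goes-here>"  # placeholder

    client = docker.from_env()
    image = client.images.pull("alpine", tag="3.4")

    # RepoDigests lists the content-addressable digests reported by the registry.
    repo_digests = image.attrs.get("RepoDigests", [])
    if EXPECTED_DIGEST not in repo_digests:
        raise SystemExit("Image digest mismatch (%s); refusing to deploy" % repo_digests)
    print("Image digest verified:", repo_digests[0])
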
Enforce least privileges in runtime

This is a basic security best practice that applies equally in the world of containers. When a vulnerability is exploited, it generally provides the attacker with access and privileges equal to those of the application or process that has been compromised. Ensuring that containers operate with the least privileges and access required to get the job done reduces your exposure to risk.
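
What least privilege looks like at the engine level can be sketched with the Docker SDK for Python; the image and command below are arbitrary examples, and the exact capabilities to keep depend on the workload.

    # Sketch: start a container with minimal privileges: non-root user, all Linux
    # capabilities dropped, read-only root filesystem, and no ability to gain new
    # privileges. Image and command are examples only.
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "alpine:3.4",
        command=["sh", "-c", "id && echo hello from a locked-down container"],
        user="1000:1000",                        # do not run as root inside the container
        cap_drop=["ALL"],                        # drop every Linux capability
        read_only=True,                          # read-only root filesystem
        security_opt=["no-new-privileges"],      # block setuid-style escalation
        remove=True,
    )
    print(output.decode())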

Whitelist files and executables that the container is allowed to access or run

It’s a lot easier to manage a whitelist when it is implemented from the get-go.

A whitelist provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly, and it allows you to maintain a more stable and reliable environment. Limiting containers so that they can access or run only pre-approved or whitelisted files and executables is a powerful method to mitigate risk.
It not only reduces the attack surface, but also can be employed to provide a baseline for anomalies and prevent the use cases of the “noisy neighbor” and container breakout scenarios described above.

Enforce network segmentation on running containers

Maintain network segmentation (or “nano-segmentation”) to segregate clusters or zones of containers by application or workload.
In addition to being a highly effective best practice, network segmentation is a must-have for container-based applications that are subject to PCI DSS. It also serves as a safeguard against “east-west” attacks.
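
A minimal sketch of that kind of segmentation, again using the Docker SDK for Python with illustrative names and images, attaches each workload only to the network its tier actually needs, so a compromised front end has no direct path to the database network:

    # Sketch: place each tier on its own user-defined bridge network so that
    # containers can only reach the tiers they are explicitly attached to.
    # Names and images are illustrative.
    import docker

    client = docker.from_env()

    web_net = client.networks.create("web-tier", driver="bridge")
    db_net = client.networks.create("db-tier", driver="bridge", internal=True)  # no outbound access

    db = client.containers.run("redis:3.2", detach=True, name="billing-db", network="db-tier")
    app = client.containers.run("nginx:1.11", detach=True, name="billing-web", network="web-tier")

    # Only the app container is also joined to the database network; everything
    # else on "web-tier" never sees the database at all.
    db_net.connect(app)
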
Actively monitor container activity and user access

As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity.

Log all administrative user access to containers for auditing

While strong user access controls can restrict privileges for the majority of people who interact with containers, administrators are in a class by themselves. Logging administrative access to your container ecosystem, container registry, and container images is a good security practice and a common-sense control.
It will provide the forensic evidence needed in the case of a breach, as well as a clear audit trail if needed to demonstrate compliance.
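
At the engine level, one way to capture such an audit trail is to watch the Docker events stream, which records, among other things, every docker exec run against a container. The sketch below uses the Docker SDK for Python; in production the records would be forwarded to a central log store rather than printed.

    # Sketch: tail the Docker events stream and record administrative access,
    # for example anyone opening a shell in a running container with `docker exec`.
    import datetime
    import docker

    client = docker.from_env()

    for event in client.events(decode=True, filters={"type": "container"}):
        if event.get("Action", "").startswith("exec_create"):
            ts = datetime.datetime.fromtimestamp(event["time"]).isoformat()
            name = event["Actor"]["Attributes"].get("name", "<unknown>")
            # Forward this to a SIEM or central audit log in a real deployment.
            print(ts, "exec on container", name, ":", event["Action"])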

Much of the notion of “baking security into IT processes” relates to automating preventive processes from the onset. Getting aggressive about container security now can allow for containerized applications to be inherently more secure than their predecessors. However, given that containers will be deployed ephemerally and in large numbers, active detection and response, essential to any security program, will be critical for containerized environments.

Container runtime environments will need to be monitored at all times, for anomalies, suspected breaches, and compliance purposes. Although there’s a growing body of knowledge about container security in the public domain, it’s important to note that we’re still in the early stages.

As we discover new container-specific vulnerabilities (or new-old ones such as Dirty COW), and as we make the inevitable mistakes (like the configuration error in Vine’s Docker registry that allowed a security researcher to access Vine's source code), best practices are sure to evolve. The good news, as far as container adoption goes, is it’s still early enough to automate strong security controls into container environments.

The not-so-good news is security teams need to know about container initiatives early enough to make that happen, and more often than not they don’t.

To realize the potential security improvements that can be achieved in the transition to container-based application development, that needs to change ... soon.

Educating yourself about containers and the security implications of using them is a good start.


Code Reuse a Peril for Secure Software Development

The amount of insecure software tied to reused third-party libraries and lingering in applications long after patches have been deployed is staggering. It’s a habitual problem perpetuated by developers failing to vet third-party code for vulnerabilities, and by some repositories taking a hands-off approach with the code they host. This scenario allows attackers to target one overlooked component flaw used in millions of applications instead of focusing on a single application security vulnerability. The real-world consequences have been demonstrated in the past few years with the Heartbleed vulnerability in OpenSSL, Shellshock in GNU Bash, and a deserialization vulnerability exploited in a recent high-profile attack against the San Francisco Municipal Transportation Agency. These are three instances where developers reused libraries and frameworks that contained unpatched flaws in production applications.

Security researchers at Veracode estimate that 97 percent of the Java applications it tested included at least one component with at least one known software vulnerability. “The problem isn’t limited to Java and isn’t just tied to obscure projects,” said Tim Jarrett, senior director of security at Veracode. “Pick your programming language.” Gartner, meanwhile, estimates that by 2020, 99 percent of vulnerabilities exploited will be ones known by security and IT professionals for at least one year.

Code Reuse Saves Time, Invites Bugs

According to security experts, the problem is two-fold. On one hand, developers use reliable code that at a later date is found to have a vulnerability. On the other, insecure code is used by a developer who doesn’t exercise due diligence on the software libraries used in their project. “They’ve heard the warnings and know the dangers, but for many developers open source and third-party components can be a double-edged sword – saving time but opening the door to bugs,” said Derek Weeks, vice president and DevOps advocate at Sonatype. In an analysis of 25,000 applications, Sonatype found that seven percent of components had at least one security defect tied to the use of an insecure software component.

Repositories GitHub, Bitbucket, Python Package Index, and NuGet Gallery are essential tools that help developers find pre-existing code that adds functionality for their software projects without having to reinvent the wheel. Java application developers, for example, rely on pre-existing frameworks to handle encryption, visual elements, and libraries for handling data. “Software is no longer written from scratch,” Weeks said. “No matter how new and unique the application, 80 percent of the code used in a software application relies on third-party libraries or components.” He said enterprises are more reliant on the software supply chain than ever before. But many of the go-to open-source repositories that make up that supply chain are not vetted libraries of reliable code, he said. Rather, they are warehouses with a varying percentage of outdated projects with security issues. According to an analysis of Sonatype’s own Central Repository, developers made 31 billion download requests for open source and third-party software components in 2015, compared to 17 billion requests the year before.
And when Sonatype analyzed its own code library, it found that 6.1 percent of the code downloaded from its Central Repository had a known security defect. Weeks said Sonatype is doing better than other repositories that offer no tools, no guidance, and no red flags to prevent developers from using frameworks with faulty code. “There is no Good Housekeeping Seal of Approval for third-party code.”

“Faulty code can easily spawn more problems down the road for developers,” said Stephen Breen, a principal consultant at NTT Com Security. “Even when development teams have the best intentions, it’s easy for developers working under tight deadlines to not properly vet the third-party code used in their software.” Breen said when insecure code is unknowingly used to build a component within a software program, problems snowball when that component is used inside other larger components.

One example of vulnerable third-party code reused repeatedly is a deserialization flaw in Apache Commons Collections (commons-collections-3.2.1.jar) – first reported in 2015 and patched in November of the same year. Breen found there are still 1,300 instances of the old vulnerable version of Commons Collections lurking inside Java applications that use the Spring and Hibernate libraries and are hosted across multiple open source code repositories. “The developer knows they are picking Spring or Hibernate for their development project. They don’t take it to the next level and realize they are also getting Commons Collections,” Breen said. “That Commons Collections library is then used by thousands more projects.” According to Veracode, Apache Commons Collections is the sixth-most common component used in Java applications, and unpatched versions of the software were found in 25 percent of 300,000 Java applications scanned. Even more challenging for developers is updating the applications that are still using vulnerable versions of libraries and frameworks after the flaws have been patched. “Think of it like a faulty airbag. Carmakers used those faulty airbags in millions of vehicles. Now it’s the carmaker on the hook to fix the problem, not the airbag maker,” Weeks said.

Leaky Apps, Bad Crypto, Injection Flaws Galore

Veracode said the Apache Commons Collections example is the tip of the iceberg. When Veracode examined vulnerabilities tied to insecure code, it found that application information leakage, where user or application data can be leveraged by an attacker, is the most prevalent type of vulnerability, accounting for 72 percent of third-party code flaws. Cryptographic issues come second, representing 65 percent of vulnerabilities, followed by Carriage Return Line Feed (CRLF) injection flaws and cross-site scripting bugs.

Compounding the problem is an increased dependency on open-source components used in a wide variety of software products. The federal government is typical: it has an open-source-first policy, as do many private companies. Relying on third-party libraries shortens development time and can improve the safety and quality of software projects, Weeks said. “Not only does code reuse save time but it also allows developers to be more innovative as they focus on creating new functionality and not writing encryption libraries from scratch,” Weeks said. Done correctly, code reuse is a developer’s godsend, he said. For those reasons, security experts say it’s time for the industry to stop and consider where code originates.
Sonatype, which markets and sells code verification services, promotes the idea of documenting software’s supply chain with what it calls a “software bill of materials.” That way developers can better scrutinize open-source frameworks before and after they are used, making it easier to update applications that are running vulnerable old versions of libraries. Sonatype said it found that one in 16 components it analyzed had a vulnerability that was previously documented and verified, with additional information available on the Internet. “I can’t imagine any other industry where it’s okay that one in 16 parts have known defects.”

The problem is that among developers there is a mix of denial and ignorance at play. “Developers choose component parts, not security,” Weeks said. It should be the other way around. “If we are aware of malicious or bad libraries or code, of course we want to warn our users,” said Logan Abbott, president of SourceForge, a software and code repository. “We scan binaries for vulnerabilities, but we don’t police any of the code we host.”

Repositories Say: ‘We’re Just the Host’

Repositories contacted by Threatpost say their platforms are a resource for developers akin to cloud storage services that allow people to store and share content publicly or privately. They don’t tell users what they can and cannot host with their service. They say rooting out bugs in software should be on the shoulders of developers – not repositories. Writing good, vulnerability-free code starts with getting good code from healthy repositories with engaged users. “We think of ourselves as the Home Depot of repositories,” said Rahul Chhabria, product manager for Atlassian Bitbucket. “We provide the tools, material and platform to get the job done right.” Chhabria said Bitbucket offers a range of tools to help sniff out bad or insecure components, such as the third-party tool SourceClear for scanning dependency chains. It also offers Bitbucket Pipelines, which allows for cloud-based team development of software projects and simplifies peer review.

GitHub is one of the largest repositories; it hosts 49 million public and private projects for its 18 million users. It does not scan or red-flag insecure code hosted on its platform, according to Shawn Davenport, VP of security at GitHub. Instead, developers can use third-party tools such as Gemnasium, Brakeman, and Code Climate for static and dependency analysis. “There is a lot of hidden risk out there in projects,” Davenport said. “We do our best to make sure our developers know what tools are available to them to vet their own code.” He estimates that only a minority of GitHub developers take advantage of software scanning and auditing tools. “Unfortunately security isn’t a developer’s first priority.”

Other repositories told Threatpost they intentionally take a hands-off approach and say expecting them to police their own software isn’t feasible, not part of their mission, and nothing they plan to do. They point out that, flawed or not, developers want access to all code – even older components. “An implementation of a library in one framework might not be a security risk at all,” Breen said. He points out that developers often temporarily revert to those old libraries as stopgaps should an updated version break a project.

Automated Scanning to the Rescue?

One attempt at nipping the problem in the bud is the use of automated security vulnerability and configuration scanning for open source components.
By 2019, more than 70 percent of enterprise DevOps initiatives will incorporate automated scanning, according to Gartner. Today only 10 percent of packages are scanned. The Node.js Foundation, an industry consortium designed to promote the Node.js platform, relies on a more community-based approach via the Node.js Security Project. The goal is to provide developers a process for discovering and disclosing security vulnerabilities found in the Node.js module ecosystem. According to Node.js the approach is a hybrid solution that consists of a database of vulnerabilities and a community communication channel for vetting and disclosing vulnerable code. “It’s not a story about security professionals solving the problem, it’s about how we empower development with the right information about the (software) parts they are consuming,” Weeks said. “In this case, the heart of the solution lies with development, and therefore requires a new approach and different thinking.”
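
To make the scanning idea concrete, the sketch below checks a single dependency against a public vulnerability database. The OSV.dev API is used purely as an illustration (the article does not mention it), and the Maven coordinates correspond to the vulnerable Commons Collections release discussed above.

    # Sketch: ask a public vulnerability database whether a specific library
    # version has known issues. OSV.dev is used here only as an illustration.
    import requests

    query = {
        "version": "3.2.1",
        "package": {
            "name": "commons-collections:commons-collections",  # Maven group:artifact
            "ecosystem": "Maven",
        },
    }

    resp = requests.post("https://api.osv.dev/v1/query", json=query, timeout=10)
    resp.raise_for_status()
    vulns = resp.json().get("vulns", [])

    if vulns:
        print(len(vulns), "known vulnerabilities for commons-collections 3.2.1:")
        for v in vulns:
            print(" -", v["id"], v.get("summary", ""))
    else:
        print("No known vulnerabilities recorded for this version")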

Facebook Releases Free Certificate Transparency Monitoring Tool

The movement toward Certificate Transparency (CT) has brought about a healthy improvement, not only in the way organizations monitor and audit TLS certs, but also in cutting down the number of malicious or mistakenly issued certificates. CT, a framework developed by Google, works because Certificate Authorities are required to submit certificates to publicly accessible logs; as of next October, non-compliant sites will no longer be trusted by Chrome. For smaller organizations in particular, the cost is high to build out an infrastructure and search tool that interacts with all public CT logs.

Facebook, however, may have filled that gap today with the release of a previously internal tool called the Certificate Transparency Monitoring Developer Tool. The tool checks major public CT logs at regular intervals for new certificates issued on domains singled out by the user. “We’ve been monitoring Certificate Transparency logs internally since last year, and found it very useful,” Facebook security engineer David Huang said. “It allowed us to discover unexpected certs that were issued for our domain that we previously were unaware of. We realized it might be useful for other developers and made this free for everyone.” The tool allows users to search CT logs for a particular domain and return certs that have been issued for the domain and its subdomains. Users can also subscribe to a domain feed and receive email notifications when new certs are issued. Facebook said the search interface is easy to use, and its infrastructure can process large amounts of data quickly, providing a reliable return for any domain.

Facebook has been promoting the use of CT logs to detect unexpected certificates; not all of these occurrences are malicious. “It’s not always necessarily a vulnerability or attack, but it may be a case where a site as large as Facebook with lots of domains—some run by ourselves or by external hosting vendors—where we may not have a full picture of how our certs are deployed on domains,” Huang said. “This tool provides easy information for us.

This is probably very interesting for individual sites or smaller sites that probably are not actively monitoring certificates for their domains.” The framework is set up to monitor, in a standard way, all publicly trusted TLS certificates issued on the internet.
It consists of logs, or records of TLS certs submitted by CAs or site owners; an auditing service that ensures submitted certs are included in the CT logs; and a monitoring service that queries CT logs for new cert data.

Facebook said since it adopted Certificate Transparency, it has observed more than 50 million certificates.

That data is collected and verified against a ruleset, and any variation triggers a notification. Huang said that Facebook’s tool is among the few free services that include a notification and subscriber option. “There are dozens of CT logs, and we periodically fetch them (hourly, or even every 15 minutes) and keep synching across CT logs,” Huang said. “Once we fetch those certificates and process them through our pipeline, we generate alerts if we detect anything unexpected.” Google recently said it was making Certificate Transparency mandatory, and set an October 2017 deadline that was announced at the CA/Browser Forum in mid-October.
Sites that are not compliant will not display the green banner signifying a site is secure. “The level of transparency CT logs have provided is moving us in a very good direction,” Huang said. “In the future, all publicly published certificates will be required to be logged to CT Logs.

By that time, our monitoring tool will be able to have full coverage of any type of public certs.”
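
For site owners who want to approximate this kind of monitoring on their own, the loop below is a minimal sketch. It polls the public crt.sh search interface as a convenient stand-in for querying the CT logs directly (an assumption on my part; Facebook's tool uses its own pipeline), and the domain is an example.

    # Sketch: periodically poll CT data for a domain and flag certificates that
    # have not been seen before. crt.sh is used as a public front end to the logs.
    import time
    import requests

    DOMAIN = "example.com"   # the domain to watch
    seen = set()             # crt.sh row IDs already reported

    def fetch_certs(domain):
        resp = requests.get(
            "https://crt.sh/",
            params={"q": "%." + domain, "output": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    while True:
        for row in fetch_certs(DOMAIN):
            if row["id"] not in seen:
                seen.add(row["id"])
                # A real deployment would send an email alert here instead.
                print("New certificate:", row["name_value"], "issued by", row["issuer_name"])
        time.sleep(3600)  # check hourly, in line with the intervals described above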

OpenVPN will be audited for security flaws

The next major version of OpenVPN, one of the most widely used virtual private networking technologies, will be audited by a well-known cryptography expert. The audit will be fully funded by Private Internet Access (PIA), a popular VPN service provider that uses OpenVPN for its business.

The company has contracted cryptography engineering expert Matthew Green, a professor at Johns Hopkins University in Baltimore, to carry out the evaluation with the goal of identifying any vulnerabilities in the code. Green has experience in auditing encryption software, being one of the founders of the Open Crypto Audit Project, which organized a detailed analysis of TrueCrypt, a popular open-source full-disk encryption application.

TrueCrypt was abandoned by its original developers in 2014, but its code has since been forked and improved as part of other projects. Green will evaluate OpenVPN 2.4, which is currently the release candidate for the next major stable version.

For now, he will look for vulnerabilities in the source code that’s available on GitHub, but he will compare his results with the final version when released in order to complete the audit. Any issues that are found will be shared with the OpenVPN developers, and the results of the audit will only be made public after they have been patched, PIA’s Caleb Chen said in a blog post. “Instead of going for a crowdfunded approach, Private Internet Access has elected to fund the entirety of the OpenVPN 2.4 audit ourselves because of the integral nature of OpenVPN to both the privacy community as a whole and our own company,” Chen said. The OpenVPN software is cross-platform and can be used in both server and client modes.
It’s therefore used by end-users to connect to VPN servers and by companies to set up such servers.

The software is also integrated in commercial consumer and business products.

Thales Releases Advanced Encryption Solutions for Secure Docker Containers, Simplified Deployment...

Vormetric Data Security Platform expansion includes patented, non-disruptive encryption deployment and advanced Docker encryption

December 8, 2016 – Thales, a leader in critical information systems, cybersecurity and data security, today announced the release of new capabilities for its leading Vormetric Data Security Platform.

These advances extend data-at-rest security capabilities with deeply integrated Docker encryption and access controls, the ability to encrypt and re-key data without having to take applications offline, FIPS certified remote administration and management of data security policies and protections, and the ability to accelerate the deployment of tokenization, static data masking and application encryption.

Announced today by Thales:

  • General availability of Vormetric Transparent Encryption Live Data Transformation Extension: A patented solution that enables organisations to deploy and maintain encryption with minimal downtime.

    Enables initial encryption and rekeying of previously encrypted data while in use.

    Available previously as a pilot – now generally available.
  • Vormetric Transparent Encryption Docker Extension: Extends Vormetric Transparent Encryption’s OS-level policy-based encryption, data access controls and data access logging capabilities to internal Docker container users, processes and resource sets.

    Deploys and protects without the need to alter containers or applications.

    Enables compliance and best practices for encryption, control of data access, and data access auditing for container accessible information.

    Find additional information here: https://www.vormetric.com/products/containers.
  • FIPS 140-2 level 3 certified remote data security management and policy control for Vormetric Data Security Manager V6100 appliance.

    This innovation enables organisations with the most stringent compliance and best practice requirements to easily manage the full Thales line of Vormetric data security platform solutions without physical visits to data centers.
  • Batch Data Transformation: Eases initial encryption or tokenization of sensitive database columns in environments that are protected with Vormetric Application Encryption or Vormetric Tokenization.

    Also supports Static Data Masking requirements.

"IT system downtime is costly for any business, even when it is planned," said Bob Tarzey of UK-based Quocirca. "The financial consequences of IT disruptions arise from lost sales and productivity; in addition, consequent reputational damage can have a longer term knock-on effect," he added. "Downtime need not be caused by system outage, it can be due to data processing, which includes encryption.

The idea behind Vormetric's Live Data Transformation is to solve this problem, even for large databases with high transaction volumes.

Any organisation which needs to ensure both constant data security and availability should take a look at such technology."

Compliance requirements and best practices increasingly call for organisations to encrypt and control access to sensitive data, while also logging and auditing information about sensitive data access.

The company’s recent 2016 Vormetric Data Threat Report revealed that perceived “complexity” is the number-one reason that enterprises do not adopt data security tools and techniques that support these capabilities more widely.

These advanced data security controls directly address this problem by enabling enterprises to confidently support their digital transformation more easily and simply, and in more environments, than ever before.

“Thales continues to innovate by providing advanced data security solutions and services that deliver trust wherever information is created, shared, or stored,” said Derek Tumulak, vice president of product management for Thales e-Security. “No other organisation offers the depth and breadth of integrated data security solutions, or enables enterprises to confidently accelerate their organisation’s digital transformation, like Thales.”

Availability: All new offerings are planned to be available in Q1 2017

About Thales e-Security
Thales e-Security + Vormetric have combined to form the leading global data protection and digital trust management company.

Together, we enable companies to compete confidently and quickly by securing data at-rest, in-motion, and in-use to effectively deliver secure and compliant solutions with the highest levels of management, speed and trust across physical, virtual, and cloud environments.

By deploying our leading solutions and services, targeted attacks are thwarted and sensitive data risk exposure is reduced with the least business disruption and at the lowest life cycle cost.

Thales e-Security and Vormetric are part of Thales Group. www.thales-esecurity.com

About Thales
Thales is a global technology leader for the Aerospace, Transport, Defence and Security markets. With 62,000 employees in 56 countries, Thales reported sales of €14 billion in 2015. With over 22,000 engineers and researchers, Thales has a unique capability to design and deploy equipment, systems and services to meet the most complex security requirements.
Its exceptional international footprint allows it to work closely with its customers all over the world.

Positioned as a value-added systems integrator, equipment supplier and service provider, Thales is one of Europe’s leading players in the security market.

The Group’s security teams work with government agencies, local authorities and enterprise customers to develop and deploy integrated, resilient solutions to protect citizens, sensitive data and critical infrastructure.

Thales offers world-class cryptographic capabilities and is a global leader in cybersecurity solutions for defence, government, critical infrastructure providers, telecom companies, industry and the financial services sector. With a value proposition addressing the entire data security chain, Thales offers a comprehensive range of services and solutions ranging from security consulting, data protection, digital trust management and design, development, integration, certification and security maintenance of cybersecured systems, to cyberthreat management, intrusion detection and security supervision through cybersecurity Operation Centres in France, the United Kingdom, The Netherlands and soon in Hong Kong.

Contact:
Dorothée Bonneil
Thales Media Relations – Security
+33 (0)1 57 77 90 89
dorothee.bonneil@thalesgroup.com

Liz Harris
Thales e-Security Media Relations
+44 (0)1223 723612
liz.harris@thales-esecurity.com

ESET Multi-Device Security 10

Many security suite product lines form a simple progression, at least on the Windows platform. It goes like this: basic antivirus, entry-level suite, feature-rich mega-suite, and cross-platform multi-device suite. With ESET Multi-Device Security 10, you can install the antivirus or entry-level suite on Windows, but not the mega-suite. It also offers a choice of antivirus or suite on macOS devices. As for Android, you can install mobile security, parental control, or both. In fact, this suite shines under Android more than it does under Windows or macOS.

For $84.99 per year, you get six licenses to install ESET protection on your Windows, macOS, and Android devices. At the $99.99 per year level, you get 10 licenses. Kaspersky offers a bit less for $99.99, just five licenses. For $89.99 per year, Norton gives you 10 licenses plus 25GB of hosted online storage for your backups, and McAfee LiveSafe lets you protect all your devices, without limit. ESET's pricing fits right in with these products, and the fact that you get six licenses at the base subscription rate makes it a better deal than many. Also, the previous edition's requirement that one-half of your licenses go to Android devices has been lifted.

To start, you click a link in the activation email, which also contains your license key. In most cases, you'll start by installing ESET on a Windows device, but the download page offers you the choice of Windows, macOS, or Android. Additional installations require either the activation code or the username and password supplied along with the activation code. Unlike F-Secure, Symantec Norton Security Premium, Bitdefender, and others, ESET does not let you manage licenses using an online account. Rather, My ESET is the place to go for antitheft, Android parental control, and social media scanning.

Windows Protection

If you choose to download protection for Windows, ESET Multi-Device installs ESET Internet Security 10. This suite's antivirus gets good scores in our tests and in independent lab tests. It includes a Host Intrusion Prevention System, a secure browser, and a simple spam filter. The firewall's program control is old school, however, either doing very little or spewing popups. Furthermore, the parental control is limited, and it fared poorly in our antiphishing test. For full details, read my review of this suite.

ESET's mega-suite, ESET Smart Security Premium 10, adds a number of advanced features not found in the entry-level suite. These include a password manager based on Editors' Choice Sticky Password Premium, an encryption system that creates secure virtual drives or secure mobile storage, and an anti-theft system for Windows devices. Smart Security Premium also uses an unusual pricing model, with no multi-license bundles. But, once again, ESET Security Multi-Device does not let you access these premium features.

F-Secure, Bitdefender, Kaspersky, and most other cross-platform suites assume that you'll want a full security suite on Windows. ESET gives you the option to install ESET NOD32 Antivirus 10 rather than the full suite, if that's what you prefer. To do so, you download and install the product as usual, then enter the license key you received with the activation email.

ESET on Mac

On a Mac, ESET Multi-Device likewise gives you a choice. You can install the ESET Cyber Security (for Mac) antivirus, or the ESET Cyber Security Pro suite. Note that there's no protection offered for iOS devices.

The Mac antivirus scans for malware on demand, on access, and on schedule. It also scans incoming POP3 and IMAP email messages for dangerous attachments. On the chance that your Mac might act as a carrier for non-Mac malware, it scans for Windows and Linux threats as well.

To keep you safe online, the Mac product includes Banking Protection as well as protection against malicious and fraudulent websites. You can also invoke its social media scanner to check for potentially dangerous links.

This suite's firewall aims to block malicious network attacks, and to control network usage by apps. Firewall experts can block specific services, ports, and IP addresses, but ordinary users shouldn't meddle with such firewall rules.

ESET's Parental control on the Mac is similar to what it offers for Windows, which means it's fairly limited. For each child, you can configure it to block websites matching specific categories, or just accept the default blocking categories for your child's age. It also logs attempts to reach blocked websites. That's the extent of parental control.

Security for Android

ESET Mobile Security provides a full range of expected Android security features. To get started, just install it from the Google Play Store. As with the Windows product, the installer requires that you actively choose whether to block Potentially Unwanted Applications (PUAs). PUAs are not as risky as malware, and you may have even given permission for their installation, but they tend to do annoying things, like bombard you with ads.

The installer offers a free trial of the app's premium features. These include anti-theft, automatic updates, antiphishing, scheduled scanning, and more. Don't bother with the trial, as you already have a license for the premium edition.

Activating that license is a bit awkward. You can do it by typing the registration code from the activation email, but that code is 20 characters long. There's also an option to activate using your username and password. I tried typing the username and password from my ESET account online; it failed. As it turns out, what it wants here is the random username and password assigned to you in the activation email.

ESET's antivirus component scans for malware immediately after install. Real-time protection watches for active malware. You can set up a scheduled scan, or (and this is clever) set it to scan any time it's charging.

Anti-theft isn't enabled by default, because it requires that you change your Android settings to make ESET a Device Administrator. You also must link this installation to your online My ESET account. Uninstall Protection prevents a thief from just turning off ESET.

The Proactive Protection feature snaps a screenshot after a failed unlock attempt. After a specified number of failed attempts (two, by default) a countdown starts in the background (15 seconds, by default). If the countdown finishes before the correct code is entered, the device goes into lockdown, just as if you had locked it remotely. A Good Samaritan who found your lost device could click a contact button to see your email address.

By logging in to the My ESET online portal, you can manage anti-theft remotely. When you mark a device as missing, ESET locks the device and starts monitoring, periodically sending the device's location, and snapping photos using the camera. You can trigger a loud alarm to help find a nearby device. And if you lose all hope for recovery, tapping the Wipe button erases all of the device's data.

Bitdefender Mobile Security and Antivirus (for Android) offers a similar set of anti-theft tools, but adds one unusual item. Once you pair your device with an Android Wear watch, you get a warning if you walk away leaving the device behind.

The anti-phishing component only works with browsers that support its integration. Tapping its button displays a list of supported browsers on your device. On the Nexus 9 that I used for testing, only Chrome appeared in the list. Avast Mobile Security blocks malicious sites in a wide variety of browsers.

Security Audit is disabled by default; you should turn it on. It warns if you're connected to insecure Wi-Fi, if you've enabled debug mode, or if you allow installation of apps from unknown sources.

More importantly, Security Audit checks all your apps and reports how many of them have specific potentially risky permissions: using paid services, tracking your location, reading identity information, accessing messages, and accessing contacts. For each category, you can tap to see a list of programs. On my clean test device, only Speedtest triggered a warning—it needs to know your location to pick the closest server.

After I installed ESET's own Parental Control, described below, it triggered all five Security Audit warnings. Of course, that makes perfect sense; parental control is a kind of invasion of privacy. Note that the similar auditing feature in Norton Security and Antivirus (for Android) takes the concept to the next level, offering warnings about iffy apps before you even download them.

All of my Android test devices are tablets. On an Android smartphone, more options become available. If a phone thief changes out the SIM card, ESET can send the new SIM card details to a trusted friend that you've specified. You can also enable the device to receive remote lock, locate, wipe, and siren commands through SMS.

On a smartphone, ESET's SMS and call filtering lets you control who can call and text you. You make the rules, for specific numbers, for masked numbers, or for numbers not in your contacts list. Rules can apply to calls, SMS messages, or MMS messages. You can also set each rule to apply during specific times or date ranges. I imagine you could use this to block calls during the night but allow calls from your most important contacts.

The similar feature in Avast logs the content of blocked text messages, but just dumps blocked calls to voicemail. Bitdefender's Android app does many things, but call and text blocking isn't among its features.

The Security Audit component adds a couple entries for a smartphone. Specifically, it checks to be sure that data roaming and call roaming are not active.

Norton, Bitdefender, and Avast are our Editors' Choice products in the Android security realm. However, ESET covers most of the same features; it's a good choice for Android protection.

Android Parental Control

ESET's Parental Control app for Android is completely separate from the basic Android security app. In fact, you must use one of your licenses to activate the parental control system. However, once you've done so you can install it on as many Android devices as you wish.

Parental control on Android is significantly more feature-rich than on Windows. See my review of ESET Parental Control (for Android) for full details. I'll summarize here.

The same app that enforces the rules on a child's device can be used to make the rules on a parent's device. In fact, you can log in to the parent app from your child's device, if necessary. You can also manage and monitor the system from the My ESET console.

ESET blocks access to websites in categories you've defined as inappropriate. You can optionally have it log access to such sites without blocking them. It handles secure (HTTPS) websites, so kids won't evade its reach using a secure anonymizing proxy.

The Application Guard feature blocks the use of inappropriate apps, naturally. For apps defined as Fun & Games, it imposes a daily limit, and also lets parents define a weekly schedule for when such apps are allowed.

From the parental control home page, you can see an overview of the child's website and app usage, as well as a location map, and can click on the overviews for detailed reports. A few features work only from the app. Parental Message is perhaps the most important of these. It lets parents send a text message that locks the device until the child responds. That will teach them to ignore you!

On its own, the ESET parental control app is impressive, provided that you only need to cover Android devices. It comes close to our Editors' Choice product for Android parental control, Norton Family Parental Control (for Android).

Uneven Security

ESET Multi-Device Security 10 lets you use a single license to install protection on your Windows, macOS, and Android devices, but it doesn't offer consistent protection across all platforms. Its Android support is the best of the lot, with a full-featured antivirus and anti-theft component plus a parental control app that rivals the best. If you're an all-Android household, this could be a good choice.

On Windows, antivirus is ESET's best feature—other components like firewall and parental control don't come up to the same mark. In addition, Windows users don't get the premium features found in ESET's top-of-the-line suite. The Mac product does give you more than just antivirus, but it lacks many features found on the other platforms.

If you need top-notch security for your Windows, macOS, and Android devices, consider Symantec Norton Security Premium. You get 10 licenses for less than what ESET costs, along with 25GB of online backup storage. Don't want any limits? Your McAfee LiveSafe subscription is good for every device in your household, even iOS devices. These two are our Editors' Choice products for cross-platform multi-device security.

SpeedCast Introduces SIGMA Net

A new standard in cloud-based vessel management with security by design

Sydney, Australia, November 30, 2016 - SpeedCast International Limited (ASX: SDA), a leading global satellite communications and network service provider, today announced the official release of SIGMA Net, the new standard for shipping and remote site network management designed specifically for VSAT and MSS.

SIGMA Net is a small but powerful industrial-grade VSAT and MSS network management device designed for ships and remote sites, providing automated and efficient management of multiple WAN links. Cyber security is at the heart of SIGMA Net, which incorporates a stateful firewall and Virtual Private Networking between the vessel and the Internet, plus unique methods to regulate Internet access, including rejection of update services for Windows or mobile devices. Voice calling across multiple pieces of satellite equipment is simplified via SIGMA Net’s integrated VoIP server, allowing a caller to choose the outbound call route via a prefix. National numbers can also be allocated, allowing for cost-effective calling from shore to a vessel. Feature and performance enhancements are automatically applied, ensuring that SIGMA Net’s software is always kept up to date.
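
SpeedCast hasn't published how SIGMA Net implements prefix-based call routing, but the idea is simple enough to sketch. The Python snippet below is a made-up illustration (the prefixes and link names are assumptions, not SpeedCast configuration): the dial prefix selects the outbound satellite link and is stripped before the number is passed on.

```python
# Hypothetical illustration of prefix-selected outbound call routing; not SpeedCast code.
ROUTES = {
    "77": "vsat",             # e.g. dial 77<number> to force the VSAT link
    "88": "mss_backup",       # e.g. dial 88<number> to force the MSS link
}
DEFAULT_ROUTE = "vsat"

def route_call(dialled: str):
    """Return (outbound_link, number_to_dial) for a dialled string."""
    for prefix, link in ROUTES.items():
        if dialled.startswith(prefix):
            return link, dialled[len(prefix):]
    return DEFAULT_ROUTE, dialled

print(route_call("88004412345678"))  # ('mss_backup', '004412345678')
print(route_call("004412345678"))    # ('vsat', '004412345678')
```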

SIGMA Net offers flexible crew services, including innovative pre-paid PIN-based BYOD (Bring Your Own Device) Internet and voice calling services, allowing for simplified voucher generation and management from shore. SIGMA Net provides managed network segmentation between business critical, crew or M2M networks at the remote location.

The cloud-based SIGMA Net Portal brings a vessel or remote site closer to IT management through an innovative and secure browser-based interface, providing remote management and configuration of SIGMA Net from shore. Any configuration changes made from the portal are instantly replicated to one or more SIGMA Net terminals, with full auditing of amendments recorded. Reliability and redundancy are primary features of SIGMA Net, with its configuration securely synchronized and stored to the portal. The portal also presents fully featured, interactive reporting of all data transferred via the SIGMA Net WAN links onboard.
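
SpeedCast doesn't describe the portal's internals, but "replicate a change to many terminals and audit who changed what" can be pictured in a few lines of Python. Everything below (class name, settings, terminal IDs) is an assumption made for illustration, not SIGMA Net code.

```python
# Minimal sketch of config replication with an append-only audit trail of amendments.
from datetime import datetime, timezone

class PortalConfig:
    def __init__(self, terminals):
        self.terminals = terminals      # terminal_id -> settings dict
        self.audit_log = []             # append-only record of every amendment

    def apply_change(self, user, terminal_ids, setting, value):
        for tid in terminal_ids:
            old = self.terminals[tid].get(setting)
            self.terminals[tid][setting] = value        # "replicate" to the terminal
            self.audit_log.append({
                "when": datetime.now(timezone.utc).isoformat(),
                "who": user,
                "terminal": tid,
                "setting": setting,
                "old": old,
                "new": value,
            })

portal = PortalConfig({"vessel-01": {"crew_quota_mb": 100}, "vessel-02": {"crew_quota_mb": 100}})
portal.apply_change("shore-admin", ["vessel-01", "vessel-02"], "crew_quota_mb", 250)
for entry in portal.audit_log:
    print(entry)
```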

“SIGMA Net has introduced a new degree of connection and network management to the Danaos fleet,” said Mr V Fotinias, Vessel IT Manager at Danaos Shipping, Greece. “The SIGMA Net Portal provides a web interface that enables remote configuration of SIGMA Net terminals across our fleet. The reporting provided by the SIGMA Net Portal gives us full visibility on traffic sent and received via the WAN links. Our vessel IT support team is able to easily and quickly resolve problems on board via SIGMA Net. The Danaos crew are extremely happy with the SIGMA Net prepaid vouchers for Internet access or crew calling.”

Danaos Shipping is one of the world’s largest containership owners, with a modern fleet of 59 container ships operating globally.

“SIGMA Net is a robust and secure cloud-based management platform that will both revolutionize and simplify vessel IT administration, both for shore-based support staff and a vessel’s crew,” said Dan Rooney, Maritime Product Director for SpeedCast. “The highly-configurable and flexible prepaid voucher services allow for time-consuming administrative tasks such as voucher generation to be managed centrally, rather than relying upon the Captain.”

About SpeedCast International Limited
SpeedCast International Limited (ASX: SDA) is a leading global satellite communications and network service provider, offering high-quality managed network services in over 90 countries and a global maritime network serving customers worldwide. With a worldwide network of 42 sales and support offices and 39 teleport operations, SpeedCast has a unique infrastructure to serve the requirements of customers globally. With over 5,000 links on land and at sea supporting mission-critical applications, SpeedCast has distinguished itself with strong operational expertise and a highly efficient support organization. For more information, visit http://www.speedcast.com/.

SpeedCast® is a trademark and registered trademark of SpeedCast International Limited. All other brand names, product names, or trademarks belong to their respective owners.

© 2016 SpeedCast International Limited. All rights reserved.

For more information, please contact:
Media:
Clara So
SpeedCast International Limited
clara.so@speedcast.com
Tel: +852 3919 6800

About Danaos Corporation
Danaos Corporation is one of the largest independent owners of modern, large-size containerships. Our current fleet of 59 containerships aggregating 353,586 TEUs, including four vessels owned jointly with Gemini Shipholdings Corporation, is predominantly chartered to many of the world's largest liner companies on fixed-rate, long-term charters. Our long track record of success is predicated on our efficient and rigorous operational standards and environmental controls. Danaos Corporation's shares trade on the New York Stock Exchange under the symbol "DAC". Please visit www.danaos.com for more information.

How Do You Protect Your Perimeter When You’ve Blown it to...

By Ian Kilpatrick, Executive Vice President Cyber-Security, Nuvias Group, and Chairman, Wick Hill Group

In 2016, we are subject to near-constant headlines detailing the latest big data breach or hacking scandal. Many of us probably think we have a pretty good handle on the different types of security risks that can threaten our businesses.

But the reality may be a little different. The introduction of new technologies, the growth of cloud computing and changing employee working practices have all opened the door to a raft of new security vulnerabilities – often without us realising it. The security perimeter that was once in place no longer exists; Bring Your Own Device (BYOD), remote working or working across multiple sites, combined with an increasing reliance on cloud-based applications such as Office 365 and Salesforce, and public cloud services like Amazon AWS or Microsoft Azure, have contributed to a de-centralised environment where company data and applications can be freely accessed from almost any device, on any network. Without knowing it, many organisations have repeatedly punched holes in their once-secure perimeter, potentially leaving themselves not only vulnerable but fully open to attack. However, because these changes have happened over time, in some cases over several years, many firms have missed, or have under-prioritised, the potential risks they face.

This has in some instances led to complacency regarding legacy security systems – if something has always worked, and was secure in the past, why mess with it? But of course, this doesn’t take into account the new wave of attacks coming from outside the weakened perimeter.

Firewall technology

One of a number of areas this applies to is firewall technology, which has had to evolve to counter this next generation of security threats.

The firewall that has done a perfectly good job over the past five years may not be enough to protect your business in the future. For example, firewalls deployed across a multi-site environment today should be able to offer extra features such as the ability to optimise and protect business-critical traffic from being swamped by less important network activities. So, ideally, your active firewall should offer capabilities like compression, data deduplication, application-based prioritisation and bandwidth guarantees. Meanwhile, businesses are facing an unprecedented wave of ransomware attacks.

These generally come in through email, but you could also have computers “calling home” to a Command & Control (C&C) server to install stealthware. With the right firewall – often described as next generation – in place, these activities can be detected and curbed. In addition to protection at the perimeter, you can deploy more firewalls internally to create zones. Zoning, or segmentation, makes it harder for malware and attackers to cross network boundaries. It often makes sense to allow direct access to cloud applications from each branch office location, effectively moving away from the traditional centralised access approach.
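
To make the zoning idea concrete, here is a generic, vendor-neutral sketch (the zone names, services and rules are invented for illustration) of a default-deny policy between internal segments: only explicitly whitelisted flows cross a zone boundary, so malware on one segment cannot freely reach another.

```python
# Generic sketch of zone-to-zone firewall policy with a default-deny stance.
RULES = [
    # (source zone, destination zone, service, action)
    ("guest_wifi", "internet",   "https",  "allow"),
    ("business",   "internet",   "https",  "allow"),
    ("business",   "ot_devices", "modbus", "allow"),
    # anything not explicitly allowed is dropped
]

def evaluate(src_zone: str, dst_zone: str, service: str) -> str:
    for src, dst, svc, action in RULES:
        if (src, dst, svc) == (src_zone, dst_zone, service):
            return action
    return "deny"   # default-deny keeps attackers from crossing zone boundaries freely

print(evaluate("guest_wifi", "business", "smb"))   # deny: guest devices can't reach the business LAN
print(evaluate("business", "internet", "https"))   # allow
```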

Allowing internet access from branch locations may now mean deploying firewalls at these locations.

The practical challenges here are threefold. First, does the deployed, ‘smaller’ firewall device at each branch provide all the security controls needed, and is it still affordable? Must-haves would be next-generation firewall features such as app control, user awareness, integrated IPS, the ability to intercept SSL, and advanced threat and malware detection. Second, can these devices be effectively managed from a central user interface? This is important, because it means that only one security policy needs to be defined and maintained across all the deployed firewalls, even though enforcement now takes place in multiple physical locations. Third, what does the associated operational cost look like? Firewall devices need to be troubleshot, logs need to be managed, updates applied, and so on.

Next Generation Firewalls

As with all things IT, Next Generation Firewalls (NGFW) are subject to more hype than reality. While many are fully featured, some are overmarketed versions of older technology, and despite there being plenty of choice, there can be a blurring around the capabilities and performance on offer. The customer should start by determining their needs, as these differ by organisational type, size, performance requirements, security concerns and, of course, compliance requirements. While there is a wide variation in NGFW prices, they are often not matched directly to capability – which is why needs should come before budget considerations.

At the risk of creating a boring feature list, some of the elements to consider and prioritise for Next Generation Firewalls include application firewalling (using deep packet inspection), intrusion prevention, encrypted traffic (TLS/SSL) inspection, website filtering, bandwidth management, and third-party identity management integration (LDAP, RADIUS, Active Directory, etc.). Other features can include antivirus, sandbox filtering, logging and auditing tools, network access control, DDoS protection and, of course, cloud capabilities.

Clearly, different organisations will have a divergent range of needs driven by their own size, performance and security requirements. With the significant range of solutions on offer, the challenge can often be selection, particularly with the significant number of new suppliers entering the market with innovative offerings. However, these can often create more cloud than light in this area, plus there’s a real risk that if they have a genuinely innovative solution, they will be acquired by a bigger player. Budget and management capabilities are also key elements in this equation.

Given that a firewall is often deployed for considerably more than three years, it’s crucial to make the right decision to protect your environment, not only against today’s threats but also against those that will be the centre of attacks in the future. Having been around security for more than 40 years, I would suggest that the conservative approach of going with a well-established player that can and will continue to invest in threat defences and upgrades is the best route.

There are many organisations that fit this bill, including Barracuda Networks, Check Point and WatchGuard Technologies to name a few.
Subject to the size and potential cost of your deployment, putting one or more suppliers through a full POC (proof of concept) ahead of the decision can be a very effective investment to protect your organisation in a risk environment that has changed radically from three years ago, and one which will continue to change at a potentially even faster rate.

For further press information, please contact Annabelle Brown on 01326 318212, email pr@nuvias.com. Wick Hill: https://www.wickhill.com/

About the author

Ian Kilpatrick is EVP (Executive Vice-President) Cyber Security for Nuvias Group and Chairman of Wick Hill Group.

A leading and influential figure in the IT channel, Ian has many years’ experience in security and overall responsibility at Nuvias for cyber security strategy. He was a founder member of the award-winning Wick Hill Group in the 1970s and, thanks to his enthusiasm, motivational abilities and drive, led the company through its successful growth and development, to become a leading, international, value-added distributor, focused on security. Wick Hill was acquired by Nuvias in July 2015.
Ian is a thought leader, with a strong vision of the future in IT, focussing on business needs and benefits rather than just technology. He is a much-published author and a regular speaker at IT events.

About Nuvias Group

Nuvias Group is the pan-EMEA, high-value distribution business which is redefining international, specialist distribution in IT.

The company has created a platform to deliver a consistent, high value, service-led and solution-rich proposition across EMEA.

This allows partner and vendor communities to provide exceptional business support to customers and enables new standards of channel success. The Group today consists of Wick Hill, an award-winning, value-added distributor with a strong specialisation in security; Zycko, an award-winning, specialist EMEA distributor with a focus on advanced networking; and SIPHON Networks, an award-winning UC solutions and technology integrator for the channel.

All three companies have proven experience at providing innovative technology solutions from world-class vendors, and delivering market growth for vendor partners and customers.

The Group has seventeen regional offices across EMEA and serves additional countries through those offices.

Turnover is in excess of US$ 330 million.

CompSci Prof raises ballot hacking fears over strange pro-Trump voting patterns

Calls for audit of votes in key swing states just to make sure nothing went awry

Donald Trump's surprise win in the United States' presidential election could conceivably be attributed to illegal hacking and needs to be investigated, according to a security expert. A statistical analysis by J Alex Halderman, professor of computer science at the University of Michigan's Center for Computer Security and Society, has shown that in three states there were worrying downturns in votes for Democratic Party candidate Hillary Clinton. Halderman feels voting patterns were particularly odd in counties that use electronic voting machines and don't use a paper receipt to record votes. In some cases such counties showed a seven per cent swing against Clinton, compared to the votes predicted by polls.
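
The figures below are invented for illustration only; they are not Halderman's data. They simply show the shape of the comparison he describes: Clinton's actual vote share versus her polled share, grouped by whether a county's machines leave a paper trail.

```python
# Illustrative only -- made-up county numbers, not Halderman's analysis or data set.
counties = [
    # (has_paper_trail, clinton_actual_share, clinton_polled_share)
    (False, 0.43, 0.50),
    (False, 0.45, 0.51),
    (True,  0.49, 0.50),
    (True,  0.52, 0.52),
]

def average_shortfall(rows):
    """Average gap between polled share and actual share."""
    gaps = [polled - actual for _, actual, polled in rows]
    return sum(gaps) / len(gaps)

paperless = [c for c in counties if not c[0]]
with_paper = [c for c in counties if c[0]]
print(f"paperless counties underperform polls by {average_shortfall(paperless):.1%}")
print(f"paper-trail counties underperform polls by {average_shortfall(with_paper):.1%}")
# A consistent gap of this kind in paperless counties is what prompted calls to audit
# the physical evidence (paper ballots and voting equipment) in the key states.
```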

That swing was enough to tip the election Trump's way, as he took some states - and their electoral college votes - by a few tens of thousands of votes.

"I believe the most likely explanation is that the polls were systematically wrong, rather than that the election was hacked. But I don’t believe that either one of these seemingly unlikely explanations is overwhelmingly more likely than the other," Halderman writes. "The only way to know whether a cyberattack changed the result is to closely examine the available physical evidence - paper ballots and voting equipment in critical states like Wisconsin, Michigan, and Pennsylvania. Unfortunately, nobody is ever going to examine that evidence unless candidates in those states act now, in the next several days, to petition for recounts."

That electronic voting machines are not designed with security in mind and are easy to hack is well documented.

For more than a decade security experts have warned that the machines are susceptible to easy hacks. That hacking aimed at exposing secret information played a part in the US election is without doubt.

A series of leaked emails from the Democratic National Committee became a key issue for voters, and several election boards had their systems attacked by hackers. Attacks aimed at influencing elections are not uncommon.

Costa Rica investigated such claims, and the Ukrainian government claimed to have found sophisticated election machine hacking code in 2014 that could have altered the course of the vote. Halderman is clear: the only secure form of voting is on paper, with a viable audit trail.

This works well in the UK and Australia, where election nights are busy times as officials count paper ballots on camera.

But the US moved early on electronic voting and many machines don’t provide a paper receipt for auditing. At this stage, the problem is largely moot.

The deadline for a legal challenge to the results is very close and there is little appetite for such a fight. Let's not forget, too, that president-elect Donald Trump never committed to accepting the election result if he felt any fraud was involved.

A late recount and allegations of digital deviousness have the potential to turn things ugly stateside. ®