
Tag: Configuration Management

Ansible is the Rodney Dangerfield of Red Hat’s software portfolio: It, too, “don’t get no respect.” Despite the Ansible automated configuration management tool helping to sell Red Hat’s hybrid cloud story, delivering six deals worth more than $1 million and one deal worth over $5 million, not a single analyst on the latest financial call bothered to check on Ansible’s progress. Why? They’re fixated on OpenShift, and perhaps rightly so. OpenShift is Red Hat’s most obvious successor to the Red Hat Enterprise Linux (RHEL) throne.
Using NetChart to standardize its network configuration, TIM Brasil managed to enhance its quality performance indicators.

Rio de Janeiro, 6 April 2017 – To help improve the quality of its LTE network, TIM Brasil has decided to add 4G capabilities to its existing configuration management platform, NetChart.
Since 2013, TIM Brasil has used NetChart from Bwtech to standardize network configuration parameters of the 2G and 3G networks.

TIM Brasil, one of the largest mobile... Source: RealWire
The only sane and efficient way to manage a large number of servers—or even a few dozen, if they change often—is through automation.

Automation tools have to be learned and mastered, so they exact a significant up-front cost, but they dramatically reduce the administrative burden in the long run. Perhaps most important, they provide a staunch line of defense against the fatal fat-fingered mistake, which even the most sophisticated cloud operators struggle to avoid.

Ease of use. Configuration management is simple with SaltStack.

Because Salt uses the YAML configuration format, states can be written quickly and easily. YAML state descriptions are well structured, with solid readability.

The support for Mako, JSON, Wempy, and Jinja allows developers to extend Salt’s capabilities.

The availability of built-in modules makes it easy to configure and manage states.
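As a sketch of what this looks like in practice, a minimal Salt state (package, service, and path names are illustrative, not from the article) might read:

```yaml
# /srv/salt/webserver/init.sls -- hypothetical Salt state
nginx:
  pkg.installed: []          # make sure the package is present
  service.running:           # keep the service enabled and running
    - enable: True
    - require:
      - pkg: nginx

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://webserver/files/nginx.conf
    - template: jinja        # rendered with Jinja, per the renderers above
```

Applying the state with `salt '*' state.apply webserver` would then converge the targeted minions to this description.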
An update for ansible is now available for Red Hat OpenStack Platform 10.0 (Newton). Red Hat Product Security has rated this update as having a security impact of Important.

A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed s...
It's easy to get bogged down when looking for insights from data using Hadoop.

But that doesn't have to happen, and these tips can help. Many technology and security teams, particularly in finance, are running data lake projects together to build data analytics capabilities using Hadoop. The goal for security teams that are doing this is to create a platform that lets them gain meaningful, timely insights from similar data to help solve a wide range of problems.

These problems range from continuous monitoring of cyber hygiene factors across the IT environment (e.g., asset, vulnerability, configuration, and access management) to identifying threat actors moving across their networks by correlating logs across large, cumbersome data sets such as those from Web proxies, Active Directory, DNS, and NetFlow. The reason for this trend is that some members of security teams (typically the chief information security officer and leadership team, control managers, security operations, incident response) have recognized that they're all looking for tailored insights from different analysis of overlapping data sets.

For example, the same data that can advance the CISO's executive communications with risk and audit functions can simplify security operations' job in monitoring and detecting for malicious external or internal activity across devices, users, and applications. By building a platform that can store and enable all the analysis required on this data, security teams are looking to consolidate the output of all their security solutions and, where possible, simplify their tech stacks. Generally, data lake projects go through four phases: build data lake; ingest data; do analysis; deliver insight.

At each of these phases, challenges must be navigated.  The first is to get a data lake that can support ingesting relevant data sets at the right frequency and speed — and enable varied analysis techniques to generate relevant insights.

The second is to build and run efficient analysis on these data sets.

The third is to do that in a way that delivers the insights that stakeholders need.

The fourth is to give stakeholders a way to interact with information that's tailored to their concerns, decisions, and accountability. Most projects today run into problems during the first two phases. Here's why.

Phase 1 Problems: Issues with the Build

Building a platform for data analysis on Hadoop is hard. Hadoop isn't a single technology; it's a rather complex ecosystem of technology components — HDFS, Spark, Kafka, YARN, Hive, HBase, Phoenix, and Solr, to name a few.

Figuring out what components of this ecosystem play nice with each other is tricky.
Some components work for some use cases, but not for others.

First you need to know how to connect the parts that work with each other to solve the range of problems you face.

Then you need to know when the limits of these joined-up parts will be reached for the analysis you want to conduct.

That's before you've considered that there are several different vendors of the Hadoop distribution. Sometimes you'll find that you need to swap out an underlying part of your data lake to get the performance you need (either for data ingestion or the math your data scientists want to apply to the data).

This requires expensive people with experience and the ability to experiment quickly when new components are released that enable greater functionality or performance improvements.

If you build a data lake in-house, work on a minimum viable product model.

Develop the capabilities you need for the analysis you need rather than trying to build a platform that will solve every problem you can imagine.
Start with a small set of problems and with relatively simple data sets.
Speak to your peers about paths they walked down that were nonproductive and budget carefully.
If you go down the "buy" route, grill vendors about their promises of a "data lake out of a box." At what point does the data analysis workflow break? How extensible is the data lake as your scale and IT operating model change?

Phase 2 Problems: Ingestion Indigestion

Once you have a data lake, it's time to ingest data. Here are two common errors. First: "Let's ingest all the data we can get our hands on, then figure out what analysis we want to do later." This leads to problems because sooner or later, the CFO will demand to see some value (that is, insight) from the investment.
It then either becomes apparent that the data sets aren't ideal for generating meaningful insights, so budgets get cut; or analysis begins, but it's hard to find valuable correlations in the data you have, and the insights that can be presented under time pressure are underwhelming.

The second error is to ingest data sets that aren't well curated, cleaned, and understood.

The result is that you end up with a data swamp, not a data lake.

This often happens when Hadoop is used to replicate ad hoc data analysis methods already in place.
It can also happen when the IT operations team doesn't have time to move the data that would be most useful across the network.

There are three principles that can help when ingesting data for analysis:

Identify data that is readily available and enables fast time to value at low cost.
Prioritize data that supports solving problems you know you have and that you know you can solve.
Use the minimum amount of data needed to deliver maximum insights to many stakeholders.

For example, there's a lot you can achieve with just these data sources: vulnerability scan, antivirus, Active Directory, and your configuration management database. Lastly, if you're going to spend time understanding, cleaning, and ingesting data, it's worth making sure the data sets you choose can solve complementary problems and lay the foundation to solve more complicated problems more easily.

Nik Whitfield is a noted computer scientist and cybersecurity technology entrepreneur. He founded Panaseer in 2014, a cybersecurity software company that gives businesses unparalleled visibility and insight into their cybersecurity weaknesses. Panaseer announced in November ...
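To make the minimum-data idea above concrete, here is a toy sketch (field names and records invented) of correlating a vulnerability scan with configuration management database context in plain Python:

```python
# Hypothetical records: a vulnerability scan result and a CMDB extract.
scan = [
    {"host": "web01", "cve": "CVE-2016-5195", "severity": "high"},
    {"host": "db01",  "cve": "CVE-2017-0001", "severity": "low"},
]
cmdb = {
    "web01": {"owner": "retail-team", "environment": "production"},
    "db01":  {"owner": "erp-team",    "environment": "staging"},
}

def enrich(scan, cmdb):
    """Join scan findings with CMDB context so each finding names an owner."""
    out = []
    for finding in scan:
        context = cmdb.get(finding["host"], {})
        out.append({**finding, **context})
    return out

findings = enrich(scan, cmdb)
```

Even this tiny join answers a stakeholder question ("which team owns the high-severity production finding?") without ingesting every data set in sight.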
An updated spacewalk-remote-utils package that adds one enhancement is now available for Red Hat Network Tools. Red Hat Network Tools provide programs and libraries that allow your system to use provisioning, monitoring, and configuration management capabilities provided by Red Hat Network and Red Hat Network Satellite.

The spacewalk-remote-utils package contains the spacewalk-create-channel utility that can be used to create channels with a package set for a particular release. This update adds the following enhancement:

* The spacewalk-remote-utils package has been updated to include channel definitions for Red Hat Enterprise Linux 7.3. (BZ#1395363)

All users of Red Hat Network Tools are advised to upgrade to this updated package, which adds this enhancement. Before applying this update, make sure all previously released errata relevant to your system have been applied. This update is available via the Red Hat Network.

Details on how to use the Red Hat Network to apply this update are available at https://access.redhat.com/site/articles/11258

Red Hat Network Tools SRPMS:
spacewalk-remote-utils-2.3.0-12.el5sat.src.rpm (MD5: 0608de3fae57a576a38b53f29a04a815; SHA-256: 94016aebd6572139c411a53b0a6570cca8cca4f3e4eb34cfcc1fea4d1a906cdb)
spacewalk-remote-utils-2.3.0-12.el6sat.src.rpm (MD5: 4650587ae1944d18dedf6cf457f81706; SHA-256: bd1b3fd95c6e0864f56e3eca9221608588aba3aacebff11e97aed51b0ea3eb56)
spacewalk-remote-utils-2.3.0-12.el7sat.src.rpm (MD5: 2c3f27271deffd47d56b51549dbe997e; SHA-256: 5eb456eb69b7ea10e0b53aa601ea409ab38524a1cf02a0cc81cf48c71417f596)

Binary packages (noarch; the same three packages are offered across the IA-32, IA-64, PPC, PPC64LE, s390x, and x86_64 channels, with el5sat/el6sat/el7sat availability varying by architecture):
spacewalk-remote-utils-2.3.0-12.el5sat.noarch.rpm (MD5: d0f354fb9ce72469c753cf22894b80e0; SHA-256: 8b56f6b75554a4626987e456bd9d086619e2953c056857c6288339586bbd356c)
spacewalk-remote-utils-2.3.0-12.el6sat.noarch.rpm (MD5: f17cb17684ff3f5d66327c55ff1eb81f; SHA-256: 18e2930468495b58b30d3508757bfa26b847ba479d1a1564ac8e308b94ecf94f)
spacewalk-remote-utils-2.3.0-12.el7sat.noarch.rpm (MD5: 5dc288da3d90d8f4b7d519dd746f3c88; SHA-256: dab60f11183c836b57e5a60a2321b16c7182ca99ca7b02d8fbe9cdc362b4cbc7)

(The unlinked packages above are only available from the Red Hat Network.) These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
It’s no secret that devops and IT security, like oil and water, are hard to mix.

After all, devops is all about going fast, while security is all about proceeding carefully. However, both devops and security serve a higher authority—the business—and the business will be served only if devops and security learn to get along. Security can (and should) be baked into the devops process, resulting in what is often referred to as devsecops.
IT security teams are obliged to understand how applications and data move from development and testing to staging and production, and to address weaknesses along the way.

At the same time, devops teams must understand that security is at least partly their responsibility, not merely slapped onto the application at the very end.

Done right, security and devops go hand in hand. Because half of this equation is about making devops more security-aware, I’ve put together a primer on some basic security principles and described their applicability in devops environments. Of course, this list is only a start.

Feel free to comment and suggest other terms and examples.

Vulnerabilities vs. exploits

A vulnerability is a weakness that may allow an attacker to compromise a system.
Vulnerabilities usually result from bad code: design errors or programming mistakes.

They are basically bugs, albeit bugs that may not interfere with normal operations of the application, except to open a door to a would-be intruder.

For a recent example, look at Dirty Cow. Whenever you’re using open source components, it is recommended that you scan the code for known vulnerabilities (CVEs), then remediate by updating the affected components to newer versions that are patched.
In some cases, it’s possible to neutralize the risk posed by a vulnerability by changing configuration settings. An exploit, on the other hand, is code that exploits the vulnerability—that is, a hack.
It’s very common for a vulnerability to be discovered by an ethical researcher (a “white hat”) and to be patched before it has ever been exploited. However, if an exploit has been used, it’s often referred to as existing “in the wild.” The situation where a known vulnerability has an exploit in the wild and has yet to be patched is obviously to be avoided.
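A minimal sketch of such a scan-and-remediate gate (package names and all CVE IDs except Dirty Cow's are invented; a real pipeline would query a live vulnerability feed):

```python
# Hypothetical deny-list mapping (package, version) pairs to known CVEs.
# CVE-2016-5195 is Dirty Cow; the other entry is invented for illustration.
KNOWN_VULNERABLE = {
    ("linux-kernel", "4.8.2"): ["CVE-2016-5195"],
    ("examplelib", "1.0.3"): ["CVE-2017-99999"],
}

def scan_dependencies(deps):
    """Return a finding for each dependency pinned to a vulnerable version."""
    return [
        {"package": name, "version": ver, "cves": KNOWN_VULNERABLE[(name, ver)]}
        for name, ver in deps
        if (name, ver) in KNOWN_VULNERABLE
    ]

def build_passes(deps, max_findings=0):
    """Success/fail criterion for the CI step: fail if findings exceed policy."""
    return len(scan_dependencies(deps)) <= max_findings
```

Remediation is then a version bump: once the affected component moves to a patched release, its pair drops out of the deny-list and the gate passes.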
In devops environments, vulnerability management must be automated and integrated into the development and delivery cycle using automated steps in CI/CD tools, with a clear policy (typically created by security and compliance teams) as to what constitutes an acceptable level of risk, and success/fail criteria for scanned code.

Zero-day vs. known vulnerabilities (CVE)

Vulnerabilities in public software can be resolved by the developers, and fixes deployed to all users, before malicious users become aware of them.
Such “known vulnerabilities” are recorded on the Common Vulnerabilities and Exposures (CVE) system, operated by MITRE. However, in some situations hackers discover new vulnerabilities before they’ve been publicly revealed and fixed.

These “zero-day vulnerabilities” (so called because developers have had zero days to work on a fix by the time the vulnerability is exploited or disclosed) are the most dangerous, but they are also less common.

There is no way to detect a zero-day vulnerability up front. However, zero days can be mitigated through network segmentation, continuous monitoring, and encrypting secrets so that even if they are stolen, they are not exposed.

Behavioral analytics and machine learning can also be applied to understand normal usage patterns and flag anomalies as they happen, reducing the potential damage from zero days.

Attack surface

The attack surface is composed of all the possible entry points into a system through which an attacker could gain access.
It is always advised to minimize the attack surface by eliminating or shutting down parts of a system that are not needed for a particular workload. In devops environments, where applications are deployed and updated frequently, it’s easy to lose sight of the various components and code elements that are included, changed, or added with each update. Over time, this can result in a bloated attack surface, so it’s important to first understand the workloads and configure servers and applications in an optimal manner, removing unnecessary functions and components. Using one “cookie cutter” template will simply result in a larger attack surface, so you need to adjust to specific workloads or at least group workloads by application or trust level.

Then, it’s highly recommended to review the configurations periodically to ensure there’s no “creep up” of the attack surface.

Least-privilege principle

This principle dictates that users and application components should only have access to the minimum information and resources they need, in order to prevent both accidental and deliberate system misuse.

The principle relies on the notion that if you have access to only what you need, then the damage will be limited if your privileges are compromised. Applying least privilege can dramatically reduce the spread of malware, which tends to use the privileges of a user who was tricked into installing or activating the software.
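As a toy sketch of the idea (role and permission names invented): grant each role only the actions it needs, and deny everything else by default.

```python
# Hypothetical role-to-permission map: each role gets only what it needs.
PERMISSIONS = {
    "app":      {"read:config", "write:logs"},
    "deployer": {"read:config", "deploy:app"},
    "auditor":  {"read:logs"},
}

def is_allowed(role, action):
    """Deny by default; allow only actions explicitly granted to the role."""
    return action in PERMISSIONS.get(role, set())
```

Compromising the auditor account then yields read access to logs and nothing more, which is exactly the damage-limiting property described above.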
It is also advised to perform periodic reviews of user privileges and trim them—especially with respect to users who have changed roles or left the company. In devops environments, it’s also recommended to separately define access privileges to development, testing, staging, and production environments, minimizing the potential damage in case of an attack and making it easier to recover from one.

Lateral movement (east-west)

Lateral movement, sometimes described as “east-west attacks,” refers to the ability of an attacker to move across the network sideways, from server to server or from application to application, thus expanding the attack or moving closer to valuable assets.

This is in contrast to north-south movement, which relates to moving across layers—from a web application into a database, for example. Network controls such as segmentation are crucial in preventing lateral movement and in limiting the damage that a successful attacker might inflict. Network segmentation is akin to the compartmentalization of a ship or submarine: If one section is breached, it is sealed off, preventing the entire ship from going down. Because one of the goals of devops is to remove barriers, this could be a tricky one to master.
It’s important to distinguish between openness in the delivery process, from development through to production, and openness across the network.

The former contributes to agility and process efficiency, but the latter seldom does. For example, there’s usually no cross-talk in the processes required to deliver different applications.
If you have a web retail application and an ERP application, and they are developed and run by different teams, then they belong on separate network segments.

There’s absolutely no devops justification to have an open network between them.

Segregation of duties

Remember those movies where you need two people to simultaneously turn the key in order to launch nuclear missiles? Segregation of duties is about restricting the privileges users have when accessing systems and data, and limiting the ability of any one privileged user to cause damage, whether by mistake or maliciously.
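The two-keys analogy reduces to a simple invariant; a toy sketch (user names invented):

```python
def can_execute(approvals):
    """A privileged action proceeds only with two *distinct* approvers,
    so no single user (or single stolen account) can trigger it alone."""
    return len(set(approvals)) >= 2
```

Note that `set()` collapses duplicates, so one user approving twice does not count.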

For example, it’s best practice to separate the administration rights of a server from the administration rights of the application running on that server. In a devops environment, the key is to make segregation of duties part of the CI/CD process and apply it equally to systems as well as users, so no single system or user would be able to compromise your deployment. Orchestrator admins should not also be the configuration management admins, for example.

Data exfiltration

Data exfiltration, or the unauthorized extraction of data from your systems, might result in sensitive data being accessed by unauthorized parties.
It’s often referred to as “data theft,” but data theft isn’t like physical theft: When data is stolen it still remains where it was, making it more difficult to detect the “loss.” To prevent exfiltration, ensure that “secrets” and sensitive data such as personal information, passwords, and credit card data are encrypted.
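For the data-masking case mentioned below, a toy sketch (not a compliance-grade control) that hides all but the trailing characters of a sensitive value:

```python
def mask(value, keep=4):
    """Replace all but the last `keep` characters with asterisks,
    e.g. for card numbers copied into non-production environments."""
    if len(value) <= keep:
        return "*" * len(value)
    return "*" * (len(value) - keep) + value[-keep:]
```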

Also prevent outbound network connections where they are not required. In development environments, it’s recommended to use data masking or fake data. Using real data means you have to protect your dev environment as you would a production environment, and many organizations don’t want to invest the resources to do that.

Denial of service (DoS)

DoS is an attack vector whose purpose is to prevent your users from getting service from your systems, using a variety of methods that place a massive load on your servers, applications, or networks, paralyzing them or causing them to crash. On the internet, DoS attacks are usually distributed (DDoS).

DDoS attacks are much more difficult to block because they don’t originate from a single IP. That said, even a single-origin DoS can be devastating if it comes from within.

For example, a container may be compromised and used as a platform to repeatedly open processes or sockets on the host (attacks known respectively as fork bombs and socket bombs).
Such attacks can cause the host to freeze or crash in seconds. There are many and varied ways to prevent and detect DoS attacks, but proper configuration and sticking to the basic tenets of a minimal attack surface, patching, and least privilege go a long way toward making DoS less likely. Organizations that adopt devops methods may actually recover faster from DoS when it does occur, because they can more easily relaunch their applications on different nodes (or different clouds) and roll back to previous versions without losing data.

Advanced persistent threat (APT)

APT is the name given to sophisticated attacks that often take many months to unravel.
In a typical scenario, an intruder will first find a point of infiltration, using a vulnerability or configuration error, and plant code that will collect network traffic or scan processes on the host. Using the data collected, the intruder will then progress to the next phase of the attack, perhaps infiltrating deeper into the network.

This step-by-step process continues until the intruder can lay his hands on a valuable asset, such as customer or financial data, at which point he will go for the final attack, typically data exfiltration. Because APT is not a single attack vector but a combination of many methods, there is no single thing you can do to protect yourself. Rather, you must employ multiple layers of security and be sensitive to anomalies. In devops environments this is even more difficult, because they are anything but static.
In addition to avoiding vulnerabilities, applying least privilege religiously, and making it difficult to breach your environment in the first place, you should also implement network segmentation to hinder an intruder’s progress, and monitor your applications for abnormal activity.

“Left shift” of security

One of the results of continuous development and rapid devops cycles is that developers must bear more of the responsibility for delivering secure code.

Their commits are often integrated straight into the application, and the traditional security gates of penetration testing and code review simply don’t work fast enough to detect or stop anything.
Security tests must “shift left,” or move upstream into the development pipeline.
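For instance, a hypothetical CI pipeline fragment (stage names and the scanner CLI are invented, in a GitLab-CI-like syntax) that runs a dependency scan on every commit and fails the build on high-severity findings:

```yaml
stages:
  - build
  - security-scan     # security runs inside the pipeline, not after it
  - deploy

dependency_scan:
  stage: security-scan
  script:
    - scan-deps --fail-on high   # hypothetical scanner; nonzero exit fails the build
```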

The easiest way to do this is to integrate security tools with CI/CD tools and insert the necessary steps into the build process.

The 10 terms above are only a partial list, but in today’s rapidly converging environments it is imperative that devops teams understand security better.

By the same token, security teams must understand that in devops environments security cannot be applied as an afterthought or without understanding how applications are developed and delivered through the pipeline; nor can they use the security tools of yesterday to gate or hinder speedy devops deployments. Learning to speak each other’s lingo is a good start.

Amir Jerbi is co-founder and CTO of Aqua Security. Prior to Aqua, he was chief architect at CA Technologies, in charge of the host-based security product line.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth.

The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers.
InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
Send all inquiries to newtechforum@infoworld.com.
Product | Cisco Bug ID | Fixed Release Availability
Cisco SocialMiner | CSCvc32449 | 11.6.1 (15-Jun-2017)
Cisco Unified MeetingPlace | CSCvc23583 |
Cisco WebEx Node for MCS | CSCvc23453 |
Cisco Jabber Guest | CSCvc23580 | 11.0.1 (28-Feb-2017)
Cisco Application and Co...
Security experts have long said that internet-connected systems and software need security controls and features built in by design, in the same manner they’re built into physical infrastructure. The National Institute of Standards and Technology agrees and has issued guidance to help software engineers build secure products. Titled “Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems,” the guideline emphasizes incorporating “well-defined engineering-based security design principles at every level, from the physical to the virtual,” NIST Fellow Ron Ross wrote on the Taking Measure blog. A holistic approach does more than make systems penetration-resistant; even after a compromise, they’re still capable enough to contain the damage and resilient enough to keep supporting critical missions and business functions.

NIST’s guidance uses the international standard ISO/IEC/IEEE 15288 for systems and software engineering as a framework, and it maps out “every security activity that would help the engineers make a more trustworthy system” for each of the 30-plus processes defined by the standard. The activities cover the entire system lifecycle, from the initial business or mission analysis to requirements definition to the design and architecture phases, and they’re applicable to new, upgraded, or repurposed systems.

“We have a high degree of confidence our bridges and airplanes are safe and structurally sound. We trust those technologies because we know that they were designed and built by applying the basic laws of physics, principles of mathematics, and concepts of engineering,” Ross wrote. Similarly, applying fundamental principles in mathematics, computer science, and systems/software engineering can give us the same level of confidence in our software and hardware.
Taking a holistic approach

A holistic approach requires coordinating across different specialties, such as information, software and hardware assurance, physical security, antitamper protection, communications security, and cryptography. It also demands addressing multiple focus areas, such as privacy, verification, penetration resistance, architecture, performance, validation, and vulnerability. The guidance addressed the dependencies and subspecialties by grouping the processes in the system lifecycle into four families:

Agreement Process: Tasks related to acquiring products and services, as well as providing services as a supplier.
Organizational Project-Enabling Process: Lifecycle model management, infrastructure management, portfolio management, human resource management, quality management, and knowledge management.
Technical Management Process: Project planning, project assessment and control, decision management, risk management, configuration management, information management, and quality assurance.
Technical Process: All the activities related to business or mission analysis, defining stakeholder needs and requirements, defining system requirements, defining the architecture, defining the design, system analysis, implementation, integration, verification, transition, validation, operations, maintenance, and disposal.

The processes outlined in the publication do not prescribe a mandatory set of activities and do not explicitly map to specific stages in the lifecycle, NIST warned. Engineers should rely on their experience and their understanding of the organization’s objectives to tailor the processes to meet the stakeholder’s requirements for a trustworthy system. The publication also did not attempt to formally define systems security engineering. There is something for everyone involved in the process, from business stakeholders to developers, administrators, and security analysts.
Calling on engineers

When civil engineers build a bridge, they have to consider the weight of vehicles and people crossing the bridge, the stress caused by wind and other natural elements, and the materials used to build the bridge itself. Buildings have to meet specific structural and fire codes to make sure they are safe and will not collapse. Similarly, software engineers need to build systems with security controls already included in the design, not added afterward as a separate component.

If bridges were routinely collapsing, scientists and engineers would be immediately on the scene to figure out what went wrong and identify how to fix it for future projects. Currently, instead of asking engineers and scientists to perform root-cause failure analysis to find and fix the problem, cybersecurity focuses on add-ons. Changing how technology is designed and built—by strengthening underlying systems and system components, and developing with well-defined security requirements—would help reduce the number of known, unknown, and adversary-created vulnerabilities, Ross said.

NIST’s approach echoes what Dan Kaminsky, chief scientist and co-founder of White Ops, said in his keynote speech at the Black Hat security conference earlier this year. Kaminsky called for an “NIH [National Institutes of Health] for Cyber” to study the security challenges and come up with engineering solutions addressing them. While Kaminsky was using the name of a different federal agency, his message was the same: Cybersecurity needs to be treated as an engineering discipline, with tools and principles that can be used to build secure systems. “We didn’t stop our cities from burning by making fire illegal or heal the ill by making sickness a crime. We actually studied the problems and learned to deliver safety,” Kaminsky said in his speech.
“If we want to make security better, give people environments that are easy to work with and still secure.”

Addressing the IoT problem

While NIST focused the language on systems and software, the guidance provides a welcome direction for the internet of things, most of which hit the market with little to no security controls. NIST’s authority extends only to government agencies and contractors, so the guidance is not binding for engineers working in the private sector. Even so, these recommendations can raise expectations on what features must be included to be acceptable for the marketplace.

This NIST publication is the culmination of nearly four years of work, Ross said. The final draft was originally expected in December, but the release date was moved up after a crippling distributed denial-of-service attack against Dyn temporarily cut off access to large parts of the internet. The attack also revived discussions on whether the government should try to regulate the security of IoT, especially since there are currently no consequences for manufacturers selling subpar devices to consumers. Regulation would be difficult, as many of the embedded devices aren’t manufactured in the United States. “While I’m not taking a certain level of regulation off the board, the United States can’t regulate the world,” Rep. Greg Walden (R-Ore.), chairman of the Subcommittee on Communications and Technology, said during a recent Congressional hearing on IoT security.

Building trustworthy systems

The rapid pace of technological innovation, the dramatic growth in consumer demand for new technology, and the boom in IoT have made it difficult to understand, let alone protect, the global information technology infrastructure. There are too many areas to cover—software, firmware, hardware components—and cyberhygiene efforts, such as patching, asset management, and vulnerability scanning, are not enough.
“Our fundamental cybersecurity problem can be summed up in three words—too much complexity,” Ross wrote. “Creating more trustworthy, secure systems requires a holistic view of the problems, the application of concepts, principles, and best practices of science and engineering to solve those problems, and the leadership and will to do the right thing—even when such actions may not be popular.”

The number of enterprises with at least one security vulnerability is the highest in five years

London, UK - 9 November 2016 - Enterprises across the globe are refreshing their network equipment earlier in its lifecycle in a move to embrace workplace mobility, Internet of Things, and software-defined networking strategies.
In addition, their equipment refresh is more strategic, with architectural vision in mind.

But despite the higher refresh rate, networks are getting less secure, largely due to neglected patching.

These are some of the highlights of the annual Network Barometer Report, published today by Dimension Data.

First published in 2009, the 2016 Network Barometer Report was compiled from data gathered from 300,000 service incidents logged for client networks that Dimension Data supports.

Dimension Data also carried out 320 technology lifecycle management assessments covering 97,000 network devices in organisations of all sizes and all industry sectors across 28 countries.

Andre van Schalkwyk, Senior Practice Manager, Network Consulting at Dimension Data, said: “Since 2010, networks had been ageing.

This year’s Report reverses that trend, and for the first time in five years, we’re seeing networks age more slowly.

“Ageing networks are not necessarily a bad thing: companies just need to understand the implications.

They require a different support construct, with gradually increasing support costs. On the other hand, this also means that organisations can delay refresh costs,” says van Schalkwyk, who points out that ageing networks are unlikely to support initiatives such as software-defined networking and automation, or to handle the traffic volumes necessary for collaboration or cloud.

According to the Report, in Europe, Asia-Pacific, and Australia enterprises’ network age reduced in line with the global average, while in the Americas, the number of ageing and obsolete devices decreased much faster, from 60% in the 2015 Report to 29% in the 2016 Report.

This can be attributed to the release of pent-up spend following four years of financial constraint.
Van Schalkwyk said clients in the Americas appear to be refreshing networks with the new generation of programmable infrastructure.
In Asia-Pacific and Australia, equipment refresh occurred as part of data centre network redesigns.

In contrast to the global trend, in Middle East and Africa, the network age increased, possibly the result of economic uncertainty, particularly in South Africa.

Meanwhile, of the 97,000 network devices that Dimension Data discovered, the number of devices that have at least one known security vulnerability[1] increased from 60% in the 2015 Report to 76% in the 2016 Report – the highest figure in five years.

In Europe the rise in network vulnerabilities has been very steep over the last three years, hiking from 26% in 2014 to 51% in 2015 and to 82% in the 2016 Report. Network vulnerability has also risen in organisations in the Middle East and Africa over the last three years.
In Australia, 87% of network devices have at least one known vulnerability.
In Asia-Pacific and the Americas, networks are slightly less vulnerable, at 49% and 66% respectively, compared to 61% and 73% in the previous edition.

Other highlights in the 2016 Network Barometer Report include:

  • The percentage of devices supporting IPv6 rose steeply from 21% last year to 41% this year, due to the increase in current devices in networks.

    This allows organisations with newer networks to support their digitisation strategies by enabling connectivity for the Internet of Things, big data, analytics, and containerisation.
  • Software-defined networking is coming soon, but not just yet. While there is market interest in software-defined networks, it’s early in the adoption cycle and today, few organisational networks are capable of supporting a software-defined approach.
    In 2015 less than 0.4% of devices could support software-defined WAN and only 1.3% of data centre switches were SDN-ready.
  • Incident response is 69% faster, and repair time 32% faster, on networks monitored by Dimension Data.
    These times reduce by a further 55% and 36% respectively when combined with Dimension Data’s service desk integration.
  • 37% of incidents are caused by configuration or human error, which can be avoided with proper monitoring, configuration management, and automation.
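The final point above, that over a third of incidents stem from configuration or human error, is the core argument for automated configuration management: machines comparing live state against an intended state catch drift before it causes an outage. As a minimal sketch (the device names and parameters below are illustrative, not from the report), a drift check might compare each device's running configuration against a golden baseline:

```python
# Minimal configuration-drift check: compare a device's running
# configuration against a "golden" baseline and report deviations.
# All keys and values here are hypothetical examples.

GOLDEN_CONFIG = {
    "ntp_server": "10.0.0.1",
    "snmp_community": "readonly",
    "syslog_host": "10.0.0.2",
}

def find_drift(device_config, golden=GOLDEN_CONFIG):
    """Return {key: (expected, actual)} for every deviating or missing key."""
    drift = {}
    for key, expected in golden.items():
        actual = device_config.get(key)
        if actual != expected:
            drift[key] = (expected, actual)
    return drift

# Example: one device has the wrong NTP server and no syslog host set.
device = {"ntp_server": "10.0.0.9", "snmp_community": "readonly"}
print(find_drift(device))
# {'ntp_server': ('10.0.0.1', '10.0.0.9'), 'syslog_host': ('10.0.0.2', None)}
```

Real tools (Ansible, Salt, NetChart and the like) add inventory, templating, and remediation on top, but the monitor-against-baseline loop is the same idea.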

[1] A security advisory is a notice issued by a manufacturer that they are aware of a security vulnerability on one of their products.


About Dimension Data
Dimension Data uses the power of technology to help organisations achieve great things in the digital era.

As a member of the NTT Group, we accelerate our clients’ ambitions through digital infrastructure, hybrid cloud, workspaces for tomorrow, and cybersecurity. With a turnover of USD 7.5 billion, offices in 58 countries, and 31,000 employees, we deliver wherever our clients are, at every stage of their technology journey. We’re proud to be the Official Technology Partner of Amaury Sport Organisation, which owns the Tour de France, and the title partner of the cycling team, Team Dimension Data for Qhubeka.
Visit us at http://www.dimensiondata.com

For more information
Charlotte Martin / Jonathan Mathias
Finn Partners
020 3217 7060
DimensionData@finnpartners.com

The Dell Endpoint Data Security and Management Portfolio includes technologies from both Dell and EMC, from Mozy to RSA and AirWatch.

AUSTIN, Texas—Dell EMC is pulling together parts from both companies to create a new broad portfolio of endpoint security and management offerings that officials say will be increasingly important as employees become more mobile and more mobile devices are brought into the corporate environment. The new Dell Endpoint Data Security and Management Portfolio, introduced Oct. 19 here at the Dell EMC World 2016 show, also comes as organizations are looking to pare down the number of security vendors they deal with. The lineup of offerings draws upon Dell and Mozy by Dell, as well as on former EMC businesses RSA and VMware's AirWatch.

Executives from both Dell and EMC worked together from the time Dell's bid to buy EMC for more than $60 billion was announced in October 2015 to plan out how to align their respective product portfolios.

The result is that the combined company has been able to unveil enhanced products and lineups six weeks after the deal closed. The new security portfolio "conveys the potential of the combination of the two companies and what we will be able to do over time," Jeff Clarke, vice chairman of operations and president of Dell Technologies' Client Solutions Group, told journalists during a roundtable discussion at the show. Businesses are seeing employees not only work outside the office more often, but also use a growing number of computing devices—from notebooks to smartphones to tablets—for their work.

At the same time, the number of connected devices, systems and sensors is increasing rapidly, with CEO Michael Dell saying during his keynote address that by 2031 there could be as many as 200 billion worldwide. Such numbers increase the attack surface for cyber-criminals and call for a broader approach to security, especially as attacks become more sophisticated, Dell said. With the acquisition of EMC, Dell now has the tools to offer a more complete lineup of security capabilities that not only work to keep intruders out of the corporate network, but also to more quickly detect and remediate threats when they get in. In the area of data protection, Dell unveiled Dell Data Protection | Endpoint Security Suite Enterprise, which offers authentication, file-based data encryption and advanced threat prevention.
In addition, the portfolio includes MozyEnterprise and MozyPro for protecting cloud data in laptops, desktops and small servers in distributed enterprise environments and for small and midsize businesses (SMBs) and making it easier to recover from incidents that result in data loss.

Through Mozy, customers also can sync files across devices such as multiple laptops, PCs and smartphones.

For identity assurance, RSA SecurID Access is designed to provide authentication that includes context-based access control and single sign-on for improved access to web and software-as-a-service (SaaS) applications. RSA NetWitness Endpoint is for threat detection and response through the use of behavioral analytics and machine learning capabilities. Dell also is offering VMware's AirWatch Unified Endpoint Management for over-the-air device management, which covers everything from configuration management and operating system patch management to client health and security management.

Brett Hansen, executive director of data security solutions at Dell, said combining the technologies is the first step in creating a more complete security solution, with fuller integration of the products coming in the second half of 2017. Clarke and other Dell officials said that another challenge for customers is reducing the number of point security products and vendors they're using in their environments.

There are about 1,600 security technology vendors in the world, and some larger enterprises are using applications from dozens of vendors, they said.

Customers want to whittle down those numbers, and while Dell officials understand that businesses won't use only one vendor, they believe they have the broad lineup of products that customers will want as part of their larger security infrastructure.
Updated spacewalk-proxy-installer, osad, and koan packages are now available for Red Hat Network Tools. Red Hat Network Tools provide programs and libraries that allow your system to use the provisioning, monitoring, and configuration management capabilities provided by Red Hat Network and Red Hat Network Satellite.

This update fixes the following bugs:

* Prior to this update, systems registered through a Satellite Proxy received a 404 response when attempting to register to Red Hat Access Insights.

This update fixes the Red Hat Satellite Proxy configuration to address this problem. Users need to run the configure-proxy.sh script after applying this erratum for the fix to take effect. (BZ#1367918)

* Prior to this update, performing a kickstart installation of a Red Hat Enterprise Linux 7 system from Satellite did not work correctly due to koan not correctly handling GRUB 2 parameters.

This bug has been fixed and kickstart now works as expected. (BZ#1208253)

* Prior to this update, a variety of SSL-related misconfigurations could result in the osad service on a client failing with a cryptic "Not able to reconnect" error.

This update adds a link to a Red Hat Knowledgebase article with details on what might be wrong and how to fix it. (BZ#1277448)

Users of Red Hat Network Tools are advised to upgrade to these updated packages, which fix these bugs. Before applying this update, make sure that all previously released errata relevant to your system have been applied. This update is available via Red Hat Network.
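Each package in this erratum is published with MD5 and SHA-256 checksums, which administrators can use to verify a downloaded RPM before installing it. A minimal sketch in Python (the helper names are illustrative, not part of any Red Hat tooling; the digest shown is the published SHA-256 for the el7 osad package):

```python
import hashlib

# Published SHA-256 checksums for this erratum (excerpt).
EXPECTED_SHA256 = {
    "osad-5.11.44-10.el7sat.noarch.rpm":
        "0fc0681b328c77629278f50abc8cf3a381ed05e5b8ca6d1495bfff374bf2cbd0",
}

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path):
    """Return True only if the file's digest matches a published value."""
    name = path.rsplit("/", 1)[-1]
    expected = EXPECTED_SHA256.get(name)
    return expected is not None and sha256_of(path) == expected
```

In practice the GPG signature check mentioned at the end of the advisory (rpm's built-in signature verification) is the stronger guarantee; the checksum comparison is a quick integrity check on the download itself.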

Details on how to use the Red Hat Network to apply this update are available at https://access.redhat.com/articles/11258

Red Hat Network Tools SRPMS:
cobbler-2.0.7-68.el5sat.src.rpm    MD5: eef37836efffabd8e746f92e198ba40a    SHA-256: e3630570a86e3511c7b42d9f1d7c82c0c7ab474bf92e88eea84185d803b3d663
cobbler-2.0.7-68.el6sat.src.rpm    MD5: 388aed7acb157e4e52617d9c001c4af4    SHA-256: a007fb560820191a205a6e272fc00163c9a59f4950bd0e17cd1813b71ba0923a
cobbler-2.0.7-68.el7sat.src.rpm    MD5: 524d84bb95793724997b76f8de2a2b99    SHA-256: 9fca746e6417cb277c495b83f40931f54018896cd14afe08e08a1364d3d6a1fb
osad-5.11.44-10.el5sat.src.rpm    MD5: dd851ff75dff62072c70d701fec502ef    SHA-256: 3fb765c1a2a9f7d5f8f074fc7d7fd21865231b80e2e2bad6b1fc41f1658ef8a5
osad-5.11.44-10.el6sat.src.rpm    MD5: 20a87e1b34e5e09592be3f7f825203a5    SHA-256: 8c75023646f576a40705d18a821d8c1c6f98efa8d3547509a6e0561ae598ab04
osad-5.11.44-10.el7sat.src.rpm    MD5: 0d52d7487f79f80590a7893207e30979    SHA-256: 63c1f416b67fe5f8fc3d543d4583c6545ee08b3b532c2987dda20796fb9a38d3
spacewalk-proxy-installer-2.0.1-5.el5sat.src.rpm    MD5: 64f078a00006cb229c5ba8c6429537b8    SHA-256: 727826df4f352eb69c3d04174bbfd3aa06dee4ab60ea89006d2773a6e44939a1
spacewalk-proxy-installer-2.3.0-7.el6sat.src.rpm    MD5: a5a21b417d3a0d3d2d3e73804aeeaae0    SHA-256: e6563351729c9aa1cee10663134687f0b411201461990a6942738ede64419c90

Binary packages (all noarch; checksums are identical wherever the same package appears under multiple architectures):
koan-2.0.7-68.el5sat.noarch.rpm    MD5: c82b9553363c882e53857f7c22b8e05b    SHA-256: d453a9879ed42994a5c62c9c56ade53bb133b9f3176e9ac1d0df52b1222168c5
koan-2.0.7-68.el6sat.noarch.rpm    MD5: 5fc1764a2abd04eb235d1c428cc2fa40    SHA-256: c612bf7fe2f5dfda48f2f8926ed6b3394d8a75dd009d7756e8e68980e95381bb
koan-2.0.7-68.el7sat.noarch.rpm    MD5: b6138013ccedc2fed9bb8603fff20821    SHA-256: 2b53f3ab8cdc3ef3b4eff03682657b02ab031e69a018a4006330e24649a7c104
osad-5.11.44-10.el5sat.noarch.rpm    MD5: 522d8c4e1fed357d2fc3ffd397abc6fe    SHA-256: 0e52f1800502b3be615a8c199dddb1f162d7647741981c102e8b8845e3f9b0c7
osad-5.11.44-10.el6sat.noarch.rpm    MD5: 2db8c3dc4a642474a92b2e39793178c1    SHA-256: 7923d6c105c7b90ef7d457d4c66efaa6b0ef03e7fa8f18e5a501285b8d37be34
osad-5.11.44-10.el7sat.noarch.rpm    MD5: 5b5e226fce2a9acc7120b88f06d8f8e8    SHA-256: 0fc0681b328c77629278f50abc8cf3a381ed05e5b8ca6d1495bfff374bf2cbd0
spacewalk-proxy-installer-2.0.1-5.el5sat.noarch.rpm    MD5: 85aeb54a30bd7531ea65d358b0941c7f    SHA-256: ede683e0082745547d96afc4753e2ff49ab538d7dc87f4c4068cdeda6cd735de
spacewalk-proxy-installer-2.3.0-7.el6sat.noarch.rpm    MD5: 632087f84a051a3095ba3539e919d0a9    SHA-256: a1e8f7d01b5ff64565ec18c92a2d99a6718b8af592e541f5294017faa22838fc

Architecture availability:
IA-32: koan (el5sat, el6sat), osad (el5sat, el6sat), spacewalk-proxy-installer (el5sat, el6sat)
IA-64: koan (el5sat), osad (el5sat)
PPC: koan (el5sat, el6sat, el7sat), osad (el5sat, el6sat, el7sat), spacewalk-proxy-installer (el6sat)
PPC64LE: koan (el7sat), osad (el7sat)
s390x: koan (el5sat, el6sat, el7sat), osad (el5sat, el6sat, el7sat), spacewalk-proxy-installer (el6sat)
x86_64: koan (el5sat, el6sat, el7sat), osad (el5sat, el6sat, el7sat), spacewalk-proxy-installer (el5sat, el6sat)

(The unlinked packages above are only available from the Red Hat Network.)

1367918 - Missing Reverse Proxy configuration to allow host registration to Insights through the RHN Proxy

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from: