
Hack hole turns pleb users to admin queens, kills AV to boot

Lenovo has patched a dangerous hole in its rebuilt Solution Center that could allow attackers to gain god-mode access on compromised machines and to kill running processes, including anti-virus. The pre-installed OEM software helps users update Lenovo tools and manage features like firewalls. Attackers who already have unprivileged access to a machine can escalate to run tasks with local system rights.

Trustwave lead researcher Martin Rakhmanov quietly reported the flaws (CVE-2016-5248 and CVE-2016-5249) to Lenovo, which issued a patch. "This could be used in mounting further attacks by disabling anti-virus or some other protection mechanisms for instance," Rakhmanov says. "Specifically, we at Trustwave SpiderLabs found that the new version, even though significantly reworked, still allowed unprivileged users to elevate privileges to LocalSystem."

Rakhmanov says that the TCP server API that loads .NET assemblies from disk does not restrict loading to trusted paths, as intended. Instead, it loads any .NET assembly on the same partition where the Lenovo Solution Center software is installed. Attackers can load their malicious .NET assemblies into a privileged process, granting them easy privilege escalation.

Users should upgrade to version 3.3.003 of the Lenovo Solution Center or uninstall it to protect themselves. Lenovo took about five weeks to fix the flaw, faster than when similar holes were reported in Solution Center in May. ®
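The underlying mistake is a common one: a privileged service loading executable modules from a location that unprivileged users can write to. The sketch below is a rough, hypothetical illustration in Python (not Lenovo's actual .NET code; TRUSTED_DIR and the function names are invented) of the canonicalise-then-allow-list check that closes this class of hole:

```python
import os

# Hypothetical allow-listed install directory -- not a real Lenovo path.
TRUSTED_DIR = "/opt/vendor/plugins"

def is_trusted(path: str) -> bool:
    """Return True only if `path` resolves inside the trusted directory.

    Canonicalising with realpath() first defeats symlink tricks that
    would otherwise let an attacker escape the allow-listed tree.
    """
    real = os.path.realpath(path)
    return os.path.commonpath([real, TRUSTED_DIR]) == TRUSTED_DIR

def load_plugin(path: str) -> None:
    # The flawed behaviour was the opposite: any assembly on the same
    # partition as the install directory was accepted.
    if not is_trusted(path):
        raise PermissionError(f"refusing to load untrusted module: {path}")
    # ... only now hand the file to the module loader ...
```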
The critical open-source hypervisor used by major public cloud providers now supports live patching and incorporates other new features. The open-source Xen Project issued a major release of its namesake virtualization hypervisor on June 23 with the debut of Xen 4.7, which adds live patching and other security features.

Xen is the underlying hypervisor that helps power the largest public cloud provider in the world, Amazon Web Services (AWS), as well as public clouds from Rackspace, IBM and others. Xen is also widely deployed in private cloud and enterprise production environments.

One of the biggest issues with any technology today is the constant need to patch for security vulnerabilities. Prior to the Xen 4.7 release, applying a Xen patch required a system restart, but that's no longer the case, thanks to the new live-patching feature. The idea of live patching is not new in the open-source world: rather than requiring a service to be stopped and restarted before the patch takes effect, a live-patching system applies the patch to a running system that doesn't need to be rebooted.

The Linux kernel first introduced live patching technology as part of the Linux 4.0 update in April 2015. "The live patching technology within the Xen Project itself is a completely independent implementation of live patching, but [is] based on ideas used in other implementations of live patching for Linux and input from the Xen Project community," James Bulpin, member of the Xen Project advisory board and senior director of technology and chief architect for XenServer at Citrix Systems, told eWEEK. For Xen users, the live patching capability could have a very large impact.
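To make the mechanics concrete, here is a minimal sketch of what a reboot-free fix looks like from a privileged domain, assuming the xen-livepatch command-line tool that ships with Xen 4.7 and its upload/apply/list subcommands (the patch name and file path are invented):

```python
import subprocess

def live_patch(name: str, patch_file: str) -> None:
    """Stage and apply a hypervisor live patch without a reboot.

    Assumes the xen-livepatch tool shipped with Xen 4.7+; run from dom0.
    """
    # Stage the binary patch inside the running hypervisor.
    subprocess.run(["xen-livepatch", "upload", name, patch_file], check=True)
    # Switch the staged patch live; guests keep running throughout.
    subprocess.run(["xen-livepatch", "apply", name], check=True)
    # Show the patch's state for verification.
    subprocess.run(["xen-livepatch", "list"], check=True)

# Invented name and path, for illustration only.
live_patch("xsa-fix-1", "/root/xsa-fix-1.livepatch")
```

The upload/apply split lets an administrator stage a patch, inspect its state, and only then switch it live.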

Back in October 2014, Amazon, Rackspace and IBM SoftLayer all had to reboot their cloud servers to apply a critical patch.

The critical issue turned out to be CVE-2014-7188, a memory-related security problem. While the security bug itself was kept private until the public cloud providers were able to patch, the fix still took time to roll out and risked service disruptions. With Xen 4.7 and future releases, live patching an issue such as CVE-2014-7188 will be significantly less troublesome for Xen users. Most security vulnerabilities like CVE-2014-7188 should be straightforward for public clouds to patch without rebooting, Bulpin said. "Now with live patching, the choice to reboot is in the hands of the cloud admins."

Xen 4.7 also includes the ability to let users easily exclude capabilities they don't need, with a tool called Kconfig.

The basic idea behind the removal capability is that by only including features that a specific deployment needs, the potential attack surface can be reduced. "Previously, to configure what components are enabled, users would need to edit a configuration file in a source tree by hand," Bulpin said. "Now they can just use the Kconfig infrastructure to have a better experience." The default configuration of Xen contains components that upstream developers think average users would employ, and such configuration is fully supported by the upstream project, he said.

"The Xen Project is seeing growing use across a range of different use cases, including public cloud, traditional server virtualization, automotive, aviation and other embedded scenarios," Bulpin said. "Although these all use the core hypervisor functions, they don't all need the same set of drivers, schedulers and other components."

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com.

Follow him on Twitter @TechJournalist.
Silent maintainers put on notice

An unpatched remote code execution hole has been publicly disclosed in the popular Swagger API framework, putting users at risk. The client- and server-side hole (CVE-2016-5641) exists in code generators within the REST programming tool, also known as the OpenAPI Specification. A module for the popular Metasploit hacking suite has been crafted, making exploitation of the flaw easier.

Application security researcher Scott Davis says injectable parameters in Swagger JSON or YAML files allow remote code execution across NodeJS, PHP, Ruby, and Java. "By leveraging this vulnerability, an attacker can inject arbitrary execution code embedded with a client or server generated automatically to interact with the definition of service," Davis says. "This is considered an abuse of trust in definition of service, and could be an interesting space for further research."

Swagger's maintainers have been quiet despite the disclosure, made last month through US CERT and coupled with a patch proffered by Rapid7. Other code generation tools may be open to parameter injection, Davis warns. The Swagger generators are a "powerful" means for organisations to offer developers easy access to their APIs, but many fail to account for malicious Swagger definition documents, which lead to classic parameter injection. Maliciously crafted Swagger documents can be used to dynamically create HTTP API clients and servers with embedded arbitrary code execution in the underlying operating system.

This is achieved by the fact that some parsers/generators trust insufficiently sanitized parameters within a Swagger document to generate a client code base. On the client side, the vulnerability lies in trusting a malicious Swagger document to generate any code base locally, most often in the form of a dynamically generated API client. On the server side, it lies in a service that consumes Swagger to dynamically generate and serve API clients, server mocks and testing specs. Until the affected code generators are patched by maintainers, users can do little but "carefully inspect Swagger documents" for language-specific escape sequences. ®
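That inspection can be partially automated. The following sketch is hypothetical Python, not Rapid7's patch: it walks a Swagger JSON document and flags string values containing characters that commonly terminate a string literal or comment in generated code (the pattern list is illustrative and deliberately incomplete):

```python
import json
import re

# Characters and sequences that commonly break out of string literals or
# comments in generated NodeJS, PHP, Ruby, and Java code.
SUSPICIOUS = re.compile(r"""['"\\\r\n]|\*/|<\?|%>""")

def walk(path, value, findings):
    """Recursively visit every key and string value in the document."""
    if isinstance(value, dict):
        for key, child in value.items():
            walk(f"{path}.{key}", key, findings)
            walk(f"{path}.{key}", child, findings)
    elif isinstance(value, list):
        for index, child in enumerate(value):
            walk(f"{path}[{index}]", child, findings)
    elif isinstance(value, str) and SUSPICIOUS.search(value):
        findings.append((path, value))

def audit_swagger(doc_path):
    """Flag spec fields that could be injected into generated code."""
    with open(doc_path) as handle:
        document = json.load(handle)
    findings = []
    walk("$", document, findings)
    return findings

for location, value in audit_swagger("swagger.json"):
    print(f"suspicious value at {location}: {value!r}")
```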
Researchers have detected a family of malicious apps, some of which were available in Google Play, that contain malicious code capable of secretly rooting an estimated 90 percent of all Android phones. In a recently published blog post, antivirus provider Trend Micro said that Godless, as the malware family has been dubbed, contains a collection of rooting exploits that works against virtually any device running Android 5.1 or earlier.

That accounts for an estimated 90 percent of all Android devices. Members of the family have been found in a variety of app stores, including Google Play, and have been installed on more than 850,000 devices worldwide.

Godless has struck hardest at users in India, Indonesia, and Thailand, but so far less than 2 percent of those infected are in the US. Once an app with the malicious code is installed, it has the ability to pull from a vast repository of exploits to root the particular device it's running on.
In that respect, the app functions something like the many available exploit kits that cause hacked websites to identify specific vulnerabilities in individual visitors' browsers and serve drive-by exploits.

Trend Micro Mobile Threats Analyst Veo Zhang wrote: Godless is reminiscent of an exploit kit, in that it uses an open-source rooting framework called android-rooting-tools.

The said framework has various exploits in its arsenal that can be used to root various Android-based devices.

The two most prominent vulnerabilities targeted by this kit are CVE-2015-3636 (used by the PingPongRoot exploit) and CVE-2014-3153 (used by the Towelroot exploit).

The remaining exploits are deprecated and relatively unknown even in the security community. In addition, with root privilege, the malware can then receive remote instructions on which app to download and silently install on mobile devices.

This can then lead to affected users receiving unwanted apps, which may then lead to unwanted ads.

Even worse, these threats can also be used to install backdoors and spy on users. The first Godless apps stored the rooting exploits in a binary file called libgodlikelib.so directly on an infected device. Once an app is installed, it waits for the device screen to turn off and then proceeds with its rooting routine.

After it successfully roots the device, it installs an app with all-powerful system privileges so it can't easily be removed.

The earlier apps also install a system app that implements a standalone Google Play client that automatically downloads and installs apps.

The client can also leave feedback in Google Play to fraudulently improve certain apps’ rankings. More recent Godless apps download the rooting exploit and payload from the server located at hxxp://market[.]moboplay[.]com/softs[.]ashx, most likely so that the malware can bypass security checks done by Google Play and other app stores.

The later variants also install a backdoor with root access in order to silently install apps on affected devices. The post went on to say that "various apps in Google Play," including utility apps such as flashlights and Wi-Fi apps and copies of popular games, contain the malicious rooting code.

Trend identified only one such app by name.
It was called Summer Flashlight, and had been installed from 1,000 to 5,000 times.

The app was recently ejected from Google Play, but for the time being, its listing is still available in search engine caches.

Evil twin

The Trend post also said researchers encountered a large number of benign apps in both Google Play and elsewhere that have corresponding malicious versions that share the same developer certificate. "Thus, there is a potential risk that users with non-malicious apps will be upgraded to the malicious versions without them knowing about apps’ new malicious behavior. Note that updating apps outside of Google Play is a violation of the store’s terms and conditions." Godless is only the latest Android malware to use rooting bugs to gain a persistent foothold on handsets. Last November, researchers discovered a family of more than 20,000 trojanized apps that used powerful exploits to gain root access to the Android operating system. Root exploits aren't automatically malicious. People often deliberately use them to expand the capabilities of their devices or to bypass restrictions imposed by carriers or manufacturers.

But because root exploits have the ability to circumvent key Android security protections, users should run them only after thoroughly researching the topic and the specific app that's doing the rooting.

As always, Android users should avoid using third-party app stores, with the notable exception of Amazon's.

Even when downloading from one of these stores, users should avoid apps from unknown developers.
Updated cockpit packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 7 Extras.

Cockpit is a server administration interface that makes it easy to administer GNU/Linux servers through a web browser. The cockpit packages have been rebased to version 0.108, which provides a number of bug fixes and enhancements over the previous version. Notably, a strict browser security policy for Cockpit is now enforced. This defines what code can be run in a Cockpit session and mitigates a number of browser-based attacks. (BZ#1337971)

Users of cockpit are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258

Red Hat Enterprise Linux Extras (v. 7)

SRPMS:
cockpit-0.108-1.el7.src.rpm     MD5: 7741b662f10b01905463d6d3f06f2645  SHA-256: d666d07f1041a4633483b71c571c99bb896ff387378d7515c74ebf61b7b7de2c

x86_64:
cockpit-0.108-1.el7.x86_64.rpm     MD5: 623b63d6994cde6d86bf14071a0a655e  SHA-256: eae38b4904d6c451bdee2e3ee2751c72ae4bfe79e5bf5504348ec54e3b23e9ea
cockpit-bridge-0.108-1.el7.x86_64.rpm     MD5: 1ff814c833e6a254b10bba04f4a32064  SHA-256: 0c2ae719be77b5533063debe2c5d975c649e256d4c6106bddb31f883ae57b0d9
cockpit-debuginfo-0.108-1.el7.x86_64.rpm     MD5: 80e2e11e2d157d72405158bc82d4ca62  SHA-256: 18bb769a91123e76b392172e3c5a03c2492f8ca2f47645f5185ec4265ceca577
cockpit-doc-0.108-1.el7.x86_64.rpm     MD5: 279af91930498355c9541d8ed6a3cd32  SHA-256: 8ae30c4a9f9bbfda74c894e9f5acf9a2741c3991259e51b04d0039f3f19d2c22
cockpit-docker-0.108-1.el7.x86_64.rpm     MD5: b1348fbea9aa222e047df23d6fc0d23e  SHA-256: b9a5ecd1c55b8086a2b07c2c8e09785a3752ee1b1c6ebaf9e8564c31c67ed0bd
cockpit-pcp-0.108-1.el7.x86_64.rpm     MD5: fd0cddc203502546fe9909c4eeed293e  SHA-256: 27be6119053e8313f2357690398b7777ca26356c5ed6e306b4402ef6c82eb001
cockpit-shell-0.108-1.el7.noarch.rpm     MD5: ee43a3f7057ed85b10707b4446ca8719  SHA-256: e2c882b749aa8bb6d9af6b916faa66948e269e8873b663c5f603ae9280c39448
cockpit-storaged-0.108-1.el7.noarch.rpm     MD5: 108eb26e40cdd50f8ee61745f6eea7ec  SHA-256: 86784453796254bd5b395d42b3564b7910cac207646eabafe7d231bad4473beb
cockpit-ws-0.108-1.el7.x86_64.rpm     MD5: 8c3a2c004980e9e8833caa3a72cb4a23  SHA-256: 5418de459a1f9ba2baa0d8d429fb161fb0e2bdc80f9d410becfd9d814d809df9

(The unlinked packages above are only available from the Red Hat Network)

1337971 - Rebase Cockpit in RHEL Extras 7.2.5

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
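As a practical aside, the digests listed above can be checked before installation. Below is a minimal sketch in Python using the SHA-256 published above for the x86_64 cockpit package; it only detects a download that doesn't match the advisory, and verifying the GPG signature with Red Hat's key, as the advisory says, remains the stronger check:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected value copied from the advisory's package list above.
EXPECTED = "eae38b4904d6c451bdee2e3ee2751c72ae4bfe79e5bf5504348ec54e3b23e9ea"

actual = sha256_of("cockpit-0.108-1.el7.x86_64.rpm")
print("OK" if actual == EXPECTED else f"MISMATCH: {actual}")
```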
Part of an e-mail thread discussing workarounds to keep Hillary Clinton's private e-mail server from being blocked by security filters at the State Department. Clinton Chief of Staff Huma Abedin suggested Clinton get a State Department e-mail account so her messages didn't "go to spam." Documents recently obtained by the conservative advocacy group Judicial Watch show that in December 2010, then-US Secretary of State Hillary Clinton and her staff were having difficulty communicating with State Department officials by e-mail because spam filters were blocking their messages.

To fix the problem, State Department IT turned the filters off—potentially exposing State's employees to phishing attacks and other malicious e-mails. The mail problems prompted Clinton Chief of Staff Huma Abedin to suggest to Clinton, "We should talk about putting you on State e-mail or releasing your e-mail address to the department so you are not going to spam." Clinton replied, "Let's get [a] separate address or device but I don't want any risk of the personal [e-mail] being accessible." The mail filter system—Trend Micro's ScanMail for Exchange 8—was apparently causing some messages from Clinton's private server (Clintonemail.com) to not be delivered.
Some were "bounced;" others were accepted by the server but were quarantined and never delivered to the recipient.

According to the e-mail thread published yesterday by Judicial Watch, State's IT team turned off both spam and antivirus filters on two "bridgehead" mail relay servers while waiting for a fix from Trend Micro. There was some doubt about whether Trend Micro would address the issue before State performed an upgrade to the latest version of the mail filtering software. "I am not confident Trend Micro will provide an update for SMEX 8," wrote one member of State's IT team, Trey Jammes. "That is two revs behind their current offering, SMEX 10, and they are pushing us to go to that (currently in pilot), and they have not yet been able to deliver a fool-proof solution for a problem that has been around for at least 2 years. Unfortunately, we have seen similar problems with SMEX 10… I don't think we have seen that problem with SMEX 10 when running without the anti-spam piece." A State Department contractor support tech confirmed that two filters needed to be shut off in order to temporarily fix the problem—a measure that State's IT team took with some trepidation, because the filters had "blocked malicious content in the recent past." It's not clear from the thread that the issue was ever satisfactorily resolved, either with SMEX 8 or SMEX 10.

But State's unclassified e-mail system has been repeatedly breached by attackers. An attack purportedly staged by Russian hackers caused the department to briefly shut down all its unclassified e-mail systems in 2014, and the intruders persisted within State's network for more than a year afterward.

Then Iranians spear-phished State employees in 2015, breaching the e-mail system again. The latest batch of documents obtained by Judicial Watch also includes an e-mail from Justin Cooper, the former aide to President Bill Clinton who set up the private mail server for Clinton and her staff at State, to Abedin, explaining that the server had been shut down briefly because "we were attacked again." He explained further in a follow-up e-mail, "I had to shut down the server…Someone was trying to hack us and while they did not get in I didn't want to let them have the chance.
I will restart it in the morning."
An updated Samba package that fixes bugs and adds enhancements is now available for Red Hat Gluster Storage 3.1 Update 3. Samba is an implementation of the server message block (SMB) protocol.
It allows the networking of Microsoft Windows®, Linux, UNIX, and other operating systems together, enabling access to Windows-based file and printer shares.
Samba's use of SMB allows it to appear as a Windows server to Windows clients. This update is a rebase to the latest upstream version of Samba 4.4, which fixes a number of bugs and adds a number of enhancements.

The most important of these updates are:

* Samba now supports version 3.1.1 of the SMB protocol.
* The CTDB_LOGGING option has replaced the previous options CTDB_LOGFILE and CTDB_SYSLOG.
* A new 'logging' option for smb.conf and several new logging backends (systemd-journal and lttng) are now available.
* The 'change notify' and 'kernel change notify' options are now global options rather than being specific to a share.
* SMB multi-channel has been added as a Technology Preview.

All users of Red Hat Gluster Storage with Samba are advised to upgrade to this updated package. Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258

Red Hat Gluster Storage Server 3.1 on RHEL-7

SRPMS:
libtalloc-2.1.6-1.el7rhgs.src.rpm     MD5: bfac41d62ab87fd3b68d7cd4904aa7e9  SHA-256: 48f21654c2f365f82fc90d7ed05749098194d441b7243914e1cc58c4a14c5baf
samba-4.4.3-7.el7rhgs.src.rpm     MD5: 4556d02db2811cf2dfb2e4e38e31e736  SHA-256: 0a25cabc4eec2d615f776dc5500a28c9257e99a531db5b8b26e9c1bbe73ae397

x86_64:
ctdb-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 03b4c0df813c5caa33a77cc8dda386ae  SHA-256: 4885728a15e9c17fd963ebaf9f04c30f47c1bb2a594df11083df1359ba5213f4
ctdb-tests-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 5740d8a1c6af97be2c1e633ccefb7a88  SHA-256: 01ccd14be6d60e30c2465c42979fd9284f7e394960d6e7ab217104c3e612fbe9
libsmbclient-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 3c1291c9a1dc3bf2ba2946323044ea27  SHA-256: f3137044160a8aac9d77615537ca68fcca51bed89fd164ce53f4367137397725
libsmbclient-devel-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 86abd48853e6a0fd267999b17c3816fb  SHA-256: 5bd61f0257166f0710aee430a9eb87d1b19b79669510d9388dafa4366e7fb5b2
libtalloc-2.1.6-1.el7rhgs.x86_64.rpm     MD5: 1bbbfbf0e36e483d09278bbaa9bbbe39  SHA-256: 55f9e4913bd398b17a92c85ad1c91cb54af1b8e2d964cbd833b049f5837f57de
libtalloc-devel-2.1.6-1.el7rhgs.x86_64.rpm     MD5: f9d1647715ef786de5cb587273cbd805  SHA-256: 9bab8312d6e922c1bf8511bf384f3ee9ba115f9b5acf26dcc186701676692cee
libwbclient-4.4.3-7.el7rhgs.x86_64.rpm     MD5: bdb63ebe71b2805bbf114c65e19aa87d  SHA-256: b2b77f7e58781d0d44011f6ff575b6fd3df6c1155b40d3f8827a326542d63b78
libwbclient-devel-4.4.3-7.el7rhgs.x86_64.rpm     MD5: eab80d7a9d91157e1fd187404093bb59  SHA-256: a2cff0c66dc02ccfec853edb866778eade67a8fff35254b74304ef0ee656c34c
pytalloc-2.1.6-1.el7rhgs.x86_64.rpm     MD5: 2791549ecae30a9c2320af599319a115  SHA-256: 45400cc7656b53fb07f4e9d862b31adfcaaaeabf82807cf8c260987738fe688c
pytalloc-devel-2.1.6-1.el7rhgs.x86_64.rpm     MD5: b102c846925459c1cb76783e1540248f  SHA-256: f6a03c2e9a9ec6a85aa55fad8765088aae60a4f7dd2e8f3f41ed286b3b1b4ccc
samba-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 08c69cf8d8d17d7c4630f17b709313c6  SHA-256: 315097b961816aa61c4278e285c0b15fa03f0d6894f90d36d6983c776ab492f7
samba-client-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 99cd59b0e3192816a36f661bcf44aa86  SHA-256: 3168c10d78f47dfd620644deb14a61fae8ca6112b20b0246d5c2959e468660c8
samba-client-libs-4.4.3-7.el7rhgs.x86_64.rpm     MD5: af0ad60d744e7eaed98ca9a78e6d0aec  SHA-256: c4d08610f93b3a89a89d5f588fef5e963e59e53bd74027ae23e491437572451c
samba-common-4.4.3-7.el7rhgs.noarch.rpm     MD5: 8afdcf2fd07687e689675780ce4395a0  SHA-256: 5a91d965b7434eb6baa0207e2d2891142528dfd49b40ec3510fae1ea92026fa7
samba-common-libs-4.4.3-7.el7rhgs.x86_64.rpm     MD5: b4478a69bec6c6b6197a72581ed5dfbb  SHA-256: 9073573ce722b58475f298ca5068c675dea5321931c36733852a2e59e6fbc98b
samba-common-tools-4.4.3-7.el7rhgs.x86_64.rpm     MD5: bd24e0dd978247e00816e81f81a69e3a  SHA-256: d7cf7721d4a5c0cdf46d907db83c28cfcfeeaef3a77d8c867add8557e0255c0f
samba-dc-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 74ea3f2ef567cbf113d7966c12a32f1a  SHA-256: 3c3561363438d4e1ca23a1a09a0abfa77f29a9b28c0fd6d69d9bac71fa4e1b85
samba-dc-libs-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 522a0ce55e49cada20936030b5e2bcb1  SHA-256: 44cb9763bcd6004609e48ab6f7a14d28a009a7c89d6655b2fb756f40ccd68599
samba-devel-4.4.3-7.el7rhgs.x86_64.rpm     MD5: cd2b6b07c1f2aceb2ba976c5b9d47c71  SHA-256: f11db5aedc86169e9fbe378ab5915c40a6d78b1c9f1bc86dca528119e61c641d
samba-libs-4.4.3-7.el7rhgs.x86_64.rpm     MD5: cacc9e55ea6a0ce0ecfc3b08e337f6ed  SHA-256: 2195b0356d95bfcdcc0ac82b1aa644825123e6fe5a5ef93485de56f31293f10b
samba-pidl-4.4.3-7.el7rhgs.noarch.rpm     MD5: e0926df67de098891ec3371b510137b7  SHA-256: c90d0523e7221e184c582c656f13de1e7d52498e6d95f2861fc03caa37194186
samba-python-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 416685d77b4a1fbff4976c4fdfd22b45  SHA-256: d5d557650830ebf7470c8dc34b6a4f95eaac73074c4b5e6e86873238a7b1710c
samba-test-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 815e2af1d30ba86047d54eb75b21de68  SHA-256: 07dc06796aad66f5197912656fb71c8dd6150961a685b161d1b4db10c28503e1
samba-test-libs-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 68e7e1a8f601c8aaa641f5170e59e078  SHA-256: 0a0cc2a554dd0d0b4eba1c44ae50ad0f11e6fd268b76a82d38d2f754bb70c0f9
samba-vfs-glusterfs-4.4.3-7.el7rhgs.x86_64.rpm     MD5: d0b0647da5257c5fc996e58e6171a994  SHA-256: 19818a2ba51066ca273284bf3ba9bce5f622cf8c75b065705d2a108f9d8f29eb
samba-winbind-4.4.3-7.el7rhgs.x86_64.rpm     MD5: e3a3ba9a627d9607cbe6fa5d26c285ae  SHA-256: 594bde517b53a6ffe83eaf0ee3aa8dd121b8c4082a5978de3cbecd827feb8051
samba-winbind-clients-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 5daf7245de8b96420bff114a8377cd10  SHA-256: abee757da4fba6c31898ecd44250dcdf0b2f7f860897f044ca4bb915c8ceb91b
samba-winbind-krb5-locator-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 3247b2c9a6f747cf96fce2a5dc50320f  SHA-256: f948443b46e888e0ea8daa7d580b7b5f85cc04923b95bfc3070b69aa514f25e8
samba-winbind-modules-4.4.3-7.el7rhgs.x86_64.rpm     MD5: 741ba0623feb6cfa5fd409357f17eedf  SHA-256: d8d71c9f43e882a5c02883eba1da585e66d2622d51fa378d021fbd19bc210267

(The unlinked packages above are only available from the Red Hat Network)

1318624 - [RHEL7] [RFE] Samba rebase to 4.4 + Samba libs update
1322677 - CTDB: ctdb node remains in banned state until the ctdb service is restarted.
1332237 - SAMBA : New file created in windows mount is not listed in share for quite a while even after multiple refresh on the share
1333360 - Samba: Multiple smbd crashes (notifyd) after a ctdb-internal network interface is brought down in a ctdb cluster.
1335584 - SAMBA CRASH : Multiple smbd crash while performing VSS functionality in windows client
1337569 - SMB: Getting error with upgrade and net join ldb: unable to stat module /usr/lib64/samba/ldb : No such file or directory

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
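As the advisory notes, the packages are GPG signed. A quick pre-install check is rpm's built-in signature verification; the sketch below is illustrative Python wrapped around the standard rpm --checksig invocation (the key-import path in the comment is the usual RHEL location, stated here as an assumption):

```python
import subprocess

def verify_rpm(path: str) -> bool:
    """Check a package's digests and GPG signature with rpm --checksig.

    Assumes Red Hat's release key has already been imported, e.g.
    `rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release`
    (the usual location on RHEL; treat the path as an assumption).
    """
    result = subprocess.run(["rpm", "--checksig", path],
                            capture_output=True, text=True)
    print(result.stdout.strip())
    # rpm exits non-zero when verification fails.
    return result.returncode == 0

verify_rpm("samba-4.4.3-7.el7rhgs.x86_64.rpm")
```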
Updated packages such as rhsc-setup-plugins, redhat-access-plugin-rhsc, and org.ovirt.engine-root, which add an enhancement and fix several bugs, are now available for use with Red Hat Gluster Storage Console 3.1 Update 3. Red Hat Gluster Storage Console is a powerful and simple web-based graphical user interface for managing a Red Hat Gluster Storage environment.
It helps storage administrators to easily create and manage multiple trusted storage pools.

This includes features like elastically expanding and shrinking a pool, and creating and managing volumes. This advisory adds an enhancement and fixes the following bugs:

- A Nagios plugin has been added to monitor whether a replicate volume has entries that are not in sync with other bricks of the replica set. Now, administrators can ensure that they do not perform maintenance actions when there are pending heals, and can also monitor the heal progress by viewing the trending information on entries to be healed. (BZ 1312207)

- Red Hat Access APIs are no longer supported, and the access-plugin is removed from Red Hat Gluster Storage Console. Customers will no longer be able to query support cases or open a ticket from within Red Hat Gluster Storage Console.

For more information, see https://access.redhat.com/solutions/2325641. (BZ 1290720)

- Previously, the glusterd services were stopped automatically while moving the host to maintenance. With this fix, users can select the 'Stop Glusterd services' option while moving the host to maintenance to control the behaviour. (BZ 1311384)

- Previously, when the cluster version was edited in the "Edit Cluster" dialog box in the Red Hat Storage Console, the compatible version field got loaded with the highest available compatibility version and the current version of the cluster was not displayed. With this fix, the cluster version is displayed correctly. (BZ 1167572)

Users of Red Hat Gluster Storage Console are advised to upgrade to these updated packages, which add these enhancements and fix several bugs. Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258

Red Hat Gluster Storage Management Console 3.1 on RHEL-6

SRPMS:
redhat-access-plugin-rhsc-3.1.3-0.el6.src.rpm     MD5: 46fabc5676a7b7bd9f5118376bafe582  SHA-256: 65f9880300e0610e49a32a5f47a012e1cc48b9aafdc9e2b3c78b9d1cab5fb433
rhsc-3.1.3-0.73.el6.src.rpm     MD5: 56c1a6c9dcbe70e75e039472d30ba73d  SHA-256: 2a906accb6cd970bf73d1ba14727c8d55b814bc907707d9c3a3a98a7dd61db8b
rhsc-setup-plugins-3.1.3-1.el6rhs.src.rpm     MD5: 7626dc5041956ec0e52c0f52fdfe4161  SHA-256: f314ad4901a56282a4aa71b1b402cb096ebdade7fda30424736faa7d79726d71

x86_64:
redhat-access-plugin-rhsc-3.1.3-0.el6.noarch.rpm     MD5: 2039cb118ccf3635c0fb19fb0e409cb9  SHA-256: 478e72d3c855c62baeb3fdaa26cd9c14dd2bb5c53e13c6009f2f96c5182bada8
rhsc-3.1.3-0.73.el6.noarch.rpm     MD5: 0f8bea29512675124c48e7afeb18d7d8  SHA-256: 17d76f45426dda6bcb2aae4150f6d340042ff7ec960d618d505d669ddedb3730
rhsc-backend-3.1.3-0.73.el6.noarch.rpm     MD5: 8ad606c9f4e2c32b62b5753face050fd  SHA-256: d456420f7ee894a1bc6d9b15bce11109d83b9d7dddafae45dc195726b5cd284c
rhsc-dbscripts-3.1.3-0.73.el6.noarch.rpm     MD5: 150f9125e12aeb41bd103e41de7ead20  SHA-256: 7d9f433ee111f3e46ecb5f86f64589a96412b4ee877f5637534466519a0e7bb4
rhsc-extensions-api-impl-3.1.3-0.73.el6.noarch.rpm     MD5: 40a5693ff7b952a06e159c44d77a98c0  SHA-256: 60fa7aa986dc3bc22913277fd6bb5f73cc8150a8438fcfbeefe5924a4ff78774
rhsc-extensions-api-impl-javadoc-3.1.3-0.73.el6.noarch.rpm     MD5: 1c581d71bfe566f191115080e599564d  SHA-256: 141cc9ff80fd5d3d66b99a0c3e8dd088b328e4e63b902905874ff29de076edf3
rhsc-lib-3.1.3-0.73.el6.noarch.rpm     MD5: fef2cf6ab979746025452d18a42e24e6  SHA-256: 8855df9ad0e2bde19da44b3c2015fc5287fc1fd1c072f70c985d6537cccd3905
rhsc-restapi-3.1.3-0.73.el6.noarch.rpm     MD5: 758529744345d9b8462d6979e39ff326  SHA-256: a0add3d5cb0a3dfd39ee839bd9440d8e64ab88c86b6e215141aa2a5684cd54e6
rhsc-setup-3.1.3-0.73.el6.noarch.rpm     MD5: 8f9b8b3f4d13bd2494cfe1bee5651785  SHA-256: d07e59c0e4b7182381d5c6b6fa346534cbecbaa6b829bd8d5e9e47e3616018b0
rhsc-setup-base-3.1.3-0.73.el6.noarch.rpm     MD5: c921d4ec9a14da3e2bc529672bea2ac6  SHA-256: 947eed9946fc164dc3965aa6b2547bf4e4ca5bd1769e8a7910f90225ef93b16d
rhsc-setup-plugin-ovirt-engine-3.1.3-0.73.el6.noarch.rpm     MD5: 941a8d80a3834d2caa9e3ca27e01e8a7  SHA-256: 1216d5c507b83ff1db9db8cbb44e567795b57fe0704743b51ccdf3963b8ff509
rhsc-setup-plugin-ovirt-engine-common-3.1.3-0.73.el6.noarch.rpm     MD5: a8651166eb041896e5e371901a893d2e  SHA-256: f15462f050178e233f44b156dbf456c201026f06e95016e439dc40360f38f86e
rhsc-setup-plugins-3.1.3-1.el6rhs.noarch.rpm     MD5: 571eea5f747155cfb22ca8e39d758309  SHA-256: b03a2716f88a19c2c49b1aab1b52c841e6c3892cf3f9424b6e5babc9281fbaa9
rhsc-tools-3.1.3-0.73.el6.noarch.rpm     MD5: b6003271eaef24c70edd1890cfb8f330  SHA-256: c64bdb0d554bbf1795b3bb05fe10fd01ce78118d4cbac985ecfb2fddaa371eff
rhsc-webadmin-portal-3.1.3-0.73.el6.noarch.rpm     MD5: 0298ee26694c5787340dda6094d9497e  SHA-256: 7d81975f479f1ab55cf7064db5bdae8a56c4435ce8da8cb229627c21f34e6425

(The unlinked packages above are only available from the Red Hat Network)

1167572 - Compatibility Version changes to highest value while editing Cluster
1272161 - network lost after update of console managed storage node
1311384 - Host in maintenance should optionally stop glusterd services
1312207 - RFE: Add self-heal monitoring nagios plugin
1327953 - Update to nagios-server-addons-0.2.4-1 --> Need to decide on the most optimal way, wrt file /etc/nagios/gluster/gluster-templates.cfg
1329936 - Heal info plugin shows Critical state when files are healing, which is misleading
1343650 - [geo-rep+rhsc]: geo-rep creation fails and status goes to UNKNOWN with latest build

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
Red Hat Gluster Storage 3.1 Update 3, which fixes several bugs and adds various enhancements, is now available for Red Hat Enterprise Linux 6. Red Hat Gluster Storage is a software-only scale-out storage solution that provides flexible and affordable unstructured data storage.
It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges. This update fixes numerous bugs and adds various enhancements.
Spaceprecludes documenting all of these changes in this advisory. Users aredirected to the Red Hat Gluster Storage 3.1 Technical Notes, linked to inthe References section, for information on the most significant of thesechanges.All users of Red Hat Gluster Storage are advised to upgrade to theseupdated packages. Before applying this update, make sure all previously released erratarelevant to your system have been applied.For details on how to apply this update, refer to:https://access.redhat.com/articles/11258Red Hat Gluster Storage Management Console 3.1 on RHEL-6 SRPMS: gluster-nagios-common-0.2.4-1.el6rhs.src.rpm     MD5: 260e12660861c1c2771db83891438d87SHA-256: 6bce0e4357b1e369ca0ec91f7cde4fd24c25e4b865ec2d1dbe4941c395436d1a nagios-server-addons-0.2.5-1.el6rhs.src.rpm     MD5: e72890095baba1b3dba5c659ca74cb40SHA-256: 1d363b40d84896f464a4590f1ecf205ff9dc23371931bc120564206fcc93031e   x86_64: gluster-nagios-common-0.2.4-1.el6rhs.noarch.rpm     MD5: 62f9908cf01371bac9dfb0bf509e44a8SHA-256: b09188c6a874d584f5fad17545d1479c30cdb36509ed07a2db3cc9f50f069cbb nagios-server-addons-0.2.5-1.el6rhs.noarch.rpm     MD5: e4cc4155de7b9d3c0a3b1dbd368d6530SHA-256: 84183bb7d1588a26dcdfc5df626893c295102b949ea6cd6d46469688552ba09a   Red Hat Gluster Storage Server 3.1 on RHEL-6 SRPMS: gluster-nagios-addons-0.2.7-1.el6rhs.src.rpm     MD5: a991fd44479dd005025abe3900097a24SHA-256: 8fc121540b9009e5b16b3d3f18bab99f76b4c225cc1443e5dd4d950377dcd092 gluster-nagios-common-0.2.4-1.el6rhs.src.rpm     MD5: 260e12660861c1c2771db83891438d87SHA-256: 6bce0e4357b1e369ca0ec91f7cde4fd24c25e4b865ec2d1dbe4941c395436d1a glusterfs-3.7.9-10.el6rhs.src.rpm     MD5: 142b5896697b0d638bf21771a3b8f058SHA-256: 7eeafa1c4512d105120d971134455bbdd1a7fa3166db1778a1337e2cdd3a161d redhat-storage-server-3.1.3.0-3.el6rhs.src.rpm     MD5: 49c433b6e57f831fc18b879c330ff760SHA-256: 7937778f08851cafe73a479fc3bdd17a256556ba6328117131a8a9f6efb998d4 sanlock-2.8-3.el6.src.rpm     MD5: 355e9a46502daab60af0d039a0e128b2SHA-256: 75411b8a7567b52237ba02e8b2a98579b0d901e5eb111bcd214205ce18073fb7 vdsm-4.16.30-1.5.el6rhs.src.rpm     MD5: 995aea7b66bbec5c3873f935d7280ecaSHA-256: 8b5a81db824eaaf431a6d19b65281064f6d81418ef2b59dbd2b1515b862dd550   x86_64: fence-sanlock-2.8-3.el6.x86_64.rpm     MD5: 53e0be9a25f274e82f07ab2da26a5363SHA-256: 41780d1f38de4676d43ee3fd5b9aea7186206c78b199f42e0d64e3f8c0d40a70 gluster-nagios-addons-0.2.7-1.el6rhs.x86_64.rpm     MD5: 9b03235f098589d284deac2af40a5240SHA-256: 71b2fcb36fde8d3a105347b0950934cc7d1375565c8e86320fa77aa35a5656e2 gluster-nagios-common-0.2.4-1.el6rhs.noarch.rpm     MD5: 62f9908cf01371bac9dfb0bf509e44a8SHA-256: b09188c6a874d584f5fad17545d1479c30cdb36509ed07a2db3cc9f50f069cbb glusterfs-3.7.9-10.el6rhs.x86_64.rpm     MD5: a668ad58a5e27e1a6cff4d5470fd0060SHA-256: 305b0c0ddfad2654425320dab09e5d5ee9307829e77ee22f0cf2a95c6c6b6596 glusterfs-api-3.7.9-10.el6rhs.x86_64.rpm     MD5: b5731abe36a8ec83ac35b11d25784c57SHA-256: 461df35b98af428868d4e2b937ca0f01cc3e7156fbb8a2ca3c22b2ed891f2e8a glusterfs-api-devel-3.7.9-10.el6rhs.x86_64.rpm     MD5: 36bbdb0c5a2196ad66d3defbd67d5a26SHA-256: 5e4e63f353e040d2e7e53a3cd133d6ffd4d174ec100210d1b3cee5c6073601d1 glusterfs-cli-3.7.9-10.el6rhs.x86_64.rpm     MD5: 94868b54495567ac51861d5da6dedab7SHA-256: 48878a33080ac6bf2ff0b0e8b38bb7286c217d10d0fbbf8f971ce6b372e99853 glusterfs-client-xlators-3.7.9-10.el6rhs.x86_64.rpm     MD5: ed7c5e092603f0d2da93f7653843f6b9SHA-256: 5e7c4f343dd1a6743238dbc99a5321f19b6d374992079e1e93c814ecf6b297d9 glusterfs-devel-3.7.9-10.el6rhs.x86_64.rpm     MD5: 
5116b99ed67013171e8f94c75d245d54SHA-256: 6a4904c65e928fb65d44595f7e62234f00c2cdafa709dc4e5671459cd77a24d7 glusterfs-fuse-3.7.9-10.el6rhs.x86_64.rpm     MD5: 12752f88b2021d02135fc837b52cb933SHA-256: 6424dd37f2d21fb64f5ca2b83f110175a9b35700247c2b259d665cc4bbd5db42 glusterfs-ganesha-3.7.9-10.el6rhs.x86_64.rpm     MD5: fd1207727b31cd8b8bae31f59140648bSHA-256: 03f4cb6cf95dde9ad43fd8b2432552655f8da34fa8a3b373478cda166bfe8f4d glusterfs-geo-replication-3.7.9-10.el6rhs.x86_64.rpm     MD5: 0bb343d9c8bf4aa184c4ffca65530e0cSHA-256: 135d929092ed913c62e95c64d46143cf2674d8e2e04b3c979ce5149cf1bea33a glusterfs-libs-3.7.9-10.el6rhs.x86_64.rpm     MD5: f51b395b2082241d68087566c0fe9aa2SHA-256: ba6cee5f5b3b54228d778f9086f482da4e394760255285b4b6ce16bb5adb7c76 glusterfs-rdma-3.7.9-10.el6rhs.x86_64.rpm     MD5: 9244b2ff5b12f0ea59edda941f547f34SHA-256: 80826f79c21f4244c12982717b44e8754a84ad92748c392f9130eccb03fed1e2 glusterfs-server-3.7.9-10.el6rhs.x86_64.rpm     MD5: 8029efcef08a95d2d6a8fd35ce356590SHA-256: 9596f41543a5d84256ea05ed0054121cdc2b29e64ef43583ee880a317ff80ec2 python-gluster-3.7.9-10.el6rhs.noarch.rpm     MD5: 3482fc405ced4444f8409e3fa7279ec1SHA-256: 99200c3a3f06042f9a6b36719dc1401acca9c7b3a4fa09ef4f06bc59bc3345e5 redhat-storage-server-3.1.3.0-3.el6rhs.noarch.rpm     MD5: 63e93327190c878f258ea012273e145cSHA-256: 1be8adf0f800c7d7bfc3301bf2c682d985f24b54e665b5ccdd8cc529d19ddcf9 sanlock-2.8-3.el6.x86_64.rpm     MD5: 582fbf28c2e677c5e597c65bfea545bcSHA-256: acc707727d0fe2c571c6c68d2e8845072f8506c822eb38641848e1acea35a470 sanlock-devel-2.8-3.el6.x86_64.rpm     MD5: 92a621cc855d63acc16f064169bc2cacSHA-256: 6c048c6979ff24468871b38f8315492fe4183f7426f4b7dfe3b620cc171891c9 sanlock-lib-2.8-3.el6.x86_64.rpm     MD5: 782df5f57f2f37e7b5ee2ad865227ab0SHA-256: cb77ea3f0f9b656db00bff30863b5d55914059b586de0b7342ffa78779f8e4d7 sanlock-python-2.8-3.el6.x86_64.rpm     MD5: f2d6556031b06f563439d5e6eebf0b08SHA-256: 86fe9b0f1f2c356878bdd2f8ae556bf0a5936603103fd737255ab4b313470147 vdsm-4.16.30-1.5.el6rhs.x86_64.rpm     MD5: b42a804d88058c81ba276c6f97399ae8SHA-256: 1203a6326ffd12e553295238de23319bc6d8a8744cafc2aa87f99518750470f5 vdsm-cli-4.16.30-1.5.el6rhs.noarch.rpm     MD5: cf560978c78fe83d2164a23aad737e10SHA-256: 347c968a53912dbe93a3854b881413d65a35fcec1539516712d6800e86b56f71 vdsm-debug-plugin-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 68bc9a6b05e566fa10b605f3203e13b7SHA-256: 1c919062f708460e7d237764401056f6edec6e3ac9255c5162f7437e170bc8fd vdsm-gluster-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 16e07f4b543e995b7be26aa6e6c59da8SHA-256: 205379ab1241bafff17feec151fd3e562138ee5d302c8c5bf95c6e4f27fc3031 vdsm-hook-ethtool-options-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 4524e19c3b299091376806b545b33755SHA-256: 46e5157ccc436bfa6a6af9ff09ef7847b942749e01f02bd8588778663e1a8f78 vdsm-hook-faqemu-4.16.30-1.5.el6rhs.noarch.rpm     MD5: bf6b18bd09c62480f61d1e5a1b66f38eSHA-256: e21be55862a9fb1e866253b6720f860bca31536261340f52ba1294d9e3fc6045 vdsm-hook-openstacknet-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 1c31b21e9d38c4f731e43d6deeb4e41bSHA-256: 66802a251b684b05ec2457d59b248ef16afcf47fb3c2af364ec937ed9c002cc6 vdsm-hook-qemucmdline-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 6c449b08587d4dd089a7e059ead8c613SHA-256: 6cddd40113908ddf68d7a21b7c4279d86efba9e463840bcc29775fce3e0fb749 vdsm-jsonrpc-4.16.30-1.5.el6rhs.noarch.rpm     MD5: b43c619785da841de91b6e01ea75c197SHA-256: a835a56dcca2e8cc0b8d5983324123772ab35be4af602b6a167ef81b92634ceb vdsm-python-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 74e5261a17279116ba23cebde4390eb3SHA-256: 
b37448fdf53556e19ffe325a412472c7a686da5cf603f2a042d1eecfee894b22 vdsm-python-zombiereaper-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 87812029f2a610cb3052b1b5033af723SHA-256: 668985221fd09484e0122b0a30ec470c68c76e7a4ea89f6ae9e235be683e0c72 vdsm-reg-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 1a01c742c741719b778e56e3a64ec0c2SHA-256: 4e7ea473f093d23a00a9db1df64c52005f72b24047fb0a40638b7984157d7532 vdsm-tests-4.16.30-1.5.el6rhs.noarch.rpm     MD5: b9fc38b48c76e93c362a8415c799bf36SHA-256: 992143157f879f1192f587a4ede694e28a129a17d577fa8badd87cebeb38cbf0 vdsm-xmlrpc-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 8f5ad3bdceae84b0d58d6fd281437566SHA-256: 764ccbd06edcbe3998b231fd727cf23ad0f31fb0ca0c4201e2821dde3f3b2cf5 vdsm-yajsonrpc-4.16.30-1.5.el6rhs.noarch.rpm     MD5: a4aaf3f5d4a21a2c20cfbbe100b9b49bSHA-256: 80fb1b5b59cda0f057a34c5bbde56fdf209bffa4ddaf62755071ef6867ff60aa   Red Hat Storage Native Client SRPMS: glusterfs-3.7.9-10.el6.src.rpm     MD5: 2b59362ec80bdd1e7cb0ad26800c6a67SHA-256: 9928854a6dd3e5b59a0a80a3ec0bd354f7002fb97fa0e2f51bc5558e0269c65b   x86_64: glusterfs-3.7.9-10.el6.x86_64.rpm     MD5: 84addaba995315d1ce8a2a0331b3215fSHA-256: 8ce97cc691af0b1bfb947c2db31a7832520db00ff5f5a4ebc5d2ba97926e946d glusterfs-api-3.7.9-10.el6.x86_64.rpm     MD5: 4232d2980ea3b81fc356c640add5dfeeSHA-256: a378dcfb4c8c4f2b02ea9be6e0f7a9cbe755de9cf0123b4c73770b6ccbaa0a9e glusterfs-api-devel-3.7.9-10.el6.x86_64.rpm     MD5: b8fc0d5c28c9443a4256ed7414eb68b9SHA-256: 0df6c68b1288917c27e7207782830004344fde09ccfb2dd5daf96f1a125dfda8 glusterfs-cli-3.7.9-10.el6.x86_64.rpm     MD5: c6cf2c4dc6496e3ba966764e08cbcdf6SHA-256: 58af33622428911eae561444ed31c76c8ecd8b9acd2d1dca803b4deb1f93ca5b glusterfs-client-xlators-3.7.9-10.el6.x86_64.rpm     MD5: 5a65232b01ee82bd636c7a52a3554bf8SHA-256: ae3d813c13a97dfa42d95b31afc771e07a62af8b66dcf61930502c81b257a3ca glusterfs-debuginfo-3.7.9-10.el6.x86_64.rpm     MD5: 212ec5744fefb5e3ea5f9d7aaf26d131SHA-256: a511f31fa3e710b9677e7dce76839eb4f1e3329041cda485835eb08cc7aa3077 glusterfs-devel-3.7.9-10.el6.x86_64.rpm     MD5: 964506221780adfad73724ff8b55a597SHA-256: 46396beea811b55a434c4ff859854b40ea935d56f762754f6694da6c28f41e07 glusterfs-fuse-3.7.9-10.el6.x86_64.rpm     MD5: a94f6f1a49259147bc834ce16baf9957SHA-256: c2f9e176d6b5dcf97f368e83b7eec846636cc16d3886ef73007b0ef00faf3476 glusterfs-libs-3.7.9-10.el6.x86_64.rpm     MD5: 480d0b63524e8aa50d0e494b4d0f2f2fSHA-256: 5076abf1c65621b9dae3e0cd2aeb04de1f945e2661bcf58e9e771831b12a8a97 glusterfs-rdma-3.7.9-10.el6.x86_64.rpm     MD5: e9524613fd2497145488a1193480c8b3SHA-256: 8004756d1a797d92f614159a81b8974250ad26cc8f3555dd0889f5f527c85728 python-gluster-3.7.9-10.el6.noarch.rpm     MD5: 1c7886a87c4d5b830203ce1ef740af5aSHA-256: 977a2b0a1b7a365c2fe86db0f3ba89c0bf7e2f499e6ad36c7057628777e8e09c   (The unlinked packages above are only available from the Red Hat Network) 1101702 - setting lower op-version should throw failure message1113954 - glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"1114045 - DHT :- log is full of ' Found anomalies in /<DIR> (gfid = 00000000-0000-0000-0000-000000000000)' - for each Directory which was self healed1115367 - "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes1118762 - DHT : few Files are not accessible and not listed on mount + more than one Directory have same gfid + (sometimes) attributes has ?? 
in ls output after renaming Directories from multiple client at same time1121186 - DHT : If Directory deletion is in progress and lookup from another mount heals that Directory on sub-volumes. then rmdir/rm -rf on parents fails with error 'Directory not empty'1159263 - [USS]: Newly created directories doesnt have .snaps folder1162648 - [USS]: There should be limit to the size of "snapshot-directory"1231150 - After resetting diagnostics.client-log-level, still Debug messages are logging in scrubber log1233213 - [New] - volume info --xml gives host UUID as zeros1255639 - [libgfapi]: do an explicit lookup on the inodes linked in readdirp1258875 - DHT: Once remove brick start failed in between Remove brick commit should not be allowed1261838 - [geo-rep]: Multiple geo-rep session to the same slave is allowed for different users1273539 - Remove dependency of glusterfs on rsyslog1276219 - [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.1277414 - [Snapshot]: Snapshot restore stucks in post validation.1277828 - RFE:nfs-ganesha:prompt the nfs-ganesha disable cli to let user provide "yes or no" option1278332 - nfs-ganesha server do not enter grace period during failover/failback1279628 - [GSS]-gluster v heal volname info does not work with enabled ssl/tls1282747 - While file is self healing append to the file hangs1283957 - Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume1285196 - Dist-geo-rep : checkpoint doesn't reach even though all the files have been synced through hybrid crawl.1285200 - Dist-geo-rep : geo-rep worker crashed while init with [Errno 34] Numerical result out of range.1285203 - dist-geo-rep: status details incorrectly indicate few files as skipped when all the files are properly synced to slave1286191 - dist-rep + quota : directory selfheal is not healing xattr 'trusted.glusterfs.quota.limit-set'; If you bring a replica pair down1287951 - [GlusterD]Probing a node having standalone volume, should not happen1289439 - snapd doesn't come up automatically after node reboot.1290653 - [GlusterD]: GlusterD log is filled with error messages - " Failed to aggregate response from node/brick"1291988 - [geo-rep]: ChangelogException: [Errno 22] Invalid argument observed upon rebooting the ACTIVE master node1292034 - nfs-ganesha installation : no pacemaker package installed for RHEL 6.71293273 - [GlusterD]: Peer detach happening with a node which is hosting volume bricks1294062 - [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"1294612 - Self heal command gives error "Launching heal operation to perform index self heal on volume vol0 has been unsuccessful"1294642 - quota: handle quota xattr removal when quota is enabled again1294751 - Able to create files when quota limit is set to 01294790 - promotions and demotions not happening after attach tier due to fix layout taking very long time(3 days)1296176 - geo-rep: hard-link rename issue on changelog replay1298068 - GlusterD restart, starting the bricks when server quorum not met1298162 - fuse mount crashed with mount point inaccessible and core found1298955 - [GSS] - Setting of any option using volume set fails when the clients are 3.0.4 and server is 3.1.11299432 - Glusterd: Creation of volume is failing if one of the brick is down on the server1299737 - values for Number of Scrubbed files, Number of Unsigned files, Last completed scrub time and Duration 
of last scrub are shown as zeros in bit rot scrub status1300231 - 'gluster volume get' returns 0 value for server-quorum-ratio1300679 - promotions not balanced across hot tier sub-volumes1302355 - Over some time Files which were accessible become inaccessible(music files)1302553 - heal info reporting slow when IO is in progress on the volume1302688 - [HC] Implement fallocate, discard and zerofill with sharding1303125 - After GlusterD restart, Remove-brick commit happening even though data migration not completed.1303591 - AFR+SNAPSHOT: File with hard link have different inode number in USS1303593 - [USS]: If .snaps already exists, ls -la lists it even after enabling USS1304282 - [USS]: Need defined rules for snapshot-directory, setting to a/b works but in linux a/b is b is subdirectory of a1304585 - quota: disabling and enabling quota in a quick interval removes quota's limit usage settings on multiple directories1305456 - Errors seen in cli.log, while executing the command 'gluster snapshot info --xml'1305735 - Improve error message for unsupported clients1305836 - DHT: Take blocking locks while renaming files1305849 - cd to .snaps fails with "transport endpoint not connected" after force start of the volume.1306194 - NFS+attach tier:IOs hang while attach tier is issued1306218 - quota: xattr trusted.glusterfs.quota.limit-objects not healed on a root of newly added brick1306667 - Newly created volume start, starting the bricks when server quorum not met1306907 - [New] - quarantine folder becomes empty and bitrot status does not list any files which are corrupted1308837 - Peers goes to rejected state after reboot of one node when quota is enabled on cloned volume.1311362 - [AFR]: "volume heal info" command is failing during in-service upgrade to latest.1311839 - False positives in heal info1313290 - [HC] glusterfs mount crashed1313320 - features.sharding is not available in 'gluster volume set help'1313352 - Dist-geo-rep: Support geo-replication to work with sharding1313370 - No xml output on gluster volume heal info command with --xml1314373 - Peer information is not propagated to all the nodes in the cluster, when the peer is probed with its second interface FQDN/IP1314391 - glusterd crashed when probing a node with firewall enabled on only one node1314421 - [HC] Ensure o-direct behaviour when sharding is enabled on volume and files opened with o_direct1314724 - Multi-threaded SHD support1315201 - [GSS] - smbd crashes on 3.1.1 with samba-vfs 4.11317790 - Cache swift xattrs1317940 - smbd crashes while accessing multiple volume shares via same client1318170 - marker: set inode ctx before lookup is unwind1318427 - gfid-reset of a directory in distributed replicate volume doesn't set gfid on 2nd till last subvolumes1318428 - ./tests/basic/tier/tier-file-create.t dumping core fairly often on build machines in Linux1319406 - gluster volume heal info shows conservative merge entries as in split-brain1319592 - DHT-rebalance: rebalance status shows failed when replica pair bricks are brought down in distrep volume while re-name of files going on1319619 - RHGS-3.1 op-version need to be corrected1319634 - Data Tiering:File create terminates with "Input/output error" as split brain is observed1319638 - rpc: set bind-insecure to off by default1319658 - setting enable-shared-storage without mentioning the domain, doesn't enables shared storage1319670 - regression : RHGS 3.0 introduced a maximum value length in the info files1319688 - Probing a new RHGS node, which is part of another cluster, should 
throw proper error message in logs and CLI1319695 - Disabling enable-shared-storage deletes the volume with the name - "gluster_shared_storage"1319698 - Creation of files on hot tier volume taking very long time1319710 - glusterd: disable ping timer b/w glusterd and make epoll thread count default 11319996 - glusterfs-devel: 3.7.0-3.el6 client package fails to install on dependency1319998 - while performing in-service software upgrade, gluster-client-xlators, glusterfs-ganesha, python-gluster package should not get installed when distributed volume up1320000 - While performing in-service software update, glusterfs-geo-replication and glusterfs-cli packages are updated even when glusterfsd or distributed volume is up1320390 - build: spec file conflict resolution1320412 - disperse: Provide an option to enable/disable eager lock1321509 - Critical error message seen in glusterd log file, after logrotate1321550 - Do not succeed mkdir without gfid-req1321556 - Continuous nfs_grace_monitor log messages observed in /var/log/messages1322247 - SAMBA+TIER : File size is not getting updated when created on windows samba share mount1322306 - [scale] Brick process does not start after node reboot1322695 - TIER : Wrong message display.On detach tier success the message reflects Tier command failed.1322765 - glusterd: glusted didn't come up after node reboot error" realpath () failed for brick /run/gluster/snaps/130949baac8843cda443cf8a6441157f/brick3/b3.

The underlying file system may be in bad state [No such file or directory]"1323042 - Inconsistent directory structure on dht subvols caused by parent layouts going stale during entry create operations because of fix-layout1323119 - TIER : Attach tier fails1323424 - Ganesha: Continuous "0-glfs_h_poll_cache_invalidation: invalid argument" messages getting logged in ganesha-gfapi logs.1324338 - Too many log messages showing inode ctx is NULL for 00000000-0000-0000-0000-0000000000001324604 - [Perf] : 14-53% regression in metadata performance with RHGS 3.1.3 on FUSE mounts1324820 - /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled1325750 - Volume stop is failing when one of brick is down due to underlying filesystem crash1325760 - Worker dies with [Errno 5] Input/output error upon creation of entries at slave1325975 - nfs-ganesha crashes with segfault error while doing refresh config on volume.1326248 - [tiering]: during detach tier operation, Input/output error is seen with new file writes on NFS mount1326498 - DHT: Provide mechanism to nuke a entire directory from a client (offloading the work to the bricks)1326505 - fuse: fix inode and dentry leaks1326663 - [DHT-Rebalance]: with few brick process down, rebalance process isn't killed even after stopping rebalance process1327035 - fuse: Avoid redundant lookup on "." and ".." as part of every readdirp1327036 - Use after free bug in notify_kernel_loop in fuse-bridge code1327165 - snapshot-clone: clone volume doesn't start after node reboot1327552 - [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP1327751 - glusterd memory overcommit1328194 - upgrading from RHGS 3.1.2 el7 client package to 3.1.3 throws warning1328397 - [geo-rep]: schedule_georep.py doesn't touch the mount in every iteration1328411 - SMB:while running I/O on cifs mount and doing graph switch causes cifs mount to hang.1328721 - [Tiering]: promotion of files may not be balanced on distributed hot tier when promoting files with size as that of max.mb1329118 - volume create fails with "Failed to store the Volume information" due to /var/lib/glusterd/vols missing with latest build1329514 - rm -rf to a dir gives directory not empty(ENOTEMPTY) error1329895 - eager-lock should be used as cluster.eager-lock in /var/lib/glusterd/group/virt file as there is a new option disperse.eager-lock1330044 - one of vm goes to paused state when network goes down and comes up back1330385 - glusterd restart is failing if volume brick is down due to underlying FS crash.1330511 - build: redhat-storage-server for RHGS 3.1.3 - [RHEL 6.8]1330881 - Inode leaks found in data-self-heal1330901 - dht must avoid fresh lookups when a single replica pair goes offline1331260 - Swift: The GET on object manifest with certain byte range fails to show the content of file.1331280 - Some of VMs go to paused state when there is concurrent I/O on vms1331376 - [geo-rep]: schedule_georep.py doesn't work when invoked using cron1332077 - We need more debug info from stack wind and unwind calls1332199 - Self Heal fails on a replica3 volume with 'disk quota exceeded'1332269 - /var/lib/glusterd/groups/groups file doesn't gets updated when the file is edited or modified1332949 - Heal info shows split-brain for .shard directory though only one brick was down1332957 - [Tiering]: detach tier fails due to the error - 'removing tier fix layout xattr from /'1333643 - Files present in the .shard folder even after deleting 
all the vms from the UI1333668 - SAMBA-VSS : Permission denied issue while restoring the directory from windows client 1 when files are deleted from windows client 21334092 - [NFS-Ganesha] : stonith-enabled option not set with new versions of cman,pacemaker,corosync and pcs1334234 - [Tiering]: Files remain in hot tier even after detach tier completes1334668 - getting dependency error while upgrading RHGS client to build glusterfs-3.7.9-4.el7.x86_64.1334985 - Under high read load, sometimes the message "XDR decoding failed" appears in the logs and read fails1335082 - [Tiering]: Detach tier commit is allowed before rebalance is complete1335114 - refresh-config failing with latest 2.3.1-6 nfs-ganesha build.1335357 - Modified volume options are not syncing once glusterd comes up.1335359 - Adding of identical brick (with diff IP/hostname) from peer node is failing.1335364 - Fix excessive logging due to NULL dict in dht1335367 - Failing to remove/replace the bad brick part of the volume.1335437 - Self heal shows different information for the same volume from each node1335505 - Brick logs spammed with dict_get errors1335826 - failover is not working with latest builds.1336295 - Replace brick causes vm to pause and /.shard is always present in the heal info1336332 - glusterfs processes doesn't stop after invoking stop-all-gluster-processes.sh1337384 - Brick processes not getting ports once glusterd comes up.1337649 - log flooded with Could not map name=xxxx to a UUID when config'd with long hostnames1339090 - During failback, nodes other than failed back node do not enter grace period1339136 - Some of the VMs pause with read-only file system error even when volume-status reports all bricks are up1339163 - [geo-rep]: Monitor crashed with [Errno 3] No such process1339208 - Ganesha gets killed with segfault error while rebalance is in progress.1340085 - Directory creation(mkdir) fails when the remove brick is initiated for replicated volumes accessing via nfs-ganesha1340383 - [geo-rep]: If the session is renamed, geo-rep configuration are not retained1341034 - [quota+snapshot]: Directories are inaccessible from activated snapshot, when the snapshot was created during directory creation1341316 - [geo-rep]: Snapshot creation having geo-rep session is broken with latest build1341567 - After setting up ganesha on RHEL 6, nodes remains in stopped state and grace related failures observed in pcs status1341820 - [geo-rep]: Upgrade from 3.1.2 to 3.1.3 breaks the existing geo-rep session1342252 - [geo-rep]: Remove brick with geo-rep session fails with latest build1342261 - [georep]: Stopping volume fails if it has geo-rep session (Even in stopped state)1342426 - self heal deamon killed due to oom kills on a dist-disperse volume using nfs ganesha1342938 - [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails1343549 - libglusterfs: Negate all but O_DIRECT flag if present on anon fds1344278 - [disperse] mkdir after re balance give Input/Output Error1344625 - fail delete volume operation if one of the glusterd instance is down in cluster1347217 - Incorrect product version observed for RHEL 6 and 7 in product certificates These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
Red Hat Gluster Storage 3.1 Update 3, which fixes several bugs, is now available for Red Hat Enterprise Linux 7. Red Hat Gluster Storage is a software-only, scale-out storage solution that provides flexible and affordable unstructured data storage.
It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges. This update fixes numerous bugs and adds various enhancements.
Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in the References section, for information on the most significant of these changes. All users of Red Hat Gluster Storage are advised to upgrade to these updated packages. Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258

Red Hat Gluster Storage Server 3.1 on RHEL-7

SRPMS:
ansible-1.9.4-1.el7aos.src.rpm     MD5: d96cbe5b47f76384b4f792e3485a112f     SHA-256: 9b3d245284e72799ff3c9ccb1377c46cdf0662b2dfd0a768ae13980579650c9c
gluster-nagios-addons-0.2.7-1.el7rhgs.src.rpm     MD5: 72ac36627ec745da1798381e84ec6b13     SHA-256: ffb4e07a3d952454984a0488d4546e454de123a7e3b675dd25bd56255a756f78
gluster-nagios-common-0.2.4-1.el7rhgs.src.rpm     MD5: f9770483910c63ddb741a4a33d6e57c6     SHA-256: 16ef062697fae30fd5e4bb0961a024e2d6c99dc443a5dc8eadbe66169b8579ce
glusterfs-3.7.9-10.el7rhgs.src.rpm     MD5: e41ef124857fc50a710954488059016c     SHA-256: d54ba15df1bddfd920e39e7652abef0f5cf51e26ed008808ddb7c03286af169a
nagios-server-addons-0.2.5-1.el7rhgs.src.rpm     MD5: fdbf0380a9088936e0ee948587c4d5fd     SHA-256: a812d512c7f1bbfbaa54a2c022c88913b89f13c749e9b594d2c2dc49926bd4d7
python-ecdsa-0.11-3.el7aos.src.rpm     MD5: 6b0036f0cb93dd46382c268e9a5d11d9     SHA-256: 40b460ea54e9c4d80a804d56087407b4bd634ffb2813d3721d2b4af9210c69e4
python-httplib2-0.9.1-2.el7aos.src.rpm     MD5: cee6c40a648cbd4be75013997d98c505     SHA-256: 910ba8432700fa515e12ca33d991fcbefbe62c47cff0c7d4d48eaec6b11d7091
python-keyczar-0.71c-2.el7aos.src.rpm     MD5: ce7b66aa1438c9e86ee8fa32cf1c8e86     SHA-256: eeaccfacea1fb6bc13dc9f74a8e9c50e4bbe68171e3642c08e0aca1e85c738a7
python-paramiko-1.15.2-1.el7aos.src.rpm     MD5: 854ec02cea8723ee23b1dd64e7de94d6     SHA-256: 7c410f45f8a3e8406bd58ab3b67ee6c1023dbe001e707a2b69eeb337d0c86b59
redhat-storage-server-3.1.3.0-3.el7rhgs.src.rpm     MD5: ab5455751a0890dc3119a5179387ab70     SHA-256: cd6f27d6a71441d8e14db4b662e0b40569e5cbb1b6846cc2ea26a7467447013d
sshpass-1.05-5.el7aos.src.rpm     MD5: a725c70d907aa818e2b50c4d33777bf7     SHA-256: 36a2f38d1f33981a1b4bd1c89ba66124151f09464a48c6d4e46846fe85ef46b4
vdsm-4.16.30-1.5.el7rhgs.src.rpm     MD5: 0d7ac0a8ab2e810d29a09eaaaf88be28     SHA-256: 03809be4fd3feac75a9fa62b630ec04b70d86bd202a2d37c8e67d213a8f93fe4

x86_64:
ansible-1.9.4-1.el7aos.noarch.rpm     MD5: 8941dbcffbfc7f6606161ffb5d264beb     SHA-256: 3ee20b8d21fa1d931c8912eac2c3c9f94642452edc64c480116822c6d4fd2191
gluster-nagios-addons-0.2.7-1.el7rhgs.x86_64.rpm     MD5: 512f93f885f5646c2843bf01550f3fb2     SHA-256: 4e3668ba0cdcc2425fba0b716a976f070e9daa33f8873bf00e4276096dc7be1d
gluster-nagios-common-0.2.4-1.el7rhgs.noarch.rpm     MD5: f92eb9bdea0e4eb11f60affc6d249886     SHA-256: ff9ba53f1ae51ebf886a67414367d2885a3f3fdcbad761d31f8d3941bf721d6d
glusterfs-3.7.9-10.el7rhgs.x86_64.rpm     MD5: dee94107dee28649f968dd771cd15159     SHA-256: b0f3f2cab2fe64b194b5f20474eaa7d34a2a97c3cc4ebdc28326583e049bf6c5
glusterfs-api-3.7.9-10.el7rhgs.x86_64.rpm     MD5: 7126f9b714eafd23fde9719c379a658f     SHA-256: fd59f7acc564f98b65cf12e170772cbeef987a52aba68cb710cdf3979c02cc54
glusterfs-api-devel-3.7.9-10.el7rhgs.x86_64.rpm     MD5: 1a977718bb437c74e9e14a9ac6908d01     SHA-256: cc4481056c269a6db0dc1001884b92263e9c5df7d073354f518c88ee7ea9412e
glusterfs-cli-3.7.9-10.el7rhgs.x86_64.rpm     MD5: cd317004e0929a314b6bdf62fa893166     SHA-256: d34b619f1b753f4d73bed6fb01a81793ed614db5d71420b44c40e95ba73817bb
glusterfs-client-xlators-3.7.9-10.el7rhgs.x86_64.rpm     MD5: 8f4fd9806d27c306124d743b99ab8f80     SHA-256: 62de9bee59c56cc67594207753ae99157713021e107ba8656876772d68e415c1
glusterfs-devel-3.7.9-10.el7rhgs.x86_64.rpm     MD5: 69e3feba90ccc8b9b874d1e41d0d7ddb     SHA-256: 393c76818bb4e13c56d38879727b87dfe09c1e50d3bd99d3fbef8def324ac9cb
glusterfs-fuse-3.7.9-10.el7rhgs.x86_64.rpm     MD5: 80ec067cbadd5cce0b3e7cb866b9d991     SHA-256: de80b4067d91b68e21584d5fa4029cd6e428885667f8c9b509beaf9ddb58bf1c
glusterfs-ganesha-3.7.9-10.el7rhgs.x86_64.rpm     MD5: 2e4e4ea4e4f0a5adebdc095069580de7     SHA-256: d1cb77e73b3f9f205bec81e3b1917684aabd87d117c5d879c227c4480b6f887a
glusterfs-geo-replication-3.7.9-10.el7rhgs.x86_64.rpm     MD5: de78d48047bb620b7e2cb334eb33ba4b     SHA-256: 54dee3e3dec8636fecad872b4be62913f39ef8bf81d93755a94087305a3d54d1
glusterfs-libs-3.7.9-10.el7rhgs.x86_64.rpm     MD5: 71a3016cf7349f1f9bbb4753008b7b73     SHA-256: ded435bf495fcd41475121f6965edbb3dcc870bfef42e92559ceecc34420ceb9
glusterfs-rdma-3.7.9-10.el7rhgs.x86_64.rpm     MD5: bfaffb6e69381cb8b2de6ea1558048af     SHA-256: 18c3ae35adbb7a7b0302c6a25beb9653739dbf7d01367475a57bb23268ca17a4
glusterfs-server-3.7.9-10.el7rhgs.x86_64.rpm     MD5: 9c5915c7e9c6e33cf87c881f1c5d6ec0     SHA-256: acf782c9fda03c7c66110cf91e6281e6f7fe15153bd9415a7311555684b211a9
nagios-server-addons-0.2.5-1.el7rhgs.x86_64.rpm     MD5: df1471b1483d3be2a9a1d5e49253dd11     SHA-256: 6c2883a93862128f40f53c2cad79e6517453a05782b50227de48b8cb513d41d7
python-ecdsa-0.11-3.el7aos.noarch.rpm     MD5: 37c016b6d5d52baeb30a41326299140c     SHA-256: b74e7f42540ead08d91f3cee5ca7feecc8997a5a7a74f1fafea7a52564da51b1
python-gluster-3.7.9-10.el7rhgs.noarch.rpm     MD5: 5fb02f1c61727f1eba032387f9887fc8     SHA-256: a1feb70585d530cef15dea5253df367f45c3a4ae11255a786b375feb72db4f2b
python-httplib2-0.9.1-2.el7aos.noarch.rpm     MD5: 068f9c49b5e91815b583a5b97c4a0115     SHA-256: 8e118bd288e734c3677c9bca9d3ee6f64cd7c693aebf25c94eaf1b2011716384
python-keyczar-0.71c-2.el7aos.noarch.rpm     MD5: 74440352ef9e5fef21ef8ba57f47d17e     SHA-256: 40dc9a3c299a46c6bc41384449d3185d65061e03c18decdef4fa64ae9aa1c61e
python-paramiko-1.15.2-1.el7aos.noarch.rpm     MD5: 1d7ee68b919bfce0edc2a5d7804c5b41     SHA-256: 91841e6426b9fe3ad415527f470c2077964fe3dcb546948fc2a223b1d45b802e
redhat-storage-server-3.1.3.0-3.el7rhgs.noarch.rpm     MD5: 8b67b8e1b2a2a96cf5c421b6804e0bbb     SHA-256: 7bc5ae8b96dde7dcbd98d4e615e092f51ce0d2dfadd696428be4eb5f508790ac
sshpass-1.05-5.el7aos.x86_64.rpm     MD5: 7f3b784a6ae8ec720f7565d2c9d05180     SHA-256: 94be4744b68b4cede5fa3f8d930a75d3544b1725a15eae011cf5105de10a3426
vdsm-4.16.30-1.5.el7rhgs.x86_64.rpm     MD5: 68c359fe380f7addf02266fc6ccc7281     SHA-256: f5137f1398b851483e9b53f8a2a04f91564bc43741d27375a26e60b8f059613c
vdsm-cli-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 2cf2e5acb595343c8743f2438a8cf52f     SHA-256: a90ae33fe29e726d0e26288c39e919cc5530eb4370a6705bc96d9553ee0c12e3
vdsm-debug-plugin-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 3ae1e0a1c130192a61f6efdb67492fa4     SHA-256: 53997bc82e86bd3db48ae3dd7f3dcfb39185d0c13d5c005479a6ea8b3bc71529
vdsm-gluster-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 3c6972386ffe4f1212ea0019e5c1a2a9     SHA-256: 4ae1110fb5a729bf3a047f386e36c9ba917189569d661f51f2df883e104a2a21
vdsm-hook-ethtool-options-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 29978f57d1792cd66d40af0d86faa1bb     SHA-256: f55b7acb2a91e91649072142cb9758d9d54d31327a815dc047806263a7d21960
vdsm-hook-faqemu-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 059d6f3f18ecbbb2d0eb8b47df3ccdd4     SHA-256: 20837e382c3b3343e53c064039518ade9faa111f102f71da05b6073198a042c8
vdsm-hook-openstacknet-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 9e0e458f28e95d4439cdadc6b4fba791     SHA-256: d03d38b92657a09a71cc356e8167c0b3c19c9096b89d5a8a25c44a5810890a71
vdsm-hook-qemucmdline-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: f78010cee6b0ef7b849c8e8f93e02eec     SHA-256: a93180598fa3c9604647820532029f81584401ccc0dd9c0e7f5b8f598c11e09c
vdsm-jsonrpc-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 84ec541e7b99d736eb4c099fcf94d33f     SHA-256: e28ccaee860d2e9cf7a30c77cea456353bbc426009131bb2da6078deffcf0e2d
vdsm-python-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 6fb203497b5b04e75228101fc5b9bb42     SHA-256: 4f80ee5855b0f3430fe8f50bcdf5afac93929861cf8b03d77b4fd3a00cc5f9b8
vdsm-python-zombiereaper-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: cbfefc612b04d9a9057800a11ccb1879     SHA-256: 92624027460eda3223cb0743fb6f3b09bc943a8ab62e06bd2ada6d0f25957f07
vdsm-reg-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 3e626a4f2247d40265bb0a11b17aba8f     SHA-256: 9e4a84b53a60fa601ad3fe0ca0118843f2a9fb100f7a01357cdfa627e5ec16cf
vdsm-tests-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 681e2e0adacfe29d55428ee1ebf85b49     SHA-256: 862ecdb2813c9d3c3e39a6302251332862b6bef3f84e08880f66779d15462d9f
vdsm-xmlrpc-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 63a40221701fc88df4b6945368fba22a     SHA-256: dd3b0cb27666134d0e2da987bd59ec800b3d6b36ed28b8ddf8d2f857baebdc16
vdsm-yajsonrpc-4.16.30-1.5.el7rhgs.noarch.rpm     MD5: 5287bf7499ddbb39e617a375e2314c96     SHA-256: 846da8a49b6d5bee47c428fd779e2ca2e08e4f7a5e4b3d465fe23e8575703f82

(The unlinked packages above are only available from the Red Hat Network)

1243351 - Warning messages seen in glusterd logs when installed from RHGS 3.1 ISO based on RHEL 7.1
1329511 - build: redhat-storage-server for RHGS 3.1.3

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
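Since the advisory publishes an MD5 and SHA-256 digest for every package, a downloaded RPM can be checked against the published SHA-256 before installation. The short Python sketch below shows one way to do that; the package name and digest are copied from the list above, while the local path and the script itself are illustrative rather than Red Hat tooling, and GPG signature verification (for example, rpm -K on the downloaded file) remains the authoritative integrity check.

import hashlib
import sys

# Package and SHA-256 digest copied from the advisory's SRPMS list above.
EXPECTED_SHA256 = "d54ba15df1bddfd920e39e7652abef0f5cf51e26ed008808ddb7c03286af169a"
PACKAGE_PATH = "glusterfs-3.7.9-10.el7rhgs.src.rpm"  # local download path (placeholder)

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so large RPMs need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(PACKAGE_PATH)
    if actual == EXPECTED_SHA256:
        print("OK: %s matches the advisory digest" % PACKAGE_PATH)
    else:
        print("MISMATCH: expected %s, got %s" % (EXPECTED_SHA256, actual))
        sys.exit(1)

A digest check of this kind only guards against corrupted or tampered downloads; it does not replace verifying the Red Hat GPG signature described above.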
Onion rings get more scrambled The University of California wants to defeat deanonymisation with a hardened version of the Tor browser. The uni's boffins are working with the Tor Project to test an address space layout randomisation (ASLR)-esque technique dubbed Selfrando. It is hoped the technique, described in the paper Selfrando: Securing the Tor Browser against De-anonymisation Exploits [PDF], will help frustrate deanonymisation efforts by government agencies. Selfrando, currently under testing in hardened Tor versions, is described as an "enhanced and practical load-time randomisation technique" that defends against exploits, notably the one allegedly used by the FBI. "Our solution significantly improves security over standard ASLR techniques currently used by Firefox and other mainstream browsers," the boffins say. "It has negligible run-time overhead, a perfectly acceptable load-time overhead, and it requires no changes to protect the Tor Browser. "Moreover, selfrando can be combined with integrity techniques such as execute-only memory to further secure the Tor Browser and virtually any other C/C++ application." The nine-strong research team comprises Mauro Conti, Stephen Crane, Tommaso Frassetto, Andrei Homescu, Georg Koppen, Per Larsen, Christopher Liebchen, Mike Perry, and Ahmad-Reza Sadeghi. They cite the 2013 attacks in which the FBI compromised Tor hidden services servers to deliver an exploit that de-anonymised users. The exploit abused a use-after-free vulnerability in Firefox to gain arbitrary code execution. "The main payload of the exploit collected the MAC address and the host name from the victim machine and sent the data to an attacker-controlled web server, bypassing Tor," they write. "That message also included a unique ID provided by the booby-trapped page in order to correlate a specific user to a specific visit." The team will now work to improve Selfrando's operating system-specific features, including thread-local storage support, something relied on by Firefox's default heap allocator, jemalloc. Tor Browser developers have expressed a desire to use that allocator. ®
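To make the load-time randomisation idea concrete: standard ASLR shifts a whole module by one random offset, so leaking a single code address reveals the entire layout, whereas a Selfrando-style scheme permutes individual functions at load time. The toy Python sketch below illustrates that distinction only; the function names, sizes, and addresses are invented for illustration and are not taken from the Selfrando implementation.

import random

# Hypothetical function "blocks" (name, size in bytes) making up a module's code.
FUNCTIONS = [
    ("parse_html", 0x340),
    ("alloc_node", 0x120),
    ("free_node", 0x0E0),
    ("run_script", 0x510),
]

def load_module(base):
    """Shuffle function order, then lay the functions out from `base`,
    mimicking a per-load (load-time) layout randomisation."""
    order = list(FUNCTIONS)
    random.shuffle(order)  # the per-load randomisation step
    layout = {}
    addr = base
    for name, size in order:
        layout[name] = addr
        addr += size
    return layout

# Two loads of the "same binary". The base is kept identical on purpose:
# with plain ASLR only the base would change, so inter-function offsets
# would stay constant and a single leaked pointer would expose everything.
run1 = load_module(0x7F0000000000)
run2 = load_module(0x7F0000000000)
for name, _ in FUNCTIONS:
    print("%-12s run1=0x%x run2=0x%x" % (name, run1[name], run2[name]))

Under a per-function shuffle, a leaked code pointer reveals only its own function's location, which is the property the researchers argue frustrates precomputed code-reuse payloads like the one used against Tor users.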
VIDEO: Gus Robertson, CEO of NGINX, discusses his firm's latest technology and what's coming next. In the span of three and a half years, Gus Robertson has helped make NGINX one of the most widely deployed technologies on the planet. NGINX's primary tec...