Red Hat Gluster Storage 3.1 Update 2, which fixes several bugs and adds various enhancements, is now available for Red Hat Enterprise Linux 6.

Red Hat Gluster Storage is a software-only, scale-out storage solution that provides flexible and affordable unstructured data storage.
It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges. Red Hat Gluster Storage's Unified File and Object Storage is built on OpenStack's Object Storage (swift).

This update also fixes numerous bugs and adds various enhancements. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in the References section, for information on the most significant of these changes.

This advisory introduces the following new features:

* Writable Snapshots

Red Hat Gluster Storage snapshots can now be cloned and made writable by creating a new volume based on an existing snapshot.

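As an illustrative sketch only (the volume and snapshot names below are hypothetical; see the Administration Guide for the authoritative syntax), the clone workflow looks roughly like this:

```shell
# Take a snapshot of an existing volume, activate it, then clone it
# into a new, independently writable volume.
gluster snapshot create snap1 myvol
gluster snapshot activate snap1
gluster snapshot clone myvol-clone snap1
gluster volume start myvol-clone
```

The clone behaves as a regular volume from this point on, while sharing its back-end storage with the snapshot it was created from.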
Clones are space efficient, as the cloned volume and original snapshot share the same logical volume back end, only consuming additional space as the clone diverges from the snapshot. For more information, see the Red Hat Gluster Storage 3.1 Administration Guide:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/.

* RESTful Volume Management with Heketi [Technology Preview]

Heketi provides a RESTful management interface for managing Red Hat Gluster Storage volume lifecycles. This interface allows cloud services like OpenStack Manila, Kubernetes, and OpenShift to dynamically provision Red Hat Gluster Storage volumes.
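As a hedged sketch of this workflow (the host names, device path, and size below are hypothetical, and the exact options for this heketi release may differ; consult the Administration Guide), the heketi-cli client drives the interface along these lines:

```shell
# Register a cluster, a node with a raw device, then provision a volume.
heketi-cli cluster create
heketi-cli node add --zone=1 --cluster=<cluster-id> \
    --management-host-name=node1.example.com \
    --storage-host-name=192.168.10.100
heketi-cli device add --name=/dev/sdb --node=<node-id>
heketi-cli volume create --size=100    # size in GB
```

Heketi then decides where to place bricks and returns the volume details, so the caller never has to manage bricks directly.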

For details about this technology preview, see the Red Hat Gluster Storage 3.1 Administration Guide:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/.

* Red Hat Gluster Storage for Containers

With the Red Hat Gluster Storage 3.1 update 2 release, a Red Hat Gluster Storage environment can be set up in a container.

Containers use shared operating systems and are much more efficient than hypervisors in terms of system resources. Containers rest on top of a single Linux instance and allow applications to use the same Linux kernel as the system they run on.

This improves overall efficiency and reduces space consumption considerably.

For more information, see the Red Hat Gluster Storage 3.1 Administration Guide:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/.

* BitRot scrubber status

The BitRot scrubber command (gluster volume bitrot VOLNAME scrub status) can now display scrub progress and list identified corrupted files, allowing administrators to locate and repair corrupted files more easily. See the Red Hat Gluster Storage 3.1 Administration Guide for details:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/.

* Samba Asynchronous I/O

With this release, asynchronous I/O from Samba to Red Hat Gluster Storage is supported. The aio read size option is now enabled and set to 4096 by default. This increases throughput when the client is multithreaded or when multiple programs access the same share.
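For example, the relevant smb.conf settings might look like the following sketch (the share name and volume name are illustrative, not part of this advisory):

```ini
; Illustrative Samba share exporting a Red Hat Gluster Storage volume
[gluster-share]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = myvol
    ; asynchronous I/O threshold; 4096 is the new default in this release
    aio read size = 4096
    ; to disable asynchronous I/O (see the recommendation below), use:
    ; aio read size = 0
```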
If you have Linux clients using SMB 2.0 or higher, Red Hat recommends disabling asynchronous I/O (setting aio read size to 0).

All users of Red Hat Gluster Storage are advised to apply this update.
Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258

Red Hat Storage Management Console 3.0

SRPMS:
gluster-nagios-common-0.2.3-1.el6rhs.src.rpm
    MD5: ee86d858dc57d1f7006ece083de49717
    SHA-256: 55bf077f217325748557569e8fdc490ea63ce38e248e54e30ed242319e9617a4
nagios-server-addons-0.2.3-1.el6rhs.src.rpm
    MD5: 3779cea67b105093ef91c784953f2e7e
    SHA-256: 3ee7e3a9d56beb8d09812ed3bb40d9a066e7f7276842b60122d5132920be7d7c
nrpe-2.15-4.2.el6rhs.src.rpm
    MD5: 5e4e3b6d3ab32a239df67e76e7e9c87e
    SHA-256: d343d3619f8b8e2aab315599a1f3a43c75af1b615513976019a64e76575eaacf
nsca-2.9.1-4.2.el6rhs.src.rpm
    MD5: fc199001caf48f6f94166e235f603a38
    SHA-256: 14707c4a255ffa981dbed8c06549a3e261479eb487426aa157ff6311b413c7ad
 
x86_64:
gluster-nagios-common-0.2.3-1.el6rhs.noarch.rpm
    MD5: 4295cba50842d64306459be2ef63864c
    SHA-256: 16dd78cdaca6ff5c7bcb9e22b9ffea3e51a73c11c748d21661a3cd7b3d48de55
nagios-plugins-nrpe-2.15-4.2.el6rhs.x86_64.rpm
    MD5: 6d28849b1cd532c712ce6dc8ca1d237e
    SHA-256: 401f949f18b74520625d0d86cf17809b79e6b177c6d3761cd9cc4874805cb4c5
nagios-server-addons-0.2.3-1.el6rhs.noarch.rpm
    MD5: d515fb49547fa04b597acfa7f24ccd9d
    SHA-256: 06949eefcc61127c85ed6379b1a789584c1d8649e34d9f5bf54310cb963769b0
nsca-2.9.1-4.2.el6rhs.x86_64.rpm
    MD5: 84b89bce9c88d9938d08c2c322165a24
    SHA-256: b2d8ca29d9ffee947104243db3bf1d9e71ad37590ab8982824dcdb8de99a8b75
 
Red Hat Storage Native Client

SRPMS:
glusterfs-3.7.5-19.el6.src.rpm
    MD5: a9cba0e9d5fc9f75c662f1fc7b705e37
    SHA-256: 600852365dad54f82eaf4751594e73e68d3066229051d6dec7820cce7fd22b75
 
x86_64:
glusterfs-3.7.5-19.el6.x86_64.rpm
    MD5: 4802abfa7513b5795ce82dfb7415f782
    SHA-256: efcc1f20c20dd89a6a498576a1138a5eed819c542e8dc2ebda3606ea9b63dea3
glusterfs-api-3.7.5-19.el6.x86_64.rpm
    MD5: c213626b4826efa5a93f6fb798c69dab
    SHA-256: f39563d5451b8104622ce65d19dbabc41f331586584cfe5fac596d621e168ecc
glusterfs-api-devel-3.7.5-19.el6.x86_64.rpm
    MD5: 5f58bc3c3a2cded4ec285e50c81e4983
    SHA-256: 68f9bc435d039274b01d8fc0dbbc5c2e15d9e60cfc952094f25787175f215840
glusterfs-cli-3.7.5-19.el6.x86_64.rpm
    MD5: d5a8b4292d73bd0031602b8af80020b8
    SHA-256: 66460c8ec7c584971b6041994d188d244c9720c7f9d9dfa8c88c6e3d26302e0b
glusterfs-client-xlators-3.7.5-19.el6.x86_64.rpm
    MD5: 82c337cf7fb7a4b02200bdfd3d4e93da
    SHA-256: 20a2ffbd8112d11781955579017fdf1c1bb1af4e591da3be513ced774ae16d46
glusterfs-debuginfo-3.7.5-19.el6.x86_64.rpm
    MD5: fffeb0668197fbe64b3fcf4b980a020a
    SHA-256: 63e47f087783531d1c4fbf117202485922a05f443a353baed28111fbe7578c05
glusterfs-devel-3.7.5-19.el6.x86_64.rpm
    MD5: faab974bd1f8b2edbff294b795e5c2ea
    SHA-256: 172d2074794e9590da2e6a3dcf9d0f677f638eff0ad39bf109b6bb57bc3b0dd5
glusterfs-fuse-3.7.5-19.el6.x86_64.rpm
    MD5: 2e95012eb4b7cf4c20539e0501cfab6b
    SHA-256: 507bc9736482624f89fdf23480339c889440c662b85dfefe15f7884f6f541be4
glusterfs-libs-3.7.5-19.el6.x86_64.rpm
    MD5: a4d623f830542c59e0376105ec08bbfa
    SHA-256: 3828a367cf1149c211f8f2bb5389652ed98be92e0bd4332a91429f7c38be23af
glusterfs-rdma-3.7.5-19.el6.x86_64.rpm
    MD5: c9ee1be74578515cbba6e59f9d50a277
    SHA-256: df131056870e7fed7b9f782992fd279fed4bf6acf5d45b1c3698ee88de7fe9a3
python-gluster-3.7.5-19.el6.noarch.rpm
    MD5: 50030e359aa4d158dd483b848b305307
    SHA-256: 7523d69466823c3e992734dcf3e1aacab7b666e809a9bbdf57a7e7671bbe05f4
 
Red Hat Storage Server 3.0

SRPMS:
gluster-nagios-common-0.2.3-1.el6rhs.src.rpm
    MD5: ee86d858dc57d1f7006ece083de49717
    SHA-256: 55bf077f217325748557569e8fdc490ea63ce38e248e54e30ed242319e9617a4
glusterfs-3.7.5-19.el6rhs.src.rpm
    MD5: bf0267041eea7695873389fb36bdcdea
    SHA-256: 45883467434c886913c1bb5560d1f9fdfa1576462c3dd56d8b262c788d4737e6
heketi-1.0.2-1.el6rhs.src.rpm
    MD5: ee35aee21807044bfe0091e1ead8f23a
    SHA-256: 4b8ead9a0405327658ec286e17193fa7e885b9b262036d863c8a012497505366
nrpe-2.15-4.2.el6rhs.src.rpm
    MD5: 5e4e3b6d3ab32a239df67e76e7e9c87e
    SHA-256: d343d3619f8b8e2aab315599a1f3a43c75af1b615513976019a64e76575eaacf
nsca-2.9.1-4.2.el6rhs.src.rpm
    MD5: fc199001caf48f6f94166e235f603a38
    SHA-256: 14707c4a255ffa981dbed8c06549a3e261479eb487426aa157ff6311b413c7ad
redhat-storage-server-3.1.2.0-1.el6rhs.src.rpm
    MD5: ec184da7a47b2ec9a5ca3e6623f4b458
    SHA-256: 9ebecb5ab92edb2a9ff5c04d47a1ffc1f2ab13961e295d1385a6aae1aa39669d
vdsm-4.16.30-1.3.el6rhs.src.rpm
    MD5: 451f0bc1ae93eb1ea587485e42fe7150
    SHA-256: 05625bcfa75967b9c5509143b3fc044c07e950a03252ff9a92e1b367d43fa3ac
 
x86_64:
gluster-nagios-common-0.2.3-1.el6rhs.noarch.rpm
    MD5: 4295cba50842d64306459be2ef63864c
    SHA-256: 16dd78cdaca6ff5c7bcb9e22b9ffea3e51a73c11c748d21661a3cd7b3d48de55
glusterfs-3.7.5-19.el6rhs.x86_64.rpm
    MD5: 917c68b7d81924d20c28474e025f7927
    SHA-256: a9c64a7089f68d948dc0b7fbdea03edeee5c697cd6f5a3cf11b060d0ee9555c2
glusterfs-api-3.7.5-19.el6rhs.x86_64.rpm
    MD5: 2ecc47e962298cca7fc3543b7c565c7a
    SHA-256: 94cb9bb3e4e28ad588c34de7c429b7e2f21335d76f2adbfca00d244b6ac8347a
glusterfs-api-devel-3.7.5-19.el6rhs.x86_64.rpm
    MD5: ebe6446c2684603897051127e486791f
    SHA-256: 470a420c2b0ea4be3dc0310b736ca06c4f8526428d19432c30d42b190d8411e9
glusterfs-cli-3.7.5-19.el6rhs.x86_64.rpm
    MD5: 309297c212e08c2c532151746cb34824
    SHA-256: 1ab4504600f5f07527ea32ccb3414675e1ef9d69a8776c94f252bcf6f652a129
glusterfs-client-xlators-3.7.5-19.el6rhs.x86_64.rpm
    MD5: be039008d50116f8206fcfe1a3b406e1
    SHA-256: ed71ba18711f1c29c3c0085801fec4054927c233cd14f222585381942fe5a06e
glusterfs-devel-3.7.5-19.el6rhs.x86_64.rpm
    MD5: d5ee626e24a1d28996fd367cbd2e28b0
    SHA-256: 4403158f5f32d2d859f21e7cbde34569a53c1736a50917a11bcc6f3db717b740
glusterfs-fuse-3.7.5-19.el6rhs.x86_64.rpm
    MD5: 87c23182d46c087e0d21a278e3705bd0
    SHA-256: fd166c206a830714f0b261c639c31c7cd4e7f56dfe86e2f15629f7f194e20dd1
glusterfs-ganesha-3.7.5-19.el6rhs.x86_64.rpm
    MD5: e5f42d0f07a99053ab7cfc48dcfd98c1
    SHA-256: 76d43bde9d0ef98bc143f2c061d2d041ad15413a142267bafce42b03d087a84e
glusterfs-geo-replication-3.7.5-19.el6rhs.x86_64.rpm
    MD5: d89dcfeb0ca05a1b8d2ae74d706d66e1
    SHA-256: 6570fe2e10eeb277f33f1e4b513c0c5c2ebb4f8f612e23747dbf496ce4fa4b0d
glusterfs-libs-3.7.5-19.el6rhs.x86_64.rpm
    MD5: 5089140ca75d2a6d4f80f55bcfce6318
    SHA-256: 5800d1e67829403f1efcb807e13b58a44f0d7b7dc5c86bb600fa3efe692120ce
glusterfs-rdma-3.7.5-19.el6rhs.x86_64.rpm
    MD5: 01f56b0e57c176d6f2e4cce844bc7a2e
    SHA-256: dc5724ba494c1b9e550c2ded3cd6c04178eea2ca013b62b723a6c7410fc3a3f4
glusterfs-server-3.7.5-19.el6rhs.x86_64.rpm
    MD5: b58fd78e9f219eb9677c37c79ff6b158
    SHA-256: 7e51828f7ca9189b66b15de2aa6cb891ba5883a64f39290bb8307a280773f227
heketi-1.0.2-1.el6rhs.x86_64.rpm
    MD5: 70d4241c016c27560087c4e02550c9be
    SHA-256: df508175d3e302cb983ff99f0f9383300fe3bc3400010f430560fbc1e5b244f4
heketi-devel-1.0.2-1.el6rhs.noarch.rpm
    MD5: 2cf71c6378370e25920fe33184fb12fd
    SHA-256: 37887606b89528ac0004773c14ec68865c07bee97af4fa7b68f7254fd406b0e5
heketi-unit-test-devel-1.0.2-1.el6rhs.x86_64.rpm
    MD5: 850394335edc9309a36ae384b360949a
    SHA-256: 7466b5b589e8efa4b638deb4edb8a53dfc1ed12995d680258a03fad9f39829b2
nrpe-2.15-4.2.el6rhs.x86_64.rpm
    MD5: bdb4280e6a2931d284e641ee2c8437d8
    SHA-256: 9b9c4b867ba0170f37eb619c2fb37d2eee4969618124fe272d58c5aab8ee5dc0
nsca-client-2.9.1-4.2.el6rhs.x86_64.rpm
    MD5: b54865e9903652cc18534f5e92f89150
    SHA-256: 18078268f97f143f7f14af6ef286914098132ded59f910342c37922edd260381
python-gluster-3.7.5-19.el6rhs.noarch.rpm
    MD5: bc88bb7b991001533f1efee2b05a710a
    SHA-256: 683e344e35a35fc9a4c62c929f739e711f5a9d7047ebfe4b34f7a716befbd46a
redhat-storage-server-3.1.2.0-1.el6rhs.noarch.rpm
    MD5: cc04c06ba5bee202679316f075174e3e
    SHA-256: 4750202eadacf29fe768352a87fb15f2b28bd595928635b0996f2f8b1e44f668
vdsm-4.16.30-1.3.el6rhs.x86_64.rpm
    MD5: f677cae5ba1e2fa2464ecfe65a29c850
    SHA-256: e585e32ab99da6ef3d72172adfc31af99f25974848be481cc0aa1e2839995486
vdsm-cli-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: 8d225ac02ef2f63c9cd9c3a45b02995d
    SHA-256: 66c7623ba0ce769a7e7dcf193a00c9fcd2b6f5e2e7a60e670d34497ad3105ab1
vdsm-debug-plugin-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: 16b8496e00e3a6a6186d2e59020e7217
    SHA-256: 22d89d89e71808bd6232be276294caacedba705b54019198fe4de72fb00078fa
vdsm-gluster-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: b286bc5f76b653bf79c0122de33ec8b1
    SHA-256: 4ed07ca857f048cad66782fc6dc1a4a388d20db548cb3d357413e8326567d612
vdsm-hook-ethtool-options-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: 2e71fc07a66fb614978ae174c0b95e44
    SHA-256: e5d802dce1f7da395bc8aab4bc6c20165ba01c8f52f2178c8ea3c9a696e2a227
vdsm-hook-faqemu-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: bafa45f767ce3dedd1f5cb0b14b3bb61
    SHA-256: 8978827311984d3eddc943e7d84a56a116569c7e42a214f5d4295316e54e5726
vdsm-hook-openstacknet-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: 9dbeea388333ebed695f09a049516131
    SHA-256: 35986951b9f9ef93896833c7defa9ec6aab135a7ef3724b1b2a4171333b0eb68
vdsm-hook-qemucmdline-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: 256912791dda53919a5e272ae0ba057a
    SHA-256: 27e2e023851b42fbe7dab7d8322b32b94319f38ed7f48be9efd5fdf8031c2c3f
vdsm-jsonrpc-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: b0d2815f419fdd2381fe455a763677dc
    SHA-256: 8a36dfcf0bf95e097d5b317e09eee160a81997f6f0830e7563ebdcdd6adb3147
vdsm-python-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: 45e0f83ca5310ff48f97a8ad6ca04214
    SHA-256: 96e47fd05943245267261fcde11685b9d868713862420ae19750231095bfd75b
vdsm-python-zombiereaper-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: df2e684720ccc720276d1eddb8488225
    SHA-256: cf61007f67ae7ec798d92970702f587de2bf93aecdd0e87a3cc23557f9505832
vdsm-reg-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: 04ed6fd07beedaee76c98736713acd9c
    SHA-256: d619f3a54f433e34fe7d2f9211384b9397e53f72c4a0d52259860621f85c9ec3
vdsm-tests-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: cdb7ea49c1a27258a9c1d811ba363c0e
    SHA-256: e13543b8351ff0fea6b7fa2117c21ad911056b73fa58035568b2e8efc8448439
vdsm-xmlrpc-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: d785c5e90a5537e910a39b9406eeaae3
    SHA-256: f0108a6e81e6c548c95cb3285df1f8289c0af871e0c94613de6d89d15a0506ce
vdsm-yajsonrpc-4.16.30-1.3.el6rhs.noarch.rpm
    MD5: 417f9da202e1a4df9ecc83f36ecd82b2
    SHA-256: 372940dc9bf4978d6eecb94ba504a32654a44c211c8d4b580dbef1d15fb82afa
 
(The unlinked packages above are only available from the Red Hat Network)

1018170 – quota: numbers of warning messages in nfs.log a single file itself
1060676 – [add-brick]: I/O on NFS fails when bricks are added to a distribute-replicate volume
1139193 – git operations fail when add-brick operation is done
1177592 – quota: rename of “dir” fails in case of quota space availability is around 1GB
1199033 – [epoll] Typo in the gluster volume set help message for server.event-threads and client.event-threads
1224064 – [Backup]: Glusterfind session entry persists even after volume is deleted
1224880 – [Backup]: Unable to delete session entry from glusterfind list
1228079 – [Backup]: Crash observed when keyboard interrupt is encountered in the middle of any glusterfind command
1228643 – I/O failure on attaching tier
1230540 – Quota list is not working on tiered volume.
1231144 – Data Tiering; Self heal deamon stops showing up in “vol status” once attach tier is done
1236020 – Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
1236052 – Data Tiering:Throw a warning when user issues a detach-tier commit command
1236153 – setting enable-shared-storage without mentioning the domain, doesn’t enables shared storage
1236503 – Disabling enable-shared-storage deletes the volume with the name – “gluster_shared_storage”
1237022 – Probing a new RHGS node, which is part of another cluster, should throw proper error message in logs and CLI
1237059 – DHT-rebalance: rebalance status shows failed when replica pair bricks are brought down in distrep volume while re-name of files going on
1238561 – FSAL_GLUSTER : nfs4_getfacl do not display DENY entries
1238634 – Though scrubber settings changed on one volume log shows all volumes scrubber information
1240502 – nfs-ganesha: remove the entry of the deleted node
1240918 – Quota: After rename operation , gluster v quota <volname> list-objects command give incorrect no. of files in output
1241436 – nfs-ganesha: refresh-config stdout output includes dbus messages “method return sender=:1.61 -> dest=:1.65 reply_serial=2″
1242022 – rdma : pending – porting log messages to a new framework
1242148 – With NFSv4 ACLs enabled, rename of a file/dir to an existing file/dir fails
1243534 – Error messages observed in cli.log
1243797 – quota/marker: dir count in inode quota is not atomic
1244792 – nfs-ganesha: nfs-ganesha debuginfo package has missing debug symbols
1247515 – [upgrade] Error messages seen in glusterd logs, while upgrading from RHGS 2.1.6 to RHGS 3.1
1247947 – [upgrade] After in-service software upgrade from RHGS 2.1 to RHGS 3.1, bumping up op-version failed
1248895 – [upgrade] After in-service software upgrade from RHGS 2.1.6 to RHGS 3.1, probing a new RHGS 3.1 node is moving the peer to rejected state
1251471 – FSAL_GLUSTER: Code clean up in acl implemenatation
1251477 – FSAL_GLUSTER : Removal of previous acl implementation
1257209 – With quota enabled, when files are created and deleted from mountpoint, error messages are seen in brick logs
1257343 – vol heal info fails when transport.socket.bind-address is set in glusterd
1257957 – nfs-ganesha: nfs-ganesha process gets killed while executing UNLOCK with a cthon test on vers=3
1258341 – Disperse volume: single file creation is generating many log messages
1260530 – Provide more meaningful errors on peer probe and peer detach
1261765 – NFS Ganesha export lost during IO on EC volume
1262191 – nfs-ganesha: having acls and quota enabled for volume and nfs-ganesha coredump while creating data
1262680 – IO hung on v4 ganesha mount
1264804 – ECVOL: glustershd log grows quickly and fills up the root volume
1265200 – quota: set quota version for files/directories
1267488 – [upgrade] Volume status doesn’t show proper information when nodes are upgraded from 2.1.6 to 3.1.1
1269203 – regression : RHGS 3.0 introduced a maximum value length in the info files
1269557 – FUSE clients in a container environment hang and do not recover post losing connections to all bricks
1270321 – Add heketi package to product RH Gluster Storage 3
1271178 – rm -rf on /run/gluster/vol/<directory name>/ is not showing quota output header for other quota limit applied directories
1271184 – quota : display the size equivalent to the soft limit percentage in gluster v quota <volname> list* command
1271648 – tier/cli: number of bricks remains the same in v info --xml
1271659 – gluster v status --xml for a replicated hot tier volume
1271725 – Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress to avoid deadlock(like remove brick commit pending etc)
1271727 – Tiering/glusted: volume status failed after detach tier start
1271733 – Tier/shd: Tracker bug for tier and shd compatibility
1271750 – glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
1271999 – After upgrading to RHGS 3.1.2 build, the other peer was shown as disconnected
1272335 – [Heketi] Not all /etc/fstab enteries are cleaned up after volume delete
1272341 – Data Tiering:Promotions fail when brick of EC (disperse) cold layer are down
1272403 – [Tier]: man page of gluster should be updated to list tier commands
1272407 – Data Tiering:error “[2015-10-14 18:15:09.270483] E [MSGID: 122037] [ec-common.c:1502:ec_update_size_version_done] 0-tiervolume-disperse-1: Failed to update version and size [Input/output error]”
1272408 – Data Tiering:[2015-10-15 02:54:52.259879] E [MSGID: 109039] [dht-common.c:2833:dht_vgetxattr_cbk] 0-tiervolume-cold-dht: vgetxattr: Subvolume tiervolume-disperse-1 returned -1 [No such file or directory]
1272452 – Data Tiering:heat counters not getting reset and also internal ops seem to be heating the files
1273347 – [Tier]: glusterfs crashed --volfile-id rebalance/tiervolume
1273348 – [Tier]: lookup from client takes too long {~7m for 18k files}
1273385 – tiering + nfs-ganesha: tiering has a segfault
1273706 – build: package release in NVR should only be integral
1273711 – Disperse volume: df -h on a nfs mount throws Invalid argument error
1273728 – Crash while bringing down the bricks and self heal
1273850 – Replica pairs in a volume shouldn’t be from the same node
1273868 – Heketi doesn’t allow deleting nodes with drives missing/inaccessible
1275155 – [Tier]: Typo in the output while setting the wrong value of low/hi watermark
1275158 – Data Tiering:Getting lookup failed on files in hot tier, when volume is restarted
1275515 – Reduce ‘CTR disabled’ brick log message from ERROR to INFO/DEBUG
1275521 – Wrong value of snap-max-hard-limit observed in ‘gluster volume info’.
1275525 – snap-max-hard-limit for snapshots always shows as 256 in info file.
1275633 – Clone creation should not be successful when the node participating in volume goes down.
1275751 – Data Tiering:File create terminates with “Input/output error” as split brain is observed
1275912 – AFR self-heal-daemon option is still set on volume though tier is detached
1275925 – [New] – Message displayed after attach tier is misleading
1275971 – [RFE] Geo-replication support for Volumes running in docker containers
1275998 – Data Tiering: “ls” count taking link files and promote/demote files into consideration both on fuse and nfs mount
1276051 – Data Tiering:inconsistent linkfile creation when lookups issued on cold tier files
1276227 – Data Tiering:delete command rm -rf not deleting files the linkto file(hashed) which are under migration and possible spit-brain observed and possible disk wastage
1276245 – [Tier]: Stopping and Starting tier volume triggers fixing layout which fails on local host
1276248 – [Tier]: restarting volume reports “insert/update failure” in cold brick logs
1276273 – [Tier]: start tier daemon using rebal tier start doesnt start tierd if it is failed on any of single node
1276330 – [heketi-cli] Incorrect error message when storage-host-name is missing while adding a node
1276334 – Data Tiering:tiering deamon crashes when trying to heat the file
1276340 – [heketi-cli] Inconsistency with the requirement for zone value
1276348 – nfs-ganesha: ACL issue after adding an ace for a user the file permissions gets modified
1276542 – RHGS-3.1.2 op-version need to be corrected
1276587 – [GlusterD]: After updating one of rhgs 2.1.6 node to 3.1.2 in two node cluster, volume status is failing
1276678 – CTR should be enabled on attach tier, disabled otherwise.
1277028 – SMB: share entry from smb.conf is not removed after setting user.cifs and user.smb to disable.
1277043 – Upgrading to 3.7.-5-5 has changed volume to distributed disperse
1277088 – Data Tiering:Rename of cold file to a hot file causing split brain and showing two copies of files in mount point
1277126 – [New] – Files in a tiered volume gets promoted when bitd signs them
1277316 – Data Tiering: fix lookup-unhashed for tiered volumes.
1277359 – Data Tiering:Filenames with spaces are not getting migrated at all
1277368 – Bit rot version and signature for the files on a tiered volume are missing after few promotions and demotions of the files.
1277659 – lookup and set xattr fails when bit rot is enabled on a tiered volume.
1277886 – FSAL_GLUSTER : if only DENY entry is set for a user/group, then it lost all its default permission
1277944 – “Transport endpoint not connected” in heal info though hot tier bricks are up
1278254 – [Snapshot]: Clone creation fails on tiered volume with pre-validation failed message
1278270 – [Tier]: “failed to reset target size back to 0” errors in tier logs while performing rename ops
1278279 – EC: File healing promotes it to hot tier
1278346 – Data Tiering:Regression:NFS crashed due to dht readdirp after attach tier
1278384 – ‘ls’ on client mount lists varying number of files while promotion/demotion
1278389 – Data Tiering: Tiering deamon is seeing each part of a file in a Disperse cold volume as a different file
1278390 – Data Tiering:Regression:Detach tier commit is passing when detach tier is in progress
1278399 – I/O failure on attaching tier on nfs client
1278408 – [Tier]: Volume start failed after tier attach to newly created stopped volume.
1278419 – Data Tiering:Data Loss:File migrations(flushing of data) to cold tier fails on detach tier with quota limits reached
1278723 – Tier : Move common functions into tier.rc
1278754 – Data Tiering:Metadata changes to a file should not heat/promote the file
1278798 – Few snapshot creation fails with pre-validation failed message on tiered volume.
1279314 – [Tier]: After volume restart, unable to stop the started detach tier
1279350 – [Tier]: Space is missed b/w the words in the detach tier stop error message
1279830 – File creation in nested folders fails when add-brick operation is done on a volume with exclusive file lock.
1280410 – ec-readdir.t is failing consistently
1281304 – sometimes files are not getting demoted from hot tier to cold tier
1281946 – Large system file distribution is broken
1282701 – build: compile error on RHEL5
1282729 – Creation of files on hot tier volume taking very long time
1283035 – [GlusterD]: Incorrect peer status showing if volume restart done before entire cluster update.
1283050 – self-heal won’t work in disperse volumes when they are attached as tiers
1283057 – nfs-ganesha+tiering: the fs-sanity is taking is more than 24 hours to complete on nfs vers=3
1283410 – cache mode must be the default mode for tiered volumes
1283505 – when scrubber is scheduled scrubd moves the files to hot tier
1283563 – libgfapi to support set_volfile-server-transport type “unix”
1283566 – qupta/marker: backward compatibility with quota xattr vesrioning
1283608 – nfs-ganesha: Upcall sent on null gfid
1283940 – Data Tiering: new set of gluster v tier commands not working as expected
1283961 – Data Tiering:Change the default tiering values to optimize tiering settings
1284387 – Without detach tier commit, status changes back to tier migration
1284834 – tiering: Seeing error messages E “/usr/lib64/glusterfs/3.7.5/xlator/features/changetimerecorder.so(ctr_lookup+0x54f) [0x7f6c435c116f] ) 0-ctr: invalid argument: loc->name [Invalid argument] after attach tier
1285166 – Snapshot creation after attach-tier causes glusterd crash
1285226 – Masking the wrong values in Bitrot status command
1285238 – Corrupted objects list does not get cleared even after all the files in the volume are deleted and count increases as old + new count
1285281 – CTDB: yum update fails on RHEL6 for ctdb package with dependency on procps-ng and systemd-units
1285295 – [geo-rep]: Recommended Shared volume use on geo-replication is broken in latest build
1285306 – Unresolved dependencies on ctdb-4.2.4-7.el6rhs.x86_64
1285651 – [Tier]: Error: attempt to set internal xattr: trusted.ec.* [Operation not permitted]
1285678 – RHGS312 RHEL7.2 based ISO is not working. throws error like ImportError; no library named udev
1285783 – fops-during-migration.t fails if hot and cold tiers are dist-rep
1285797 – tiering: T files getting created , even after disk quota exceeds
1285958 – [GlusterD]: NFS service not running after layered installation of RHGS on RHEL7.x
1285998 – Possible memory leak in the tiered daemon
1286058 – Brick crashes because of race in bit-rot init
1286218 – Data Tiering:Watermark:File continuously trying to demote itself but failing ” [dht-rebalance.c:608:__dht_rebalance_create_dst_file] 0-wmrk-tier-dht: chown failed for //AP.BH.avi on wmrk-cold-dht (No such file or directory)”
1286346 – Data Tiering:Don’t allow or reset the frequency threshold values to zero when record counter features.record-counter is turned off
1286604 – glusterfsd to support volfile-server-transport type “unix”
1286605 – vol quota enable fails when transport.socket.bind-address is set in glusterd
1286637 – [geo-rep+tiering]: symlinks are not getting synced to slave on tiered master setup
1286654 – Data Tiering:Read heat not getting calculated and read operations not heating the file with counter enabled
1286927 – Tier: ec xattrs are set on a newly created file present in the non-ec hot tier
1287447 – remove watermark ie cluster.tier-mode from vol info after a detach tier is completed successfully
1287532 – After detach-tier start writes still go to hot tier
1287980 – [Quota]: Peer status is in “Rejected” state with Quota enabled volume
1287997 – tiering: quota list command is not working after attach or detach
1288490 – Good files does not promoted in a tiered volume when bitrot is enabled
1288509 – rm -rf is taking very long time
1288921 – Use after free bug in notify_kernel_loop in fuse-bridge code
1288988 – Getting errors while launching the selfheal
1289017 – Failed to show rebalance status if the volume name has ‘tier’ substring
1289071 – [Tier]: Failed to open “demotequeryfile-master-tier-dht” errors logged on the node having only cold bricks
1289092 – update redhat-release-server to the latest one available in rhel7.2
1289228 – [Tiering] + [DHT] – Detach tier fails to migrate the files when there are corrupted objects in hot tier.
1289423 – Regular files are listed as ‘T’ files on nfs mount
1289437 – [Tier]: rm -rf * from client during demotion causes a stale link file to remain in system with attributes as ?????
1289483 – FSAL_GLUSTER : Rename throws error in mount when acl is enabled
1289893 – Excessive “dict is NULL” logging
1289975 – Access to files fails with I/O error through uss for tiered volume
1290401 – File is not demoted after self heal (split-brain)
1291052 – [tiering]: read/write freq-threshold allows negative values
1291152 – [tiering]: cluster.tier-max-files option in tiering is not honored
1291195 – [georep+tiering]: Geo-replication sync is broken if cold tier is EC
1291560 – Renames/deletes failed with “No such file or directory” when few of the bricks from the hot tier went offline
1291566 – first file created after hot tier full fails to create, but later ends up as a stale erroneous file (file with ???????????)
1291969 – [Tiering]: When files are heated continuously, promotions are too aggressive that it promotes files way beyond high water mark
1292205 – When volume creation fails, gluster volume and brick lvs are not deleted
1292605 – (RHEL6) hook script for CTDB should not change Samba config
1292705 – gluster cli crashed while performing ‘gluster vol bitrot <vol_name> scrub status’
1292773 – (RHEL6) S30Samba scripts do not work on systemd systems
1293228 – Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
1293237 – [Tier]: “Bad file descriptor” on removal of symlink only on tiered volume
1293286 – heal info output shouldn’t print number of entries processed when brick is unreachable.
1293380 – [tiering]: Tiering isn’t started after attaching hot tier and hence no promotion/demotion
1293903 – [Tier]: can not delete symlinks from client using rm
1294073 – [tiering]: Incorrect display of ‘gluster v tier help’
1294478 – quota: limit xattr not healed for a sub-directory on a newly added bricks
1294487 – glusterfsd crash while bouncing the bricks
1294594 – [Tier]: Killing glusterfs tier process doesn’t reflect as failed/faulty in tier status
1294774 – Quota Aux mount crashed
1294816 – Unable to modify quota hard limit on tier volume after disk limit got exceeded
1295299 – glusterfs fuse mount session hangs indefinitely when a file create progressively fills up the hot tier completely
1295736 – “Operation not supported” error logs seen continuosly in brick logs
1296048 – Attach tier + nfs : Creates fail with invalid argument errors
1296134 – Rebalance crashed after detach tier.
1297004 – [write-behind] : Write/Append to a full volume causes fuse client to crash
1297300 – Stale stat information for corrupted objects (replicated volume)
1299724 – Excessive logging in mount when bricks of the replica are down
1299799 – Snapshot creation fails on a tiered volume
1300246 – [Tiering]: Values of watermarks, min free disk etc will be miscalculated with quota set on root directory of gluster volume
1300682 – [georep+tiering]: Hardlink sync is broken if master volume is tiered
1302901 – SMB: SMB crashes with AIO enabled on reads + vers=3.0
1303894 – promotions not happening when space is created on previously full hot tier
1304684 – [quota]: Incorrect disk usage shown on a tiered volume
1305172 – [Tier]: Endup in multiple entries of same file on client after rename which had a hardlinks

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
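As a sketch of the verification step (the file below is a placeholder created purely for illustration; in practice you would run sha256sum against the package you downloaded and compare it with the matching SHA-256 value listed above, and use rpm --checksig for the GPG signature itself):

```shell
# Compare a download's digest against an advisory checksum value.
# "pkg.rpm" and the expected digest are placeholders for this sketch.
printf 'hello\n' > pkg.rpm    # stand-in for a downloaded package
expected="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"
actual=$(sha256sum pkg.rpm | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
    exit 1
fi
```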