
Tag: Fully-Qualified Domain Name (FQDN)

Details

rhosp-director-images packages are now available for Red Hat OpenStack Platform 9 Release Candidate.

Changes to the rhosp-director-images component:

* This update resolves an issue whereby the image building process was missing DIB_CLOUD_INIT_ETC_HOSTS=false, which would result in the '/etc/hosts' file containing entries for the node names and FQDN pointing to the loopback address, causing the cluster to be unable to start. (BZ#1337465)

Solution

Before applying this update, ensure all previously released errata relevant to your system have been applied. Red Hat OpenStack Platform 9 runs on Red Hat Enterprise Linux 7.2.

The Red Hat OpenStack Platform 9 Release Notes contain the following:
* An explanation of the way in which the provided components interact to form a working cloud computing environment.
* Technology Previews, Recommended Practices, and Known Issues.
* The channels required for Red Hat OpenStack Platform 9, including which channels need to be enabled and disabled.

The Release Notes are available at:
https://access.redhat.com/documentation/en/red-hat-openstack-platform/9/single/release-notes

This update is available through 'yum update' on systems registered through Red Hat Subscription Manager.
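According to the description above, the missing DIB_CLOUD_INIT_ETC_HOSTS=false setting caused '/etc/hosts' on overcloud nodes to map the node name and FQDN to the loopback address. As a rough illustration of that symptom only (this is not a tool shipped with the update, and the file path and hostname lookups are assumptions for the example), a short Python check could list the offending entries on a node:

    #!/usr/bin/env python3
    # Illustrative check, not part of this update: list /etc/hosts entries that
    # map this node's hostname or FQDN to a loopback address, the symptom
    # described in BZ#1337465.
    import socket

    LOOPBACK_PREFIXES = ("127.", "::1")

    def loopback_hostname_entries(hosts_path="/etc/hosts"):
        """Return lines that point the local hostname/FQDN at a loopback address."""
        names = {socket.gethostname(), socket.getfqdn()}
        suspects = []
        with open(hosts_path) as hosts_file:
            for line in hosts_file:
                fields = line.split("#", 1)[0].split()
                if not fields:
                    continue
                address, aliases = fields[0], fields[1:]
                if address.startswith(LOOPBACK_PREFIXES) and names.intersection(aliases):
                    suspects.append(line.rstrip())
        return suspects

    if __name__ == "__main__":
        for entry in loopback_hostname_entries():
            print("suspicious entry: " + entry)

On an image built with the fixed packages, such a check would be expected to print nothing, since the node name and FQDN should no longer be tied to a loopback address.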

For more information about Red Hat Subscription Manager, see:
https://access.redhat.com/documentation/en-US/Red_Hat_Subscription_Management/1/html/RHSM/index.html

Updated packages

Red Hat OpenStack 9.0 director for RHEL 7

SRPMS:
rhosp-director-images-9.0-20160805.1.el7ost.src.rpm     MD5: 877d47bb724d7b7d1db0689964f59a19     SHA-256: 546f8209763f59eed95fc17480785b335892c62591982fdf7ebc3e5e6b845ea8

x86_64:
rhosp-director-images-9.0-20160805.1.el7ost.noarch.rpm     MD5: c4003d645420cad064b3155b440aebb0     SHA-256: 48bdbafd8c731db915b40ea296cd640fbac68474c76f1fe1fd2d6890dbc06003
rhosp-director-images-ipa-9.0-20160805.1.el7ost.noarch.rpm     MD5: fcdcb992ab2a46d64ab8c03d8e5d8484     SHA-256: 7c5e3770e920d9a3da19817fd4d46ac6a8eb5ef16903b776de1c738595a3444b

(The unlinked packages above are only available from the Red Hat Network)

Bugs fixed (see bugzilla for more information)

1337465 - Overcloud nodes /etc/hosts file contains entry pointing to the loopback address for the node's hostname
1358923 - New images for director.

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
https://www.redhat.com/security/team/key/#package

The Red Hat security contact is secalert@redhat.com. More contact details at http://www.redhat.com/security/team/contact/
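Each package above is listed with its MD5 and SHA-256 sums, so a downloaded RPM can be checked against the advisory before installation. The following is a minimal sketch, assuming the noarch image package has already been downloaded to the current directory; the expected digest is copied from the list above and the helper name is illustrative:

    #!/usr/bin/env python3
    # Illustrative only: compare a downloaded RPM's SHA-256 digest with the value
    # published in the advisory. The expected digest is copied from the package
    # list above; the local path is an assumption for the example.
    import hashlib

    EXPECTED_SHA256 = {
        "rhosp-director-images-9.0-20160805.1.el7ost.noarch.rpm":
            "48bdbafd8c731db915b40ea296cd640fbac68474c76f1fe1fd2d6890dbc06003",
    }

    def sha256_of(path, chunk_size=1024 * 1024):
        """Stream the file so large image packages need not fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as rpm_file:
            for chunk in iter(lambda: rpm_file.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        for name, expected in EXPECTED_SHA256.items():
            status = "OK" if sha256_of(name) == expected else "MISMATCH"
            print(name + ": " + status)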
Red Hat Gluster Storage 3.1 Update 3, which fixes several bugs and adds various enhancements, is now available for Red Hat Enterprise Linux 6.

Red Hat Gluster Storage is a software-only, scale-out storage solution that provides flexible and affordable unstructured data storage. It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges. This update fixes numerous bugs and adds various enhancements.
Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in the References section, for information on the most significant of these changes. All users of Red Hat Gluster Storage are advised to upgrade to these updated packages.

Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258

Red Hat Gluster Storage Management Console 3.1 on RHEL-6

SRPMS:
gluster-nagios-common-0.2.4-1.el6rhs.src.rpm     MD5: 260e12660861c1c2771db83891438d87     SHA-256: 6bce0e4357b1e369ca0ec91f7cde4fd24c25e4b865ec2d1dbe4941c395436d1a
nagios-server-addons-0.2.5-1.el6rhs.src.rpm     MD5: e72890095baba1b3dba5c659ca74cb40     SHA-256: 1d363b40d84896f464a4590f1ecf205ff9dc23371931bc120564206fcc93031e

x86_64:
gluster-nagios-common-0.2.4-1.el6rhs.noarch.rpm     MD5: 62f9908cf01371bac9dfb0bf509e44a8     SHA-256: b09188c6a874d584f5fad17545d1479c30cdb36509ed07a2db3cc9f50f069cbb
nagios-server-addons-0.2.5-1.el6rhs.noarch.rpm     MD5: e4cc4155de7b9d3c0a3b1dbd368d6530     SHA-256: 84183bb7d1588a26dcdfc5df626893c295102b949ea6cd6d46469688552ba09a

Red Hat Gluster Storage Server 3.1 on RHEL-6

SRPMS:
gluster-nagios-addons-0.2.7-1.el6rhs.src.rpm     MD5: a991fd44479dd005025abe3900097a24     SHA-256: 8fc121540b9009e5b16b3d3f18bab99f76b4c225cc1443e5dd4d950377dcd092
gluster-nagios-common-0.2.4-1.el6rhs.src.rpm     MD5: 260e12660861c1c2771db83891438d87     SHA-256: 6bce0e4357b1e369ca0ec91f7cde4fd24c25e4b865ec2d1dbe4941c395436d1a
glusterfs-3.7.9-10.el6rhs.src.rpm     MD5: 142b5896697b0d638bf21771a3b8f058     SHA-256: 7eeafa1c4512d105120d971134455bbdd1a7fa3166db1778a1337e2cdd3a161d
redhat-storage-server-3.1.3.0-3.el6rhs.src.rpm     MD5: 49c433b6e57f831fc18b879c330ff760     SHA-256: 7937778f08851cafe73a479fc3bdd17a256556ba6328117131a8a9f6efb998d4
sanlock-2.8-3.el6.src.rpm     MD5: 355e9a46502daab60af0d039a0e128b2     SHA-256: 75411b8a7567b52237ba02e8b2a98579b0d901e5eb111bcd214205ce18073fb7
vdsm-4.16.30-1.5.el6rhs.src.rpm     MD5: 995aea7b66bbec5c3873f935d7280eca     SHA-256: 8b5a81db824eaaf431a6d19b65281064f6d81418ef2b59dbd2b1515b862dd550

x86_64:
fence-sanlock-2.8-3.el6.x86_64.rpm     MD5: 53e0be9a25f274e82f07ab2da26a5363     SHA-256: 41780d1f38de4676d43ee3fd5b9aea7186206c78b199f42e0d64e3f8c0d40a70
gluster-nagios-addons-0.2.7-1.el6rhs.x86_64.rpm     MD5: 9b03235f098589d284deac2af40a5240     SHA-256: 71b2fcb36fde8d3a105347b0950934cc7d1375565c8e86320fa77aa35a5656e2
gluster-nagios-common-0.2.4-1.el6rhs.noarch.rpm     MD5: 62f9908cf01371bac9dfb0bf509e44a8     SHA-256: b09188c6a874d584f5fad17545d1479c30cdb36509ed07a2db3cc9f50f069cbb
glusterfs-3.7.9-10.el6rhs.x86_64.rpm     MD5: a668ad58a5e27e1a6cff4d5470fd0060     SHA-256: 305b0c0ddfad2654425320dab09e5d5ee9307829e77ee22f0cf2a95c6c6b6596
glusterfs-api-3.7.9-10.el6rhs.x86_64.rpm     MD5: b5731abe36a8ec83ac35b11d25784c57     SHA-256: 461df35b98af428868d4e2b937ca0f01cc3e7156fbb8a2ca3c22b2ed891f2e8a
glusterfs-api-devel-3.7.9-10.el6rhs.x86_64.rpm     MD5: 36bbdb0c5a2196ad66d3defbd67d5a26     SHA-256: 5e4e63f353e040d2e7e53a3cd133d6ffd4d174ec100210d1b3cee5c6073601d1
glusterfs-cli-3.7.9-10.el6rhs.x86_64.rpm     MD5: 94868b54495567ac51861d5da6dedab7     SHA-256: 48878a33080ac6bf2ff0b0e8b38bb7286c217d10d0fbbf8f971ce6b372e99853
glusterfs-client-xlators-3.7.9-10.el6rhs.x86_64.rpm     MD5: ed7c5e092603f0d2da93f7653843f6b9     SHA-256: 5e7c4f343dd1a6743238dbc99a5321f19b6d374992079e1e93c814ecf6b297d9
glusterfs-devel-3.7.9-10.el6rhs.x86_64.rpm     MD5: 5116b99ed67013171e8f94c75d245d54     SHA-256: 6a4904c65e928fb65d44595f7e62234f00c2cdafa709dc4e5671459cd77a24d7
glusterfs-fuse-3.7.9-10.el6rhs.x86_64.rpm     MD5: 12752f88b2021d02135fc837b52cb933     SHA-256: 6424dd37f2d21fb64f5ca2b83f110175a9b35700247c2b259d665cc4bbd5db42
glusterfs-ganesha-3.7.9-10.el6rhs.x86_64.rpm     MD5: fd1207727b31cd8b8bae31f59140648b     SHA-256: 03f4cb6cf95dde9ad43fd8b2432552655f8da34fa8a3b373478cda166bfe8f4d
glusterfs-geo-replication-3.7.9-10.el6rhs.x86_64.rpm     MD5: 0bb343d9c8bf4aa184c4ffca65530e0c     SHA-256: 135d929092ed913c62e95c64d46143cf2674d8e2e04b3c979ce5149cf1bea33a
glusterfs-libs-3.7.9-10.el6rhs.x86_64.rpm     MD5: f51b395b2082241d68087566c0fe9aa2     SHA-256: ba6cee5f5b3b54228d778f9086f482da4e394760255285b4b6ce16bb5adb7c76
glusterfs-rdma-3.7.9-10.el6rhs.x86_64.rpm     MD5: 9244b2ff5b12f0ea59edda941f547f34     SHA-256: 80826f79c21f4244c12982717b44e8754a84ad92748c392f9130eccb03fed1e2
glusterfs-server-3.7.9-10.el6rhs.x86_64.rpm     MD5: 8029efcef08a95d2d6a8fd35ce356590     SHA-256: 9596f41543a5d84256ea05ed0054121cdc2b29e64ef43583ee880a317ff80ec2
python-gluster-3.7.9-10.el6rhs.noarch.rpm     MD5: 3482fc405ced4444f8409e3fa7279ec1     SHA-256: 99200c3a3f06042f9a6b36719dc1401acca9c7b3a4fa09ef4f06bc59bc3345e5
redhat-storage-server-3.1.3.0-3.el6rhs.noarch.rpm     MD5: 63e93327190c878f258ea012273e145c     SHA-256: 1be8adf0f800c7d7bfc3301bf2c682d985f24b54e665b5ccdd8cc529d19ddcf9
sanlock-2.8-3.el6.x86_64.rpm     MD5: 582fbf28c2e677c5e597c65bfea545bc     SHA-256: acc707727d0fe2c571c6c68d2e8845072f8506c822eb38641848e1acea35a470
sanlock-devel-2.8-3.el6.x86_64.rpm     MD5: 92a621cc855d63acc16f064169bc2cac     SHA-256: 6c048c6979ff24468871b38f8315492fe4183f7426f4b7dfe3b620cc171891c9
sanlock-lib-2.8-3.el6.x86_64.rpm     MD5: 782df5f57f2f37e7b5ee2ad865227ab0     SHA-256: cb77ea3f0f9b656db00bff30863b5d55914059b586de0b7342ffa78779f8e4d7
sanlock-python-2.8-3.el6.x86_64.rpm     MD5: f2d6556031b06f563439d5e6eebf0b08     SHA-256: 86fe9b0f1f2c356878bdd2f8ae556bf0a5936603103fd737255ab4b313470147
vdsm-4.16.30-1.5.el6rhs.x86_64.rpm     MD5: b42a804d88058c81ba276c6f97399ae8     SHA-256: 1203a6326ffd12e553295238de23319bc6d8a8744cafc2aa87f99518750470f5
vdsm-cli-4.16.30-1.5.el6rhs.noarch.rpm     MD5: cf560978c78fe83d2164a23aad737e10     SHA-256: 347c968a53912dbe93a3854b881413d65a35fcec1539516712d6800e86b56f71
vdsm-debug-plugin-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 68bc9a6b05e566fa10b605f3203e13b7     SHA-256: 1c919062f708460e7d237764401056f6edec6e3ac9255c5162f7437e170bc8fd
vdsm-gluster-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 16e07f4b543e995b7be26aa6e6c59da8     SHA-256: 205379ab1241bafff17feec151fd3e562138ee5d302c8c5bf95c6e4f27fc3031
vdsm-hook-ethtool-options-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 4524e19c3b299091376806b545b33755     SHA-256: 46e5157ccc436bfa6a6af9ff09ef7847b942749e01f02bd8588778663e1a8f78
vdsm-hook-faqemu-4.16.30-1.5.el6rhs.noarch.rpm     MD5: bf6b18bd09c62480f61d1e5a1b66f38e     SHA-256: e21be55862a9fb1e866253b6720f860bca31536261340f52ba1294d9e3fc6045
vdsm-hook-openstacknet-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 1c31b21e9d38c4f731e43d6deeb4e41b     SHA-256: 66802a251b684b05ec2457d59b248ef16afcf47fb3c2af364ec937ed9c002cc6
vdsm-hook-qemucmdline-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 6c449b08587d4dd089a7e059ead8c613     SHA-256: 6cddd40113908ddf68d7a21b7c4279d86efba9e463840bcc29775fce3e0fb749
vdsm-jsonrpc-4.16.30-1.5.el6rhs.noarch.rpm     MD5: b43c619785da841de91b6e01ea75c197     SHA-256: a835a56dcca2e8cc0b8d5983324123772ab35be4af602b6a167ef81b92634ceb
vdsm-python-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 74e5261a17279116ba23cebde4390eb3     SHA-256: b37448fdf53556e19ffe325a412472c7a686da5cf603f2a042d1eecfee894b22
vdsm-python-zombiereaper-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 87812029f2a610cb3052b1b5033af723     SHA-256: 668985221fd09484e0122b0a30ec470c68c76e7a4ea89f6ae9e235be683e0c72
vdsm-reg-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 1a01c742c741719b778e56e3a64ec0c2     SHA-256: 4e7ea473f093d23a00a9db1df64c52005f72b24047fb0a40638b7984157d7532
vdsm-tests-4.16.30-1.5.el6rhs.noarch.rpm     MD5: b9fc38b48c76e93c362a8415c799bf36     SHA-256: 992143157f879f1192f587a4ede694e28a129a17d577fa8badd87cebeb38cbf0
vdsm-xmlrpc-4.16.30-1.5.el6rhs.noarch.rpm     MD5: 8f5ad3bdceae84b0d58d6fd281437566     SHA-256: 764ccbd06edcbe3998b231fd727cf23ad0f31fb0ca0c4201e2821dde3f3b2cf5
vdsm-yajsonrpc-4.16.30-1.5.el6rhs.noarch.rpm     MD5: a4aaf3f5d4a21a2c20cfbbe100b9b49b     SHA-256: 80fb1b5b59cda0f057a34c5bbde56fdf209bffa4ddaf62755071ef6867ff60aa

Red Hat Storage Native Client

SRPMS:
glusterfs-3.7.9-10.el6.src.rpm     MD5: 2b59362ec80bdd1e7cb0ad26800c6a67     SHA-256: 9928854a6dd3e5b59a0a80a3ec0bd354f7002fb97fa0e2f51bc5558e0269c65b

x86_64:
glusterfs-3.7.9-10.el6.x86_64.rpm     MD5: 84addaba995315d1ce8a2a0331b3215f     SHA-256: 8ce97cc691af0b1bfb947c2db31a7832520db00ff5f5a4ebc5d2ba97926e946d
glusterfs-api-3.7.9-10.el6.x86_64.rpm     MD5: 4232d2980ea3b81fc356c640add5dfee     SHA-256: a378dcfb4c8c4f2b02ea9be6e0f7a9cbe755de9cf0123b4c73770b6ccbaa0a9e
glusterfs-api-devel-3.7.9-10.el6.x86_64.rpm     MD5: b8fc0d5c28c9443a4256ed7414eb68b9     SHA-256: 0df6c68b1288917c27e7207782830004344fde09ccfb2dd5daf96f1a125dfda8
glusterfs-cli-3.7.9-10.el6.x86_64.rpm     MD5: c6cf2c4dc6496e3ba966764e08cbcdf6     SHA-256: 58af33622428911eae561444ed31c76c8ecd8b9acd2d1dca803b4deb1f93ca5b
glusterfs-client-xlators-3.7.9-10.el6.x86_64.rpm     MD5: 5a65232b01ee82bd636c7a52a3554bf8     SHA-256: ae3d813c13a97dfa42d95b31afc771e07a62af8b66dcf61930502c81b257a3ca
glusterfs-debuginfo-3.7.9-10.el6.x86_64.rpm     MD5: 212ec5744fefb5e3ea5f9d7aaf26d131     SHA-256: a511f31fa3e710b9677e7dce76839eb4f1e3329041cda485835eb08cc7aa3077
glusterfs-devel-3.7.9-10.el6.x86_64.rpm     MD5: 964506221780adfad73724ff8b55a597     SHA-256: 46396beea811b55a434c4ff859854b40ea935d56f762754f6694da6c28f41e07
glusterfs-fuse-3.7.9-10.el6.x86_64.rpm     MD5: a94f6f1a49259147bc834ce16baf9957     SHA-256: c2f9e176d6b5dcf97f368e83b7eec846636cc16d3886ef73007b0ef00faf3476
glusterfs-libs-3.7.9-10.el6.x86_64.rpm     MD5: 480d0b63524e8aa50d0e494b4d0f2f2f     SHA-256: 5076abf1c65621b9dae3e0cd2aeb04de1f945e2661bcf58e9e771831b12a8a97
glusterfs-rdma-3.7.9-10.el6.x86_64.rpm     MD5: e9524613fd2497145488a1193480c8b3     SHA-256: 8004756d1a797d92f614159a81b8974250ad26cc8f3555dd0889f5f527c85728
python-gluster-3.7.9-10.el6.noarch.rpm     MD5: 1c7886a87c4d5b830203ce1ef740af5a     SHA-256: 977a2b0a1b7a365c2fe86db0f3ba89c0bf7e2f499e6ad36c7057628777e8e09c

(The unlinked packages above are only available from the Red Hat Network)

Bugs fixed (see bugzilla for more information)

1101702 - setting lower op-version should throw failure message
1113954 - glusterd logs are filled with "readv on /var/run/a30ad20ae7386a2fe58445b1a2b1359c.socket failed (Invalid argument)"
1114045 - DHT :- log is full of ' Found anomalies in /<DIR> (gfid = 00000000-0000-0000-0000-000000000000)' - for each Directory which was self healed
1115367 - "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes
1118762 - DHT : few Files are not accessible and not listed on mount + more than one Directory have same gfid + (sometimes) attributes has ?? in ls output after renaming Directories from multiple client at same time
1121186 - DHT : If Directory deletion is in progress and lookup from another mount heals that Directory on sub-volumes. then rmdir/rm -rf on parents fails with error 'Directory not empty'
1159263 - [USS]: Newly created directories doesnt have .snaps folder
1162648 - [USS]: There should be limit to the size of "snapshot-directory"
1231150 - After resetting diagnostics.client-log-level, still Debug messages are logging in scrubber log
1233213 - [New] - volume info --xml gives host UUID as zeros
1255639 - [libgfapi]: do an explicit lookup on the inodes linked in readdirp
1258875 - DHT: Once remove brick start failed in between Remove brick commit should not be allowed
1261838 - [geo-rep]: Multiple geo-rep session to the same slave is allowed for different users
1273539 - Remove dependency of glusterfs on rsyslog
1276219 - [GlusterD]: After log rotate of cmd_history.log file, the next executed gluster commands are not present in the cmd_history.log file.
1277414 - [Snapshot]: Snapshot restore stucks in post validation.
1277828 - RFE:nfs-ganesha:prompt the nfs-ganesha disable cli to let user provide "yes or no" option
1278332 - nfs-ganesha server do not enter grace period during failover/failback
1279628 - [GSS]-gluster v heal volname info does not work with enabled ssl/tls
1282747 - While file is self healing append to the file hangs
1283957 - Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
1285196 - Dist-geo-rep : checkpoint doesn't reach even though all the files have been synced through hybrid crawl.
1285200 - Dist-geo-rep : geo-rep worker crashed while init with [Errno 34] Numerical result out of range.
1285203 - dist-geo-rep: status details incorrectly indicate few files as skipped when all the files are properly synced to slave
1286191 - dist-rep + quota : directory selfheal is not healing xattr 'trusted.glusterfs.quota.limit-set'; If you bring a replica pair down
1287951 - [GlusterD]Probing a node having standalone volume, should not happen
1289439 - snapd doesn't come up automatically after node reboot.
1290653 - [GlusterD]: GlusterD log is filled with error messages - " Failed to aggregate response from node/brick"
1291988 - [geo-rep]: ChangelogException: [Errno 22] Invalid argument observed upon rebooting the ACTIVE master node
1292034 - nfs-ganesha installation : no pacemaker package installed for RHEL 6.7
1293273 - [GlusterD]: Peer detach happening with a node which is hosting volume bricks
1294062 - [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
1294612 - Self heal command gives error "Launching heal operation to perform index self heal on volume vol0 has been unsuccessful"
1294642 - quota: handle quota xattr removal when quota is enabled again
1294751 - Able to create files when quota limit is set to 0
1294790 - promotions and demotions not happening after attach tier due to fix layout taking very long time(3 days)
1296176 - geo-rep: hard-link rename issue on changelog replay
1298068 - GlusterD restart, starting the bricks when server quorum not met
1298162 - fuse mount crashed with mount point inaccessible and core found
1298955 - [GSS] - Setting of any option using volume set fails when the clients are 3.0.4 and server is 3.1.1
1299432 - Glusterd: Creation of volume is failing if one of the brick is down on the server
1299737 - values for Number of Scrubbed files, Number of Unsigned files, Last completed scrub time and Duration of last scrub are shown as zeros in bit rot scrub status
1300231 - 'gluster volume get' returns 0 value for server-quorum-ratio
1300679 - promotions not balanced across hot tier sub-volumes
1302355 - Over some time Files which were accessible become inaccessible(music files)
1302553 - heal info reporting slow when IO is in progress on the volume
1302688 - [HC] Implement fallocate, discard and zerofill with sharding
1303125 - After GlusterD restart, Remove-brick commit happening even though data migration not completed.
1303591 - AFR+SNAPSHOT: File with hard link have different inode number in USS
1303593 - [USS]: If .snaps already exists, ls -la lists it even after enabling USS
1304282 - [USS]: Need defined rules for snapshot-directory, setting to a/b works but in linux a/b is b is subdirectory of a
1304585 - quota: disabling and enabling quota in a quick interval removes quota's limit usage settings on multiple directories
1305456 - Errors seen in cli.log, while executing the command 'gluster snapshot info --xml'
1305735 - Improve error message for unsupported clients
1305836 - DHT: Take blocking locks while renaming files
1305849 - cd to .snaps fails with "transport endpoint not connected" after force start of the volume.
1306194 - NFS+attach tier:IOs hang while attach tier is issued
1306218 - quota: xattr trusted.glusterfs.quota.limit-objects not healed on a root of newly added brick
1306667 - Newly created volume start, starting the bricks when server quorum not met
1306907 - [New] - quarantine folder becomes empty and bitrot status does not list any files which are corrupted
1308837 - Peers goes to rejected state after reboot of one node when quota is enabled on cloned volume.
1311362 - [AFR]: "volume heal info" command is failing during in-service upgrade to latest.
1311839 - False positives in heal info
1313290 - [HC] glusterfs mount crashed
1313320 - features.sharding is not available in 'gluster volume set help'
1313352 - Dist-geo-rep: Support geo-replication to work with sharding
1313370 - No xml output on gluster volume heal info command with --xml
1314373 - Peer information is not propagated to all the nodes in the cluster, when the peer is probed with its second interface FQDN/IP
1314391 - glusterd crashed when probing a node with firewall enabled on only one node
1314421 - [HC] Ensure o-direct behaviour when sharding is enabled on volume and files opened with o_direct
1314724 - Multi-threaded SHD support
1315201 - [GSS] - smbd crashes on 3.1.1 with samba-vfs 4.1
1317790 - Cache swift xattrs
1317940 - smbd crashes while accessing multiple volume shares via same client
1318170 - marker: set inode ctx before lookup is unwind
1318427 - gfid-reset of a directory in distributed replicate volume doesn't set gfid on 2nd till last subvolumes
1318428 - ./tests/basic/tier/tier-file-create.t dumping core fairly often on build machines in Linux
1319406 - gluster volume heal info shows conservative merge entries as in split-brain
1319592 - DHT-rebalance: rebalance status shows failed when replica pair bricks are brought down in distrep volume while re-name of files going on
1319619 - RHGS-3.1 op-version need to be corrected
1319634 - Data Tiering:File create terminates with "Input/output error" as split brain is observed
1319638 - rpc: set bind-insecure to off by default
1319658 - setting enable-shared-storage without mentioning the domain, doesn't enables shared storage
1319670 - regression : RHGS 3.0 introduced a maximum value length in the info files
1319688 - Probing a new RHGS node, which is part of another cluster, should throw proper error message in logs and CLI
1319695 - Disabling enable-shared-storage deletes the volume with the name - "gluster_shared_storage"
1319698 - Creation of files on hot tier volume taking very long time
1319710 - glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
1319996 - glusterfs-devel: 3.7.0-3.el6 client package fails to install on dependency
1319998 - while performing in-service software upgrade, gluster-client-xlators, glusterfs-ganesha, python-gluster package should not get installed when distributed volume up
1320000 - While performing in-service software update, glusterfs-geo-replication and glusterfs-cli packages are updated even when glusterfsd or distributed volume is up
1320390 - build: spec file conflict resolution
1320412 - disperse: Provide an option to enable/disable eager lock
1321509 - Critical error message seen in glusterd log file, after logrotate
1321550 - Do not succeed mkdir without gfid-req
1321556 - Continuous nfs_grace_monitor log messages observed in /var/log/messages
1322247 - SAMBA+TIER : File size is not getting updated when created on windows samba share mount
1322306 - [scale] Brick process does not start after node reboot
1322695 - TIER : Wrong message display.On detach tier success the message reflects Tier command failed.
1322765 - glusterd: glusted didn't come up after node reboot error" realpath () failed for brick /run/gluster/snaps/130949baac8843cda443cf8a6441157f/brick3/b3. The underlying file system may be in bad state [No such file or directory]"
1323042 - Inconsistent directory structure on dht subvols caused by parent layouts going stale during entry create operations because of fix-layout
1323119 - TIER : Attach tier fails
1323424 - Ganesha: Continuous "0-glfs_h_poll_cache_invalidation: invalid argument" messages getting logged in ganesha-gfapi logs.
1324338 - Too many log messages showing inode ctx is NULL for 00000000-0000-0000-0000-000000000000
1324604 - [Perf] : 14-53% regression in metadata performance with RHGS 3.1.3 on FUSE mounts
1324820 - /var/lib/glusterd/$few-directories not owned by any package, causing it to remain after glusterfs-server is uninstalled
1325750 - Volume stop is failing when one of brick is down due to underlying filesystem crash
1325760 - Worker dies with [Errno 5] Input/output error upon creation of entries at slave
1325975 - nfs-ganesha crashes with segfault error while doing refresh config on volume.
1326248 - [tiering]: during detach tier operation, Input/output error is seen with new file writes on NFS mount
1326498 - DHT: Provide mechanism to nuke a entire directory from a client (offloading the work to the bricks)
1326505 - fuse: fix inode and dentry leaks
1326663 - [DHT-Rebalance]: with few brick process down, rebalance process isn't killed even after stopping rebalance process
1327035 - fuse: Avoid redundant lookup on "." and ".." as part of every readdirp
1327036 - Use after free bug in notify_kernel_loop in fuse-bridge code
1327165 - snapshot-clone: clone volume doesn't start after node reboot
1327552 - [geo-rep]: geo status shows $MASTER Nodes always with hostname even if volume is configured with IP
1327751 - glusterd memory overcommit
1328194 - upgrading from RHGS 3.1.2 el7 client package to 3.1.3 throws warning
1328397 - [geo-rep]: schedule_georep.py doesn't touch the mount in every iteration
1328411 - SMB:while running I/O on cifs mount and doing graph switch causes cifs mount to hang.
1328721 - [Tiering]: promotion of files may not be balanced on distributed hot tier when promoting files with size as that of max.mb
1329118 - volume create fails with "Failed to store the Volume information" due to /var/lib/glusterd/vols missing with latest build
1329514 - rm -rf to a dir gives directory not empty(ENOTEMPTY) error
1329895 - eager-lock should be used as cluster.eager-lock in /var/lib/glusterd/group/virt file as there is a new option disperse.eager-lock
1330044 - one of vm goes to paused state when network goes down and comes up back
1330385 - glusterd restart is failing if volume brick is down due to underlying FS crash.
1330511 - build: redhat-storage-server for RHGS 3.1.3 - [RHEL 6.8]
1330881 - Inode leaks found in data-self-heal
1330901 - dht must avoid fresh lookups when a single replica pair goes offline
1331260 - Swift: The GET on object manifest with certain byte range fails to show the content of file.
1331280 - Some of VMs go to paused state when there is concurrent I/O on vms
1331376 - [geo-rep]: schedule_georep.py doesn't work when invoked using cron
1332077 - We need more debug info from stack wind and unwind calls
1332199 - Self Heal fails on a replica3 volume with 'disk quota exceeded'
1332269 - /var/lib/glusterd/groups/groups file doesn't gets updated when the file is edited or modified
1332949 - Heal info shows split-brain for .shard directory though only one brick was down
1332957 - [Tiering]: detach tier fails due to the error - 'removing tier fix layout xattr from /'
1333643 - Files present in the .shard folder even after deleting all the vms from the UI
1333668 - SAMBA-VSS : Permission denied issue while restoring the directory from windows client 1 when files are deleted from windows client 2
1334092 - [NFS-Ganesha] : stonith-enabled option not set with new versions of cman,pacemaker,corosync and pcs
1334234 - [Tiering]: Files remain in hot tier even after detach tier completes
1334668 - getting dependency error while upgrading RHGS client to build glusterfs-3.7.9-4.el7.x86_64.
1334985 - Under high read load, sometimes the message "XDR decoding failed" appears in the logs and read fails
1335082 - [Tiering]: Detach tier commit is allowed before rebalance is complete
1335114 - refresh-config failing with latest 2.3.1-6 nfs-ganesha build.
1335357 - Modified volume options are not syncing once glusterd comes up.
1335359 - Adding of identical brick (with diff IP/hostname) from peer node is failing.
1335364 - Fix excessive logging due to NULL dict in dht
1335367 - Failing to remove/replace the bad brick part of the volume.
1335437 - Self heal shows different information for the same volume from each node
1335505 - Brick logs spammed with dict_get errors
1335826 - failover is not working with latest builds.
1336295 - Replace brick causes vm to pause and /.shard is always present in the heal info
1336332 - glusterfs processes doesn't stop after invoking stop-all-gluster-processes.sh
1337384 - Brick processes not getting ports once glusterd comes up.
1337649 - log flooded with Could not map name=xxxx to a UUID when config'd with long hostnames
1339090 - During failback, nodes other than failed back node do not enter grace period
1339136 - Some of the VMs pause with read-only file system error even when volume-status reports all bricks are up
1339163 - [geo-rep]: Monitor crashed with [Errno 3] No such process
1339208 - Ganesha gets killed with segfault error while rebalance is in progress.
1340085 - Directory creation(mkdir) fails when the remove brick is initiated for replicated volumes accessing via nfs-ganesha
1340383 - [geo-rep]: If the session is renamed, geo-rep configuration are not retained
1341034 - [quota+snapshot]: Directories are inaccessible from activated snapshot, when the snapshot was created during directory creation
1341316 - [geo-rep]: Snapshot creation having geo-rep session is broken with latest build
1341567 - After setting up ganesha on RHEL 6, nodes remains in stopped state and grace related failures observed in pcs status
1341820 - [geo-rep]: Upgrade from 3.1.2 to 3.1.3 breaks the existing geo-rep session
1342252 - [geo-rep]: Remove brick with geo-rep session fails with latest build
1342261 - [georep]: Stopping volume fails if it has geo-rep session (Even in stopped state)
1342426 - self heal deamon killed due to oom kills on a dist-disperse volume using nfs ganesha
1342938 - [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
1343549 - libglusterfs: Negate all but O_DIRECT flag if present on anon fds
1344278 - [disperse] mkdir after re balance give Input/Output Error
1344625 - fail delete volume operation if one of the glusterd instance is down in cluster
1347217 - Incorrect product version observed for RHEL 6 and 7 in product certificates

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from:
https://www.redhat.com/security/team/key/#package
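As with the advisory above, the signature of a downloaded package can also be checked with rpm's built-in verification once Red Hat's public key has been imported. The following is a minimal sketch only, assuming python3 on the system and a locally downloaded RPM whose file name here is a placeholder:

    #!/usr/bin/env python3
    # Illustrative wrapper, not part of the advisory: run rpm's built-in
    # digest/signature check on a downloaded package. Assumes Red Hat's public
    # key has already been imported with 'rpm --import'.
    import subprocess
    import sys

    def check_signature(rpm_path):
        """Return True if 'rpm -K' reports the package digests/signature as OK."""
        result = subprocess.run(
            ["rpm", "-K", rpm_path],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
        print(result.stdout.decode().strip())
        return result.returncode == 0

    if __name__ == "__main__":
        # Placeholder file name; substitute the RPM actually downloaded.
        package = sys.argv[1] if len(sys.argv) > 1 else "glusterfs-3.7.9-10.el6rhs.x86_64.rpm"
        sys.exit(0 if check_signature(package) else 1)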