diff --git a/lustre/ChangeLog b/lustre/ChangeLog
index 9655de7..69cfa5a 100644
--- a/lustre/ChangeLog
+++ b/lustre/ChangeLog
@@ -1,16 +1,83 @@
 TBD Intel Corporation
+    * version 2.9.0
+    * See https://wiki.hpdd.intel.com/display/PUB/Lustre+Support+Matrix
+      for currently supported client and server kernel versions.
+    * Server known to build on patched kernels:
+      2.6.32-431.29.2.el6 (RHEL6.5)
+      2.6.32-504.30.3.el6 (RHEL6.6)
+      2.6.32-573.26.1.el6 (RHEL6.7)
+      2.6.32-642.3.1.el6  (RHEL6.8)
+      3.10.0-327.28.2.el7 (RHEL7.2)
+      3.0.101-0.47.71     (SLES11 SP3)
+      3.0.101-80          (SLES11 SP4)
+      3.12.59-60.41       (SLES12 SP1)
+      vanilla linux 4.2.1 (ZFS only)
+    * Client known to build on unpatched kernels:
+      2.6.32-431.29.2.el6 (RHEL6.5)
+      2.6.32-504.30.3.el6 (RHEL6.6)
+      2.6.32-573.26.1.el6 (RHEL6.7)
+      2.6.32-642.3.1.el6  (RHEL6.8)
+      3.10.0-327.28.2.el7 (RHEL7.2)
+      3.0.101-0.47.71     (SLES11 SP3)
+      3.0.101-80          (SLES11 SP4)
+      3.12.59-60.41       (SLES12 SP1)
+      vanilla linux 4.4.6
+    * Recommended e2fsprogs version: 1.42.13.wc4 or newer
+    * Recommended ZFS / SPL version: 0.6.5.7
+    * Tested with ZFS / SPL version: 0.6.5.7
+    * NFS export disabled when stack size < 8192 (32-bit Lustre clients),
+      since the NFSv4 export of a Lustre filesystem with a 4K stack may
+      cause a stack overflow. For more information, please refer to
+      bugzilla 17630.
+    * NFSv4 reexport to 32-bit NFS client nodes requires the Lustre client
+      on the re-exporting nodes to be mounted with the "32bitapi" mount
+      option
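      As a minimal sketch of the "32bitapi" re-export setup above (the MGS
      NID "mgs@tcp" and the filesystem name "testfs" are placeholders), the
      Lustre client on a re-exporting node would be mounted roughly like:

          # mount the Lustre client with the 32-bit API forced on
          mount -t lustre -o 32bitapi mgs@tcp:/testfs /mnt/testfs
          # then re-export /mnt/testfs over NFSv4 as usual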
+
+--------------------------------------------------------------------------------
+
+02-29-2016 Intel Corporation
+    * version 2.8.0
+    * See https://wiki.hpdd.intel.com/display/PUB/Lustre+Support+Matrix
+      for currently supported client and server kernel versions.
+    * Server known to build on patched kernels:
+      2.6.32-431.29.2.el6 (RHEL6.5)
+      2.6.32-504.30.3.el6 (RHEL6.6)
+      2.6.32-573.12.1.el6 (RHEL6.7)
+      3.10.0-327.3.1.el7  (RHEL7.2)
+      3.0.101-0.47.71     (SLES11 SP3)
+      3.0.101-68          (SLES11 SP4)
+      vanilla linux 4.2.1 (ZFS only)
+    * Client known to build on unpatched kernels:
+      2.6.32-431.29.2.el6 (RHEL6.5)
+      2.6.32-504.30.3.el6 (RHEL6.6)
+      2.6.32-573.12.1.el6 (RHEL6.7)
+      3.10.0-327.3.1.el7  (RHEL7.2)
+      3.0.101-0.47.71     (SLES11 SP3)
+      3.0.101-68          (SLES11 SP4)
+      3.12.39-47          (SLES12)
+      vanilla linux 4.2.1
+    * Recommended e2fsprogs version: 1.42.13.wc4 or newer
+    * Recommended ZFS / SPL version: 0.6.4.2
+    * Tested with ZFS / SPL version: 0.6.4.2
+    * NFS export disabled when stack size < 8192 (32-bit Lustre clients),
+      since the NFSv4 export of a Lustre filesystem with a 4K stack may
+      cause a stack overflow. For more information, please refer to
+      bugzilla 17630.
+    * NFSv4 reexport to 32-bit NFS client nodes requires the Lustre client
+      on the re-exporting nodes to be mounted with the "32bitapi" mount
+      option
+
+--------------------------------------------------------------------------------
+
+03-10-2015 Intel Corporation
     * version 2.7.0
     * See https://wiki.hpdd.intel.com/display/PUB/Lustre+Support+Matrix
       for currently supported client and server kernel versions.
     * Server known to build on patched kernels:
       2.6.32-431.29.2.el6 (RHEL6.5)
-      2.6.32-504.1.3.el6  (RHEL6.6)
-      3.0.101-0.40        (SLES11 SP3)
+      2.6.32-504.8.1.el6  (RHEL6.6)
+      3.0.101-0.46        (SLES11 SP3)
     * Client known to build on unpatched kernels:
       2.6.32-431.29.2.el6 (RHEL6.5)
-      2.6.32-504.1.3.el6  (RHEL6.6)
-      3.10.0-123.9.2.el7  (RHEL7)
-      3.0.101-0.40        (SLES11 SP3)
+      2.6.32-504.8.1.el6  (RHEL6.6)
+      3.10.0-123.20.1.el7 (RHEL7)
+      3.0.101-0.46        (SLES11 SP3)
     * Recommended e2fsprogs version: 1.42.9.wc1 or newer
     * NFS export disabled when stack size < 8192 (32-bit Lustre clients),
       since the NFSv4 export of a Lustre filesystem with a 4K stack may
       cause a
@@ -18,6 +85,20 @@ TBD Intel Corporation
     * NFSv4 reexport to 32-bit NFS client nodes requires the Lustre client
       on the re-exporting nodes to be mounted with the "32bitapi" mount
       option
 
+Severity   : enhancement
+Jira       : LU-6050
+Description: Control the OST-index in IDIF via a ROCOMPAT flag.
+Details    : Introduce a new flag, OBD_ROCOMPAT_IDX_IN_IDIF, that is stored
+             in the last_rcvd file. It is set automatically on newly
+             formatted OST devices; when upgrading from an old OST device it
+             can be enabled via the lproc interface osd-ldiskfs.index_in_idif.
+             With this flag enabled, the IDIF-in-LMA of a newly created
+             OST-object contains the OST-index; for existing OST-objects,
+             the OSD converts the old-format IDIF to the new-format IDIF,
+             with the OST-index stored in the LMA EA, when the OST-object is
+             accessed or via OI scrub. Once enabled, the flag cannot be
+             reverted, so the system cannot be downgraded to the original,
+             incompatible version.
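             As a minimal sketch of the upgrade path above (the target name
             "testfs-OST0000" is a placeholder, and it is assumed the tunable
             is exposed per OST target like other osd-ldiskfs parameters),
             the flag would be enabled on the OSS roughly like:

                 # on the OSS, once per upgraded OST target; this is
                 # one-way and cannot be cleared afterwards
                 lctl set_param osd-ldiskfs.testfs-OST0000.index_in_idif=1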
+
 --------------------------------------------------------------------------------
 
 07-30-2014 Intel Corporation
@@ -422,7 +503,7 @@ Description: Remove set_info(KEY_UNLINKED) from MDS/OSC
 Severity   : enhancement
 Bugzilla   : 19526
 Description: Correctly handle big reply messages.
-Details    : send LNet event if reply is bigger then buffer and adjust this buffer
+Details    : send an LNet event if the reply is bigger than the buffer and adjust this buffer
              correctly.
 
 Severity   : normal
@@ -2689,7 +2770,7 @@ Severity : normal
 Bugzilla   : 19128
 Description: Out of order replies might be lost on replay
 Details    : In ptlrpc_retain_replayable_request, if we cannot find a retained
-             request with tid smaller then one currently being added, add it
+             request with tid smaller than one currently being added, add it
              to the start, not the end, of the list.
 
@@ -2902,13 +2983,13 @@ Details : When osc reconnects to OST, OST (filter) should clear grant info of
 Severity   : normal
 Frequency  : rare
-Description: Grant space more than avaiable left space sometimes.
+Description: Grant space more than available left space sometimes.
 Bugzilla   : 11662
 Details    : When the OST is about to be full, two bulk writes from different
              clients may arrive at the OST. According to the available space
              of the OST, the first request should be permitted and the second
              denied with ENOSPC. But if the second arrived before
-             the first one is commited. The OST might wrongly permit second
-             writing, which will cause grant space > avaiable space.
+             the first one is committed, the OST might wrongly permit the
+             second write, which will cause grant space > available space.
 
 Severity   : normal
 Frequency  : when client is evicted
@@ -3037,11 +3118,11 @@ Details : decrease the amount of synchronous RPC between clients and servers
 
 Severity   : enhancement
 Bugzilla   : 4900
 Description: Async OSC create to avoid blocking unnecessarily.
-Details    : If a OST has no remain object, system will block on the creating
-             when need to create a new object on this OST. Now, ways use
-             pre-created objects when available, instead of blocking on an
-             empty osc while others are not empty. If we must block, we block
-             for the shortest possible period of time.
+Details    : If an OST has no remaining objects, the system will block on the
+             creation when it needs to create a new object on this OST. Now,
+             always use pre-created objects when available, instead of
+             blocking on an empty osc while others are not empty. If we must
+             block, we block for the shortest possible period of time.
 
 Severity   : major
 Bugzilla   : 11710
@@ -3807,7 +3888,7 @@ Details : If one node attempts to overwrite an executable in use by
 Severity   : enhancement
 Bugzilla   : 4900
 Description: Async OSC create to avoid blocking unnecessarily.
-Details    : If a OST has no remain object, system will block on the creating
+Details    : If an OST has no remaining objects, the system will block on creation
              when it needs to create a new object on this OST. Now, always use
              pre-created objects when available, instead of blocking on an
              empty osc while others are not empty. If we must block, we block
@@ -4715,7 +4796,7 @@ Frequency : common
 Bugzilla   : 9489, 3273
 Description: The first write from each client to each OST was only 4kB in
              size, to initialize the client writeback cache, which caused
              sub-optimal
-             RPCs and poor layout on disk for the first writen file.
+             RPCs and poor layout on disk for the first written file.
 Details    : Clients now request an initial cache grant at (re)connect time
              so that they can start streaming writes to the cache right away
              and always do full-sized RPCs if there is enough data.