TBD Intel Corporation
+ * version 2.9.0
+ * See https://wiki.hpdd.intel.com/display/PUB/Lustre+Support+Matrix
+ for currently supported client and server kernel versions.
+ * Server known to build on patched kernels:
+ 2.6.32-431.29.2.el6 (RHEL6.5)
+ 2.6.32-504.30.3.el6 (RHEL6.6)
+ 2.6.32-573.26.1.el6 (RHEL6.7)
+ 2.6.32-642.3.1.el6 (RHEL6.8)
+ 3.10.0-327.22.2.el7 (RHEL7.2)
+ 3.0.101-0.47.71 (SLES11 SP3)
+ 3.0.101-77 (SLES11 SP4)
+ 3.12.59-60.41 (SLES12 SP1)
+ vanilla linux 4.2.1 (ZFS only)
+ * Client known to build on unpatched kernels:
+ 2.6.32-431.29.2.el6 (RHEL6.5)
+ 2.6.32-504.30.3.el6 (RHEL6.6)
+ 2.6.32-573.26.1.el6 (RHEL6.7)
+ 2.6.32-642.3.1.el6 (RHEL6.8)
+ 3.10.0-327.22.2.el7 (RHEL7.2)
+ 3.0.101-0.47.71 (SLES11 SP3)
+ 3.0.101-77 (SLES11 SP4)
+ 3.12.59-60.41 (SLES12 SP1)
+ vanilla linux 4.4.6
+ * Recommended e2fsprogs version: 1.42.13.wc4 or newer
+ * Recommended ZFS / SPL version: 0.6.5.7
+ * Tested with ZFS / SPL version: 0.6.5.7
+ * NFS export disabled when stack size < 8192 (32-bit Lustre clients),
+ since the NFSv4 export of a Lustre filesystem with a 4K stack may cause a
+ stack overflow. For more information, please refer to bugzilla 17630.
+ * NFSv4 reexport to 32-bit NFS client nodes requires Lustre client on
+ the re-exporting nodes to be mounted with "32bitapi" mount option
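The re-export note above can be illustrated with a client mount invocation; the MGS node name, filesystem name, and mount point below are placeholders, while "32bitapi" is the actual mount option named in the note:

```shell
# Mount the Lustre client with the 32bitapi option so that a later
# NFSv4 re-export works for 32-bit NFS client nodes.
# "mgsnode@tcp", "fsname", and /mnt/lustre are placeholder names.
mount -t lustre -o 32bitapi mgsnode@tcp:/fsname /mnt/lustre

# The mount point can then be re-exported over NFS as usual, e.g.
# with an /etc/exports entry such as:
# /mnt/lustre  *(rw,no_root_squash)
```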
+
+--------------------------------------------------------------------------------
+
+02-29-2016 Intel Corporation
* version 2.8.0
* See https://wiki.hpdd.intel.com/display/PUB/Lustre+Support+Matrix
for currently supported client and server kernel versions.
* Server known to build on patched kernels:
2.6.32-431.29.2.el6 (RHEL6.5)
- 2.6.32-504.12.2.el6 (RHEL6.6)
- 3.0.101-0.46 (SLES11 SP3)
+ 2.6.32-504.30.3.el6 (RHEL6.6)
+ 2.6.32-573.12.1.el6 (RHEL6.7)
+ 3.10.0-327.3.1.el7 (RHEL7.2)
+ 3.0.101-0.47.71 (SLES11 SP3)
+ 3.0.101-68 (SLES11 SP4)
+ vanilla linux 4.2.1 (ZFS only)
* Client known to build on unpatched kernels:
2.6.32-431.29.2.el6 (RHEL6.5)
- 2.6.32-504.12.2.el6 (RHEL6.6)
- 3.10.0-229.el7 (RHEL7.1)
- 3.0.101-0.46 (SLES11 SP3)
- * Recommended e2fsprogs version: 1.42.9.wc1 or newer
+ 2.6.32-504.30.3.el6 (RHEL6.6)
+ 2.6.32-573.12.1.el6 (RHEL6.7)
+ 3.10.0-327.3.1.el7 (RHEL7.2)
+ 3.0.101-0.47.71 (SLES11 SP3)
+ 3.0.101-68 (SLES11 SP4)
+ 3.12.39-47 (SLES12)
+ vanilla linux 4.2.1
+ * Recommended e2fsprogs version: 1.42.13.wc4 or newer
+ * Recommended ZFS / SPL version: 0.6.4.2
+ * Tested with ZFS / SPL version: 0.6.4.2
* NFS export disabled when stack size < 8192 (32-bit Lustre clients),
 since the NFSv4 export of a Lustre filesystem with a 4K stack may cause a
stack overflow. For more information, please refer to bugzilla 17630.
Severity : enhancement
Bugzilla : 19526
Description: correctly handle big reply messages.
-Details : send LNet event if reply is bigger then buffer and adjust this buffer
+Details    : send an LNet event if the reply is bigger than the buffer and
	     adjust this buffer correctly.
correctly.
Severity : normal
Bugzilla : 19128
Description: Out of order replies might be lost on replay
Details    : In ptlrpc_retain_replayable_request, if we cannot find a retained
- request with tid smaller then one currently being added, add it
+	     request with a tid smaller than the one currently being added, add it
to the start, not end of the list.
--------------------------------------------------------------------------------
Severity : normal
Frequency : rare
Bugzilla : 11662
-Description: Grant space more than avaiable left space sometimes.
+Description: Grant space more than available left space sometimes.
Details    : When the OST is about to be full, two bulk writes from
	     different clients may arrive at the OST. According to the
	     available space on the OST, the first request should be
	     permitted and the second one denied with ENOSPC. But if the
	     second one arrived before
- the first one is commited. The OST might wrongly permit second
- writing, which will cause grant space > avaiable space.
+	     the first one is committed, the OST might wrongly permit the
+	     second write, which will cause grant space > available space.
Severity : normal
Frequency : when client is evicted
Severity : enhancement
Bugzilla : 4900
Description: Async OSC create to avoid blocking unnecessarily.
-Details : If a OST has no remain object, system will block on the creating
- when need to create a new object on this OST. Now, ways use
- pre-created objects when available, instead of blocking on an
- empty osc while others are not empty. If we must block, we block
- for the shortest possible period of time.
+Details    : If an OST has no remaining objects, the system will block on
+	     creation when it needs to create a new object on this OST. Now,
+	     always use pre-created objects when available, instead of
+	     blocking on an empty OSC while others are not empty. If we must
+	     block, we block for the shortest possible period of time.
Severity : major
Bugzilla : 11710
Severity : enhancement
Bugzilla : 4900
Description: Async OSC create to avoid blocking unnecessarily.
-Details : If a OST has no remain object, system will block on the creating
+Details    : If an OST has no remaining objects, the system will block on
	     creation when it needs to create a new object on this OST. Now,
	     always use pre-created objects when available, instead of
	     blocking on an empty OSC while others are not empty. If we must
	     block, we block
Bugzilla : 9489, 3273
Description: First write from each client to each OST was only 4kB in size,
to initialize client writeback cache, which caused sub-optimal
- RPCs and poor layout on disk for the first writen file.
+ RPCs and poor layout on disk for the first written file.
Details : Clients now request an initial cache grant at (re)connect time
	     so that they can start streaming writes to the cache right
away and always do full-sized RPCs if there is enough data.