tbd Sun Microsystems, Inc.
* version 2.0.0
* Support for kernels:
- 2.6.16.60-0.33 (SLES 10),
- 2.6.18-128.1.1.el5 (RHEL 5),
+ 2.6.16.60-0.39.3 (SLES 10),
+ 2.6.18-128.1.14.el5 (RHEL 5),
2.6.22.14 vanilla (kernel.org).
* Client support for unpatched kernels:
(see http://wiki.lustre.org/index.php?title=Patchless_Client)
2.6.16 - 2.6.21 vanilla (kernel.org)
- * Recommended e2fsprogs version: 1.41.5.sun2
+ * Recommended e2fsprogs version: 1.41.6.sun1
* Note that reiserfs quotas are disabled on SLES 10 in this kernel.
* RHEL 4 and RHEL 5/SLES 10 clients behave differently on 'cd' to a
  removed cwd "./" (refer to Bugzilla 14399).
* File join has been disabled in this release, refer to Bugzilla 16929.
+Severity : enhancement
+Bugzilla : 19856
+Description: Add LustreNetLink, a kernel-userspace communication path.
+
+Severity : enhancement
+Bugzilla : 19847
+Description: Update kernel to SLES10 SP2 2.6.16.60-0.39.3.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 18800
+Description: Access to the llog context before it is initialized.
+Details : Move handling of the CATALOGS file to the OSC layer and forbid
+ access to the llog context before initialization.
+
+Severity : normal
+Bugzilla : 19529
+Description: Avoid deadlock for local client writes
+Details : Use new OBD_BRW_MEMALLOC flag to notify OST about writes in the
+ memory freeing context. This allows OST threads to set the
+ PF_MEMALLOC flag on task structures in order to allocate memory
+ from reserved pools and complete IO.
+ Use GFP_HIGHUSER for OST allocations for non-local client writes,
+ so that the OST threads generate memory pressure and allow
+ inactive pages to be reclaimed.
+
+Severity : enhancement
+Bugzilla : 19846
+Description: Update kernel to RHEL5.3 2.6.18-128.1.14.el5.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 18380
+Description: lock ordering violation between &cli->cl_sem and _lprocfs_lock
+Details : Move ldlm namespace creation to the setup phase to avoid
+ grabbing _lprocfs_lock while cl_sem is held.
+
+Severity : normal
+Bugzilla : 19507
+Description: Temporarily disable grant shrink.
+Details : Disable the feature for debugging.
+
+Severity : normal
+Bugzilla : 18624
+Description: Unable to run several mkfs.lustre on loop devices at the same
+ time.
+Details : mkfs.lustre returns error 256 when formatting loop devices
+ concurrently. The fix is to handle this error properly.
+
+Severity : enhancement
+Bugzilla : 19024
+Description: Update kernel to RHEL5.3 2.6.18-128.1.6.el5.
+
+Severity : enhancement
+Bugzilla : 19212
+Description: Update kernel to SLES10 SP2 2.6.16.60-0.37.
+
+Severity : normal
+Bugzilla : 19528
+Description: resolve race between obd_disconnect and class_disconnect_exports
+Details : If obd_disconnect is called on an already disconnected export,
+ it forgets to release a reference, so the osc module cannot be
+ unloaded.
+
+Severity : enhancement
+Bugzilla : 18688
+Description: Allow tuning service threads via /proc
+Details : For each service a new
+ /proc/fs/lustre/{service}/*/thread_{min,max,started} entry is
+ created that can be used to set min/max thread counts, and get the
+ current number of running threads.
+
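The service-thread tunables from the entry above can be exercised from the shell. A minimal sketch, assuming an OSS node; the service path below is a hypothetical example and the file names follow the changelog entry, so adjust both to the services actually present on your node:

```shell
# Sketch only: SVC is a hypothetical example path, not a guaranteed name.
SVC=/proc/fs/lustre/ost/OSS/ost_io
if [ -d "$SVC" ]; then
    cat "$SVC"/thread_started           # current number of running threads
    echo 64  > "$SVC"/thread_min        # raise the minimum thread count
    echo 512 > "$SVC"/thread_max        # cap the maximum thread count
else
    echo "no Lustre proc tree on this node"
fi
```

Writes to thread_min/thread_max take effect on running services; thread_started is read-only.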
Severity : normal
Bugzilla : 18382
Description: don't return an error if partially created objects exist for the
  file.

Bugzilla : 19293
Description: move AT tunable parameters for more consistent usage
Details : add AT tunables under /proc/sys/lustre, and add them to conf_param
  parsing
-
+
Severity : enhancement
Bugzilla : 17974
Description: add lazystatfs mount option to allow statfs(2) to skip down OSTs
Details : When multiple mount protection fails during remount, a proper
  error should be returned.
+
+Severity : enhancement
+Bugzilla : 16823
+Description: Allow stripe size to be up to 4G-64k
+Details : Fix math logic to allow large stripe sizes.
+
+Severity : high
+Bugzilla : 17569
+Description: add check for >8TB ldiskfs filesystems
+Details : ext3-based ldiskfs does not support LUNs larger than 8TB. Don't
+ allow >8TB ldiskfs filesystems to be mounted without the
+ force_over_8tb mount option.
+
--------------------------------------------------------------------------------
2007-08-10 Cluster File Systems, Inc. <info@clusterfs.com>