+Severity : enhancement
+Bugzilla : 10651
+Description: Nanosecond timestamp support for ldiskfs
+Details : The on-disk ldiskfs filesystem has added support for nanosecond
+ resolution timestamps. There is not yet support for this at
+ the Lustre filesystem level.
+
+Severity : normal
+Frequency : during server recovery
+Bugzilla : 11203
+Description: MDS failing to send precreate requests due to OSCC_FLAG_RECOVERING
+Details : requests with the rq_no_resend flag set do not wake l_wait_event
+             if they time out.
+
+Severity : minor
+Frequency : nfs export on patchless client
+Bugzilla : 11970
+Description: connectathon hangs when testing NFS export over a patchless client
+Details : A disconnected dentry cannot be found by lookup, so we do not
+             need to unhash it or invalidate it.
+
+Bugzilla : 11757
+Description: fix llapi_lov_get_uuids() to allow many OSTs to be returned
+Details : Change llapi_lov_get_uuids() to read the UUIDs from /proc instead
+ of using an ioctl. This allows lfsck for > 160 OSTs to succeed.
+
+Severity : minor
+Frequency : rare
+Bugzilla : 11546
+Description: open req refcounting wrong on reconnect
+Details : If a reconnect happens between getting the open reply from the
+             server and the call to mdc_set_replay_data() in ll_file_open(),
+             we will schedule a replay for an unreferenced request that we
+             are about to free. A subsequent close will then crash in a
+             variety of ways. Check that the request is still eligible for
+             replay in mdc_set_replay_data().
+
+Severity : minor
+Frequency : rare
+Bugzilla : 11512
+Description: disable writes to filesystem when reading health_check file
+Details : the default for reading the health_check proc file has changed
+ to NOT do a journal transaction and write to disk, because this
+ can cause reads of the /proc file to hang and block HA state
+             checking on a healthy but otherwise heavily loaded system. The
+             previous behaviour can be restored at configure time with
+             --enable-health-write.
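A minimal sketch of restoring the old behaviour at build time, per the entry above (the source-tree path is hypothetical; the configure flag is the one named in the entry):

```shell
# Re-enable the journal transaction and disk write when the
# health_check proc file is read (pre-change behaviour).
cd lustre-source && ./configure --enable-health-write
```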
+
+Severity : enhancement
+Bugzilla : 10768
+Description: 64-bit inode version
+Details : Add an on-disk 64-bit inode version for ext3 to track changes
+             made to the inode. This will be required for version-based
+             recovery.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 11818
+Description: MDS fails to start if a duplicate client export is detected
+Details : in some rare cases it was possible for a client to connect to
+ an MDS multiple times. Upon recovery the MDS would detect this
+ and fail during startup. Handle this more gracefully.
+
+Severity : enhancement
+Bugzilla : 11563
+Description: Add -o localflock option to simulate old noflock behaviour.
+Details : This will achieve local-only flock/fcntl lock coherency.
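A usage sketch of the mount option above (the MGS node name and mount point are hypothetical):

```shell
# Mount a Lustre client with client-local flock/fcntl semantics;
# locks are NOT coherent across clients in this mode.
mount -t lustre -o localflock mgs@tcp0:/lustre /mnt/lustre
```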
+
+Severity : minor
+Frequency : rare
+Bugzilla : 11658
+Description: log_commit_thread vs filter_destroy race leads to crash
+Details : Take an import reference before releasing the llog record
+             semaphore.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 12477
+Description: Wrong request locking in request set processing
+Details : ptlrpc_check_set() wrongly uses req->rq_lock to protect the
+             addition to imp_delayed_list; imp_lock should be used there
+             instead.
+
+Severity : normal
+Frequency : when reconnecting
+Bugzilla : 11662
+Description: Grant leak when the osc reconnects to the OST
+Details : When the osc reconnects to the OST, the OST (filter) should
+             check whether it should grant more space to the client by
+             comparing fed_grant and cl_avail_grant, and return the granted
+             space to the client instead of the "new granted" space, because
+             the client will call osc_init_grant to update its grant space
+             info.
+
+Severity : normal
+Frequency : when a client reconnects to the OST
+Bugzilla : 11662
+Description: Grant leak when the osc resends and replays bulk writes
+Details : When the osc reconnects to the OST, the OST (filter) should
+             clear the grant info of bulk write requests, because the grant
+             info is synced between the OSC and OST on reconnect, and the
+             grant info carried by resent/replayed write requests should be
+             ignored.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 11662
+Description: Granted space can exceed the available space.
+Details : When the OST is about to be full and two bulk writes from
+             different clients arrive, then according to the available space
+             on the OST the first request should be permitted and the second
+             denied with ENOSPC. But if the second arrives before the first
+             is committed, the OST might wrongly permit the second write,
+             causing granted space > available space.
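The check-then-commit race above is generic. A minimal sketch in Python (not Lustre code; all class and function names are hypothetical) shows how two writers that both pass the space check before either reservation is recorded can over-grant, and how making the check and reservation atomic avoids it:

```python
import threading

class NaiveOst:
    """Checks free space, then records the grant later (racy)."""
    def __init__(self, avail):
        self.avail = avail
        self.granted = 0

    def try_grant(self, size, barrier):
        ok = self.avail - self.granted >= size  # check...
        barrier.wait()          # both writers pass the check first
        if ok:
            self.granted += size                # ...then commit
        return ok

class SafeOst(NaiveOst):
    """Check and reservation happen atomically under a lock."""
    def __init__(self, avail):
        super().__init__(avail)
        self.lock = threading.Lock()

    def try_grant(self, size, barrier):
        barrier.wait()
        with self.lock:
            if self.avail - self.granted >= size:
                self.granted += size
                return True
            return False

def run(ost):
    """Two concurrent 60-unit writes against 'ost'; returns total granted."""
    barrier = threading.Barrier(2)
    threads = [threading.Thread(target=ost.try_grant, args=(60, barrier))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return ost.granted
```

With 100 units available, run(NaiveOst(100)) admits both 60-unit writes (granted 120 > 100), while run(SafeOst(100)) denies the second and grants only 60.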
+
+Severity : normal
+Frequency : when client is evicted
+Bugzilla : 12371
+Description: Grant might be wrongly erased when osc is evicted by OST
+Details : when the import is evicted by the server, it forks another
+             thread, ptlrpc_invalidate_import_thread, to invalidate the
+             import, where the grant is set to 0, while the original thread
+             updates the grant it got when connecting. If the former happens
+             later, the grant is wrongly erased because of this race.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 12401
+Description: Check staleness with the correct fid
+Details : ll_revalidate_it should use de_inode instead of op_data.fid2
+             to check whether the dentry is stale, because sometimes we want
+             the enqueue to happen anyway, and op_data.fid2 will not be
+             initialized.
+
+Severity : enhancement
+Bugzilla : 11647
+Description: update patchless client
+Details : Add support for patchless client with 2.6.20, 2.6.21 and RHEL 5
+
+Severity : normal
+Frequency : only with 2.4 kernel
+Bugzilla : 12134
+Description: random memory corruption
+Details : the size of struct ll_inode_info is too big for union inode.u,
+             and this can be the cause of random memory corruption.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 10818
+Description: Memory leak in recovery
+Details : The lov_mds_md was not freed in an error handler in
+             mds_create_object. It should also check obd_fail before
+             fsfilt_start; otherwise, if fsfilt_start returns -EROFS (e.g.
+             failover of the MDS during recovery), the request returns with
+             repmsg->transno = 0 and rc = -EROFS, and we hit the assertion
+             LASSERT(req->rq_reqmsg->transno == req->rq_repmsg->transno) in
+             ptlrpc_replay_interpret. The fcc should be freed whether or not
+             fsfilt_commit succeeds.
+
+Severity : minor
+Frequency : only with huge count clients
+Bugzilla : 11817
+Description: Prevent taking the superblock lock in llap_from_page for a
+             page that is about to die.
+Details : using the LL_ORIGIN_REMOVEPAGE origin flag instead of
+             LL_ORIGIN_UNKNOW for the llap_from_page call in ll_removepage
+             avoids taking the superblock lock for a page that is about to
+             die.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 11935
+Description: Open intent error not checked before releasing the open handle
+Details : in some rare cases, the open intent error is not checked before
+             releasing the open handle, which may trigger
+             ASSERTION(open_req->rq_transno != 0) because we try to release
+             the failed open handle.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 12556
+Description: Set the cat log bitmap only after the log is created
+             successfully.
+Details : in some rare cases, the cat log bitmap is set too early; it
+             should be set only after the log has been created successfully.
+
+Severity : major
+Bugzilla : 11971
+Description: Accessing a block device can re-enable I/O when Lustre is
+ tearing down a device.
+Details : dev_clear_rdonly(bdev) must be called in kill_bdev() instead of
+ blkdev_put().
+
+Severity : minor
+Bugzilla : 11706
+Description: service threads may hog CPUs when many requests are coming in
+Details : Insert cond_resched() calls to give other threads a chance to
+             use some of the CPU.
+
+Severity : normal
+Frequency : rare
+Bugzilla : 12086
+Description: the cat log was not initialized in recovery
+Details : When the mds (mgs) does recovery, the tgt_count might be zero,
+             so the unlink log on the mds will not be initialized until mds
+             post-recovery. Also, in mds post-recovery, the unlink log
+             initialization is done asynchronously, so there is a race
+             between adding unlink log records and unlink log initialization.
+
+Severity : normal
+Bugzilla : 12597
+Description: brw_stats were being printed incorrectly
+Details : brw_stats were being printed as log2 values, but not all of
+             them were recorded as log2. Also remove some code duplication
+             arising from filter_tally_{read,write}.
+
+Severity : normal
+Bugzilla : 11674
+Frequency : rare, only in recovery.
+Description: ASSERTION(req->rq_type != LI_POISON) failed
+Details : imp_lock should be held while iterating over imp_sending_list
+             to prevent a request from being destroyed after a timeout in
+             ptlrpc_queue_wait.
+
+Severity : normal
+Bugzilla : 12689
+Description: replay-single.sh test 52 fails
+Details : A lock's skiplist needs to be cleaned up when the lock is
+             unlinked from its resource list.
+
+Severity : normal
+Bugzilla : 11737
+Description: Short directio read returns full requested size rather than
+ actual amount read.
+Details : Direct I/O operations should return the actual number of bytes
+             transferred rather than the requested size.
+
+Severity : enhancement
+Bugzilla : 10589
+Description: metadata RPC reduction (e.g. for rm performance)
+Details : decrease the number of synchronous RPCs between clients and
+             servers by cancelling conflicting locks before the operation on
+             the client side and packing their handles into the main
+             operation RPC to the server.
+
+Severity : enhancement
+Bugzilla : 4900
+Description: Async OSC create to avoid blocking unnecessarily.
+Details : If an OST has no remaining precreated objects, the system
+             would block when it needed to create a new object on that OST.
+             Now, always use precreated objects when available instead of
+             blocking on an empty osc while others are non-empty. If we must
+             block, block for the shortest possible time.
+