LU-11518 ldlm: control lru_size for extent lock 62/39562/5
author     Jinshan Xiong <jinshan.xiong@intel.com>
           Fri, 31 Jul 2020 18:22:40 +0000 (21:22 +0300)
committer  Oleg Drokin <green@whamcloud.com>
           Sat, 19 Sep 2020 14:13:04 +0000 (14:13 +0000)
commit     6052cc88eb1232ac3b0193f0d47881887a2dcfdc
tree       33793b483942d9a576a298d126d7e7af3df72415
parent     ff29ed8fe9c58bd2caa4d63bcbe7556e1c320703
LU-11518 ldlm: control lru_size for extent lock

We register ELC to cancel extent locks at enqueue time, but ELC
has no effect on locks that still have dirty pages under them.
To keep the semantics of lru_size, the client should check how
many unused locks are cached after adding a lock to the LRU list.
If the count already exceeds the hard limit (ns_max_unused), the
client initiates an asynchronous lock cancellation process in
batch mode (ns->ns_cancel_batch).
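
The check can be illustrated with a minimal user-space sketch
(this is not the Lustre code; the types and helper below are
hypothetical, only ns_max_unused and ns_cancel_batch come from
this change):

/*
 * Sketch only: simplified namespace type showing the post-add
 * LRU check described above.
 */
#include <stdio.h>

struct ns_sketch {
        unsigned int ns_unused;         /* unused locks cached on the LRU */
        unsigned int ns_max_unused;     /* hard limit (lru_size) */
        unsigned int ns_cancel_batch;   /* locks to cancel per batch */
};

/* Stand-in for queueing an async cancel of up to "batch" LRU locks. */
static void async_cancel_lru(struct ns_sketch *ns, unsigned int batch)
{
        printf("queue async cancel of up to %u lru locks\n", batch);
}

/* Called after a lock has been added to the LRU list. */
static void lru_add_check(struct ns_sketch *ns)
{
        ns->ns_unused++;                /* account for the new LRU lock */
        if (ns->ns_unused > ns->ns_max_unused)
                async_cancel_lru(ns, ns->ns_cancel_batch);
}

int main(void)
{
        struct ns_sketch ns = {
                .ns_unused = 100, .ns_max_unused = 100, .ns_cancel_batch = 16,
        };

        lru_add_check(&ns);             /* over the limit: batch cancel */
        return 0;
}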

To do this, re-use the new batching LRU cancel functionality.

Wherever an unlimited LRU cancel is invoked (i.e. not ELC), try to
cancel in batched mode.
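
Continuing the hypothetical types from the sketch above, "batched
mode" here means carving the backlog above ns_max_unused into
chunks of at most ns_cancel_batch locks per cancel pass:

/* Sketch only: cancel the LRU backlog in ns_cancel_batch-sized
 * chunks instead of one unlimited pass. */
static void cancel_lru_batched(struct ns_sketch *ns)
{
        while (ns->ns_unused > ns->ns_max_unused) {
                unsigned int count = ns->ns_unused - ns->ns_max_unused;

                if (ns->ns_cancel_batch != 0 && count > ns->ns_cancel_batch)
                        count = ns->ns_cancel_batch;

                /* ... cancel "count" locks from the cold end of the LRU ... */
                ns->ns_unused -= count;
        }
}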

A new sysfs attribute named *lru_cancel_batch* is introduced in
the ldlm namespace to control the batch count.
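
As a usage note (the exact paths below are assumptions based on how
other per-namespace ldlm tunables are exposed), the attribute would
typically be set with something like
lctl set_param ldlm.namespaces.*.lru_cancel_batch=<N>, or written
directly through sysfs as in this hypothetical user-space snippet:

/* Hypothetical example: the sysfs path is an assumption, and
 * <namespace> must be replaced with a real ldlm namespace name. */
#include <stdio.h>

int main(void)
{
        const char *path =
                "/sys/fs/lustre/ldlm/namespaces/<namespace>/lru_cancel_batch";
        FILE *f = fopen(path, "w");

        if (f == NULL) {
                perror("fopen");
                return 1;
        }
        fprintf(f, "%u\n", 16);         /* cancel up to 16 locks per batch */
        fclose(f);
        return 0;
}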

Signed-off-by: Jinshan Xiong <jinshan.xiong@intel.com>
Signed-off-by: Shuichi Ihara <sihara@ddn.com>
Signed-off-by: Gu Zheng <gzheng@ddn.com>
Signed-off-by: Vitaly Fertman <c17818@cray.com>
Change-Id: Ib18b829372da8599ba872b5ac5ab7421661f942d
Reviewed-on: https://es-gerrit.dev.cray.com/157068
Reviewed-by: Andriy Skulysh <c17819@cray.com>
Reviewed-by: Alexey Lyashkov <c17817@cray.com>
Tested-by: Alexander Lezhoev <c17454@cray.com>
Reviewed-on: https://review.whamcloud.com/39562
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
lustre/include/lustre_dlm.h
lustre/ldlm/ldlm_internal.h
lustre/ldlm/ldlm_lock.c
lustre/ldlm/ldlm_request.c
lustre/ldlm/ldlm_resource.c
lustre/tests/sanity.sh
lustre/tests/test-framework.sh