LU-14183 ldlm: wrong ldlm_add_waiting_lock usage 68/40868/2
author Vitaly Fertman <c17818@cray.com>
Fri, 4 Dec 2020 17:22:55 +0000 (20:22 +0300)
committer Oleg Drokin <green@whamcloud.com>
Tue, 16 Mar 2021 18:15:33 +0000 (18:15 +0000)
commit af07c9a79e263f940fea06a911803097b57b55f4
tree 4a3f64bd22a26f6892debe0a68b5fd5ce6e9ee4f
parent b635a0435d13d8431a8344735322b84cb4613b68
LU-14183 ldlm: wrong ldlm_add_waiting_lock usage

Originally, exp_bl_lock_at accounted for the period from the BL AST send
until the cancel RPC arrived at the server. LU-6032 started updating
l_blast_sent for expired locks which are still busy, i.e. prolonging such
locks when the timeout expires. In fact, it is a good idea to account not
for the whole period but only for the interval until any involved RPC
arrives: this avoids excessively large lock callback timeouts, and the IO
which prolongs the lock is also able to restart the AT cycle by updating
l_blast_sent.

Unfortunately, that change appears to have been made inadvertently, as the
main prolong code was not adjusted accordingly.
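
To illustrate the intended accounting, here is a simplified, self-contained
user-space model; this is not the actual Lustre code, and all names here
(lock_model, at_update, etc.) are illustrative only. Each RPC involved with
the lock closes the current measurement window, feeds it into the
adaptive-timeout estimate, and restarts the cycle by refreshing the
l_blast_sent equivalent, so only the interval until the next involved RPC
is accounted rather than the whole BL AST to cancel period.

/*
 * Simplified user-space model (not Lustre code) of the accounting idea
 * described above.
 */
#include <stdio.h>
#include <time.h>

struct lock_model {
	time_t	blast_sent;	/* models ldlm_lock::l_blast_sent */
	double	at_estimate;	/* models exp_bl_lock_at, in seconds */
};

/* Fold one observed interval into the (simplified) AT estimate. */
static void at_update(struct lock_model *lk, double observed)
{
	if (observed > lk->at_estimate)
		lk->at_estimate = observed;
}

/* Server sends the blocking AST: start the measurement window. */
static void blocking_ast_sent(struct lock_model *lk)
{
	lk->blast_sent = time(NULL);
}

/*
 * An RPC that prolongs the lock arrives: account only the elapsed part
 * of the window and restart the cycle by refreshing blast_sent, so a
 * long-running but active lock does not inflate the estimate.
 */
static void involved_rpc_arrived(struct lock_model *lk)
{
	time_t now = time(NULL);

	at_update(lk, difftime(now, lk->blast_sent));
	lk->blast_sent = now;		/* restart the AT cycle */
}

/* The final cancel closes the last window. */
static void cancel_arrived(struct lock_model *lk)
{
	at_update(lk, difftime(time(NULL), lk->blast_sent));
}

int main(void)
{
	struct lock_model lk = { .at_estimate = 0 };

	blocking_ast_sent(&lk);
	involved_rpc_arrived(&lk);	/* e.g. an IO that prolongs the lock */
	cancel_arrived(&lk);
	printf("estimated BL callback time: %.0fs\n", lk.at_estimate);
	return 0;
}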

Fixes: 292aa42e08 ("LU-6032 ldlm: don't disable softirq for exp_rpc_lock")
HPE-bug-id: LUS-9278
Signed-off-by: Vitaly Fertman <c17818@cray.com>
Change-Id: Idc598508fc13aa33ac9fce56f13310ca6fc819d4
Tested-by: Jenkins Build User <nssreleng@cray.com>
Reviewed-by: Andriy Skulysh <c17819@cray.com>
Reviewed-by: Alexander Boyko <c17825@cray.com>
Reviewed-on: https://review.whamcloud.com/40868
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Alexander Boyko <alexander.boyko@hpe.com>
Reviewed-by: Andriy Skulysh <askulysh@gmail.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
lustre/ldlm/ldlm_extent.c
lustre/ldlm/ldlm_lockd.c