as such an enforcement disconnects all clients of that MDS, so
another MDS trying to talk to the original MDS gets evicted,
and an unlucky RPC (e.g. the rmdir in test cleanup) can fail with:
rm: cannot remove '...d110h.recovery-small/source_dir': Is a directory
Fixes: 57f3262baa7 ("LU-15788 lmv: try another MDT if statfs failed")
Signed-off-by: Alex Zhuravlev <bzzz@whamcloud.com>
Change-Id: I593e1425b44fc19cb7b2b7da33fa10590532f930
Reviewed-on: https://review.whamcloud.com/c/fs/lustre-release/+/47940
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
Reviewed-by: Alexander Boyko <alexander.boyko@hpe.com>
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
stack_trap "$LCTL set_param llite.$FSNAME-*.lazystatfs=$lazystatfs" EXIT
stack_trap "$LCTL set_param llite.$FSNAME-*.statahead_max=$max" EXIT
# stop a slave MDT where one of the stripes is located
- stop mds1 -f
+ stop mds1
stack_trap "start mds1 $(mdsdevname 1) $MDS_MOUNT_OPTS && \
wait_recovery_complete mds1 && clients_up && true" EXIT
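The stack_trap calls above register cleanup commands that run in LIFO order when the test exits. For illustration, here is a minimal bash sketch of that pattern (an assumption: this is a simplified reimplementation for clarity, not the real stack_trap helper from the Lustre test framework, and the helper/array names are hypothetical):

```shell
#!/bin/bash
# Simplified stack_trap-style helper: each registered command is
# pushed onto an array and replayed in reverse (LIFO) order.
declare -a _CLEANUP=()

stack_trap() {
    # $1 = command string to run at cleanup time
    # (the real helper also takes a signal name, e.g. EXIT)
    _CLEANUP+=("$1")
}

run_cleanups() {
    local i
    # walk the stack from the most recently registered entry down
    for ((i = ${#_CLEANUP[@]} - 1; i >= 0; i--)); do
        eval "${_CLEANUP[$i]}"
    done
}
# the real framework arms this once via: trap run_cleanups EXIT

stack_trap "echo restore lazystatfs"
stack_trap "echo restore statahead_max"

run_cleanups
# prints:
#   restore statahead_max
#   restore lazystatfs
```

Running cleanups in reverse registration order matters here: the MDS must be restarted and recovery completed before earlier traps that touch llite parameters can safely talk to it.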