After check_and_setup_lustre(), mds1_dev equals
/dev/mapper/mds1_flakey. stop() then tears down the dm-flakey
mapping and unexports it back to the saved real
device (/dev/vdc):
stop mds1
elif dm_flakey_supported mds1; then
dm_cleanup_dev mds1
unexport_dm_dev mds1
As a result, stack_trap() is registered with the no-longer-existing
/dev/mapper/mds1_flakey:
stack_trap 'stop mds1; start mds1 /dev/mapper/mds1_flakey
-o rw,user_xattr' EXIT
and fails at exit with:
losetup: /dev/mapper/mds1_flakey: failed to set up loop device:
No such file or directory
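The failure mode can be sketched in plain bash (a simplified
stand-in for the test-framework's stack_trap; the path and handler
are illustrative, not the real helpers):

```shell
#!/bin/bash
# Sketch: an EXIT handler registered while the dm mapping still
# exists keeps referencing it after a later stop() removes it.

dev=/tmp/fake_flakey_demo          # stands in for /dev/mapper/mds1_flakey
touch "$dev"

# Register the handler in the buggy order, before the teardown:
on_exit() {
	if [ -e "$dev" ]; then
		result="restart with $dev"
	else
		result="$dev: No such file or directory"
	fi
}
trap on_exit EXIT

rm -f "$dev"      # stop() tearing down the dm-flakey mapping
on_exit           # run the handler now to show what EXIT would see
echo "$result"    # → /tmp/fake_flakey_demo: No such file or directory
```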
Reproducer:
run ONLY=606 sh sanity-hsm.sh on a "failover" setup (mds1_HOST !=
mds1failover_HOST), with no llmount.sh run before the test
Fixes:
54b9e3f78935 ("LU-684 tests: replace dev_read_only patch with dm-flakey")
Test-Parameters: trivial testlist=sanity-hsm env=ONLY=606
Test-Parameters: fstype=zfs testlist=sanity-hsm env=ONLY=606
Signed-off-by: Elena Gryaznova <elena.gryaznova@hpe.com>
HPE-bug-id: LUS-9920
Reviewed-by: Alexander Boyko <alexander.boyko@hpe.com>
Reviewed-by: Andriy Skulysh <andriy.skulysh@hpe.com>
Change-Id: I9ab3cbcf67c6fd046861810a2ceab262f211436b
Reviewed-on: https://review.whamcloud.com/43409
Reviewed-by: Jian Yu <yujian@whamcloud.com>
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
 local entry
 #remount mds1 as ldiskfs or zfs type
-stack_trap "stop mds1; start mds1 $(mdsdevname 1) $MDS_MOUNT_OPTS" EXIT
 stop mds1 || error "stop mds1 failed"
+stack_trap "unmount_fstype mds1; start mds1 $(mdsdevname 1)\
+	$MDS_MOUNT_OPTS" EXIT
 mount_fstype mds1 || error "remount mds1 failed"
 for ((i = 0; i < 1; i++)); do
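The corrected ordering in the hunk above can be sketched the same
way (illustrative paths, not the real mdsdevname/unmount_fstype
helpers): tear the transient mapping down first, then register the
EXIT handler against the device that survives.

```shell
#!/bin/bash
# Sketch: register the EXIT handler after the teardown, and point
# it at the persistent backing device instead of the dm mapping.

real_dev=/tmp/real_dev_demo        # stands in for $(mdsdevname 1)
mapped_dev=/tmp/mapped_dev_demo    # stands in for /dev/mapper/mds1_flakey
touch "$real_dev" "$mapped_dev"

rm -f "$mapped_dev"                # stop() removes the dm mapping

# Handler references the real device, which still exists:
on_exit() {
	[ -e "$real_dev" ] && result="restart with $real_dev"
}
trap on_exit EXIT

on_exit                            # show what EXIT would see
echo "$result"                     # → restart with /tmp/real_dev_demo
rm -f "$real_dev"
```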