LU-17893 tests: wait for destroys before replay-dual:test_28
replay-dual.sh:test_28() should take care to drop only its own
blocking AST. If test_26() ran before it, there may still be pending
destroys when test_28() runs. Dropping the blocking ASTs for those
destroys causes replay-dual.sh runs to be accompanied by:
watchdog stack traces:
[169376.453554] Lustre: ll_ost00_057: service thread pid 236757 was
inactive for 40.816 seconds. Watchdog stack traces are limited to 3
per 300 seconds, skipping this one.
[169376.461659] [<0>] ldlm_completion_ast+0x99b/0xc00 [ptlrpc]
[169376.461782] [<0>] ldlm_cli_enqueue_local+0x302/0x890 [ptlrpc]
[169376.461888] [<0>] ofd_destroy_by_fid+0x29c/0x570 [ofd]
[169376.461906] [<0>] ofd_destroy_hdl+0x22c/0x960 [ofd]
lock timeouts:
[169638.155933] LustreError:
236757:0:(ldlm_request.c:104:ldlm_expired_completion_wait()) ###
lock timed out (enqueued at 1729087746, 303s ago); not entering
recovery in server code, just going back to sleep ns..
and system overload indications:
[169852.021044] Lustre: ll_ost00_052: service thread pid 236555
completed after 516.964s. This likely indicates the system was
overloaded (too many service threads, or not enough hardware
resources).
Wait for completion of destroys before starting test_28().
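
For illustration only, a minimal sketch of such a wait in shell,
assuming the usual test-framework.sh conventions (do_facet, $LCTL,
error) and the OSP destroys_in_flight parameter; the helper name and
polling loop below are hypothetical and not necessarily the exact
change made by this patch:

# Poll the MDS until no OST object destroys remain in flight, or give
# up after a timeout.  Hypothetical helper for illustration only.
wait_destroys_in_flight() {
	local end=$((SECONDS + 120))

	while (( SECONDS < end )); do
		local inflight
		inflight=$(do_facet mds1 \
			"$LCTL get_param -n osp.*.destroys_in_flight" |
			awk '{ sum += $1 } END { print sum + 0 }')
		(( inflight == 0 )) && return 0
		echo "waiting: $inflight destroys still in flight"
		sleep 1
	done
	return 1
}

test_28() {
	wait_destroys_in_flight || error "pending destroys did not complete"
	# ... existing test_28() body ...
}
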
Test-Parameters: trivial testlist=replay-dual
Signed-off-by: Vladimir Saveliev <vladimir.saveliev@hpe.com>
Change-Id: I837579a428d8c2383fe884961d356ff417fc3f2e
Reviewed-on: https://review.whamcloud.com/c/fs/lustre-release/+/56712
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Jian Yu <yujian@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Tested-by: jenkins <devops@whamcloud.com>