The coordinator only runs once per second; we need a mechanism
to send more work when everything is done (cdt_request_count
goes to zero).
Without this, there is a hard limit of max_requests requests
per second that can be processed, causing performance issues
with small files.
Lustre-change: https://review.whamcloud.com/29742
Lustre-commit: 7251fea8dc3c4d29e30c5a3f763c4c33d35f90a7
Signed-off-by: Ben Evans <bevans@cray.com>
Change-Id: I563666a1a3e53f0ec5908de593de71ff4d925467
Reviewed-by: Frank Zago <fzago@cray.com>
Reviewed-by: Sergey Cheremencev <cherementsev@gmail.com>
Signed-off-by: Minh Diep <minh.diep@intel.com>
Reviewed-on: https://review.whamcloud.com/30538
Tested-by: Jenkins
Reviewed-by: Sergey Cheremencev <c17829@cray.com>
Tested-by: Maloo <hpdd-maloo@intel.com>
Reviewed-by: John L. Hammond <john.hammond@intel.com>
* repeatedly locking/unlocking the catalog for each request
* and preventing other HSM operations from happening */
wait_event_interruptible_timeout(cdt->cdt_waitq,
- kthread_should_stop(),
+ kthread_should_stop() ||
+ cdt->cdt_wakeup_coordinator,
wait_event_time);
+ cdt->cdt_wakeup_coordinator = false;
CDEBUG(D_HSM, "coordinator resumes\n");
if (kthread_should_stop()) {
mdt_cdt_put_request(car);
LASSERT(atomic_read(&cdt->cdt_request_count) >= 1);
- atomic_dec(&cdt->cdt_request_count);
+ if (atomic_dec_and_test(&cdt->cdt_request_count)) {
+ /* request count is empty, nudge coordinator for more work */
+ cdt->cdt_wakeup_coordinator = true;
+ wake_up_interruptible(&cdt->cdt_waitq);
+ }
RETURN(0);
}
/* Remove archive on last unlink policy */
bool cdt_remove_archive_on_last_unlink;
+
+ bool cdt_wakeup_coordinator;
};
/* mdt state flag bits */