We stop readahead once @ria_end_idx_min has been covered and lock
contention is detected.  This causes a problem for small reads in
SSF (single shared file) mode: even under lock contention, every
client could still have a large range of consecutive pages to read,
so a 4K read ends up being split into many small reads.

To fix this, allow readahead to continue for at least one stripe
under contention, which is the exact behavior before the commit
referenced below.

Without Patch:
Max Write: 13082.37 MiB/sec (13717.85 MB/sec)
Max Read: 854.17 MiB/sec (895.67 MB/sec)
With Patch:
Max Write: 12448.90 MiB/sec (13053.61 MB/sec)
Max Read: 23921.73 MiB/sec (25083.75 MB/sec)
Fixes: cfbeae9 ("LU-12043 llite: extend readahead locks for striped file")
Change-Id: I59963592f6dbe6babd746cd01441f4a99a8cafcb
Signed-off-by: Wang Shilong <wshilong@ddn.com>
Reviewed-on: https://review.whamcloud.com/37697
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Tested-by: jenkins <devops@whamcloud.com>
Reviewed-by: Bobi Jam <bobijam@hotmail.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
if (ra.cra_end_idx == 0 || ra.cra_end_idx < page_idx) {
pgoff_t end_idx;
+ /*
+ * Do not shrink ria_end_idx in any case until
+ * the minimum end of the current read is covered.
+ *
+ * Do not extend the read lock across a stripe if
+ * lock contention is detected.
+ */
+ if (ra.cra_contention &&
+ page_idx > ria->ria_end_idx_min) {
+ ria->ria_end_idx = *ra_end;
+ break;
+ }
+
cl_read_ahead_release(env, &ra);
rc = cl_io_read_ahead(env, io, page_idx, &ra);
if (rc < 0)
break;
- /* Do not shrink ria_end_idx at any case until
- * the minimum end of current read is covered.
- * And only shrink ria_end_idx if the matched
- * LDLM lock doesn't cover more. */
- if (page_idx > ra.cra_end_idx ||
- (ra.cra_contention &&
- page_idx > ria->ria_end_idx_min)) {
+ /*
+ * Only shrink ria_end_idx if the matched
+ * LDLM lock doesn't cover more.
+ */
+ if (page_idx > ra.cra_end_idx) {
ria->ria_end_idx = ra.cra_end_idx;
break;
}
}