LU-12043 llite,readahead: don't always use max RPC size
author Wang Shilong <wshilong@ddn.com>
Sun, 2 Jun 2019 15:17:26 +0000 (23:17 +0800)
committer Oleg Drokin <green@whamcloud.com>
Tue, 25 Jun 2019 01:54:25 +0000 (01:54 +0000)
Since the 64M RPC feature landed, @PTLRPC_MAX_BRW_PAGES is now 64M.
We always use this maximum possible RPC size to check whether we
should avoid fast IO and trigger a real cl_io instead.

This is not good for the following reasons:

(1) Since the current default RPC size is still 4M,
most systems will not use 64M most of the time.

(2) The current default readahead size per file is still 64M,
so fast IO always exhausts all of the readahead pages before
the next IO arrives. This defeats what users really want from
readahead: grabbing pages in advance (see the illustration below).
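
As a rough illustration of why this goes wrong (a standalone sketch,
not the kernel code; the helper name and the 4 KiB page-size
assumption are ours):

#include <stdbool.h>

/* Illustration only: all quantities are in pages, assuming 4 KiB pages. */
#define PAGES_PER_MIB		(1024UL * 1024 / 4096)
#define MAX_BRW_PAGES		(64 * PAGES_PER_MIB)	/* PTLRPC_MAX_BRW_PAGES with 64M RPCs */
#define DEFAULT_RA_WINDOW	(64 * PAGES_PER_MIB)	/* default per-file readahead window */

/* Old decision in ll_readpage(): take the fast path (serve the cached
 * page and skip the cl_io stack) when the readahead window has no room
 * left for another maximum-size RPC. */
static bool old_fast_read(unsigned long window_start, unsigned long window_len,
			  unsigned long next_readahead)
{
	return window_start + window_len < next_readahead + MAX_BRW_PAGES;
}

With window_len equal to MAX_BRW_PAGES (both 64M by default), the
condition holds as soon as next_readahead moves past window_start, so
the fast path keeps draining cached pages and the cl_io that would
refill the readahead window is almost never issued.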

To fix this problem, use 16M as a balanced value whenever the RPC
size is smaller than 16M. The patch also fixes a problem where
@ras_rpc_size could never grow larger again, which can happen in
the following case:

1) Set the RPC size to 16M.
2) Set the RPC size to 64M.

With the current logic, ras->ras_rpc_size stays at 16M, which is wrong.
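
Conceptually, the fix amounts to the snippets below (a simplified
restatement of the diff that follows, not drop-in code; max() and
MiB_TO_PAGES() are the helpers already used in llite):

/* llite_internal.h: use at least 16M worth of pages for fast read */
#define RA_REMAIN_WINDOW_MIN		MiB_TO_PAGES(16UL)

/* ll_readpage(): the threshold for skipping the io stack becomes the
 * larger of 16M and the current RPC size, instead of the maximum
 * possible RPC size PTLRPC_MAX_BRW_PAGES */
unsigned long fast_read_pages =
	max(RA_REMAIN_WINDOW_MIN, ras->ras_rpc_size);

/* ll_read_ahead_pages(): track the RPC size in both directions so that
 * ras_rpc_size can grow again after the RPC size is raised; the old '>'
 * comparison only ever let it shrink */
if (ras->ras_rpc_size != ra.cra_rpc_size && ra.cra_rpc_size > 0)
	ras->ras_rpc_size = ra.cra_rpc_size;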

Change-Id: Ida9f839f7c692cd88d32dc0909503f6ae991d909
Signed-off-by: Wang Shilong <wshilong@ddn.com>
Reviewed-on: https://review.whamcloud.com/35033
Tested-by: Jenkins
Reviewed-by: Li Xi <lixi@ddn.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
lustre/llite/llite_internal.h
lustre/llite/rw.c

index 1e86edd..7a121fc 100644 (file)
@@ -349,6 +349,9 @@ static inline struct pcc_inode *ll_i2pcci(struct inode *inode)
        return ll_i2info(inode)->lli_pcc_inode;
 }
 
+/* default to use at least 16M for fast read if possible */
+#define RA_REMAIN_WINDOW_MIN                   MiB_TO_PAGES(16UL)
+
 /* default to about 64M of readahead on a given system. */
 #define SBI_DEFAULT_READAHEAD_MAX              MiB_TO_PAGES(64UL)
 
index 2eb305f..a5f3f9c 100644 (file)
@@ -369,7 +369,7 @@ ll_read_ahead_pages(const struct lu_env *env, struct cl_io *io,
                                         io->ci_obj, ra.cra_end, page_idx);
                                /* update read ahead RPC size.
                                 * NB: it's racy but doesn't matter */
-                               if (ras->ras_rpc_size > ra.cra_rpc_size &&
+                               if (ras->ras_rpc_size != ra.cra_rpc_size &&
                                    ra.cra_rpc_size > 0)
                                        ras->ras_rpc_size = ra.cra_rpc_size;
                                /* trim it to align with optimal RPC size */
@@ -1188,6 +1188,8 @@ int ll_readpage(struct file *file, struct page *vmpage)
                struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
                struct ll_readahead_state *ras = &fd->fd_ras;
                struct lu_env  *local_env = NULL;
+               unsigned long fast_read_pages =
+                       max(RA_REMAIN_WINDOW_MIN, ras->ras_rpc_size);
                struct vvp_page *vpg;
 
                result = -ENODATA;
@@ -1224,7 +1226,7 @@ int ll_readpage(struct file *file, struct page *vmpage)
                         * the case, we can't do fast IO because we will need
                         * a cl_io to issue the RPC. */
                        if (ras->ras_window_start + ras->ras_window_len <
-                           ras->ras_next_readahead + PTLRPC_MAX_BRW_PAGES) {
+                           ras->ras_next_readahead + fast_read_pages) {
                                /* export the page and skip io stack */
                                vpg->vpg_ra_used = 1;
                                cl_page_export(env, page, 1);