Whamcloud - gitweb
LU-13004 target: convert tgt_send_buffer to use KIOV 27/36827/6
author Mr NeilBrown <neilb@suse.de>
Sat, 18 Jan 2020 13:57:51 +0000 (08:57 -0500)
committer Oleg Drokin <green@whamcloud.com>
Tue, 28 Jan 2020 06:03:28 +0000 (06:03 +0000)
Rather than BULK_BUF_KVEC, use a BULK_BUF_KIOV descriptor.

This is a step towards removing KVEC support.

Signed-off-by: Mr NeilBrown <neilb@suse.de>
Change-Id: I27a81f6a3ef7ba9b079b1b93f56f475a38aaa3f4
Reviewed-on: https://review.whamcloud.com/36827
Reviewed-by: James Simmons <jsimmons@infradead.org>
Reviewed-by: Mike Pershin <mpershin@whamcloud.com>
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
lustre/target/tgt_handler.c

index 320281f..0558a63 100644
@@ -1069,12 +1069,20 @@ int tgt_send_buffer(struct tgt_session_info *tsi, struct lu_rdbuf *rdbuf)
        struct ptlrpc_bulk_desc *desc;
        int                      i;
        int                      rc;
+       int                      pages = 0;
 
        ENTRY;
 
-       desc = ptlrpc_prep_bulk_exp(req, rdbuf->rb_nbufs, 1,
-                                 PTLRPC_BULK_PUT_SOURCE | PTLRPC_BULK_BUF_KVEC,
-                                   MDS_BULK_PORTAL, &ptlrpc_bulk_kvec_ops);
+       for (i = 0; i < rdbuf->rb_nbufs; i++)
+               /* There is only one caller (out_read) and we *know* that
+                * bufs are at most 4K, and 4K aligned, so a simple DIV_ROUND_UP
+                * is always sufficient.
+                */
+               pages += DIV_ROUND_UP(rdbuf->rb_bufs[i].lb_len, PAGE_SIZE);
+       desc = ptlrpc_prep_bulk_exp(req, pages, 1,
+                                 PTLRPC_BULK_PUT_SOURCE | PTLRPC_BULK_BUF_KIOV,
+                                   MDS_BULK_PORTAL,
+                                   &ptlrpc_bulk_kiov_nopin_ops);
        if (desc == NULL)
                RETURN(-ENOMEM);