From 2686b25c301f055a15d13f085f5184e6f5cbbe13 Mon Sep 17 00:00:00 2001 From: Olaf Weber Date: Mon, 8 Jun 2015 09:15:39 +0200 Subject: [PATCH] LU-6325 ptlrpc: make ptlrpcd threads cpt-aware MIME-Version: 1.0 Content-Type: text/plain; charset=utf8 Content-Transfer-Encoding: 8bit On NUMA systems, the placement of worker threads relative to the memory they use greatly affects performance. The CPT mechanism can be used to constrain a number of Lustre thread types, and this change makes it possible to configure the placement of ptlrpcd threads in a similar manner. To simplify the code changes, the global structures used to manage ptlrpcd threads are changed to one per CPT. In particular, this means there will be one ptlrpcd recovery thread per CPT. To prevent ptlrpcd threads from wandering all over the system, all ptlrpcd threads are bound to a CPT. Note that some CPT configuration is always created, but the defaults are not likely to be correct for a NUMA system. After discussing the options with Liang Zhen we decided that we would not bind ptlrpcd threads to specific CPUs, but would instead trust the kernel scheduler to migrate ptlrpcd threads. With all ptlrpcd threads bound to a CPT, but not to specific CPUs, the load policy mechanism can be radically simplified: - PDL_POLICY_LOCAL and PDL_POLICY_ROUND are currently identical. - PDL_POLICY_ROUND, if fully implemented, would cost us the locality we are trying to achieve, so most or all calls using this policy would have to be changed to PDL_POLICY_LOCAL. - PDL_POLICY_PREFERRED is not used, and cannot be implemented without binding ptlrpcd threads to individual CPUs. - PDL_POLICY_SAME is rarely used, and cannot be implemented without binding ptlrpcd threads to individual CPUs. The partner mechanism is also updated, because now all ptlrpcd threads are "bound" threads. The only difference between the various bind policies, PDB_POLICY_NONE, PDB_POLICY_FULL, PDB_POLICY_PAIR, and PDB_POLICY_NEIGHBOR, is the number of partner threads. 
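As an illustration of the tunables this patch introduces, the equivalent of the old PDB_POLICY_PAIR grouping, restricted to a subset of CPTs, could be requested with module options along these lines (a sketch only: the CPT range and per-CPT thread count are made-up values whose correct settings depend on the actual system topology):

```shell
# /etc/modprobe.d/lustre.conf -- illustrative values only.
# Run ptlrpcd threads on CPTs 0 and 1, start at most 8 ptlrpcd
# threads per CPT, and group them into partner pairs (group size
# 2 is the default, matching the old PDB_POLICY_PAIR behavior).
options ptlrpc ptlrpcd_cpts=[0-1] ptlrpcd_per_cpt_max=8 ptlrpcd_partner_group_size=2
```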
The bind policy is replaced with a tunable that directly specifies the size of the groups of ptlrpcd partner threads. Ensure that the ptlrpc_request_set for a ptlrpcd thread is created on the same CPT that the thread will work on. When threads are bound to specific nodes and/or CPUs in a NUMA system, it pays to ensure that the data structures used by these threads are also on the same node. Visible changes: * ptlrpcd thread names include the CPT number, for example "ptlrpcd_02_07". In this case the "07" is relative to the CPT, and not a CPU number. Tunables added: * ptlrpcd_cpts (string): A CPT string describing the CPU partitions that ptlrpcd threads should run on. Used to make ptlrpcd threads run on a subset of all CPTs. * ptlrpcd_per_cpt_max (int): The maximum number of ptlrpcd threads to run in a CPT. * ptlrpcd_partner_group_size (int): The desired number of threads in each ptlrpcd partner thread group. Default is 2, corresponding to the old PDB_POLICY_PAIR. A negative value makes all ptlrpcd threads in a CPT partners of each other. Tunables obsoleted: * max_ptlrpcds: The new ptlrpcd_per_cpt_max can be used to obtain the same effect. * ptlrpcd_bind_policy: The new ptlrpcd_partner_group_size can be used to obtain the same effect. Internal interface changes: * pdb_policy_t and related code have been removed. Groups of partner ptlrpcd threads are still created, and all threads in a partner group are bound on the same CPT. The ptlrpcd threads bound to a CPT are typically divided into several partner groups. The partner groups on a CPT all have an equal number of ptlrpcd threads. * pdl_policy_t and related code have been removed. Since ptlrpcd threads are not bound to a specific CPU, all the code that avoids scheduling on the current CPU (or attempts to do so) has been removed as non-functional. A simplified form of PDL_POLICY_LOCAL is kept as the only load policy. * LIOD_BIND and related code have been removed. 
All ptlrpcd threads are now bound to a CPT, and no additional binding policy is implemented. * ptlrpc_prep_set(): Changed to allocate a ptlrpc_request_set on the current CPT. * ptlrpcd(): If an error is encountered before entering the main loop store the error in pc_error before exiting. * ptlrpcd_start(): Check pc_error to verify that the ptlrpcd thread has successfully entered its main loop. * ptlrpcd_init(): Initialize the struct ptlrpcd_ctl for all threads for a CPT before starting any of them. This closes a race during startup where a partner thread could reference a non-initialized struct ptlrpcd_ctl. Signed-off-by: Olaf Weber Change-Id: I3ac40ea56f9c792c3e7c36967e2e1f20105c566c Reviewed-on: http://review.whamcloud.com/13972 Tested-by: Jenkins Reviewed-by: Grégoire Pichon Reviewed-by: Stephen Champion Reviewed-by: James Simmons Tested-by: Maloo Reviewed-by: Jinshan Xiong Reviewed-by: Oleg Drokin --- lustre/include/lustre_net.h | 94 ++--- lustre/ldlm/ldlm_request.c | 18 +- lustre/mdc/mdc_locks.c | 4 +- lustre/osc/osc_cache.c | 28 +- lustre/osc/osc_cl_internal.h | 2 +- lustre/osc/osc_internal.h | 2 +- lustre/osc/osc_io.c | 2 +- lustre/osc/osc_request.c | 89 ++--- lustre/osp/osp_precreate.c | 2 +- lustre/osp/osp_sync.c | 2 +- lustre/osp/osp_trans.c | 4 +- lustre/ptlrpc/client.c | 12 +- lustre/ptlrpc/import.c | 17 +- lustre/ptlrpc/pinger.c | 26 +- lustre/ptlrpc/ptlrpc_internal.h | 2 +- lustre/ptlrpc/ptlrpcd.c | 786 ++++++++++++++++++++++++---------------- lustre/quota/qsd_request.c | 4 +- 17 files changed, 601 insertions(+), 493 deletions(-) diff --git a/lustre/include/lustre_net.h b/lustre/include/lustre_net.h index 9a0b67e..64419d0 100644 --- a/lustre/include/lustre_net.h +++ b/lustre/include/lustre_net.h @@ -1872,34 +1872,42 @@ struct ptlrpcd_ctl { * Stop completion. */ struct completion pc_finishing; - /** - * Thread requests set. - */ - struct ptlrpc_request_set *pc_set; - /** + /** + * Thread requests set. 
+ */ + struct ptlrpc_request_set *pc_set; + /** * Thread name used in kthread_run() - */ - char pc_name[16]; - /** - * Environment for request interpreters to run in. - */ - struct lu_env pc_env; + */ + char pc_name[16]; + /** + * Environment for request interpreters to run in. + */ + struct lu_env pc_env; + /** + * CPT the thread is bound on. + */ + int pc_cpt; /** * Index of ptlrpcd thread in the array. */ - int pc_index; - /** - * Number of the ptlrpcd's partners. - */ - int pc_npartners; - /** - * Pointer to the array of partners' ptlrpcd_ctl structure. - */ - struct ptlrpcd_ctl **pc_partners; - /** - * Record the partner index to be processed next. - */ - int pc_cursor; + int pc_index; + /** + * Pointer to the array of partners' ptlrpcd_ctl structure. + */ + struct ptlrpcd_ctl **pc_partners; + /** + * Number of the ptlrpcd's partners. + */ + int pc_npartners; + /** + * Record the partner index to be processed next. + */ + int pc_cursor; + /** + * Error code if the thread failed to fully start. + */ + int pc_error; }; /* Bits for pc_flags */ @@ -1922,10 +1930,6 @@ enum ptlrpcd_ctl_flags { * This is a recovery ptlrpc thread. */ LIOD_RECOVERY = 1 << 3, - /** - * The ptlrpcd is bound to some CPU core. - */ - LIOD_BIND = 1 << 4, }; /** @@ -2636,43 +2640,11 @@ void ptlrpc_pinger_ir_down(void); /** @} */ int ptlrpc_pinger_suppress_pings(void); -/* ptlrpc daemon bind policy */ -typedef enum { - /* all ptlrpcd threads are free mode */ - PDB_POLICY_NONE = 1, - /* all ptlrpcd threads are bound mode */ - PDB_POLICY_FULL = 2, - /* ... */ - PDB_POLICY_PAIR = 3, - /* ... , - * means each ptlrpcd[X] has two partners: thread[X-1] and thread[X+1]. - * If kernel supports NUMA, pthrpcd threads are binded and - * grouped by NUMA node */ - PDB_POLICY_NEIGHBOR = 4, -} pdb_policy_t; - -/* ptlrpc daemon load policy - * It is caller's duty to specify how to push the async RPC into some ptlrpcd - * queue, but it is not enforced, affected by "ptlrpcd_bind_policy". 
If it is - * "PDB_POLICY_FULL", then the RPC will be processed by the selected ptlrpcd, - * Otherwise, the RPC may be processed by the selected ptlrpcd or its partner, - * depends on which is scheduled firstly, to accelerate the RPC processing. */ -typedef enum { - /* on the same CPU core as the caller */ - PDL_POLICY_SAME = 1, - /* within the same CPU partition, but not the same core as the caller */ - PDL_POLICY_LOCAL = 2, - /* round-robin on all CPU cores, but not the same core as the caller */ - PDL_POLICY_ROUND = 3, - /* the specified CPU core is preferred, but not enforced */ - PDL_POLICY_PREFERRED = 4, -} pdl_policy_t; - /* ptlrpc/ptlrpcd.c */ void ptlrpcd_stop(struct ptlrpcd_ctl *pc, int force); void ptlrpcd_free(struct ptlrpcd_ctl *pc); void ptlrpcd_wake(struct ptlrpc_request *req); -void ptlrpcd_add_req(struct ptlrpc_request *req, pdl_policy_t policy, int idx); +void ptlrpcd_add_req(struct ptlrpc_request *req); void ptlrpcd_add_rqset(struct ptlrpc_request_set *set); int ptlrpcd_addref(void); void ptlrpcd_decref(void); diff --git a/lustre/ldlm/ldlm_request.c b/lustre/ldlm/ldlm_request.c index 833078a..d3049c0 100644 --- a/lustre/ldlm/ldlm_request.c +++ b/lustre/ldlm/ldlm_request.c @@ -1233,14 +1233,14 @@ int ldlm_cli_cancel_req(struct obd_export *exp, struct list_head *cancels, ldlm_cancel_pack(req, cancels, count); - ptlrpc_request_set_replen(req); - if (flags & LCF_ASYNC) { - ptlrpcd_add_req(req, PDL_POLICY_LOCAL, -1); - sent = count; - GOTO(out, 0); - } else { - rc = ptlrpc_queue_wait(req); - } + ptlrpc_request_set_replen(req); + if (flags & LCF_ASYNC) { + ptlrpcd_add_req(req); + sent = count; + GOTO(out, 0); + } + + rc = ptlrpc_queue_wait(req); if (rc == LUSTRE_ESTALE) { CDEBUG(D_DLMTRACE, "client/server (nid %s) " "out of sync -- not fatal\n", @@ -2271,7 +2271,7 @@ static int replay_one_lock(struct obd_import *imp, struct ldlm_lock *lock) aa = ptlrpc_req_async_args(req); aa->lock_handle = body->lock_handle[0]; req->rq_interpret_reply = 
(ptlrpc_interpterer_t)replay_lock_interpret; - ptlrpcd_add_req(req, PDL_POLICY_LOCAL, -1); + ptlrpcd_add_req(req); RETURN(0); } diff --git a/lustre/mdc/mdc_locks.c b/lustre/mdc/mdc_locks.c index d93146e..9a17940 100644 --- a/lustre/mdc/mdc_locks.c +++ b/lustre/mdc/mdc_locks.c @@ -1257,8 +1257,8 @@ int mdc_intent_getattr_async(struct obd_export *exp, ga->ga_minfo = minfo; ga->ga_einfo = einfo; - req->rq_interpret_reply = mdc_intent_getattr_async_interpret; - ptlrpcd_add_req(req, PDL_POLICY_LOCAL, -1); + req->rq_interpret_reply = mdc_intent_getattr_async_interpret; + ptlrpcd_add_req(req); RETURN(0); } diff --git a/lustre/osc/osc_cache.c b/lustre/osc/osc_cache.c index 66e08b7..f4b3f96 100644 --- a/lustre/osc/osc_cache.c +++ b/lustre/osc/osc_cache.c @@ -1984,7 +1984,7 @@ static unsigned int get_write_extents(struct osc_object *obj, static int osc_send_write_rpc(const struct lu_env *env, struct client_obd *cli, - struct osc_object *osc, pdl_policy_t pol) + struct osc_object *osc) __must_hold(osc) { struct list_head rpclist = LIST_HEAD_INIT(rpclist); @@ -2038,7 +2038,7 @@ __must_hold(osc) if (!list_empty(&rpclist)) { LASSERT(page_count > 0); - rc = osc_build_rpc(env, cli, &rpclist, OBD_BRW_WRITE, pol); + rc = osc_build_rpc(env, cli, &rpclist, OBD_BRW_WRITE); LASSERT(list_empty(&rpclist)); } @@ -2058,7 +2058,7 @@ __must_hold(osc) */ static int osc_send_read_rpc(const struct lu_env *env, struct client_obd *cli, - struct osc_object *osc, pdl_policy_t pol) + struct osc_object *osc) __must_hold(osc) { struct osc_extent *ext; @@ -2087,7 +2087,7 @@ __must_hold(osc) osc_object_unlock(osc); LASSERT(page_count > 0); - rc = osc_build_rpc(env, cli, &rpclist, OBD_BRW_READ, pol); + rc = osc_build_rpc(env, cli, &rpclist, OBD_BRW_READ); LASSERT(list_empty(&rpclist)); osc_object_lock(osc); @@ -2137,8 +2137,7 @@ static struct osc_object *osc_next_obj(struct client_obd *cli) } /* called with the loi list lock held */ -static void osc_check_rpcs(const struct lu_env *env, struct client_obd 
*cli, - pdl_policy_t pol) +static void osc_check_rpcs(const struct lu_env *env, struct client_obd *cli) __must_hold(&cli->cl_loi_list_lock) { struct osc_object *osc; @@ -2168,7 +2167,7 @@ __must_hold(&cli->cl_loi_list_lock) * do io on writes while there are cache waiters */ osc_object_lock(osc); if (osc_makes_rpc(cli, osc, OBD_BRW_WRITE)) { - rc = osc_send_write_rpc(env, cli, osc, pol); + rc = osc_send_write_rpc(env, cli, osc); if (rc < 0) { CERROR("Write request failed with %d\n", rc); @@ -2192,7 +2191,7 @@ __must_hold(&cli->cl_loi_list_lock) } } if (osc_makes_rpc(cli, osc, OBD_BRW_READ)) { - rc = osc_send_read_rpc(env, cli, osc, pol); + rc = osc_send_read_rpc(env, cli, osc); if (rc < 0) CERROR("Read request failed with %d\n", rc); } @@ -2207,7 +2206,7 @@ __must_hold(&cli->cl_loi_list_lock) } static int osc_io_unplug0(const struct lu_env *env, struct client_obd *cli, - struct osc_object *osc, pdl_policy_t pol, int async) + struct osc_object *osc, int async) { int rc = 0; @@ -2219,7 +2218,7 @@ static int osc_io_unplug0(const struct lu_env *env, struct client_obd *cli, * potential stack overrun problem. LU-2859 */ atomic_inc(&cli->cl_lru_shrinkers); spin_lock(&cli->cl_loi_list_lock); - osc_check_rpcs(env, cli, pol); + osc_check_rpcs(env, cli); spin_unlock(&cli->cl_loi_list_lock); atomic_dec(&cli->cl_lru_shrinkers); } else { @@ -2233,14 +2232,13 @@ static int osc_io_unplug0(const struct lu_env *env, struct client_obd *cli, static int osc_io_unplug_async(const struct lu_env *env, struct client_obd *cli, struct osc_object *osc) { - /* XXX: policy is no use actually. 
*/ - return osc_io_unplug0(env, cli, osc, PDL_POLICY_ROUND, 1); + return osc_io_unplug0(env, cli, osc, 1); } void osc_io_unplug(const struct lu_env *env, struct client_obd *cli, - struct osc_object *osc, pdl_policy_t pol) + struct osc_object *osc) { - (void)osc_io_unplug0(env, cli, osc, pol, 0); + (void)osc_io_unplug0(env, cli, osc, 0); } int osc_prep_async_page(struct osc_object *osc, struct osc_page *ops, @@ -2994,7 +2992,7 @@ int osc_cache_writeback_range(const struct lu_env *env, struct osc_object *obj, } if (unplug) - osc_io_unplug(env, osc_cli(obj), obj, PDL_POLICY_ROUND); + osc_io_unplug(env, osc_cli(obj), obj); if (hp || discard) { int rc; diff --git a/lustre/osc/osc_cl_internal.h b/lustre/osc/osc_cl_internal.h index 24db084..9dc9cdf 100644 --- a/lustre/osc/osc_cl_internal.h +++ b/lustre/osc/osc_cl_internal.h @@ -448,7 +448,7 @@ int osc_cache_writeback_range(const struct lu_env *env, struct osc_object *obj, int osc_cache_wait_range(const struct lu_env *env, struct osc_object *obj, pgoff_t start, pgoff_t end); void osc_io_unplug(const struct lu_env *env, struct client_obd *cli, - struct osc_object *osc, pdl_policy_t pol); + struct osc_object *osc); int lru_queue_work(const struct lu_env *env, void *data); void osc_object_set_contended (struct osc_object *obj); diff --git a/lustre/osc/osc_internal.h b/lustre/osc/osc_internal.h index 7e2348c..842d5ac 100644 --- a/lustre/osc/osc_internal.h +++ b/lustre/osc/osc_internal.h @@ -128,7 +128,7 @@ int osc_sync_base(struct osc_object *obj, struct obdo *oa, int osc_process_config_base(struct obd_device *obd, struct lustre_cfg *cfg); int osc_build_rpc(const struct lu_env *env, struct client_obd *cli, - struct list_head *ext_list, int cmd, pdl_policy_t p); + struct list_head *ext_list, int cmd); long osc_lru_shrink(const struct lu_env *env, struct client_obd *cli, long target, bool force); long osc_lru_reclaim(struct client_obd *cli); diff --git a/lustre/osc/osc_io.c b/lustre/osc/osc_io.c index 4fb098c..043a6de 100644 --- 
a/lustre/osc/osc_io.c +++ b/lustre/osc/osc_io.c @@ -687,7 +687,7 @@ static int osc_io_data_version_start(const struct lu_env *env, dva = ptlrpc_req_async_args(req); dva->dva_oio = oio; - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpcd_add_req(req); RETURN(0); } diff --git a/lustre/osc/osc_request.c b/lustre/osc/osc_request.c index e02698e..0a1b5f5 100644 --- a/lustre/osc/osc_request.c +++ b/lustre/osc/osc_request.c @@ -238,7 +238,7 @@ int osc_setattr_async(struct obd_export *exp, struct obdo *oa, /* do mds to ost setattr asynchronously */ if (!rqset) { /* Do not wait for response. */ - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpcd_add_req(req); } else { req->rq_interpret_reply = (ptlrpc_interpterer_t)osc_setattr_interpret; @@ -250,7 +250,7 @@ int osc_setattr_async(struct obd_export *exp, struct obdo *oa, sa->sa_cookie = cookie; if (rqset == PTLRPCD_SET) - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpcd_add_req(req); else ptlrpc_set_add_req(rqset, req); } @@ -341,14 +341,14 @@ int osc_punch_base(struct obd_export *exp, struct obdo *oa, CLASSERT(sizeof(*sa) <= sizeof(req->rq_async_args)); sa = ptlrpc_req_async_args(req); sa->sa_oa = oa; - sa->sa_upcall = upcall; - sa->sa_cookie = cookie; - if (rqset == PTLRPCD_SET) - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); - else - ptlrpc_set_add_req(rqset, req); + sa->sa_upcall = upcall; + sa->sa_cookie = cookie; + if (rqset == PTLRPCD_SET) + ptlrpcd_add_req(req); + else + ptlrpc_set_add_req(rqset, req); - RETURN(0); + RETURN(0); } static int osc_sync_interpret(const struct lu_env *env, @@ -427,7 +427,7 @@ int osc_sync_base(struct osc_object *obj, struct obdo *oa, fa->fa_cookie = cookie; if (rqset == PTLRPCD_SET) - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpcd_add_req(req); else ptlrpc_set_add_req(rqset, req); @@ -550,9 +550,9 @@ static int osc_destroy(const struct lu_env *env, struct obd_export *exp, osc_can_send_destroy(cli), &lwi); } - /* Do not wait for response */ - ptlrpcd_add_req(req, 
PDL_POLICY_ROUND, -1); - RETURN(0); + /* Do not wait for response */ + ptlrpcd_add_req(req); + RETURN(0); } static void osc_announce_cached(struct client_obd *cli, struct obdo *oa, @@ -1441,7 +1441,7 @@ static int osc_brw_redo_request(struct ptlrpc_request *request, * to add a series of BRW RPCs into a self-defined ptlrpc_request_set * and wait for all of them to be finished. We should inherit request * set from old request. */ - ptlrpcd_add_req(new_req, PDL_POLICY_SAME, -1); + ptlrpcd_add_req(new_req); DEBUG_REQ(D_INFO, new_req, "new request"); RETURN(0); @@ -1599,7 +1599,7 @@ static int brw_interpret(const struct lu_env *env, osc_wake_cache_waiters(cli); spin_unlock(&cli->cl_loi_list_lock); - osc_io_unplug(env, cli, NULL, PDL_POLICY_SAME); + osc_io_unplug(env, cli, NULL); RETURN(rc); } @@ -1627,7 +1627,7 @@ static void brw_commit(struct ptlrpc_request *req) * Extents in the list must be in OES_RPC state. */ int osc_build_rpc(const struct lu_env *env, struct client_obd *cli, - struct list_head *ext_list, int cmd, pdl_policy_t pol) + struct list_head *ext_list, int cmd) { struct ptlrpc_request *req = NULL; struct osc_extent *ext; @@ -1793,19 +1793,7 @@ int osc_build_rpc(const struct lu_env *env, struct client_obd *cli, page_count, aa, cli->cl_r_in_flight, cli->cl_w_in_flight); - /* XXX: Maybe the caller can check the RPC bulk descriptor to - * see which CPU/NUMA node the majority of pages were allocated - * on, and try to assign the async RPC to the CPU core - * (PDL_POLICY_PREFERRED) to reduce cross-CPU memory traffic. - * - * But on the other hand, we expect that multiple ptlrpcd - * threads and the initial write sponsor can run in parallel, - * especially when data checksum is enabled, which is CPU-bound - * operation and single ptlrpcd thread cannot process in time. - * So more ptlrpcd threads sharing BRW load - * (with PDL_POLICY_ROUND) seems better. 
- */ - ptlrpcd_add_req(req, pol, -1); + ptlrpcd_add_req(req); rc = 0; EXIT; @@ -2101,17 +2089,17 @@ no_match: aa->oa_flags = NULL; } - req->rq_interpret_reply = - (ptlrpc_interpterer_t)osc_enqueue_interpret; - if (rqset == PTLRPCD_SET) - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); - else - ptlrpc_set_add_req(rqset, req); - } else if (intent) { - ptlrpc_req_finished(req); - } - RETURN(rc); - } + req->rq_interpret_reply = + (ptlrpc_interpterer_t)osc_enqueue_interpret; + if (rqset == PTLRPCD_SET) + ptlrpcd_add_req(req); + else + ptlrpc_set_add_req(rqset, req); + } else if (intent) { + ptlrpc_req_finished(req); + } + RETURN(rc); + } rc = osc_enqueue_fini(req, upcall, cookie, &lockh, einfo->ei_mode, flags, agl, rc); @@ -2451,15 +2439,16 @@ static int osc_set_info_async(const struct lu_env *env, struct obd_export *exp, req->rq_interpret_reply = osc_shrink_grant_interpret; } - ptlrpc_request_set_replen(req); - if (!KEY_IS(KEY_GRANT_SHRINK)) { - LASSERT(set != NULL); - ptlrpc_set_add_req(set, req); - ptlrpc_check_set(NULL, set); - } else - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpc_request_set_replen(req); + if (!KEY_IS(KEY_GRANT_SHRINK)) { + LASSERT(set != NULL); + ptlrpc_set_add_req(set, req); + ptlrpc_check_set(NULL, set); + } else { + ptlrpcd_add_req(req); + } - RETURN(0); + RETURN(0); } static int osc_reconnect(const struct lu_env *env, @@ -2551,7 +2540,7 @@ static int osc_import_event(struct obd_device *obd, cli = &obd->u.cli; /* all pages go to failing rpcs due to the invalid * import */ - osc_io_unplug(env, cli, NULL, PDL_POLICY_ROUND); + osc_io_unplug(env, cli, NULL); ldlm_namespace_cleanup(ns, LDLM_FL_LOCAL_ONLY); cl_env_put(env, &refcheck); @@ -2617,7 +2606,7 @@ static int brw_queue_work(const struct lu_env *env, void *data) CDEBUG(D_CACHE, "Run writeback work for client obd %p.\n", cli); - osc_io_unplug(env, cli, NULL, PDL_POLICY_SAME); + osc_io_unplug(env, cli, NULL); RETURN(0); } diff --git a/lustre/osp/osp_precreate.c b/lustre/osp/osp_precreate.c 
index 3c4d3c8..cac5bf2 100644 --- a/lustre/osp/osp_precreate.c +++ b/lustre/osp/osp_precreate.c @@ -207,7 +207,7 @@ static int osp_statfs_update(struct osp_device *d) d->opd_statfs_fresh_till = cfs_time_shift(obd_timeout * 1000); d->opd_statfs_update_in_progress = 1; - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpcd_add_req(req); RETURN(0); } diff --git a/lustre/osp/osp_sync.c b/lustre/osp/osp_sync.c index c7406a8..f0d785a 100644 --- a/lustre/osp/osp_sync.c +++ b/lustre/osp/osp_sync.c @@ -556,7 +556,7 @@ static void osp_sync_send_new_rpc(struct osp_device *d, jra->jra_magic = OSP_JOB_MAGIC; INIT_LIST_HEAD(&jra->jra_link); - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpcd_add_req(req); } diff --git a/lustre/osp/osp_trans.c b/lustre/osp/osp_trans.c index efe9a85..bcedee5 100644 --- a/lustre/osp/osp_trans.c +++ b/lustre/osp/osp_trans.c @@ -567,7 +567,7 @@ int osp_unplug_async_request(const struct lu_env *env, args->oaua_waitq = NULL; args->oaua_flow_control = false; req->rq_interpret_reply = osp_update_interpret; - ptlrpcd_add_req(req, PDL_POLICY_LOCAL, -1); + ptlrpcd_add_req(req); } return rc; @@ -967,7 +967,7 @@ static int osp_send_update_req(const struct lu_env *env, atomic_inc(args->oaua_count); } - ptlrpcd_add_req(req, PDL_POLICY_LOCAL, -1); + ptlrpcd_add_req(req); req = NULL; } else { osp_thandle_get(oth); /* hold for commit callback */ diff --git a/lustre/ptlrpc/client.c b/lustre/ptlrpc/client.c index d874d70..ec92ea1 100644 --- a/lustre/ptlrpc/client.c +++ b/lustre/ptlrpc/client.c @@ -917,15 +917,17 @@ ptlrpc_prep_req(struct obd_import *imp, __u32 version, int opcode, int count, } /** - * Allocate and initialize new request set structure. + * Allocate and initialize new request set structure on the current CPT. * Returns a pointer to the newly allocated set structure or NULL on error. 
*/ struct ptlrpc_request_set *ptlrpc_prep_set(void) { - struct ptlrpc_request_set *set; + struct ptlrpc_request_set *set; + int cpt; ENTRY; - OBD_ALLOC(set, sizeof *set); + cpt = cfs_cpt_current(cfs_cpt_table, 0); + OBD_CPT_ALLOC(set, cfs_cpt_table, cpt, sizeof *set); if (!set) RETURN(NULL); atomic_set(&set->set_refcount, 1); @@ -2928,7 +2930,7 @@ int ptlrpc_replay_req(struct ptlrpc_request *req) atomic_inc(&req->rq_import->imp_replay_inflight); ptlrpc_request_addref(req); /* ptlrpcd needs a ref */ - ptlrpcd_add_req(req, PDL_POLICY_LOCAL, -1); + ptlrpcd_add_req(req); RETURN(0); } @@ -3172,7 +3174,7 @@ static void ptlrpcd_add_work_req(struct ptlrpc_request *req) req->rq_xid = ptlrpc_next_xid(); req->rq_import_generation = req->rq_import->imp_generation; - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpcd_add_req(req); } static int work_interpreter(const struct lu_env *env, diff --git a/lustre/ptlrpc/import.c b/lustre/ptlrpc/import.c index e682e7c..e8f595b 100644 --- a/lustre/ptlrpc/import.c +++ b/lustre/ptlrpc/import.c @@ -756,16 +756,15 @@ int ptlrpc_connect_import(struct obd_import *imp) lustre_msg_add_op_flags(request->rq_reqmsg, MSG_CONNECT_TRANSNO); - DEBUG_REQ(D_RPCTRACE, request, "(re)connect request (timeout %d)", - request->rq_timeout); - ptlrpcd_add_req(request, PDL_POLICY_ROUND, -1); - rc = 0; + DEBUG_REQ(D_RPCTRACE, request, "(re)connect request (timeout %d)", + request->rq_timeout); + ptlrpcd_add_req(request); + rc = 0; out: - if (rc != 0) { - IMPORT_SET_STATE(imp, LUSTRE_IMP_DISCON); - } + if (rc != 0) + IMPORT_SET_STATE(imp, LUSTRE_IMP_DISCON); - RETURN(rc); + RETURN(rc); } EXPORT_SYMBOL(ptlrpc_connect_import); @@ -1341,7 +1340,7 @@ static int signal_completed_replay(struct obd_import *imp) req->rq_timeout *= 3; req->rq_interpret_reply = completed_replay_interpret; - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + ptlrpcd_add_req(req); RETURN(0); } diff --git a/lustre/ptlrpc/pinger.c b/lustre/ptlrpc/pinger.c index 1e23bab..bf3cc90 100644 --- 
a/lustre/ptlrpc/pinger.c +++ b/lustre/ptlrpc/pinger.c @@ -96,22 +96,22 @@ EXPORT_SYMBOL(ptlrpc_obd_ping); static int ptlrpc_ping(struct obd_import *imp) { - struct ptlrpc_request *req; - ENTRY; + struct ptlrpc_request *req; + ENTRY; - req = ptlrpc_prep_ping(imp); - if (req == NULL) { - CERROR("OOM trying to ping %s->%s\n", - imp->imp_obd->obd_uuid.uuid, - obd2cli_tgt(imp->imp_obd)); - RETURN(-ENOMEM); - } + req = ptlrpc_prep_ping(imp); + if (req == NULL) { + CERROR("OOM trying to ping %s->%s\n", + imp->imp_obd->obd_uuid.uuid, + obd2cli_tgt(imp->imp_obd)); + RETURN(-ENOMEM); + } - DEBUG_REQ(D_INFO, req, "pinging %s->%s", - imp->imp_obd->obd_uuid.uuid, obd2cli_tgt(imp->imp_obd)); - ptlrpcd_add_req(req, PDL_POLICY_ROUND, -1); + DEBUG_REQ(D_INFO, req, "pinging %s->%s", + imp->imp_obd->obd_uuid.uuid, obd2cli_tgt(imp->imp_obd)); + ptlrpcd_add_req(req); - RETURN(0); + RETURN(0); } static void ptlrpc_update_next_ping(struct obd_import *imp, int soon) diff --git a/lustre/ptlrpc/ptlrpc_internal.h b/lustre/ptlrpc/ptlrpc_internal.h index 54a522b..30e72a0 100644 --- a/lustre/ptlrpc/ptlrpc_internal.h +++ b/lustre/ptlrpc/ptlrpc_internal.h @@ -68,7 +68,7 @@ extern struct mutex pinger_mutex; int ptlrpc_start_thread(struct ptlrpc_service_part *svcpt, int wait); /* ptlrpcd.c */ -int ptlrpcd_start(int index, int max, const char *name, struct ptlrpcd_ctl *pc); +int ptlrpcd_start(struct ptlrpcd_ctl *pc); /* client.c */ void ptlrpc_at_adj_net_latency(struct ptlrpc_request *req, diff --git a/lustre/ptlrpc/ptlrpcd.c b/lustre/ptlrpc/ptlrpcd.c index cf8b3a0..d0e52ec 100644 --- a/lustre/ptlrpc/ptlrpcd.c +++ b/lustre/ptlrpc/ptlrpcd.c @@ -67,22 +67,90 @@ #include "ptlrpc_internal.h" +/* One of these per CPT. 
*/ struct ptlrpcd { - int pd_size; - int pd_index; - int pd_nthreads; - struct ptlrpcd_ctl pd_thread_rcv; - struct ptlrpcd_ctl pd_threads[0]; + int pd_size; + int pd_index; + int pd_cpt; + int pd_cursor; + int pd_nthreads; + int pd_groupsize; + struct ptlrpcd_ctl pd_threads[0]; }; +/* + * max_ptlrpcds is obsolete, but retained to ensure that the kernel + * module will load on a system where it has been tuned. + * A value other than 0 implies it was tuned, in which case the value + * is used to derive a setting for ptlrpcd_per_cpt_max. + */ static int max_ptlrpcds; CFS_MODULE_PARM(max_ptlrpcds, "i", int, 0644, - "Max ptlrpcd thread count to be started."); + "Max ptlrpcd thread count to be started."); -static int ptlrpcd_bind_policy = PDB_POLICY_PAIR; +/* + * ptlrpcd_bind_policy is obsolete, but retained to ensure that + * the kernel module will load on a system where it has been tuned. + * A value other than 0 implies it was tuned, in which case the value + * is used to derive a setting for ptlrpcd_partner_group_size. + */ +static int ptlrpcd_bind_policy; CFS_MODULE_PARM(ptlrpcd_bind_policy, "i", int, 0644, - "Ptlrpcd threads binding mode."); -static struct ptlrpcd *ptlrpcds; + "Ptlrpcd threads binding mode (obsolete)."); + +/* + * ptlrpcd_per_cpt_max: The maximum number of ptlrpcd threads to run + * in a CPT. + */ +static int ptlrpcd_per_cpt_max; +CFS_MODULE_PARM(ptlrpcd_per_cpt_max, "i", int, 0644, + "Max ptlrpcd thread count to be started per cpt."); + +/* + * ptlrpcd_partner_group_size: The desired number of threads in each + * ptlrpcd partner thread group. Default is 2, corresponding to the + * old PDB_POLICY_PAIR. A negative value makes all ptlrpcd threads in + * a CPT partners of each other. + */ +static int ptlrpcd_partner_group_size; +CFS_MODULE_PARM(ptlrpcd_partner_group_size, "i", int, 0644, + "Number of ptlrpcd threads in a partner group."); + +/* + * ptlrpcd_cpts: A CPT string describing the CPU partitions that + * ptlrpcd threads should run on. 
Used to make ptlrpcd threads run on + * a subset of all CPTs. + * + * ptlrpcd_cpts=2 + * ptlrpcd_cpts=[2] + * run ptlrpcd threads only on CPT 2. + * + * ptlrpcd_cpts=0-3 + * ptlrpcd_cpts=[0-3] + * run ptlrpcd threads on CPTs 0, 1, 2, and 3. + * + * ptlrpcd_cpts=[0-3,5,7] + * run ptlrpcd threads on CPTS 0, 1, 2, 3, 5, and 7. + */ +static char *ptlrpcd_cpts; +CFS_MODULE_PARM(ptlrpcd_cpts, "s", charp, 0644, + "CPU partitions ptlrpcd threads should run in"); + +/* ptlrpcds_cpt_idx maps cpt numbers to an index in the ptlrpcds array. */ +static int *ptlrpcds_cpt_idx; + +/* ptlrpcds_num is the number of entries in the ptlrpcds array. */ +static int ptlrpcds_num; +static struct ptlrpcd **ptlrpcds; + +/* + * In addition to the regular thread pool above, there is a single + * global recovery thread. Recovery isn't critical for performance, + * and doesn't block, but must always be able to proceed, and it is + * possible that all normal ptlrpcd threads are blocked. Hence the + * need for a dedicated thread. + */ +static struct ptlrpcd_ctl ptlrpcd_rcv; struct mutex ptlrpcd_mutex; static int ptlrpcd_users = 0; @@ -97,45 +165,29 @@ void ptlrpcd_wake(struct ptlrpc_request *req) EXPORT_SYMBOL(ptlrpcd_wake); static struct ptlrpcd_ctl * -ptlrpcd_select_pc(struct ptlrpc_request *req, pdl_policy_t policy, int index) +ptlrpcd_select_pc(struct ptlrpc_request *req) { - int idx = 0; - - if (req != NULL && req->rq_send_state != LUSTRE_IMP_FULL) - return &ptlrpcds->pd_thread_rcv; - - switch (policy) { - case PDL_POLICY_SAME: - idx = smp_processor_id() % ptlrpcds->pd_nthreads; - break; - case PDL_POLICY_LOCAL: - /* Before CPU partition patches available, process it the same - * as "PDL_POLICY_ROUND". */ -# ifdef CFS_CPU_MODE_NUMA -# warning "fix this code to use new CPU partition APIs" -# endif - /* Fall through to PDL_POLICY_ROUND until the CPU - * CPU partition patches are available. 
*/ - index = -1; - case PDL_POLICY_PREFERRED: - if (index >= 0 && index < num_online_cpus()) { - idx = index % ptlrpcds->pd_nthreads; - break; - } - /* Fall through to PDL_POLICY_ROUND for bad index. */ - default: - /* Fall through to PDL_POLICY_ROUND for unknown policy. */ - case PDL_POLICY_ROUND: - /* We do not care whether it is strict load balance. */ - idx = ptlrpcds->pd_index + 1; - if (idx == smp_processor_id()) - idx++; - idx %= ptlrpcds->pd_nthreads; - ptlrpcds->pd_index = idx; - break; - } - - return &ptlrpcds->pd_threads[idx]; + struct ptlrpcd *pd; + int cpt; + int idx; + + if (req != NULL && req->rq_send_state != LUSTRE_IMP_FULL) + return &ptlrpcd_rcv; + + cpt = cfs_cpt_current(cfs_cpt_table, 1); + if (ptlrpcds_cpt_idx == NULL) + idx = cpt; + else + idx = ptlrpcds_cpt_idx[cpt]; + pd = ptlrpcds[idx]; + + /* We do not care whether it is strict load balance. */ + idx = pd->pd_cursor; + if (++idx == pd->pd_nthreads) + idx = 0; + pd->pd_cursor = idx; + + return &pd->pd_threads[idx]; } /** @@ -145,12 +197,12 @@ ptlrpcd_select_pc(struct ptlrpc_request *req, pdl_policy_t policy, int index) void ptlrpcd_add_rqset(struct ptlrpc_request_set *set) { struct list_head *tmp, *pos; - struct ptlrpcd_ctl *pc; - struct ptlrpc_request_set *new; - int count, i; + struct ptlrpcd_ctl *pc; + struct ptlrpc_request_set *new; + int count, i; - pc = ptlrpcd_select_pc(NULL, PDL_POLICY_LOCAL, -1); - new = pc->pc_set; + pc = ptlrpcd_select_pc(NULL); + new = pc->pc_set; list_for_each_safe(pos, tmp, &set->set_requests) { struct ptlrpc_request *req = @@ -210,7 +262,7 @@ static int ptlrpcd_steal_rqset(struct ptlrpc_request_set *des, * Requests that are added to the ptlrpcd queue are sent via * ptlrpcd_check->ptlrpc_check_set(). 
*/ -void ptlrpcd_add_req(struct ptlrpc_request *req, pdl_policy_t policy, int idx) +void ptlrpcd_add_req(struct ptlrpc_request *req) { struct ptlrpcd_ctl *pc; @@ -240,7 +292,7 @@ void ptlrpcd_add_req(struct ptlrpc_request *req, pdl_policy_t policy, int idx) spin_unlock(&req->rq_lock); } - pc = ptlrpcd_select_pc(req, policy, idx); + pc = ptlrpcd_select_pc(req); DEBUG_REQ(D_INFO, req, "add req [%p] to pc [%s:%d]", req, pc->pc_name, pc->pc_index); @@ -371,42 +423,49 @@ static int ptlrpcd_check(struct lu_env *env, struct ptlrpcd_ctl *pc) */ static int ptlrpcd(void *arg) { - struct ptlrpcd_ctl *pc = arg; - struct ptlrpc_request_set *set = pc->pc_set; - struct lu_context ses = { 0 }; - struct lu_env env = { .le_ses = &ses }; - int rc, exit = 0; + struct ptlrpcd_ctl *pc = arg; + struct ptlrpc_request_set *set; + struct lu_context ses = { 0 }; + struct lu_env env = { .le_ses = &ses }; + int rc = 0; + int exit = 0; ENTRY; unshare_fs_struct(); -#if defined(CONFIG_SMP) - if (test_bit(LIOD_BIND, &pc->pc_flags)) { - int index = pc->pc_index; - - if (index >= 0 && index < num_possible_cpus()) { - while (!cpu_online(index)) { - if (++index >= num_possible_cpus()) - index = 0; - } - set_cpus_allowed_ptr(current, - cpumask_of_node(cpu_to_node(index))); - } - } -#endif + + if (cfs_cpt_bind(cfs_cpt_table, pc->pc_cpt) != 0) + CWARN("Failed to bind %s on CPT %d\n", pc->pc_name, pc->pc_cpt); + + /* + * Allocate the request set after the thread has been bound + * above. This is safe because no requests will be queued + * until all ptlrpcd threads have confirmed that they have + * successfully started. + */ + set = ptlrpc_prep_set(); + if (set == NULL) + GOTO(failed, rc = -ENOMEM); + spin_lock(&pc->pc_lock); + pc->pc_set = set; + spin_unlock(&pc->pc_lock); + /* Both client and server (MDT/OST) may use the environment. 
*/ - rc = lu_context_init(&env.le_ctx, LCT_MD_THREAD | LCT_DT_THREAD | - LCT_CL_THREAD | LCT_REMEMBER | + rc = lu_context_init(&env.le_ctx, LCT_MD_THREAD | + LCT_DT_THREAD | + LCT_CL_THREAD | + LCT_REMEMBER | LCT_NOREF); - if (rc == 0) { - rc = lu_context_init(env.le_ses, - LCT_SESSION|LCT_REMEMBER|LCT_NOREF); - if (rc != 0) - lu_context_fini(&env.le_ctx); + if (rc != 0) + GOTO(failed, rc); + rc = lu_context_init(env.le_ses, LCT_SESSION | + LCT_REMEMBER | + LCT_NOREF); + if (rc != 0) { + lu_context_fini(&env.le_ctx); + GOTO(failed, rc); } - complete(&pc->pc_starting); - if (rc != 0) - RETURN(rc); + complete(&pc->pc_starting); /* * This mainloop strongly resembles ptlrpc_set_wait() except that our @@ -454,201 +513,120 @@ static int ptlrpcd(void *arg) complete(&pc->pc_finishing); return 0; + +failed: + pc->pc_error = rc; + complete(&pc->pc_starting); + RETURN(rc); } -/* XXX: We want multiple CPU cores to share the async RPC load. So we start many - * ptlrpcd threads. We also want to reduce the ptlrpcd overhead caused by - * data transfer cross-CPU cores. So we bind ptlrpcd thread to specified - * CPU core. But binding all ptlrpcd threads maybe cause response delay - * because of some CPU core(s) busy with other loads. - * - * For example: "ls -l", some async RPCs for statahead are assigned to - * ptlrpcd_0, and ptlrpcd_0 is bound to CPU_0, but CPU_0 may be quite busy - * with other non-ptlrpcd, like "ls -l" itself (we want to the "ls -l" - * thread, statahead thread, and ptlrpcd thread can run in parallel), under - * such case, the statahead async RPCs can not be processed in time, it is - * unexpected. If ptlrpcd_0 can be re-scheduled on other CPU core, it may - * be better. But it breaks former data transfer policy. - * - * So we shouldn't be blind for avoiding the data transfer. We make some - * compromise: divide the ptlrpcd threds pool into two parts. One part is - * for bound mode, each ptlrpcd thread in this part is bound to some CPU - * core. 
The other part is for free mode, all the ptlrpcd threads in the - * part can be scheduled on any CPU core. We specify some partnership - * between bound mode ptlrpcd thread(s) and free mode ptlrpcd thread(s), - * and the async RPC load within the partners are shared. +static void ptlrpcd_ctl_init(struct ptlrpcd_ctl *pc, int index, int cpt) +{ + ENTRY; + + pc->pc_index = index; + pc->pc_cpt = cpt; + init_completion(&pc->pc_starting); + init_completion(&pc->pc_finishing); + spin_lock_init(&pc->pc_lock); + + if (index < 0) { + /* Recovery thread. */ + snprintf(pc->pc_name, sizeof(pc->pc_name), "ptlrpcd_rcv"); + } else { + /* Regular thread. */ + snprintf(pc->pc_name, sizeof(pc->pc_name), + "ptlrpcd_%02d_%02d", cpt, index); + } + + EXIT; +} + +/* XXX: We want multiple CPU cores to share the async RPC load. So we + * start many ptlrpcd threads. We also want to reduce the ptlrpcd + * overhead caused by data transfer cross-CPU cores. So we bind + * all ptlrpcd threads to a CPT, in the expectation that CPTs + * will be defined in a way that matches these boundaries. Within + * a CPT a ptlrpcd thread can be scheduled on any available core. * - * It can partly avoid data transfer cross-CPU (if the bound mode ptlrpcd - * thread can be scheduled in time), and try to guarantee the async RPC - * processed ASAP (as long as the free mode ptlrpcd thread can be scheduled - * on any CPU core). + * Each ptlrpcd thread has its own request queue. This can cause + * response delay if the thread is already busy. To help with + * this we define partner threads: these are other threads bound + * to the same CPT which will check for work in each other's + * request queues if they have no work to do. * - * As for how to specify the partnership between bound mode ptlrpcd - * thread(s) and free mode ptlrpcd thread(s), the simplest way is to use - * pair. In future, we can specify some more complex - * partnership based on the patches for CPU partition. 
But before such - * patches are available, we prefer to use the simplest one. + * The desired number of partner threads can be tuned by setting + * ptlrpcd_partner_group_size. The default is to create pairs of + * partner threads. */ -# ifdef CFS_CPU_MODE_NUMA -# warning "fix ptlrpcd_bind() to use new CPU partition APIs" -# endif -static int ptlrpcd_bind(int index, int max) +static int ptlrpcd_partners(struct ptlrpcd *pd, int index) { - struct ptlrpcd_ctl *pc; - int rc = 0; -#if defined(CONFIG_NUMA) - cpumask_t mask; -#endif + struct ptlrpcd_ctl *pc; + struct ptlrpcd_ctl **ppc; + int first; + int i; + int rc = 0; ENTRY; - LASSERT(index <= max - 1); - pc = &ptlrpcds->pd_threads[index]; - switch (ptlrpcd_bind_policy) { - case PDB_POLICY_NONE: - pc->pc_npartners = -1; - break; - case PDB_POLICY_FULL: - pc->pc_npartners = 0; - set_bit(LIOD_BIND, &pc->pc_flags); - break; - case PDB_POLICY_PAIR: - LASSERT(max % 2 == 0); - pc->pc_npartners = 1; - break; - case PDB_POLICY_NEIGHBOR: -#if defined(CONFIG_NUMA) - { - int i; - cpumask_copy(&mask, cpumask_of_node(cpu_to_node(index))); - for (i = max; i < num_online_cpus(); i++) - cpumask_clear_cpu(i, &mask); - pc->pc_npartners = cpumask_weight(&mask) - 1; - set_bit(LIOD_BIND, &pc->pc_flags); - } -#else - LASSERT(max >= 3); - pc->pc_npartners = 2; -#endif - break; - default: - CERROR("unknown ptlrpcd bind policy %d\n", ptlrpcd_bind_policy); - rc = -EINVAL; - } + LASSERT(index >= 0 && index < pd->pd_nthreads); + pc = &pd->pd_threads[index]; + pc->pc_npartners = pd->pd_groupsize - 1; - if (rc == 0 && pc->pc_npartners > 0) { - OBD_ALLOC(pc->pc_partners, - sizeof(struct ptlrpcd_ctl *) * pc->pc_npartners); - if (pc->pc_partners == NULL) { - pc->pc_npartners = 0; - rc = -ENOMEM; - } else { - switch (ptlrpcd_bind_policy) { - case PDB_POLICY_PAIR: - if (index & 0x1) { - set_bit(LIOD_BIND, &pc->pc_flags); - pc->pc_partners[0] = &ptlrpcds-> - pd_threads[index - 1]; - ptlrpcds->pd_threads[index - 1]. 
- pc_partners[0] = pc; - } - break; - case PDB_POLICY_NEIGHBOR: -#if defined(CONFIG_NUMA) - { - struct ptlrpcd_ctl *ppc; - int i, pidx; - /* partners are cores in the same NUMA node. - * setup partnership only with ptlrpcd threads - * that are already initialized - */ - for (pidx = 0, i = 0; i < index; i++) { - if (cpumask_test_cpu(i, &mask)) { - ppc = &ptlrpcds->pd_threads[i]; - pc->pc_partners[pidx++] = ppc; - ppc->pc_partners[ppc-> - pc_npartners++] = pc; - } - } - /* adjust number of partners to the number - * of partnership really setup */ - pc->pc_npartners = pidx; - } -#else - if (index & 0x1) - set_bit(LIOD_BIND, &pc->pc_flags); - if (index > 0) { - pc->pc_partners[0] = &ptlrpcds-> - pd_threads[index - 1]; - ptlrpcds->pd_threads[index - 1]. - pc_partners[1] = pc; - if (index == max - 1) { - pc->pc_partners[1] = - &ptlrpcds->pd_threads[0]; - ptlrpcds->pd_threads[0]. - pc_partners[0] = pc; - } - } -#endif - break; - } - } - } + if (pc->pc_npartners <= 0) + GOTO(out, rc); - RETURN(rc); -} + OBD_CPT_ALLOC(pc->pc_partners, cfs_cpt_table, pc->pc_cpt, + sizeof(struct ptlrpcd_ctl *) * pc->pc_npartners); + if (pc->pc_partners == NULL) { + pc->pc_npartners = 0; + GOTO(out, rc = -ENOMEM); + } + first = index - index % pd->pd_groupsize; + ppc = pc->pc_partners; + for (i = first; i < first + pd->pd_groupsize; i++) { + if (i != index) + *ppc++ = &pd->pd_threads[i]; + } +out: + RETURN(rc); +} -int ptlrpcd_start(int index, int max, const char *name, struct ptlrpcd_ctl *pc) +int ptlrpcd_start(struct ptlrpcd_ctl *pc) { - int rc; - ENTRY; + struct task_struct *task; + int rc = 0; + ENTRY; - /* - * Do not allow start second thread for one pc. - */ + /* + * Do not allow starting a second thread for one pc. 
+ */ if (test_and_set_bit(LIOD_START, &pc->pc_flags)) { CWARN("Starting second thread (%s) for same pc %p\n", - name, pc); + pc->pc_name, pc); RETURN(0); } - pc->pc_index = index; - init_completion(&pc->pc_starting); - init_completion(&pc->pc_finishing); - spin_lock_init(&pc->pc_lock); - strlcpy(pc->pc_name, name, sizeof(pc->pc_name)); - pc->pc_set = ptlrpc_prep_set(); - if (pc->pc_set == NULL) - GOTO(out, rc = -ENOMEM); - - /* - * So far only "client" ptlrpcd uses an environment. In the future, - * ptlrpcd thread (or a thread-set) has to be given an argument, - * describing its "scope". - */ - rc = lu_context_init(&pc->pc_env.le_ctx, LCT_CL_THREAD|LCT_REMEMBER); - if (rc != 0) - GOTO(out_set, rc); + /* + * So far only "client" ptlrpcd uses an environment. In the future, + * ptlrpcd thread (or a thread-set) has to be given an argument, + * describing its "scope". + */ + rc = lu_context_init(&pc->pc_env.le_ctx, LCT_CL_THREAD|LCT_REMEMBER); + if (rc != 0) + GOTO(out, rc); - { - struct task_struct *task; - if (index >= 0) { - rc = ptlrpcd_bind(index, max); - if (rc < 0) - GOTO(out_env, rc); - } + task = kthread_run(ptlrpcd, pc, pc->pc_name); + if (IS_ERR(task)) + GOTO(out_set, rc = PTR_ERR(task)); - task = kthread_run(ptlrpcd, pc, pc->pc_name); - if (IS_ERR(task)) - GOTO(out_env, rc = PTR_ERR(task)); + wait_for_completion(&pc->pc_starting); + rc = pc->pc_error; + if (rc != 0) + GOTO(out_set, rc); - wait_for_completion(&pc->pc_starting); - } RETURN(0); -out_env: - lu_context_fini(&pc->pc_env.le_ctx); - out_set: if (pc->pc_set != NULL) { struct ptlrpc_request_set *set = pc->pc_set; @@ -658,7 +636,7 @@ out_set: spin_unlock(&pc->pc_lock); ptlrpc_set_destroy(set); } - clear_bit(LIOD_BIND, &pc->pc_flags); + lu_context_fini(&pc->pc_env.le_ctx); out: clear_bit(LIOD_START, &pc->pc_flags); RETURN(rc); @@ -703,7 +681,6 @@ void ptlrpcd_free(struct ptlrpcd_ctl *pc) clear_bit(LIOD_START, &pc->pc_flags); clear_bit(LIOD_STOP, &pc->pc_flags); clear_bit(LIOD_FORCE, &pc->pc_flags); - 
clear_bit(LIOD_BIND, &pc->pc_flags); out: if (pc->pc_npartners > 0) { @@ -714,23 +691,39 @@ out: pc->pc_partners = NULL; } pc->pc_npartners = 0; + pc->pc_error = 0; EXIT; } static void ptlrpcd_fini(void) { - int i; + int i; + int j; + int ncpts; ENTRY; if (ptlrpcds != NULL) { - for (i = 0; i < ptlrpcds->pd_nthreads; i++) - ptlrpcd_stop(&ptlrpcds->pd_threads[i], 0); - for (i = 0; i < ptlrpcds->pd_nthreads; i++) - ptlrpcd_free(&ptlrpcds->pd_threads[i]); - ptlrpcd_stop(&ptlrpcds->pd_thread_rcv, 0); - ptlrpcd_free(&ptlrpcds->pd_thread_rcv); - OBD_FREE(ptlrpcds, ptlrpcds->pd_size); - ptlrpcds = NULL; + for (i = 0; i < ptlrpcds_num; i++) { + if (ptlrpcds[i] == NULL) + break; + for (j = 0; j < ptlrpcds[i]->pd_nthreads; j++) + ptlrpcd_stop(&ptlrpcds[i]->pd_threads[j], 0); + for (j = 0; j < ptlrpcds[i]->pd_nthreads; j++) + ptlrpcd_free(&ptlrpcds[i]->pd_threads[j]); + OBD_FREE(ptlrpcds[i], ptlrpcds[i]->pd_size); + ptlrpcds[i] = NULL; + } + OBD_FREE(ptlrpcds, sizeof(ptlrpcds[0]) * ptlrpcds_num); + } + ptlrpcds_num = 0; + + ptlrpcd_stop(&ptlrpcd_rcv, 0); + ptlrpcd_free(&ptlrpcd_rcv); + + if (ptlrpcds_cpt_idx != NULL) { + ncpts = cfs_cpt_number(cfs_cpt_table); + OBD_FREE(ptlrpcds_cpt_idx, ncpts * sizeof(ptlrpcds_cpt_idx[0])); + ptlrpcds_cpt_idx = NULL; } EXIT; @@ -738,65 +731,220 @@ static void ptlrpcd_fini(void) static int ptlrpcd_init(void) { - int nthreads = num_online_cpus(); - char name[16]; - int size, i = -1, j, rc = 0; + int nthreads; + int groupsize; + int size; + int i; + int j; + int rc = 0; + struct cfs_cpt_table *cptable; + __u32 *cpts = NULL; + int ncpts; + int cpt; + struct ptlrpcd *pd; ENTRY; - if (max_ptlrpcds > 0 && max_ptlrpcds < nthreads) - nthreads = max_ptlrpcds; - if (nthreads < 2) - nthreads = 2; - if (nthreads < 3 && ptlrpcd_bind_policy == PDB_POLICY_NEIGHBOR) - ptlrpcd_bind_policy = PDB_POLICY_PAIR; - else if (nthreads % 2 != 0 && ptlrpcd_bind_policy == PDB_POLICY_PAIR) - nthreads &= ~1; /* make sure it is even */ - - size = offsetof(struct ptlrpcd, 
pd_threads[nthreads]); - OBD_ALLOC(ptlrpcds, size); - if (ptlrpcds == NULL) - GOTO(out, rc = -ENOMEM); - - snprintf(name, 15, "ptlrpcd_rcv"); - set_bit(LIOD_RECOVERY, &ptlrpcds->pd_thread_rcv.pc_flags); - rc = ptlrpcd_start(-1, nthreads, name, &ptlrpcds->pd_thread_rcv); - if (rc < 0) - GOTO(out, rc); - - /* XXX: We start nthreads ptlrpc daemons. Each of them can process any - * non-recovery async RPC to improve overall async RPC efficiency. - * - * But there are some issues with async I/O RPCs and async non-I/O - * RPCs processed in the same set under some cases. The ptlrpcd may - * be blocked by some async I/O RPC(s), then will cause other async - * non-I/O RPC(s) can not be processed in time. - * - * Maybe we should distinguish blocked async RPCs from non-blocked - * async RPCs, and process them in different ptlrpcd sets to avoid - * unnecessary dependency. But how to distribute async RPCs load - * among all the ptlrpc daemons becomes another trouble. */ - for (i = 0; i < nthreads; i++) { - snprintf(name, 15, "ptlrpcd_%d", i); - rc = ptlrpcd_start(i, nthreads, name, &ptlrpcds->pd_threads[i]); - if (rc < 0) - GOTO(out, rc); - } + /* + * Determine the CPTs that ptlrpcd threads will run on. 
+ */ + cptable = cfs_cpt_table; + ncpts = cfs_cpt_number(cptable); + if (ptlrpcd_cpts != NULL) { + struct cfs_expr_list *el; + + size = ncpts * sizeof(ptlrpcds_cpt_idx[0]); + OBD_ALLOC(ptlrpcds_cpt_idx, size); + if (ptlrpcds_cpt_idx == NULL) + GOTO(out, rc = -ENOMEM); + + rc = cfs_expr_list_parse(ptlrpcd_cpts, + strlen(ptlrpcd_cpts), + 0, ncpts - 1, &el); + if (rc != 0) { + CERROR("%s: invalid CPT pattern string: %s", + "ptlrpcd_cpts", ptlrpcd_cpts); + GOTO(out, rc = -EINVAL); + } - ptlrpcds->pd_size = size; - ptlrpcds->pd_index = 0; - ptlrpcds->pd_nthreads = nthreads; + rc = cfs_expr_list_values(el, ncpts, &cpts); + cfs_expr_list_free(el); + if (rc <= 0) { + CERROR("%s: failed to parse CPT array %s: %d\n", + "ptlrpcd_cpts", ptlrpcd_cpts, rc); + if (rc == 0) + rc = -EINVAL; + GOTO(out, rc); + } + /* + * Create the cpt-to-index map. When there is no match + * in the cpt table, pick a cpt at random. This could + * be changed to take the topology of the system into + * account. + */ + for (cpt = 0; cpt < ncpts; cpt++) { + for (i = 0; i < rc; i++) + if (cpts[i] == cpt) + break; + if (i >= rc) + i = cpt % rc; + ptlrpcds_cpt_idx[cpt] = i; + } + + cfs_expr_list_values_free(cpts, rc); + ncpts = rc; + } + ptlrpcds_num = ncpts; + + size = ncpts * sizeof(ptlrpcds[0]); + OBD_ALLOC(ptlrpcds, size); + if (ptlrpcds == NULL) + GOTO(out, rc = -ENOMEM); + + /* + * The max_ptlrpcds parameter is obsolete, but do something + * sane if it has been tuned, and complain if + * ptlrpcd_per_cpt_max has also been tuned. + */ + if (max_ptlrpcds != 0) { + CWARN("max_ptlrpcds is obsolete.\n"); + if (ptlrpcd_per_cpt_max == 0) { + ptlrpcd_per_cpt_max = max_ptlrpcds / ncpts; + /* Round up if there is a remainder. 
*/ + if (max_ptlrpcds % ncpts != 0) + ptlrpcd_per_cpt_max++; + CWARN("Setting ptlrpcd_per_cpt_max = %d\n", + ptlrpcd_per_cpt_max); + } else { + CWARN("ptlrpd_per_cpt_max is also set!\n"); + } + } + + /* + * The ptlrpcd_bind_policy parameter is obsolete, but do + * something sane if it has been tuned, and complain if + * ptlrpcd_partner_group_size is also tuned. + */ + if (ptlrpcd_bind_policy != 0) { + CWARN("ptlrpcd_bind_policy is obsolete.\n"); + if (ptlrpcd_partner_group_size == 0) { + switch (ptlrpcd_bind_policy) { + case 1: /* PDB_POLICY_NONE */ + case 2: /* PDB_POLICY_FULL */ + ptlrpcd_partner_group_size = 1; + break; + case 3: /* PDB_POLICY_PAIR */ + ptlrpcd_partner_group_size = 2; + break; + case 4: /* PDB_POLICY_NEIGHBOR */ +#ifdef CONFIG_NUMA + ptlrpcd_partner_group_size = -1; /* CPT */ +#else + ptlrpcd_partner_group_size = 3; /* Triplets */ +#endif + break; + default: /* Illegal value, use the default. */ + ptlrpcd_partner_group_size = 2; + break; + } + CWARN("Setting ptlrpcd_partner_group_size = %d\n", + ptlrpcd_partner_group_size); + } else { + CWARN("ptlrpcd_partner_group_size is also set!\n"); + } + } + + if (ptlrpcd_partner_group_size == 0) + ptlrpcd_partner_group_size = 2; + else if (ptlrpcd_partner_group_size < 0) + ptlrpcd_partner_group_size = -1; + else if (ptlrpcd_per_cpt_max > 0 && + ptlrpcd_partner_group_size > ptlrpcd_per_cpt_max) + ptlrpcd_partner_group_size = ptlrpcd_per_cpt_max; + + /* + * Start the recovery thread first. 
+ */ + set_bit(LIOD_RECOVERY, &ptlrpcd_rcv.pc_flags); + ptlrpcd_ctl_init(&ptlrpcd_rcv, -1, CFS_CPT_ANY); + rc = ptlrpcd_start(&ptlrpcd_rcv); + if (rc < 0) + GOTO(out, rc); + + for (i = 0; i < ncpts; i++) { + if (cpts == NULL) + cpt = i; + else + cpt = cpts[i]; + + nthreads = cfs_cpt_weight(cptable, cpt); + if (ptlrpcd_per_cpt_max > 0 && ptlrpcd_per_cpt_max < nthreads) + nthreads = ptlrpcd_per_cpt_max; + if (nthreads < 2) + nthreads = 2; + + if (ptlrpcd_partner_group_size <= 0) { + groupsize = nthreads; + } else if (nthreads <= ptlrpcd_partner_group_size) { + groupsize = nthreads; + } else { + groupsize = ptlrpcd_partner_group_size; + if (nthreads % groupsize != 0) + nthreads += groupsize - (nthreads % groupsize); + } + + size = offsetof(struct ptlrpcd, pd_threads[nthreads]); + OBD_CPT_ALLOC(pd, cptable, cpt, size); + if (!pd) + GOTO(out, rc = -ENOMEM); + pd->pd_size = size; + pd->pd_index = i; + pd->pd_cpt = cpt; + pd->pd_cursor = 0; + pd->pd_nthreads = nthreads; + pd->pd_groupsize = groupsize; + ptlrpcds[i] = pd; + + /* + * The ptlrpcd threads in a partner group can access + * each other's struct ptlrpcd_ctl, so these must be + * initialized before any thead is started. + */ + for (j = 0; j < nthreads; j++) { + ptlrpcd_ctl_init(&pd->pd_threads[j], j, cpt); + rc = ptlrpcd_partners(pd, j); + if (rc < 0) + GOTO(out, rc); + } + + /* XXX: We start nthreads ptlrpc daemons on this cpt. + * Each of them can process any non-recovery + * async RPC to improve overall async RPC + * efficiency. + * + * But there are some issues with async I/O RPCs + * and async non-I/O RPCs processed in the same + * set under some cases. The ptlrpcd may be + * blocked by some async I/O RPC(s), then will + * cause other async non-I/O RPC(s) can not be + * processed in time. + * + * Maybe we should distinguish blocked async RPCs + * from non-blocked async RPCs, and process them + * in different ptlrpcd sets to avoid unnecessary + * dependency. 
But how to distribute async RPCs + * load among all the ptlrpc daemons becomes + * another trouble. + */ + for (j = 0; j < nthreads; j++) { + rc = ptlrpcd_start(&pd->pd_threads[j]); + if (rc < 0) + GOTO(out, rc); + } + } out: - if (rc != 0 && ptlrpcds != NULL) { - for (j = 0; j <= i; j++) - ptlrpcd_stop(&ptlrpcds->pd_threads[j], 0); - for (j = 0; j <= i; j++) - ptlrpcd_free(&ptlrpcds->pd_threads[j]); - ptlrpcd_stop(&ptlrpcds->pd_thread_rcv, 0); - ptlrpcd_free(&ptlrpcds->pd_thread_rcv); - OBD_FREE(ptlrpcds, size); - ptlrpcds = NULL; - } + if (rc != 0) + ptlrpcd_fini(); RETURN(rc); } diff --git a/lustre/quota/qsd_request.c b/lustre/quota/qsd_request.c index 1abc4eb..8c13ff5 100644 --- a/lustre/quota/qsd_request.c +++ b/lustre/quota/qsd_request.c @@ -133,7 +133,7 @@ int qsd_send_dqacq(const struct lu_env *env, struct obd_export *exp, ptlrpc_req_finished(req); } else { req->rq_interpret_reply = qsd_dqacq_interpret; - ptlrpcd_add_req(req, PDL_POLICY_LOCAL, -1); + ptlrpcd_add_req(req); } RETURN(rc); @@ -325,7 +325,7 @@ int qsd_intent_lock(const struct lu_env *env, struct obd_export *exp, } else { /* queue lock request and return */ req->rq_interpret_reply = qsd_intent_interpret; - ptlrpcd_add_req(req, PDL_POLICY_LOCAL, -1); + ptlrpcd_add_req(req); } RETURN(rc); -- 1.8.3.1
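The selection and partner logic above reduces to a little integer arithmetic. The following standalone sketch (not part of the patch; the helper names are hypothetical stand-ins for the in-kernel fields `pd_cursor`, `pd_nthreads`, and `pd_groupsize` of `struct ptlrpcd`) shows the cursor advance performed by the new `ptlrpcd_select_pc()` and the group-start computation from `ptlrpcd_partners()`:

```c
#include <assert.h>

/*
 * Round-robin cursor advance as in ptlrpcd_select_pc(): pick the next
 * thread index within a CPT's pool, wrapping at nthreads.  No strict
 * load balancing is attempted, matching the comment in the patch.
 */
static int next_cursor(int cursor, int nthreads)
{
	if (++cursor == nthreads)
		cursor = 0;
	return cursor;
}

/*
 * First member of the partner group containing thread `index`, as in
 * ptlrpcd_partners(): partner groups are contiguous runs of groupsize
 * threads, so thread `index` partners with the other threads in
 * [first, first + groupsize).
 */
static int group_first(int index, int groupsize)
{
	return index - index % groupsize;
}
```

With the default pair policy (`groupsize == 2`), threads 4 and 5 form one group, so `group_first(5, 2)` and `group_first(4, 2)` both yield 4, and each thread gets exactly one partner.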
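The compatibility handling in `ptlrpcd_init()` also relies on three small computations: converting the obsolete `max_ptlrpcds` into a per-CPT maximum by ceiling division, rounding the per-CPT thread count up to a whole number of partner groups, and mapping a CPT with no configured threads onto one that has them. A user-space sketch of each (function names here are illustrative, not from the patch):

```c
#include <assert.h>

/* Ceiling division, as used when deriving ptlrpcd_per_cpt_max from
 * the obsolete max_ptlrpcds: round up if there is a remainder. */
static int ceil_div(int a, int b)
{
	return a / b + (a % b != 0);
}

/* Round nthreads up to a multiple of groupsize, as ptlrpcd_init()
 * does so that every partner group is fully populated. */
static int round_up_groups(int nthreads, int groupsize)
{
	if (nthreads % groupsize != 0)
		nthreads += groupsize - (nthreads % groupsize);
	return nthreads;
}

/* Fallback used when building ptlrpcds_cpt_idx: a CPT not listed in
 * ptlrpcd_cpts is mapped onto one of the nconfigured listed CPTs by
 * a simple modulo, rather than by system topology. */
static int cpt_fallback(int cpt, int nconfigured)
{
	return cpt % nconfigured;
}
```

For example, `max_ptlrpcds = 7` on 4 CPTs yields `ptlrpcd_per_cpt_max = 2`, and 5 threads with pair partners are rounded up to 6 so no thread is left partnerless.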