From: Andrew Perepechko
Date: Fri, 28 Apr 2017 17:20:06 +0000 (+0300)
Subject: LU-9417 mdc: excessive memory consumption by the xattr cache
X-Git-Tag: 2.10.51~102
X-Git-Url: https://git.whamcloud.com/?p=fs%2Flustre-release.git;a=commitdiff_plain;h=94f080b395d6517e8da06a945bb10b0bfe69ad78

LU-9417 mdc: excessive memory consumption by the xattr cache

The refill operation of the xattr cache does not know the reply size
in advance, so it makes a guess based on the maxeasize value returned
by the MDS. In practice, it allocates 16 KiB for the common case and
4 MiB for the large-xattr case, while a typical reply is just a few
hundred bytes. Even worse, RHEL6 has a very poor vmalloc()/vfree()
design with global locks.

If we follow the conservative approach, we can prepare a single
memory page for the reply. It is large enough for any reasonable
xattr set and, at the same time, does not require multi-page memory
reclaim, which can be costly. If, for a specific file, the reply is
larger than a single page, the client is prepared to handle that and
falls back to the non-cached xattr code. Indeed, if this happens
often because xattrs are routinely used to store large values, it
makes sense to disable the xattr cache entirely, since it was not
designed for such [mis]use.
Change-Id: I98d97ffea5c83bccbc9f254717af8d2c0ac4d77f
Signed-off-by: Andrew Perepechko
Reviewed-on: https://review.whamcloud.com/26887
Reviewed-by: Fan Yong
Reviewed-by: Ben Evans
Tested-by: Jenkins
Tested-by: Maloo
Reviewed-by: Oleg Drokin
---

diff --git a/lustre/mdc/mdc_locks.c b/lustre/mdc/mdc_locks.c
index a85324d..dad1954 100644
--- a/lustre/mdc/mdc_locks.c
+++ b/lustre/mdc/mdc_locks.c
@@ -336,6 +336,10 @@ mdc_intent_open_pack(struct obd_export *exp, struct lookup_intent *it,
 	return req;
 }
 
+#define GA_DEFAULT_EA_NAME_LEN	20
+#define GA_DEFAULT_EA_VAL_LEN	250
+#define GA_DEFAULT_EA_NUM	10
+
 static struct ptlrpc_request *
 mdc_intent_getxattr_pack(struct obd_export *exp,
 			 struct lookup_intent *it,
@@ -344,7 +348,6 @@ mdc_intent_getxattr_pack(struct obd_export *exp,
 	struct ptlrpc_request	*req;
 	struct ldlm_intent	*lit;
 	int			rc, count = 0;
-	__u32			maxdata;
 	struct list_head	cancels = LIST_HEAD_INIT(cancels);
 
 	ENTRY;
@@ -364,22 +367,21 @@ mdc_intent_getxattr_pack(struct obd_export *exp,
 	lit = req_capsule_client_get(&req->rq_pill, &RMF_LDLM_INTENT);
 	lit->opc = IT_GETXATTR;
 
-	maxdata = class_exp2cliimp(exp)->imp_connect_data.ocd_max_easize;
-
 	/* pack the intended request */
-	mdc_pack_body(req, &op_data->op_fid1, op_data->op_valid, maxdata, -1,
-		      0);
+	mdc_pack_body(req, &op_data->op_fid1, op_data->op_valid,
+		      GA_DEFAULT_EA_NAME_LEN * GA_DEFAULT_EA_NUM,
+		      -1, 0);
 
-	req_capsule_set_size(&req->rq_pill, &RMF_EADATA,
-			     RCL_SERVER, maxdata);
+	req_capsule_set_size(&req->rq_pill, &RMF_EADATA, RCL_SERVER,
+			     GA_DEFAULT_EA_NAME_LEN * GA_DEFAULT_EA_NUM);
 
-	req_capsule_set_size(&req->rq_pill, &RMF_EAVALS,
-			     RCL_SERVER, maxdata);
+	req_capsule_set_size(&req->rq_pill, &RMF_EAVALS, RCL_SERVER,
+			     GA_DEFAULT_EA_VAL_LEN * GA_DEFAULT_EA_NUM);
 
-	req_capsule_set_size(&req->rq_pill, &RMF_EAVALS_LENS,
-			     RCL_SERVER, maxdata);
+	req_capsule_set_size(&req->rq_pill, &RMF_EAVALS_LENS, RCL_SERVER,
+			     sizeof(__u32) * GA_DEFAULT_EA_NUM);
 
-	req_capsule_set_size(&req->rq_pill, &RMF_ACL, RCL_SERVER,
-			     maxdata);
+	req_capsule_set_size(&req->rq_pill, &RMF_ACL, RCL_SERVER, 0);
 
 	ptlrpc_request_set_replen(req);