Before the patch, qti_lqes_min_qunit used an int to store the qunit
value, while lqe_qunit is of type __u64. lqe_qunit > 2G caused an
overflow of the local int variable. For example, when the block hard
limit was set to 500TB (i.e. lqe_qunit was about 64TB in a system
with 2 OSTs), qti_lqes_min_qunit returned 0 instead of 64TB in
qmt_lvbo_fill. Thus the new qunit was not set on the OSTs
(qsd_set_qunit wasn't called). Without the qunit, the OST started
sending a release request after each acquire. For example, writing
10MB to the OST caused 2 acquire and 2 release requests (as qunit
was not set on the OST). With the fix, i.e. in the normal case, the
OST needs just one acquire request. The issue caused a buffered
write performance drop of up to 15%-20% compared with a baseline
without PQ patches.
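For illustration only (not part of the patch), a minimal standalone
C sketch of the truncation: 64TB is 1ULL << 46, an exact multiple of
2^32, so on typical two's complement systems assigning it to a 32-bit
int yields 0, which is what the old int-typed local produced:

	#include <stdio.h>

	int main(void)
	{
		unsigned long long lqe_qunit = 1ULL << 46; /* 64TB */
		int old_local = lqe_qunit;        /* old code: int, truncates to 0 */
		unsigned long long new_local = lqe_qunit; /* fixed code: __u64-sized */

		printf("int: %d, __u64: %llu\n", old_local, new_local);
		return 0;
	}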
Note that the issue exists only when the hard limit is set to a high
value (>100GB). The exact hard limit value depends on the number of
OSTs in the system and on the amount of used space, but the issue
does not occur on a clean system with 2 OSTs and a hard block limit
of 100G (this case was checked).
Remove qmt_pool_hash - it has not been used anywhere since
"LU-11023 quota: remove quota pool ID".
Lustre-change: https://review.whamcloud.com/45133
Lustre-commit: 7b8c6cd976c584b4e965b24bf4369ded86cda811
HPE-bug-id: LUS-10250
Change-Id: I2c4ce38f5b9395ed1f4868d4c8efc00751116b15
Signed-off-by: Sergey Cheremencev <sergey.cheremencev@hpe.com>
Reviewed-by: Petros Koutoupis <petros.koutoupis@hpe.com>
Reviewed-by: Alexander Boyko <alexander.boyko@hpe.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-on: https://review.whamcloud.com/46791
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
qti->qti_lqes_cnt = 0;
}
-int qti_lqes_min_qunit(const struct lu_env *env)
+__u64 qti_lqes_min_qunit(const struct lu_env *env)
{
- int i, min, qunit;
+ __u64 min, qunit;
+ int i;
for (i = 1, min = qti_lqe_qunit(env, 0); i < qti_lqes_cnt(env); i++) {
qunit = qti_lqe_qunit(env, i);
- if (qunit < min)
+ /* if qunit is 0, lqe is not enforced and we can ignore it */
+ if (qunit && qunit < min)
min = qunit;
}
/* pointer to ldlm namespace to be used for quota locks */
struct ldlm_namespace *qmt_ns;
- /* Hash table containing a qmt_pool_info structure for each pool
- * this quota master is in charge of. We only have 2 pools in this
- * hash for the time being:
- * - one for quota management on the default metadata pool
- * - one for quota managment on the default data pool
- *
- * Once we support quota on non-default pools, then more pools will
- * be added to this hash table and pool master setup would have to be
- * handled via configuration logs */
- struct cfs_hash *qmt_pool_hash;
-
/* List of pools managed by this master target */
struct list_head qmt_pool_list;
/* rw semaphore to protect pool list */
int qti_lqes_add(const struct lu_env *env, struct lquota_entry *lqe);
void qti_lqes_del(const struct lu_env *env, int index);
void qti_lqes_fini(const struct lu_env *env);
-int qti_lqes_min_qunit(const struct lu_env *env);
+__u64 qti_lqes_min_qunit(const struct lu_env *env);
int qti_lqes_edquot(const struct lu_env *env);
int qti_lqes_restore_init(const struct lu_env *env);
void qti_lqes_restore_fini(const struct lu_env *env);
{
struct seq_file *m = file->private_data;
struct qmt_pool_info *pool = m->private;
- long long least_qunit;
- int qunit, rc;
+ long long least_qunit, qunit;
+ int rc;
LASSERT(pool != NULL);