From 45c6577dd82a12d45e2dd74f41321b7c4e253fb4 Mon Sep 17 00:00:00 2001
From: Niu Yawei
Date: Mon, 27 May 2013 01:54:33 -0400
Subject: [PATCH] LUDOC-151 quota: wrong least inode qunit

The least inode qunit is 1024 but not 1000.

Signed-off-by: Niu Yawei
Change-Id: Ia54a82c9c0143468a350567af3ccaf0b6a7395a5
Reviewed-on: http://review.whamcloud.com/6455
Tested-by: Hudson
Reviewed-by: Johann Lombardi
Reviewed-by: Richard Henwood
---
 ConfiguringQuotas.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ConfiguringQuotas.xml b/ConfiguringQuotas.xml
index d31f1ea..c40af6b 100644
--- a/ConfiguringQuotas.xml
+++ b/ConfiguringQuotas.xml
@@ -207,7 +207,7 @@ testfs-OST0001_UUID 0 - 16384 - 0 - 0 -
 <indexterm><primary>Quotas</primary><secondary>allocating</secondary></indexterm>Quota Allocation
 In Lustre, quota must be properly allocated or users may experience unnecessary failures. The file system block quota is divided up among the OSTs within the file system. Each OST requests an allocation which is increased up to the quota limit. The quota allocation is then quantized to reduce the number of quota-related request traffic.
 The Lustre quota system distributes quotas from the Quota Master Target (aka QMT). Only one QMT instance is supported for now and only runs on the same node as MDT0000. All OSTs and MDTs set up a Quota Slave Device (aka QSD) which connects to the QMT to allocate/release quota space. The QSD is setup directly from the OSD layer.
-To reduce quota requests, quota space is initially allocated to QSDs in very large chunks. How much unused quota space can be hold by a target is controlled by the qunit size. When quota space for a given ID is close to exhaustion on the QMT, the qunit size is reduced and QSDs are notified of the new qunit size value via a glimpse callback. Slaves are then responsible for releasing quota space above the new qunit value. The qunit size isn't shrunk indefinitely and there is a minimal value of 1MB for blocks and 1,000 for inodes. This means that the quota space rebalancing process will stop when this mininum value is reached. As a result, quota exceeded can be returned while many slaves still have 1MB or 1,000 inodes of spare quota space.
+To reduce quota requests, quota space is initially allocated to QSDs in very large chunks. How much unused quota space can be hold by a target is controlled by the qunit size. When quota space for a given ID is close to exhaustion on the QMT, the qunit size is reduced and QSDs are notified of the new qunit size value via a glimpse callback. Slaves are then responsible for releasing quota space above the new qunit value. The qunit size isn't shrunk indefinitely and there is a minimal value of 1MB for blocks and 1,024 for inodes. This means that the quota space rebalancing process will stop when this mininum value is reached. As a result, quota exceeded can be returned while many slaves still have 1MB or 1,024 inodes of spare quota space.
 If we look at the setquota example again, running this lfs quota command:
 # lfs quota -u bob -v /mnt/testfs
--
1.8.3.1
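
Note: the minimum-qunit behaviour corrected by this patch can be illustrated with the manual's own bob/testfs example. The commands below are a hedged sketch only; the hard inode limit of 2048 is an assumed value chosen for illustration and does not appear in the patched text. With the 1,024-inode minimum qunit, each quota slave may still hold up to 1,024 inodes of unused quota space, so an out-of-quota error can be reported before the full limit is actually consumed:

 # lfs setquota -u bob -I 2048 /mnt/testfs
     (hypothetical hard inode limit of 2048 for user bob, for illustration only)
 # lfs quota -u bob -v /mnt/testfs
     (-v shows the per-target allocations, where the spare qunit space held by each slave is visible)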