From a591002608f4034150997adf079b7d93fa512c93 Mon Sep 17 00:00:00 2001 From: Richard Henwood Date: Mon, 30 Nov 2015 15:28:09 -0600 Subject: [PATCH] LUDOC-313 cleanup: Remove ambiguity in quota documentation. Explain how to detect which version is running. Clarify which server type you need to execute which commands on. Clarify which commands need to be executed on a client. Improve readability. Change-Id: I707c5dace26369d161cfd1a08276edf7fbada06b Signed-off-by: Richard Henwood Reviewed-on: http://review.whamcloud.com/17399 Tested-by: Jenkins Reviewed-by: John Fuchs-Chesney --- ConfiguringQuotas.xml | 361 +++++++++++++++++++++++--------------------------- Revision.xml | 14 ++ 2 files changed, 180 insertions(+), 195 deletions(-) diff --git a/ConfiguringQuotas.xml b/ConfiguringQuotas.xml index 6f2a070..0402b02 100644 --- a/ConfiguringQuotas.xml +++ b/ConfiguringQuotas.xml @@ -4,45 +4,6 @@ xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US" xml:id="configuringquotas"> Configuring and Managing Quotas - This chapter describes how to configure quotas and includes the - following sections: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
<indexterm> @@ -66,8 +27,27 @@ xml:id="configuringquotas"> <literal>lctl</literal> commands (post-mount).</para> </listitem> <listitem> - <para>Quotas are distributed (as the Lustre file system is a - distributed file system), which has several ramifications.</para> + <para>The quota feature in Lustre software is distributed + throughout the system (as the Lustre file system is a distributed file + system). Because of this, quota setup and behavior on Lustre are + different from local disk quotas in the following ways:</para> + <itemizedlist> + <listitem> + <para>No single point of administration: some commands must be + executed on the MGS, other commands on the MDSs and OSSs, and still + other commands on the client.</para> + </listitem> + <listitem> + <para>Granularity: a local quota is typically specified at + kilobyte resolution; Lustre uses one megabyte as the smallest quota + resolution.</para> + </listitem> + <listitem> + <para>Accuracy: quota information is distributed throughout +the file system and can only be accurately calculated with a completely +quiet file system.</para> + </listitem> + </itemizedlist> </listitem> <listitem> <para>Quotas are allocated and consumed in a quantized fashion.</para> @@ -101,18 +81,101 @@ xml:id="configuringquotas"> <primary>Quotas</primary> <secondary>enabling disk</secondary> </indexterm>Enabling Disk Quotas - Prior to Lustre software release 2.4.0, enabling quota involved a - full file system scan via - lfs quotacheck. All file systems formatted with Lustre - software release 2.4.0 or newer no longer require quotacheck to be run - since up-to-date accounting information are now always maintained by the - OSD layer, regardless of the quota enforcement status. + The design of quotas on Lustre has management and enforcement + separated from resource usage and accounting. Lustre software is + responsible for management and enforcement. The back-end file + system is responsible for resource usage and accounting. 
Because of + this, it is necessary to begin by enabling quotas on the + back-end disk system. Because quota setup is dependent on the Lustre + software version in use, you may first need to run + lfs get_param version to identify which version + you are currently using. + +
Enabling Disk Quotas (Lustre Software Prior to Release 2.4) + + + For Lustre software releases older than release 2.4, + lfs quotacheck must first be run from a client node to + create quota files on the Lustre targets (i.e. the MDT and OSTs). + lfs quotacheck requires the file system to be quiescent + (i.e. no modifying operations like write, truncate, create or delete + should run concurrently). Failure to follow this caution may result in + inaccurate user/group disk usage. Operations that do not change Lustre + files (such as read or mount) are okay to run. + lfs quotacheck performs a scan on all the Lustre + targets to calculate the block/inode usage for each user/group. If the + Lustre file system has many files, + quotacheck may take a long time to complete. Several + options can be passed to + lfs quotacheck: + +# lfs quotacheck -ug /mnt/testfs + + + + + u -- checks the user disk quota information + + + + g -- checks the group disk quota information + + + By default, quota is turned on after + quotacheck completes. However, this setting is not + persistent and quota will have to be enabled again (via + lfs quotaon) if one of the Lustre targets is + restarted. + lfs quotaoff is used to turn off quota. + To enable quota permanently with a Lustre software release older + than release 2.4, the + quota_type parameter must be used. This requires + setting + mdd.quota_type and + ost.quota_type, respectively, on the MDT and OSTs. + quota_type can be set to the string + u (user), + g (group) or + ug for both users and groups. This parameter can be + specified at + mkfs time ( + mkfs.lustre --param mdd.quota_type=ug) or with + tunefs.lustre. As an example: + +tunefs.lustre --param ost.quota_type=ug $ost_dev + + When using + mkfs.lustre --param mdd.quota_type=ug or + tunefs.lustre --param ost.quota_type=ug, be sure to + run the command on all OSTs and the MDT. Otherwise, abnormal results may + occur. 
+ + + In Lustre software releases before 2.4, when new OSTs are + added to the file system, quotas are not automatically propagated to + the new OSTs. As a workaround, clear and then reset quotas for each + user or group using the + lfs setquota command. In the example below, quotas + are cleared and reset for user + bob on file system + testfs: + +$ lfs setquota -u bob -b 0 -B 0 -i 0 -I 0 /mnt/testfs +$ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs + + +
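The clear-and-reset workaround above can be confirmed from a client with lfs quota, which is documented later in this chapter. A minimal sketch, reusing user bob, mount point /mnt/testfs, and the illustrative limit values from the example (this must be run on a node with the Lustre file system mounted):

```shell
# Clear any existing limits for user bob (a value of 0 means "no limit")
lfs setquota -u bob -b 0 -B 0 -i 0 -I 0 /mnt/testfs

# Re-apply the desired limits so they propagate to every OST,
# including OSTs added after the limits were first set
lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs

# Confirm the limits now reported for user bob
lfs quota -u bob /mnt/testfs
```
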
Enabling Disk Quotas (Lustre Software Release 2.4 and later) - Although quota enforcement is managed by the Lustre software, each - OSD implementation relies on the backend file system to maintain - per-user/group block and inode usage: + Quota setup is orchestrated by the MGS and all setup + commands in this section must be run on the MGS. Once set up, + verification of the quota state must be performed on the MDT. Although + quota enforcement is managed by the Lustre software, each OSD + implementation relies on the back-end file system to maintain + per-user/group block and inode usage. Hence, differences exist + when setting up quotas with ldiskfs or ZFS back-ends: For ldiskfs backends, @@ -144,20 +207,20 @@ xml:id="configuringquotas"> - As a result, - lfs quotacheck is now deprecated and not required any - more when running Lustre software release 2.4 on the servers. + Lustre file systems formatted with a Lustre release prior to 2.4.0 - can be still safely upgraded to release 2.4.0, but won't have functional - space usage report until + can still be safely upgraded to release 2.4.0, but will not have + a functional space usage report until tunefs.lustre --quota is run against all targets. This command sets the QUOTA feature flag in the superblock and runs e2fsck (as a result, the target must be offline) to build the per-UID/GID disk usage - database. + database. See for further + important considerations. + - Lustre software release 2.4 and beyond requires a version of + Lustre software release 2.4 and later requires a version of e2fsprogs that supports quota (i.e. newer or equal to 1.42.3.wc1) to be - installed on the server nodes using ldiskfs backend (e2fsprogs isn't + installed on the server nodes using ldiskfs backend (e2fsprogs is not + needed with ZFS backend). 
In general, we recommend using the latest e2fsprogs version available on @@ -180,68 +243,67 @@ xml:id="configuringquotas"> lctl conf_param on the MGS via the following syntax: -lctl conf_param -fsname.quota. -ost|mdt= -u|g|ug|none +lctl conf_param fsname.quota.ost|mdt=u|g|ug|none - ost-- to configure block quota managed by + ost -- to configure block quota managed by OSTs - mdt-- to configure inode quota managed by + mdt -- to configure inode quota managed by MDTs - u-- to enable quota enforcement for users + u -- to enable quota enforcement for users only - g-- to enable quota enforcement for groups + g -- to enable quota enforcement for groups only - ug-- to enable quota enforcement for both users + ug -- to enable quota enforcement for both users and groups - none-- to disable quota enforcement for both users + none -- to disable quota enforcement for both users and groups Examples: To turn on user and group quotas for block only on file system - testfs1, run: - -$ lctl conf_param testfs1.quota.ost=ug + testfs1, on the MGS run: + $ lctl conf_param testfs1.quota.ost=ug To turn on group quotas for inodes on file system - testfs2, run: - -$ lctl conf_param testfs2.quota.mdt=g + testfs2, on the MGS run: + $ lctl conf_param testfs2.quota.mdt=g To turn off user and group quotas for both inode and block on file system - testfs3, run: - -$ lctl conf_param testfs3.quota.ost=none + testfs3, on the MGS run: + $ lctl conf_param testfs3.quota.ost=none - -$ lctl conf_param testfs3.quota.mdt=none + $ lctl conf_param testfs3.quota.mdt=none - Once the quota parameter set on the MGS, all targets which are part - of the file system will be notified of the new quota settings and - enable/disable quota enforcement as needed. The per-target enforcement - status can still be verified by running the following command on the - Lustre servers: 
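Putting the steps of this section together, a typical enablement sequence for Lustre software release 2.4 and later might look like the following sketch; the file system name testfs and the target device path are illustrative assumptions:

```shell
# On each ldiskfs server target (run once per target, while the target
# is offline): set the QUOTA feature flag and build the per-UID/GID
# usage database; this runs e2fsck as a side effect.
tunefs.lustre --quota /dev/ldiskfs_target   # hypothetical device path

# On the MGS: enforce user and group block quota on the OSTs,
# and user and group inode quota on the MDTs.
lctl conf_param testfs.quota.ost=ug
lctl conf_param testfs.quota.mdt=ug
```
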
+ + <indexterm> + <primary>Quotas</primary> + <secondary>verifying</secondary> + </indexterm>Quota Verification + Once the quota parameters have been configured, all targets + which are part of the file system will be automatically notified of the + new quota settings and will enable or disable quota enforcement as + needed. The per-target enforcement status can still be verified by + running the following command on the MDS(s): $ lctl get_param osd-*.*.quota_slave.info osd-zfs.testfs-MDT0000.quota_slave.info= @@ -253,88 +315,7 @@ conn to master: setup user uptodate: glb[1],slv[1],reint[0] group uptodate: glb[1],slv[1],reint[0] - - Lustre software release 2.4 comes with a new quota protocol and a - new on-disk format, be sure to check the Interoperability section below - (see - .) when migrating to release - 2.4 -
-
- Enabling Disk Quotas (Lustre Releases Previous to Release 2.4 - ) - - - - - In Lustre software releases previous to release 2.4, when new OSTs are - added to the file system, quotas are not automatically propagated to - the new OSTs. As a workaround, clear and then reset quotas for each - user or group using the - lfs setquota command. In the example below, quotas - are cleared and reset for user - bob on file system - testfs: - -$ lfs setquota -u bob -b 0 -B 0 -i 0 -I 0 /mnt/testfs -$ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs - - For Lustre software releases older than release 2.4, - lfs quotacheck must be first run from a client node to - create quota files on the Lustre targets (i.e. the MDT and OSTs). - lfs quotacheck requires the file system to be quiescent - (i.e. no modifying operations like write, truncate, create or delete - should run concurrently). Failure to follow this caution may result in - inaccurate user/group disk usage. Operations that do not change Lustre - files (such as read or mount) are okay to run. - lfs quotacheck performs a scan on all the Lustre - targets to calculates the block/inode usage for each user/group. If the - Lustre file system has many files, - quotacheck may take a long time to complete. Several - options can be passed to - lfs quotacheck: - -# lfs quotacheck -ug /mnt/testfs - - - - - u-- checks the user disk quota information - - - - g-- checks the group disk quota information - - - By default, quota is turned on after - quotacheck completes. However, this setting isn't - persistent and quota will have to be enabled again (via - lfs quotaon) if one of the Lustre targets is - restarted. - lfs quotaoff is used to turn off quota. - To enable quota permanently with a Lustre software release older - than release 2.4, the - quota_type parameter must be used. This requires - setting - mdd.quota_type and - ost.quota_type, respectively, on the MDT and OSTs. 
- quota_type can be set to the string - u (user), - g (group) or - ug for both users and groups. This parameter can be - specified at - mkfs time ( - mkfs.lustre --param mdd.quota_type=ug) or with - tunefs.lustre. As an example: - -tunefs.lustre --param ost.quota_type=ug $ost_dev - - When using - mkfs.lustre --param mdd.quota_type=ug or - tunefs.lustre --param ost.quota_type=ug, be sure to - run the command on all OSTs and the MDT. Otherwise, abnormal results may - occur. +
@@ -343,9 +324,10 @@ tunefs.lustre --param ost.quota_type=ug $ost_dev Quotas creating Quota Administration - Once the file system is up and running, quota limits on blocks and - files can be set for both user and group. This is controlled via three - quota parameters: + Once the file system is up and running, quota limits on blocks + and inodes can be set for both user and group. This is + controlled entirely from a client via three quota + parameters: Grace period-- The period of time (in seconds) within which users are allowed to exceed their soft limit. There @@ -367,7 +349,7 @@ tunefs.lustre --param ost.quota_type=ug $ost_dev The grace period applies to all users. The user block soft limit is for all users who are using a blocks quota. - Soft limit-- The grace timer is started + Soft limit -- The grace timer is started once the soft limit is exceeded. At this point, the user/group can still allocate block/inode. When the grace time expires and if the user is still above the soft limit, the soft limit becomes a hard limit and the @@ -376,7 +358,7 @@ tunefs.lustre --param ost.quota_type=ug $ost_dev smaller than the hard limit. If the soft limit is not needed, it should be set to zero (0). - Hard limit-- Block or inode allocation + Hard limit -- Block or inode allocation will fail with EDQUOT(i.e. quota exceeded) when the hard limit is reached. The hard limit is the absolute limit. When a grace period is set, @@ -385,38 +367,27 @@ tunefs.lustre --param ost.quota_type=ug $ost_dev Due to the distributed nature of a Lustre file system and the need to maintain performance under load, those quota parameters may not be 100% accurate. 
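To make the three parameters concrete, the sketch below first sets a one-week grace period and then soft and hard limits for a hypothetical user bob. The --block-grace and --inode-grace option spellings are assumptions that may vary slightly between releases (check lfs help setquota); block values are in kilobytes:

```shell
# Grace periods are global per quota type: 604800 seconds = 1 week.
# Blocks and inodes each have their own grace timer.
lfs setquota -t -u --block-grace 604800 --inode-grace 604800 /mnt/testfs

# Soft limits start the grace timer; hard limits fail allocations
# with EDQUOT: 300 MB soft / 400 MB hard, 10000/11000 inodes.
lfs setquota -u bob -b 307200 -B 409600 -i 10000 -I 11000 /mnt/testfs
```
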
The quota settings can be manipulated via the - lfs command which includes several options to work with - quotas: + lfs command, executed on a client, which includes + several options to work with quotas: - quota-- displays general quota information (disk + quota -- displays general quota information (disk usage and limits) - setquota-- specifies quota limits and tunes the + setquota -- specifies quota limits and tunes the grace period. By default, the grace period is one week. Usage: -lfs quota [-q] [-v] [-h] [-o obd_uuid] [-u|-g -uname|uid|gname|gid] -/mount_point -lfs quota -t --u|-g -/mount_point -lfs setquota --u|--user|-g|--group -username|groupname [-b -block-softlimit] \ - [-B -block_hardlimit] [-i -inode_softlimit] \ - [-I -inode_hardlimit] -/mount_point +lfs quota [-q] [-v] [-h] [-o obd_uuid] [-u|-g uname|uid|gname|gid] /mount_point +lfs quota -t -u|-g /mount_point +lfs setquota -u|--user|-g|--group username|groupname [-b block-softlimit] \ + [-B block_hardlimit] [-i inode_softlimit] \ + [-I inode_hardlimit] /mount_point To display general quota information (disk usage and limits) for the user running the command and his primary group, run: @@ -583,10 +554,10 @@ $ cp: writing `/mnt/testfs/foo`: Disk quota exceeded. Quotas Interoperability - Interoperability + Quotas and Version Interoperability The new quota protocol introduced in Lustre software release 2.4.0 - isn't compatiblewith the old one. As a - consequence, + is not compatible with previous + versions. As a consequence, all Lustre servers must be upgraded to release 2.4.0 for quota to be functional. Quota limits set on the Lustre file system prior to the upgrade will be automatically migrated to the new quota @@ -606,7 +577,7 @@ $ cp: writing `/mnt/testfs/foo`: Disk quota exceeded. 
version which are compatible with release 2.4: - Release 2.3-based clients and beyond + Release 2.3-based clients and later Release 1.8 clients newer or equal to release 1.8.9-wc1 diff --git a/Revision.xml b/Revision.xml index 59a2c6d..6d79255 100644 --- a/Revision.xml +++ b/Revision.xml @@ -19,6 +19,20 @@ example: Lustre software release version 2.4 includes support for multiple metadata servers. + which version? + version + which version of Lustre am I running? + + The current version of Lustre + that is in use on the client can be found using the command + lfs get_param version, for example: + $ lfs get_param version +version= +lustre: 2.7.59 +kernel: patchless_client +build: v2_7_59_0-g703195a-CHANGED-3.10.0.lustreopa + + Only the latest revision of this document is made readily available because changes are continually arriving. The current and latest revision of this manual is available from links maintained at: