From f925856744c4d9cc7467eee2384595b725f231c9 Mon Sep 17 00:00:00 2001
From: Wang Shilong
Date: Mon, 8 May 2017 13:46:41 +0800
Subject: [PATCH] LUDOC-202 quota: add project quota support

LU-4017 implements project quota for Lustre 2.10; update the Lustre manual to reflect the changes.

Signed-off-by: Wang Shilong
Change-Id: Ica6f907da2d8c5dbf83a13d77090dd2f525b5e40
Reviewed-on: https://review.whamcloud.com/26979
Reviewed-by: Andreas Dilger
Tested-by: Jenkins
Reviewed-by: Fan Yong
---
 ConfiguringQuotas.xml | 147 +++++++++++++++++++++++++++++++++-----------------
 UpgradingLustre.xml | 12 +++++
 UserUtilities.xml | 47 ++++++++--------
 3 files changed, 136 insertions(+), 70 deletions(-)

diff --git a/ConfiguringQuotas.xml b/ConfiguringQuotas.xml
index c68d719..7e6838b 100644
--- a/ConfiguringQuotas.xml
+++ b/ConfiguringQuotas.xml
@@ -10,14 +10,14 @@ xml:id="configuringquotas">
 Quotas configuring Working with Quotas
- Quotas allow a system administrator to limit the amount of disk space a user or group can use. Quotas are set by root, and can be specified for individual users and/or groups. Before a file is written to a partition where quotas are set, the quota of the creator's group is checked. If a quota exists, then the file size counts towards the group's quota. If no quota exists, then the owner's user quota is checked before the file is written. Similarly, inode usage for specific functions can be controlled if a user over-uses the allocated space.
+ Quotas allow a system administrator to limit the amount of disk space a user, group, or project can use. Quotas are set by root, and can be specified for individual users, groups, and/or projects. Before a file is written to a partition where quotas are set, the quota of the creator's group is checked. If a quota exists, then the file size counts towards the group's quota. If no quota exists, then the owner's user quota is checked before the file is written.
Similarly, inode usage for specific functions can be controlled if a user over-uses the allocated space.
 Lustre quota enforcement differs from standard Linux quota enforcement in several ways:
@@ -170,25 +170,31 @@ $ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
 Enabling Disk Quotas (Lustre Software Release 2.4 and later)
 Quota setup is orchestrated by the MGS and all setup
- commands in this section must be run on the MGS. Once setup, verification of the quota state must be performed on the MDT. Although quota enforcement is managed by the Lustre software, each OSD implementation relies on the back-end file system to maintain per-user/group block and inode usage. Hence, differences exist when setting up quotas with ldiskfs or ZFS back-ends:
+ commands in this section must be run on the MGS; project quotas require Lustre release 2.10 and later. Once set up, verification of the quota state must be performed on the MDT. Although quota enforcement is managed by the Lustre software, each OSD implementation relies on the back-end file system to maintain per-user/group/project block and inode usage. Hence, differences exist when setting up quotas with ldiskfs or ZFS back-ends:
 For ldiskfs backends, mkfs.lustre now creates empty quota files and enables the QUOTA feature flag in the superblock which turns quota accounting on at mount time automatically. e2fsck was also modified
- to fix the quota files when the QUOTA feature flag is present.
+ to fix the quota files when the QUOTA feature flag is present. The project quota feature is disabled by default, and tune2fs needs to be run on each target to enable it manually.
- For ZFS backend, accounting ZAPs are created and maintained by the ZFS file system itself. While ZFS tracks per-user and group block usage, it does not handle inode accounting. The ZFS OSD implements its own support for inode tracking.
Two options are available:
+ For a ZFS backend, the project quota feature is not yet supported. Accounting ZAPs are created and maintained by the ZFS file system itself. While ZFS tracks per-user and group block usage, it does not handle inode accounting for ZFS versions prior to zfs-0.7.0. The ZFS OSD implements its own support for inode tracking. Two options are available:
 The ZFS OSD can estimate the number of inodes in-use based
@@ -196,7 +202,8 @@ $ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
 can be enabled by running the following command on the server running the target:
 lctl set_param
- osd-zfs.${FSNAME}-${TARGETNAME}.quota_iused_estimate=1.
+ osd-zfs.${FSNAME}-${TARGETNAME}.quota_iused_estimate=1.
+
 Similarly to block accounting, dedicated ZAPs are also
@@ -214,16 +221,24 @@ $ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
 tunefs.lustre --quota is run against all targets. This command sets the QUOTA feature flag in the superblock and runs e2fsck (as a result, the target must be offline) to build the per-UID/GID disk usage
- database. See for further important considerations.
+ database.
+ Lustre filesystems formatted with a Lustre release prior to 2.10 can still be safely upgraded to release 2.10, but project quota usage reporting will not be functional until tune2fs -O project is run against all ldiskfs backend targets. This command sets the PROJECT feature flag in the superblock and runs e2fsck (as a result, the target must be offline). See for further important considerations.
 Lustre software release 2.4 and later requires a version of
- e2fsprogs that supports quota (i.e. newer or equal to 1.42.3.wc1) to be installed on the server nodes using ldiskfs backend (e2fsprogs is not needed with ZFS backend). In general, we recommend to use the latest e2fsprogs version available on
-
+ Lustre software release 2.4 and later requires a version of e2fsprogs that supports quota (i.e. 
1.42.13.wc5 or newer; 1.42.13.wc6 or newer is needed for project quota support) to be installed on the server nodes using ldiskfs backend (e2fsprogs is not needed with ZFS backend). In general, we recommend using the latest e2fsprogs version available on
+ http://downloads.hpdd.intel.com/public/e2fsprogs/.
 The ldiskfs OSD relies on the standard Linux quota to maintain accounting information on disk. As a consequence, the Linux kernel
@@ -243,7 +258,7 @@ $ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
 lctl conf_param on the MGS via the following syntax:
-lctl conf_param fsname.quota.ost|mdt=u|g|ug|none
+lctl conf_param fsname.quota.ost|mdt=u|g|p|ugp|none
@@ -268,26 +283,32 @@ lctl conf_param fsname.quota.ost|mdt
- ug -- to enable quota enforcement for both users and groups
+ p -- to enable quota enforcement for projects only
- none -- to disable quota enforcement for both users and groups
+ ugp -- to enable quota enforcement for all users, groups, and projects
+
+ none -- to disable quota enforcement for all users, groups, and projects
 Examples:
- To turn on user and group quotas for block only on file system
+ To turn on user, group, and project quotas for block only on file system
 testfs1, on the MGS run:
- $ lctl conf_param testfs1.quota.ost=ug
+ $ lctl conf_param testfs1.quota.ost=ugp
 To turn on group quotas for inodes on file system testfs2, on the MGS run:
 $ lctl conf_param testfs2.quota.mdt=g
- To turn off user and group quotas for both inode and block on file system
+ To turn off user, group, and project quotas for both inode and block on file system
 testfs3, on the MGS run:
 $ lctl conf_param testfs3.quota.ost=none
@@ -325,13 +346,13 @@ group uptodate: glb[1],slv[1],reint[0]
 creating Quota Administration
 Once the file system is up and running, quota limits on blocks
- and inodes can be set for both user and group.
This is controlled entirely from a client via three quota
+ and inodes can be set for user, group, and project. This is controlled entirely from a client via three quota
 parameters:
 Grace period -- The period of time (in seconds) within which users are allowed to exceed their soft limit. There
- are four types of grace periods:
+ are six types of grace periods:
 user block soft limit
 user inode soft limit
 group block soft limit
 group inode soft limit
+ project block soft limit
+ project inode soft limit
 The grace period applies to all users. The user block soft limit is for all users who are using a block quota.
 Soft limit -- The grace timer is started
- once the soft limit is exceeded. At this point, the user/group can still allocate block/inode. When the grace time expires and if the user is still above the soft limit, the soft limit becomes a hard limit and the user/group can't allocate any new block/inode any more. The user/group should then delete files to be under the soft limit. The soft limit MUST be smaller than the hard limit. If the soft limit is not needed, it should be set to zero (0).
+ once the soft limit is exceeded. At this point, the user/group/project can still allocate blocks/inodes. When the grace time expires and if the user is still above the soft limit, the soft limit becomes a hard limit and the user/group/project cannot allocate any new blocks/inodes. The user/group/project should then delete files to get back under the soft limit. The soft limit MUST be smaller than the hard limit. If the soft limit is not needed, it should be set to zero (0).
Hard limit -- Block or inode allocation will fail with
@@ -383,9 +410,9 @@ group uptodate: glb[1],slv[1],reint[0]
 Usage:
-lfs quota [-q] [-v] [-h] [-o obd_uuid] [-u|-g uname|uid|gname|gid] /mount_point
-lfs quota -t -u|-g /mount_point
-lfs setquota -u|--user|-g|--group username|groupname [-b block-softlimit] \
+lfs quota [-q] [-v] [-h] [-o obd_uuid] [-u|-g|-p uname|uid|gname|gid|projid] /mount_point
+lfs quota -t {-u|-g|-p} /mount_point
+lfs setquota {-u|--user|-g|--group|-p|--project} username|groupname|projid [-b block-softlimit] \
 [-B block_hardlimit] [-i inode_softlimit] \
 [-I inode_hardlimit] /mount_point
@@ -405,11 +432,28 @@ $ lfs quota -u bob /mnt/testfs
 $ lfs quota -u bob -v /mnt/testfs
+ To display general quota information for a specific project ("1" in this example), run:
+
+$ lfs quota -p 1 /mnt/testfs
+
 To display general quota information for a specific group ("eng" in this example), run:
 $ lfs quota -g eng /mnt/testfs
+ To limit quota usage for a specific project ID on a specific directory ("/mnt/testfs/dir" in this example), run:
+
+$ chattr +P /mnt/testfs/dir
+$ chattr -p 1 /mnt/testfs/dir
+$ lfs setquota -p 1 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
+
+ Note that for lfs quota -p to report the space/inode usage under a directory correctly (much faster than du), the administrator needs to use a different project ID for each directory.
+
 To display block and inode grace times for user quotas, run:
 $ lfs quota -t -u /mnt/testfs
@@ -586,6 +630,11 @@ $ cp: writing `/mnt/testfs/foo`: Disk quota exceeded.
 Release 2.1 clients newer or equal to release 2.1.4
+ To use the project quota functionality introduced in Lustre 2.10, all Lustre servers and clients must be upgraded to Lustre release 2.10 or later. Otherwise, project quota will be inaccessible on clients and will not be accounted for on OSTs.
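The project quota steps that this patch documents can be sketched end to end. This is a hedged sketch, not part of the patch: the filesystem name testfs, the target device /dev/sda1, the directory path, and the project ID 1 are illustrative, and the commands require a running Lustre 2.10+ system (tune2fs on each offline ldiskfs target, lctl conf_param on the MGS, the rest on a client).

```shell
# Hedged sketch of the project-quota workflow described above.
# All names and values below are illustrative assumptions.

# 1. Enable the PROJECT feature on each ldiskfs target (target must be offline).
tune2fs -O project /dev/sda1

# 2. On the MGS, enable quota enforcement for users, groups, and projects.
lctl conf_param testfs.quota.ost=ugp

# 3. Mark a directory so new files inherit its project ID, then assign ID 1.
chattr +P /mnt/testfs/dir
chattr -p 1 /mnt/testfs/dir

# 4. Set block/inode limits for project 1 and verify the usage report.
lfs setquota -p 1 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
lfs quota -p 1 /mnt/testfs
```

All individual commands are taken from the patch itself; only their ordering into a single walkthrough is editorial.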
diff --git a/UpgradingLustre.xml b/UpgradingLustre.xml
index 4c7b524..826217b 100644
--- a/UpgradingLustre.xml
+++ b/UpgradingLustre.xml
@@ -319,6 +319,18 @@ xml:id="upgradinglustre"> OSTs:
 <screen>tunefs.lustre --quota</screen></para>
 </listitem>
+ <listitem>
+ <para>(Optional) If you are upgrading from a Lustre software release
+ before 2.10, to enable the project quota feature enter the following on
+ every ldiskfs backend target:
+ <screen>tune2fs -O project /dev/<replaceable>dev</replaceable></screen>
+ </para>
+ <note><para>Enabling the <literal>project</literal> feature will prevent
+ the filesystem from being used by older versions of ldiskfs, so it
+ should only be enabled if the project quota feature is required and/or
+ after it is known that the upgraded release does not need to be
+ downgraded.</para></note>
+ </listitem>
 <listitem>
 <para>When setting up the file system, enter:
 <screen>conf_param $FSNAME.quota.mdt=$QUOTA_TYPE
diff --git a/UserUtilities.xml b/UserUtilities.xml
index a596173..a53139a 100644
--- a/UserUtilities.xml
+++ b/UserUtilities.xml
@@ -46,27 +46,27 @@ lfs setstripe -d <replaceable>dir</replaceable>
 lfs osts [path]
 lfs pool_list <replaceable>filesystem</replaceable>[.<replaceable>pool</replaceable>]| <replaceable>pathname</replaceable>
 lfs quota [-q] [-v] [-h] [-o <replaceable>obd_uuid</replaceable>|-I <replaceable>ost_idx</replaceable>|-i <replaceable>mdt_idx</replaceable>]
- [-u <replaceable>username|uid|-g</replaceable> <replaceable>group|gid</replaceable>] <replaceable>/mount_point</replaceable>
-lfs quota -t -u|-g <replaceable>/mount_point</replaceable>
+ [-u <replaceable>username|uid|-g</replaceable> <replaceable>group|gid</replaceable>|-p <replaceable>projid</replaceable>] <replaceable>/mount_point</replaceable>
+lfs quota -t -u|-g|-p <replaceable>/mount_point</replaceable>
 lfs quotacheck [-ug] <replaceable>/mount_point</replaceable>
 lfs quotachown [-i] <replaceable>/mount_point</replaceable>
 lfs quotainv [-ug] [-f]
<replaceable>/mount_point</replaceable> lfs quotaon [-ugf] <replaceable>/mount_point</replaceable> lfs quotaoff [-ug] <replaceable>/mount_point</replaceable> -lfs setquota {-u|--user|-g|--group} <replaceable>uname|uid|gname|gid</replaceable> +lfs setquota {-u|--user|-g|--group|-p|--project} <replaceable>uname|uid|gname|gid|projid</replaceable> [--block-softlimit <replaceable>block_softlimit</replaceable>] [--block-hardlimit <replaceable>block_hardlimit</replaceable>] [--inode-softlimit <replaceable>inode_softlimit</replaceable>] [--inode-hardlimit <replaceable>inode_hardlimit</replaceable>] <replaceable>/mount_point</replaceable> -lfs setquota -u|--user|-g|--group <replaceable>uname|uid|gname|gid</replaceable> +lfs setquota -u|--user|-g|--group|-p|--project <replaceable>uname|uid|gname|gid|projid</replaceable> [-b <replaceable>block_softlimit</replaceable>] [-B <replaceable>block_hardlimit</replaceable>] [-i <replaceable>inode-softlimit</replaceable>] [-I <replaceable>inode_hardlimit</replaceable>] <replaceable>/mount_point</replaceable> -lfs setquota -t -u|-g [--block-grace <replaceable>block_grace</replaceable>] +lfs setquota -t -u|-g|-p [--block-grace <replaceable>block_grace</replaceable>] [--inode-grace <replaceable>inode_grace</replaceable>] <replaceable>/mount_point</replaceable> -lfs setquota -t -u|-g [-b <replaceable>block_grace</replaceable>] [-i <replaceable>inode_grace</replaceable>] +lfs setquota -t -u|-g|-p [-b <replaceable>block_grace</replaceable>] [-i <replaceable>inode_grace</replaceable>] <replaceable>/mount_point</replaceable> lfs help </screen> @@ -732,8 +732,8 @@ lfs help <literal>quota [-q] [-v] [-o <replaceable>obd_uuid</replaceable>|-i <replaceable>mdt_idx</replaceable>|-I - <replaceable>ost_idx</replaceable>] [-u|-g - <replaceable>uname|uid|gname|gid]</replaceable> + <replaceable>ost_idx</replaceable>] [-u|-g|-p + <replaceable>uname|uid|gname|gid|projid]</replaceable> <replaceable>/mount_point</replaceable></literal> </para> <para> </para> 
@@ -741,11 +741,11 @@ lfs help
 <entry> <para>Displays disk usage and limits, either for the full file system or for objects on a specific OBD. A user or group name
- or an ID can be specified. If both user and group are omitted, quotas for the current UID/GID are shown. The <literal>-q</literal> option disables printing of additional descriptions (including column titles). It fills in blank spaces in the
+ or a user, group, or project ID can be specified. If user, group, and project are all omitted, quotas for the current UID/GID are shown. The <literal>-q</literal> option disables printing of additional descriptions (including column titles). It fills in blank spaces in the
 <literal>grace</literal> column with zeros (when there is no grace period set), to ensure that the number of columns is consistent. The
@@ -757,14 +757,15 @@ lfs help
 <entry nameend="c2" namest="c1"> <para> <literal>quota -t
- <replaceable>-u|-g</replaceable>
+ <replaceable>-u|-g|-p</replaceable>
 <replaceable>/mount_point</replaceable></literal> </para> </entry>
 <entry> <para>Displays block and inode grace times for user (<literal>-u</literal>), group (
- <literal>-g</literal>) quotas.</para>
+ <literal>-g</literal>), or project (<literal>-p</literal>) quotas.</para>
 </entry> </row> <row>
@@ -849,9 +850,9 @@ lfs help
 <row> <entry nameend="c2" namest="c1"> <para>
- <literal>setquota -u|-g <replaceable>uname|uid|gname|gid}</replaceable>[--block-softlimit
+ <literal>setquota {-u|-g|-p <replaceable>uname|uid|gname|gid|projid</replaceable>} [--block-softlimit
 <replaceable>block_softlimit</replaceable>] [--block-hardlimit <replaceable>block_hardlimit</replaceable>]
@@ -863,8 +864,8 @@ lfs help
 </para> </entry> <entry>
- <para>Sets file system quotas for users or groups. Limits can be specified with
+ <para>Sets file system quotas for users, groups, or projects.
+ Limits can be specified with
 <literal>--{block|inode}-{softlimit|hardlimit}</literal> or their short equivalents
 <literal>-b</literal>,
@@ -886,7 +887,7 @@ lfs help
 <row> <entry nameend="c2" namest="c1"> <para>
- <literal>setquota -t -u|-g [--block-grace
+ <literal>setquota -t -u|-g|-p [--block-grace
 <replaceable>block_grace</replaceable>] [--inode-grace <replaceable>inode_grace</replaceable>] <replaceable>/mount_point</replaceable></literal>
@@ -986,6 +987,10 @@ $ lfs df --pool
 <screen>
 $ lfs quota -u bob /mnt/lustre
 </screen>
+ <para>List quotas for project ID 1.</para>
+ <screen>
+$ lfs quota -p 1 /mnt/lustre
+</screen>
 <para>Show grace times for user quotas on <literal>/mnt/lustre</literal>.</para>
 <screen>
-- 
1.8.3.1
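The setquota -t synopsis added above also applies to projects. As a hedged editorial sketch (not part of the patch): the mount point and the grace values in seconds below are illustrative, and the commands require a Lustre filesystem with project quota enabled.

```shell
# Set project-quota grace periods; values are illustrative
# (604800 s = one week for blocks, 86400 s = one day for inodes).
lfs setquota -t -p --block-grace 604800 --inode-grace 86400 /mnt/lustre

# Display the project grace times to confirm.
lfs quota -t -p /mnt/lustre
```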