1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="configuringquotas">
<title xml:id="configuringquotas.title">Configuring and Managing
Quotas</title>
7 <section xml:id="quota_configuring">
10 <primary>Quotas</primary>
11 <secondary>configuring</secondary>
12 </indexterm>Working with Quotas</title>
13 <para>Quotas allow a system administrator to limit the amount of disk
14 space a user, group, or project can use. Quotas are set by root, and can
15 be specified for individual users, groups, and/or projects. Before a file
16 is written to a partition where quotas are set, the quota of the creator's
17 group is checked. If a quota exists, then the file size counts towards
18 the group's quota. If no quota exists, then the owner's user quota is
checked before the file is written. Similarly, inode usage can be
controlled by setting inode quotas, preventing a user from consuming more
than the allocated number of inodes.</para>
21 <para>Lustre quota enforcement differs from standard Linux quota
22 enforcement in several ways:</para>
25 <para>Quotas are administered via the
26 <literal>lfs</literal> and
27 <literal>lctl</literal> commands (post-mount).</para>
30 <para>The quota feature in Lustre software is distributed
31 throughout the system (as the Lustre file system is a distributed file
system). Because of this, quota setup and behavior on Lustre are
different from local disk quotas in the following ways:</para>
36 <para>No single point of administration: some commands must be
37 executed on the MGS, other commands on the MDSs and OSSs, and still
38 other commands on the client.</para>
<para>Granularity: a local quota is typically specified with
kilobyte resolution, while Lustre uses one megabyte as the smallest quota
resolution.</para>
46 <para>Accuracy: quota information is distributed throughout
47 the file system and can only be accurately calculated with a completely
quiescent file system.</para>
53 <para>Quotas are allocated and consumed in a quantized fashion.</para>
<para>Clients do not need to set the
<literal>usrquota</literal> or
<literal>grpquota</literal> mount options. As of Lustre software
59 release 2.4, space accounting is always enabled by default and quota
60 enforcement can be enabled/disabled on a per-file system basis with
61 <literal>lctl conf_param</literal>. It is worth noting that both
62 <literal>lfs quotaon</literal> and
<literal>quota_type</literal> are deprecated as of Lustre software
release 2.4.0.</para>
68 <para>Although a quota feature is available in the Lustre software, root
69 quotas are NOT enforced.</para>
71 <literal>lfs setquota -u root</literal> (limits are not enforced)</para>
73 <literal>lfs quota -u root</literal> (usage includes internal Lustre data
that is dynamic in size and does not accurately reflect the block and
inode usage visible at the mount point).</para>
78 <section xml:id="enabling_disk_quotas">
81 <primary>Quotas</primary>
82 <secondary>enabling disk</secondary>
83 </indexterm>Enabling Disk Quotas</title>
84 <para>The design of quotas on Lustre has management and enforcement
85 separated from resource usage and accounting. Lustre software is
86 responsible for management and enforcement. The back-end file
system is responsible for resource usage and accounting. Because of
this, quotas must first be enabled on the back-end file system. Because
quota setup depends on the Lustre
90 software version in use, you may first need to run
91 <literal>lctl get_param version</literal> to identify
92 <xref linkend="whichversion"/> you are currently using.
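<para>For example (the output below is illustrative; its exact format
varies by release):</para>
<screen># lctl get_param version
version=2.10.5
</screen>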
<title>Enabling Disk Quotas (Lustre Software Prior to Release 2.4)</title>
<para>
98 For Lustre software releases older than release 2.4,
<literal>lfs quotacheck</literal> must first be run from a client node to
100 create quota files on the Lustre targets (i.e. the MDT and OSTs).
101 <literal>lfs quotacheck</literal> requires the file system to be quiescent
102 (i.e. no modifying operations like write, truncate, create or delete
103 should run concurrently). Failure to follow this caution may result in
104 inaccurate user/group disk usage. Operations that do not change Lustre
105 files (such as read or mount) are okay to run.
106 <literal>lfs quotacheck</literal> performs a scan on all the Lustre
targets to calculate the block/inode usage for each user/group. If the
108 Lustre file system has many files,
109 <literal>quotacheck</literal> may take a long time to complete. Several
110 options can be passed to
111 <literal>lfs quotacheck</literal>:</para>
113 # lfs quotacheck -ug /mnt/testfs
<literal>u</literal> -- checks the user disk quota information</para>
<literal>g</literal> -- checks the group disk quota information</para>
125 <para>By default, quota is turned on after
<literal>quotacheck</literal> completes. However, this setting is not
persistent and quota will have to be enabled again (via
<literal>lfs quotaon</literal>) if one of the Lustre targets is
restarted.
<literal>lfs quotaoff</literal> is used to turn off quota.</para>
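<para>For example, to re-enable user and group quota enforcement after a
target restart on these older releases, the following could be run from a
client (mount point illustrative):</para>
<screen>$ lfs quotaon -ug /mnt/testfs
</screen>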
131 <para>To enable quota permanently with a Lustre software release older
132 than release 2.4, the
<literal>quota_type</literal> parameter must be used. This requires setting
135 <literal>mdd.quota_type</literal> and
136 <literal>ost.quota_type</literal>, respectively, on the MDT and OSTs.
137 <literal>quota_type</literal> can be set to the string
138 <literal>u</literal> (user),
139 <literal>g</literal> (group) or
<literal>ug</literal> for both users and groups. This parameter can be
specified at
<literal>mkfs</literal> time (
143 <literal>mkfs.lustre --param mdd.quota_type=ug</literal>) or with
144 <literal>tunefs.lustre</literal>. As an example:</para>
146 tunefs.lustre --param ost.quota_type=ug $ost_dev
When using
<literal>mkfs.lustre --param mdd.quota_type=ug</literal> or
<literal>tunefs.lustre --param ost.quota_type=ug</literal>, be sure to
run the command on all OSTs and the MDT. Otherwise, abnormal results may
occur.
155 In Lustre software releases before 2.4, when new OSTs are
156 added to the file system, quotas are not automatically propagated to
157 the new OSTs. As a workaround, clear and then reset quotas for each
158 user or group using the
159 <literal>lfs setquota</literal> command. In the example below, quotas
160 are cleared and reset for user
161 <literal>bob</literal> on file system
162 <literal>testfs</literal>:
164 $ lfs setquota -u bob -b 0 -B 0 -i 0 -I 0 /mnt/testfs
165 $ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
169 <section remap="h3" condition="l24">
<title>Enabling Disk Quotas (Lustre Software Release 2.4 and
Later)</title>
<para>Quota setup is orchestrated by the MGS, and <emphasis>all setup
commands in this section must be run on the MGS; project quotas require
Lustre release 2.10 or later</emphasis>. Once set up, verification of the
175 quota state must be performed on the MDT. Although quota enforcement is
176 managed by the Lustre software, each OSD implementation relies on the
177 back-end file system to maintain per-user/group/project block and inode
178 usage. Hence, differences exist when setting up quotas with ldiskfs or
179 ZFS back-ends:</para>
182 <para>For ldiskfs backends,
183 <literal>mkfs.lustre</literal> now creates empty quota files and
184 enables the QUOTA feature flag in the superblock which turns quota
185 accounting on at mount time automatically. e2fsck was also modified
186 to fix the quota files when the QUOTA feature flag is present. The
project quota feature is disabled by default, and
<literal>tune2fs</literal> needs to be run to enable it on every
target, as shown below.</para>
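<para>A minimal sketch of enabling the feature on a single ldiskfs target
(the device path is illustrative; the target must be offline):</para>
<screen># tune2fs -O project /dev/mdt0_dev
</screen>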
<para>For ZFS backends, <emphasis>the project quota feature is not
193 supported yet.</emphasis> Accounting ZAPs are created and maintained
194 by the ZFS file system itself. While ZFS tracks per-user and group
195 block usage, it does not handle inode accounting for ZFS versions
196 prior to zfs-0.7.0. The ZFS OSD implements its own support for inode
197 tracking. Two options are available:</para>
200 <para>The ZFS OSD can estimate the number of inodes in-use based
201 on the number of blocks used by a given user or group. This mode
202 can be enabled by running the following command on the server
204 <literal>lctl set_param
205 osd-zfs.${FSNAME}-${TARGETNAME}.quota_iused_estimate=1</literal>.
<para>Similarly to block accounting, dedicated ZAPs are also
created by the ZFS OSD to maintain per-user and group inode usage.
211 This is the default mode which corresponds to
212 <literal>quota_iused_estimate</literal> set to 0.</para>
<para>Lustre file systems formatted with a Lustre release prior to 2.4.0
can still be safely upgraded to release 2.4.0, but will not have
functional space usage reporting until
<literal>tunefs.lustre --quota</literal> is run against all targets. This
command sets the QUOTA feature flag in the superblock and runs e2fsck (as
a result, the target must be offline) to build the per-UID/GID disk usage
database.</para>
<para condition="l2A">Lustre filesystems formatted with a Lustre release
prior to 2.10 can still be safely upgraded to release 2.10, but will not
227 have project quota usage reporting functional until
228 <literal>tune2fs -O project</literal> is run against all ldiskfs backend
229 targets. This command sets the PROJECT feature flag in the superblock and
230 runs e2fsck (as a result, the target must be offline). See
231 <xref linkend="quota_interoperability"/> for further important
232 considerations.</para>
<para>Lustre software release 2.4 and later requires a version of
e2fsprogs that supports quota (1.42.13.wc5 or newer; 1.42.13.wc6 or
newer is needed for project quota support) to be installed on the
server nodes using the ldiskfs backend (e2fsprogs is not needed with
the ZFS backend). In general, we recommend using the latest e2fsprogs
version available on
241 <link xl:href="http://downloads.whamcloud.com/e2fsprogs/">
242 http://downloads.whamcloud.com/public/e2fsprogs/</link>.</para>
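<para>One way to confirm the installed version on a server (assuming an
RPM-managed e2fsprogs package):</para>
<screen># rpm -q e2fsprogs
</screen>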
243 <para>The ldiskfs OSD relies on the standard Linux quota to maintain
244 accounting information on disk. As a consequence, the Linux kernel
245 running on the Lustre servers using ldiskfs backend must have
246 <literal>CONFIG_QUOTA</literal>,
247 <literal>CONFIG_QUOTACTL</literal> and
248 <literal>CONFIG_QFMT_V2</literal> enabled.</para>
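<para>A quick way to verify these options, assuming the running kernel's
configuration file is available under /boot:</para>
<screen># grep -E 'CONFIG_QUOTA=|CONFIG_QUOTACTL=|CONFIG_QFMT_V2=' /boot/config-$(uname -r)
</screen>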
<para>As of Lustre software release 2.4.0, quota enforcement is
turned on/off independently of space accounting, which is always enabled.
<literal>lfs quota<replaceable>on|off</replaceable></literal> as well as the per-target
<literal>quota_type</literal> parameter are deprecated in favor of a
single per-file system quota parameter controlling inode/block quota
enforcement. Like all permanent parameters, this quota parameter can be
set with
<literal>lctl conf_param</literal> on the MGS using the following
syntax:</para>
261 lctl conf_param <replaceable>fsname</replaceable>.quota.<replaceable>ost|mdt</replaceable>=<replaceable>u|g|p|ugp|none</replaceable>
<literal>ost</literal> -- to configure block quota managed by OSTs</para>
<literal>mdt</literal> -- to configure inode quota managed by MDTs</para>
<literal>u</literal> -- to enable quota enforcement for users only</para>
<literal>g</literal> -- to enable quota enforcement for groups only</para>
<literal>p</literal> -- to enable quota enforcement for projects only</para>
291 <literal>ugp</literal> -- to enable quota enforcement for all users,
292 groups and projects</para>
296 <literal>none</literal> -- to disable quota enforcement for all users,
297 groups and projects</para>
300 <para>Examples:</para>
<para>To turn on user, group, and project quotas for block only on
file system
<literal>testfs1</literal>, <emphasis>on the MGS</emphasis> run:</para>
304 <screen>$ lctl conf_param testfs1.quota.ost=ugp
306 <para>To turn on group quotas for inodes on file system
307 <literal>testfs2</literal>, on the MGS run:</para>
308 <screen>$ lctl conf_param testfs2.quota.mdt=g
<para>To turn off user, group, and project quotas for both inode and block
on file system
<literal>testfs3</literal>, on the MGS run:</para>
313 <screen>$ lctl conf_param testfs3.quota.ost=none
315 <screen>$ lctl conf_param testfs3.quota.mdt=none
320 <primary>Quotas</primary>
321 <secondary>verifying</secondary>
322 </indexterm>Quota Verification</title>
323 <para>Once the quota parameters have been configured, all targets
324 which are part of the file system will be automatically notified of the
325 new quota settings and enable/disable quota enforcement as needed. The
326 per-target enforcement status can still be verified by running the
327 following <emphasis>command on the MDS(s)</emphasis>:</para>
329 $ lctl get_param osd-*.*.quota_slave.info
330 osd-zfs.testfs-MDT0000.quota_slave.info=
331 target name: testfs-MDT0000
335 conn to master: setup
336 user uptodate: glb[1],slv[1],reint[0]
337 group uptodate: glb[1],slv[1],reint[0]
342 <section xml:id="quota_administration">
345 <primary>Quotas</primary>
346 <secondary>creating</secondary>
347 </indexterm>Quota Administration</title>
348 <para>Once the file system is up and running, quota limits on blocks
349 and inodes can be set for user, group, and project. This is <emphasis>
controlled entirely from a client</emphasis> via three quota
parameters:</para>
<emphasis role="bold">Grace period</emphasis> -- The period of time (in
354 seconds) within which users are allowed to exceed their soft limit. There
355 are six types of grace periods:</para>
358 <para>user block soft limit</para>
361 <para>user inode soft limit</para>
364 <para>group block soft limit</para>
367 <para>group inode soft limit</para>
370 <para>project block soft limit</para>
373 <para>project inode soft limit</para>
376 <para>The grace period applies to all users. The user block soft limit is
for all users who are using a block quota.</para>
379 <emphasis role="bold">Soft limit</emphasis> -- The grace timer is started
once the soft limit is exceeded. At this point, the user/group/project
can still allocate blocks/inodes. When the grace time expires, if the
user is still above the soft limit, the soft limit becomes a hard limit
and the user/group/project cannot allocate any new blocks/inodes.
The user/group/project should then delete files to get back under the
soft limit.
385 The soft limit MUST be smaller than the hard limit. If the soft limit is
386 not needed, it should be set to zero (0).</para>
<emphasis role="bold">Hard limit</emphasis> -- Block or inode allocation
fails with
<literal>EDQUOT</literal> (i.e., quota exceeded) when the hard limit is
reached. The hard limit is the absolute limit. When a grace period is set,
one can exceed the soft limit within the grace period if still under the
hard limit.</para>
394 <para>Due to the distributed nature of a Lustre file system and the need to
395 maintain performance under load, those quota parameters may not be 100%
accurate. The quota settings can be manipulated via the
<literal>lfs</literal> command, executed on a client, which includes
several options to work with quotas:</para>
402 <varname>quota</varname> -- displays general quota information (disk
403 usage and limits)</para>
407 <varname>setquota</varname> -- specifies quota limits and tunes the
408 grace period. By default, the grace period is one week.</para>
413 lfs quota [-q] [-v] [-h] [-o obd_uuid] [-u|-g|-p <replaceable>uname|uid|gname|gid|projid</replaceable>] <replaceable>/mount_point</replaceable>
414 lfs quota -t {-u|-g|-p} <replaceable>/mount_point</replaceable>
lfs setquota {-u|--user|-g|--group|-p|--project} <replaceable>username|groupname|projid</replaceable> [-b <replaceable>block_softlimit</replaceable>] \
416 [-B <replaceable>block_hardlimit</replaceable>] [-i <replaceable>inode_softlimit</replaceable>] \
417 [-I <replaceable>inode_hardlimit</replaceable>] <replaceable>/mount_point</replaceable>
419 <para>To display general quota information (disk usage and limits) for the
user running the command and their primary group, run:</para>
422 $ lfs quota /mnt/testfs
424 <para>To display general quota information for a specific user ("
425 <literal>bob</literal>" in this example), run:</para>
427 $ lfs quota -u bob /mnt/testfs
429 <para>To display general quota information for a specific user ("
430 <literal>bob</literal>" in this example) and detailed quota statistics for
431 each MDT and OST, run:</para>
433 $ lfs quota -u bob -v /mnt/testfs
435 <para>To display general quota information for a specific project ("
436 <literal>1</literal>" in this example), run:</para>
438 $ lfs quota -p 1 /mnt/testfs
440 <para>To display general quota information for a specific group ("
441 <literal>eng</literal>" in this example), run:</para>
443 $ lfs quota -g eng /mnt/testfs
445 <para>To limit quota usage for a specific project ID on a specific
446 directory ("<literal>/mnt/testfs/dir</literal>" in this example), run:</para>
448 $ chattr +P /mnt/testfs/dir
449 $ chattr -p 1 /mnt/testfs/dir
450 $ lfs setquota -p 1 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
<para>Note that for
<literal>lfs quota -p</literal> to report the space/inode usage under a
directory properly (much faster than <literal>du</literal>), a
different project ID must be used for each directory, as in the sketch
below.
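<para>For example, two directory trees can be accounted for separately by
assigning each its own project ID (the IDs and paths here are
illustrative):</para>
<screen>$ chattr +P /mnt/testfs/dir1
$ chattr -p 1 /mnt/testfs/dir1
$ chattr +P /mnt/testfs/dir2
$ chattr -p 2 /mnt/testfs/dir2
</screen>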
457 <para>To display block and inode grace times for user quotas, run:</para>
459 $ lfs quota -t -u /mnt/testfs
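<para>The grace times can be tuned with
<literal>lfs setquota -t</literal>; a sketch, assuming the values are
given in seconds (one week, 604800 seconds, shown here):</para>
<screen>$ lfs setquota -t -u -b 604800 -i 604800 /mnt/testfs
</screen>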
461 <para>To set user or group quotas for a specific ID ("bob" in this
462 example), run:</para>
464 $ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
<para>In this example, the block hard limit for user "bob" is set to
about 300 MB (309200 KB) and the inode hard limit is set to 11,000
files.</para>
469 <para>The quota command displays the quota allocated and consumed by each
470 Lustre target. Using the previous
471 <literal>setquota</literal> example, running this
472 <literal>lfs</literal> quota command:</para>
474 $ lfs quota -u bob -v /mnt/testfs
476 <para>displays this command output:</para>
478 Disk quotas for user bob (uid 6000):
479 Filesystem kbytes quota limit grace files quota limit grace
480 /mnt/testfs 0 30720 30920 - 0 10000 11000 -
481 testfs-MDT0000_UUID 0 - 8192 - 0 - 2560 -
482 testfs-OST0000_UUID 0 - 8192 - 0 - 0 -
483 testfs-OST0001_UUID 0 - 8192 - 0 - 0 -
484 Total allocated inode limit: 2560, total allocated block limit: 24576
486 <para>Global quota limits are stored in dedicated index files (there is one
487 such index per quota type) on the quota master target (aka QMT). The QMT
runs on MDT0000 and exports the global indices via <literal>lctl
get_param</literal>. The global indices can thus be dumped via the
following command:
492 # lctl get_param qmt.testfs-QMT0000.*.glb-*
</screen>The format of the global indices depends on the OSD type. The ldiskfs OSD
uses IAM files while the ZFS OSD creates dedicated ZAPs.</para>
495 <para>Each slave also stores a copy of this global index locally. When the
496 global index is modified on the master, a glimpse callback is issued on the
497 global quota lock to notify all slaves that the global index has been
498 modified. This glimpse callback includes information about the identifier
499 subject to the change. If the global index on the QMT is modified while a
500 slave is disconnected, the index version is used to determine whether the
slave copy of the global index is out of date. If so, the slave
502 fetches the whole index again and updates the local copy. The slave copy of
503 the global index can also be accessed via the following command:
505 lctl get_param osd-*.*.quota_slave.limit*
<para>Prior to release 2.4, global quota limits were stored in
administrative quota files using the on-disk format of the Linux quota
file. When upgrading MDT0000 to release 2.4, those administrative quota
files are converted into IAM indexes automatically, preserving existing
quota
512 limits previously set by the administrator.</para>
515 <section xml:id="quota_allocation">
518 <primary>Quotas</primary>
519 <secondary>allocating</secondary>
520 </indexterm>Quota Allocation</title>
521 <para>In a Lustre file system, quota must be properly allocated or users
522 may experience unnecessary failures. The file system block quota is divided
523 up among the OSTs within the file system. Each OST requests an allocation
524 which is increased up to the quota limit. The quota allocation is then
<emphasis role="italic">quantized</emphasis> to reduce the amount of
quota-related request traffic.</para>
527 <para>The Lustre quota system distributes quotas from the Quota Master
Target (aka QMT). Only one QMT instance is supported for now, and it runs
on the same node as MDT0000. All OSTs and MDTs set up a Quota Slave Device
(aka QSD) which connects to the QMT to allocate/release quota space. The
QSD is set up directly from the OSD layer.</para>
532 <para>To reduce quota requests, quota space is initially allocated to QSDs
in very large chunks. How much unused quota space can be held by a target
534 is controlled by the qunit size. When quota space for a given ID is close
535 to exhaustion on the QMT, the qunit size is reduced and QSDs are notified
536 of the new qunit size value via a glimpse callback. Slaves are then
537 responsible for releasing quota space above the new qunit value. The qunit
538 size isn't shrunk indefinitely and there is a minimal value of 1MB for
539 blocks and 1,024 for inodes. This means that the quota space rebalancing
process will stop when this minimum value is reached. As a result, an
out-of-quota error can be returned while many slaves still have 1MB or
1,024 inodes of spare quota space.</para>
543 <para>If we look at the
544 <literal>setquota</literal> example again, running this
545 <literal>lfs quota</literal> command:</para>
547 # lfs quota -u bob -v /mnt/testfs
549 <para>displays this command output:</para>
551 Disk quotas for user bob (uid 500):
552 Filesystem kbytes quota limit grace files quota limit grace
553 /mnt/testfs 30720* 30720 30920 6d23h56m44s 10101* 10000 11000
555 testfs-MDT0000_UUID 0 - 0 - 10101 - 10240
556 testfs-OST0000_UUID 0 - 1024 - - - -
557 testfs-OST0001_UUID 30720* - 29896 - - - -
558 Total allocated inode limit: 10240, total allocated block limit: 30920
560 <para>The total quota limit of 30,920 is allocated to user bob, which is
561 further distributed to two OSTs.</para>
562 <para>Values appended with '
563 <literal>*</literal>' show that the quota limit has been exceeded, causing
564 the following error when trying to write or create a file:</para>
567 $ cp: writing `/mnt/testfs/foo`: Disk quota exceeded.
571 <para>It is very important to note that the block quota is consumed per
572 OST and the inode quota per MDS. Therefore, when the quota is consumed on
573 one OST (resp. MDT), the client may not be able to create files
574 regardless of the quota available on other OSTs (resp. MDTs).</para>
<para>Setting the quota limit below the minimal qunit size may prevent
the user/group from creating any files. It is thus recommended to use
soft/hard limits which are a multiple of the number of OSTs times the
minimal qunit size, as in the example below.</para>
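<para>For example, on a hypothetical file system with 8 OSTs and the
1MB minimal qunit size, block limits should be at least 8 x 1MB = 8192KB;
a sketch setting a 16MB (16384KB) block hard limit for user bob:</para>
<screen>$ lfs setquota -u bob -b 0 -B 16384 /mnt/testfs
</screen>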
<para>To determine the total number of inodes, use
<literal>lfs df -i</literal> (and also
<literal>lctl get_param *.*.filestotal</literal>). For more information on
the
<literal>lfs df -i</literal> command and the command output, see
585 <xref linkend="dbdoclet.checking_free_space" />.</para>
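<para>For example, to list inode usage and totals per target (mount point
illustrative):</para>
<screen>$ lfs df -i /mnt/testfs
</screen>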
586 <para>Unfortunately, the
587 <literal>statfs</literal> interface does not report the free inode count
588 directly, but instead reports the total inode and used inode counts. The
589 free inode count is calculated for
590 <literal>df</literal> from (total inodes - used inodes). It is not critical
591 to know the total inode count for a file system. Instead, you should know
592 (accurately), the free inode count and the used inode count for a file
593 system. The Lustre software manipulates the total inode count in order to
594 accurately report the other two values.</para>
596 <section xml:id="quota_interoperability">
599 <primary>Quotas</primary>
600 <secondary>Interoperability</secondary>
601 </indexterm>Quotas and Version Interoperability</title>
602 <para>The new quota protocol introduced in Lustre software release 2.4.0
603 <emphasis role="bold">is not compatible</emphasis> with previous
604 versions. As a consequence,
605 <emphasis role="bold">all Lustre servers must be upgraded to release 2.4.0
606 for quota to be functional</emphasis>. Quota limits set on the Lustre file
607 system prior to the upgrade will be automatically migrated to the new quota
index format. Accounting information with the ldiskfs backend will
be regenerated by running
<literal>tunefs.lustre --quota</literal> against all targets. It is worth
noting that running
612 <literal>tunefs.lustre --quota</literal> is
613 <emphasis role="bold">mandatory</emphasis> for all targets formatted with a
614 Lustre software release older than release 2.4.0, otherwise quota
615 enforcement as well as accounting won't be functional.</para>
<para>In addition, the quota protocol in release 2.4 assumes that the
Lustre client supports the
618 <literal>OBD_CONNECT_EINPROGRESS</literal> connect flag. Clients supporting
619 this flag will retry indefinitely when the server returns
<literal>EINPROGRESS</literal> in a reply. Here is the list of Lustre client
versions which are compatible with release 2.4:
624 <para>Release 2.3-based clients and later</para>
627 <para>Release 1.8 clients newer or equal to release 1.8.9-wc1</para>
630 <para>Release 2.1 clients newer or equal to release 2.1.4</para>
633 <para condition="l2A">To use the project quota functionality introduced in
634 Lustre 2.10, <emphasis role="bold">all Lustre servers and clients must be
635 upgraded to Lustre release 2.10 or later for project quota to work
636 correctly</emphasis>. Otherwise, project quota will be inaccessible on
637 clients and not be accounted for on OSTs.</para>
639 <section xml:id="granted_cache_and_quota_limits">
642 <primary>Quotas</primary>
643 <secondary>known issues</secondary>
644 </indexterm>Granted Cache and Quota Limits</title>
<para>In a Lustre file system, granted cache does not respect quota limits.
OSTs grant cache to a Lustre client to accelerate I/O.
Granting cache causes writes to succeed on OSTs even if they exceed
the quota limits, allowing usage to overshoot those limits.</para>
649 <para>The sequence is:</para>
652 <para>A user writes files to the Lustre file system.</para>
655 <para>If the Lustre client has enough granted cache, then it returns
656 'success' to users and arranges the writes to the OSTs.</para>
659 <para>Because Lustre clients have delivered success to users, the OSTs
660 cannot fail these writes.</para>
663 <para>Because of granted cache, writes always overwrite quota limitations.
664 For example, if you set a 400 GB quota on user A and use IOR to write for
665 user A from a bundle of clients, you will write much more data than 400 GB,
666 and cause an out-of-quota error (
667 <literal>EDQUOT</literal>).</para>
669 <para>The effect of granted cache on quota limits can be mitigated, but
670 not eradicated. Reduce the maximum amount of dirty data on the clients
671 (minimal value is 1MB):</para>
675 <literal>lctl set_param osc.*.max_dirty_mb=8</literal>
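<para>The current value can be read back with:</para>
<screen>lctl get_param osc.*.max_dirty_mb
</screen>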
681 <section xml:id="lustre_quota_statistics">
684 <primary>Quotas</primary>
685 <secondary>statistics</secondary>
686 </indexterm>Lustre Quota Statistics</title>
687 <para>The Lustre software includes statistics that monitor quota activity,
688 such as the kinds of quota RPCs sent during a specific period, the average
689 time to complete the RPCs, etc. These statistics are useful to measure
690 performance of a Lustre file system.</para>
691 <para>Each quota statistic consists of a quota event and
692 <literal>min_time</literal>,
693 <literal>max_time</literal> and
694 <literal>sum_time</literal> values for the event.</para>
695 <informaltable frame="all">
697 <colspec colname="c1" colwidth="50*" />
698 <colspec colname="c2" colwidth="50*" />
703 <emphasis role="bold">Quota Event</emphasis>
708 <emphasis role="bold">Description</emphasis>
717 <emphasis role="bold">sync_acq_req</emphasis>
<para>Quota slaves send an acquiring_quota request and wait for
its return.</para>
728 <emphasis role="bold">sync_rel_req</emphasis>
<para>Quota slaves send a releasing_quota request and wait for
its return.</para>
739 <emphasis role="bold">async_acq_req</emphasis>
743 <para>Quota slaves send an acquiring_quota request and do not
744 wait for its return.</para>
750 <emphasis role="bold">async_rel_req</emphasis>
754 <para>Quota slaves send a releasing_quota request and do not wait
755 for its return.</para>
761 <emphasis role="bold">wait_for_blk_quota
762 (lquota_chkquota)</emphasis>
766 <para>Before data is written to OSTs, the OSTs check if the
767 remaining block quota is sufficient. This is done in the
768 lquota_chkquota function.</para>
774 <emphasis role="bold">wait_for_ino_quota
775 (lquota_chkquota)</emphasis>
779 <para>Before files are created on the MDS, the MDS checks if the
780 remaining inode quota is sufficient. This is done in the
781 lquota_chkquota function.</para>
787 <emphasis role="bold">wait_for_blk_quota
788 (lquota_pending_commit)</emphasis>
<para>After blocks are written to OSTs, the relative quota
information is updated. This is done in the lquota_pending_commit
function.</para>
800 <emphasis role="bold">wait_for_ino_quota
801 (lquota_pending_commit)</emphasis>
<para>After files are created, the relative quota information is
updated. This is done in the lquota_pending_commit
function.</para>
813 <emphasis role="bold">wait_for_pending_blk_quota_req
814 (qctxt_wait_pending_dqacq)</emphasis>
<para>On the MDS or OSTs, only one thread at a time sends a quota
request for a specific UID/GID for block quota. If other threads
need to do this at the same time, they must wait. This is done in the
qctxt_wait_pending_dqacq
function.</para>
828 <emphasis role="bold">wait_for_pending_ino_quota_req
829 (qctxt_wait_pending_dqacq)</emphasis>
833 <para>On the MDS, there is one thread sending a quota request for
834 a specific UID/GID for inode quota at any time. If other threads
835 need to do this too, they should wait. This is done in the
836 qctxt_wait_pending_dqacq function.</para>
842 <emphasis role="bold">nowait_for_pending_blk_quota_req
843 (qctxt_wait_pending_dqacq)</emphasis>
847 <para>On the MDS or OSTs, there is one thread sending a quota
848 request for a specific UID/GID for block quota at any time. When
849 threads enter qctxt_wait_pending_dqacq, they do not need to wait.
850 This is done in the qctxt_wait_pending_dqacq function.</para>
856 <emphasis role="bold">nowait_for_pending_ino_quota_req
857 (qctxt_wait_pending_dqacq)</emphasis>
861 <para>On the MDS, there is one thread sending a quota request for
862 a specific UID/GID for inode quota at any time. When threads
863 enter qctxt_wait_pending_dqacq, they do not need to wait. This is
864 done in the qctxt_wait_pending_dqacq function.</para>
870 <emphasis role="bold">quota_ctl</emphasis>
<para>The quota_ctl statistic is generated when
<literal>lfs setquota</literal>,
<literal>lfs quota</literal>, and similar commands are issued.</para>
882 <emphasis role="bold">adjust_qunit</emphasis>
<para>Counts each time the qunit size is adjusted.</para>
893 <title>Interpreting Quota Statistics</title>
894 <para>Quota statistics are an important measure of the performance of a
895 Lustre file system. Interpreting these statistics correctly can help you
896 diagnose problems with quotas, and may indicate adjustments to improve
897 system performance.</para>
898 <para>For example, if you run this command on the OSTs:</para>
900 lctl get_param lquota.testfs-OST0000.stats
902 <para>You will get a result similar to this:</para>
904 snapshot_time 1219908615.506895 secs.usecs
905 async_acq_req 1 samples [us] 32 32 32
906 async_rel_req 1 samples [us] 5 5 5
907 nowait_for_pending_blk_quota_req(qctxt_wait_pending_dqacq) 1 samples [us] 2\
909 quota_ctl 4 samples [us] 80 3470 4293
910 adjust_qunit 1 samples [us] 70 70 70
913 <para>In the first line,
914 <literal>snapshot_time</literal> indicates when the statistics were taken.
The remaining lines list the quota events and their associated
statistics.</para>
917 <para>In the second line, the
918 <literal>async_acq_req</literal> event occurs one time. The
919 <literal>min_time</literal>,
920 <literal>max_time</literal> and
921 <literal>sum_time</literal> statistics for this event are 32, 32 and 32,
922 respectively. The unit is microseconds (μs).</para>
923 <para>In the fifth line, the quota_ctl event occurs four times. The
924 <literal>min_time</literal>,
925 <literal>max_time</literal> and
926 <literal>sum_time</literal> statistics for this event are 80, 3470 and
927 4293, respectively. The unit is microseconds (μs).</para>