1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="configuringquotas">
<title xml:id="configuringquotas.title">Configuring and Managing Quotas</title>
7 <section xml:id="quota_configuring">
10 <primary>Quotas</primary>
11 <secondary>configuring</secondary>
12 </indexterm>Working with Quotas</title>
<para>Quotas allow a system administrator to limit the amount of disk
space a user, group, or project can use. Quotas are set by root, and can
be specified for individual users, groups, and/or projects. Before a file
is written to a partition where quotas are set, the quota of the creator's
group is checked. If a quota exists, then the file size counts towards
the group's quota. If no quota exists, then the owner's user quota is
checked before the file is written. Inode usage is controlled in the same
way, so that a user, group, or project cannot consume more than its
allocated number of inodes.</para>
21 <para>Lustre quota enforcement differs from standard Linux quota
22 enforcement in several ways:</para>
25 <para>Quotas are administered via the
26 <literal>lfs</literal> and
27 <literal>lctl</literal> commands (post-mount).</para>
30 <para>The quota feature in Lustre software is distributed
31 throughout the system (as the Lustre file system is a distributed file
32 system). Because of this, quota setup and behavior on Lustre is
33 somewhat different from local disk quotas in the following ways:</para>
36 <para>No single point of administration: some commands must be
37 executed on the MGS, other commands on the MDSs and OSSs, and still
38 other commands on the client.</para>
<para>Granularity: a local quota is typically specified with
kilobyte resolution, while Lustre uses one megabyte as the smallest
quota resolution.</para>
<para>Accuracy: quota information is distributed throughout the file
system. To minimize performance overhead during normal use, it can only
be calculated accurately when the file system is quiescent.</para>
54 <para>Quotas are allocated and consumed in a quantized fashion.</para>
<para>Clients do not need to set the
<literal>usrquota</literal> or
<literal>grpquota</literal> mount options. Space accounting is
enabled by default, and quota enforcement can be enabled or disabled on
a per-file-system basis with <literal>lctl conf_param</literal>.</para>
65 <para>Although a quota feature is available in the Lustre software, root
66 quotas are NOT enforced.</para>
68 <literal>lfs setquota -u root</literal> (limits are not enforced)</para>
<literal>lfs quota -u root</literal> (reported usage includes internal
Lustre data that is dynamic in size and does not accurately reflect the
block and inode usage visible at the mount point).</para>
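<para>For example, even though limits on root are not enforced, usage
attributed to root can still be inspected from a client (the
<literal>/mnt/testfs</literal> mount point below is only an
illustration):</para>
<screen>client$ lfs quota -u root /mnt/testfs</screen>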
75 <section xml:id="enabling_disk_quotas">
78 <primary>Quotas</primary>
79 <secondary>enabling disk</secondary>
80 </indexterm>Enabling Disk Quotas</title>
<para>The design of quotas on Lustre separates management and enforcement
from resource usage and accounting. The Lustre software is responsible
for management and enforcement, while the back-end file system is
responsible for resource usage and accounting. Because of this, quotas
must first be enabled on the back-end file system.</para>
89 <para>Quota setup is orchestrated by the MGS and <emphasis>all setup
90 commands in this section must be run directly on the MGS</emphasis>.
91 Support for project quotas specifically requires Lustre Release 2.10 or
92 later. A <emphasis>patched server</emphasis> may be required, depending
93 on the kernel version and backend filesystem type:</para>
94 <informaltable frame="all">
96 <colspec colname="c1" colwidth="50*" />
97 <colspec colname="c2" colwidth="50*" align="center" />
102 <emphasis role="bold">Configuration</emphasis>
107 <emphasis role="bold">Patched Server Required?</emphasis>
115 <emphasis>ldiskfs with kernel version < 4.5</emphasis>
117 <entry><para>Yes</para></entry>
121 <emphasis>ldiskfs with kernel version >= 4.5</emphasis>
123 <entry><para>No</para></entry>
127 <emphasis>zfs version >=0.8 with kernel
128 version < 4.5</emphasis>
130 <entry><para>Yes</para></entry>
134 <emphasis>zfs version >=0.8 with kernel
135 version > 4.5</emphasis>
137 <entry><para>No</para></entry>
<para>*Note: Project quotas are not supported on zfs versions earlier
than 0.8.0.</para>
<para>Once set up, verification of the quota state must be performed on
the MDT. Although quota enforcement is managed by the Lustre software,
each OSD implementation relies on the back-end file system to maintain
per-user/group/project block and inode usage. Hence, differences exist
when setting up quotas with ldiskfs or ZFS back-ends:</para>
<para>For ldiskfs backends,
<literal>mkfs.lustre</literal> now creates empty quota files and
enables the QUOTA feature flag in the superblock, which turns quota
accounting on automatically at mount time. e2fsck was also modified
to fix the quota files when the QUOTA feature flag is present. The
project quota feature is disabled by default, and
<literal>tune2fs</literal> needs to be run to enable it on every
target before the target is mounted.</para>
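<para>A minimal sketch of enabling the project quota feature on one
target (the device name below is only an example; run this against each
unmounted ldiskfs target in turn):</para>
<screen>target# tune2fs -O project /dev/sda1</screen>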
<para>For ZFS backends, <emphasis>the project quota feature is not
supported on zfs versions less than 0.8.0.</emphasis> Accounting ZAPs
are created and maintained by the ZFS file system itself. While ZFS
tracks per-user and per-group block usage, it does not handle inode
accounting for ZFS versions prior to zfs-0.7.0. The ZFS OSD previously
implemented its own support for inode tracking. Two options are
available:</para>
<para>The ZFS OSD can estimate the number of inodes in use based
on the number of blocks used by a given user or group. This mode
can be enabled by running the following command on the server:
<literal>lctl set_param
osd-zfs.${FSNAME}-${TARGETNAME}.quota_iused_estimate=1</literal>.</para>
<para>As with block accounting, dedicated ZAPs are also
created by the ZFS OSD to maintain per-user and per-group inode usage.
This is the default mode, and corresponds to
<literal>quota_iused_estimate</literal> set to 0 (see the example after
this list).</para>
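<para>A quick way to check which mode is in effect on a given ZFS target
(the file system and target names below are only examples):</para>
<screen>oss# lctl get_param osd-zfs.testfs-OST0000.quota_iused_estimate</screen>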
<para condition="l2A">Lustre filesystems formatted with a Lustre release
prior to 2.10 can still be safely upgraded to release 2.10, but will not
have functional project quota usage reporting until
<literal>tune2fs -O project</literal> is run against all ldiskfs backend
targets. This command sets the PROJECT feature flag in the superblock and
runs e2fsck (as a result, the target must be offline). See
<xref linkend="quota_interoperability"/> for further important
considerations.</para>
<para>Lustre requires a version of e2fsprogs that supports quota
to be installed on the server nodes when using the ldiskfs backend
(e2fsprogs is not needed with the ZFS backend). In general, we recommend
using the latest e2fsprogs version available at
<link xl:href="http://downloads.whamcloud.com/e2fsprogs/">
http://downloads.whamcloud.com/public/e2fsprogs/</link>.</para>
205 <para>The ldiskfs OSD relies on the standard Linux quota to maintain
206 accounting information on disk. As a consequence, the Linux kernel
207 running on the Lustre servers using ldiskfs backend must have
208 <literal>CONFIG_QUOTA</literal>,
209 <literal>CONFIG_QUOTACTL</literal> and
210 <literal>CONFIG_QFMT_V2</literal> enabled.</para>
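<para>One way to confirm that these options are enabled in the running
server kernel (the config file path varies by distribution) is:</para>
<screen>server# grep -E 'CONFIG_QUOTA=|CONFIG_QUOTACTL=|CONFIG_QFMT_V2=' /boot/config-$(uname -r)</screen>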
<para>Quota enforcement is turned on/off independently of space
accounting, which is always enabled. There is a single per-file-system
quota parameter controlling inode/block quota enforcement.
Like all permanent parameters, this quota parameter is set with
<literal>lctl conf_param</literal> on the MGS:</para>
218 lctl conf_param <replaceable>fsname</replaceable>.quota.<replaceable>ost|mdt</replaceable>=<replaceable>u|g|p|ugp|none</replaceable>
<literal>ost</literal> -- to configure block quota managed by OSTs
<literal>mdt</literal> -- to configure inode quota managed by MDTs
233 <literal>u</literal> -- to enable quota enforcement for users
238 <literal>g</literal> -- to enable quota enforcement for groups
243 <literal>p</literal> -- to enable quota enforcement for projects
248 <literal>ugp</literal> -- to enable quota enforcement for all users,
249 groups and projects</para>
253 <literal>none</literal> -- to disable quota enforcement for all users,
254 groups and projects</para>
257 <para>Examples:</para>
<para>To turn on user, group, and project quotas for block only on
file system <literal>testfs1</literal>, <emphasis>on the MGS</emphasis> run:</para>
261 <screen>mgs# lctl conf_param testfs1.quota.ost=ugp </screen>
262 <para>To turn on group quotas for inodes on file system
263 <literal>testfs2</literal>, on the MGS run:</para>
264 <screen>mgs# lctl conf_param testfs2.quota.mdt=g </screen>
<para>To turn off user, group, and project quotas for both inode and block
on file system <literal>testfs3</literal>, on the MGS run:</para>
268 <screen>mgs# lctl conf_param testfs3.quota.ost=none</screen>
269 <screen>mgs# lctl conf_param testfs3.quota.mdt=none</screen>
270 <section xml:id="quota_verification">
273 <primary>Quotas</primary>
274 <secondary>verifying</secondary>
275 </indexterm>Quota Verification</title>
276 <para>Once the quota parameters have been configured, all targets
277 which are part of the file system will be automatically notified of the
278 new quota settings and enable/disable quota enforcement as needed. The
279 per-target enforcement status can still be verified by running the
280 following <emphasis>command on the MDS(s)</emphasis>:</para>
282 $ lctl get_param osd-*.*.quota_slave.info
283 osd-zfs.testfs-MDT0000.quota_slave.info=
284 target name: testfs-MDT0000
288 conn to master: setup
289 user uptodate: glb[1],slv[1],reint[0]
290 group uptodate: glb[1],slv[1],reint[0]
294 <section xml:id="quota_administration">
297 <primary>Quotas</primary>
298 <secondary>creating</secondary>
299 </indexterm>Quota Administration</title>
<para>Once the file system is up and running, quota limits on blocks
and inodes can be set for user, group, and project. This is <emphasis>
controlled entirely from a client</emphasis> via three quota
parameters:</para>
<emphasis role="bold">Grace period</emphasis> -- The period of time (in
seconds) within which users are allowed to exceed their soft limit. There
are six types of grace periods:</para>
310 <para>user block soft limit</para>
313 <para>user inode soft limit</para>
316 <para>group block soft limit</para>
319 <para>group inode soft limit</para>
322 <para>project block soft limit</para>
325 <para>project inode soft limit</para>
<para>The grace period applies to all users of a given quota type rather
than to individual users; for example, the user block grace period applies
to every user with a block quota set.</para>
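<para>The grace periods themselves can be tuned from a client with
<literal>lfs setquota -t</literal>. A minimal sketch, setting the user
block and inode grace periods to one week (values in seconds, mount point
hypothetical):</para>
<screen>client# lfs setquota -t -u --block-grace 604800 --inode-grace 604800 /mnt/testfs</screen>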
<emphasis role="bold">Soft limit</emphasis> -- The grace timer is started
once the soft limit is exceeded. At this point, the user/group/project
can still allocate blocks/inodes. When the grace time expires, if the
user is still above the soft limit, the soft limit becomes a hard limit
and the user/group/project can no longer allocate new blocks/inodes.
The user/group/project should then delete files to get back under the
soft limit. The soft limit MUST be smaller than the hard limit. If the
soft limit is not needed, it should be set to zero (0).</para>
<emphasis role="bold">Hard limit</emphasis> -- Block or inode allocation
fails with <literal>EDQUOT</literal> (i.e., quota exceeded) when the hard
limit is reached. The hard limit is the absolute limit. When a grace
period is set, one can exceed the soft limit within the grace period as
long as usage stays under the hard limit.</para>
<para>Due to the distributed nature of a Lustre file system and the need to
maintain performance under load, those quota parameters may not be 100%
accurate. The quota settings can be manipulated via the
<literal>lfs</literal> command, executed on a client, which includes
several options to work with quotas:</para>
354 <varname>quota</varname> -- displays general quota information (disk
355 usage and limits)</para>
359 <varname>setquota</varname> -- specifies quota limits and tunes the
360 grace period. By default, the grace period is one week.</para>
365 lfs quota [-q] [-v] [-h] [-o obd_uuid] [-u|-g|-p <replaceable>uname|uid|gname|gid|projid</replaceable>] <replaceable>/mount_point</replaceable>
366 lfs quota -t {-u|-g|-p} <replaceable>/mount_point</replaceable>
367 lfs setquota {-u|--user|-g|--group|-p|--project} <replaceable>username|groupname</replaceable> [-b <replaceable>block-softlimit</replaceable>] \
368 [-B <replaceable>block_hardlimit</replaceable>] [-i <replaceable>inode_softlimit</replaceable>] \
369 [-I <replaceable>inode_hardlimit</replaceable>] <replaceable>/mount_point</replaceable>
<para>To display general quota information (disk usage and limits) for the
user running the command and their primary group, run:</para>
374 $ lfs quota /mnt/testfs
376 <para>To display general quota information for a specific user ("
377 <literal>bob</literal>" in this example), run:</para>
379 $ lfs quota -u bob /mnt/testfs
381 <para>To display general quota information for a specific user ("
382 <literal>bob</literal>" in this example) and detailed quota statistics for
383 each MDT and OST, run:</para>
385 $ lfs quota -u bob -v /mnt/testfs
387 <para>To display general quota information for a specific project ("
388 <literal>1</literal>" in this example), run:</para>
390 $ lfs quota -p 1 /mnt/testfs
392 <para>To display general quota information for a specific group ("
393 <literal>eng</literal>" in this example), run:</para>
395 $ lfs quota -g eng /mnt/testfs
397 <para>To limit quota usage for a specific project ID on a specific
398 directory ("<literal>/mnt/testfs/dir</literal>" in this example), run:</para>
400 $ chattr +P /mnt/testfs/dir
401 $ chattr -p 1 /mnt/testfs/dir
402 $ lfs setquota -p 1 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
<para>Note that for <literal>lfs quota -p</literal> to report the
space/inode usage under a directory correctly (which is much faster than
<literal>du</literal>), different directories must be assigned different
project IDs, as shown in the sketch below.</para>
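<para>For example, a second directory would get its own project ID
(directory names and IDs below are hypothetical):</para>
<screen>$ chattr +P /mnt/testfs/dir2
$ chattr -p 2 /mnt/testfs/dir2
$ lfs setquota -p 2 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs</screen>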
409 <para>To display block and inode grace times for user quotas, run:</para>
411 $ lfs quota -t -u /mnt/testfs
413 <para>To set user or group quotas for a specific ID ("bob" in this
414 example), run:</para>
416 $ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
<para>In this example, user "bob" is given a block soft limit of 300 MB
(307200 KB), a block hard limit of 309200 KB, an inode soft limit of
10,000 files, and an inode hard limit of 11,000 files.</para>
421 <para>The quota command displays the quota allocated and consumed by each
422 Lustre target. Using the previous
423 <literal>setquota</literal> example, running this
424 <literal>lfs</literal> quota command:</para>
426 $ lfs quota -u bob -v /mnt/testfs
428 <para>displays this command output:</para>
430 Disk quotas for user bob (uid 6000):
431 Filesystem kbytes quota limit grace files quota limit grace
432 /mnt/testfs 0 30720 30920 - 0 10000 11000 -
433 testfs-MDT0000_UUID 0 - 8192 - 0 - 2560 -
434 testfs-OST0000_UUID 0 - 8192 - 0 - 0 -
435 testfs-OST0001_UUID 0 - 8192 - 0 - 0 -
436 Total allocated inode limit: 2560, total allocated block limit: 24576
<para>Global quota limits are stored in dedicated index files (there is one
such index per quota type) on the quota master target (aka QMT). The QMT
runs on MDT0000 and exports the global indices via <literal>lctl
get_param</literal>. The global indices can thus be dumped via the
following command:
<screen>
# lctl get_param qmt.testfs-QMT0000.*.glb-*
</screen>The format of the global indices depends on the OSD type. The
ldiskfs OSD uses IAM files, while the ZFS OSD creates dedicated ZAPs.</para>
<para>Each slave also stores a copy of this global index locally. When the
global index is modified on the master, a glimpse callback is issued on the
global quota lock to notify all slaves that the global index has been
modified. This glimpse callback includes information about the identifier
subject to the change. If the global index on the QMT is modified while a
slave is disconnected, the index version is used to determine whether the
slave copy of the global index is out of date. If so, the slave
fetches the whole index again and updates the local copy. The slave copy of
the global index can also be accessed via the following command:
<screen>
lctl get_param osd-*.*.quota_slave.limit*
</screen></para>
460 <section condition='l2C' xml:id="default_quota">
463 <primary>Quotas</primary>
464 <secondary>default</secondary>
465 </indexterm>Default Quota</title>
<para>The default quota is used to enforce quota limits for any user,
group, or project that does not have quotas set by the administrator.</para>
468 <para>The default quota can be disabled by setting limits to
469 <literal>0</literal>.</para>
470 <section xml:id="defalut_quota_usage">
473 <primary>Quotas</primary>
474 <secondary>usage</secondary>
475 </indexterm>Usage</title>
477 lfs quota [-U|--default-usr|-G|--default-grp|-P|--default-prj] <replaceable>/mount_point</replaceable>
478 lfs setquota {-U|--default-usr|-G|--default-grp|-P|--default-prj} [-b <replaceable>block-softlimit</replaceable>] \
479 [-B <replaceable>block_hardlimit</replaceable>] [-i <replaceable>inode_softlimit</replaceable>] [-I <replaceable>inode_hardlimit</replaceable>] <replaceable>/mount_point</replaceable>
480 lfs setquota {-u|-g|-p} <replaceable>username|groupname</replaceable> -d <replaceable>/mount_point</replaceable>
482 <para>To set the default user quota:</para>
484 # lfs setquota -U -b 10G -B 11G -i 100K -I 105K /mnt/testfs
486 <para>To set the default group quota:</para>
488 # lfs setquota -G -b 10G -B 11G -i 100K -I 105K /mnt/testfs
490 <para>To set the default project quota:</para>
492 # lfs setquota -P -b 10G -B 11G -i 100K -I 105K /mnt/testfs
494 <para>To disable the default user quota:</para>
496 # lfs setquota -U -b 0 -B 0 -i 0 -I 0 /mnt/testfs
498 <para>To disable the default group quota:</para>
500 # lfs setquota -G -b 0 -B 0 -i 0 -I 0 /mnt/testfs
502 <para>To disable the default project quota:</para>
504 # lfs setquota -P -b 0 -B 0 -i 0 -I 0 /mnt/testfs
If quota limits are set for some user, group, or project, those specific
limits are used instead of the default quota. A user, group, or project
can be reverted to the default quota by setting its specific quota
limits to <literal>0</literal>.
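<para>Alternatively, the <literal>-d</literal> form shown in the synopsis
above reverts a specific ID to the default quota; for example (user name
and mount point hypothetical):</para>
<screen># lfs setquota -u bob -d /mnt/testfs</screen>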
516 <section xml:id="quota_allocation">
519 <primary>Quotas</primary>
520 <secondary>allocating</secondary>
521 </indexterm>Quota Allocation</title>
<para>In a Lustre file system, quota must be properly allocated or users
may experience unnecessary failures. The file system block quota is divided
up among the OSTs within the file system. Each OST requests an allocation,
which is increased up to the quota limit. The quota allocation is then
<emphasis role="italic">quantized</emphasis> to reduce the amount of
quota-related request traffic.</para>
<para>The Lustre quota system distributes quotas from the Quota Master
Target (aka QMT). Only one QMT instance is supported for now, and it runs
on the same node as MDT0000. All OSTs and MDTs set up a Quota Slave Device
(aka QSD) which connects to the QMT to allocate/release quota space. The
QSD is set up directly from the OSD layer.</para>
<para>To reduce quota requests, quota space is initially allocated to QSDs
in very large chunks. How much unused quota space can be held by a target
is controlled by the qunit size. When quota space for a given ID is close
to exhaustion on the QMT, the qunit size is reduced and QSDs are notified
of the new qunit size value via a glimpse callback. Slaves are then
responsible for releasing quota space above the new qunit value. The qunit
size isn't shrunk indefinitely; there is a minimum value of 1MB for
blocks and 1,024 for inodes. This means that the quota space rebalancing
process will stop when this minimum value is reached. As a result, a quota
exceeded error can be returned even though many slaves still hold 1MB of
block quota or 1,024 inodes of spare quota space.</para>
544 <para>If we look at the
545 <literal>setquota</literal> example again, running this
546 <literal>lfs quota</literal> command:</para>
548 # lfs quota -u bob -v /mnt/testfs
550 <para>displays this command output:</para>
552 Disk quotas for user bob (uid 500):
553 Filesystem kbytes quota limit grace files quota limit grace
554 /mnt/testfs 30720* 30720 30920 6d23h56m44s 10101* 10000 11000
556 testfs-MDT0000_UUID 0 - 0 - 10101 - 10240
557 testfs-OST0000_UUID 0 - 1024 - - - -
558 testfs-OST0001_UUID 30720* - 29896 - - - -
559 Total allocated inode limit: 10240, total allocated block limit: 30920
561 <para>The total quota limit of 30,920 is allocated to user bob, which is
562 further distributed to two OSTs.</para>
563 <para>Values appended with '
564 <literal>*</literal>' show that the quota limit has been exceeded, causing
565 the following error when trying to write or create a file:</para>
cp: writing `/mnt/testfs/foo': Disk quota exceeded.
<para>It is very important to note that the block quota is consumed per
OST and the inode quota per MDT. Therefore, when the quota is exhausted on
one OST (resp. MDT), the client may not be able to create files
regardless of the quota available on other OSTs (resp. MDTs).</para>
<para>Setting the quota limit below the minimum qunit size may prevent
the user/group from creating any files. It is thus recommended to use
soft/hard limits which are a multiple of the number of OSTs * the minimum
qunit size, as in the worked example below.</para>
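<para>For instance, on a hypothetical file system with 8 OSTs and the
default minimum qunit of 1MB for blocks, block limits should be at least a
multiple of 8 OSTs * 1MB = 8MB, for example:</para>
<screen># lfs setquota -u bob -b 8192 -B 16384 /mnt/testfs</screen>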
<para>To determine the total number of inodes, use
<literal>lfs df -i</literal> (and also
<literal>lctl get_param *.*.filestotal</literal>). For more information on
the <literal>lfs df -i</literal> command and the command output, see
<xref linkend="dbdoclet.checking_free_space" />.</para>
<para>Unfortunately, the
<literal>statfs</literal> interface does not report the free inode count
directly, but instead reports the total inode and used inode counts. The
free inode count is calculated for
<literal>df</literal> from (total inodes - used inodes). It is not critical
to know the total inode count for a file system; instead, the free inode
count and the used inode count should be known accurately. The Lustre
software therefore manipulates the total inode count in order to
accurately report the other two values.</para>
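<para>As an illustration, both commands below report inode usage on a
client; <literal>df -i</literal> derives the free count as
total - used (mount point hypothetical):</para>
<screen>client$ lfs df -i /mnt/testfs
client$ df -i /mnt/testfs</screen>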
597 <section xml:id="quota_interoperability">
600 <primary>Quotas</primary>
601 <secondary>Interoperability</secondary>
602 </indexterm>Quotas and Version Interoperability</title>
603 <para condition="l2A">To use the project quota functionality introduced in
604 Lustre 2.10, <emphasis role="bold">all Lustre servers and clients must be
605 upgraded to Lustre release 2.10 or later for project quota to work
606 correctly</emphasis>. Otherwise, project quota will be inaccessible on
clients and not be accounted for on OSTs. Furthermore, the
<emphasis role="bold">servers may be required to use a patched kernel;
</emphasis> for more information, see
<xref linkend="enabling_disk_quotas"/>.</para>
612 <section xml:id="granted_cache_and_quota_limits">
615 <primary>Quotas</primary>
616 <secondary>known issues</secondary>
617 </indexterm>Granted Cache and Quota Limits</title>
<para>In a Lustre file system, granted cache does not respect quota limits.
OSTs grant cache to a Lustre client to accelerate I/O. Granting cache
causes writes to succeed on the OSTs even if they exceed the quota limits,
overrunning those limits.</para>
622 <para>The sequence is:</para>
625 <para>A user writes files to the Lustre file system.</para>
628 <para>If the Lustre client has enough granted cache, then it returns
629 'success' to users and arranges the writes to the OSTs.</para>
632 <para>Because Lustre clients have delivered success to users, the OSTs
633 cannot fail these writes.</para>
<para>Because of granted cache, writes can always overrun quota limits.
For example, if you set a 400 GB quota on user A and use IOR to write for
user A from a group of clients, much more than 400 GB of data may be
written before an out-of-quota error (
<literal>EDQUOT</literal>) is returned.</para>
<para>The effect of granted cache on quota limits can be mitigated, but
not eliminated. Reduce the maximum amount of dirty data on the clients
(minimum value is 1MB):</para>
648 <literal>lctl set_param osc.*.max_dirty_mb=8</literal>
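<para>To check the value currently in effect on a client before changing
it, one could run, for example:</para>
<screen>client$ lctl get_param osc.*.max_dirty_mb</screen>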
654 <section xml:id="lustre_quota_statistics">
657 <primary>Quotas</primary>
658 <secondary>statistics</secondary>
659 </indexterm>Lustre Quota Statistics</title>
660 <para>The Lustre software includes statistics that monitor quota activity,
661 such as the kinds of quota RPCs sent during a specific period, the average
662 time to complete the RPCs, etc. These statistics are useful to measure
663 performance of a Lustre file system.</para>
664 <para>Each quota statistic consists of a quota event and
665 <literal>min_time</literal>,
666 <literal>max_time</literal> and
667 <literal>sum_time</literal> values for the event.</para>
668 <informaltable frame="all">
670 <colspec colname="c1" colwidth="50*" />
671 <colspec colname="c2" colwidth="50*" />
676 <emphasis role="bold">Quota Event</emphasis>
681 <emphasis role="bold">Description</emphasis>
690 <emphasis role="bold">sync_acq_req</emphasis>
<para>Quota slaves send an acquiring_quota request and wait for
its return.</para>
701 <emphasis role="bold">sync_rel_req</emphasis>
<para>Quota slaves send a releasing_quota request and wait for
its return.</para>
712 <emphasis role="bold">async_acq_req</emphasis>
716 <para>Quota slaves send an acquiring_quota request and do not
717 wait for its return.</para>
723 <emphasis role="bold">async_rel_req</emphasis>
727 <para>Quota slaves send a releasing_quota request and do not wait
728 for its return.</para>
734 <emphasis role="bold">wait_for_blk_quota
735 (lquota_chkquota)</emphasis>
739 <para>Before data is written to OSTs, the OSTs check if the
740 remaining block quota is sufficient. This is done in the
741 lquota_chkquota function.</para>
747 <emphasis role="bold">wait_for_ino_quota
748 (lquota_chkquota)</emphasis>
752 <para>Before files are created on the MDS, the MDS checks if the
753 remaining inode quota is sufficient. This is done in the
754 lquota_chkquota function.</para>
760 <emphasis role="bold">wait_for_blk_quota
761 (lquota_pending_commit)</emphasis>
<para>After blocks are written to OSTs, the relevant quota
information is updated. This is done in the lquota_pending_commit
function.</para>
774 (lquota_pending_commit)</emphasis>
<para>After files are created, the relevant quota information is
updated. This is done in the lquota_pending_commit function.</para>
786 <emphasis role="bold">wait_for_pending_blk_quota_req
787 (qctxt_wait_pending_dqacq)</emphasis>
<para>On the MDS or OSTs, there is one thread sending a quota
request for a specific UID/GID for block quota at any time. At
that time, if other threads need to do this too, they should
wait. This is done in the qctxt_wait_pending_dqacq function.</para>
801 <emphasis role="bold">wait_for_pending_ino_quota_req
802 (qctxt_wait_pending_dqacq)</emphasis>
806 <para>On the MDS, there is one thread sending a quota request for
807 a specific UID/GID for inode quota at any time. If other threads
808 need to do this too, they should wait. This is done in the
809 qctxt_wait_pending_dqacq function.</para>
815 <emphasis role="bold">nowait_for_pending_blk_quota_req
816 (qctxt_wait_pending_dqacq)</emphasis>
820 <para>On the MDS or OSTs, there is one thread sending a quota
821 request for a specific UID/GID for block quota at any time. When
822 threads enter qctxt_wait_pending_dqacq, they do not need to wait.
823 This is done in the qctxt_wait_pending_dqacq function.</para>
829 <emphasis role="bold">nowait_for_pending_ino_quota_req
830 (qctxt_wait_pending_dqacq)</emphasis>
834 <para>On the MDS, there is one thread sending a quota request for
835 a specific UID/GID for inode quota at any time. When threads
836 enter qctxt_wait_pending_dqacq, they do not need to wait. This is
837 done in the qctxt_wait_pending_dqacq function.</para>
843 <emphasis role="bold">quota_ctl</emphasis>
<para>The quota_ctl statistic is generated when
<literal>lfs setquota</literal>,
<literal>lfs quota</literal>, and so on, are issued.</para>
855 <emphasis role="bold">adjust_qunit</emphasis>
<para>Each time the qunit size is adjusted, this counter is
incremented.</para>
866 <title>Interpreting Quota Statistics</title>
867 <para>Quota statistics are an important measure of the performance of a
868 Lustre file system. Interpreting these statistics correctly can help you
869 diagnose problems with quotas, and may indicate adjustments to improve
870 system performance.</para>
871 <para>For example, if you run this command on the OSTs:</para>
873 lctl get_param lquota.testfs-OST0000.stats
875 <para>You will get a result similar to this:</para>
877 snapshot_time 1219908615.506895 secs.usecs
878 async_acq_req 1 samples [us] 32 32 32
879 async_rel_req 1 samples [us] 5 5 5
nowait_for_pending_blk_quota_req(qctxt_wait_pending_dqacq) 1 samples [us] 2 2 2
882 quota_ctl 4 samples [us] 80 3470 4293
883 adjust_qunit 1 samples [us] 70 70 70
<para>In the first line,
<literal>snapshot_time</literal> indicates when the statistics were taken.
The remaining lines list the quota events and their associated data.</para>
890 <para>In the second line, the
891 <literal>async_acq_req</literal> event occurs one time. The
892 <literal>min_time</literal>,
893 <literal>max_time</literal> and
894 <literal>sum_time</literal> statistics for this event are 32, 32 and 32,
895 respectively. The unit is microseconds (μs).</para>
896 <para>In the fifth line, the quota_ctl event occurs four times. The
897 <literal>min_time</literal>,
898 <literal>max_time</literal> and
899 <literal>sum_time</literal> statistics for this event are 80, 3470 and
900 4293, respectively. The unit is microseconds (μs).</para>
<!-- vim:expandtab:shiftwidth=2:tabstop=8: -->