1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="configuringquotas">
<title xml:id="configuringquotas.title">Configuring and Managing Quotas</title>
<section xml:id="quota_configuring">
<title><indexterm>
<primary>Quotas</primary>
<secondary>configuring</secondary>
</indexterm>Working with Quotas</title>
13 <para>Quotas allow a system administrator to limit the amount of disk
14 space a user, group, or project can use. Quotas are set by root, and can
15 be specified for individual users, groups, and/or projects. Before a file
16 is written to a partition where quotas are set, the quota of the creator's
17 group is checked. If a quota exists, then the file size counts towards
18 the group's quota. If no quota exists, then the owner's user quota is
checked before the file is written. Inode usage is limited in the same
way, so the number of files created can be controlled as well as the
amount of space consumed.</para>
21 <para>Lustre quota enforcement differs from standard Linux quota
22 enforcement in several ways:</para>
<para>Quotas are administered via the
<literal>lfs</literal> and
<literal>lctl</literal> commands (after the file system is mounted).</para>
30 <para>The quota feature in Lustre software is distributed
31 throughout the system (as the Lustre file system is a distributed file
system). Because of this, quota setup and behavior on Lustre differ
from local disk quotas in the following ways:</para>
36 <para>No single point of administration: some commands must be
37 executed on the MGS, other commands on the MDSs and OSSs, and still
38 other commands on the client.</para>
<para>Granularity: a local quota is typically specified with kilobyte
resolution, while Lustre uses one megabyte as the smallest quota
resolution.</para>
<para>Accuracy: quota information is distributed throughout the file
system and can only be accurately calculated with a quiescent file
system.</para>
53 <para>Quotas are allocated and consumed in a quantized fashion.</para>
<para>Clients do not need to set the
<literal>usrquota</literal> or
<literal>grpquota</literal> mount options. As of Lustre software
release 2.4, space accounting is always enabled by default and quota
enforcement can be enabled/disabled on a per-file system basis with
<literal>lctl conf_param</literal>. It is worth noting that both
<literal>lfs quotaon</literal> and
<literal>quota_type</literal> are deprecated as of Lustre software
release 2.4.0.</para>
68 <para>Although a quota feature is available in the Lustre software, root
69 quotas are NOT enforced.</para>
<para><literal>lfs setquota -u root</literal> (limits are not enforced)</para>
<para><literal>lfs quota -u root</literal> (usage includes internal Lustre data
that is dynamic in size and does not accurately reflect mount point
visible block and inode usage)</para>
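<para>For example (a minimal sketch; the mount point and limits are
illustrative), limits may be set for root, but they will not be
enforced:</para>
<screen>$ lfs setquota -u root -b 307200 -B 309200 /mnt/testfs
$ lfs quota -u root /mnt/testfs</screen>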
78 <section xml:id="enabling_disk_quotas">
<title><indexterm>
<primary>Quotas</primary>
82 <secondary>enabling disk</secondary>
83 </indexterm>Enabling Disk Quotas</title>
84 <para>The design of quotas on Lustre has management and enforcement
85 separated from resource usage and accounting. Lustre software is
86 responsible for management and enforcement. The back-end file
system is responsible for resource usage and accounting. Because of
this, quotas must first be enabled on the back-end disk system. Because
quota setup depends on the Lustre software version in use, you may first
need to run
91 <literal>lctl get_param version</literal> to identify
92 <xref linkend="whichversion"/> you are currently using.
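<para>For example (the output shown is illustrative and its format varies
by release):</para>
<screen>$ lctl get_param version
version=2.10.4</screen>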
94 <section remap="h3" condition="l24" xml:id="enabling_disk_quota_after24">
<title>Enabling Disk Quotas (Lustre Software Release 2.4 and
Later)</title>
98 <para>Quota setup is orchestrated by the MGS and <emphasis>all setup
99 commands in this section must be run directly on the MGS</emphasis>.
100 Support for project quotas specifically requires Lustre Release 2.10 or
101 later. A <emphasis>patched server</emphasis> may be required, depending
102 on the kernel version and backend filesystem type:</para>
<informaltable frame="all">
<tgroup cols="2">
<colspec colname="c1" colwidth="50*" />
<colspec colname="c2" colwidth="50*" align="center" />
<thead>
<row>
<entry><para>
<emphasis role="bold">Configuration</emphasis>
</para></entry>
<entry><para>
<emphasis role="bold">Patched Server Required?</emphasis>
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry><para>
<emphasis>ldiskfs with kernel version &lt; 4.5</emphasis>
</para></entry>
<entry><para>Yes</para></entry>
</row>
<row>
<entry><para>
<emphasis>ldiskfs with kernel version &gt;= 4.5</emphasis>
</para></entry>
<entry><para>No</para></entry>
</row>
<row>
<entry><para>
<emphasis>zfs version &gt;= 0.8 with kernel version &lt; 4.5</emphasis>
</para></entry>
<entry><para>Yes</para></entry>
</row>
<row>
<entry><para>
<emphasis>zfs version &gt;= 0.8 with kernel version &gt;= 4.5</emphasis>
</para></entry>
<entry><para>No</para></entry>
</row>
</tbody>
</tgroup>
</informaltable>
<para>*Note: Project quotas are not supported on zfs versions earlier
than 0.8.0.</para>
<para>Once set up, verification of the quota state must be performed on the
155 MDT. Although quota enforcement is managed by the Lustre software, each
156 OSD implementation relies on the back-end file system to maintain
157 per-user/group/project block and inode usage. Hence, differences exist
158 when setting up quotas with ldiskfs or ZFS back-ends:</para>
161 <para>For ldiskfs backends,
162 <literal>mkfs.lustre</literal> now creates empty quota files and
163 enables the QUOTA feature flag in the superblock which turns quota
164 accounting on at mount time automatically. e2fsck was also modified
165 to fix the quota files when the QUOTA feature flag is present. The
166 project quota feature is disabled by default, and
<literal>tune2fs</literal> needs to be run to enable it on every target
prior to mounting.</para>
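<para>For example (a sketch only; the device name is illustrative), run
the following on each ldiskfs target while it is unmounted:</para>
<screen>$ tune2fs -O project /dev/sdb</screen>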
171 <para>For ZFS backend, <emphasis>the project quota feature is not
172 supported on zfs versions less than 0.8.0.</emphasis> Accounting ZAPs
173 are created and maintained by the ZFS file system itself. While ZFS
174 tracks per-user and group block usage, it does not handle inode
175 accounting for ZFS versions prior to zfs-0.7.0. The ZFS OSD previously
176 implemented its own support for inode tracking. Two options are
180 <para>The ZFS OSD can estimate the number of inodes in-use based
181 on the number of blocks used by a given user or group. This mode
can be enabled by running the following command on the server hosting
the target:
184 <literal>lctl set_param
185 osd-zfs.${FSNAME}-${TARGETNAME}.quota_iused_estimate=1</literal>.
<para>Similarly to block accounting, dedicated ZAPs are also
created by the ZFS OSD to maintain per-user and group inode usage.
191 This is the default mode which corresponds to
192 <literal>quota_iused_estimate</literal> set to 0.</para>
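<para>The current mode can be checked as follows (the target name shown is
illustrative):</para>
<screen>$ lctl get_param osd-zfs.testfs-OST0000.quota_iused_estimate
osd-zfs.testfs-OST0000.quota_iused_estimate=0</screen>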
<para>Lustre file systems formatted with a Lustre release prior to 2.4.0
can still be safely upgraded to release 2.4.0, but will not have a
functional space usage report until
<literal>tunefs.lustre --quota</literal> is run against all targets. This
command sets the QUOTA feature flag in the superblock and runs e2fsck (as
a result, the target must be offline) to build the per-UID/GID disk usage
database.</para>
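<para>For example (the device name is illustrative), run the following on
each target while it is offline:</para>
<screen>$ tunefs.lustre --quota /dev/sda</screen>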
<para condition="l2A">Lustre file systems formatted with a Lustre release
prior to 2.10 can still be safely upgraded to release 2.10, but will not
207 have project quota usage reporting functional until
208 <literal>tune2fs -O project</literal> is run against all ldiskfs backend
209 targets. This command sets the PROJECT feature flag in the superblock and
210 runs e2fsck (as a result, the target must be offline). See
211 <xref linkend="quota_interoperability"/> for further important
212 considerations.</para>
<para>Lustre software release 2.4 and later requires a version of
e2fsprogs that supports quota (version 1.42.13.wc5 or newer;
1.42.13.wc6 or newer is needed for project quota support) to be
installed on the server nodes using ldiskfs backend (e2fsprogs is not
needed with ZFS backend). In general, we recommend using the latest
e2fsprogs version available at
<link xl:href="http://downloads.whamcloud.com/public/e2fsprogs/">
http://downloads.whamcloud.com/public/e2fsprogs/</link>.</para>
223 <para>The ldiskfs OSD relies on the standard Linux quota to maintain
224 accounting information on disk. As a consequence, the Linux kernel
225 running on the Lustre servers using ldiskfs backend must have
226 <literal>CONFIG_QUOTA</literal>,
227 <literal>CONFIG_QUOTACTL</literal> and
228 <literal>CONFIG_QFMT_V2</literal> enabled.</para>
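<para>One way to verify these options on a server (the config file path
assumes a standard distribution kernel):</para>
<screen>$ grep -E 'CONFIG_QUOTA=|CONFIG_QUOTACTL=|CONFIG_QFMT_V2=' /boot/config-$(uname -r)
CONFIG_QUOTA=y
CONFIG_QUOTACTL=y
CONFIG_QFMT_V2=y</screen>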
<para>As of Lustre software release 2.4.0, quota enforcement is thus
turned on/off independently of space accounting, which is always enabled.
<literal>lfs quota<replaceable>on|off</replaceable></literal> as well as the per-target
<literal>quota_type</literal> parameter are deprecated in favor of a
single per-file system quota parameter controlling inode/block quota
enforcement. Like all permanent parameters, this quota parameter can be
set with
<literal>lctl conf_param</literal> on the MGS via the following
command:</para>
<screen>lctl conf_param <replaceable>fsname</replaceable>.quota.<replaceable>ost|mdt</replaceable>=<replaceable>u|g|p|ugp|none</replaceable></screen>
<para>
<literal>ost</literal> -- to configure block quota managed by OSTs</para>
<para>
<literal>mdt</literal> -- to configure inode quota managed by MDTs</para>
<para>
<literal>u</literal> -- to enable quota enforcement for users only</para>
<para>
<literal>g</literal> -- to enable quota enforcement for groups only</para>
<para>
<literal>p</literal> -- to enable quota enforcement for projects only</para>
<para>
<literal>ugp</literal> -- to enable quota enforcement for all users,
groups and projects</para>
<para>
<literal>none</literal> -- to disable quota enforcement for all users,
groups and projects</para>
280 <para>Examples:</para>
281 <para>To turn on user, group, and project quotas for block only on
283 <literal>testfs1</literal>, <emphasis>on the MGS</emphasis> run:</para>
<screen>$ lctl conf_param testfs1.quota.ost=ugp</screen>
286 <para>To turn on group quotas for inodes on file system
287 <literal>testfs2</literal>, on the MGS run:</para>
<screen>$ lctl conf_param testfs2.quota.mdt=g</screen>
<para>To turn off user, group, and project quotas for both inode and block
usage on file system
<literal>testfs3</literal>, on the MGS run:</para>
<screen>$ lctl conf_param testfs3.quota.ost=none</screen>
<screen>$ lctl conf_param testfs3.quota.mdt=none</screen>
297 <section xml:id="quota_verification">
<title><indexterm>
<primary>Quotas</primary>
301 <secondary>verifying</secondary>
302 </indexterm>Quota Verification</title>
303 <para>Once the quota parameters have been configured, all targets
304 which are part of the file system will be automatically notified of the
305 new quota settings and enable/disable quota enforcement as needed. The
306 per-target enforcement status can still be verified by running the
307 following <emphasis>command on the MDS(s)</emphasis>:</para>
<screen>$ lctl get_param osd-*.*.quota_slave.info
osd-zfs.testfs-MDT0000.quota_slave.info=
target name:    testfs-MDT0000
pool ID:        0
type:           md
quota enabled:  ug
conn to master: setup
user uptodate:  glb[1],slv[1],reint[0]
group uptodate: glb[1],slv[1],reint[0]</screen>
322 <section xml:id="quota_administration">
<title><indexterm>
<primary>Quotas</primary>
326 <secondary>creating</secondary>
327 </indexterm>Quota Administration</title>
328 <para>Once the file system is up and running, quota limits on blocks
329 and inodes can be set for user, group, and project. This is <emphasis>
controlled entirely from a client</emphasis> via three quota
parameters:</para>
<para>
<emphasis role="bold">Grace period</emphasis> -- The period of time (in
seconds) within which users are allowed to exceed their soft limit. There
are six types of grace periods:</para>
338 <para>user block soft limit</para>
341 <para>user inode soft limit</para>
344 <para>group block soft limit</para>
347 <para>group inode soft limit</para>
350 <para>project block soft limit</para>
353 <para>project inode soft limit</para>
<para>Each grace period applies to all users of the corresponding quota
type; for example, the user block grace period applies to every user with
a block soft limit, and cannot be set per user.</para>
<para>
<emphasis role="bold">Soft limit</emphasis> -- The grace timer is started
once the soft limit is exceeded. At this point, the user/group/project
can still allocate blocks and inodes. When the grace time expires, if the
user is still above the soft limit, the soft limit becomes a hard limit
and the user/group/project cannot allocate any new blocks or inodes.
The user/group/project should then delete files to get back under the
soft limit. The soft limit MUST be smaller than the hard limit. If the
soft limit is not needed, it should be set to zero (0).</para>
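<para>For example, a sketch of setting the user block and inode grace
periods to one week (604800 seconds; the mount point is
illustrative):</para>
<screen>$ lfs setquota -t -u -b 604800 -i 604800 /mnt/testfs</screen>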
<para>
<emphasis role="bold">Hard limit</emphasis> -- Block or inode allocation
will fail with
<literal>EDQUOT</literal> (i.e., quota exceeded) when the hard limit is
reached. The hard limit is the absolute limit. When a grace period is set,
one can exceed the soft limit within the grace period if under the hard
limit.</para>
374 <para>Due to the distributed nature of a Lustre file system and the need to
375 maintain performance under load, those quota parameters may not be 100%
accurate. The quota settings can be manipulated via the
<literal>lfs</literal> command, executed on a client, which includes
several options to work with quotas:</para>
382 <varname>quota</varname> -- displays general quota information (disk
383 usage and limits)</para>
387 <varname>setquota</varname> -- specifies quota limits and tunes the
388 grace period. By default, the grace period is one week.</para>
393 lfs quota [-q] [-v] [-h] [-o obd_uuid] [-u|-g|-p <replaceable>uname|uid|gname|gid|projid</replaceable>] <replaceable>/mount_point</replaceable>
394 lfs quota -t {-u|-g|-p} <replaceable>/mount_point</replaceable>
lfs setquota {-u|--user|-g|--group|-p|--project} <replaceable>username|groupname|projid</replaceable> [-b <replaceable>block_softlimit</replaceable>] \
396 [-B <replaceable>block_hardlimit</replaceable>] [-i <replaceable>inode_softlimit</replaceable>] \
397 [-I <replaceable>inode_hardlimit</replaceable>] <replaceable>/mount_point</replaceable>
399 <para>To display general quota information (disk usage and limits) for the
user running the command and their primary group, run:</para>
402 $ lfs quota /mnt/testfs
404 <para>To display general quota information for a specific user ("
405 <literal>bob</literal>" in this example), run:</para>
407 $ lfs quota -u bob /mnt/testfs
409 <para>To display general quota information for a specific user ("
410 <literal>bob</literal>" in this example) and detailed quota statistics for
411 each MDT and OST, run:</para>
413 $ lfs quota -u bob -v /mnt/testfs
415 <para>To display general quota information for a specific project ("
416 <literal>1</literal>" in this example), run:</para>
418 $ lfs quota -p 1 /mnt/testfs
420 <para>To display general quota information for a specific group ("
421 <literal>eng</literal>" in this example), run:</para>
423 $ lfs quota -g eng /mnt/testfs
425 <para>To limit quota usage for a specific project ID on a specific
426 directory ("<literal>/mnt/testfs/dir</literal>" in this example), run:</para>
428 $ chattr +P /mnt/testfs/dir
429 $ chattr -p 1 /mnt/testfs/dir
430 $ lfs setquota -p 1 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
<para>Note that for
<literal>lfs quota -p</literal> to properly show the space/inode usage
under the directory (much faster than <literal>du</literal>), the
user/admin needs to use a distinct project ID for each directory.</para>
437 <para>To display block and inode grace times for user quotas, run:</para>
439 $ lfs quota -t -u /mnt/testfs
441 <para>To set user or group quotas for a specific ID ("bob" in this
442 example), run:</para>
444 $ lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
<para>In this example, the block soft limit for user "bob" is set to
300 MB (307200 KB), the block hard limit to 309200 KB, the inode soft
limit to 10,000 files, and the inode hard limit to 11,000 files.</para>
449 <para>The quota command displays the quota allocated and consumed by each
450 Lustre target. Using the previous
451 <literal>setquota</literal> example, running this
452 <literal>lfs</literal> quota command:</para>
454 $ lfs quota -u bob -v /mnt/testfs
456 <para>displays this command output:</para>
458 Disk quotas for user bob (uid 6000):
459 Filesystem kbytes quota limit grace files quota limit grace
460 /mnt/testfs 0 30720 30920 - 0 10000 11000 -
461 testfs-MDT0000_UUID 0 - 8192 - 0 - 2560 -
462 testfs-OST0000_UUID 0 - 8192 - 0 - 0 -
463 testfs-OST0001_UUID 0 - 8192 - 0 - 0 -
464 Total allocated inode limit: 2560, total allocated block limit: 24576
466 <para>Global quota limits are stored in dedicated index files (there is one
467 such index per quota type) on the quota master target (aka QMT). The QMT
runs on MDT0000 and exports the global indices via <literal>lctl
get_param</literal>. The global indices can thus be dumped via the
following command:
<screen># lctl get_param qmt.testfs-QMT0000.*.glb-*</screen>
The format of the global indices depends on the OSD type. The ldiskfs OSD
uses IAM files while the ZFS OSD creates dedicated ZAPs.</para>
475 <para>Each slave also stores a copy of this global index locally. When the
476 global index is modified on the master, a glimpse callback is issued on the
477 global quota lock to notify all slaves that the global index has been
478 modified. This glimpse callback includes information about the identifier
479 subject to the change. If the global index on the QMT is modified while a
480 slave is disconnected, the index version is used to determine whether the
slave copy of the global index is no longer up to date. If so, the slave
fetches the whole index again and updates the local copy. The slave copy of
the global index can also be accessed via the following command:
<screen>lctl get_param osd-*.*.quota_slave.limit*</screen></para>
<para>Prior to release 2.4, global quota limits were stored in
administrative quota files using the on-disk format of the Linux quota
file. When upgrading MDT0000 to release 2.4, those administrative quota
files are converted into IAM indexes automatically, preserving existing
quota limits previously set by the administrator.</para>
495 <section xml:id="quota_allocation">
<title><indexterm>
<primary>Quotas</primary>
499 <secondary>allocating</secondary>
500 </indexterm>Quota Allocation</title>
501 <para>In a Lustre file system, quota must be properly allocated or users
502 may experience unnecessary failures. The file system block quota is divided
503 up among the OSTs within the file system. Each OST requests an allocation
504 which is increased up to the quota limit. The quota allocation is then
<emphasis role="italic">quantized</emphasis> to reduce
quota-related request traffic.</para>
507 <para>The Lustre quota system distributes quotas from the Quota Master
508 Target (aka QMT). Only one QMT instance is supported for now and only runs
509 on the same node as MDT0000. All OSTs and MDTs set up a Quota Slave Device
510 (aka QSD) which connects to the QMT to allocate/release quota space. The
QSD is set up directly from the OSD layer.</para>
512 <para>To reduce quota requests, quota space is initially allocated to QSDs
513 in very large chunks. How much unused quota space can be held by a target
514 is controlled by the qunit size. When quota space for a given ID is close
515 to exhaustion on the QMT, the qunit size is reduced and QSDs are notified
516 of the new qunit size value via a glimpse callback. Slaves are then
517 responsible for releasing quota space above the new qunit value. The qunit
518 size isn't shrunk indefinitely and there is a minimal value of 1MB for
519 blocks and 1,024 for inodes. This means that the quota space rebalancing
process will stop when this minimum value is reached. As a result, a
quota exceeded error can be returned while many slaves still have 1MB or
1,024 inodes of spare quota space.</para>
523 <para>If we look at the
524 <literal>setquota</literal> example again, running this
525 <literal>lfs quota</literal> command:</para>
527 # lfs quota -u bob -v /mnt/testfs
529 <para>displays this command output:</para>
531 Disk quotas for user bob (uid 500):
532 Filesystem kbytes quota limit grace files quota limit grace
533 /mnt/testfs 30720* 30720 30920 6d23h56m44s 10101* 10000 11000
535 testfs-MDT0000_UUID 0 - 0 - 10101 - 10240
536 testfs-OST0000_UUID 0 - 1024 - - - -
537 testfs-OST0001_UUID 30720* - 29896 - - - -
538 Total allocated inode limit: 10240, total allocated block limit: 30920
540 <para>The total quota limit of 30,920 is allocated to user bob, which is
541 further distributed to two OSTs.</para>
542 <para>Values appended with '
543 <literal>*</literal>' show that the quota limit has been exceeded, causing
544 the following error when trying to write or create a file:</para>
cp: writing `/mnt/testfs/foo': Disk quota exceeded.
<para>It is very important to note that the block quota is consumed per
OST and the inode quota per MDT. Therefore, when the quota is exhausted on
one OST (or MDT), the client may not be able to write data (or create
files) regardless of the quota available on other OSTs (or MDTs).</para>
<para>Setting the quota limit below the minimal qunit size may prevent
the user/group/project from creating any files. It is thus recommended to
use soft/hard limits which are a multiple of the number of OSTs times the
minimal qunit size.</para>
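<para>For example, on a file system with four OSTs (an illustrative
count), block limits should be at least 4 * 1 MB; the limits below stay
well above that floor:</para>
<screen>$ lfs setquota -g eng -b 102400 -B 204800 /mnt/testfs</screen>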
<para>To determine the total number of inodes, use
<literal>lfs df -i</literal> (and also
<literal>lctl get_param *.*.filestotal</literal>). For more information on
the <literal>lfs df -i</literal> command and the command output, see
<xref linkend="dbdoclet.checking_free_space" />.</para>
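<para>For example (output abridged and illustrative):</para>
<screen>$ lfs df -i /mnt/testfs
UUID                      Inodes       IUsed       IFree IUse% Mounted on
testfs-MDT0000_UUID      2097152       62767     2034385   3% /mnt/testfs[MDT:0]
testfs-OST0000_UUID       524288       46175      478113   9% /mnt/testfs[OST:0]

filesystem_summary:      2097152       62767     2034385   3% /mnt/testfs</screen>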
566 <para>Unfortunately, the
567 <literal>statfs</literal> interface does not report the free inode count
568 directly, but instead reports the total inode and used inode counts. The
569 free inode count is calculated for
<literal>df</literal> from (total inodes - used inodes). It is not critical
to know the total inode count for a file system. Instead, you should
accurately know the free inode count and the used inode count for a file
system. The Lustre software manipulates the total inode count in order to
accurately report the other two values.</para>
576 <section xml:id="quota_interoperability">
<title><indexterm>
<primary>Quotas</primary>
580 <secondary>Interoperability</secondary>
581 </indexterm>Quotas and Version Interoperability</title>
582 <para>The new quota protocol introduced in Lustre software release 2.4.0
583 <emphasis role="bold">is not compatible</emphasis> with previous
584 versions. As a consequence,
585 <emphasis role="bold">all Lustre servers must be upgraded to release 2.4.0
586 for quota to be functional</emphasis>. Quota limits set on the Lustre file
587 system prior to the upgrade will be automatically migrated to the new quota
index format. Accounting information with the ldiskfs backend will
be regenerated by running
<literal>tunefs.lustre --quota</literal> against all targets. It is worth
noting that running
<literal>tunefs.lustre --quota</literal> is
<emphasis role="bold">mandatory</emphasis> for all targets formatted with a
Lustre software release older than release 2.4.0, otherwise quota
enforcement as well as accounting won't be functional.</para>
<para>In addition, the quota protocol in release 2.4 assumes that the
Lustre client supports the
<literal>OBD_CONNECT_EINPROGRESS</literal> connect flag. Clients supporting
this flag will retry indefinitely when the server returns
<literal>EINPROGRESS</literal> in a reply. Here is the list of Lustre client
versions which are compatible with release 2.4:</para>
604 <para>Release 2.3-based clients and later</para>
607 <para>Release 1.8 clients newer or equal to release 1.8.9-wc1</para>
610 <para>Release 2.1 clients newer or equal to release 2.1.4</para>
613 <para condition="l2A">To use the project quota functionality introduced in
614 Lustre 2.10, <emphasis role="bold">all Lustre servers and clients must be
615 upgraded to Lustre release 2.10 or later for project quota to work
616 correctly</emphasis>. Otherwise, project quota will be inaccessible on
clients and not be accounted for on OSTs. Furthermore, the
<emphasis role="bold">servers may be required to use a patched
kernel;</emphasis> for more information, see
<xref linkend="enabling_disk_quota_after24"/>.</para>
622 <section xml:id="granted_cache_and_quota_limits">
<title><indexterm>
<primary>Quotas</primary>
626 <secondary>known issues</secondary>
627 </indexterm>Granted Cache and Quota Limits</title>
<para>In a Lustre file system, granted cache does not respect quota limits.
In this situation, OSTs grant cache to a Lustre client to accelerate I/O.
Granting cache causes writes to succeed on the client even if they exceed
the quota limits, so the limits can be overrun.</para>
632 <para>The sequence is:</para>
635 <para>A user writes files to the Lustre file system.</para>
638 <para>If the Lustre client has enough granted cache, then it returns
639 'success' to users and arranges the writes to the OSTs.</para>
642 <para>Because Lustre clients have delivered success to users, the OSTs
643 cannot fail these writes.</para>
<para>Because of granted cache, writes can overrun quota limits.
For example, if you set a 400 GB quota on user A and use IOR to write for
user A from a bundle of clients, you will write much more data than 400 GB,
and cause an out-of-quota error (
<literal>EDQUOT</literal>).</para>
652 <para>The effect of granted cache on quota limits can be mitigated, but
not eradicated. Reduce the maximum amount of dirty data on the clients
(minimum value is 1MB):</para>
658 <literal>lctl set_param osc.*.max_dirty_mb=8</literal>
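<para>To verify the setting on a client (the device instance name in the
output is illustrative):</para>
<screen>$ lctl get_param osc.*.max_dirty_mb
osc.testfs-OST0000-osc-ffff8800b52a8000.max_dirty_mb=8</screen>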
664 <section xml:id="lustre_quota_statistics">
<title><indexterm>
<primary>Quotas</primary>
668 <secondary>statistics</secondary>
669 </indexterm>Lustre Quota Statistics</title>
670 <para>The Lustre software includes statistics that monitor quota activity,
671 such as the kinds of quota RPCs sent during a specific period, the average
672 time to complete the RPCs, etc. These statistics are useful to measure
673 performance of a Lustre file system.</para>
674 <para>Each quota statistic consists of a quota event and
675 <literal>min_time</literal>,
676 <literal>max_time</literal> and
677 <literal>sum_time</literal> values for the event.</para>
<informaltable frame="all">
<tgroup cols="2">
<colspec colname="c1" colwidth="50*" />
<colspec colname="c2" colwidth="50*" />
<thead>
<row>
<entry><para><emphasis role="bold">Quota Event</emphasis></para></entry>
<entry><para><emphasis role="bold">Description</emphasis></para></entry>
</row>
</thead>
<tbody>
<row>
<entry><para><emphasis role="bold">sync_acq_req</emphasis></para></entry>
<entry><para>Quota slaves send an acquiring_quota request and wait for
its return.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">sync_rel_req</emphasis></para></entry>
<entry><para>Quota slaves send a releasing_quota request and wait for
its return.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">async_acq_req</emphasis></para></entry>
<entry><para>Quota slaves send an acquiring_quota request and do not
wait for its return.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">async_rel_req</emphasis></para></entry>
<entry><para>Quota slaves send a releasing_quota request and do not wait
for its return.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">wait_for_blk_quota
(lquota_chkquota)</emphasis></para></entry>
<entry><para>Before data is written to OSTs, the OSTs check if the
remaining block quota is sufficient. This is done in the
lquota_chkquota function.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">wait_for_ino_quota
(lquota_chkquota)</emphasis></para></entry>
<entry><para>Before files are created on the MDS, the MDS checks if the
remaining inode quota is sufficient. This is done in the
lquota_chkquota function.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">wait_for_blk_quota
(lquota_pending_commit)</emphasis></para></entry>
<entry><para>After blocks are written to OSTs, relative quota
information is updated. This is done in the lquota_pending_commit
function.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">wait_for_ino_quota
(lquota_pending_commit)</emphasis></para></entry>
<entry><para>After files are created, relative quota information is
updated. This is done in the lquota_pending_commit
function.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">wait_for_pending_blk_quota_req
(qctxt_wait_pending_dqacq)</emphasis></para></entry>
<entry><para>On the MDS or OSTs, there is one thread sending a quota
request for a specific UID/GID for block quota at any time. At
that time, if other threads need to do this too, they should
wait. This is done in the qctxt_wait_pending_dqacq
function.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">wait_for_pending_ino_quota_req
(qctxt_wait_pending_dqacq)</emphasis></para></entry>
<entry><para>On the MDS, there is one thread sending a quota request for
a specific UID/GID for inode quota at any time. If other threads
need to do this too, they should wait. This is done in the
qctxt_wait_pending_dqacq function.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">nowait_for_pending_blk_quota_req
(qctxt_wait_pending_dqacq)</emphasis></para></entry>
<entry><para>On the MDS or OSTs, there is one thread sending a quota
request for a specific UID/GID for block quota at any time. When
threads enter qctxt_wait_pending_dqacq, they do not need to wait.
This is done in the qctxt_wait_pending_dqacq function.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">nowait_for_pending_ino_quota_req
(qctxt_wait_pending_dqacq)</emphasis></para></entry>
<entry><para>On the MDS, there is one thread sending a quota request for
a specific UID/GID for inode quota at any time. When threads
enter qctxt_wait_pending_dqacq, they do not need to wait. This is
done in the qctxt_wait_pending_dqacq function.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">quota_ctl</emphasis></para></entry>
<entry><para>The quota_ctl statistic is generated when
<literal>lfs setquota</literal>,
<literal>lfs quota</literal> and so on, are issued.</para></entry>
</row>
<row>
<entry><para><emphasis role="bold">adjust_qunit</emphasis></para></entry>
<entry><para>Each time qunit is adjusted, it is counted.</para></entry>
</row>
</tbody>
</tgroup>
</informaltable>
876 <title>Interpreting Quota Statistics</title>
877 <para>Quota statistics are an important measure of the performance of a
878 Lustre file system. Interpreting these statistics correctly can help you
879 diagnose problems with quotas, and may indicate adjustments to improve
880 system performance.</para>
881 <para>For example, if you run this command on the OSTs:</para>
<screen>lctl get_param lquota.testfs-OST0000.stats</screen>
885 <para>You will get a result similar to this:</para>
<screen>snapshot_time                                    1219908615.506895 secs.usecs
async_acq_req                      1 samples [us]  32 32 32
async_rel_req                      1 samples [us]  5 5 5
nowait_for_pending_blk_quota_req(qctxt_wait_pending_dqacq) 1 samples [us] 2 2 2
quota_ctl                          4 samples [us]  80 3470 4293
adjust_qunit                       1 samples [us]  70 70 70</screen>
<para>In the first line,
<literal>snapshot_time</literal> indicates when the statistics were taken.
The remaining lines list the quota events and their associated
data.</para>
<para>In the second line, the
<literal>async_acq_req</literal> event occurred once. The
<literal>min_time</literal>,
<literal>max_time</literal> and
<literal>sum_time</literal> statistics for this event are 32, 32 and 32,
respectively. The unit is microseconds (μs).</para>
<para>In the fifth line, the
<literal>quota_ctl</literal> event occurred four times. The
<literal>min_time</literal>,
<literal>max_time</literal> and
<literal>sum_time</literal> statistics for this event are 80, 3470 and
4293, respectively. The unit is microseconds (μs).</para>