From 71626556510627a391b1ff5b7316eefae5053a44 Mon Sep 17 00:00:00 2001
From: Richard Henwood
Date: Wed, 18 May 2011 13:01:24 -0500
Subject: [PATCH] FIX: xrefs and tidying

---
 LustreProgrammingInterfaces.xml |  87 +++++++-----------------
 UserUtilities.xml               | 142 ++++++++++++----------------------------
 2 files changed, 66 insertions(+), 163 deletions(-)

diff --git a/LustreProgrammingInterfaces.xml b/LustreProgrammingInterfaces.xml
index 4f59b27..be1bc08 100644
--- a/LustreProgrammingInterfaces.xml
+++ b/LustreProgrammingInterfaces.xml
@@ -1,50 +1,30 @@
-      Lustre Programming Interfaces
+      Lustre Programming Interfaces
       This chapter describes public programming interfaces to control various aspects of Lustre from userspace. These interfaces are generally not guaranteed to remain unchanged over time, although we will make an effort to notify the user community well in advance of major changes. This chapter includes the following sections:
       User/Group Cache Upcall
       l_getgroups Utility
-      Note: Lustre programming interface man pages are found in the lustre/doc folder.
- <anchor xml:id="dbdoclet.50438291_pgfId-1293216" xreflabel=""/> -
- 33.1 <anchor xml:id="dbdoclet.50438291_32926" xreflabel=""/>User/Group <anchor xml:id="dbdoclet.50438291_marker-1293215" xreflabel=""/>Cache Upcall + + Lustre programming interface man pages are found in the lustre/doc folder. + +
+ 33.1 User/Group <anchor xml:id="dbdoclet.50438291_marker-1293215" xreflabel=""/>Cache Upcall This section describes user and group upcall. - - - - - - Note -For information on a universal UID/GID, see Environmental Requirements. - - - - + For information on a universal UID/GID, see Environmental Requirements. +
      <anchor xml:id="dbdoclet.50438291_pgfId-1293218" xreflabel=""/>33.1.1 Name
-      Use /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall to look up a given user’s group membership.
+      Use /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall to look up a given user's group membership.
      <anchor xml:id="dbdoclet.50438291_pgfId-1293220" xreflabel=""/>33.1.2 Description
@@ -56,33 +36,23 @@
      The MDS issues an upcall (set per MDS) to map the numeric UID to the supplementary group(s).
      If there is no upcall, or if there is an upcall and it fails, supplementary groups will be added as supplied by the client (as they are now).
      The default upcall is /usr/sbin/l_getidentity, which can interact with the user/group database to obtain UID/GID/suppgid. The user/group database depends on authentication configuration, and can be local /etc/passwd, NIS, LDAP, etc. If necessary, the administrator can use a parse utility to set /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall. If the upcall interface is set to NONE, then the upcall is disabled. The MDS uses the UID/GID/suppgid supplied by the client.
      The default group upcall is set by mkfs.lustre. Use tunefs.lustre --param or echo {path} > /proc/fs/lustre/mds/{mdsname}/group_upcall to change it.
      The Lustre administrator can specify permissions for a specific UID by configuring /etc/lustre/perm.conf on the MDS. As commented in lustre/utils/l_getidentity.c:
      /* permission file format is like this:
       * {nid} {uid} {perms}
       *
       * '*' nid means any nid
       * '*' uid means any uid
       * the valid values for perms are:
       * setuid/setgid/setgrp/rmtacl        -- enable corresponding perm
       * nosetuid/nosetgid/nosetgrp/normtacl -- disable corresponding perm
       * they can be listed together, separated by ',',
       * for example: 'setuid,setgid,rmtacl', that means any other settings are false;
       * or 'nosetuid,nosetgid,nosetgrp,normtacl', that means any other settings are true
       * nid is preferential to '*' nid, uid is preferential to '*' uid, '*' nid + uid is preferential,
       * '*' nid is as default perm, and is not preferential. */
      To avoid repeated upcalls, the MDS caches supplemental group information. Use /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_expire to set the cache time (default is 600 seconds). The kernel waits for the upcall to complete (at most, 5 seconds) and takes the "failure" behavior as described. Set the wait time in /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_acquire_expire (default is 15 seconds). Cached entries are flushed by writing to /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_flush.
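      The tunables described above can be driven directly from a shell on the MDS. This is only an illustrative sketch: the file system name testfs and the target index MDT0000 are placeholders, not values taken from this patch.
      $ echo /usr/sbin/l_getidentity > /proc/fs/lustre/mdt/testfs-MDT0000/identity_upcall   # install the default upcall
      $ echo NONE > /proc/fs/lustre/mdt/testfs-MDT0000/identity_upcall                      # disable the upcall; the MDS then uses client-supplied groups
      $ echo 600 > /proc/fs/lustre/mdt/testfs-MDT0000/identity_expire                       # cache supplementary group data for 600 seconds
      $ echo 500 > /proc/fs/lustre/mdt/testfs-MDT0000/identity_flush                        # writing to this file flushes cached entries (the UID written here is illustrative)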
@@ -107,15 +75,11 @@ preferential,* '*' nid is as default perm, and is not preferential.*/ Name of the MDS service - - - + Numeric UID - - - +
@@ -133,8 +97,8 @@ preferential,* '*' nid is as default perm, and is not preferential.*/
-
- 33.2 l_getgroups<anchor xml:id="dbdoclet.50438291_73963" xreflabel=""/><anchor xml:id="dbdoclet.50438291_marker-1294565" xreflabel=""/> Utility +
+ 33.2 l_getgroups<anchor xml:id="dbdoclet.50438291_marker-1294565" xreflabel=""/> Utility The l_getgroups utility handles Lustre user/group cache upcall.
<anchor xml:id="dbdoclet.50438291_pgfId-1294568" xreflabel=""/>Synopsis @@ -151,6 +115,5 @@ preferential,* '*' nid is as default perm, and is not preferential.*/ <anchor xml:id="dbdoclet.50438291_pgfId-1294574" xreflabel=""/>Files /proc/fs/lustre/mds/mds-service/group_upcall
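      As a hedged illustration of the files listed above (testfs-MDT0000 stands in for the actual mds-service name), the reference upcall can be registered, or disabled again, by writing to group_upcall:
      $ echo /usr/sbin/l_getgroups > /proc/fs/lustre/mds/testfs-MDT0000/group_upcall   # use l_getgroups as the group cache upcall
      $ echo NONE > /proc/fs/lustre/mds/testfs-MDT0000/group_upcall                    # disable the upcall (assumption: NONE disables it, as for identity_upcall above)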
-
diff --git a/UserUtilities.xml b/UserUtilities.xml index 9188c72..d5b71d3 100644 --- a/UserUtilities.xml +++ b/UserUtilities.xml @@ -1,50 +1,36 @@ - + - User Utilities + User Utilities This chapter describes user utilities and includes the following sections: - lfs + + - + + - lfs_migrate + + - + + - lfsck + + - - - - Filefrag - - - - - - Mount - - - - - - Handling Timeouts - - - + + -
- <anchor xml:id="dbdoclet.50438206_pgfId-1305210" xreflabel=""/> -
- 32.1 <anchor xml:id="dbdoclet.50438206_94597" xreflabel=""/>l<anchor xml:id="dbdoclet.50438206_marker-1305209" xreflabel=""/>fs +
+ 32.1 <anchor xreflabel=""/>l<anchor xml:id="dbdoclet.50438206_marker-1305209" xreflabel=""/>fs The lfs utility can be used for user configuration routines and monitoring.
<anchor xml:id="dbdoclet.50438206_pgfId-1305212" xreflabel=""/>Synopsis @@ -98,26 +84,8 @@ g <gname>| -g <gid>] <filesystem> <filesystem> lfs help - - - - - - Note -In the above example, the <filesystem> parameter refers to the mount point of the Lustre file system. The default mount point is /mnt/lustre - - - - - - - - - - Note -The old lfs quota output was very detailed and contained cluster-wide quota statistics (including cluster-wide limits for a user/group and cluster-wide usage for a user/group), as well as statistics for each MDS/OST. Now, lfs quota has been updated to provide only cluster-wide statistics, by default. To obtain the full report of cluster-wide limits, usage and statistics, use the -v option with lfs quota. - - - - + In the above example, the <filesystem> parameter refers to the mount point of the Lustre file system. The default mount point is /mnt/lustre + The old lfs quota output was very detailed and contained cluster-wide quota statistics (including cluster-wide limits for a user/group and cluster-wide usage for a user/group), as well as statistics for each MDS/OST. Now, lfs quota has been updated to provide only cluster-wide statistics, by default. To obtain the full report of cluster-wide limits, usage and statistics, use the -v option with lfs quota.
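      To illustrate the note above, the full report of per-MDS/OST limits and usage can be requested by adding -v (the user name and mount point are simply the ones used elsewhere in this chapter):
      $ lfs quota -v -u bob /mnt/lustre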
<anchor xml:id="dbdoclet.50438206_pgfId-1305262" xreflabel=""/>Description @@ -234,7 +202,7 @@ g <gname>| -g <gid>] <filesystem>   --quiet - Lists details about the file’s object ID information. + Lists details about the file's object ID information.   @@ -288,7 +256,7 @@ g <gname>| -g <gid>] <filesystem>   --size stripe_sizeThe default stripe-size is 0. The default start-ost is -1. Do NOT confuse them! If you set start-ost to 0, all new file creations occur on OST 0 (seldom a good idea).  - Number of bytes to store on an OST before moving to the next OST. A stripe_size of 0 uses the file system’s default stripe size, (default is 1 MB). Can be specified with k (KB), m (MB), or g (GB), respectively. + Number of bytes to store on an OST before moving to the next OST. A stripe_size of 0 uses the file system's default stripe size, (default is 1 MB). Can be specified with k (KB), m (MB), or g (GB), respectively.   @@ -306,7 +274,7 @@ g <gname>| -g <gid>] <filesystem> poollist {filesystem} [.poolname]|{pathname} - Lists pools in the file system or pathname, or OSTs in the file system’s pool. + Lists pools in the file system or pathname, or OSTs in the file system's pool. quota [-q] [-v] [-o obd_uuid|-i mdt_idx|-I ost_idx] [-u|-g <uname>|<uid>|<gname>|<gid>] <filesystem>  @@ -318,7 +286,7 @@ g <gname>| -g <gid>] <filesystem> quotachown - Changes the file’s owner and group on OSTs of the specified file system. + Changes the file's owner and group on OSTs of the specified file system. quotacheck [-ugf] <filesystem>  @@ -338,11 +306,11 @@ g <gname>| -g <gid>] <filesystem> setquota <-u|-g> <uname>|<uid>|<gname>|<gid> [--block-softlimit <block-softlimit>] [--block-hardlimit <block-hardlimit>] [--inode-softlimit <inode-softlimit>] [--inode-hardlimit <inode-hardlimit>] <filesystem> - Sets file system quotas for users or groups. Limits can be specified with --{block|inode}-{softlimit|hardlimit} or their short equivalents -b, -B, -i, -I. Users can set 1, 2, 3 or 4 limits.The old setquota interface is supported, but it may be removed in a future Lustre release. Also, limits can be specified with special suffixes, -b, -k, -m, -g, -t, and -p to indicate units of 1, 2^10, 2^20, 2^30, 2^40 and 2^50, respectively. By default, the block limits unit is 1 kilobyte (1,024), and block limits are always kilobyte-grained (even if specified in bytes). See Examples. + Sets file system quotas for users or groups. Limits can be specified with --{block|inode}-{softlimit|hardlimit} or their short equivalents -b, -B, -i, -I. Users can set 1, 2, 3 or 4 limits.The old setquota interface is supported, but it may be removed in a future Lustre release. Also, limits can be specified with special suffixes, -b, -k, -m, -g, -t, and -p to indicate units of 1, 2^10, 2^20, 2^30, 2^40 and 2^50, respectively. By default, the block limits unit is 1 kilobyte (1,024), and block limits are always kilobyte-grained (even if specified in bytes). See . setquota -t <-u|-g>[--block-grace <block-grace>][--inode-grace <inode-grace>] <filesystem> - Sets the file system quota grace times for users or groups. Grace time is specified in “XXwXXdXXhXXmXXs†format or as an integer seconds value. See Examples. + Sets the file system quota grace times for users or groups. Grace time is specified in 'XXwXXdXXhXXmXXs†format or as an integer seconds value. See . help @@ -388,7 +356,7 @@ g <gname>| -g <gid>] <filesystem> List space or inode usage for a specific OST pool. $ lfs df --pool <filesystem>[.<pool>] | <pathname> - List quotas of user ‘bob’. 
+ List quotas of user 'bob'. $ lfs quota -u bob /mnt/lustre Show grace times for user quotas on /mnt/lustre. @@ -406,7 +374,7 @@ g <gname>| -g <gid>] <filesystem> Turns off quotas of user and group. $ lfs quotaoff -ug /mnt/lustre - Sets quotas of user ‘bob’, with a 1 GB block quota hardlimit and a 2 GB block quota softlimit. + Sets quotas of user 'bob', with a 1 GB block quota hardlimit and a 2 GB block quota softlimit. $ lfs setquota -u bob --block-softlimit 2000000 --block-hardlimit 1000000 /\ mnt/lustre @@ -443,8 +411,8 @@ mnt/lustre lctl
-
- 32.2 <anchor xml:id="dbdoclet.50438206_42260" xreflabel=""/>lfs_migrate +
+ 32.2 lfs_migrate The lfs_migrate utility is a simple tool to migrate files between Lustre OSTs.
<anchor xml:id="dbdoclet.50438206_pgfId-1305773" xreflabel=""/>Synopsis @@ -516,11 +484,11 @@ mnt/lustre
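      A hedged usage sketch (the path is a placeholder): file names can be fed to lfs_migrate on standard input, here produced by lfs find, with -y answering the confirmation prompt so the run is non-interactive:
      $ lfs find /mnt/lustre/dir1 -type f | lfs_migrate -y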
<anchor xml:id="dbdoclet.50438206_pgfId-1305837" xreflabel=""/>See Also - lfs +
-
- 32.3 <anchor xml:id="dbdoclet.50438206_91700" xreflabel=""/>lf<anchor xml:id="dbdoclet.50438206_marker-1305843" xreflabel=""/>sck +
+ 32.3 lf<anchor xml:id="dbdoclet.50438206_marker-1305843" xreflabel=""/>sck Lfsck ensures that objects are not referenced by multiple MDS files, that there are no orphan objects on the OSTs (objects that do not have any file on the MDS which references them), and that all of the objects referenced by the MDS exist. Under normal circumstances, Lustre maintains such coherency by distributed logging mechanisms, but under exceptional circumstances that may fail (e.g. disk failure, file system corruption leading to e2fsck repair). To avoid lengthy downtime, you can also run lfsck once Lustre is already started. The e2fsck utility is run on each of the local MDS and OST device file systems and verifies that the underlying ldiskfs is consistent. After e2fsck is run, lfsck does distributed coherency checking for the Lustre file system. In most cases, e2fsck is sufficient to repair any file system issues and lfsck is not required.
@@ -529,26 +497,8 @@ mnt/lustre [-n|--nofix] [-v|--verbose] --mdsdb mds_database_file --ostdb ost1_databas\ e_file [ost2_database_file...] <filesystem> - - - - - - Note -As shown, the <filesystem> parameter refers to the Lustre file system mount point. The default mount point is /mnt/lustre. - - - - - - - - - - Note -For lfsck, database filenames must be provided as absolute pathnames. Relative paths do not work, the databases cannot be properly opened. - - - - + As shown, the <filesystem> parameter refers to the Lustre file system mount point. The default mount point is /mnt/lustre. + For lfsck, database filenames must be provided as absolute pathnames. Relative paths do not work, the databases cannot be properly opened.
<anchor xml:id="dbdoclet.50438206_pgfId-1305851" xreflabel=""/>Options @@ -603,11 +553,11 @@ e_file [ost2_database_file...] <filesystem>
<anchor xml:id="dbdoclet.50438206_pgfId-1305910" xreflabel=""/>Description The lfsck utility is used to check and repair the distributed coherency of a Lustre file system. If an MDS or an OST becomes corrupt, run a distributed check on the file system to determine what sort of problems exist. Use lfsck to correct any defects found. - For more information on using e2fsck and lfsck, including examples, see Commit on Share. For information on resolving orphaned objects, see Working with Orphaned Objects. + For more information on using e2fsck and lfsck, including examples, see (Commit on Share). For information on resolving orphaned objects, see (Working with Orphaned Objects).
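      The following is a sketch of that workflow, not a prescription: the device names and database paths are placeholders, and the --mdsdb/--ostdb options to e2fsck assume the Lustre-distributed e2fsprogs. Note that the database files are given as absolute pathnames, as required by the note above.
      $ e2fsck -n -v --mdsdb /tmp/mdsdb /dev/sdb                         # on the MDS, build the MDS database
      $ e2fsck -n -v --mdsdb /tmp/mdsdb --ostdb /tmp/ostdb1 /dev/sdc     # on each OST, build an OST database
      $ lfsck -n -v --mdsdb /tmp/mdsdb --ostdb /tmp/ostdb1 /mnt/lustre   # on a client, check coherency read-only (-n)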
-
- 32.4 <anchor xml:id="dbdoclet.50438206_75125" xreflabel=""/>File<anchor xml:id="dbdoclet.50438206_marker-1305920" xreflabel=""/>frag +
+ 32.4 File<anchor xml:id="dbdoclet.50438206_marker-1305920" xreflabel=""/>frag The e2fsprogs package contains the filefrag tool which reports the extent of file fragmentation.
<anchor xml:id="dbdoclet.50438206_pgfId-1305923" xreflabel=""/>Synopsis @@ -617,16 +567,7 @@ e_file [ost2_database_file...] <filesystem>
      <anchor xml:id="dbdoclet.50438206_pgfId-1305925" xreflabel=""/>Description
      The filefrag utility reports the extent of fragmentation in a given file. Initially, filefrag attempts to obtain extent information using the FIEMAP ioctl, which is efficient and fast. If FIEMAP is not supported, then filefrag uses FIBMAP.
-      Note: Lustre only supports FIEMAP ioctl. FIBMAP ioctl is not supported.
+      Lustre only supports the FIEMAP ioctl. The FIBMAP ioctl is not supported.
      In default mode (which is faster than the verbose/extent mode), filefrag returns the number of physically discontiguous extents in the file. In extent or verbose mode, each extent is printed with details. For Lustre, the extents are printed in device offset order, not logical offset order.
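      For example (the file name is a placeholder), per-extent detail for a Lustre file can be requested in verbose mode:
      $ filefrag -v /mnt/lustre/testfile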
@@ -697,8 +638,8 @@ e_file [ost2_database_file...] <filesystem>
-
- 32.5 <anchor xml:id="dbdoclet.50438206_86244" xreflabel=""/>Mou<anchor xml:id="dbdoclet.50438206_marker-1305992" xreflabel=""/>nt +
+ 32.5 Mou<anchor xml:id="dbdoclet.50438206_marker-1305992" xreflabel=""/>nt Lustre uses the standard mount(8) Linux command. When mounting a Lustre file system, mount(8) executes the /sbin/mount.lustre command to complete the mount. The mount command supports these Lustre-specific options: @@ -753,8 +694,8 @@ e_file [ost2_database_file...] <filesystem>
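      As one hedged illustration (the MGS NID, file system name and mount point are placeholders, and the options shown are just examples of mount.lustre options), a client mount combining such options might look like:
      $ mount -t lustre -o flock,user_xattr mgsnode@tcp0:/testfs /mnt/lustre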
-
- 32.6 <anchor xml:id="dbdoclet.50438206_56217" xreflabel=""/>Handling <anchor xml:id="dbdoclet.50438206_marker-1306030" xreflabel=""/>Timeouts +
+ 32.6 Handling <anchor xml:id="dbdoclet.50438206_marker-1306030" xreflabel=""/>Timeouts Timeouts are the most common cause of hung applications. After a timeout involving an MDS or failover OST, applications attempting to access the disconnected resource wait until the connection gets established. When a client performs any remote operation, it gives the server a reasonable amount of time to respond. If a server does not reply either due to a down network, hung server, or any other reason, a timeout occurs which requires a recovery. If a timeout occurs, a message (similar to this one), appears on the console of the client, and in /var/log/messages: @@ -766,5 +707,4 @@ e_file [ost2_database_file...] <filesystem>
-
-- 1.8.3.1