From b3ebba69d05a6a7c2c9255b0794ad017db8ed3ce Mon Sep 17 00:00:00 2001 From: Andreas Dilger Date: Wed, 1 Mar 2017 03:16:12 -0700 Subject: [PATCH] LUDOC-11 utils: improve mount.lustre.8 option description Add the "mgssec=" mount option for SSK and Kerberos. Add the "skpath=pathname" mount option for SSK. Add the "lazystatfs" and "nolazystatfs" mount options. Add a description of mount-by-label and mount-by-UUID, with caveats. Improve the description of the "mgsnode" parameter. Improve the description of the "flock" and "noflock" options. Move the "acl" and "noacl" mount options from the client to the server section, since they've been long deprecated on the client. Fix whitespace. Signed-off-by: Andreas Dilger Change-Id: I2343df49270ff1f18b7bed20042eafff893ebbe5 Reviewed-on: https://review.whamcloud.com/25680 Tested-by: Jenkins Reviewed-by: Joseph Gmitter --- SystemConfigurationUtilities.xml | 323 +++++++++++++++++++++++++++------------ 1 file changed, 224 insertions(+), 99 deletions(-) diff --git a/SystemConfigurationUtilities.xml b/SystemConfigurationUtilities.xml index 033a363..0299cd0 100644 --- a/SystemConfigurationUtilities.xml +++ b/SystemConfigurationUtilities.xml @@ -602,13 +602,13 @@ $ lctl conf_param testfs.llite.max_read_ahead_mb=16 Registers a new changelog user for a particular device. - Changelog entries are saved persistently on the MDT with each - filesystem operation, and are only purged beyond all registered - user's minimum set point (see - lfs changelog_clear). This may cause the - Changelog to consume a large amount of space, eventually - filling the MDT, if a changelog user is registered but never - consumes those records. + Changelog entries are saved persistently on the MDT with each + filesystem operation, and are only purged beyond all registered + user's minimum set point (see + lfs changelog_clear). This may cause the + Changelog to consume a large amount of space, eventually + filling the MDT, if a changelog user is registered but never + consumes those records. @@ -617,9 +617,9 @@ $ lctl conf_param testfs.llite.max_read_ahead_mb=16 Unregisters an existing changelog user. If the - user's "clear" record number is the minimum for - the device, changelog records are purged until the next minimum. - + user's "clear" record number is the minimum for + the device, changelog records are purged until the next minimum. + @@ -1089,7 +1089,7 @@ llverdev - >-f|--force> + -f|--force Forces the test to run without a confirmation that the device will be overwritten and all data will be permanently destroyed. @@ -1105,7 +1105,7 @@ llverdev - -o >offset + -o offset Offset (in kilobytes) of the start of the test (default value is 0). @@ -1809,7 +1809,7 @@ mount.lustre The mount.lustre utility starts a Lustre client or target service.
Synopsis - mount -t lustre [-o options] directory + mount -t lustre [-o options] device mountpoint
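For illustration only (the filesystem name, MGS NID, label, and mount points below are hypothetical and not part of the patch), a client mount via the MGS and a target mount by label might look like:
$ mount -t lustre mgs@tcp0:/testfs /mnt/testfs        # client mount, contacting the MGS at mgs@tcp0
$ mount -t lustre -L testfs-MDT0000 /mnt/mdt          # server target mount, selecting the device by label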
@@ -1833,17 +1833,18 @@ mount.lustre - mgs_nid:/fsname[/subdir] -   + mgsname:/fsname[/subdir] Mounts the Lustre file system named - fsname (optionally under subdirectory - subdir if specified) on the client by - contacting the Management Service at mgsspec on - the pathname given by directory. The format - for mgsspec is defined below. A mounted client - file system appears in fstab(5) and is usable, like any local file + fsname (optionally starting at + subdirectory subdir within the + filesystem, if specified) on the client at the directory + mountpoint, by contacting the Lustre + Management Service at mgsname. The + format for mgsname is defined below. A + client file system can be listed in fstab(5) + for automatic mount at boot time, is usable like any local file system, and provides a full POSIX standard-compliant interface. @@ -1853,7 +1854,23 @@ mount.lustre block_device - Starts the target service defined by the mkfs.lustre command on the physical disk block_device. A mounted target service file system is only useful for df(1) operations and appears in fstab(5) to show the device is in use. + Starts the target service defined by the + mkfs.lustre(8) command on the physical disk + block_device. The + block_device may be specified using + -L label to find + the first block device with that label (e.g. + testfs-MDT0000), or by UUID using the + -U uuid option. + Care should be taken if there is a device-level backup of the + target filesystem on the same node, which would have a + duplicate label and UUID if it has not been changed with + tune2fs(8) or similar. The mounted target + service filesystem mounted at + mountpoint is only useful for + df(1) operations and appears in + /proc/mounts to show the device is in use. + @@ -1879,25 +1896,79 @@ mount.lustre - mgsspec:=mgsnode[:mgsnode] -   + mgsname=mgsnode[:mgsnode] + + + mgsname is a colon-separated + list of mgsnode names where the MGS + service may run. Multiple mgsnode + values can be specified if the MGS service is configured for + HA failover and may be running on any one of the nodes. + + + + + + mgsnode=mgsnid[,mgsnid] + + + Each mgsnode may specify a + comma-separated list of NIDs, if there are different LNet + interfaces for that mgsnode. + + + + + + mgssec=flavor + + + Specifies the encryption flavor for the initial network + RPC connection to the MGS. Non-security flavors are: + null, plain, and + gssnull, which respectively disable, or + have no encryption or integrity features for testing purposes. + Kerberos flavors are: krb5n, + krb5a, krb5i, and + krb5p. Shared-secret key flavors are: + skn, ska, + ski, and skpi, see the + for more details. The security + flavor for client-to-server connections is specified in the + filesystem configuration that the client fetches from the MGS. + + + + + + skpath=file|directory - The MGS specification may be a colon-separated list of nodes. + + Path to a file or directory with the keyfile(s) to load for + this mount command. Keys are inserted into the + KEY_SPEC_SESSION_KEYRING keyring in the + kernel with a description containing + lustre: and a suffix which depends on + whether the context of the mount command is for an MGS, + MDT/OST, or client. + - mgsnode:=mgsnid[,mgsnid] + exclude=ostlist - Each node may be specified by a comma-separated list of NIDs. + Starts a client or MDT with a colon-separated list of + known inactive OSTs that it will not try to connect to. 
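As a sketch of the options described above (the MGS NIDs, key path, and filesystem name are hypothetical; skpi is one of the SSK flavors listed above), a client could mount through a failover pair of MGS nodes using shared-secret keys:
$ mount -t lustre -o mgssec=skpi,skpath=/etc/lustre/testfs.keys mgs1@tcp0:mgs2@tcp0:/testfs /mnt/testfs   # colon-separated MGS failover list, SSK flavor and key path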
- In addition to the standard mount options, Lustre understands the following client-specific options: + In addition to the standard mount(8) options, Lustre understands + the following client-specific options: @@ -1915,10 +1986,31 @@ mount.lustre + always_ping + + + The client will periodically ping the server when it is + idle, even if the server ptlrpc module + is configured with the suppress_pings + option. This allows clients to reliably use the filesystem + even if they are not part of an external client health + monitoring mechanism. + + + + + flock - Enables full flock support, coherent across all client nodes. + Enables advisory file locking support between + participating applications using the flock(2) + system call. This causes file locking to be coherent across all + client nodes also using this mount option. This is useful if + applications need coherent userspace file locking across + multiple client nodes, but also imposes communications overhead + in order to maintain locking consistency between client nodes. + @@ -1926,7 +2018,13 @@ mount.lustre localflock - Enables local flock support, using only client-local flock (faster, for applications that require flock, but do not run on multiple nodes). + Enables client-local flock(2) support, + using only client-local advisory file locking. This is faster + than using the global flock option, and can + be used for applications that depend on functioning + flock(2) but run only on a single node. + It has minimal overhead using only the Linux kernel's locks. + @@ -1934,39 +2032,61 @@ mount.lustre noflock - Disables flock support entirely. Applications calling flock get an error. It is up to the administrator to choose either localflock (fastest, low impact, not coherent between nodes) or flock (slower, performance impact for use, coherent between nodes). + Disables flock(2) support entirely, + and is the default option. Applications calling + flock(2) get an + ENOSYS error. It is up to the administrator + to choose either the localflock or + flock mount option based on their + requirements. It is possible to mount clients with different + options, and only those mounted with flock + will be coherent amongst each other. + - user_xattr + lazystatfs - Enables get/set of extended attributes by regular users. See the attr(5) manual page. + Allows statfs(2) (as used by + df(1) and lfs-df(1)) to + return even if some OST or MDT is unresponsive or has been + temporarily or permanently disabled in the configuration. + This avoids blocking until all of the targets are available. + This is the default behavior since Lustre 2.9.0. + - nouser_xattr + nolazystatfs - Disables use of extended attributes by regular users. Root and system processes can still use extended attributes. + Requires that statfs(2) block until all + OSTs and MDTs are available and have returned space usage. + - acl + user_xattr - Enables POSIX Access Control List support. See the acl(5) manual page. + Enables get/set of extended attributes by regular users + in the user.* namespace. See the + attr(5) manual page for more details. + - noacl + nouser_xattr - Disables Access Control List support. + Disables use of extended attributes in the + user.* namespace by regular users. Root + and system processes can still use extended attributes. @@ -1974,7 +2094,7 @@ mount.lustre verbose - Enable mount/umount console messages. + Enable extra mount/umount console messages. @@ -1990,7 +2110,15 @@ mount.lustre user_fid2path - Enable FID to path translation by regular users. 
Note: This option allows a potential security hole because it allows regular users direct access to a file by its FID, bypassing POSIX path-based permission checks which could otherwise prevent the user from accessing a file in a directory that they do not have access to. Regular permission checks are still performed on the file itself, so the user cannot access a file to which they have no access rights. + Enable FID to path translation by regular + users. Note: This option allows a potential security hole + because it allows regular users direct access to a file by its + File ID, bypassing POSIX path-based permission checks which + could otherwise prevent the user from accessing a file in a + directory that they do not have access to. Regular permission + checks are still performed on the file itself, so the user + cannot access a file to which they have no access rights. + @@ -1998,13 +2126,19 @@ mount.lustre nouser_fid2path - Disable FID to path translation by regular users. Root and processes with CAP_DAC_READ_SEARCH can still perform FID to path translation. + Disable FID to path translation by + regular users. Root and processes with + CAP_DAC_READ_SEARCH can still perform FID + to path translation. + - In addition to the standard mount options and backing disk type (e.g. ext3) options, Lustre understands the following server-specific options: + In addition to the standard mount options and backing disk type + (e.g. ldiskfs) options, Lustre understands the following server-specific + mount options: @@ -2038,18 +2172,27 @@ mount.lustre - exclude=ostlist + abort_recov - Starts a client or MDT with a colon-separated list of known inactive OSTs. + Aborts client recovery on that server and starts the target service immediately. - abort_recov + max_sectors_kb=KB - Aborts client recovery and starts the target service immediately. + Sets the block device parameter + max_sectors_kb limit for the MDT or OST + target being mounted to specified maximum number of kilobytes. + When max_sectors_kb isn't specified as a + mount option, it will automatically be set to the + max_hw_sectors_kb (up to a maximum of 16MiB) + for that block device. This default behavior is suited for + most users. When max_sectors_kb=0 is used, + the current value for this tunable will be kept. + @@ -2065,8 +2208,19 @@ mount.lustre recovery_time_soft=timeout - Allows timeout seconds for clients to reconnect for recovery after a server crash. This timeout is incrementally extended if it is about to expire and the server is still handling new connections from recoverable clients. - The default soft recovery timeout is 3 times the value of the Lustre timeout parameter (see ). The default Lustre timeout is 100 seconds, which would make the soft recovery timeout default to 300 seconds (5 minutes). The soft recovery timeout is set at mount time and will not change if the Lustre timeout is changed after mount time. + Allows timeout seconds for clients to + reconnect for recovery after a server crash. This timeout is + incrementally extended if it is about to expire and the server + is still handling new connections from recoverable clients. + + The default soft recovery timeout is 3 times the value + of the Lustre timeout parameter (see + ). The default Lustre + timeout is 100 seconds, which would make the soft recovery + timeout default to 300 seconds (5 minutes). The soft recovery + timeout is set at mount time and will not change if the Lustre + timeout is changed after mount time. 
+ @@ -2074,8 +2228,18 @@ mount.lustre recovery_time_hard=timeout - The server is allowed to incrementally extend its timeout up to a hard maximum of timeout seconds. - The default hard recovery timeout is 9 times the value of the Lustre timeout parameter (see ). The default Lustre timeout is 100 seconds, which would make the hard recovery timeout default to 900 seconds (15 minutes). The hard recovery timeout is set at mount time and will not change if the Lustre timeout is changed after mount time. + The server is allowed to incrementally extend its timeout + up to a hard maximum of timeout + seconds. + + The default hard recovery timeout is 9 times the value + of the Lustre timeout parameter (see + ). The default Lustre + timeout is 100 seconds, which would make the hard recovery + timeout default to 900 seconds (15 minutes). The hard recovery + timeout is set at mount time and will not change if the Lustre + timeout is changed after mount time. + @@ -2083,7 +2247,15 @@ mount.lustre noscrub - Typically the MDT will detect restoration from a file-level backup during mount. This mount option prevents the OI Scrub from starting automatically when the MDT is mounted. Manually starting LFSCK after mounting provides finer control over the starting conditions. This mount option also prevents OI scrub from occurring automatically when OI inconsistency is detected (see ) + Typically the MDT will detect restoration from a + file-level backup during mount. This mount option prevents + the OI Scrub from starting automatically when the MDT is + mounted. Manually starting LFSCK after mounting provides finer + control over the starting conditions. This mount option also + prevents OI scrub from occurring automatically when OI + inconsistency is detected (see + ). + @@ -2674,53 +2846,6 @@ lr_reader The stats-collect utility contains scripts used to collect application profiling information from Lustre clients and servers.
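To illustrate the server-specific mount options described above (the target label, mount point, and max_sectors_kb value are hypothetical; 300 and 900 seconds match the documented soft and hard recovery defaults), an OST might be started with explicit recovery windows and a block-device I/O size limit:
$ mount -t lustre -o recovery_time_soft=300,recovery_time_hard=900,max_sectors_kb=4096 -L testfs-OST0000 /mnt/ost0   # target mount by label with server-side options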
-
- <indexterm><primary>flock</primary></indexterm>Flock Feature - Lustre now includes the flock feature, which provides file locking support. Flock describes classes of file locks known as 'flocks'. Flock can apply or remove a lock on an open file as specified by the user. However, a single file may not, simultaneously, have both shared and exclusive locks. - By default, the flock utility is disabled on Lustre. Two modes are available. - - - - - - - - local mode - - - In this mode, locks are coherent on one node (a single-node flock), but not across all clients. To enable it, use -o localflock. This is a client-mount option. - - This mode does not impact performance and is appropriate for single-node databases. - - - - - - consistent mode - - - In this mode, locks are coherent across all clients. - To enable it, use the -o flock. This is a client-mount option. - This mode affects the performance of the file being flocked and may affect stability, depending on the Lustre version used. Consider using a newer Lustre version which is more stable. If the consistent mode is enabled and no applications are using flock, then it has no effect. - - - - - - A call to use flock may be blocked if another process is holding an - incompatible lock. Locks created using flock are applicable for an - open file table entry. Therefore, a single process may hold only one - type of lock (shared or exclusive) on a single file. Subsequent flock - calls on a file that is already locked converts the existing lock to - the new lock mode. -
- Example - $ mount -t lustre -o flock mgs@tcp0:/lustre /mnt/client - You can check it in /etc/mtab. It should look like, - mgs@tcp0:/lustre /mnt/client lustre rw,flock 0 0 - -
-
<indexterm><primary>fileset</primary></indexterm>Fileset Feature With the fileset feature, Lustre now provides subdirectory mount -- 1.8.3.1