From ac4fe7fe55be8bda8280fd967edc0f0da8ba35e1 Mon Sep 17 00:00:00 2001
From: Andreas Dilger
Date: Thu, 1 May 2025 15:57:03 -0600
Subject: [PATCH] LU-18967 doc: fix lctl-pool man pages

Update the lctl-pool man pages to fix the following issues:
- lctl-pool_new: describe the POOLNAME formatting rules
- lctl-pool_remove: make OST index numbers consistent with pool_add
- lctl-pool_list: use a better POOLNAME example
- lctl-pool_destroy: use a better POOLNAME example
- lctl-pool_*: use a better example name for the pool

Fix up minor formatting issues in the man pages.

Test-Parameters: trivial
Signed-off-by: Andreas Dilger
Change-Id: Ided7c79fbdcfb15aa01a1c2b66cf44292f500c1e
Reviewed-on: https://review.whamcloud.com/c/fs/lustre-release/+/59062
Tested-by: jenkins
Tested-by: Maloo
Reviewed-by: Frederick Dilger
Reviewed-by: Aryan Gupta
Reviewed-by: Oleg Drokin
---
 lustre/doc/lctl-pool_add.8     | 61 ++++++++++++++++++-----------------
 lustre/doc/lctl-pool_destroy.8 | 26 +++++++++--------
 lustre/doc/lctl-pool_list.8    | 12 +++----
 lustre/doc/lctl-pool_new.8     | 72 ++++++++++++++++++++++++++----------------
 lustre/doc/lctl-pool_remove.8  | 50 ++++++++++++++---------------
 5 files changed, 119 insertions(+), 102 deletions(-)

diff --git a/lustre/doc/lctl-pool_add.8 b/lustre/doc/lctl-pool_add.8
index a9d2627..ef9fc09 100644
--- a/lustre/doc/lctl-pool_add.8
+++ b/lustre/doc/lctl-pool_add.8
@@ -1,14 +1,14 @@
-.TH LCTL-POOL_ADD 8 2024-08-14 Lustre "Lustre Configuration Utilities"
+.TH LCTL-POOL_ADD 8 2025-05-01 Lustre "Lustre Configuration Utilities"
 .SH NAME
 lctl-pool_add \- add OSTs to a named pool
 .SH SYNOPSIS
 .SY "lctl pool_add"
 .RB [ --nowait | -n ]
-.IR FSNAME . POOL
+.IR FSNAME . POOLNAME
 .IR OST_INDEX1 " [" OST_INDEX2 ...]
 .SY "lctl pool_add"
 .RB [ --nowait | -n ]
-.IR FSNAME . POOL
+.IR FSNAME . POOLNAME
 .IR OST_RANGE1 " [" OST_RANGE2 ...]
 .YS
 .SH DESCRIPTION
@@ -34,55 +34,58 @@ to index values.
 .P
 .BR NOTE:
-After updating the MGS configuration, this command tries to wait and
-check if pools are updated on a client.
+After updating the MGS configuration, this command will wait up to 12s and
+check if pools are updated on a client, unless the
+.B --nowait
+option is used.
 If the MGS is on a separate node from the MDS, a Lustre client must be mounted
 on the MGS node while the
 .B lctl
-commands are being run for this. Otherwise, the client check is
-skipped.
+commands are being run for this. Otherwise, the client check is skipped.
 .P
-The OST pool can be used by
+This named list of OSTs can be used by
 .BR lfs-setstripe (1)
-to specify the OSTs on which new files can be created, and
-.BR lfs-find (1)
-to locate files that were initially created on the specified
-.IR poolname .
-Note however, that the OSTs that make up a specific pool may change
+to specify the OSTs on which new files can be created, though it is
+important to note that the OSTs that make up a specific pool may change
 over time, and it is the
 .I poolname
 used at creation time that is stored on each file, not necessarily
-OSTs that are in the current pool. As well,
+OSTs that are in the current pool. The
+.BR lfs-find (1)
+command can locate files that were initially created on the specified
+.IR poolname .
+As well,
 .BR lfs-df (1)
-can show only the free space or inodes in a named pool.
+can show only the free space or inodes in a named pool. The
+.BR lfs-quota (1)
+and
+.BR lfs-setquota (1)
+commands can use a pool name to get/set a quota limit for OSTs in the pool.
 .SH OPTIONS
 .TP
 .BR -n ", " --nowait
-Do not wait and check if pool is updated on a client. This is useful
-when calling a lot of "
-.B lctl
-pool_*" in a row. This avoids revoking the clients "CONFIG" lock for each
-command (by default clients retake their lock and update their configurations
-in a delay between 5-10s).
+Do not wait and check if pool is updated on a client.
+This is useful when calling a lot of
+.RB ' "lctl pool_*" '
+commands in a row, to avoid waiting for each command to complete.
 .SH EXAMPLES
-.PP
 Create a pool named
-.B local
+.B flash
 in the
 .B testfs
 filesystem:
-.RS 8
+.RS
 .EX
-.B # lfs pool_new testfs.local
+.B # lctl pool_new testfs.flash
 .EE
 .RE
 .PP
-Add OSTs numbered 12, 13, and 14 to the
-.B testfs.local
+Add OSTs numbered 8, 10, and 12 through 14 to the
+.B testfs.flash
 pool:
-.RS 8
+.RS
 .EX
-.B # lfs pool_add testfs.local 12 13 14
+.B # lctl pool_add testfs.flash 8 10 12-14
 .EE
 .RE
 .SH AVAILABILITY
diff --git a/lustre/doc/lctl-pool_destroy.8 b/lustre/doc/lctl-pool_destroy.8
index 27e586a..c39dcbb 100644
--- a/lustre/doc/lctl-pool_destroy.8
+++ b/lustre/doc/lctl-pool_destroy.8
@@ -1,5 +1,5 @@
-.TH LCTL-POOL_DESTROY 8 2024-09-08 Lustre "Lustre Configuration Utilities"
-.SH Name
+.TH LCTL-POOL_DESTROY 8 2025-05-01 Lustre "Lustre Configuration Utilities"
+.SH NAME
 lctl-pool_destroy \- delete an OST pool
 .SH SYNOPSIS
 .SY "lctl pool_destroy"
@@ -14,15 +14,6 @@ in the filesystem named
 The
 .B lctl pool_destroy
 command must be run on the MGS node and can only be used by the root user.
-.SH OPTIONS
-.TP
-.BR -n ", " --nowait
-Do not wait and check if pool is updated on a client.
-This is useful when calling a lot of
-.RB \(dq lctl \ pool_*\(dq
-in a row. This avoids revoking the clients "CONFIG" lock for each
-command (by default clients retake their lock and update their configurations
-in a delay between 5-10s).
 .SH NOTES
 After updating the MGS configuration, this command tries to wait and
 check if pools are updated on a client.
@@ -30,15 +21,22 @@ If the MGS is on a separate node from the MDS, a Lustre client must be mounted
 on the MGS node while the
 .B lctl
 commands are being run for this. Otherwise, the client check is skipped.
+.SH OPTIONS
+.TP
+.BR -n ", " --nowait
+Do not wait and check if pool is updated on a client.
+This is useful when calling a lot of
+.RB ' "lctl pool_*" '
+commands in a row, to avoid waiting for each command to complete.
 .SH EXAMPLES
 Delete a pool named
-.B local
+.B flash
 in the
 .B testfs
-filesystem.
+filesystem:
 .RS
 .EX
-.B # lctl pool_destroy testfs.local
+.B # lctl pool_destroy testfs.flash
 .EE
 .RE
 .SH AVAILABILITY
diff --git a/lustre/doc/lctl-pool_list.8 b/lustre/doc/lctl-pool_list.8
index 881579b..6feb117 100644
--- a/lustre/doc/lctl-pool_list.8
+++ b/lustre/doc/lctl-pool_list.8
@@ -23,24 +23,24 @@ If specified, show the OSTs in the pool named
 .IR POOLNAME .
 .SH EXAMPLES
 List all OSTs in the pool
-.B local
+.B flash
 in the filesystem
-.B testfs
+.BR testfs :
 .RS
 .EX
-.B # lctl pool_list testfs.local
-Pool: lustre.local
+.B # lctl pool_list testfs.flash
+Pool: testfs.flash
 testfs-OST0001_UUID
 .EE
 .RE
 .PP
 List all pools in the filesystem
-.B testfs
+.BR testfs :
 .RS
 .EX
 .B # lctl pool_list testfs
 Pools from testfs:
-testfs.local
+testfs.flash
 testfs.remote
 .EE
 .RE
diff --git a/lustre/doc/lctl-pool_new.8 b/lustre/doc/lctl-pool_new.8
index d80f81c..56cf73d 100644
--- a/lustre/doc/lctl-pool_new.8
+++ b/lustre/doc/lctl-pool_new.8
@@ -1,10 +1,10 @@
-.TH LCTL-POOL_NEW 8 2024-08-14 Lustre "Lustre Configuration Utilities"
+.TH LCTL-POOL_NEW 8 2025-05-01 Lustre "Lustre Configuration Utilities"
 .SH NAME
-lctl-pool_new \- create a new OST pool
+lctl-pool_new \- create a new named list of OSTs
 .SH SYNOPSIS
 .SY "lctl pool_new"
 .RB [ --nowait | -n ]
-.IR FSNAME . POOL
+.IR FSNAME . POOLNAME
 .YS
 .SH DESCRIPTION
 Create a list of OSTs with the name
@@ -16,14 +16,30 @@ The
 command must be run on the MGS node and can only be used by the root
 user.
 .P
-.BR NOTE:
-After updating the MGS configuration, this command tries to wait and
-check if pools are updated on a client.
+The
+.I POOLNAME
+must be 15 or fewer alphanumeric characters ('A-Za-z0-9'), and may contain
+hyphen ('-') and underscore ('_') characters. The period ('.') character can
+only be used to separate the
+.I FSNAME
+from the
+.I POOLNAME
+in commands to uniquely identify the pool when multiple filesystems are mounted
+on a node. The pool name must not be one of the reserved keywords
+.RB ' none ',
+.RB ' ignore ',
+or
+.RB ' inherit '.
+.P
+.SH NOTES
+After updating the MGS configuration, this command will wait up to 12s and
+check if pools are updated on a client, unless the
+.B --nowait
+option is used.
 If the MGS is on a separate node from the MDS, a Lustre client must be mounted
 on the MGS node while the
 .B lctl
-commands are being run for this. Otherwise, the client check is
-skipped.
+commands are being run for this. Otherwise, the client check is skipped.
 .P
 This named list of OSTs can be used by
 .BR lfs-setstripe (1)
@@ -31,35 +47,36 @@ to specify the OSTs on which new files can be created, and
 .BR lfs-find (1)
 to locate files that were created on the specified pool. As well,
 .BR lfs-df (1)
-can show only the free space or inodes in a named pool.
+can show only the free space or inodes in a named pool. The
+.BR lfs-quota (1)
+and
+.BR lfs-setquota (1)
+commands can use a pool name to get/set a quota limit for OSTs in the pool.
 .SH OPTIONS
 .TP
 .BR -n ", " --nowait
-Do not wait and check if pool is updated on a client. This is useful
-when calling a lot of "
-.B lctl
-pool_*" in a row. This avoids revoking the clients "CONFIG" lock for each
-command (by default clients retake their lock and update their configurations
-in a delay between 5-10s).
+Do not wait and check if pool is updated on a client.
+This is useful when calling a lot of
+.RB ' "lctl pool_*" '
+commands in a row, to avoid waiting for each command to complete.
 .SH EXAMPLES
-.PP
 Create a pool named
-.B local
+.B flash
 in the
 .B testfs
 filesystem:
-.RS 8
+.RS
 .EX
-.B # lfs pool_new testfs.local
+.B # lctl pool_new testfs.flash
 .EE
 .RE
 .PP
-Add OSTs numbered 12, 13, and 14 to the
-.B testfs.local
+Add OSTs numbered 8, 10, and 12 through 14 to the
+.B testfs.flash
 pool:
-.RS 8
+.RS
 .EX
-.B # lfs pool_add testfs.local 12 13 14
+.B # lctl pool_add testfs.flash 8 10 12-14
 .EE
 .RE
 .SH AVAILABILITY
@@ -69,9 +86,10 @@ is part of the
 filesystem package since release 1.7.0
 .\" Added in commit 1.6.0-1808-g665e36b780
 .SH SEE ALSO
-.BR lfs-df (1),
-.BR lfs-find (1),
-.BR lfs-setstripe (1),
-.BR lustre (7)
+.BR lustre (7),
 .BR lctl (8),
 .BR lctl-pool_add (8),
+.BR lfs-df (1),
+.BR lfs-find (1),
+.BR lfs-setquota (1),
+.BR lfs-setstripe (1)
diff --git a/lustre/doc/lctl-pool_remove.8 b/lustre/doc/lctl-pool_remove.8
index 8cd735d..5966d3c 100644
--- a/lustre/doc/lctl-pool_remove.8
+++ b/lustre/doc/lctl-pool_remove.8
@@ -1,4 +1,4 @@
-.TH LCTL-POOL_REMOVE 8 2024-08-30 Lustre "Lustre Configuration Utilities"
+.TH LCTL-POOL_REMOVE 8 2025-05-01 Lustre "Lustre Configuration Utilities"
 .SH NAME
 lctl-pool_remove \- remove OST from a named pool
 .SH SYNOPSIS
@@ -40,41 +40,39 @@ index values.
 .BR -n ", " --nowait
 Do not wait and check if pool is updated on a client.
 This is useful when calling a lot of
-.RB \(dq "lctl pool_" *\(dq
-in a row. This avoids revoking the clients "CONFIG" lock for each
-command (by default clients retake their lock and update their configurations
-in a delay between 5-10s).
+.RB ' "lctl pool_*" '
+commands in a row, to avoid waiting for each command to complete.
 .SH EXAMPLES
-Remove OSTs numbered 8, 9, and 10 from the
-.B testfs.local
-pool.
+Remove OSTs numbered 8, 10, and 12 from the
+.B testfs.flash
+pool:
 .RS
 .EX
-.B # lctl pool_remove testfs.local OST0008 OST0009 OST000a
-OST lustre-OST0008_UUID removed from pool lustre.local
-OST lustre-OST0009_UUID removed from pool lustre.local
-OST lustre-OST000a_UUID removed from pool lustre.local
+.B # lctl pool_remove testfs.flash OST0008 OST000a OST000c
+OST testfs-OST0008_UUID removed from pool testfs.flash
+OST testfs-OST000a_UUID removed from pool testfs.flash
+OST testfs-OST000c_UUID removed from pool testfs.flash
 or
-.B # lctl pool_remove testfs.local OST[8-a]
-OST lustre-OST0008_UUID removed from pool lustre.local
-OST lustre-OST0009_UUID removed from pool lustre.local
-OST lustre-OST000a_UUID removed from pool lustre.local
+.B # lctl pool_remove testfs.flash OST[8-c/2]
+OST testfs-OST0008_UUID removed from pool testfs.flash
+OST testfs-OST000a_UUID removed from pool testfs.flash
+OST testfs-OST000c_UUID removed from pool testfs.flash
 .EE
 .RE
 .PP
-List of OSTs can be set with comma seperated values or a combined format
+A list of OSTs can be set with comma-separated values or a combined format:
 .RS
 .EX
-.B # lctl pool_remove testfs.local OST[8,a]
-OST lustre-OST0008_UUID removed from pool lustre.local
-OST lustre-OST000a_UUID removed from pool lustre.local
+.B # lctl pool_remove testfs.flash OST[8,a]
+OST testfs-OST0008_UUID removed from pool testfs.flash
+OST testfs-OST000a_UUID removed from pool testfs.flash
 or
-.B # lctl pool_remove testfs.local OST[4-6,8,a]
-OST lustre-OST0004_UUID removed from pool lustre.local
-OST lustre-OST0005_UUID removed from pool lustre.local
-OST lustre-OST0006_UUID removed from pool lustre.local
-OST lustre-OST0008_UUID removed from pool lustre.local
-OST lustre-OST000a_UUID removed from pool lustre.local
+.B # lctl pool_remove testfs.flash OST[4-6,8,a]
+OST testfs-OST0004_UUID removed from pool testfs.flash
+OST testfs-OST0005_UUID removed from pool testfs.flash
+OST testfs-OST0006_UUID removed from pool testfs.flash
+OST testfs-OST0008_UUID removed from pool testfs.flash
+OST testfs-OST000a_UUID removed from pool testfs.flash
 .EE
 .RE
 .SH AVAILABILITY
-- 
1.8.3.1
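
Editor's note: the POOLNAME formatting rules this patch adds to lctl-pool_new.8 can be illustrated with a small shell sketch. The `valid_poolname` helper below is hypothetical (it is not part of lctl, which performs its own validation); it only encodes the rules as documented: 1 to 15 characters from the set [A-Za-z0-9_-], and not one of the reserved keywords 'none', 'ignore', or 'inherit'.

```shell
#!/bin/sh
# Hypothetical check mirroring the POOLNAME rules in lctl-pool_new.8:
# at most 15 characters, only alphanumerics plus hyphen and underscore,
# and not a reserved keyword.  Returns 0 (success) for a valid name.
valid_poolname() {
    case "$1" in
        none|ignore|inherit) return 1 ;;   # reserved keywords
    esac
    # 1-15 characters, drawn only from [A-Za-z0-9_-]
    printf '%s\n' "$1" | grep -qE '^[A-Za-z0-9_-]{1,15}$'
}

# Quick demonstration against names used in the man-page examples
for p in flash arch_2025 none bad.pool this-name-is-too-long16; do
    if valid_poolname "$p"; then
        echo "$p: valid"
    else
        echo "$p: invalid"
    fi
done
```

Note that the period is rejected here because in `FSNAME.POOLNAME` arguments it acts as the separator between the filesystem name and the pool name, so it cannot appear inside the pool name itself.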