From 6e76ad38857b0f51ef5f95f04779bd7ff26a9335 Mon Sep 17 00:00:00 2001 From: Richard Henwood Date: Fri, 20 May 2011 13:09:28 -0500 Subject: [PATCH] FIX: validation --- BackupAndRestore.xml | 2 +- BenchmarkingTests.xml | 4 +- ConfiguringQuotas.xml | 19 ++-- LNETSelfTest.xml | 259 +++++++++++++++++++++--------------------- ManagingFailover.xml | 2 +- ManagingFileSystemIO.xml | 2 +- ManagingSecurity.xml | 2 +- ManagingStripingFreeSpace.xml | 32 +++--- UpgradingLustre.xml | 2 +- 9 files changed, 164 insertions(+), 160 deletions(-) diff --git a/BackupAndRestore.xml b/BackupAndRestore.xml index 7d95634..fa3fc35 100644 --- a/BackupAndRestore.xml +++ b/BackupAndRestore.xml @@ -110,7 +110,7 @@ - --xattr <yes|no> + --xattr <yes|no> diff --git a/BenchmarkingTests.xml b/BenchmarkingTests.xml index 69def75..4a391ea 100644 --- a/BenchmarkingTests.xml +++ b/BenchmarkingTests.xml @@ -40,7 +40,7 @@ Typically with these tests, Lustre should deliver 85-90% of the raw device performance. - A utility stats-collect is also provided to collect application profiling information from Lustre clients and servers. See Collecting Application Profiling Information (stats-collect) for more information. + A utility stats-collect is also provided to collect application profiling information from Lustre clients and servers. See for more information.
24.1.2 Preparing to Use the Lustre I/O Kit @@ -60,7 +60,7 @@ Download the Lustre I/O kit (lustre-iokit) from: - http://downloads.whamcloud.com/ + http://downloads.whamcloud.com/
diff --git a/ConfiguringQuotas.xml b/ConfiguringQuotas.xml index 4c1d1de..0c63fd9 100644 --- a/ConfiguringQuotas.xml +++ b/ConfiguringQuotas.xml @@ -53,19 +53,20 @@ Use this procedure to enable (configure) disk quotas in Lustre. - - If you have re-complied your Linux kernel, be sure that CONFIG_QUOTA and CONFIG_QUOTACTL are enabled. Also, verify that CONFIG_QFMT_V1 and/or CONFIG_QFMT_V2 are enabled. - + +If you have re-complied your Linux kernel, be sure that CONFIG_QUOTA and CONFIG_QUOTACTL are enabled. Also, verify that CONFIG_QFMT_V1 and/or CONFIG_QFMT_V2 are enabled. + Quota is enabled in all Linux 2.6 kernels supplied for Lustre. Start the server. + - Mount the Lustre file system on the client and verify that the lquota module has loaded properly by using the lsmod command. + Mount the Lustre file system on the client and verify that the lquota module has loaded properly by using the lsmod command. - $ lsmod + $ lsmod [root@oss161 ~]# lsmod Module Size Used by obdfilter 220532 1 @@ -108,7 +109,7 @@ ksocklnd 111812 1 Lustre 1.6.6 introduced the v2 file format for operational quotas, with continued support for the old file format (v1). The ost.quota_type parameter handles '1' and '2' options, to specify the Lustre quota versions that will be used. For example: --param ost.quota_type=ug2 --param ost.quota_type=u1 - For more information about the v1 and v2 formats, see Quota File Formats. + For more information about the v1 and v2 formats, see .
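As an illustration of how the quota_type parameter described above is typically applied, the following minimal sketch assumes a hypothetical OST device path and client mount point; only the parameter values shown above are used, and the tunefs.lustre change takes effect the next time the target is mounted:
# Sketch only: /dev/sda1 and /mnt/lustre are hypothetical names.
oss# tunefs.lustre --param ost.quota_type=ug2 /dev/sda1    # user and group quotas, v2 file format
client# lfs quotacheck -ug /mnt/lustre                     # (re)build the quota files from a client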
@@ -265,14 +266,14 @@ lustre-OST0001_UUID 30720* - 28872 \ Hard limit -- When you are beyond the hard limit, you get -EQUOTA and cannot write inode/block any more. The hard limit is the absolute limit. When a grace period is set, you can exceed the soft limit within the grace period if are under the hard limits. Lustre quota allocation is controlled by two variables, quota_bunit_sz and quota_iunit_sz referring to KBs and inodes, respectively. These values can be accessed on the MDS as /proc/fs/lustre/mds/*/quota_* and on the OST as /proc/fs/lustre/obdfilter/*/quota_*. The quota_bunit_sz and quota_iunit_sz variables are the maximum qunit values for blocks and inodes, respectively. At any time, module lquota chooses a reasonable qunit between the minimum and maximum values. The /proc values are bounded by two other variables quota_btune_sz and quota_itune_sz. By default, the *tune_sz variables are set at 1/2 the *unit_sz variables, and you cannot set *tune_sz larger than *unit_sz. You must set bunit_sz first if it is increasing by more than 2x, and btune_sz first if it is decreasing by more than 2x. - Total number of inodes -- To determine the total number of inodes, use lfs df -i (and also /proc/fs/lustre/*/*/filestotal). For more information on using the lfs df -i command and the command output, see Checking File System Free Space. + Total number of inodes -- To determine the total number of inodes, use lfs df -i (and also /proc/fs/lustre/*/*/filestotal). For more information on using the lfs df -i command and the command output, see . Unfortunately, the statfs interface does not report the free inode count directly, but instead reports the total inode and used inode counts. The free inode count is calculated for df from (total inodes - used inodes). It is not critical to know a file system's total inode count. Instead, you should know (accurately), the free inode count and the used inode count for a file system. Lustre manipulates the total inode count in order to accurately report the other two values. The values set for the MDS must match the values set on the OSTs. The quota_bunit_sz parameter displays bytes, however lfs setquota uses KBs. The quota_bunit_sz parameter must be a multiple of 1024. A proper minimum KB size for lfs setquota can be calculated as: - + Size in KBs = minimum_quota_bunit_sz * (number of OSTS + 1) = 1024 * (number of OSTs +1) - + We add one (1) to the number of OSTs as the MDS also consumes KBs. As inodes are only consumed on the MDS, the minimum inode size for lfs setquota is equal to quota_iunit_sz. Setting the quota below this limit may prevent the user from all file creation. diff --git a/LNETSelfTest.xml b/LNETSelfTest.xml index 4d7d3fc..e222613 100644 --- a/LNETSelfTest.xml +++ b/LNETSelfTest.xml @@ -88,7 +88,7 @@ This section describes how to create and run an LNET self-test. The examples shown are for a test that simulates the traffic pattern of a set of Lustre servers on a TCP network accessed by Lustre clients on an InfiniBand network connected via LNET routers. In this example, half the clients are reading and half the clients are writing.
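Before a session can be created, the self-test framework must be available on every node involved. A minimal sketch follows; it assumes the framework is packaged as the lnet_selftest kernel module, which loads its LNET dependencies when inserted:
# Run on the console node and on all test nodes before starting a session.
modprobe lnet_selftest
lsmod | grep lnet_selftest    # confirm the module is present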
23.2.1 Creating a Session - A session is a set of processes that run on a test node. Only one session can be run at a time on a test node to ensure that the session has exclusive use of the node. The console node is used to create, change or destroy a session (new_session, end_session, show_session). For more about session parameters, see Session Commands. + A session is a set of processes that run on a test node. Only one session can be run at a time on a test node to ensure that the session has exclusive use of the node. The console node is used to create, change or destroy a session (new_session, end_session, show_session). For more about session parameters, see . Almost all operations should be performed within the context of a session. From the console node, a user can only operate nodes in his own session. If a session ends, the session context in all test nodes is stopped. The following commands set the LST_SESSION environment variable to identify the session on the console node and create a session called read_write: export LST_SESSION=$$ @@ -121,10 +121,10 @@ lst add_group writers 192.168.1.[2-254/2]@o2ib
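At this point it can be worth confirming that every node actually joined its group before any tests are defined. A short sketch using the list_group and update_group commands documented later in this chapter (group names taken from the example above):
$ lst list_group                        # list all groups in the current session
$ lst list_group readers                # show the per-node state of one group
$ lst update_group readers --refresh    # re-query any nodes reported as inactive
Nodes that remain busy, down or unknown can then be dropped with update_group --clean before the batch is run.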
23.2.3 Defining and Running the Tests - A test generates a network load between two groups of nodes, a source group identified using the --from parameter and a target group identified using the --to parameter. When a test is running, each node in the --from<group> simulates a client by sending requests to nodes in the --to<group>, which are simulating a set of servers, and then receives responses in return. This activity is designed to mimic Lustre RPC traffic. + A test generates a network load between two groups of nodes, a source group identified using the --from parameter and a target group identified using the --to parameter. When a test is running, each node in the --from<group> simulates a client by sending requests to nodes in the --to<group>, which are simulating a set of servers, and then receives responses in return. This activity is designed to mimic Lustre RPC traffic. A batch is a collection of tests that are started and stopped together and run in parallel. A test must always be run as part of a batch, even if it is just a single test. Users can only run or stop a test batch, not individual tests. Tests in a batch are non-destructive to the file system, and can be run in a normal Lustre environment (provided the performance impact is acceptable). - A simple batch might contain a single test, for example, to determine whether the network bandwidth presents an I/O bottleneck. In this example, the --to<group> could be comprised of Lustre OSSs and --from<group> the compute nodes. A second test could be added to perform pings from a login node to the MDS to see how checkpointing affects the ls -l process. + A simple batch might contain a single test, for example, to determine whether the network bandwidth presents an I/O bottleneck. In this example, the --to<group> could be comprised of Lustre OSSs and --from<group> the compute nodes. A second test could be added to perform pings from a login node to the MDS to see how checkpointing affects the ls -l process. Two types of tests are available: @@ -201,7 +201,7 @@ lst end_session - --timeout<seconds> + --timeout<seconds> Console timeout value of the session. The session ends automatically if it remains idle (i.e., no commands are issued) for this period. @@ -231,12 +231,12 @@ lst end_session Example: $ lst new_session --force read_write - end_session + end_session Stops all operations and tests in the current session and clears the session's status. $ lst end_session - show_session + show_session Shows the session information. This command prints information about the current session. It does not require LST_SESSION to be defined in the process environment. $ lst show_session @@ -245,8 +245,8 @@ lst end_session 23.3.2 Group Commands This section describes lst group commands. - add_group - <name> <NIDS> [<NIDs>...] + add_group + <name> <NIDS> [<NIDs>...] Creates the group and adds a list of test nodes to the group. @@ -267,7 +267,7 @@ lst end_session - <name> + <name> @@ -277,7 +277,7 @@ lst end_session - <NIDs> + <NIDs> @@ -290,13 +290,13 @@ lst end_session Example: $ lst add_group servers 192.168.10.[35,40-45]@tcp$ lst add_group clients 192.168.1.[10-100]@tcp 192.168.[2,4].\[10-20]@tcp - update_group - <name> - [--refresh] [--clean - <status> - ] [--remove - <NIDs> - ] + update_group + <name> + [--refresh] [--clean + <status> + ] [--remove + <NIDs> + ] Updates the state of nodes in a group or adjusts a group's membership. This command is useful if some nodes have crashed and should be excluded from the group. 
@@ -317,9 +317,11 @@ lst end_session + - --refresh + --refresh + Refreshes the state of all inactive nodes in the group. @@ -327,7 +329,7 @@ lst end_session - --clean<status> + --clean<status> Removes nodes with a specified status from the group. Status may be: @@ -390,7 +392,7 @@ lst end_session - --remove<NIDs> + --remove<NIDs> Removes specified nodes from the group. @@ -406,9 +408,9 @@ $ lst update_group clients --clean invalid // \ invalid == busy || down || unknown $ lst update_group clients --remove \192.168.1.[10-20]@tcp - list_group [ - <name> - ] [--active] [--busy] [--down] [--unknown] [--all] + list_group [ + <name> + ] [--active] [--busy] [--down] [--unknown] [--all] Prints information about a group or lists all groups in the current session if no group is specified. @@ -429,7 +431,7 @@ $ lst update_group clients --remove \192.168.1.[10-20]@tcp - <name> + <name> @@ -438,9 +440,11 @@ $ lst update_group clients --remove \192.168.1.[10-20]@tcp + - --active + --active + Lists the active nodes. @@ -448,9 +452,11 @@ $ lst update_group clients --remove \192.168.1.[10-20]@tcp + - --busy + --busy + Lists the busy nodes. @@ -459,7 +465,7 @@ $ lst update_group clients --remove \192.168.1.[10-20]@tcp - --down + --down @@ -469,7 +475,7 @@ $ lst update_group clients --remove \192.168.1.[10-20]@tcp - --unknown + --unknown @@ -479,7 +485,7 @@ $ lst update_group clients --remove \192.168.1.[10-20]@tcp - --all + --all @@ -509,12 +515,12 @@ $ lst list_group clients --busy 192.168.1.12@tcp Busy Total 1 node - del_group - <name> + del_group + <name> Removes a group from the session. If the group is referred to by any test, then the operation fails. If nodes in the group are referred to only by this group, then they are kicked out from the current session; otherwise, they are still in the current session. $ lst del_group clients - lstclient --sesid <NID> --group <name> [--server_mode] + lstclient --sesid <NID> --group <name> [--server_mode] Use lstclient to run the userland self-test client. The lstclient command should be executed after creating a session on the console. There are only two mandatory options for lstclient: @@ -534,7 +540,7 @@ Total 1 node - --sesid<NID> + --sesid<NID> @@ -544,7 +550,7 @@ Total 1 node - --group<name> + --group<name> @@ -554,7 +560,7 @@ Total 1 node - --server_mode + --server_mode @@ -574,14 +580,15 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients 23.3.3 Batch and Test Commands This section describes lst batch and test commands. - add_batch NAME + add_batch NAME A default batch test set named batch is created when the session is started. You can specify a batch name by using add_batch: $ lst add_batch bulkperf Creates a batch test called bulkperf. - - add_test --batch <batchname> [--loop<#>] [--concurrency<#>] [--distribute<#:#>]--from <group> --to <group> {brw|ping} <test options> - + + add_test --batch <batchname> [--loop<#>] [--concurrency<#>] [--distribute<#:#>] \ +--from <group> --to <group> {brw|ping} <test options> + Adds a test to a batch. The parameters are described below. @@ -601,7 +608,7 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - --batch<batchname> + --batch<batchname> Names a group of tests for later execution. 
@@ -610,7 +617,7 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - --loop<#> + --loop<#> @@ -620,7 +627,7 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - --concurrency<#> + --concurrency<#> @@ -630,7 +637,7 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - --distribute<#:#> + --distribute<#:#> @@ -640,7 +647,7 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - --from<group> + --from<group> @@ -650,7 +657,7 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - --to<group> + --to<group> @@ -659,18 +666,18 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - ping + ping - Sends a small request message, resulting in a small reply message. For more details, see Defining and Running the Tests + Sends a small request message, resulting in a small reply message. For more details, see - brw + brw - Sends a small request message followed by a bulk data transfer, resulting in a small reply message. Defining and Running the Tests. Options are: + Sends a small request message followed by a bulk data transfer, resulting in a small reply message. . Options are: @@ -679,7 +686,7 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - read | write + read | write @@ -688,11 +695,10 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - - size=<#>| <#>K | <#>M + size=<#>| <#>K | <#>M @@ -701,11 +707,10 @@ Client1 $ lstclient --sesid 192.168.1.52@tcp --group clients - - check=full|simple + check=full|simple @@ -726,9 +731,7 @@ Server: (S1, S2, S3) --distribute 4:2 (C1,C2,C3,C4->S1,S2), (C5, C6->S3, S1) --distribute 6:3 (C1,C2,C3,C4,C5,C6->S1,S2,S3) The setting --distribute 1:1 is the default setting where each source node communicates with one target node. - When the setting --distribute 1:<n> (where - <n> - is the size of the target group) is used, each source node communicates with every node in the target group. + When the setting --distribute 1:<n> (where <n> is the size of the target group) is used, each source node communicates with every node in the target group. Note that if there are more source nodes than target nodes, some source nodes may share the same target nodes. Also, if there are more target nodes than source nodes, some higher-ranked target nodes will be idle. Example showing a brw test: $ lst add_group clients 192.168.1.[10-17]@tcp @@ -746,7 +749,7 @@ $ lst add_test --batch bulkperf --loop 100 --concurrency 4 \ - list_batch [<name>] [--test <index>] [--active] [--invalid] [--server | client] + list_batch [<name>] [--test <index>] [--active] [--invalid] [--server | client] Lists batches in the current session or lists client and server nodes in a batch or a test. @@ -768,7 +771,7 @@ $ lst add_test --batch bulkperf --loop 100 --concurrency 4 \ - --test<index> + --test<index> @@ -803,7 +806,7 @@ $ lst add_test --batch bulkperf --loop 100 --concurrency 4 \ - server | client + server | client @@ -830,34 +833,34 @@ $ lst list_batch bulkperf --server --active 192.168.10.102@tcp Active 192.168.10.103@tcp Active - run - - <name> - + run + + <name> + Runs the batch. $ lst run bulkperf - - stop - <name> - + + stop + <name> + Stops the batch. $ lst stop bulkperf - query - <name> - [--test - <index> - ] [--timeout - <seconds> - ] [--loop - <#> - ] [--delay - <seconds> - ] [--all] + query + <name> + [--test + <index> + ] [--timeout + <seconds> + ] [--loop + <#> + ] [--delay + <seconds> + ] [--all] Queries the batch status. 
@@ -879,7 +882,7 @@ $ lst list_batch bulkperf --server --active - --test<index> + --test<index> @@ -889,7 +892,7 @@ $ lst list_batch bulkperf --server --active - --timeout<seconds> + --timeout<seconds> @@ -899,7 +902,7 @@ $ lst list_batch bulkperf --server --active - --loop<#> + --loop<#> @@ -909,7 +912,7 @@ $ lst list_batch bulkperf --server --active - --delay<seconds> + --delay<seconds> @@ -919,7 +922,7 @@ $ lst list_batch bulkperf --server --active - --all + --all @@ -954,17 +957,17 @@ Batch is idle 23.3.4 Other Commands This section describes other lst commands. - - ping [-session] [--group - <name> - ] [--nodes - <NIDs> - ] [--batch - <name> - ] [--server] [--timeout - <seconds> - ] - + + ping [-session] [--group + <name> + ] [--nodes + <NIDs> + ] [--batch + <name> + ] [--server] [--timeout + <seconds> + ] + Sends a 'hello' query to the nodes. @@ -985,7 +988,7 @@ Batch is idle - --session + --session @@ -995,7 +998,7 @@ Batch is idle - --group<name> + --group<name> @@ -1005,7 +1008,7 @@ Batch is idle - --nodes<NIDs> + --nodes<NIDs> @@ -1015,7 +1018,7 @@ Batch is idle - --batch<name> + --batch<name> @@ -1025,17 +1028,17 @@ Batch is idle - --server + --server - Sends RPC to all server nodes instead of client nodes. This option is only used with --batch<name>. + Sends RPC to all server nodes instead of client nodes. This option is only used with --batch<name>. - --timeout<seconds> + --timeout<seconds> @@ -1054,21 +1057,21 @@ Batch is idle 192.168.1.19@tcp Down [session: <NULL> id: LNET_NID_ANY] 192.168.1.20@tcp Down [session: <NULL> id: LNET_NID_ANY] - - stat [--bw] [--rate] [--read] [--write] [--max] [--min] [--avg] " " [--timeout - <seconds> - ] [--delay - <seconds> - ] - <group> - |< - NIDs> - [ - <group> - | - <NIDs> - ] - + + stat [--bw] [--rate] [--read] [--write] [--max] [--min] [--avg] " " [--timeout + <seconds> + ] [--delay + <seconds> + ] + <group> + |< + NIDs> + [ + <group> + | + <NIDs> + ] + The collection performance and RPC statistics of one or more nodes. @@ -1089,7 +1092,7 @@ Batch is idle - --bw + --bw @@ -1099,7 +1102,7 @@ Batch is idle - --rate + --rate @@ -1109,7 +1112,7 @@ Batch is idle - --read + --read @@ -1119,7 +1122,7 @@ Batch is idle - --write + --write @@ -1129,7 +1132,7 @@ Batch is idle - --max + --max @@ -1139,7 +1142,7 @@ Batch is idle - --min + --min @@ -1149,7 +1152,7 @@ Batch is idle - --avg + --avg @@ -1159,7 +1162,7 @@ Batch is idle - --timeout<seconds> + --timeout<seconds> @@ -1169,7 +1172,7 @@ Batch is idle - --delay<seconds> + --delay<seconds> @@ -1199,7 +1202,7 @@ $ lst stat clients Only LNET performance statistics are available. By default, all statistics information is displayed. Users can specify additional information with these options. - show_error [--session] [<group>|<NIDs>]... + show_error [--session] [<group>|<NIDs>]... Lists the number of failed RPCs on test nodes. @@ -1220,7 +1223,7 @@ information is displayed. Users can specify additional information with these op - --session + --session diff --git a/ManagingFailover.xml b/ManagingFailover.xml index efaeeb2..daaaded 100644 --- a/ManagingFailover.xml +++ b/ManagingFailover.xml @@ -11,7 +11,7 @@ - For information about high availability(HA) management software, see the Lustre wiki topic Using Red Hat Cluster Manager with Lustre or the Lustre wiki topic LuUsing Pacemaker with stre. + For information about high availability(HA) management software, see the Lustre wiki topic Using Red Hat Cluster Manager with Lustre or the Lustre wiki topic LuUsing Pacemaker with stre.
20.1 Lustre Failover and <anchor xml:id="dbdoclet.50438213_marker-1301522" xreflabel=""/>Multiple-Mount Protection diff --git a/ManagingFileSystemIO.xml b/ManagingFileSystemIO.xml index 3321483..36915bb 100644 --- a/ManagingFileSystemIO.xml +++ b/ManagingFileSystemIO.xml @@ -106,7 +106,7 @@ Last login: Wed Nov 26 13:35:12 2008 from 192.168.0.6
19.1.3 Migrating Data within a File System - As stripes cannot be moved within the file system, data must be migrated manually by copying and renaming the file, removing the original file, and renaming the new file with the original file name. The simplest way to do this is to use the lfs_migrate command (see lfs_migrate). However, the steps for migrating a file by hand are also shown here for reference. + As stripes cannot be moved within the file system, data must be migrated manually by copying and renaming the file, removing the original file, and renaming the new file with the original file name. The simplest way to do this is to use the lfs_migrate command (see ). However, the steps for migrating a file by hand are also shown here for reference. Identify the file(s) to be moved. diff --git a/ManagingSecurity.xml b/ManagingSecurity.xml index 451f85c..15fb167 100644 --- a/ManagingSecurity.xml +++ b/ManagingSecurity.xml @@ -18,7 +18,7 @@
22.1.1 How ACLs Work Implementing ACLs varies between operating systems. Systems that support the Portable Operating System Interface (POSIX) family of standards share a simple yet powerful file system permission model, which should be well-known to the Linux/Unix administrator. ACLs add finer-grained permissions to this model, allowing for more complicated permission schemes. For a detailed explanation of ACLs on Linux, refer to the SuSE Labs article, Posix Access Control Lists on Linux: - http://www.suse.de/~agruen/acl/linux-acls/online/ + http://www.suse.de/~agruen/acl/linux-acls/online/ We have implemented ACLs according to this model. Lustre works with the standard Linux ACL tools, setfacl, getfacl, and the historical chacl, normally installed with the ACL package. ACL support is a system-range feature, meaning that all clients have ACL enabled or not. You cannot specify which clients should enable ACL. diff --git a/ManagingStripingFreeSpace.xml b/ManagingStripingFreeSpace.xml index 00b000e..e5f846b 100644 --- a/ManagingStripingFreeSpace.xml +++ b/ManagingStripingFreeSpace.xml @@ -85,25 +85,25 @@ Use the lfs setstripe command to create new files with a specific file layout (stripe pattern) configuration. lfs setstripe [--size|-s stripe_size] [--count|-c stripe_count] [--index|-i start_ost] [--pool|-p pool_name] <filename|dirname> - - stripe_size - + + stripe_size + The stripe_size indicates how much data to write to one OST before moving to the next OST. The default stripe_size is 1 MB, and passing a stripe_size of 0 causes the default stripe size to be used. Otherwise, the stripe_size value must be a multiple of 64 KB. - - stripe_count - + + stripe_count + The stripe_count indicates how many OSTs to use. The default stripe_count value is 1. Setting stripe_count to 0 causes the default stripe count to be used. Setting stripe_count to -1 means stripe over all available OSTs (full OSTs are skipped). - - start_ost - + + start_ost + The start OST is the first OST to which files are written. The default value for start_ost is -1, which allows the MDS to choose the starting index. This setting is strongly recommended, as it allows space and load balancing to be done by the MDS as needed. Otherwise, the file starts on the specified OST index. The numbering of the OSTs starts at 0. If you pass a start_ost value of 0 and a stripe_count value of 1, all files are written to OST 0, until space is exhausted. This is probably not what you meant to do. If you only want to adjust the stripe count and keep the other parameters at their default settings, do not specify any of the other parameters: lfs setstripe -c <stripe_count> <file> - - pool_name - + + pool_name + Specify the OST pool on which the file will be written. This allows limiting the OSTs used to a subset of all OSTs in the file system. For more details about using OST pools, see Creating and Managing OST Pools.
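To make the options above concrete, a brief sketch follows; the file, directory and mount point names are hypothetical, and the resulting layout is checked with lfs getstripe, shown later in this chapter:
$ lfs setstripe -s 4M -c 2 /mnt/lustre/dir/bigfile    # create a new file with 4 MB stripes across two OSTs; MDS picks the start OST
$ lfs setstripe -c -1 /mnt/lustre/dir                 # new files in this directory stripe over all available OSTs
$ lfs getstripe /mnt/lustre/dir/bigfile               # confirm the resulting layout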
18.3.1 Using a Specific Striping Pattern/File Layout for a Single File @@ -184,7 +184,7 @@ bob 18.4.2 Inspecting the File Tree To inspect an entire tree of files, use the lfs find command: lfs find [--recursive | -r] <file or directory> ... - You can also use ls -l /proc/<pid>/fd/ to find open files using Lustre. For example: + You can also use ls -l /proc/<pid>/fd/ to find open files using Lustre. For example: $ lfs getstripe $(readlink /proc/$(pidof cat)/fd/1) Typical output is: /mnt/lustre/foo @@ -230,9 +230,9 @@ group - - -i, --inodes - + + -i, --inodes + Lists inodes instead of block usage. diff --git a/UpgradingLustre.xml b/UpgradingLustre.xml index 55cb420..04c7837 100644 --- a/UpgradingLustre.xml +++ b/UpgradingLustre.xml @@ -112,7 +112,7 @@ lustre-ldiskfs-<ver> - If you have a problem upgrading Lustre, contact us via the Whamcloud Jira bug tracker. + If you have a problem upgrading Lustre, contact us via the Whamcloud Jira bug tracker.
-- 1.8.3.1