From ce46349ba66bafc89ae7a78626e281090936073c Mon Sep 17 00:00:00 2001
From: Ned Bass
Date: Fri, 7 Sep 2012 13:09:16 -0700
Subject: [PATCH] LUDOC-11 fix assorted minor errors

A number of grammatical, spelling, and formatting errors were found by
running the html manual through aspell -H.

Signed-off-by: Ned Bass
Change-Id: Ie4f3cfba62cfb452c1bddb21a3137142dada9d47
Reviewed-on: http://review.whamcloud.com/3908
Tested-by: Hudson
Reviewed-by: Richard Henwood
---
 BackupAndRestore.xml              |  6 +++---
 BenchmarkingTests.xml             | 12 ++++++------
 ConfiguringLNET.xml               |  8 ++++----
 ConfiguringLustre.xml             |  2 +-
 ConfiguringStorage.xml            |  4 ++--
 Glossary.xml                      |  2 +-
 LNETSelfTest.xml                  |  4 ++--
 LustreDebugging.xml               |  4 ++--
 LustreMaintenance.xml             | 38 +++++++++++++++++++-------------------
 LustreOperations.xml              |  3 +--
 LustreProc.xml                    |  4 ++--
 LustreProgrammingInterfaces.xml   |  6 +++---
 LustreRecovery.xml                |  8 ++++----
 LustreTroubleshooting.xml         |  4 ++--
 LustreTuning.xml                  |  2 +-
 ManagingFailover.xml              |  2 +-
 ManagingFileSystemIO.xml          |  6 +++---
 ManagingSecurity.xml              |  4 ++--
 ManagingStripingFreeSpace.xml     |  4 ++--
 SettingLustreProperties.xml       | 10 +++++-----
 SettingUpLustreSystem.xml         |  4 ++--
 SystemConfigurationUtilities.xml  | 16 ++++++++--------
 UnderstandingLustre.xml           |  4 ++--
 UnderstandingLustreNetworking.xml |  2 +-
 24 files changed, 79 insertions(+), 80 deletions(-)

diff --git a/BackupAndRestore.xml b/BackupAndRestore.xml
index 7d4b4c1..ef5b83a 100644
--- a/BackupAndRestore.xml
+++ b/BackupAndRestore.xml
@@ -99,7 +99,7 @@
 --user=<user id>
- The changelog user ID for the specified MDT. To use lustre_rsync, the changelog user must be registered. For details, see the changelog_register parameter in (lctl). This is a mandatory option if a valid status log created during a previous synchronization operation (--statusl) is not specified.
+ The changelog user ID for the specified MDT. To use lustre_rsync, the changelog user must be registered. For details, see the changelog_register parameter in (lctl). 
This is a mandatory option if a valid status log created during a previous synchronization operation (--statuslog) is not specified. @@ -107,7 +107,7 @@ --statuslog=<log> - A log file to which synchronization status is saved. When the lustre_rsync utility starts, if the status log from a previous synchronization operation is specified, then the state is read from the log and otherwise mandatory --source, --target and --mdt options can be skipped. Specifying the --source, --target and/or --mdt options, in addition to the --statuslog option, causes the specified parameters in the status log to be overriden. Command line options take precedence over options in the status log. + A log file to which synchronization status is saved. When the lustre_rsync utility starts, if the status log from a previous synchronization operation is specified, then the state is read from the log and otherwise mandatory --source, --target and --mdt options can be skipped. Specifying the --source, --target and/or --mdt options, in addition to the --statuslog option, causes the specified parameters in the status log to be overridden. Command line options take precedence over options in the status log. @@ -405,7 +405,7 @@ cfs21:~# ls /mnt/main fstab passwd
- <indexterm><primary>backup</primary><secondary>using LVM</secondary><tertiary>createing snapshots</tertiary></indexterm>Creating Snapshot Volumes + <indexterm><primary>backup</primary><secondary>using LVM</secondary><tertiary>creating snapshots</tertiary></indexterm>Creating Snapshot Volumes Whenever you want to make a "checkpoint" of the main Lustre file system, create LVM snapshots of all target MDT and OSTs in the LVM-based backup file system. You must decide the maximum size of a snapshot ahead of time, although you can dynamically change this later. The size of a daily snapshot is dependent on the amount of data changed daily in the main Lustre file system. It is likely that a two-day old snapshot will be twice as big as a one-day old snapshot. You can create as many snapshots as you have room for in the volume group. If necessary, you can dynamically add disks to the volume group. The snapshots of the target MDT and OSTs should be taken at the same point in time. Make sure that the cronjob updating the backup file system is not running, since that is the only thing writing to the disks. Here is an example: diff --git a/BenchmarkingTests.xml b/BenchmarkingTests.xml index 2d41e71..0f10348 100644 --- a/BenchmarkingTests.xml +++ b/BenchmarkingTests.xml @@ -94,7 +94,7 @@ Array performance with all LUNs loaded does not always match the performance of a single LUN when tested in isolation. - Prequisites: + Prerequisites: sgp_dd tool in the sg3_utils package @@ -128,7 +128,7 @@
Running sgpdd_survey The sgpdd_survey script must be customized for the particular device being tested and for the location where the script saves its working and result files (by specifying the ${rslt} variable). Customization variables are described at the beginning of the script. - When the sgpdd_survey script runs, it creates a number of working files and a pair of result files. The names of all the files created start with the prefixdefined in the variable ${rslt}. (The default value is /tmp.) The files include: + When the sgpdd_survey script runs, it creates a number of working files and a pair of result files. The names of all the files created start with the prefix defined in the variable ${rslt}. (The default value is /tmp.) The files include: File containing standard output data (same as stdout) @@ -161,7 +161,7 @@ thr - Number of threads generating I/O (1 thread in above example). - crg - Current regions, the number of disjount areas on the disk to which I/O is being sent (1 region in above example, indicating that no seeking is done). + crg - Current regions, the number of disjoint areas on the disk to which I/O is being sent (1 region in above example, indicating that no seeking is done). MB/s - Aggregate bandwidth measured by dividing the total amount of data by the elapsed time (180.45 MB/s in the above example). @@ -244,7 +244,7 @@ Determine the OST names. - On the OSS nodes to be tested, run the lctldl command. The OST device names are listed in the fourth column of the output. For example: + On the OSS nodes to be tested, run the lctl dl command. The OST device names are listed in the fourth column of the output. For example: $ lctl dl |grep obdfilter 0 UP obdfilter lustre-OST0001 lustre-OST0001_UUID 1159 2 UP obdfilter lustre-OST0002 lustre-OST0002_UUID 1159 @@ -338,7 +338,7 @@ List all OSCs you want to test. - Use the target=parameter to list the OSCs separated by spaces. 
List the individual OSCs by name seperated by spaces using the format <fsname>-<OST_name>-osc-<OSC_number> (for example, lustre-OST0000-osc-ffff88007754bc00). You do not have to specify an MDS or LOV. + Use the target=parameter to list the OSCs separated by spaces. List the individual OSCs by name separated by spaces using the format <fsname>-<OST_name>-osc-<OSC_number> (for example, lustre-OST0000-osc-ffff88007754bc00). You do not have to specify an MDS or LOV. Run the obdfilter_survey script with the target=parameter and case=netdisk. @@ -802,7 +802,7 @@ mdd, mdt, osd (current lustre version only supports mdd stack). It can be used w IO - Lustre target operations statistics - JBD - ldisfs journal statistics + JBD - ldiskfs journal statistics CLIENT - Lustre OSC request statistics diff --git a/ConfiguringLNET.xml b/ConfiguringLNET.xml index 5ddfa75..419233c 100644 --- a/ConfiguringLNET.xml +++ b/ConfiguringLNET.xml @@ -72,7 +72,7 @@ Examples are: 10.67.73.200@tcp0 10.67.75.100@o2ib - The first entry above identifes a TCP/IP node, while the second entry identifies an InfiniBand node. + The first entry above identifies a TCP/IP node, while the second entry identifies an InfiniBand node. When a mount command is run on a client, the client uses the NID of the MDS to retrieve configuration information. If an MDS has more than one NID, the client should use the appropriate NID for its local network. To determine the appropriate NID to specify in the mount command, use the lctl command. To display MDS NIDs, run on the MDS : lctl list_nids @@ -95,7 +95,7 @@ LNET lines in lustre.conf are only used by the local node to determine what to call its interfaces. They are not used for routing decisions.
- <indexterm><primary>configuing</primary><secondary>multihome</secondary></indexterm>Multihome Server Example + <indexterm><primary>configuring</primary><secondary>multihome</secondary></indexterm>Multihome Server Example If a server with multiple IP addresses (multihome server) is connected to a Lustre network, certain configuration setting are required. An example illustrating these setting consists of a network with the following nodes: @@ -164,7 +164,7 @@ tcp0 192.168.0.*; o2ib0 132.6.[1-3].[2-8/2]"'
<indexterm><primary>LNET</primary><secondary>routes</secondary></indexterm>Setting the LNET Module routes Parameter - The LNET module routes parameter is used to identify routers in a Lustre configuration. These parameters are set in modprob.conf on each Lustre node. + The LNET module routes parameter is used to identify routers in a Lustre configuration. These parameters are set in modprobe.conf on each Lustre node. The LNET routes parameter specifies a colon-separated list of router definitions. Each route is defined as a network number, followed by a list of routers: routes=<net type> <router NID(s)> This example specifies bi-directional routing in which TCP clients can reach Lustre resources on the IB networks and IB servers can access the TCP networks: @@ -180,7 +180,7 @@ tcp0 192.168.0.*; o2ib0 132.6.[1-3].[2-8/2]"' On the router nodes, use: lnet networks="tcp o2ib" forwarding=enabled On the MDS, use the reverse as shown below: - lnet networks="o2ib0" rountes="tcp0 132.6.1.[1-8]@o2ib0" + lnet networks="o2ib0" routes="tcp0 132.6.1.[1-8]@o2ib0" To start the routers, run: modprobe lnet lctl network configure diff --git a/ConfiguringLustre.xml b/ConfiguringLustre.xml index 0ea16f2..12e813f 100644 --- a/ConfiguringLustre.xml +++ b/ConfiguringLustre.xml @@ -682,7 +682,7 @@ filesystem summary: 2.5M 32 2.5M 0% /lustre For examples using these utilities, see the topic - The lfs utility is usful for configuring and querying a variety of options related to files. For more information, see . + The lfs utility is useful for configuring and querying a variety of options related to files. For more information, see . Some sample scripts are included in the directory where Lustre is installed. If you have installed the Lustre source code, the scripts are located in the lustre/tests sub-directory. These scripts enable quick setup of some simple standard Lustre configurations. 
diff --git a/ConfiguringStorage.xml b/ConfiguringStorage.xml index 1403936..b0d1cf6 100644 --- a/ConfiguringStorage.xml +++ b/ConfiguringStorage.xml @@ -1,7 +1,7 @@ Configuring Storage on a Lustre File System - This chapter describes best practices for storage selection and file system options to optimize perforance on RAID, and includes the following sections: + This chapter describes best practices for storage selection and file system options to optimize performance on RAID, and includes the following sections: @@ -30,7 +30,7 @@ - It is strongly recommended that hardware RAID be used with Lustre. Lustre currently does not support any redundancy at the file system level and RAID is required to protect agains disk failure. + It is strongly recommended that hardware RAID be used with Lustre. Lustre currently does not support any redundancy at the file system level and RAID is required to protect against disk failure.
diff --git a/Glossary.xml b/Glossary.xml index 4bce523..f1c670a 100644 --- a/Glossary.xml +++ b/Glossary.xml @@ -76,7 +76,7 @@ <glossterm>EA </glossterm> <glossdef> - <para>Extended Attribute. A small amount of data which can be retrieved through a name associated with a particular inode. Lustre uses EAa to store striping information (location of file data on OSTs). Examples of extended attributes are ACLs, striping information, and crypto keys.</para> + <para>Extended Attribute. A small amount of data which can be retrieved through a name associated with a particular inode. Lustre uses EAs to store striping information (location of file data on OSTs). Examples of extended attributes are ACLs, striping information, and crypto keys.</para> </glossdef> </glossentry> <glossentry xml:id="eviction"> diff --git a/LNETSelfTest.xml b/LNETSelfTest.xml index 6e13922..cee51e8 100644 --- a/LNETSelfTest.xml +++ b/LNETSelfTest.xml @@ -386,7 +386,7 @@ $ lst add_group clients 192.168.1.[10-100]@tcp 192.168.[2,4].\ <para> unknown</para> </entry> <entry> - <para> The node'™s status has yet to be determined.</para> + <para> The node's status has yet to be determined.</para> </entry> </row> <row> @@ -750,7 +750,7 @@ $ lst add_test --batch bulkperf --loop 100 --concurrency 4 \ <literal> --test<replaceable><index></replaceable></literal> </entry> <entry nameend="c3" namest="c2"> - <para>Lists tests in a batch. If no option is used, all tests in the batch are listed. IIf one of these options are used, only specified tests in the batch are listed:</para> + <para>Lists tests in a batch. If no option is used, all tests in the batch are listed. 
If one of these options are used, only specified tests in the batch are listed:</para> </entry> </row> <row> diff --git a/LustreDebugging.xml b/LustreDebugging.xml index 8402265..4b21340 100644 --- a/LustreDebugging.xml +++ b/LustreDebugging.xml @@ -49,7 +49,7 @@ Lustre Debugging Tools lfs - - This utility provides access to the extended attributes (EAs) of a Lustre file (along with other information). For more inforamtion about lfs, see . + - This utility provides access to the extended attributes (EAs) of a Lustre file (along with other information). For more information about lfs, see .
@@ -156,7 +156,7 @@ Lustre Debugging Tools The procedures below may be useful to administrators or developers debugging a Lustre files system.
<indexterm><primary>debugging</primary><secondary>message format</secondary></indexterm>Understanding the Lustre Debug Messaging Format - Lustre debug messages are categorized by originating sybsystem, message type, and locaton in the source code. For a list of subsystems and message types, see . + Lustre debug messages are categorized by originating subsystem, message type, and location in the source code. For a list of subsystems and message types, see . For a current list of subsystems and debug message types, see libcfs/include/libcfs/libcfs_debug.h in the Lustre tree diff --git a/LustreMaintenance.xml b/LustreMaintenance.xml index 4af4f65..b59943b 100644 --- a/LustreMaintenance.xml +++ b/LustreMaintenance.xml @@ -39,8 +39,8 @@
- <indexterm><primary>maintance</primary></indexterm> - <indexterm><primary>maintance</primary><secondary>inactive OSTs</secondary></indexterm> + <indexterm><primary>maintenance</primary></indexterm> + <indexterm><primary>maintenance</primary><secondary>inactive OSTs</secondary></indexterm> Working with Inactive OSTs To mount a client or an MDT with one or more inactive OSTs, run commands similar to this: client> mount -o exclude=testfs-OST0000 -t lustre \ @@ -53,7 +53,7 @@
- <title><indexterm><primary>maintance</primary><secondary>finding nodes</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>finding nodes</secondary></indexterm> Finding Nodes in the Lustre File System There may be situations in which you need to find all nodes in your Lustre file system or get the names of all OSTs. To get a list of all Lustre nodes, run this command on the MGS: @@ -81,7 +81,7 @@ Finding Nodes in the Lustre File System 1: lustre-OST0001_UUID ACTIVE
- <title><indexterm><primary>maintance</primary><secondary>mounting a server</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>mounting a server</secondary></indexterm> Mounting a Server Without Lustre Service If you are using a combined MGS/MDT, but you only want to start the MGS and not the MDT, run this command: mount -t lustre <MDT partition> -o nosvc <mount point> @@ -90,7 +90,7 @@ Mounting a Server Without Lustre Service $ mount -t lustre -L testfs-MDT0000 -o nosvc /mnt/test/mdt
- <title><indexterm><primary>maintance</primary><secondary>regenerating config logs</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>regenerating config logs</secondary></indexterm> Regenerating Lustre Configuration Logs If the Lustre system's configuration logs are in a state where the file system cannot be started, use the writeconf command to erase them. After the writeconf command is run and the servers restart, the configuration logs are re-generated and stored on the MGS (as in a new file system). You should only use the writeconf command if: @@ -188,7 +188,7 @@ Regenerating Lustre Configuration Logs After the writeconf command is run, the configuration logs are re-generated as servers restart.
- <title><indexterm><primary>maintance</primary><secondary>changing a NID</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>changing a NID</secondary></indexterm> Changing a Server NID If you need to change the NID on the MDT or an OST, run the writeconf command to erase Lustre configuration information (including server NIDs), and then re-generate the system configuration using updated server NIDs. Change a server NID in these situations: @@ -262,7 +262,7 @@ Changing a Server NID After the writeconf command is run, the configuration logs are re-generated as servers restart, and server NIDs in the updated list_nids file are used.
- <title><indexterm><primary>maintance</primary><secondary>adding a OST</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>adding a OST</secondary></indexterm> Adding a New OST to a Lustre File System To add an OST to existing Lustre file system: @@ -286,8 +286,8 @@ $ mount -t lustre /dev/sda /mnt/test/ost12
- <title><indexterm><primary>maintance</primary><secondary>restoring a OST</secondary></indexterm> - <indexterm><primary>maintance</primary><secondary>removing a OST</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>restoring a OST</secondary></indexterm> + <indexterm><primary>maintenance</primary><secondary>removing a OST</secondary></indexterm> Removing and Restoring OSTs OSTs can be removed from and restored to a Lustre file system. Currently in Lustre, removing an OST really means that the OST is 'deactivated' in the file system, not permanently removed. A removed OST still appears in the file system; do not create a new OST with the same name. You may want to remove (deactivate) an OST and prevent new files from being written to it in several situations: @@ -300,7 +300,7 @@ Removing and Restoring OSTs
- <title><indexterm><primary>maintance</primary><secondary>removing a OST</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>removing a OST</secondary></indexterm> Removing an OST from the File System OSTs can be removed from a Lustre file system. Currently in Lustre, removing an OST actually means that the OST is 'deactivated' from the file system, not permanently removed. A removed OST still appears in the device listing; you should not normally create a new OST with the same name. You may want to deactivate an OST and prevent new files from being written to it in several situations: @@ -323,7 +323,7 @@ Removing and Restoring OSTs List all OSCs on the node, along with their device numbers. Run: - lctldl|grep " osc " + lctl dl|grep " osc " This is sample lctl dl | grep 11 UP osc lustre-OST-0000-osc-cac94211 4ea5b30f-6a8e-55a0-7519-2f20318ebdb4 5 12 UP osc lustre-OST-0001-osc-cac94211 4ea5b30f-6a8e-55a0-7519-2f20318ebdb4 5 @@ -388,7 +388,7 @@ Removing and Restoring OSTs
- <title><indexterm><primary>maintance</primary><secondary>backing up OST config</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>backing up OST config</secondary></indexterm> <indexterm><primary>backup</primary><secondary>OST config</secondary></indexterm> Backing Up OST Configuration Files If the OST device is still accessible, then the Lustre configuration files on the OST should be backed up and saved for future use in order to avoid difficulties when a replacement OST is returned to service. These files rarely change, so they can and should be backed up while the OST is functional and accessible. If the deactivated OST is still available to mount (i.e. has not permanently failed or is unmountable due to severe corruption), an effort should be made to preserve these files. @@ -414,7 +414,7 @@
- <title><indexterm><primary>maintance</primary><secondary>restoring OST config</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>restoring OST config</secondary></indexterm> <indexterm><primary>backup</primary><secondary>restoring OST config</secondary></indexterm> Restoring OST Configuration Files If the original OST is still available, it is best to follow the OST backup and restore procedure given in either , or and . @@ -462,14 +462,14 @@ The CONFIGS/mountdata file is created by mkfs.lustre
- <title><indexterm><primary>maintance</primary><secondary>reintroducing an OSTs</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>reintroducing an OSTs</secondary></indexterm> Returning a Deactivated OST to Service If the OST was permanently deactivated, it needs to be reactivated in the MGS configuration. [mgs]# lctl conf_param {OST name}.osc.active=1 If the OST was temporarily deactivated, it needs to be reactivated on the MDS and clients. [mds]# lctl --device <devno> activate [client]# lctl set_param osc.<fsname>-<OST name>-*.active=0
- <title><indexterm><primary>maintance</primary><secondary>aborting recovery</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>aborting recovery</secondary></indexterm> <indexterm><primary>backup</primary><secondary>aborting recovery</secondary></indexterm> Aborting Recovery You can abort recovery with either the lctl utility or by mounting the target with the abort_recov option (mount -o abort_recov). When starting a target, run: $ mount -t lustre -L <MDT name> -o abort_recov <mount_point> @@ -478,7 +478,7 @@
- <title><indexterm><primary>maintance</primary><secondary>identifying OST host</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>identifying OST host</secondary></indexterm> Determining Which Machine is Serving an OST In the course of administering a Lustre file system, you may need to determine which machine is serving a specific OST. It is not as simple as identifying the machine’s IP address, as IP is only one of several networking protocols that Lustre uses and, as such, LNET does not use IP addresses as node identifiers, but NIDs instead. To identify the NID that is serving a specific OST, run one of the following commands on a client (you do not need to be a root user): client$ lctl get_param osc.${fsname}-${OSTname}*.ost_conn_uuid For example: client$ lctl get_param osc.*-OST0000*.ost_conn_uuid osc.lustre-OST0000-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp- OR - client$ lctl get_param osc.*.ost_conn_uuid @@ -489,7 +489,7 @@ osc.lustre-OST0003-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp osc.lustre-OST0004-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp
- <title><indexterm><primary>maintance</primary><secondary>changing failover node address</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>changing failover node address</secondary></indexterm> Changing the Address of a Failover Node To change the address of a failover node (e.g, to use node X instead of node Y), run this command on the OSS/OST partition: tunefs.lustre --erase-params --failnode=<NID> <device> @@ -498,7 +498,7 @@
- <title><indexterm><primary>maintance</primary><secondary>separate a combined MGS/MDT</secondary></indexterm> + <title><indexterm><primary>maintenance</primary><secondary>separate a combined MGS/MDT</secondary></indexterm> Separate a combined MGS/MDT These instructions assume the MGS node will be the same as the MDS node. For instructions on how to move MGS to a different node, see . These instructions are for doing the split without shutting down other servers and clients. @@ -516,7 +516,7 @@ Separate a combined MGS/MDT Copy the configuration data from MDT disk to the new MGS disk. mount -t ldiskfs -o ro <mdt-device> <mdt-mount-point> mount -t ldiskfs -o rw <mgs-device> <mgs-mount-point> - cp -r <mdt-moint-point>/CONFIGS/<filesystem-name>-* <mgs-mount-point>/CONFIGS/. + cp -r <mdt-mount-point>/CONFIGS/<filesystem-name>-* <mgs-mount-point>/CONFIGS/. umount <mgs-mount-point> umount <mdt-mount-point> See for alternative method. diff --git a/LustreOperations.xml b/LustreOperations.xml index 0a4819c..a77e4f0 100644 --- a/LustreOperations.xml +++ b/LustreOperations.xml @@ -359,8 +359,7 @@ mds1> cat /proc/fs/lustre/mds/testfs-MDT0000/recovery_status # debugfs -c -R "stat /O/0/d$((34976 % 32))/34976" /dev/lustre/ost_test2 The command output is: debugfs 1.42.3.wc3 (15-Aug-2012) -/dev/lustre/ost_test2: catastrophic mode - not reading inode or group bitma\ -ps +/dev/lustre/ost_test2: catastrophic mode - not reading inode or group bitmaps Inode: 352365 Type: regular Mode: 0666 Flags: 0x80000 Generation: 2393149953 Version: 0x0000002a:00005f81 User: 1000 Group: 1000 Size: 260096
- The first issue is resolved by using statahead local dcache, and the second one is resolved by using asynchronous glimpse lock (AGL) RPCs for per-fetching file size/block attributes from OST(s). + The first issue is resolved by using statahead local dcache, and the second one is resolved by using asynchronous glimpse lock (AGL) RPCs for pre-fetching file size/block attributes from OST(s).
Tuning File Readahead File readahead is triggered when two or more sequential reads by an application fail to be satisfied by the Linux buffer cache. The size of the initial readahead is 1 MB. Additional readaheads grow linearly, and increment until the readahead cache on the client is full at 40 MB. @@ -1061,7 +1061,7 @@ disk io size rpcs % cum % | rpcs % cum %
<indexterm><primary>proc</primary><secondary>read cache</secondary></indexterm>OSS Read Cache - The OSS read cache feature provides read-only caching of data on an OSS. This functionality uses the regular Linux page cache to store the data. Just like caching from a regular filesytem in Linux, OSS read cache uses as much physical memory as is allocated. + The OSS read cache feature provides read-only caching of data on an OSS. This functionality uses the regular Linux page cache to store the data. Just like caching from a regular filesystem in Linux, OSS read cache uses as much physical memory as is allocated. OSS read cache improves Lustre performance in these situations: diff --git a/LustreProgrammingInterfaces.xml b/LustreProgrammingInterfaces.xml index de7d0c3..300645c 100644 --- a/LustreProgrammingInterfaces.xml +++ b/LustreProgrammingInterfaces.xml @@ -25,7 +25,7 @@
Description - The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the releated identity_downcall_data data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info. + The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the related identity_downcall_data data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info. For a sample upcall program, see lustre/utils/l_getidentity.c in the Lustre source distribution.
Primary and Secondary Groups @@ -57,7 +57,7 @@ * the valid values for perms are: * setuid/setgid/setgrp/rmtacl -- enable corresponding perm * nosetuid/nosetgid/nosetgrp/normtacl -- disable corresponding perm -* they can be listed together, seperated by ',', +* they can be listed together, separated by ',', * when perm and noperm are in the same line (item), noperm is preferential, * when they are in different lines (items), the latter is preferential, * '*' nid is as default perm, and is not preferential.*/ @@ -104,7 +104,7 @@ l_getidentity [-v] -s
Description - The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the releated identity_downcall_data data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info. + The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the related identity_downcall_data data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info. l_getidentity is the reference implementation of the user/group cache upcall.
diff --git a/LustreRecovery.xml b/LustreRecovery.xml index 3a9eb3c..cbd7095 100644 --- a/LustreRecovery.xml +++ b/LustreRecovery.xml @@ -26,7 +26,7 @@ <indexterm><primary>recovery</primary></indexterm> <indexterm><primary>recovery</primary><secondary>VBR</secondary><see>version-based recovery</see></indexterm> - <indexterm><primary>recovery</primary><secondary>commit on share</secondary><see>ommit on share</see></indexterm> + <indexterm><primary>recovery</primary><secondary>commit on share</secondary><see>commit on share</see></indexterm> <indexterm><primary>lustre</primary><secondary>recovery</secondary><see>recovery</see></indexterm> Recovery Overview Lustre's recovery feature is responsible for dealing with node or network failure and returning the cluster to a consistent, performant state. Because Lustre allows servers to perform asynchronous update operations to the on-disk file system (i.e., the server can reply without waiting for the update to synchronously commit to disk), the clients may have state in memory that is newer than what the server can recover from disk after a crash. @@ -135,7 +135,7 @@
Transaction Numbers - Each client request processed by the server that involves any state change (metadata update, file open, write, etc., depending on server type) is assigned a transaction number by the server that is a target-unique, monontonically increasing, server-wide 64-bit integer. The transaction number for each file system-modifying request is sent back to the client along with the reply to that client request. The transaction numbers allow the client and server to unambiguously order every modification to the file system in case recovery is needed. + Each client request processed by the server that involves any state change (metadata update, file open, write, etc., depending on server type) is assigned a transaction number by the server that is a target-unique, monotonically increasing, server-wide 64-bit integer. The transaction number for each file system-modifying request is sent back to the client along with the reply to that client request. The transaction numbers allow the client and server to unambiguously order every modification to the file system in case recovery is needed. Each reply sent to a client (regardless of request type) also contains the last committed transaction number that indicates the highest transaction number committed to the file system. The ldiskfs backing file system that Lustre uses enforces the requirement that any earlier disk operation will always be committed to disk before a later disk operation, so the last committed transaction number also reports that any requests with a lower transaction number have been committed to disk.
@@ -231,7 +231,7 @@ There are two scenarios under which client RPCs are not replayed: (1) Non-functioning or isolated clients do not reconnect, and they cannot replay their RPCs, causing a gap in the replay sequence. These clients get errors and are evicted. (2) Functioning clients connect, but they cannot replay some or all of their RPCs that occurred after the gap caused by the non-functioning/isolated clients. These clients get errors (caused by the failed clients). With VBR, these requests have a better chance to replay because the "gaps" are only related to specific files that the missing client(s) changed. - In pre-VBR versions of Lustre, if the MGS or an OST went down and then recovered, a recovery process was triggered in which clients attempted to replay their requests. Clients were only allowed to replay RPCs in serial order. If a particular client could not replay its requests, then those requests were lost as well as the requests of clients later in the sequence. The ''downstream'' clients never got to replay their requests because of the wait on the earlier client'™s RPCs. Eventually, the recovery period would time out (so the component could accept new requests), leaving some number of clients evicted and their requests and data lost. + In pre-VBR versions of Lustre, if the MGS or an OST went down and then recovered, a recovery process was triggered in which clients attempted to replay their requests. Clients were only allowed to replay RPCs in serial order. If a particular client could not replay its requests, then those requests were lost as well as the requests of clients later in the sequence. The ''downstream'' clients never got to replay their requests because of the wait on the earlier client's RPCs. Eventually, the recovery period would time out (so the component could accept new requests), leaving some number of clients evicted and their requests and data lost.
With VBR, the recovery mechanism does not result in the loss of clients or their data, because changes in inode versions are tracked, and more clients are able to reintegrate into the cluster. With VBR, inode tracking looks like this: @@ -344,7 +344,7 @@ Imperative recovery can also be disabled on the client side with the same mount option: # mount -t lustre -onoir mymgsnid@tcp:/testfs /mnt/testfs When a single client is deactivated in this manner, the MGS will deactivate imperative recovery for the whole cluster. IR-enabled clients will still get notification of target restart, but targets will not be allowed to shorten the recovery window. - You can also disable imperative recovery globally on the MGS by writing `state=disabled’ to the controling procfs entry + You can also disable imperative recovery globally on the MGS by writing `state=disabled’ to the controlling procfs entry # lctl set_param mgs.MGS.live.testfs="state=disabled" The above command will disable imperative recovery for file system named testfs
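The version check that VBR applies during replay can be modeled in a few lines. This is an illustrative sketch, not Lustre code — the `replay()` helper and the version table are invented for this example:

```python
# Illustrative model of version-based recovery (VBR), per the description
# above (not Lustre code): each replayed request carries the inode version
# it saw, and replay succeeds only if the server's current version for
# that inode still matches, so gaps left by missing clients affect only
# the files those clients touched.
def replay(requests, server_versions):
    """requests: list of (inode, pre_version, new_version) tuples."""
    replayed, failed = [], []
    for inode, pre_version, new_version in requests:
        if server_versions.get(inode) == pre_version:
            server_versions[inode] = new_version   # reintegrate the change
            replayed.append(inode)
        else:
            failed.append(inode)                   # version gap: cannot replay
    return replayed, failed

# Inode "b" was changed by a missing client, so its version moved past
# what the surviving client recorded; "a" replays fine.
versions = {"a": 1, "b": 5}
ok, bad = replay([("a", 1, 2), ("b", 3, 4)], versions)
print(ok, bad)   # -> ['a'] ['b']
```

Contrast this with strictly serial pre-VBR replay, where the gap on "b" would have blocked every later request regardless of which file it touched.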
diff --git a/LustreTroubleshooting.xml b/LustreTroubleshooting.xml index d45fbc0..b14f0aa 100644 --- a/LustreTroubleshooting.xml +++ b/LustreTroubleshooting.xml @@ -201,7 +201,7 @@
- <indexterm><primary>troubleshooting</primary><secondary>reporting bugs</secondary></indexterm><indexterm><primary>reporting bugs</primary><see>troubleshooing</see></indexterm> + <title><indexterm><primary>troubleshooting</primary><secondary>reporting bugs</secondary></indexterm><indexterm><primary>reporting bugs</primary><see>troubleshooting</see></indexterm> Reporting a Lustre Bug If, after troubleshooting your Lustre system, you cannot resolve the problem, consider reporting a Lustre bug. The process for reporting a bug is described in the Lustre wiki topic Reporting Bugs. You can also post a question to the lustre-discuss mailing list or search the lustre-discuss Archives for information about your issue. @@ -473,7 +473,7 @@ ptlrpc_main+0x42e/0x7c0 [ptlrpc]
<indexterm><primary>troubleshooting</primary><secondary>timeouts on setup</secondary></indexterm>Handling Timeouts on Initial Lustre Setup - If you come across timeouts or hangs on the initial setup of your Lustre system, verify that name resolution for servers and clients is working correctly. Some distributions configure /etc/hosts sts so the name of the local machine (as reported by the 'hostname' command) is mapped to local host (127.0.0.1) instead of a proper IP address. + If you come across timeouts or hangs on the initial setup of your Lustre system, verify that name resolution for servers and clients is working correctly. Some distributions configure /etc/hosts so the name of the local machine (as reported by the 'hostname' command) is mapped to local host (127.0.0.1) instead of a proper IP address. This might produce this error: LustreError:(ldlm_handle_cancel()) received cancel for unknown lock cookie 0xe74021a4b41b954e from nid 0x7f000001 (0:127.0.0.1) diff --git a/LustreTuning.xml b/LustreTuning.xml index 2afab4f..fa7e290 100644 --- a/LustreTuning.xml +++ b/LustreTuning.xml @@ -126,7 +126,7 @@
<indexterm><primary>tuning</primary><secondary>for small files</secondary></indexterm>Improving Lustre Performance When Working with Small Files - A Lustre environment where an application writes small file chunks from many clients to a single file will result in bad I/O performance. To improve Lustre'™s performance with small files: + A Lustre environment where an application writes small file chunks from many clients to a single file will result in bad I/O performance. To improve Lustre's performance with small files: Have the application aggregate writes some amount before submitting them to Lustre. By default, Lustre enforces POSIX coherency semantics, so it results in lock ping-pong between client nodes if they are all writing to the same file at one time. diff --git a/ManagingFailover.xml b/ManagingFailover.xml index a64f234..5f9a18e 100644 --- a/ManagingFailover.xml +++ b/ManagingFailover.xml @@ -8,7 +8,7 @@ - For information about high availability(HA) management software, see the Lustre wiki topic Using Red Hat Cluster Manager with Lustre or the Lustre wiki topic LuUsing Pacemaker with stre. + For information about high availability(HA) management software, see the Lustre wiki topic Using Red Hat Cluster Manager with Lustre or the Lustre wiki topic Using Pacemaker with Lustre.
diff --git a/ManagingFileSystemIO.xml b/ManagingFileSystemIO.xml index c336688..7360e6f 100644 --- a/ManagingFileSystemIO.xml +++ b/ManagingFileSystemIO.xml @@ -107,7 +107,7 @@ Last login: Wed Nov 26 13:35:12 2008 from 192.168.0.6</screen> <section remap="h3"> <title> <indexterm><primary>I/O</primary><secondary>migrating data</secondary></indexterm> - <indexterm><primary>maintance</primary><secondary>full OSTs</secondary></indexterm> + <indexterm><primary>maintenance</primary><secondary>full OSTs</secondary></indexterm> Migrating Data within a File System As stripes cannot be moved within the file system, data must be migrated manually by copying and renaming the file, removing the original file, and renaming the new file with the original file name. The simplest way to do this is to use the lfs_migrate command (see ). However, the steps for migrating a file by hand are also shown here for reference. @@ -157,7 +157,7 @@ filesystem summary: 11.8G 7.3G 3.9G 61% \
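The manual copy-and-rename migration steps above can be sketched with ordinary file operations. This is illustrative — `migrate_by_copy` and the `.migrate_tmp` suffix are invented for this example; on a real Lustre file system the copy step is what allocates objects on the newly chosen OSTs, and `lfs_migrate` automates the same sequence:

```python
# Sketch of the manual migration steps described above (illustrative;
# migrate_by_copy and the ".migrate_tmp" suffix are invented for this
# example, not part of any Lustre tool).
import os
import shutil
import tempfile

def migrate_by_copy(path):
    tmp = path + ".migrate_tmp"   # hypothetical temporary name
    shutil.copy2(path, tmp)       # copy: the new file gets a new layout
    os.rename(tmp, path)          # rename over the original file name

with tempfile.TemporaryDirectory() as d:
    f = os.path.join(d, "data.bin")
    with open(f, "wb") as fh:
        fh.write(b"payload")
    migrate_by_copy(f)
    with open(f, "rb") as fh:
        content = fh.read()
print(content)                    # -> b'payload'
```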
<indexterm><primary>I/O</primary><secondary>bringing OST online</secondary></indexterm> - <indexterm><primary>maintance</primary><secondary>bringing OST online</secondary></indexterm> + <indexterm><primary>maintenance</primary><secondary>bringing OST online</secondary></indexterm> Returning an Inactive OST Back Online Once the deactivated OST(s) no longer are severely imbalanced, due to either active or passive data redistribution, they should be reactivated so they will again have new files allocated on them. @@ -180,7 +180,7 @@ filesystem summary: 11.8G 7.3G 3.9G 61% \
<indexterm><primary>I/O</primary><secondary>pools</secondary></indexterm> - <indexterm><primary>maintance</primary><secondary>pools</secondary></indexterm> + <indexterm><primary>maintenance</primary><secondary>pools</secondary></indexterm> <indexterm><primary>pools</primary></indexterm> Creating and Managing OST Pools The OST pools feature enables users to group OSTs together to make object placement more flexible. A 'pool' is the name associated with an arbitrary subset of OSTs in a Lustre cluster. diff --git a/ManagingSecurity.xml b/ManagingSecurity.xml index a0899fb..7b327cb 100644 --- a/ManagingSecurity.xml +++ b/ManagingSecurity.xml @@ -46,7 +46,7 @@ $ lctl get_param -n mdc.home-MDT0000-mdc-*.connect_flags | grep acl acl To mount the client with no ACLs: $ mount -t lustre -o noacl ibmds2@o2ib:/home /home - ACLs are enabled in Lustre on a system-wide basis; either all clients enable ACLs or none do. Activating ACLs is controlled by MDS mount options acl / noacl (enable/disableACLs). Client-side mount options acl/noacl are ignored. You do not need to change the client configuration, and the 'acl' string will not appear in the client /etc/mtab. The client acl mount option is no longer needed. If a client is mounted with that option, then this message appears in the MDS syslog: + ACLs are enabled in Lustre on a system-wide basis; either all clients enable ACLs or none do. Activating ACLs is controlled by MDS mount options acl / noacl (enable/disable ACLs). Client-side mount options acl/noacl are ignored. You do not need to change the client configuration, and the 'acl' string will not appear in the client /etc/mtab. The client acl mount option is no longer needed. If a client is mounted with that option, then this message appears in the MDS syslog: ...MDS requires ACL support but client does not The message is harmless but indicates a configuration issue, which should be corrected.
If ACLs are not enabled on the MDS, then any attempts to reference an ACL on a client return an Operation not supported error. @@ -69,7 +69,7 @@ other::--- [root@client lustre]# setfacl -m user:chirag:rwx rain [root@client lustre]# ls -ld rain drwxrwx---+ 2 root root 4096 Feb 20 06:50 rain -[root@client lustre]# getfacl --omit-heade rain +[root@client lustre]# getfacl --omit-header rain user::rwx user:chirag:rwx group::r-x diff --git a/ManagingStripingFreeSpace.xml b/ManagingStripingFreeSpace.xml index a02cf62..1fc65be 100644 --- a/ManagingStripingFreeSpace.xml +++ b/ManagingStripingFreeSpace.xml @@ -27,7 +27,7 @@ spacestriping How Lustre Striping Works Lustre uses a round-robin algorithm for selecting the next OST to which a stripe is to be written. Normally the usage of OSTs is well balanced. However, if users create a small number of exceptionally large files or incorrectly specify striping parameters, imbalanced OST usage may result. - The MDS allocates objects on seqential OSTs. Periodically, it will adjust the striping layout to eliminate some degenerated cases where applications that create very regular file layouts (striping patterns) would preferentially use a particular OST in the sequence. + The MDS allocates objects on sequential OSTs. Periodically, it will adjust the striping layout to eliminate some degenerated cases where applications that create very regular file layouts (striping patterns) would preferentially use a particular OST in the sequence. Stripes are written to sequential OSTs until free space across the OSTs differs by more than 20%. The MDS will then use weighted random allocations with a preference for allocating objects on OSTs with more free space. This can reduce I/O performance until space usage is rebalanced to within 20% again. For a more detailed description of stripe assignments, see .
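The allocation policy just described — sequential (round-robin) OSTs while free space is balanced, weighted random allocation once usage differs by more than 20% — can be sketched as follows. This is an illustrative model, not the actual MDS allocator; `choose_ost` and its threshold handling are simplified:

```python
# Illustrative model of the allocator behaviour described above (not the
# real MDS allocator): round-robin while OST free space is balanced,
# switching to free-space-weighted random choice once usage differs by
# more than 20%.
import random

def choose_ost(free_space, rr_cursor, imbalance=0.20):
    """free_space: free bytes per OST; rr_cursor: round-robin position."""
    if max(free_space) - min(free_space) <= imbalance * max(free_space):
        return rr_cursor % len(free_space)        # balanced: sequential OSTs
    # Imbalanced: weighted random, preferring OSTs with more free space.
    return random.choices(range(len(free_space)), weights=free_space)[0]

balanced = [100, 95, 98, 92]
print([choose_ost(balanced, i) for i in range(4)])   # -> [0, 1, 2, 3]

imbalanced = [100, 10, 10, 10]   # OST 0 has far more free space
picks = [choose_ost(imbalanced, i) for i in range(1000)]
print(picks.count(0) > 500)      # OST 0 dominates -> True
```

The weighted phase reduces I/O performance somewhat, as the text notes, because allocations are no longer purely sequential until usage is rebalanced to within the threshold again.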
@@ -357,7 +357,7 @@ filesystem summary: 2211572 41924 \
- <indexterm><primary>space</primary><secondary>locationg weighting</secondary></indexterm>>Adjusting the Weighting Between Free Space and Location + <indexterm><primary>space</primary><secondary>location weighting</secondary></indexterm>Adjusting the Weighting Between Free Space and Location The weighting priority can be adjusted in the /proc file /proc/fs/lustre/lov/lustre-mdtlov/qos_prio_free. The default value is 90%. Use this command on the MGS to permanently change this weighting: lctl conf_param <fsname>-MDT0000.lov.qos_prio_free=90 Increasing this value puts more weighting on free space. When the free space priority is set to 100%, then location is no longer used in stripe-ordering calculations and weighting is based entirely on free space. diff --git a/SettingLustreProperties.xml b/SettingLustreProperties.xml index 5a6dd25..9d6f779 100644 --- a/SettingLustreProperties.xml +++ b/SettingLustreProperties.xml @@ -186,7 +186,7 @@ struct lov_user_ost_data_v1 lmm_objects[0]; lmm_magic - Specifies the format of the returned striping information. LOV_MAGIC_V1 isused for lov_user_md_v1. LOV_MAGIC_V3 is used for lov_user_md_v3. + Specifies the format of the returned striping information. LOV_MAGIC_V1 is used for lov_user_md_v1. LOV_MAGIC_V3 is used for lov_user_md_v3. @@ -544,7 +544,7 @@ int llapi_file_create(const char *name, unsigned long long EEXIST - triping information has already been set and cannot be altered; name already exists. + Striping information has already been set and cannot be altered; name already exists.
@@ -976,7 +976,7 @@ int get_file_info(char *path) return rc; } -/* Ping all OSTs that belong to this filesysem */ +/* Ping all OSTs that belong to this filesystem */ int ping_osts() { @@ -1029,7 +1029,7 @@ int main() } printf("Getting uuid list\n"); rc = get_my_uuids(file); - rintf("Write to the file\n"); + printf("Write to the file\n"); rc = write_file(file); rc = close_file(file); printf("Listing LOV data\n"); @@ -1038,7 +1038,7 @@ int main() rc = ping_osts(); /* the results should match lfs getstripe */ - printf("Confirming our results with lfs getsrtipe\n"); + printf("Confirming our results with lfs getstripe\n"); sprintf(sys_cmd, "/usr/bin/lfs getstripe %s/%s", MY_LUSTRE_DIR, TESTFILE); system(sys_cmd); diff --git a/SettingUpLustreSystem.xml b/SettingUpLustreSystem.xml index 3f8d4db..0f3c398 100644 --- a/SettingUpLustreSystem.xml +++ b/SettingUpLustreSystem.xml @@ -93,7 +93,7 @@ 2 KB/inode * 40 million inodes = 80 GB - Inode size depends on stripe count. Without any stripes an inode allocation of 512 bytes might be suffecient. As striping grows to ~80 stripes per file, the inode allocation is around 2KB. At 160 stripes, a rough estimate for inode allocation is 4.5KB. + Inode size depends on stripe count. Without any stripes an inode allocation of 512 bytes might be sufficient. As striping grows to ~80 stripes per file, the inode allocation is around 2KB. At 160 stripes, a rough estimate for inode allocation is 4.5KB. If the average file size is small, 4 KB for example, Lustre is not very efficient as the MDT uses as much space as the OSTs. However, this is not a common configuration for Lustre. @@ -301,7 +301,7 @@ 512 PB - Each OST or MDT on 64-bit kernel servers can have a file system up to 128 TB. On 32-bit systems, due to page cache limits, 16TB is the maximum block device size, which in turn applies to the size of OSTon 32-bit kernel servers. + Each OST or MDT on 64-bit kernel servers can have a file system up to 128 TB.
On 32-bit systems, due to page cache limits, 16TB is the maximum block device size, which in turn applies to the size of OST on 32-bit kernel servers. You can have multiple OST file systems on a single OSS node. diff --git a/SystemConfigurationUtilities.xml b/SystemConfigurationUtilities.xml index 672b58d..1aef062 100644 --- a/SystemConfigurationUtilities.xml +++ b/SystemConfigurationUtilities.xml @@ -141,7 +141,7 @@ l_getidentity
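The MDT sizing arithmetic above can be checked with a small calculation. This is illustrative — `mdt_space_gb` is invented for this example, and the per-inode figures are the rough estimates quoted in the text:

```python
# Back-of-the-envelope MDT sizing matching the example above ("2 KB/inode
# * 40 million inodes = 80 GB", decimal units as in the manual). The
# per-inode figures are the rough estimates quoted in the text.
STRIPE_INODE_KB = {0: 0.5, 80: 2.0, 160: 4.5}   # stripes -> approx KB/inode

def mdt_space_gb(num_inodes, kb_per_inode):
    return num_inodes * kb_per_inode / 1_000_000   # KB -> GB (decimal)

for stripes, kb in STRIPE_INODE_KB.items():
    print(stripes, "stripes:", mdt_space_gb(40_000_000, kb), "GB")
# the 80-stripe row reproduces the manual's 2 KB/inode * 40M = 80 GB figure
```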
Description - The group upcall file contains the path to an executable file that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the releated identity_downcall_data structure (see .) The data is persisted with lctl set_param mdt.{mdtname}.identity_info. + The group upcall file contains the path to an executable file that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the related identity_downcall_data structure (see .) The data is persisted with lctl set_param mdt.{mdtname}.identity_info. The l_getidentity utility is the reference implementation of the user or group cache upcall.
@@ -212,17 +212,17 @@ quit
Setting Parameters with lctl Lustre parameters are not always accessible using the procfs interface, as it is platform-specific. As a solution, lctl {get,set}_param has been introduced as a platform-independent interface to the Lustre tunables. Avoid direct references to /proc/{fs,sys}/{lustre,lnet}. For future portability, use lctl {get,set}_param . - When the file system is running, use the lctl set_param command to set temporary parameters (mapping to items in /proc/{fs,sys}/{lnet,lustre}). The lctl set_param command uses this syntax: + When the file system is running, use the lctl set_param command to set temporary parameters (mapping to items in /proc/{fs,sys}/{lnet,lustre}). The lctl set_param command uses this syntax: lctl set_param [-n] <obdtype>.<obdname>.<proc_file_name>=<value> For example: $ lctl set_param ldlm.namespaces.*osc*.lru_size=$((NR_CPU*100)) - Many permanent parameters can be set with lctl conf_param. In general, lctl conf_param can be used to specify any parameter settable in a /proc/fs/lustre file, with its own OBD device. The lctl conf_param command uses this syntax: + Many permanent parameters can be set with lctl conf_param. In general, lctl conf_param can be used to specify any parameter settable in a /proc/fs/lustre file, with its own OBD device. The lctl conf_param command uses this syntax: <obd|fsname>.<obdtype>.<proc_file_name>=<value> For example: $ lctl conf_param testfs-MDT0000.mdt.identity_upcall=NONE $ lctl conf_param testfs.llite.max_read_ahead_mb=16 - The lctlconf_param command permanently sets parameters in the file system configuration. + The lctl conf_param command permanently sets parameters in the file system configuration.
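The parameter syntax shown above can be illustrated with a small parser. This is a hypothetical helper, not part of lctl — `parse_param` is invented for this example:

```python
# Hypothetical helper (not part of lctl) splitting a parameter setting of
# the form <obdtype>.<obdname>.<proc_file_name>=<value>, as used by
# lctl set_param in the syntax shown above.
def parse_param(setting):
    path, _, value = setting.partition("=")
    obdtype, obdname, proc_file = path.split(".", 2)
    return obdtype, obdname, proc_file, value

print(parse_param("mdt.testfs-MDT0000.identity_upcall=NONE"))
# -> ('mdt', 'testfs-MDT0000', 'identity_upcall', 'NONE')
```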
To get current Lustre parameter settings, use the lctl get_param command with this syntax: lctl get_param [-n] <obdtype>.<obdname>.<proc_file_name> @@ -789,7 +789,7 @@ root@oss1# ll_decode_filter_fid #12345[4,5,8] #123455: objid=614725 seq=0 parent=[0x18d11:0xebba84eb:0x1] #123458: objid=533088 seq=0 parent=[0x21417:0x19734d61:0x0] This shows that the three files in lost+found have decimal object IDs - 690670, 614725, and 533088, respectively. The object sequence number (formerly object group) is 0 for all current OST objects. - The MDT parent inode FIDs are hexdecimal numbers of the form sequence:oid:idx. Since the sequence number is below 0x100000000 in all these cases, the FIDs are in the legacy Inode and Generation In FID (IGIF) namespace and are mapped directly to the MDT inode = seq and generation = oid values; the MDT inodes are 0x751c5, 0x18d11, and 0x21417 respectively. For objects with MDT parent sequence numbers above 0x200000000, this indicates that the FID needs to be mapped via the MDT Object Index (OI) file on the MDT to determine the internal inode number. + The MDT parent inode FIDs are hexadecimal numbers of the form sequence:oid:idx. Since the sequence number is below 0x100000000 in all these cases, the FIDs are in the legacy Inode and Generation In FID (IGIF) namespace and are mapped directly to the MDT inode = seq and generation = oid values; the MDT inodes are 0x751c5, 0x18d11, and 0x21417 respectively. For objects with MDT parent sequence numbers above 0x200000000, this indicates that the FID needs to be mapped via the MDT Object Index (OI) file on the MDT to determine the internal inode number. The idx field shows the stripe number of this OST object in the Lustre RAID-0 striped file.
@@ -1166,7 +1166,7 @@ lshowmount
Description - The lshowmount utility shows the hosts that have Lustre mounted to a server. Ths utility looks for exports from the MGS, MDS, and obdfilter. + The lshowmount utility shows the hosts that have Lustre mounted to a server. This utility looks for exports from the MGS, MDS, and obdfilter.
Options @@ -1371,7 +1371,7 @@ lustre_rsync --statuslog|-l <log> --source|-s <source> --statuslog=<log> - A log file to which synchronization status is saved. When lustre_rsync starts, the state of a previous replication is read from here. If the status log from a previous synchronization operation is specified, otherwise mandatory options like --source, --target and --mdt options may be skipped. By specifying options like --source, --target and/or --mdt in addition to the --statuslog option, parameters in the status log can be overriden. Command line options take precedence over options in the status log. + A log file to which synchronization status is saved. When lustre_rsync starts, the state of a previous replication is read from here. If the status log from a previous synchronization operation is specified, the otherwise mandatory --source, --target and --mdt options may be skipped. By specifying --source, --target and/or --mdt in addition to the --statuslog option, parameters in the status log can be overridden. Command line options take precedence over options in the status log. @@ -2616,7 +2616,7 @@ llog_reader /tmp/tfs-client <indexterm><primary>lr_reader</primary></indexterm> lr_reader The lr_reader utility translates a last received (last_rcvd) file into human-readable form. - The following utilites are part of the Lustre I/O kit. For more information, see . + The following utilities are part of the Lustre I/O kit. For more information, see .
<indexterm><primary>sgpdd_survey</primary></indexterm> diff --git a/UnderstandingLustre.xml b/UnderstandingLustre.xml index e872d45..ad3b9be 100644 --- a/UnderstandingLustre.xml +++ b/UnderstandingLustre.xml @@ -320,7 +320,7 @@ <para> None</para> </entry> <entry> - <para> Low latency, high bandwith network.</para> + <para> Low latency, high bandwidth network.</para> </entry> </row> </tbody> @@ -420,7 +420,7 @@ <note> <para>In versions of Lustre prior to 2.2, the maximum stripe count for a single file was limited to 160 OSTs.</para> </note> - <para>Athough a single file can only be striped over 2000 objects, Lustre file systems can have thousands of OSTs. The I/O bandwidth to access a single file is the aggregated I/O bandwidth to the objects in a file, which can be as much as a bandwidth of up to 2000 servers. On systems with more than 2000 OSTs, clients can do I/O using multiple files to utilize the full file system bandwidth.</para> + <para>Although a single file can only be striped over 2000 objects, Lustre file systems can have thousands of OSTs. The I/O bandwidth to access a single file is the aggregated I/O bandwidth to the objects in a file, which can be as much as the aggregate bandwidth of up to 2000 servers. On systems with more than 2000 OSTs, clients can do I/O using multiple files to utilize the full file system bandwidth.</para> <para>For more information about striping, see <xref linkend="managingstripingfreespace"/>.</para> </section> </section> diff --git a/UnderstandingLustreNetworking.xml b/UnderstandingLustreNetworking.xml index 60f79e2..8c8e35a 100644 --- a/UnderstandingLustreNetworking.xml +++ b/UnderstandingLustreNetworking.xml @@ -1,7 +1,7 @@ <?xml version='1.0' encoding='UTF-8'?> <!-- This document was created with Syntext Serna Free.
--><chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US" xml:id="understandinglustrenetworking"> <title xml:id="understandinglustrenetworking.title">Understanding Lustre Networking (LNET) - This chapter introduces Lustre Networking (LNET) and includes the following for intereste sections: + This chapter introduces Lustre Networking (LNET) and includes the following sections: -- 1.8.3.1