<listitem>
<para>Verify that the Lustre file system (source) and the replica
file system (target) are identical
- <emphasis>before</emphasis>registering the changelog user. If the
+ <emphasis>before</emphasis> registering the changelog user. If the
file systems are discrepant, use a utility, e.g. regular
        <literal>rsync</literal> (not
<literal>lustre_rsync</literal>), to make them identical.</para>
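        <para>As an illustrative sketch only (the mount points and MDT device
          name below are placeholders, not values taken from this manual), the
          two file systems could first be synchronized with regular
          <literal>rsync</literal>, and the changelog user then registered on
          the MDS:</para>
        <screen>client# rsync -aAX /mnt/source/ /mnt/target/   # make source and target identical
mds# lctl --device testfs-MDT0000 changelog_register  # prints a changelog user ID such as cl1</screen>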
</listitem>
<listitem>
<para><literal>ost-survey</literal> - Performs I/O against OSTs individually to allow
- performance comparisons to detect if an OST is performing suboptimally due to hardware
+ performance comparisons to detect if an OST is performing sub-optimally due to hardware
issues.</para>
</listitem>
</itemizedlist>
information</para>
</listitem>
</itemizedlist></para>
- <para>When the target is formatted using the <literal>mkfs.lustre</literal>command, the failover
+ <para>When the target is formatted using the <literal>mkfs.lustre</literal> command, the failover
service node(s) for the target are designated using the <literal>--servicenode</literal>
option. In the example below, an OST with index <literal>0</literal> in the file system
<literal>testfs</literal> is formatted with two service nodes designated to serve as a
at the first error.</para>
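      <para>As a sketch of the <literal>--servicenode</literal> formatting
        described above (the MGS NID, service node NIDs, and block device here
        are placeholders, not values from this manual), the command takes the
        following general form:</para>
      <screen>oss# mkfs.lustre --ost --fsname=testfs --mgsnode=cfs21@tcp0 --index=0 \
      --servicenode=192.168.10.7@tcp0 --servicenode=192.168.10.8@tcp0 /dev/sdb</screen>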
<para>Below is the YAML syntax describing the various
configuration elements which can be operated on via DLC. Not all
- YAML elements are requied for all operations (add/delete/show).
+ YAML elements are required for all operations (add/delete/show).
The system ignores elements which are not pertinent to the requested
operation.</para>
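      <para>As a brief, hedged illustration (the file name is arbitrary), the
        running LNET configuration can be exported as YAML with
        <literal>lnetctl</literal> and later re-applied; the exported file
        shows the element layout that the add/delete/show operations
        accept:</para>
      <screen># lnetctl export &gt; /tmp/lnet.conf     # dump the current configuration as YAML
# lnetctl import &lt; /tmp/lnet.conf     # re-apply a saved YAML configuration
# lnetctl net show                    # show a single element type (networks)</screen>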
<section>
<para>The LNET module routes parameter is used to identify routers in
a Lustre configuration. These parameters are set in
<literal>modprobe.conf</literal> on each Lustre node. </para>
- <para>Routes are typicall set to connect to segregated subnetworks
+ <para>Routes are typically set to connect to segregated subnetworks
    or to cross-connect two different types of networks, such as tcp and
    o2ib.</para>
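    <para>For example (the NIDs and interface names are placeholders), a TCP
      client that must reach servers on an InfiniBand network through a
      router could set the following in <literal>modprobe.conf</literal>:</para>
    <screen>options lnet networks="tcp0(eth0)" routes="o2ib0 192.168.10.1@tcp0"</screen>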
<para>The LNET routes parameter specifies a colon-separated list of
</listitem>
<listitem>
<para>
- <emphasis>Set</emphasis>lnet
+ <emphasis>Set</emphasis> lnet
<emphasis>module parameters to specify how Lustre Networking (LNET) is
to be configured to work with a Lustre file system and test the LNET
      configuration.</emphasis> LNET will, by default, use the first TCP/IP
one can exceed the soft limit within the grace period if under the hard
limit.</para>
<para>Due to the distributed nature of a Lustre file system and the need to
- mainain performance under load, those quota parameters may not be 100%
+ maintain performance under load, those quota parameters may not be 100%
accurate. The quota settings can be manipulated via the
    <literal>lfs</literal> command, executed on a client, which includes
    several options to work with quotas:</para>
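    <para>For example (the user name and limits below are arbitrary), the
      following commands set block and inode limits for a user and then
      display that user's current usage and limits; block limits are expressed
      in kilobytes:</para>
    <screen>client# lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/testfs
client# lfs quota -u bob /mnt/testfs</screen>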
modified. This glimpse callback includes information about the identifier
subject to the change. If the global index on the QMT is modified while a
slave is disconnected, the index version is used to determine whether the
- slave copy of the global index isn't uptodate any more. If so, the slave
+ slave copy of the global index isn't up to date any more. If so, the slave
fetches the whole index again and updates the local copy. The slave copy of
the global index is also exported via /proc and can be accessed via the
following command:
may experience unnecessary failures. The file system block quota is divided
up among the OSTs within the file system. Each OST requests an allocation
which is increased up to the quota limit. The quota allocation is then
- <emphasis role="italic">quantized</emphasis>to reduce the number of
+ <emphasis role="italic">quantized</emphasis> to reduce the number of
    quota-related requests.</para>
<para>The Lustre quota system distributes quotas from the Quota Master
Target (aka QMT). Only one QMT instance is supported for now and only runs
responsible for releasing quota space above the new qunit value. The qunit
    size is not shrunk indefinitely; there is a minimum value of 1MB for
blocks and 1,024 for inodes. This means that the quota space rebalancing
- process will stop when this mininum value is reached. As a result, quota
+ process will stop when this minimum value is reached. As a result, quota
    exceeded errors can be returned while many slaves still have 1MB or 1,024 inodes
of spare quota space.</para>
<para>If we look at the
<secondary>Interoperability</secondary>
</indexterm>Quotas and Version Interoperability</title>
<para>The new quota protocol introduced in Lustre software release 2.4.0
- <emphasis role="bold">is not compatible</emphasis>with previous
+ <emphasis role="bold">is not compatible</emphasis> with previous
versions. As a consequence,
<emphasis role="bold">all Lustre servers must be upgraded to release 2.4.0
for quota to be functional</emphasis>. Quota limits set on the Lustre file
<literal>tunefs.lustre --quota</literal> against all targets. It is worth
noting that running
<literal>tunefs.lustre --quota</literal> is
- <emphasis role="bold">mandatory</emphasis>for all targets formatted with a
+ <emphasis role="bold">mandatory</emphasis> for all targets formatted with a
Lustre software release older than release 2.4.0, otherwise quota
enforcement as well as accounting won't be functional.</para>
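    <para>As a hedged example (the file system name and device are
      placeholders), the conversion is typically run on each unmounted target,
      and quota enforcement is then switched on from the MGS:</para>
    <screen>target# tunefs.lustre --quota /dev/sdb
mgs# lctl conf_param testfs.quota.ost=ug
mgs# lctl conf_param testfs.quota.mdt=ug</screen>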
    <para>In addition, the quota protocol in release 2.4 assumes that the
<emphasis role="italic">Lustre server packages</emphasis>
</emphasis>. The required packages for Lustre servers are listed in
the table below, where
- <replaceable>ver</replaceable>refers to the Linux kernel distribution
+ <replaceable>ver</replaceable> refers to the Linux kernel distribution
(e.g., 2.6.32-358.6.2.el6) and
- <replaceable>arch</replaceable>refers to the processor architecture
+ <replaceable>arch</replaceable> refers to the processor architecture
(e.g., x86_64). These packages are available in the
<link xl:href="https://wiki.hpdd.intel.com/display/PUB/Lustre+Releases">
    Lustre Releases</link> repository.</para>
<emphasis role="italic">Lustre client packages</emphasis>
</emphasis>. The required packages for Lustre clients are listed in
the table below, where
- <replaceable>ver</replaceable>refers to the Linux distribution (e.g.,
+ <replaceable>ver</replaceable> refers to the Linux distribution (e.g.,
2.6.18-348.1.1.el5). These packages are available in the
<link xl:href="https://wiki.hpdd.intel.com/display/PUB/Lustre+Releases">
    Lustre Releases</link> repository.</para>
<para>The version of the kernel running on a Lustre client must be
the same as the version of the
<code>lustre-client-modules-</code>
- <replaceable>ver</replaceable>package being installed. If the
+ <replaceable>ver</replaceable> package being installed. If the
kernel running on the client is not compatible, a kernel that is
compatible must be installed on the client before the Lustre file
system software is installed.</para>
<para>The version of the kernel running on a Lustre client must be
the same as the version of the
<literal>lustre-client-modules-</literal>
- <replaceable>ver</replaceable>package being installed. If not, a
+ <replaceable>ver</replaceable> package being installed. If not, a
compatible kernel must be installed on the client before the Lustre
client packages are installed.</para>
</note></para>
</title>
<para>Robinhood is a policy engine and reporting tool for large file
-systems. It maintains a replicate of file system medatada in a database that
+systems. It maintains a replica of file system metadata in a database that
can be queried at will. Robinhood makes it possible to schedule mass actions on
file system entries by defining attribute-based policies, provides fast, enhanced
clones of <literal>find</literal> and <literal>du</literal>, and provides administrators with an overall
<screen># lctl conf_param testfs.sys.jobid_var=SLURM_JOB_ID</screen>
<para>The <literal>lctl conf_param</literal> command to enable or disable
jobstats should be run on the MGS as root. The change is persistent, and
- will be propogated to the MDS, OSS, and client nodes automatically when
+ will be propagated to the MDS, OSS, and client nodes automatically when
it is set on the MGS and for each new client mount.</para>
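    <para>As a simple check (the output shown is illustrative), the currently
      active jobid_var can be queried on any node with:</para>
    <screen># lctl get_param jobid_var
jobid_var=SLURM_JOB_ID</screen>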
<para>To temporarily enable jobstats on a client, or to use a different
jobid_var on a subset of nodes, such as nodes in a remote cluster that
</listitem>
<listitem>
<para><literal>xltop</literal> - A continuous Lustre monitor with batch scheduler
- integation. <link xmlns:xlink="http://www.w3.org/1999/xlink"
+ integration. <link xmlns:xlink="http://www.w3.org/1999/xlink"
xlink:href="https://github.com/jhammond/xltop"/></para>
</listitem>
</itemizedlist></para>
is described using a dash to separate the range, for example,
<literal>192.168.20.[0-255]@tcp</literal>.</para>
- <para>The range must be contiguous. The full LNET definiton for a
+ <para>The range must be contiguous. The full LNET definition for a
nidlist is as follows:</para>
<screen>
<para>Where multiple NIDs are specified separated by commas (for example,
<literal>10.67.73.200@tcp,192.168.10.1@tcp</literal>), the two NIDs refer
to the same host, and the Lustre software chooses the
- <emphasis>best</emphasis>one for communication. When a pair of NIDs is
+ <emphasis>best</emphasis> one for communication. When a pair of NIDs is
separated by a colon (for example,
<literal>10.67.73.200@tcp:10.67.73.201@tcp</literal>), the two NIDs refer
to two different hosts and are treated as a failover pair (the Lustre
<literal>qos_prio_free</literal> puts more weighting on the amount of free space
available on each OST and less on how stripes are distributed across OSTs. The default
value is 91 percent. When the free space priority is set to 100, weighting is based
- entirely on free space and location is no longer used by the striping algorthm.</para>
+ entirely on free space and location is no longer used by the striping algorithm.</para>
</listitem>
<listitem>
<para condition="l29"><literal>reserved_mb_low</literal> - The low watermark used to stop
RPC.</para>
<note>
<para>For information about universal UID/GID requirements in a Lustre file system
- ennvironment, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
+ environment, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
linkend="section_rh2_d4w_gk"/>.</para>
</note>
<section remap="h3">
users.</para>
<para>The root squash feature works by re-mapping the user ID (UID) and group ID (GID) of the
root user to a UID and GID specified by the system administrator, via the Lustre configuration
- management server (MGS). The root squash feature also enables the Lustre fle system
+ management server (MGS). The root squash feature also enables the Lustre file system
      administrator to specify a set of clients for which UID/GID re-mapping does not apply.</para>
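      <para>As a hedged illustration (the UID/GID values and NID range are
        placeholders), root squash parameters are typically set from the MGS
        with commands of the following form:</para>
      <screen>mgs# lctl conf_param testfs.mdt.root_squash="1000:1000"
mgs# lctl conf_param testfs.mdt.nosquash_nids="172.16.245.[10-20]@tcp"</screen>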
<section remap="h3">
<title><indexterm><primary>root squash</primary><secondary>configuring</secondary></indexterm>Configuring Root Squash</title>
stops object allocation for the OST if available space is less than reserved or the OST has
fewer than 32 free inodes. The MDT starts object allocation when available space is twice
as big as the reserved space and the OST has more than 64 free inodes. Note, clients
- could appened existing files no matter what object allocation state is.</para>
+        can append to existing files regardless of the object allocation state.</para>
<para condition="l29"> The reserved space for each OST can be adjusted by the user. Use the
        <literal>lctl set_param</literal> command; for example, the following command reserves 1GB of space
        for all OSTs.
of free space available on each OST and less on how stripes are distributed across OSTs. The
default value is <literal>91</literal> (percent). When the free space priority is set to
<literal>100</literal> (percent), weighting is based entirely on free space and location
- is no longer used by the striping algorthm. </para>
+ is no longer used by the striping algorithm. </para>
<para>To change the allocator weighting to <literal>100</literal>, enter this command on the
MGS:</para>
<screen>lctl conf_param <replaceable>fsname</replaceable>-MDT0000.lov.qos_prio_free=100</screen>
file size.</para>
<note condition='l24'><para>Starting in release 2.4, using the DNE
remote directory feature it is possible to increase the metadata
- capacity of a single filesystem by configuting additional MDTs into
+ capacity of a single filesystem by configuring additional MDTs into
the filesystem, see <xref linkend="dbdoclet.addingamdt"/> for details.
</para></note>
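      <para>For example (the MDT index and path are illustrative), a directory
        served by a second MDT can be created from a client with:</para>
      <screen>client# lfs mkdir -i 1 /mnt/testfs/remote_dir</screen>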
<para>For example, if the average file size is 5 MB and you have
<literal>mkfs.lustre</literal>. Decreasing the inode ratio tunable
<literal>bytes-per-inode</literal> will create more inodes for a given
MDT size, but will leave less space for extra per-file metadata. The
- inode ratio must always be strictly larget than the MDT inode size,
+ inode ratio must always be strictly larger than the MDT inode size,
which is 512 bytes by default. It is recommended to use an inode ratio
at least 512 bytes larger than the inode size to ensure the MDT does
not run out of space.</para>
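      <para>As a sketch (the 1024-byte ratio, MGS NID, and device are examples;
        1024 is chosen to leave the recommended 512-byte margin above the
        default 512-byte inode size), the ratio can be passed to
        <literal>mkfs.lustre</literal> through
        <literal>--mkfsoptions</literal>:</para>
      <screen>mds# mkfs.lustre --mdt --fsname=testfs --mgsnode=cfs21@tcp0 --index=0 \
      --mkfsoptions="-i 1024" /dev/sdc</screen>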
for potential variations in future usage. This helps reduce the format
and file system check time and makes more space available for data.</para>
<para>The table below shows the default
- <emphasis role="italic">bytes-per-inode</emphasis>ratio ("inode ratio")
+ <emphasis role="italic">bytes-per-inode</emphasis> ratio ("inode ratio")
used for OSTs of various sizes when they are formatted.</para>
<para>
<table frame="all">
<replaceable>/mnt/chip</replaceable>. The Management Service is running on
a node reachable from this client via the cfs21@tcp0 NID.</para>
<screen>mount -t lustre cfs21@tcp0:/chipfs /mnt/chip</screen>
- <para condition='l29'>Similiar to the above example, but mounting a
+ <para condition='l29'>Similar to the above example, but mounting a
subdirectory under <replaceable>chipfs</replaceable> as a fileset.
<screen>mount -t lustre cfs21@tcp0:/chipfs/v1_0 /mnt/chipv1_0</screen>
</para>
mount.</para>
<para>It is important to note that invocation of the subdirectory mount is
      voluntary by the client and does not prevent access to files that are
- visibile in multiple subdirectory mounts via hard links. Furthermore, it
+ visible in multiple subdirectory mounts via hard links. Furthermore, it
does not prevent the client from subsequently mounting the whole file
system without a subdirectory being specified.</para>
<figure>
<listitem>
<para>
<literal>Repaired Unmatched Pairs</literal> total number
- of unmatched MDT and OST-object paris have been
+                  of unmatched MDT and OST-object pairs that have been
repaired in the scanning-phase1</para>
</listitem>
<listitem>
<itemizedlist>
<listitem>
<para>
- <emphasis role="bold">Active/passive</emphasis>pair - In this
+ <emphasis role="bold">Active/passive</emphasis> pair - In this
configuration, the active node provides resources and serves data,
while the passive node is usually standing by idle. If the active
node fails, the passive node takes over and becomes active.</para>
</listitem>
<listitem>
<para>
- <emphasis role="bold">Active/active</emphasis>pair - In this
+ <emphasis role="bold">Active/active</emphasis> pair - In this
configuration, both nodes are active, each providing a subset of
resources. In case of a failure, the second node takes over resources
from the failed node.</para>
<listitem>
<para>Verifies the linkEA entry for each and regenerates the linkEA
if it is invalid or missing. The
- <emphasis role="italic">linkEA</emphasis>consists of the file name and
+ <emphasis role="italic">linkEA</emphasis> consists of the file name and
parent FID. It is stored as an extended attribute in the file
itself. Thus, the linkEA can be used to reconstruct the full path name
of a file.</para>
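              <para>For example (the FID shown is purely illustrative), the
                linkEA is what allows <literal>lfs fid2path</literal> to map a
                FID back to a full path name:</para>
              <screen>client# lfs fid2path /mnt/testfs [0x200000400:0x2:0x0]</screen>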
the OST(s) that contain the file data. If the MDT file points to one
object, all the file data is stored in that object. If the MDT file points
to more than one object, the file data is
- <emphasis role="italic">striped</emphasis>across the objects using RAID 0,
+ <emphasis role="italic">striped</emphasis> across the objects using RAID 0,
and each object is stored on a different OST. (For more information about
how striping is implemented in a Lustre file system, see
<xref linkend="dbdoclet.50438250_89922" />.</para>
<itemizedlist>
<listitem>
<para>The
- <emphasis>network bandwidth</emphasis>equals the aggregated bandwidth
+ <emphasis>network bandwidth</emphasis> equals the aggregated bandwidth
of the OSSs to the targets.</para>
</listitem>
<listitem>
<para>The
- <emphasis>disk bandwidth</emphasis>equals the sum of the disk
+ <emphasis>disk bandwidth</emphasis> equals the sum of the disk
bandwidths of the storage targets (OSTs) up to the limit of the network
bandwidth.</para>
</listitem>
<listitem>
<para>The
- <emphasis>aggregate bandwidth</emphasis>equals the minimum of the disk
+ <emphasis>aggregate bandwidth</emphasis> equals the minimum of the disk
bandwidth and the network bandwidth.</para>
</listitem>
<listitem>
<para>The
- <emphasis>available file system space</emphasis>equals the sum of the
+ <emphasis>available file system space</emphasis> equals the sum of the
available space of all the OSTs.</para>
</listitem>
</itemizedlist>
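    <para>As an illustrative calculation (the figures are hypothetical): if
      four OSSs each provide 1 GB/s of network bandwidth (4 GB/s in total) and
      the OSTs behind them can sustain 5 GB/s of disk bandwidth, the aggregate
      bandwidth of the file system is min(4, 5) = 4 GB/s.</para>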
verify that data has been transmitted while the LND layer is connection oriented and typically
does verify data transmission.</para>
<para>LNETs are uniquely identified by a label comprised of a string corresponding to an LND and
- a number, such as tcp0, o2ib0, or o2ib1, that uniquely indentifies each LNET. Each node on an
+ a number, such as tcp0, o2ib0, or o2ib1, that uniquely identifies each LNET. Each node on an
LNET has at least one network identifier (NID). A NID is a combination of the address of the
network interface and the LNET label in the
form:<literal><replaceable>address</replaceable>@<replaceable>LNET_label</replaceable></literal>.</para>
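  <para>For example (the addresses are placeholders), a node with one Ethernet
    and one InfiniBand interface might have the NIDs
    <literal>192.168.0.10@tcp0</literal> and
    <literal>10.10.0.10@o2ib0</literal>; the NIDs known to the local node can
    be listed with:</para>
  <screen># lctl list_nids</screen>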
<title xml:id="upgradinglustre.title">Upgrading a Lustre File System</title>
<para>This chapter describes interoperability between Lustre software
releases. It also provides procedures for upgrading from Lustre software
- release 1.8 to Lustre softeware release 2.x , from a Lustre software release
+ release 1.8 to Lustre software release 2.x, from a Lustre software release
2.x to a more recent Lustre software release 2.x (major release upgrade), and
from a Lustre software release 2.x.y to a more recent Lustre software
release 2.x.y (minor release upgrade). It includes the following
not installed, configured, or administered properly. If a full backup
of the file system is not practical, a device-level backup of the MDT
file system is recommended. See
- <xref linkend="backupandrestore" />for a procedure.</para>
+ <xref linkend="backupandrestore" /> for a procedure.</para>
</caution>
</listitem>
<listitem>
</orderedlist>
<note>
<para>The mounting order described in the steps above must be followed
- for the intial mount and registration of a Lustre file system after an
+ for the initial mount and registration of a Lustre file system after an
upgrade. For a normal start of a Lustre file system, the mounting order
is MGT, OSTs, MDT(s), clients.</para>
</note>
not installed, configured, or administered properly. If a full backup
of the file system is not practical, a device-level backup of the MDT
file system is recommended. See
- <xref linkend="backupandrestore" />for a procedure.</para>
+ <xref linkend="backupandrestore" /> for a procedure.</para>
</caution>
</listitem>
<listitem>