environment.</para>
</caution>
</section>
- <section xml:id="dbdoclet.50438194_69255">
+ <section xml:id="dbdoclet.shutdownLustre">
+ <title>
+ <indexterm>
+ <primary>operations</primary>
+ <secondary>shutdownLustre</secondary>
+ </indexterm>Stopping the Filesystem</title>
+      <para>To completely shut down a Lustre filesystem, unmount all
+      clients and servers in the order shown below. Note that unmounting
+      a block device causes the Lustre software to be shut down on that
+      node.</para>
+      <note><para>Note that <literal>-a -t lustre</literal> in the
+      commands below is not the name of a filesystem; it specifies
+      unmounting all entries in <literal>/etc/mtab</literal> that are
+      of type <literal>lustre</literal>.</para></note>
+ <orderedlist>
+ <listitem><para>Unmount the clients</para>
+ <para>On each client node, unmount the filesystem on that client
+ using the <literal>umount</literal> command:</para>
+ <para><literal>umount -a -t lustre</literal></para>
+ <para>The example below shows the unmount of the
+ <literal>testfs</literal> filesystem on a client node:</para>
+ <para><screen>[root@client1 ~]# mount |grep testfs
+XXX.XXX.0.11@tcp:/testfs on /mnt/testfs type lustre (rw,lazystatfs)
+
+[root@client1 ~]# umount -a -t lustre
+[154523.177714] Lustre: Unmounted testfs-client</screen></para>
+ </listitem>
+ <listitem><para>Unmount the MDT and MGT</para>
+ <para>On the MGS and MDS node(s), use the <literal>umount</literal>
+ command:</para>
+ <para><literal>umount -a -t lustre</literal></para>
+ <para>The example below shows the unmount of the MDT and MGT for
+ the <literal>testfs</literal> filesystem on a combined MGS/MDS:
+ </para>
+ <para><screen>[root@mds1 ~]# mount |grep lustre
+/dev/sda on /mnt/mgt type lustre (ro)
+/dev/sdb on /mnt/mdt type lustre (ro)
+
+[root@mds1 ~]# umount -a -t lustre
+[155263.566230] Lustre: Failing over testfs-MDT0000
+[155263.775355] Lustre: server umount testfs-MDT0000 complete
+[155269.843862] Lustre: server umount MGS complete</screen></para>
+      <para>For a separate MGS and MDS, run the same command first on
+      the MDS and then on the MGS.</para>
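+      <para>For example, with a separate MGS and MDS (the hostnames
+      <literal>mds1</literal> and <literal>mgs1</literal> below are
+      illustrative), the commands are run in this order:</para>
+      <para><screen>[root@mds1 ~]# umount -a -t lustre
+[root@mgs1 ~]# umount -a -t lustre</screen></para>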
+ </listitem>
+ <listitem><para>Unmount all the OSTs</para>
+ <para>On each OSS node, use the <literal>umount</literal> command:
+ </para>
+ <para><literal>umount -a -t lustre</literal></para>
+ <para>The example below shows the unmount of all OSTs for the
+ <literal>testfs</literal> filesystem on server
+ <literal>OSS1</literal>:
+ </para>
+ <para><screen>[root@oss1 ~]# mount |grep lustre
+/dev/sda on /mnt/ost0 type lustre (ro)
+/dev/sdb on /mnt/ost1 type lustre (ro)
+/dev/sdc on /mnt/ost2 type lustre (ro)
+
+[root@oss1 ~]# umount -a -t lustre
+[155336.491445] Lustre: Failing over testfs-OST0002
+[155336.556752] Lustre: server umount testfs-OST0002 complete</screen></para>
+ </listitem>
+ </orderedlist>
+    <para>For the unmount command syntax for a single OST, MDT, or MGT
+    target, refer to <xref linkend="dbdoclet.umountTarget"/>.</para>
+ </section>
+ <section xml:id="dbdoclet.umountTarget">
<title>
<indexterm>
<primary>operations</primary>
<secondary>unmounting</secondary>
- </indexterm>Unmounting a Server</title>
- <para>To stop a Lustre server, use the
+ </indexterm>Unmounting a Specific Target on a Server</title>
+    <para>To stop a Lustre OST, MDT, or MGT, use the
<literal>umount
- <replaceable>/mount</replaceable>
- <replaceable>point</replaceable></literal> command.</para>
- <para>For example, to stop
- <literal>ost0</literal> on mount point
- <literal>/mnt/test</literal>, run:</para>
- <screen>
-$ umount /mnt/test
-</screen>
+ <replaceable>/mount_point</replaceable></literal> command.</para>
+ <para>The example below stops an OST, <literal>ost0</literal>, on mount
+ point <literal>/mnt/ost0</literal> for the <literal>testfs</literal>
+ filesystem:</para>
+ <screen>[root@oss1 ~]# umount /mnt/ost0
+[ 385.142264] Lustre: Failing over testfs-OST0000
+[ 385.210810] Lustre: server umount testfs-OST0000 complete</screen>
<para>Gracefully stopping a server with the
<literal>umount</literal> command preserves the state of the connected
clients. The next time the server is started, it waits for clients to
</para>
</note>
</section>
- <section xml:id="dbdoclet.50438194_54138">
+ <section xml:id="dbdoclet.degraded_ost">
<title>
<indexterm>
<primary>operations</primary>
resets to
<literal>0</literal>.</para>
<para>It is recommended that this be implemented by an automated script
- that monitors the status of individual RAID devices.</para>
+    that monitors the status of individual RAID devices, such as the
+    MD-RAID <literal>mdadm(8)</literal> command with the
+    <literal>--monitor</literal> option, to mark an affected device as
+    degraded or restored.</para>
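+    <para>As one possible sketch (the handler script path and target
+    name below are illustrative), <literal>mdadm --monitor</literal>
+    can invoke a program on RAID events, and that program can mark the
+    affected OST degraded or restored:</para>
+    <screen>oss# mdadm --monitor --scan --daemonise \
+      --program=/usr/local/sbin/raid_event.sh
+# the handler script might run, on a "Fail" event:
+oss# lctl set_param obdfilter.testfs-OST0000.degraded=1
+# and after the rebuild completes:
+oss# lctl set_param obdfilter.testfs-OST0000.degraded=0</screen>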
</section>
<section xml:id="dbdoclet.50438194_88063">
<title>
will leave the namespace below it inaccessible. For this reason, by
default it is only possible to create remote sub-directories off MDT0. To
relax this restriction and enable remote sub-directories off any MDT, an
- administrator must issue the command
- <literal>lctl set_param mdt.*.enable_remote_dir=1</literal>.</para>
+ administrator must issue the following command on the MGS:
+ <screen>mgs# lctl conf_param <replaceable>fsname</replaceable>.mdt.enable_remote_dir=1</screen>
+    For the Lustre filesystem <literal>scratch</literal>, the command is:
+ <screen>mgs# lctl conf_param scratch.mdt.enable_remote_dir=1</screen>
+    To verify the configuration setting, execute the following command
+    on any MDS:
+ <screen>mds# lctl get_param mdt.*.enable_remote_dir</screen></para>
</warning>
<para condition='l28'>With Lustre software version 2.8, a new
tunable is available to allow users with a specific group ID to create
parameter to the 'wheel' or 'admin' group ID allows users with that GID
to create and delete remote and striped directories. Setting this
-    parameter to <literal>-1</literal> on MDT0 to permanently allow any
-    non-root users create and delete remote and striped directories. For
-    example:
-    <screen>lctl set_param -P mdt.*.enable_remote_dir_gid=-1</screen>
+    parameter to <literal>-1</literal> on MDT0 permanently allows any
+    non-root user to create and delete remote and striped directories.
+ On the MGS execute the following command:
+ <screen>mgs# lctl conf_param <replaceable>fsname</replaceable>.mdt.enable_remote_dir_gid=-1</screen>
+    For the Lustre filesystem <literal>scratch</literal>, the command
+    expands to:
+    <screen>mgs# lctl conf_param scratch.mdt.enable_remote_dir_gid=-1</screen>
+ The change can be verified by executing the following command on every MDS:
+ <screen>mds# lctl get_param mdt.<replaceable>*</replaceable>.enable_remote_dir_gid</screen>
</para>
</section>
<section xml:id="dbdoclet.lfsmkdirdne2" condition='l28'>
<para>The Lustre 2.8 DNE feature enables individual files in a given
directory to store their metadata on separate MDTs (a <emphasis>striped
directory</emphasis>) once additional MDTs have been added to the
- filesystem, see <xref linkend="dbdoclet.addingamdt"/>.
+ filesystem, see <xref linkend="dbdoclet.adding_new_mdt"/>.
The result of this is that metadata requests for
files in a striped directory are serviced by multiple MDTs and metadata
service load is distributed over all the MDTs that service a given
</section>
<section xml:id="dbdoclet.50438194_88217">
<title>Listing Parameters</title>
- <para>To list Lustre or LNET parameters that are available to set, use
+ <para>To list Lustre or LNet parameters that are available to set, use
the
<literal>lctl list_param</literal> command. For example:</para>
<screen>
<literal>--mgsnode=</literal> or
<literal>--servicenode=</literal>).</para>
<para>To display the NIDs of all servers in networks configured to work
- with the Lustre file system, run (while LNET is running):</para>
+ with the Lustre file system, run (while LNet is running):</para>
<screen>
lctl list_nids
</screen>
<para>Where multiple NIDs are specified separated by commas (for example,
<literal>10.67.73.200@tcp,192.168.10.1@tcp</literal>), the two NIDs refer
to the same host, and the Lustre software chooses the
- <emphasis>best</emphasis>one for communication. When a pair of NIDs is
+ <emphasis>best</emphasis> one for communication. When a pair of NIDs is
separated by a colon (for example,
<literal>10.67.73.200@tcp:10.67.73.201@tcp</literal>), the two NIDs refer
to two different hosts and are treated as a failover pair (the Lustre
</indexterm>Replacing an Existing OST or MDT</title>
<para>To copy the contents of an existing OST to a new OST (or an old MDT
to a new MDT), follow the process for either OST/MDT backups in
- <xref linkend='dbdoclet.50438207_71633' />or
- <xref linkend='dbdoclet.50438207_21638' />. For more information on
- removing a MDT, see
+    <xref linkend='dbdoclet.backup_device' /> or
+ <xref linkend='dbdoclet.backup_target_filesystem' />.
+ For more information on removing a MDT, see
<xref linkend='dbdoclet.rmremotedir' />.</para>
</section>
<section xml:id="dbdoclet.50438194_30872">