LUDOC-11 misc: correct location/setting qos_threshold_rr
diff --git a/LustreMaintenance.xml b/LustreMaintenance.xml
index dfeaa6f..271d1b3 100644
@@ -3,61 +3,67 @@
   <para>Once you have the Lustre file system up and running, you can use the procedures in this section to perform these basic Lustre maintenance tasks:</para>
   <itemizedlist>
     <listitem>
-      <para><xref linkend="dbdoclet.50438199_42877"/></para>
+      <para><xref linkend="lustremaint.inactiveOST"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.50438199_15240"/></para>
+      <para><xref linkend="lustremaint.findingNodes"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.50438199_26070"/></para>
+      <para><xref linkend="lustremaint.mountingServerWithoutLustre"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.50438199_54623"/></para>
+      <para><xref linkend="lustremaint.regenerateConfigLogs"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.changingservernid"/></para>
+      <para><xref linkend="lustremaint.changingservernid"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.adding_new_mdt"/></para>
+      <para><xref linkend="lustremaint.clear_conf"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.adding_new_ost"/></para>
+      <para><xref linkend="lustremaint.adding_new_mdt"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.deactivating_mdt_ost"/></para>
+      <para><xref linkend="lustremaint.adding_new_ost"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.rmremotedir"/></para>
+      <para><xref linkend="lustremaint.deactivating_mdt_ost"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.inactivemdt"/></para>
+      <para><xref linkend="lustremaint.rmremotedir"/></para>
     </listitem>
     <listitem>
-      <para><xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="section_remove_ost"/></para>
+      <para><xref linkend="lustremaint.inactivemdt"/></para>
     </listitem>
     <listitem>
-      <para><xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="section_ydg_pgt_tl"/></para>
+      <para><xref linkend="lustremaint.remove_ost"/></para>
     </listitem>
     <listitem>
-      <para><xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="section_restore_ost"/></para>
+      <para><xref linkend="lustremaint.ydg_pgt_tl"/></para>
     </listitem>
     <listitem>
-      <para><xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="section_ucf_qgt_tl"/></para>
+      <para><xref linkend="lustremaint.restore_ost"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.50438199_77819"/></para>
+      <para><xref linkend="lustremaint.ucf_qgt_tl"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.50438199_12607"/></para>
+      <para><xref linkend="lustremaint.abortRecovery"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.50438199_62333"/></para>
+      <para><xref linkend="lustremaint.determineOST"/></para>
     </listitem>
     <listitem>
-      <para><xref linkend="dbdoclet.50438199_62545"/></para>
+      <para><xref linkend="lustremaint.ChangeAddrFailoverNode"/></para>
+    </listitem>
+    <listitem>
+      <para><xref linkend="lustremaint.seperateCombinedMGSMDT"/></para>
+    </listitem>
+    <listitem>
+      <para><xref linkend="lustremaint.setMDTReadonly"/></para>
     </listitem>
   </itemizedlist>
-  <section xml:id="dbdoclet.50438199_42877">
+  <section xml:id="lustremaint.inactiveOST">
       <title>
           <indexterm><primary>maintenance</primary></indexterm>
           <indexterm><primary>maintenance</primary><secondary>inactive OSTs</secondary></indexterm>
@@ -74,7 +80,7 @@
       <literal>exclude=testfs-OST0000:testfs-OST0001</literal>.</para>
     </note>
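+    <para>For example, a client can be mounted with the same OSTs excluded
+    (the MGS NID and mount point shown are only illustrative):</para>
+    <screen>client# mount -t lustre -o exclude=testfs-OST0000:testfs-OST0001 <replaceable>mgsnode</replaceable>@tcp:/testfs /mnt/testfs</screen>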
     </section>
-    <section xml:id="dbdoclet.50438199_15240">
+    <section xml:id="lustremaint.findingNodes">
       <title><indexterm><primary>maintenance</primary><secondary>finding nodes</secondary></indexterm>
 Finding Nodes in the Lustre File System</title>
       <para>There may be situations in which you need to find all nodes in
@@ -105,7 +111,7 @@ Finding Nodes in the Lustre File System</title>
 0: testfs-OST0000_UUID ACTIVE 
 1: testfs-OST0001_UUID ACTIVE </screen>
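+      <para>A similar listing of the OST targets and their status is
+      available from any client with <literal>lfs osts</literal> (the mount
+      point shown is only illustrative):</para>
+      <screen>client$ lfs osts /mnt/testfs</screen>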
     </section>
-    <section xml:id="dbdoclet.50438199_26070">
+    <section xml:id="lustremaint.mountingServerWithoutLustre">
       <title><indexterm><primary>maintenance</primary><secondary>mounting a server</secondary></indexterm>
 Mounting a Server Without Lustre Service</title>
       <para>If you are using a combined MGS/MDT, but you only want to start the MGS and not the MDT, run this command:</para>
@@ -114,13 +120,15 @@ Mounting a Server Without Lustre Service</title>
       <para>In this example, the combined MGS/MDT is <literal>testfs-MDT0000</literal> and the mount point is <literal>/mnt/test/mdt</literal>.</para>
       <screen>$ mount -t lustre -L testfs-MDT0000 -o nosvc /mnt/test/mdt</screen>
     </section>
-    <section xml:id="dbdoclet.50438199_54623">
+    <section xml:id="lustremaint.regenerateConfigLogs">
       <title><indexterm><primary>maintenance</primary><secondary>regenerating config logs</secondary></indexterm>
 Regenerating Lustre Configuration Logs</title>
-      <para>If the Lustre file system configuration logs are in a state where the file system cannot
-      be started, use the <literal>writeconf</literal> command to erase them. After the
-        <literal>writeconf</literal> command is run and the servers restart, the configuration logs
-      are re-generated and stored on the MGS (as in a new file system).</para>
+      <para>If the Lustre file system configuration logs are in a state where
+      the file system cannot be started, use the
+      <literal>tunefs.lustre --writeconf</literal> command to regenerate them.
+      After the <literal>writeconf</literal> command is run and the servers
+      restart, the configuration logs are re-generated and stored on the MGS
+      (as with a new file system).</para>
       <para>You should only use the <literal>writeconf</literal> command if:</para>
       <itemizedlist>
         <listitem>
@@ -130,82 +138,84 @@ Regenerating Lustre Configuration Logs</title>
           <para>A server NID is being changed</para>
         </listitem>
       </itemizedlist>
-      <para>The <literal>writeconf</literal> command is destructive to some configuration items (i.e., OST pools information and items set via <literal>conf_param</literal>), and should be used with caution. To avoid problems:</para>
-      <itemizedlist>
-        <listitem>
-          <para>Shut down the file system before running the <literal>writeconf</literal> command</para>
-        </listitem>
-        <listitem>
-          <para>Run the <literal>writeconf</literal> command on all servers (MDT first, then OSTs)</para>
-        </listitem>
-        <listitem>
-          <para>Start the file system in this order:</para>
-          <itemizedlist>
-            <listitem>
-              <para>MGS (or the combined MGS/MDT)</para>
-            </listitem>
-            <listitem>
-              <para>MDT</para>
-            </listitem>
-            <listitem>
-              <para>OSTs</para>
-            </listitem>
-            <listitem>
-              <para>Lustre clients</para>
-            </listitem>
-          </itemizedlist>
-        </listitem>
-      </itemizedlist>
+      <para>The <literal>writeconf</literal> command is destructive to some
+      configuration items (e.g. OST pools information and tunables set via
+      <literal>conf_param</literal>), and should be used with caution.</para>
       <caution>
-        <para>The OST pools feature enables a group of OSTs to be named for file striping purposes. If you use OST pools, be aware that running the <literal>writeconf</literal> command erases <emphasis role="bold">all</emphasis> pools information (as well as any other parameters set via <literal>lctl conf_param</literal>). We recommend that the pools definitions (and <literal>conf_param</literal> settings) be executed via a script, so they can be reproduced easily after a <literal>writeconf</literal> is performed.</para>
+        <para>The OST pools feature enables a group of OSTs to be named for
+       file striping purposes. If you use OST pools, be aware that running
+       the <literal>writeconf</literal> command erases
+       <emphasis role="bold">all</emphasis> pools information (as well as
+       any other parameters set via <literal>lctl conf_param</literal>).
+       We recommend that the pools definitions (and
+       <literal>conf_param</literal> settings) be executed via a script,
+       so they can be regenerated easily after <literal>writeconf</literal>
+       is performed.  However, tunables saved with <literal>lctl set_param
+       -P</literal> are <emphasis>not</emphasis> erased in this case.</para>
       </caution>
+      <note>
+        <para>If the MGS still holds any configuration logs, it may be
+       possible to dump these logs to save any parameters stored with
+       <literal>lctl conf_param</literal> by dumping the config logs on
+       the MGS and saving the output:</para>
+<screen>
+mgs# lctl --device MGS llog_print <replaceable>fsname</replaceable>-client
+mgs# lctl --device MGS llog_print <replaceable>fsname</replaceable>-MDT0000
+mgs# lctl --device MGS llog_print <replaceable>fsname</replaceable>-OST0000
+</screen>
+      </note>
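+      <para>Any parameters recovered this way can be re-applied on the MGS
+      once the <literal>writeconf</literal> procedure is complete, for
+      example (the tunable shown is only illustrative):</para>
+      <screen>mgs# lctl conf_param testfs.sys.timeout=40</screen>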
       <para>To regenerate Lustre file system configuration logs:</para>
       <orderedlist>
         <listitem>
-          <para>Shut down the file system in this order.</para>
+          <para>Stop the file system services in the following order before
+           running the <literal>tunefs.lustre --writeconf</literal> command:
+         </para>
           <orderedlist>
             <listitem>
               <para>Unmount the clients.</para>
             </listitem>
             <listitem>
-              <para>Unmount the MDT.</para>
+              <para>Unmount the MDT(s).</para>
             </listitem>
             <listitem>
-              <para>Unmount all OSTs.</para>
+              <para>Unmount the OST(s).</para>
+            </listitem>
+            <listitem>
+              <para>If the MGS is separate from the MDT, it can remain mounted
+               during this process.</para>
             </listitem>
           </orderedlist>
         </listitem>
         <listitem>
-          <para>Make sure the the MDT and OST devices are available.</para>
+          <para>Make sure the MDT and OST devices are available.</para>
         </listitem>
         <listitem>
-          <para>Run the <literal>writeconf</literal> command on all servers.</para>
-          <para>Run writeconf on the MDT first, and then the OSTs.</para>
+          <para>Run the <literal>tunefs.lustre --writeconf</literal> command
+           on all target devices.</para>
+          <para>Run writeconf on the MDT(s) first, and then the OST(s).</para>
           <orderedlist>
             <listitem>
-              <para>On the MDT, run:</para>
-              <screen>mdt# tunefs.lustre --writeconf <replaceable>/dev/mdt_device</replaceable></screen>
+              <para>On each MDS, for each MDT run:</para>
+              <screen>mds# tunefs.lustre --writeconf <replaceable>/dev/mdt_device</replaceable></screen>
             </listitem>
             <listitem>
-              <para>
-              On each OST, run:
-              
-          <screen>ost# tunefs.lustre --writeconf <replaceable>/dev/ost_device</replaceable></screen>
+              <para> On each OSS, for each OST run:
+          <screen>oss# tunefs.lustre --writeconf <replaceable>/dev/ost_device</replaceable></screen>
           </para>
             </listitem>
           </orderedlist>
         </listitem>
         <listitem>
-          <para>Restart the file system in this order.</para>
+          <para>Restart the file system in the following order:</para>
           <orderedlist>
             <listitem>
-              <para>Mount the MGS (or the combined MGS/MDT).</para>
+              <para>Mount the separate MGT, if it is not already mounted.</para>
             </listitem>
             <listitem>
-              <para>Mount the MDT.</para>
+              <para>Mount the MDT(s) in order, starting with MDT0000.</para>
             </listitem>
             <listitem>
-              <para>Mount the OSTs.</para>
+              <para>Mount the OSTs in order, starting with OST0000.</para>
             </listitem>
             <listitem>
               <para>Mount the clients.</para>
@@ -213,9 +223,11 @@ Regenerating Lustre Configuration Logs</title>
           </orderedlist>
         </listitem>
       </orderedlist>
-      <para>After the <literal>writeconf</literal> command is run, the configuration logs are re-generated as servers restart.</para>
+      <para>After the <literal>tunefs.lustre --writeconf</literal> command is
+      run, the configuration logs are re-generated as servers connect to the
+      MGS.</para>
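+      <para>As a consolidated sketch, the whole procedure for a small file
+      system with a combined MGS/MDT and a single OST might look like the
+      following (all device names and mount points are illustrative):</para>
+      <screen>client# umount /mnt/testfs
+mds# umount /mnt/mdt
+oss# umount /mnt/ost0
+mds# tunefs.lustre --writeconf <replaceable>/dev/mdt_device</replaceable>
+oss# tunefs.lustre --writeconf <replaceable>/dev/ost_device</replaceable>
+mds# mount -t lustre <replaceable>/dev/mdt_device</replaceable> /mnt/mdt
+oss# mount -t lustre <replaceable>/dev/ost_device</replaceable> /mnt/ost0
+client# mount -t lustre <replaceable>mgsnode</replaceable>@tcp:/testfs /mnt/testfs</screen>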
     </section>
-    <section xml:id="dbdoclet.changingservernid">
+    <section xml:id="lustremaint.changingservernid">
       <title><indexterm><primary>maintenance</primary><secondary>changing a NID</secondary></indexterm>
 Changing a Server NID</title>
       <para>In Lustre software release 2.3 or earlier, the <literal>tunefs.lustre
@@ -281,7 +293,58 @@ Changing a Server NID</title>
       <note><para>The previous configuration log is backed up on the MGS
       disk with the suffix <literal>'.bak'</literal>.</para></note>
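+      <para>For reference, the replacement step itself is a single
+      <literal>lctl replace_nids</literal> command, run on the MGS node while
+      only the MGS is mounted; for example, to move
+      <literal>testfs-OST0000</literal> to a new NID:</para>
+      <screen>mgs# lctl replace_nids testfs-OST0000 <replaceable>new_nid</replaceable></screen>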
     </section>
-    <section xml:id="dbdoclet.adding_new_mdt" condition='l24'>
+    <section xml:id="lustremaint.clear_conf" condition="l2B">
+      <title><indexterm>
+           <primary>maintenance</primary>
+               <secondary>Clearing a config</secondary>
+         </indexterm> Clearing configuration</title>
+      <para>
+         This command is run on the MGS node, with the MGS device mounted
+         with <literal>-o nosvc</literal>. It removes any records marked
+         SKIP from the configuration logs stored in the CONFIGS/ directory.
+         If a device name is given, then only the specific logs for that
+         target (e.g. testfs-MDT0000) are processed. Otherwise, if a
+         filesystem name is given, then all of its configuration logs are
+         cleared. The previous configuration log is backed up on the MGS
+         disk with a 'config.timestamp.bak' suffix,
+         e.g. Lustre-MDT0000-1476454535.bak.
+         </para>
+         <para> To clear a configuration:</para>
+         <orderedlist>
+            <listitem>
+                  <para>Shut down the file system in this order:</para>
+             <orderedlist>
+               <listitem>
+                 <para>Unmount the clients.</para>
+               </listitem>
+               <listitem>
+                 <para>Unmount the MDT.</para>
+               </listitem>
+               <listitem>
+                 <para>Unmount all OSTs.</para>
+               </listitem>
+             </orderedlist>
+            </listitem>
+            <listitem>
+              <para>
+                If the MGS and MDS share a partition, start the MGS only,
+                using the <literal>nosvc</literal> option:
+              </para>
+           <screen>mount -t lustre <replaceable>MDT partition</replaceable> -o nosvc <replaceable>mount_point</replaceable></screen>
+            </listitem>
+            <listitem>
+                <para>Run the <literal>clear_conf</literal> command on the MGS:
+                </para>
+           <screen>lctl clear_conf <replaceable>config</replaceable></screen>
+            <para>
+                       Example: To clear the configuration for
+                       <literal>MDT0000</literal> on a filesystem named
+                       <literal>testfs</literal>, run:
+            </para>
+           <screen>mgs# lctl clear_conf testfs-MDT0000</screen>
+            </listitem>
+          </orderedlist>
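+         <para>Once the configuration has been cleared, unmount the MGS
+         again before restarting the file system in the usual order:</para>
+         <screen>mgs# umount <replaceable>mount_point</replaceable></screen>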
+       </section>
+    <section xml:id="lustremaint.adding_new_mdt" condition='l24'>
       <title><indexterm>
         <primary>maintenance</primary>
         <secondary>adding an MDT</secondary>
@@ -334,7 +397,7 @@ client# lfs mkdir -c 4 /mnt/testfs/new_directory_striped_across_4_mdts
         </listitem>
       </orderedlist>
     </section>
-    <section xml:id="dbdoclet.adding_new_ost">
+    <section xml:id="lustremaint.adding_new_ost">
       <title><indexterm><primary>maintenance</primary><secondary>adding a OST</secondary></indexterm>
 Adding a New OST to a Lustre File System</title>
      <para>A new OST can be added to an existing Lustre file system on either
@@ -369,7 +432,7 @@ oss# mount -t lustre /dev/sda /mnt/testfs/ost12</screen>
          This redistributes file data over the entire set of OSTs.</para>
           <para>For example, to rebalance all files within the directory
          <literal>/mnt/lustre/dir</literal>, enter:</para>
-          <screen>client# lfs_migrate /mnt/lustre/file</screen>
+          <screen>client# lfs_migrate /mnt/lustre/dir</screen>
           <para>To migrate files within the <literal>/test</literal> file
          system on <literal>OST0004</literal> that are larger than 4GB in
          size to other OSTs, enter:</para>
@@ -378,7 +441,7 @@ oss# mount -t lustre /dev/sda /mnt/testfs/ost12</screen>
         </listitem>
       </orderedlist>
     </section>
-    <section xml:id="dbdoclet.deactivating_mdt_ost">
+    <section xml:id="lustremaint.deactivating_mdt_ost">
       <title><indexterm><primary>maintenance</primary><secondary>restoring an OST</secondary></indexterm>
       <indexterm><primary>maintenance</primary><secondary>removing an OST</secondary></indexterm>
 Removing and Restoring MDTs and OSTs</title>
@@ -420,19 +483,19 @@ Removing and Restoring MDTs and OSTs</title>
           desire to continue using the filesystem before it is repaired.</para>
         </listitem>
       </itemizedlist>
-      <section condition="l24" xml:id="dbdoclet.rmremotedir">
+      <section condition="l24" xml:id="lustremaint.rmremotedir">
       <title><indexterm><primary>maintenance</primary><secondary>removing an MDT</secondary></indexterm>Removing an MDT from the File System</title>
         <para>If the MDT is permanently inaccessible,
-    <literal>lfs rm_entry {directory}</literal> can be used to delete the
-    directory entry for the unavailable MDT. Using <literal>rmdir</literal>
-    would otherwise report an IO error due to the remote MDT being inactive.
-    Please note that if the MDT <emphasis>is</emphasis> available, standard
-    <literal>rm -r</literal> should be used to delete the remote directory.
-    After the remote directory has been removed, the administrator should
-    mark the MDT as permanently inactive with:</para>
-<screen>lctl conf_param {MDT name}.mdc.active=0</screen>
-<para>A user can identify which MDT holds a remote sub-directory using
-the <literal>lfs</literal> utility. For example:</para>
+        <literal>lfs rm_entry {directory}</literal> can be used to delete the
+        directory entry for the unavailable MDT. Using <literal>rmdir</literal>
+        would otherwise report an IO error due to the remote MDT being inactive.
+        Please note that if the MDT <emphasis>is</emphasis> available, standard
+        <literal>rm -r</literal> should be used to delete the remote directory.
+        After the remote directory has been removed, the administrator should
+        mark the MDT as permanently inactive with:</para>
+        <screen>lctl conf_param {MDT name}.mdc.active=0</screen>
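+        <para>For example, to permanently deactivate
+        <literal>MDT0001</literal> in a filesystem named
+        <literal>testfs</literal> (the names are illustrative):</para>
+        <screen>mgs# lctl conf_param testfs-MDT0001.mdc.active=0</screen>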
+        <para>A user can identify which MDT holds a remote sub-directory using
+        the <literal>lfs</literal> utility. For example:</para>
 <screen>client$ lfs getstripe --mdt-index /mnt/lustre/remote_dir1
 1
 client$ mkdir /mnt/lustre/local_dir0
@@ -441,8 +504,8 @@ client$ lfs getstripe --mdt-index /mnt/lustre/local_dir0
 </screen>
         <para>The <literal>lfs getstripe --mdt-index</literal> command
         returns the index of the MDT that is serving the given directory.</para>
-          </section>
-          <section xml:id="dbdoclet.inactivemdt" condition='l24'>
+      </section>
+      <section xml:id="lustremaint.inactivemdt" condition='l24'>
       <title>
           <indexterm><primary>maintenance</primary></indexterm>
           <indexterm><primary>maintenance</primary><secondary>inactive MDTs</secondary></indexterm>Working with Inactive MDTs</title>
@@ -450,7 +513,7 @@ client$ lfs getstripe --mdt-index /mnt/lustre/local_dir0
     the MDT is activated again. Clients accessing an inactive MDT will receive
     an EIO error.</para>
       </section>
-      <section remap="h3" xml:id="section_remove_ost">
+      <section remap="h3" xml:id="lustremaint.remove_ost">
       <title><indexterm>
           <primary>maintenance</primary>
           <secondary>removing an OST</secondary>
@@ -519,6 +582,11 @@ client$ lfs getstripe --mdt-index /mnt/lustre/local_dir0
               files with objects on the deactivated OST, and copy them
              to other OSTs in the file system: </para>
               <screen>client# lfs find --ost <replaceable>ost_name</replaceable> <replaceable>/mount/point</replaceable> | lfs_migrate -y</screen>
+             <para>Note that if multiple OSTs are being deactivated at one
+             time, the <literal>lfs find</literal> command can take multiple
+             <literal>--ost</literal> arguments, and will return files that
+             are located on <emphasis>any</emphasis> of the specified OSTs.
+             </para>
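+             <para>For example, to migrate files off two OSTs at
+             once:</para>
+             <screen>client# lfs find --ost <replaceable>ost1_name</replaceable> --ost <replaceable>ost2_name</replaceable> <replaceable>/mount/point</replaceable> | lfs_migrate -y</screen>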
             </listitem>
             <listitem>
               <para>If the OST is no longer available, delete the files
@@ -554,14 +622,14 @@ client$ lfs getstripe --mdt-index /mnt/lustre/local_dir0
               <note><para>A deactivated OST still appears in the file system
                 configuration, though a replacement OST can be created using the
                 <literal>mkfs.lustre --replace</literal> option, see
-                <xref linkend="section_restore_ost"/>.
+                <xref linkend="lustremaint.restore_ost"/>.
               </para></note>
             </listitem>
           </orderedlist>
         </listitem>
       </orderedlist>
     </section>
-      <section remap="h3" xml:id="section_ydg_pgt_tl">
+      <section remap="h3" xml:id="lustremaint.ydg_pgt_tl">
       <title><indexterm>
           <primary>maintenance</primary>
           <secondary>backing up OST config</secondary>
@@ -597,7 +665,7 @@ oss# mount -t ldiskfs <replaceable>/dev/ost_device</replaceable> /mnt/ost</scree
         </listitem>
       </orderedlist>
     </section>
-      <section xml:id="section_restore_ost">
+      <section xml:id="lustremaint.restore_ost">
       <title><indexterm>
           <primary>maintenance</primary>
           <secondary>restoring OST config</secondary>
@@ -669,7 +737,7 @@ oss0# dd if=/tmp/mountdata of=/mnt/ost/CONFIGS/mountdata bs=4 count=1 seek=5 ski
         </listitem>
       </orderedlist>
     </section>
-      <section xml:id="section_ucf_qgt_tl">
+      <section xml:id="lustremaint.ucf_qgt_tl">
       <title><indexterm>
           <primary>maintenance</primary>
           <secondary>reintroducing an OSTs</secondary>
@@ -683,7 +751,7 @@ oss0# dd if=/tmp/mountdata of=/mnt/ost/CONFIGS/mountdata bs=4 count=1 seek=5 ski
 client# lctl set_param osc.<replaceable>fsname</replaceable>-OST<replaceable>number</replaceable>-*.active=1</screen></para>
     </section>
     </section>
-    <section xml:id="dbdoclet.50438199_77819">
+    <section xml:id="lustremaint.abortRecovery">
       <title><indexterm><primary>maintenance</primary><secondary>aborting recovery</secondary></indexterm>
       <indexterm><primary>backup</primary><secondary>aborting recovery</secondary></indexterm>
 Aborting Recovery</title>
@@ -692,7 +760,7 @@ Aborting Recovery</title>
         <para>The recovery process is blocked until all OSTs are available. </para>
       </note>
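+      <para>A minimal sketch of aborting recovery at mount time, using the
+      <literal>abort_recov</literal> mount option (the device and mount point
+      are illustrative):</para>
+      <screen>mds# mount -t lustre -o abort_recov <replaceable>/dev/mdt_device</replaceable> /mnt/mdt</screen>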
     </section>
-    <section xml:id="dbdoclet.50438199_12607">
+    <section xml:id="lustremaint.determineOST">
       <title><indexterm><primary>maintenance</primary><secondary>identifying OST host</secondary></indexterm>
 Determining Which Machine is Serving an OST </title>
       <para>In the course of administering a Lustre file system, you may need to determine which
@@ -713,7 +781,7 @@ osc.testfs-OST0002-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp
 osc.testfs-OST0003-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp
 osc.testfs-OST0004-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp</screen></para>
     </section>
-    <section xml:id="dbdoclet.50438199_62333">
+    <section xml:id="lustremaint.ChangeAddrFailoverNode">
       <title><indexterm><primary>maintenance</primary><secondary>changing failover node address</secondary></indexterm>
 Changing the Address of a Failover Node</title>
      <para>To change the address of a failover node (e.g., to use node X instead of node Y), run
@@ -726,13 +794,13 @@ Changing the Address of a Failover Node</title>
         <literal>--failnode</literal> options, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
         linkend="configuringfailover"/>.</para>
     </section>
-    <section xml:id="dbdoclet.50438199_62545">
+    <section xml:id="lustremaint.seperateCombinedMGSMDT">
       <title><indexterm><primary>maintenance</primary><secondary>separate a
         combined MGS/MDT</secondary></indexterm>
         Separate a combined MGS/MDT</title>
       <para>These instructions assume the MGS node will be the same as the MDS
         node. For instructions on how to move MGS to a different node, see
-        <xref linkend="dbdoclet.changingservernid"/>.</para>
+        <xref linkend="lustremaint.changingservernid"/>.</para>
       <para>These instructions are for doing the split without shutting down
         other servers and clients.</para>
       <orderedlist>
@@ -752,7 +820,7 @@ Changing the Address of a Failover Node</title>
              <screen>mds# cp -r <replaceable>/mdt_mount_point</replaceable>/CONFIGS/<replaceable>filesystem_name</replaceable>-* <replaceable>/mgs_mount_point</replaceable>/CONFIGS/. </screen>
              <screen>mds# umount <replaceable>/mgs_mount_point</replaceable></screen>
              <screen>mds# umount <replaceable>/mdt_mount_point</replaceable></screen>
-          <para>See <xref linkend="dbdoclet.50438199_54623"/> for alternative method.</para>
+          <para>See <xref linkend="lustremaint.regenerateConfigLogs"/> for an alternative method.</para>
         </listitem>
         <listitem>
           <para>Start the MGS.</para>
@@ -772,4 +840,33 @@ Changing the Address of a Failover Node</title>
         </listitem>
       </orderedlist>
     </section>
+    <section xml:id="lustremaint.setMDTReadonly" condition="l2D">
+      <title><indexterm><primary>maintenance</primary>
+        <secondary>set an MDT to readonly</secondary></indexterm>
+        Set an MDT to read-only</title>
+      <para>It is sometimes desirable to be able to mark the filesystem
+      read-only directly on the server, rather than remounting the clients and
+      setting the option there. This can be useful if there is a rogue client
+      that is deleting files, or when decommissioning a system, to prevent
+      already-mounted clients from making further modifications.</para>
+      <para>Set the <literal>mdt.*.readonly</literal> parameter to
+      <literal>1</literal> to immediately set the MDT to read-only.  All future
+      MDT access will immediately return a "Read-only file system" error
+      (<literal>EROFS</literal>) until the parameter is set to
+      <literal>0</literal> again.</para>
+      <para>Example of setting the <literal>readonly</literal> parameter to
+      <literal>1</literal>, verifying the current setting, accessing from a
+      client, and setting the parameter back to <literal>0</literal>:</para>
+      <screen>mds# lctl set_param mdt.fs-MDT0000.readonly=1
+mdt.fs-MDT0000.readonly=1
+
+mds# lctl get_param mdt.fs-MDT0000.readonly
+mdt.fs-MDT0000.readonly=1
+
+client$ touch test_file
+touch: cannot touch ‘test_file’: Read-only file system
+
+mds# lctl set_param mdt.fs-MDT0000.readonly=0
+mdt.fs-MDT0000.readonly=0</screen>
+    </section>
 </chapter>