1 <?xml version='1.0' encoding='UTF-8'?><chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US" xml:id="lustremaintenance">
2 <title xml:id="lustremaintenance.title">Lustre Maintenance</title>
3 <para>Once you have the Lustre file system up and running, you can use the procedures in this section to perform these basic Lustre maintenance tasks:</para>
6 <para><xref linkend="lustremaint.inactiveOST"/></para>
9 <para><xref linkend="lustremaint.findingNodes"/></para>
12 <para><xref linkend="lustremaint.mountingServerWithoutLustre"/></para>
15 <para><xref linkend="lustremaint.regenerateConfigLogs"/></para>
18 <para><xref linkend="lustremaint.changingservernid"/></para>
21 <para><xref linkend="lustremaint.clear_conf"/></para>
24 <para><xref linkend="lustremaint.adding_new_mdt"/></para>
27 <para><xref linkend="lustremaint.adding_new_ost"/></para>
30 <para><xref linkend="lustremaint.deactivating_mdt_ost"/></para>
33 <para><xref linkend="lustremaint.rmremotedir"/></para>
36 <para><xref linkend="lustremaint.inactivemdt"/></para>
39 <para><xref linkend="lustremaint.remove_ost"/></para>
42 <para><xref linkend="lustremaint.ydg_pgt_tl"/></para>
45 <para><xref linkend="lustremaint.restore_ost"/></para>
48 <para><xref linkend="lustremaint.ucf_qgt_tl"/></para>
51 <para><xref linkend="lustremaint.abortRecovery"/></para>
54 <para><xref linkend="lustremaint.determineOST"/></para>
57 <para><xref linkend="lustremaint.ChangeAddrFailoverNode"/></para>
60 <para><xref linkend="lustremaint.seperateCombinedMGSMDT"/></para>
63 <para><xref linkend="lustremaint.setMDTReadonly"/></para>
66 <section xml:id="lustremaint.inactiveOST">
68 <indexterm><primary>maintenance</primary></indexterm>
69 <indexterm><primary>maintenance</primary><secondary>inactive OSTs</secondary></indexterm>
70 Working with Inactive OSTs</title>
71 <para>To mount a client or an MDT with one or more inactive OSTs, run commands similar to the following:</para>
72 <screen>client# mount -o exclude=testfs-OST0000 -t lustre \
73 uml1:/testfs /mnt/testfs
74 client# lctl get_param lov.testfs-clilov-*.target_obd</screen>
75 <para>To activate an inactive OST on a live client or MDT, use the
76 <literal>lctl activate</literal> command on the OSC device. For example:</para>
77 <screen>lctl --device 7 activate</screen>
79 <para>A colon-separated list can also be specified. For example,
80 <literal>exclude=testfs-OST0000:testfs-OST0001</literal>.</para>
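<para>The OSC device number used with <literal>lctl activate</literal>
can be found with <literal>lctl dl</literal>; the device name and UUID
shown below are illustrative:</para>
<screen>client# lctl dl | grep osc
7 UP osc testfs-OST0000-osc-ffff88004edf3c00 4c8be054-144f-9359-b063-8477566eb84e 5</screen>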
83 <section xml:id="lustremaint.findingNodes">
84 <title><indexterm><primary>maintenance</primary><secondary>finding nodes</secondary></indexterm>
85 Finding Nodes in the Lustre File System</title>
86 <para>There may be situations in which you need to find all nodes in
87 your Lustre file system or get the names of all OSTs.</para>
88 <para>To get a list of all Lustre nodes, run this command on the MGS:</para>
89 <screen># lctl get_param mgs.MGS.live.*</screen>
91 <para>This command must be run on the MGS.</para>
93 <para>In this example, file system <literal>testfs</literal> has three
94 targets, <literal>testfs-MDT0000</literal>,
95 <literal>testfs-OST0000</literal>, and
96 <literal>testfs-OST0001</literal>.</para>
97 <screen>mgs:/root# lctl get_param mgs.MGS.live.*
102 testfs-OST0001 </screen>
103 <para>To get the names of all OSTs, run this command on the MDS:</para>
104 <screen>mds:/root# lctl get_param lov.*-mdtlov.target_obd </screen>
106 <para>This command must be run on the MDS.</para>
108 <para>In this example, there are two OSTs, testfs-OST0000 and
109 testfs-OST0001, which are both active.</para>
110 <screen>mds:/root# lctl get_param lov.testfs-mdtlov.target_obd
111 0: testfs-OST0000_UUID ACTIVE
112 1: testfs-OST0001_UUID ACTIVE </screen>
114 <section xml:id="lustremaint.mountingServerWithoutLustre">
115 <title><indexterm><primary>maintenance</primary><secondary>mounting a server</secondary></indexterm>
116 Mounting a Server Without Lustre Service</title>
117 <para>If you are using a combined MGS/MDT, but you only want to start the MGS and not the MDT, run this command:</para>
118 <screen>mount -t lustre <replaceable>/dev/mdt_partition</replaceable> -o nosvc <replaceable>/mount_point</replaceable></screen>
119 <para>The <literal><replaceable>mdt_partition</replaceable></literal> variable is the combined MGS/MDT block device.</para>
120 <para>In this example, the combined MGS/MDT is <literal>testfs-MDT0000</literal> and the mount point is <literal>/mnt/test/mdt</literal>.</para>
121 <screen>$ mount -t lustre -L testfs-MDT0000 -o nosvc /mnt/test/mdt</screen>
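<para>When the MGS service is no longer needed, stop it by unmounting
the mount point in the usual way:</para>
<screen>$ umount /mnt/test/mdt</screen>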
123 <section xml:id="lustremaint.regenerateConfigLogs">
124 <title><indexterm><primary>maintenance</primary><secondary>regenerating config logs</secondary></indexterm>
125 Regenerating Lustre Configuration Logs</title>
126 <para>If the Lustre file system configuration logs are in a state where
127 the file system cannot be started, use the
128 <literal>tunefs.lustre --writeconf</literal> command to regenerate them.
129 After the <literal>writeconf</literal> command is run and the servers
130 restart, the configuration logs are re-generated and stored on the MGS
131 (as with a new file system).</para>
132 <para>You should only use the <literal>writeconf</literal> command if:</para>
135 <para>The configuration logs are in a state where the file system cannot start</para>
138 <para>A server NID is being changed</para>
141 <para>The <literal>writeconf</literal> command is destructive to some
142 configuration items (e.g. OST pools information and tunables set via
143 <literal>conf_param</literal>), and should be used with caution.</para>
145 <para>The OST pools feature enables a group of OSTs to be named for
146 file striping purposes. If you use OST pools, be aware that running
147 the <literal>writeconf</literal> command erases
148 <emphasis role="bold">all</emphasis> pools information (as well as
149 any other parameters set via <literal>lctl conf_param</literal>).
150 We recommend that the pools definitions (and
151 <literal>conf_param</literal> settings) be executed via a script,
152 so they can be regenerated easily after <literal>writeconf</literal>
153 is performed. However, tunables saved with <literal>lctl set_param
154 -P</literal> are <emphasis>not</emphasis> erased in this case.</para>
157 <para>If the MGS still holds any configuration logs, it may be
158 possible to save any parameters stored with
159 <literal>lctl conf_param</literal> by dumping the config logs on
160 the MGS and saving the output:
162 mgs# lctl --device MGS llog_print <replaceable>fsname</replaceable>-client
163 mgs# lctl --device MGS llog_print <replaceable>fsname</replaceable>-MDT0000
164 mgs# lctl --device MGS llog_print <replaceable>fsname</replaceable>-OST0000
167 <para>To regenerate Lustre file system configuration logs:</para>
170 <para>Stop the file system services in the following order before
171 running the <literal>tunefs.lustre --writeconf</literal> command:
175 <para>Unmount the clients.</para>
178 <para>Unmount the MDT(s).</para>
181 <para>Unmount the OST(s).</para>
184 <para>If the MGS is separate from the MDT, it can remain mounted
185 during this process.</para>
190 <para>Make sure the MDT and OST devices are available.</para>
193 <para>Run the <literal>tunefs.lustre --writeconf</literal> command
194 on all target devices.</para>
195 <para>Run writeconf on the MDT(s) first, and then the OST(s).</para>
198 <para>On each MDS, for each MDT run:</para>
199 <screen>mds# tunefs.lustre --writeconf <replaceable>/dev/mdt_device</replaceable></screen>
202 <para> On each OSS, for each OST run:
203 <screen>oss# tunefs.lustre --writeconf <replaceable>/dev/ost_device</replaceable></screen>
209 <para>Restart the file system in the following order:</para>
212 <para>Mount the separate MGT, if it is not already mounted.</para>
215 <para>Mount the MDT(s) in order, starting with MDT0000.</para>
218 <para>Mount the OSTs in order, starting with OST0000.</para>
221 <para>Mount the clients.</para>
226 <para>After the <literal>tunefs.lustre --writeconf</literal> command is
227 run, the configuration logs are re-generated as servers connect to the
230 <section xml:id="lustremaint.changingservernid">
231 <title><indexterm><primary>maintenance</primary><secondary>changing a NID</secondary></indexterm>
232 Changing a Server NID</title>
233 <para>To rewrite the Lustre configuration completely, the
234 <literal>tunefs.lustre --writeconf</literal> command is used to
235 regenerate all of the configuration files.</para>
236 <para>If you need to change only the NID of the MDT or OST, the
237 <literal>replace_nids</literal> command can simplify this process.
238 The <literal>replace_nids</literal> command differs from
239 <literal>tunefs.lustre --writeconf</literal> in that it does not
240 erase the entire configuration log, precluding the need to
241 execute the <literal>writeconf</literal> command on all servers and
242 re-specify all permanent parameter settings. However, the
243 <literal>writeconf</literal> command can still be used if desired.
245 <para>Change a server NID in these situations:</para>
248 <para>New server hardware is added to the file system, and the MDS or an OSS is being moved to the new machine.</para>
251 <para>New network card is installed in the server.</para>
254 <para>You want to reassign IP addresses.</para>
257 <para>To change a server NID:</para>
260 <para>Update the LNet configuration in the <literal>/etc/modprobe.conf</literal> file so the list of server NIDs is correct. Use <literal>lctl list_nids</literal> to view the list of server NIDs.</para>
261 <para>The <literal>lctl list_nids</literal> command indicates which network(s) are
262 configured to work with the Lustre file system.</para>
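<para>For example, the output on a server might look like the following
(the NID shown is illustrative):</para>
<screen>oss# lctl list_nids
192.168.20.34@tcp</screen>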
265 <para>Shut down the file system in this order:</para>
268 <para>Unmount the clients.</para>
271 <para>Unmount the MDT.</para>
274 <para>Unmount all OSTs.</para>
279 <para>If the MGS and MDS share a partition, start the MGS only:</para>
280 <screen>mount -t lustre <replaceable>MDT partition</replaceable> -o nosvc <replaceable>mount_point</replaceable></screen>
283 <para>Run the <literal>replace_nids</literal> command on the MGS:</para>
284 <screen>lctl replace_nids <replaceable>devicename</replaceable> <replaceable>nid1</replaceable>[,nid2,nid3 ...]</screen>
285 <para>where <replaceable>devicename</replaceable> is the Lustre target name, e.g.
286 <literal>testfs-OST0013</literal>.</para>
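<para>For example, to point <literal>testfs-OST0013</literal> at a new
NID (the NID value here is illustrative):</para>
<screen>mgs# lctl replace_nids testfs-OST0013 192.168.20.35@tcp</screen>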
289 <para>If the MGS and MDS share a partition, stop the MGS:</para>
290 <screen>umount <replaceable>mount_point</replaceable></screen>
293 <note><para>The <literal>replace_nids</literal> command also cleans
294 all old, invalidated records out of the configuration log, while
295 preserving all other current settings.</para></note>
296 <note><para>The previous configuration log is backed up on the MGS
297 disk with the suffix <literal>'.bak'</literal>.</para></note>
299 <section xml:id="lustremaint.clear_conf" condition="l2B">
301 <primary>maintenance</primary>
302 <secondary>Clearing a config</secondary>
303 </indexterm> Clearing configuration</title>
305 This command is run on the MGS node with the MGS device mounted with
306 <literal>-o nosvc</literal>. It cleans the configuration logs
307 stored in the CONFIGS/ directory of any records marked SKIP.
308 If a device name is given, then only the logs for that
309 target (e.g. testfs-MDT0000) are processed. Otherwise, if a
310 filesystem name is given, then all of its configuration logs are cleared.
311 The previous configuration log is backed up on the MGS disk with
312 the suffix 'config.timestamp.bak', e.g. Lustre-MDT0000-1476454535.bak.
314 <para> To clear a configuration:</para>
317 <para>Shut down the file system in this order:</para>
320 <para>Unmount the clients.</para>
323 <para>Unmount the MDT.</para>
326 <para>Unmount all OSTs.</para>
332 If the MGS and MDS share a partition, start only the MGS
333 using the <literal>nosvc</literal> option.
335 <screen>mount -t lustre <replaceable>MDT partition</replaceable> -o nosvc <replaceable>mount_point</replaceable></screen>
338 <para>Run the <literal>clear_conf</literal> command on the MGS:
340 <screen>lctl clear_conf <replaceable>config</replaceable></screen>
342 Example: To clear the configuration for
343 <literal>MDT0000</literal> on a filesystem named
344 <literal>testfs</literal>
346 <screen>mgs# lctl clear_conf testfs-MDT0000</screen>
350 <section xml:id="lustremaint.adding_new_mdt">
352 <primary>maintenance</primary>
353 <secondary>adding an MDT</secondary>
354 </indexterm>Adding a New MDT to a Lustre File System</title>
355 <para>Additional MDTs can be added using the DNE feature to serve one
356 or more remote sub-directories within a filesystem, in order to
357 increase the total number of files that can be created in the
358 filesystem, to increase aggregate metadata performance, or to isolate
359 user or application workloads from other users of the filesystem. It
360 is possible to have multiple remote sub-directories reference the
361 same MDT. However, the root directory will always be located on
362 MDT0000. To add a new MDT into the file system:</para>
365 <para>Discover the maximum MDT index. Each MDT must have a unique index.</para>
367 client$ lctl dl | grep mdc
368 36 UP mdc testfs-MDT0000-mdc-ffff88004edf3c00 4c8be054-144f-9359-b063-8477566eb84e 5
369 37 UP mdc testfs-MDT0001-mdc-ffff88004edf3c00 4c8be054-144f-9359-b063-8477566eb84e 5
370 38 UP mdc testfs-MDT0002-mdc-ffff88004edf3c00 4c8be054-144f-9359-b063-8477566eb84e 5
371 39 UP mdc testfs-MDT0003-mdc-ffff88004edf3c00 4c8be054-144f-9359-b063-8477566eb84e 5
375 <para>Add the new block device as a new MDT at the next available
376 index. In this example, the next available index is 4.</para>
378 mds# mkfs.lustre --reformat --fsname=<replaceable>testfs</replaceable> --mdt --mgsnode=<replaceable>mgsnode</replaceable> --index 4 <replaceable>/dev/mdt4_device</replaceable>
382 <para>Mount the MDTs.</para>
384 mds# mount -t lustre <replaceable>/dev/mdt4_blockdevice</replaceable> /mnt/mdt4
388 <para>In order to start creating new files and directories on the
389 new MDT(s), they need to be attached into the namespace at one or
390 more subdirectories using the <literal>lfs mkdir</literal> command.
391 All files and directories below those created with
392 <literal>lfs mkdir</literal> will also be created on the same MDT
393 unless otherwise specified.
396 client# lfs mkdir -i 3 /mnt/testfs/new_dir_on_mdt3
397 client# lfs mkdir -i 4 /mnt/testfs/new_dir_on_mdt4
398 client# lfs mkdir -c 4 /mnt/testfs/new_directory_striped_across_4_mdts
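<para>The MDT used by a new directory can be verified with
<literal>lfs getstripe --mdt-index</literal> (see
<xref linkend="lustremaint.rmremotedir"/>); for the second example above,
the expected output is <literal>4</literal>:</para>
<screen>client# lfs getstripe --mdt-index /mnt/testfs/new_dir_on_mdt4
4</screen>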
403 <section xml:id="lustremaint.adding_new_ost">
404 <title><indexterm><primary>maintenance</primary><secondary>adding a OST</secondary></indexterm>
405 Adding a New OST to a Lustre File System</title>
406 <para>A new OST can be added to an existing Lustre file system on either
407 an existing OSS node or on a new OSS node. In order to keep client IO
408 load balanced across OSS nodes for maximum aggregate performance, it is
409 not recommended to configure different numbers of OSTs on each OSS node.
413 <para> Add a new OST by using <literal>mkfs.lustre</literal> as when
414 the filesystem was first formatted; see
415 <xref linkend="dbdoclet.format_ost" /> for details. Each new OST
416 must have a unique index number; use <literal>lctl dl</literal> to
417 see a list of all OSTs. For example, to add a new OST at index 12
418 to the <literal>testfs</literal> filesystem, run the following commands
419 on the OSS:</para>
420 <screen>oss# mkfs.lustre --fsname=testfs --mgsnode=mds16@tcp0 --ost --index=12 /dev/sda
421 oss# mkdir -p /mnt/testfs/ost12
422 oss# mount -t lustre /dev/sda /mnt/testfs/ost12</screen>
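<para>Once mounted, the new OST should be visible from the clients. Note
that OST index numbers appear in hexadecimal in target names, so index 12
is listed as <literal>OST000c</literal>; the output below is
illustrative:</para>
<screen>client$ lfs df | grep OST000c
testfs-OST000c_UUID   7666232    24852   7252948   1% /mnt/testfs[OST:12]</screen>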
425 <para>Balance OST space usage (if necessary).</para>
426 <para>The file system can be quite unbalanced when new empty OSTs
427 are added to a relatively full filesystem. New file creations are
428 automatically balanced to favor the new OSTs. If this is a scratch
429 file system or files are pruned at regular intervals, then no further
430 work may be needed to balance the OST space usage as new files being
431 created will preferentially be placed on the less full OST(s). As old
432 files are deleted, they will release space on the old OST(s).</para>
433 <para>Files existing prior to the expansion can optionally be
434 rebalanced using the <literal>lfs_migrate</literal> utility.
435 This redistributes file data over the entire set of OSTs.</para>
436 <para>For example, to rebalance all files within the directory
437 <literal>/mnt/lustre/dir</literal>, enter:</para>
438 <screen>client# lfs_migrate /mnt/lustre/dir</screen>
439 <para>To migrate files within the <literal>/test</literal> file
440 system on <literal>OST0004</literal> that are larger than 4GB in
441 size to other OSTs, enter:</para>
442 <screen>client# lfs find /test --ost test-OST0004 -size +4G | lfs_migrate -y</screen>
443 <para>See <xref linkend="dbdoclet.lfs_migrate"/> for details.</para>
447 <section xml:id="lustremaint.deactivating_mdt_ost">
448 <title><indexterm><primary>maintenance</primary><secondary>restoring an OST</secondary></indexterm>
449 <indexterm><primary>maintenance</primary><secondary>removing an OST</secondary></indexterm>
450 Removing and Restoring MDTs and OSTs</title>
451 <para>OSTs and DNE MDTs can be removed from and restored to a Lustre
452 filesystem. Deactivating an OST means that it is temporarily or
453 permanently marked unavailable. Deactivating an OST on the MDS means
454 it will not try to allocate new objects there or perform OST recovery,
455 while deactivating an OST on the client means it will not wait for OST
456 recovery if it cannot contact the OST and will instead return an IO
457 error to the application immediately if files on the OST are accessed.
458 An OST may be permanently deactivated from the file system,
459 depending on the situation and commands used.</para>
460 <note><para>A permanently deactivated MDT or OST still appears in the
461 filesystem configuration until the configuration is regenerated with
462 <literal>writeconf</literal> or it is replaced with a new MDT or OST
463 at the same index and permanently reactivated. A deactivated OST
464 will not be listed by <literal>lfs df</literal>.
466 <para>You may want to temporarily deactivate an OST on the MDS to
467 prevent new files from being written to it in several situations:</para>
470 <para>A hard drive has failed and a RAID resync/rebuild is underway,
471 though the OST can also be marked <emphasis>degraded</emphasis> by
472 the RAID system to avoid allocating new files on the slow OST, which
473 can reduce performance; see <xref linkend='dbdoclet.degraded_ost' />
478 <para>The OST is nearing its space capacity, though the MDS will already
479 try to avoid allocating new files on overly-full OSTs if possible;
480 see <xref linkend='dbdoclet.balancing_free_space' /> for details.
484 <para>MDT/OST storage or MDS/OSS node has failed, and will not
485 be available for some time (or forever), but there is still a
486 desire to continue using the filesystem before it is repaired.</para>
489 <section xml:id="lustremaint.rmremotedir">
490 <title><indexterm><primary>maintenance</primary><secondary>removing an MDT</secondary></indexterm>Removing an MDT from the File System</title>
491 <para>If the MDT is permanently inaccessible,
492 <literal>lfs rm_entry {directory}</literal> can be used to delete the
493 directory entry for the unavailable MDT. Using <literal>rmdir</literal>
494 would otherwise report an IO error due to the remote MDT being inactive.
495 Please note that if the MDT <emphasis>is</emphasis> available, standard
496 <literal>rm -r</literal> should be used to delete the remote directory.
497 After the remote directory has been removed, the administrator should
498 mark the MDT as permanently inactive with:</para>
499 <screen>lctl conf_param {MDT name}.mdc.active=0</screen>
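<para>For example, to permanently mark a hypothetical
<literal>testfs-MDT0001</literal> as inactive, run the following on the
MGS:</para>
<screen>mgs# lctl conf_param testfs-MDT0001.mdc.active=0</screen>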
500 <para>A user can identify which MDT holds a remote sub-directory using
501 the <literal>lfs</literal> utility. For example:</para>
502 <screen>client$ lfs getstripe --mdt-index /mnt/lustre/remote_dir1
504 client$ mkdir /mnt/lustre/local_dir0
505 client$ lfs getstripe --mdt-index /mnt/lustre/local_dir0
508 <para>The <literal>lfs getstripe --mdt-index</literal> command
509 returns the index of the MDT that is serving the given directory.</para>
511 <section xml:id="lustremaint.inactivemdt">
513 <indexterm><primary>maintenance</primary></indexterm>
514 <indexterm><primary>maintenance</primary><secondary>inactive MDTs</secondary></indexterm>Working with Inactive MDTs</title>
515 <para>Files located on or below an inactive MDT are inaccessible until
516 the MDT is activated again. Clients accessing an inactive MDT will receive
519 <section remap="h3" xml:id="lustremaint.remove_ost">
521 <primary>maintenance</primary>
522 <secondary>removing an OST</secondary>
523 </indexterm>Removing an OST from the File System</title>
524 <para>When deactivating an OST, note that the client and MDS each have
525 an OSC device that handles communication with the corresponding OST.
526 To remove an OST from the file system:</para>
529 <para>If the OST is functional, and there are files located on
530 the OST that need to be migrated off of it, file creation
531 for that OST should be temporarily deactivated on the MDS (on each MDS
532 if running with multiple MDS nodes in DNE mode).
536 <para condition="l29">With Lustre 2.9 and later, the MDS should
537 only disable file creation on that OST by setting
538 <literal>max_create_count</literal> to zero:
539 <screen>mds# lctl set_param osp.<replaceable>osc_name</replaceable>.max_create_count=0</screen>
540 This ensures that files deleted or migrated off of the OST
541 will have their corresponding OST objects destroyed, and the space
542 will be freed. For example, to disable <literal>OST0000</literal>
543 in the filesystem <literal>testfs</literal>, run:
544 <screen>mds# lctl set_param osp.testfs-OST0000-osc-MDT*.max_create_count=0</screen>
545 on each MDS in the <literal>testfs</literal> filesystem.</para>
548 <para>With older versions of Lustre, to deactivate the OSC on the
550 <screen>mds# lctl set_param osp.<replaceable>osc_name</replaceable>.active=0</screen>
551 This will prevent the MDS from attempting any communication with
552 that OST, including destroying objects located thereon. This is
553 fine if the OST will be removed permanently, if the OST is not
554 stable in operation, or if it is in a read-only state. Otherwise,
555 the free space and objects on the OST will not decrease when
556 files are deleted, and object destruction will be deferred until
557 the MDS reconnects to the OST.</para>
558 <para>For example, to deactivate <literal>OST0000</literal> in
559 the filesystem <literal>testfs</literal>, run:
560 <screen>mds# lctl set_param osp.testfs-OST0000-osc-MDT*.active=0</screen>
561 Deactivating the OST on the <emphasis>MDS</emphasis> does not
562 prevent use of existing objects for read/write by a client.</para>
564 <para>If migrating files from a working OST, do not deactivate
565 the OST on the clients. Doing so causes IO errors when accessing files
566 located there, and migrating files off the OST would fail.</para>
569 <para>Do not use <literal>lctl conf_param</literal> to
570 deactivate the OST if it is still working, as this immediately
571 and permanently deactivates it in the file system configuration
572 on both the MDS and all clients.</para>
578 <para>Discover all files that have objects residing on the
579 deactivated OST. Depending on whether the deactivated OST is
580 available or not, the data from that OST may be migrated to
581 other OSTs, or may need to be restored from backup.</para>
584 <para>If the OST is still online and available, find all
585 files with objects on the deactivated OST, and copy them
586 to other OSTs in the file system: </para>
587 <screen>client# lfs find --ost <replaceable>ost_name</replaceable> <replaceable>/mount/point</replaceable> | lfs_migrate -y</screen>
588 <para>Note that if multiple OSTs are being deactivated at one
589 time, the <literal>lfs find</literal> command can take multiple
590 <literal>--ost</literal> arguments, and will return files that
591 are located on <emphasis>any</emphasis> of the specified OSTs.
595 <para>If the OST is no longer available, delete the files
596 on that OST and restore them from backup:
597 <screen>client# lfs find --ost <replaceable>ost_uuid</replaceable> -print0 <replaceable>/mount/point</replaceable> |
598 tee /tmp/files_to_restore | xargs -0 -n 1 unlink</screen>
599 The list of files that need to be restored from backup is
600 stored in <literal>/tmp/files_to_restore</literal>. Restoring
601 these files is beyond the scope of this document.</para>
606 <para>Deactivate the OST.</para>
610 If there is expected to be a replacement OST within a short
611 time (a few days), the OST can temporarily be deactivated on
613 <screen>client# lctl set_param osc.<replaceable>fsname</replaceable>-OST<replaceable>number</replaceable>-*.active=0</screen>
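<para>The current state can be checked on a client by reading the same
parameter back; a value of <literal>0</literal> means the OSC is
deactivated (the target and instance names below are illustrative):</para>
<screen>client# lctl get_param osc.testfs-OST0000-*.active
osc.testfs-OST0000-osc-f1579000.active=0</screen>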
614 <note><para>This setting is only temporary and will be reset
615 if the clients are remounted or rebooted. It needs to be run
616 on all clients.</para>
621 <para>If there is not expected to be a replacement for this OST in
622 the near future, permanently deactivate it on all clients and
623 the MDS by running the following command on the MGS:
624 <screen>mgs# lctl conf_param <replaceable>ost_name</replaceable>.osc.active=0</screen></para>
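<para>For example, to permanently deactivate <literal>OST0000</literal>
in the <literal>testfs</literal> filesystem used in the earlier
examples:</para>
<screen>mgs# lctl conf_param testfs-OST0000.osc.active=0</screen>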
625 <note><para>A deactivated OST still appears in the file system
626 configuration, though a replacement OST can be created using the
627 <literal>mkfs.lustre --replace</literal> option; see
628 <xref linkend="lustremaint.restore_ost"/>.
635 <section remap="h3" xml:id="lustremaint.ydg_pgt_tl">
637 <primary>maintenance</primary>
638 <secondary>backing up OST config</secondary>
641 <primary>backup</primary>
642 <secondary>OST config</secondary>
643 </indexterm> Backing Up OST Configuration Files</title>
644 <para>If the OST device is still accessible, then the Lustre
645 configuration files on the OST should be backed up and saved for
646 future use in order to avoid difficulties when a replacement OST is
647 returned to service. These files rarely change, so they can and
648 should be backed up while the OST is functional and accessible. If
649 the deactivated OST is still available to mount (i.e. it has not
650 permanently failed and is not unmountable due to severe corruption), an
651 effort should be made to preserve these files. </para>
654 <para>Mount the OST file system.
655 <screen>oss# mkdir -p /mnt/ost
656 oss# mount -t ldiskfs <replaceable>/dev/ost_device</replaceable> /mnt/ost</screen>
660 <para>Back up the OST configuration files.
661 <screen>oss# tar cvf <replaceable>ost_name</replaceable>.tar -C /mnt/ost last_rcvd \
662 CONFIGS/ O/0/LAST_ID</screen>
666 <para> Unmount the OST file system. <screen>oss# umount /mnt/ost</screen>
671 <section xml:id="lustremaint.restore_ost">
673 <primary>maintenance</primary>
674 <secondary>restoring OST config</secondary>
677 <primary>backup</primary>
678 <secondary>restoring OST config</secondary>
679 </indexterm> Restoring OST Configuration Files</title>
680 <para>If the original OST is still available, it is best to follow the
681 OST backup and restore procedure given in either
682 <xref linkend="dbdoclet.backup_device"/>, or
683 <xref linkend="backup_fs_level"/> and
684 <xref linkend="backup_fs_level.restore"/>.</para>
685 <para>To replace an OST that was removed from service due to corruption
686 or hardware failure, the replacement OST needs to be formatted using
687 <literal>mkfs.lustre</literal>, and the Lustre file system configuration
688 should be restored, if available. Any objects stored on the OST will
689 be permanently lost, and files using the OST should be deleted and/or
690 restored from backup.</para>
691 <para condition="l25">With Lustre 2.5 and later, it is possible to
692 replace an OST to the same index without restoring the configuration
693 files, using the <literal>--replace</literal> option at format time.
694 <screen>oss# mkfs.lustre --ost --reformat --replace --index=<replaceable>old_ost_index</replaceable> \
695 <replaceable>other_options</replaceable> <replaceable>/dev/new_ost_dev</replaceable></screen>
696 The MDS and OSS will negotiate the <literal>LAST_ID</literal> value
697 for the replacement OST.
699 <para>If the OST configuration files were not backed up, due to the
700 OST file system being completely inaccessible, it is still possible to
701 replace the failed OST with a new one at the same OST index. </para>
704 <para>For older versions, format the OST file system without the
705 <literal>--replace</literal> option and restore the saved
707 <screen>oss# mkfs.lustre --ost --reformat --index=<replaceable>old_ost_index</replaceable> \
708 <replaceable>other_options</replaceable> <replaceable>/dev/new_ost_dev</replaceable></screen>
712 <para> Mount the OST file system.
713 <screen>oss# mkdir /mnt/ost
714 oss# mount -t ldiskfs <replaceable>/dev/new_ost_dev</replaceable> <replaceable>/mnt/ost</replaceable></screen>
718 <para>Restore the OST configuration files, if available.
719 <screen>oss# tar xvf <replaceable>ost_name</replaceable>.tar -C /mnt/ost</screen></para>
722 <para>Recreate the OST configuration files, if unavailable. </para>
723 <para>Follow the procedure in
724 <xref linkend="dbdoclet.repair_ost_lastid"/> to recreate the LAST_ID
725 file for this OST index. The <literal>last_rcvd</literal> file
726 will be recreated when the OST is first mounted using the default
727 parameters, which are normally correct for all file systems. The
728 <literal>CONFIGS/mountdata</literal> file is created by
729 <literal>mkfs.lustre</literal> at format time, but has flags set
730 that request it to register itself with the MGS. It is possible to
731 copy the flags from another working OST (which should be the same):
732 <screen>oss1# debugfs -c -R "dump CONFIGS/mountdata /tmp" <replaceable>/dev/other_osdev</replaceable>
733 oss1# scp /tmp/mountdata oss0:/tmp/mountdata
734 oss0# dd if=/tmp/mountdata of=/mnt/ost/CONFIGS/mountdata bs=4 count=1 seek=5 skip=5 conv=notrunc</screen></para>
737 <para> Unmount the OST file system.
738 <screen>oss# umount /mnt/ost</screen>
743 <section xml:id="lustremaint.ucf_qgt_tl">
745 <primary>maintenance</primary>
746 <secondary>reintroducing an OST</secondary>
747 </indexterm>Returning a Deactivated OST to Service</title>
748 <para>If the OST was permanently deactivated, it needs to be
749 reactivated in the MGS configuration.
750 <screen>mgs# lctl conf_param <replaceable>ost_name</replaceable>.osc.active=1</screen>
751 If the OST was temporarily deactivated, it needs to be reactivated on
753 <screen>mds# lctl set_param osp.<replaceable>fsname</replaceable>-OST<replaceable>number</replaceable>-*.active=1
754 client# lctl set_param osc.<replaceable>fsname</replaceable>-OST<replaceable>number</replaceable>-*.active=1</screen></para>
757 <section xml:id="lustremaint.abortRecovery">
758 <title><indexterm><primary>maintenance</primary><secondary>aborting recovery</secondary></indexterm>
759 <indexterm><primary>backup</primary><secondary>aborting recovery</secondary></indexterm>
760 Aborting Recovery</title>
761 <para>You can abort recovery with either the <literal>lctl</literal> utility or by mounting the target with the <literal>abort_recov</literal> option (<literal>mount -o abort_recov</literal>). When starting a target, run: <screen>mds# mount -t lustre -L <replaceable>mdt_name</replaceable> -o abort_recov <replaceable>/mount_point</replaceable></screen></para>
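<para>If the target is already mounted, recovery can also be aborted with
the <literal>lctl</literal> utility; a minimal sketch, assuming the MDT
name used elsewhere in this chapter:</para>
<screen>mds# lctl --device testfs-MDT0000 abort_recovery</screen>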
763 <para>The recovery process is blocked until all OSTs are available. </para>
766 <section xml:id="lustremaint.determineOST">
767 <title><indexterm><primary>maintenance</primary><secondary>identifying OST host</secondary></indexterm>
768 Determining Which Machine is Serving an OST </title>
769 <para>In the course of administering a Lustre file system, you may need to determine which
770 machine is serving a specific OST. This is not as simple as identifying the machine’s IP
771 address, because IP is only one of several networking protocols that the Lustre software uses;
772 as such, LNet identifies nodes by NID rather than by IP address. To identify the
773 NID that is serving a specific OST, run one of the following commands on a client (you do not
774 need to be a root user):
775 <screen>client$ lctl get_param osc.<replaceable>fsname</replaceable>-OST<replaceable>number</replaceable>*.ost_conn_uuid</screen>
777 <screen>client$ lctl get_param osc.*-OST0000*.ost_conn_uuid
778 osc.testfs-OST0000-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp</screen>
780 <screen>client$ lctl get_param osc.*.ost_conn_uuid
781 osc.testfs-OST0000-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp
782 osc.testfs-OST0001-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp
783 osc.testfs-OST0002-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp
784 osc.testfs-OST0003-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp
785 osc.testfs-OST0004-osc-f1579000.ost_conn_uuid=192.168.20.1@tcp</screen></para>
787 <section xml:id="lustremaint.ChangeAddrFailoverNode">
788 <title><indexterm><primary>maintenance</primary><secondary>changing failover node address</secondary></indexterm>
789 Changing the Address of a Failover Node</title>
790 <para>To change the address of a failover node (e.g, to use node X instead of node Y), run
791 this command on the OSS/OST partition (depending on which option was used to originally
793 <screen>oss# tunefs.lustre --erase-params --servicenode=<replaceable>NID</replaceable> <replaceable>/dev/ost_device</replaceable></screen>
795 <screen>oss# tunefs.lustre --erase-params --failnode=<replaceable>NID</replaceable> <replaceable>/dev/ost_device</replaceable></screen>
796 For more information about the <literal>--servicenode</literal> and
797 <literal>--failnode</literal> options, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
798 linkend="configuringfailover"/>.</para>
800 <section xml:id="lustremaint.seperateCombinedMGSMDT">
801 <title><indexterm><primary>maintenance</primary><secondary>separate a
802 combined MGS/MDT</secondary></indexterm>
803 Separate a combined MGS/MDT</title>
804 <para>These instructions assume the MGS node will be the same as the MDS
805 node. For instructions on how to move the MGS to a different node, see
806 <xref linkend="lustremaint.changingservernid"/>.</para>
807 <para>These instructions are for doing the split without shutting down
808 other servers and clients.</para>
811 <para>Stop the MDS.</para>
812 <para>Unmount the MDT.</para>
813 <screen>umount -f <replaceable>/dev/mdt_device</replaceable> </screen>
816 <para>Create the MGS.</para>
817 <screen>mds# mkfs.lustre --mgs --device-size=<replaceable>size</replaceable> <replaceable>/dev/mgs_device</replaceable></screen>
820 <para>Copy the configuration data from MDT disk to the new MGS disk.</para>
821 <screen>mds# mount -t ldiskfs -o ro <replaceable>/dev/mdt_device</replaceable> <replaceable>/mdt_mount_point</replaceable></screen>
822 <screen>mds# mount -t ldiskfs -o rw <replaceable>/dev/mgs_device</replaceable> <replaceable>/mgs_mount_point</replaceable> </screen>
823 <screen>mds# cp -r <replaceable>/mdt_mount_point</replaceable>/CONFIGS/<replaceable>filesystem_name</replaceable>-* <replaceable>/mgs_mount_point</replaceable>/CONFIGS/. </screen>
824 <screen>mds# umount <replaceable>/mgs_mount_point</replaceable></screen>
825 <screen>mds# umount <replaceable>/mdt_mount_point</replaceable></screen>
826 <para>See <xref linkend="lustremaint.regenerateConfigLogs"/> for an alternative method.</para>
829 <para>Start the MGS.</para>
830 <screen>mgs# mount -t lustre <replaceable>/dev/mgs_device</replaceable> <replaceable>/mgs_mount_point</replaceable></screen>
831 <para>Check to make sure it knows about all of your file systems.</para>
832 <screen>mgs:/root# lctl get_param mgs.MGS.filesystems</screen>
835 <para>Remove the MGS option from the MDT, and set the new MGS nid.</para>
836 <screen>mds# tunefs.lustre --nomgs --mgsnode=<replaceable>new_mgs_nid</replaceable> <replaceable>/dev/mdt-device</replaceable></screen>
839 <para>Start the MDT.</para>
840 <screen>mds# mount -t lustre <replaceable>/dev/mdt_device /mdt_mount_point</replaceable></screen>
841 <para>Check to make sure the MGS configuration looks right:</para>
842 <screen>mgs# lctl get_param mgs.MGS.live.<replaceable>filesystem_name</replaceable></screen>
846 <section xml:id="lustremaint.setMDTReadonly" condition="l2D">
847 <title><indexterm><primary>maintenance</primary>
848 <secondary>set an MDT to readonly</secondary></indexterm>
849 Set an MDT to read-only</title>
850 <para>It is sometimes desirable to be able to mark the filesystem
851 read-only directly on the server, rather than remounting the clients and
852 setting the option there. This can be useful if there is a rogue client
853 that is deleting files, or when decommissioning a system to prevent
854 already-mounted clients from modifying it anymore.</para>
855 <para>Set the <literal>mdt.*.readonly</literal> parameter to
856 <literal>1</literal> to immediately set the MDT to read-only. All future
857 MDT access will immediately return a "Read-only file system" error
858 (<literal>EROFS</literal>) until the parameter is set to
859 <literal>0</literal> again.</para>
860 <para>Example of setting the <literal>readonly</literal> parameter to
861 <literal>1</literal>, verifying the current setting, accessing from a
862 client, and setting the parameter back to <literal>0</literal>:</para>
863 <screen>mds# lctl set_param mdt.fs-MDT0000.readonly=1
864 mdt.fs-MDT0000.readonly=1
866 mds# lctl get_param mdt.fs-MDT0000.readonly
867 mdt.fs-MDT0000.readonly=1
869 client$ touch test_file
870 touch: cannot touch ‘test_file’: Read-only file system
872 mds# lctl set_param mdt.fs-MDT0000.readonly=0
873 mdt.fs-MDT0000.readonly=0</screen>