1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="lustreoperations">
5 <title xml:id="lustreoperations.title">Lustre Operations</title>
<para>Once you have the Lustre file system up and running, you can use the
procedures in this section to perform basic Lustre administration tasks.</para>
9 <section xml:id="dbdoclet.50438194_42877">
12 <primary>operations</primary>
15 <primary>operations</primary>
16 <secondary>mounting by label</secondary>
17 </indexterm>Mounting by Label</title>
18 <para>The file system name is limited to 8 characters. We have encoded the
19 file system and target information in the disk label, so you can mount by
20 label. This allows system administrators to move disks around without
21 worrying about issues such as SCSI disk reordering or getting the
22 <literal>/dev/device</literal> wrong for a shared target. Soon, file system
23 naming will be made as fail-safe as possible. Currently, Linux disk labels
are limited to 16 characters. To identify the target within the file
system, 8 characters are reserved, leaving 8 characters for the file system
name, for example:</para>
<screen><replaceable>fsname</replaceable>-MDT0000 or
<replaceable>fsname</replaceable>-OST0a19</screen>
<para>To mount by label, use this command:</para>
<screen>mount -t lustre -L <replaceable>file_system_label</replaceable> <replaceable>/mount_point</replaceable></screen>
<para>This is an example of mount-by-label:</para>
<screen>mds# mount -t lustre -L testfs-MDT0000 /mnt/mdt</screen>
42 <para>Mount-by-label should NOT be used in a multi-path environment or
43 when snapshots are being created of the device, since multiple block
44 devices will have the same label.</para>
46 <para>Although the file system name is internally limited to 8 characters,
47 you can mount the clients at any mount point, so file system users are not
48 subjected to short names. Here is an example:</para>
<screen>client# mount -t lustre mds0@tcp0:/short <replaceable>/mnt/long_mountpoint_name</replaceable></screen>
54 <section xml:id="dbdoclet.50438194_24122">
57 <primary>operations</primary>
58 <secondary>starting</secondary>
59 </indexterm>Starting Lustre</title>
60 <para>On the first start of a Lustre file system, the components must be
61 started in the following order:</para>
64 <para>Mount the MGT.</para>
66 <para>If a combined MGT/MDT is present, Lustre will correctly mount
67 the MGT and MDT automatically.</para>
71 <para>Mount the MDT.</para>
<para condition='l24'>Mount all MDTs if multiple MDTs are present.</para>
78 <para>Mount the OST(s).</para>
81 <para>Mount the client(s).</para>
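<para>As an illustrative sketch only (the node names, device names and
mount points below are examples, not values from this manual), the startup
order above might look like this for a file system with a combined
MGS/MDT:</para>
<screen># 1. Mount the MGT/MDT first (combined MGS/MDT in this sketch)
mds# mount -t lustre /dev/sda1 /mnt/mdt
# 2. Mount the OST(s)
oss1# mount -t lustre /dev/sdb /mnt/ost0
# 3. Mount the client(s) last
client# mount -t lustre mds@tcp0:/testfs /mnt/testfs</screen>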
85 <section xml:id="dbdoclet.50438194_84876">
88 <primary>operations</primary>
89 <secondary>mounting</secondary>
90 </indexterm>Mounting a Server</title>
91 <para>Starting a Lustre server is straightforward and only involves the
92 mount command. Lustre servers can be added to
93 <literal>/etc/fstab</literal>:</para>
<para>The mount command generates output similar to this:</para>
<screen>/dev/sda1 on /mnt/test/mdt type lustre (rw)
/dev/sda2 on /mnt/test/ost0 type lustre (rw)
192.168.0.21@tcp:/testfs on /mnt/testfs type lustre (rw)</screen>
<para>In this example, the MDT, an OST (ost0) and file system (testfs) are
mounted.</para>
<screen>LABEL=testfs-MDT0000 /mnt/test/mdt lustre defaults,_netdev,noauto 0 0
LABEL=testfs-OST0000 /mnt/test/ost0 lustre defaults,_netdev,noauto 0 0</screen>
109 <para>In general, it is wise to specify noauto and let your
110 high-availability (HA) package manage when to mount the device. If you are
111 not using failover, make sure that networking has been started before
mounting a Lustre server. If you are running Red Hat Enterprise Linux, SUSE
Linux Enterprise Server, or Debian (and perhaps other distributions), use the
<literal>_netdev</literal> flag to ensure that these disks are mounted after
the network is up.</para>
<para>We are mounting by disk label here. The label of a device can be read
with <literal>e2label</literal>. The label of a newly-formatted Lustre server
ends in <literal>FFFF</literal> if the
<literal>--index</literal> option is not specified to
<literal>mkfs.lustre</literal>, meaning that it has yet to be assigned. The
124 assignment takes place when the server is first started, and the disk label
125 is updated. It is recommended that the
126 <literal>--index</literal> option always be used, which will also ensure
127 that the label is set at format time.</para>
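<para>For example, the label of a formatted target can be read back with
<literal>e2label</literal> (the device name here is illustrative):</para>
<screen>oss# e2label /dev/sdb
testfs-OST0000</screen>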
129 <para>Do not do this when the client and OSS are on the same node, as
130 memory pressure between the client and OSS can lead to deadlocks.</para>
<para>Mount-by-label should NOT be used in a multi-path environment or when
snapshots are being created of the device, since multiple block devices will
have the same label.</para>
137 <section xml:id="dbdoclet.shutdownLustre">
140 <primary>operations</primary>
141 <secondary>shutdownLustre</secondary>
142 </indexterm>Stopping the Filesystem</title>
143 <para>A complete Lustre filesystem shutdown occurs by unmounting all
144 clients and servers in the order shown below. Please note that unmounting
145 a block device causes the Lustre software to be shut down on that node.
<note><para>Please note that the <literal>-a -t lustre</literal> in the
commands below is not the name of a filesystem, but rather specifies
unmounting all entries in <literal>/etc/mtab</literal> that are of type
<literal>lustre</literal>.</para></note>
152 <listitem><para>Unmount the clients</para>
153 <para>On each client node, unmount the filesystem on that client
154 using the <literal>umount</literal> command:</para>
155 <para><literal>umount -a -t lustre</literal></para>
156 <para>The example below shows the unmount of the
157 <literal>testfs</literal> filesystem on a client node:</para>
158 <para><screen>[root@client1 ~]# mount |grep testfs
159 XXX.XXX.0.11@tcp:/testfs on /mnt/testfs type lustre (rw,lazystatfs)
161 [root@client1 ~]# umount -a -t lustre
162 [154523.177714] Lustre: Unmounted testfs-client</screen></para>
164 <listitem><para>Unmount the MDT and MGT</para>
165 <para>On the MGS and MDS node(s), use the <literal>umount</literal>
167 <para><literal>umount -a -t lustre</literal></para>
168 <para>The example below shows the unmount of the MDT and MGT for
169 the <literal>testfs</literal> filesystem on a combined MGS/MDS:
171 <para><screen>[root@mds1 ~]# mount |grep lustre
172 /dev/sda on /mnt/mgt type lustre (ro)
173 /dev/sdb on /mnt/mdt type lustre (ro)
175 [root@mds1 ~]# umount -a -t lustre
176 [155263.566230] Lustre: Failing over testfs-MDT0000
177 [155263.775355] Lustre: server umount testfs-MDT0000 complete
178 [155269.843862] Lustre: server umount MGS complete</screen></para>
<para>For a separate MGS and MDS, the same command is used, first on
the MDS and then on the MGS.</para>
182 <listitem><para>Unmount all the OSTs</para>
183 <para>On each OSS node, use the <literal>umount</literal> command:
185 <para><literal>umount -a -t lustre</literal></para>
186 <para>The example below shows the unmount of all OSTs for the
187 <literal>testfs</literal> filesystem on server
188 <literal>OSS1</literal>:
190 <para><screen>[root@oss1 ~]# mount |grep lustre
191 /dev/sda on /mnt/ost0 type lustre (ro)
192 /dev/sdb on /mnt/ost1 type lustre (ro)
193 /dev/sdc on /mnt/ost2 type lustre (ro)
195 [root@oss1 ~]# umount -a -t lustre
196 [155336.491445] Lustre: Failing over testfs-OST0002
197 [155336.556752] Lustre: server umount testfs-OST0002 complete</screen></para>
200 <para>For unmount command syntax for a single OST, MDT, or MGT target
201 please refer to <xref linkend="dbdoclet.umountTarget"/></para>
203 <section xml:id="dbdoclet.umountTarget">
206 <primary>operations</primary>
207 <secondary>unmounting</secondary>
208 </indexterm>Unmounting a Specific Target on a Server</title>
<para>To stop a Lustre OST, MDT, or MGT, use the
<literal>umount <replaceable>/mount_point</replaceable></literal> command.</para>
212 <para>The example below stops an OST, <literal>ost0</literal>, on mount
213 point <literal>/mnt/ost0</literal> for the <literal>testfs</literal>
215 <screen>[root@oss1 ~]# umount /mnt/ost0
216 [ 385.142264] Lustre: Failing over testfs-OST0000
217 [ 385.210810] Lustre: server umount testfs-OST0000 complete</screen>
218 <para>Gracefully stopping a server with the
219 <literal>umount</literal> command preserves the state of the connected
220 clients. The next time the server is started, it waits for clients to
221 reconnect, and then goes through the recovery procedure.</para>
<para>If the force (<literal>-f</literal>) flag is used, then the server
evicts all clients and stops WITHOUT recovery. Upon restart, the server does
not wait for recovery. Any currently connected clients receive I/O errors
until they reconnect.</para>
228 <para>If you are using loopback devices, use the
229 <literal>-d</literal> flag. This flag cleans up loop devices and can
230 always be safely specified.</para>
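<para>To illustrate the difference described above (mount point
illustrative), a graceful stop and a forced stop of the same target would
be:</para>
<screen>oss1# umount /mnt/ost0      # graceful: client state preserved, recovery on restart
oss1# umount -f /mnt/ost0   # forced: clients evicted, no recovery on restart</screen>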
233 <section xml:id="dbdoclet.50438194_57420">
236 <primary>operations</primary>
237 <secondary>failover</secondary>
238 </indexterm>Specifying Failout/Failover Mode for OSTs</title>
239 <para>In a Lustre file system, an OST that has become unreachable because
240 it fails, is taken off the network, or is unmounted can be handled in one
245 <literal>failout</literal> mode, Lustre clients immediately receive
246 errors (EIOs) after a timeout, instead of waiting for the OST to
251 <literal>failover</literal> mode, Lustre clients wait for the OST to
255 <para>By default, the Lustre file system uses
256 <literal>failover</literal> mode for OSTs. To specify
257 <literal>failout</literal> mode instead, use the
258 <literal>--param="failover.mode=failout"</literal> option as shown below
(entered on one line):</para>
<screen>oss# mkfs.lustre --fsname=<replaceable>fsname</replaceable> --mgsnode=<replaceable>mgs_NID</replaceable> --param=failover.mode=failout --ost --index=<replaceable>ost_index</replaceable> <replaceable>/dev/ost_block_device</replaceable></screen>
<para>In the example below,
<literal>failout</literal> mode is specified for the OSTs on the MGS
<literal>mds0</literal> in the file system
<literal>testfs</literal> (entered on one line).</para>
273 oss# mkfs.lustre --fsname=testfs --mgsnode=mds0 --param=failover.mode=failout
274 --ost --index=3 /dev/sdb
<para>Before running this command, unmount all OSTs that will be affected
by the change of
<literal>failover</literal>/
<literal>failout</literal> mode.</para>
<para>After initial file system configuration, use the
<literal>tunefs.lustre</literal> utility to change the mode. For example, to
change an OST to <literal>failout</literal> mode, run:</para>
289 $ tunefs.lustre --param failover.mode=failout
290 <replaceable>/dev/ost_device</replaceable>
295 <section xml:id="dbdoclet.degraded_ost">
298 <primary>operations</primary>
299 <secondary>degraded OST RAID</secondary>
300 </indexterm>Handling Degraded OST RAID Arrays</title>
<para>Lustre includes functionality that allows it to be notified when an
external RAID array has degraded performance (resulting in reduced overall
file system performance), either because a disk has failed and not been
replaced, or because a disk was replaced and is undergoing a rebuild. To
avoid a global performance slowdown due to a degraded OST, the MDS can
avoid the OST for new object allocation if it is notified of the degraded
state.</para>
308 <para>A parameter for each OST, called
309 <literal>degraded</literal>, specifies whether the OST is running in
310 degraded mode or not.</para>
311 <para>To mark the OST as degraded, use:</para>
313 lctl set_param obdfilter.{OST_name}.degraded=1
315 <para>To mark that the OST is back in normal operation, use:</para>
317 lctl set_param obdfilter.{OST_name}.degraded=0
319 <para>To determine if OSTs are currently in degraded mode, use:</para>
321 lctl get_param obdfilter.*.degraded
<para>If the OST is remounted due to a reboot or other condition, the flag
resets to <literal>0</literal>.</para>
326 <para>It is recommended that this be implemented by an automated script
327 that monitors the status of individual RAID devices, such as MD-RAID's
328 <literal>mdadm(8)</literal> command with the <literal>--monitor</literal>
329 option to mark an affected device degraded or restored.</para>
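<para>As a sketch only (the script path, OST name, and the mapping from md
device to OST name are hypothetical and site-specific), a hook program for
<literal>mdadm --monitor</literal> could set and clear the flag
automatically:</para>
<screen>#!/bin/sh
# Hypothetical /usr/local/sbin/lustre-md-event.sh
# mdadm --monitor invokes the program as: &lt;event&gt; &lt;md-device&gt; [&lt;component&gt;]
EVENT="$1"
MD_DEV="$2"
OST_NAME="testfs-OST0000"   # site-specific: map $MD_DEV to its OST name here
case "$EVENT" in
    Fail|FailSpare|DegradedArray)
        lctl set_param obdfilter.${OST_NAME}.degraded=1 ;;
    RebuildFinished|SpareActive)
        lctl set_param obdfilter.${OST_NAME}.degraded=0 ;;
esac</screen>
<para>The script would be registered on the OSS with
<literal>mdadm --monitor --scan --program=/usr/local/sbin/lustre-md-event.sh</literal>.</para>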
331 <section xml:id="dbdoclet.50438194_88063">
334 <primary>operations</primary>
335 <secondary>multiple file systems</secondary>
336 </indexterm>Running Multiple Lustre File Systems</title>
337 <para>Lustre supports multiple file systems provided the combination of
338 <literal>NID:fsname</literal> is unique. Each file system must be allocated
339 a unique name during creation with the
340 <literal>--fsname</literal> parameter. Unique names for file systems are
341 enforced if a single MGS is present. If multiple MGSs are present (for
342 example if you have an MGS on every MDS) the administrator is responsible
343 for ensuring file system names are unique. A single MGS and unique file
344 system names provides a single point of administration and allows commands
345 to be issued against the file system even if it is not mounted.</para>
346 <para>Lustre supports multiple file systems on a single MGS. With a single
347 MGS fsnames are guaranteed to be unique. Lustre also allows multiple MGSs
348 to co-exist. For example, multiple MGSs will be necessary if multiple file
349 systems on different Lustre software versions are to be concurrently
350 available. With multiple MGSs additional care must be taken to ensure file
351 system names are unique. Each file system should have a unique fsname among
352 all systems that may interoperate in the future.</para>
353 <para>By default, the
354 <literal>mkfs.lustre</literal> command creates a file system named
355 <literal>lustre</literal>. To specify a different file system name (limited
356 to 8 characters) at format time, use the
357 <literal>--fsname</literal> option:</para>
360 mkfs.lustre --fsname=
361 <replaceable>file_system_name</replaceable>
<para>The MDT, OSTs and clients in the new file system must use the same
file system name (prepended to the device name). For example, for a new
file system named <literal>foo</literal>, the MDT and two OSTs would be named
369 <literal>foo-MDT0000</literal>,
370 <literal>foo-OST0000</literal>, and
371 <literal>foo-OST0001</literal>.</para>
<para>To mount a client on the file system, run:</para>
<screen>client# mount -t lustre <replaceable>mgsnode</replaceable>:/<replaceable>new_fsname</replaceable> <replaceable>/mount_point</replaceable></screen>
380 <para>For example, to mount a client on file system foo at mount point
381 /mnt/foo, run:</para>
383 client# mount -t lustre mgsnode:/foo /mnt/foo
<para>If clients will be mounted on several file systems, add the following
line to the <literal>/etc/xattr.conf</literal> file to avoid problems when
files are moved between the file systems:
<literal>lustre.* skip</literal></para>
<para>To ensure that a new MDT is added to an existing MGS, create the MDT
by specifying:
<literal>--mdt --mgsnode=<replaceable>mgs_NID</replaceable></literal>.</para>
398 <para>A Lustre installation with two file systems (
399 <literal>foo</literal> and
400 <literal>bar</literal>) could look like this, where the MGS node is
401 <literal>mgsnode@tcp0</literal> and the mount points are
402 <literal>/mnt/foo</literal> and
403 <literal>/mnt/bar</literal>.</para>
405 mgsnode# mkfs.lustre --mgs /dev/sda
406 mdtfoonode# mkfs.lustre --fsname=foo --mgsnode=mgsnode@tcp0 --mdt --index=0
408 ossfoonode# mkfs.lustre --fsname=foo --mgsnode=mgsnode@tcp0 --ost --index=0
410 ossfoonode# mkfs.lustre --fsname=foo --mgsnode=mgsnode@tcp0 --ost --index=1
412 mdtbarnode# mkfs.lustre --fsname=bar --mgsnode=mgsnode@tcp0 --mdt --index=0
414 ossbarnode# mkfs.lustre --fsname=bar --mgsnode=mgsnode@tcp0 --ost --index=0
416 ossbarnode# mkfs.lustre --fsname=bar --mgsnode=mgsnode@tcp0 --ost --index=1
419 <para>To mount a client on file system foo at mount point
420 <literal>/mnt/foo</literal>, run:</para>
422 client# mount -t lustre mgsnode@tcp0:/foo /mnt/foo
424 <para>To mount a client on file system bar at mount point
425 <literal>/mnt/bar</literal>, run:</para>
427 client# mount -t lustre mgsnode@tcp0:/bar /mnt/bar
430 <section xml:id="dbdoclet.lfsmkdir" condition='l24'>
433 <primary>operations</primary>
434 <secondary>remote directory</secondary>
435 </indexterm>Creating a sub-directory on a given MDT</title>
436 <para>Lustre 2.4 enables individual sub-directories to be serviced by
437 unique MDTs. An administrator can allocate a sub-directory to a given MDT
438 using the command:</para>
<screen>client# lfs mkdir -i <replaceable>mdt_index</replaceable> <replaceable>/mount_point/remote_dir</replaceable></screen>
444 <para>This command will allocate the sub-directory
445 <literal>remote_dir</literal> onto the MDT of index
<literal>mdt_index</literal>. For more information on adding additional MDTs
and <literal>mdt_index</literal> see
<xref linkend='dbdoclet.addmdtindex' />.</para>
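<para>For example, to create a sub-directory served by the MDT with index 1
(the paths here are illustrative):</para>
<screen>client# lfs mkdir -i 1 /mnt/testfs/remote_dir</screen>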
451 <para>An administrator can allocate remote sub-directories to separate
452 MDTs. Creating remote sub-directories in parent directories not hosted on
453 MDT0 is not recommended. This is because the failure of the parent MDT
454 will leave the namespace below it inaccessible. For this reason, by
455 default it is only possible to create remote sub-directories off MDT0. To
456 relax this restriction and enable remote sub-directories off any MDT, an
457 administrator must issue the following command on the MGS:
458 <screen>mgs# lctl conf_param <replaceable>fsname</replaceable>.mdt.enable_remote_dir=1</screen>
For the Lustre filesystem 'scratch', the command is:
460 <screen>mgs# lctl conf_param scratch.mdt.enable_remote_dir=1</screen>
To verify the configuration setting, execute the following command on any
MDS:
<screen>mds# lctl get_param mdt.*.enable_remote_dir</screen></para>
465 <para condition='l28'>With Lustre software version 2.8, a new
466 tunable is available to allow users with a specific group ID to create
467 and delete remote and striped directories. This tunable is
468 <literal>enable_remote_dir_gid</literal>. For example, setting this
469 parameter to the 'wheel' or 'admin' group ID allows users with that GID
to create and delete remote and striped directories. Setting this
parameter to <literal>-1</literal> on MDT0 permanently allows any
non-root user to create and delete remote and striped directories.
On the MGS, execute the following command:
<screen>mgs# lctl conf_param <replaceable>fsname</replaceable>.mdt.enable_remote_dir_gid=-1</screen>
For the Lustre filesystem 'scratch', the command expands to:
<screen>mgs# lctl conf_param scratch.mdt.enable_remote_dir_gid=-1</screen>
477 The change can be verified by executing the following command on every MDS:
478 <screen>mds# lctl get_param mdt.<replaceable>*</replaceable>.enable_remote_dir_gid</screen>
481 <section xml:id="dbdoclet.lfsmkdirdne2" condition='l28'>
484 <primary>operations</primary>
485 <secondary>striped directory</secondary>
488 <primary>operations</primary>
489 <secondary>mkdir</secondary>
492 <primary>operations</primary>
493 <secondary>setdirstripe</secondary>
496 <primary>striping</primary>
497 <secondary>metadata</secondary>
498 </indexterm>Creating a directory striped across multiple MDTs</title>
499 <para>The Lustre 2.8 DNE feature enables individual files in a given
500 directory to store their metadata on separate MDTs (a <emphasis>striped
501 directory</emphasis>) once additional MDTs have been added to the
502 filesystem, see <xref linkend="dbdoclet.adding_new_mdt"/>.
503 The result of this is that metadata requests for
504 files in a striped directory are serviced by multiple MDTs and metadata
505 service load is distributed over all the MDTs that service a given
506 directory. By distributing metadata service load over multiple MDTs,
507 performance can be improved beyond the limit of single MDT
performance. Prior to the development of this feature, all files in a
directory had to record their metadata on a single MDT.</para>
<para>The command to stripe a directory over
<replaceable>mdt_count</replaceable> MDTs is:</para>
<screen>client# lfs mkdir -c <replaceable>mdt_count</replaceable> <replaceable>/mount_point/new_directory</replaceable></screen>
518 <para>The striped directory feature is most useful for distributing
519 single large directories (50k entries or more) across multiple MDTs,
520 since it incurs more overhead than non-striped directories.</para>
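<para>For example, to create a new directory striped across four MDTs (the
path here is illustrative):</para>
<screen>client# lfs mkdir -c 4 /mnt/testfs/big_dir</screen>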
522 <section xml:id="dbdoclet.50438194_88980">
525 <primary>operations</primary>
526 <secondary>parameters</secondary>
527 </indexterm>Setting and Retrieving Lustre Parameters</title>
528 <para>Several options are available for setting parameters in
<para>When creating a file system, use mkfs.lustre. See
<xref linkend="dbdoclet.50438194_17237" /> below.</para>
<para>When a server is stopped, use tunefs.lustre. See
<xref linkend="dbdoclet.50438194_55253" /> below.</para>
<para>When the file system is running, use lctl to set or retrieve
Lustre parameters. See
<xref linkend="dbdoclet.50438194_51490" /> and
<xref linkend="dbdoclet.50438194_63247" /> below.</para>
546 <section xml:id="dbdoclet.50438194_17237">
547 <title>Setting Tunable Parameters with
548 <literal>mkfs.lustre</literal></title>
<para>When the file system is first formatted, parameters can simply be
added as a <literal>--param</literal> option to the
<literal>mkfs.lustre</literal> command. For example:</para>
554 mds# mkfs.lustre --mdt --param="sys.timeout=50" /dev/sda
<para>For more details about creating a file system, see
557 <xref linkend="configuringlustre" />. For more details about
558 <literal>mkfs.lustre</literal>, see
559 <xref linkend="systemconfigurationutilities" />.</para>
561 <section xml:id="dbdoclet.50438194_55253">
562 <title>Setting Parameters with
563 <literal>tunefs.lustre</literal></title>
564 <para>If a server (OSS or MDS) is stopped, parameters can be added to an
565 existing file system using the
566 <literal>--param</literal> option to the
567 <literal>tunefs.lustre</literal> command. For example:</para>
569 oss# tunefs.lustre --param=failover.node=192.168.0.13@tcp0 /dev/sda
<para>With <literal>tunefs.lustre</literal>, parameters are
<emphasis>additive</emphasis>: new parameters are specified in addition
to old parameters; they do not replace them. To erase all old
575 <literal>tunefs.lustre</literal> parameters and just use newly-specified
576 parameters, run:</para>
<screen>mds# tunefs.lustre --erase-params --param=<replaceable>new_parameters</replaceable></screen>
<para>The tunefs.lustre command can be used to set any parameter settable
in a /proc/fs/lustre file and that has its own OBD device, so it can be
specified as <literal>
<replaceable>obdname|fsname</replaceable>.
<replaceable>obdtype</replaceable>.
<replaceable>proc_file_name</replaceable>=
<replaceable>value</replaceable></literal>. For example:</para>
590 mds# tunefs.lustre --param mdt.identity_upcall=NONE /dev/sda1
592 <para>For more details about
593 <literal>tunefs.lustre</literal>, see
594 <xref linkend="systemconfigurationutilities" />.</para>
596 <section xml:id="dbdoclet.50438194_51490">
597 <title>Setting Parameters with
598 <literal>lctl</literal></title>
599 <para>When the file system is running, the
600 <literal>lctl</literal> command can be used to set parameters (temporary
601 or permanent) and report current parameter values. Temporary parameters
602 are active as long as the server or client is not shut down. Permanent
603 parameters live through server and client reboots.</para>
<para>The lctl list_param command enables users to list all parameters
that can be set. See
<xref linkend="dbdoclet.50438194_88217" />.</para>
<para>For more details about the
<literal>lctl</literal> command, see the examples in the sections below
and <xref linkend="systemconfigurationutilities" />.</para>
614 <title>Setting Temporary Parameters</title>
<para>Use <literal>lctl set_param</literal> to set temporary parameters on the
617 node where it is run. These parameters map to items in
618 <literal>/proc/{fs,sys}/{lnet,lustre}</literal>. The
619 <literal>lctl set_param</literal> command uses this syntax:</para>
<screen>lctl set_param <replaceable>obdtype</replaceable>.<replaceable>obdname</replaceable>.<replaceable>proc_file_name</replaceable>=<replaceable>value</replaceable></screen>
<para>For example:</para>
<screen># lctl set_param osc.*.max_dirty_mb=1024
osc.myth-OST0000-osc.max_dirty_mb=1024
osc.myth-OST0001-osc.max_dirty_mb=1024
osc.myth-OST0002-osc.max_dirty_mb=1024
osc.myth-OST0003-osc.max_dirty_mb=1024
osc.myth-OST0004-osc.max_dirty_mb=1024</screen>
637 <section xml:id="dbdoclet.50438194_64195">
638 <title>Setting Permanent Parameters</title>
<para>Use the <literal>lctl conf_param</literal> command to set permanent
parameters. The <literal>lctl conf_param</literal> command can be used to
specify any parameter settable in a
<literal>/proc/fs/lustre</literal> file, with its own OBD device. The
645 <literal>lctl conf_param</literal> command uses this syntax (same as the
647 <literal>mkfs.lustre</literal> and
648 <literal>tunefs.lustre</literal> commands):</para>
<screen><replaceable>obdname|fsname</replaceable>.<replaceable>obdtype</replaceable>.<replaceable>proc_file_name</replaceable>=<replaceable>value</replaceable></screen>
655 <para>Here are a few examples of
656 <literal>lctl conf_param</literal> commands:</para>
<screen>mgs# lctl conf_param testfs-MDT0000.sys.timeout=40
mgs# lctl conf_param testfs-MDT0000.mdt.identity_upcall=NONE
mgs# lctl conf_param testfs.llite.max_read_ahead_mb=16
mgs# lctl conf_param testfs-MDT0000.lov.stripesize=2M
mgs# lctl conf_param testfs-OST0000.osc.max_dirty_mb=29.15
mgs# lctl conf_param testfs-OST0000.ost.client_cache_seconds=15
mgs# lctl conf_param testfs.sys.timeout=40</screen>
667 <para>Parameters specified with the
668 <literal>lctl conf_param</literal> command are set permanently in the
669 file system's configuration file on the MGS.</para>
672 <section xml:id="dbdoclet.setparamp" condition='l25'>
673 <title>Setting Permanent Parameters with lctl set_param -P</title>
<para>Use <literal>lctl set_param -P</literal> to set parameters
permanently. This command must be issued on the MGS. The given parameter is
set on every host using the
679 <literal>/proc/{fs,sys}/{lnet,lustre}</literal>. The
680 <literal>lctl set_param</literal> command uses this syntax:</para>
<screen>lctl set_param -P <replaceable>obdtype</replaceable>.<replaceable>obdname</replaceable>.<replaceable>proc_file_name</replaceable>=<replaceable>value</replaceable></screen>
<para>For example:</para>
<screen># lctl set_param -P osc.*.max_dirty_mb=1024
osc.myth-OST0000-osc.max_dirty_mb=1024
osc.myth-OST0001-osc.max_dirty_mb=1024
osc.myth-OST0002-osc.max_dirty_mb=1024
osc.myth-OST0003-osc.max_dirty_mb=1024
osc.myth-OST0004-osc.max_dirty_mb=1024</screen>
<para>Use the <literal>-d</literal> option (only valid with
<literal>-P</literal>) to delete a permanent parameter. The syntax is:</para>
<screen>lctl set_param -P -d <replaceable>obdtype</replaceable>.<replaceable>obdname</replaceable>.<replaceable>proc_file_name</replaceable></screen>
706 <para>For example:</para>
708 # lctl set_param -P -d osc.*.max_dirty_mb
711 <section xml:id="dbdoclet.50438194_88217">
712 <title>Listing Parameters</title>
<para>To list Lustre or LNet parameters that are available to set, use
the <literal>lctl list_param</literal> command. For example:</para>
<screen>lctl list_param [-FR] <replaceable>obdtype</replaceable>.<replaceable>obdname</replaceable></screen>
721 <para>The following arguments are available for the
722 <literal>lctl list_param</literal> command.</para>
<para><literal>-F</literal> Add '<literal>/</literal>',
'<literal>@</literal>' or '<literal>=</literal>' for directories, symlinks
and writeable files, respectively.</para>
<para><literal>-R</literal> Recursively lists all parameters under the
specified path.</para>
732 <para>For example:</para>
734 oss# lctl list_param obdfilter.lustre-OST0000
737 <section xml:id="dbdoclet.50438194_63247">
738 <title>Reporting Current Parameter Values</title>
739 <para>To report current Lustre parameter values, use the
<literal>lctl get_param</literal> command with this syntax:</para>
<screen>lctl get_param [-n] <replaceable>obdtype</replaceable>.<replaceable>obdname</replaceable>.<replaceable>proc_file_name</replaceable></screen>
747 <para>This example reports data on RPC service times.</para>
749 oss# lctl get_param -n ost.*.ost_io.timeouts
750 service : cur 1 worst 30 (at 1257150393, 85d23h58m54s ago) 1 1 1 1
752 <para>This example reports the amount of space this client has reserved
753 for writeback cache with each OST:</para>
755 client# lctl get_param osc.*.cur_grant_bytes
756 osc.myth-OST0000-osc-ffff8800376bdc00.cur_grant_bytes=2097152
757 osc.myth-OST0001-osc-ffff8800376bdc00.cur_grant_bytes=33890304
758 osc.myth-OST0002-osc-ffff8800376bdc00.cur_grant_bytes=35418112
759 osc.myth-OST0003-osc-ffff8800376bdc00.cur_grant_bytes=2097152
760 osc.myth-OST0004-osc-ffff8800376bdc00.cur_grant_bytes=33808384
765 <section xml:id="dbdoclet.50438194_41817">
768 <primary>operations</primary>
769 <secondary>failover</secondary>
770 </indexterm>Specifying NIDs and Failover</title>
771 <para>If a node has multiple network interfaces, it may have multiple NIDs,
772 which must all be identified so other nodes can choose the NID that is
773 appropriate for their network interfaces. Typically, NIDs are specified in
a list delimited by commas (<literal>,</literal>). However, when failover
nodes are specified, the NIDs are delimited by a colon
(<literal>:</literal>) or by repeating a keyword such as
<literal>--mgsnode=</literal> or
<literal>--servicenode=</literal>.</para>
780 <para>To display the NIDs of all servers in networks configured to work
with the Lustre file system, run (while LNet is running):</para>
<screen>lctl list_nids</screen>
785 <para>In the example below,
786 <literal>mds0</literal> and
<literal>mds1</literal> are configured as a combined MGS/MDT failover pair
and <literal>oss0</literal> and
<literal>oss1</literal> are configured as an OST failover pair. The Ethernet
address for <literal>mds0</literal> is 192.168.10.1, and for
<literal>mds1</literal> is 192.168.10.2. The Ethernet addresses for
<literal>oss0</literal> and
<literal>oss1</literal> are 192.168.10.20 and 192.168.10.21, respectively.</para>
798 mds0# mkfs.lustre --fsname=testfs --mdt --mgs \
799 --servicenode=192.168.10.2@tcp0 \
--servicenode=192.168.10.1@tcp0 /dev/sda1
801 mds0# mount -t lustre /dev/sda1 /mnt/test/mdt
802 oss0# mkfs.lustre --fsname=testfs --servicenode=192.168.10.20@tcp0 \
803 --servicenode=192.168.10.21 --ost --index=0 \
804 --mgsnode=192.168.10.1@tcp0 --mgsnode=192.168.10.2@tcp0 \
806 oss0# mount -t lustre /dev/sdb /mnt/test/ost0
807 client# mount -t lustre 192.168.10.1@tcp0:192.168.10.2@tcp0:/testfs \
809 mds0# umount /mnt/mdt
810 mds1# mount -t lustre /dev/sda1 /mnt/test/mdt
811 mds1# lctl get_param mdt.testfs-MDT0000.recovery_status
813 <para>Where multiple NIDs are specified separated by commas (for example,
814 <literal>10.67.73.200@tcp,192.168.10.1@tcp</literal>), the two NIDs refer
815 to the same host, and the Lustre software chooses the
816 <emphasis>best</emphasis> one for communication. When a pair of NIDs is
817 separated by a colon (for example,
818 <literal>10.67.73.200@tcp:10.67.73.201@tcp</literal>), the two NIDs refer
819 to two different hosts and are treated as a failover pair (the Lustre
software tries the first one, and if that fails, it tries the second
one).</para>
<para>Two options to <literal>mkfs.lustre</literal> can be used to specify
failover nodes.
Introduced in Lustre software release 2.0, the
<literal>--servicenode</literal> option is used to specify all service NIDs,
including those for primary nodes and failover nodes. When the
<literal>--servicenode</literal> option is used, the first service node to
load the target device becomes the primary service node, while nodes
corresponding to the other specified NIDs become failover locations for the
target device. An older option,
<literal>--failnode</literal>, specifies just the NIDs of failover nodes.
For more information about the
<literal>--servicenode</literal> and
<literal>--failnode</literal> options, see
<xref xmlns:xlink="http://www.w3.org/1999/xlink"
linkend="configuringfailover" />.</para>
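<para>For contrast, here is a minimal sketch of formatting the same MDT with
the older <literal>--failnode</literal> option. This is an illustration only,
reusing the NIDs from the failover example above and assuming
<literal>mds0</literal> (192.168.10.1) mounts the target first and so acts as
the primary. Only the failover NID is listed, because with
<literal>--failnode</literal> the primary node is implied by wherever the
target is first mounted:</para>
<screen>
mds0# mkfs.lustre --fsname=testfs --mdt --mgs \
      --failnode=192.168.10.2@tcp0 /dev/sda1
</screen>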
</section>
<section xml:id="dbdoclet.50438194_70905">
<title>
<indexterm>
<primary>operations</primary>
<secondary>erasing a file system</secondary>
</indexterm>Erasing a File System</title>
<para>If you want to erase a file system and permanently delete all the
data in the file system, run this command on your targets:</para>
<screen>$ mkfs.lustre --reformat</screen>
<para>If you are using a separate MGS and want to keep other file systems
defined on that MGS, then set the
<literal>writeconf</literal> flag on the MDT for that file system. The
<literal>writeconf</literal> flag causes the configuration logs to be
erased; they are regenerated the next time the servers start.</para>
<para>To set the
<literal>writeconf</literal> flag on the MDT:</para>
<orderedlist>
<listitem>
<para>Unmount all clients/servers using this file system, run:</para>
<screen>$ umount <replaceable>/mount_point</replaceable></screen>
</listitem>
<listitem>
<para>Permanently erase the file system and, presumably, replace it
with another file system, run:</para>
<screen>$ mkfs.lustre --reformat --fsname spfs --mgs --mdt --index=0 /dev/<emphasis>{mdsdev}</emphasis></screen>
</listitem>
<listitem>
<para>If you have a separate MGS (that you do not want to reformat),
then add the
<literal>--writeconf</literal> flag to
<literal>mkfs.lustre</literal> on the MDT, run:</para>
<screen>$ mkfs.lustre --reformat --writeconf --fsname spfs --mgsnode=<replaceable>mgs_nid</replaceable> --mdt --index=0 <replaceable>/dev/mds_device</replaceable></screen>
</listitem>
</orderedlist>
<note>
<para>If you have a combined MGS/MDT, reformatting the MDT reformats the
MGS as well, causing all configuration information to be lost; you can
then start building your new file system. Nothing needs to be done with
old disks that will not be part of the new file system, just do not mount
them.</para>
</note>
</section>
<section xml:id="dbdoclet.50438194_16954">
<title>
<indexterm>
<primary>operations</primary>
<secondary>reclaiming space</secondary>
</indexterm>Reclaiming Reserved Disk Space</title>
<para>All current Lustre installations run the ldiskfs file system
internally on service nodes. By default, ldiskfs reserves 5% of the disk
space to avoid file system fragmentation. In order to reclaim this space,
run the following command on your OSS for each OST in the file
system:</para>
<screen>tune2fs [-m reserved_blocks_percent] /dev/<emphasis>{ostdev}</emphasis></screen>
<para>You do not need to shut down Lustre before running this command or
restart it afterwards.</para>
<note>
<para>Reducing the space reservation can cause severe performance
degradation as the OST file system becomes more than 95% full, due to
difficulty in locating large areas of contiguous free space. This
performance degradation may persist even if the space usage drops below
95% again. It is recommended NOT to reduce the reserved disk space below
2%.</para>
</note>
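<para>To see what the reservation means in absolute terms, a quick
back-of-the-envelope calculation helps. The figures below are purely
illustrative: a hypothetical 8192 GiB OST, with the 5% default reduced
to 2% via <literal>tune2fs -m 2</literal>:</para>

```shell
# Illustrative only: how much space the ldiskfs reservation ties up
# on a hypothetical 8192 GiB OST (integer GiB arithmetic).
ost_gib=8192
reserved_default=$((ost_gib * 5 / 100))   # 5% default reservation
reserved_reduced=$((ost_gib * 2 / 100))   # after tune2fs -m 2
echo "default reserve:  ${reserved_default} GiB"
echo "reduced reserve:  ${reserved_reduced} GiB"
echo "space reclaimed:  $((reserved_default - reserved_reduced)) GiB"
```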
</section>
<section xml:id="dbdoclet.50438194_69998">
<title>
<indexterm>
<primary>operations</primary>
<secondary>replacing an OST or MDS</secondary>
</indexterm>Replacing an Existing OST or MDT</title>
<para>To copy the contents of an existing OST to a new OST (or an old MDT
to a new MDT), follow the process for either OST/MDT backups in
<xref linkend='dbdoclet.backup_device' /> or
<xref linkend='backup_fs_level' />.
For more information on removing an MDT, see
<xref linkend='dbdoclet.rmremotedir' />.</para>
</section>
<section xml:id="dbdoclet.50438194_30872">
<title>
<indexterm>
<primary>operations</primary>
<secondary>identifying OSTs</secondary>
</indexterm>Identifying To Which Lustre File an OST Object Belongs</title>
<para>Use this procedure to identify the file containing a given object on
an OST.</para>
<para>On the OST (as root), run
<literal>debugfs</literal> to display the file identifier (
<literal>FID</literal>) of the file associated with the object.</para>
<para>For example, if the object is
<literal>34976</literal> on
<literal>/dev/lustre/ost_test2</literal>, the debug command is:</para>
<screen># debugfs -c -R "stat /O/0/d$((34976 % 32))/34976" /dev/lustre/ost_test2</screen>
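<para>The object path in the <literal>stat</literal> request above follows
a fixed pattern: ldiskfs spreads OST objects across 32 subdirectories, so
object <replaceable>objid</replaceable> in sequence 0 lives at
<literal>/O/0/d(objid % 32)/objid</literal>. A small shell check shows
where object 34976 lands:</para>

```shell
# Compute the on-disk object path used in the debugfs command above.
# OST objects are hashed into 32 subdirectories: d0 .. d31.
objid=34976
echo "/O/0/d$((objid % 32))/${objid}"   # 34976 = 1093 * 32, remainder 0
```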
<para>The command output is:</para>
<screen>
debugfs 1.42.3.wc3 (15-Aug-2012)
/dev/lustre/ost_test2: catastrophic mode - not reading inode or group bitmaps
Inode: 352365   Type: regular    Mode:  0666   Flags: 0x80000
Generation: 2393149953    Version: 0x0000002a:00005f81
User:  1000   Group:  1000   Size: 260096
File ACL: 0    Directory ACL: 0
Links: 1   Blockcount: 512
Fragment:  Address: 0    Number: 0    Size: 0
ctime: 0x4a216b48:00000000 -- Sat May 30 13:22:16 2009
atime: 0x4a216b48:00000000 -- Sat May 30 13:22:16 2009
mtime: 0x4a216b48:00000000 -- Sat May 30 13:22:16 2009
crtime: 0x4a216b3c:975870dc -- Sat May 30 13:22:04 2009
Size of extra inode fields: 24
Extended attributes stored in inode body:
  fid = "b9 da 24 00 00 00 00 00 6a fa 0d 3f 01 00 00 00 eb 5b 0b 00 00 00 00 00
00 00 00 00 00 00 00 00 " (32)
  fid: objid=34976 seq=0 parent=[0x24dab9:0x3f0dfa6a:0x0] stripe=1
EXTENTS:
(0-64):4620544-4620607
</screen>
<para>For Lustre software release 2.x file systems, the parent FID will
be of the form [0x200000400:0x122:0x0] and can be resolved directly
using the
<literal>lfs fid2path [0x200000400:0x122:0x0]
/mnt/lustre</literal> command on any Lustre client, and the process is
complete.</para>
<para>In this example the parent inode FID is an upgraded 1.x inode
(due to the first part of the FID being below 0x200000400), with the MDT
inode number
<literal>0x24dab9</literal> and generation
<literal>0x3f0dfa6a</literal>, so the pathname needs to be resolved
using
<literal>debugfs</literal>.</para>
<para>On the MDS (as root), use
<literal>debugfs</literal> to find the file associated with the
inode:</para>
<screen># debugfs -c -R "ncheck 0x24dab9" /dev/lustre/mdt_test</screen>
<para>Here is the command output:</para>
<screen>
debugfs 1.42.3.wc2 (15-Aug-2012)
/dev/lustre/mdt_test: catastrophic mode - not reading inode or group bitmaps
Inode      Pathname
2415289    /ROOT/brian-laptop-guest/clients/client11/~dmtmp/PWRPNT/ZD16.BMP
</screen>
<para>The command lists the inode and pathname associated with the
object.</para>
<note>
<para>
<literal>debugfs</literal>' <literal>ncheck</literal> is a brute-force
search that may take a long time to complete.</para>
</note>
<para>To find the Lustre file from a disk LBA, follow the steps listed in
the document at this URL:
<link xl:href="http://smartmontools.sourceforge.net/badblockhowto.html">
http://smartmontools.sourceforge.net/badblockhowto.html</link>. Then,
follow the steps above to resolve the Lustre filename.</para>
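<para>The HOWTO linked above ultimately reduces to one piece of arithmetic:
translate the failing disk LBA into a file system block number, which
<literal>debugfs</literal> can then map to an inode with its
<literal>icheck</literal> request. A sketch of that conversion with purely
hypothetical numbers (512-byte sectors, a partition starting at sector 63,
a 4096-byte ldiskfs block size, and failing LBA 1234567):</para>

```shell
# block = (LBA - partition_start_sector) * sector_size / fs_block_size
# All values here are hypothetical, for illustration only.
lba=1234567
part_start=63
sector_size=512
fs_block=4096
echo $(( (lba - part_start) * sector_size / fs_block ))
```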