1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="backupandrestore">
5 <title xml:id="backupandrestore.title">Backing Up and Restoring a File
<para>This chapter describes how to back up and restore at the file
system level, device level, and file level in a Lustre file system. Each
backup approach is described in the following sections:</para>
13 <xref linkend="dbdoclet.backup_file"/>
18 <xref linkend="dbdoclet.backup_device"/>
23 <xref linkend="backup_fs_level"/>
28 <xref linkend="backup_fs_level.restore"/>
33 <xref linkend="dbdoclet.backup_lvm_snapshot"/>
37 <para>It is <emphasis>strongly</emphasis> recommended that sites perform
38 periodic device-level backup of the MDT(s)
39 (<xref linkend="dbdoclet.backup_device"/>),
40 for example twice a week with alternate backups going to a separate
41 device, even if there is not enough capacity to do a full backup of all
42 of the filesystem data. Even if there are separate file-level backups of
43 some or all files in the filesystem, having a device-level backup of the
44 MDT can be very useful in case of MDT failure or corruption. Being able to
45 restore a device-level MDT backup can avoid the significantly longer process
46 of restoring the entire filesystem from backup. Since the MDT is required
47 for access to all files, its loss would otherwise force full restore of the
48 filesystem (if that is even possible) even if the OSTs are still OK.</para>
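<para>As an illustration only, such a rotating device-level backup could be
driven from cron on the MDS. The device names and schedule below are
assumptions, not recommendations; see
<xref linkend="dbdoclet.backup_device"/> for the full procedure and its
consistency caveats:</para>
<screen># example crontab entries on the MDS, alternating two backup devices
0 2 * * 1 dd if=/dev/mdtdev of=/dev/backup_a bs=4M
0 2 * * 4 dd if=/dev/mdtdev of=/dev/backup_b bs=4M</screen>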
49 <para>Performing a periodic device-level MDT backup can be done relatively
50 inexpensively because the storage need only be connected to the primary
51 MDS (it can be manually connected to the backup MDS in the rare case
it is needed), and only needs good linear read/write performance. While
a device-level MDT backup is not useful for restoring individual files,
it is the most efficient way to recover from MDT failure or corruption.</para>
55 <section xml:id="dbdoclet.backup_file">
58 <primary>backup</primary>
61 <primary>restoring</primary>
65 <primary>LVM</primary>
69 <primary>rsync</primary>
</indexterm>Backing Up a File System</title>
72 <para>Backing up a complete file system gives you full control over the
73 files to back up, and allows restoration of individual files as needed.
74 File system-level backups are also the easiest to integrate into existing
75 backup solutions.</para>
<para>File system backups are performed from a Lustre client (or many
clients working in parallel in different directories) rather than on
individual server nodes; this is no different from backing up any other
file system.</para>
80 <para>However, due to the large size of most Lustre file systems, it is
81 not always possible to get a complete backup. We recommend that you back
82 up subsets of a file system. This includes subdirectories of the entire
file system, filesets for a single user, files changed since a given date, and
84 so on, so that restores can be done more efficiently.</para>
<para>Lustre internally uses a 128-bit file identifier (FID) for all
files. To interface with user applications, 64-bit inode numbers are
returned by the <literal>stat()</literal>,
<literal>fstat()</literal>, and
<literal>readdir()</literal> system calls to 64-bit applications, and
32-bit inode numbers are returned to 32-bit applications.</para>
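<para>For illustration, the Lustre FID of a file can be displayed with
<literal>lfs path2fid</literal>, while <literal>stat(1)</literal> reports
the inode number derived from it (the output values shown are
illustrative):</para>
<screen>client$ lfs path2fid /mnt/lustre/file1
[0x200000400:0x1:0x0]
client$ stat -c %i /mnt/lustre/file1
144115205255725057</screen>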
92 <para>Some 32-bit applications accessing Lustre file systems (on both
93 32-bit and 64-bit CPUs) may experience problems with the
94 <literal>stat()</literal>,
95 <literal>fstat()</literal> or
96 <literal>readdir()</literal> system calls under certain circumstances,
though the Lustre client should return 32-bit inode numbers to these
applications.</para>
99 <para>In particular, if the Lustre file system is exported from a 64-bit
100 client via NFS to a 32-bit client, the Linux NFS server will export
101 64-bit inode numbers to applications running on the NFS client. If the
32-bit applications are not compiled with Large File Support (LFS), then
they return <literal>EOVERFLOW</literal> errors when accessing the Lustre
files. To avoid this problem, Linux NFS clients can use the kernel
command-line option "<literal>nfs.enable_ino64=0</literal>" to force the
NFS client to present 32-bit inode numbers to applications.</para>
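<para>A minimal sketch of setting this option; the
<literal>modprobe.d</literal> file name is an assumption, and the module
parameter takes effect only after the <literal>nfs</literal> module is
(re)loaded:</para>
<screen># append to the kernel boot command line on the NFS client:
nfs.enable_ino64=0
# or, equivalently, set the module parameter via modprobe configuration:
client# echo "options nfs enable_ino64=0" > /etc/modprobe.d/nfs-ino32.conf</screen>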
109 <emphasis role="bold">Workaround</emphasis>: We very strongly recommend
111 <literal>tar(1)</literal> and other utilities that depend on the inode
112 number to uniquely identify an inode to be run on 64-bit clients. The
113 128-bit Lustre file identifiers cannot be uniquely mapped to a 32-bit
114 inode number, and as a result these utilities may operate incorrectly on
115 32-bit clients. While there is still a small chance of inode number
116 collisions with 64-bit inodes, the FID allocation pattern is designed
117 to avoid collisions for long periods of usage.</para>
122 <primary>backup</primary>
123 <secondary>rsync</secondary>
124 </indexterm>Lustre_rsync</title>
<para>The <literal>lustre_rsync</literal> feature keeps the entire file system in
127 sync on a backup by replicating the file system's changes to a second
128 file system (the second file system need not be a Lustre file system, but
129 it must be sufficiently large).
130 <literal>lustre_rsync</literal> uses Lustre changelogs to efficiently
131 synchronize the file systems without having to scan (directory walk) the
132 Lustre file system. This efficiency is critically important for large
133 file systems, and distinguishes the Lustre
<literal>lustre_rsync</literal> feature from other replication/backup
solutions.</para>
139 <primary>backup</primary>
140 <secondary>rsync</secondary>
141 <tertiary>using</tertiary>
142 </indexterm>Using Lustre_rsync</title>
<para>The <literal>lustre_rsync</literal> feature works by periodically running
145 <literal>lustre_rsync</literal>, a userspace program used to
synchronize changes in the Lustre file system onto the target file
system. The <literal>lustre_rsync</literal> utility keeps a status file, which
149 enables it to be safely interrupted and restarted without losing
150 synchronization between the file systems.</para>
151 <para>The first time that
152 <literal>lustre_rsync</literal> is run, the user must specify a set of
153 parameters for the program to use. These parameters are described in
154 the following table and in
155 <xref linkend="dbdoclet.50438219_63667" />. On subsequent runs, these
parameters are stored in the status file, and only the name of the
157 status file needs to be passed to
158 <literal>lustre_rsync</literal>.</para>
<para>Before using <literal>lustre_rsync</literal>:</para>
<para>Register the changelog user. For details, see the
<literal>changelog_register</literal> parameter of
<literal>lctl</literal> in
<xref linkend="systemconfigurationutilities" />.</para>
<para>Verify that the Lustre file system (source) and the replica
file system (target) are identical
<emphasis>before</emphasis> registering the changelog user. If the
file systems differ, use a utility such as plain
<literal>rsync</literal> (not
<literal>lustre_rsync</literal>) to make them identical.</para>
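<para>For example, an initial full copy could be made with plain
<literal>rsync</literal> (the paths are illustrative):</para>
<screen>client$ rsync -aSvH /mnt/lustre/ /mnt/target/</screen>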
<para>The <literal>lustre_rsync</literal> utility uses the following
parameters:</para>
184 <informaltable frame="all">
186 <colspec colname="c1" colwidth="3*" />
187 <colspec colname="c2" colwidth="10*" />
192 <emphasis role="bold">Parameter</emphasis>
197 <emphasis role="bold">Description</emphasis>
<literal>--source=<replaceable>src</replaceable></literal>
211 <para>The path to the root of the Lustre file system (source)
212 which will be synchronized. This is a mandatory option if a
213 valid status log created during a previous synchronization
operation (<literal>--statuslog</literal>) is not specified.</para>
<literal>--target=<replaceable>tgt</replaceable></literal>
226 <para>The path to the root where the source file system will
227 be synchronized (target). This is a mandatory option if the
228 status log created during a previous synchronization
operation (<literal>--statuslog</literal>) is not specified. This option
can be repeated if multiple synchronization targets are desired.</para>
<literal>--mdt=<replaceable>mdt</replaceable></literal>
243 <para>The metadata device to be synchronized. A changelog
244 user must be registered for this device. This is a mandatory
245 option if a valid status log created during a previous
246 synchronization operation (
247 <literal>--statuslog</literal>) is not specified.</para>
<literal>--user=<replaceable>userid</replaceable></literal>
258 <para>The changelog user ID for the specified MDT. To use
259 <literal>lustre_rsync</literal>, the changelog user must be
registered. For details, see the
<literal>changelog_register</literal> parameter of
<literal>lctl</literal> in
<xref linkend="systemconfigurationutilities" />. This is a mandatory option if a
264 valid status log created during a previous synchronization
operation (<literal>--statuslog</literal>) is not specified.</para>
272 <literal>--statuslog=
273 <replaceable>log</replaceable></literal>
<para>A log file to which synchronization status is saved. When the
<literal>lustre_rsync</literal> utility starts, if the status
log from a previous synchronization operation is specified,
then the state is read from the log and the otherwise-mandatory
<literal>--source</literal>,
<literal>--target</literal> and
<literal>--mdt</literal> options can be skipped. Specifying the
<literal>--source</literal>,
<literal>--target</literal> and/or
<literal>--mdt</literal> options, in addition to the
<literal>--statuslog</literal> option, causes the specified
parameters in the status log to be overridden. Command line
options take precedence over options in the status log.</para>
<literal>--xattr <replaceable>yes|no</replaceable></literal>
301 <para>Specifies whether extended attributes (
302 <literal>xattrs</literal>) are synchronized or not. The
303 default is to synchronize extended attributes.</para>
306 <para>Disabling xattrs causes Lustre striping information
307 not to be synchronized.</para>
315 <literal>--verbose</literal>
319 <para>Produces verbose output.</para>
325 <literal>--dry-run</literal>
329 <para>Shows the output of
330 <literal>lustre_rsync</literal> commands (
331 <literal>copy</literal>,
332 <literal>mkdir</literal>, etc.) on the target file system
333 without actually executing them.</para>
339 <literal>--abort-on-err</literal>
343 <para>Stops processing the
344 <literal>lustre_rsync</literal> operation if an error occurs.
345 The default is to continue the operation.</para>
355 <primary>backup</primary>
356 <secondary>rsync</secondary>
357 <tertiary>examples</tertiary>
359 <literal>lustre_rsync</literal> Examples</title>
<para>Sample <literal>lustre_rsync</literal> commands are listed below.</para>
362 <para>Register a changelog user for an MDT (e.g.
363 <literal>testfs-MDT0000</literal>).</para>
<screen># lctl --device testfs-MDT0000 changelog_register
365 Registered changelog userid 'cl1'</screen>
366 <para>Synchronize a Lustre file system (
367 <literal>/mnt/lustre</literal>) to a target file system (
368 <literal>/mnt/target</literal>).</para>
369 <screen>$ lustre_rsync --source=/mnt/lustre --target=/mnt/target \
370 --mdt=testfs-MDT0000 --user=cl1 --statuslog sync.log --verbose
371 Lustre filesystem: testfs
372 MDT device: testfs-MDT0000
376 Changelog registration: cl1
377 Starting changelog record: 0
379 lustre_rsync took 1 seconds
380 Changelog records consumed: 22</screen>
381 <para>After the file system undergoes changes, synchronize the changes
382 onto the target file system. Only the
383 <literal>statuslog</literal> name needs to be specified, as it has all
384 the parameters passed earlier.</para>
385 <screen>$ lustre_rsync --statuslog sync.log --verbose
386 Replicating Lustre filesystem: testfs
387 MDT device: testfs-MDT0000
391 Changelog registration: cl1
392 Starting changelog record: 22
394 lustre_rsync took 2 seconds
395 Changelog records consumed: 42</screen>
396 <para>To synchronize a Lustre file system (
397 <literal>/mnt/lustre</literal>) to two target file systems (
398 <literal>/mnt/target1</literal> and
399 <literal>/mnt/target2</literal>).</para>
400 <screen>$ lustre_rsync --source=/mnt/lustre --target=/mnt/target1 \
401 --target=/mnt/target2 --mdt=testfs-MDT0000 --user=cl1 \
402 --statuslog sync.log</screen>
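<para>To check that the registered changelog user is consuming records,
the <literal>changelog_users</literal> parameter can be queried on the MDS
(a sketch; the exact output format varies by release):</para>
<screen>[mds]# lctl get_param mdd.testfs-MDT0000.changelog_users</screen>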
406 <section xml:id="dbdoclet.backup_device">
409 <primary>backup</primary>
410 <secondary>MDT/OST device level</secondary>
411 </indexterm>Backing Up and Restoring an MDT or OST (ldiskfs Device Level)</title>
412 <para>In some cases, it is useful to do a full device-level backup of an
413 individual device (MDT or OST), before replacing hardware, performing
414 maintenance, etc. Doing full device-level backups ensures that all of the
data and configuration files are preserved in their original state, and is the
416 easiest method of doing a backup. For the MDT file system, it may also be
417 the fastest way to perform the backup and restore, since it can do large
418 streaming read and write operations at the maximum bandwidth of the
419 underlying devices.</para>
421 <para>Keeping an updated full backup of the MDT is especially important
422 because permanent failure or corruption of the MDT file system renders
423 the much larger amount of data in all the OSTs largely inaccessible and
424 unusable. The storage needed for one or two full MDT device backups
is much smaller than that needed for a full filesystem backup, and can use less
426 expensive storage than the actual MDT device(s) since it only needs to
427 have good streaming read/write speed instead of high random IOPS.</para>
429 <warning condition='l23'>
<para>In Lustre software release 2.0 through 2.2, the only successful
way to back up and restore an MDT is to do a device-level backup as
described in this section. File-level restore of an MDT is not possible
433 before Lustre software release 2.3, as the Object Index (OI) file cannot
434 be rebuilt after restore without the OI Scrub functionality.
435 <emphasis role="bold">Since Lustre software release 2.3</emphasis>,
436 Object Index files are automatically rebuilt at first mount after a
437 restore is detected (see
438 <link xl:href="http://jira.hpdd.intel.com/browse/LU-957">LU-957</link>),
439 and file-level backup is supported (see
440 <xref linkend="backup_fs_level"/>).</para>
442 <para>If hardware replacement is the reason for the backup or if a spare
443 storage device is available, it is possible to do a raw copy of the MDT or
444 OST from one block device to the other, as long as the new device is at
445 least as large as the original device. To do this, run:</para>
446 <screen>dd if=/dev/{original} of=/dev/{newdev} bs=4M</screen>
447 <para>If hardware errors cause read problems on the original device, use
448 the command below to allow as much data as possible to be read from the
449 original device while skipping sections of the disk with errors:</para>
<screen>dd if=/dev/{original} of=/dev/{newdev} bs=4k conv=sync,noerror \
count={original size in 4kB blocks}</screen>
452 <para>Even in the face of hardware errors, the <literal>ldiskfs</literal>
453 file system is very robust and it may be possible
454 to recover the file system data after running
455 <literal>e2fsck -fy /dev/{newdev}</literal> on the new device, along with
456 <literal>ll_recover_lost_found_objs</literal> for OST devices.</para>
457 <para condition="l26">With Lustre software version 2.6 and later, there is
458 no longer a need to run
459 <literal>ll_recover_lost_found_objs</literal> on the OSTs, since the
<literal>LFSCK</literal> scanning will automatically move objects from
<literal>lost+found</literal> back into their correct locations on the OST
after directory corruption.</para>
463 <para>In order to ensure that the backup is fully consistent, the MDT or
464 OST must be unmounted, so that there are no changes being made to the
465 device while the data is being transferred. If the reason for the
466 backup is preventative (i.e. MDT backup on a running MDS in case of
467 future failures) then it is possible to perform a consistent backup from
468 an LVM snapshot. If an LVM snapshot is not available, and taking the
469 MDS offline for a backup is unacceptable, it is also possible to perform
470 a backup from the raw MDT block device. While the backup from the raw
471 device will not be fully consistent due to ongoing changes, the vast
majority of ldiskfs metadata is statically allocated, and inconsistencies
in the backup can be fixed by running <literal>e2fsck</literal> on the
backup device; this is still much better than not having any backup at all.</para>
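<para>A minimal sketch of such a raw-device backup taken on a running
MDS, followed by a consistency check of the copy (the device and image
file names are illustrative):</para>
<screen>[mds]# dd if=/dev/{mdtdev} of=/backup/mdt-image bs=4M
[mds]# e2fsck -fy /backup/mdt-image</screen>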
477 <section xml:id="backup_fs_level">
480 <primary>backup</primary>
481 <secondary>OST file system</secondary>
484 <primary>backup</primary>
485 <secondary>MDT file system</secondary>
486 </indexterm>Backing Up an OST or MDT (Backend File System Level)</title>
<para>This procedure provides an alternative way to back up or migrate the
data of an OST or MDT at the file level. At the file level, unused space is
omitted from the backup, so the backup may complete more quickly with a
smaller total backup size. Backing up a single OST device is not
491 necessarily the best way to perform backups of the Lustre file system,
492 since the files stored in the backup are not usable without metadata stored
493 on the MDT and additional file stripes that may be on other OSTs. However,
494 it is the preferred method for migration of OST devices, especially when it
495 is desirable to reformat the underlying file system with different
496 configuration options or to reduce fragmentation.</para>
498 <emphasis role="bold">Prior to Lustre software release 2.3</emphasis>, the
499 only successful way to perform an MDT backup and restore was to do a
500 device-level backup as described in
501 <xref linkend="dbdoclet.backup_device" />. The ability to do MDT
502 file-level backups is not available for Lustre software release 2.0
503 through 2.2, because restoration of the Object Index (OI) file does not
504 return the MDT to a functioning state.</para>
505 <para><emphasis role="bold">Since Lustre software release 2.3</emphasis>,
506 Object Index files are automatically rebuilt at first mount after a
507 restore is detected (see
508 <link xl:href="http://jira.hpdd.intel.com/browse/LU-957">LU-957</link>),
509 so file-level MDT restore is supported.</para></note>
510 <section xml:id="backup_fs_level.index_objects" condition="l2B">
513 <primary>backup</primary>
514 <secondary>index objects</secondary>
</indexterm>Backing Up Index Objects</title>
<para>Prior to Lustre software release 2.11.0, the backend file system
level backup and restore process was available only for ldiskfs-based
systems. The ability to perform a zfs-based MDT/OST file system level
backup and restore was introduced in Lustre software release 2.11.0.
Unlike an ldiskfs-based system, index objects must be backed up
before the target (MDT or OST) is unmounted in order to be able to
restore the file system successfully. To enable index backup on the
target, execute the following command on the target server:</para>
524 <screen># lctl set_param osd-zfs.${fsname}-${target}.index_backup=1</screen>
525 <para><replaceable>${target}</replaceable> is composed of the target type
526 (MDT or OST) plus the target index, such as <literal>MDT0000</literal>,
527 <literal>OST0001</literal>, and so on.</para>
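<para>For example, to enable index backup for <literal>MDT0000</literal>
of a file system named <literal>testfs</literal>:</para>
<screen># lctl set_param osd-zfs.testfs-MDT0000.index_backup=1</screen>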
<note><para>The <literal>index_backup</literal> parameter is also valid
for an ldiskfs-based system, and is used when migrating data between
ldiskfs-based and zfs-based systems as described in
<xref linkend="migrate_backends"/>.</para></note>
533 <section xml:id="backup_fs_level.ost_mdt">
536 <primary>backup</primary>
537 <secondary>OST and MDT</secondary>
538 </indexterm>Backing Up an OST or MDT</title>
539 <para>For Lustre software release 2.3 and newer with MDT file-level backup
540 support, substitute <literal>mdt</literal> for <literal>ost</literal>
541 in the instructions below.</para>
544 <para><emphasis role="bold">Umount the target</emphasis></para>
547 <para><emphasis role="bold">Make a mountpoint for the file system.
549 <screen>[oss]# mkdir -p /mnt/ost</screen>
552 <para><emphasis role="bold">Mount the file system.</emphasis></para>
553 <para>For ldiskfs-based systems:</para>
554 <screen>[oss]# mount -t ldiskfs /dev/<emphasis>{ostdev}</emphasis> /mnt/ost</screen>
555 <para>For zfs-based systems:</para>
558 <para>Import the pool for the target if it is exported. For example:
559 <screen>[oss]# zpool import lustre-ost [-d ${ostdev_dir}]</screen>
563 <para>Enable the <literal>canmount</literal> property on the target
564 filesystem. For example:
565 <screen>[oss]# zfs set canmount=on ${fsname}-ost/ost</screen>
You can also specify the <literal>mountpoint</literal> property. By
default, it will be <literal>/${fsname}-ost/ost</literal>.
<para>Mount the target as type <literal>zfs</literal>. For example:
572 <screen>[oss]# zfs mount ${fsname}-ost/ost</screen>
579 <emphasis role="bold">Change to the mountpoint being backed
582 <screen>[oss]# cd /mnt/ost</screen>
586 <emphasis role="bold">Back up the extended attributes.</emphasis>
588 <screen>[oss]# getfattr -R -d -m '.*' -e hex -P . > ea-$(date +%Y%m%d).bak</screen>
590 <para>If the <literal>tar(1)</literal> command supports the
591 <literal>--xattr</literal> option (see below), the
592 <literal>getfattr</literal> step may be unnecessary as long as tar
593 correctly backs up the <literal>trusted.*</literal> attributes.
594 However, completing this step is not harmful and can serve as an
595 added safety measure.</para>
598 <para>In most distributions, the
599 <literal>getfattr</literal> command is part of the
600 <literal>attr</literal> package. If the
601 <literal>getfattr</literal> command returns errors like
602 <literal>Operation not supported</literal>, then the kernel does not
603 correctly support EAs. Stop and use a different backup method.</para>
608 <emphasis role="bold">Verify that the
609 <literal>ea-$date.bak</literal> file has properly backed up the EA
610 data on the OST.</emphasis>
612 <para>Without this attribute data, the MDT restore process will fail
613 and result in an unusable filesystem. The OST restore process may be
614 missing extra data that can be very useful in case of later file system
615 corruption. Look at this file with <literal>more</literal> or a text
editor. Each object file should have a corresponding item similar to
this:</para>
<screen># file: O/0/d0/100992
trusted.fid= \
0x0d822200000000004a8a73e500000000808a0100000000000000000000000000</screen>
624 <emphasis role="bold">Back up all file system data.</emphasis>
<screen>[oss]# tar czvf {backup file}.tgz [--xattrs] [--xattrs-include="trusted.*"] --sparse .</screen>
<para>The <literal>--sparse</literal> option is vital for backing up an MDT.
630 Very old versions of tar may not support the
631 <literal>--sparse</literal> option correctly, which may cause the
632 MDT backup to take a long time. Known-working versions include
633 the tar from Red Hat Enterprise Linux distribution (RHEL version
634 6.3 or newer) or GNU tar version 1.25 and newer.</para>
637 <para>The tar <literal>--xattrs</literal> option is only available
638 in GNU tar version 1.27 or later or in RHEL 6.3 or newer. The
639 <literal>--xattrs-include="trusted.*"</literal> option is
640 <emphasis>required</emphasis> for correct restoration of the xattrs
641 when using GNU tar 1.27 or RHEL 7 and newer.</para>
646 <emphasis role="bold">Change directory out of the file
649 <screen>[oss]# cd -</screen>
653 <emphasis role="bold">Unmount the file system.</emphasis>
655 <screen>[oss]# umount /mnt/ost</screen>
657 <para>When restoring an OST backup on a different node as part of an
OST migration, you also have to change server NIDs and use the
<literal>tunefs.lustre --writeconf</literal> option to re-generate the
configuration logs. See
<xref linkend="lustremaintenance" /> (Changing a Server NID).</para>
667 <section xml:id="backup_fs_level.restore">
670 <primary>backup</primary>
671 <secondary>restoring file system backup</secondary>
672 </indexterm>Restoring a File-Level Backup</title>
673 <para>To restore data from a file-level backup, you need to format the
674 device, restore the file data and then restore the EA data.</para>
677 <para>Format the new device.</para>
<screen>[oss]# mkfs.lustre --ost --index {<emphasis>OST index</emphasis>} \
--replace --fstype=${fstype} {<emphasis>other options</emphasis>} /dev/<emphasis>{newdev}</emphasis></screen>
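<para>For example, to reformat OST index 1 of a file system named
<literal>testfs</literal> as ldiskfs (all values are illustrative):</para>
<screen>[oss]# mkfs.lustre --ost --index 1 --replace --fstype=ldiskfs \
--fsname=testfs --mgsnode=10.0.0.1@tcp /dev/sdb</screen>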
682 <para>Set the file system label (<emphasis role="bold">ldiskfs-based
683 systems only</emphasis>).</para>
<screen>[oss]# e2label /dev/<emphasis>{newdev}</emphasis> {fsname}-OST{index in hex}</screen>
687 <para>Mount the file system.</para>
688 <para>For ldiskfs-based systems:</para>
689 <screen>[oss]# mount -t ldiskfs /dev/<emphasis>{newdev}</emphasis> /mnt/ost</screen>
690 <para>For zfs-based systems:</para>
693 <para>Import the pool for the target if it is exported. For example:
695 <screen>[oss]# zpool import lustre-ost [-d ${ostdev_dir}]</screen>
<para>Enable the <literal>canmount</literal> property on the target
filesystem. For example:</para>
700 <screen>[oss]# zfs set canmount=on ${fsname}-ost/ost</screen>
701 <para>You also can specify the <literal>mountpoint</literal>
702 property. By default, it will be:
703 <literal>/${fsname}-ost/ost</literal></para>
<para>Mount the target as type <literal>zfs</literal>. For example:</para>
707 <screen>[oss]# zfs mount ${fsname}-ost/ost</screen>
712 <para>Change to the new file system mount point.</para>
713 <screen>[oss]# cd /mnt/ost</screen>
716 <para>Restore the file system backup.</para>
717 <screen>[oss]# tar xzvpf <emphasis>{backup file}</emphasis> [--xattrs] [--xattrs-include="trusted.*"] --sparse</screen>
719 <para>The tar <literal>--xattrs</literal> option is only available
720 in GNU tar version 1.27 or later or in RHEL 6.3 or newer. The
721 <literal>--xattrs-include="trusted.*"</literal> option is
722 <emphasis>required</emphasis> for correct restoration of the
723 MDT xattrs when using GNU tar 1.27 or RHEL 7 and newer. Otherwise,
724 the <literal>setfattr</literal> step below should be used.
729 <para>If not using a version of tar that supports direct xattr
730 backups, restore the file system extended attributes.</para>
731 <screen>[oss]# setfattr --restore=ea-${date}.bak</screen>
<para>If the <literal>--xattrs</literal> option is supported by tar and specified
735 in the step above, this step is redundant.</para>
739 <para>Verify that the extended attributes were restored.</para>
740 <screen>[oss]# getfattr -d -m ".*" -e hex O/0/d0/100992 trusted.fid= \
741 0x0d822200000000004a8a73e500000000808a0100000000000000000000000000</screen>
744 <para>Remove old OI and LFSCK files.</para>
745 <screen>[oss]# rm -rf oi.16* lfsck_* LFSCK</screen>
748 <para>Remove old CATALOGS.</para>
749 <screen>[oss]# rm -f CATALOGS</screen>
<para>This step is optional and applies to the MDT only. The CATALOGS
file records the llog file handlers that are used for recovering
cross-server updates. Before OI scrub rebuilds the OI mappings for the
llog files, the related recovery will fail if it runs faster than the
background OI scrub, and this will cause the whole mount process to
fail. Because OI scrub is an online tool, a mount failure means
that the OI scrub will be stopped. Removing the old CATALOGS file
avoids this potential trouble. The side effect of removing the old
CATALOGS file is that the recovery of related cross-server updates will
be aborted. However, this can be handled by LFSCK after the system
is mounted.</para>
765 <para>Change directory out of the file system.</para>
766 <screen>[oss]# cd -</screen>
769 <para>Unmount the new file system.</para>
770 <screen>[oss]# umount /mnt/ost</screen>
<note><para>If the restored system has a different NID from the backup
system, the NID must be changed. For details, see
<xref linkend="dbdoclet.changingservernid" />. For example:</para>
774 <screen>[oss]# mount -t lustre -o nosvc ${fsname}-ost/ost /mnt/ost
775 [oss]# lctl replace_nids ${fsname}-OSTxxxx $new_nids
776 [oss]# umount /mnt/ost</screen></note>
779 <para>Mount the target as <literal>lustre</literal>.</para>
<para>Usually, the <literal>-o abort_recov</literal> option is used
to skip unnecessary recovery. For example:</para>
<screen>[oss]# mount -t lustre -o abort_recov ${fsname}-ost/ost /mnt/ost</screen>
783 <para>Lustre can detect the restore automatically when mounting the
784 target, and then trigger OI scrub to rebuild the OIs and index objects
785 asynchronously in the background. You can check the OI scrub status
786 with the following command:</para>
787 <screen>[oss]# lctl get_param -n osd-${fstype}.${fsname}-${target}.oi_scrub</screen>
790 <para condition='l23'>If the file system was used between the time the
791 backup was made and when it was restored, then the online
792 <literal>LFSCK</literal> tool (part of Lustre code after version 2.3)
793 will automatically be
794 run to ensure the file system is coherent. If all of the device file
795 systems were backed up at the same time after the entire Lustre file system
was stopped, this step is unnecessary. In either case, the file system will
be immediately usable, although there may be I/O errors reading
from files that are present on the MDT but not the OSTs, and files that
were created after the MDT backup will not be accessible or visible. See
<xref linkend="dbdoclet.lfsckadmin" /> for details on using LFSCK.</para>
802 <section xml:id="dbdoclet.backup_lvm_snapshot">
805 <primary>backup</primary>
806 <secondary>using LVM</secondary>
807 </indexterm>Using LVM Snapshots with the Lustre File System</title>
808 <para>If you want to perform disk-based backups (because, for example,
809 access to the backup system needs to be as fast as to the primary Lustre
810 file system), you can use the Linux LVM snapshot tool to maintain multiple,
811 incremental file system backups.</para>
812 <para>Because LVM snapshots cost CPU cycles as new files are written,
813 taking snapshots of the main Lustre file system will probably result in
814 unacceptable performance losses. You should create a new, backup Lustre
815 file system and periodically (e.g., nightly) back up new/changed files to
816 it. Periodic snapshots can be taken of this backup file system to create a
817 series of "full" backups.</para>
819 <para>Creating an LVM snapshot is not as reliable as making a separate
820 backup, because the LVM snapshot shares the same disks as the primary MDT
821 device, and depends on the primary MDT device for much of its data. If
822 the primary MDT device becomes corrupted, this may result in the snapshot
823 being corrupted.</para>
828 <primary>backup</primary>
829 <secondary>using LVM</secondary>
830 <tertiary>creating</tertiary>
831 </indexterm>Creating an LVM-based Backup File System</title>
832 <para>Use this procedure to create a backup Lustre file system for use
833 with the LVM snapshot mechanism.</para>
836 <para>Create LVM volumes for the MDT and OSTs.</para>
837 <para>Create LVM devices for your MDT and OST targets. Make sure not
838 to use the entire disk for the targets; save some room for the
839 snapshots. The snapshots start out as 0 size, but grow as you make
840 changes to the current file system. If you expect to change 20% of
841 the file system between backups, the most recent snapshot will be 20%
of the target size, the next older one will be 40%, and so on. Here is an
example:</para>
844 <screen>cfs21:~# pvcreate /dev/sda1
845 Physical volume "/dev/sda1" successfully created
846 cfs21:~# vgcreate vgmain /dev/sda1
847 Volume group "vgmain" successfully created
848 cfs21:~# lvcreate -L200G -nMDT0 vgmain
849 Logical volume "MDT0" created
850 cfs21:~# lvcreate -L200G -nOST0 vgmain
851 Logical volume "OST0" created
853 ACTIVE '/dev/vgmain/MDT0' [200.00 GB] inherit
854 ACTIVE '/dev/vgmain/OST0' [200.00 GB] inherit</screen>
857 <para>Format the LVM volumes as Lustre targets.</para>
858 <para>In this example, the backup file system is called
<literal>main</literal> and designates the current, most up-to-date
backup:</para>
861 <screen>cfs21:~# mkfs.lustre --fsname=main --mdt --index=0 /dev/vgmain/MDT0
862 No management node specified, adding MGS to this MDT.
869 (MDT MGS first_time update )
870 Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
872 checking for existing Lustre data
874 formatting backing filesystem ldiskfs on /dev/vgmain/MDT0
875 target name main-MDT0000
877 options -i 4096 -I 512 -q -O dir_index -F
878 mkfs_cmd = mkfs.ext2 -j -b 4096 -L main-MDT0000 -i 4096 -I 512 -q
879 -O dir_index -F /dev/vgmain/MDT0
880 Writing CONFIGS/mountdata
cfs21:~# mkfs.lustre --mgsnode=cfs21 --fsname=main --ost --index=0 /dev/vgmain/OST0
889 (OST first_time update )
890 Persistent mount opts: errors=remount-ro,extents,mballoc
891 Parameters: mgsnode=192.168.0.21@tcp
892 checking for existing Lustre data
894 formatting backing filesystem ldiskfs on /dev/vgmain/OST0
895 target name main-OST0000
897 options -I 256 -q -O dir_index -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L main-OST0000 -J size=400 -I 256
899 -i 262144 -O extents,uninit_bg,dir_nlink,huge_file,flex_bg -G 256
900 -E resize=4290772992,lazy_journal_init, -F /dev/vgmain/OST0
901 Writing CONFIGS/mountdata
902 cfs21:~# mount -t lustre /dev/vgmain/MDT0 /mnt/mdt
903 cfs21:~# mount -t lustre /dev/vgmain/OST0 /mnt/ost
904 cfs21:~# mount -t lustre cfs21:/main /mnt/main
912 <primary>backup</primary>
913 <secondary>new/changed files</secondary>
</indexterm>Backing Up New/Changed Files to the Backup File
System</title>
<para>At periodic intervals, e.g. nightly, back up new and changed files
917 to the LVM-based backup file system.</para>
918 <screen>cfs21:~# cp /etc/passwd /mnt/main
920 cfs21:~# cp /etc/fstab /mnt/main
922 cfs21:~# ls /mnt/main
923 fstab passwd</screen>
928 <primary>backup</primary>
929 <secondary>using LVM</secondary>
930 <tertiary>creating snapshots</tertiary>
931 </indexterm>Creating Snapshot Volumes</title>
932 <para>Whenever you want to make a "checkpoint" of the main Lustre file
933 system, create LVM snapshots of all target MDT and OSTs in the LVM-based
934 backup file system. You must decide the maximum size of a snapshot ahead
935 of time, although you can dynamically change this later. The size of a
936 daily snapshot is dependent on the amount of data changed daily in the
937 main Lustre file system. It is likely that a two-day old snapshot will be
938 twice as big as a one-day old snapshot.</para>
939 <para>You can create as many snapshots as you have room for in the volume
group. If necessary, you can dynamically add disks to the volume
group.</para>
942 <para>The snapshots of the target MDT and OSTs should be taken at the
943 same point in time. Make sure that the cronjob updating the backup file
944 system is not running, since that is the only thing writing to the disks.
945 Here is an example:</para>
946 <screen>cfs21:~# modprobe dm-snapshot
947 cfs21:~# lvcreate -L50M -s -n MDT0.b1 /dev/vgmain/MDT0
948 Rounding up size to full physical extent 52.00 MB
949 Logical volume "MDT0.b1" created
950 cfs21:~# lvcreate -L50M -s -n OST0.b1 /dev/vgmain/OST0
951 Rounding up size to full physical extent 52.00 MB
952 Logical volume "OST0.b1" created
954 <para>After the snapshots are taken, you can continue to back up
955 new/changed files to "main". The snapshots will not contain the new
957 <screen>cfs21:~# cp /etc/termcap /mnt/main
cfs21:~# ls /mnt/main
fstab  passwd  termcap</screen>
965 <primary>backup</primary>
966 <secondary>using LVM</secondary>
967 <tertiary>restoring</tertiary>
968 </indexterm>Restoring the File System From a Snapshot</title>
<para>Use this procedure to restore the file system from an LVM
snapshot.</para>
973 <para>Rename the LVM snapshot.</para>
974 <para>Rename the file system snapshot from "main" to "back" so you
975 can mount it without unmounting "main". This is recommended, but not
977 <literal>--reformat</literal> flag to
978 <literal>tunefs.lustre</literal> to force the name change. For
980 <screen>cfs21:~# tunefs.lustre --reformat --fsname=back --writeconf /dev/vgmain/MDT0.b1
981 checking for existing Lustre data
983 Reading CONFIGS/mountdata
984 Read previous values:
991 Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
1000 Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
1002 Writing CONFIGS/mountdata
1003 cfs21:~# tunefs.lustre --reformat --fsname=back --writeconf /dev/vgmain/OST0.b1
1004 checking for existing Lustre data
1006 Reading CONFIGS/mountdata
1007 Read previous values:
1008 Target: main-OST0000
1014 Persistent mount opts: errors=remount-ro,extents,mballoc
1015 Parameters: mgsnode=192.168.0.21@tcp
1016 Permanent disk data:
1017 Target: back-OST0000
1023 Persistent mount opts: errors=remount-ro,extents,mballoc
1024 Parameters: mgsnode=192.168.0.21@tcp
1025 Writing CONFIGS/mountdata
<para>When renaming a file system, you must also erase the
<literal>last_rcvd</literal> file from the snapshots:</para>
1029 <screen>cfs21:~# mount -t ldiskfs /dev/vgmain/MDT0.b1 /mnt/mdtback
1030 cfs21:~# rm /mnt/mdtback/last_rcvd
1031 cfs21:~# umount /mnt/mdtback
1032 cfs21:~# mount -t ldiskfs /dev/vgmain/OST0.b1 /mnt/ostback
1033 cfs21:~# rm /mnt/ostback/last_rcvd
1034 cfs21:~# umount /mnt/ostback</screen>
<para>Mount the file system from the LVM snapshot. For
example:</para>
1039 <screen>cfs21:~# mount -t lustre /dev/vgmain/MDT0.b1 /mnt/mdtback
1040 cfs21:~# mount -t lustre /dev/vgmain/OST0.b1 /mnt/ostback
1041 cfs21:~# mount -t lustre cfs21:/back /mnt/back</screen>
<para>Note the old directory contents, as of the snapshot time. For
example:</para>
<screen>cfs21:~/cfs/b1_5/lustre/utils# ls /mnt/back
fstab  passwd</screen>
1052 <section remap="h3">
1055 <primary>backup</primary>
1056 <secondary>using LVM</secondary>
1057 <tertiary>deleting</tertiary>
1058 </indexterm>Deleting Old Snapshots</title>
1059 <para>To reclaim disk space, you can erase old snapshots as your backup
1060 policy dictates. Run:</para>
1061 <screen>lvremove /dev/vgmain/MDT0.b1</screen>
1063 <section remap="h3">
1066 <primary>backup</primary>
1067 <secondary>using LVM</secondary>
1068 <tertiary>resizing</tertiary>
1069 </indexterm>Changing Snapshot Volume Size</title>
1070 <para>You can also extend or shrink snapshot volumes if you find your
1071 daily deltas are smaller or larger than expected. Run:</para>
1072 <screen>lvextend -L10G /dev/vgmain/MDT0.b1</screen>
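<para>To shrink an over-provisioned snapshot, run (a sketch; first verify
that the snapshot still has room for its accumulated changes):</para>
<screen>lvreduce -L50M /dev/vgmain/MDT0.b1</screen>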
<para>Extending snapshots is known to be broken in older versions of
LVM; it works in LVM v2.02.01 and later.</para>
1079 <section xml:id="migrate_backends" condition="l2B">
1082 <primary>backup</primary>
1083 <secondary>ZFS ZPL</secondary>
</indexterm>Migration Between ZFS and ldiskfs Target Filesystems</title>
1086 <para>Beginning with Lustre 2.11.0, it is possible to migrate between
1087 ZFS and ldiskfs backends. For migrating OSTs, it is best to use
1088 <literal>lfs find</literal>/<literal>lfs_migrate</literal> to empty out
1089 an OST while the filesystem is in use and then reformat it with the new
1090 fstype. For instructions on removing the OST, please see
1091 <xref linkend="section_remove_ost"/>.</para>
1092 <section remap="h3" xml:id="migrate_backends.zfs2ldiskfs">
1095 <primary>backup</primary>
1096 <secondary>ZFS to ldiskfs</secondary>
1097 </indexterm>Migrate from a ZFS to an ldiskfs based filesystem</title>
1098 <para>The first step of the process is to make a ZFS backend backup
1099 using <literal>tar</literal> as described in
1100 <xref linkend="backup_fs_level"/>.</para>
1101 <para>Next, restore the backup to an ldiskfs-based system as described
1102 in <xref linkend="backup_fs_level.restore"/>.</para>
1104 <section remap="h3" xml:id="migrate_backends.ldiskfs2zfs">
1107 <primary>backup</primary>
<secondary>ldiskfs to ZFS</secondary>
1109 </indexterm>Migrate from an ldiskfs to a ZFS based filesystem</title>
1110 <para>The first step of the process is to make an ldiskfs backend backup
1111 using <literal>tar</literal> as described in
1112 <xref linkend="backup_fs_level"/>.</para>
1113 <para><emphasis role="strong">Caution:</emphasis>For a migration from
1114 ldiskfs to zfs, it is required to enable index_backup before the
1115 unmount of the target. This is an additional step for a regular
1116 ldiskfs-based backup/restore and easy to be missed.</para>
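<para>For example, on an ldiskfs-based target the index backup could be
enabled like this before unmounting (a sketch following the zfs form shown
in <xref linkend="backup_fs_level.index_objects"/>):</para>
<screen># lctl set_param osd-ldiskfs.${fsname}-${target}.index_backup=1</screen>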
<para>Next, restore the backup to a zfs-based system as described
in <xref linkend="backup_fs_level.restore"/>.</para>