<?xml version='1.0' encoding='utf-8'?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
xml:id="backupandrestore">
<title xml:id="backupandrestore.title">Backing Up and Restoring a File
System</title>
<para>This chapter describes how to back up and restore at the file
system level, device level, and file level in a Lustre file system. Each
backup approach is described in the following sections:</para>
<xref linkend="dbdoclet.50438207_56395" />
<xref linkend="dbdoclet.50438207_71633" />
<xref linkend="dbdoclet.50438207_21638" />
<xref linkend="dbdoclet.50438207_22325" />
<xref linkend="dbdoclet.50438207_31553" />
<section xml:id="dbdoclet.50438207_56395">
<primary>backup</primary>
<primary>restoring</primary>
<primary>LVM</primary>
<primary>rsync</primary>
</indexterm>Backing up a File System</title>
<para>Backing up a complete file system gives you full control over the
files to back up, and allows restoration of individual files as needed.
File system-level backups are also the easiest to integrate into existing
backup solutions.</para>
<para>File system backups are performed from a Lustre client (or many
clients working in parallel in different directories) rather than on
individual server nodes; this is no different than backing up any other
file system.</para>
<para>However, due to the large size of most Lustre file systems, it is
not always possible to get a complete backup. We recommend that you back
up subsets of a file system instead. This includes subdirectories of the
entire file system, filesets for a single user, incremental backups of
files changed since a given date, and so on.</para>
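<para>For example, a single user's subdirectory could be backed up from
any client with standard tools (the paths and names below are only
illustrative):</para>
<screen>[client]# tar czf /backup/user1-$(date +%Y%m%d).tgz -C /mnt/lustre/home user1</screen>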
<para>In order to allow the file system namespace to scale for future
applications, Lustre software release 2.x internally uses a 128-bit file
identifier for all files. To interface with user applications, the Lustre
software presents 64-bit inode numbers for the
<literal>stat()</literal>,
<literal>fstat()</literal>, and
<literal>readdir()</literal> system calls on 64-bit applications, and
32-bit inode numbers to 32-bit applications.</para>
<para>Some 32-bit applications accessing Lustre file systems (on both
32-bit and 64-bit CPUs) may experience problems with the
<literal>stat()</literal>,
<literal>fstat()</literal>, or
<literal>readdir()</literal> system calls under certain circumstances,
though the Lustre client should return 32-bit inode numbers to these
applications.</para>
<para>In particular, if the Lustre file system is exported from a 64-bit
client via NFS to a 32-bit client, the Linux NFS server will export
64-bit inode numbers to applications running on the NFS client. If the
32-bit applications are not compiled with Large File Support (LFS), then
they return
<literal>EOVERFLOW</literal> errors when accessing the Lustre files. To
avoid this problem, Linux NFS clients can use the kernel command-line
option
"<literal>nfs.enable_ino64=0</literal>" to force the NFS client to
present 32-bit inode numbers to applications.</para>
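<para>For example, on an NFS client using grub2, the option might be
made persistent as follows (the exact mechanism varies by distribution;
this is only a sketch):</para>
<screen>[nfs-client]# grubby --update-kernel=ALL --args="nfs.enable_ino64=0"
[nfs-client]# reboot</screen>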
<emphasis role="bold">Workaround</emphasis>: We very strongly recommend
that backups using
<literal>tar(1)</literal> and other utilities that depend on the inode
number to uniquely identify an inode be run on 64-bit clients. The
128-bit Lustre file identifiers cannot be uniquely mapped to a 32-bit
inode number, and as a result these utilities may operate incorrectly on
32-bit clients.</para>
<primary>backup</primary>
<secondary>rsync</secondary>
</indexterm>Lustre_rsync</title>
<para>The
<literal>lustre_rsync</literal> feature keeps the entire file system in
sync on a backup by replicating the file system's changes to a second
file system (the second file system need not be a Lustre file system, but
it must be sufficiently large).
<literal>lustre_rsync</literal> uses Lustre changelogs to efficiently
synchronize the file systems without having to scan (directory walk) the
Lustre file system. This efficiency is critically important for large
file systems, and distinguishes the Lustre
<literal>lustre_rsync</literal> feature from other replication/backup
solutions.</para>
<primary>backup</primary>
<secondary>rsync</secondary>
<tertiary>using</tertiary>
</indexterm>Using Lustre_rsync</title>
<para>The
<literal>lustre_rsync</literal> feature works by periodically running
<literal>lustre_rsync</literal>, a userspace program used to
synchronize changes in the Lustre file system onto the target file
system. The
<literal>lustre_rsync</literal> utility keeps a status file, which
enables it to be safely interrupted and restarted without losing
synchronization between the file systems.</para>
<para>The first time that
<literal>lustre_rsync</literal> is run, the user must specify a set of
parameters for the program to use. These parameters are described in
the following table and in
<xref linkend="dbdoclet.50438219_63667" />. On subsequent runs, these
parameters are stored in the status file, and only the name of the
status file needs to be passed to
<literal>lustre_rsync</literal>.</para>
<para>Before using
<literal>lustre_rsync</literal>:</para>
<para>Register the changelog user. For details, see the
<literal>changelog_register</literal> parameter in
<xref linkend="systemconfigurationutilities" /> (
<literal>lctl</literal>).</para>
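<para>For example, on the MDS (the MDT device name is
illustrative):</para>
<screen>[mds]# lctl --device testfs-MDT0000 changelog_register
Registered changelog userid 'cl1'</screen>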
<para>Verify that the Lustre file system (source) and the replica
file system (target) are identical
<emphasis>before</emphasis> registering the changelog user. If the
file systems differ, use a utility such as regular
<literal>rsync</literal> (not
<literal>lustre_rsync</literal>) to make them identical.</para>
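<para>A minimal sketch of such an initial copy, assuming the source and
target are mounted at
<literal>/mnt/lustre</literal> and
<literal>/mnt/target</literal>:</para>
<screen>[client]# rsync -aX /mnt/lustre/ /mnt/target/</screen>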
<para>The
<literal>lustre_rsync</literal> utility uses the following
parameters:</para>
<informaltable frame="all">
<colspec colname="c1" colwidth="3*" />
<colspec colname="c2" colwidth="10*" />
<emphasis role="bold">Parameter</emphasis>
<emphasis role="bold">Description</emphasis>
<literal>--source=
<replaceable>src</replaceable></literal>
<para>The path to the root of the Lustre file system (source)
which will be synchronized. This is a mandatory option if a
valid status log created during a previous synchronization
operation (
<literal>--statuslog</literal>) is not specified.</para>
<literal>--target=
<replaceable>tgt</replaceable></literal>
<para>The path to the root where the source file system will
be synchronized (target). This is a mandatory option if the
status log created during a previous synchronization
operation (
<literal>--statuslog</literal>) is not specified. This option
can be repeated if multiple synchronization targets are
desired.</para>
<literal>--mdt=
<replaceable>mdt</replaceable></literal>
<para>The metadata device to be synchronized. A changelog
user must be registered for this device. This is a mandatory
option if a valid status log created during a previous
synchronization operation (
<literal>--statuslog</literal>) is not specified.</para>
<literal>--user=
<replaceable>userid</replaceable></literal>
<para>The changelog user ID for the specified MDT. To use
<literal>lustre_rsync</literal>, the changelog user must be
registered. For details, see the
<literal>changelog_register</literal> parameter in
<xref linkend="systemconfigurationutilities" /> (
<literal>lctl</literal>). This is a mandatory option if a
valid status log created during a previous synchronization
operation (
<literal>--statuslog</literal>) is not specified.</para>
<literal>--statuslog=
<replaceable>log</replaceable></literal>
<para>A log file to which synchronization status is saved.
When the
<literal>lustre_rsync</literal> utility starts, if the status
log from a previous synchronization operation is specified,
then the state is read from the log and the otherwise-mandatory
<literal>--source</literal>,
<literal>--target</literal>, and
<literal>--mdt</literal> options can be skipped. Specifying
the
<literal>--source</literal>,
<literal>--target</literal>, and/or
<literal>--mdt</literal> options, in addition to the
<literal>--statuslog</literal> option, causes the specified
parameters in the status log to be overridden. Command line
options take precedence over options in the status
log.</para>
<literal>--xattrs=
<replaceable>yes|no</replaceable></literal>
<para>Specifies whether extended attributes (
<literal>xattrs</literal>) are synchronized or not. The
default is to synchronize extended attributes.</para>
<para>If xattrs are disabled, Lustre striping information is
not synchronized.</para>
<literal>--verbose</literal>
<para>Produces verbose output.</para>
<literal>--dry-run</literal>
<para>Shows the output of
<literal>lustre_rsync</literal> commands (
<literal>copy</literal>,
<literal>mkdir</literal>, etc.) on the target file system
without actually executing them.</para>
<literal>--abort-on-err</literal>
<para>Stops processing the
<literal>lustre_rsync</literal> operation if an error occurs.
The default is to continue the operation.</para>
<primary>backup</primary>
<secondary>rsync</secondary>
<tertiary>examples</tertiary>
<literal>lustre_rsync</literal> Examples</title>
<para>Sample
<literal>lustre_rsync</literal> commands are listed below.</para>
<para>Register a changelog user for an MDT (e.g.,
<literal>testfs-MDT0000</literal>).</para>
<screen># lctl --device testfs-MDT0000 changelog_register
Registered changelog userid 'cl1'</screen>
<para>Synchronize a Lustre file system (
<literal>/mnt/lustre</literal>) to a target file system (
<literal>/mnt/target</literal>).</para>
<screen>$ lustre_rsync --source=/mnt/lustre --target=/mnt/target \
--mdt=testfs-MDT0000 --user=cl1 --statuslog sync.log --verbose
Lustre filesystem: testfs
MDT device: testfs-MDT0000
Changelog registration: cl1
Starting changelog record: 0
lustre_rsync took 1 seconds
Changelog records consumed: 22</screen>
<para>After the file system undergoes changes, synchronize the changes
onto the target file system. Only the
<literal>statuslog</literal> name needs to be specified, as it stores
all the parameters passed earlier.</para>
<screen>$ lustre_rsync --statuslog sync.log --verbose
Replicating Lustre filesystem: testfs
MDT device: testfs-MDT0000
Changelog registration: cl1
Starting changelog record: 22
lustre_rsync took 2 seconds
Changelog records consumed: 42</screen>
<para>Synchronize a Lustre file system (
<literal>/mnt/lustre</literal>) to two target file systems (
<literal>/mnt/target1</literal> and
<literal>/mnt/target2</literal>).</para>
<screen>$ lustre_rsync --source=/mnt/lustre --target=/mnt/target1 \
--target=/mnt/target2 --mdt=testfs-MDT0000 --user=cl1 \
--statuslog sync.log</screen>
<section xml:id="dbdoclet.50438207_71633">
<primary>backup</primary>
<secondary>MDS/OST device level</secondary>
</indexterm>Backing Up and Restoring an MDS or OST (Device Level)</title>
<para>In some cases, it is useful to do a full device-level backup of an
individual device (MDT or OST), before replacing hardware, performing
maintenance, etc. Doing full device-level backups ensures that all of the
data and configuration files are preserved in the original state and is
the easiest method of doing a backup. For the MDT file system, it may
also be the fastest way to perform the backup and restore, since it can
do large streaming read and write operations at the maximum bandwidth of
the underlying devices.</para>
<para>Keeping an updated full backup of the MDT is especially important
because a permanent failure of the MDT file system renders the much
larger amount of data in all the OSTs largely inaccessible and
unusable.</para>
<warning condition='l23'>
<para>In Lustre software release 2.0 through 2.2, the only successful
way to back up and restore an MDT is to do a device-level backup as
described in this section. File-level restore of an MDT is not possible
before Lustre software release 2.3, as the Object Index (OI) file cannot
be rebuilt after restore without the OI Scrub functionality.
<emphasis role="bold">Since Lustre software release 2.3</emphasis>,
Object Index files are automatically rebuilt at first mount after a
restore is detected (see
<link xl:href="http://jira.hpdd.intel.com/browse/LU-957">LU-957</link>),
and file-level backup is supported (see
<xref linkend="dbdoclet.50438207_21638" />).</para>
<para>If hardware replacement is the reason for the backup or if a spare
storage device is available, it is possible to do a raw copy of the MDT
or OST from one block device to the other, as long as the new device is
at least as large as the original device. To do this, run:</para>
<screen>dd if=/dev/{original} of=/dev/{newdev} bs=1M</screen>
<para>If hardware errors cause read problems on the original device, use
the command below to allow as much data as possible to be read from the
original device while skipping sections of the disk with errors:</para>
<screen>dd if=/dev/{original} of=/dev/{newdev} bs=4k conv=sync,noerror \
count={original size in 4kB blocks}</screen>
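<para>The block count can be derived from the size of the original
device. For example (the device name is a placeholder):</para>
<screen>[oss]# echo $(($(blockdev --getsize64 /dev/{original}) / 4096))</screen>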
<para>Even in the face of hardware errors, the
<literal>ldiskfs</literal> file system is very robust and it may be
possible to recover the file system data after running
<literal>e2fsck -fy /dev/{newdev}</literal> on the new device, along with
<literal>ll_recover_lost_found_objs</literal> for OST devices.</para>
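<para>A minimal sketch of such a recovery on an OST (the device name and
mount point are illustrative; the OST must be mounted as ldiskfs for the
<literal>ll_recover_lost_found_objs</literal> step):</para>
<screen>[oss]# e2fsck -fy /dev/{newdev}
[oss]# mount -t ldiskfs /dev/{newdev} /mnt/ost
[oss]# ll_recover_lost_found_objs -d /mnt/ost/lost+found
[oss]# umount /mnt/ost</screen>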
<para condition="l26">With Lustre software version 2.6 and later, there
is no longer a need to run
<literal>ll_recover_lost_found_objs</literal> on the OSTs, since the
<literal>LFSCK</literal> scanning will automatically move objects from
<literal>lost+found</literal> back into their correct locations on the
OST after directory corruption.</para>
<section xml:id="dbdoclet.50438207_21638">
<primary>backup</primary>
<secondary>OST file system</secondary>
<primary>backup</primary>
<secondary>MDT file system</secondary>
</indexterm>Making a File-Level Backup of an OST or MDT File System</title>
<para>This procedure provides an alternative way to back up or migrate
the data of an OST or MDT at the file level. At the file level, unused
space is omitted from the backup, and the process may complete more
quickly with a smaller total backup size. Backing up a single OST device
is not necessarily the best way to perform backups of the Lustre file
system, since the files stored in the backup are not usable without
metadata stored on the MDT and additional file stripes that may be on
other OSTs. However, it is the preferred method for migration of OST
devices, especially when it is desirable to reformat the underlying file
system with different configuration options or to reduce
fragmentation.</para>
<para>Prior to Lustre software release 2.3, the only successful way to
perform an MDT backup and restore was to do a device-level backup as
described in
<xref linkend="dbdoclet.50438207_71633" />. The ability to do MDT
file-level backups is not available for Lustre software release 2.0
through 2.2, because restoration of the Object Index (OI) file does not
return the MDT to a functioning state.
<emphasis role="bold">Since Lustre software release 2.3</emphasis>,
Object Index files are automatically rebuilt at first mount after a
restore is detected (see
<link xl:href="http://jira.hpdd.intel.com/browse/LU-957">LU-957</link>),
so file-level MDT restore is supported.</para>
<para>For Lustre software release 2.3 and newer with MDT file-level
backup support, substitute
<literal>mdt</literal> for
<literal>ost</literal> in the instructions below.</para>
<emphasis role="bold">Make a mountpoint for the file
system.</emphasis>
<screen>[oss]# mkdir -p /mnt/ost</screen>
<emphasis role="bold">Mount the file system.</emphasis>
<screen>[oss]# mount -t ldiskfs /dev/<emphasis>{ostdev}</emphasis> /mnt/ost</screen>
<emphasis role="bold">Change to the mountpoint being backed
up.</emphasis>
<screen>[oss]# cd /mnt/ost</screen>
<emphasis role="bold">Back up the extended attributes.</emphasis>
<screen>[oss]# getfattr -R -d -m '.*' -e hex -P . > ea-$(date +%Y%m%d).bak</screen>
<para>If the
<literal>tar(1)</literal> command supports the
<literal>--xattrs</literal> option, the
<literal>getfattr</literal> step may be unnecessary as long as tar
properly backs up the
<literal>trusted.*</literal> attributes. However, completing this step
is not harmful and can serve as an added safety measure.</para>
<para>In most distributions, the
<literal>getfattr</literal> command is part of the
<literal>attr</literal> package. If the
<literal>getfattr</literal> command returns errors like
<literal>Operation not supported</literal>, then the kernel does not
correctly support EAs. Stop and use a different backup method.</para>
<emphasis role="bold">Verify that the
<literal>ea-$date.bak</literal> file has properly backed up the EA
data on the OST.</emphasis>
<para>Without this attribute data, the restore process may be missing
extra data that can be very useful in case of later file system
corruption. Look at this file with <literal>more</literal> or a text
editor. Each object file should have a corresponding item similar to
this:</para>
<screen>[oss]# file: O/0/d0/100992
trusted.fid= \
0x0d822200000000004a8a73e500000000808a0100000000000000000000000000</screen>
<emphasis role="bold">Back up all file system data.</emphasis>
<screen>[oss]# tar czvf {backup file}.tgz [--xattrs] --sparse .</screen>
<para>The
<literal>--sparse</literal> option is vital for backing up an MDT. In
order to have
<literal>--sparse</literal> behave correctly, and complete the backup
of an MDT in finite time, an appropriate version of tar must be used.
Correctly functioning versions of tar include the Lustre software
enhanced version of tar at
<link xmlns:xlink="http://www.w3.org/1999/xlink"
xlink:href="https://wiki.hpdd.intel.com/display/PUB/Lustre+Tools#LustreTools-lustre-tar" />,
the tar from a Red Hat Enterprise Linux distribution (version 6.3 or
more recent), and GNU tar version 1.25 or more recent.</para>
<para>The
<literal>--xattrs</literal> option is only available in GNU tar
distributions from Red Hat or Intel.</para>
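<para>Whether the installed tar supports these options can be checked
with something like the following (a sketch):</para>
<screen>[oss]# tar --version
[oss]# tar --help | grep -e xattrs -e sparse</screen>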
<emphasis role="bold">Change directory out of the file
system.</emphasis>
<screen>[oss]# cd -</screen>
<emphasis role="bold">Unmount the file system.</emphasis>
<screen>[oss]# umount /mnt/ost</screen>
<para>When restoring an OST backup on a different node as part of an
OST migration, you also have to change server NIDs and use the
<literal>--writeconf</literal> option to re-generate the
configuration logs. See
<xref linkend="lustremaintenance" /> (Changing a Server NID).</para>
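<para>A minimal sketch of regenerating the configuration logs after such
a move (the device name is a placeholder; see the referenced chapter for
the complete procedure):</para>
<screen>[oss]# tunefs.lustre --writeconf /dev/<emphasis>{ostdev}</emphasis></screen>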
<section xml:id="dbdoclet.50438207_22325">
<primary>backup</primary>
<secondary>restoring file system backup</secondary>
</indexterm>Restoring a File-Level Backup</title>
<para>To restore data from a file-level backup, you need to format the
device, restore the file data, and then restore the EA data.</para>
<para>Format the new device.</para>
<screen>[oss]# mkfs.lustre --ost --index {<emphasis>OST index</emphasis>} {<emphasis>other options</emphasis>} /dev/<emphasis>{newdev}</emphasis></screen>
<para>Set the file system label.</para>
<screen>[oss]# e2label /dev/<emphasis>{newdev}</emphasis> {fsname}-OST{index in hex}</screen>
<para>Mount the file system.</para>
<screen>[oss]# mount -t ldiskfs /dev/<emphasis>{newdev}</emphasis> /mnt/ost</screen>
<para>Change to the new file system mount point.</para>
<screen>[oss]# cd /mnt/ost</screen>
<para>Restore the file system backup.</para>
<screen>[oss]# tar xzvpf <emphasis>{backup file}</emphasis> [--xattrs] --sparse</screen>
<para>Restore the file system extended attributes.</para>
<screen>[oss]# setfattr --restore=ea-${date}.bak</screen>
<para>If the
<literal>--xattrs</literal> option is supported by tar and specified
in the step above, this step is redundant.</para>
<para>Verify that the extended attributes were restored.</para>
<screen>[oss]# getfattr -d -m ".*" -e hex O/0/d0/100992
trusted.fid= \
0x0d822200000000004a8a73e500000000808a0100000000000000000000000000</screen>
<para>Change directory out of the file system.</para>
<screen>[oss]# cd -</screen>
<para>Unmount the new file system.</para>
<screen>[oss]# umount /mnt/ost</screen>
<para condition='l23'>If the file system was used between the time the
backup was made and when it was restored, then the online
<literal>LFSCK</literal> tool (part of Lustre code after version 2.3)
will automatically be run to ensure the file system is coherent. If all
of the device file systems were backed up at the same time after the
entire Lustre file system was stopped, this step is unnecessary. In
either case, the file system will be immediately usable, although there
may be I/O errors reading from files that are present on the MDT but not
the OSTs, and files that were created after the MDT backup will not be
accessible or visible. See
<xref linkend="dbdoclet.lfsckadmin" /> for details on using LFSCK.</para>
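<para>If desired, LFSCK can also be started manually on the MDS; a
sketch (the MDT device name is illustrative and the available options
vary by release):</para>
<screen>[mds]# lctl lfsck_start -M testfs-MDT0000</screen>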
<section xml:id="dbdoclet.50438207_31553">
<primary>backup</primary>
<secondary>using LVM</secondary>
</indexterm>Using LVM Snapshots with the Lustre File System</title>
<para>If you want to perform disk-based backups (because, for example,
access to the backup system needs to be as fast as to the primary Lustre
file system), you can use the Linux LVM snapshot tool to maintain
multiple, incremental file system backups.</para>
<para>Because LVM snapshots cost CPU cycles as new files are written,
taking snapshots of the main Lustre file system will probably result in
unacceptable performance losses. You should create a new, backup Lustre
file system and periodically (e.g., nightly) back up new/changed files to
it. Periodic snapshots can be taken of this backup file system to create
a series of "full" backups.</para>
<para>Creating an LVM snapshot is not as reliable as making a separate
backup, because the LVM snapshot shares the same disks as the primary MDT
device, and depends on the primary MDT device for much of its data. If
the primary MDT device becomes corrupted, this may result in the snapshot
being corrupted.</para>
<primary>backup</primary>
<secondary>using LVM</secondary>
<tertiary>creating</tertiary>
</indexterm>Creating an LVM-based Backup File System</title>
<para>Use this procedure to create a backup Lustre file system for use
with the LVM snapshot mechanism.</para>
<para>Create LVM volumes for the MDT and OSTs.</para>
<para>Create LVM devices for your MDT and OST targets. Make sure not
to use the entire disk for the targets; save some room for the
snapshots. The snapshots start out with zero size, but grow as you make
changes to the current file system. If you expect to change 20% of
the file system between backups, the most recent snapshot will be 20%
of the target size, the next older one will be 40%, etc. Here is an
example:</para>
<screen>cfs21:~# pvcreate /dev/sda1
Physical volume "/dev/sda1" successfully created
cfs21:~# vgcreate vgmain /dev/sda1
Volume group "vgmain" successfully created
cfs21:~# lvcreate -L200G -nMDT0 vgmain
Logical volume "MDT0" created
cfs21:~# lvcreate -L200G -nOST0 vgmain
Logical volume "OST0" created
cfs21:~# lvscan
ACTIVE '/dev/vgmain/MDT0' [200.00 GB] inherit
ACTIVE '/dev/vgmain/OST0' [200.00 GB] inherit</screen>
<para>Format the LVM volumes as Lustre targets.</para>
<para>In this example, the backup file system is called
<literal>main</literal> and designates the current, most up-to-date
backup.</para>
<screen>cfs21:~# mkfs.lustre --fsname=main --mdt --index=0 /dev/vgmain/MDT0
No management node specified, adding MGS to this MDT.
(MDT MGS first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
checking for existing Lustre data
formatting backing filesystem ldiskfs on /dev/vgmain/MDT0
target name main-MDT0000
options -i 4096 -I 512 -q -O dir_index -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L main-MDT0000 -i 4096 -I 512 -q
-O dir_index -F /dev/vgmain/MDT0
Writing CONFIGS/mountdata
cfs21:~# mkfs.lustre --mgsnode=cfs21 --fsname=main --ost --index=0 /dev/vgmain/OST0
(OST first_time update )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=192.168.0.21@tcp
checking for existing Lustre data
formatting backing filesystem ldiskfs on /dev/vgmain/OST0
target name main-OST0000
options -I 256 -q -O dir_index -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L main-OST0000 -J size=400 -I 256
-i 262144 -O extents,uninit_bg,dir_nlink,huge_file,flex_bg -G 256
-E resize=4290772992,lazy_journal_init, -F /dev/vgmain/OST0
Writing CONFIGS/mountdata
cfs21:~# mount -t lustre /dev/vgmain/MDT0 /mnt/mdt
cfs21:~# mount -t lustre /dev/vgmain/OST0 /mnt/ost
cfs21:~# mount -t lustre cfs21:/main /mnt/main</screen>
<primary>backup</primary>
<secondary>new/changed files</secondary>
</indexterm>Backing up New/Changed Files to the Backup File
System</title>
<para>At periodic intervals, e.g., nightly, back up new and changed
files to the LVM-based backup file system.</para>
<screen>cfs21:~# cp /etc/passwd /mnt/main
cfs21:~# cp /etc/fstab /mnt/main
cfs21:~# ls /mnt/main
fstab passwd</screen>
<primary>backup</primary>
<secondary>using LVM</secondary>
<tertiary>creating snapshots</tertiary>
</indexterm>Creating Snapshot Volumes</title>
<para>Whenever you want to make a "checkpoint" of the main Lustre file
system, create LVM snapshots of all target MDT and OSTs in the LVM-based
backup file system. You must decide the maximum size of a snapshot ahead
of time, although you can dynamically change this later. The size of a
daily snapshot is dependent on the amount of data changed daily in the
main Lustre file system. It is likely that a two-day old snapshot will be
twice as big as a one-day old snapshot.</para>
<para>You can create as many snapshots as you have room for in the
volume group. If necessary, you can dynamically add disks to the volume
group.</para>
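<para>For example, an additional physical volume could be added to the
volume group as follows (the device names are illustrative):</para>
<screen>cfs21:~# pvcreate /dev/sdb1
cfs21:~# vgextend vgmain /dev/sdb1</screen>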
<para>The snapshots of the target MDT and OSTs should be taken at the
same point in time. Make sure that the cronjob updating the backup file
system is not running, since that is the only thing writing to the disks.
Here is an example:</para>
<screen>cfs21:~# modprobe dm-snapshot
cfs21:~# lvcreate -L50M -s -n MDT0.b1 /dev/vgmain/MDT0
Rounding up size to full physical extent 52.00 MB
Logical volume "MDT0.b1" created
cfs21:~# lvcreate -L50M -s -n OST0.b1 /dev/vgmain/OST0
Rounding up size to full physical extent 52.00 MB
Logical volume "OST0.b1" created</screen>
<para>After the snapshots are taken, you can continue to back up
new/changed files to "main". The snapshots will not contain the new
files.</para>
<screen>cfs21:~# cp /etc/termcap /mnt/main
cfs21:~# ls /mnt/main
fstab passwd termcap</screen>
<primary>backup</primary>
<secondary>using LVM</secondary>
<tertiary>restoring</tertiary>
</indexterm>Restoring the File System From a Snapshot</title>
<para>Use this procedure to restore the file system from an LVM
snapshot.</para>
<para>Rename the LVM snapshot.</para>
<para>Rename the file system snapshot from "main" to "back" so you
can mount it without unmounting "main". This is recommended, but not
required. Use the
<literal>--reformat</literal> flag to
<literal>tunefs.lustre</literal> to force the name change. For
example:</para>
<screen>cfs21:~# tunefs.lustre --reformat --fsname=back --writeconf /dev/vgmain/MDT0.b1
checking for existing Lustre data
Reading CONFIGS/mountdata
Read previous values:
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Writing CONFIGS/mountdata
cfs21:~# tunefs.lustre --reformat --fsname=back --writeconf /dev/vgmain/OST0.b1
checking for existing Lustre data
Reading CONFIGS/mountdata
Read previous values:
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=192.168.0.21@tcp
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=192.168.0.21@tcp
Writing CONFIGS/mountdata</screen>
<para>When renaming a file system, you must also erase the
<literal>last_rcvd</literal> file from the snapshots:</para>
<screen>cfs21:~# mount -t ldiskfs /dev/vgmain/MDT0.b1 /mnt/mdtback
cfs21:~# rm /mnt/mdtback/last_rcvd
cfs21:~# umount /mnt/mdtback
cfs21:~# mount -t ldiskfs /dev/vgmain/OST0.b1 /mnt/ostback
cfs21:~# rm /mnt/ostback/last_rcvd
cfs21:~# umount /mnt/ostback</screen>
<para>Mount the file system from the LVM snapshot. For
example:</para>
<screen>cfs21:~# mount -t lustre /dev/vgmain/MDT0.b1 /mnt/mdtback
cfs21:~# mount -t lustre /dev/vgmain/OST0.b1 /mnt/ostback
cfs21:~# mount -t lustre cfs21:/back /mnt/back</screen>
<para>Note the old directory contents, as of the snapshot time. For
example:</para>
<screen>cfs21:~/cfs/b1_5/lustre/utils# ls /mnt/back
fstab passwd</screen>
<primary>backup</primary>
<secondary>using LVM</secondary>
<tertiary>deleting</tertiary>
</indexterm>Deleting Old Snapshots</title>
<para>To reclaim disk space, you can erase old snapshots as your backup
policy dictates. Run:</para>
<screen>lvremove /dev/vgmain/MDT0.b1</screen>
<primary>backup</primary>
<secondary>using LVM</secondary>
<tertiary>resizing</tertiary>
</indexterm>Changing Snapshot Volume Size</title>
<para>You can also extend or shrink snapshot volumes if you find your
daily deltas are smaller or larger than expected. Run:</para>
<screen>lvextend -L10G /dev/vgmain/MDT0.b1</screen>
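<para>To shrink a snapshot volume instead, something like the following
could be used (the size and volume name are illustrative):</para>
<screen>lvreduce -L4G /dev/vgmain/MDT0.b1</screen>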
<para>Extending snapshots seems to be broken in older LVM. It is
working in LVM v2.02.01.</para>