From: Andreas Dilger
Date: Fri, 21 May 2021 00:59:47 +0000 (-0600)
Subject: LUDOC-394 manual: remove 'dbdoclet.' from crossrefs
X-Git-Url: https://git.whamcloud.com/?a=commitdiff_plain;h=refs%2Fchanges%2F59%2F43759%2F3;p=doc%2Fmanual.git

LUDOC-394 manual: remove 'dbdoclet.' from crossrefs

Remove the "dbdoclet." prefix from manual cross-references. This prefix
is a remnant of the auto-generated cross-references created when the
manual was originally imported, and is no longer useful.

Signed-off-by: Andreas Dilger
Change-Id: Iad17a07542423b9aa99c5d2eb3727b6ef98b14ea
Reviewed-on: https://review.whamcloud.com/43759
Reviewed-by: Arshad Hussain
Tested-by: jenkins
---
diff --git a/BackupAndRestore.xml b/BackupAndRestore.xml
index d932250..8e8fcf8 100644
--- a/BackupAndRestore.xml
+++ b/BackupAndRestore.xml
@@ -10,12 +10,12 @@
-
+
-
+
@@ -30,13 +30,13 @@
-
+
 It is strongly recommended that sites perform periodic device-level
 backup of the MDT(s)
- (),
+ (),
 for example twice a week with alternate backups going to a separate
 device, even if there is not enough capacity to do a full backup of all
 of the filesystem data. Even if there are separate file-level backups of
@@ -52,7 +52,7 @@
 it is needed), and only needs good linear read/write performance. While
 the device-level MDT backup is not useful for restoring individual
 files, it is most efficient to handle the case of MDT failure or
 corruption.
-
+
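The whole patch applies one mechanical substitution. A representative before/after pair, quoted verbatim from the ConfiguringLustre.xml hunks later in the patch, looks like this:

    -      <xref linkend="dbdoclet.format_ost" />and Step
    +      <xref linkend="format_ost" />and Step

Only the dbdoclet. prefix is dropped. The section and listitem xml:id anchors are renamed the same way, so every <xref linkend=...> continues to resolve.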
<indexterm> <primary>backup</primary> @@ -152,7 +152,7 @@ <literal>lustre_rsync</literal> is run, the user must specify a set of parameters for the program to use. These parameters are described in the following table and in - <xref linkend="dbdoclet.lustre_rsync" />. On subsequent runs, these + <xref linkend="lustre_rsync" />. On subsequent runs, these parameters are stored in the the status file, and only the name of the status file needs to be passed to <literal>lustre_rsync</literal>.</para> @@ -403,7 +403,7 @@ Changelog records consumed: 42</screen> </section> </section> </section> - <section xml:id="dbdoclet.backup_device"> + <section xml:id="backup_device"> <title> <indexterm> <primary>backup</primary> @@ -777,9 +777,9 @@ trusted.fid= \ will be immediately although there may be I/O errors reading from files that are present on the MDT but not the OSTs, and files that were created after the MDT backup will not be accessible or visible. See - <xref linkend="dbdoclet.lfsckadmin" />for details on using LFSCK.</para> + <xref linkend="lfsckadmin" />for details on using LFSCK.</para> </section> - <section xml:id="dbdoclet.backup_lvm_snapshot"> + <section xml:id="backup_lvm_snapshot"> <title> <indexterm> <primary>backup</primary> diff --git a/ConfigurationFilesModuleParameters.xml b/ConfigurationFilesModuleParameters.xml index 50cbcfa..67eed0c 100644 --- a/ConfigurationFilesModuleParameters.xml +++ b/ConfigurationFilesModuleParameters.xml @@ -6,13 +6,13 @@ <para>This section describes configuration files and module parameters and includes the following sections:</para> <itemizedlist> <listitem> - <para><xref linkend="dbdoclet.tuning_lnet_mod_params"/></para> + <para><xref linkend="tuning_lnet_mod_params"/></para> </listitem> <listitem> <para><xref linkend="module_options"/></para> </listitem> </itemizedlist> - <section xml:id="dbdoclet.tuning_lnet_mod_params"> + <section xml:id="tuning_lnet_mod_params"> <title> <indexterm><primary>configuring</primary></indexterm> <indexterm><primary>LNet</primary><see>configuring</see></indexterm> diff --git a/ConfiguringLNet.xml b/ConfiguringLNet.xml index b7868fd..3f22f7c 100755 --- a/ConfiguringLNet.xml +++ b/ConfiguringLNet.xml @@ -156,7 +156,7 @@ lnetctl net add --net tcp2 --if eth0 </itemizedlist></para> <para>For examples on adding multiple interfaces via <literal>lnetctl net add</literal> and/or YAML, please see - <xref linkend="dbdoclet.mrconfiguring" /> + <xref linkend="mrconfiguring" /> </para></note> <para>Networks can be deleted with the @@ -446,7 +446,7 @@ route: </indexterm>Showing routing information When routing is enabled on a node, the tiny, small and large routing buffers are allocated. See for more details on router + linkend="tuning_lnet_params"/> for more details on router buffers. This information can be shown as follows: lnetctl routing show: show routing information diff --git a/ConfiguringLustre.xml b/ConfiguringLustre.xml index ee5447d..63dad0f 100644 --- a/ConfiguringLustre.xml +++ b/ConfiguringLustre.xml @@ -9,16 +9,16 @@ - + - + -
+
<indexterm> <primary>Lustre</primary> @@ -123,10 +123,10 @@ mkfs.lustre --fsname= <replaceable>/dev/block_device</replaceable> </screen> <para>See - <xref linkend="dbdoclet.lustre_configure_multiple_fs" />for more details.</para> + <xref linkend="lustre_configure_multiple_fs" />for more details.</para> </note> </listitem> - <listitem xml:id="dbdoclet.addmdtindex"> + <listitem xml:id="addmdtindex"> <para>Optionally add in additional MDTs.</para> <screen> mkfs.lustre --fsname= @@ -151,7 +151,7 @@ mount -t lustre devices, mount them both.</para> </note> </listitem> - <listitem xml:id="dbdoclet.format_ost"> + <listitem xml:id="format_ost"> <para>Create the OST. On the OSS node, run:</para> <screen> mkfs.lustre --fsname= @@ -190,7 +190,7 @@ mkfs.lustre --fsname= instead format the whole disk for the file system.</para> </note> </listitem> - <listitem xml:id="dbdoclet.mount_ost"> + <listitem xml:id="mount_ost"> <para>Mount the OST. On the OSS node where the OST was created, run:</para> <screen> @@ -200,12 +200,12 @@ mount -t lustre </screen> <note> <para>To create additional OSTs, repeat Step - <xref linkend="dbdoclet.format_ost" />and Step - <xref linkend="dbdoclet.mount_ost" />, specifying the + <xref linkend="format_ost" />and Step + <xref linkend="mount_ost" />, specifying the next higher OST index number.</para> </note> </listitem> - <listitem xml:id="dbdoclet.mount_on_client"> + <listitem xml:id="mount_on_client"> <para>Mount the Lustre file system on the client. On the client node, run:</para> <screen> @@ -216,7 +216,7 @@ mount -t lustre </screen> <note> <para>To mount the filesystem on additional clients, repeat Step - <xref linkend="dbdoclet.mount_on_client" />.</para> + <xref linkend="mount_on_client" />.</para> </note> <note> <para>If you have a problem mounting the file system, check the @@ -704,7 +704,7 @@ Lustre: temp-MDT0000.mdt: set parameter identity_upcall=/usr/sbin/l_getidentity Lustre: Server temp-MDT0000 on device /dev/sdb has started </screen> </listitem> - <listitem xml:id="dbdoclet.create_and_mount_ost"> + <listitem xml:id="create_and_mount_ost"> <para>Create and mount <literal>ost0</literal>.</para> <para>In this example, the OSTs ( @@ -919,7 +919,7 @@ total 8.0M use.</para> </section> </section> - <section xml:id="dbdoclet.lustre_configure_additional_options"> + <section xml:id="lustre_configure_additional_options"> <title> <indexterm> <primary>Lustre</primary> @@ -937,10 +937,10 @@ total 8.0M </indexterm>Scaling the Lustre File System A Lustre file system can be scaled by adding OSTs or clients. For instructions on creating additional OSTs repeat Step - and Step - above. For mounting + and Step + above. For mounting additional clients, repeat Step - for each client. + for each client.
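The scaling note above says to repeat the format_ost and mount_ost steps for each added OST. A minimal sketch of one such repetition; the fsname, MGS NID, index, and device names are illustrative and not taken from this patch:

    oss# mkfs.lustre --fsname=testfs --mgsnode=mgs@tcp0 --ost --index=1 /dev/sdb
    oss# mkdir -p /mnt/testfs/ost1
    oss# mount -t lustre /dev/sdb /mnt/testfs/ost1

Each repetition uses the next unused --index value, as the note following the mount_ost step requires.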
diff --git a/ConfiguringStorage.xml b/ConfiguringStorage.xml index ea2c3d0..162942b 100644 --- a/ConfiguringStorage.xml +++ b/ConfiguringStorage.xml @@ -22,7 +22,7 @@ </listitem> <listitem> <para> - <xref linkend="dbdoclet.ldiskfs_raid_opts"/> + <xref linkend="ldiskfs_raid_opts"/> </para> </listitem> <listitem> @@ -84,7 +84,7 @@ data integrity is critical. You should carefully consider whether the benefits of using writeback cache outweigh the risks.</para> </section> - <section xml:id="dbdoclet.ldiskfs_raid_opts"> + <section xml:id="ldiskfs_raid_opts"> <title> <indexterm> <primary>storage</primary> @@ -108,7 +108,7 @@ This is alternately referred to as the RAID stripe size. This is applicable to both MDT and OST file systems.</para> <para>For more information on how to override the defaults while formatting - MDT or OST file systems, see <xref linkend="dbdoclet.ldiskfs_mkfs_opts"/>.</para> + MDT or OST file systems, see <xref linkend="ldiskfs_mkfs_opts"/>.</para> <section remap="h3"> <title><indexterm><primary>storage</primary><secondary>configuring</secondary><tertiary>for mkfs</tertiary></indexterm>Computing file system parameters for mkfs For best results, use RAID 5 with 5 or 9 disks or RAID 6 with 6 or 10 disks, each on a different controller. The stripe width is the optimal minimum I/O size. Ideally, the RAID configuration should allow 1 MB Lustre RPCs to fit evenly on a single RAID stripe without an expensive read-modify-write cycle. Use this formula to determine the diff --git a/DataOnMDT.xml b/DataOnMDT.xml index ac1213a..330fd2e 100644 --- a/DataOnMDT.xml +++ b/DataOnMDT.xml @@ -4,7 +4,7 @@ xml:id="dataonmdt" condition="l2B"> Data on MDT (DoM) This chapter describes Data on MDT (DoM). -
+
<indexterm> <primary>dom</primary> @@ -32,7 +32,7 @@ a client writes or truncates the file beyond the size of the MDT component.</para> </section> - <section xml:id="dbdoclet.usercommands"> + <section xml:id="usercommands"> <title> <indexterm> <primary>dom</primary> @@ -48,7 +48,7 @@ <literal>lfs find</literal> command can be used to search the directory tree rooted at the given directory or file name for the files that match the given DoM component parameters, e.g. layout type.</para> - <section xml:id="dbdoclet.lfssetstripe"> + <section xml:id="lfssetstripe"> <title> <indexterm> <primary>dom</primary> @@ -246,7 +246,7 @@ client$ lfs getstripe /mnt/lustre/domdir/domfile lower value.</para></note> </section> </section> - <section xml:id="dbdoclet.domstripesize"> + <section xml:id="domstripesize"> <title> <indexterm> <primary>dom</primary> @@ -278,10 +278,10 @@ client$ lfs getstripe /mnt/lustre/domdir/domfile It is 1MB by default and can be changed with the <literal>lctl</literal> tool. For more information on setting <literal>dom_stripesize</literal> please see - <xref linkend="dbdoclet.dom_stripesize" />.</para> + <xref linkend="dom_stripesize" />.</para> </section> </section> - <section xml:id="dbdoclet.domlfsgetstripe"> + <section xml:id="domlfsgetstripe"> <title> <indexterm> <primary>dom</primary> @@ -327,7 +327,7 @@ client$ lfs getstripe -I1 -L -E /mnt/lustre/domfile both can be used to get size on the MDT.</para> </section> </section> - <section xml:id="dbdoclet.domlfsfind"> + <section xml:id="domlfsfind"> <title> <indexterm> <primary>dom</primary> @@ -375,7 +375,7 @@ client$ lfs find -L mdt -S +200K -type f /mnt/lustre files are found because their DoM size is 1MB.</para> </section> </section> - <section xml:id="dbdoclet.dom_stripesize"> + <section xml:id="dom_stripesize"> <title> <indexterm> <primary>dom</primary> @@ -444,7 +444,7 @@ mds# lctl get_param -n lod.*MDT0000*.dom_stripesize </para> </section> </section> - <section xml:id="dbdoclet.disabledom"> + <section xml:id="disabledom"> <title> <indexterm> <primary>dom</primary> diff --git a/InstallingLustre.xml b/InstallingLustre.xml index bae0684..63d1e52 100644 --- a/InstallingLustre.xml +++ b/InstallingLustre.xml @@ -13,7 +13,7 @@ </listitem> <listitem> <para> - <xref linkend="dbdoclet.lustre_installation" /> + <xref linkend="lustre_installation" /> </para> </listitem> </itemizedlist> @@ -317,7 +317,7 @@ <emphasis role="bold">Use the same user IDs (UID) and group IDs (GID) on all clients.</emphasis> </emphasis>If use of supplemental groups is required, see - <xref linkend="dbdoclet.identity_upcall" /> for information about + <xref linkend="identity_upcall" /> for information about supplementary user and group cache upcall (<code>identity_upcall</code>).</para> </listitem> <listitem> @@ -363,7 +363,7 @@ </itemizedlist></para> </section> </section> - <section xml:id="dbdoclet.lustre_installation"> + <section xml:id="lustre_installation"> <title>Lustre Software Installation Procedure Before installing the Lustre software, back up ALL data. The Lustre @@ -403,7 +403,7 @@ linkend="table.installed_server_pkg" />for a list of required packages. - + Install the Lustre server and e2fsprogs packages on all Lustre servers (MGS, MDSs, and OSSs). diff --git a/LNetMultiRail.xml b/LNetMultiRail.xml index 67b9516..95739a5 100644 --- a/LNetMultiRail.xml +++ b/LNetMultiRail.xml @@ -7,14 +7,14 @@ administration. - - - + + + - + -
+
<indexterm><primary>MR</primary><secondary>overview</secondary> </indexterm>Multi-Rail Overview In computer networking, multi-rail is an arrangement in which two or @@ -30,7 +30,7 @@ Multi-Rail High-Level Design
-  <section xml:id="dbdoclet.mrconfiguring">
+  <section xml:id="mrconfiguring">
<indexterm><primary>MR</primary><secondary>configuring</secondary> </indexterm>Configuring Multi-Rail Every node using multi-rail networking needs to be properly @@ -50,7 +50,7 @@ For information on the dynamic peer discovery feature added in Lustre Release 2.11.0, see . -
+
<indexterm><primary>MR</primary> <secondary>multipleinterfaces</secondary> </indexterm>Configure Multiple Interfaces on the Local Node @@ -114,7 +114,7 @@ net: dev cpt: -1 CPT: "[0]"
-
+
<indexterm><primary>MR</primary> <secondary>deleteinterfaces</secondary> </indexterm>Deleting Network Interfaces @@ -150,7 +150,7 @@ net: interfaces: 0: eth0
-
+
<indexterm><primary>MR</primary> <secondary>addremotepeers</secondary> </indexterm>Adding Remote Peers that are Multi-Rail Capable @@ -200,7 +200,7 @@ peer: peer ni: - nid: 192.168.122.31@tcp
-
+
<indexterm><primary>MR</primary> <secondary>deleteremotepeers</secondary> </indexterm>Deleting Remote Peers @@ -234,7 +234,7 @@ peer: % lnetctl import --del < delPeer.yaml
-
+
<indexterm><primary>MR</primary> <secondary>mrrouting</secondary> </indexterm>Notes on routing with Multi-Rail @@ -246,13 +246,13 @@ peer: the same gateway node but as different routes. This uses the existing route monitoring algorithm to guard against interfaces going down. With the feature introduced in Lustre 2.13, the - new algorithm uses the feature to + new algorithm uses the feature to monitor the different interfaces of the gateway and always ensures that the healthiest interface is used. Therefore, the configuration described in this section applies to releases prior to Lustre 2.13. It will still work in 2.13 as well, however it is not required due to the reason mentioned above. -
+
<indexterm><primary>MR</primary> <secondary>mrrouting</secondary> <tertiary>routingex</tertiary> @@ -300,7 +300,7 @@ lnetctl peer add --nid <rtrX-nidA>@o2ib1,<rtrX-nidB>@o2ib1</screen> under development and single interface failure will still cause the entire router to go down.</para> </section> - <section xml:id="dbdoclet.mrroutingresiliency"> + <section xml:id="mrroutingresiliency"> <title><indexterm><primary>MR</primary> <secondary>mrrouting</secondary> <tertiary>routingresiliency</tertiary> @@ -353,7 +353,7 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1</screen> </listitem> </orderedlist> </section> - <section xml:id="dbdoclet.mrroutingmixed"> + <section xml:id="mrroutingmixed"> <title><indexterm><primary>MR</primary> <secondary>mrrouting</secondary> <tertiary>routingmixed</tertiary> @@ -380,7 +380,7 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1</screen> longer needed. One route per gateway should be configured. Gateway interfaces are used according to the Multi-Rail selection criteria.</para> </listitem> - <listitem><para>Routing now relies on <xref linkend="dbdoclet.mrhealth" /> + <listitem><para>Routing now relies on <xref linkend="mrhealth" /> to keep track of the route aliveness.</para></listitem> <listitem><para>Router interfaces are monitored via LNet Health. If an interface fails other interfaces will be used.</para></listitem> @@ -549,7 +549,7 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1</screen> </orderedlist> </section> </section> - <section xml:id="dbdoclet.mrhealth" condition="l2C"> + <section xml:id="mrhealth" condition="l2C"> <title><indexterm><primary>MR</primary><secondary>health</secondary> </indexterm>LNet Health LNet Multi-Rail has implemented the ability for multiple interfaces @@ -564,7 +564,7 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1 monitors the status of the send and receive operations and uses this status to increment the interface's health value in case of success and decrement it in case of failure. -
+
<indexterm><primary>MR</primary> <secondary>mrhealth</secondary> <tertiary>value</tertiary> @@ -576,7 +576,7 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1</screen> The granularity allows the Multi-Rail algorithm to select the interface that has the highest likelihood of sending or receiving a message.</para> </section> - <section xml:id="dbdoclet.mrhealthfailuretypes"> + <section xml:id="mrhealthfailuretypes"> <title><indexterm><primary>MR</primary> <secondary>mrhealth</secondary> <tertiary>failuretypes</tertiary> @@ -653,7 +653,7 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1</screen> </tbody></tgroup> </informaltable> </section> - <section xml:id="dbdoclet.mrhealthinterface"> + <section xml:id="mrhealthinterface"> <title><indexterm><primary>MR</primary> <secondary>mrhealth</secondary> <tertiary>interface</tertiary> @@ -799,12 +799,12 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1</screen> </tgroup> </informaltable> </section> - <section xml:id="dbdoclet.mrhealthdisplay"> + <section xml:id="mrhealthdisplay"> <title><indexterm><primary>MR</primary> <secondary>mrhealth</secondary> <tertiary>display</tertiary> </indexterm>Displaying Information -
+
Showing LNet Health Configuration Settings lnetctl can be used to show all the LNet health configuration settings using the lnetctl global show @@ -819,7 +819,7 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1 health_sensitivity: 100 recovery_interval: 1
-
+
Showing LNet Health Statistics
 LNet Health statistics are shown under higher verbosity
 settings. To show the local interface health statistics:
 lnetctl net show -v 3
 To show the peer NI health statistics:
 lnetctl peer show -v 3
@@ -907,7 +907,7 @@ lnetctl route add --net o2ib0 --gateway <rtrX-nidB>@o2ib1
 drop_length: 0
-
+
<indexterm><primary>MR</primary> <secondary>mrhealth</secondary> <tertiary>initialsetup</tertiary> diff --git a/LazySizeOnMDT.xml b/LazySizeOnMDT.xml index 033285e..076c725 100644 --- a/LazySizeOnMDT.xml +++ b/LazySizeOnMDT.xml @@ -4,7 +4,7 @@ xml:id="lsom" condition="l2C"> <title xml:id="lsom.title">Lazy Size on MDT (LSoM) This chapter describes Lazy Size on MDT (LSoM). -
+
<indexterm> <primary>lsom</primary> @@ -30,7 +30,7 @@ Future improvements will allow the LSoM data to be accessed by tools such as <literal>lfs find</literal>.</para> </section> - <section xml:id="dbdoclet.enablelsom"> + <section xml:id="enablelsom"> <title><indexterm><primary>lsom</primary> <secondary>enablelsom</secondary></indexterm>Enable LSoM LSoM is always enabled and nothing needs to be done to enable the @@ -59,7 +59,7 @@ extra overhead when accessing files, and is not recommended for normal usage.
-
+
<indexterm><primary>lsom</primary> <secondary>usercommands</secondary></indexterm>User Commands Lustre provides the lfs getsom command to list @@ -70,7 +70,7 @@ Lustre file system mount point. llsom_sync uses Lustre MDS changelogs and, thus, a changelog user must be registered to use this utility. -
+
<indexterm><primary>lsom</primary> <secondary>lfsgetsom</secondary></indexterm>lfs getsom for LSoM data diff --git a/LustreDebugging.xml b/LustreDebugging.xml index cce7317..51082e2 100644 --- a/LustreDebugging.xml +++ b/LustreDebugging.xml @@ -7,16 +7,16 @@ following sections: - + - + - + -
+  <section xml:id="debugging_tools">
<indexterm><primary>debugging</primary></indexterm> Diagnostic and Debugging Tools A variety of diagnostic and analysis tools are available to debug @@ -76,15 +76,15 @@ Diagnostic and Debugging Tools - This tool is used with the debug_kernel option to manually dump the Lustre debugging log or post-process debugging logs that are dumped automatically. For more information about the - lctl tool, see and . + lctl tool, see and . Lustre subsystem asserts - A panic-style assertion (LBUG) in the kernel causes the Lustre file system to dump the debug log to the file /tmp/lustre-log.timestamp where it can be retrieved after a reboot. For more information, see . + linkend="troubleshooting"/>. @@ -102,8 +102,8 @@ Diagnostic and Debugging Tools The tools described in this section are provided in the Linux kernel or are available at an external website. For information about using some of these tools for Lustre debugging, see - and - . + and + .
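The table above mentions dumping the Lustre debug log with lctl and notes that binary dumps must be post-processed before they are readable. A minimal sketch of both operations, with illustrative file names:

    # lctl debug_kernel /tmp/lustre-debug.txt
    # lctl df /tmp/lustre-log.<timestamp> /tmp/lustre-log.txt

The first command dumps the kernel debug buffer to a file; the second (debug_file, abbreviated df) converts a binary dump such as the /tmp/lustre-log.timestamp file written after an LBUG into plain text.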
<indexterm><primary>debugging</primary><secondary>admin tools</secondary></indexterm>Tools for Administrators and Developers @@ -233,7 +233,7 @@ Diagnostic and Debugging Tools
-
+
<indexterm><primary>debugging</primary><secondary>procedure</secondary></indexterm>Lustre Debugging Procedures The procedures below may be useful to administrators or developers debugging a Lustre files system. @@ -241,15 +241,15 @@ Diagnostic and Debugging Tools <indexterm><primary>debugging</primary><secondary>message format</secondary></indexterm>Understanding the Lustre Debug Messaging Format Lustre debug messages are categorized by originating subsystem, message type, and location in the source code. For a list of subsystems - and message types, see . + and message types, see . For a current list of subsystems and debug message types, see libcfs/include/libcfs/libcfs_debug.h in the Lustre software tree - The elements of a Lustre debug message are described in Format of Lustre Debug Messages. -
+ The elements of a Lustre debug message are described in Format of Lustre Debug Messages. +
Lustre Debug Messages Each Lustre debug message has the tag of the subsystem it originated in, the message type, and the location in the source code. The subsystems and debug types used are as @@ -530,7 +530,7 @@ Diagnostic and Debugging Tools
-
+
Format of Lustre Debug Messages The Lustre software uses the CDEBUG() and CERROR() macros to print the debug or error messages. To print the @@ -633,12 +633,12 @@ Diagnostic and Debugging Tools Lustre debug messages are maintained in a buffer, with the maximum buffer size specified (in MBs) by the debug_mb parameter (lctl get_param debug_mb). The buffer is circular, so debug messages are kept until the allocated buffer limit is reached, and then the first messages are overwritten.
-    <section xml:id="dbdoclet.using_lctl_tool">
+    <section xml:id="using_lctl_tool">
<indexterm><primary>debugging</primary><secondary>using lctl</secondary></indexterm>Using the lctl Tool to View Debug Messages The lctl tool allows debug messages to be filtered based on subsystems and message types to extract information useful for troubleshooting from a kernel debug log. For a command - reference, see . + reference, see . You can use lctl to: @@ -853,7 +853,7 @@ lctl> debug_kernel [filename] modinfo libcfs
-
+
<indexterm><primary>debugging</primary><secondary>developers tools</secondary></indexterm>Lustre Debugging for Developers The procedures in this section may be useful to developers debugging Lustre source code. @@ -1255,7 +1255,7 @@ lctl> debug_kernel [filename] Dump the log into a user-specified log file using lctl - (see ). + (see ). diff --git a/LustreMaintenance.xml b/LustreMaintenance.xml index 5efcffc..ac0a3bf 100644 --- a/LustreMaintenance.xml +++ b/LustreMaintenance.xml @@ -418,7 +418,7 @@ Adding a New OST to a Lustre File System Add a new OST by using mkfs.lustre as when the filesystem was first formatted, see - for details. Each new OST + for details. Each new OST must have a unique index number, use lctl dl to see a list of all OSTs. For example, to add a new OST at index 12 to the testfs filesystem run following commands @@ -446,7 +446,7 @@ oss# mount -t lustre /dev/sda /mnt/testfs/ost12 system on OST0004 that are larger than 4GB in size to other OSTs, enter: client# lfs find /test --ost test-OST0004 -size +4G | lfs_migrate -y - See for details. + See for details.
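The 'Adding a New OST' hunk above relies on lctl dl to find an unused OST index before formatting. A quick check on the OSS (names illustrative; OST device names encode the index in hexadecimal, so index 12 appears as testfs-OST000c):

    oss# lctl dl

The command lists the configured Lustre devices on the node together with their type, name, and state.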
@@ -476,14 +476,14 @@ Removing and Restoring MDTs and OSTs A hard drive has failed and a RAID resync/rebuild is underway, though the OST can also be marked degraded by the RAID system to avoid allocating new files on the slow OST which - can reduce performance, see + can reduce performance, see for more details. OST is nearing its space capacity, though the MDS will already try to avoid allocating new files on overly-full OSTs if possible, - see for details. + see for details. @@ -716,7 +716,7 @@ oss# mount -t ldiskfs /dev/ost_device /mnt/ost Restoring OST Configuration Files If the original OST is still available, it is best to follow the OST backup and restore procedure given in either - , or + , or and . To replace an OST that was removed from service due to corruption @@ -758,7 +758,7 @@ oss# mount -t ldiskfs /dev/new_ost_dev / Recreate the OST configuration files, if unavailable. Follow the procedure in - to recreate the LAST_ID + to recreate the LAST_ID file for this OST index. The last_rcvd file will be recreated when the OST is first mounted using the default parameters, which are normally correct for all file systems. The diff --git a/LustreMonitoring.xml b/LustreMonitoring.xml index a7cde5f..43814a5 100644 --- a/LustreMonitoring.xml +++ b/LustreMonitoring.xml @@ -10,7 +10,7 @@ Lustre Changelogs - Lustre Jobstats + Lustre Jobstats Lustre Monitoring Tool @@ -270,7 +270,7 @@ Lustre Changelogs Event types marked with * are not recorded by default. Refer to - for instructions on + for instructions on modifying the Changelogs mask. FID-to-full-pathname and pathname-to-FID functions are also included to map target and parent FIDs into the file system namespace. @@ -435,7 +435,7 @@ MARK CREAT MKDIR HLINK SLINK MKNOD UNLNK RMDIR RENME RNMTO CLOSE LYOUT \ TRUNC SATTR XATTR HSM MTIME CTIME MIGRT
-
+
Setting the Changelog Mask To set the current changelog mask on a specific device (lustre-MDT0000): @@ -591,7 +591,7 @@ mdd.seb-MDT0000.changelog_deniednext=120
-  <section xml:id="dbdoclet.jobstats">
+  <section xml:id="jobstats">
<indexterm><primary>jobstats</primary><see>monitoring</see></indexterm> <indexterm><primary>monitoring</primary></indexterm> <indexterm><primary>monitoring</primary><secondary>jobstats</secondary></indexterm> diff --git a/LustreNodemap.xml b/LustreNodemap.xml index 76d1ff3..21a0c46 100644 --- a/LustreNodemap.xml +++ b/LustreNodemap.xml @@ -201,7 +201,7 @@ drwxr-xr-x 3 root root 4096 Jul 23 09:02 .. <para>If UID 11002 or GID 11001 do not exist on the Lustre MDS or MGS, create them in LDAP or other data sources, or trust clients by setting <literal>identity_upcall</literal> to <literal>NONE</literal>. For more - information, see <xref linkend="dbdoclet.identity_upcall"/>.</para> + information, see <xref linkend="identity_upcall"/>.</para> <para>Building a larger and more complex configuration is possible by iterating through the <literal>lctl</literal> commands above. In diff --git a/LustreOperations.xml b/LustreOperations.xml index 5f0cdd8..b529026 100644 --- a/LustreOperations.xml +++ b/LustreOperations.xml @@ -6,7 +6,7 @@ <para>Once you have the Lustre file system up and running, you can use the procedures in this section to perform these basic Lustre administration tasks.</para> - <section xml:id="dbdoclet.mount_by_label"> + <section xml:id="mount_by_label"> <title> <indexterm> <primary>operations</primary> @@ -51,7 +51,7 @@ client# mount -t lustre mds0@tcp0:/short <replaceable>/dev/long_mountpoint_name</replaceable> </screen> </section> - <section xml:id="dbdoclet.starting_lustre"> + <section xml:id="starting_lustre"> <title> <indexterm> <primary>operations</primary> @@ -81,7 +81,7 @@ client# mount -t lustre mds0@tcp0:/short </listitem> </orderedlist> </section> - <section xml:id="dbdoclet.mounting_server"> + <section xml:id="mounting_server"> <title> <indexterm> <primary>operations</primary> @@ -133,7 +133,7 @@ LABEL=testfs-OST0000 /mnt/test/ost0 lustre defaults,_netdev,noauto 0 0 environment.</para> </caution> </section> - <section xml:id="dbdoclet.shutdownLustre"> + <section xml:id="shutdownLustre"> <title> <indexterm> <primary>operations</primary> @@ -197,9 +197,9 @@ XXX.XXX.0.11@tcp:/testfs on /mnt/testfs type lustre (rw,lazystatfs) </listitem> </orderedlist> <para>For unmount command syntax for a single OST, MDT, or MGT target - please refer to <xref linkend="dbdoclet.umountTarget"/></para> + please refer to <xref linkend="umountTarget"/></para> </section> - <section xml:id="dbdoclet.umountTarget"> + <section xml:id="umountTarget"> <title> <indexterm> <primary>operations</primary> @@ -291,7 +291,7 @@ $ tunefs.lustre --param failover.mode=failout </para> </note> </section> - <section xml:id="dbdoclet.degraded_ost"> + <section xml:id="degraded_ost"> <title> <indexterm> <primary>operations</primary> @@ -327,7 +327,7 @@ lctl get_param obdfilter.*.degraded <literal>mdadm(8)</literal> command with the <literal>--monitor</literal> option to mark an affected device degraded or restored.</para> </section> - <section xml:id="dbdoclet.lustre_configure_multiple_fs"> + <section xml:id="lustre_configure_multiple_fs"> <title> <indexterm> <primary>operations</primary> @@ -426,7 +426,7 @@ client# mount -t lustre mgsnode@tcp0:/foo /mnt/foo client# mount -t lustre mgsnode@tcp0:/bar /mnt/bar </screen> </section> - <section xml:id="dbdoclet.lfsmkdir"> + <section xml:id="lfsmkdir"> <title> <indexterm> <primary>operations</primary> @@ -446,7 +446,7 @@ client# lfs mkdir –i <literal>mdt_index</literal>. 
For more information on adding additional MDTs and <literal>mdt_index</literal> see - <xref linkend='dbdoclet.addmdtindex' />.</para> + <xref linkend='addmdtindex' />.</para> <warning> <para>An administrator can allocate remote sub-directories to separate MDTs. Creating remote sub-directories in parent directories not hosted on @@ -478,7 +478,7 @@ client# lfs mkdir –i <screen>mds# lctl get_param mdt.<replaceable>*</replaceable>.enable_remote_dir_gid</screen> </para> </section> - <section xml:id="dbdoclet.lfsmkdirdne2" condition='l28'> + <section xml:id="lfsmkdirdne2" condition='l28'> <title> <indexterm> <primary>operations</primary> @@ -518,7 +518,7 @@ client# lfs mkdir -c <para>The striped directory feature is most useful for distributing single large directories (50k entries or more) across multiple MDTs, since it incurs more overhead than non-striped directories.</para> - <section xml:id="dbdoclet.lfsmkdirbyspace" condition='l2D'> + <section xml:id="lfsmkdirbyspace" condition='l2D'> <title>Directory creation by space/inode usage If the starting MDT is not specified when creating a new directory, this directory and its stripes will be distributed on MDTs by space usage. @@ -540,7 +540,7 @@ client# lfs mkdir -c
-
+
<indexterm> <primary>operations</primary> @@ -551,20 +551,20 @@ client# lfs mkdir -c <itemizedlist> <listitem> <para>When creating a file system, use mkfs.lustre. See - <xref linkend="dbdoclet.tuning_params_mkfs_lustre" />below.</para> + <xref linkend="tuning_params_mkfs_lustre" />below.</para> </listitem> <listitem> <para>When a server is stopped, use tunefs.lustre. See - <xref linkend="dbdoclet.setting_param_tunefs" />below.</para> + <xref linkend="setting_param_tunefs" />below.</para> </listitem> <listitem> <para>When the file system is running, use lctl to set or retrieve Lustre parameters. See - <xref linkend="dbdoclet.setting_param_with_lctl" />and - <xref linkend="dbdoclet.reporting_current_param" />below.</para> + <xref linkend="setting_param_with_lctl" />and + <xref linkend="reporting_current_param" />below.</para> </listitem> </itemizedlist> - <section xml:id="dbdoclet.tuning_params_mkfs_lustre"> + <section xml:id="tuning_params_mkfs_lustre"> <title>Setting Tunable Parameters with <literal>mkfs.lustre</literal> When the file system is first formatted, parameters can simply be @@ -579,7 +579,7 @@ mds# mkfs.lustre --mdt --param="sys.timeout=50" /dev/sda mkfs.lustre, see .
-    <section xml:id="dbdoclet.tuning_params_mkfs_lustre"> hmm
Setting Parameters with <literal>tunefs.lustre</literal> If a server (OSS or MDS) is stopped, parameters can be added to an @@ -614,7 +614,7 @@ mds# tunefs.lustre --param mdt.identity_upcall=NONE /dev/sda1 tunefs.lustre, see .
-    <section xml:id="dbdoclet.setting_param_with_lctl">
+    <section xml:id="setting_param_with_lctl">
Setting Parameters with <literal>lctl</literal> When the file system is running, the @@ -625,7 +625,7 @@ mds# tunefs.lustre --param mdt.identity_upcall=NONE /dev/sda1 The lctl list_param command enables users to list all parameters that can be set. See - . + . For more details about the lctl command, see the examples in the sections below @@ -659,7 +659,7 @@ osc.myth-OST0003-osc.max_dirty_mb=32 osc.myth-OST0004-osc.max_dirty_mb=32
-
+
Setting Permanent Parameters Use lctl set_param -P or lctl conf_param command to set permanent parameters. @@ -695,7 +695,7 @@ $ lctl conf_param testfs.sys.timeout=40 file system's configuration file on the MGS.
-
+
Setting Permanent Parameters with lctl set_param -P The lctl set_param -P command can also set parameters permanently using the same syntax as @@ -741,7 +741,7 @@ lctl set_param -P -d provides an interactive list of available parameters.
-
+
Listing Parameters To list Lustre or LNet parameters that are available to set, use the @@ -767,7 +767,7 @@ lctl list_param [-FR] oss# lctl list_param obdfilter.lustre-OST0000
-    <section xml:id="dbdoclet.reporting_current_param">
+    <section xml:id="reporting_current_param">
Reporting Current Parameter Values To report current Lustre parameter values, use the lctl get_param command with this syntax: @@ -873,7 +873,7 @@ mds1# lctl get_param mdt.testfs-MDT0000.recovery_status .
-
+
<indexterm> <primary>operations</primary> @@ -926,7 +926,7 @@ $ mkfs.lustre --reformat --writeconf --fsname spfs --mgsnode= them.</para> </note> </section> - <section xml:id="dbdoclet.reclaiming_reserved_disk_space"> + <section xml:id="reclaiming_reserved_disk_space"> <title> <indexterm> <primary>operations</primary> @@ -952,7 +952,7 @@ tune2fs [-m reserved_blocks_percent] /dev/ 5%.</para> </warning> </section> - <section xml:id="dbdoclet.replacing_existing_ost_mdt"> + <section xml:id="replacing_existing_ost_mdt"> <title> <indexterm> <primary>operations</primary> @@ -960,12 +960,12 @@ tune2fs [-m reserved_blocks_percent] /dev/ </indexterm>Replacing an Existing OST or MDT To copy the contents of an existing OST to a new OST (or an old MDT to a new MDT), follow the process for either OST/MDT backups in - or + or . For more information on removing a MDT, see .
-
+
<indexterm> <primary>operations</primary> diff --git a/LustreProc.xml b/LustreProc.xml index 782f2e6..bc70d6f 100644 --- a/LustreProc.xml +++ b/LustreProc.xml @@ -95,7 +95,7 @@ osc.testfs-OST0000-osc-ffff881071d5cc00.connect_flags </listitem> </itemizedlist> <para>For more information about using <literal>lctl</literal>, see <xref - xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.setting_param_with_lctl"/>.</para> + xmlns:xlink="http://www.w3.org/1999/xlink" linkend="setting_param_with_lctl"/>.</para> <para>Data can also be viewed using the <literal>cat</literal> command with the full path to the file. The form of the <literal>cat</literal> command is similar to that of the <literal>lctl get_param</literal> @@ -119,7 +119,7 @@ osc.testfs-OST0000-osc-ffff881071d5cc00.connect_flags <para>The <literal>llstat</literal> utility can be used to monitor some Lustre file system I/O activity over a specified time period. For more details, see - <xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.config_llstat"/></para> + <xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="config_llstat"/></para> <para>Some data is imported from attached clients and is available in a directory called <literal>exports</literal> located in the corresponding per-service directory on a Lustre server. For example: @@ -1720,7 +1720,7 @@ obdfilter.lol-OST0001.sync_journal=0</screen> <screen>$ lctl get_param obdfilter.*.sync_on_lock_cancel obdfilter.lol-OST0001.sync_on_lock_cancel=never</screen> </section> - <section xml:id="dbdoclet.TuningModRPCs" condition='l28'> + <section xml:id="TuningModRPCs" condition='l28'> <title> <indexterm> <primary>proc</primary> @@ -2375,7 +2375,7 @@ nid refs peer max tx min </listitem> </itemizedlist></para> </section> - <section remap="h3" xml:id="dbdoclet.balancing_free_space"> + <section remap="h3" xml:id="balancing_free_space"> <title><indexterm> <primary>proc</primary> <secondary>free space</secondary> @@ -2491,7 +2491,7 @@ nid refs peer max tx min ldlm.namespaces.myth-MDT0000-mdc-ffff8804296c2800.lru_max_age=900000 </screen> </section> - <section xml:id="dbdoclet.tuning_setting_thread_count"> + <section xml:id="tuning_setting_thread_count"> <title><indexterm> <primary>proc</primary> <secondary>thread counts</secondary> diff --git a/LustreProgrammingInterfaces.xml b/LustreProgrammingInterfaces.xml index cb72cf0..3fb4f98 100644 --- a/LustreProgrammingInterfaces.xml +++ b/LustreProgrammingInterfaces.xml @@ -8,16 +8,16 @@ This chapter includes the following sections:</para> <itemizedlist> <listitem> - <para><xref linkend="dbdoclet.identity_upcall"/></para> + <para><xref linkend="identity_upcall"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.perm_downcall_data"/></para> + <para><xref linkend="perm_downcall_data"/></para> </listitem> </itemizedlist> <note> <para>Lustre programming interface man pages are found in the <literal>lustre/doc</literal> folder.</para> </note> - <section xml:id="dbdoclet.identity_upcall"> + <section xml:id="identity_upcall"> <title><indexterm> <primary>programming</primary> <secondary>upcall</secondary> @@ -48,7 +48,7 @@ list. This upcall executable opens the <literal>mdt.${FSNAME}-MDT{xxxx}.identity_info</literal> parameter file and writes the related <literal>identity_downcall_data</literal> data - structure (see <xref linkend="dbdoclet.perm_downcall_data"/>). The + structure (see <xref linkend="perm_downcall_data"/>). 
The upcall is configured with <literal>lctl set_param mdt.${FSNAME}-MDT{xxxx}.identity_upcall</literal>.</para> <para>The default identity upcall program installed is @@ -114,7 +114,7 @@ </itemizedlist> </section> </section> - <section xml:id="dbdoclet.perm_downcall_data"> + <section xml:id="perm_downcall_data"> <title>Data Structures struct perm_downcall_data { __u64 pdd_nid; diff --git a/LustreRecovery.xml b/LustreRecovery.xml index d0f24cc..d065be0 100644 --- a/LustreRecovery.xml +++ b/LustreRecovery.xml @@ -129,7 +129,7 @@ If multiple MDTs are in use, active-active failover is possible (e.g. two MDS nodes, each actively serving one or more different MDTs for the same filesystem). See - for more information. + for more information.
<indexterm><primary>recovery</primary><secondary>OST failure</secondary></indexterm>OST Failure (Failover) diff --git a/LustreTroubleshooting.xml b/LustreTroubleshooting.xml index 04ae32f..26ba2e7 100644 --- a/LustreTroubleshooting.xml +++ b/LustreTroubleshooting.xml @@ -11,7 +11,7 @@ - + @@ -185,7 +185,7 @@
-
+
<indexterm><primary>troubleshooting</primary><secondary>error messages</secondary></indexterm>Viewing Error Messages As Lustre software code runs on the kernel, single-digit error codes display to the application; these error codes are an indication of the problem. Refer to the kernel console @@ -216,7 +216,7 @@
-  <section xml:id="dbdoclet.reporting_lustre_problem">
+  <section xml:id="reporting_lustre_problem">
<indexterm> <primary>troubleshooting</primary> <secondary>reporting bugs</secondary> @@ -246,7 +246,7 @@ tickets for your issue. <emphasis role="italic">For search tips, see <xref xmlns:xlink="http://www.w3.org/1999/xlink" - linkend="dbdoclet.searching_jira"/>.</emphasis></para> + linkend="searching_jira"/>.</emphasis></para> </listitem> <listitem> <para>To create a ticket, click <emphasis role="bold">+Create Issue</emphasis> in the @@ -288,17 +288,17 @@ <listitem> <para><emphasis role="italic">Attachments</emphasis> - Attach log sources such as Lustre debug log dumps (see <xref xmlns:xlink="http://www.w3.org/1999/xlink" - linkend="dbdoclet.debugging_tools"/>), syslogs, or console logs. <emphasis + linkend="debugging_tools"/>), syslogs, or console logs. <emphasis role="italic"><emphasis role="bold">Note:</emphasis></emphasis> Lustre debug logs must be processed using <code>lctl df</code> prior to attaching to a Jira ticket. For more information, see <xref xmlns:xlink="http://www.w3.org/1999/xlink" - linkend="dbdoclet.using_lctl_tool"/>. </para> + linkend="using_lctl_tool"/>. </para> </listitem> </itemizedlist>Other fields in the form are used for project tracking and are irrelevant to reporting an issue. You can leave these in their default state.</para> </listitem> </orderedlist></para> - <section xml:id="dbdoclet.searching_jira"> + <section xml:id="searching_jira"> <title>Searching Jira<superscript>*</superscript>for Duplicate Tickets Before submitting a ticket, always search the Jira bug tracker for an existing ticket for your issue. This avoids duplicating effort and @@ -357,7 +357,7 @@ then it is possible that you have discovered a programming error that allowed the servers to get out of sync. Please submit a Jira ticket (see ). + linkend="reporting_lustre_problem"/>). If the reported error is anything else (such as -5, "I/O error"), it likely indicates a storage device failure. The low-level file system returns this error if it is @@ -441,7 +441,7 @@ OST in its place or replace it with a newly-formatted OST. In that case, the missing objects are created and are read as zero-filled.
-
+
Fixing a Bad LAST_ID on an OST Each OST contains a LAST_ID file, which holds the last object (pre-)created by the MDS @@ -751,7 +751,7 @@ server now claims 791)! For information on determining the MDS memory and OSS memory - requirements, see . + requirements, see .
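The LAST_ID discussion above is cut short by the hunk boundary. For orientation, the procedure it introduces begins by reading the stored value directly from the OST, along the lines of the ldiskfs mount already quoted in the LustreMaintenance.xml hunks (device and mount point illustrative):

    oss# mount -t ldiskfs /dev/ost_device /mnt/ost
    oss# od -Ax -td8 /mnt/ost/O/0/LAST_ID

The printed object ID can then be compared with the last_id value the MDS claims for that OST; the 'server now claims 791' console message quoted above reports exactly such a mismatch.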
Setting SCSI I/O Sizes diff --git a/LustreTuning.xml b/LustreTuning.xml index 231918c..a8bde00 100644 --- a/LustreTuning.xml +++ b/LustreTuning.xml @@ -10,7 +10,7 @@ parameters. These parameters are contained in the /etc/modprobe.d/lustre.conf file. -
+
<indexterm> <primary>tuning</primary> @@ -97,7 +97,7 @@ lctl {get,set}_param {service}.thread_{min,max,started} <para> This works in a similar fashion to binding of threads on MDS. MDS thread tuning is covered in - <xref linkend="dbdoclet.mdsbinding" />.</para> + <xref linkend="mdsbinding" />.</para> <itemizedlist> <listitem> <para> @@ -113,9 +113,9 @@ lctl {get,set}_param {service}.thread_{min,max,started} </listitem> </itemizedlist> <para>For further details, see - <xref linkend="dbdoclet.tuning_setting_thread_count" />.</para> + <xref linkend="tuning_setting_thread_count" />.</para> </section> - <section xml:id="dbdoclet.mdstuning"> + <section xml:id="mdstuning"> <title> <indexterm> <primary>tuning</primary> @@ -136,7 +136,7 @@ lctl {get,set}_param {service}.thread_{min,max,started} </screen> </para> <para>For details, see - <xref linkend="dbdoclet.tuning_setting_thread_count" />.</para> + <xref linkend="tuning_setting_thread_count" />.</para> <para>The number of MDS service threads started depends on system size and the load on the server, and has a default maximum of 64. The maximum potential number of threads (<literal>MDS_MAX_THREADS</literal>) @@ -161,7 +161,7 @@ lctl {get,set}_param {service}.thread_{min,max,started} </itemizedlist> </section> </section> - <section xml:id="dbdoclet.mdsbinding"> + <section xml:id="mdsbinding"> <title> <indexterm> <primary>tuning</primary> @@ -173,7 +173,7 @@ lctl {get,set}_param {service}.thread_{min,max,started} bindings are selected automatically to provide good overall performance for a given CPU count. However, an administrator can deviate from these setting if they choose. For details on specifying the mapping of CPU cores to - CPTs see <xref linkend="dbdoclet.libcfstuning"/>. + CPTs see <xref linkend="libcfstuning"/>. </para> <itemizedlist> <listitem> @@ -204,7 +204,7 @@ options mdt mds_num_cpts=[0]</screen> </example> </para> </section> - <section xml:id="dbdoclet.tuning_lnet_params"> + <section xml:id="tuning_lnet_params"> <title> <indexterm> <primary>LNet</primary> @@ -265,7 +265,7 @@ options ksocklnd enable_irq_affinity=0 an administrator can bind an interface to one or more CPU partitions. Bindings are specified as options to the LNet modules. 
For more information on specifying module options, see - <xref linkend="dbdoclet.tuning_lnet_mod_params" /></para> + <xref linkend="tuning_lnet_mod_params" /></para> <para>For example, <literal>o2ib0(ib0)[0,1]</literal> will ensure that all messages for <literal>o2ib0</literal> will be handled by LND threads executing on @@ -509,7 +509,7 @@ lnet large_router_buffers=8192 be MAX.</para> </section> </section> - <section xml:id="dbdoclet.libcfstuning"> + <section xml:id="libcfstuning"> <title> <indexterm> <primary>tuning</primary> @@ -590,7 +590,7 @@ cpu_partition_table= </para> </section> </section> - <section xml:id="dbdoclet.lndtuning"> + <section xml:id="lndtuning"> <title> <indexterm> <primary>tuning</primary> @@ -1033,7 +1033,7 @@ cpu_partition_table= </informaltable> </section> </section> - <section xml:id="dbdoclet.nrstuning"> + <section xml:id="nrstuning"> <title> <indexterm> <primary>tuning</primary> @@ -1568,7 +1568,7 @@ ost.OSS.ost_io.nrs_orr_supported=reg_supported:reads_and_writes </listitem> </itemizedlist> </section> - <section xml:id="dbdoclet.tbftuning" condition='l26'> + <section xml:id="tbftuning" condition='l26'> <title> <indexterm> <primary>tuning</primary> @@ -1687,7 +1687,7 @@ $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\ <para><emphasis role="bold">JobID based TBF policy</emphasis></para> <para>For the JobID, please see <xref xmlns:xlink="http://www.w3.org/1999/xlink" - linkend="dbdoclet.jobstats" /> for more details.</para> + linkend="jobstats" /> for more details.</para> <para>Command:</para> <screen>lctl set_param x.x.x.nrs_tbf_rule= "[reg|hp] start <replaceable>rule_name</replaceable> jobid={<replaceable>jobid_list</replaceable>} rate=<replaceable>rate</replaceable>" @@ -1913,7 +1913,7 @@ default * 10000, ref 0</screen> </itemizedlist> </section> </section> - <section xml:id="dbdoclet.delaytuning" condition='l2A'> + <section xml:id="delaytuning" condition='l2A'> <title> <indexterm> <primary>tuning</primary> @@ -2059,7 +2059,7 @@ ost.OSS.ost_io.nrs_delay_pct=hp_delay_pct:5 </itemizedlist> </section> </section> - <section xml:id="dbdoclet.tuning_lockless_IO"> + <section xml:id="tuning_lockless_IO"> <title> <indexterm> <primary>tuning</primary> @@ -2341,7 +2341,7 @@ ldlm.namespaces.filter-<replaceable>fsname</replaceable>-*. renegotiate the new maximum RPC size.</para></caution> </section> </section> - <section xml:id="dbdoclet.tuning_IO_small_files"> + <section xml:id="tuning_IO_small_files"> <title> <indexterm> <primary>tuning</primary> @@ -2381,7 +2381,7 @@ ldlm.namespaces.filter-<replaceable>fsname</replaceable>-*. 
</listitem> </itemizedlist> </section> - <section xml:id="dbdoclet.write_vs_read_performance"> + <section xml:id="write_vs_read_performance"> <title> <indexterm> <primary>tuning</primary> diff --git a/ManagingFailover.xml b/ManagingFailover.xml index d544787..e7b26e9 100644 --- a/ManagingFailover.xml +++ b/ManagingFailover.xml @@ -8,7 +8,7 @@ sections:</para> <itemizedlist> <listitem> - <para><xref linkend="dbdoclet.overview_mmp"/></para> + <para><xref linkend="overview_mmp"/></para> </listitem> <listitem> <para><xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="section_etn_4zf_tl"/></para> @@ -18,7 +18,7 @@ <para>For information about configuring a Lustre file system for failover, see <xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="configuringfailover"/></para> </note> - <section xml:id="dbdoclet.overview_mmp"> + <section xml:id="overview_mmp"> <title> <indexterm> <primary>multiple-mount protection</primary> diff --git a/ManagingLNet.xml b/ManagingLNet.xml index 8606bf6..40189ff 100644 --- a/ManagingLNet.xml +++ b/ManagingLNet.xml @@ -278,7 +278,7 @@ ents"</screen> identified. To ensure that a router is identified correctly, make sure to add its local NID in the routes parameter in the modprobe lustre configuration file. - See <xref linkend='dbdoclet.tuning_lnet_mod_params'/>.</para> + See <xref linkend='tuning_lnet_mod_params'/>.</para> </section> <section remap="h3"> <title><indexterm><primary>LNet</primary></indexterm><literal>lustre_routes_conversion</literal> diff --git a/SettingUpLustreSystem.xml b/SettingUpLustreSystem.xml index 53bd427..ce887ef 100644 --- a/SettingUpLustreSystem.xml +++ b/SettingUpLustreSystem.xml @@ -9,31 +9,31 @@ - + - + - + - + - + -
+
<indexterm><primary>setup</primary></indexterm> <indexterm><primary>setup</primary><secondary>hardware</secondary></indexterm> <indexterm><primary>design</primary><see>setup</see></indexterm> @@ -157,7 +157,7 @@ results.)</para> </section> </section> - <section xml:id="dbdoclet.space_requirements"> + <section xml:id="space_requirements"> <title><indexterm><primary>setup</primary><secondary>space</secondary></indexterm> <indexterm><primary>space</primary><secondary>determining requirements</secondary></indexterm> Determining Space Requirements @@ -216,7 +216,7 @@ The size is determined by the total number of servers in the Lustre file system cluster(s) that are managed by the MGS.
-
+
<indexterm> <primary>setup</primary> <secondary>MDT</secondary> @@ -260,7 +260,7 @@ <para>2 KiB/inode x 100 million inodes x 2 = 400 GiB ldiskfs MDT</para> </informalexample> <para>For details about formatting options for ldiskfs MDT and OST file - systems, see <xref linkend="dbdoclet.ldiskfs_mdt_mkfs"/>.</para> + systems, see <xref linkend="ldiskfs_mdt_mkfs"/>.</para> <note> <para>If the median file size is very small, 4 KB for example, the MDT would use as much space for each file as the space used on the OST, @@ -327,10 +327,10 @@ specify a different average file size (number of total inodes for a given OST size) to reduce file system overhead and minimize file system check time. - See <xref linkend="dbdoclet.ldiskfs_ost_mkfs"/> for more details.</para> + See <xref linkend="ldiskfs_ost_mkfs"/> for more details.</para> </section> </section> - <section xml:id="dbdoclet.ldiskfs_mkfs_opts"> + <section xml:id="ldiskfs_mkfs_opts"> <title> <indexterm> <primary>ldiskfs</primary> @@ -377,7 +377,7 @@ <screen>--mkfsoptions='backing fs options'</screen> <para>For other <literal>mkfs.lustre</literal> options, see the Linux man page for <literal>mke2fs(8)</literal>.</para> - <section xml:id="dbdoclet.ldiskfs_mdt_mkfs"> + <section xml:id="ldiskfs_mdt_mkfs"> <title><indexterm> <primary>inodes</primary> <secondary>MDS</secondary> @@ -430,7 +430,7 @@ read or written for each MDT inode access. </para> </section> - <section xml:id="dbdoclet.ldiskfs_ost_mkfs"> + <section xml:id="ldiskfs_ost_mkfs"> <title><indexterm> <primary>inodes</primary> <secondary>OST</secondary> @@ -546,7 +546,7 @@ if substantial errors are detected and need to be repaired.</para> </note> <para>For further details about optimizing MDT and OST file systems, - see <xref linkend="dbdoclet.ldiskfs_raid_opts"/>.</para> + see <xref linkend="ldiskfs_raid_opts"/>.</para> </section> </section> <section remap="h3"> @@ -597,7 +597,7 @@ <tbody> <row> <entry> - <para><anchor xml:id="dbdoclet.max_mdt_count" xreflabel=""/>Maximum number of MDTs</para> + <para><anchor xml:id="max_mdt_count" xreflabel=""/>Maximum number of MDTs</para> </entry> <entry> <para>256</para> @@ -614,7 +614,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_ost_count" xreflabel=""/>Maximum number of OSTs</para> + <para><anchor xml:id="max_ost_count" xreflabel=""/>Maximum number of OSTs</para> </entry> <entry> <para>8150</para> @@ -628,7 +628,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_ost_size" xreflabel=""/>Maximum OST size</para> + <para><anchor xml:id="max_ost_size" xreflabel=""/>Maximum OST size</para> </entry> <entry> <para>1024TiB (ldiskfs), 1024TiB (ZFS)</para> @@ -651,7 +651,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_client_count" xreflabel=""/>Maximum number of clients</para> + <para><anchor xml:id="max_client_count" xreflabel=""/>Maximum number of clients</para> </entry> <entry> <para>131072</para> @@ -664,7 +664,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_filesysem_size" xreflabel=""/>Maximum size of a single file system</para> + <para><anchor xml:id="max_filesysem_size" xreflabel=""/>Maximum size of a single file system</para> </entry> <entry> <para>2EiB or larger</para> @@ -678,7 +678,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_stripe_count" xreflabel=""/>Maximum stripe count</para> + <para><anchor xml:id="max_stripe_count" xreflabel=""/>Maximum stripe count</para> </entry> <entry> <para>2000</para> @@ -702,7 +702,7 @@ </row> <row> <entry> - <para><anchor 
xml:id="dbdoclet.max_stripe_size" xreflabel=""/>Maximum stripe size</para> + <para><anchor xml:id="max_stripe_size" xreflabel=""/>Maximum stripe size</para> </entry> <entry> <para>< 4 GiB</para> @@ -714,7 +714,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.min_stripe_size" xreflabel=""/>Minimum stripe size</para> + <para><anchor xml:id="min_stripe_size" xreflabel=""/>Minimum stripe size</para> </entry> <entry> <para>64 KiB</para> @@ -729,7 +729,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_object_size" xreflabel=""/>Maximum single object size</para> + <para><anchor xml:id="max_object_size" xreflabel=""/>Maximum single object size</para> </entry> <entry> <para>16TiB (ldiskfs), 256TiB (ZFS)</para> @@ -744,7 +744,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_file_size" xreflabel=""/>Maximum file size</para> + <para><anchor xml:id="max_file_size" xreflabel=""/>Maximum file size</para> </entry> <entry> <para>16 TiB on 32-bit systems</para> @@ -767,7 +767,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_directory_size" xreflabel=""/>Maximum number of files or subdirectories in a single directory</para> + <para><anchor xml:id="max_directory_size" xreflabel=""/>Maximum number of files or subdirectories in a single directory</para> </entry> <entry> <para>600M-3.8B files (ldiskfs), 16T (ZFS)</para> @@ -791,7 +791,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_file_count" xreflabel=""/>Maximum number of files in the file system</para> + <para><anchor xml:id="max_file_count" xreflabel=""/>Maximum number of files in the file system</para> </entry> <entry> <para>4 billion (ldiskfs), 256 trillion (ZFS) <emphasis>per MDT</emphasis></para> @@ -816,7 +816,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_filename_size" xreflabel=""/>Maximum length of a filename</para> + <para><anchor xml:id="max_filename_size" xreflabel=""/>Maximum length of a filename</para> </entry> <entry> <para>255 bytes (filename)</para> @@ -828,7 +828,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_pathname_size" xreflabel=""/>Maximum length of a pathname</para> + <para><anchor xml:id="max_pathname_size" xreflabel=""/>Maximum length of a pathname</para> </entry> <entry> <para>4096 bytes (pathname)</para> @@ -839,7 +839,7 @@ </row> <row> <entry> - <para><anchor xml:id="dbdoclet.max_open_files" xreflabel=""/>Maximum number of open files for a Lustre file system</para> + <para><anchor xml:id="max_open_files" xreflabel=""/>Maximum number of open files for a Lustre file system</para> </entry> <entry> <para>No limit</para> @@ -858,7 +858,7 @@ </table> <para> </para> </section> - <section xml:id="dbdoclet.mds_oss_memory"> + <section xml:id="mds_oss_memory"> <title><indexterm><primary>setup</primary><secondary>memory</secondary></indexterm>Determining Memory Requirements This section describes the memory requirements for each Lustre file system component.
@@ -1007,7 +1007,7 @@
-
+
<indexterm> <primary>setup</primary> <secondary>network</secondary> diff --git a/SystemConfigurationUtilities.xml b/SystemConfigurationUtilities.xml index 9a3bced..a279080 100644 --- a/SystemConfigurationUtilities.xml +++ b/SystemConfigurationUtilities.xml @@ -6,40 +6,40 @@ <para>This chapter includes system configuration utilities and includes the following sections:</para> <itemizedlist> <listitem> - <para><xref linkend="dbdoclet.config_e2scan"/></para> + <para><xref linkend="config_e2scan"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.l_getidentity"/></para> + <para><xref linkend="l_getidentity"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.lctl"/></para> + <para><xref linkend="lctl"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_ll_decode_filter_fid"/></para> + <para><xref linkend="config_ll_decode_filter_fid"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_recover_lostfound_objs"/></para> + <para><xref linkend="config_recover_lostfound_objs"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_llog_reader"/></para> + <para><xref linkend="config_llog_reader"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_llstat"/></para> + <para><xref linkend="config_llstat"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_llverdev"/></para> + <para><xref linkend="config_llverdev"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_llshowmount"/></para> + <para><xref linkend="config_llshowmount"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_lst"/></para> + <para><xref linkend="config_lst"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_lustre_rmmod_sh"/></para> + <para><xref linkend="config_lustre_rmmod_sh"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.lustre_rsync"/></para> + <para><xref linkend="lustre_rsync"/></para> </listitem> <listitem> <para><xref linkend="mkfs.lustre"/></para> @@ -48,19 +48,19 @@ <para><xref linkend="mount.lustre"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.plot_llstat"/></para> + <para><xref linkend="plot_llstat"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_routerstat"/></para> + <para><xref linkend="config_routerstat"/></para> </listitem> <listitem> <para><xref linkend="tunefs.lustre"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.config_additional_utility"/></para> + <para><xref linkend="config_additional_utility"/></para> </listitem> </itemizedlist> - <section xml:id="dbdoclet.config_e2scan"> + <section xml:id="config_e2scan"> <title><indexterm><primary>e2scan</primary></indexterm>e2scan The e2scan utility is an ext2 file system-modified inode scan program. The e2scan program uses libext2fs to find inodes with ctime or mtime newer than a given time and prints out their pathname. Use e2scan to efficiently generate lists of files that have been modified. The e2scan tool is included in the e2fsprogs package, located at: @@ -130,7 +130,7 @@
-  <section xml:id="dbdoclet.l_getidentity">
+  <section xml:id="l_getidentity">
<indexterm><primary>l_getidentity</primary></indexterm> l_getidentity The l_getidentity tool normally handles Lustre user/group mapping @@ -146,7 +146,7 @@ l_getidentity values for that UID, and writes this into the mdt.*.identity_info parameter file. The list of supplementary groups is cached in the kernel to avoid repeated - upcalls. See for more + upcalls. See for more details. The l_getidentity utility can also be run directly for debugging purposes to ensure that the UID mapping for a @@ -200,7 +200,7 @@ l_getidentity
-  <section xml:id="dbdoclet.lctl">
+  <section xml:id="lctl">
<indexterm><primary>lctl</primary></indexterm> lctl The lctl utility is used for root control and configuration. With lctl you can directly control Lustre via an ioctl interface, allowing various configuration, maintenance and debugging features to be accessed. @@ -264,7 +264,7 @@ $ lctl conf_param testfs.llite.max_read_ahead_mb=16 lctl list_param [-R] [-F] obdtype.obdname.* For example, to list all of the parameters on the MDT: oss# lctl list_param -RF mdt - For more information on using lctl to set temporary and permanent parameters, see . + For more information on using lctl to set temporary and permanent parameters, see . Network Configuration @@ -516,7 +516,7 @@ $ lctl conf_param testfs.llite.max_read_ahead_mb=16 Sets a permanent configuration parameter for any device via the MGS. This command must be run on the MGS node. All writeable parameters under lctl list_param (e.g. lctl list_param -F osc.*.* | grep =) can be permanently set using lctl conf_param, but the format is slightly different. For conf_param, the device is specified first, then the obdtype. Wildcards are not supported. Additionally, failover nodes may be added (or removed), and some system-wide parameters may be set as well (sys.at_max, sys.at_min, sys.at_extra, sys.at_early_margin, sys.at_history, sys.timeout, sys.ldlm_timeout). For system-wide parameters, device is ignored. - For more information on setting permanent parameters and lctl conf_param command examples, see (Setting Permanent Parameters). + For more information on setting permanent parameters and lctl conf_param command examples, see (Setting Permanent Parameters). @@ -803,7 +803,7 @@ lctl > quit - + @@ -811,7 +811,7 @@ lctl > quit
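As a concrete sketch of the list_param/conf_param split described above (the testfs file system name is an assumption):

  oss# lctl list_param -F osc.*.*                          # enumerate parameters; '=' marks writeable entries
  mgs# lctl conf_param testfs.sys.timeout=40               # system-wide parameter: device first, obdtype ignored
  mgs# lctl conf_param testfs.llite.max_read_ahead_mb=16   # permanent client tunable, set from the MGS node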
-  <section xml:id="dbdoclet.config_ll_decode_filter_fid">
+  <section xml:id="config_ll_decode_filter_fid">
<indexterm><primary>ll_decode_filter_fid</primary></indexterm> ll_decode_filter_fid The ll_decode_filter_fid utility displays the @@ -833,7 +833,7 @@ ll_decode_filter_fid and is not accessed or modified by Lustre after that time. The OST object ID (objid) may be useful in case of OST directory corruption, though LFSCK can normally reconstruct the entire OST object - directory tree, see for details. + directory tree, see for details. The MDS FID can be useful to determine which MDS inode an OST object is (or was) used by. The stripe index can be used in conjunction with other OST objects to reconstruct the layout of a file even if the MDT @@ -853,10 +853,10 @@ root@oss1# ll_decode_filter_fid #12345[4,5,8]
See Also - +
-  <section xml:id="dbdoclet.config_recover_lostfound_objs">
+  <section xml:id="config_recover_lostfound_objs">
<indexterm><primary>ll_recover_lost_found_objs</primary></indexterm> ll_recover_lost_found_objs The ll_recover_lost_found_objs utility was @@ -901,7 +901,7 @@ Timestamp Read-delta ReadRate Write-delta WriteRate /proc/fs/lustre/obdfilter/ostname/stats
-  <section xml:id="dbdoclet.config_llog_reader">
+  <section xml:id="config_llog_reader">
<indexterm><primary>llog_reader</primary></indexterm> llog_reader The llog_reader utility translates a Lustre configuration log into human-readable form. @@ -927,7 +927,7 @@ llog_reader /tmp/tfs-client
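As an illustrative sketch, a configuration log can first be copied out of a stopped MGT with debugfs and then decoded (the /dev/sda device and tfs-client log name are assumptions):

  mgs# debugfs -c -R 'dump CONFIGS/tfs-client /tmp/tfs-client' /dev/sda   # read-only extract of the config llog
  mgs# llog_reader /tmp/tfs-client                                        # translate the records into readable form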
-  <section xml:id="dbdoclet.config_llstat">
+  <section xml:id="config_llstat">
<indexterm><primary>llstat</primary></indexterm> llstat The llstat utility displays Lustre statistics. @@ -1023,7 +1023,7 @@ llstat
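For example, a minimal sketch that re-displays OST counters on an interval (the stats path is an assumption for a file system named testfs):

  oss# llstat -i 5 /proc/fs/lustre/obdfilter/testfs-OST0000/stats   # print deltas every 5 seconds until interrupted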
-  <section xml:id="dbdoclet.config_llverdev">
+  <section xml:id="config_llverdev">
<indexterm><primary>llverdev</primary></indexterm>
llverdev
The llverdev utility verifies that a block device is functioning properly over its full size.
@@ -1161,7 +1161,7 @@ write complete
read complete
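A hedged usage sketch (the device name is an assumption, and the -p/-l letters for the partial and full tests should be confirmed against the synopsis; note that the test overwrites data on the device):

  oss# llverdev -v -p /dev/sdc   # partial test: verify sample blocks spread across the device
  oss# llverdev -v -l /dev/sdc   # long test: write and verify every block, destroying existing data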
-  <section xml:id="dbdoclet.config_llshowmount">
+  <section xml:id="config_llshowmount">
<indexterm><primary>lshowmount</primary></indexterm> lshowmount The lshowmount utility shows Lustre exports. @@ -1233,7 +1233,7 @@ lshowmount /proc/fs/lustre/obdfilter/server/exports/uuid/nid
-  <section xml:id="dbdoclet.config_lst">
+  <section xml:id="config_lst">
<indexterm><primary>lst</primary></indexterm> lst The lst utility starts LNet self-test. @@ -1283,7 +1283,7 @@ lst stat servers & sleep 30; kill $! lst end_session
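Expanding the fragments above into one self-contained session sketch (the NID ranges and group names are assumptions):

  export LST_SESSION=$$                                # console and daemons share this session id
  lst new_session read_write
  lst add_group servers 192.168.10.[8,10,12-16]@tcp    # test nodes must have the lnet_selftest module loaded
  lst add_group readers 192.168.1.[1-253/2]@o2ib
  lst add_batch bulk_read
  lst add_test --batch bulk_read --from readers --to servers brw read size=1M
  lst run bulk_read
  lst stat servers & sleep 30; kill $!                 # sample server-side statistics for 30 seconds
  lst end_session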
-  <section xml:id="dbdoclet.config_lustre_rmmod_sh">
+  <section xml:id="config_lustre_rmmod_sh">
<indexterm><primary>lustre_rmmod.sh</primary></indexterm> lustre_rmmod.sh The lustre_rmmod.sh utility removes all Lustre and LNet modules (assuming no Lustre services are running). It is located in /usr/bin. @@ -1291,7 +1291,7 @@ lustre_rmmod.sh The lustre_rmmod.sh utility does not work if Lustre modules are being used or if you have manually run the lctl network up command.
-  <section xml:id="dbdoclet.lustre_rsync">
+  <section xml:id="lustre_rsync">
<indexterm><primary>lustre_rsync</primary></indexterm> lustre_rsync The lustre_rsync utility synchronizes (replicates) a Lustre file system to a target file system. @@ -1530,7 +1530,7 @@ mkfs.lustre defined by this command. When the file system is created, parameters can simply be added as a --param option to the mkfs.lustre command. See . + linkend="tuning_params_mkfs_lustre"/>. @@ -2243,7 +2243,7 @@ mount.lustre control over the starting conditions. This mount option also prevents OI scrub from occurring automatically when OI inconsistency is detected (see - ). + ). @@ -2278,7 +2278,7 @@ mount.lustre - + @@ -2286,7 +2286,7 @@ mount.lustre
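For instance, an initial replication pass followed by an incremental pass, as a sketch (the testfs name, changelog user cl1, and all paths are assumptions):

  client# lustre_rsync --source=/mnt/lustre --target=/mnt/target \
          --mdt=testfs-MDT0000 --user=cl1 --statuslog sync.log --verbose
  client# lustre_rsync --statuslog sync.log   # later runs read the saved parameters from the status log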
-  <section xml:id="dbdoclet.plot_llstat">
+  <section xml:id="plot_llstat">
<indexterm><primary>plot-llstat</primary></indexterm> plot-llstat The plot-llstat utility plots Lustre statistics. @@ -2348,7 +2348,7 @@ plot-llstat plot-llstat log 3
-  <section xml:id="dbdoclet.config_routerstat">
+  <section xml:id="config_routerstat">
<indexterm><primary>routerstat</primary></indexterm> routerstat The routerstat utility prints Lustre router statistics. @@ -2739,7 +2739,7 @@ tunefs.lustre - + @@ -2747,7 +2747,7 @@ tunefs.lustre
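As a sketch of the tunefs.lustre parameter workflow (the device and NID are assumptions; tunefs.lustre modifies an existing target rather than reformatting it):

  oss# tunefs.lustre --dryrun /dev/sda                                    # print current parameters, change nothing
  oss# tunefs.lustre --param="failover.node=192.168.0.13@tcp0" /dev/sda   # persist a failover NID on the target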
-  <section xml:id="dbdoclet.config_additional_utility">
+  <section xml:id="config_additional_utility">
<indexterm><primary>utilities</primary><secondary>system config</secondary></indexterm> Additional System Configuration Utilities This section describes additional system configuration utilities for Lustre. diff --git a/TroubleShootingRecovery.xml b/TroubleShootingRecovery.xml index a1369ad..b9c97ce 100644 --- a/TroubleShootingRecovery.xml +++ b/TroubleShootingRecovery.xml @@ -9,26 +9,26 @@ - + - + - + - + -
+
<indexterm> <primary>recovery</primary> @@ -93,7 +93,7 @@ root# e2fsck -fn /dev/sda # don't fix file system, just check for corruption : root# e2fsck -fp /dev/sda # fix errors with prudent answers (usually <literal>yes</literal>)</screen> </section> - <section xml:id="dbdoclet.recover_lustreFS_corruption"> + <section xml:id="recover_lustreFS_corruption"> <title> <indexterm> <primary>recovery</primary> @@ -135,7 +135,7 @@ root# e2fsck -fp /dev/sda # fix errors with prudent answers (usually <literal> identify and process orphan objects found on MDTs as well.</para> </section> </section> - <section xml:id="dbdoclet.recover_unavailable_ost"> + <section xml:id="recover_unavailable_ost"> <title> <indexterm> <primary>recovery</primary> @@ -181,7 +181,7 @@ root# e2fsck -fp /dev/sda # fix errors with prudent answers (usually <literal> <xref linkend="lustrerecovery" />(Version-based Recovery).</para> </note> </section> - <section xml:id="dbdoclet.lfsckadmin"> + <section xml:id="lfsckadmin"> <title> <indexterm> <primary>recovery</primary> @@ -205,7 +205,7 @@ root# e2fsck -fp /dev/sda # fix errors with prudent answers (usually <literal> an internal table called the OI Table. An OI Scrub traverses the OI table and makes corrections where necessary. An OI Scrub is required after restoring from a file-level MDT backup ( - <xref linkend="dbdoclet.backup_device" />), or in case the OI Table is + <xref linkend="backup_device" />), or in case the OI Table is otherwise corrupted. Later phases of LFSCK will add further checks to the Lustre distributed file system state. LFSCK namespace scanning can verify and repair the directory FID-in-dirent and LinkEA consistency.</para> @@ -822,7 +822,7 @@ root# e2fsck -fp /dev/sda # fix errors with prudent answers (usually <literal> <title>Description The namespace component is responsible for checks - described in . The + described in . The procfs interface for this component is in the MDD layer, named lfsck_namespace. To show the status of this @@ -1482,7 +1482,7 @@ lctl set_param obdfilter.${FSNAME}-${OST_target}.lfsck_speed_limit=
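Pulling the pieces above together, a minimal LFSCK sketch (the testfs name and target indices are assumptions):

  mds# lctl lfsck_start -M testfs-MDT0000 -t namespace         # run the namespace component on one MDT
  mds# lctl get_param mdd.testfs-MDT0000.lfsck_namespace       # status via the MDD-layer interface named above
  oss# lctl set_param obdfilter.testfs-OST0000.lfsck_speed_limit=1000   # cap scanning at 1000 objects per second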
-
+
Auto scrub
Description diff --git a/UnderstandingFailover.xml b/UnderstandingFailover.xml index 4fdc423..017a44e 100644 --- a/UnderstandingFailover.xml +++ b/UnderstandingFailover.xml @@ -219,7 +219,7 @@
-
+
<indexterm> <primary>failover</primary> diff --git a/UnderstandingLustre.xml b/UnderstandingLustre.xml index 3d1f0b6..f8c6ff6 100644 --- a/UnderstandingLustre.xml +++ b/UnderstandingLustre.xml @@ -732,7 +732,7 @@ <emphasis role="italic">striped</emphasis> across the objects using RAID 0, and each object is stored on a different OST. (For more information about how striping is implemented in a Lustre file system, see - <xref linkend="dbdoclet.lustre_striping" />.</para> + <xref linkend="lustre_striping" />.</para> <figure xml:id="Fig1.3_LayoutEAonMDT"> <title>Layout EA on MDT pointing to file data on OSTs @@ -791,7 +791,7 @@ available space of all the OSTs. -
+
<indexterm> <primary>Lustre</primary> diff --git a/UpgradingLustre.xml b/UpgradingLustre.xml index f498246..16aaeb4 100644 --- a/UpgradingLustre.xml +++ b/UpgradingLustre.xml @@ -12,7 +12,7 @@ <itemizedlist> <listitem> <para> - <xref linkend="dbdoclet.interop_upgrade_requirement" /> + <xref linkend="interop_upgrade_requirement" /> </para> </listitem> <listitem> @@ -28,7 +28,7 @@ </para> </listitem> </itemizedlist> - <section xml:id="dbdoclet.interop_upgrade_requirement"> + <section xml:id="interop_upgrade_requirement"> <title> <indexterm> <primary>Lustre</primary> @@ -114,7 +114,7 @@ </listitem> <listitem> <para>Shut down the entire filesystem by following - <xref linkend="dbdoclet.shutdownLustre"/></para> + <xref linkend="shutdownLustre"/></para> </listitem> <listitem> <para>Upgrade the Linux operating system on all servers to a compatible @@ -333,7 +333,7 @@ client# lfs setdirstripe -c 1 -i -1 <replaceable>/testfs/some_dir</replaceable> </note> <para>If you have a problem upgrading a Lustre file system, see <xref xmlns:xlink="http://www.w3.org/1999/xlink" - linkend="dbdoclet.reporting_lustre_problem"/>for ways to get help.</para> + linkend="reporting_lustre_problem"/>for ways to get help.</para> </section> <section xml:id="Upgrading_2.x.x"> <title> @@ -453,7 +453,7 @@ client# lfs setdirstripe -c 1 -i -1 <replaceable>/testfs/some_dir</replaceable> </orderedlist> <para>If you have a problem upgrading a Lustre file system, see <xref xmlns:xlink="http://www.w3.org/1999/xlink" - linkend="dbdoclet.reporting_lustre_problem" />for some suggestions for + linkend="reporting_lustre_problem" />for some suggestions for how to get help.</para> </section> </chapter> diff --git a/UserUtilities.xml b/UserUtilities.xml index ab2b87a..2668220 100644 --- a/UserUtilities.xml +++ b/UserUtilities.xml @@ -757,7 +757,7 @@ lfs help </entry> <entry> <para>Name of the pre-defined pool of OSTs (see - <xref linkend="dbdoclet.lctl" />) that will be used + <xref linkend="lctl" />) that will be used for striping. The <literal>stripe_cnt</literal>, <literal>stripe_size</literal> and @@ -1033,11 +1033,11 @@ $ lfs setstripe --pool my_pool /mnt/lustre/dir <section remap="h5"> <title>See Also - +
-  <section xml:id="dbdoclet.lfs_migrate">
+  <section xml:id="lfs_migrate">
<indexterm> <primary>lfs_migrate</primary> diff --git a/ZFSSnapshots.xml b/ZFSSnapshots.xml index 49b5edf..0203c1a 100644 --- a/ZFSSnapshots.xml +++ b/ZFSSnapshots.xml @@ -7,25 +7,25 @@ contains following sections:</para> <itemizedlist> <listitem> - <para><xref linkend="dbdoclet.zfssnapshotIntro"/></para> + <para><xref linkend="zfssnapshotIntro"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.zfssnapshotConfig"/></para> + <para><xref linkend="zfssnapshotConfig"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.zfssnapshotOps"/></para> + <para><xref linkend="zfssnapshotOps"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.zfssnapshotBarrier"/></para> + <para><xref linkend="zfssnapshotBarrier"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.zfssnapshotLogs"/></para> + <para><xref linkend="zfssnapshotLogs"/></para> </listitem> <listitem> - <para><xref linkend="dbdoclet.zfssnapshotLustreLogs"/></para> + <para><xref linkend="zfssnapshotLustreLogs"/></para> </listitem> </itemizedlist> - <section xml:id="dbdoclet.zfssnapshotIntro"> + <section xml:id="zfssnapshotIntro"> <title><indexterm><primary>Introduction</primary> </indexterm>Introduction Snapshots provide fast recovery of files from a previously created @@ -42,7 +42,7 @@ faster than from any offline backup or remote replica. However, note that snapshots do not improve storage reliability and are just as exposed to hardware failure as any other storage volume. -
+
<indexterm><primary>Introduction</primary> <secondary>Requirements</secondary></indexterm>Requirements @@ -62,7 +62,7 @@ their system’s actual size and usage.
-  <section xml:id="dbdoclet.zfssnapshotConfig">
+  <section xml:id="zfssnapshotConfig">
<indexterm><primary>feature overview</primary> <secondary>configuration</secondary></indexterm>Configuration @@ -90,10 +90,10 @@ host-ost2 - OST0001 zfs:myfs-ost2/ost2 file system setup, you are ready to create a file system snapshot.
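The host-to-target mapping shown above comes from the snapshot configuration file; a minimal sketch, assuming the conventional /etc/ldev.conf location and a file system named myfs (the mdt1/ost1 entries are hypothetical):

  # /etc/ldev.conf: local host, foreign host, target label, device path
  host-mdt1 - myfs-MDT0000 zfs:myfs-mdt1/mdt1   # hypothetical MDT entry
  host-ost1 - myfs-OST0000 zfs:myfs-ost1/ost1   # hypothetical OST entry
  host-ost2 - myfs-OST0001 zfs:myfs-ost2/ost2   # following the excerpt above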
-  <section xml:id="dbdoclet.zfssnapshotOps">
+  <section xml:id="zfssnapshotOps">
<indexterm><primary>operations</primary> </indexterm>Snapshot Operations -
+
<indexterm><primary>operations</primary>
<secondary>create</secondary></indexterm>Creating a Snapshot
@@ -181,7 +181,7 @@ comment] <-F | --fsname fsname> [-h | --help] <-n | --name ssname>
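For example, following the synopsis above and the snapshot naming used later in this chapter (the comment text is an assumption):

  mgs# lctl snapshot_create -F myfs -n snapshot_20170602 -c "weekly checkpoint"   # barrier is applied automatically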
-
+
<indexterm><primary>operations</primary>
<secondary>delete</secondary></indexterm>Delete a Snapshot
@@ -250,7 +250,7 @@ comment] <-F | --fsname fsname> [-h | --help] <-n | --name ssname>
-
+
<indexterm><primary>operations</primary>
<secondary>mount</secondary></indexterm>Mounting a Snapshot
@@ -332,7 +332,7 @@ comment] <-F | --fsname fsname> [-h | --help] <-n | --name ssname>
Finally, mount the snapshot on the client:
mount -t lustre -o ro $MGS_nid:/$ss_fsname $local_mount_point
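Putting the server- and client-side steps together as one sketch (the /mnt/snap mount point is an assumption; snapshots mount read-only):

  mgs# lctl snapshot_mount -F myfs -n snapshot_20170602          # mount the snapshot across the servers first
  mgs# lctl snapshot_list -F myfs -n snapshot_20170602           # look up the snapshot's own fsname ($ss_fsname)
  client# mount -t lustre -o ro $MGS_nid:/$ss_fsname /mnt/snap   # then mount that fsname on the client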
-
+
<indexterm><primary>operations</primary>
<secondary>unmount</secondary></indexterm>Unmounting a Snapshot
@@ -402,7 +402,7 @@ comment] <-F | --fsname fsname> [-h | --help] <-n | --name ssname>
For example:
lctl snapshot_umount -F myfs -n snapshot_20170602
-
+
<indexterm><primary>operations</primary>
<secondary>list</secondary></indexterm>List Snapshots
@@ -474,7 +474,7 @@ comment] <-F | --fsname fsname> [-h | --help] <-n | --name ssname>
-
+
<indexterm><primary>operations</primary>
<secondary>modify</secondary></indexterm>Modify Snapshot Attributes
@@ -560,7 +560,7 @@ comment] <-F | --fsname fsname> [-h | --help] <-n | --name ssname>
-  <section xml:id="dbdoclet.zfssnapshotBarrier">
+  <section xml:id="zfssnapshotBarrier">
<indexterm><primary>barrier</primary>
</indexterm>Global Write Barriers
Snapshots are non-atomic across multiple MDTs and OSTs, which means
@@ -583,7 +583,7 @@
lctl snapshot_create. So, explicit use of the barrier is
not required when using snapshots but is included here as an option to
quiet the file system before a snapshot is created.
-
+
<indexterm><primary>barrier</primary>
<secondary>impose</secondary></indexterm>Impose Barrier
@@ -598,7 +598,7 @@ where the default timeout is 30 seconds. If the command is successful, there
will be no output from the command. Otherwise, an error message will be
printed.
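For example (the myfs name is an assumption):

  mgs# lctl barrier_freeze myfs 15   # impose a write barrier that expires after 15 seconds instead of the default 30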
-
+
<indexterm><primary>barrier</primary>
<secondary>remove</secondary></indexterm>Remove Barrier
@@ -612,7 +612,7 @@ where the default timeout is 30 seconds. If the command is successful, there
will be no output from the command. Otherwise, an error message will be
printed.
-
+
<indexterm><primary>barrier</primary>
<secondary>query</secondary></indexterm>Query Barrier
@@ -738,7 +738,7 @@
The barrier will be expired after 7 seconds
If the barrier is in 'freezing_p1', 'freezing_p2' or 'frozen' status,
then the remaining lifetime will also be returned.
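For example (the myfs name is an assumption):

  mgs# lctl barrier_stat myfs   # report the barrier state; in freezing/frozen states the remaining lifetime is shown too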
-
+
<indexterm><primary>barrier</primary> <secondary>rescan</secondary></indexterm>Rescan Barrier @@ -756,7 +756,7 @@ where the default timeout is 30 seconds. error message will be printed.
-  <section xml:id="dbdoclet.zfssnapshotLogs">
+  <section xml:id="zfssnapshotLogs">
<indexterm><primary>logs</primary> </indexterm>Snapshot Logs A log of all snapshot activity can be found in the following file: @@ -780,7 +780,7 @@ Mon Mar 21 19:47:12 2016 (20897:jt_snapshot_destroy:1312:scratch:ssh): Destroy snapshot lss_2_0 successfully with force <disable>
-  <section xml:id="dbdoclet.zfssnapshotLustreLogs">
+  <section xml:id="zfssnapshotLustreLogs">
<indexterm><primary>configlogs</primary> </indexterm>Lustre Configuration Logs A snapshot is independent from the original file system that it is