From: Andreas Dilger Date: Wed, 15 Jul 2020 17:11:45 +0000 (-0600) Subject: LUDOC-11 misc: update URLs from http to https X-Git-Url: https://git.whamcloud.com/?a=commitdiff_plain;h=2259dc267dffd0f97e80e033e8ccaa27767a433b;p=doc%2Fmanual.git LUDOC-11 misc: update URLs from http to https Change URLs in the manual to use https:// from http:// and update those URLs to new targets where they no longer exist. Since some of the URL updates are also related to cross-references to labels within the document, update those labels to have useful names, rather than the automatically-generated reference names. Signed-off-by: Andreas Dilger Change-Id: Id3f195ceebbe269a10e3218835f855ff1d3ebbe5 Reviewed-on: https://review.whamcloud.com/39584 Reviewed-by: Peter Jones Tested-by: jenkins --- diff --git a/BenchmarkingTests.xml b/BenchmarkingTests.xml index ae5a217..5cc0177 100644 --- a/BenchmarkingTests.xml +++ b/BenchmarkingTests.xml @@ -100,7 +100,7 @@ Download the Lustre I/O kit (lustre-iokit)from: - http://downloads.whamcloud.com/ + https://downloads.whamcloud.com/ diff --git a/ConfiguringFailover.xml b/ConfiguringFailover.xml index a478d5c..8f029bb 100644 --- a/ConfiguringFailover.xml +++ b/ConfiguringFailover.xml @@ -9,19 +9,22 @@ - + - - + For an overview of failover functionality in a Lustre file system, see . -
+
<indexterm> <primary>High availability</primary> <see>failover</see> @@ -41,15 +44,16 @@ <primary>failover</primary> <secondary>power control device</secondary> </indexterm>Selecting Power Equipment - Failover in a Lustre file system requires the use of a remote power control (RPC) - mechanism, which comes in different configurations. For example, Lustre server nodes may be - equipped with IPMI/BMC devices that allow remote power control. In the past, software or - even “sneakerware” has been used, but these are not recommended. For recommended devices, - refer to the list of supported RPC devices on the website for the PowerMan cluster power - management utility: + Failover in a Lustre file system requires the use of a remote + power control (RPC) mechanism, which comes in different configurations. + For example, Lustre server nodes may be equipped with IPMI/BMC devices + that allow remote power control. In the past, software or even + “sneakerware” has been used, but these are not recommended. For + recommended devices, refer to the list of supported RPC devices on the + website for the PowerMan cluster power management utility: http://code.google.com/p/powerman/wiki/SupportedDevs + xlink:href="https://linux.die.net/man/7/powerman-devices"> + https://linux.die.net/man/7/powerman-devices
<indexterm> @@ -61,13 +65,14 @@ nodes and the risk of unrecoverable data corruption. A variety of power management tools will work. Two packages that have been commonly used with the Lustre software are PowerMan and Linux-HA (aka. STONITH ).</para> - <para>The PowerMan cluster power management utility is used to control RPC devices from a - central location. PowerMan provides native support for several RPC varieties and Expect-like - configuration simplifies the addition of new devices. The latest versions of PowerMan are + <para>The PowerMan cluster power management utility is used to control + RPC devices from a central location. PowerMan provides native support + for several RPC varieties and Expect-like configuration simplifies + the addition of new devices. The latest versions of PowerMan are available at: </para> <para><link xmlns:xlink="http://www.w3.org/1999/xlink" - xlink:href="http://code.google.com/p/powerman/" - >http://code.google.com/p/powerman/</link></para> + xlink:href="https://github.com/chaos/powerman"> + https://github.com/chaos/powerman</link></para> <para>STONITH, or “Shoot The Other Node In The Head”, is a set of power management tools provided with the Linux-HA package prior to Red Hat Enterprise Linux 6. Linux-HA has native support for many power control devices, is extensible (uses Expect scripts to automate @@ -87,22 +92,23 @@ up failover with Pacemaker, see:</para> <itemizedlist> <listitem> - <para>Pacemaker Project website: <link xmlns:xlink="http://www.w3.org/1999/xlink" - xlink:href="http://clusterlabs.org/"><link xlink:href="http://clusterlabs.org/" - >http://clusterlabs.org/</link></link></para> + <para>Pacemaker Project website: + <link xmlns:xlink="http://www.w3.org/1999/xlink" + xlink:href="https://clusterlabs.org/">https://clusterlabs.org/ + </link></para> </listitem> <listitem> - <para>Article <emphasis role="italic">Using Pacemaker with a Lustre File - System</emphasis>: <link xmlns:xlink="http://www.w3.org/1999/xlink" - xlink:href="https://wiki.whamcloud.com/display/PUB/Using+Pacemaker+with+a+Lustre+File+System" - ><link - xlink:href="https://wiki.whamcloud.com/display/PUB/Using+Pacemaker+with+a+Lustre+File+System" - >https://wiki.whamcloud.com/display/PUB/Using+Pacemaker+with+a+Lustre+File+System</link></link></para> + <para>Article + <emphasis role="italic">Using Pacemaker with a Lustre File System + </emphasis>: + <link xmlns:xlink="http://www.w3.org/1999/xlink" + xlink:href="https://wiki.whamcloud.com/display/PUB/Using+Pacemaker+with+a+Lustre+File+System"> + https://wiki.whamcloud.com/display/PUB/Using+Pacemaker+with+a+Lustre+File+System</link></para> </listitem> </itemizedlist> </section> </section> - <section xml:id="dbdoclet.50438188_92688"> + <section xml:id="failover_setup"> <title><indexterm> <primary>failover</primary> <secondary>setup</secondary> @@ -138,10 +144,12 @@ /dev/sdb</screen></para> <para>More than two potential service nodes can be designated for a target. The target can then be mounted on any of the designated service nodes.</para> - <para>When HA is configured on a storage target, the Lustre software enables multi-mount - protection (MMP) on that storage target. MMP prevents multiple nodes from simultaneously - mounting and thus corrupting the data on the target. For more about MMP, see <xref - xmlns:xlink="http://www.w3.org/1999/xlink" linkend="managingfailover"/>.</para> + <para>When HA is configured on a storage target, the Lustre software + enables multi-mount protection (MMP) on that storage target. 
MMP prevents + multiple nodes from simultaneously mounting and thus corrupting the data + on the target. For more about MMP, see + <xref xmlns:xlink="http://www.w3.org/1999/xlink" + linkend="managingfailover"/>.</para> <para>If the MGT has been formatted with multiple service nodes designated, this information must be conveyed to the Lustre client in the mount command used to mount the file system. In the example below, NIDs for two MGSs that have been designated as service nodes for the MGT @@ -153,23 +161,26 @@ the client attempts to access data on a target, it will try the NID for each specified service node until it connects to the target.</para> </section> - <section xml:id="section_tnq_kbr_xl"> + <section xml:id="administering_failover"> <title>Administering Failover in a Lustre File System For additional information about administering failover features in a Lustre file system, see: - + - - - diff --git a/ConfiguringQuotas.xml b/ConfiguringQuotas.xml index 46c744c..c681fb1 100644 --- a/ConfiguringQuotas.xml +++ b/ConfiguringQuotas.xml @@ -213,7 +213,7 @@ (e2fsprogs is not needed with ZFS backend). In general, we recommend to use the latest e2fsprogs version available on - http://downloads.whamcloud.com/public/e2fsprogs/. + https://downloads.whamcloud.com/public/e2fsprogs/. The ldiskfs OSD relies on the standard Linux quota to maintain accounting information on disk. As a consequence, the Linux kernel running on the Lustre servers using ldiskfs backend must have diff --git a/Glossary.xml b/Glossary.xml index 570a7eb..ae84fbb 100644 --- a/Glossary.xml +++ b/Glossary.xml @@ -277,7 +277,7 @@ to interact with Lustre software features, such as setting or checking file striping or per-target free space. For more details, see . + linkend="userutilities.lfs" />. diff --git a/III_LustreAdministration.xml b/III_LustreAdministration.xml index fe6737f..a5d2d3a 100644 --- a/III_LustreAdministration.xml +++ b/III_LustreAdministration.xml @@ -88,26 +88,46 @@ - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + +
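The ConfiguringFailover.xml hunks above quote the manual's instructions for designating multiple service nodes on a storage target and for passing failover MGS NIDs to the client mount command. As a rough illustration only (not part of this patch; the file system name, NIDs, and device below are hypothetical), the commands being described look like this:

    # Format an OST that either of two OSS nodes may serve; both MGS NIDs
    # are listed so the target can reach the MGS through either one.
    mkfs.lustre --fsname=testfs \
        --mgsnode=10.0.0.1@tcp0 --mgsnode=10.0.0.2@tcp0 \
        --servicenode=10.0.0.3@tcp0 --servicenode=10.0.0.4@tcp0 \
        --ost --index=0 /dev/sdb

    # Mount a client, listing both MGS service-node NIDs separated by ":";
    # the client tries each NID in turn until it connects.
    mount -t lustre 10.0.0.1@tcp0:10.0.0.2@tcp0:/testfs /mnt/testfs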