nodes.</para>
</listitem>
</itemizedlist>
- <screen>options lnet ip2nets="tcp 198.129.135.* 192.128.88.98; \
- elan 198.128.88.98 198.129.135.3; \
- routes='tcp 1022@elan # Elan NID of router; \
- elan 198.128.88.98@tcp # TCP NID of router '</screen>
+ <screen>options lnet 'ip2nets="tcp 198.129.135.* 192.128.88.98; \
+ elan 198.128.88.98 198.129.135.3;"' \
+ 'routes="tcp 1022@elan # Elan NID of router; \
+ elan 198.128.88.98@tcp # TCP NID of router"'</screen>
</section>
<section remap="h4">
<title><indexterm><primary>configuring</primary>
<para>Failover in a Lustre file system requires the use of a remote
power control (RPC) mechanism, which comes in different configurations.
For example, Lustre server nodes may be equipped with IPMI/BMC devices
- that allow remote power control. In the past, software or even
- “sneakerware” has been used, but these are not recommended. For
- recommended devices, refer to the list of supported RPC devices on the
- website for the PowerMan cluster power management utility:</para>
+ that allow remote power control.
+ For recommended devices, refer to the list of supported RPC devices
+ on the website for the PowerMan cluster power management utility:</para>
<para><link xmlns:xlink="http://www.w3.org/1999/xlink"
- xlink:href="https://linux.die.net/man/7/powerman-devices">
- https://linux.die.net/man/7/powerman-devices</link></para>
+ xlink:href="https://github.com/chaos/powerman/tree/master/etc/devices">
+ https://github.com/chaos/powerman/tree/master/etc/devices</link></para>
</section>
<section remap="h3">
<title><indexterm>
<primary>failover</primary>
<secondary>power management software</secondary>
</indexterm>Selecting Power Management Software</title>
- <para>Lustre failover requires RPC and management capability to verify that a failed node is
- shut down before I/O is directed to the failover node. This avoids double-mounting the two
- nodes and the risk of unrecoverable data corruption. A variety of power management tools
- will work. Two packages that have been commonly used with the Lustre software are PowerMan
- and Linux-HA (aka. STONITH ).</para>
+ <para>Lustre failover requires RPC and management capability to verify
+ that a failed node is off before I/O is directed to the failover node.
+ This avoids double-mounting the two nodes and the risk of
+ unrecoverable data corruption.
+ A variety of power management tools will work.
+ Two packages that have been commonly used with the Lustre software
+ are PowerMan and Pacemaker.</para>
<para>The PowerMan cluster power management utility is used to control
RPC devices from a central location. PowerMan provides native support
for several RPC varieties and Expect-like configuration simplifies
the addition of new devices.</para>
<para><link xmlns:xlink="http://www.w3.org/1999/xlink"
xlink:href="https://github.com/chaos/powerman">
https://github.com/chaos/powerman</link></para>
- <para>STONITH, or “Shoot The Other Node In The Head”, is a set of power management tools
- provided with the Linux-HA package prior to Red Hat Enterprise Linux 6. Linux-HA has native
- support for many power control devices, is extensible (uses Expect scripts to automate
- control), and provides the software to detect and respond to failures. With Red Hat
- Enterprise Linux 6, Linux-HA is being replaced in the open source community by the
- combination of Corosync and Pacemaker. For Red Hat Enterprise Linux subscribers, cluster
- management using CMAN is available from Red Hat.</para>
+ <para>STONITH, or "Shoot The Other Node In The Head",
+ is used in conjunction with High Availability node management.
+ It is implemented by Pacemaker to ensure that a peer node that may
+ be importing a shared storage device has been powered off, so that
+ it cannot corrupt the shared storage by continuing to run.
+ </para>
</section>
<section>
<title><indexterm>
<para>The per-target configuration is relayed to the MGS at mount time. Some rules related to
this are:<itemizedlist>
<listitem>
- <para> When a target is <emphasis role="underline"><emphasis role="italic"
- >initially</emphasis></emphasis> mounted, the MGS reads the configuration
- information from the target (such as mgt vs. ost, failnode, fsname) to configure the
- target into a Lustre file system. If the MGS is reading the initial mount configuration,
- the mounting node becomes that target's “primary” node.</para>
+ <para> When a target is
+ <emphasis role="italic">initially</emphasis> mounted,
+ the MGS reads the configuration information from the target
+ (such as mgt vs. ost, failnode, fsname) to configure the
+ target into a Lustre file system.
+ If the MGS is reading the initial mount configuration,
+ the mounting node becomes that target's "primary" node.
+ </para>
</listitem>
<listitem>
<para>When a target is <emphasis role="underline"><emphasis role="italic"
multi-rail configuration. For the dynamic peer discovery capability
introduced in Lustre Release 2.11.0, please see
<xref linkend="lnet_config.dynamic_discovery" />.</para>
- <para>When configuring peers, use the <literal>–-prim_nid</literal>
+ <para>When configuring peers, use the <literal>--prim_nid</literal>
option to specify the key or primary nid of the peer node. Then
follow that with the <literal>--nid</literal> option to specify a
set of comma separated NIDs.</para>
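<para>As a minimal sketch (the NIDs are illustrative), the following
adds a peer whose primary NID is <literal>10.10.10.2@tcp</literal> and
that is also reachable via two additional NIDs:</para>
<screen>lnetctl peer add --prim_nid 10.10.10.2@tcp \
        --nid 10.10.10.3@tcp,10.10.10.4@tcp</screen>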
redundancy and fault-tolerance. However, despite the expense and
complexity of these storage systems, storage failures still occur, and
before release 2.11, Lustre could not be more reliable than the
- individual storage and servers’ components on which it was based. The
+ individual storage and server components on which it was based. The
Lustre file system had no mechanism to mitigate storage hardware
failures and files would become inaccessible if a server was inaccessible
or otherwise out of service.</para>
- 4: { l_ost_idx: 7, l_fid: [0x100070000:0x2:0x0] }
- 5: { l_ost_idx: 2, l_fid: [0x100020000:0x2:0x0] }</screen>
<para> The first mirror has 4MB stripe size and two stripes across OSTs in
- the “flash” OST pool. The second mirror has 4MB stripe size inherited
- from the first mirror, and stripes across all of the available OSTs in
- the “archive” OST pool.</para>
+ the <literal>flash</literal> OST pool.
+ The second mirror has 4MB stripe size inherited from the first mirror,
+ and stripes across all of the available OSTs in the
+ <literal>archive</literal> OST pool.
+ </para>
<para>As mentioned above, it is recommended to use the
<literal>--pool|-p</literal> option (one of the
<literal>lfs setstripe</literal> options) with OST pools configured with
</itemizedlist>
<para>The following command creates a mirrored file with 3 PFL mirrors:
</para>
- <screen>client# lfs mirror create -N -E 4M -p flash --flags=prefer -E eof -c 2 \
--N -E 16M -S 8M -c 4 -p archive --comp-flags=prefer -E eof -c -1 \
--N -E 32M -c 1 -p none -E eof -c -1 /mnt/testfs/file2</screen>
+<screen>
+client# lfs mirror create -N -E 4M -p flash --flags=prefer -E eof -c 2 \
+ -N -E 16M -S 8M -c 4 -p archive -E eof -c -1 \
+ -N -E 32M -c 1 -p archive2 -E eof -c -1 /mnt/testfs/file2
+</screen>
<para>The following command displays the layout information of the
mirrored file <literal>/mnt/testfs/file2</literal>:</para>
<screen>client# lfs getstripe /mnt/testfs/file2
lcme_id: 131075
lcme_mirror_id: 2
- lcme_flags: init,prefer
+ lcme_flags: init
lcme_extent.e_start: 0
lcme_extent.e_end: 16777216
lmm_stripe_count: 4
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 0
+ lmm_pool: archive2
lmm_objects:
- - 0: { l_ost_idx: 0, l_fid: [0x100000000:0x3:0x0] }
+ - 0: { l_ost_idx: 8, l_fid: [0x3400000000:0x3:0x0] }
lcme_id: 196614
lcme_mirror_id: 3
lmm_stripe_size: 8388608
lmm_pattern: raid0
lmm_layout_gen: 0
- lmm_stripe_offset: -1</screen>
+ lmm_stripe_offset: -1
+ lmm_pool: archive2
+ </screen>
<para>For the first mirror, the first component inherits the stripe count
and stripe size from filesystem-wide default values. The second
component inherits the stripe size and OST pool from the first
component, and has two stripes. Both of the components are allocated
- from the “flash” OST pool. Also, the flag <literal>prefer</literal> is
+ from the <literal>flash</literal> OST pool.
+ Also, the flag <literal>prefer</literal> is
applied to all the components of the first mirror, which tells the
client to read data from those components whenever they are available.
</para>
<para>For the second mirror, the first component has an 8MB stripe size
- and 4 stripes across OSTs in the “archive” OST pool. The second
- component inherits the stripe size and OST pool from the first
- component, and stripes across all of the available OSTs in the “archive”
- OST pool. The flag <literal>prefer</literal> is only applied to the
- first component.</para>
+ and 4 stripes across OSTs in the <literal>archive</literal> OST pool.
+ The second component inherits the stripe size and OST pool from the
+ first component, and stripes across all of the available OSTs in the
+ <literal>archive</literal> OST pool.
+ </para>
<para>For the third mirror, the first component inherits the stripe size
of 8MB from the last component of the second mirror, and has one single
- stripe. The OST pool name is cleared and inherited from the parent
- directory (if it was set with OST pool name). The second component
- inherits stripe size from the first component, and stripes across all of
- the available OSTs.</para>
+ stripe. The OST pool name is set to <literal>archive2</literal>.
+ The second component inherits stripe size from the first component,
+ and stripes across all of the available OSTs in that pool.</para>
</section>
<section xml:id="flr.operations.extendmirror">
<title>Extending a Mirrored File</title>
<para>The above layout information showed that data were written into the
first component of mirror with ID <literal>1</literal>, and mirrors with
ID <literal>2</literal> and <literal>3</literal> were marked with
- “stale” flag.</para>
+ the <literal>stale</literal> flag.</para>
<para>Resynchronizing the stale mirror with ID <literal>2</literal> for
the mirrored file <literal>/mnt/testfs/file1</literal>:</para>
- <screen># lfs mirror resync --only 2 /mnt/testfs/file1
+<screen>
+# lfs mirror resync --only 2 /mnt/testfs/file1
# lfs getstripe /mnt/testfs/file1
/mnt/testfs/file1
lcm_layout_gen: 7
......
</screen>
<para>The above layout information showed that after resynchronizing, the
- “stale” flag was removed from mirror with ID <literal>2</literal>.</para>
+ <literal>stale</literal> flag was removed from mirror with ID
+ <literal>2</literal>.
+ </para>
<para>Resynchronizing all of the stale mirrors for the mirrored file
<literal>/mnt/testfs/file1</literal>:</para>
<screen># lfs mirror resync /mnt/testfs/file1
</tgroup>
</informaltable>
<para><emphasis role="strong">Note:</emphasis></para>
- <para>Mirror components that have “stale” or “offline” flags will be
- skipped and not verified.</para>
+ <para>Mirror components that have <literal>stale</literal> or
+ <literal>offline</literal> flags will be skipped and not verified.
+ </para>
<para><emphasis role="strong">Examples:</emphasis></para>
<para>The following command verifies that each mirror of a mirrored file
contains exactly the same data:</para>
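<para>For example (the file name is illustrative):</para>
<screen>client# lfs mirror verify /mnt/testfs/file1</screen>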
files not matching <replaceable>state</replaceable>. Only one
state can be specified.</para>
<para>Valid state names are:</para>
- <para><literal>ro</literal> – indicates the mirrored file is in
+ <para><literal>ro</literal> - indicates the mirrored file is in
read-only state. All of the mirrors contain the up-to-date
data.</para>
- <para><literal>wp</literal> – indicates the mirrored file is in
+ <para><literal>wp</literal> - indicates the mirrored file is in
a state of being written.</para>
- <para><literal>sp</literal> – indicates the mirrored file is in
+ <para><literal>sp</literal> - indicates the mirrored file is in
a state of being resynchronized.</para>
</entry>
</row>
<para><literal>--daemonize | -d</literal></para>
</entry>
<entry>
- <para>Optional flag to “daemonize” the program. In daemon
- mode, the utility will scan, process the changelog records
- and sync the LSoM xattr for files periodically.</para>
+ <para>Optional flag to run the program in the background.
+ In daemon mode, the utility will scan and process the
+ changelog records and sync the LSoM xattr for files
+ periodically.</para>
</entry>
</row>
<row>
You need at least one copytool per ARCHIVE ID. When using the POSIX copytool,
this ID is defined using the <literal>--archive</literal> switch.</para>
-<para>For example: if a single Lustre file system is bound to 2 different HSMs (A and B,) ARCHIVE ID “1” can be chosen for HSM A and ARCHIVE ID “2” for HSM B. If you start 3 copytool instances for ARCHIVE ID 1, all of them will use Archive ID “1”. The same rule applies for copytool instances dealing with the HSM B, using Archive ID “2”. </para>
-
-<para>When issuing HSM requests, you can use the <literal>--archive</literal> switch
-to choose the backend you want to use. In this example, file <literal>foo</literal> will be
-archived into backend ARCHIVE ID “5”:</para>
+<para>For example, if a single Lustre file system is bound to two
+different HSMs (A and B), ARCHIVE ID "1" can be chosen for
+HSM A and ARCHIVE ID "2" for HSM B.
+If you start three copytool instances for ARCHIVE ID 1,
+all of them will use Archive ID "1".
+The same rule applies for copytool instances dealing with HSM B,
+using Archive ID "2".</para>
+
+<para>
+ When issuing HSM requests, you can use the <literal>--archive</literal>
+ switch to choose the backend you want to use.
+ In this example, file <literal>foo</literal> will be
+ archived into backend ARCHIVE ID "5":
+</para>
<screen>$ lfs hsm_archive --archive=5 /mnt/lustre/foo</screen>
<indexterm><primary>HSM</primary><secondary>changelogs</secondary></indexterm>change logs
</title>
- <para>A changelog record type “HSM“ was added for Lustre file system
-logs that relate to HSM events.</para>
-<screen>16HSM 13:49:47.469433938 2013.10.01 0x280 t=[0x200000400:0x1:0x0]</screen>
+ <para>A changelog record type <literal>HSM</literal> was
+ added for Lustre file system logs that relate to HSM events.
+ </para>
+<screen>
+16HSM 13:49:47.469433938 2013.10.01 0x280 t=[0x200000400:0x1:0x0]
+</screen>
- <para>Two items of information are available for each HSM record: the
-FID of the modified file and a bit mask. The bit mask codes the following
-information (lowest bits first):</para>
+ <para>Two items of information are available for each HSM
+ record: the FID of the modified file and a bit mask.
+ The bit mask codes the following information (low bits first):
+ </para>
<itemizedlist>
<listitem>
<listitem>
<para>Mount the MDTs.</para>
<screen>
-mds# mount –t lustre <replaceable>/dev/mdt4_blockdevice</replaceable> /mnt/mdt4
+mds# mount -t lustre <replaceable>/dev/mdt4_blockdevice</replaceable> /mnt/mdt4
</screen>
</listitem>
<listitem>
<section xml:id="lustremaint.determineOST">
<title><indexterm><primary>maintenance</primary><secondary>identifying OST host</secondary></indexterm>
Determining Which Machine is Serving an OST </title>
- <para>In the course of administering a Lustre file system, you may need to determine which
- machine is serving a specific OST. It is not as simple as identifying the machine’s IP
- address, as IP is only one of several networking protocols that the Lustre software uses and,
- as such, LNet does not use IP addresses as node identifiers, but NIDs instead. To identify the
- NID that is serving a specific OST, run one of the following commands on a client (you do not
- need to be a root user):
+ <para>In the course of administering a Lustre file system,
+ you may need to determine which machine is serving a specific OST.
+ It is not as simple as identifying the machine's IP address,
+ as IP is only one of several networking protocols that the Lustre
+ software uses and, as such, LNet does not use IP addresses as node
+ identifiers, but NIDs instead.
+ To identify the NID that is serving a specific OST, run one of the
+ following commands on a client (you do not need to be a root user):
<screen>client$ lctl get_param osc.<replaceable>fsname</replaceable>-OST<replaceable>number</replaceable>*.ost_conn_uuid</screen>
For example:
<screen>client$ lctl get_param osc.*-OST0000*.ost_conn_uuid
mdt.fs-MDT0000.readonly=1
client$ touch test_file
-touch: cannot touch ‘test_file’: Read-only file system
+touch: cannot touch 'test_file': Read-only file system
mds# lctl set_param mdt.fs-MDT0000.readonly=0
mdt.fs-MDT0000.readonly=0</screen>
covers all Lustre server nodes. So the very first step when working with
nodemaps is to create such a group with both properties
<literal>admin</literal> and <literal>trusted</literal> set. It is
- recommended to give this group an explicit label such as “TrustedSystems”
+ recommended to give this group an explicit label such as
+ <literal>TrustedSystems</literal>
or some identifier that makes the association clear.</para>
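<para>As a minimal sketch, such a group could be created with the
following <literal>lctl</literal> commands on the MGS (the group name
and NID range are illustrative):</para>
<screen>mgs# lctl nodemap_add TrustedSystems
mgs# lctl nodemap_add_range --name TrustedSystems \
       --range 192.168.10.[1-10]@tcp
mgs# lctl nodemap_modify --name TrustedSystems --property admin --value 1
mgs# lctl nodemap_modify --name TrustedSystems --property trusted --value 1</screen>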
<para>Let's consider a deployment where the server nodes are in the NID
<emphasis role="bold">requires</emphasis> a group that covers all Lustre
server nodes, with both properties <literal>admin</literal> and
<literal>trusted</literal> set. It is recommended to give this group an
- explicit label such as “TrustedSystems” or some identifier that makes the
- association clear.</para>
+ explicit label such as <literal>TrustedSystems</literal> or some
+ identifier that makes the association clear.</para>
<section xml:id="lustrenodemap.alteringproperties.managing" remap="h3">
<title>Managing the Properties</title>
<warning>
<para>Lustre server nodes <emphasis role="bold">must</emphasis> be in a
policy group with both these properties set to 1. It is recommended to
- use a policy group labeled “TrustedSystems” or some identifier that
- makes the association clear.</para>
+ use a policy group labeled <literal>TrustedSystems</literal>
+ or some identifier that makes the association clear.</para>
</warning>
<para>If a policy group has the <literal>admin</literal>
<screen>
mds0# mkfs.lustre --fsname=testfs --mdt --mgs \
--servicenode=192.168.10.2@tcp0 \
- -–servicenode=192.168.10.1@tcp0 /dev/sda1
+ --servicenode=192.168.10.1@tcp0 /dev/sda1
mds0# mount -t lustre /dev/sda1 /mnt/test/mdt
oss0# mkfs.lustre --fsname=testfs --servicenode=192.168.10.20@tcp0 \
--servicenode=192.168.10.21@tcp0 --ost --index=0 \
Two particularly useful baseline statistics are:</para>
<itemizedlist>
<listitem>
- <para><literal>brw_stats</literal> – Histogram data characterizing I/O requests to the
+ <para><literal>brw_stats</literal> - Histogram data characterizing I/O requests to the
OSTs. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
linkend="monitor_ost_block_io_stream"/>.</para>
</listitem>
<listitem>
- <para><literal>rpc_stats</literal> – Histogram data showing information about RPCs made by
+ <para><literal>rpc_stats</literal> - Histogram data showing information about RPCs made by
clients. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
linkend="MonitoringClientRCPStream"/>.</para>
</listitem>
</section>
<section>
<title>Tuning Directory Statahead and AGL</title>
- <para>Many system commands, such as <literal>ls –l</literal>,
+ <para>Many system commands, such as <literal>ls -l</literal>,
<literal>du</literal>, and <literal>find</literal>, traverse a
directory sequentially. To make these commands run efficiently, the
directory statahead can be enabled to improve the performance of
with the MGS, and there is no other node to notify clients in case of MGS restart, the MGS
will disable IR for a period when it first starts. This interval is configurable, as shown
in <xref linkend="imperativerecoveryparameters"/>.</para>
- <para>Because of the increased importance of the MGS in recovery, it is strongly recommended that the MGS node be separate from the MDS. If the MGS is co-located on the MDS node, then in case of MDS/MGS failure there will be no IR notification for the MDS restart, and clients will always use timeout-based recovery for the MDS. IR notification would still be used in the case of OSS failure and recovery.</para>
- <para>Unfortunately, it’s impossible for the MGS to know how many clients have been successfully notified or whether a specific client has received the restarting target information. The only thing the MGS can do is tell the target that, for example, all clients are imperative recovery-capable, so it is not necessary to wait as long for all clients to reconnect. For this reason, we still require a timeout policy on the target side, but this timeout value can be much shorter than normal recovery. </para>
+ <para>Because of the increased importance of the MGS in recovery,
+ it is strongly recommended that the MGS node be separate from the MDS.
+ If the MGS is co-located on the MDS node, then in case of MDS/MGS
+ failure there will be no IR notification for the MDS restart,
+ and clients will always use timeout-based recovery for the MDS.
+ IR notification would still be used in the case of OSS failure
+ and recovery.
+ </para>
+ <para>Unfortunately, it is impossible for the MGS to know how many
+ clients have been successfully notified or whether a specific
+ client has received the restarting target information.
+ The only thing the MGS can do is tell the target that,
+ for example, all clients are imperative recovery-capable,
+ so it is not necessary to wait as long for all clients to reconnect.
+ For this reason, we still require a timeout policy on the target side,
+ but this timeout value can be much shorter than normal recovery.
+ </para>
</section>
<section remap="h3" xml:id="imperativerecoveryparameters">
<title><indexterm><primary>imperative recovery</primary><secondary>Tuning</secondary></indexterm>Tuning Imperative Recovery</title>
<para>Imperative recovery has a default parameter set which means it can work without any extra configuration. However, the default parameter set only fits a generic configuration. The following sections discuss the configuration items for imperative recovery.</para>
<section remap="h5">
<title>ir_factor</title>
- <para>Ir_factor is used to control targets’ recovery window. If imperative recovery is enabled, the recovery timeout window on the restarting target is calculated by: <emphasis>new timeout = recovery_time * ir_factor / 10 </emphasis>Ir_factor must be a value in range of [1, 10]. The default value of ir_factor is 5. The following example will set imperative recovery timeout to 80% of normal recovery timeout on the target testfs-OST0000: </para>
+ <para>
+ <literal>ir_factor</literal> is used to control each target's
+ recovery window. If imperative recovery is enabled,
+ the recovery timeout window on the restarting target is calculated as:
+ <emphasis>new timeout = recovery_time * ir_factor / 10</emphasis>.
+ <literal>ir_factor</literal> must be a value in the range [1, 10].
+ The default value of <literal>ir_factor</literal> is 5.
+ The following example will set the imperative recovery timeout to 80%
+ of the normal recovery timeout on the target testfs-OST0000:</para>
<screen>lctl conf_param obdfilter.testfs-OST0000.ir_factor=8</screen>
<note> <para>If this value is too small for the system, clients may be unnecessarily evicted.</para> </note>
<para>You can read the current value of the parameter in the standard manner with <emphasis>lctl get_param</emphasis>:</para>
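<para>For example, on the OSS (the output shown is representative):</para>
<screen>oss# lctl get_param obdfilter.testfs-OST0000.ir_factor
obdfilter.testfs-OST0000.ir_factor=8</screen>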
<para>Imperative recovery can also be disabled on the client side with the same mount option:</para>
<screen># mount -t lustre -onoir mymgsnid@tcp:/testfs /mnt/testfs</screen>
<note><para>When a single client is deactivated in this manner, the MGS will deactivate imperative recovery for the whole cluster. IR-enabled clients will still get notification of target restart, but targets will not be allowed to shorten the recovery window. </para></note>
- <para>You can also disable imperative recovery globally on the MGS by writing `state=disabled’ to the controlling procfs entry</para>
+ <para>You can also disable imperative recovery globally on the MGS
+ by writing <literal>state=disabled</literal> to the parameter:</para>
<screen># lctl set_param mgs.MGS.live.testfs="state=disabled"</screen>
- <para>The above command will disable imperative recovery for file system named <emphasis>testfs</emphasis></para>
+ <para>The above command will disable imperative recovery for the
+ file system named <emphasis>testfs</emphasis>.</para>
</section>
<section remap="h5">
<title>Checking Imperative Recovery State - MGS</title>
- <para>You can get the imperative recovery state from the MGS. Let’s take an example and explain states of imperative recovery:</para>
+ <para>You can get the imperative recovery state from the MGS.
+ Let us take an example and explain the states of imperative recovery:</para>
<screen>
[mgs]$ lctl get_param mgs.MGS.live.testfs
...
</section>
<section remap="h5">
<title>Checking Imperative Recovery State - client</title>
- <para>A `client’ in IR means a Lustre client or a MDT. You can get the IR state on any node which
- running client or MDT, those nodes will always have an MGC running. An example from a
- client:</para>
+ <para>A 'client' in IR means a Lustre client or an MDT.
+ You can get the IR state on any node running a client or MDT;
+ those nodes will always have an MGC running.
+ An example from a client:
+ </para>
<screen>
[client]$ lctl get_param mgc.*.ir_state
mgc.MGC192.168.127.6@tcp.ir_state=
</emphasis></para>
</entry>
<entry>
- <para><literal>imperative_recovery</literal>can be ON or OFF. If it’s OFF state, then IR is disabled by administrator at mount time. Normally this should be ON state.</para>
+ <para><literal>imperative_recovery</literal> can be ON or OFF.
+ If it is in the OFF state, IR was disabled at mount time.
+ Normally this should be in the ON state.</para>
</entry>
</row>
<row>
keyring infrastructure to maintain keys as well as to perform the
upcall from kernel space to userspace for key
negotiation/establishment. The GSS keyring establishes a key type
- (see “request-key(8)”) named <literal>lgssc</literal> when the Lustre
- <literal>ptlrpc_gss</literal> kernel module is loaded. When a security
- context must be established it creates a key and uses the
- <literal>request-key</literal> binary in an upcall to establish the
- key. This key will look for the configuration file in
+ (see <emphasis>request-key(8)</emphasis>) named <literal>lgssc</literal>
+ when the Lustre <literal>ptlrpc_gss</literal> kernel module is loaded.
+ When a security context must be established it creates a key and uses the
+ <literal>request-key</literal> binary in an upcall to establish the key.
+ This key will look for the configuration file in
<literal>/etc/request-key.d</literal> with the name
<replaceable>keytype</replaceable>.conf, for Lustre this is
<literal>lgssc.conf</literal>.</para>
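<para>A minimal <literal>/etc/request-key.d/lgssc.conf</literal>
typically contains a single line similar to the following (the path to
<literal>lgss_keyring</literal> assumes a default installation):</para>
<screen>create lgssc * * /usr/sbin/lgss_keyring %o %k %t %d %c %u %g %T %P %S</screen>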
</tgroup>
</table>
<para>All keys for Lustre use the <literal>user</literal> type for
- keys and are attached to the user’s keyring. This is not
- configurable. Below is an example showing how to list the user’s
- keyring, load a key file, read the key, and clear the key from the
- kernel keyring.</para>
+ keys and are attached to the user's keyring.
+ This is not configurable.
+ Below is an example showing how to list the user's keyring, load
+ a key file, read the key, and clear the key from the kernel keyring.
+ </para>
<screen><emphasis role='bold'>client#</emphasis> keyctl show
Session Keyring
17053352 --alswrv 0 0 keyring: _ses
the key found in the user keyring matching the description, the nodemap
name is read from the key, hashed with SHA256, and sent to the server.
</para>
- <para>Servers look up the client’s NID to determine which nodemap the NID
- is associated with and sends the nodemap name to
+ <para>Servers look up the client's NID to determine which nodemap
+ the NID is associated with and send the nodemap name to
<literal>lsvcgssd</literal>. The <literal>lsvcgssd</literal> daemon
verifies whether the HMAC equals the nodemap value sent by the client.
- This prevents forgery and invalidates the key when a client’s NID is not
- associated with the nodemap name defined on the servers.</para>
+ This prevents forgery and invalidates the key when a client's NID
+ is not associated with the nodemap name defined on the servers.</para>
<para>It is not required to activate the Nodemap feature in order for SSK
to perform client NID to nodemap name lookups.</para>
</section>
<para>Configure the <literal>lsvcgss</literal> daemon on the MDS and
OSS. Set the <literal>LSVCGSSDARGS</literal> variable in
<literal>/etc/sysconfig/lsvcgss</literal> on the MDS to
- <literal>‘-s -m’</literal>. On the OSS, set the
+ "<literal>-s -m</literal>". On the OSS, set the
<literal>LSVCGSSDARGS</literal> variable in
<literal>/etc/sysconfig/lsvcgss</literal> to
- <literal>‘-s -o’</literal></para>
+ "<literal>-s -o</literal>".</para>
</listitem>
<listitem>
<para>Start the <literal>lsvcgssd</literal> daemon on the MDS and
<screen>cli1# mount -t lustre -o mgssec=skpi,skpath=/secure_directory 172.16.0.1@tcp:/testfs /mnt/testfs</screen>
</listitem>
<listitem>
- <para>Verify that client1’s MGC connection is using the SSK mechanism
- and <literal>skpi</literal> security flavor. See
+ <para>Verify that client1's MGC connection is using the SSK
+ mechanism and <literal>skpi</literal> security flavor. See
<xref linkend="ssksptlrpcctx"/>.</para>
</listitem>
</orderedlist>
<section xml:id='ssksptlrpcctx'>
<title>Viewing Secure PtlRPC Contexts</title>
<para>From the client (or servers which have mgc, osc, mdc contexts) you
- can view info regarding all users’ contexts and the flavor in use for an
- import. For user’s contexts (srpc_context), SSK and gssnull only support
+ can view info regarding all users' contexts and the flavor in use for
+ an import.
+ For users' contexts (srpc_context), SSK and gssnull only support
a single root UID so there should only be one context. The other file in
the import (srpc_info) has additional sptlrpc details. The
<literal>rpc</literal> and <literal>bulk</literal> flavors allow you to
forwarded to the next hop. The three different buffer sizes accommodate
different size messages.</para>
<para>If a message arrives that can fit in a tiny buffer then a tiny
- buffer is used, if a message doesn’t fit in a tiny buffer, but fits in a
+ buffer is used; if a message does not fit in a tiny buffer but fits in a
small buffer, then a small buffer is used. Finally, if a message does not
fit in either a tiny buffer or a small buffer, a large buffer is
used.</para>
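<para>The number of each buffer type can be tuned at runtime with
<literal>lnetctl set</literal> on the router; the values below are only
illustrative:</para>
<screen>router# lnetctl set tiny_buffers 2048
router# lnetctl set small_buffers 16384
router# lnetctl set large_buffers 1024</screen>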
</listitem>
<listitem>
<para>
- <literal>avoid_asym_router_failure</literal>– When set to 1,
+ <literal>avoid_asym_router_failure</literal> - When set to 1,
this parameter adds the additional requirement that for a route to be
considered up the gateway of the route must have at least one NI up on
the remote network of the route.
<para>All clients and all servers must get two rails of bandwidth.</para>
</listitem>
</itemizedlist>
- <screen>ip2nets="o2ib0(ib0),o2ib2(ib1) 192.168.[0-1].[0-252/2] \
+ <screen>ip2nets="o2ib0(ib0),o2ib2(ib1) 192.168.[0-1].[0-252/2] \
#even servers;\
o2ib1(ib0),o2ib3(ib1) 192.168.[0-1].[1-253/2] \
#odd servers;\
the same file system beyond normal Unix permissions/ACLs, even when users
on the clients may have root access. Those tenants share the same file
system, but they are isolated from each other: they cannot access or even
- see each other’s files, and are not aware that they are sharing common
- file system resources.</para>
+ see each other's files, and are not aware that they are sharing
+ common file system resources.</para>
<para>Lustre Isolation leverages the Fileset feature
(<xref linkend="SystemConfigurationUtilities.fileset" />)
to mount only a subdirectory of the filesystem rather than the root
Checking SELinux Policy Enforced by Lustre Clients</title>
<para>SELinux provides a mechanism in Linux for supporting Mandatory Access
Control (MAC) policies. When a MAC policy is enforced, the operating
- system’s (OS) kernel defines application rights, firewalling applications
- from compromising the entire system. Regular users do not have the ability to
- override the policy.</para>
+ system's (OS) kernel defines application rights,
+ firewalling applications from compromising the entire system.
+ Regular users do not have the ability to override the policy.</para>
<para>One purpose of SELinux is to protect the
<emphasis role="bold">OS</emphasis> from privilege escalation. To that
extent, SELinux defines confined and unconfined domains for processes and
eavesdropped during network transfer.</para>
</listitem>
</itemizedlist>
- <para>Kerberos uses the “kernel keyring” client upcall mechanism.</para>
+ <para>
+ Kerberos uses the "kernel keyring" client upcall mechanism.
+ </para>
</section>
<section xml:id="managingSecurity.kerberos.securityflavor">
<title>Security Flavor</title>
[--component-end|-E end1] [STRIPE_OPTIONS]
[--component-end|-E end2] [STRIPE_OPTIONS] ... <replaceable>filename</replaceable></screen>
<para>The <literal>-E</literal> option is used to specify the end offset
- (in bytes or using a suffix “kMGTP”, e.g. 256M) of each component, and
- it also indicates the following <literal>STRIPE_OPTIONS</literal> are
- for this component. Each component defines the stripe pattern of the
+ (in bytes or using a suffix <literal>kMGTP</literal>, e.g. 256M)
+ of each component,
+ and it also indicates the following <literal>STRIPE_OPTIONS</literal>
+ are for this component.
+ Each component defines the stripe pattern of the
file in the range of [start, end). The first component must start from
offset 0 and all components must be adjacent to each other; no holes
are allowed, so each extent will start at the end of the previous extent.
A <literal>-1</literal> end offset or <literal>eof</literal> indicates
- this is the last component extending to the end of file.</para>
+ this is the last component extending to the end of the file.
+ </para>
<para><emphasis role="bold">Example</emphasis></para>
<screen>$ lfs setstripe -E 4M -c 1 -E 64M -c 4 -E -1 -c -1 -i 4 \
/mnt/testfs/create_comp</screen>
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 4</screen>
- <note><para>Only the first component’s OST objects of the PFL file are
+ <note><para>Only the first component's OST objects of the PFL file are
instantiated when the layout is being set. Other instantiation is
delayed to later write/truncate operations.</para></note>
<para>If we write 128M data to this PFL file, the second and third
/mnt/testfs/testdir/4comp
/mnt/testfs/testdir/dir_3comp/2comp</screen>
<note><para>Since <literal>lfs find</literal> uses
- "<literal>!</literal>" to do negative search, we don’t support
+ "<literal>!</literal>" to do negative search, we don't support
flag <literal>^init</literal> here.</para></note>
</section>
</section>
applications are writing to them.</para>
<para>Whereas PFL delays the instantiation of some components until an IO
operation occurs on this region, SEL allows splitting such non-instantiated
- components in two parts: an “extendable” component and an “extension”
- component. The extendable component is a regular PFL component, covering
+ components in two parts: an <emphasis>extendable</emphasis> component and
+ an <emphasis>extension</emphasis> component.
+ The extendable component is a regular PFL component, covering
just a part of the region, which is small originally. The extension (or SEL)
component is a new component type which is always non-instantiated and
unassigned, covering the other part of the region. When a write reaches this
ways:</para>
<orderedlist numeration="arabic">
<listitem>
- <para>Extension: continue on the same OSTs – used when not low on space
+ <para>Extension: continue on the same OST objects when not low on space
on any of the OSTs of the current component; a particular extent is
granted to the extendable component.</para>
</listitem>
<listitem>
- <para>Spill over: switch to next component OSTs – it is used only for
- not the last component when <emphasis>at least one</emphasis>
+ <para>Spill over: switch to the next component's OSTs; used only
+ when the current component is not the last one and
+ <emphasis>at least one</emphasis>
of the current OSTs is low on space; the whole region of the SEL
component moves to the next component and the SEL component is removed
in its turn.</para>
</listitem>
<listitem>
<para>Repeating: create a new component with the same layout but on
- free OSTs – it is used only for the last component when <emphasis>
+ free OSTs, used only for the last component when <emphasis>
at least one</emphasis> of the current OSTs is low on space; a new
component has the same layout but instantiated on different OSTs (from
the same pool) which have enough space.</para>
</listitem>
<listitem>
- <para>Forced extension: continue with the current component OSTs despite
- the low on space condition – it is used only for the last component when
- a repeating attempt detected low on space condition as well - spillover
- is impossible and there is no sense in the repeating.</para>
+ <para>Forced extension: continue with the current component's OSTs
+ despite the low-on-space condition; used only for the last component,
+ when a repeating attempt has detected a low-on-space condition on the
+ other OSTs as well, so spillover is impossible and repeating is
+ pointless.</para>
</listitem>
<listitem>
<para>Each spill event increments the <literal>spill_hit</literal>
<para>The <literal>-z</literal> option is added to specify the extension
size to search for. The files which have any component with the
extension size matching the given criteria are printed out. As always,
+ the <literal>+</literal> and <literal>-</literal> signs can be used to
+ match extension sizes above or below the given value, respectively.
</para>
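<para>For example, to find files under an (illustrative) mount point
that have a component with an extension size of 64MB or more:</para>
<screen>client$ lfs find -z +64M /mnt/testfs</screen>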
<para>A new <literal>extension</literal> component flag is added. Only
files which have at least one SEL component are printed.</para>
<para>(Optional) If you are upgrading from a release before Lustre
2.10, to enable the project quota feature enter the following on every
ldiskfs backend target while unmounted:
- <screen>tune2fs –O project /dev/<replaceable>dev</replaceable></screen>
+ <screen>tune2fs -O project /dev/<replaceable>dev</replaceable></screen>
</para>
<note><para>Enabling the <literal>project</literal> feature will prevent
the filesystem from being used by older versions of ldiskfs, so it
overwritten files from being released until the snapshot(s)
referencing those files is deleted. The file system administrator
needs to establish a snapshot create/backup/remove policy according to
- their system’s actual size and usage.</para>
+ their system's actual size and usage.</para>
</section>
</section>
<section xml:id="zfssnapshotConfig">
modified. Renaming follows the general ZFS snapshot name rules, such as
the maximum length is 256 bytes, cannot conflict with the reserved names,
and so on.</para>
- <para>To modify a snapshot’s attributes, use the following
+ <para>To modify a snapshot's attributes, use the following
<literal>lctl</literal> command on the MGS:</para>
<screen>lctl snapshot_modify [-c | --comment comment]
<-F | --fsname fsname> [-h | --help] <-n | --name ssname>
taken, there may be user-visible namespace inconsistencies with files
created or destroyed in the interval between the MDT and OST snapshots.
In order to create a consistent snapshot of the file system, we are able
- to set a global write barrier, or “freeze” the system. Once set, all
- metadata modifications will be blocked until the write barrier is actively
- removed (“thawed”) or expired. The user can set a timeout parameter on a
- global barrier or the barrier can be explicitly removed. The default
- timeout period is 30 seconds.</para>
+ to set a global write barrier, or "freeze" the system.
+ Once set, all metadata modifications will be blocked until the write
+ barrier is actively removed ("thawed") or expired.
+ The user can set a timeout parameter on a
+ global barrier or the barrier can be explicitly removed.
+ The default timeout period is 30 seconds.</para>
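+ <para>As a sketch, the barrier is set and removed with the following
+ <literal>lctl</literal> commands on the MGS (the file system name and
+ timeout are illustrative):</para>
+ <screen>mgs# lctl barrier_freeze testfs 15
+mgs# lctl barrier_thaw testfs</screen>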
<para>It is important to note that snapshots are usable without the global
barrier. Only files that are currently being modified by clients (write,
create, unlink) may be inconsistent as noted above if the barrier is not
</tbody>
</tgroup>
</table>
- <para>If the barrier is in ’freezing_p1’, ’freezing_p2’ or ’frozen’
- status, then the remaining lifetime will be returned also.</para>
+ <para>If the barrier is in <literal>freezing_p1</literal>,
+ <literal>freezing_p2</literal> or <literal>frozen</literal> status,
+ then the remaining lifetime will be returned also.</para>
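+ <para>The barrier status can be queried with
+ <literal>lctl barrier_stat</literal>; the output below is
+ representative:</para>
+ <screen>mgs# lctl barrier_stat testfs
+state: 'frozen'
+timeout: 15 seconds</screen>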
</section>
<section xml:id="zfssnapshotBarrierRescan">
<title><indexterm><primary>barrier</primary>
<secondary>rescan</secondary></indexterm>Rescan Barrier
</title>
<para> To rescan a global write barrier to check which MDTs are
- active, run the <literal>lctl barrier_rescan</literal> command on the
- MGS:</para>
- <screen>lctl barrier_rescan <fsname> [timeout (in seconds)],
-where the default timeout is 30 seconds.</screen>
+ active, run <literal>lctl barrier_rescan</literal> on the MGS
+ (with a default <replaceable>TIMEOUT_SEC</replaceable> of 30s):
+ </para>
+<screen>
+lctl barrier_rescan <replaceable>FSNAME</replaceable> [<replaceable>TIMEOUT_SEC</replaceable>]
+</screen>
<para>For example, to rescan the barrier for filesystem
<replaceable>testfs</replaceable>:</para>
<screen>mgs# lctl barrier_rescan testfs
<para><superscript>*</superscript>Other names and brands may be claimed as the property of
others.</para>
- <para>THE ORIGINAL LUSTRE 2.x FILESYSTEM: OPERATIONS MANUAL HAS BEEN MODIFIED: THIS OPERATIONS MANUAL IS A MODIFIED VERSION OF, AND IS DERIVED FROM, THE LUSTRE 2.0 FILESYSTEM: OPERATIONS MANUAL PUBLISHED BY ORACLE AND AVAILABLE AT [http://www.lustre.org/]. MODIFICATIONS (collectively, the “Modifications”) HAVE BEEN MADE BY INTEL CORPORATION (“Intel”). ORACLE AND ITS AFFILIATES HAVE NOT REVIEWED, APPROVED, SPONSORED, OR ENDORSED THIS MODIFIED OPERATIONS MANUAL, OR ENDORSED INTEL, AND ORACLE AND ITS AFFILIATES ARE NOT RESPONSIBLE OR LIABLE FOR ANY MODIFICATIONS THAT INTEL HAS MADE TO THE ORIGINAL OPERATIONS MANUAL.</para>
+ <para>THE ORIGINAL LUSTRE 2.x FILESYSTEM: OPERATIONS MANUAL HAS BEEN MODIFIED: THIS OPERATIONS MANUAL IS A MODIFIED VERSION OF, AND IS DERIVED FROM, THE LUSTRE 2.0 FILESYSTEM: OPERATIONS MANUAL PUBLISHED BY ORACLE AND AVAILABLE AT [http://www.lustre.org/]. MODIFICATIONS (collectively, the "Modifications") HAVE BEEN MADE BY INTEL CORPORATION ("Intel"). ORACLE AND ITS AFFILIATES HAVE NOT REVIEWED, APPROVED, SPONSORED, OR ENDORSED THIS MODIFIED OPERATIONS MANUAL, OR ENDORSED INTEL, AND ORACLE AND ITS AFFILIATES ARE NOT RESPONSIBLE OR LIABLE FOR ANY MODIFICATIONS THAT INTEL HAS MADE TO THE ORIGINAL OPERATIONS MANUAL.</para>
<para>NOTHING IN THIS MODIFIED OPERATIONS MANUAL IS INTENDED TO AFFECT THE NOTICE PROVIDED BY ORACLE BELOW IN RESPECT OF THE ORIGINAL OPERATIONS MANUAL AND SUCH ORACLE NOTICE CONTINUES TO APPLY TO THIS MODIFIED OPERATIONS MANUAL EXCEPT FOR THE MODIFICATIONS; THIS INTEL NOTICE SHALL APPLY ONLY TO MODIFICATIONS MADE BY INTEL. AS BETWEEN YOU AND ORACLE: (I) NOTHING IN THIS INTEL NOTICE IS INTENDED TO AFFECT THE TERMS OF THE ORACLE NOTICE BELOW; AND (II) IN THE EVENT OF ANY CONFLICT BETWEEN THE TERMS OF THIS INTEL NOTICE AND THE TERMS OF THE ORACLE NOTICE, THE ORACLE NOTICE SHALL PREVAIL.
</para>
<para>Copyright © 2011, Oracle et/ou ses affiliés. Tous droits réservés.</para>
- <para>Ce logiciel et la documentation qui l’accompagne sont protégés par les lois sur la propriété intellectuelle. Ils sont concédés sous licence et soumis à des restrictions d’utilisation et de divulgation. Sauf disposition de votre contrat de licence ou de la loi, vous ne pouvez pas copier, reproduire, traduire, diffuser, modifier, breveter, transmettre, distribuer, exposer, exécuter, publier ou afficher le logiciel, même partiellement, sous quelque forme et par quelque procédé que ce soit. Par ailleurs, il est interdit de procéder à toute ingénierie inverse du logiciel, de le désassembler ou de le décompiler, excepté à des fins d’interopérabilité avec des logiciels tiers ou tel que prescrit par la loi.</para>
-
- <para>Les informations fournies dans ce document sont susceptibles de modification sans préavis. Par ailleurs, Oracle Corporation ne garantit pas qu’elles soient exemptes d’erreurs et vous invite, le cas échéant, à lui en faire part par écrit.</para>
-
- <para>Si ce logiciel, ou la documentation qui l’accompagne, est concédé sous licence au Gouvernement des Etats-Unis, ou à toute entité qui délivre la licence de ce logiciel ou l’utilise pour le compte du Gouvernement des Etats-Unis, la notice suivante s’applique :</para>
-
- <para>U.S. GOVERNMENT RIGHTS. Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.</para>
-
- <para>Ce logiciel ou matériel a été développé pour un usage général dans le cadre d’applications de gestion des informations. Ce logiciel ou matériel n’est pas conçu ni n’est destiné à être utilisé dans des applications à risque, notamment dans des applications pouvant causer des dommages corporels. Si vous utilisez ce logiciel ou matériel dans le cadre d’applications dangereuses, il est de votre responsabilité de prendre toutes les mesures de secours, de sauvegarde, de redondance et autres mesures nécessaires à son utilisation dans des conditions optimales de sécurité. Oracle Corporation et ses affiliés déclinent toute responsabilité quant aux dommages causés par l’utilisation de ce logiciel ou matériel pour ce type d’applications.</para>
-
- <para>Oracle et Java sont des marques déposées d’Oracle Corporation et/ou de ses affiliés.Tout autre nom mentionné peut correspondre à des marques appartenant à d’autres propriétaires qu’Oracle.</para>
-
- <para>AMD, Opteron, le logo AMD et le logo AMD Opteron sont des marques ou des marques déposées d’Advanced Micro Devices. Intel et Intel Xeon sont des marques ou des marques déposées d’Intel Corporation. Toutes les marques SPARC sont utilisées sous licence et sont des marques ou des marques déposées de SPARC International, Inc. UNIX est une marque déposée concédée sous licence par X/Open Company, Ltd.</para>
-
- <para>Ce logiciel ou matériel et la documentation qui l’accompagne peuvent fournir des informations ou des liens donnant accès à des contenus, des produits et des services émanant de tiers. Oracle Corporation et ses affiliés déclinent toute responsabilité ou garantie expresse quant aux contenus, produits ou services émanant de tiers. En aucun cas, Oracle Corporation et ses affiliés ne sauraient être tenus pour responsables des pertes subies, des coûts occasionnés ou des dommages causés par l’accès à des contenus, produits ou services tiers, ou à leur utilisation.</para>
+ <para>Ce logiciel et la documentation qui l'accompagne sont
+ protégés par les lois sur la propriété intellectuelle.
+ Ils sont concédés sous licence et soumis à des restrictions
+ d'utilisation et de divulgation.
+ Sauf disposition de votre contrat de licence ou de la loi,
+ vous ne pouvez pas copier, reproduire, traduire, diffuser,
+ modifier, breveter, transmettre, distribuer, exposer, exécuter,
+ publier ou afficher le logiciel, même partiellement,
+ sous quelque forme et par quelque procédé que ce soit.
+ Par ailleurs, il est interdit de procéder à toute ingénierie
+ inverse du logiciel, de le désassembler ou de le décompiler,
+ excepté à des fins d'interopérabilité avec des
+ logiciels tiers ou tel que prescrit par la loi.</para>
+
+ <para>Les informations fournies dans ce document sont susceptibles
+ de modification sans préavis.
+ Par ailleurs, Oracle Corporation ne garantit pas qu'elles
+ soient exemptes d'erreurs et vous invite, le cas échéant,
+ à lui en faire part par écrit.</para>
+
+ <para>Si ce logiciel, ou la documentation qui l'accompagne,
+ est concédé sous licence au Gouvernement des Etats-Unis,
+ ou à toute entité qui délivre la licence de ce logiciel ou
+ l'utilise pour le compte du Gouvernement des Etats-Unis,
+ la notice suivante s'applique :</para>
+
+ <para>U.S. GOVERNMENT RIGHTS. Programs, software, databases,
+ and related documentation and technical data delivered to U.S.
+ Government customers are "commercial computer software" or
+ "commercial technical data" pursuant to the applicable Federal
+ Acquisition Regulation and agency-specific supplemental regulations.
+ As such, the use, duplication, disclosure, modification,
+ and adaptation shall be subject to the restrictions and license
+ terms set forth in the applicable Government contract, and,
+ to the extent applicable by the terms of the Government contract,
+ the additional rights set forth in FAR 52.227-19,
+ Commercial Computer Software License (December 2007).
+ Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.</para>
+
+ <para>Ce logiciel ou matériel a été développé pour un usage
+ général dans le cadre d'applications de gestion des informations.
+ Ce logiciel ou matériel n'est pas conçu ni n'est
+ destiné à être utilisé dans des applications à risque,
+ notamment dans des applications pouvant causer des dommages corporels.
+ Si vous utilisez ce logiciel ou matériel dans le cadre
+ d'applications dangereuses,
+ il est de votre responsabilité de prendre toutes les mesures de secours,
+ de sauvegarde, de redondance et autres mesures nécessaires
+ à son utilisation dans des conditions optimales de sécurité.
+ Oracle Corporation et ses affiliés déclinent toute
+ responsabilité quant aux dommages causés par l'utilisation
+ de ce logiciel ou matériel pour ce type d'applications.</para>
+
+ <para>Oracle et Java sont des marques déposées
+ d'Oracle Corporation et/ou de ses affiliés.
+ Tout autre nom mentionné peut correspondre à des marques
+ appartenant à d'autres propriétaires qu'Oracle.</para>
+
+ <para>AMD, Opteron, le logo AMD et le logo AMD Opteron sont des
+ marques ou des marques déposées d'Advanced Micro Devices.
+ Intel et Intel Xeon sont des marques ou des marques
+ déposées d'Intel Corporation.
+ Toutes les marques SPARC sont utilisées sous licence et sont
+ des marques ou des marques déposées de SPARC International, Inc.
+ UNIX est une marque déposée concédée sous licence par X/Open Company,
+ Ltd.</para>
+
+ <para>Ce logiciel ou matériel et la documentation qui
+ l'accompagne peuvent fournir des informations ou des
+ liens donnant accès à des contenus,
+ des produits et des services émanant de tiers.
+ Oracle Corporation et ses affiliés déclinent toute
+ responsabilité ou garantie expresse quant aux contenus,
+ produits ou services émanant de tiers.
+ En aucun cas, Oracle Corporation et ses affiliés ne
+ sauraient être tenus pour responsables des pertes subies,
+ des coûts occasionnés ou des dommages causés par
+ l'accès à des contenus, produits ou services tiers,
+ ou à leur utilisation.</para>
<para>This work is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License. To view a copy of this license and obtain more information about Creative Commons licensing, visit <link xl:href="http://creativecommons.org/licenses/by-sa/3.0/us">Creative Commons Attribution-Share Alike 3.0 United States</link> or send a letter to Creative Commons, 171 2nd Street, Suite 300, San Francisco, California 94105, USA.</para>