From f13de2af4cc021666bcd635318aba2f149ddbff0 Mon Sep 17 00:00:00 2001 From: Andreas Dilger Date: Mon, 26 Aug 2024 16:32:08 -0600 Subject: [PATCH] LUDOC-11 misc: replace non-ASCII control chars Replace a large number of non-ASCII characters. This includes the use of ' or " where needed instead of "fancy quotes", replacing m-dash with regular hyphens, using <literal> tags instead of quotes, etc. Reformat affected lines to wrap at 80 columns, using "semantic" line breaks after comma or period where appropriate. In a few cases, confusing or incorrect text was replaced. Signed-off-by: Andreas Dilger Change-Id: Ie36a7963c5805b9429d79a9d40265c95c79eed20 Reviewed-on: https://review.whamcloud.com/c/doc/manual/+/56161 Tested-by: jenkins --- ConfigurationFilesModuleParameters.xml | 8 +-- ConfiguringFailover.xml | 49 +++++++++--------- ConfiguringLNet.xml | 2 +- FileLevelRedundancy.xml | 67 ++++++++++++++---------- LazySizeOnMDT.xml | 7 +-- LustreHSM.xml | 35 +++++++++---- LustreMaintenance.xml | 18 ++++--- LustreNodemap.xml | 11 ++-- LustreOperations.xml | 2 +- LustreProc.xml | 6 +-- LustreRecovery.xml | 50 ++++++++++++++---- LustreSharedSecretKey.xml | 40 ++++++++------- LustreTuning.xml | 4 +- ManagingLNet.xml | 2 +- ManagingSecurity.xml | 14 ++--- ManagingStripingFreeSpace.xml | 40 +++++++++------ UpgradingLustre.xml | 2 +- ZFSSnapshots.xml | 30 ++++++----- legalnoticeIntel.xml | 2 +- legalnoticeOracle.xml | 94 ++++++++++++++++++++++++++++------ 20 files changed, 311 insertions(+), 172 deletions(-) diff --git a/ConfigurationFilesModuleParameters.xml b/ConfigurationFilesModuleParameters.xml index b90fd78..e9758fe 100644 --- a/ConfigurationFilesModuleParameters.xml +++ b/ConfigurationFilesModuleParameters.xml @@ -166,10 +166,10 @@ Network Topology nodes.
- options lnet ip2nets=â€tcp 198.129.135.* 192.128.88.98; \ - elan 198.128.88.98 198.129.135.3; \ - routes='cp 1022@elan # Elan NID of router; \ - elan 198.128.88.98@tcp # TCP NID of router ' + options lnet 'ip2nets="tcp 198.129.135.* 192.128.88.98; \ + elan 198.128.88.98 198.129.135.3;"' \ + 'routes="tcp 1022@elan # Elan NID of router; \ + elan 198.128.88.98@tcp # TCP NID of router"'
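The corrected `ip2nets` option above maps a node's local IP addresses to LNet networks, first matching rule wins. A minimal Python sketch of that matching idea (this is an illustration only, not LNet's real parser; the rule list mirrors the example in the hunk):

```python
from fnmatch import fnmatch

def match_ip2nets(rules, ip):
    """Return the first LNet network whose IP patterns match `ip`.

    `rules` is a list of (network, [patterns]) pairs, mimicking the
    "net pattern pattern; net pattern" structure of the ip2nets option.
    Simplified illustration only, not LNet's actual matcher.
    """
    for net, patterns in rules:
        if any(fnmatch(ip, pat) for pat in patterns):
            return net
    return None

# Rules taken from the example above.
rules = [
    ("tcp", ["198.129.135.*", "192.128.88.98"]),
    ("elan", ["198.128.88.98", "198.129.135.3"]),
]
print(match_ip2nets(rules, "198.129.135.7"))   # -> tcp
print(match_ip2nets(rules, "198.128.88.98"))   # -> elan
```

Note that `198.129.135.3` would select `tcp` here because the first matching rule wins, which is why rule ordering matters in `ip2nets`.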
<indexterm><primary>configuring</primary> diff --git a/ConfiguringFailover.xml b/ConfiguringFailover.xml index 8f029bb..d9a8073 100644 --- a/ConfiguringFailover.xml +++ b/ConfiguringFailover.xml @@ -47,24 +47,25 @@ <para>Failover in a Lustre file system requires the use of a remote power control (RPC) mechanism, which comes in different configurations. For example, Lustre server nodes may be equipped with IPMI/BMC devices - that allow remote power control. In the past, software or even - “sneakerware” has been used, but these are not recommended. For - recommended devices, refer to the list of supported RPC devices on the - website for the PowerMan cluster power management utility:</para> + that allow remote power control. + For recommended devices, refer to the list of supported RPC devices + on the website for the PowerMan cluster power management utility:</para> <para><link xmlns:xlink="http://www.w3.org/1999/xlink" - xlink:href="https://linux.die.net/man/7/powerman-devices"> - https://linux.die.net/man/7/powerman-devices</link></para> + xlink:href="https://github.com/chaos/powerman/tree/master/etc/devices"> + https://github.com/chaos/powerman/tree/master/etc/devices</link></para> </section> <section remap="h3"> <title><indexterm> <primary>failover</primary> <secondary>power management software</secondary> </indexterm>Selecting Power Management Software - Lustre failover requires RPC and management capability to verify that a failed node is - shut down before I/O is directed to the failover node. This avoids double-mounting the two - nodes and the risk of unrecoverable data corruption. A variety of power management tools - will work. Two packages that have been commonly used with the Lustre software are PowerMan - and Linux-HA (aka. STONITH ). + Lustre failover requires RPC and management capability to verify + that a failed node is off before I/O is directed to the failover node. 
+ This avoids double-mounting the two nodes and the risk of + unrecoverable data corruption. + A variety of power management tools will work. + Two packages that have been commonly used with the Lustre software + are PowerMan and Pacemaker. The PowerMan cluster power management utility is used to control RPC devices from a central location. PowerMan provides native support for several RPC varieties and Expect-like configuration simplifies @@ -73,13 +74,12 @@ https://github.com/chaos/powerman - STONITH, or “Shoot The Other Node In The Head”, is a set of power management tools - provided with the Linux-HA package prior to Red Hat Enterprise Linux 6. Linux-HA has native - support for many power control devices, is extensible (uses Expect scripts to automate - control), and provides the software to detect and respond to failures. With Red Hat - Enterprise Linux 6, Linux-HA is being replaced in the open source community by the - combination of Corosync and Pacemaker. For Red Hat Enterprise Linux subscribers, cluster - management using CMAN is available from Red Hat. + STONITH, or "Shoot The Other Node In The Head" + is used in conjunction with High Availability node management. + This is implemented by Pacemaker to ensure that a peer node + that may be importing a shared storage device has been powered + off and will not corrupt the shared storage if it continues running. +
<indexterm> @@ -120,11 +120,14 @@ <para>The per-target configuration is relayed to the MGS at mount time. Some rules related to this are:<itemizedlist> <listitem> - <para> When a target is <emphasis role="underline"><emphasis role="italic" - >initially</emphasis></emphasis> mounted, the MGS reads the configuration - information from the target (such as mgt vs. ost, failnode, fsname) to configure the - target into a Lustre file system. If the MGS is reading the initial mount configuration, - the mounting node becomes that target's “primary” node.</para> + <para> When a target is + <emphasis role="italic">initially</emphasis> mounted, + the MGS reads the configuration information from the target + (such as mgt vs. ost, failnode, fsname) to configure the + target into a Lustre file system. + If the MGS is reading the initial mount configuration, + the mounting node becomes that target's "primary" node. + </para> </listitem> <listitem> <para>When a target is <emphasis role="underline"><emphasis role="italic" diff --git a/ConfiguringLNet.xml b/ConfiguringLNet.xml index 285d467..a895cf1 100755 --- a/ConfiguringLNet.xml +++ b/ConfiguringLNet.xml @@ -218,7 +218,7 @@ net: multi-rail configuration. For the dynamic peer discovery capability introduced in Lustre Release 2.11.0, please see <xref linkend="lnet_config.dynamic_discovery" />.</para> - <para>When configuring peers, use the <literal>–-prim_nid</literal> + <para>When configuring peers, use the <literal>--prim_nid</literal> option to specify the key or primary nid of the peer node. Then follow that with the <literal>--nid</literal> option to specify a set of comma separated NIDs.</para> diff --git a/FileLevelRedundancy.xml b/FileLevelRedundancy.xml index c917ce5..7b888de 100644 --- a/FileLevelRedundancy.xml +++ b/FileLevelRedundancy.xml @@ -11,7 +11,7 @@ redundancy and fault-tolerance. 
However, despite the expense and complexity of these storage systems, storage failures still occur, and before release 2.11, Lustre could not be more reliable than the - individual storage and servers’ components on which it was based. The + individual storage and server components on which it was based. The Lustre file system had no mechanism to mitigate storage hardware failures and files would become inaccessible if a server was inaccessible or otherwise out of service.</para> @@ -179,9 +179,11 @@ - 4: { l_ost_idx: 7, l_fid: [0x100070000:0x2:0x0] } - 5: { l_ost_idx: 2, l_fid: [0x100020000:0x2:0x0] }</screen> <para> The first mirror has 4MB stripe size and two stripes across OSTs in - the “flash” OST pool. The second mirror has 4MB stripe size inherited - from the first mirror, and stripes across all of the available OSTs in - the “archive” OST pool.</para> + the <literal>flash</literal> OST pool. + The second mirror has 4MB stripe size inherited from the first mirror, + and stripes across all of the available OSTs in the + <literal>archive</literal> OST pool. 
+ </para> <para>As mentioned above, it is recommended to use the <literal>--pool|-p</literal> option (one of the <literal>lfs setstripe</literal> options) with OST pools configured with @@ -215,9 +217,11 @@ </itemizedlist> <para>The following command creates a mirrored file with 3 PFL mirrors: </para> - <screen>client# lfs mirror create -N -E 4M -p flash --flags=prefer -E eof -c 2 \ --N -E 16M -S 8M -c 4 -p archive --comp-flags=prefer -E eof -c -1 \ --N -E 32M -c 1 -p none -E eof -c -1 /mnt/testfs/file2</screen> +<screen> +client# lfs mirror create -N -E 4M -p flash --flags=prefer -E eof -c 2 \ + -N -E 16M -S 8M -c 4 -p archive -E eof -c -1 \ + -N -E 32M -c 1 -p archive2 -E eof -c -1 /mnt/testfs/file2 +</screen> <para>The following command displays the layout information of the mirrored file <literal>/mnt/testfs/file2</literal>:</para> <screen>client# lfs getstripe /mnt/testfs/file2 @@ -253,7 +257,7 @@ lcme_id: 131075 lcme_mirror_id: 2 - lcme_flags: init,prefer + lcme_flags: init lcme_extent.e_start: 0 lcme_extent.e_end: 16777216 lmm_stripe_count: 4 @@ -290,8 +294,9 @@ lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 + lmm_pool: archive2 lmm_objects: - - 0: { l_ost_idx: 0, l_fid: [0x100000000:0x3:0x0] } + - 0: { l_ost_idx: 8, l_fid: [0x3400000000:0x3:0x0] } lcme_id: 196614 lcme_mirror_id: 3 @@ -302,27 +307,29 @@ lmm_stripe_size: 8388608 lmm_pattern: raid0 lmm_layout_gen: 0 - lmm_stripe_offset: -1</screen> + lmm_stripe_offset: -1 + lmm_pool: archive2 + </screen> <para>For the first mirror, the first component inherits the stripe count and stripe size from filesystem-wide default values. The second component inherits the stripe size and OST pool from the first component, and has two stripes. Both of the components are allocated - from the “flash” OST pool. Also, the flag <literal>prefer</literal> is + from the <literal>flash</literal> OST pool. 
+ Also, the flag <literal>prefer</literal> is applied to all the components of the first mirror, which tells the client to read data from those components whenever they are available. </para> <para>For the second mirror, the first component has an 8MB stripe size - and 4 stripes across OSTs in the “archive” OST pool. The second - component inherits the stripe size and OST pool from the first - component, and stripes across all of the available OSTs in the “archive” - OST pool. The flag <literal>prefer</literal> is only applied to the - first component.</para> + and 4 stripes across OSTs in the <literal>archive</literal> OST pool. + The second component inherits the stripe size and OST pool from the + first component, and stripes across all of the available OSTs in the + <literal>archive</literal> OST pool. + </para> <para>For the third mirror, the first component inherits the stripe size of 8MB from the last component of the second mirror, and has one single - stripe. The OST pool name is cleared and inherited from the parent - directory (if it was set with OST pool name). The second component - inherits stripe size from the first component, and stripes across all of - the available OSTs.</para> + stripe. The OST pool name is set to <literal>archive2</literal>. + The second component inherits stripe size from the first component, + and stripes across all of the available OSTs in that pool.</para> </section> <section xml:id="flr.operations.extendmirror"> <title>Extending a Mirrored File @@ -984,10 +991,11 @@ ls: cannot access /mnt/testfs/victim_file: No such file or directory The above layout information showed that data were written into the first component of mirror with ID 1, and mirrors with ID 2 and 3 were marked with - “stale” flag. + stale flag. 
Resynchronizing the stale mirror with ID 2 for the mirrored file /mnt/testfs/file1: - # lfs mirror resync --only 2 /mnt/testfs/file1 + +# lfs mirror resync --only 2 /mnt/testfs/file1 # lfs getstripe /mnt/testfs/file1 /mnt/testfs/file1 lcm_layout_gen: 7 @@ -1022,7 +1030,9 @@ ls: cannot access /mnt/testfs/victim_file: No such file or directory ...... The above layout information showed that after resynchronizing, the - “stale” flag was removed from mirror with ID 2. + stale flag was removed from mirror with ID + 2. + Resynchronizing all of the stale mirrors for the mirrored file /mnt/testfs/file1: # lfs mirror resync /mnt/testfs/file1 @@ -1112,8 +1122,9 @@ ls: cannot access /mnt/testfs/victim_file: No such file or directory Note: - Mirror components that have “stale” or “offline” flags will be - skipped and not verified. + Mirror components that have stale or + offline flags will be skipped and not verified. + Examples: The following command verifies that each mirror of a mirrored file contains exactly the same data: @@ -1238,12 +1249,12 @@ exceeds file size 0xa00000: skipped files not matching state. Only one state can be specified. Valid state names are: - ro – indicates the mirrored file is in + ro - indicates the mirrored file is in read-only state. All of the mirrors contain the up-to-date data. - wp – indicates the mirrored file is in + wp - indicates the mirrored file is in a state of being written. - sp – indicates the mirrored file is in + sp - indicates the mirrored file is in a state of being resynchronized. diff --git a/LazySizeOnMDT.xml b/LazySizeOnMDT.xml index 076c725..bae8de1 100644 --- a/LazySizeOnMDT.xml +++ b/LazySizeOnMDT.xml @@ -202,9 +202,10 @@ --daemonize | -d - Optional flag to “daemonize” the program. In daemon - mode, the utility will scan, process the changelog records - and sync the LSoM xattr for files periodically. + Optional flag to run the program in the background. 
+ In daemon mode, the utility will scan and process the + changelog records and sync the LSoM xattr for files + periodically. diff --git a/LustreHSM.xml b/LustreHSM.xml index e1c7fb9..9da5705 100644 --- a/LustreHSM.xml +++ b/LustreHSM.xml @@ -127,11 +127,20 @@ ID must be in the range 1 to 32. You need, at least, one copytool per ARCHIVE ID. When using the POSIX copytool, this ID is defined using --archive switch. -For example: if a single Lustre file system is bound to 2 different HSMs (A and B,) ARCHIVE ID “1” can be chosen for HSM A and ARCHIVE ID “2” for HSM B. If you start 3 copytool instances for ARCHIVE ID 1, all of them will use Archive ID “1”. The same rule applies for copytool instances dealing with the HSM B, using Archive ID “2”. - -When issuing HSM requests, you can use the --archive switch -to choose the backend you want to use. In this example, file foo will be -archived into backend ARCHIVE ID “5”: +For example: if a single Lustre file system is bound to two +different HSMs (A and B,) ARCHIVE ID "1" can be chosen for +HSM A and ARCHIVE ID "2" for HSM B. +If you start 3 copytool instances for ARCHIVE ID 1, +all of them will use Archive ID "1". +The same rule applies for copytool instances dealing with the HSM B, +using Archive ID "2". + + + When issuing HSM requests, you can use the --archive + switch to choose the backend you want to use. + In this example, file foo will be + archived into backend ARCHIVE ID "5": + $ lfs hsm_archive --archive=5 /mnt/lustre/foo @@ -395,13 +404,17 @@ list. HSMchangelogschange logs - A changelog record type “HSM“ was added for Lustre file system -logs that relate to HSM events. -16HSM 13:49:47.469433938 2013.10.01 0x280 t=[0x200000400:0x1:0x0] + A changelog record type HSM was + added for Lustre file system logs that relate to HSM events. + + +16HSM 13:49:47.469433938 2013.10.01 0x280 t=[0x200000400:0x1:0x0] + - Two items of information are available for each HSM record: the -FID of the modified file and a bit mask. 
The bit mask codes the following -information (lowest bits first): + Two items of information are available for each HSM + record: the FID of the modified file and a bit mask. + The bit mask codes the following information (low bits first): + diff --git a/LustreMaintenance.xml b/LustreMaintenance.xml index 162e71a..34a95a1 100644 --- a/LustreMaintenance.xml +++ b/LustreMaintenance.xml @@ -388,7 +388,7 @@ mds# mkfs.lustre --reformat --fsname=testfs --mdt --m Mount the MDTs. -mds# mount –t lustre /dev/mdt4_blockdevice /mnt/mdt4 +mds# mount -t lustre /dev/mdt4_blockdevice /mnt/mdt4 @@ -816,12 +816,14 @@ Aborting Recovery
<indexterm><primary>maintenance</primary><secondary>identifying OST host</secondary></indexterm> Determining Which Machine is Serving an OST - In the course of administering a Lustre file system, you may need to determine which - machine is serving a specific OST. It is not as simple as identifying the machine’s IP - address, as IP is only one of several networking protocols that the Lustre software uses and, - as such, LNet does not use IP addresses as node identifiers, but NIDs instead. To identify the - NID that is serving a specific OST, run one of the following commands on a client (you do not - need to be a root user): + In the course of administering a Lustre file system, + you may need to determine which machine is serving a specific OST. + It is not as simple as identifying the machine's IP address, + as IP is only one of several networking protocols that the Lustre + software uses and, as such, LNet does not use IP addresses as node + identifiers, but NIDs instead. + To identify the NID that is serving a specific OST, run one of the + following commands on a client (you do not need to be a root user): client$ lctl get_param osc.fsname-OSTnumber*.ost_conn_uuid For example: client$ lctl get_param osc.*-OST0000*.ost_conn_uuid @@ -917,7 +919,7 @@ mds# lctl get_param mdt.fs-MDT0000.readonly mdt.fs-MDT0000.readonly=1 client$ touch test_file -touch: cannot touch ‘test_file’: Read-only file system +touch: cannot touch 'test_file': Read-only file system mds# lctl set_param mdt.fs-MDT0000.readonly=0 mdt.fs-MDT0000.readonly=0 diff --git a/LustreNodemap.xml b/LustreNodemap.xml index d17b29f..8ea3071 100644 --- a/LustreNodemap.xml +++ b/LustreNodemap.xml @@ -114,7 +114,8 @@ covers all Lustre server nodes. So the very first step when working with nodemaps is to create such a group with both properties admin and trusted set. 
It is - recommended to give this group an explicit label such as “TrustedSystems” + recommended to give this group an explicit label such as + TrustedSystems + or some identifier that makes the association clear. Let's consider a deployment where the server nodes are in the NID @@ -310,8 +311,8 @@ drwxr-xr-x 3 root root 4096 Jul 23 09:02 .. requires a group that covers all Lustre server nodes, with both properties admin and trusted set. It is recommended to give this group an - explicit label such as “TrustedSystems” or some identifier that makes the - association clear. + explicit label such as TrustedSystems or some + identifier that makes the association clear.
Managing the Properties @@ -466,8 +467,8 @@ mgs# lctl nodemap_modify --name BirdAdminSite --prope Lustre server nodes must be in a policy group with both these properties set to 1. It is recommended to - use a policy group labeled “TrustedSystems” or some identifier that - makes the association clear. + use a policy group labeled TrustedSystems + or some identifier that makes the association clear. If a policy group has the admin diff --git a/LustreOperations.xml b/LustreOperations.xml index a554b2e..8236577 100644 --- a/LustreOperations.xml +++ b/LustreOperations.xml @@ -939,7 +939,7 @@ osc.myth-OST0004-osc-ffff8800376bdc00.cur_grant_bytes=33808384 mds0# mkfs.lustre --fsname=testfs --mdt --mgs \ --servicenode=192.168.10.2@tcp0 \ - -–servicenode=192.168.10.1@tcp0 /dev/sda1 + --servicenode=192.168.10.1@tcp0 /dev/sda1 mds0# mount -t lustre /dev/sda1 /mnt/test/mdt oss0# mkfs.lustre --fsname=testfs --servicenode=192.168.10.20@tcp0 \ --servicenode=192.168.10.21 --ost --index=0 \ diff --git a/LustreProc.xml b/LustreProc.xml index 29a57e6..1dba27d 100644 --- a/LustreProc.xml +++ b/LustreProc.xml @@ -373,12 +373,12 @@ testfs-MDT0000 Two particularly useful baseline statistics are: - brw_stats – Histogram data characterizing I/O requests to the + brw_stats - Histogram data characterizing I/O requests to the OSTs. For more details, see . - rpc_stats – Histogram data showing information about RPCs made by + rpc_stats - Histogram data showing information about RPCs made by clients. For more details, see . @@ -1388,7 +1388,7 @@ write RPCs in flight: 0
Tuning Directory Statahead and AGL - Many system commands, such as ls –l, + Many system commands, such as ls -l, du, and find, traverse a directory sequentially. To make these commands run efficiently, the directory statahead can be enabled to improve the performance of diff --git a/LustreRecovery.xml b/LustreRecovery.xml index d065be0..656b722 100644 --- a/LustreRecovery.xml +++ b/LustreRecovery.xml @@ -438,15 +438,38 @@ with the MGS, and there is no other node to notify clients in case of MGS restart, the MGS will disable IR for a period when it first starts. This interval is configurable, as shown in - Because of the increased importance of the MGS in recovery, it is strongly recommended that the MGS node be separate from the MDS. If the MGS is co-located on the MDS node, then in case of MDS/MGS failure there will be no IR notification for the MDS restart, and clients will always use timeout-based recovery for the MDS. IR notification would still be used in the case of OSS failure and recovery. - Unfortunately, it’s impossible for the MGS to know how many clients have been successfully notified or whether a specific client has received the restarting target information. The only thing the MGS can do is tell the target that, for example, all clients are imperative recovery-capable, so it is not necessary to wait as long for all clients to reconnect. For this reason, we still require a timeout policy on the target side, but this timeout value can be much shorter than normal recovery. + Because of the increased importance of the MGS in recovery, + it is strongly recommended that the MGS node be separate from the MDS. + If the MGS is co-located on the MDS node, then in case of MDS/MGS + failure there will be no IR notification for the MDS restart, + and clients will always use timeout-based recovery for the MDS. + IR notification would still be used in the case of OSS failure + and recovery. 
+ + Unfortunately, it is impossible for the MGS to know how many + clients have been successfully notified or whether a specific + client has received the restarting target information. + The only thing the MGS can do is tell the target that, + for example, all clients are imperative recovery-capable, + so it is not necessary to wait as long for all clients to reconnect. + For this reason, we still require a timeout policy on the target side, + but this timeout value can be much shorter than normal recovery. +
<indexterm><primary>imperative recovery</primary><secondary>Tuning</secondary></indexterm>Tuning Imperative Recovery Imperative recovery has a default parameter set which means it can work without any extra configuration. However, the default parameter set only fits a generic configuration. The following sections discuss the configuration items for imperative recovery.
ir_factor - Ir_factor is used to control targets’ recovery window. If imperative recovery is enabled, the recovery timeout window on the restarting target is calculated by: new timeout = recovery_time * ir_factor / 10 Ir_factor must be a value in range of [1, 10]. The default value of ir_factor is 5. The following example will set imperative recovery timeout to 80% of normal recovery timeout on the target testfs-OST0000: + + ir_factor is used to control each target's + recovery window. If imperative recovery is enabled, + the recovery timeout window on the restarting target is calculated by: + new timeout = recovery_time * ir_factor / 10 + ir_factor must be a value in the range of [1, 10]. + The default value of ir_factor is 5. + The following example will set imperative recovery timeout to 80% + of normal recovery timeout on the target testfs-OST0000: lctl conf_param obdfilter.testfs-OST0000.ir_factor=8 If this value is too small for the system, clients may be unnecessarily evicted You can read the current value of the parameter in the standard manner with lctl get_param: @@ -462,13 +485,16 @@ Imperative recovery can also be disabled on the client side with the same mount option: # mount -t lustre -onoir mymgsnid@tcp:/testfs /mnt/testfs When a single client is deactivated in this manner, the MGS will deactivate imperative recovery for the whole cluster. IR-enabled clients will still get notification of target restart, but targets will not be allowed to shorten the recovery window. - You can also disable imperative recovery globally on the MGS by writing `state=disabled’ to the controlling procfs entry + You can also disable imperative recovery globally on the MGS + by writing state=disabled to the parameter: # lctl set_param mgs.MGS.live.testfs="state=disabled" - The above command will disable imperative recovery for file system named testfs + The above command will disable imperative recovery for the file + system named testfs
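The recovery-window formula quoted in this hunk is simple arithmetic; a quick sketch (the 300-second recovery time is an illustrative value, not a Lustre default):

```python
def ir_recovery_window(recovery_time, ir_factor):
    """Shortened recovery window under imperative recovery.

    Implements the formula from the manual text:
        new timeout = recovery_time * ir_factor / 10
    ir_factor must be in the range [1, 10]; the default is 5.
    """
    if not 1 <= ir_factor <= 10:
        raise ValueError("ir_factor must be in [1, 10]")
    return recovery_time * ir_factor / 10

# With ir_factor=8 the window is 80% of the normal recovery timeout,
# matching the testfs-OST0000 example above:
print(ir_recovery_window(300, 8))  # -> 240.0
```

Setting the value too low risks evicting clients that simply have not reconnected yet, which is why the text warns against small values.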
Checking Imperative Recovery State - MGS - You can get the imperative recovery state from the MGS. Let’s take an example and explain states of imperative recovery: + You can get the imperative recovery state from the MGS. + Let us take an example to explain the states of imperative recovery: [mgs]$ lctl get_param mgs.MGS.live.testfs ... @@ -575,9 +601,11 @@ imperative_recovery_state:
Checking Imperative Recovery State - client - A `client’ in IR means a Lustre client or a MDT. You can get the IR state on any node which - running client or MDT, those nodes will always have an MGC running. An example from a - client: + A 'client' in IR means a Lustre client or an MDT. + You can get the IR state on any node running a client or MDT; + those nodes will always have an MGC running. + An example from a client: + [client]$ lctl get_param mgc.*.ir_state mgc.MGC192.168.127.6@tcp.ir_state= @@ -614,7 +642,9 @@ client_state: - imperative_recoverycan be ON or OFF. If it’s OFF state, then IR is disabled by administrator at mount time. Normally this should be ON state. + imperative_recovery can be ON or OFF. + If it is in the OFF state, IR was disabled at mount time. + Normally this should be in the ON state. diff --git a/LustreSharedSecretKey.xml index f30ec6f..8d0a96c 100644 --- a/LustreSharedSecretKey.xml +++ b/LustreSharedSecretKey.xml @@ -785,11 +785,11 @@ mgsnode:/testfs /mnt/testfs keyring infrastructure to maintain keys as well as to perform the upcall from kernel space to userspace for key negotiation/establishment. The GSS keyring establishes a key type - (see “request-key(8)”) named lgssc when the Lustre - ptlrpc_gss kernel module is loaded. When a security - context must be established it creates a key and uses the - request-key binary in an upcall to establish the - key. This key will look for the configuration file in + (see request-key(8)) named lgssc + when the Lustre ptlrpc_gss kernel module is loaded. + When a security context must be established it creates a key and uses the + request-key binary in an upcall to establish the key. + This key will look for the configuration file in /etc/request-key.d with the name keytype.conf, for Lustre this is lgssc.conf. @@ -997,10 +997,11 @@ mgsnode:/testfs /mnt/testfs All keys for Lustre use the user type for - keys and are attached to the user’s keyring. This is not - configurable.
Below is an example showing how to list the user’s - keyring, load a key file, read the key, and clear the key from the - kernel keyring. + keys and are attached to the user's keyring. + This is not configurable. + Below is an example showing how to list the user's keyring, load + a key file, read the key, and clear the key from the kernel keyring. + client# keyctl show Session Keyring 17053352 --alswrv 0 0 keyring: _ses @@ -1137,12 +1138,12 @@ Session Keyring the key found in the user keyring matching the description, the nodemap name is read from the key, hashed with SHA256, and sent to the server. - Servers look up the client’s NID to determine which nodemap the NID - is associated with and sends the nodemap name to + Servers look up the client's NID to determine which nodemap + the NID is associated with and sends the nodemap name to lsvcgssd. The lsvcgssd daemon verifies whether the HMAC equals the nodemap value sent by the client. - This prevents forgery and invalidates the key when a client’s NID is not - associated with the nodemap name defined on the servers. + This prevents forgery and invalidates the key when a client's NID + is not associated with the nodemap name defined on the servers. It is not required to activate the Nodemap feature in order for SSK to perform client NID to nodemap name lookups.
@@ -1214,10 +1215,10 @@ client2# echo create lgssc \* \* /usr/sbin/lgss_keyring %o %k %t %d %c %u %g %T Configure the lsvcgss daemon on the MDS and OSS. Set the LSVCGSSDARGS variable in /etc/sysconfig/lsvcgss on the MDS to - ‘-s -m’. On the OSS, set the + "-s -m". On the OSS, set the LSVCGSSDARGS variable in /etc/sysconfig/lsvcgss to - ‘-s -o’ + "-s -o". Start the lsvcgssd daemon on the MDS and @@ -1299,8 +1300,8 @@ mount.lustre: mount 172.16.0.1@tcp:/testfs at /mnt/testfs failed: Connection ref cli1# mount -t lustre -o mgssec=skpi,skpath=/secure_directory 172.16.0.1@tcp:/testfs /mnt/testfs - Verify that client1’s MGC connection is using the SSK mechanism - and skpi security flavor. See + Verify that client1's MGC connection is using the SSK + mechanism and skpi security flavor. See . @@ -1380,8 +1381,9 @@ oss# lgss_sk -l /secure_directory/testfs.LustreServers.client.key
Viewing Secure PtlRPC Contexts From the client (or servers which have mgc, osc, mdc contexts) you - can view info regarding all users’ contexts and the flavor in use for an - import. For user’s contexts (srpc_context), SSK and gssnull only support + can view info regarding all users' contexts and the flavor in use for + an import. + For user's contexts (srpc_context), SSK and gssnull only support a single root UID so there should only be one context. The other file in the import (srpc_info) has additional sptlrpc details. The rpc and bulk flavors allow you to diff --git a/LustreTuning.xml b/LustreTuning.xml index 4087d86..10c7661 100644 --- a/LustreTuning.xml +++ b/LustreTuning.xml @@ -327,7 +327,7 @@ ko2iblnd credits=256 forwarded to the next hop. The three different buffer sizes accommodate different size messages. If a message arrives that can fit in a tiny buffer then a tiny - buffer is used, if a message doesn’t fit in a tiny buffer, but fits in a + buffer is used, if a message does not fit in a tiny buffer, but fits in a small buffer, then a small buffer is used. Finally if a message does not fit in either a tiny buffer or a small buffer, a large buffer is used. @@ -457,7 +457,7 @@ lnet large_router_buffers=8192 - avoid_asym_router_failure– When set to 1, + avoid_asym_router_failure - When set to 1, this parameter adds the additional requirement that for a route to be considered up the gateway of the route must have at least one NI up on the remote network of the route. diff --git a/ManagingLNet.xml b/ManagingLNet.xml index 40189ff..22ab46e 100644 --- a/ManagingLNet.xml +++ b/ManagingLNet.xml @@ -230,7 +230,7 @@ ents" All clients and all servers must get two rails of bandwidth. 
- ip2nets=†o2ib0(ib0),o2ib2(ib1) 192.168.[0-1].[0-252/2] \ + ip2nets="o2ib0(ib0),o2ib2(ib1) 192.168.[0-1].[0-252/2] \ #even servers;\ o2ib1(ib0),o2ib3(ib1) 192.168.[0-1].[1-253/2] \ #odd servers;\ diff --git a/ManagingSecurity.xml b/ManagingSecurity.xml index 262f29c..c672304 100644 --- a/ManagingSecurity.xml +++ b/ManagingSecurity.xml @@ -170,8 +170,8 @@ other::--- the same file system beyond normal Unix permissions/ACLs, even when users on the clients may have root access. Those tenants share the same file system, but they are isolated from each other: they cannot access or even - see each other’s files, and are not aware that they are sharing common - file system resources. + see each other's files, and are not aware that they are sharing + common file system resources. Lustre Isolation leverages the Fileset feature () to mount only a subdirectory of the filesystem rather than the root @@ -241,9 +241,9 @@ mgs# lctl set_param -P nodemap.tenant1.fileset=/dir1 Checking SELinux Policy Enforced by Lustre Clients SELinux provides a mechanism in Linux for supporting Mandatory Access Control (MAC) policies. When a MAC policy is enforced, the operating - system’s (OS) kernel defines application rights, firewalling applications - from compromising the entire system. Regular users do not have the ability to - override the policy. + system's (OS) kernel defines application rights, + firewalling applications from compromising the entire system. + Regular users do not have the ability to override the policy. One purpose of SELinux is to protect the OS from privilege escalation. To that extent, SELinux defines confined and unconfined domains for processes and @@ -858,7 +858,9 @@ f3cc1b5cf9b8f41c No custom protector "bunker" eavesdropped during network transfer. - Kerberos uses the “kernel keyring” client upcall mechanism. + + Kerberos uses the "kernel keyring" client upcall mechanism. +
Security Flavor diff --git a/ManagingStripingFreeSpace.xml b/ManagingStripingFreeSpace.xml index 641a930..126bd3a 100644 --- a/ManagingStripingFreeSpace.xml +++ b/ManagingStripingFreeSpace.xml @@ -537,14 +537,18 @@ osc.lustre-OST0002-osc.ost_conn_uuid=192.168.20.1@tcp [--component-end|-E end1] [STRIPE_OPTIONS] [--component-end|-E end2] [STRIPE_OPTIONS] ... filename The -E option is used to specify the end offset - (in bytes or using a suffix “kMGTP”, e.g. 256M) of each component, and - it also indicates the following STRIPE_OPTIONS are - for this component. Each component defines the stripe pattern of the + (in bytes or using a suffix kMGTP e.g. 256M) + of each component, + and it also indicates the following STRIPE_OPTIONS + are for this component. + Each component defines the stripe pattern of the file in the range of [start, end). The first component must start from offset 0 and all components must be adjacent with each other, no holes are allowed, so each extent will start at the end of previous extent. A -1 end offset or eof indicates - this is the last component extending to the end of file. + this is the last component extending to the end of file. + If no EOF + Example $ lfs setstripe -E 4M -c 1 -E 64M -c 4 -E -1 -c -1 -i 4 \ /mnt/testfs/create_comp @@ -600,7 +604,7 @@ osc.lustre-OST0002-osc.ost_conn_uuid=192.168.20.1@tcp lmm_pattern: 1 lmm_layout_gen: 0 lmm_stripe_offset: 4 - Only the first component’s OST objects of the PFL file are + Only the first component's OST objects of the PFL file are instantiated when the layout is being set. Other instantiation is delayed to later write/truncate operations. If we write 128M data to this PFL file, the second and third @@ -1344,7 +1348,7 @@ $ lfs setstripe -c 1 /mnt/testfs/testdir/dir_3comp/commnfile /mnt/testfs/testdir/4comp /mnt/testfs/testdir/dir_3comp/2comp Since lfs find uses - "!" to do negative search, we don’t support + "!" to do negative search, we don't support flag ^init here.
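The lfs find component-flag search discussed in the hunk above can be illustrated with a short hedged sketch (the mount point is hypothetical); as the hunk notes, lfs find negates an option with a leading "!" rather than accepting a ^init flag:

```shell
# Find files that have at least one instantiated component:
lfs find /mnt/testfs --component-flags init

# Negation in lfs find uses "!" before the option, since the
# "^init" flag syntax is not supported here:
lfs find /mnt/testfs ! --component-flags init
```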
@@ -1361,8 +1365,9 @@ $ lfs setstripe -c 1 /mnt/testfs/testdir/dir_3comp/commnfile applications are writing to them. Whereas PFL delays the instantiation of some components until an IO operation occurs on this region, SEL allows splitting such non-instantiated - components in two parts: an “extendable” component and an “extension” - component. The extendable component is a regular PFL component, covering + components in two parts: an extendable component and + an extension component. + The extendable component is a regular PFL component, covering just a part of the region, which is small originally. The extension (or SEL) component is a new component type which is always non-instantiated and unassigned, covering the other part of the region. When a write reaches this @@ -1379,29 +1384,29 @@ $ lfs setstripe -c 1 /mnt/testfs/testdir/dir_3comp/commnfile ways: - Extension: continue on the same OSTs – used when not low on space + Extension: continue on the same OST objects when not low on space on any of the OSTs of the current component; a particular extent is granted to the extendable component. - Spill over: switch to next component OSTs – it is used only for - not the last component when at least one + Spill over: switch to next component OSTs only + when not the last component and at least one of the current OSTs is low on space; the whole region of the SEL component moves to the next component and the SEL component is removed in its turn. Repeating: create a new component with the same layout but on - free OSTs – it is used only for the last component when + free OSTs, used only for the last component when at least one of the current OSTs is low on space; a new component has the same layout but instantiated on different OSTs (from the same pool) which have enough space. 
- Forced extension: continue with the current component OSTs despite - the low on space condition – it is used only for the last component when - a repeating attempt detected low on space condition as well - spillover - is impossible and there is no sense in the repeating. + Forced extension: continue with the current component OSTs when + there is a low on space condition for the last component OSTs, but + a repeating attempt detected low on space on other OSTs as well, then + spillover is impossible and there is no sense in the repeating. Each spill event increments the spill_hit @@ -1874,7 +1879,8 @@ STRIPE OPTIONS: The -z option is added to specify the extension size to search for. The files which have any component with the extension size matched the given criteria are printed out. As always - “+” and “-“ signs are allowed to specify the least and the most size. + + and - signs are allowed to + specify the least and the most size. A new extension component flag is added. Only files which have at least one SEL component are printed. diff --git a/UpgradingLustre.xml b/UpgradingLustre.xml index 5ba6137..b3de894 100644 --- a/UpgradingLustre.xml +++ b/UpgradingLustre.xml @@ -232,7 +232,7 @@ (Optional) If you are upgrading from a release before Lustre 2.10, to enable the project quota feature enter the following on every ldiskfs backend target while unmounted: - tune2fs –O project /dev/dev + tune2fs -O project /dev/dev Enabling the project feature will prevent the filesystem from being used by older versions of ldiskfs, so it diff --git a/ZFSSnapshots.xml b/ZFSSnapshots.xml index b152d90..a8b28e3 100644 --- a/ZFSSnapshots.xml +++ b/ZFSSnapshots.xml @@ -59,7 +59,7 @@ overwritten files from being released until the snapshot(s) referencing those files is deleted. The file system administrator needs to establish a snapshot create/backup/remove policy according to - their system’s actual size and usage. + their system's actual size and usage.
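Returning to the self-extending layout changes earlier in this patch, a hedged sketch of creating and inspecting a SEL file follows (the path and sizes are hypothetical; -z sets the per-component extension size, and the spill-over/repeat/forced-extension behaviour above applies when OSTs run low on space):

```shell
# First component self-extends in 64M steps up to 1G; the last
# component stripes over 4 OSTs and self-extends in 256M steps to EOF,
# spilling over or repeating on other OSTs when space runs low.
lfs setstripe -E 1G -z 64M -E -1 -c 4 -z 256M /mnt/testfs/sel_file

# Show the composite layout, including the extension components:
lfs getstripe /mnt/testfs/sel_file
```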
@@ -523,7 +523,7 @@ comment] -F | --fsname fsname> [-h | --help] -n | --name ssname> modified. Renaming follows the general ZFS snapshot name rules, such as the maximum length is 256 bytes, cannot conflict with the reserved names, and so on. - To modify a snapshot’s attributes, use the following + To modify a snapshot's attributes, use the following lctl command on the MGS: lctl snapshot_modify [-c | --comment comment] <-F | --fsname fsname> [-h | --help] <-n | --name ssname> @@ -607,11 +607,12 @@ comment] -F | --fsname fsname> [-h | --help] -n | --name ssname> taken, there may be user-visible namespace inconsistencies with files created or destroyed in the interval between the MDT and OST snapshots. In order to create a consistent snapshot of the file system, we are able - to set a global write barrier, or “freeze” the system. Once set, all - metadata modifications will be blocked until the write barrier is actively - removed (“thawed”) or expired. The user can set a timeout parameter on a - global barrier or the barrier can be explicitly removed. The default - timeout period is 30 seconds. + to set a global write barrier, or "freeze" the system. + Once set, all metadata modifications will be blocked until the write + barrier is actively removed ("thawed") or expired. + The user can set a timeout parameter on a + global barrier or the barrier can be explicitly removed. + The default timeout period is 30 seconds. It is important to note that snapshots are usable without the global barrier. Only files that are currently being modified by clients (write, create, unlink) may be inconsistent as noted above if the barrier is not @@ -774,18 +775,21 @@ The barrier will be expired after 7 seconds - If the barrier is in ’freezing_p1’, ’freezing_p2’ or ’frozen’ - status, then the remaining lifetime will be returned also. + If the barrier is in freezing_p1, + freezing_p2 or frozen status, + then the remaining lifetime will be returned also.
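The freeze/thaw cycle described in the ZFSSnapshots hunks above can be sketched as the following hedged command sequence, run on the MGS (the filesystem name, snapshot name, and timeout are hypothetical):

```shell
# Block metadata modifications for up to 60 seconds (default is 30):
lctl barrier_freeze testfs 60

# Status passes through freezing_p1 and freezing_p2 to frozen,
# and reports the remaining lifetime while the barrier is held:
lctl barrier_stat testfs

# Take a consistent snapshot while the barrier is in effect:
lctl snapshot_create -F testfs -n before_maintenance

# Release ("thaw") the barrier explicitly rather than waiting for expiry:
lctl barrier_thaw testfs
```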
<indexterm><primary>barrier</primary>
<secondary>rescan</secondary></indexterm>Rescan
Barrier
To rescan a global write barrier to check which MDTs are
- active, run the lctl barrier_rescan command on the
- MGS:
- lctl barrier_rescan <fsname> [timeout (in seconds)],
-where the default timeout is 30 seconds.
+ active, run lctl barrier_rescan on the MGS
+ (with a default TIMEOUT_SEC of 30s):
+
+
+lctl barrier_rescan FSNAME [TIMEOUT_SEC]
+
For example, to rescan the barrier for filesystem
testfs:
mgs# lctl barrier_rescan testfs
diff --git a/legalnoticeIntel.xml b/legalnoticeIntel.xml
index 582d472..c6f0e31 100644
--- a/legalnoticeIntel.xml
+++ b/legalnoticeIntel.xml
@@ -24,7 +24,7 @@
*Other names and brands may be claimed as the property of others.
- THE ORIGINAL LUSTRE 2.x FILESYSTEM: OPERATIONS MANUAL HAS BEEN MODIFIED: THIS OPERATIONS MANUAL IS A MODIFIED VERSION OF, AND IS DERIVED FROM, THE LUSTRE 2.0 FILESYSTEM: OPERATIONS MANUAL PUBLISHED BY ORACLE AND AVAILABLE AT [http://www.lustre.org/]. MODIFICATIONS (collectively, the “Modifications”) HAVE BEEN MADE BY INTEL CORPORATION (“Intel”). ORACLE AND ITS AFFILIATES HAVE NOT REVIEWED, APPROVED, SPONSORED, OR ENDORSED THIS MODIFIED OPERATIONS MANUAL, OR ENDORSED INTEL, AND ORACLE AND ITS AFFILIATES ARE NOT RESPONSIBLE OR LIABLE FOR ANY MODIFICATIONS THAT INTEL HAS MADE TO THE ORIGINAL OPERATIONS MANUAL.
+ THE ORIGINAL LUSTRE 2.x FILESYSTEM: OPERATIONS MANUAL HAS BEEN MODIFIED: THIS OPERATIONS MANUAL IS A MODIFIED VERSION OF, AND IS DERIVED FROM, THE LUSTRE 2.0 FILESYSTEM: OPERATIONS MANUAL PUBLISHED BY ORACLE AND AVAILABLE AT [http://www.lustre.org/]. MODIFICATIONS (collectively, the "Modifications") HAVE BEEN MADE BY INTEL CORPORATION ("Intel"). ORACLE AND ITS AFFILIATES HAVE NOT REVIEWED, APPROVED, SPONSORED, OR ENDORSED THIS MODIFIED OPERATIONS MANUAL, OR ENDORSED INTEL, AND ORACLE AND ITS AFFILIATES ARE NOT RESPONSIBLE OR LIABLE FOR ANY MODIFICATIONS THAT INTEL HAS MADE TO THE ORIGINAL OPERATIONS MANUAL.
NOTHING IN THIS MODIFIED OPERATIONS MANUAL IS INTENDED TO AFFECT THE NOTICE PROVIDED BY ORACLE BELOW IN RESPECT OF THE ORIGINAL OPERATIONS MANUAL AND SUCH ORACLE NOTICE CONTINUES TO APPLY TO THIS MODIFIED OPERATIONS MANUAL EXCEPT FOR THE MODIFICATIONS; THIS INTEL NOTICE SHALL APPLY ONLY TO MODIFICATIONS MADE BY INTEL. AS BETWEEN YOU AND ORACLE: (I) NOTHING IN THIS INTEL NOTICE IS INTENDED TO AFFECT THE TERMS OF THE ORACLE NOTICE BELOW; AND (II) IN THE EVENT OF ANY CONFLICT BETWEEN THE TERMS OF THIS INTEL NOTICE AND THE TERMS OF THE ORACLE NOTICE, THE ORACLE NOTICE SHALL PREVAIL. diff --git a/legalnoticeOracle.xml b/legalnoticeOracle.xml index 1988a59..cd9c7c2 100644 --- a/legalnoticeOracle.xml +++ b/legalnoticeOracle.xml @@ -17,21 +17,85 @@ Copyright © 2011, Oracle et/ou ses affiliés. Tous droits réservés. - Ce logiciel et la documentation qui l’accompagne sont protégés par les lois sur la propriété intellectuelle. Ils sont concédés sous licence et soumis à des restrictions d’utilisation et de divulgation. Sauf disposition de votre contrat de licence ou de la loi, vous ne pouvez pas copier, reproduire, traduire, diffuser, modifier, breveter, transmettre, distribuer, exposer, exécuter, publier ou afficher le logiciel, même partiellement, sous quelque forme et par quelque procédé que ce soit. Par ailleurs, il est interdit de procéder à toute ingénierie inverse du logiciel, de le désassembler ou de le décompiler, excepté à des fins d’interopérabilité avec des logiciels tiers ou tel que prescrit par la loi. - - Les informations fournies dans ce document sont susceptibles de modification sans préavis. Par ailleurs, Oracle Corporation ne garantit pas qu’elles soient exemptes d’erreurs et vous invite, le cas échéant, à lui en faire part par écrit. 
- - Si ce logiciel, ou la documentation qui l’accompagne, est concédé sous licence au Gouvernement des Etats-Unis, ou à toute entité qui délivre la licence de ce logiciel ou l’utilise pour le compte du Gouvernement des Etats-Unis, la notice suivante s’applique : - - U.S. GOVERNMENT RIGHTS. Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065. - - Ce logiciel ou matériel a été développé pour un usage général dans le cadre d’applications de gestion des informations. Ce logiciel ou matériel n’est pas conçu ni n’est destiné à être utilisé dans des applications à risque, notamment dans des applications pouvant causer des dommages corporels. Si vous utilisez ce logiciel ou matériel dans le cadre d’applications dangereuses, il est de votre responsabilité de prendre toutes les mesures de secours, de sauvegarde, de redondance et autres mesures nécessaires à son utilisation dans des conditions optimales de sécurité. Oracle Corporation et ses affiliés déclinent toute responsabilité quant aux dommages causés par l’utilisation de ce logiciel ou matériel pour ce type d’applications. - - Oracle et Java sont des marques déposées d’Oracle Corporation et/ou de ses affiliés.Tout autre nom mentionné peut correspondre à des marques appartenant à d’autres propriétaires qu’Oracle. 
- - AMD, Opteron, le logo AMD et le logo AMD Opteron sont des marques ou des marques déposées d’Advanced Micro Devices. Intel et Intel Xeon sont des marques ou des marques déposées d’Intel Corporation. Toutes les marques SPARC sont utilisées sous licence et sont des marques ou des marques déposées de SPARC International, Inc. UNIX est une marque déposée concédée sous licence par X/Open Company, Ltd. - - Ce logiciel ou matériel et la documentation qui l’accompagne peuvent fournir des informations ou des liens donnant accès à des contenus, des produits et des services émanant de tiers. Oracle Corporation et ses affiliés déclinent toute responsabilité ou garantie expresse quant aux contenus, produits ou services émanant de tiers. En aucun cas, Oracle Corporation et ses affiliés ne sauraient être tenus pour responsables des pertes subies, des coûts occasionnés ou des dommages causés par l’accès à des contenus, produits ou services tiers, ou à leur utilisation. + Ce logiciel et la documentation qui l'accompagne sont + protégés par les lois sur la propriété intellectuelle. + Ils sont concédés sous licence et soumis à des restrictions + d'utilisation et de divulgation. + Sauf disposition de votre contrat de licence ou de la loi, + vous ne pouvez pas copier, reproduire, traduire, diffuser, + modifier, breveter, transmettre, distribuer, exposer, exécuter, + publier ou afficher le logiciel, même partiellement, + sous quelque forme et par quelque procédé que ce soit. + Par ailleurs, il est interdit de procéder à toute ingénierie + inverse du logiciel, de le désassembler ou de le décompiler, + excepté à des fins d'interopérabilité avec des + logiciels tiers ou tel que prescrit par la loi. + + Les informations fournies dans ce document sont susceptibles + de modification sans préavis. + Par ailleurs, Oracle Corporation ne garantit pas qu'elles + soient exemptes d'erreurs et vous invite, le cas échéant, + à lui en faire part par écrit. 
+ + Si ce logiciel, ou la documentation qui l'accompagne, + est concédé sous licence au Gouvernement des Etats-Unis, + ou à toute entité qui délivre la licence de ce logiciel ou + l'utilise pour le compte du Gouvernement des Etats-Unis, + la notice suivante s'applique : + + U.S. GOVERNMENT RIGHTS. Programs, software, databases, + and related documentation and technical data delivered to U.S. + Government customers are "commercial computer software" or + "commercial technical data" pursuant to the applicable Federal + Acquisition Regulation and agency-specific supplemental regulations. + As such, the use, duplication, disclosure, modification, + and adaptation shall be subject to the restrictions and license + terms set forth in the applicable Government contract, and, + to the extent applicable by the terms of the Government contract, + the additional rights set forth in FAR 52.227-19, + Commercial Computer Software License (December 2007). + Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065. + + Ce logiciel ou matériel a été développé pour un usage + général dans le cadre d'applications de gestion des informations. + Ce logiciel ou matériel n'est pas conçu ni n'est + destiné à être utilisé dans des applications à risque, + notamment dans des applications pouvant causer des dommages corporels. + Si vous utilisez ce logiciel ou matériel dans le cadre + d'applications dangereuses, + il est de votre responsabilité de prendre toutes les mesures de secours, + de sauvegarde, de redondance et autres mesures nécessaires + à son utilisation dans des conditions optimales de sécurité. + Oracle Corporation et ses affiliés déclinent toute + responsabilité quant aux dommages causés par l'utilisation + de ce logiciel ou matériel pour ce type d'applications. + + Oracle et Java sont des marques déposées + d'Oracle Corporation et/ou de ses affiliés. + Tout autre nom mentionné peut correspondre à des marques + appartenant à d'autres propriétaires qu'Oracle. 
+ + AMD, Opteron, le logo AMD et le logo AMD Opteron sont des + marques ou des marques déposées d'Advanced Micro Devices. + Intel et Intel Xeon sont des marques ou des marques + déposées d'Intel Corporation. + Toutes les marques SPARC sont utilisées sous licence et sont + des marques ou des marques déposées de SPARC International, Inc. + UNIX est une marque déposée concédée sous licence par X/Open Company, + Ltd. + + Ce logiciel ou matériel et la documentation qui + l'accompagne peuvent fournir des informations ou des + liens donnant accès à des contenus, + des produits et des services émanant de tiers. + Oracle Corporation et ses affiliés déclinent toute + responsabilité ou garantie expresse quant aux contenus, + produits ou services émanant de tiers. + En aucun cas, Oracle Corporation et ses affiliés ne + sauraient être tenus pour responsables des pertes subies, + des coûts occasionnés ou des dommages causés par + l'accès à des contenus, produits ou services tiers, + ou à leur utilisation. This work is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License. To view a copy of this license and obtain more information about Creative Commons licensing, visit Creative Commons Attribution-Share Alike 3.0 United States or send a letter to Creative Commons, 171 2nd Street, Suite 300, San Francisco, California 94105, USA. -- 1.8.3.1