From 21e1355c70b3e216f2d085ccbeb0a32e4d47ef4d Mon Sep 17 00:00:00 2001 From: Richard Henwood Date: Tue, 17 May 2011 16:34:08 -0500 Subject: [PATCH] FIX: xrefs --- ConfiguringLustre.xml | 338 ++++++++++++++++++++++++-------------------------- 1 file changed, 161 insertions(+), 177 deletions(-) diff --git a/ConfiguringLustre.xml b/ConfiguringLustre.xml index 9a0a990..b06a684 100644 --- a/ConfiguringLustre.xml +++ b/ConfiguringLustre.xml @@ -1,181 +1,144 @@ - + Configuring Lustre + + + This chapter shows how to configure a simple Lustre system comprised of a combined MGS/MDT, an OST and a client. It includes: - Configuring a Simple Lustre File System - - - - - - Additional Configuration Options - - - - - -
- <anchor xml:id="dbdoclet.50438267_pgfId-1290874" xreflabel=""/> -
- 10.1 <anchor xml:id="dbdoclet.50438267_50692" xreflabel=""/>Configuring a Simple Lustre File System - A Lustre system can be set up in a variety of configurations by using the administrative utilities provided with Lustre. The procedure below shows how to configure a simple Lustre file system consisting of a combined MGS/MDS, one OSS with two OSTs, and a client. For an overview of the entire Lustre installation procedure, see Chapter 4: Installation Overview. + + + + + + + + + +
+ 10.1 Configuring a Simple Lustre File System + A Lustre system can be set up in a variety of configurations by using the administrative utilities provided with Lustre. The procedure below shows how to configure a simple Lustre file system consisting of a combined MGS/MDS, one OSS with two OSTs, and a client. For an overview of the entire Lustre installation procedure, see . This configuration procedure assumes you have completed the following: - Set up and configured your hardware. For more information about hardware requirements, see Chapter 5: Setting Up a Lustre File System. - - - + Set up and configured your hardware. For more information about hardware requirements, see . - Downloaded and installed the Lustre software. For more information about preparing for and installing the Lustre software, see Chapter 8: Installing the Lustre Software. - - - + Downloaded and installed the Lustre software. For more information about preparing for and installing the Lustre software, see . The following optional steps should also be completed, if needed, before the Lustre software is configured: - Set up a hardware or software RAID on block devices to be used as OSTs or MDTs. For information about setting up RAID, see the documentation for your RAID controller or Chapter 6: Configuring Storage on a Lustre File System. - - - - - - Set up network interface bonding on Ethernet interfaces. For information about setting up network interface bonding, see Chapter 7: Setting Up Network Interface Bonding. + Set up a hardware or software RAID on block devices to be used as OSTs or MDTs. For information about setting up RAID, see the documentation for your RAID controller or . - + Set up network interface bonding on Ethernet interfaces. For information about setting up network interface bonding, see . Set lnet module parameters to specify how Lustre Networking (LNET) is to be configured to work with Lustre, and test the LNET configuration. LNET will, by default, use the first TCP/IP interface it discovers on a system. If this network configuration is sufficient, you do not need to configure LNET. LNET configuration is required if you are using InfiniBand or multiple Ethernet interfaces; a sample module setting is sketched after this list. - - - - For information about configuring LNET, see Chapter 9: Configuring Lustre Networking (LNET). For information about testing LNET, see Chapter 23: Testing Lustre Network Performance (LNET Self-Test). +For information about configuring LNET, see . For information about testing LNET, see . - Run the benchmark script sgpdd_survey to determine baseline performance of your hardware. Benchmarking your hardware will simplify debugging performance issues that are unrelated to Lustre and ensure you are getting the best possible performance with your installation. For information about running sgpdd_survey, see Testing I/O Performance of Raw Hardware (sgpdd_survey). - - - + Run the benchmark script sgpdd_survey to determine baseline performance of your hardware. Benchmarking your hardware will simplify debugging performance issues that are unrelated to Lustre and ensure you are getting the best possible performance with your installation. For information about running sgpdd_survey, see . - - - - - - Note -The sgpdd_survey script overwrites the device being tested, so it must be run before the OSTs are configured. - - - - + + + +The sgpdd_survey script overwrites the device being tested, so it must be run before the OSTs are configured. + +
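The LNET bullet above mentions module parameters. As a minimal sketch, assuming a RHEL 5-era system where module options live in /etc/modprobe.conf and a hypothetical second Ethernet interface eth1 that should carry Lustre traffic, the setting might look like this:

# /etc/modprobe.conf (assumed location; newer systems use files under /etc/modprobe.d/)
# Run the LNET network tcp0 over eth1 rather than the first
# TCP/IP interface that LNET discovers.
options lnet networks=tcp0(eth1)

An InfiniBand fabric would use an o2ib entry instead, for example networks=o2ib0(ib0). After changing this setting, reload the lnet module and verify the result with lctl list_nids.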
To configure a simple Lustre file system, complete these steps: - 1. Create a combined MGS/MDT file system on a block device. On the MDS node, run: + + + Create a combined MGS/MDT file system on a block device. On the MDS node, run: mkfs.lustre --fsname=<fsname> --mgs --mdt <block device name> - The default file system name (fsname) is lustre. - - - - - - Note -If you plan to generate multiple file systems, the MGS should be created separately on its own dedicated block device, by running: mkfs.lustre --fsname=<fsname> --mgs <block device name> - - - - - 2. Mount the combined MGS/MDT file system on the block device. On the MDS node, run: + + + +If you plan to generate multiple file systems, the MGS should be created separately on its own dedicated block device, by running: mkfs.lustre --fsname=<fsname> --mgs <block device name> + + + + + + + Mount the combined MGS/MDT file system on the block device. On the MDS node, run: mount -t lustre <block device name> <mount point> - - - - - - Note -If you have created an MGS and an MDT on separate block devices, mount them both. - - - - - 3. Create the OST. On the OSS node, run: + + +If you have created an MGS and an MDT on separate block devices, mount them both. + + + + + + Create the OST. On the OSS node, run: mkfs.lustre --ost --fsname=<fsname> --mgsnode=<NID> <block device name> When you create an OST, you are formatting an ldiskfs file system on a block storage device as you would with any local file system. - You can have as many OSTs per OSS as the hardware or drivers allow. For more information about storage and memory requirements for a Lustre file system, see Chapter 5: Setting Up a Lustre File System. + You can have as many OSTs per OSS as the hardware or drivers allow. For more information about storage and memory requirements for a Lustre file system, see . You can only configure one OST per block device. You should create an OST that uses the raw block device and does not use partitioning. - If you are using block devices that are accessible from multiple OSS nodes, ensure that you mount the OSTs from only one OSS node at a time. It is strongly recommended that multiple-mount protection be enabled for such devices to prevent serious data corruption. For more information about multiple-mount protection, see Lustre Failover and Multiple-Mount Protection. - - - - - - Note -Lustre currently supports block devices up to 16 TB on OEL 5/RHEL 5 (up to 8 TB on other distributions). If the device size is only slightly larger than 16 TB, it is recommended that you limit the file system size to 16 TB at format time. If the size is significantly larger than 16 TB, you should reconfigure the storage into devices smaller than 16 TB. We recommend that you not place partitions on top of RAID 5/6 block devices due to negative impacts on performance. - - - - - 4. Mount the OST. On the OSS node where the OST was created, run: + If you are using block devices that are accessible from multiple OSS nodes, ensure that you mount the OSTs from only one OSS node at a time. It is strongly recommended that multiple-mount protection be enabled for such devices to prevent serious data corruption. For more information about multiple-mount protection, see .
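As a sketch of the multiple-mount protection point above, assuming an ldiskfs OST on a hypothetical shared device /dev/sdc that is currently unmounted: MMP is an ext4 feature flag, so it can be enabled and checked with standard tooling (mkfs.lustre typically enables it automatically when a failover node is specified at format time).

# Enable the MMP feature on an already-formatted, unmounted ldiskfs target
tune2fs -O mmp /dev/sdc
# Confirm that mmp now appears in the superblock feature list
dumpe2fs -h /dev/sdc | grep -i mmp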
+ + +Lustre currently supports block devices up to 16 TB on OEL 5/RHEL 5 (up to 8 TB on other distributions). If the device size is only slightly larger than 16 TB, it is recommended that you limit the file system size to 16 TB at format time. If the size is significantly larger than 16 TB, you should reconfigure the storage into devices smaller than 16 TB. We recommend that you not place partitions on top of RAID 5/6 block devices due to negative impacts on performance. + + + + + + Mount the OST. On the OSS node where the OST was created, run: mount -t lustre <block device name> <mount point> - - - - - - Note -To create additional OSTs, repeat Step 3 and Step 4. - - - - - 5. Mount the Lustre file system on the client. On the client node, run: + + + + To create additional OSTs, repeat Step 3 and Step 4. + + + + + Mount the Lustre file system on the client. On the client node, run: mount -t lustre <MGS node>:/<fsname> <mount point> - - - - - - Note -To create additional clients, repeat Step 5. - - - - - 6. Verify that the file system started and is working correctly. Do this by running the lfs df, dd and ls commands on the client node. - - - - - - Note -If you have a problem mounting the file system, check the syslogs on the client and all the servers for errors and also check the network settings. A common issue with newly installed systems is that hosts.deny or firewall rules may prevent connections on port 988. - - - - - 7. (Optional) Run benchmarking tools to validate the performance of hardware and software layers in the cluster. Available tools include: + + + To create additional clients, repeat Step 5. + + + + + Verify that the file system started and is working correctly. Do this by running the lfs df, dd and ls commands on the client node. + + +If you have a problem mounting the file system, check the syslogs on the client and all the servers for errors and also check the network settings. A common issue with newly installed systems is that hosts.deny or firewall rules may prevent connections on port 988; a quick LNET connectivity check is sketched after these steps. + + + + + + (Optional) Run benchmarking tools to validate the performance of hardware and software layers in the cluster. Available tools include: + - obdfilter_survey - Characterizes the storage performance of a Lustre file system. For details, see Testing OST Performance (obdfilter_survey). - - - + obdfilter_survey - Characterizes the storage performance of a Lustre file system. For details, see Testing OST Performance (obdfilter_survey). - ost_survey - Performs I/O against OSTs to detect anomalies between otherwise identical disk subsystems. For details, see Testing OST I/O Performance (ost_survey). - - - + ost_survey - Performs I/O against OSTs to detect anomalies between otherwise identical disk subsystems. For details, see Testing OST I/O Performance (ost_survey). + + +
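If a mount fails as described in the note in Step 6, the first thing to confirm is LNET connectivity between the nodes. A minimal check, assuming the MGS NID 10.2.0.1@tcp0 used in the example that follows:

# On the server, list the NIDs LNET is actually using
lctl list_nids
# From the client, ping the MGS NID at the LNET level
lctl ping 10.2.0.1@tcp0

If lctl ping fails while an ordinary TCP/IP ping succeeds, suspect the port 988 firewall or hosts.deny issues mentioned above.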
<anchor xml:id="dbdoclet.50438267_pgfId-1290956" xreflabel=""/>10.1.1 Simple Lustre <anchor xml:id="dbdoclet.50438267_marker-1290955" xreflabel=""/>Configuration Example To see the steps in a simple Lustre configuration, follow this example in which a combined MGS/MDT and two OSTs are created. Three block devices are used, one for the combined MGS/MDS node and one for each OSS node. Common parameters used in the example are listed below, along with individual node parameters. - + @@ -315,17 +278,19 @@ - - - - - - Note -We recommend that you use “dotted-quad†notation for IP addresses rather than host names to make it easier to read debug logs and debug configurations with multiple interfaces. - - - - + + + + + +We recommend that you use 'dotted-quad' notation for IP addresses rather than host names to make it easier to read debug logs and debug configurations with multiple interfaces. + + For this example, complete the steps below: + + + + 1. Create a combined MGS/MDT file system on the block device. On the MDS node, run: [root@mds /]# mkfs.lustre --fsname=temp --mgs --mdt /dev/sdb @@ -351,6 +316,9 @@ dir_index,uninit_groups -F /dev/sdb Writing CONFIGS/mountdata + + + 2. Mount the combined MGS/MDT file system on the block device. On the MDS node, run: [root@mds /]# mount -t lustre /dev/sdb /mnt/mdt @@ -361,8 +329,12 @@ oup upcall set to /usr/sbin/l_getgroups Lustre: temp-MDT0000.mdt: set parameter group_upcall=/usr/sbin/l_getgroups Lustre: Server temp-MDT0000 on device /dev/sdb has started + + 3. Create and mount ost1. In this example, the OSTs (ost1 and ost2) are being created on different OSSs (oss1 and oss2 respectively). + + a. Create ost1. On oss1 node, run: [root@oss1 /]# mkfs.lustre --ost --fsname=temp --mgsnode=10.2.0.1@tcp0 /dev\ /sdc @@ -389,6 +361,8 @@ oup upcall set to /usr/sbin/l_getgroups dir_index,uninit_groups -F /dev/sdc Writing CONFIGS/mountdata + + b. Mount ost1 on the OSS on which it was created. On oss1 node, run: root@oss1 /] mount -t lustre /dev/sdc /mnt/ost1 @@ -402,7 +376,13 @@ oup upcall set to /usr/sbin/l_getgroups Lustre: temp-OST0000: received MDS connection from 10.2.0.1@tcp0 Lustre: MDS temp-MDT0000: temp-OST0000_UUID now active, resetting orphans + + + + 4. Create and mount ost2. + + a. Create ost2. On oss2 node, run: [root@oss2 /]# mkfs.lustre --ost --fsname=temp --mgsnode=10.2.0.1@tcp0 /dev\ /sdd @@ -429,6 +409,8 @@ oup upcall set to /usr/sbin/l_getgroups dir_index,uninit_groups -F /dev/sdc Writing CONFIGS/mountdata + + b. Mount ost2 on the OSS on which it was created. On oss2 node, run: root@oss2 /] mount -t lustre /dev/sdd /mnt/ost2 @@ -442,13 +424,21 @@ oup upcall set to /usr/sbin/l_getgroups Lustre: temp-OST0000: received MDS connection from 10.2.0.1@tcp0 Lustre: MDS temp-MDT0000: temp-OST0000_UUID now active, resetting orphans + + + + 5. Mount the Lustre file system on the client. On the client node, run: root@client1 /] mount -t lustre 10.2.0.1@tcp0:/temp /lustre This command generates this output: Lustre: Client temp-client has started + + 6. Verify that the file system started and is working by running the df, dd and ls commands on the client node. + + a. Run the lfsdf -h command: [root@client1 /] lfs df -h @@ -465,6 +455,8 @@ oup upcall set to /usr/sbin/l_getgroups 0% /lustre + + b. Run the lfsdf-ih command. [root@client1 /] lfs df -ih @@ -477,6 +469,8 @@ oup upcall set to /usr/sbin/l_getgroups filesystem summary: 2.5M 32 2.5M 0% /lustre + + c. 
Run the dd command: [root@client1 /] cd /lustre [root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2 @@ -486,6 +480,10 @@ oup upcall set to /usr/sbin/l_getgroups 2+0 records out 8388608 bytes (8.4 MB) copied, 0.159628 seconds, 52.6 MB/s + + + + d. Run the ls command: [root@client1 /lustre] ls -lsah @@ -496,25 +494,29 @@ oup upcall set to /usr/sbin/l_getgroups 8.0M -rw-r--r-- 1 root root 8.0M Oct 16 15:27 zero.dat + + + + Once the Lustre file system is configured, it is ready for use.
-
- 10.2 <anchor xml:id="dbdoclet.50438267_76752" xreflabel=""/>Additional Configuration Options +
+ 10.2 Additional Configuration Options This section describes how to scale the Lustre file system or make configuration changes using the Lustre configuration utilities.
<anchor xml:id="dbdoclet.50438267_pgfId-1292441" xreflabel=""/>10.2.1 Scaling the <anchor xml:id="dbdoclet.50438267_marker-1292440" xreflabel=""/>Lustre File System - A Lustre file system can be scaled by adding OSTs or clients. For instructions on creating additional OSTs repeat Step 3 and Step 4 above. For mounting additional clients, repeat Step 5 for each client. + A Lustre file system can be scaled by adding OSTs or clients. For instructions on creating additional OSTs repeat Step 3 and Step 4 above. For mounting additional clients, repeat Step 5 for each client.
<anchor xml:id="dbdoclet.50438267_pgfId-1292798" xreflabel=""/>10.2.2 <anchor xml:id="dbdoclet.50438267_50212" xreflabel=""/>Changing Striping Defaults - The default settings for the file layout stripe pattern are shown in TABLE 10-1. - - <anchor xml:id="dbdoclet.50438267_pgfId-1292871" xreflabel=""/> TABLE 10-1 <anchor xml:id="dbdoclet.50438267_70881" xreflabel=""/> + The default settings for the file layout stripe pattern are shown in . +
+ Default stripe pattern - - - + + +
File Layout Parameter    Default    Description
stripe_size              1 MB       Amount of data to write to one OST before moving to the next OST.
stripe_count             1          The number of OSTs to use for a single file.
start_ost                -1         The first OST where objects are created for each file.
@@ -539,7 +541,7 @@ oup upcall set to /usr/sbin/l_getgroups
- Use the lfs setstripe command described in Setting the File Layout/Striping Configuration (lfs setstripe) to change the file layout configuration. + Use the lfs setstripe command described in to change the file layout configuration.
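For example, a sketch using the option names of this manual's vintage (-s stripe size, -c stripe count, -i starting OST index; run lfs help setstripe to confirm the flags on your version), striping new files in a hypothetical directory 4 MB at a time across two OSTs:

# Files created in this directory will use a 4 MB stripe size on 2 OSTs;
# the starting OST is left to the default (-1, chosen by the MDS)
lfs setstripe -s 4M -c 2 /lustre/stripe_test
# Inspect the layout that new files will inherit
lfs getstripe /lustre/stripe_test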
<anchor xml:id="dbdoclet.50438267_pgfId-1292908" xreflabel=""/>10.2.3 Using the Lustre Configuration Utilities @@ -548,40 +550,22 @@ oup upcall set to /usr/sbin/l_getgroups mkfs.lustre - Use to format a disk for a Lustre service. - - - tunefs.lustre - Use to modify configuration information on a Lustre target disk. - - - lctl - Use to directly control Lustre via an ioctl interface, allowing various configuration, maintenance and debugging features to be accessed. - - - mount.lustre - Use to start a Lustre client or target service. - - - - For examples using these utilities, see the topic Chapter 36:System Configuration Utilities on the Lustre wiki. - The lfs utility is usful for configuring and querying a variety of options related to files. For more information, see lfs. - - - - - - Note -Some sample scripts are included in the directory where Lustre is installed. If you have installed the Lustre source code, the scripts are located in the lustre/tests sub-directory. These scripts enable quick setup of some simple standard Lustre configurations. - - - - +For examples using these utilities, see the topic + The lfs utility is usful for configuring and querying a variety of options related to files. For more information, see . + + +Some sample scripts are included in the directory where Lustre is installed. If you have installed the Lustre source code, the scripts are located in the lustre/tests sub-directory. These scripts enable quick setup of some simple standard Lustre configurations. + +
-
-- 1.8.3.1