From cc748f6c68908ade7caf58e85b17987022f55647 Mon Sep 17 00:00:00 2001 From: Richard Henwood Date: Wed, 18 May 2011 11:14:12 -0500 Subject: [PATCH] FIX: xrefs and tidying --- BenchmarkingTests.xml | 390 +++++++++++++++----------------------------------- 1 file changed, 115 insertions(+), 275 deletions(-) diff --git a/BenchmarkingTests.xml b/BenchmarkingTests.xml index 018d75a..aa874cd 100644 --- a/BenchmarkingTests.xml +++ b/BenchmarkingTests.xml @@ -1,67 +1,51 @@ - + - Benchmarking Lustre Performance (Lustre I/O Kit) + Benchmarking Lustre Performance (Lustre I/O Kit) + This chapter describes the Lustre I/O kit, a collection of I/O benchmarking tools for a Lustre cluster, and PIOS, a parallel I/O simulator for Linux and Solaris. It includes: - Using Lustre I/O Kit Tools + + - + + - Testing I/O Performance of Raw Hardware (sgpdd_survey) + + - + + - Testing OST Performance (obdfilter_survey) - - - - - - Testing OST I/O Performance (ost_survey) - - - - - - Collecting Application Profiling Information (stats-collect) - - - + + -
- <anchor xml:id="dbdoclet.50438212_pgfId-1289909" xreflabel=""/> -
- 24.1 <anchor xml:id="dbdoclet.50438212_44437" xreflabel=""/>Using Lustre I/O Kit Tools + +
+
        24.1 Using Lustre I/O Kit Tools
    The tools in the Lustre I/O Kit are used to benchmark Lustre hardware and validate that it is working as expected before you install the Lustre software. The kit can also be used to validate the performance of the various hardware and software layers in the cluster, and to find and troubleshoot I/O issues. Typically, performance is measured starting with single raw devices and then proceeding to groups of devices. Once raw performance has been established, other software layers are added incrementally and tested.
<anchor xml:id="dbdoclet.50438212_pgfId-1289911" xreflabel=""/>24.1.1 Contents of the Lustre I/O Kit
    The I/O kit contains three tests, each of which tests a progressively higher layer in the Lustre stack:
    - sgpdd_survey - Measure basic “bare metal” performance of devices while bypassing the kernel block device layers, buffer cache, and file system.
    + sgpdd_survey - Measure basic 'bare metal' performance of devices while bypassing the kernel block device layers, buffer cache, and file system.
    + obdfilter_survey - Measure the performance of one or more OSTs directly on the OSS node or alternately over the network from a Lustre client.
    + ost_survey - Perform I/O against OSTs individually to allow performance comparisons that detect whether an OST is performing suboptimally due to hardware issues.
    + Typically with these tests, Lustre should deliver 85-90% of the raw device performance.
    A utility stats-collect is also provided to collect application profiling information from Lustre clients and servers. See Collecting Application Profiling Information (stats-collect) for more information.
@@ -72,34 +56,26 @@
    Password-free remote access to nodes in the system (provided by ssh or rsh).
    - LNET self-test completed to test that Lustre Networking has been properly installed and configured. See Chapter 23: Testing Lustre Network Performance (LNET Self-Test).
    + LNET self-test completed to test that Lustre Networking has been properly installed and configured. See .
    + Lustre file system software installed.
    + sg3_utils package providing the sgp_dd tool (sg3_utils is a separate RPM package available online using YUM).
    + Download the Lustre I/O kit (lustre-iokit) from:
    http://downloads.lustre.org/public/tools/lustre-iokit/
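    On an RPM-based distribution, the prerequisites and the kit itself might be installed along the following lines; this is a sketch only, and the package file name shown under the download URL is a placeholder rather than an actual release name:
    $ yum install sg3_utils                      # provides the sgp_dd tool
    $ wget http://downloads.lustre.org/public/tools/lustre-iokit/lustre-iokit-<version>.rpm
    $ rpm -ivh lustre-iokit-<version>.rpm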
-
- 24.2 Testing I/O Performance of Raw Hardware (sgpdd_survey<anchor xml:id="dbdoclet.50438212_51053" xreflabel=""/><anchor xml:id="dbdoclet.50438212_marker-1302844" xreflabel=""/>) +
+
        24.2 Testing I/O Performance of Raw Hardware (sgpdd_survey<anchor xml:id="dbdoclet.50438212_marker-1302844" xreflabel=""/>)
    The sgpdd_survey tool is used to test bare metal I/O performance of the raw hardware, while bypassing as much of the kernel as possible. This survey may be used to characterize the performance of a SCSI device by simulating an OST serving multiple stripe files. The data gathered by this survey can help set expectations for the performance of a Lustre OST using this device.
    The script uses sgp_dd to carry out raw sequential disk I/O. It runs with variable numbers of sgp_dd threads to show how performance varies with different request queue depths. The script spawns variable numbers of sgp_dd instances, each reading or writing a separate area of the disk to demonstrate performance variance within a number of concurrent stripe files.
@@ -107,89 +83,45 @@
    Performance is limited by the slowest disk.
    + Before creating a RAID array, benchmark all disks individually. We have frequently encountered situations where drive performance was not consistent for all devices in the array. Replace any disks that are significantly slower than the rest.
    Disks and arrays are very sensitive to request size.
    + To identify the optimal request size for a given disk, benchmark the disk with different record sizes ranging from 4 KB up to 1-2 MB.
    - Caution - The sgpdd_survey script overwrites the device being tested, which results in the LOSS OF ALL DATA on that device. Exercise caution when selecting the device to be tested.
    - Note - Array performance with all LUNs loaded does not always match the performance of a single LUN when tested in isolation.
    + The sgpdd_survey script overwrites the device being tested, which results in the LOSS OF ALL DATA on that device. Exercise caution when selecting the device to be tested.
    + Array performance with all LUNs loaded does not always match the performance of a single LUN when tested in isolation.
    + Prerequisites:
    sgp_dd tool in the sg3_utils package
    + Lustre software is NOT required
    + The device(s) being tested must meet one of these two requirements:
    If the device is a SCSI device, it must appear in the output of sg_map (make sure the kernel module sg is loaded).
    + If the device is a raw device, it must appear in the output of raw -qa.
    + Raw and SCSI devices cannot be mixed in the test specification.
    - Note - If you need to create raw devices to use the sgpdd_survey tool, note that raw device 0 cannot be used due to a bug in certain versions of the "raw" utility (including that shipped with RHEL4U4.)
    + If you need to create raw devices to use the sgpdd_survey tool, note that raw device 0 cannot be used due to a bug in certain versions of the "raw" utility (including that shipped with RHEL4U4).
    +
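    Before running the survey, the devices to be tested can be checked against the requirements above; a minimal sketch (device names are illustrative):
    $ modprobe sg          # load the SCSI generic module
    $ sg_map               # a SCSI device must appear in this output
    $ raw -qa              # a raw device must appear in this output (raw device 0 cannot be used)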
<anchor xml:id="dbdoclet.50438212_pgfId-1289945" xreflabel=""/>24.2.1 Tuning Linux Storage Devices
    To get large I/O transfers (1 MB) to disk, it may be necessary to tune several kernel parameters as specified:
@@ -206,26 +138,20 @@
    The survey writes its results to the following output files:
    + ${rslt}_<date/time>.summary - File containing standard output data (same as stdout)
    + ${rslt}_<date/time>_* - Temporary (tmp) files
    + ${rslt}_<date/time>.detail - Collected tmp files for post-mortem
@@ -237,55 +163,41 @@ MB/s
    The fields in the summary output are:
    total_size - Size of the file being tested in KB (8 GB in the above example).
    + rsz - Record size in KB (1 MB in the above example).
    + thr - Number of threads generating I/O (1 thread in the above example).
    + crg - Current regions, the number of disjoint areas on the disk to which I/O is being sent (1 region in the above example, indicating that no seeking is done).
    + MB/s - Aggregate bandwidth measured by dividing the total amount of data by the elapsed time (180.45 MB/s in the above example).
    + MB/s - The remaining numbers show the number of regions multiplied by the performance of the slowest disk, as a sanity check on the aggregate bandwidth.
    + If there are so many threads that the sgp_dd script is unlikely to be able to allocate I/O buffers, then ENOMEM is printed in place of the aggregate bandwidth result.
    If one or more sgp_dd instances do not successfully report a bandwidth number, then FAILED is printed in place of the aggregate bandwidth result.
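    A hypothetical invocation is sketched below. Only rslt is taken from the output-file names above; the remaining variable names (size, thrhi, crghi, scsidevs) and the hyphenated script file name are assumptions modeled on the obdfilter-survey examples later in this chapter, so check the customization section at the top of the script for the exact names. Remember that all data on the listed devices is destroyed:
    $ size=8g thrhi=16 crghi=16 rslt=/tmp/sgpdd \
      scsidevs="/dev/sdb /dev/sdc" sh sgpdd-survey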
-
- 24.3 <anchor xml:id="dbdoclet.50438212_26516" xreflabel=""/><anchor xml:id="dbdoclet.50438212_40624" xreflabel=""/>Testing OST Performance (obdfilter_survey<anchor xml:id="dbdoclet.50438212_marker-1289957" xreflabel=""/>) +
+
        24.3 <anchor xml:id="dbdoclet.50438212_40624" xreflabel=""/>Testing OST Performance (obdfilter_survey<anchor xml:id="dbdoclet.50438212_marker-1289957" xreflabel=""/>)
    The obdfilter_survey script generates sequential I/O from varying numbers of threads and objects (files) to simulate the I/O patterns of a Lustre client.
    The obdfilter_survey script can be run directly on the OSS node to measure the OST storage performance without any intervening network, or it can be run remotely on a Lustre client to measure the OST performance including network overhead.
    The obdfilter_survey script is used to characterize the performance of the following:
    Local file system - In this mode, the obdfilter_survey script exercises one or more instances of the obdfilter directly. The script may run on one or more OSS nodes, for example, when the OSSs are all attached to the same multi-ported disk subsystem.
    + Run the script using the case=disk parameter to run the test against all the local OSTs. The script automatically detects all local OSTs and includes them in the survey.
    To run the test against only specific OSTs, run the script using the target= parameter to list the OSTs to be tested explicitly. If some OSTs are on remote nodes, specify their hostnames in addition to the OST name (for example, oss2:lustre-OST0004).
@@ -294,71 +206,24 @@ MB/s
    Network - In this mode, the Lustre client generates I/O requests over the network but these requests are not sent to the OST file system. The OSS node runs the obdecho server to receive the requests but discards them before they are sent to the disk.
    + Pass the parameters case=network and target=<hostname|IP_of_server> to the script. For each network case, the script does the required setup. For more details, see Testing Network Performance.
    Remote file system over the network - In this mode the obdfilter_survey script generates I/O from a Lustre client to a remote OSS to write the data to the file system.
    + To run the test against all the local OSCs, pass the parameter case=netdisk to the script. Alternately you can pass the target= parameter with one or more OSC devices (e.g., lustre-OST0000-osc-ffff88007754bc00) against which the tests are to be run.
    For more details, see Testing Remote Disk Performance.
    - Caution - The obdfilter_survey script is destructive and should not be run on devices containing existing data that needs to be preserved. Thus, tests using obdfilter_survey should be run before the Lustre file system is placed in production.
    - Note - If the obdfilter_survey test is terminated before it completes, some small amount of space is leaked. You can either ignore it or reformat the file system.
    - Note - The obdfilter_survey script is NOT scalable beyond tens of OSTs since it is only intended to measure the I/O performance of individual storage subsystems, not the scalability of the entire system.
    - Note - The obdfilter_survey script must be customized, depending on the components under test and where the script's working files should be kept. Customization variables are described at the beginning of the obdfilter_survey script. In particular, pay attention to the maximum values listed for each parameter in the script.
    + The obdfilter_survey script is destructive and should not be run on devices containing existing data that needs to be preserved.
Thus, tests using obdfilter_survey should be run before the Lustre file system is placed in production.
    + If the obdfilter_survey test is terminated before it completes, some small amount of space is leaked. You can either ignore it or reformat the file system.
    + The obdfilter_survey script is NOT scalable beyond tens of OSTs since it is only intended to measure the I/O performance of individual storage subsystems, not the scalability of the entire system.
    + The obdfilter_survey script must be customized, depending on the components under test and where the script's working files should be kept. Customization variables are described at the beginning of the obdfilter_survey script. In particular, pay attention to the maximum values listed for each parameter in the script.
    +
<anchor xml:id="dbdoclet.50438212_pgfId-1289969" xreflabel=""/>24.3.1 <anchor xml:id="dbdoclet.50438212_59319" xreflabel=""/>Testing Local Disk Performance
    The obdfilter_survey script can be run automatically or manually against a local disk. This script profiles the overall throughput of storage hardware, including the file system and RAID layers managing the storage, by sending workloads to the OSTs that vary in thread count, object count, and I/O size.
@@ -366,20 +231,27 @@ MB/s
    The plot-obdfilter script generates a CSV file from the obdfilter_survey output, along with parameters for importing the results into a spreadsheet or gnuplot to visualize the data.
    To run the obdfilter_survey script, create a standard Lustre configuration; no special setup is needed.
    To perform an automatic run:
    + 1. Start the Lustre OSTs. The Lustre OSTs should be mounted on the OSS node(s) to be tested. The Lustre client is not required to be mounted at this time.
    + 2. Verify that the obdecho module is loaded. Run:
    modprobe obdecho
    + 3. Run the obdfilter_survey script with the parameter case=disk. For example, to run a local test with up to two objects (nobjhi), up to two threads (thrhi), and 1024 MB transfer size (size):
    $ nobjhi=2 thrhi=2 size=1024 case=disk sh obdfilter-survey
    + To perform a manual run:
    + 1. Start the Lustre OSTs. The Lustre OSTs should be mounted on the OSS node(s) to be tested. The Lustre client is not required to be mounted at this time.
    + 2. Verify that the obdecho module is loaded. Run:
    modprobe obdecho
    + 3. Determine the OST names. On the OSS nodes to be tested, run the lctl dl command. The OST device names are listed in the fourth column of the output. For example:
    $ lctl dl |grep obdfilter
    0 UP obdfilter lustre-OST0001 lustre-OST0001_UUID 1159
    2 UP obdfilter lustre-OST0002 lustre-OST0002_UUID 1159
    ...
    + 4. List all OSTs you want to test. Use the target= parameter to list the OSTs separated by spaces. List the individual OSTs by name using the format <fsname>-<OSTnumber> (for example, lustre-OST0001). You do not have to specify an MDS or LOV.
    + 5. Run the obdfilter_survey script with the target= parameter. For example, to run a local test with up to two objects (nobjhi), up to two threads (thrhi), and 1024 MB transfer size (size):
    $ nobjhi=2 thrhi=2 size=1024 targets='lustre-OST0001 \
lustre-OST0002' sh obdfilter-survey
    +
<anchor xml:id="dbdoclet.50438212_pgfId-1289982" xreflabel=""/>24.3.2 <anchor xml:id="dbdoclet.50438212_36037" xreflabel=""/>Testing Network Performance
    The obdfilter_survey script can only be run automatically against a network; no manual test is provided.
    To run the network test, a specific Lustre setup is needed. Make sure that these configuration requirements have been met.
    To perform an automatic run:
    + 1. Start the Lustre OSTs. The Lustre OSTs should be mounted on the OSS node(s) to be tested. The Lustre client is not required to be mounted at this time.
    + 2. Verify that the obdecho module is loaded. Run:
    modprobe obdecho
    + 3. Start lctl and check the device list, which must be empty. Run:
    lctl dl
    + 4. Run the obdfilter_survey script with the parameters case=network and targets=<hostname|ip_of_server>. For example:
    $ nobjhi=2 thrhi=2 size=1024 targets='oss1 oss2' case=network sh obdfilter-survey
    + 5. On the server side, view the statistics at:
    /proc/fs/lustre/obdecho/<echo_srv>/stats
    where <echo_srv> is the obdecho server created by the script.
    +
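    For example, the server-side statistics can be dumped with a command such as the following; the echo server name under obdecho is created by the script and varies per run, so a wildcard is used here purely for illustration:
    $ cat /proc/fs/lustre/obdecho/*/stats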
<anchor xml:id="dbdoclet.50438212_pgfId-1297766" xreflabel=""/>24.3.3 <anchor xml:id="dbdoclet.50438212_62662" xreflabel=""/>Testing Remote Disk Performance
    The obdfilter_survey script can be run automatically or manually against a network disk. To run the network disk test, start with a standard Lustre configuration. No special setup is needed.
    To perform an automatic run:
    + 1. Start the Lustre OSTs. The Lustre OSTs should be mounted on the OSS node(s) to be tested. The Lustre client is not required to be mounted at this time.
    + 2. Verify that the obdecho module is loaded. Run:
    modprobe obdecho
    + 3. Run the obdfilter_survey script with the parameter case=netdisk. For example:
    $ nobjhi=2 thrhi=2 size=1024 case=netdisk sh obdfilter-survey
    + To perform a manual run:
    + 1. Start the Lustre OSTs. The Lustre OSTs should be mounted on the OSS node(s) to be tested. The Lustre client is not required to be mounted at this time.
    + 2. Verify that the obdecho module is loaded. Run:
    modprobe obdecho
    + 3. Determine the OSC names. On the OSS nodes to be tested, run the lctl dl command. The OSC device names are listed in the fourth column of the output. For example:
    $ lctl dl |grep osc
    3 UP osc lustre-OST0000-osc-ffff88007754bc00 54b91eab-0ea9-1516-b571-5e6df349592e 5
    ...
    + 4. List all OSCs you want to test. Use the target= parameter to list the OSCs separated by spaces. List the individual OSCs by name using the format <fsname>-<OST_name>-osc-<OSC_number> (for example, lustre-OST0000-osc-ffff88007754bc00). You do not have to specify an MDS or LOV.
    + 5. Run the obdfilter_survey script with the target= parameter and case=netdisk. An example of a local test run with up to two objects (nobjhi), up to two threads (thrhi), and 1024 MB transfer size (size) is shown below:
    $ nobjhi=2 thrhi=2 size=1024 \
    targets="lustre-OST0000-osc-ffff88007754bc00 \
lustre-OST0001-osc-ffff88007754bc00" \
    sh obdfilter-survey
    +
<anchor xml:id="dbdoclet.50438212_pgfId-1290021" xreflabel=""/>24.3.4 Output Files @@ -485,16 +376,9 @@ r-survey The obdfilter_survey script iterates over the given number of threads and objects performing the specified tests and checks that all test processes have completed successfully. - - - - - - Note -The obdfilter_survey script may not clean up properly if it is aborted or if it encounters an unrecoverable error. In this case, a manual cleanup may be required, possibly including killing any running instances of lctl (local or remote), removing echo_client instances created by the script and unloading obdecho. - - - - + + The obdfilter_survey script may not clean up properly if it is aborted or if it encounters an unrecoverable error. In this case, a manual cleanup may be required, possibly including killing any running instances of lctl (local or remote), removing echo_client instances created by the script and unloading obdecho. +
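    A manual cleanup along the lines described above might look like the following sketch; the device number is hypothetical (take it from the lctl dl listing) and the exact lctl syntax should be checked against your release:
    $ pkill lctl                 # stop any remaining local (and, via ssh, remote) lctl instances
    $ lctl dl                    # list devices and note any leftover echo_client instances
    $ lctl --device 7 cleanup    # 7 is a hypothetical echo_client device number
    $ lctl --device 7 detach
    $ rmmod obdecho              # unload the obdecho module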
<anchor xml:id="dbdoclet.50438212_pgfId-1298104" xreflabel=""/>24.3.4.1 Script Output The .summary file and stdout of the obdfilter_survey script contain lines like: @@ -547,39 +431,22 @@ r-survey - - - - - - Note -Although the numbers of threads and objects are specified per-OST in the customization section of the script, the reported results are aggregated over all OSTs. - - - - + Although the numbers of threads and objects are specified per-OST in the customization section of the script, the reported results are aggregated over all OSTs.
<anchor xml:id="dbdoclet.50438212_pgfId-1290063" xreflabel=""/>24.3.4.2 Visualizing Results
    It is useful to import the obdfilter_survey script summary data (it is fixed width) into Excel (or any graphing package) and graph the bandwidth versus the number of threads for varying numbers of concurrent regions. This shows how the OSS performs for a given number of concurrently-accessed objects (files) with varying numbers of I/Os in flight.
    - It is also useful to monitor and record average disk I/O sizes during each test using the “disk io size” histogram in the file /proc/fs/lustre/obdfilter/ (see Watching the OST Block I/O Stream for details). These numbers help identify problems in the system when full-sized I/Os are not submitted to the underlying disk. This may be caused by problems in the device driver or Linux block layer.
    + It is also useful to monitor and record average disk I/O sizes during each test using the 'disk io size' histogram in the file /proc/fs/lustre/obdfilter/*/brw_stats (see Watching the OST Block I/O Stream for details). These numbers help identify problems in the system when full-sized I/Os are not submitted to the underlying disk. This may be caused by problems in the device driver or Linux block layer.
    The plot-obdfilter script included in the I/O toolkit is an example of processing output files to a .csv format and plotting a graph using gnuplot.
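    For example, a run's summary file might be post-processed along these lines; the result file name is illustrative, and the exact arguments accepted by plot-obdfilter should be checked against the script itself:
    $ ./plot-obdfilter rslt.summary    # convert the fixed-width summary into .csv and gnuplot input
    The resulting .csv file can then be opened in a spreadsheet or plotted with gnuplot.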
-
- 24.4 <anchor xml:id="dbdoclet.50438212_85136" xreflabel=""/>Testing OST I/O Performance (ost_<anchor xml:id="dbdoclet.50438212_marker-1290067" xreflabel=""/>survey) +
+ 24.4 Testing OST I/O Performance (ost_<anchor xml:id="dbdoclet.50438212_marker-1290067" xreflabel=""/>survey) The ost_survey tool is a shell script that uses lfs setstripe to perform I/O against a single OST. The script writes a file (currently using dd) to each OST in the Lustre file system, and compares read and write speeds. The ost_survey tool is used to detect anomalies between otherwise identical disk subsystems. - - - - - - Note -We have frequently discovered wide performance variations across all LUNs in a cluster. This may be caused by faulty disks, RAID parity reconstruction during the test, or faulty network hardware. - - - - + We have frequently discovered wide performance variations across all LUNs in a cluster. This may be caused by faulty disks, RAID parity reconstruction during the test, or faulty network hardware. + To run the ost_survey script, supply a file size (in KB) and the Lustre mount point. For example, run: $ ./ost-survey.sh 10 /mnt/lustre @@ -605,41 +472,31 @@ r-survey 0.16
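    Conceptually, the per-OST measurement performed by the script is similar to the following simplified sketch; the file name, OST index, and transfer sizes are illustrative, and the script itself handles timing, iteration over all OSTs, and cleanup:
    $ lfs setstripe -c 1 -i 3 /mnt/lustre/ost3-testfile               # place the file on a single OST (index 3)
    $ dd if=/dev/zero of=/mnt/lustre/ost3-testfile bs=1M count=10     # timed write to that OST
    $ dd if=/mnt/lustre/ost3-testfile of=/dev/null bs=1M              # timed read back from that OST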
-
- 24.5 <anchor xml:id="dbdoclet.50438212_58201" xreflabel=""/>Collecting Application Profiling Information (stats-collect) +
+
        24.5 Collecting Application Profiling Information (stats-collect)
    The stats-collect utility contains the following scripts used to collect application profiling information from Lustre clients and servers:
    lstat.sh - Script for a single node that is run on each profile node.
    + gather_stats_everywhere.sh - Script that collects statistics.
    + config.sh - Script that contains customized configuration descriptions.
    + The stats-collect utility requires:
    Lustre to be installed and set up on your cluster
    + SSH and SCP access to these nodes without requiring a password
    +
<anchor xml:id="dbdoclet.50438212_pgfId-1299531" xreflabel=""/>24.5.1 Using stats-collect
@@ -649,51 +506,35 @@ r-survey
    VMSTAT - Memory and CPU usage and aggregate read/write operations
    + SERVICE - Lustre OST and MDT RPC service statistics
    + BRW - OST block read/write statistics (brw_stats)
    + SDIO - SCSI disk I/O statistics (sd_iostats)
    + MBALLOC - ldiskfs block allocation statistics
    + IO - Lustre target operations statistics
    + JBD - ldiskfs journal statistics
    + CLIENT - Lustre OSC request statistics
    + To collect profile information:
    1. Begin collecting statistics on each node specified in the config.sh script.
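    For example, collection might be started with a command along the following lines; the 'start' argument and the use of config.sh on the command line are assumptions to be checked against the comments in gather_stats_everywhere.sh:
    $ sh gather_stats_everywhere.sh config.sh start
    Run the application workload while collection is active, and stop collection when the run completes.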
-
-- 1.8.3.1